CN110069190A - Devices, methods, and graphical user interfaces for system-level behaviors for 3D models - Google Patents
- Publication number
- CN110069190A (Application No. CN201811165504.3A)
- Authority
- CN
- China
- Prior art keywords
- input
- user interface
- cameras
- display
- expression
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Abstract
Entitled "Devices, methods, and graphical user interfaces for system-level behaviors for 3D models." The invention discloses displaying a representation of a virtual object in a first user interface region on the display of a computer system that has a display, a touch-sensitive surface, and one or more cameras. A first input by a contact is detected on the touch-sensitive surface at a location that corresponds to the representation of the virtual object on the display. In response to detecting the first input by the contact: in accordance with a determination that the first input by the contact meets first criteria, a second user interface region is displayed on the display, which includes replacing display of at least a portion of the first user interface region with a representation of the field of view of the one or more cameras, and the representation of the virtual object is continuously displayed while switching from displaying the first user interface region to displaying the second user interface region.
Description
Related application
This application is related to U.S. Provisional Application No. 62/621,529, filed January 24, 2018, which is hereby incorporated by reference herein.
Technical field
The present invention relates generally to electronic devices that display virtual objects, including but not limited to electronic devices that display virtual objects in a variety of contexts.
Background
In recent years, the development of computer systems for augmented reality has increased dramatically. Example augmented reality environments include at least some virtual elements that replace or augment the physical world. Input devices, such as touch-sensitive surfaces, for computer systems and other electronic computing devices are used to interact with virtual/augmented reality environments. Example touch-sensitive surfaces include touchpads, touch-sensitive remote controls, and touch-screen displays. Such surfaces are used to manipulate user interfaces and objects therein on a display. Example user interface objects include digital images, video, text, icons, and control elements such as buttons and other graphics.
But methods and interfaces for interacting with environments that include at least some virtual elements (e.g., applications, augmented reality environments, mixed reality environments, and virtual reality environments) are cumbersome, inefficient, and limited. For example, using a sequence of inputs to orient and position a virtual object in an augmented reality environment is tedious, creates a significant cognitive burden on the user, and detracts from the experience of the virtual/augmented reality environment. In addition, these methods take longer than necessary, thereby wasting energy. This latter consideration is particularly important in battery-operated devices.
Summary of the invention
Accordingly, there is a need for computer systems with improved methods and interfaces for interacting with virtual objects. Such methods and interfaces optionally complement or replace conventional methods for interacting with virtual objects. Such methods and interfaces reduce the number, extent, and/or nature of the inputs from a user and produce a more efficient human-machine interface. For battery-operated devices, such methods and interfaces conserve power and increase the time between battery charges.
The above deficiencies and other problems associated with interfaces for interacting with virtual objects (e.g., user interfaces for augmented reality (AR) and related non-AR interfaces) are reduced or eliminated by the disclosed computer systems. In some embodiments, the computer system includes a desktop computer. In some embodiments, the computer system is portable (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the computer system includes a personal electronic device (e.g., a wearable electronic device, such as a watch). In some embodiments, the computer system has a touchpad (and/or is in communication with a touchpad). In some embodiments, the computer system has a touch-sensitive display (also known as a "touch screen" or "touch-screen display") (and/or is in communication with a touch-sensitive display). In some embodiments, the computer system has a graphical user interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI in part through stylus and/or finger contacts and gestures on the touch-sensitive surface. In some embodiments, the functions optionally include game playing, image editing, drawing, presenting, word processing, spreadsheet making, telephoning, video conferencing, e-mailing, instant messaging, workout support, digital photographing, digital video recording, web browsing, digital music playing, note taking, and/or digital video playing. Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
In accordance with some embodiments, a method is performed at a computer system having a display, a touch-sensitive surface, and one or more cameras. The method includes displaying a representation of a virtual object in a first user interface region on the display. The method further includes, while displaying the representation of the virtual object in the first user interface region on the display, detecting a first input by a contact at a location on the touch-sensitive surface that corresponds to the representation of the virtual object on the display. The method further includes, in response to detecting the first input by the contact, in accordance with a determination that the first input by the contact meets first criteria: displaying a second user interface region on the display, which includes replacing display of at least a portion of the first user interface region with a representation of the field of view of the one or more cameras, and continuously displaying the representation of the virtual object while switching from displaying the first user interface region to displaying the second user interface region.
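The continuity behavior described above can be sketched in code. This is a minimal, hypothetical model (not the patented implementation): the "first criteria" are approximated here as a long-press duration threshold, and the class and attribute names are invented for illustration.

```python
# Hypothetical sketch: a touch input that meets the first criteria (modeled
# here as a long press) switches the UI from a 2D region to a camera-backed
# AR region, while the virtual object's representation is shown continuously
# throughout the transition.

LONG_PRESS_THRESHOLD = 0.5  # seconds; illustrative value, not from the patent

class ObjectViewer:
    def __init__(self):
        self.region = "2d"          # current user interface region
        self.object_visible = True  # representation shown continuously

    def handle_press(self, duration):
        """Switch to the AR region only if the input meets the first criteria."""
        if duration >= LONG_PRESS_THRESHOLD:
            # Replace part of the first region with the camera field of view...
            self.region = "ar"
            # ...but never remove the virtual object's representation.
            assert self.object_visible
        return self.region
```

A short press leaves the viewer in the 2D region; a long press switches regions while `object_visible` stays true, which is the essence of the continuous-display requirement.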
In accordance with some embodiments, a method is performed at a computer system having a display, a touch-sensitive surface, and one or more cameras. The method includes displaying a first representation of a virtual object in a first user interface region on the display. The method further includes, while displaying the first representation of the virtual object in the first user interface region on the display, detecting a first input by a first contact at a location on the touch-sensitive surface that corresponds to the first representation of the virtual object on the display. The method further includes, in response to detecting the first input by the first contact, and in accordance with a determination that the input by the first contact meets first criteria, displaying a second representation of the virtual object in a second user interface region that is different from the first user interface region. The method further includes, while displaying the second representation of the virtual object in the second user interface region, detecting a second input, and, in response to detecting the second input: in accordance with a determination that the second input corresponds to a request to manipulate the virtual object in the second user interface region, changing a display property of the second representation of the virtual object in the second user interface region based on the second input; and, in accordance with a determination that the second input corresponds to a request to display the virtual object in an augmented reality environment, displaying a third representation of the virtual object with a representation of the field of view of the one or more cameras.
In accordance with some embodiments, a method is performed at a computer system having a display and a touch-sensitive surface. The method includes, in response to a request to display a first user interface, displaying a first user interface with a representation of a first item. The method further includes, in accordance with a determination that the first item corresponds to a respective virtual three-dimensional object, displaying the representation of the first item with a visual indication that the first item corresponds to a first respective virtual three-dimensional object. The method further includes, in accordance with a determination that the first item does not correspond to a respective virtual three-dimensional object, displaying the representation of the first item without the visual indication. The method further includes, after displaying the representation of the first item, receiving a request to display a second user interface that includes a second item. The method further includes, in response to the request to display the second user interface, displaying a second user interface with a representation of the second item. The method further includes, in accordance with a determination that the second item corresponds to a respective virtual three-dimensional object, displaying the representation of the second item with a visual indication that the second item corresponds to a second respective virtual three-dimensional object. The method further includes, in accordance with a determination that the second item does not correspond to a respective virtual three-dimensional object, displaying the representation of the second item without the visual indication.
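The item-badging logic above reduces to a simple conditional. The following sketch is illustrative only: the badge glyph, field names, and data layout are all invented, since the patent does not specify what the visual indication looks like.

```python
# Minimal sketch of the "visual indication" behavior: an item that has an
# associated virtual 3D model is rendered with a badge signaling that it can
# be viewed in AR; an item without one is rendered plainly.

def render_item(item):
    label = item["name"]
    if item.get("model_3d") is not None:
        return f"{label} [3D]"   # visual indication: a 3D object is available
    return label
```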
In accordance with some embodiments, a method is performed at a computer system having a display generation component, one or more input devices, and one or more cameras. The method includes receiving a request to display a virtual object in a first user interface region that includes at least a portion of the field of view of the one or more cameras. The method further includes, in response to the request to display the virtual object in the first user interface region, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface region, where the field of view of the one or more cameras is a view of the physical environment in which the one or more cameras are located. Displaying the representation of the virtual object includes: in accordance with a determination that object-placement criteria are not met, where the object-placement criteria require that a placement location for the virtual object be identified in the field of view of the one or more cameras in order for the object-placement criteria to be met, displaying the representation of the virtual object with a first set of visual properties and with a first orientation that is independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and, in accordance with a determination that the object-placement criteria are met, displaying the representation of the virtual object with a second set of visual properties that are distinct from the first set of visual properties, and with a second orientation that corresponds to a plane in the physical environment detected in the field of view of the one or more cameras.
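The two visual states above can be modeled as a pure function of whether a plane has been detected. This is a hedged sketch under invented assumptions: the opacity values, orientation labels, and function name are not from the patent, which only requires that the two sets of visual properties differ.

```python
# Illustrative placement logic: before a supporting plane is detected (the
# object-placement criteria are not met), the object is drawn with one set
# of visual properties in a screen-relative orientation; once a plane is
# detected, it is drawn with a distinct set of properties, oriented to the
# plane found in the camera feed.

def appearance_for(detected_plane):
    if detected_plane is None:
        # First set of visual properties / first orientation:
        # independent of what the cameras currently show.
        return {"opacity": 0.6, "orientation": "screen-relative"}
    # Second set of visual properties / second orientation:
    # anchored to the plane detected in the physical environment.
    return {"opacity": 1.0, "orientation": f"aligned-to-{detected_plane}"}
```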
In accordance with some embodiments, a method is performed at a computer system having a display generation component, one or more input devices, one or more cameras, and one or more attitude sensors for detecting changes in the attitude of the device that includes the one or more cameras. The method includes receiving a request to display an augmented reality view of a physical environment in a first user interface region that includes a representation of the field of view of the one or more cameras. The method further includes, in response to receiving the request to display the augmented reality view of the physical environment, displaying the representation of the field of view of the one or more cameras, and, in accordance with a determination that calibration criteria for the augmented reality view of the physical environment are not met, displaying a calibration user interface object that is dynamically animated in accordance with movement of the one or more cameras in the physical environment, where displaying the calibration user interface object includes: while displaying the calibration user interface object, detecting, via the one or more attitude sensors, a change in the attitude of the one or more cameras in the physical environment; and, in response to detecting the change in attitude of the one or more cameras in the physical environment, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment. The method further includes, while displaying the calibration user interface object that moves on the display in accordance with the detected change in attitude of the one or more cameras in the physical environment, detecting that the calibration criteria are met. The method further includes, in response to detecting that the calibration criteria are met, ceasing to display the calibration user interface object.
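The calibration flow above can be sketched as a small state machine. The criteria are approximated here, purely for illustration, as "a minimum number of sufficiently large attitude changes"; the threshold values, class name, and the choice of tilt as the adjusted display parameter are all assumptions, not details from the patent.

```python
# Hedged sketch of the calibration flow: a calibration object is animated in
# response to device-attitude changes until enough distinct attitudes have
# been observed, at which point the object is dismissed.

MIN_DELTA = 5.0      # degrees; illustrative per-sample significance threshold
SAMPLES_NEEDED = 3   # illustrative calibration criterion

class Calibrator:
    def __init__(self):
        self.samples = 0
        self.object_tilt = 0.0   # a display parameter of the calibration object
        self.showing = True

    def on_attitude_change(self, delta_degrees):
        if not self.showing:
            return
        # Adjust a display parameter of the calibration object in
        # accordance with the detected attitude change.
        self.object_tilt += delta_degrees
        if abs(delta_degrees) >= MIN_DELTA:
            self.samples += 1
        if self.samples >= SAMPLES_NEEDED:
            self.showing = False  # criteria met: stop displaying the object
```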
In accordance with some embodiments, a method is performed at a computer system having a display generation component and one or more input devices including a touch-sensitive surface. The method includes displaying, via the display generation component, a representation of a first perspective of a virtual three-dimensional object in a first user interface region. The method further includes, while displaying the representation of the first perspective of the virtual three-dimensional object in the first user interface region on the display, detecting a first input that corresponds to a request to rotate the virtual three-dimensional object so as to display a portion of the virtual three-dimensional object that is not visible from the first perspective. The method further includes, in response to detecting the first input: in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a first axis, rotating the virtual three-dimensional object relative to the first axis by an amount that is determined based on a magnitude of the first input and that is constrained by a movement limit that restricts rotation of the virtual three-dimensional object relative to the first axis beyond a threshold rotation amount; and, in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a second axis that is different from the first axis, rotating the virtual three-dimensional object relative to the second axis by an amount determined based on the magnitude of the first input, where, for an input with a magnitude above a respective threshold, the device rotates the virtual three-dimensional object relative to the second axis by more than the threshold rotation amount.
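The asymmetric rotation constraint above can be sketched as a clamp applied to one axis but not the other. This is a simplified model under stated assumptions: the identification of the constrained axis with pitch, the unconstrained axis with yaw, and the 60-degree limit are all invented for illustration.

```python
# Sketch of the asymmetric rotation constraint: rotation about one axis is
# clamped to a movement limit, while rotation about the other axis is
# unbounded, so large inputs can exceed the threshold amount on that axis.

PITCH_LIMIT = 60.0  # threshold rotation amount in degrees (invented value)

def rotate(angles, axis, amount):
    """Apply a drag-derived rotation amount to (pitch, yaw) angles."""
    pitch, yaw = angles
    if axis == "pitch":
        # Constrained axis: never exceed the movement limit.
        pitch = max(-PITCH_LIMIT, min(PITCH_LIMIT, pitch + amount))
    else:
        # Unconstrained axis: the full input magnitude is applied.
        yaw += amount
    return (pitch, yaw)
```

A design note: clamping tilt while leaving spin free is a common way to keep an object from being viewed from implausible angles (e.g., from underneath) while still allowing a full turn around it.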
According to some embodiments, a method is performed at a computer system with a display generation component and a touch-sensitive surface. The method includes displaying, via the display generation component, a first user interface region that includes a user interface object associated with a plurality of object manipulation behaviors, the plurality of object manipulation behaviors including a first object manipulation behavior that is performed in response to inputs that meet first gesture-recognition criteria and a second object manipulation behavior that is performed in response to inputs that meet second gesture-recognition criteria. The method further includes, while displaying the first user interface region, detecting a first portion of an input directed to the user interface object, including detecting movement of one or more contacts on the touch-sensitive surface and, while the one or more contacts are detected on the touch-sensitive surface, evaluating the movement of the one or more contacts against both the first gesture-recognition criteria and the second gesture-recognition criteria. The method further includes, in response to detecting the first portion of the input, updating the appearance of the user interface object based on the first portion of the input, including: in accordance with a determination that the first portion of the input meets the first gesture-recognition criteria before meeting the second gesture-recognition criteria, changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the first portion of the input, and updating the second gesture-recognition criteria by increasing a threshold of the second gesture-recognition criteria; and, in accordance with a determination that the first portion of the input meets the second gesture-recognition criteria before meeting the first gesture-recognition criteria, changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the first portion of the input, and updating the first gesture-recognition criteria by increasing a threshold of the first gesture-recognition criteria.
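A minimal sketch of the mutually escalating gesture criteria this method describes: once one manipulation is recognized, the threshold for the competing manipulation is raised so that triggering it requires a more deliberate movement. The numeric thresholds, the escalation factor, and the labels "rotate" and "scale" are illustrative assumptions; the patent does not specify them.

```python
class GestureRecognizer:
    """Two competing gesture criteria that inhibit each other.

    Recognizing one gesture raises the other's threshold, as described
    in the method above. Values are illustrative, not from the patent.
    """

    def __init__(self, rotate_threshold=10.0, scale_threshold=10.0,
                 raised_factor=3.0):
        self.rotate_threshold = rotate_threshold
        self.scale_threshold = scale_threshold
        self.raised_factor = raised_factor
        self.recognized = None  # None, "rotate", or "scale"

    def update(self, rotate_amount, scale_amount):
        """Evaluate accumulated movement against both criteria."""
        if self.recognized is None:
            if rotate_amount >= self.rotate_threshold:
                self.recognized = "rotate"
                # Raise the threshold of the competing criteria.
                self.scale_threshold *= self.raised_factor
            elif scale_amount >= self.scale_threshold:
                self.recognized = "scale"
                self.rotate_threshold *= self.raised_factor
        return self.recognized
```

Once `update` reports `"rotate"`, a scaling movement that would previously have qualified no longer does, which matches the described behavior of increasing the second threshold after the first criteria are met.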
According to some embodiments, a method is performed at a computer system with a display generation component, one or more input devices, one or more audio output generators, and one or more cameras. The method includes displaying, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of the one or more cameras, wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected in the physical environment captured in the field of view of the one or more cameras. The method further includes detecting movement of the device that adjusts the field of view of the one or more cameras. The method further includes, in response to detecting the movement of the device that adjusts the field of view of the one or more cameras: while the field of view of the one or more cameras is being adjusted, adjusting display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected in the field of view of the one or more cameras; and, in accordance with a determination that the movement of the device causes the virtual object to move outside of a displayed portion of the field of view of the one or more cameras by more than a threshold amount, generating a first audio alert via the one or more audio output generators.
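The out-of-view check that gates the audio alert can be sketched as below. The patent does not specify how the threshold amount is measured, so treating it as a fraction of the displayed view size, and the pixel coordinate convention, are assumptions made for illustration.

```python
def check_out_of_view(object_screen_x, object_screen_y,
                      view_width, view_height,
                      out_threshold=0.25):
    """Return True when the virtual object is outside the displayed
    camera field of view by more than a threshold amount.

    Coordinates are pixels in the displayed view; `out_threshold` is an
    illustrative fraction of the view dimensions.
    """
    margin_x = out_threshold * view_width
    margin_y = out_threshold * view_height
    return (object_screen_x < -margin_x
            or object_screen_x > view_width + margin_x
            or object_screen_y < -margin_y
            or object_screen_y > view_height + margin_y)


def on_device_moved(object_pos, view_size, play_alert):
    """On each device movement, emit the audio alert if the object has
    left the displayed field of view by more than the threshold."""
    if check_out_of_view(object_pos[0], object_pos[1],
                         view_size[0], view_size[1]):
        play_alert("object left the field of view")
```

An object slightly past the view edge produces no alert; only once it exceeds the margin does `play_alert` fire, mirroring the "more than a threshold amount" condition in the text.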
According to some embodiments, an electronic device includes a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, optionally one or more attitude sensors for detecting changes in attitude, one or more processors, and memory storing one or more programs; the one or more programs are configured to be executed by the one or more processors and include instructions for performing, or causing performance of, the operations of any of the methods described herein. According to some embodiments, a computer-readable storage medium has instructions stored therein that, when executed by an electronic device with a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more attitude sensors, cause the device to perform, or cause performance of, the operations of any of the methods described herein. According to some embodiments, a graphical user interface on an electronic device with a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, optionally one or more attitude sensors, memory, and one or more processors for executing one or more programs stored in the memory includes one or more of the elements displayed in any of the methods described herein, which are updated in response to inputs, as described in any of the methods described herein. According to some embodiments, an electronic device includes: a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more attitude sensors for detecting changes in attitude; and means for performing, or causing performance of, the operations of any of the methods described herein. According to some embodiments, an information processing apparatus, for use in an electronic device with a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more attitude sensors for detecting changes in attitude, includes means for performing, or causing performance of, the operations of any of the methods described herein.
Thus, electronic devices with display generation components, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more attitude sensors are provided with improved methods and interfaces for displaying virtual objects in a variety of contexts, thereby increasing the effectiveness, efficiency, and user satisfaction with such devices. Such methods and interfaces may complement or replace conventional methods for displaying virtual objects in a variety of contexts.
Brief Description of the Drawings
For a better understanding of the various described embodiments, reference should be made to the following Description of Embodiments, in conjunction with the following drawings, in which like reference numerals refer to corresponding parts throughout the figures.
Figure 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display, in accordance with some embodiments.
Figure 1B is a block diagram illustrating example components for event handling, in accordance with some embodiments.
Figure 1C is a block diagram illustrating a tactile output module, in accordance with some embodiments.
Figure 2 illustrates a portable multifunction device having a touch screen, in accordance with some embodiments.
Figure 3 is a block diagram of an example multifunction device with a display and a touch-sensitive surface, in accordance with some embodiments.
Figure 4A illustrates an example user interface for a menu of applications on a portable multifunction device, in accordance with some embodiments.
Figure 4B illustrates an example user interface for a multifunction device with a touch-sensitive surface that is separate from the display, in accordance with some embodiments.
Figures 4C-4E illustrate examples of dynamic intensity thresholds, in accordance with some embodiments.
Figures 4F-4K illustrate a set of sample tactile output patterns, in accordance with some embodiments.
Figures 5A-5AT illustrate example user interfaces for displaying a representation of a virtual object when switching from displaying a first user interface region to displaying a second user interface region, in accordance with some embodiments.
Figures 6A-6AJ illustrate example user interfaces for displaying a first representation of a virtual object in a first user interface region, displaying a second representation of the virtual object in a second user interface region, and displaying a third representation of the virtual object with a representation of the field of view of one or more cameras, in accordance with some embodiments.
Figures 7A-7E, 7F1-7F2, 7G1-7G2, and 7H-7P illustrate example user interfaces for displaying items with visual indications that the items correspond to virtual three-dimensional objects, in accordance with some embodiments.
Figures 8A-8E are flow diagrams of a process for displaying a representation of a virtual object when switching from displaying a first user interface region to displaying a second user interface region, in accordance with some embodiments.
Figures 9A-9D are flow diagrams of a process for displaying a first representation of a virtual object in a first user interface region, displaying a second representation of the virtual object in a second user interface region, and displaying a third representation of the virtual object with a representation of the field of view of one or more cameras, in accordance with some embodiments.
Figures 10A-10D are flow diagrams of a process for displaying items with visual indications that the items correspond to virtual three-dimensional objects, in accordance with some embodiments.
Figures 11A-11V illustrate example user interfaces for displaying virtual objects with different visual properties depending on whether object placement criteria are met, in accordance with some embodiments.
Figures 12A-12D, 12E-1, 12E-2, 12F-1, 12F-2, 12G-1, 12G-2, 12H-1, 12H-2, 12I-1, 12I-2, 12J, 12K-1, 12K-2, 12L-1, and 12L-2 illustrate example user interfaces for displaying a calibration user interface object that is dynamically animated in accordance with movement of one or more cameras of a device, in accordance with some embodiments.
Figures 13A-13M illustrate example user interfaces for constraining rotation of a virtual object about an axis, in accordance with some embodiments.
Figures 14A-14Z illustrate example user interfaces for increasing a second threshold movement magnitude required for a second object manipulation behavior in accordance with a determination that a first object manipulation behavior has met a first threshold movement magnitude, in accordance with some embodiments.
Figures 14AA-14AD are flow diagrams illustrating operations for increasing a second threshold movement magnitude required for a second object manipulation behavior in accordance with a determination that a first object manipulation behavior has met a first threshold movement magnitude, in accordance with some embodiments.
Figures 15A-15AI illustrate example user interfaces for generating an audio alert in accordance with a determination that movement of a device has caused a virtual object to move outside of the displayed field of view of one or more device cameras, in accordance with some embodiments.
Figures 16A-16G are flow diagrams of a process for displaying virtual objects with different visual properties depending on whether object placement criteria are met, in accordance with some embodiments.
Figures 17A-17D are flow diagrams of a process for displaying a calibration user interface object that is dynamically animated in accordance with movement of one or more cameras of a device, in accordance with some embodiments.
Figures 18A-18I are flow diagrams of a process for constraining rotation of a virtual object about an axis, in accordance with some embodiments.
Figures 19A-19H are flow diagrams of a process for increasing a second threshold movement magnitude required for a second object manipulation behavior in accordance with a determination that a first object manipulation behavior has met a first threshold movement magnitude, in accordance with some embodiments.
Figures 20A-20F are flow diagrams of a process for generating an audio alert in accordance with a determination that movement of a device has caused a virtual object to move outside of the displayed field of view of one or more device cameras, in accordance with some embodiments.
Description of Embodiments
A virtual object is a graphical representation of a three-dimensional object in a virtual environment. Conventional methods of interacting with a virtual object to change it from being displayed in an application user interface (e.g., a two-dimensional application user interface that does not display an augmented reality environment) to being displayed in an augmented reality environment (e.g., an environment in which a view of the physical world is augmented with supplemental information that provides the user with additional information not available in the physical world) often require multiple separate inputs (e.g., a sequence of gestures and button presses, etc.) to achieve the intended result (e.g., adjusting the size, position, and/or orientation of the virtual object to achieve a realistic or desired appearance in the augmented reality environment). In addition, conventional input methods often involve a delay between receiving a request to display an augmented reality environment and displaying the augmented reality environment, caused by the time needed to activate one or more device cameras to capture a view of the physical world and/or the time needed to analyze and characterize the captured view of the physical world relative to the virtual object to be placed in the augmented reality environment (e.g., to detect planes and/or surfaces in the captured view of the physical world). The embodiments herein provide intuitive ways for a user to display virtual objects in a variety of contexts and/or to interact with virtual objects (e.g., by allowing the user to provide an input that switches from displaying a virtual object in a context that displays an application user interface to displaying the virtual object in an augmented reality environment; by allowing the user to change display properties of a virtual object before the virtual object is displayed in an augmented reality environment (e.g., in a three-dimensional staging environment); by providing indications that allow the user to easily identify system-wide virtual objects from among multiple applications; by changing visual properties of an object when placement information for the object is determined; by providing a calibration user interface object that is animated in accordance with the device movement needed for calibration; by constraining rotation of a displayed virtual object about an axis; by increasing a threshold movement magnitude of a second object manipulation behavior when a threshold movement magnitude of a first object manipulation behavior is met; and by providing an audio alert indicating that a virtual object has moved out of the displayed field of view).
The systems, methods, and GUIs described herein improve user interface interactions with virtual/augmented reality environments in multiple ways. For example, they make it easier to display virtual objects in an augmented reality environment and to adjust the appearance of virtual objects displayed in an augmented reality environment in response to different inputs.
Below, Figures 1A-1C, 2, and 3 provide a description of example devices. Figures 4A-4B, 5A-5AT, 6A-6AJ, 7A-7P, 11A-11V, 12A-12L, 13A-13M, 14A-14Z, and 15A-15AI illustrate example user interfaces for displaying virtual objects in a variety of contexts. Figures 8A-8E illustrate a process for displaying a representation of a virtual object when switching from displaying a first user interface region to displaying a second user interface region. Figures 9A-9D illustrate a process for displaying a first representation of a virtual object in a first user interface region, displaying a second representation of the virtual object in a second user interface region, and displaying a third representation of the virtual object with a representation of the field of view of one or more cameras. Figures 10A-10D illustrate a process for displaying items with visual indications that the items correspond to virtual three-dimensional objects. Figures 16A-16G illustrate a process for displaying virtual objects with different visual properties depending on whether object placement criteria are met. Figures 17A-17D illustrate a process for displaying a calibration user interface object that is dynamically animated in accordance with movement of one or more cameras of a device. Figures 18A-18I illustrate a process for constraining rotation of a virtual object about an axis. Figures 14AA-14AD and 19A-19H illustrate processes for increasing a second threshold movement magnitude required for a second object manipulation behavior in accordance with a determination that a first object manipulation behavior has met a first threshold movement magnitude. Figures 20A-20F illustrate a process for generating an audio alert in accordance with a determination that movement of a device has caused a virtual object to move outside of the displayed field of view of one or more device cameras. The user interfaces in Figures 5A-5AT, 6A-6AJ, 7A-7P, 11A-11V, 12A-12L, 13A-13M, 14A-14Z, and 15A-15AI are used to illustrate the processes in Figures 8A-8E, 9A-9D, 10A-10D, 14AA-14AD, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F.
Example devices
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various described embodiments. However, it will be apparent to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
It will also be understood that, although the terms "first," "second," etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the various described embodiments. The first contact and the second contact are both contacts, but they are not the same contact, unless the context clearly indicates otherwise.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "includes," "including," "comprises," and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" is, optionally, construed to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context. Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is, optionally, construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described herein. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Example embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch-screen displays and/or touchpads), are, optionally, used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch-screen display and/or a touchpad).
In the discussion that follows, an electronic device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick.
The device typically supports a variety of applications, such as one or more of the following: a note-taking application, a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
The various applications that are executed on the device optionally use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface, as well as corresponding information displayed on the device, are, optionally, adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture of the device (such as the touch-sensitive surface) optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user.
Attention is now directed toward embodiments of portable devices with touch-sensitive displays. Figure 1A is a block diagram illustrating a portable multifunction device 100 with a touch-sensitive display system 112, in accordance with some embodiments. The touch-sensitive display system 112 is sometimes called a "touch screen" for convenience, and is sometimes simply called a touch-sensitive display. The device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), a memory controller 122, one or more processing units (CPUs) 120, a peripherals interface 118, RF circuitry 108, audio circuitry 110, a speaker 111, a microphone 113, an input/output (I/O) subsystem 106, other input or control devices 116, and an external port 124. The device 100 optionally includes one or more optical sensors 164. The device 100 optionally includes one or more intensity sensors 165 for detecting intensities of contacts on the device 100 (e.g., on a touch-sensitive surface, such as the touch-sensitive display system 112 of the device 100). The device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on the device 100 (e.g., generating tactile outputs on a touch-sensitive surface, such as the touch-sensitive display system 112 of the device 100 or the touchpad 355 of the device 300). These components optionally communicate over one or more communication buses or signal lines 103.
It should be appreciated that the device 100 is only one example of a portable multifunction device, and that the device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in Figure 1A are implemented in hardware, software, firmware, or any combination thereof, including one or more signal processing circuits and/or application-specific integrated circuits.
Memory 102 optionally includes high-speed random access memory and also optionally includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory 102 by other components of the device 100, such as the one or more CPUs 120 and the peripherals interface 118, is optionally controlled by the memory controller 122.
The peripherals interface 118 can be used to couple input and output peripherals of the device to the memory 102 and the one or more CPUs 120. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in memory 102 to perform various functions for the device 100 and to process data.
In some embodiments, the peripherals interface 118, the one or more CPUs 120, and the memory controller 122 are, optionally, implemented on a single chip, such as chip 104. In some other embodiments, they are, optionally, implemented on separate chips.
The RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. The RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. The RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a subscriber identity module (SIM) card, memory, and so forth. The RF circuitry 108 optionally communicates via wireless communication with networks, such as the Internet (also referred to as the World Wide Web (WWW)), an intranet, and/or a wireless network (such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN)), and with other devices. The wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communications protocol, including communications protocols not yet developed as of the filing date of this document.
The audio circuitry 110, the speaker 111, and the microphone 113 provide an audio interface between a user and the device 100. The audio circuitry 110 receives audio data from the peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to the speaker 111. The speaker 111 converts the electrical signal to human-audible sound waves. The audio circuitry 110 also receives electrical signals converted by the microphone 113 from sound waves. The audio circuitry 110 converts the electrical signals to audio data and transmits the audio data to the peripherals interface 118 for processing. Audio data is, optionally, retrieved from and/or transmitted to the memory 102 and/or the RF circuitry 108 by the peripherals interface 118. In some embodiments, the audio circuitry 110 also includes a headset jack (e.g., 212 in Figure 2). The headset jack provides an interface between the audio circuitry 110 and removable audio input/output peripherals, such as output-only headphones (e.g., a one-ear or two-ear headphone) or a headset with both output (e.g., headphones) and input (e.g., a microphone).
The I/O subsystem 106 couples input/output peripherals on the device 100, such as the touch-sensitive display system 112 and the other input or control devices 116, to the peripherals interface 118. The I/O subsystem 106 optionally includes a display controller 156, an optical sensor controller 158, an intensity sensor controller 159, a haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/send electrical signals from/to the other input or control devices 116. The other input or control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some alternative embodiments, the one or more input controllers 160 are, optionally, coupled to any (or none) of the following: a keyboard, an infrared port, a USB port, a stylus, and/or a pointing device such as a mouse. The one or more buttons (e.g., 208 in Figure 2) optionally include an up/down button for volume control of the speaker 111 and/or the microphone 113. The one or more buttons optionally include a push button (e.g., 206 in Figure 2).
The touch-sensitive display system 112 provides an input interface and an output interface between the device and the user. The display controller 156 receives and/or sends electrical signals from/to the touch-sensitive display system 112. The touch-sensitive display system 112 displays visual output to the user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed "graphics"). In some embodiments, some or all of the visual output corresponds to user interface objects. As used herein, the term "affordance" refers to a user-interactive graphical user interface object (e.g., a graphical user interface object that is configured to respond to inputs directed toward the graphical user interface object). Examples of user-interactive graphical user interface objects include, without limitation, a button, a slider, an icon, a selectable menu item, a switch, a hyperlink, or another user interface control.
Touch-sensitive display system 112 has a touch-sensitive surface, sensor, or set of sensors that accepts input from the user based on haptic and/or tactile contact. Touch-sensitive display system 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch-sensitive display system 112 and convert the detected contact into interaction with user interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch-sensitive display system 112. In some embodiments, a point of contact between touch-sensitive display system 112 and the user corresponds to a finger of the user or a stylus.
Touch-sensitive display system 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch-sensitive display system 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch-sensitive display system 112. In some embodiments, projected mutual capacitance sensing technology is used, such as that found in the iPhone, iPod Touch, and iPad from Apple Inc. (Cupertino, California).
Touch-sensitive display system 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen video resolution exceeds 400 dpi (e.g., 500 dpi, 800 dpi, or greater). The user optionally makes contact with touch-sensitive display system 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.
In some embodiments, in addition to the touch screen, device 100 optionally includes a touchpad (not shown) for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad is optionally a touch-sensitive surface that is separate from touch-sensitive display system 112, or an extension of the touch-sensitive surface formed by the touch screen.
Device 100 also includes power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)), and any other components associated with the generation, management, and distribution of power in portable devices.
Device 100 optionally also includes one or more optical sensors 164. Figure 1A shows an optical sensor coupled with optical sensor controller 158 in I/O subsystem 106. The one or more optical sensors 164 optionally include charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors. The one or more optical sensors 164 receive light from the environment, projected through one or more lenses, and convert the light to data representing an image. In conjunction with imaging module 143 (also called a camera module), the one or more optical sensors 164 optionally capture still images and/or video. In some embodiments, an optical sensor is located on the back of device 100, opposite touch-sensitive display system 112 on the front of the device, so that the touch screen can be used as a viewfinder for still and/or video image acquisition. In some embodiments, another optical sensor is located on the front of the device so that an image of the user is obtained (e.g., for selfies, or for videoconferencing while the user views the other video conference participants on the touch screen).
Device 100 optionally also includes one or more contact intensity sensors 165. Figure 1A shows a contact intensity sensor coupled with intensity sensor controller 159 in I/O subsystem 106. The one or more contact intensity sensors 165 optionally include one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). The one or more contact intensity sensors 165 receive contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100, opposite touch-sensitive display system 112, which is located on the front of device 100.
Device 100 optionally also includes one or more proximity sensors 166. Figure 1A shows proximity sensor 166 coupled with peripheral device interface 118. Alternatively, proximity sensor 166 is coupled with input controller 160 in I/O subsystem 106. In some embodiments, the proximity sensor turns off and disables touch-sensitive display system 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
Device 100 optionally also includes one or more tactile output generators 167. Figure 1A shows a tactile output generator coupled with haptic feedback controller 161 in I/O subsystem 106. In some embodiments, the one or more tactile output generators 167 include one or more electroacoustic devices, such as speakers or other audio components, and/or electromechanical devices that convert energy into linear motion, such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device). Tactile output generator 167 receives tactile feedback generation instructions from haptic feedback module 133 and generates tactile outputs on device 100 that are capable of being sensed by a user of device 100. In some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device 100) or laterally (e.g., back and forth in the same plane as a surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch-sensitive display system 112, which is located on the front of device 100.
Device 100 optionally also includes one or more accelerometers 168. Figure 1A shows accelerometer 168 coupled with peripheral device interface 118. Alternatively, accelerometer 168 is optionally coupled with input controller 160 in I/O subsystem 106. In some embodiments, information is displayed on the touch-screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. Device 100 optionally includes, in addition to accelerometer(s) 168, a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device 100.
In some embodiments, the software components stored in memory 102 include operating system 126, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, graphics module (or set of instructions) 132, haptic feedback module (or set of instructions) 133, text input module (or set of instructions) 134, Global Positioning System (GPS) module (or set of instructions) 135, and applications (or sets of instructions) 136. Furthermore, in some embodiments, memory 102 stores device/global internal state 157, as shown in Figures 1A and 3. Device/global internal state 157 includes one or more of the following: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views, or other information occupy various regions of touch-sensitive display system 112; sensor state, including information obtained from the device's various sensors and other input or control devices 116; and location and/or positional information concerning the device's location and/or attitude.
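The fields of device/global internal state 157 listed above can be pictured as one record; a minimal sketch (the class and field names are illustrative, not from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class DeviceGlobalInternalState:
    # Active application state: which applications, if any, are active.
    active_applications: list = field(default_factory=list)
    # Display state: which application/view occupies each display region.
    display_state: dict = field(default_factory=dict)
    # Sensor state: latest readings from sensors and input/control devices.
    sensor_state: dict = field(default_factory=dict)
    # Location and attitude information for the device.
    location: tuple = None
    attitude: str = "portrait"

state = DeviceGlobalInternalState()
state.active_applications.append("browser")
state.display_state["main"] = "browser"
```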
Operating system 126 (e.g., iOS, Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used in some iPhone, iPod Touch, and iPod devices from Apple Inc. (Cupertino, California). In some embodiments, the external port is a Lightning connector that is the same as, or similar to and/or compatible with, the Lightning connector used in some iPhone, iPod Touch, and iPad devices from Apple Inc. (Cupertino, California).
Contact/motion module 130 optionally detects contact with touch-sensitive display system 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to detection of contact (e.g., by a finger or by a stylus), such as determining whether contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact, or a substitute for the force or pressure of the contact), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single contacts (e.g., one-finger contacts or stylus contacts) or to multiple simultaneous contacts (e.g., "multitouch"/multiple-finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
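The paragraph defines speed, velocity, and acceleration of the contact point from a series of contact data. A minimal sketch of those definitions over timestamped samples (the function name and sample format are assumptions for illustration):

```python
import math

def motion_metrics(samples):
    """Given [(t, x, y), ...] contact samples in time order, return the
    current speed (magnitude), direction (radians), and acceleration
    (change in speed over time) computed from the last three samples."""
    (t0, x0, y0), (t1, x1, y1), (t2, x2, y2) = samples[-3:]
    v1 = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
    v2 = math.hypot(x2 - x1, y2 - y1) / (t2 - t1)
    direction = math.atan2(y2 - y1, x2 - x1)
    accel = (v2 - v1) / (t2 - t1)
    return v2, direction, accel
```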
Contact/motion module 130 optionally detects a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, a gesture is optionally detected by detecting a particular contact pattern. For example, detecting a single-finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (lift-off) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (lift-off) event. Similarly, taps, swipes, drags, and other gestures are optionally detected for a stylus by detecting a particular contact pattern for the stylus.
In some embodiments, detecting a finger tap gesture depends on the length of time between detecting the finger-down event and the finger-up event, but is independent of the intensity of the finger contact between the finger-down event and the finger-up event. In some embodiments, a tap gesture is detected in accordance with a determination that the length of time between the finger-down event and the finger-up event is less than a predetermined value (e.g., less than 0.1, 0.2, 0.3, 0.4, or 0.5 seconds), independent of whether the intensity of the finger contact during the tap meets a given intensity threshold (greater than a nominal contact-detection intensity threshold), such as a light press or deep press intensity threshold. A finger tap gesture, therefore, can satisfy particular input criteria that do not require that the characteristic intensity of a contact satisfy a given intensity threshold in order for the particular input criteria to be met. For clarity, the finger contact in a tap gesture typically needs to satisfy a nominal contact-detection intensity threshold, below which the contact is not detected, in order for the finger-down event to be detected. A similar analysis applies to detecting a tap gesture by a stylus or other contact. In cases where the device is capable of detecting a finger or stylus contact hovering over a touch-sensitive surface, the nominal contact-detection intensity threshold optionally does not correspond to physical contact between the finger or stylus and the touch-sensitive surface.
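The tap rule above is purely time-based once a contact is detected at all. A minimal sketch of that rule (the constants and function name are illustrative; the patent lists 0.1-0.5 s only as example thresholds):

```python
TAP_MAX_DURATION = 0.3    # seconds; example value from the patent's range
NOMINAL_DETECTION = 0.05  # below this intensity no contact is detected

def is_tap(down_time, up_time, peak_intensity):
    """A tap is recognized from timing alone: the contact must exceed the
    nominal contact-detection threshold (otherwise no finger-down event
    exists), but no light-press or deep-press threshold applies."""
    if peak_intensity < NOMINAL_DETECTION:
        return False  # no finger-down event was detected
    return (up_time - down_time) < TAP_MAX_DURATION
```

Note that a hard press of short duration still counts as a tap here, which is exactly the intensity-independence the paragraph describes.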
The same concepts apply in an analogous manner to other types of gestures. For example, a swipe gesture, a pinch gesture, a depinch gesture, and/or a long press gesture are optionally detected based on the satisfaction of criteria that are either independent of the intensities of the contacts included in the gesture, or do not require that the contact(s) performing the gesture reach intensity thresholds in order to be recognized. For example, a swipe gesture is detected based on an amount of movement of one or more contacts; a pinch gesture is detected based on movement of two or more contacts towards each other; a depinch gesture is detected based on movement of two or more contacts away from each other; and a long press gesture is detected based on a duration of the contact on the touch-sensitive surface with less than a threshold amount of movement. As such, the statement that particular gesture recognition criteria do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the particular gesture recognition criteria to be met means that the particular gesture recognition criteria are capable of being satisfied when the contact(s) in the gesture do not reach the respective intensity threshold, and are also capable of being satisfied in circumstances where one or more of the contacts in the gesture do reach or exceed the respective intensity threshold. In some embodiments, a tap gesture is detected based on a determination that the finger-down and finger-up events are detected within a predefined time period, without regard to whether the contact is above or below the respective intensity threshold during the predefined time period, and a swipe gesture is detected based on a determination that the contact movement is greater than a predefined magnitude, even if the contact is above the respective intensity threshold at the end of the contact movement. Even in implementations where the detection of a gesture is influenced by the intensity of the contacts performing the gesture (e.g., the device detects a long press more quickly when the intensity of the contact is above an intensity threshold, or delays the detection of a tap input when the intensity of the contact is higher), the detection of those gestures does not require that the contacts reach a particular intensity threshold, so long as the criteria for recognizing the gesture can be met in circumstances where the contact does not reach the particular intensity threshold (e.g., even if the amount of time that it takes to recognize the gesture changes).
In some cases, contact intensity thresholds, duration thresholds, and movement thresholds are combined in a variety of different combinations in order to create heuristics for distinguishing two or more different gestures directed to the same input element or region, so that multiple different interactions with the same input element provide a richer set of user interactions and responses. The statement that a particular set of gesture recognition criteria do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the particular gesture recognition criteria to be met does not preclude the concurrent evaluation of other intensity-dependent gesture recognition criteria that identify other gestures having criteria that are met when a gesture includes a contact with an intensity above the respective intensity threshold. For example, in some circumstances, first gesture recognition criteria for a first gesture, which do not require that the intensity of the contact(s) meet a respective intensity threshold in order to be met, are in competition with second gesture recognition criteria for a second gesture, which are dependent on the contact(s) reaching the respective intensity threshold. In such competitions, the gesture is optionally not recognized as meeting the first gesture recognition criteria for the first gesture if the second gesture recognition criteria for the second gesture are met first. For example, if a contact reaches the respective intensity threshold before the contact moves by a predefined amount of movement, a deep press gesture is detected rather than a swipe gesture. Conversely, if the contact moves by the predefined amount of movement before the contact reaches the respective intensity threshold, a swipe gesture is detected rather than a deep press gesture. Even in such circumstances, the first gesture recognition criteria for the first gesture still do not require that the intensity of the contact(s) meet a respective intensity threshold, because if the contact stayed below the respective intensity threshold until the end of the gesture (e.g., a swipe gesture with a contact whose intensity never increases above the respective intensity threshold), the gesture would have been recognized by the first gesture recognition criteria as a swipe gesture. As such, particular gesture recognition criteria that do not require that the intensity of the contact(s) meet a respective intensity threshold will (A) in some circumstances ignore the intensity of the contact with respect to the intensity threshold (e.g., for a tap gesture), and/or (B) in some circumstances still be dependent on the intensity of the contact with respect to the intensity threshold, in the sense that the particular gesture recognition criteria (e.g., for a long press gesture) fail if a competing set of intensity-dependent gesture recognition criteria (e.g., for a deep press gesture) recognize an input as corresponding to an intensity-dependent gesture before the particular gesture recognition criteria recognize a gesture corresponding to the input (e.g., for a long press gesture that competes with a deep press gesture for recognition).
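The deep-press-versus-swipe competition above reduces to: whichever threshold is crossed first wins. A minimal sketch under that reading (threshold values, sample format, and function name are hypothetical):

```python
INTENSITY_THRESHOLD = 1.0   # hypothetical deep-press intensity threshold
MOVEMENT_THRESHOLD = 10.0   # hypothetical swipe movement threshold, in points

def classify(events):
    """Walk [(intensity, cumulative_movement), ...] samples in time order.
    Deep press wins if intensity crosses its threshold first; swipe wins if
    movement crosses its threshold first. (Within one sample, intensity is
    checked first; a real recognizer would use finer-grained events.)"""
    for intensity, movement in events:
        if intensity >= INTENSITY_THRESHOLD:
            return "deep press"
        if movement >= MOVEMENT_THRESHOLD:
            return "swipe"
    return None  # neither set of recognition criteria was met
```

Note that the swipe branch never reads the intensity value, which is the sense in which the swipe criteria "do not require" an intensity threshold even while competing with an intensity-dependent recognizer.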
Graphics module 132 includes various known software components for rendering and displaying graphics on touch-sensitive display system 112 or other displays, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual property) of graphics that are displayed. As used herein, the term "graphics" includes any object that can be displayed to a user, including, without limitation, text, web pages, icons (such as user interface objects including soft keys), digital images, videos, animations, and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. Graphics module 132 receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller 156.
Haptic feedback module 133 includes various software components for generating instructions (e.g., instructions used by haptic feedback controller 161) to produce tactile outputs, using tactile output generator(s) 167, at one or more locations on device 100 in response to user interactions with device 100.
Text input module 134, which is optionally a component of graphics module 132, provides soft keyboards for entering text in various applications (e.g., contacts 137, email 140, IM 141, browser 147, and any other application that needs text input).
GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to telephone 138 for use in location-based dialing; to camera 143 as picture/video metadata; and to applications that provide location-based services, such as weather widgets, local yellow page widgets, and map/navigation widgets).
Applications 136 optionally include the following modules (or sets of instructions), or a subset or superset thereof:
contacts module 137 (sometimes called an address book or contact list);
telephone module 138;
video conferencing module 139;
email client module 140;
instant messaging (IM) module 141;
workout support module 142;
camera module 143 for still and/or video images;
image management module 144;
browser module 147;
calendar module 148;
widget modules 149, which optionally include one or more of the following: weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, dictionary widget 149-5, and other widgets obtained by the user, as well as user-created widgets 149-6;
widget creator module 150 for making user-created widgets 149-6;
search module 151;
video and music player module 152, which is optionally made up of a video player module and a music player module;
notes module 153;
map module 154; and/or
online video module 155.
Examples of other applications 136 that are optionally stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, contacts module 137 includes executable instructions to manage an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), email address(es), physical address(es), or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers and/or email addresses to initiate and/or facilitate communications by telephone 138, video conference 139, email 140, or IM 141; and so forth.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, telephone module 138 includes executable instructions to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in address book 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As noted above, the wireless communication optionally uses any of a plurality of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact module 130, graphics module 132, text input module 134, contact list 137, and telephone module 138, video conferencing module 139 includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions to create, send, receive, and manage email in response to user instructions. In conjunction with image management module 144, email client module 140 makes it very easy to create and send emails with still or video images taken with camera module 143.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, instant messaging module 141 includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages, or using XMPP, SIMPLE, Apple Push Notification Service (APNs), or IMPS for Internet-based instant messages), to receive instant messages, and to view received instant messages. In some embodiments, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files, and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS). As used herein, "instant messaging" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, APNs, or IMPS).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and video and music player module 152, workout support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); communicate with workout sensors (in sports devices and smart watches); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data.
In conjunction with touch-sensitive display system 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them in memory 102, modify characteristics of a still image or video, and/or delete a still image or video from memory 102.
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, email client module 140, and browser module 147, calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, widget modules 149 are mini-applications that are, optionally, downloaded and used by a user (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, and dictionary widget 149-5) or created by the user (e.g., user-created widget 149-6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, widget creator module 150 includes executable instructions to create widgets (e.g., turning a user-specified portion of a web page into a widget).
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions to search for text, music, sound, image, video, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present, or otherwise play back videos (e.g., on touch-sensitive display system 112, or on an external display connected wirelessly or via external port 124). In some embodiments, device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, notes module 153 includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 includes executable instructions to receive, display, modify, and store maps and data associated with maps (e.g., driving directions; data on stores and other points of interest at or near a particular location; and other location-based data) in accordance with user instructions.
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, e-mail client module 140, and browser module 147, online video module 155 includes executable instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on touch screen 112, or on an external display connected wirelessly or via external port 124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module 141, rather than e-mail client module 140, is used to send a link to a particular online video.
Each of the above identified modules and applications corresponds to a set of executable instructions for performing one or more of the functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments. In some embodiments, memory 102 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 is, optionally, reduced.
The predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device 100 to a main, home, or root menu from any user interface that is displayed on device 100. In such embodiments, a "menu button" is implemented using the touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad.
Figure 1B is a block diagram illustrating example components for event handling in accordance with some embodiments. In some embodiments, memory 102 (in Figure 1A) or memory 370 (Figure 3) includes event sorter 170 (e.g., in operating system 126) and a respective application 136-1 (e.g., any of the aforementioned applications 136, 137-155, 380-390).
Event sorter 170 receives event information and determines the application 136-1 and the application view 191 of application 136-1 to which to deliver the event information. Event sorter 170 includes event monitor 171 and event dispatcher module 174. In some embodiments, application 136-1 includes application internal state 192, which indicates the current application view(s) displayed on touch-sensitive display system 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by event sorter 170 to determine which application(s) is (are) currently active, and application internal state 192 is used by event sorter 170 to determine the application view 191 to which to deliver event information.
In some embodiments, application internal state 192 includes additional information, such as one or more of: resume information to be used when application 136-1 resumes execution, user interface state information that indicates information being displayed or that is ready for display by application 136-1, a state queue for enabling the user to go back to a prior state or view of application 136-1, and a redo/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripherals interface 118. Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display system 112, as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or a sensor, such as proximity sensor 166, accelerometer(s) 168, and/or microphone 113 (through audio circuitry 110). Information that peripherals interface 118 receives from I/O subsystem 106 includes information from touch-sensitive display system 112 or a touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to the peripherals interface 118 at predetermined intervals. In response, peripherals interface 118 transmits event information. In other embodiments, peripherals interface 118 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration).
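The "significant event" condition described above can be sketched as a simple predicate: forward the input only when its magnitude exceeds a noise threshold or its duration exceeds a predetermined limit. The names and threshold values below are illustrative assumptions, not part of any actual device interface.

```python
# Hypothetical sketch of the significant-event filter; the constants
# and function name are illustrative, not a real API.
NOISE_THRESHOLD = 0.2   # normalized input magnitude (assumed value)
MIN_DURATION = 0.05     # seconds (assumed value)

def is_significant(magnitude, duration):
    """Return True when the input should be forwarded as event information."""
    return magnitude > NOISE_THRESHOLD or duration > MIN_DURATION

print(is_significant(0.5, 0.01))  # strong but brief input -> True
print(is_significant(0.1, 0.01))  # weak, brief noise -> False
```

Under this scheme, polling at fixed intervals is replaced by event-driven transmission gated on the predicate.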
In some embodiments, event sorter 170 also includes a hit view determination module 172 and/or an active event recognizer determination module 173.
Hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views, when touch-sensitive display system 112 displays more than one view. Views are made up of controls and other elements that a user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest-level view in which a touch is detected is, optionally, called the hit view, and the set of events that are recognized as proper inputs is, optionally, determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
Hit view determination module 172 receives information related to sub-events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies the hit view as the lowest view in the hierarchy that should handle the sub-event. In most circumstances, the hit view is the lowest-level view in which an initiating sub-event occurs (i.e., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
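The hit-view determination just described amounts to a depth-first walk of the view hierarchy to find the deepest view whose bounds contain the touch point. The sketch below is a minimal illustration under assumed data structures (the `View` class and `(x, y, width, height)` frames are inventions for this example, not the device's actual representation).

```python
# Illustrative-only sketch of hit-view determination; View and its
# frame layout are assumptions, not the patent's actual structures.
class View:
    def __init__(self, name, frame, subviews=()):
        self.name = name
        self.frame = frame          # (x, y, width, height)
        self.subviews = list(subviews)

    def contains(self, point):
        x, y, w, h = self.frame
        px, py = point
        return x <= px < x + w and y <= py < y + h

def hit_view(view, point):
    """Depth-first search for the deepest view containing the touch point."""
    if not view.contains(point):
        return None
    for sub in view.subviews:
        hit = hit_view(sub, point)
        if hit is not None:
            return hit       # a lower-level view wins
    return view              # no subview contains the point

root = View("window", (0, 0, 100, 100), [
    View("toolbar", (0, 0, 100, 20), [View("button", (80, 0, 20, 20))]),
    View("content", (0, 20, 100, 80)),
])
print(hit_view(root, (90, 10)).name)   # -> button
print(hit_view(root, (50, 50)).name)   # -> content
```

Once found, this lowest view is the one that subsequently receives all sub-events for the same touch, as the paragraph above describes.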
Active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events are entirely confined to the area associated with one particular view, views higher in the hierarchy still remain as actively involved views.
Event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments including active event recognizer determination module 173, event dispatcher module 174 delivers the event information to an event recognizer determined by active event recognizer determination module 173. In some embodiments, event dispatcher module 174 stores the event information in an event queue, which is retrieved by a respective event receiver module 182.
In some embodiments, operating system 126 includes event sorter 170. Alternatively, application 136-1 includes event sorter 170. In yet other embodiments, event sorter 170 is a stand-alone module, or a part of another module stored in memory 102, such as contact/motion module 130.
In some embodiments, application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, a respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of event recognizers 180 are part of a separate module, such as a user interface kit (not shown) or a higher-level object from which application 136-1 inherits methods and other properties. In some embodiments, a respective event handler 190 includes one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177, or GUI updater 178 to update the application internal state 192. Alternatively, one or more of the application views 191 include one or more respective event handlers 190. Also, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
A respective event recognizer 180 receives event information (e.g., event data 179) from event sorter 170 and identifies an event from the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 also includes at least a subset of: metadata 183, and event delivery instructions 188 (which optionally include sub-event delivery instructions).
Event receiver 182 receives event information from event sorter 170. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as the location of the sub-event. When the sub-event concerns motion of a touch, the event information optionally also includes the speed and direction of the sub-event. In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device.
Event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others. In some embodiments, sub-events in an event 187 include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first lift-off (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second lift-off (touch end) for a predetermined phase. In another example, the definition for event 2 (187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display system 112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
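The double-tap definition above is a predefined sequence of sub-events with timing constraints. As an illustration only, the check can be sketched as follows; the event representation and the timing value are assumptions made for this example, not values from the specification.

```python
# Illustrative sketch of matching a double-tap sub-event sequence:
# two down/up pairs, each phase within a predetermined duration.
MAX_PHASE = 0.3   # predetermined duration per phase, in seconds (assumed)

def is_double_tap(subevents):
    """subevents: list of (kind, timestamp); kind is 'down' or 'up'."""
    if [k for k, _ in subevents] != ["down", "up", "down", "up"]:
        return False
    times = [t for _, t in subevents]
    # each touch and each lift-off must occur within the predetermined duration
    return all(b - a <= MAX_PHASE for a, b in zip(times, times[1:]))

print(is_double_tap([("down", 0.0), ("up", 0.1), ("down", 0.3), ("up", 0.4)]))  # True
print(is_double_tap([("down", 0.0), ("up", 0.1), ("down", 0.9), ("up", 1.0)]))  # False: second tap too late
```

A drag definition would differ only in the expected sequence (touch begin, movement sub-events, touch end) rather than in the matching mechanism.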
In some embodiments, event definitions 187 include a definition of an event for a respective user-interface object. In some embodiments, event comparator 184 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display system 112, when a touch is detected on touch-sensitive display system 112, event comparator 184 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and the object triggering the hit test.
In some embodiments, the definition for a respective event 187 also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer's event type.
When a respective event recognizer 180 determines that the series of sub-events does not match any of the events in event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, a respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
In some embodiments, a respective event recognizer 180 activates event handler 190 associated with an event when one or more particular sub-events of the event are recognized. In some embodiments, a respective event recognizer 180 delivers event information associated with the event to event handler 190. Activating an event handler 190 is distinct from sending (and deferred sending of) sub-events to a respective hit view. In some embodiments, event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process.
In some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events, or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process.
In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates the telephone number used in contacts module 137, or stores a video file used in video and music player module 152. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user-interface object or updates the position of a user-interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on the touch-sensitive display.
In some embodiments, event handler(s) 190 includes, or has access to, data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It should be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction device 100 with input devices, not all of which are initiated on touch screens. For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc., on touchpads; pen stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events that define an event to be recognized.
Figure 1C is a block diagram illustrating a tactile output module in accordance with some embodiments. In some embodiments, I/O subsystem 106 (e.g., haptic feedback controller 161 (Figure 1A) and/or other input controller(s) 160 (Figure 1A)) includes at least some of the example components shown in Figure 1C. In some embodiments, peripherals interface 118 includes at least some of the example components shown in Figure 1C.
In some embodiments, the tactile output module includes haptic feedback module 133. In some embodiments, haptic feedback module 133 aggregates and combines tactile outputs for user interface feedback from software applications on the electronic device (e.g., feedback that is responsive to user inputs that correspond to displayed user interfaces, and to alerts and other notifications that indicate the performance of operations or occurrence of events in user interfaces of the electronic device). Haptic feedback module 133 includes one or more of: waveform module 123 (for providing waveforms used for generating tactile outputs), mixer 125 (for mixing waveforms, such as waveforms in different channels), compressor 127 (for reducing or compressing a dynamic range of waveforms), low-pass filter 129 (for filtering out high-frequency signal components in waveforms), and thermal controller 131 (for adjusting waveforms in accordance with thermal conditions). In some embodiments, haptic feedback module 133 is included in haptic feedback controller 161 (Figure 1A). In some embodiments, a separate unit of haptic feedback module 133 (or a separate implementation of haptic feedback module 133) is also included in an audio controller (e.g., audio circuitry 110, Figure 1A) and used for generating audio signals. In some embodiments, a single haptic feedback module 133 is used for generating audio signals and generating waveforms for tactile outputs.
In some embodiments, haptic feedback module 133 also includes trigger module 121 (e.g., a software application, operating system, or other software module that determines that a tactile output is to be generated and initiates the process for generating the corresponding tactile output). In some embodiments, trigger module 121 generates trigger signals for initiating generation of waveforms (e.g., by waveform module 123). For example, trigger module 121 generates trigger signals based on preset timing criteria. In some embodiments, trigger module 121 receives trigger signals from outside haptic feedback module 133 (e.g., in some embodiments, haptic feedback module 133 receives trigger signals from hardware input processing module 146 located outside haptic feedback module 133) and relays the trigger signals to other components within haptic feedback module 133 (e.g., waveform module 123), or to software applications that trigger operations (e.g., with trigger module 121) based on activation of a user interface element (e.g., an application icon or an affordance within an application) or a hardware input device (e.g., a home button or an intensity-sensitive input surface, such as an intensity-sensitive touch screen). In some embodiments, trigger module 121 also receives tactile feedback generation instructions (e.g., from haptic feedback module 133, Figures 1A and 3). In some embodiments, trigger module 121 generates trigger signals in response to haptic feedback module 133 (or trigger module 121 in haptic feedback module 133) receiving tactile feedback instructions (e.g., from haptic feedback module 133, Figures 1A and 3).
Waveform module 123 receives trigger signals (e.g., from trigger module 121) as an input, and in response to receiving trigger signals, provides waveforms for generation of one or more tactile outputs (e.g., waveforms selected from a predefined set of waveforms designated for use by waveform module 123, such as the waveforms described in greater detail below with reference to Figures 4F-4G).
Mixer 125 receives waveforms (e.g., from waveform module 123) as an input, and mixes the waveforms together. For example, when mixer 125 receives two or more waveforms (e.g., a first waveform in a first channel and a second waveform in a second channel that at least partially overlaps the first waveform), mixer 125 outputs a combined waveform that corresponds to a sum of the two or more waveforms. In some embodiments, mixer 125 also modifies one or more waveforms of the two or more waveforms to emphasize particular waveform(s) over the rest of the two or more waveforms (e.g., by increasing a scale of the particular waveform(s) and/or decreasing a scale of the rest of the waveforms). In some circumstances, mixer 125 selects one or more waveforms to remove from the combined waveform (e.g., the waveform from the oldest source is dropped when there are waveforms from more than three sources that have been requested to be output concurrently by tactile output generator 167).
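The mixing behavior described above (summing overlapping waveforms and dropping the oldest source beyond a concurrency limit) can be sketched minimally as follows. The list representation and the three-source limit used here are illustrative readings of the example in the text, not an implementation of the actual mixer.

```python
# Illustrative sketch of waveform mixing with an oldest-source drop rule.
MAX_SOURCES = 3   # concurrency limit taken from the example above

def mix(waveforms):
    """waveforms: oldest-first list of equal-length sample lists."""
    active = waveforms[-MAX_SOURCES:]          # discard oldest beyond the limit
    return [sum(samples) for samples in zip(*active)]

print(mix([[1, 1], [2, 2], [3, 3], [4, 4]]))   # oldest [1, 1] dropped -> [9, 9]
```

Emphasizing one waveform over the others, as the paragraph also mentions, would amount to scaling individual sample lists before the sum.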
Compressor 127 receives waveforms (e.g., a combined waveform from mixer 125) as an input, and modifies the waveforms. In some embodiments, compressor 127 reduces the waveforms (e.g., in accordance with physical specifications of tactile output generators 167 (Figure 1A) or 357 (Figure 3)) so that tactile outputs corresponding to the waveforms are reduced. In some embodiments, compressor 127 limits the waveforms, such as by enforcing a predefined maximum amplitude for the waveforms. For example, compressor 127 reduces amplitudes of portions of waveforms that exceed a predefined amplitude threshold, while maintaining amplitudes of portions of waveforms that do not exceed the predefined amplitude threshold. In some embodiments, compressor 127 reduces a dynamic range of the waveforms. In some embodiments, compressor 127 dynamically reduces the dynamic range of the waveforms so that the combined waveforms remain within performance specifications of tactile output generator 167 (e.g., force and/or moveable mass displacement limits).
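In its simplest reading, enforcing a predefined maximum amplitude as described above is a clamp: samples above the threshold are reduced to it, samples below pass through unchanged. The amplitude limit below is an assumed value for illustration.

```python
# Illustrative sketch of amplitude limiting in the compressor.
MAX_AMPLITUDE = 1.0   # predefined maximum amplitude (assumed value)

def compress(waveform):
    """Clamp each sample into [-MAX_AMPLITUDE, MAX_AMPLITUDE]."""
    return [max(-MAX_AMPLITUDE, min(MAX_AMPLITUDE, s)) for s in waveform]

print(compress([0.5, 1.4, -2.0]))   # -> [0.5, 1.0, -1.0]
```

A dynamic-range compressor in the fuller sense would scale the signal smoothly rather than hard-clip, but the clamp captures the limiting behavior the paragraph describes.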
Low-pass filter 129 receives waveforms (e.g., compressed waveforms from compressor 127) as an input, and filters (e.g., smooths) the waveforms (e.g., removes or reduces high-frequency signal components in the waveforms). For example, in some instances, compressor 127 includes, in compressed waveforms, extraneous signals (e.g., high-frequency signal components) that interfere with the generation of tactile outputs and/or exceed performance specifications of tactile output generator 167 when tactile outputs are generated in accordance with the compressed waveforms. Low-pass filter 129 reduces or removes such extraneous signals in the waveforms.
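One elementary way to picture the smoothing described above is a short moving average, which attenuates high-frequency components. A real low-pass filter would be designed to a cutoff specification; the three-point window here is purely illustrative.

```python
# Illustrative sketch of low-pass smoothing via a 3-point moving average.
def low_pass(waveform):
    """Smooth a waveform; edges are padded by repeating the end samples."""
    padded = [waveform[0]] + waveform + [waveform[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(waveform))]

print(low_pass([0.0, 3.0, 0.0]))   # sharp spike spread out -> [1.0, 1.0, 1.0]
```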
Thermal controller 131 receives waveforms (e.g., filtered waveforms from low-pass filter 129) as an input, and adjusts the waveforms in accordance with thermal conditions of device 100 (e.g., based on internal temperatures detected within device 100, such as the temperature of haptic feedback controller 161, and/or external temperatures detected by device 100). For example, in some cases, the output of haptic feedback controller 161 varies depending on the temperature (e.g., in response to receiving the same waveforms, haptic feedback controller 161 generates a first tactile output when haptic feedback controller 161 is at a first temperature, and generates a second tactile output when haptic feedback controller 161 is at a second temperature that is distinct from the first temperature). For example, the magnitude (or amplitude) of the tactile outputs may vary depending on the temperature. To reduce the effect of the temperature variations, the waveforms are modified (e.g., an amplitude of the waveforms is increased or decreased based on the temperature).
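The compensation described above can be pictured as scaling the waveform amplitude as a function of the measured temperature so that the perceived output stays roughly constant. The reference temperature and gain slope below are invented values for illustration only.

```python
# Illustrative sketch of temperature-compensated amplitude scaling.
REFERENCE_TEMP_C = 25.0   # assumed reference temperature
GAIN_PER_DEGREE = 0.01    # assumed 1% compensation per degree Celsius

def thermally_adjust(waveform, temperature_c):
    """Scale samples up or down based on deviation from the reference temperature."""
    scale = 1.0 + GAIN_PER_DEGREE * (temperature_c - REFERENCE_TEMP_C)
    return [s * scale for s in waveform]

print(thermally_adjust([1.0, -1.0], 25.0))  # at reference temperature: unchanged
print(thermally_adjust([1.0, -1.0], 35.0))  # amplitude boosted ~10%
```

Whether the amplitude is increased or decreased with temperature depends on how the specific actuator's output drifts; the sign of the slope here is arbitrary.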
In some embodiments, haptic feedback module 133 (e.g., trigger module 121) is coupled to hardware input processing module 146. In some embodiments, other input controller(s) 160 in Figure 1A include hardware input processing module 146. In some embodiments, hardware input processing module 146 receives inputs from hardware input device 145 (e.g., other input or control devices 116 in Figure 1A, such as a home button or an intensity-sensitive input surface, such as an intensity-sensitive touch screen). In some embodiments, hardware input device 145 is any input device described herein, such as touch-sensitive display system 112 (Figure 1A), keyboard/mouse 350 (Figure 3), touchpad 355 (Figure 3), one of other input or control devices 116 (Figure 1A), or an intensity-sensitive home button. In some embodiments, hardware input device 145 consists of an intensity-sensitive home button, and not touch-sensitive display system 112 (Figure 1A), keyboard/mouse 350 (Figure 3), or touchpad 355 (Figure 3). In some embodiments, in response to inputs from hardware input device 145 (e.g., an intensity-sensitive home button or a touch screen), hardware input processing module 146 provides one or more trigger signals to haptic feedback module 133 to indicate that a user input satisfying predefined input criteria, such as an input corresponding to a home-button "click" (e.g., a "down click" or an "up click"), has been detected. In some embodiments, haptic feedback module 133 provides waveforms that correspond to the home-button "click" in response to the input corresponding to the home-button "click", simulating the haptic feedback of pressing a physical home button.
In some embodiments, the tactile output module includes haptic feedback controller 161 (e.g., haptic feedback controller 161 in Figure 1A), which controls the generation of tactile outputs. In some embodiments, haptic feedback controller 161 is coupled to a plurality of tactile output generators, and selects one or more of the plurality of tactile output generators and sends waveforms to the selected one or more tactile output generators for generating tactile outputs. In some embodiments, haptic feedback controller 161 coordinates tactile output requests that correspond to activation of hardware input device 145 and tactile output requests that correspond to software events (e.g., tactile output requests from haptic feedback module 133), and modifies one or more of the two or more waveforms to emphasize a particular waveform relative to the rest of the two or more waveforms (e.g., by increasing a scale of the particular waveform and/or decreasing a scale of the rest of the waveforms, so as to prioritize tactile outputs that correspond to activation of hardware input device 145 over tactile outputs that correspond to software events).
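A minimal sketch of the waveform-emphasis idea just described, assuming equal-length normalized sample lists and invented scale factors: the hardware-activation waveform keeps its scale while the concurrent software-event waveform is attenuated before the two are mixed.

```python
# Sketch with assumed scale factors: emphasize the hardware-button waveform
# over a concurrent software-event waveform before mixing element-wise.
EMPHASIS = 1.0      # scale kept for the prioritized (hardware) waveform
DEEMPHASIS = 0.25   # assumed attenuation for the remaining waveform

def mix_waveforms(hardware_wave, software_wave):
    """Element-wise mix of two equal-length sample lists after re-scaling."""
    return [EMPHASIS * h + DEEMPHASIS * s
            for h, s in zip(hardware_wave, software_wave)]
```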
In some embodiments, as shown in Figure 1C, an output of haptic feedback controller 161 is coupled to audio circuitry of device 100 (e.g., audio circuitry 110, Figure 1A) and provides audio signals to the audio circuitry of device 100. In some embodiments, haptic feedback controller 161 provides both waveforms used for generating tactile outputs and audio signals used for providing audio outputs in conjunction with the generation of the tactile outputs. In some embodiments, haptic feedback controller 161 modifies audio signals and/or waveforms (used for generating tactile outputs) so that the audio outputs and the tactile outputs are synchronized (e.g., by delaying the audio signals and/or waveforms). In some embodiments, haptic feedback controller 161 includes a digital-to-analog converter used for converting digital waveforms into analog signals, which are received by amplifier 163 and/or tactile output generator 167.
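The synchronization-by-delay idea can be sketched as prepending silence to the stream that leads; the sample lists and the delay amount are illustrative assumptions, not values from the patent.

```python
# Sketch: align an audio stream with a haptic waveform by delaying the audio,
# i.e., prepending zero-valued samples so both streams begin together.
def synchronize(audio_samples, haptic_samples, audio_delay_samples):
    """Delay the audio by prepending zeros; return both aligned streams."""
    return [0.0] * audio_delay_samples + audio_samples, haptic_samples
```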
In some embodiments, the tactile output module includes amplifier 163. In some embodiments, amplifier 163 receives waveforms (e.g., from haptic feedback controller 161) and amplifies the waveforms prior to sending the amplified waveforms to tactile output generator 167 (e.g., any of tactile output generators 167 (Figure 1A) or 357 (Figure 3)). For example, amplifier 163 amplifies the received waveforms to signal levels that are in accordance with the physical specifications of tactile output generator 167 (e.g., to a voltage and/or a current required by tactile output generator 167 for generating tactile outputs, so that the signals sent to tactile output generator 167 produce tactile outputs that correspond to the waveforms received from haptic feedback controller 161) and sends the amplified waveforms to tactile output generator 167. In response, tactile output generator 167 generates tactile outputs (e.g., by shifting a moveable mass back and forth in one or more dimensions relative to a neutral position of the moveable mass).
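The amplification step can be sketched as scaling plus clamping, assuming a hypothetical rated signal level for the generator; the limit and gain values are invented for illustration.

```python
# Sketch: scale each waveform sample by a gain, then clamp so the signal
# sent to the generator stays within its (assumed) physical specification.
V_MAX = 2.5  # assumed maximum signal level tolerated by the generator

def amplify(waveform, gain):
    """Scale a waveform by `gain`, clamped to the generator's rated level."""
    return [max(-V_MAX, min(V_MAX, gain * s)) for s in waveform]
```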
In some embodiments, the tactile output module includes sensor 169, which is coupled to tactile output generator 167. Sensor 169 detects states or state changes (e.g., mechanical position, physical displacement, and/or movement) of tactile output generator 167 or of one or more components of tactile output generator 167 (e.g., one or more moving parts, such as a membrane, used to generate tactile outputs). In some embodiments, sensor 169 is a magnetic field sensor (e.g., a Hall effect sensor) or another displacement and/or movement sensor. In some embodiments, sensor 169 provides information (e.g., a position, a displacement, and/or a movement of one or more components of tactile output generator 167) to haptic feedback controller 161, and, in accordance with the information provided by sensor 169 about the state of tactile output generator 167, haptic feedback controller 161 adjusts the waveforms output from haptic feedback controller 161 (e.g., waveforms sent to tactile output generator 167, optionally via amplifier 163).
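The closed loop just described can be sketched as a gain adjustment driven by the sensor's displacement report, so the moving mass tracks the commanded amplitude; the step size is an assumed tuning value.

```python
# Sketch of the feedback loop: nudge the controller's output gain based on
# the displacement reported by the sensor (undershoot -> raise gain,
# overshoot -> lower it). The step size is an invented tuning value.
def adjust_gain(gain, commanded, measured, step=0.05):
    """Raise gain on undershoot, lower it on overshoot, else leave it."""
    if measured < commanded:
        return gain + step
    if measured > commanded:
        return gain - step
    return gain
```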
Figure 2 illustrates portable multifunction device 100 having a touch screen (e.g., touch-sensitive display system 112, Figure 1A) in accordance with some embodiments. The touch screen optionally displays one or more graphics within user interface (UI) 200. In these embodiments, as well as others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, from right to left, upward and/or downward), and/or a rolling (from right to left, from left to right, upward and/or downward) of a finger that has made contact with device 100. In some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. For example, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to selection is a tap.
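The tap-versus-swipe distinction drawn above can be sketched by total contact movement: when selection is bound to a tap, a contact whose movement exceeds a small tolerance is treated as a swipe and does not select the icon it passes over. The movement tolerance is an assumed value.

```python
# Sketch: classify a contact by how far it moved between touch-down and
# lift-off; only a "tap" selects the underlying icon. Tolerance is assumed.
MOVE_TOLERANCE = 10.0  # assumed maximum movement (in points) for a tap

def classify(start, end):
    """Return "tap" or "swipe" based on total movement of the contact."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    return "tap" if (dx * dx + dy * dy) ** 0.5 <= MOVE_TOLERANCE else "swipe"

def selects_icon(start, end):
    """With selection bound to a tap, only a tap selects."""
    return classify(start, end) == "tap"
```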
Device 100 optionally also includes one or more physical buttons, such as a "home" or menu button 204. As described previously, menu button 204 is, optionally, used to navigate to any application 136 in a set of applications that are, optionally, executed on device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on the touch-screen display.
In some embodiments, device 100 includes the touch-screen display, menu button 204 (sometimes called home button 204), push button 206 for powering the device on/off and for locking the device, volume adjustment button(s) 208, Subscriber Identity Module (SIM) card slot 210, headset jack 212, and docking/charging external port 124. Push button 206 is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In some embodiments, device 100 also accepts verbal input, through microphone 113, for activation or deactivation of some functions. Device 100 also, optionally, includes one or more contact intensity sensors 165 for detecting the intensities of contacts on touch-sensitive display system 112, and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
Figure 3 is a block diagram of an example multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. Device 300 need not be portable. In some embodiments, device 300 is a laptop computer, desktop computer, tablet computer, multimedia player device, navigation device, educational device (such as a child's learning toy), gaming system, or control device (e.g., a home or industrial controller). Device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communications interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. Communication buses 320 optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Device 300 includes input/output (I/O) interface 330 comprising display 340, which is typically a touch-screen display. I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and touchpad 355, tactile output generator 357 for generating tactile outputs on device 300 (e.g., similar to tactile output generator(s) 167 described above with reference to Figure 1A), and sensors 359 (e.g., optical, acceleration, proximity, touch-sensitive, and/or contact intensity sensors similar to contact intensity sensor(s) 165 described above with reference to Figure 1A). Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices remotely located from CPU(s) 310. In some embodiments, memory 370 stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory 102 of portable multifunction device 100 (Figure 1A), or a subset thereof. Furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk authoring module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (Figure 1A) optionally does not store these modules.
Each of the above-identified elements in Figure 3 is optionally stored in one or more of the previously mentioned memory devices. Each of the above-identified modules corresponds to a set of instructions for performing a function described above. The above-identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise re-arranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 370 optionally stores additional modules and data structures not described above.
Attention is now directed toward embodiments of user interfaces ("UI") that are, optionally, implemented on portable multifunction device 100.
Figure 4A illustrates an example user interface for a menu of applications on portable multifunction device 100 in accordance with some embodiments. Similar user interfaces are, optionally, implemented on device 300. In some embodiments, user interface 400 includes the following elements, or a subset or superset thereof:
One or more signal strength indicators for wireless communication(s), such as cellular and Wi-Fi signals;
Time;
Bluetooth indicator;
Battery status indicator;
Tray 408 with icons for frequently used applications, such as:
o Icon 416 for telephone module 138, labeled "Phone", which optionally includes an indicator 414 of the number of missed calls or voicemail messages;
o Icon 418 for e-mail client module 140, labeled "Mail", which optionally includes an indicator 410 of the number of unread e-mails;
o Icon 420 for browser module 147, labeled "Browser"; and
o Icon 422 for video and music player module 152, labeled "Music"; and
Icons for other applications, such as:
o Icon 424 for IM module 141, labeled "Messages";
o Icon 426 for calendar module 148, labeled "Calendar";
o Icon 428 for image management module 144, labeled "Photos";
o Icon 430 for camera module 143, labeled "Camera";
o Icon 432 for online video module 155, labeled "Online Video";
o Icon 434 for stocks widget 149-2, labeled "Stocks";
o Icon 436 for map module 154, labeled "Maps";
o Icon 438 for weather widget 149-1, labeled "Weather";
o Icon 440 for alarm clock widget 149-4, labeled "Clock";
o Icon 442 for workout support module 142, labeled "Workout Support";
o Icon 444 for notes module 153, labeled "Notes"; and
o Icon 446 for a settings application or module, which provides access to settings for device 100 and its various applications 136.
It should be noted that the icon labels illustrated in Figure 4A are merely examples. For example, other labels are, optionally, used for various application icons. In some embodiments, a label for a respective application icon includes the name of the application corresponding to the respective application icon. In some embodiments, a label for a particular application icon is distinct from the name of the application corresponding to that particular application icon.
Figure 4B illustrates an example user interface on a device (e.g., device 300, Figure 3) with a touch-sensitive surface 451 (e.g., a tablet or touchpad 355, Figure 3) that is separate from the display 450. Although many of the examples that follow will be given with reference to inputs on touch-screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments the device detects inputs on a touch-sensitive surface that is separate from the display, as shown in Figure 4B. In some embodiments, the touch-sensitive surface (e.g., 451 in Figure 4B) has a primary axis (e.g., 452 in Figure 4B) that corresponds to a primary axis (e.g., 453 in Figure 4B) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in Figure 4B) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g., in Figure 4B, 460 corresponds to 468 and 462 corresponds to 470). In this way, user inputs (e.g., contacts 460 and 462, and movements thereof) detected by the device on the touch-sensitive surface (e.g., 451 in Figure 4B) are used by the device to manipulate the user interface on the display (e.g., 450 in Figure 4B) of the multifunction device when the touch-sensitive surface is separate from the display. It should be understood that similar methods are, optionally, used for other user interfaces described herein.
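The correspondence between locations on the separate touch-sensitive surface and locations on the display can be sketched as a proportional mapping along each primary axis; the sizes and points below are illustrative, not values from the figures.

```python
# Sketch: map a contact's (x, y) location on the separate touch surface to
# the corresponding display location by its proportional position along
# each primary axis. Surface and display sizes are illustrative.
def to_display(point, surface_size, display_size):
    """Map (x, y) on the touch surface to the corresponding display point."""
    x, y = point
    sw, sh = surface_size
    dw, dh = display_size
    return (x / sw * dw, y / sh * dh)
```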
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, single-finger tap gestures, finger swipe gestures, etc.), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse-based input or a stylus input). For example, a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact), followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously.
As used herein, the term "focus selector" refers to an input element that indicates the current part of a user interface with which the user is interacting. In some implementations that include a cursor or other location marker, the cursor acts as a "focus selector", so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in Figure 3 or touch-sensitive surface 451 in Figure 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch-screen display (e.g., touch-sensitive display system 112 in Figure 1A or the touch screen in Figure 4A) that enables direct interaction with user interface elements on the touch-screen display, a detected contact on the touch screen acts as a "focus selector", so that when an input (e.g., a press input by the contact) is detected on the touch-screen display at the location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, focus is moved from one region of a user interface to another region of the user interface without corresponding movement of a cursor or movement of a contact on a touch-screen display (e.g., by using a tab key or arrow keys to move focus from one button to another button); in these implementations, the focus selector moves in accordance with the movement of focus between different regions of the user interface. Without regard to the specific form taken by the focus selector, the focus selector is generally the user interface element (or contact on a touch-screen display) that is controlled by the user so as to communicate the user's intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user intends to interact). For example, the location of a focus selector (e.g., a cursor, a contact, or a selection box) over a respective button while a press input is detected on a touch-sensitive surface (e.g., a touchpad or touch screen) will indicate that the user intends to activate the respective button (as opposed to other user interface elements shown on a display of the device).
As used in the specification and claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact or a stylus contact) on the touch-sensitive surface, or to a substitute (surrogate) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). The intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average or a sum) to determine an estimated force of a contact. Similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine the pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows for user access to additional device functionality that may otherwise not be accessible by the user on a reduced-size device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button).
In some embodiments, contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has "clicked" on an icon). In some embodiments, at least a subset of the intensity thresholds is determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device 100). For example, a mouse "click" threshold of a trackpad or touch-screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch-screen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click "intensity" parameter).
As used in the specification and claims, the term "characteristic intensity" of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is, optionally, based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, or 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). A characteristic intensity of a contact is, optionally, based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top-10-percentile value of the intensities of the contact, a value at half maximum of the intensities of the contact, a value at 90 percent of the maximum of the intensities of the contact, a value produced by low-pass filtering the intensity of the contact over a predefined period or starting at a predefined time, or the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by a user. For example, the set of one or more intensity thresholds may include a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first intensity threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold but does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second intensity threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more intensity thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation.
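The two-threshold example above can be sketched as follows, in assumed normalized intensity units with invented threshold values; the characteristic intensity here is taken to be the mean of the samples, which is one of the options listed.

```python
# Minimal sketch of the two-threshold dispatch. Threshold values and the
# normalized intensity scale are assumptions for illustration.
FIRST_THRESHOLD = 0.3
SECOND_THRESHOLD = 0.7

def characteristic_intensity(samples):
    """Mean of the collected intensity samples (one listed option)."""
    return sum(samples) / len(samples)

def operation_for(ci):
    """Dispatch an operation based on where the characteristic intensity falls."""
    if ci < FIRST_THRESHOLD:
        return "first-operation"
    if ci < SECOND_THRESHOLD:
        return "second-operation"
    return "third-operation"
```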
In some embodiments, a portion of a gesture is identified for purposes of determining a characteristic intensity. For example, a touch-sensitive surface may receive a continuous swipe contact transitioning from a start location and reaching an end location (e.g., a drag gesture), at which point the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end location may be based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some embodiments, a smoothing algorithm may be applied to the intensities of the swipe gesture prior to determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining a characteristic intensity.
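The unweighted sliding-average option can be sketched as below; the window length is an assumed parameter. Note how a one-sample spike is flattened, which is the spike-elimination effect described above.

```python
# Sketch of unweighted sliding-average smoothing over intensity samples:
# each output is the mean of the most recent `window` inputs (fewer at the
# start of the sequence). The window length is an assumed parameter.
def moving_average(samples, window=3):
    smoothed = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed
```

A narrow spike of 9 in an otherwise flat sequence is spread into values of 3, so it no longer dominates the characteristic intensity.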
The user interface figures described herein optionally include various intensity diagrams that show the current intensity of the contact on the touch-sensitive surface relative to one or more intensity thresholds (e.g., a contact-detection intensity threshold IT0, a light press intensity threshold ITL, a deep press intensity threshold ITD (e.g., that is at least initially higher than ITL), and/or one or more other intensity thresholds (e.g., an intensity threshold ITH that is lower than ITL)). This intensity diagram is typically not part of the displayed user interface, but is provided to aid in the interpretation of the figures. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact-detection intensity threshold IT0, below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures.
In some embodiments, the response of the device to inputs detected by the device depends on criteria based on the contact intensity during the input. For example, for some "light press" inputs, the intensity of a contact exceeding a first intensity threshold during the input triggers a first response. In some embodiments, the response of the device to inputs detected by the device depends on criteria that include both the contact intensity during the input and time-based criteria. For example, for some "deep press" inputs, the intensity of a contact exceeding a second intensity threshold, greater than the first intensity threshold for a light press, during the input triggers a second response only if a delay time has elapsed between meeting the first intensity threshold and meeting the second intensity threshold. This delay time is typically less than 200 ms (milliseconds) in duration (e.g., 40, 100, or 120 ms, depending on the magnitude of the second intensity threshold, with the delay time increasing as the second intensity threshold increases). This delay time helps to avoid accidental recognition of deep press inputs. As another example, for some "deep press" inputs, there is a reduced-sensitivity time period that occurs after the time at which the first intensity threshold is met. During the reduced-sensitivity time period, the second intensity threshold is increased. This temporary increase in the second intensity threshold also helps to avoid accidental deep press inputs. For other deep press inputs, the response to detection of a deep press input does not depend on time-based criteria.
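A sketch of the time-based criteria just described: the mapping from the second threshold's magnitude to the delay is invented, since the text only states that the delay grows with the threshold and stays under 200 ms.

```python
# Sketch of time-based "deep press" criteria. The linear mapping from the
# second threshold (normalized 0..1) to the delay is an assumption.
def deep_press_delay_ms(second_threshold):
    """Delay (ms) before a deep press may trigger; grows with the threshold."""
    return min(120, 40 + int(80 * second_threshold))

def triggers_deep_press(t_first_met_ms, t_second_met_ms, second_threshold):
    """The second response fires only after the delay has elapsed."""
    return (t_second_met_ms - t_first_met_ms) >= deep_press_delay_ms(second_threshold)
```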
In some embodiments, one or more of the input intensity thresholds and/or the corresponding outputs vary based on one or more factors, such as user settings, contact motion, input timing, application running, rate at which the intensity is applied, number of concurrent inputs, user history, environmental factors (e.g., ambient noise), focus selector position, and the like. Example factors are described in U.S. Patent Application Serial Nos. 14/399,606 and 14/624,296, which are incorporated by reference herein in their entireties.
For example, Figure 4C illustrates a dynamic intensity threshold 480 that changes over time based in part on the intensity of touch input 476 over time. Dynamic intensity threshold 480 is a sum of two components: a first component 474 that decays over time after a predefined delay time p1 from when touch input 476 is initially detected, and a second component 478 that trails the intensity of touch input 476 over time. The initial high intensity threshold of first component 474 reduces accidental triggering of a "deep press" response, while still allowing an immediate "deep press" response if touch input 476 provides sufficient intensity. Second component 478 reduces unintentional triggering of a "deep press" response by gradual intensity fluctuations in a touch input. In some embodiments, when touch input 476 satisfies dynamic intensity threshold 480 (e.g., at point 481 in Figure 4C), the "deep press" response is triggered.
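The two-component threshold of Figure 4C can be sketched as a decaying first component plus an intensity-trailing second component; the delay, decay constant, and trailing gain below are invented for illustration.

```python
# Sketch of the two-component dynamic threshold: component 474 holds an
# initial high value for delay p1 and then decays; component 478 trails the
# touch intensity. All constants are assumed values.
import math

P1 = 0.1          # assumed predefined delay before decay begins (seconds)
BASE = 1.0        # assumed initial high value of the first component
TAU = 0.2         # assumed decay time constant (seconds)
TRAIL_GAIN = 0.5  # assumed fraction by which the second component trails

def first_component(t):
    """Hold BASE until P1 has elapsed, then decay exponentially."""
    return BASE if t <= P1 else BASE * math.exp(-(t - P1) / TAU)

def dynamic_threshold(t, trailing_intensity):
    """Sum of the decaying component and the intensity-trailing component."""
    return first_component(t) + TRAIL_GAIN * trailing_intensity
```

Early in the touch the threshold is high (reducing accidental deep presses); later, or when the input's intensity has been low, the threshold is lower, matching the behavior the figure describes.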
Figure 4D illustrates another dynamic intensity threshold 486 (e.g., intensity threshold ID). Figure 4D also illustrates two other intensity thresholds: a first intensity threshold IH and a second intensity threshold IL. In Figure 4D, although touch input 484 satisfies the first intensity threshold IH and the second intensity threshold IL prior to time p2, no response is provided until delay time p2 has elapsed at time 482. Also in Figure 4D, dynamic intensity threshold 486 decays over time, with the decay starting at time 488, after a predefined delay time p1 has elapsed from time 482 (when the response associated with the second intensity threshold IL was triggered). This type of dynamic intensity threshold reduces accidental triggering of a response associated with the dynamic intensity threshold ID immediately after, or concurrently with, triggering a response associated with a lower intensity threshold, such as the first intensity threshold IH or the second intensity threshold IL.
Figure 4E illustrates yet another dynamic intensity threshold 492 (e.g., intensity threshold ID). In Figure 4E, a response associated with the intensity threshold IL is triggered after the delay time p2 has elapsed from when the touch input 490 is initially detected. Concurrently, the dynamic intensity threshold 492 decays after the predefined delay time p1 has elapsed from when the touch input 490 is initially detected. Thus, a decrease in intensity of the touch input 490 after triggering the response associated with the intensity threshold IL, followed by an increase in intensity of the touch input 490 without releasing the touch input 490, can trigger a response associated with the intensity threshold ID (e.g., at time 494), even when the intensity of the touch input 490 is below another intensity threshold (e.g., the intensity threshold IL).
An increase of the characteristic intensity of a contact from an intensity below the light press intensity threshold ITL to an intensity between the light press intensity threshold ITL and the deep press intensity threshold ITD is sometimes referred to as a "light press" input. An increase of the characteristic intensity of the contact from an intensity below the deep press intensity threshold ITD to an intensity above the deep press intensity threshold ITD is sometimes referred to as a "deep press" input. An increase of the characteristic intensity of the contact from an intensity below the contact-detection intensity threshold IT0 to an intensity between the contact-detection intensity threshold IT0 and the light press intensity threshold ITL is sometimes referred to as detecting the contact on the touch-surface. A decrease of the characteristic intensity of the contact from an intensity above the contact-detection intensity threshold IT0 to an intensity below the contact-detection intensity threshold IT0 is sometimes referred to as detecting liftoff of the contact from the touch-surface. In some embodiments, IT0 is zero. In some embodiments, IT0 is greater than zero. In some illustrations, a shaded circle or oval is used to represent the intensity of a contact on the touch-sensitive surface. In some illustrations, a circle or oval without shading is used to represent a respective contact on the touch-sensitive surface without specifying the intensity of the respective contact.
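The gesture vocabulary defined above reduces to naming the threshold that an intensity change crosses. The following sketch encodes those definitions directly; the numeric threshold values are illustrative assumptions, since the specification only fixes their ordering (IT0 < ITL < ITD).

```python
# Illustrative threshold values; only the ordering IT0 < ITL < ITD
# is fixed by the definitions above.
IT0, ITL, ITD = 0.05, 1.0, 3.0

def classify_transition(prev, curr):
    """Name the gesture event implied by a change in a contact's
    characteristic intensity from `prev` to `curr`."""
    if prev < ITD <= curr:
        return "deep press"        # crossed the deep press threshold
    if prev < ITL <= curr:
        return "light press"       # crossed the light press threshold
    if prev < IT0 <= curr:
        return "contact detected"  # crossed the contact-detection threshold
    if prev >= IT0 > curr:
        return "liftoff"           # fell below the contact-detection threshold
    return None
```

Note that a single change can skip intermediate thresholds (e.g., 0.5 to 3.5 classifies directly as a "deep press"), matching the "increase from an intensity below ITD to an intensity above ITD" wording.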
In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input, or in response to detecting the respective press input performed with a respective contact (or a plurality of contacts), where the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or plurality of contacts) above a press-input intensity threshold. In some embodiments, the respective operation is performed in response to detecting the increase in intensity of the respective contact above the press-input intensity threshold (e.g., the respective operation is performed on a "down stroke" of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press-input intensity threshold (e.g., the respective operation is performed on an "up stroke" of the respective press input).
In some embodiments, the device employs intensity hysteresis to avoid accidental inputs sometimes termed "jitter," where the device defines or selects a hysteresis intensity threshold with a predefined relationship to the press-input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press-input intensity threshold, or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press-input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the hysteresis intensity threshold that corresponds to the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., the respective operation is performed on an "up stroke" of the respective press input). Similarly, in some embodiments, the press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press-input intensity threshold and, optionally, a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity, and the respective operation is performed in response to detecting the press input (e.g., the increase in intensity of the contact or the decrease in intensity of the contact, depending on the circumstances).
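An up-stroke press detector with hysteresis, as described above, can be sketched as a small state machine. The threshold values are assumptions; the specification leaves them to the implementation (e.g., a hysteresis threshold at 75% to 90% of the press-input threshold).

```python
class PressDetector:
    """Up-stroke press detection with intensity hysteresis.
    Small dips below the press-input threshold that stay above the
    hysteresis threshold ("jitter") do not complete or restart a press."""

    def __init__(self, press_threshold=1.0, hysteresis_ratio=0.75):
        self.press = press_threshold
        self.hyst = press_threshold * hysteresis_ratio
        self.armed = False  # True once intensity exceeds the press threshold

    def feed(self, intensity):
        """Return True on the sample where a press input completes
        (the "up stroke": falling back below the hysteresis threshold)."""
        if not self.armed:
            if intensity >= self.press:
                self.armed = True
            return False
        if intensity <= self.hyst:
            self.armed = False
            return True  # press input detected; perform the operation
        return False
```

Feeding the sequence 0.2, 1.1, 0.9, 1.1, 0.9, 0.7 produces exactly one press, on the final sample: the dips to 0.9 stay above the hysteresis threshold (0.75) and are ignored, which is the jitter-avoidance the paragraph describes.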
For ease of explanation, the descriptions of operations performed in response to a press input associated with a press-input intensity threshold, or in response to a gesture that includes the press input, are, optionally, triggered in response to detecting any of the following: an increase in intensity of a contact above the press-input intensity threshold, an increase in intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press-input intensity threshold, a decrease in intensity of the contact below the press-input intensity threshold, or a decrease in intensity of the contact below the hysteresis intensity threshold corresponding to the press-input intensity threshold. Additionally, in examples where an operation is described as being performed in response to detecting a decrease in intensity of a contact below the press-input intensity threshold, the operation is, optionally, performed in response to detecting a decrease in intensity of the contact below a hysteresis intensity threshold corresponding to, and lower than, the press-input intensity threshold. As described above, in some embodiments, the triggering of these operations also depends on time-based criteria being met (e.g., a delay time has elapsed between a first intensity threshold being met and a second intensity threshold being met).
As used in the specification and claims, the term "tactile output" refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component of a device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., the housing), or displacement of the component relative to a center of mass of the device, that will be detected by a user with the user's sense of touch. For example, in situations where the device or a component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a "down click" or "up click" of a physical actuator button. In some cases, a user will feel a tactile sensation such as a "down click" or "up click" even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as "roughness" of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., a "down click," an "up click," "roughness"), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user. Using tactile outputs to provide haptic feedback to a user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating or interacting with the device), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, a tactile output pattern specifies characteristics of a tactile output, such as the amplitude of the tactile output, the shape of a movement waveform of the tactile output, the frequency of the tactile output, and/or the duration of the tactile output.
When tactile outputs with different tactile output patterns are generated by a device (e.g., via one or more tactile output generators that move a moveable mass to generate tactile outputs), the tactile outputs may invoke different haptic sensations in a user holding or touching the device. While the sensation of the user is based on the user's perception of the tactile output, most users will be able to identify changes in waveform, frequency, and amplitude of tactile outputs generated by the device. Thus, the waveform, frequency, and amplitude can be adjusted to indicate to the user that different operations have been performed. As such, tactile outputs with tactile output patterns that are designed, selected, and/or engineered to simulate characteristics (e.g., size, material, weight, stiffness, smoothness, etc.), behaviors (e.g., oscillation, displacement, acceleration, rotation, expansion, etc.), and/or interactions (e.g., collision, adhesion, repulsion, attraction, friction, etc.) of objects in a given environment (e.g., a user interface that includes graphical features and objects, a simulated physical environment with virtual boundaries and virtual objects, a real physical environment with physical boundaries and physical objects, and/or a combination of any of the above) will, in some circumstances, provide helpful feedback to users that reduces input errors and increases the efficiency of the user's operation of the device. Additionally, tactile outputs are, optionally, generated to correspond to feedback that is unrelated to a simulated physical characteristic, such as an input threshold or a selection of an object. Such tactile outputs will, in some circumstances, provide helpful feedback to users that reduces input errors and increases the efficiency of the user's operation of the device.
In some embodiments, a tactile output with a suitable tactile output pattern serves as a cue for the occurrence of an event of interest in a user interface or behind the scenes in a device. Examples of events of interest include activation of an affordance (e.g., a real or virtual button, or a toggle switch) provided on the device or in a user interface, success or failure of a requested operation, reaching or crossing a boundary in a user interface, entry into a new state, switching of input focus between objects, activation of a new mode, reaching or crossing an input threshold, detection or recognition of a type of input or gesture, and so on. In some embodiments, tactile outputs are provided to serve as a warning or an alert for an impending event or outcome that will occur unless a redirection or interruption input is timely detected. Tactile outputs are also used in other contexts to enrich the user experience, improve the accessibility of the device for users with visual or motor difficulties or other accessibility needs, and/or improve the efficiency and functionality of the user interface and/or the device. Tactile outputs are, optionally, accompanied by audio outputs and/or visible user interface changes, which further enhance a user's experience when the user interacts with a user interface and/or a device, facilitate better conveyance of information regarding the state of the user interface and/or the device, and reduce input errors and increase the efficiency of the user's operation of the device.
Figures 4F-4H provide a set of sample tactile output patterns that may be used, either individually or in combination, either as is or through one or more transformations (e.g., modulation, amplification, truncation, etc.), to create suitable haptic feedback in various scenarios and for various purposes, such as those mentioned above and those described with respect to the user interfaces and methods discussed herein. This example of a palette of tactile outputs shows how a set of three waveforms and eight frequencies can be used to produce an array of tactile output patterns. In addition to the tactile output patterns shown in these figures, each of these tactile output patterns is optionally adjusted in amplitude by changing a gain value for the tactile output pattern, as shown, for example, for FullTap 80Hz, FullTap 200Hz, MiniTap 80Hz, MiniTap 200Hz, MicroTap 80Hz, and MicroTap 200Hz in Figures 4I-4K, each of which is shown with variants having a gain of 1.0, 0.75, 0.5, and 0.25. As shown in Figures 4I-4K, changing the gain of a tactile output pattern changes the amplitude of the pattern without changing the frequency of the pattern or the shape of the waveform. In some embodiments, changing the frequency of a tactile output pattern also results in a lower amplitude, because some tactile output generators are limited in how much force can be applied to the moveable mass, and thus higher-frequency movements of the mass are constrained to lower amplitudes to ensure that the acceleration needed to create the waveform does not require force outside an operational force range of the tactile output generator (e.g., the peak amplitudes of the FullTap at 230Hz, 270Hz, and 300Hz are lower than the amplitudes of the FullTap at 80Hz, 100Hz, 125Hz, and 200Hz).
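The gain adjustment described for Figures 4I-4K scales only the amplitude. As a minimal sketch, sampling a sinusoid makes that property explicit; the plain sine here is an illustrative stand-in for the patent's buffered waveforms, and the sample rate is an assumption.

```python
import math

def tactile_waveform(freq_hz, cycles, gain=1.0, sample_rate=8000):
    """Sample a sinusoidal tactile output pattern. Changing `gain`
    scales every sample by the same factor: the frequency (and hence
    the duration) and the waveform shape are unchanged."""
    n = int(sample_rate * cycles / freq_hz)
    return [gain * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]
```

Comparing the gain-1.0 and gain-0.5 variants of the same pattern shows identical length (same frequency and duration) and pointwise halved amplitude, which is exactly what "changing the amplitude of the pattern without changing the frequency or the shape" means.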
Figures 4F-4K show tactile output patterns that have particular waveforms. The waveform of a tactile output pattern represents the pattern of physical displacement, relative to a neutral position (e.g., xzero), versus time that a moveable mass goes through to generate a tactile output with that tactile output pattern. For example, a first set of tactile output patterns shown in Figure 4F (e.g., the tactile output patterns of a "FullTap") each have a waveform that includes an oscillation with two complete cycles (e.g., an oscillation that starts and ends in a neutral position and crosses the neutral position three times). A second set of tactile output patterns shown in Figure 4G (e.g., the tactile output patterns of a "MiniTap") each have a waveform that includes an oscillation with one complete cycle (e.g., an oscillation that starts and ends in a neutral position and crosses the neutral position once). A third set of tactile output patterns shown in Figure 4H (e.g., the tactile output patterns of a "MicroTap") each have a waveform that includes an oscillation with one half of a complete cycle (e.g., an oscillation that starts and ends in a neutral position and does not cross the neutral position). The waveform of a tactile output pattern also includes a start buffer and an end buffer that represent the gradual speeding up and slowing down of the moveable mass at the start and end of the tactile output. The example waveforms shown in Figures 4F-4K include xmin and xmax values, which represent the minimum and maximum extents of movement of the moveable mass. For larger electronic devices with larger moveable masses, the minimum and maximum extents of movement of the mass may be larger or smaller. The examples shown in Figures 4F-4K describe movement of a mass in one dimension; however, similar principles apply to movement of a moveable mass in two or three dimensions.
As shown in Figures 4F-4K, each tactile output pattern also has a corresponding characteristic frequency that affects the "pitch" of the haptic sensation felt by a user from a tactile output with that characteristic frequency. For a continuous tactile output, the characteristic frequency represents the number of cycles completed within a given period of time (e.g., cycles per second) by the moveable mass of the tactile output generator. For a discrete tactile output, a discrete output signal (e.g., with 0.5, 1, or 2 cycles) is generated, and the characteristic frequency value specifies how fast the moveable mass needs to move to generate a tactile output with that characteristic frequency. As shown in Figures 4F-4H, for each type of tactile output (e.g., as defined by a respective waveform, such as FullTap, MiniTap, or MicroTap), a higher frequency value corresponds to faster movement of the moveable mass, and hence, in general, to a shorter time to complete the tactile output (e.g., the time to complete the required number of cycles for the discrete tactile output, plus start and end buffer times). For example, a FullTap with a characteristic frequency of 80Hz takes longer to complete than a FullTap with a characteristic frequency of 100Hz (e.g., 35.4ms vs. 28.3ms in Figure 4F). In addition, for a given frequency, a tactile output with more cycles in its waveform takes longer to complete than a tactile output with fewer cycles in its waveform at the same frequency. For example, a FullTap at 150Hz takes longer to complete than a MiniTap at 150Hz (e.g., 19.4ms vs. 12.8ms), and a MiniTap at 150Hz takes longer to complete than a MicroTap at 150Hz (e.g., 12.8ms vs. 9.4ms). However, for tactile output patterns with different frequencies this rule may not apply (e.g., a tactile output with more cycles but a higher frequency may take a shorter amount of time to complete than a tactile output with fewer cycles but a lower frequency, and vice versa). For example, at 300Hz, a FullTap takes as long as a MiniTap (e.g., 9.9ms).
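The completion-time relationships above follow from a simple model: cycles divided by frequency, plus buffer time. This sketch uses an illustrative fixed 6 ms buffer; the figures' exact values (e.g., 35.4 ms for FullTap 80Hz) fold in frequency-dependent buffers that the text does not specify, so only the orderings, not the absolute times, are reproduced here.

```python
# Cycle counts per waveform type, from the Figure 4F-4H descriptions.
CYCLES = {"FullTap": 2.0, "MiniTap": 1.0, "MicroTap": 0.5}

def completion_time_ms(waveform, freq_hz, buffer_ms=6.0):
    """Approximate completion time of a discrete tactile output: time to
    finish the waveform's required cycles plus start/end buffer time.
    The 6 ms buffer is an assumption, not a value from the figures."""
    return CYCLES[waveform] / freq_hz * 1000.0 + buffer_ms
```

The model reproduces all three stated orderings: higher frequency shortens the same waveform, more cycles lengthen it at a fixed frequency, and across frequencies the rule can invert (a FullTap at 300Hz finishes before a MiniTap at 80Hz).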
As shown in Figures 4F-4K, a tactile output pattern also has a characteristic amplitude that affects the amount of energy contained in the tactile signal, or the "strength" of the haptic sensation that a user may feel through a tactile output with that characteristic amplitude. In some embodiments, the characteristic amplitude of a tactile output pattern refers to an absolute or normalized value that represents the maximum displacement of the moveable mass from a neutral position when generating the tactile output. In some embodiments, the characteristic amplitude of a tactile output pattern is adjustable in accordance with various conditions (e.g., customized based on user interface contexts and behaviors) and/or preconfigured metrics (e.g., input-based metrics and/or user-interface-based metrics), for example, by a fixed or dynamically determined gain factor (e.g., a value between 0 and 1). In some embodiments, an input-based metric (e.g., an intensity-change metric or an input-speed metric) measures a characteristic of an input (e.g., a rate of change of the characteristic intensity of a contact in a press input, or a rate of movement of the contact across a touch-sensitive surface) during the input that triggers generation of a tactile output. In some embodiments, a user-interface-based metric (e.g., a speed-across-boundary metric) measures a characteristic of a user interface element (e.g., the speed of movement of the element across a hidden or visible boundary in a user interface) during the user interface change that triggers generation of a tactile output. In some embodiments, the characteristic amplitude of a tactile output pattern may be modulated by an "envelope," and the peaks of adjacent cycles may have different amplitudes, where one of the waveforms shown above is further modified by multiplication by an envelope parameter that changes over time (e.g., from 0 to 1) to gradually adjust the amplitude of portions of the tactile output over time as the tactile output is being generated.
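Envelope modulation, as described in the last sentence above, is a pointwise multiplication of the waveform by a time-varying parameter. A minimal sketch with a linear 0-to-1 ramp (the ramp shape and rise fraction are assumptions; the specification only requires an envelope parameter that changes over time):

```python
def envelope_modulated(samples, rise_fraction=0.25):
    """Multiply a waveform by an envelope that ramps linearly from 0 to 1
    over the first `rise_fraction` of the output, so the peaks of
    adjacent cycles end up with different amplitudes."""
    n = len(samples)
    rise = max(1, int(n * rise_fraction))
    out = []
    for i, s in enumerate(samples):
        env = min(1.0, i / rise)  # envelope parameter at sample i
        out.append(s * env)
    return out
```

Applied to a constant-amplitude waveform, the first samples are attenuated and the remainder pass through at full amplitude, which is the "gradually adjust the amplitude of portions of the tactile output" behavior.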
Although specific frequencies, amplitudes, and waveforms are shown in the sample tactile output patterns in Figures 4F-4K for illustrative purposes, tactile output patterns with other frequencies, amplitudes, and waveforms may be used for similar purposes. For example, waveforms with between 0.5 and 4 cycles can be used. Other frequencies in the range of 60Hz-400Hz may be used as well.
User Interfaces and Associated Processes
Attention is now directed toward embodiments of user interfaces ("UI") and associated processes that may be implemented on an electronic device, such as portable multifunction device 100 or device 300, with a display, a touch-sensitive surface, (optionally) one or more tactile output generators for generating tactile outputs, and (optionally) one or more sensors for detecting intensities of contacts with the touch-sensitive surface.
Figures 5A-5AT illustrate example user interfaces for displaying a representation of a virtual object when switching from displaying a first user interface region to displaying a second user interface region, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device with a touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on the touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451, in response to detecting the contacts on the touch-sensitive surface 451 while displaying the user interfaces shown in the figures on the display 450, along with a focus selector.
Figure 5A illustrates a real-world scene in which the user interfaces described with reference to Figures 5B-5AT are used. Figure 5A shows a physical space 5002 in which a table 5004 is located. A user holds device 100 in the user's hand 5006.
Figure 5B shows an instant messaging user interface 5008 displayed on display 112. The instant messaging user interface 5008 includes: a message bubble 5010 that includes a received text message 5012; a message bubble 5014 that includes a sent text message 5016; and a message bubble 5018 that includes a virtual object received in a message (e.g., a virtual chair 5020) and a virtual object indicator 5022, which indicates that the virtual chair 5020 is an object that is viewable in an augmented reality view (e.g., in a representation of the field of view of one or more cameras of device 100). The instant messaging user interface 5008 also includes a message input area 5024 configured to display message input.
Figures 5C-5G illustrate an input that causes a portion of the instant messaging user interface 5008 to be replaced by the field of view of the one or more cameras of device 100. In Figure 5C, a contact 5026 with the touch screen 112 of device 100 is detected. A characteristic intensity of the contact is above the contact-detection intensity threshold IT0 and below a hint press intensity threshold ITH, as indicated by intensity level meter 5028. In Figure 5D, the characteristic intensity of the contact 5026 increases above the hint press intensity threshold ITH, as indicated by intensity level meter 5028, which causes the area of the message bubble 5018 to increase, the size of the virtual chair 5020 to increase, and the instant messaging user interface 5008 to begin to blur behind the message bubble 5018 (e.g., providing visual feedback to the user of the effect of increasing the characteristic intensity of the contact). In Figure 5E, the characteristic intensity of the contact 5026 increases above the light press intensity threshold ITL, as indicated by intensity level meter 5028, which causes the message bubble 5018 to be replaced by a platter 5030, the size of the virtual chair 5020 to further increase, and the instant messaging user interface 5008 to further blur behind the platter 5030. In Figure 5F, the characteristic intensity of the contact 5026 increases above the deep press intensity threshold ITD, as indicated by intensity level meter 5028, which causes a tactile output generator 167 of device 100 to output a tactile output (as indicated at 5032) indicating that criteria for replacing a portion of the instant messaging user interface 5008 with the field of view of the one or more cameras of device 100 have been met.
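The staged feedback of Figures 5C-5F is a mapping from the contact's characteristic intensity to an interface state. A minimal sketch: the state names are ours (chosen after the described effects), and the threshold values are assumptions since only their ordering IT0 < ITH < ITL < ITD is given.

```python
# Illustrative values; only the ordering IT0 < ITH < ITL < ITD is specified.
IT0, ITH, ITL, ITD = 0.05, 0.5, 1.0, 3.0

def ui_state_for_intensity(intensity):
    """Map a characteristic intensity to the staged visual feedback
    described for Figures 5C-5F."""
    if intensity >= ITD:
        return "replace-with-camera-view"  # Fig. 5F: criteria met, tactile output
    if intensity >= ITL:
        return "platter-with-blur"         # Fig. 5E: bubble replaced by platter
    if intensity >= ITH:
        return "enlarged-bubble"           # Fig. 5D: bubble grows, UI blurs
    if intensity >= IT0:
        return "contact-detected"          # Fig. 5C
    return "no-contact"
```

Because the state is a pure function of the current intensity, the mapping also captures the reversibility described below: lowering the intensity (before ITD is reached) redisplays the interface state for the lower intensity level.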
In some embodiments, before the characteristic intensity of the contact 5026 reaches the deep press intensity threshold ITD (as shown in Figure 5F), the progression shown in Figures 5C-5E is reversible. For example, reducing the characteristic intensity of the contact 5026 after the increases shown in Figures 5D and/or 5E causes the interface state corresponding to the reduced intensity level of the contact 5026 to be displayed (e.g., in accordance with a determination that the reduced characteristic intensity of the contact is above the light press intensity threshold ITL, the interface shown in Figure 5E is displayed; in accordance with a determination that the reduced characteristic intensity of the contact is above the hint press intensity threshold ITH, the interface shown in Figure 5D is displayed; and in accordance with a determination that the reduced characteristic intensity of the contact is below the hint press intensity threshold ITH, the interface shown in Figure 5C is displayed). In some embodiments, reducing the characteristic intensity of the contact 5026 after the increases shown in Figures 5D and/or 5E causes the interface shown in Figure 5C to be redisplayed.
Figures 5F-5J illustrate an animated transition during which a portion of the instant messaging user interface is replaced by the field of view of the one or more cameras (hereinafter "the camera") of device 100. From Figure 5F to Figure 5G, the contact 5026 has lifted off of the touch screen 112, and the virtual chair 5020 rotates toward its final position in Figure 5I. In Figure 5G, the field of view 5034 of the camera has begun to fade into view (as indicated by the dashed lines) within the platter 5030. In Figure 5H, the field of view 5034 of the camera (e.g., showing a view of the physical space 5002 as captured by the camera) has finished fading into view within the platter 5030. From Figure 5H to Figure 5I, the virtual chair 5020 continues to rotate toward its final position in Figure 5I. In Figure 5I, the tactile output generator 167 has output a tactile output (as indicated at 5036) indicating that at least one plane (e.g., floor surface 5038) has been detected in the field of view 5034 of the camera. The virtual chair 5020 is placed on the detected plane (e.g., in accordance with a determination by device 100 that the virtual object is configured to be oriented upright and placed on a detected horizontal surface, such as the floor surface 5038). As the portion of the instant messaging user interface transitions to the representation of the field of view 5034 of the camera on display 112, the size of the virtual chair 5020 is continuously adjusted on display 112. For example, the scale of the virtual chair 5020 relative to the physical space 5002, as displayed in the field of view 5034 of the camera, is determined based on a predefined "real-world" size of the virtual chair 5020 and/or the size of a detected object (e.g., table 5004) in the field of view 5034 of the camera. In Figure 5J, the virtual chair 5020 is displayed at its final position, with a predefined orientation relative to the floor surface detected in the field of view 5034 of the camera. In some embodiments, the initial landing position of the virtual chair 5020 is a predefined position relative to the detected plane in the field of view of the camera, such as the center of an unoccupied region of the detected plane. In some embodiments, the initial landing position of the virtual chair 5020 is determined in accordance with the liftoff position of the contact 5026 (e.g., in Figure 5F, the liftoff position of the contact 5026 may differ from the initial touch-down position of the contact 5026, as a result of movement of the contact 5026 across the touch screen 112 after the criteria for transitioning to the augmented reality environment were met).
Figures 5K-5L illustrate movement of device 100 (e.g., by the user's hand 5006) that adjusts the field of view 5034 of the camera. As device 100 moves relative to the physical space 5002, the displayed field of view 5034 of the camera changes, and the virtual chair 5020 remains at the same position and orientation relative to the floor surface 5038 within the displayed field of view 5034 of the camera.
Figures 5M-5Q illustrate input that moves virtual chair 5020 along floor surface 5038 in the displayed field of view 5034 of the cameras. In Figure 5N, a contact 5040 with touch screen 112 of device 100 is detected at a position corresponding to virtual chair 5020. In Figures 5N-5O, as contact 5040 moves along the path indicated by arrow 5042, contact 5040 drags virtual chair 5020. As virtual chair 5020 is moved by contact 5040, the size of virtual chair 5020 changes to maintain the scale of virtual chair 5020 relative to physical space 5002 as displayed in the field of view 5034 of the cameras. For example, in Figures 5N-5P, as virtual chair 5020 moves from the foreground of the field of view 5034 of the cameras away from device 100 and closer to the position of table 5004 in the field of view 5034, the size of virtual chair 5020 decreases (e.g., so that the chair maintains its scale relative to table 5004 in the field of view 5034 of the cameras). In addition, as virtual chair 5020 is moved by contact 5040, planes identified in the field of view 5034 of the cameras are highlighted. For example, in Figure 5O, floor plane 5038 is highlighted. In Figures 5O-5P, as contact 5040 moves along the path indicated by arrow 5044, contact 5040 continues to drag virtual chair 5020. In Figure 5Q, contact 5040 has lifted off of touch screen 112. In some embodiments, as shown in Figures 5N-5Q, the movement path of virtual chair 5020 is constrained by floor surface 5038 in the field of view 5034 of the cameras, as if contact 5040 were dragging virtual chair 5020 along floor surface 5038. In some embodiments, contact 5040, as described with reference to Figures 5N-5P, is a continuation of contact 5026 as described with reference to Figures 5C-5F (e.g., contact 5026 does not lift off, and the same contact that caused a portion of instant messaging user interface 5008 to be replaced by the field of view 5034 of the cameras also drags virtual chair 5020 within the field of view 5034 of the cameras).
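The size change described above follows directly from perspective projection: if the object's world size is held fixed while it is dragged deeper into the scene, its rendered size must shrink in proportion to its distance from the camera. The following is an illustrative sketch under a simple pinhole-camera model, not the patent's implementation; the focal length value is a made-up example.

```python
# Sketch: keeping an object's world-space size fixed while it moves away
# from the camera means its on-screen size shrinks with depth.

def on_screen_size(world_size_m: float, depth_m: float,
                   focal_px: float = 1000.0) -> float:
    """Projected size, in pixels, of an object of world_size_m at depth_m."""
    if depth_m <= 0:
        raise ValueError("object must be in front of the camera")
    return world_size_m * focal_px / depth_m

# Dragging the virtual chair from 1 m away to 2 m away halves its rendered
# size, preserving its scale relative to the physical table.
near = on_screen_size(0.5, 1.0)   # 500.0 px
far = on_screen_size(0.5, 2.0)    # 250.0 px
assert near == 2 * far
```

Under this model, a chair dragged toward the table in the background is redrawn smaller each frame, which is what keeps it looking correctly scaled against the physical space.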
Figures 5Q-5U illustrate input that moves virtual chair 5020 from floor surface 5038 to a different plane detected in the field of view 5034 of the cameras (e.g., tabletop 5046). In Figure 5R, a contact 5048 with touch screen 112 of device 100 is detected at a position corresponding to virtual chair 5020. In Figures 5R-5S, as contact 5048 moves along the path indicated by arrow 5050, contact 5048 drags virtual chair 5020. As virtual chair 5020 is moved by contact 5048, the size of virtual chair 5020 changes to maintain the scale of virtual chair 5020 relative to physical space 5002 as displayed in the field of view 5034 of the cameras. In addition, as virtual chair 5020 is moved by contact 5048, tabletop plane 5046 is highlighted (e.g., as shown in Figure 5S). In Figures 5S-5T, as contact 5048 moves along the path indicated by arrow 5052, contact 5048 continues to drag virtual chair 5020. In Figure 5U, contact 5048 has lifted off of touch screen 112, and virtual chair 5020 is placed on tabletop plane 5046 with a vertical orientation facing the same direction as before.
Figures 5U-5AD illustrate input that drags virtual chair 5020 to an edge of touch-screen display 112, causing the field of view 5034 of the cameras to cease to be displayed. In Figure 5V, a contact 5054 with touch screen 112 of device 100 is detected at a position corresponding to virtual chair 5020. In Figures 5V-5W, as contact 5054 moves along the path indicated by arrow 5056, contact 5054 drags virtual chair 5020. In Figures 5W-5X, as contact 5054 moves along the path indicated by arrow 5058, contact 5054 continues to drag virtual chair 5020 to the position shown in Figure 5X.
As shown in Figures 5Y-5AD, the input by contact 5054 illustrated in Figures 5U-5X causes a transition from displaying the field of view 5034 of the cameras within platter 5030 to ceasing to display the field of view 5034 of the cameras and fully redisplaying instant messaging user interface 5008. In Figure 5Y, the field of view 5034 of the cameras begins to fade out within platter 5030. In Figures 5Y-5Z, platter 5030 transitions to message bubble 5018. In Figure 5Z, the field of view 5034 of the cameras is no longer displayed. In Figure 5AA, instant messaging user interface 5008 ceases to be blurred, and the size of message bubble 5018 returns to the original size of message bubble 5018 (e.g., as shown in Figure 5B).
Figures 5AA-5AD illustrate an animated transition of virtual chair 5020 that occurs as virtual chair 5020 moves from the position corresponding to contact 5054 in Figure 5AA to the original position of virtual chair 5020 in instant messaging user interface 5008 (e.g., as shown in Figure 5B). In Figure 5AB, contact 5054 has lifted off of touch screen 112. In Figures 5AB-5AC, the size of virtual chair 5020 gradually increases, and virtual chair 5020 rotates toward its final position in Figure 5AD.
In Figures 5B-5AD, virtual chair 5020 has substantially the same three-dimensional appearance in instant messaging user interface 5008 and in the displayed field of view 5034 of the cameras, and virtual chair 5020 maintains that three-dimensional appearance during the transition from displaying instant messaging user interface 5008 to displaying the field of view 5034 of the cameras and during the reverse transition. In some embodiments, the representation of virtual chair 5020 in an application user interface (e.g., the instant messaging user interface) has an appearance that differs from its appearance in the augmented reality environment (e.g., in the displayed field of view of the cameras). For example, the appearance of virtual chair 5020 in the application user interface is optionally two-dimensional or more stylized, while virtual chair 5020 has a more realistic, three-dimensionally textured appearance in the augmented reality environment; and the intermediate appearance of virtual chair 5020 during the transition between displaying the application user interface and displaying the augmented reality environment is a series of appearances interpolated between the two-dimensional appearance and the three-dimensional appearance of virtual chair 5020.
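The interpolated intermediate appearances can be thought of as a parameter blend between the two end states. The sketch below is hypothetical (the parameter names are invented for illustration, and the patent does not specify an interpolation scheme): rendering parameters are linearly blended from the stylized 2-D appearance at transition progress t = 0 to the textured 3-D appearance at t = 1.

```python
# Sketch: linearly interpolate each numeric rendering parameter between a
# stylized 2-D style and a realistic 3-D style as the transition progresses.

def blend_appearance(t: float, flat: dict, textured: dict) -> dict:
    """Return the intermediate appearance at transition progress t in [0, 1]."""
    t = max(0.0, min(1.0, t))
    return {k: (1 - t) * flat[k] + t * textured[k] for k in flat}

flat_style = {"texture_detail": 0.0, "shading": 0.25, "depth_cue": 0.0}
real_style = {"texture_detail": 1.0, "shading": 0.75, "depth_cue": 1.0}

mid = blend_appearance(0.5, flat_style, real_style)
assert mid["texture_detail"] == 0.5
assert blend_appearance(0.0, flat_style, real_style) == flat_style
```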
Figure 5AE illustrates Internet browser user interface 5060. Internet browser user interface 5060 includes a URL/search input area 5062, configured to display URL/search input for the web browser, and browser controls 5064 (e.g., navigation controls including a back button and a forward button, a share control for displaying a sharing interface, a bookmarks control for displaying a bookmarks interface, and a tabs control for displaying a tabbed interface). Internet browser user interface 5060 also includes web objects 5066, 5068, 5070, 5072, 5074, and 5076. In some embodiments, a respective web object includes a link, such that in response to a tap input on the respective web object, the Internet location of the link corresponding to the web object is displayed in Internet browser user interface 5060 (e.g., replacing display of the respective web object). Web objects 5066, 5068, and 5072 include two-dimensional representations of three-dimensional virtual objects, as indicated by virtual object indicators 5078, 5080, and 5082, respectively. Web objects 5070, 5074, and 5076 include two-dimensional images (but the two-dimensional images of web objects 5070, 5074, and 5076 do not correspond to three-dimensional virtual objects, as indicated by the absence of virtual object indicators). The virtual object corresponding to web object 5068 is lamp object 5084.
Figures 5AF-5AH illustrate input that causes a portion of Internet browser user interface 5060 to be replaced by the field of view 5034 of the cameras. In Figure 5AF, a contact 5086 with touch screen 112 of device 100 is detected. The characteristic intensity of the contact is above the contact detection intensity threshold IT0 and below the hint press intensity threshold ITH, as indicated by intensity level meter 5028. In Figure 5AG, as indicated by intensity level meter 5028, the characteristic intensity of contact 5086 increases above the light press intensity threshold ITL, causing the field of view 5034 of the cameras to be displayed within web object 5068 (e.g., overlaid by virtual lamp 5084). In Figure 5AH, as indicated by intensity level meter 5028, the characteristic intensity of contact 5086 increases above the deep press intensity threshold ITD, causing the field of view 5034 of the cameras to replace a larger portion of Internet browser user interface 5060 (e.g., leaving only URL/search input area 5062 and browser controls 5064), and tactile output generator 167 of device 100 outputs a tactile output (as indicated at 5088) indicating that the criteria for the field of view 5034 of the cameras to replace a portion of Internet browser user interface 5060 have been met. In some embodiments, in response to the input described with reference to Figures 5AF-5AH, the field of view 5034 of the cameras fully replaces Internet browser user interface 5060 on touch-screen display 112.
Figures 5AI-5AM illustrate input that moves virtual lamp 5084. In Figures 5AI-5AJ, as contact 5086 moves along the path indicated by arrow 5090, contact 5086 drags virtual lamp 5084. As virtual lamp 5084 is moved by contact 5086, the size of virtual lamp 5084 remains constant, and the path of virtual lamp 5084 is optionally not constrained by the structure of the physical space captured in the field of view of the cameras. As virtual lamp 5084 is moved by contact 5086, planes identified in the field of view 5034 of the cameras are highlighted. For example, in Figure 5AJ, floor plane 5038 is highlighted as virtual lamp 5084 moves over floor plane 5038. In Figures 5AJ-5AK, as contact 5086 moves along the path indicated by arrow 5092, contact 5086 continues to drag virtual lamp 5084. In Figures 5AK-5AL, as contact 5086 moves along the path indicated by arrow 5094, contact 5086 continues to drag virtual lamp 5084, floor plane 5038 ceases to be highlighted, and tabletop 5046 is highlighted as virtual lamp 5084 moves over table 5004. In Figure 5AM, contact 5086 has lifted off of touch screen 112. Upon liftoff of contact 5086, virtual lamp 5084 is resized to have the correct scale relative to table 5004 in the field of view 5034 of the cameras, and virtual lamp 5084 is placed with a vertical orientation on tabletop 5046 in the field of view 5034 of the cameras.
Figures 5AM-5AQ illustrate input that drags virtual lamp 5084 to an edge of touch-screen display 112, causing the field of view 5034 of the cameras to cease to be displayed and Internet browser user interface 5060 to be redisplayed. In Figure 5AN, a contact 5096 with touch screen 112 of device 100 is detected at a position corresponding to virtual lamp 5084. In Figures 5AN-5AO, as contact 5096 moves along the path indicated by arrow 5098, contact 5096 drags virtual lamp 5084. In Figures 5AO-5AP, as contact 5096 moves along the path indicated by arrow 5100, contact 5096 continues to drag virtual lamp 5084 to the position shown in Figure 5AP. In Figure 5AQ, contact 5096 has lifted off of touch screen 112.
As shown in Figures 5AQ-5AT, the input by contact 5096 illustrated in Figures 5AM-5AP causes a transition from displaying the field of view 5034 of the cameras to ceasing to display the field of view 5034 of the cameras and fully redisplaying Internet browser user interface 5060. In Figure 5AR, the field of view 5034 of the cameras begins to fade out (as indicated by the dashed lines). In Figures 5AR-5AT, the size of virtual lamp 5084 increases, and virtual lamp 5084 moves toward its original position in Internet browser user interface 5060. In Figure 5AS, the field of view 5034 of the cameras is no longer displayed, and Internet browser user interface 5060 begins to fade in (as indicated by the dashed lines). In Figure 5AT, Internet browser user interface 5060 is fully displayed, and virtual lamp 5084 has returned to its original size and position in Internet browser user interface 5060.
Figures 6A-6AJ illustrate example user interfaces, in accordance with some embodiments, for displaying a first representation of a virtual object in a first user interface region, displaying a second representation of the virtual object in a second user interface region, and displaying a third representation of the virtual object with a representation of the field of view of one or more cameras. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device with touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451, in response to detecting the contacts on touch-sensitive surface 451 while displaying the user interfaces shown in the figures on display 450, along with a focus selector.
Figure 6A illustrates instant messaging user interface 5008, which includes: message bubble 5010 including received text message 5012, message bubble 5014 including sent text message 5016, and message bubble 5018 including a virtual object received in a message (e.g., virtual chair 5020) and virtual object indicator 5022, which indicates that virtual chair 5020 is an object that is viewable in an augmented reality view (e.g., in the field of view of one or more cameras of the displaying device 100). Instant messaging user interface 5008 is described in further detail with reference to Figure 5B.
Figures 6B-6C illustrate input that rotates virtual chair 5020. In Figure 6B, a contact 6002 with touch screen 112 of device 100 is detected. Contact 6002 moves across touch screen 112 along the path indicated by arrow 6004. In Figure 6C, in response to the movement of the contact, instant messaging user interface 5008 scrolls upward (such that message bubble 5010 scrolls off the display, message bubbles 5014 and 5018 scroll upward, and an additional message bubble 6005 is revealed), and virtual chair 5020 rotates (e.g., tilts upward). The magnitude and direction of the rotation of virtual chair 5020 correspond to the movement of contact 6002 along the path indicated by arrow 6004. In Figure 6D, contact 6002 has lifted off of touch screen 112. In some embodiments, this rotation behavior of virtual chair 5020 within message bubble 5018 serves as an indication that virtual chair 5020 is a virtual object that is viewable in an augmented reality environment that includes the field of view of the cameras of device 100.
Figures 6E-6L illustrate input that causes instant messaging user interface 5008 to be replaced by staging user interface 6010 and that subsequently changes the orientation of virtual chair 5020. In Figure 6E, a contact 6006 with touch screen 112 of device 100 is detected. The characteristic intensity of the contact is above the contact detection intensity threshold IT0 and below the hint press intensity threshold ITH, as indicated by intensity level meter 5028. In Figure 6F, as indicated by intensity level meter 5028, the characteristic intensity of contact 6006 increases above the hint press intensity threshold ITH, causing the area of message bubble 5018 to increase, the size of virtual chair 5020 to increase, and instant messaging user interface 5008 to begin to blur behind message bubble 5018 (e.g., providing the user with visual feedback of the effect of increasing the characteristic intensity of the contact). In Figure 6G, as indicated by intensity level meter 5028, the characteristic intensity of contact 6006 increases above the light press intensity threshold ITL, which further increases the size of message bubble 5018 (which is replaced by platter 6008) and of virtual chair 5020, and instant messaging user interface 5008 is further blurred behind platter 6008. In Figure 6H, as indicated by intensity level meter 5028, the characteristic intensity of contact 6006 increases above the deep press intensity threshold ITD, causing instant messaging user interface 5008 to cease to be displayed and staging user interface 6010 to begin to fade in (indicated by the dashed lines). In addition, as shown in Figure 6H, the increase in the characteristic intensity of contact 6006 above the deep press intensity threshold ITD causes tactile output generator 167 of device 100 to output a tactile output (as indicated at 6012) indicating that the criteria for replacing instant messaging user interface 5008 with staging user interface 6010 have been met.
In some embodiments, before the characteristic intensity of contact 6006 reaches the deep press intensity threshold ITD (as shown in Figure 6H), the progression shown in Figures 6E-6G is reversible. For example, after the increases shown in Figure 6F and/or Figure 6G, reducing the characteristic intensity of contact 6006 causes the interface state corresponding to the reduced intensity level of contact 6006 to be displayed (e.g., in accordance with a determination that the reduced characteristic intensity of the contact is above the light press intensity threshold ITL, the interface shown in Figure 6G is displayed; in accordance with a determination that the reduced characteristic intensity of the contact is above the hint press intensity threshold ITH, the interface shown in Figure 6F is displayed; and in accordance with a determination that the reduced characteristic intensity of the contact is below the hint press intensity threshold ITH, the interface shown in Figure 6E is displayed). In some embodiments, after the increases shown in Figure 6F and/or Figure 6G, reducing the characteristic intensity of contact 6006 causes the interface shown in Figure 6E to be redisplayed.
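The reversible progression above amounts to a pure mapping from the contact's current intensity to an interface state, with an irreversible commit once the deep-press threshold is crossed. The following is a minimal sketch of that logic, not Apple's code; the threshold values and state names are illustrative.

```python
# Sketch: each interface state is gated by an intensity threshold; until
# the deep-press threshold IT_D is reached, easing off the press simply
# re-selects the state for the current (lower) intensity.

IT_0, IT_H, IT_L, IT_D = 0.05, 0.25, 0.5, 0.8   # illustrative values

def interface_state(intensity: float, committed: bool) -> str:
    """Map a contact's characteristic intensity to the interface shown."""
    if committed or intensity >= IT_D:
        return "staging_ui"          # irreversible once IT_D is crossed
    if intensity >= IT_L:
        return "platter"             # state of Fig. 6G
    if intensity >= IT_H:
        return "enlarged_bubble"     # state of Fig. 6F
    return "messaging_ui"            # state of Fig. 6E

# Pressing harder advances the states; easing off before IT_D reverses them.
assert interface_state(0.3, committed=False) == "enlarged_bubble"
assert interface_state(0.6, committed=False) == "platter"
assert interface_state(0.3, committed=False) == "enlarged_bubble"
assert interface_state(0.1, committed=True) == "staging_ui"
```

The `committed` flag captures the one-way nature of the transition: after the deep press and its tactile output, lowering the intensity no longer returns to the earlier states.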
In Figure 6I, staging user interface 6010 is displayed. Staging user interface 6010 includes stage 6014, on which virtual chair 5020 is displayed. From Figure 6H to Figure 6I, virtual chair 5020 is animated to indicate the transition from the position of virtual chair 5020 in Figure 6H to the position of virtual chair 5020 in Figure 6I. For example, virtual chair 5020 is rotated to a predefined position relative to stage 6014, rotated to a predefined orientation, and/or moved a predefined distance (e.g., so that virtual chair 5020 appears to be supported by stage 6014). Staging user interface 6010 also includes back control 6016 which, when activated (e.g., by a tap input at a position corresponding to back control 6016), causes the previously displayed user interface (e.g., instant messaging user interface 5008) to be redisplayed. Staging user interface 6010 also includes toggle control 6018, which indicates the current display mode (e.g., the current display mode is the staging user interface mode, as indicated by the highlighted "3D" indicator) and which, when activated, causes a transition to the selected display mode. For example, while staging user interface 6010 is displayed, a tap input by a contact at a position corresponding to toggle control 6018 (e.g., at a position corresponding to the portion of toggle control 6018 that includes the text "World") causes staging user interface 6010 to be replaced by the field of view of the cameras. Staging user interface 6010 also includes share control 6020 (e.g., a share control for displaying a sharing interface).
Figures 6J-6L illustrate rotation of virtual chair 5020 relative to stage 6014 caused by movement of contact 6006. In Figures 6J-6K, as contact 6006 moves along the path indicated by arrow 6022, virtual chair 5020 rotates (e.g., about a first axis perpendicular to the movement of contact 6006). In Figures 6K-6L, as contact 6006 moves along the path indicated by arrow 6024 and then along the path indicated by arrow 6025, virtual chair 5020 rotates (e.g., about a second axis perpendicular to the movement of contact 6006). In Figure 6M, contact 6006 has lifted off of touch screen 112. In some embodiments, as shown in Figures 6J-6L, the rotation of virtual chair 5020 is constrained by the surface of stage 6014. For example, during the rotation of the virtual chair, at least one leg of virtual chair 5020 remains in contact with the surface of stage 6014. In some embodiments, the surface of stage 6014 serves as a frame of reference for free rotation and vertical translation of virtual chair 5020, without imposing specific constraints on the movement of virtual chair 5020.
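The two-axis behavior described above can be summarized as: each drag rotates the object about an axis perpendicular to the drag direction, so horizontal movement produces yaw and vertical movement produces pitch. The sketch below illustrates that mapping under an assumed linear gain; the gain value and function names are invented for illustration.

```python
# Sketch: map a drag delta (dx, dy) in screen points to rotation deltas
# about two perpendicular axes, as in Figures 6J-6L.

DEGREES_PER_POINT = 0.5   # hypothetical rotation gain

def rotation_from_drag(dx: float, dy: float) -> tuple:
    """Return (yaw, pitch) deltas in degrees for a drag of (dx, dy) points."""
    return (dx * DEGREES_PER_POINT, dy * DEGREES_PER_POINT)

yaw, pitch = rotation_from_drag(40, 0)    # sideways drag: yaw only
assert (yaw, pitch) == (20.0, 0.0)
yaw, pitch = rotation_from_drag(0, -30)   # upward drag: pitch only
assert (yaw, pitch) == (0.0, -15.0)
```

A renderer applying these deltas could additionally clamp the result so that, per the constrained embodiment, at least one leg of the object stays on the stage surface.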
Figures 6N-6O illustrate input that adjusts the size of displayed virtual chair 5020. In Figure 6N, a first contact 6026 and a second contact 6030 with touch screen 112 are detected. First contact 6026 moves along the path indicated by arrow 6028 while, simultaneously, second contact 6030 moves along the path indicated by arrow 6032. In Figures 6N-6O, as first contact 6026 and second contact 6030 move along the paths indicated by arrows 6028 and 6032, respectively (e.g., in a depinch gesture), the size of displayed virtual chair 5020 increases. In Figure 6P, first contact 6026 and second contact 6030 have lifted off of touch screen 112, and after the liftoff of contacts 6026 and 6030, virtual chair 5020 retains the increased size.
Figures 6Q-6U illustrate input that causes staging user interface 6010 to be replaced by the field of view 6036 of one or more cameras of device 100. In Figure 6Q, a contact 6034 with touch screen 112 of device 100 is detected. The characteristic intensity of the contact is above the contact detection intensity threshold IT0 and below the hint press intensity threshold ITH, as indicated by intensity level meter 5028. In Figure 6R, as indicated by intensity level meter 5028, the characteristic intensity of contact 6034 increases above the hint press intensity threshold ITH, causing staging user interface 6010 to begin to blur behind virtual chair 5020 (as indicated by the dashed lines). In Figure 6S, as indicated by intensity level meter 5028, the characteristic intensity of contact 6034 increases above the light press intensity threshold ITL, causing staging user interface 6010 to cease to be displayed and the field of view 6036 of the cameras to begin to fade in (indicated by the dashed lines). In Figure 6T, as indicated by intensity level meter 5028, the characteristic intensity of contact 6034 increases above the deep press intensity threshold ITD, causing the field of view 6036 of the cameras to be displayed. In addition, as shown in Figure 6T, the increase in the characteristic intensity of contact 6034 above the deep press intensity threshold ITD causes tactile output generator 167 of device 100 to output a tactile output (as indicated at 6038) indicating that the criteria for the display of the field of view 6036 of the cameras to replace the display of staging user interface 6010 have been met. In Figure 6U, contact 6034 has lifted off of touch screen 112. In some embodiments, before the characteristic intensity of contact 6034 reaches the deep press intensity threshold ITD (as shown in Figure 6T), the progression shown in Figures 6Q-6T is reversible. For example, after the increases shown in Figure 6R and/or Figure 6S, reducing the characteristic intensity of contact 6034 causes the interface state corresponding to the reduced intensity level of contact 6034 to be displayed.
From Figure 6Q to Figure 6U, virtual chair 5020 is placed on a detected plane (e.g., in accordance with a determination by device 100 that virtual chair 5020 is configured to be placed with a vertical orientation on a detected horizontal surface, such as floor surface 5038), and the size of virtual chair 5020 is adjusted (e.g., based on the "real-world" size defined for virtual chair 5020 and/or the sizes of detected objects, such as table 5004, in the field of view 6036 of the cameras, to determine the scale of virtual chair 5020 relative to physical space 5002 as displayed in the field of view 6036 of the cameras). When virtual chair 5020 transitions from staging user interface 6010 to the field of view 6036 of the cameras, the orientation of virtual chair 5020 that resulted from the rotation of virtual chair 5020 while staging user interface 6010 was displayed is maintained (e.g., as described with reference to Figures 6J-6K). For example, the orientation of virtual chair 5020 relative to floor surface 5038 is the same as the final orientation of virtual chair 5020 relative to the surface of stage 6014. In some embodiments, when sizing virtual chair 5020 in field of view 6036 relative to physical space 5002, the adjustment made to the size of virtual object 5020 in the staging user interface is taken into account.
Figures 6V-6Y illustrate input that causes the field of view 6036 of the cameras to be replaced by staging user interface 6010. In Figure 6V, an input (e.g., a tap input) by contact 6040 is detected at a position corresponding to toggle control 6018 (e.g., at a position corresponding to the portion of toggle control 6018 that includes the text "3D"). In Figures 6W-6Y, in response to the input by contact 6040, the field of view 6036 of the cameras fades out (as indicated by the dashed lines in Figure 6W), staging user interface 6010 fades in (as indicated by the dashed lines in Figure 6X), and staging user interface 6010 is fully displayed (as shown in Figure 6Y). From Figure 6V to Figure 6Y, the size of virtual chair 5020 is adjusted, and the position of virtual chair 5020 changes (e.g., returning virtual chair 5020 to the predefined position and size for the staging user interface).
Figures 6Z-6AC illustrate input that causes staging user interface 6010 to be replaced by instant messaging user interface 5008. In Figure 6Z, an input (e.g., a tap input) by contact 6042 is detected at a position corresponding to back control 6016. In Figures 6AA-6AC, in response to the input by contact 6042, staging user interface 6010 fades out (as indicated by the dashed lines in Figure 6AA), instant messaging user interface 5008 fades in (as indicated by the dashed lines in Figure 6AB), and instant messaging user interface 5008 is fully displayed (as shown in Figure 6AC). From Figure 6Z to Figure 6AB, the size, orientation, and position of virtual chair 5020 are continuously adjusted on the display (e.g., so that virtual chair 5020 returns to the predefined position, size, and orientation for instant messaging user interface 5008).
Figures 6AD-6AJ illustrate input that causes instant messaging user interface 5008 to be replaced by the field of view 6036 of the cameras (e.g., bypassing display of staging user interface 6010). In Figure 6AD, a contact 6044 is detected at a position corresponding to virtual chair 5020. The input by contact 6044 includes a long-touch gesture (during which contact 6044 remains at a position on the touch-sensitive surface corresponding to the representation of virtual object 5020, with less than a threshold amount of movement, for at least a predefined threshold amount of time) followed by an upward swipe gesture (which drags virtual chair 5020 upward). As shown in Figures 6AD-6AE, as contact 6044 moves along the path indicated by arrow 6046, virtual chair 5020 is dragged upward. In Figure 6AE, instant messaging user interface 5008 fades out behind virtual chair 5020. As shown in Figures 6AE-6AF, as contact 6044 moves along the path indicated by arrow 6048, virtual chair 5020 continues to be dragged upward. In Figure 6AF, the field of view 6036 of the cameras fades in behind virtual chair 5020. In Figure 6AG, in response to the input by contact 6044 that includes the long-touch gesture and the subsequent upward swipe gesture, the field of view 6036 of the cameras is fully displayed. In Figure 6AH, contact 6044 lifts off of touch screen 112. In Figures 6AH-6AJ, in response to the liftoff of contact 6044, virtual chair 5020 is released (e.g., because virtual chair 5020 is no longer constrained or dragged by the contact) and falls onto a plane (e.g., floor surface 5038, in accordance with a determination that the horizontal (floor) surface corresponds to virtual chair 5020). In addition, as shown in Figure 6AJ, tactile output generator 167 of device 100 outputs a tactile output (as indicated at 6050) indicating that virtual chair 5020 has landed on floor surface 5038.
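The release behavior can be sketched as a simple rule: on liftoff, the object drops to the highest detected horizontal plane beneath it, and a tactile output fires when it lands. This is an illustrative sketch under assumed names and a simplified plane model, not the patent's implementation.

```python
# Sketch: when the drag contact lifts off, the released object falls onto
# the nearest detected horizontal plane below it, and a haptic is triggered
# on landing (as in Figures 6AH-6AJ).

def drop_to_plane(object_y: float, plane_ys: list) -> tuple:
    """Return (landing_y, haptic_fired) for an object released at height object_y."""
    planes_below = [y for y in plane_ys if y <= object_y]
    if not planes_below:
        return object_y, False      # nothing to land on; stay in place
    landing = max(planes_below)     # highest plane beneath the object
    return landing, True            # fire the tactile output on landing

# Chair released above both the floor (y = 0.0) and a tabletop (y = 0.7):
y, haptic = drop_to_plane(1.5, [0.0, 0.7])
assert (y, haptic) == (0.7, True)   # lands on the tabletop
assert drop_to_plane(0.3, [0.0, 0.7]) == (0.0, True)  # below the table: floor
```

In the figures the chair is matched specifically to the floor surface; a fuller implementation would first filter the candidate planes to those the object is configured to rest on.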
Figures 7A-7P illustrate example user interfaces, in accordance with some embodiments, for displaying items with visual indications that the items correspond to virtual three-dimensional objects. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device with touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451, in response to detecting the contacts on touch-sensitive surface 451 while displaying the user interfaces shown in the figures on display 450, along with a focus selector.
Figure 7A illustrates input detected while user interface 400 of an application menu is displayed. The input corresponds to a request to display a first user interface (e.g., Internet browser user interface 5060). In Figure 7A, an input (e.g., a tap input) by contact 7000 is detected at a position corresponding to icon 420 of browser module 147. In response to this input, Internet browser user interface 5060 is displayed, as shown in Figure 7B.
Figure 7B illustrates Internet browser user interface 5060 (e.g., as described in detail with reference to Figure 5AE). Internet browser user interface 5060 includes web objects 5066, 5068, 5070, 5072, 5074, and 5076. Web objects 5066, 5068, and 5072 include two-dimensional representations of three-dimensional virtual objects, as indicated by virtual object indicators 5078, 5080, and 5082, respectively. Web objects 5070, 5074, and 5076 include two-dimensional images (but the two-dimensional images of web objects 5070, 5074, and 5076 do not correspond to three-dimensional virtual objects, as indicated by the absence of virtual object indicators).
Figures 7C-7D illustrate an input that causes Internet browser user interface 5060 to be translated (e.g., scrolled). In Figure 7B, a contact 7002 with touch screen 112 is detected. In Figures 7C-7D, as contact 7002 moves along a path indicated by arrow 7004, web objects 5066, 5068, 5070, 5072, 5074, and 5076 scroll upward, revealing additional web objects 7003 and 7005. In addition, as contact 7002 moves along the path indicated by arrow 7004, the virtual objects in web objects 5066, 5068, and 5072, which include virtual object indicators 5078, 5080, and 5082, respectively, rotate (e.g., tilt upward) in accordance with the direction of the input (vertically upward). For example, virtual lamp 5084 tilts upward from a first orientation in Figure 7C to a second orientation in Figure 7D. The two-dimensional images of web objects 5070, 5074, and 5076 do not rotate as the contact scrolls Internet browser user interface 5060. In Figure 7E, contact 7002 has lifted off of touch screen 112. In some embodiments, the rotation behavior of the objects depicted in web objects 5066, 5068, and 5072 serves as a visual indication that these web objects have corresponding three-dimensional objects that are viewable in an augmented reality environment, whereas the absence of such rotation behavior for the objects depicted in web objects 5070, 5074, and 5076 serves as a visual indication that these web objects do not have corresponding three-dimensional objects that are viewable in an augmented reality environment.
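The tilt-on-scroll behavior described above can be sketched as a small function. This is a hypothetical illustration, not code from the patent; the function name, the degrees-per-pixel gain, and the clamp value are all assumptions.

```python
# Hypothetical sketch: tilt only those web objects that have a corresponding
# 3D object, in proportion to the scroll movement, clamped to a maximum.

MAX_TILT_DEG = 20.0  # assumed clamp for the tilt effect


def tilt_for_scroll(has_3d_object: bool, scroll_dy: float,
                    degrees_per_pixel: float = 0.1) -> float:
    """Return the tilt (in degrees) to apply to a web object's depiction.

    Objects without a corresponding 3D object never tilt; the absence of
    tilt is itself the visual indication described for Figures 7C-7E.
    """
    if not has_3d_object:
        return 0.0
    tilt = scroll_dy * degrees_per_pixel
    # Clamp so a long scroll does not flip the depicted object over.
    return max(-MAX_TILT_DEG, min(MAX_TILT_DEG, tilt))
```

With a gain like this, a modest scroll produces a subtle tilt while a long scroll saturates at the clamp, which keeps the effect recognizable without being distracting.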
Figures 7F-7G illustrate a parallax effect, in which virtual objects rotate on the display in response to changes in the orientation of device 100 relative to the physical world.
Figure 7F1 shows user 7006 holding device 100 in the user's hand 5006 such that device 100 has a substantially vertical orientation. Figure 7F2 shows Internet browser user interface 5060 as displayed while device 100 is in the orientation shown in Figure 7F1.
Figure 7G1 shows user 7006 holding device 100 in the user's hand 5006 such that device 100 has a substantially horizontal orientation. Figure 7G2 shows Internet browser user interface 5060 as displayed while device 100 is in the orientation shown in Figure 7G1. From Figure 7F2 to Figure 7G2, the orientations of the virtual objects in web objects 5066, 5068, and 5072, which include virtual object indicators 5078, 5080, and 5082, respectively, rotate (e.g., tilt upward) in accordance with the change in the orientation of the device. For example, as the orientation of the device in physical space changes, virtual lamp 5084 tilts upward from a first orientation in Figure 7F2 to a second orientation in Figure 7G2. The two-dimensional images of web objects 5070, 5074, and 5076 do not rotate as the orientation of the device changes. In some embodiments, the rotation behavior of the objects depicted in web objects 5066, 5068, and 5072 serves as a visual indication that these web objects have corresponding three-dimensional objects that are viewable in an augmented reality environment, whereas the absence of such rotation behavior for the objects depicted in web objects 5070, 5074, and 5076 serves as a visual indication that these web objects do not have corresponding three-dimensional objects that are viewable in an augmented reality environment.
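The parallax behavior can be sketched in the same spirit as the scroll-driven tilt: the object's rotation is driven by the device's orientation rather than by a scroll. This is a hypothetical illustration; the reference pitch and gain are assumed values, not parameters from the patent.

```python
# Hypothetical sketch: rotate a depicted virtual object as the device's
# pitch changes, producing the parallax effect of Figures 7F-7G.


def parallax_tilt(has_3d_object: bool, device_pitch_deg: float,
                  reference_pitch_deg: float = 90.0,
                  gain: float = 0.25) -> float:
    """Tilt (degrees) applied to the object for a given device pitch.

    `reference_pitch_deg` is the pitch (here, a vertical hold) at which no
    extra tilt is applied. 2D images always return 0.0, again serving as
    the visual indication that no 3D counterpart exists.
    """
    if not has_3d_object:
        return 0.0
    return gain * (reference_pitch_deg - device_pitch_deg)
```

Tilting the device from vertical (90°) toward horizontal (0°) smoothly rotates only the objects that have three-dimensional counterparts.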
Figures 7H-7L illustrate an input that corresponds to a request to display a second user interface (e.g., instant messaging user interface 5008). In Figure 7H, a contact 7008 is detected at a location corresponding to the lower edge of display 112. In Figures 7H-7I, contact 7008 moves upward along a path indicated by arrow 7010. In Figures 7I-7J, contact 7008 continues to move upward along a path indicated by arrow 7012. In Figures 7H-7J, as contact 7008 moves upward from the lower edge of display 112, the size of Internet browser user interface 5060 decreases, as shown in Figure 7I; and in Figure 7J, a multitasking user interface 7012 is displayed (e.g., in response to the upward swipe gesture toward the upper edge of the display by contact 7008). Multitasking user interface 7012 is configured to allow selection from among various applications in their retained states (e.g., when a respective application was the foreground application executing on the device, the retained state is the last state of that application) and various control interfaces (e.g., control center user interface 7014, Internet browser user interface 5060, and instant messaging user interface 5008, as shown in Figure 7J). In Figure 7K, contact 7008 has lifted off of touch screen 112. In Figure 7L, an input (e.g., a tap input) by contact 7016 is detected at a location corresponding to instant messaging user interface 5008. In response to the input by contact 7016, instant messaging user interface 5008 is displayed, as shown in Figure 7M.
Figure 7M shows instant messaging user interface 5008 (e.g., as described in further detail with regard to Figure 5B), which includes message bubble 5018 that contains a virtual object (e.g., virtual chair 5020) received in a message, and a virtual object indicator 5022 indicating that virtual chair 5020 is a virtual three-dimensional object (e.g., an object that is viewable in an augmented reality view and/or viewable from different angles). Instant messaging user interface 5008 also includes message bubble 6005, which contains a sent text message, and message bubble 7018, which contains a received text message that includes an emoji 7020. Emoji 7020 is a two-dimensional image that does not correspond to a virtual three-dimensional object. For this reason, emoji 7020 is displayed without a virtual object indicator.
Figure 7N shows a maps user interface 7022, which includes a map 7024, a point of interest information region 7026 for a first point of interest, and a point of interest information region 7032 for a second point of interest. For example, the first point of interest and the second point of interest are search results, in or near the region shown on map 7024, for the search entry "Apple" in search input area 7025. In first point of interest information region 7026, a first point of interest object 7028 is displayed with a virtual object indicator 7030, which indicates that first point of interest object 7028 is a virtual three-dimensional object. In second point of interest information region 7032, a second point of interest object 7034 is displayed without a virtual object indicator, because second point of interest object 7034 does not correspond to a virtual three-dimensional object that is viewable in an augmented reality environment.
Figure 7O shows a file management user interface 7036, which includes file management controls 7038, a file management search input area 7040, a file information region 7042 for a first file (e.g., a Portable Document Format (PDF) file), a file information region 7044 for a second file (e.g., a photo file), a file information region 7046 for a third file (e.g., a virtual chair object), and a file information region 7048 for a fourth file (e.g., a PDF file). Third file information region 7046 includes a virtual object indicator 7050 displayed adjacent to a file preview object 7045 of file information region 7046; the virtual object indicator indicates that the third file corresponds to a virtual three-dimensional object. First file information region 7042, second file information region 7044, and fourth file information region 7048 are displayed without virtual object indicators, because the files corresponding to these file information regions do not have corresponding virtual three-dimensional objects that are viewable in an augmented reality environment.
Figure 7P shows an e-mail user interface 7052, which includes e-mail navigation controls 7054, an e-mail information region 7056, and an e-mail content region 7058 that includes a representation of a first attachment 7060 and a representation of a second attachment 7062. The representation of first attachment 7060 includes a virtual object indicator 7064, which indicates that the first attachment is a virtual three-dimensional object that is viewable in an augmented reality environment. Second attachment 7062 is displayed without a virtual object indicator, because the second attachment is not a virtual three-dimensional object that is viewable in an augmented reality environment.
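Figures 7B, 7M, 7N, 7O, and 7P all apply the same rule: an item receives a virtual object indicator only if it corresponds to a three-dimensional object viewable in augmented reality. A minimal sketch of that shared predicate follows; the class and field names are assumptions for illustration, not identifiers from the patent.

```python
# Hypothetical sketch: the shared rule behind the indicator examples.
from dataclasses import dataclass


@dataclass
class Item:
    name: str
    has_3d_counterpart: bool  # viewable in an augmented reality environment?


def needs_virtual_object_indicator(item: Item) -> bool:
    # Only items with a 3D counterpart get the indicator (e.g., 5022, 7030,
    # 7050, 7064); 2D-only items (emoji, photos, PDFs) get none.
    return item.has_3d_counterpart


items = [Item("virtual chair", True), Item("emoji", False),
         Item("PDF attachment", False)]
flagged = [i.name for i in items if needs_virtual_object_indicator(i)]
```

Centralizing the predicate is what makes the indicator behave consistently across messaging, maps, file management, and e-mail interfaces.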
Figures 8A-8E are flow diagrams illustrating a method 800 of displaying a representation of a virtual object while switching from displaying a first user interface region to displaying a second user interface region, in accordance with some embodiments. Method 800 is performed at an electronic device (e.g., device 300 in Figure 3, or portable multifunction device 100 in Figure 1A) with a display, a touch-sensitive surface, and one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite from the display and the touch-sensitive surface). In some embodiments, the display is a touch-screen display and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 800 are, optionally, combined and/or the order of some operations is, optionally, changed.
Method 800 relates to detecting an input by a contact at the touch-sensitive surface of the device, where the input is directed to a representation of a virtual object displayed in a first user interface region. In response to the input, the device uses criteria to determine whether to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface region with a field of view of one or more cameras of the device. Using criteria to determine whether to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface region with the field of view of the one or more cameras enables multiple different types of operations to be performed in response to an input. Enabling multiple different types of operations to be performed in response to an input (e.g., by either replacing display of at least a portion of the user interface with a representation of the field of view of the one or more cameras, or maintaining display of the first user interface region without replacing display of at least a portion of the first user interface region with the representation of the field of view of the one or more cameras) increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
The device displays (802) a representation of a virtual object (e.g., a graphical representation of a three-dimensional object such as virtual chair 5020, virtual lamp 5084, a shoe, a piece of furniture, a hand tool, a decoration, a person, an emoji, a game character, virtual furniture, etc.) in a first user interface region (e.g., a two-dimensional graphical user interface or a portion thereof (e.g., a browsable list of furniture images, an image containing one or more selectable objects, etc.)) on display 112. For example, the first user interface region is instant messaging user interface 5008 as shown in Figure 5B, or Internet browser user interface 5060 as shown in Figure 5AE. In some embodiments, the first user interface region includes a background other than an image of the physical environment surrounding the device (e.g., the background of the first user interface region is a preselected background color/pattern or a background image that is distinct from an output image concurrently captured by the one or more cameras, and distinct from the live content in the field of view of the one or more cameras).
While displaying the first representation of the virtual object in the first user interface region on the display, the device detects (804) a first input by a contact at a location on touch screen 112 that corresponds to the representation of the virtual object on the display (e.g., the contact is detected on the first representation of the virtual object on a touch-screen display, or the contact is detected on an affordance that is displayed in the first user interface region concurrently with the first representation of the virtual object, where the affordance is configured to trigger display of an AR view of the virtual object when invoked by a contact). For example, the first input is an input by contact 5020 as described with regard to Figures 5C-5F, or an input by contact 5086 as described with regard to Figures 5AF-5AL.
In response to detecting the first input by the contact (806), in accordance with a determination that the first input by the contact meets first (e.g., AR-trigger) criteria (e.g., the AR-trigger criteria are criteria configured to recognize a swipe input, a touch-hold input, a press input, a tap input, a hard press with an intensity above a predefined intensity threshold, or another type of predefined input gesture, and are associated with activation of the cameras, display of an augmented reality (AR) view of the physical environment surrounding the device, placement of a three-dimensional representation of the virtual object within the augmented reality view of the physical environment, and/or a combination of two or more of the above): the device displays a second user interface region on the display, which includes replacing display of at least a portion of the first user interface region with a representation of a field of view of the one or more cameras, and the device continuously displays the representation of the virtual object while switching from displaying the first user interface region to displaying the second user interface region. For example, the second user interface region on the display is field of view 5034 of the cameras within disk 5030 as described with regard to Figure 5H, or field of view 5034 of the cameras as described with regard to Figure 5AH. In Figures 5C-5I, in accordance with a determination that the input by contact 5026 has a characteristic intensity that increases above deep press intensity threshold ITD, virtual chair object 5020 is continuously displayed while switching from displaying the first user interface region (instant messaging user interface 5008) to displaying the second user interface region, where displaying the second user interface region includes replacing display of a portion of instant messaging user interface 5008 with field of view 5034 of the cameras within disk 5030. In Figures 5AF-5AH, in accordance with a determination that the input by contact 5086 has a characteristic intensity that increases above deep press intensity threshold ITD, virtual lamp object 5084 is continuously displayed while switching from displaying the first user interface region (Internet browser user interface 5060) to displaying the second user interface region, where displaying the second user interface region includes replacing display of a portion of Internet browser user interface 5060 with field of view 5034 of the cameras.
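The branch described in operation (806) can be sketched as a small dispatch: when the first criteria are met, part of the first user interface region is replaced with the camera field of view while the virtual object remains displayed; otherwise the input is handled as an ordinary gesture. This is a hypothetical sketch, and the dictionary keys are invented for illustration.

```python
# Hypothetical sketch of the (806) branch. State is modeled as a dict.


def handle_first_input(meets_first_criteria: bool, ui: dict) -> dict:
    ui = dict(ui)  # do not mutate the caller's state
    if meets_first_criteria:
        # Replace at least a portion of the first region with the camera
        # field of view (the "second user interface region") ...
        ui["region"] = "camera_field_of_view"
        # ... while continuously displaying the virtual object.
        ui["virtual_object_displayed"] = True
    else:
        # Input recognized as some other gesture; first region is kept.
        ui["last_action"] = "other_gesture"
    return ui


start = {"region": "first_user_interface", "virtual_object_displayed": True}
ar = handle_first_input(True, start)
other = handle_first_input(False, start)
```

The key invariant is the continuity requirement: the virtual object stays displayed through the switch, which is what makes the transition feel like the same object moving into the AR view.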
In some embodiments, continuously displaying the representation of the virtual object includes maintaining display of the representation of the virtual object, or displaying an animated transition of the first representation of the virtual object changing into a second representation of the virtual object (e.g., a view of the virtual object with a different size, from a different angle, at a different location on the display, or with a different rendering style). In some embodiments, field of view 5034 of the one or more cameras displays a live image of physical environment 5002 surrounding the device, where the live image is updated in real time as the position and orientation of the device relative to the physical environment change (e.g., as shown in Figures 5K-5L). In some embodiments, the second user interface region completely replaces the first user interface on the display. In some embodiments, the second user interface region overlays a portion of the first user interface region (e.g., a portion of the first user interface region is displayed along an edge of the display or around a border of the display). In some embodiments, the second user interface region pops up next to the first user interface region. In some embodiments, the background within the first user interface region is replaced with content of field of view 5034 of the cameras. In some embodiments, the device displays an animated transition showing the virtual object moving and rotating (e.g., as shown in Figures 5E-5I) from a first orientation as shown in the first user interface region to a second orientation (e.g., a predefined orientation relative to a current orientation of a portion of the physical environment captured in the field of view of the one or more cameras). For example, the animation includes a transition from displaying a two-dimensional representation of the virtual object while the first user interface region is displayed to displaying a three-dimensional representation of the virtual object while the second user interface region is displayed. In some embodiments, the three-dimensional representation of the virtual object has an anchor plane that is predefined based on the shape and orientation of the virtual object as shown in the two-dimensional graphical user interface (e.g., the first user interface region). When transitioning to the augmented reality view (e.g., the second user interface region), the three-dimensional representation of the virtual object is moved, resized, and reoriented, independently of the remainder of the display, such that the virtual object travels from its original location on the display to a new location on the display (e.g., the center of the augmented reality view, or another predefined location in the augmented reality view); and, during the movement or at the end of the movement, the three-dimensional representation of the virtual object is reoriented so that it is at a predefined location and/or orientation relative to a predefined plane identified in the field of view of the one or more cameras (e.g., a physical surface that can serve as a supporting plane for the three-dimensional representation of the virtual object, such as a vertical wall or a horizontal floor surface).
In some embodiments, the first criteria include (808) a criterion that is met when (e.g., in accordance with a determination that) the contact is maintained on the touch-sensitive surface, at a location that corresponds to the representation of the virtual object, with less than a threshold amount of movement for at least a predefined amount of time (e.g., a long-press time threshold). In some embodiments, in accordance with a determination that the contact meets criteria for recognizing another type of gesture (e.g., a tap), the device performs another predefined function other than triggering the AR user interface, while maintaining display of the virtual object. Determining whether to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface region with the field of view of the cameras, depending on whether the contact is maintained at a location on the touch-sensitive surface that corresponds to the representation of the virtual object with less than a threshold amount of movement for at least a predefined amount of time, enables multiple different types of operations to be performed in response to an input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
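The long-press criterion of (808) can be sketched as a check over timestamped touch samples: the contact must stay within a movement threshold for at least the time threshold. The threshold values below are assumptions for illustration, not values specified by the patent.

```python
# Hypothetical sketch of the (808) long-press criterion.

LONG_PRESS_SECONDS = 0.5       # assumed long-press time threshold
MOVEMENT_THRESHOLD_PX = 10.0   # assumed movement threshold


def meets_long_press_criterion(samples) -> bool:
    """`samples` is a list of (t_seconds, x, y) tuples for one contact.

    Returns True only if every sample stays within the movement threshold
    of the touch-down point AND the contact lasted at least the long-press
    time threshold.
    """
    if not samples:
        return False
    t0, x0, y0 = samples[0]
    for t, x, y in samples:
        if ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 >= MOVEMENT_THRESHOLD_PX:
            return False  # moved too far: recognized as a drag/swipe instead
    return samples[-1][0] - t0 >= LONG_PRESS_SECONDS
```

A contact that moves too far fails immediately (and would be handled as a different gesture), while a short still contact fails on duration (and would be handled as a tap).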
In some embodiments, the first criteria include (810) a criterion that is met when (e.g., in accordance with a determination that) a characteristic intensity of the contact increases above a first intensity threshold (e.g., light press intensity threshold ITL or deep press intensity threshold ITD). For example, as described with regard to Figures 5C-5F, the criteria are met when the characteristic intensity of contact 5026 increases above deep press intensity threshold ITD, as indicated by intensity level meter 5028. In some embodiments, in accordance with a determination that the contact meets criteria for recognizing another type of gesture (e.g., a tap), the device performs another predefined function other than triggering the AR user interface, while maintaining display of the virtual object. In some embodiments, the first criteria require that the first input is not a tap input (e.g., the input has a duration, between touch-down of the contact and lift-off of the contact, that is greater than a tap time threshold). Determining whether to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface region with the field of view of the cameras, depending on whether the characteristic intensity of the contact increases above the first intensity threshold, enables multiple different types of operations to be performed in response to an input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
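The intensity-based criterion of (810) reduces to checking whether the contact's characteristic intensity ever rises above the relevant threshold. A minimal sketch follows; the numeric threshold values are placeholders, not the device's actual calibrated thresholds.

```python
# Hypothetical sketch of the (810) intensity criterion. The characteristic
# intensity of a contact is sampled over time; the criterion is met if any
# sample exceeds the chosen threshold.

IT_L = 0.4  # light press intensity threshold (assumed units)
IT_D = 0.8  # deep press intensity threshold (assumed units)


def meets_intensity_criterion(intensity_samples, threshold=IT_D) -> bool:
    """True if the characteristic intensity increases above `threshold`."""
    return any(i > threshold for i in intensity_samples)
```

Choosing ITL versus ITD as the threshold changes how firm a press is required before the AR transition is triggered, which is exactly the design freedom the passage describes.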
In some embodiments, the first criteria include (812) a criterion that is met when (e.g., in accordance with a determination that) movement of the contact meets predefined movement criteria (e.g., the contact moves beyond a predefined threshold location on the touch-sensitive surface (e.g., a location that corresponds to a boundary of the first user interface region, a location that is a threshold distance away from the original location of the contact, etc.), the contact moves with a speed that is greater than a predefined threshold speed, the movement of the contact ends with a press input, etc.). In some embodiments, during an initial portion of the movement of the contact, the representation of the virtual object is dragged by the contact; and, when the movement of the contact is about to meet the predefined movement criteria, the virtual object stops moving with the contact, to indicate that the first criteria are about to be met; and, if the movement of the contact continues and the continued movement of the contact meets the predefined movement criteria, the device starts to transition to displaying the second user interface region and displays the virtual object in the augmented reality view. In some embodiments, while the virtual object is dragged during the initial portion of the first input, the size and viewing angle of the object do not change; and, once the augmented reality view is displayed and the virtual object is dropped at a location in the augmented reality view, the virtual object is displayed with a size and viewing angle that depend on the physical location represented by the drop-off location of the virtual object in the augmented reality view. Determining whether to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface region with the field of view of the cameras, depending on whether the movement of the contact meets the predefined movement criteria, enables multiple different types of operations to be performed in response to an input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
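The movement-based criterion of (812) combines alternative conditions: the contact travels a threshold distance from its origin, or moves faster than a threshold speed. A sketch of that disjunction follows; the thresholds are assumed values for illustration.

```python
# Hypothetical sketch of the (812) movement criteria: met if the contact
# either travels far enough from its origin or moves fast enough.

THRESHOLD_DISTANCE_PX = 150.0  # assumed distance threshold
THRESHOLD_SPEED_PX_S = 600.0   # assumed speed threshold


def meets_movement_criteria(start, end, duration_s) -> bool:
    """`start`/`end` are (x, y) positions; `duration_s` is elapsed time."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    distance = (dx * dx + dy * dy) ** 0.5
    speed = distance / duration_s if duration_s > 0 else float("inf")
    return distance >= THRESHOLD_DISTANCE_PX or speed >= THRESHOLD_SPEED_PX_S
```

Either a long deliberate drag or a short quick flick can satisfy the criteria, matching the examples given in the passage (threshold distance, threshold speed).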
In some embodiments, in response to detecting the first input by the contact, in accordance with a determination that the first input by the contact has met the first criteria, the device outputs (814), with one or more tactile output generators 167, a tactile output indicating that the first input meets the first criteria (e.g., tactile output 5032 as described with regard to Figure 5F, or tactile output 5088 as described with regard to Figure 5AH). In some embodiments, the haptic is generated before the field of view of the one or more cameras appears on the display. For example, the haptic indicates that the first criteria, which trigger activation of the one or more cameras and subsequent detection of a plane in the field of view of the one or more cameras, have been met. Because activating the cameras and displaying the field of view may take time, the haptic serves as a non-visual signal to the user that the device has detected the required input and is getting ready to present the augmented reality user interface.
Outputting a tactile output indicating that criteria (e.g., criteria for replacing display of at least a portion of the user interface with the field of view of the cameras) have been met provides feedback to the user indicating that the provided input meets the criteria. Providing improved haptic feedback enhances the operability of the device (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting at least an initial portion of the first input (e.g., including: detecting the contact; or detecting an input by the contact that meets respective predefined criteria without meeting the first criteria; or detecting an input that meets the first criteria), the device analyzes (816) the field of view of the one or more cameras to detect one or more planes (e.g., floor surface 5038, tabletop 5046, a wall, etc.) in the field of view of the one or more cameras. In some embodiments, the one or more cameras are activated in response to detecting at least the initial portion of the first input, and plane detection is initiated while the cameras are being activated. In some embodiments, display of the field of view of the one or more cameras is delayed after the one or more cameras are activated (e.g., by a delay from the time the one or more cameras are activated to the time at least one plane is detected in the field of view of the cameras). In some embodiments, display of the field of view of the one or more cameras is initiated at the time the one or more cameras are activated, and plane detection is completed after the field of view is already visible on the display (e.g., in the second user interface region). In some embodiments, after a respective plane is detected in the field of view of the one or more cameras, the device determines a size and/or a position of the representation of the virtual object based on the location of the respective plane relative to the field of view of the one or more cameras. In some embodiments, as the electronic device moves and the location of the respective plane relative to the field of view of the one or more cameras changes, the size and/or position of the representation of the virtual object are updated (e.g., as described with regard to Figures 5K-5L). Determining a size and/or a position of the representation of the virtual object based on the location of a respective plane detected in the field of view of the cameras (e.g., without requiring further user input to size and/or position the virtual object relative to the field of view of the cameras) enhances the operability of the device, which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
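The size/position update described in (816) can be illustrated with a simple pinhole-camera projection: the on-screen size of an object resting on the detected plane scales inversely with that plane's distance from the camera, so the representation must be resized as the device moves. This is a simplified sketch under an idealized pinhole model; the focal length and sizes are invented for illustration.

```python
# Hypothetical sketch of (816)'s sizing rule under a pinhole-camera model:
# projected size (pixels) = focal_length_px * real_size / distance.


def on_screen_size(real_size_m: float, plane_distance_m: float,
                   focal_length_px: float = 1000.0) -> float:
    """Projected size, in pixels, of an object resting on the detected plane."""
    if plane_distance_m <= 0:
        raise ValueError("plane must be in front of the camera")
    return focal_length_px * real_size_m / plane_distance_m


near = on_screen_size(0.5, 1.0)  # a 0.5 m object on a plane 1 m away
far = on_screen_size(0.5, 2.0)   # the same object seen from 2 m away
```

Doubling the plane's distance halves the projected size, which is why the representation of the virtual object must be continuously updated as the device (and hence the camera field of view) moves relative to the plane.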
In some embodiments, analysis of the field of view of the one or more cameras to detect one or more planes in the field of view of the one or more cameras is initiated (818) in response to detecting the contact at a location on the touch-sensitive surface that corresponds to the representation of the virtual object on the display (e.g., in response to detecting contact 5026 at a location on touch screen 112 that corresponds to virtual chair 5020). For example, activation of the cameras and detection of planes in the field of view of the cameras are started before the first input meets the first criteria (e.g., before the characteristic intensity of contact 5026 increases above deep press intensity threshold ITD, as described with regard to Figure 5F), and before the second user interface region is displayed. By starting plane detection as soon as any interaction with the virtual object is detected, plane detection can be completed before the AR-trigger criteria are met; as a result, the user does not perceive a visual lag while watching the virtual object transition into the augmented reality view when the first input meets the AR-trigger criteria. Initiating analysis of the field of view to detect one or more planes in the field of view of the cameras in response to detecting the contact at the location corresponding to the representation of the virtual object (e.g., without requiring further user input to initiate the analysis of the field of view of the cameras) increases the efficiency of the device, which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, analysis of the field of view of the one or more cameras to detect one or more planes in the field of view of the one or more cameras is initiated (820) in response to detecting that the first input by the contact meets the first criteria (e.g., in response to detecting that the characteristic intensity of contact 5026 increases above deep press intensity threshold ITD, as described with regard to Figure 5F). For example, activation of the cameras and detection of planes in the field of view of the cameras are started when the first input meets the first criteria, and the field of view of the cameras is displayed before plane detection is completed. By starting camera activation and plane detection only when the AR-trigger criteria are met, the cameras and plane detection are not activated and kept running unnecessarily, which conserves battery power and extends battery life and camera life.
In some embodiments, in response to detecting that an initial portion of the first input meets plane-detection trigger criteria but does not meet the first criteria, the device initiates (822) analysis of the field of view of the one or more cameras to detect one or more planes in the field of view of the one or more cameras. For example, when the initial portion of the first input meets certain criteria (e.g., criteria that are less stringent than the AR trigger criteria), activation of the camera(s) and detection of planes in the field of view of the camera(s) are started, and the field of view of the camera(s) is optionally displayed before plane detection is complete. By starting camera activation and plane detection after certain criteria are met, rather than upon detection of the contact, the camera(s) and plane detection are not unnecessarily activated and kept running, which conserves battery power and extends battery life and camera life. By starting camera activation and plane detection before the AR trigger criteria are met, the delay in displaying the virtual object transitioning into the augmented reality view when the first input meets the AR trigger criteria (e.g., delay caused by camera activation and plane detection) is reduced.
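The two-stage gating described above can be sketched as pure logic. This is an illustrative sketch only, with invented names and threshold values that are not taken from the patent or from any real camera API: plane detection is warmed up as soon as the looser trigger criteria are met, so that when the stricter AR trigger criteria are met the camera view can be shown with little added latency.

```python
# Hypothetical two-stage activation sketch; names and thresholds are
# illustrative, not the patent's implementation or a real API.

LIGHT_PRESS = 0.3   # plane-detection trigger threshold (assumed units)
DEEP_PRESS = 0.8    # AR trigger threshold (the "first criteria")

class ARSession:
    def __init__(self):
        self.camera_active = False
        self.plane_detection_active = False
        self.ar_view_shown = False

    def on_contact_intensity(self, intensity):
        # Stage 1: looser criteria met -> warm up camera + plane detection.
        if intensity >= LIGHT_PRESS and not self.plane_detection_active:
            self.camera_active = True
            self.plane_detection_active = True
        # Stage 2: AR trigger criteria met -> show the camera view; latency
        # is low because stage 1 already started detection.
        if intensity >= DEEP_PRESS:
            self.ar_view_shown = True

session = ARSession()
session.on_contact_intensity(0.4)   # initial portion of the input
assert session.plane_detection_active and not session.ar_view_shown
session.on_contact_intensity(0.9)   # deep press
assert session.ar_view_shown
```

Note that in this sketch the camera is never activated for inputs that stay below `LIGHT_PRESS`, which models the battery-saving behavior the passage describes.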
In some embodiments, the device displays (824) the representation of the virtual object in the second user interface region in a manner such that the virtual object (e.g., virtual chair 5020) is oriented at a predefined angle relative to a respective plane detected in the field of view 5034 of the one or more cameras (e.g., such that there is no distance (or a minimal distance) between the undersides of the four legs of virtual chair 5020 and floor surface 5038). For example, the orientation and/or position of the virtual object relative to the respective plane is predefined based on the shape and orientation of the virtual object as shown in the two-dimensional graphical user interface (e.g., the respective plane corresponds to a horizontal physical surface that can serve as a support surface for the three-dimensional representation of the virtual object in the augmented reality view (e.g., a horizontal tabletop for supporting a vase), or the respective plane is a vertical physical surface that can serve as a support surface for the three-dimensional representation of the virtual object in the augmented reality view (e.g., a vertical wall for hanging a virtual picture frame)). In some embodiments, the orientation and/or position of the virtual object is defined with respect to a respective surface or boundary of the virtual object (e.g., a bottom surface, a bottom boundary point, a side surface, and/or a side boundary point). In some embodiments, an anchor plane corresponding to the respective plane is one attribute among a set of attributes of the virtual object, and the anchor plane is specified according to the properties of the physical object that the virtual object represents. In some embodiments, the virtual object is placed at a predefined orientation and/or position relative to multiple planes detected in the field of view of the one or more cameras (e.g., multiple respective sides of the virtual object are associated with corresponding planes detected in the field of view of the cameras). In some embodiments, if a horizontal bottom plane of the virtual object defines the predefined orientation and/or position for the virtual object, the bottom plane of the virtual object is displayed on a floor plane detected in the field of view of the cameras (e.g., the horizontal bottom plane of the virtual object is parallel to the floor plane, and the distance between them is zero). In some embodiments, if a vertical rear plane of the virtual object defines the predefined orientation and/or position for the virtual object, the rear surface of the virtual object is placed against a wall plane detected in the field of view of the one or more cameras (e.g., the vertical rear plane of the virtual object is parallel to the wall plane, and the distance between them is zero). In some embodiments, the virtual object is placed at a position that is a fixed distance from the respective plane, or at an angle relative to the respective plane other than zero degrees or a right angle. Displaying the representation of the virtual object relative to a plane detected in the field of view of the cameras (e.g., without requiring further user input to display the virtual object relative to a plane in the field of view of the cameras) enhances the operability of the device, which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting a respective plane in the field of view of the one or more cameras, the device, which includes one or more tactile output generators 167, outputs (826) a tactile output indicating that a respective plane has been detected in the field of view of the one or more cameras. In some embodiments, a corresponding tactile output is generated for each plane detected in the field of view of the cameras (e.g., floor surface 5038 and/or tabletop 5046). In some embodiments, the tactile output is generated when plane detection is complete. In some embodiments, the tactile output is accompanied by a visual indication of the detected plane displayed in the field of view in the second user interface region (e.g., the detected plane is momentarily highlighted). Outputting a tactile output indicating that a plane has been detected provides the user with feedback indicating that a plane has been detected in the field of view of the cameras. Providing improved haptic feedback enhances the operability of the device (e.g., by helping the user provide proper inputs and reducing unnecessary additional inputs for placing the virtual object), which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, when switching from displaying the first user interface region to displaying the second user interface region, the device displays (828) an animation of the representation of the virtual object transitioning (e.g., moving, rotating, rescaling, and/or re-rendering in a different style, etc.) to a predefined position relative to the respective plane in the second user interface region (e.g., as shown in Figs. 5F through 5I), and, in conjunction with displaying the representation of the virtual object at a predefined angle relative to the respective plane (e.g., at a predefined orientation and/or position relative to the respective plane, and with the size, rotation angle, and appearance of its final state as displayed in the augmented reality view), the device, which includes one or more tactile output generators 167, outputs a tactile output indicating that the virtual object is displayed in the second user interface region at the predefined angle relative to the respective plane. For example, as shown in Fig. 5I, in conjunction with displaying virtual chair 5020 at the predefined angle relative to floor surface 5038, the device outputs tactile output 5036. In some embodiments, the generated tactile output is configured to have features (e.g., frequency, number of cycles, modulation, amplitude, accompanying audio wave, etc.) that reflect one or more of the following attributes of the virtual object, or of the physical object represented by the virtual object: weight (e.g., heavy vs. light), material (e.g., metal, cotton, wood, marble, liquid, rubber, glass), size (e.g., large vs. small), shape (e.g., thin vs. thick, long vs. short, round vs. pointed, etc.), elasticity (e.g., elastic vs. rigid), disposition (e.g., playful vs. solemn, gentle vs. forceful, etc.), and other attributes. For example, the tactile output uses one or more of the tactile output patterns shown in Figs. 4F through 4K. In some embodiments, a default profile that includes one or more changes in one or more features over time corresponds to the virtual object (e.g., an emoji). For example, a "bouncy" tactile output profile is provided for a "smiley" emoji virtual object. Outputting a tactile output indicating placement of the representation of the virtual object relative to the respective plane provides the user with feedback indicating that the representation of the virtual object has been automatically placed relative to the respective plane. Providing improved haptic feedback enhances the operability of the device (e.g., by helping the user provide proper inputs and reducing unnecessary additional inputs for placing the virtual object), which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the tactile output has (830) a tactile output profile that corresponds to a feature of the virtual object (e.g., a simulated physical characteristic, such as size, density, mass, and/or material). In some embodiments, the tactile output profile has features (e.g., frequency, number of cycles, modulation, amplitude, accompanying audio wave, etc.) that vary based on one or more features of the virtual object (e.g., weight, material, size, shape, and/or elasticity). For example, the tactile output uses one or more of the tactile output patterns shown in Figs. 4F through 4K. In some embodiments, as the size, weight, and/or mass of the virtual object increases, the amplitude and/or duration of the tactile output also increases. In some embodiments, the tactile output pattern is selected based on the virtual material of which the virtual object is composed. Outputting a tactile output with a profile that corresponds to features of the virtual object provides the user with feedback that conveys information about those features. Providing improved haptic feedback enhances the operability of the device (e.g., by helping the user provide proper inputs; by reducing unnecessary additional inputs for placing the virtual object; and by allowing the user to perceive features of the virtual object without cluttering the displayed user interface with information about those features), which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
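The scaling described above (heavier or larger objects produce a stronger, longer tactile output) can be sketched as a simple monotone mapping. The function name, base values, and growth rates below are invented for illustration; a real haptic engine would use its own parameter space.

```python
# Hedged sketch: amplitude and duration of the placement haptic grow
# with the virtual object's simulated mass (all values are invented).

def haptic_profile(mass_kg, base_amplitude=0.2, base_duration_ms=10.0):
    """Return (amplitude, duration_ms); amplitude is clamped to 1.0."""
    amplitude = min(1.0, base_amplitude * (1 + mass_kg / 10))
    duration = base_duration_ms * (1 + mass_kg / 10)
    return amplitude, duration

light = haptic_profile(1.0)    # e.g., a virtual lamp
heavy = haptic_profile(40.0)   # e.g., a virtual marble table
assert light[0] < heavy[0]     # heavier -> larger amplitude
assert light[1] < heavy[1]     # heavier -> longer duration
```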
In some embodiments, while the representation of the virtual object is displayed in the second user interface region, the device detects (832) movement of the device (e.g., lateral movement and/or rotation of the device) that adjusts the field of view 5034 of the one or more cameras (e.g., as shown in Figs. 5K through 5L), and, in response to detecting the movement of the device, as the field of view of the one or more cameras is adjusted, the device adjusts the representation of the virtual object (e.g., virtual chair 5020) in the second user interface region in accordance with a fixed spatial relationship (e.g., orientation and/or position) between the virtual object and the respective plane (e.g., floor surface 5038) in the field of view of the one or more cameras (e.g., the virtual object is displayed on the display at an orientation and position such that a fixed angle between the representation of the virtual object and the plane is maintained (e.g., the virtual object appears to remain at a fixed position on the plane, or to roll along the plane, in the field of view)). For example, in Figs. 5K through 5L, as device 100 moves, virtual chair 5020 in the second user interface region that includes the field of view 5034 of the cameras maintains a fixed orientation and position relative to floor surface 5038. In some embodiments, the virtual object appears to remain stationary and unchanged relative to the surrounding physical environment 5002; that is, as the field of view of the one or more cameras moves and changes relative to the surrounding physical environment with movement of the device, the size, position, and/or orientation of the representation of the virtual object on the display change with the change in the device's location and/or orientation. Adjusting the representation of the virtual object in accordance with the fixed relationship between the virtual object and the respective plane (e.g., without requiring further user input to maintain the position of the virtual object relative to the respective plane) enhances the operability of the device, which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
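The world-anchored behavior above amounts to keeping the object's world-space pose fixed and recomputing only its on-screen projection from the camera pose each frame. The sketch below uses an assumed one-dimensional pinhole model with invented numbers, purely to illustrate the idea.

```python
# Sketch (assumed pinhole model, invented values): the object's world
# position stays fixed; its screen position is recomputed from the
# camera pose each frame, so it appears anchored to the floor plane.

def project(world_x, world_z, cam_x, cam_z, focal=100.0):
    """Project a point on the floor into 1-D screen coordinates."""
    depth = world_z - cam_z            # distance in front of the camera
    return focal * (world_x - cam_x) / depth

chair_x, chair_z = 2.0, 6.0            # fixed world-space position
screen_before = project(chair_x, chair_z, cam_x=0.0, cam_z=0.0)
screen_after = project(chair_x, chair_z, cam_x=1.0, cam_z=0.0)  # device moved
# The world pose is unchanged; only the on-screen position moved.
assert screen_before != screen_after
assert (chair_x, chair_z) == (2.0, 6.0)
```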
In some embodiments (e.g., at a time that corresponds to replacing display of at least a portion of the first user interface region with the representation of the field of view of the one or more cameras), the device displays (834) an animation in which the representation of the virtual object (e.g., virtual chair 5020) is continuously displayed (e.g., moving, rotating about one or more axes, and/or scaling) while switching from displaying the first user interface region to displaying the second user interface region (e.g., as shown in Figs. 5F through 5I). For example, the animation includes a transition from displaying a two-dimensional representation of the virtual object while the first user interface region is displayed to displaying a three-dimensional representation of the virtual object while the second user interface region is displayed. In some embodiments, the three-dimensional representation of the virtual object has an orientation that is predefined relative to the current orientation of a portion of the physical environment captured in the field of view of the one or more cameras. In some embodiments, when transitioning to the augmented reality view, the representation of the virtual object is moved, resized, and reoriented independently of the remainder of the display, such that the virtual object moves from an initial position on the display to a new position on the display (e.g., the center of the augmented reality view, or another predefined position in the augmented reality view), and, during the movement or at the end of the movement, the virtual object is reoriented such that it is at a fixed angle relative to a plane detected in the field of view of the cameras (e.g., a physical surface that can support the representation of the virtual object, such as a vertical wall or a horizontal floor surface). In some embodiments, when the animated transition occurs, the lighting of the virtual object and/or the shadow cast by the virtual object are adjusted (e.g., to match the ambient lighting detected in the field of view of the one or more cameras). Displaying an animation of the representation of the virtual object while switching from displaying the first user interface region to displaying the second user interface region provides the user with feedback indicating that the first input meets the first criteria. Providing improved feedback enhances the operability of the device (e.g., by helping the user provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while the second user interface region is displayed on the display, the device detects (836) a second input by a second contact (e.g., contact 5040), where the second input includes (optionally, a press or touch input by the second contact that selects the representation of the virtual object, and) movement of the second contact along a first path on the display (e.g., as shown in Figs. 5N through 5P), and, in response to detecting the second input by the second contact, the device moves the representation of the virtual object (e.g., virtual chair 5020) in the second user interface region along a second path that corresponds to the first path (e.g., a path that is the same as the first path, or that is constrained by the first path). In some embodiments, the second contact is distinct from the first contact and is detected after the first contact lifts off (e.g., contact 5040 in Figs. 5N through 5P is detected after contact 5026 in Figs. 5C through 5F has lifted off). In some embodiments, the second contact is the same as the first contact, which has been continuously maintained on the touch-sensitive surface (e.g., as shown for the input by contact 5086, which meets the AR trigger criteria and then moves across touch screen 112 to move virtual lamp 5084). In some embodiments, a swipe input on the virtual object rotates the virtual object, and the movement of the virtual object is optionally constrained by a plane in the field of view of the cameras (e.g., a swipe input rotates the representation of the chair on a floor plane in the field of view of the cameras). Moving the representation of the virtual object in response to detecting the input provides the user with feedback indicating that the position of the displayed virtual object can move in response to user input. Providing improved feedback enhances the operability of the device (e.g., by helping the user provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
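The plane-constrained drag above can be sketched as a mapping from the contact's screen path to object positions whose free coordinate follows the drag while the constrained coordinate is pinned to the support plane. The coordinate convention and names below are invented for illustration only.

```python
# Illustrative sketch: the contact's first path drives the object's
# second path, but the object stays constrained to the detected plane
# (here, its height is pinned to the floor plane).

FLOOR_Y = 0.0   # assumed height of the detected support plane

def move_along_path(screen_path):
    """Map screen-space drag points to plane-constrained positions."""
    object_path = []
    for (x, y) in screen_path:
        # x follows the drag; y is constrained by the support plane.
        object_path.append((x, FLOOR_Y))
    return object_path

drag = [(10, 5), (20, 9), (30, 2)]
moved = move_along_path(drag)
assert [p[0] for p in moved] == [10, 20, 30]    # follows the first path
assert all(p[1] == FLOOR_Y for p in moved)      # constrained to the plane
```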
In some embodiments, as the representation of the virtual object moves along the second path based on the movement of the contact and the respective plane that corresponds to the virtual object, the device adjusts (838) the size of the representation of the virtual object (e.g., based on the simulated distance of the virtual object from the user, so as to maintain an accurate perspective of the virtual object in the field of view). For example, in Figs. 5N through 5P, as the virtual chair moves deeper into the field of view 5034 of the cameras, away from device 100 and toward table 5004, the size of virtual chair 5020 decreases. Adjusting the size of the representation of the virtual object as it moves along the second path based on the movement of the contact and the plane that corresponds to the virtual object (e.g., without requiring further user input to adjust the size of the representation of the virtual object, so as to maintain the representation of the virtual object at a realistic size relative to the environment in the field of view of the cameras) enhances the operability of the device, and, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
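The perspective-preserving shrink described above follows directly from a pinhole projection: displayed size is inversely proportional to distance. The focal length and sizes below are invented placeholder values.

```python
# Hedged pinhole-projection sketch: the chair's on-screen size shrinks
# as its simulated distance from the camera grows (numbers invented).

def apparent_size(real_size_m, distance_m, focal_px=500.0):
    """On-screen extent of an object at a given simulated distance."""
    return focal_px * real_size_m / distance_m

near = apparent_size(1.0, 2.0)   # chair 2 m away
far = apparent_size(1.0, 5.0)    # dragged deeper into the scene
assert near > far                # farther away -> displayed smaller
```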
In some embodiments, while the representation of the virtual object (e.g., virtual lamp 5084) moves along the second path, the device maintains (840) a first size of the representation of the virtual object (e.g., as shown in Figs. 5AI through 5AL); the device detects termination of the second input by the second contact (e.g., including detecting lift-off of the second contact, as shown in Figs. 5AL through 5AM); and, in response to detecting termination of the second input by the second contact, the device places the representation of the virtual object at a drop-off position in the second user interface region (e.g., on tabletop 5046), and displays, at the drop-off position in the second user interface region, the representation of the virtual object with a second size that is different from the first size (e.g., the size of virtual lamp 5084 after termination of the input by contact 5086 in Fig. 5AM is different from the size of virtual lamp 5084 before termination of the input by contact 5086 in Fig. 5AL). For example, the size and viewing perspective of the object do not change while the object is dragged by the contact, and, when the object is dropped at its final position in the augmented reality view, the object is displayed with a size and viewing perspective determined based on the physical location in the physical environment that corresponds to the drop-off position of the virtual object shown in the field of view of the cameras, such that, in accordance with a determination that the drop-off position is a first position in the field of view of the cameras, the object has a second size, and, in accordance with a determination that the drop-off position is a second position in the field of view of the cameras, the object has a third size that is different from the second size, where the second size and the third size are selected based on the distance between the drop-off position and the one or more cameras. Displaying the representation of the virtual object with a changed size in response to detecting termination of the second input that moves the virtual object (e.g., without requiring further user input to adjust the size of the virtual object, so as to maintain the virtual object at a realistic size relative to the environment in the field of view of the cameras) enhances the operability of the device, which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
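In contrast to the continuous rescaling of operation 838, the behavior here freezes the displayed size during the drag and recomputes it from the drop-off distance only at lift-off. A minimal sketch, with invented focal length and sizes:

```python
# Sketch of the drop behavior (invented values): the displayed size is
# held fixed while dragging and is recomputed from the drop-off
# distance only when the contact lifts off.

def final_size(drop_distance_m, focal_px=500.0, real_size_m=1.0):
    """Size selected at lift-off, based on drop-off distance."""
    return focal_px * real_size_m / drop_distance_m

drag_size = 125.0               # size held fixed during the drag
near_drop = final_size(2.0)     # dropped on a nearby tabletop
far_drop = final_size(8.0)      # dropped deeper in the room
assert near_drop != drag_size and far_drop != drag_size
assert near_drop != far_drop    # size depends on the drop-off distance
```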
In some embodiments, in accordance with a determination that the movement of the second contact along the first path on the display meets second criteria (e.g., the end of the first path is within a threshold distance of, or beyond, an edge of the display (e.g., a bottom edge, a top edge, and/or a side edge) or an edge of the second user interface region), the device (842): ceases to display the second user interface region that includes the representation of the field of view of the one or more cameras, and redisplays the (complete) first user interface region with the representation of the virtual object (e.g., if a portion of the first user interface region was previously displayed concurrently with the second user interface region, then, after the second user interface region is no longer displayed, the device displays the complete first user interface region). For example, in response to movement of contact 5054 that drags virtual chair 5020 to the edge of touch screen 112, as shown in Figs. 5V through 5X, display of the field of view 5034 of the cameras ceases, and the complete instant messaging user interface 5008 is redisplayed, as shown in Figs. 5Y through 5AD. In some embodiments, as the contact approaches the edge of the display or the edge of the second user interface region, the second user interface region fades out (e.g., as shown in Figs. 5X through 5Y) and/or the first user interface region (the portion of it that was not displayed, or that was obscured) fades in (e.g., as shown in Figs. 5Z through 5AA). In some embodiments, the gesture for transitioning from the non-AR view (e.g., the first user interface region) to the AR view (e.g., the second user interface region) and the gesture for transitioning from the AR view to the non-AR view are the same. For example, a drag gesture on the virtual object that goes beyond a threshold position in the currently displayed user interface (e.g., within a threshold distance of a boundary of the currently displayed user interface region, or beyond a boundary of the currently displayed user interface region) transitions from the currently displayed user interface region to the corresponding other user interface region (e.g., from displaying the first user interface region to displaying the second user interface region, or, alternatively, from displaying the second user interface region to displaying the first user interface region). In some embodiments, a visual indication (e.g., fading out the currently displayed user interface region and fading in the corresponding user interface) is displayed before the first criteria/second criteria are met, and, before termination of the input (e.g., lift-off of the contact) is detected, the visual indication is reversible if the input continues and the first criteria/second criteria are not met. Redisplaying the first user interface in response to detecting an input that meets the input criteria provides an additional control option without cluttering the second user interface with additionally displayed controls (e.g., a control for displaying the first user interface from the second user interface). Providing an additional control option without cluttering the second user interface with additionally displayed controls enhances the operability of the device, which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
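The edge-based second criteria above can be sketched as a simple bounds check on where the drag path ends. Display dimensions and the threshold value below are invented for illustration.

```python
# Hypothetical sketch of the "second criteria": the drag ends within a
# threshold distance of a display edge, which dismisses the AR view.

DISPLAY_W, DISPLAY_H = 320, 480
EDGE_THRESHOLD = 20   # invented threshold distance, in points

def meets_second_criteria(path_end):
    """True if the end of the first path is near any display edge."""
    x, y = path_end
    return (x <= EDGE_THRESHOLD or x >= DISPLAY_W - EDGE_THRESHOLD or
            y <= EDGE_THRESHOLD or y >= DISPLAY_H - EDGE_THRESHOLD)

assert meets_second_criteria((5, 240))        # near left edge -> exit AR view
assert meets_second_criteria((160, 475))      # near bottom edge -> exit AR view
assert not meets_second_criteria((160, 240))  # mid-screen -> stay in AR view
```

Because the same kind of boundary test is described for entering the AR view, a single predicate like this could, under the stated assumptions, serve both transitions.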
In some embodiments, at a time that corresponds to redisplaying the first user interface region, the device displays (844) an animated transition (e.g., movement, rotation about one or more axes, and/or scaling) from displaying the representation of the virtual object in the second user interface region to displaying the representation of the virtual object in the first user interface region (e.g., the animation of virtual chair 5020 shown in Figs. 5AB through 5AD). Displaying an animated transition from displaying the representation of the virtual object in the second user interface to displaying the representation of the virtual object in the first user interface (e.g., without requiring further user input to position the virtual object in the first user interface) enhances the operability of the device, which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, as the second contact moves along the first path, the device changes (846) the visual appearance (e.g., highlighting, marking, outlining, and/or otherwise visually altering the appearance) of one or more respective planes identified in the field of view of the one or more cameras, the one or more respective planes corresponding to the current position of the contact. For example, as the contact drags virtual chair 5020 along the path indicated by arrows 5042 and 5044 in Figs. 5O through 5P, floor surface 5038 is highlighted (e.g., as compared with Fig. 5M, before the movement of the contact). In some embodiments, in accordance with a determination that the contact is at a position that corresponds to a first plane detected in the field of view of the cameras, the first plane is highlighted. In accordance with a determination that the contact has moved to a position that corresponds to a second plane detected in the field of view of the cameras (e.g., as shown in Figs. 5S through 5U), the device ceases to highlight the first plane (e.g., floor surface 5038) and highlights the second plane (e.g., tabletop 5046). In some embodiments, multiple planes are highlighted concurrently. In some embodiments, one plane among the multiple visually altered planes is visually altered in a manner different from the manner in which the other planes are visually altered, to indicate that the contact is at a position that corresponds to the first plane. Changing the visual appearance of one or more respective planes identified in the field of view of the cameras provides the user with feedback indicating that the plane has been identified (e.g., that the virtual object can be positioned relative to that plane). Providing improved visual feedback enhances the operability of the device (e.g., by helping the user provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
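The highlight-follows-contact behavior above reduces to a hit test: whichever detected plane's on-screen region contains the contact's current position receives the highlight, and moving the contact into another plane's region switches it. The screen-rectangle geometry below is invented purely for illustration; a real implementation would hit-test against 3-D plane extents.

```python
# Sketch (invented geometry): the plane whose screen region contains
# the contact's current position is highlighted; moving the contact to
# another plane's region switches the highlight to that plane.

PLANES = {"floor": (0, 200, 320, 480),       # (left, top, right, bottom)
          "tabletop": (100, 80, 260, 200)}

def highlighted_plane(contact_pos):
    """Return the name of the plane under the contact, or None."""
    x, y = contact_pos
    for name, (l, t, r, b) in PLANES.items():
        if l <= x < r and t <= y < b:
            return name
    return None

assert highlighted_plane((160, 300)) == "floor"     # drag over the floor
assert highlighted_plane((150, 120)) == "tabletop"  # drag over the table
assert highlighted_plane((10, 10)) is None          # no plane under contact
```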
In some embodiments, in response to detecting the first input by the contact, in accordance with a determination that the first input by the contact meets third (e.g., staging-user-interface display) criteria (e.g., staging-user-interface display criteria configured to recognize a swipe input, a touch-hold input, a press input, a tap input, or a hard press with an intensity above a predefined intensity threshold), the device displays (848) a third user interface region on the display, which includes replacing display of at least a portion of the first user interface region (e.g., including replacing the 2D image of the virtual object with a 3D model of the virtual object). In some embodiments, while the staging user interface (e.g., staging user interface 6010 as described with reference to Fig. 6I) is displayed, the device updates the appearance of the representation of the virtual object based on detected inputs that correspond to the staging user interface (e.g., as described in greater detail below with reference to method 900). In some embodiments, when another input is detected while the virtual object is displayed in the staging user interface, and that input meets criteria for transitioning to display of the second user interface region, the device replaces display of the staging user interface with the second user interface region while continuously displaying the virtual object. More details are described with respect to method 900. Displaying the third user interface in accordance with a determination that the first input meets the third criteria provides an additional control option without cluttering the first user interface with additionally displayed controls (e.g., a control for displaying the third user interface from the first user interface). Providing an additional control option without cluttering the user interface with additionally displayed controls enhances the operability of the device, which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in accordance with a determination that the first input by the contact (e.g., a swipe input that corresponds to a request to scroll the first user interface region, or a tap input that corresponds to a request to display a webpage or email that corresponds to content in the first user interface region) does not meet the first (e.g., AR trigger) criteria, the device maintains (850) display of the first user interface region without replacing display of at least a portion of the first user interface region with the representation of the field of view of the one or more cameras (e.g., as described with reference to Figs. 6B through 6C). Using the first criteria to determine whether to replace display of at least a portion of the first user interface region with the field of view of the one or more cameras, whether to maintain display of the first user interface region, or whether to continuously display the representation of the virtual object makes multiple different types of operations possible in response to an input. Enabling multiple different types of operations to be performed in response to an input (e.g., by replacing display of at least a portion of the user interface with the field of view of the one or more cameras, or by maintaining display of the first user interface region without replacing display of at least a portion of the first user interface region with the representation of the field of view of the one or more cameras) increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
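Taken together, operations 820, 848, and 850 describe routing a single input to different outcomes depending on which criteria it meets. The dispatch sketch below is a loose illustration with invented input labels and an assumed (not patent-specified) assignment of gesture types to the first and third criteria.

```python
# Hedged dispatch sketch: one input is routed to different operations
# depending on which criteria it meets (labels and mapping invented).

def dispatch(input_type, intensity, deep_press=0.8):
    if input_type == "touch_hold" and intensity >= deep_press:
        return "show_ar_view"          # first (AR trigger) criteria met
    if input_type == "touch_hold":
        return "show_staging_view"     # third (staging) criteria (assumed)
    if input_type == "swipe":
        return "scroll_first_ui"       # does not meet the first criteria
    return "tap_action"                # e.g., open linked content

assert dispatch("touch_hold", 0.9) == "show_ar_view"
assert dispatch("touch_hold", 0.2) == "show_staging_view"
assert dispatch("swipe", 0.0) == "scroll_first_ui"
```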
It should be understood that the particular order in which the operations in Figs. 8A through 8E are described is merely an example, and is not intended to indicate that the described order is the only order in which these operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 900 and 1000) are likewise applicable in an analogous manner to method 800 described above with respect to Figs. 8A through 8E. For example, the contacts, inputs, virtual objects, user interface regions, intensity thresholds, tactile outputs, fields of view, movements, and/or animations described above with reference to method 800 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, intensity thresholds, tactile outputs, fields of view, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 900, 1000, 16000, 17000, 18000, 19000, and 20000). For brevity, these details are not repeated here.
Figs. 9A through 9D are flow diagrams illustrating a method 900, in accordance with some embodiments, for displaying a first representation of a virtual object in a first user interface region, displaying a second representation of the virtual object in a second user interface region, and displaying a third representation of the virtual object with a representation of the field of view of one or more cameras. Method 900 is performed at an electronic device (e.g., device 300 in Fig. 3, or portable multifunction device 100 in Fig. 1A) with a display, a touch-sensitive surface, and one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite the display and the touch-sensitive surface). In some embodiments, the display is a touch-screen display, and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 900 are optionally combined, and/or the order of some operations is optionally changed.
As described below, method 900 relates to detecting an input performed by a contact at the touch-sensitive surface of the device, the input being directed to a representation of a virtual object displayed in a first user interface (e.g., a two-dimensional graphical user interface). In response to the first input, the device uses criteria to determine whether to display a second representation of the virtual object in a second user interface (e.g., a staging user interface in which a three-dimensional representation of the virtual object can be moved, resized, and/or reoriented). While the second representation of the virtual object is displayed in the second user interface, in response to a second input, the device either changes a display property of the second representation of the virtual object based on the second input, or displays a third representation of the virtual object in a third user interface that includes the field of view of the one or more cameras of the device. Enabling a variety of different types of operations to be performed in response to an input (e.g., by changing a display property of the virtual object or by displaying the virtual object in a third user interface) increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
The device displays (902), in a first user interface region (e.g., a two-dimensional graphical user interface, or a portion thereof (e.g., a browsable list of furniture images, an image containing one or more selectable objects, etc.)) on the display 112, a first representation of a virtual object (e.g., a graphical representation of a three-dimensional object, such as virtual chair 5020, virtual lamp 5084, a shoe, furniture, a hand tool, a decoration, a person, an emoji, a game character, virtual furniture, etc.). For example, the first user interface region is instant messaging user interface 5008 shown in Figure 6A. In some embodiments, the first user interface region includes a background other than an image of the physical environment surrounding the device (e.g., the background of the first user interface region is a preselected background color/pattern or a background image that is different from an output image concurrently captured by the one or more cameras, and different from live content in the field of view of the one or more cameras).
While displaying the first representation of the virtual object in the first user interface region on the display, the device detects (904) a first input by a first contact at a location on the touch-sensitive surface that corresponds to the first representation of the virtual object on the display (e.g., the first contact is detected on the first representation of the virtual object on a touch-screen display, or the first contact is detected on an affordance (e.g., toggle control 6018) that is displayed concurrently with the first representation of the virtual object in the first user interface region, wherein the affordance is configured to trigger, when invoked by the first contact, display of an AR view (e.g., camera field of view 6036) and/or a staging user interface 6010 that includes a representation of the virtual object (e.g., virtual chair 5020)). For example, the first input is an input by contact 6006 as described with reference to Figures 6E-6I.
In response to detecting the first input by the first contact, and in accordance with a determination that the first input by the first contact meets first (e.g., staging-trigger) criteria (e.g., the staging-trigger criteria are configured to recognize a swipe input, a touch-hold input, a press input, a tap input, a touch-down of a contact, an initial movement of a contact, or another type of predefined input gesture; the staging-trigger criteria are associated with activating and/or triggering the camera's field of view and detecting a plane in the field of view of the camera), the device displays (906) a second representation of the virtual object in a second user interface region that is different from the first user interface region (e.g., the second user interface region is staging user interface 6010, which does not include the field of view of a camera and which includes a simulated three-dimensional space in which a three-dimensional representation of the virtual object can be manipulated (e.g., rotated or moved) in response to user input). For example, in accordance with a determination that the input by contact 6006 in Figures 6E-6H has a characteristic intensity that increases above the deep press intensity threshold ITD, virtual chair object 5020 is displayed in staging user interface 6010 (e.g., as shown in Figure 6I), which is different from instant messaging user interface 5008 (e.g., as shown in Figure 6E).
In some embodiments, in response to detecting the first input, and in accordance with a determination that the first input meets the staging-trigger criteria, the device displays a first animation transition showing a three-dimensional representation of the virtual object moving and being reoriented from a first orientation as shown in the first user interface region (e.g., the first orientation of virtual chair 5020 as shown in instant messaging user interface 5008 in Figure 6E) to a second orientation (e.g., the second orientation of virtual chair 5020 determined based on staging plane 6014, as shown in Figure 6I), wherein the second orientation is determined based on a virtual plane on the display that is independent of a current orientation of the device relative to the physical environment surrounding the device. For example, the three-dimensional representation of the virtual object has a predefined orientation and/or distance relative to the plane (e.g., based on the shape and orientation of the virtual object as shown in the two-dimensional graphical user interface), and, when transitioning to the staging view (e.g., staging user interface 6010), the three-dimensional representation is moved, resized, and reoriented such that the virtual object travels from its original location on the display to a new location on the display (e.g., the center of virtual stage 6014), and, during the movement or at the end of the movement, the three-dimensional representation is reoriented such that the virtual object is at a fixed angle relative to the predefined virtual staging plane 6014, which is defined independently of the physical environment surrounding the device.
While displaying the second representation of the virtual object in the second user interface region, the device detects (908) a second input (e.g., the input by contact 6034 as shown in Figures 6Q-6T). In some embodiments, detecting the second input includes: detecting one or more second contacts at locations on the touch screen that correspond to the second representation of the virtual object; detecting a second contact on an affordance that is configured to trigger, when invoked by the second contact, display of an augmented reality view of the physical environment surrounding the device; detecting movement of the second contact; and/or detecting lift-off of the second contact. In some embodiments, the second input is a continuation of the first input by the same contact (e.g., the second input is an input by contact 6006 as shown in Figures 6J-6L after the first input by contact 6006 as shown in Figures 6E-6I (e.g., the contact is not lifted off)), or a separate input by an entirely different contact (e.g., the second input is an input by contact 6034 as shown in Figures 6Q-6T after the first input by contact 6006 as shown in Figures 6E-6I (e.g., the first contact was lifted off)). For example, the second input may be a continuation of a swipe input, a second tap input, a second press input, a press input following the first input, a second touch-hold input, a sustained touch continuing from the first input, etc.
In response to detecting the second input (910): in accordance with a determination that the second input corresponds to a request to manipulate the virtual object within the second user interface region (e.g., without transitioning to an augmented reality view), the device changes a display property of the second representation of the virtual object within the second user interface region based on the second input; and, in accordance with a determination that the second input corresponds to a request to display the virtual object in an augmented reality environment, the device displays a third representation of the virtual object with a representation of the field of view of the one or more cameras (e.g., the device displays a third user interface that includes the field of view 6036 of the one or more cameras, and places the three-dimensional representation of the virtual object (e.g., virtual chair 5020) on a virtual plane (e.g., floor surface 5038) detected in the camera's field of view that corresponds to a physical plane (e.g., the floor) in the physical environment 5002 surrounding the device).
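The two-way branch at step (910) can be pictured as a small dispatcher. The sketch below is illustrative only; the gesture labels and function names are invented for this example and do not come from the specification:

```python
from dataclasses import dataclass

@dataclass
class SecondInput:
    kind: str  # hypothetical gesture classification, e.g. "pinch", "swipe", "deep_press"

def handle_second_input(inp: SecondInput) -> str:
    """Sketch of (910): manipulation gestures change a display property of
    the second representation in place; an AR-display request instead shows
    the third representation with the camera field of view."""
    if inp.kind in ("pinch", "swipe"):
        # request to manipulate the object within the second user interface region
        return "change_display_property"
    if inp.kind in ("deep_press", "tap_on_ar_affordance"):
        # request to display the object in the augmented reality environment
        return "show_third_representation_in_camera_view"
    return "ignore"
```

A call such as `handle_second_input(SecondInput("pinch"))` models the resize branch, while `SecondInput("deep_press")` models the transition to the camera view.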
In some embodiments, the second input that corresponds to a request to manipulate the virtual object within the second user interface region is a pinch or a swipe by a second contact at a location on the touch-sensitive surface that corresponds to the second representation of the virtual object in the second user interface region. For example, the second input is the input by contact 6006 as shown in Figures 6J-6L, or the input by contacts 6026 and 6030 as shown in Figures 6N-6O.
In some embodiments, the second input that corresponds to a request to display the virtual object in the augmented reality environment is a tap input or a press input at a location on the touch-sensitive surface that corresponds to the representation of the virtual object in the second user interface region, or a touch-hold or press input followed by a drag input starting from a location on the touch-sensitive surface that corresponds to the representation of the virtual object in the second user interface region. For example, the second input is the deep press input by contact 6034 as shown in Figures 6Q-6T.
In some embodiments, changing the display property of the second representation of the virtual object within the second user interface region based on the second input includes rotating the representation about one or more axes (e.g., by a vertical and/or horizontal swipe), resizing it (e.g., by a pinch-to-resize gesture), tilting it about one or more axes (e.g., by tilting the device), changing the viewing perspective (e.g., by moving the device horizontally; in some embodiments, this is used to analyze the field of view of the one or more cameras to detect one or more planes in the field of view), and/or changing the color of the representation of the virtual object. For example, changing the display property of the second representation of the virtual object includes rotating virtual chair 5020 in response to a horizontal swipe gesture by contact 6006 as shown in Figures 6J-6K; rotating virtual chair 5020 in response to a diagonal swipe gesture by contact 6006 as shown in Figures 6K-6L; or increasing the size of virtual chair 5020 in response to a depinch gesture by contacts 6026 and 6030 as shown in Figures 6N-6O. In some embodiments, the amount of change in the display property of the second representation of the virtual object corresponds to an amount of change in an attribute of the second input (e.g., a movement distance or movement speed of the contact, an intensity of the contact, a duration of the contact, etc.).
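The proportionality between an input attribute and the resulting property change can be illustrated with a toy mapping. The gain constant and function names below are invented for illustration and are not part of the specification:

```python
def rotation_from_swipe(dx_points: float, gain_deg_per_point: float = 0.5) -> float:
    """Map a horizontal swipe distance (in points) linearly to a rotation
    angle (in degrees): a longer swipe produces a larger rotation."""
    return dx_points * gain_deg_per_point

def scale_from_pinch(initial_span: float, current_span: float) -> float:
    """Map a pinch/depinch gesture to a scale factor: the ratio of the
    current finger span to the span at the start of the gesture."""
    return current_span / initial_span
```

Under these assumptions, a 90-point swipe yields a 45-degree rotation, and spreading two contacts from a 100-point span to a 150-point span scales the object by 1.5x.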
In some embodiments, in accordance with a determination that the second input corresponds to a request to display the virtual object in the augmented reality environment (e.g., in the field of view 6036 of the one or more cameras, as described with reference to Figure 6T), the device displays a second animation transition showing the three-dimensional representation of the virtual object being reoriented from a respective orientation relative to the virtual plane on the display (e.g., the orientation of virtual chair 5020 shown in Figure 6R) to a third orientation (e.g., the orientation of virtual chair 5020 shown in Figure 6T), wherein the third orientation is determined based on a current orientation of a portion of the physical environment captured in the field of view of the one or more cameras. For example, the three-dimensional representation of the virtual object is reoriented such that it is at a fixed angle relative to a predefined plane (e.g., floor surface 5038) recognized in the live image of the physical environment 5002 captured in the camera's field of view (e.g., a physical surface that can support the three-dimensional representation of the virtual object, such as a vertical wall or a horizontal floor surface). In some embodiments, in at least one respect, the orientation of the virtual object in the augmented reality view is constrained by the orientation of the virtual object in the staging user interface. For example, when the virtual object transitions from the staging user interface to the augmented reality view, a rotation angle of the virtual object about at least one axis of a three-dimensional coordinate system is maintained (e.g., as described with reference to Figures 6Q-6U, the rotation of virtual chair 5020 described with reference to Figures 6J-6K is maintained). In some embodiments, a light source cast on the representation of the virtual object in the second user interface region is a virtual light source. In some embodiments, the third representation of the virtual object in the third user interface region is lit by a real-world light source (e.g., as detected in the field of view of the one or more cameras and/or as determined from the field of view of the one or more cameras).
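One way to realize "maintained about at least one axis" is to keep the user's staging rotation about the vertical (yaw) axis while taking pitch and roll from the detected plane, so the object sits flat on, e.g., floor surface 5038. This is a minimal sketch under that assumption (Euler-angle tuples and the function name are hypothetical):

```python
def ar_orientation(staging_ypr, plane_ypr):
    """Combine orientations when transitioning from the staging view to
    the AR view: preserve the yaw the user applied in staging, but adopt
    the pitch and roll implied by the plane detected in the camera's
    field of view. Angles are (yaw, pitch, roll) in degrees."""
    yaw_staging, _, _ = staging_ypr
    _, pitch_plane, roll_plane = plane_ypr
    return (yaw_staging, pitch_plane, roll_plane)
```

For a horizontal floor plane with zero pitch and roll, a 30-degree yaw applied in the staging user interface survives the transition unchanged.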
In some embodiments, the first criteria include (912) a criterion that is met when (e.g., in accordance with a determination that) the first input includes a tap input by the first contact at a location on the touch-sensitive surface that corresponds to virtual object indicator 5022 (e.g., an indicator, such as an icon, overlapping and/or adjacent to the representation of the virtual object on the display). For example, virtual object indicator 5022 provides an indication that a virtual object corresponding to the virtual object indicator is viewable in the staging view (e.g., staging user interface 6010) and the augmented reality view (e.g., camera field of view 6036) (e.g., as described in greater detail below with reference to method 1000). Determining whether to display the second representation of the virtual object in the second user interface region depending on whether the first input includes a tap input enables multiple different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first criteria include (914) a criterion that is met when (e.g., in accordance with a determination that) the first contact remains at a location on the touch-sensitive surface that corresponds to the first representation of the virtual object, with less than a threshold amount of movement, for at least a predefined threshold amount of time (e.g., a long-press time threshold). For example, the first criteria are met by a touch-hold input. In some embodiments, the first criteria include a criterion requiring that, after the first contact is maintained at a location on the touch-sensitive surface that corresponds to the representation of the virtual object with less than the threshold amount of movement for at least the predefined threshold amount of time, the first contact moves, in order for the criterion to be met. For example, the first criteria are met by a touch-hold input followed by a drag input. Determining whether to display the second representation of the virtual object in the second user interface region depending on whether the contact remains at a location on the touch-sensitive surface that corresponds to the representation of the virtual object with less than a threshold amount of movement for at least a predefined amount of time enables multiple different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
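The touch-hold criterion above (little movement, sustained for a threshold time) can be sketched as a simple check over contact samples. The thresholds and data shape are illustrative assumptions, not values from the specification:

```python
def meets_touch_hold_criterion(samples, hold_time_s=0.5, max_move_pts=10.0):
    """samples: chronological list of (t_seconds, x, y) for one contact,
    beginning at touch-down. The criterion is met once the contact has
    stayed within max_move_pts of its touch-down location for at least
    hold_time_s; any earlier excursion beyond max_move_pts fails it."""
    t0, x0, y0 = samples[0]
    for t, x, y in samples:
        if ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 > max_move_pts:
            return False  # too much movement before the hold time elapsed
        if t - t0 >= hold_time_s:
            return True   # held nearly still long enough
    return False          # contact ended (or samples ran out) too early
```

A drag that follows a successful hold would then be routed to the staging trigger rather than to interface scrolling.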
In some embodiments, the first criteria include (916) a criterion that is met when (e.g., in accordance with a determination that) a characteristic intensity of the first contact increases above a first intensity threshold (e.g., deep press intensity threshold ITD). For example, as described with reference to Figures 6Q-6T, the criterion is met when the characteristic intensity of contact 6034 increases above deep press intensity threshold ITD, as indicated by intensity level meter 5028. In some embodiments, in accordance with a determination that the contact meets criteria for recognizing another type of gesture (e.g., a tap), the device performs another predefined function, other than triggering the second (e.g., staging) user interface, while continuing to display the virtual object. In some embodiments, the first criteria require that the first input is not a tap input (e.g., a hard tap input in which the intensity of the contact reaches above the threshold intensity within a tap time threshold of the initial touch-down of the contact, before lift-off of the contact is detected). In some embodiments, the first criteria include a criterion requiring that the first contact moves after its intensity exceeds the first intensity threshold, in order for the criterion to be met. For example, the first criteria are met by a press input followed by a drag input. Determining whether to display the virtual object in the second user interface region depending on whether the characteristic intensity of the contact increases above the first intensity threshold enables multiple different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
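The intensity criterion reduces to comparing the contact's characteristic intensity against a threshold. The threshold values and units below are purely illustrative (the specification names the thresholds ITL and ITD but assigns them no numeric values):

```python
IT_LIGHT = 1.0  # light press intensity threshold (illustrative units)
IT_DEEP = 2.0   # deep press intensity threshold (illustrative units)

def meets_deep_press_criterion(intensity_samples) -> bool:
    """The criterion at (916) is met when the characteristic intensity of
    the contact rises above the deep press intensity threshold at any
    point during the input."""
    return max(intensity_samples) > IT_DEEP
```

For example, an intensity trace that peaks at 2.3 (above IT_DEEP) meets the criterion, while one that never exceeds 0.9 does not.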
In some embodiments, in response to detecting the first input by the first contact, and in accordance with a determination that the first input by the first contact meets second criteria (e.g., interface-scroll criteria), the device scrolls (918) the first user interface region (and the representation of the virtual object) in a direction that corresponds to a direction of movement of the first contact (e.g., the first criteria are not met, and the device forgoes displaying the representation of the virtual object in the second user interface region), wherein the second criteria require that the first input includes movement of the first contact across the touch-sensitive surface by more than a threshold distance (e.g., the second criteria are met by a swipe gesture, such as a vertical or horizontal swipe). For example, as described with reference to Figures 6B-6C, an upward vertical swipe gesture by contact 6002 causes instant messaging user interface 5008 and virtual chair 5020 to scroll upward. In some embodiments, the first criteria also require that the first input includes movement of the first contact by more than a threshold distance in order for the first criteria to be met, and the device determines whether the first input meets the first criteria (e.g., the staging-trigger criteria) or the second criteria (e.g., the interface-scroll criteria) based on whether an initial portion of the first input (e.g., a touch-hold or press on the representation of the virtual object) meets object-selection criteria. In some embodiments, a swipe input initiated at a touch location other than the location of the virtual object and the AR icon of the virtual object meets the second criteria. Determining whether to scroll the first user interface region in response to the first input depending on whether the first input meets the second criteria enables multiple different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
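The disambiguation between the staging trigger and interface scrolling described above can be sketched as follows. The flags, labels, and threshold are hypothetical simplifications of the criteria in the text:

```python
def classify_first_input(started_on_object: bool, initial_hold: bool,
                         move_distance: float, threshold: float = 20.0) -> str:
    """Sketch of the (918) branch: a movement greater than the threshold
    either scrolls the first user interface region (second criteria) or,
    if the initial portion of the input selected the object (e.g., a
    touch-hold on its representation), triggers staging (first criteria)."""
    if move_distance > threshold:
        if started_on_object and initial_hold:
            return "stage_object"      # first (staging-trigger) criteria met
        return "scroll_interface"      # second (interface-scroll) criteria met
    return "pending"                   # not enough movement to decide yet
```

So a hold-then-drag on the chair stages it, while a plain swipe anywhere else scrolls the messaging interface.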
In some embodiments, in response to detecting the first input by the first contact, and in accordance with a determination that the first input by the first contact meets third (e.g., AR-trigger) criteria, the device displays (920) a third representation of the virtual object with a representation of the field of view of the one or more cameras. For example, as described with reference to Figures 6AD-6AG, a long touch input by contact 6044, followed by an upward drag input by contact 6044 that drags virtual chair 5020, causes the field of view 6036 of the camera to be displayed with virtual chair 5020.
In some embodiments, the third criteria include a criterion that is met, for example, in accordance with a determination that: the one or more cameras are in an active state; the device orientation falls within a defined range (e.g., a defined range of rotation angles about one or more axes, relative to a customized original orientation); the input by the contact includes a selection input (e.g., a long touch) followed by a drag input (e.g., movement of the contact that moves the virtual object on the display, for example, to within a predetermined distance of an edge of the display); the characteristic intensity of the contact increases above an AR-trigger intensity threshold (e.g., light press threshold ITL or deep press threshold ITD); the duration of the contact increases above an AR-trigger duration threshold (e.g., a long-press threshold); and/or the distance moved by the contact increases above an AR-trigger distance threshold (e.g., a long-swipe threshold). In some embodiments, a control (e.g., toggle control 6018) for displaying the representation of the virtual object in the second user interface region (e.g., staging user interface 6010) is displayed in the user interface that includes the representation of the virtual object and the field of view 6036 of the one or more cameras (e.g., a third user interface region that replaces at least a portion of the second user interface region).
In some embodiments, when transitioning directly from the first user interface region (e.g., the non-AR, non-staging, touch-screen UI view) to the third user interface region (e.g., the augmented reality view), the device displays an animation transition showing the three-dimensional representation of the virtual object being reoriented from a respective orientation of its representation in the touch-screen UI (e.g., the non-AR, non-staging view) on the display to a predefined orientation relative to a current orientation of a portion of the physical environment captured in the field of view of the one or more cameras. For example, as shown in Figures 6AD-6AJ, when transitioning directly from the first user interface region (e.g., instant messaging user interface 5008, as shown in Figure 6AD) to the third user interface region (e.g., an augmented reality user interface that includes the field of view 6036 of the camera, as shown in Figure 6AJ), virtual chair 5020 changes from a first orientation, as shown in Figures 6AD-6AH, to a predefined orientation relative to, e.g., floor surface 5038 captured in the physical environment 5002 in the field of view 6036 of the camera (e.g., as shown in Figure 6AJ). For example, the three-dimensional representation of the virtual object is reoriented such that it is at a fixed angle relative to a predefined plane recognized in the live image of the physical environment 5002 (e.g., a physical surface that can support the three-dimensional representation of the virtual object, such as a vertical wall or a horizontal floor surface (e.g., floor surface 5038)). Determining whether to display the third representation of the virtual object with the field of view of the camera in response to the first input depending on whether the first input meets the third criteria enables multiple different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting the first input by the first contact, the device determines (922), using one or more device orientation sensors, a current device orientation of the device (e.g., an orientation relative to the physical environment surrounding the device), and the third criteria (e.g., the AR-trigger criteria) require that the current device orientation is within a first orientation range in order for the third criteria to be met (e.g., when the angle between the device and the ground is less than a threshold angle, the third criteria are met, indicating that the device is sufficiently parallel to the ground to bypass the staging state). In some embodiments, the first criteria (e.g., the staging-trigger criteria) require that the current device orientation is within a second orientation range in order for the first criteria to be met (e.g., when the angle between the device and the ground is between the threshold angle and 90 degrees, the first criteria are met, indicating that the device is sufficiently upright relative to the ground to first enter the staging state). Determining whether to display the third representation of the virtual object with the field of view of the camera in response to the first input depending on whether the device orientation is within an orientation range enables multiple different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
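The orientation-range gating at (922) amounts to checking the device's tilt angle against two complementary ranges. The boundary angles below are invented for illustration; the specification only requires that a near-flat device goes straight to AR while a more upright device enters staging first:

```python
def meets_orientation_range(tilt_deg: float, lo: float, hi: float) -> bool:
    """tilt_deg: angle between the device and the ground, in degrees.
    Returns True when the tilt lies within the given inclusive range."""
    return lo <= tilt_deg <= hi

# Illustrative ranges keyed to a hypothetical 30-degree threshold angle:
AR_RANGE = (0.0, 30.0)        # device roughly parallel to the ground -> AR view
STAGING_RANGE = (30.0, 90.0)  # device roughly upright -> staging view first
```

With these values, a device held at a 10-degree tilt satisfies the AR-trigger range, while one held at 70 degrees satisfies the staging-trigger range instead.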
In some embodiments, at least one display property (for example, size, shape, or respective angles of rotation about the yaw, pitch, and roll axes) of the second representation of the virtual object is applied (924) to the third representation of the virtual object. For example, as described with reference to Figures 6Q to 6U, when the third representation of virtual chair 5020 is displayed in the augmented reality view that includes the field of view 6036 of the cameras (for example, as shown in Figure 6U), the rotation applied to the second representation of virtual chair 5020 in staging user interface 6010, as described with reference to Figures 6J to 6K, is maintained. In some embodiments, in at least one respect, the orientation of the virtual object in the augmented reality view is constrained by the orientation of the virtual object in the staging user interface. For example, when the virtual object transitions from the staging view to the augmented reality view, the angle of rotation of the virtual object about at least one axis of a predefined three-dimensional coordinate system (for example, the yaw, pitch, and roll axes) is maintained. In some embodiments, if a user input has manipulated the second representation of the virtual object in some manner (for example, by changing its size, shape, texture, or orientation), only at least one display property of the second representation of the virtual object is applied to the third representation of the virtual object. In other words, when the object is displayed in the augmented reality view, or when the appearance of the object in the augmented reality view is constrained in one or more ways, the changes made in the staging view are maintained. Applying at least one display property of the second representation of the virtual object to the third representation of the virtual object (for example, without requiring further user input to apply the same display property to both the second representation and the third representation of the virtual object) enhances the operability of the device (for example, by allowing the user to apply a rotation to the second representation while an enlarged version of the virtual object is displayed in the second user interface, and then displaying that rotation applied to the third representation of the virtual object shown with the representation of the field of view of the one or more cameras), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
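The carry-over of display properties across view transitions described above can be pictured with a minimal sketch. All names here (`Representation`, the yaw/pitch/roll fields, `to_ar_view`) are illustrative assumptions, not the patent's actual implementation; the point is only that rotation angles set in the staging view are copied unchanged into the AR representation, while size is left to the separate AR sizing step.

```python
from dataclasses import dataclass

@dataclass
class Representation:
    yaw: float    # rotation about the vertical axis, in degrees
    pitch: float
    roll: float
    size: float

def to_ar_view(staged: Representation) -> Representation:
    """Carry the staging-view rotation into the AR representation.

    Per the behavior described above, angles about the predefined axes are
    maintained; size is recomputed separately from camera distance, so it is
    simply passed through here.
    """
    return Representation(yaw=staged.yaw, pitch=staged.pitch,
                          roll=staged.roll, size=staged.size)

# A 45-degree yaw applied in the staging view survives the transition.
ar = to_ar_view(Representation(yaw=45.0, pitch=10.0, roll=0.0, size=1.0))
```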
In some embodiments, in response to detecting (926) at least a first initial portion of the first input made by the first contact (e.g., including: detecting the first contact; or detecting an input made by the first contact that meets respective predefined criteria but does not meet the first criteria; or detecting an input that meets the first criteria): the device activates the one or more cameras (for example, activating the cameras without immediately displaying the field of view of the cameras on the display), and the device analyzes the field of view of the one or more cameras to detect one or more planes in the field of view of the one or more cameras. In some embodiments, display of the field of view 6036 of the one or more cameras is delayed after the one or more cameras are activated (for example, until a second input corresponding to a request to display the virtual object in an augmented reality environment is detected, until at least one field-of-view plane is detected, or until a field-of-view plane corresponding to an anchor plane defined for the virtual object is detected). In some embodiments, the field of view 6036 of the one or more cameras is displayed at a time corresponding to activation of the one or more cameras (for example, while the one or more cameras are being activated). In some embodiments, the field of view 6036 of the one or more cameras is displayed before a plane is detected in the field of view of the one or more cameras (for example, the field of view of the one or more cameras is displayed in response to detecting the first input made by the contact and in accordance with a determination). Activating the cameras and analyzing the field of view of the cameras to detect one or more field-of-view planes in response to detecting the initial portion of the first input (for example, before displaying the third representation of the virtual object with the representation of the field of view of the one or more cameras) improves the efficiency of the device (for example, by reducing the amount of time needed to determine the position and/or orientation of the third representation of the virtual object relative to a respective plane in the field of view of the cameras), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
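The staged activation just described — cameras warmed up and planes analyzed during the initial portion of the input, with the camera feed shown only later — can be sketched as a small state machine. The class and method names (`ARSession`, `on_input_initial_portion`, and so on) are assumptions made for illustration; this is not any real framework's API.

```python
class ARSession:
    """Toy model of deferred camera-feed display with early plane detection."""

    def __init__(self):
        self.cameras_active = False
        self.detected_planes = []
        self.feed_visible = False

    def on_input_initial_portion(self):
        # The initial portion of the first input activates the cameras,
        # but the field of view is not yet shown on the display.
        self.cameras_active = True

    def analyze_frame(self, frame_planes):
        # Plane detection runs in the background while the feed is hidden.
        if self.cameras_active:
            self.detected_planes.extend(frame_planes)

    def on_show_in_ar_request(self):
        # Deferred display: the feed appears only once the request to show
        # the object in AR arrives (in other embodiments, once a suitable
        # plane has been detected).
        if self.cameras_active:
            self.feed_visible = True

session = ARSession()
session.on_input_initial_portion()
session.analyze_frame(["floor"])   # a plane is found before the feed shows
hidden_before_request = not session.feed_visible
session.on_show_in_ar_request()
```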
In some embodiments, in response to detecting a respective plane in the field of view of the one or more cameras (for example, floor surface 5038), the device outputs (928), with one or more tactile output generators 167, a tactile output indicating that a respective plane has been detected in the field of view of the one or more cameras. In some embodiments, the field of view 6036 is displayed before a field-of-view plane has been recognized. In some embodiments, after at least one field-of-view plane is detected, or after all field-of-view planes are recognized, other user interface controls and/or icons are overlaid on the real-world image in the field of view. Outputting a tactile output indicating that a plane has been detected provides the user with feedback indicating that a plane has been detected in the field of view of the cameras. Providing improved haptic feedback enhances the operability of the device (for example, by helping the user provide proper inputs and reducing unnecessary additional inputs for placing the virtual object), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the size of the third representation of the virtual object on the display is determined (930) based on the simulated real-world size of the virtual object (for example, virtual chair 5020) and the distance between the one or more cameras and a location in the field of view 6036 of the one or more cameras that has a fixed spatial relationship to the third representation of the virtual object (for example, a plane to which the virtual object is attached, such as floor surface 5038). In some embodiments, the size of the third representation of the virtual object is constrained such that the ratio of the size of the third representation of the virtual object to the field of view of the one or more cameras is maintained. In some embodiments, one or more physical dimension parameters (for example, length, width, depth, and/or radius) are defined for the virtual object. In some embodiments, in the second user interface (for example, the staging user interface), the virtual object is not constrained by its defined physical dimension parameters (for example, the size of the virtual object may change in response to user input). In some embodiments, the third representation of the virtual object is constrained by its defined dimension parameters. When a user input is detected for changing the position of the virtual object in the augmented reality view relative to the physical environment represented in the field of view, or when a user input is detected for changing the zoom level of the field of view, or when a user input is detected for moving the device relative to the surrounding physical environment, the appearance (for example, size and viewing angle) of the virtual object changes in a manner constrained by the following factors: the fixed spatial relationship between the virtual object and the physical environment (for example, as represented by the fixed spatial relationship between the anchor plane of the virtual object and a plane in the augmented reality environment), and a fixed ratio between the predefined size parameters of the virtual object and the actual size of the physical environment. Determining the size of the third representation of the virtual object based on the simulated real-world size of the virtual object and the distance between the one or more cameras and a location in the field of view of the cameras (for example, without requiring further user input to rescale the third representation of the virtual object to simulate the real-world size of the virtual object) enhances the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
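One way to read the sizing rule above is as a pinhole-camera projection: the on-screen size of the AR representation is the object's simulated real-world size scaled by focal length over distance, so the ratio to the camera's field of view stays fixed. This is a minimal sketch under that assumption; the focal-length value is illustrative and the patent does not specify the projection model.

```python
def onscreen_height_px(real_height_m: float, distance_m: float,
                       focal_px: float = 1000.0) -> float:
    """Projected height, in pixels, of an object real_height_m metres tall
    at distance_m metres from the camera (simple pinhole model)."""
    if distance_m <= 0:
        raise ValueError("object must be in front of the camera")
    return real_height_m * focal_px / distance_m

# A 0.9 m chair twice as far from the camera appears half as tall on
# screen, preserving a fixed ratio to the camera's field of view.
near = onscreen_height_px(0.9, 1.5)
far = onscreen_height_px(0.9, 3.0)
```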
In some embodiments, the second input corresponding to the request to display the virtual object in an augmented reality environment includes (932) an input that (selects and) drags the second representation of the virtual object (for example, drags it more than a threshold distance, drags it beyond a defined boundary, and/or drags it to a position within a threshold range of an edge of the display or of the second user interface region (for example, a bottom, top, and/or side edge)). Displaying the third representation of the virtual object with the field of view of the cameras in response to detecting the second input corresponding to the request to display the virtual object in an augmented reality environment provides additional control options without cluttering the second user interface with additionally displayed controls (for example, a control for displaying the augmented reality environment from the second user interface). Providing additional control options without cluttering the second user interface with additionally displayed controls enhances the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
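The drag test above — transition to AR when the drag exceeds a distance threshold or ends near a display edge — can be sketched as a single predicate. The numeric thresholds are illustrative assumptions only; the patent does not give concrete values.

```python
def is_show_in_ar_request(drag_distance_px: float, edge_distance_px: float,
                          distance_threshold_px: float = 100.0,
                          edge_threshold_px: float = 20.0) -> bool:
    """True when a drag of the object's second representation should trigger
    the transition to the AR (camera field-of-view) user interface: either
    the drag travelled far enough, or it ended close to a display edge."""
    return (drag_distance_px > distance_threshold_px
            or edge_distance_px < edge_threshold_px)

long_drag = is_show_in_ar_request(150.0, 200.0)   # exceeds distance threshold
edge_drag = is_show_in_ar_request(50.0, 10.0)     # ends near the display edge
short_drag = is_show_in_ar_request(50.0, 200.0)   # neither condition met
```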
In some embodiments, while the second representation of the virtual object is displayed in the second user interface region (for example, staging user interface 6010 as shown in Figure 6Z), the device detects (934) a fourth input that meets respective criteria for redisplaying the first user interface region (for example, a tap, firm press, or touch-hold-and-drag input at a position on the touch-sensitive surface corresponding to the second representation of the virtual object or at another position on the touch-sensitive surface (for example, a bottom or edge of the second user interface region), and/or an input at a position on the touch-sensitive surface corresponding to a control for returning to the first user interface region), and in response to detecting the fourth input, the device ceases to display the second representation of the virtual object in the second user interface region, and the device redisplays the first representation of the virtual object in the first user interface region. For example, as shown in Figures 6Z to 6AC, in response to an input made by contact 6042 at a position corresponding to back control 6016 displayed in staging user interface 6010, the device ceases to display the second representation of virtual chair 5020 in the second user interface region (for example, staging user interface 6010), and the device redisplays the first representation of virtual chair 5020 in the first user interface region (for example, instant messaging user interface 5008). In some embodiments, the first representation of the virtual object is displayed in the first user interface region with the same appearance, position, and/or orientation as previously displayed before transitioning to the staging view and/or the augmented reality view. For example, in Figure 6AC, virtual chair 5020 is displayed in instant messaging user interface 5008 with the same orientation that virtual chair 5020 had when instant messaging user interface 5008 was displayed in Figure 6A. In some embodiments, when transitioning back to displaying the virtual object in the first user interface region, the device continuously displays the virtual object on the screen. For example, in Figures 6Y to 6AC, virtual chair 5020 is continuously displayed during the transition from displaying staging user interface 6010 to displaying instant messaging user interface 5008. Determining whether to redisplay the first representation of the virtual object in the first user interface in accordance with whether the fourth input, detected while the second representation of the virtual object is displayed in the second user interface, meets the criteria for redisplaying the first user interface enables multiple different types of operations to be performed in response to the fourth input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user can perform these operations, thereby enhancing the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while the third representation of the virtual object is displayed with the field of view 6036 of the one or more cameras (for example, as shown in Figure 6U), the device detects (936) a fifth input that meets respective criteria for redisplaying the second user interface region (for example, a tap, firm press, or touch-and-drag input at a position on the touch-sensitive surface corresponding to the third representation of the virtual object or at another position on the touch-sensitive surface, and/or an input at a position on the touch-sensitive surface corresponding to a control for returning to display of the second user interface region), and in response to detecting the fifth input, the device ceases to display the third representation of the virtual object with the representation of the field of view of the one or more cameras, and redisplays the second representation of the virtual object in the second user interface region. For example, as shown in Figures 6V to 6Y, in response to an input made by contact 6040 at a position corresponding to toggle control 6018 displayed in the third user interface that includes the field of view 6036 of the cameras, the device ceases to display the field of view 6036 of the cameras and redisplays staging user interface 6010. In some embodiments, the second representation of the virtual object is displayed in the second user interface region with the same orientation as the second representation of the virtual object had when displayed in the augmented reality view. In some embodiments, when transitioning back to displaying the virtual object in the second user interface region, the device continuously displays the virtual object on the screen. For example, in Figures 6V to 6Y, virtual chair 5020 is continuously displayed during the transition from displaying the field of view 6036 of the cameras to displaying staging user interface 6010. Determining whether to redisplay the second representation of the virtual object in the second user interface in accordance with whether the fifth input, detected while the third representation of the virtual object is displayed with the field of view of the cameras, meets the criteria for redisplaying the second user interface enables multiple different types of operations to be performed in response to the fifth input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user can perform these operations, thereby enhancing the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while the third representation of the virtual object is displayed with the field of view 6036 of the one or more cameras, the device detects (938) a sixth input that meets respective criteria for redisplaying the first user interface region (for example, instant messaging user interface 5008), and in response to detecting the sixth input, the device ceases to display the third representation of the virtual object (for example, virtual chair 5020) and the representation of the field of view 6036 of the one or more cameras (for example, as shown in Figure 6U), and the device redisplays the first representation of the virtual object in the first user interface region (for example, as shown in Figure 6AC). In some embodiments, the sixth input is, for example, a tap, firm press, or touch-and-drag input at a position on the touch-sensitive surface corresponding to the third representation of the virtual object or at another position on the touch-sensitive surface, and/or an input at a position on the touch-sensitive surface corresponding to a control for returning to display of the first user interface region. In some embodiments, the first representation of the virtual object is displayed in the first user interface region with the same appearance and position as displayed before transitioning to the staging view and/or the augmented reality view. In some embodiments, when transitioning back to displaying the virtual object in the first user interface region, the device continuously displays the virtual object on the screen. Determining whether to redisplay the first representation of the virtual object in the first user interface in accordance with whether the sixth input, detected while the third representation of the virtual object is displayed with the field of view of the cameras, meets the criteria for redisplaying the first user interface enables multiple different types of operations to be performed in response to the sixth input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user can perform these operations, thereby enhancing the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting the first input made by the first contact, and in accordance with a determination that the input made by the first contact meets the first criteria, the device continuously displays (940) the virtual object while transitioning from displaying the first user interface region (for example, instant messaging user interface 5008) to displaying the second user interface region (for example, staging user interface 6010), including displaying an animation (for example, movement, rotation about one or more axes, and/or scaling) of the first representation of the virtual object in the first user interface region transitioning to the second representation of the virtual object in the second user interface region. For example, in Figures 6E to 6I, virtual chair 5020 is continuously displayed and animated (for example, the orientation of virtual chair 5020 changes) during the transition from displaying instant messaging user interface 5008 to displaying staging user interface 6010. In some embodiments, the virtual object has a defined orientation, position, and/or distance relative to a plane in the field of view of the cameras (for example, defined based on the shape and orientation of the first representation of the virtual object as shown in the first user interface region), and when transitioning to the second user interface region, the first representation of the virtual object is moved, resized, and/or reoriented, independently of the remainder of the display, to display the second representation of the virtual object at a new position on the display (for example, the center of the virtual stage plane in the second user interface region), and during the movement or at the end of the movement, the virtual object is reoriented so that it is at a predetermined angle relative to a predefined virtual stage plane, which is defined independently of the physical environment surrounding the device. Displaying an animation of the first representation of the virtual object in the first user interface transitioning to the second representation of the virtual object in the second user interface provides the user with feedback indicating that the first input meets the first criteria. Providing improved feedback enhances the operability of the device (for example, by helping the user provide proper inputs and reducing user mistakes when operating and interacting with the device), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting the second input made by the second contact, and in accordance with a determination that the second input made by the second contact corresponds to a request to display the virtual object in an augmented reality environment, the device continuously displays (942) the virtual object while transitioning from displaying the second user interface region (for example, staging user interface 6010) to displaying the third user interface region that includes the field of view 6036 of the one or more cameras, including displaying an animation (for example, movement, rotation about one or more axes, and/or scaling) of the second representation of the virtual object in the second user interface region transitioning to the third representation of the virtual object in the third user interface region that includes the field of view of the one or more cameras. For example, in Figures 6Q to 6U, virtual chair 5020 is continuously displayed and animated (for example, the position and size of virtual chair 5020 change) during the transition from displaying staging user interface 6010 to displaying the field of view 6036 of the cameras. In some embodiments, the virtual object is reoriented so that the virtual object is at a predefined orientation, position, and/or distance relative to a field-of-view plane detected in the field of view of the one or more cameras (for example, a physical surface, such as a vertical wall or a horizontal floor surface, capable of supporting a three-dimensional representation of the user interface object). Displaying an animation of the second representation of the virtual object in the second user interface transitioning to the third representation of the virtual object in the third user interface provides the user with feedback indicating that the second input corresponds to a request to display the virtual object in an augmented reality environment. Providing improved visual feedback to the user enhances the operability of the device (for example, by helping the user provide proper inputs and reducing user mistakes when operating and interacting with the device), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
It should be understood that the particular order in which the operations in Figures 9A to 9D have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art will recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (for example, methods 800, 900, 16000, 17000, 18000, 19000, and 20000) are likewise applicable in an analogous manner to method 900 described above with respect to Figures 9A to 9D. For example, the contacts, inputs, virtual objects, user interface regions, intensity thresholds, fields of view, tactile outputs, movements, and/or animations described above with reference to method 900 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, intensity thresholds, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (for example, methods 800, 900, 16000, 17000, 18000, 19000, and 20000). For brevity, these details are not repeated here.
Figures 10A to 10D are flow diagrams illustrating method 1000 of displaying items with visual indications of which items correspond to virtual three-dimensional objects, in accordance with some embodiments. Method 1000 is performed at an electronic device (for example, device 300 in Figure 3, or portable multifunction device 100 in Figure 1A) with a display and a touch-sensitive surface (for example, a touch-screen display that serves as both the display and the touch-sensitive surface). In some embodiments, the display is a touch-screen display and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 1000 are optionally combined, and/or the order of some operations is optionally changed.
As described below, method 1000 relates to displaying items in a first user interface and a second user interface. Depending on whether an item corresponds to a respective virtual three-dimensional object, each item is displayed either with or without a visual indication that the item corresponds to a virtual three-dimensional object. Providing the user with an indication of whether an item is a virtual three-dimensional object increases the efficiency with which the user can perform operations with respect to the item (for example, by helping the user provide proper inputs depending on whether the item is a virtual three-dimensional object), thereby enhancing the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
The device receives (1002) a request to display a first user interface that includes a first item (for example, an icon, thumbnail, image, emoji, attachment, sticker, application icon, avatar, etc.). For example, in some embodiments, the request is an input (for example, as described with reference to Figure 7A) for opening a user interface (for example, Internet browser user interface 5060, as shown in Figure 7B) for displaying a representation of the first item in a predefined environment associated with the first item. The predefined environment is optionally a user interface of an application (for example, an email application, instant messaging application, browser application, word processing application, e-reader application, etc.) or a system user interface (for example, a lock screen, notification interface, suggestions interface, control panel user interface, home screen user interface, etc.).
In response to the request to display the first user interface, the device displays (1004) the first user interface with a representation of the first item (for example, Internet browser user interface 5060 as shown in Figure 7B). In accordance with a determination that the first item corresponds to a respective virtual three-dimensional object, the device displays the representation of the first item with a visual indication (for example, an image displayed at a position corresponding to the representation of the first item, such as an icon and/or background panel; an outline; and/or text) that the first item corresponds to a first respective virtual three-dimensional object. In accordance with a determination that the first item does not correspond to a respective virtual three-dimensional object, the device displays the representation of the first item without the visual indication. For example, as shown in Figure 7B, in Internet browser user interface 5060, the displayed network object 5068 (which includes a representation of virtual three-dimensional lamp object 5084) has a visual indication (virtual object indicator 5080) that virtual lamp 5084 is a virtual three-dimensional object, and the displayed network object 5074 does not have a virtual object indicator, because network object 5074 does not include an item corresponding to a virtual three-dimensional object.
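The conditional display rule above — an item's representation gets the virtual-object indicator only when the item corresponds to a virtual three-dimensional object — reduces to a simple check. This is a minimal sketch; the `Item` type, the model-asset field, and the `"AR"` glyph are illustrative assumptions rather than anything specified by the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    name: str
    virtual_3d_model: Optional[str] = None  # e.g. a model asset reference

def indicator_for(item: Item) -> Optional[str]:
    """Return the virtual-object indicator glyph for the item's
    representation, or None when the item has no corresponding virtual
    three-dimensional object."""
    return "AR" if item.virtual_3d_model is not None else None

lamp = Item("virtual lamp 5084", virtual_3d_model="lamp_model")
plain = Item("network object 5074")  # no corresponding 3D object
```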
After displaying the representation of the first item, the device receives (1006) a request (for example, the input described with reference to Figures 7H to 7L) to display a second user interface (for example, instant messaging user interface 5008 as shown in Figure 7M) that includes a second item (for example, an icon, thumbnail, image, emoji, attachment, sticker, application icon, avatar, etc.). The second item is different from the first item, and the second user interface is different from the first user interface. For example, in some embodiments, the request is another input for opening a user interface for displaying a representation of the second item in a predefined environment associated with the second item. The predefined environment is optionally a user interface of an application other than the application in which the first item was displayed (for example, an email application, instant messaging application, browser application, word processing application, e-reader application, etc.) or a system user interface other than the system user interface in which the first item was displayed (for example, a lock screen, notification interface, suggestions interface, control panel user interface, home screen user interface, etc.).
In response to the request to display the second user interface, the device displays (1008) the second user interface with a representation of the second item (for example, instant messaging user interface 5008 as shown in Figure 7M). In accordance with a determination that the second item corresponds to a respective virtual three-dimensional object, the device displays the representation of the second item with a visual indication (for example, the same visual indication as the one indicating that the first item corresponds to a virtual three-dimensional object) that the second item corresponds to a second respective virtual three-dimensional object. In accordance with a determination that the second item does not correspond to a respective virtual three-dimensional object, the device displays the representation of the second item without the visual indication. For example, as shown in Figure 7M, in instant messaging user interface 5008, the displayed virtual three-dimensional chair object 5020 has a visual indication (virtual object indicator 5022) that virtual chair 5020 is a virtual three-dimensional object, and the displayed emoji 7020 does not have a virtual object indicator, because emoji 7020 does not include an item corresponding to a virtual three-dimensional object.
In some embodiments, displaying the first item (for example, virtual lamp 5084) with the visual indication (for example, virtual object indicator 5080) that the first item corresponds to the first respective virtual three-dimensional object includes (1010): in response to detecting device movement that causes a change from a first device orientation to a second device orientation (for example, as detected by orientation sensors, such as one or more accelerometers 168 of device 100), displaying movement of the first item (for example, tilting of the first item relative to the first user interface and/or movement of the first item) that corresponds to the change from the first device orientation to the second device orientation. For example, the first device orientation is the orientation of device 100 as shown in Figure 7F1, and the second device orientation is the orientation of device 100 as shown in Figure 7G1. In response to the movement shown in Figures 7F1 to 7G1, the first item (for example, virtual lamp 5084) tilts (for example, as shown in Figures 7F2 to 7G2). In some embodiments, if a second object corresponds to a virtual three-dimensional object, the second object also responds to detected device movement in the manner described above (for example, to indicate that the second object also corresponds to a virtual three-dimensional object).

Displaying movement of the first item that corresponds to the change from the first device orientation to the second device orientation provides the user with visual feedback indicating the behavior of the virtual three-dimensional object. Providing improved visual feedback to the user enhances the operability of the device (for example, by allowing the user to view the virtual three-dimensional object at various orientations without needing to provide further input), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, displaying the representation of the first item with the corresponding visual indication that the first item corresponds to a first virtual three-dimensional object includes (1012): in response to detecting a first input, made by a first contact, that scrolls the first user interface while the representation of the first item is displayed in the first user interface (e.g., a swipe input in a first direction on the first user interface, or a touch-hold input on a scroll button at the end of a scroll bar): the device translates the displayed representation of the first item in accordance with the scrolling of the first user interface (e.g., moving the anchor position of the first item on the display by a scroll distance based on the amount by which the first user interface is scrolled on the touch-sensitive surface (e.g., when the first user interface is dragged upward by a contact moving in the opposite direction, the representation of the first item moves upward on the display with the first user interface)), and the device rotates the representation of the first item relative to a plane defined by the first user interface (or the display) in accordance with the direction in which the first user interface is scrolled. For example, as shown in Figures 7C through 7D, in response to detecting an input made by contact 7002 that scrolls Internet browser user interface 5060 while the representation of virtual lamp 5084 is displayed in Internet browser user interface 5060, virtual lamp 5084 is translated in accordance with the scrolling of Internet browser user interface 5060, and virtual lamp 5084 is rotated relative to display 112 in accordance with the direction of the movement path of contact 7002. In some embodiments, in accordance with a determination that the first user interface is dragged upward, the representation of the first item moves upward with the first user interface, and the viewing perspective of the first item as shown on the first user interface changes, as if the user were viewing the first item from a different angle (e.g., a lower angle). In some embodiments, in accordance with a determination that the second user interface is dragged upward, the representation of the second item moves upward with the second user interface, and the viewing perspective of the second item as shown on the second user interface changes, as if the user were viewing the second item from a different angle (e.g., a lower angle).
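The scroll behavior above couples two transforms: the item translates with the scrolled content, and it tilts according to the scroll direction, changing the apparent viewing angle. A minimal sketch of that coupling follows; the linear tilt mapping and the clamp values are illustrative assumptions, not values from the patent.

```python
# Sketch (not from the patent): tie a 3D-capable item's on-screen offset and
# tilt angle to the scroll position of its containing user interface.

def item_transform(scroll_offset_px, max_tilt_deg=20.0, tilt_per_px=0.05):
    """Return (translation_px, tilt_deg) for a 3D-capable item.

    translation_px: the item's anchor moves 1:1 with the scrolled content.
    tilt_deg: rotation about the screen-parallel axis, clamped so the
    viewing angle changes as if seen from a lower or higher vantage point.
    """
    translation = scroll_offset_px  # anchor moves with the page
    tilt = max(-max_tilt_deg, min(max_tilt_deg, scroll_offset_px * tilt_per_px))
    return translation, tilt
```

Scrolling far in either direction saturates the tilt at the clamp, so the object never rotates past a plausible viewing angle.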
Displaying movement of the item that corresponds to the change from the first device orientation to the second device orientation provides the user with visual feedback indicating the change in device orientation. Providing improved visual feedback to the user enhances the operability of the device (e.g., by allowing the user to view the virtual three-dimensional object at various orientations without needing to provide further input), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while the representation of the first item (e.g., lamp object 5084) with the visual indication (e.g., virtual object indicator 5080) is displayed in the first user interface (e.g., Internet browser user interface 5060, as shown in Figure 7B), the device displays (1014) a representation of a third item, where the displayed representation of the third item does not have the visual indication, so as to indicate that the third item does not correspond to a virtual three-dimensional object (e.g., the third item does not correspond to any three-dimensional object that can be rendered in an augmented reality environment). For example, as shown in Figure 7B, in Internet browser user interface 5060, displayed web objects 5074, 5070, and 5076 do not have virtual object indicators, because web objects 5074, 5070, and 5076 do not correspond to virtual three-dimensional objects.
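The indicator behavior described here reduces to a predicate over the items in a user interface: badge only those items that have a renderable three-dimensional counterpart. A sketch under a hypothetical item model (the `(item_id, has_3d_model)` pairs are an assumption for illustration):

```python
# Sketch (hypothetical data model): show the virtual-object indicator only
# on items that correspond to a renderable virtual three-dimensional object.

def items_with_indicator(items):
    """items: iterable of (item_id, has_3d_model) pairs.

    Returns the ids that should be displayed with the visual indication."""
    return [item_id for item_id, has_3d_model in items if has_3d_model]


# Example mirroring Figure 7B: only the lamp gets the indicator.
page = [("lamp-5084", True), ("web-5074", False),
        ("web-5070", False), ("web-5076", False)]
```

With this page model, `items_with_indicator(page)` yields only `"lamp-5084"`, matching the figure.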
Displaying, in the first user interface, the first item with the visual indication that the first item is a virtual three-dimensional object, and the third item without the visual indication, improves the efficiency with which the user can perform operations using the first user interface (e.g., by helping the user provide appropriate input according to whether the item with which the user is interacting is a virtual three-dimensional object), which enhances the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while the representation of the second item (e.g., virtual chair 5020) with the visual indication (e.g., virtual object indicator 5022) is displayed in the second user interface (e.g., instant messaging user interface 5008, as shown in Figure 7M), the device displays (1016) a representation of a fourth item (e.g., emoji 7020), where the displayed representation of the fourth item does not have the visual indication, so as to indicate that the fourth item does not correspond to a respective virtual three-dimensional object.
Displaying, in the second user interface, the second item with the visual indication that the second item is a virtual three-dimensional object, and the fourth item without the visual indication, improves the efficiency with which the user can perform operations using the second user interface (e.g., by helping the user provide appropriate input according to whether the item with which the user is interacting is a virtual three-dimensional object), which enhances the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments (1018), the first user interface (e.g., Internet browser user interface 5060, as shown in Figure 7B) corresponds to a first application (e.g., an Internet browser application), the second user interface (e.g., instant messaging user interface 5008, as shown in Figure 7M) corresponds to a second application (e.g., an instant messaging application) that is different from the first application, and the displayed representation of the first item (e.g., lamp object 5084) with the visual indication (e.g., virtual object indicator 5080) and the displayed representation of the second item (e.g., virtual chair 5020) with the visual indication (e.g., virtual object indicator 5022) share a set of predefined visual characteristics and/or behavioral characteristics (e.g., using the same indicator icon, or having the same texture or rendering style, and/or the same behavior when invoked by an input of a predefined type). For example, the icon for virtual object indicator 5080 and the icon for virtual object indicator 5022 include the same symbol.
Displaying the first item with the visual indication in the first user interface of the first application, and displaying the second item with the visual indication in the second user interface of the second application, such that the visual indications of the first item and the second item share a set of predefined visual characteristics and/or behavioral characteristics, improves the efficiency with which the user can perform operations using the second user interface (e.g., by helping the user provide appropriate input according to whether the item with which the user is interacting is a virtual three-dimensional object), which enhances the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first user interface is (1020) an Internet browser application user interface (e.g., Internet browser user interface 5060, as shown in Figure 7B), and the first item is an element of a web page (e.g., the first item is represented in the web page as an embedded image, a hyperlink, an applet, an emoji, an embedded media object, etc.). For example, the first item is virtual lamp object 5084 of web object 5068.
Displaying the web page element with the visual indication that the web page element is a virtual three-dimensional object improves the efficiency with which the user can perform operations using the Internet browser application (e.g., by helping the user provide appropriate input according to whether the web page element with which the user is interacting is a virtual three-dimensional object), which enhances the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first user interface is (1022) an email application user interface (e.g., email user interface 7052, as shown in Figure 7P), and the first item is an attachment of an email (e.g., attachment 7060).
Displaying the email attachment with the visual indication that the email attachment is a virtual three-dimensional object improves the efficiency with which the user can perform operations using the email application user interface (e.g., by helping the user provide appropriate input according to whether the email attachment with which the user is interacting is a virtual three-dimensional object), which enhances the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first user interface is (1024) an instant messaging application user interface (e.g., instant messaging user interface 5008, as shown in Figure 7M), and the first item is an attachment or element of a message (e.g., virtual chair 5020) (e.g., the first item is an image, a hyperlink, a mini program, an emoji, a media object, etc.).
Displaying the message attachment or element with the visual indication that the message attachment or element is a virtual three-dimensional object improves the efficiency with which the user can perform operations using the instant messaging user interface (e.g., by helping the user provide appropriate input according to whether the message attachment or element with which the user is interacting is a virtual three-dimensional object), which enhances the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first user interface is (1026) a file management application user interface (e.g., file management user interface 7036, as shown in Figure 7O), and the first item is a file preview object (e.g., file preview object 7045 in file information region 7046).
Displaying the file preview object with the visual indication that the file preview object is a virtual three-dimensional object improves the efficiency with which the user can perform operations using the file management application user interface (e.g., by helping the user provide appropriate input according to whether the file preview object with which the user is interacting is a virtual three-dimensional object), which enhances the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first user interface is (1028) a map application user interface (e.g., map application user interface 7024), and the first item is a representation of a point of interest in a map (e.g., point of interest object 7028) (e.g., a three-dimensional representation of a feature corresponding to a location on the map (e.g., including a three-dimensional representation of terrain and/or structures corresponding to the location on the map), or a control that, when activated, causes a three-dimensional representation of the map to be displayed).
Displaying, in the map, the representation of the point of interest with the visual indication that the representation of the point of interest is a virtual three-dimensional object improves the efficiency with which the user can perform operations using the map application user interface (e.g., by helping the user provide appropriate input according to whether the representation of the point of interest with which the user is interacting is a virtual three-dimensional object), which enhances the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the visual indication that the first item corresponds to a respective virtual three-dimensional object includes (1030) an animation of the first item that occurs without requiring an input directed to the representation of the respective three-dimensional object (e.g., a continuous movement of, or a changing visual effect (e.g., shimmering, flickering, etc.) applied to, the first item over time).
Displaying an animation of the first item that occurs without an input directed to the representation of the respective three-dimensional object enhances the operability of the device (e.g., by reducing the number of inputs needed for the user to view the three-dimensional aspects of the first item), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while the representation of the second item (e.g., virtual chair 5020) with the visual indication (e.g., virtual object indicator 5022) that the second item corresponds to a respective virtual three-dimensional object is displayed, the device detects (1032) a second input, made by a second contact, at a position on the touch-sensitive surface that corresponds to the representation of the second item (e.g., the input described with reference to Figures 5C through 5F), and in response to detecting the second input made by the second contact, and in accordance with a determination that the second input made by the second contact meets first (e.g., AR-trigger) criteria, the device displays a third user interface region on the display, which includes replacing display of at least a portion of the second user interface (e.g., instant messaging user interface 5008) with a representation of field of view 5036 of the one or more cameras (e.g., as described with reference to Figures 5F through 5I), and continuously displays the second virtual three-dimensional object while transitioning from displaying the second user interface to displaying the third user interface region (e.g., as described in greater detail herein with reference to method 800). In some embodiments, the device displays an animation of the representation of the virtual object being continuously displayed while switching from displaying the second user interface to displaying the portion with the representation of the field of view of the one or more cameras (e.g., as described in greater detail herein with reference to operation 834).
Using the first criteria to determine whether to display the third user interface region enables multiple different types of operations to be performed in response to the second input. Enabling multiple different types of operations to be performed in response to an input improves the efficiency with which the user can perform these operations, which enhances the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
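Routing one contact to different operations depending on which criteria it meets can be sketched as a simple classifier over the input's measured properties. The long-press and drag thresholds below are illustrative assumptions; the patent only specifies that distinct criteria select distinct operations.

```python
# Sketch: route one touch input to different operations depending on which
# criteria it meets. Threshold values are illustrative, not from the patent.

def classify_input(duration_s, moved_px, long_press_s=0.5, drag_px=10):
    """Return the operation selected for a contact with the given
    duration (seconds) and total movement (pixels)."""
    if moved_px >= drag_px:
        return "scroll"        # significant movement: treat as a scroll/drag
    if duration_s >= long_press_s:
        return "show_ar_view"  # stationary long press meets AR-trigger criteria
    return "tap"               # default activation
```

A quick tap activates the item, a stationary long press opens the camera-backed AR view, and a moving contact scrolls the surrounding user interface.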
In some embodiments (e.g., as described in greater detail herein with reference to method 900), while the second item (e.g., virtual chair 5020) with the visual indication (e.g., virtual object indicator 5022) that the second item corresponds to a respective virtual three-dimensional object is displayed, the device detects (1034) a third input, made by a third contact, at a position on the touch-sensitive surface that corresponds to the representation of the second item (e.g., the input described with reference to Figures 6E through 6I), and in response to detecting the third input made by the third contact, and in accordance with a determination that the third input made by the third contact meets first (e.g., staging-trigger) criteria, the device displays the second virtual three-dimensional object in a fourth user interface that is different from the second user interface (e.g., staging user interface 6010, as described in greater detail with reference to method 900). In some embodiments, while the second virtual three-dimensional object is displayed in the fourth user interface (e.g., staging user interface 6010, as shown in Figure 6I), the device detects a fourth input, and in response to detecting the fourth input: in accordance with a determination that the fourth input corresponds to a request to manipulate the second virtual three-dimensional object in the fourth user interface, the device changes a display property of the second virtual three-dimensional object in the fourth user interface based on the fourth input (e.g., as described with reference to Figures 6J through 6M and/or as described with reference to Figures 6N through 6P), and in accordance with a determination that the fourth input corresponds to a request to display the second virtual object in an augmented reality environment (e.g., a tap input, a press input, or a touch-hold or press input followed by a drag input, made at or starting from a position on the touch-sensitive surface that corresponds to the representation of the virtual object in the second user interface region), the device displays the second virtual three-dimensional object with a representation of the field of view of the one or more cameras (e.g., as described with reference to Figures 6Q through 6U).
While the second three-dimensional object is displayed in the fourth user interface (e.g., staging user interface 6010), in response to the fourth input, the device either changes a display property of the second three-dimensional object based on the fourth input or displays the second three-dimensional object with a representation of the field of view of the one or more cameras of the device. Enabling multiple different types of operations to be performed in response to an input (e.g., by changing a display property of the second three-dimensional object, or by displaying the second three-dimensional object with a representation of the field of view of the one or more cameras of the device) improves the efficiency with which the user can perform these operations, which enhances the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
It should be understood that the particular order in which the operations in Figures 10A through 10D are described is merely an example and is not intended to indicate that this is the only order in which these operations can be performed. One of ordinary skill in the art will recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 16000, 17000, 18000, 19000, and 20000) are likewise applicable in an analogous manner to method 1000 described above with respect to Figures 10A through 10D. For example, the contacts, inputs, virtual objects, user interfaces, user interface regions, fields of view, movements, and/or animations described above with reference to method 1000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interfaces, user interface regions, fields of view, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 16000, 17000, 18000, 19000, and 20000). For brevity, these details are not repeated here.
Figures 11A through 11V illustrate example user interfaces for displaying a virtual object with different visual properties depending on whether object placement criteria are met. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A through 8E, Figures 9A through 9D, Figures 10A through 10D, Figures 16A through 16G, Figures 17A through 17D, Figures 18A through 18I, Figures 19A through 19H, and Figures 20A through 20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device with touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., the centroid of a respective contact or a point associated with a respective contact), or the centroid of two or more contacts detected on touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with display 450 and separate touch-sensitive surface 451 in response to detecting the contacts on touch-sensitive surface 451 while the user interfaces shown in the figures are displayed on display 450, along with a focus selector.
Figures 11A through 11E illustrate an input for displaying a virtual object in a staging view. For example, the input is detected while a two-dimensional (e.g., thumbnail) representation of a three-dimensional object is displayed in a user interface (e.g., email user interface 7052, file management user interface 7036, map user interface 7022, instant messaging user interface 5008, Internet browser user interface 5060, or a third-party application user interface).
In Figure 11A, Internet browser user interface 5060 includes a two-dimensional representation of three-dimensional virtual object 11002 (a chair). An input (e.g., a tap input) made by contact 11004 is detected at a position corresponding to virtual object 11002. In response to the tap input, display of Internet browser user interface 5060 is replaced by display of staging user interface 6010.
Figures 11B through 11E illustrate a transition that occurs as display of Internet browser user interface 5060 is replaced by display of staging user interface 6010. In some embodiments, during the transition, virtual object 11002 gradually fades into view, and/or the controls of staging user interface 6010 (e.g., back control 6016, toggle control 6018, and/or share control 6020) gradually fade into view. For example, the controls of staging user interface 6010 fade into view after virtual object 11002 has faded into view (e.g., to delay display of the controls for the period needed to render the three-dimensional representation of virtual object 11002 on the display). In some embodiments, the "fading in" of virtual object 11002 includes displaying a low-resolution, two-dimensional, and/or holographic version of virtual object 11002, followed by display of the final three-dimensional representation of virtual object 11002. Figures 11B through 11D illustrate virtual object 11002 gradually fading in. In Figure 11D, shadow 11006 of virtual object 11002 is displayed. Figures 11D through 11E illustrate controls 6016, 6018, and 6020 gradually fading in.
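The staged transition above — the object fades in first, and the controls fade in only after a delay long enough to render the three-dimensional representation — can be sketched as an opacity schedule. The durations below are illustrative assumptions; the patent specifies only the ordering.

```python
# Sketch: stage the fade-in so the object appears before the staging
# controls. Durations are illustrative assumptions, not from the patent.

def fade_schedule(t, object_fade_s=0.3, control_delay_s=0.3, control_fade_s=0.2):
    """Return (object_opacity, controls_opacity) at time t seconds after
    the transition begins, each linearly ramped and clamped to [0, 1]."""
    obj = min(1.0, max(0.0, t / object_fade_s))
    ctl = min(1.0, max(0.0, (t - control_delay_s) / control_fade_s))
    return obj, ctl
```

Because the controls' ramp starts only after `control_delay_s`, they are still invisible at the moment the object reaches full opacity.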
Figures 11F through 11G illustrate an input that causes the three-dimensional representation of virtual object 11002 to be displayed in a user interface that includes field of view 6036 of the one or more cameras of device 100. In Figure 11F, an input made by contact 11008 is detected at a position corresponding to toggle control 6018. In response to the input, display of the user interface that includes camera field of view 6036 replaces display of staging user interface 6010, as shown in Figure 11G.
As shown in Figures 11G through 11H, when camera field of view 6036 is initially displayed, a semi-transparent representation of the virtual object may be displayed (e.g., while a plane corresponding to the virtual object has not yet been detected in camera field of view 6036).
Figures 11G through 11H illustrate the semi-transparent representation of virtual object 11002 displayed in the user interface that includes camera field of view 6036. The semi-transparent representation of virtual object 11002 is displayed at a fixed position relative to display 112. For example, from Figure 11G to Figure 11H, as device 100 moves relative to physical environment 5002 (e.g., as indicated by the changed position of table 5004 in camera field of view 6036), virtual object 11002 remains at a fixed position relative to display 112.
In some embodiments, in accordance with a determination that a plane corresponding to the virtual object has been detected in camera field of view 6036, the virtual object is placed on the detected plane.
In Figure 11I, a plane corresponding to virtual object 11002 has been detected in camera field of view 6036, and virtual object 11002 is placed on the detected plane. The device generates a tactile output, as indicated at 11010 (e.g., to indicate that at least one plane (e.g., floor surface 5038) has been detected in camera field of view 6036). Once virtual object 11002 is placed at a position relative to the plane detected in camera field of view 6036, virtual object 11002 remains at a fixed position relative to physical environment 5002 as captured by the one or more cameras. From Figure 11I to Figure 11J, as device 100 moves relative to physical environment 5002 (e.g., as indicated by the changed position of table 5004 in camera field of view 6036), virtual object 11002 remains at a fixed position relative to physical environment 5002.
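The two anchoring modes just described — screen-fixed before a plane is detected, world-fixed after placement, with a tactile output at the moment of placement — can be sketched with a one-dimensional coordinate simplification (a deliberate assumption; real AR anchoring is a 3-D pose transform):

```python
# Sketch (1-D simplification): before plane detection the object's position
# is fixed in screen space; after placement it is fixed in world space, so
# its screen position shifts opposite to device movement.

class PlacedObject:
    def __init__(self, screen_pos):
        self.anchored_to_world = False
        self.screen_pos = screen_pos   # fixed while screen-anchored
        self.world_pos = None

    def place_on_plane(self, device_pos):
        """Plane detected: pin the object to the physical environment."""
        self.anchored_to_world = True
        self.world_pos = device_pos + self.screen_pos
        return "haptic"  # device emits a tactile output on placement

    def screen_position(self, device_pos):
        if not self.anchored_to_world:
            return self.screen_pos          # moves with the device
        return self.world_pos - device_pos  # stays put in the world
```

Before placement, moving the device leaves the screen position unchanged; after placement, the same device movement shifts the object's screen position by the opposite amount, which is what keeps it fixed in the physical environment.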
In some embodiments, while camera field of view 6036 is displayed, the device ceases to display the controls (e.g., back control 6016, toggle control 6018, and/or share control 6020) (e.g., in accordance with a determination that a period of time has elapsed without receiving an input). In Figures 11J through 11L, controls 6016, 6018, and 6020 gradually fade out (e.g., as shown in Figure 11K), which increases the portion of display 112 in which camera field of view 6036 is displayed (e.g., as shown in Figure 11L).
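Hiding the controls after an idle period amounts to a last-input timer; the timeout value below is an assumption for illustration.

```python
# Sketch: hide the staging controls after an idle period with no input,
# and restore them when an input is detected. Timeout is an assumption.

class ControlVisibility:
    def __init__(self, timeout_s=3.0):
        self.timeout_s = timeout_s
        self.last_input_t = 0.0

    def on_input(self, t):
        """Record an input at time t (seconds); controls become visible."""
        self.last_input_t = t

    def visible(self, t):
        """Controls are shown while the idle period is below the timeout."""
        return (t - self.last_input_t) < self.timeout_s
```

This matches the figures: any new contact (as in Figure 11M) restarts the timer and redisplays the controls.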
Figures 11M through 11S illustrate inputs that manipulate virtual object 11002 while it is displayed in the user interface that includes camera field of view 6036.
In Figures 11M through 11N, an input (e.g., a depinch gesture) made by contacts 11012 and 11014 is detected for increasing the simulated physical size of virtual object 11002. In response to detecting the input, controls 6016, 6018, and 6020 are redisplayed. As contact 11012 moves along the path indicated by arrow 11016 and contact 11014 moves along the path indicated by arrow 11018, the size of virtual object 11002 increases.
In Figures 11N through 11P, an input (e.g., a pinch gesture) made by contacts 11012 and 11014 is detected for decreasing the simulated physical size of virtual object 11002. As contact 11012 moves along the path indicated by arrow 11020 and contact 11014 moves along the path indicated by arrow 11022, the size of virtual object 11002 decreases (as shown in Figures 11N through 11O and Figures 11O through 11P). In Figure 11O, as the size of virtual object 11002 is adjusted to its original size relative to physical environment 5002 (e.g., the size of virtual object 11002 when it was initially placed on the detected plane in physical environment 5002, as shown in Figure 11I), a tactile output occurs (as indicated at 11024) (e.g., to provide feedback indicating that virtual object 11002 has returned to its original size). In Figure 11Q, contacts 11012 and 11014 have lifted off touch-screen display 112.
In Figure 11R, an input (e.g., a double-tap input) is detected for returning virtual object 11002 to its original size relative to physical environment 5002. The input is detected at a position corresponding to virtual object 11002, as indicated by contact 11026. In response to the input, the size of virtual object 11002 is adjusted from the reduced size shown in Figure 11R to the original size of virtual object 11002, as indicated in Figure 11S. In Figure 11S, as the size of virtual object 11002 is adjusted to its original size relative to physical environment 5002, a tactile output occurs (as indicated at 11028) (e.g., to provide feedback indicating that virtual object 11002 has returned to its original size).
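The resize behavior of Figures 11M through 11S — pinch or depinch to rescale, a tactile output when the scale crosses back through the original placement size, and a double tap to reset — can be sketched as follows. The snap tolerance is an illustrative assumption; the figures only show that the haptic fires at the original size.

```python
# Sketch of the resize behavior: pinch/depinch rescales the object, a
# tactile output fires when the scale returns to the original size, and a
# double tap resets it. Snap tolerance is an illustrative assumption.

class ResizableObject:
    def __init__(self):
        self.scale = 1.0  # 1.0 == size at initial placement

    def pinch(self, factor, snap_tolerance=0.02):
        """Apply a pinch (factor < 1) or depinch (factor > 1) gesture."""
        self.scale *= factor
        if abs(self.scale - 1.0) <= snap_tolerance:
            self.scale = 1.0
            return "haptic"  # feedback: returned to original size
        return None

    def double_tap(self):
        """Reset to the original size, with confirming feedback."""
        self.scale = 1.0
        return "haptic"
```

Snapping within a tolerance also absorbs floating-point drift, so pinching back down by the exact inverse factor lands cleanly on the original size.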
In Figure 11T, an input made by contact 11030 is detected at a position corresponding to toggle control 6018. In response to the input, staging user interface 6010 replaces display of the user interface that includes camera field of view 6036, as shown in Figure 11U.
In Figure 11U, an input made by contact 11032 is detected at a position corresponding to back control 6016. In response to the input, Internet browser user interface 5060 replaces display of staging user interface 6010, as shown in Figure 11V.
Figures 12A through 12L illustrate example user interfaces for displaying a calibration user interface object that is dynamically animated in accordance with movement of the one or more cameras of the device. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A through 8E, Figures 9A through 9D, Figures 10A through 10D, Figures 16A through 16G, Figures 17A through 17D, Figures 18A through 18I, Figures 19A through 19H, and Figures 20A through 20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device with touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., the centroid of a respective contact or a point associated with a respective contact), or the centroid of two or more contacts detected on touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with display 450 and separate touch-sensitive surface 451 in response to detecting the contacts on touch-sensitive surface 451 while the user interfaces shown in the figures are displayed on display 450, along with a focus selector.
In accordance with some embodiments, the calibration user interface object is displayed when a request to display a virtual object in a user interface that includes the field of view of the one or more cameras has been received, but additional data is needed for calibration of the device.
Figure 12A illustrates an input requesting display of virtual object 11002 in a user interface that includes field of view 6036 of the one or more cameras. An input made by contact 12002 is detected at a position corresponding to toggle control 6018. In response to the input, display of the user interface that includes camera field of view 6036 replaces display of staging user interface 6010, as shown in Figure 12B. A semi-transparent representation of virtual object 11002 is displayed in the user interface that includes camera field of view 6036. Because calibration is needed (e.g., because a plane corresponding to virtual object 11002 has not been detected in camera field of view 6036), camera field of view 6036 is blurred (e.g., to emphasize the behavior of the prompt and/or the calibration object, as described below).
Figures 12B through 12D illustrate an animated image and text that prompt the user to move the device (e.g., displayed in accordance with a determination that calibration is needed). The animated image includes representation 12004 of device 100, arrows 12006 and 12008 indicating that device 100 needs to be moved from side to side, and representation 12010 of a plane (e.g., to indicate that device 100 must be moved relative to a plane in order to detect a plane corresponding to virtual object 11002). Text prompt 12012 provides information about the movement of device 100 that is needed for calibration. In Figures 12B through 12C and Figures 12C through 12D, representation 12004 of the device and arrow 12006 are adjusted relative to representation 12010 of the plane to illustrate the movement of device 100 that is needed for calibration. From Figure 12C to Figure 12D, device 100 moves relative to physical environment 5002 (e.g., as indicated by the changed position of table 5004 in camera field of view 6036). As a result of detecting the movement of device 100, calibration user interface object 12014 (an outline of a cube) is displayed, as indicated in Figure 12E-1.
Figures 12E-1 to 12I-1 show the behavior of calibration user interface object 12014 in correspondence with movement of device 100 relative to physical environment 5002, as shown in Figures 12E-2 to 12I-2, respectively. In response to movement of device 100 (e.g., lateral movement), calibration user interface object 12014 is animated (e.g., the cubic outline rotates) (e.g., to provide the user with feedback about movement that facilitates calibration). In Figure 12E-1, calibration user interface object 12014 is displayed at a first rotation angle in the user interface that includes the field of view 6036 of the camera of device 100. In Figure 12E-2, device 100, held by the user's hand 5006, is shown at a first position relative to physical environment 5002. From Figure 12E-2 to Figure 12F-2, device 100 moves laterally (to the right) relative to physical environment 5002. As a result of the movement, the camera's field of view 6036 as displayed by device 100 is updated, and calibration user interface object 12014 has rotated (relative to its position in Figure 12E-1), as shown in Figure 12F-1. From Figure 12F-2 to Figure 12G-2, device 100 continues to move rightward relative to physical environment 5002. As a result of the movement, the camera's field of view 6036 as displayed by device 100 is updated again, and calibration user interface object 12014 rotates further, as shown in Figure 12G-1. From Figure 12G-2 to Figure 12H-2, device 100 moves upward relative to physical environment 5002. As a result of the movement, the camera's field of view 6036 as displayed by device 100 is updated. As shown in Figures 12G-1 to 12H-1, calibration user interface object 12014 does not rotate in response to the upward movement of the device shown in Figures 12G-2 to 12H-2 (e.g., to provide the user with an indication that vertical movement of the device will not advance calibration). From Figure 12H-2 to Figure 12I-2, device 100 moves further rightward relative to physical environment 5002. As a result of the movement, the camera's field of view 6036 as displayed by device 100 is updated again, and calibration user interface object 12014 rotates, as shown in Figure 12I-1.
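The calibration-object behavior described above (rotation in response to lateral device movement, no rotation in response to vertical movement) can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the function name and the degrees-per-point scale factor are assumptions.

```python
# Sketch of the calibration behavior of object 12014: the cube outline
# rotates in proportion to lateral device movement and deliberately
# ignores vertical movement, signaling which movements advance calibration.

def update_calibration_angle(angle, dx, dy, degrees_per_point=0.5):
    """Return the calibration object's new rotation angle.

    angle: current rotation angle, in degrees.
    dx: lateral (horizontal) device movement since the last frame.
    dy: vertical device movement; it does not affect the angle.
    """
    return angle + dx * degrees_per_point  # dy is intentionally unused

angle = 0.0
angle = update_calibration_angle(angle, dx=40, dy=0)   # lateral move: rotates
angle = update_calibration_angle(angle, dx=0, dy=30)   # vertical move: no change
```

With the assumed scale factor, the lateral move rotates the object by 20 degrees and the vertical move leaves the angle unchanged, matching the contrast between Figures 12E-1/12F-1 and 12G-1/12H-1.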
In Figure 12J, the movement of device 100 (e.g., as shown in Figures 12E to 12I) has satisfied the calibration requirements (e.g., a plane corresponding to virtual object 11002 has been detected in the camera's field of view 6036). Virtual object 11002 is placed on the detected plane, and the camera's field of view 6036 ceases to be blurred. The tactile output generator outputs a tactile output (as indicated at 12016) indicating that a plane (e.g., floor surface 5038) has been detected in the camera's field of view 6036. Floor surface 5038 is highlighted to provide an indication of the detected plane.
Once virtual object 11002 has been placed at a position relative to the plane detected in the camera's field of view 6036, virtual object 11002 is maintained at a fixed position relative to the physical environment 5002 captured by the one or more cameras. As device 100 moves relative to physical environment 5002 (as shown in Figures 12K-2 to 12L-2), virtual object 11002 remains at a fixed position relative to physical environment 5002 (as shown in Figures 12K-1 to 12L-1).
Figures 13A to 13M show example user interfaces for constraining rotation of a virtual object about an axis. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For ease of explanation, some of the embodiments are discussed with reference to operations performed on a device with touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., the centroid of a respective contact or a point associated with a respective contact), or the centroid of two or more contacts detected on touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451, in response to detecting contacts on touch-sensitive surface 451 while the user interfaces shown in the figures are displayed on display 450 along with a focus selector.
In Figure 13A, virtual object 11002 is displayed in staging user interface 6010. The x-axis, y-axis, and z-axis are shown relative to virtual object 11002.
Figures 13B to 13C show an input that rotates virtual object 11002 about the y-axis indicated in Figure 13A. In Figure 13B, an input by contact 13002 is detected at a position corresponding to virtual object 11002. The input moves a distance d1 along the path indicated by arrow 13004. As the input moves along the path, virtual object 11002 rotates about the y-axis (e.g., by 35 degrees), reaching the position indicated in Figure 13C. In staging user interface 6010, a shadow 13006 corresponding to virtual object 11002 is displayed. From Figure 13B to Figure 13C, shadow 13006 changes in accordance with the changed position of virtual object 11002.
After contact 13002 lifts off from touch screen 112, virtual object 11002 continues to rotate, as shown in Figures 13C to 13D (e.g., to give the impression that virtual object 11002 behaves like a physical object in accordance with the "momentum" imparted by the movement of contact 13002).
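The "momentum" effect described above can be sketched as a per-frame rotation increment that decays after liftoff. This is an illustrative Python sketch, not code from the patent; the decay factor and stop threshold are assumptions.

```python
# Sketch of momentum-style rotation after liftoff: the object keeps
# rotating with a velocity inherited from the contact's movement, and
# the velocity decays each frame until it falls below a stop threshold.

def momentum_rotation(initial_velocity, decay=0.9, stop_threshold=0.5):
    """Yield per-frame rotation increments (degrees) after liftoff."""
    v = initial_velocity
    while abs(v) >= stop_threshold:
        yield v
        v *= decay  # exponential decay gives the physical "coasting" feel

increments = list(momentum_rotation(10.0))
total_extra_rotation = sum(increments)
```

Each frame's increment is strictly smaller than the last, so the object coasts to a smooth stop rather than halting abruptly when the contact lifts off.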
Figures 13E to 13F show an input that rotates virtual object 11002 about the x-axis indicated in Figure 13A. In Figure 13E, an input by contact 13008 is detected at a position corresponding to virtual object 11002. The input moves a distance d1 along the path indicated by arrow 13010. As the input moves along the path, virtual object 11002 rotates about the x-axis (e.g., by 5 degrees), reaching the position indicated in Figure 13F. Although the distance d1 that contact 13008 moves in Figures 13E to 13F is the same as the distance that contact 13002 moves in Figures 13B to 13C, the angle by which virtual object 11002 rotates about the x-axis in Figures 13E to 13F is smaller than the angle by which virtual object 11002 rotates about the y-axis in Figures 13B to 13C.
Figures 13F to 13G show a further input that rotates virtual object 11002 about the x-axis indicated in Figure 13A. In Figure 13F, contact 13008 continues its movement, moving a distance d2 (greater than distance d1) along the path indicated by arrow 13012. As the input moves along the path, virtual object 11002 rotates about the x-axis (by 25 degrees), reaching the position indicated in Figure 13G. As shown in Figures 13E to 13G, contact 13008 moves a distance d1+d2 to rotate virtual object 11002 by 30 degrees about the x-axis, whereas in Figures 13B to 13C, contact 13002 moves a distance d1 to rotate virtual object 11002 by 35 degrees about the y-axis.
After contact 13008 lifts off from touch screen 112, virtual object 11002 rotates in the direction opposite to the rotation caused by the movement of contact 13008, as shown in Figures 13G to 13H (e.g., to indicate that the movement of contact 13008 rotated virtual object 11002 past a rotation limit).
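The constrained-rotation behavior above (temporary overshoot past a limit while the contact is down, then a snap back within the limit on liftoff) can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the limit value of 80 degrees and the function names are assumptions.

```python
# Sketch of rotation-limit behavior: while the contact is down, the angle
# may overshoot the limit; on liftoff, the angle settles back within it.

def apply_rotation_while_touching(angle, delta):
    """Apply a rotation delta, allowing temporary overshoot past the limit."""
    return angle + delta

def settle_after_liftoff(angle, limit=80.0):
    """On liftoff, rotate back so the angle sits within [-limit, limit]."""
    return max(-limit, min(limit, angle))

angle = apply_rotation_while_touching(70.0, 25.0)  # overshoots to 95.0
angle = settle_after_liftoff(angle)                # settles back to 80.0
```

Allowing the overshoot and then reversing it gives the user visible feedback that the gesture exceeded the rotation limit, rather than silently clamping mid-gesture.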
In Figures 13G to 13I, shadow 13006 is not displayed (e.g., because virtual object 11002 would not cast a shadow when the object is viewed from below).
In Figure 13I, an input (e.g., a double-tap input) is detected for returning virtual object 11002 to the perspective at which it was initially displayed (e.g., as indicated in Figure 13A). The input occurs at a position corresponding to virtual object 11002, as indicated by contact 13014. In response to the input, virtual object 11002 rotates about the y-axis (reversing the rotation that occurred in Figures 13B to 13D) and rotates about the x-axis (reversing the rotation that occurred in Figures 13E to 13H). In Figure 13J, the input by contact 13014 has returned virtual object 11002 to the perspective at which it was initially displayed.
In some embodiments, while staging user interface 6010 is displayed, an input is received for adjusting the size of virtual object 11002. For example, the input that adjusts the size of virtual object 11002 is a depinch gesture that increases the size of virtual object 11002 (e.g., as described with reference to Figures 6N to 6O) or a pinch gesture that decreases the size of virtual object 11002.
In Figure 13J, an input is received for replacing display of staging user interface 6010 with display of the user interface that includes the camera's field of view 6036. The input by contact 13016 is detected at a position corresponding to toggle control 6018. In response to the input, the user interface that includes the camera's field of view 6036 replaces display of staging user interface 6010, as shown in Figure 13K.
In Figure 13K, virtual object 11002 is displayed in the user interface that includes the camera's field of view 6036. A tactile output (as indicated at 13018) indicates that a plane corresponding to virtual object 11002 has been detected in the camera's field of view 6036. The rotation angle of virtual object 11002 in the user interface that includes the camera's field of view 6036 corresponds to the rotation angle of virtual object 11002 in staging user interface 6010.
While the user interface that includes the camera's field of view 6036 is displayed, an input that includes lateral movement causes lateral movement of virtual object 11002 in the user interface that includes the camera's field of view 6036, as shown in Figures 13L to 13M. In Figure 13L, contact 13020 is detected at a position corresponding to virtual object 11002, and the contact moves along the path indicated by arrow 13022. As the contact moves, virtual object 11002 moves from a first position (as shown in Figure 13L) to a second position (as shown in Figure 13M) along a path corresponding to the movement of contact 13020.
In some embodiments, while the user interface that includes the camera's field of view 6036 is displayed, a provided input can move virtual object 11002 from a first plane (e.g., floor plane 5038) to a second plane (e.g., table surface plane 5046), as described with reference to Figures 5AJ to 5AM.
Figures 14A to 14Z show example user interfaces for increasing a second threshold movement magnitude required for a second object manipulation behavior, in accordance with a determination that a first object manipulation behavior has satisfied a first threshold movement magnitude. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 14AA-14AD, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For ease of explanation, some of the embodiments are discussed with reference to operations performed on a device with touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., the centroid of a respective contact or a point associated with a respective contact), or the centroid of two or more contacts detected on touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451, in response to detecting contacts on touch-sensitive surface 451 while the user interfaces shown in the figures are displayed on display 450 along with a focus selector.
In Figure 14A, virtual object 11002 is displayed in the user interface that includes the camera's field of view 6036. As further described with reference to Figures 14B to 14Z, translation movement meter 14002, scale movement meter 14004, and rotation movement meter 14006 are used to indicate the respective movement magnitudes corresponding to the object manipulation behaviors (e.g., translation, scaling, and/or rotation operations). Translation movement meter 14002 indicates the magnitude of lateral (e.g., leftward or rightward) movement of a group of contacts on touch-screen display 112. Scale movement meter 14004 indicates the magnitude by which the distance between the contacts of a group of contacts on touch-screen display 112 increases or decreases (e.g., the magnitude of a pinch or depinch gesture). Rotation movement meter 14006 indicates the magnitude of rotational movement of a group of contacts on touch-screen display 112.
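The three movement magnitudes tracked by the meters can be computed from the start and end positions of a two-contact gesture. This is an illustrative Python sketch, not code from the patent; the function name and the specific geometric formulas are assumptions about one reasonable way to measure them.

```python
import math

# Sketch of the three gesture magnitudes the meters report for a
# two-contact gesture: translation (midpoint movement), scale (change
# in contact separation), and rotation (twist of the contact pair).

def gesture_magnitudes(p1_start, p2_start, p1_end, p2_end):
    """Return (translation, scale, rotation) magnitudes of a two-contact gesture."""
    def mid(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Translation: how far the midpoint of the two contacts moved.
    translation = dist(mid(p1_start, p2_start), mid(p1_end, p2_end))
    # Scale: how much the distance between the contacts changed (pinch/depinch).
    scale = abs(dist(p1_end, p2_end) - dist(p1_start, p2_start))
    # Rotation: how much the line between the contacts twisted, in degrees.
    a0 = math.atan2(p2_start[1] - p1_start[1], p2_start[0] - p1_start[0])
    a1 = math.atan2(p2_end[1] - p1_end[1], p2_end[0] - p1_end[0])
    rotation = abs(math.degrees(a1 - a0))
    return translation, scale, rotation
```

For example, two contacts that both slide 10 points rightward produce a translation magnitude of 10 with zero scale and rotation, while one contact orbiting the other produces a pure rotation magnitude.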
Figures 14B to 14E show an input for rotating virtual object 11002 in the user interface that includes the field of view 6036 of one or more cameras. The input for rotating virtual object 11002 includes a gesture in which a first contact 14008 moves rotationally in the clockwise direction along the path indicated by arrow 14010 and a second contact 14012 moves rotationally in the clockwise direction along the path indicated by arrow 14014. In Figure 14B, contacts 14008 and 14012 with touch screen 112 are detected. In Figure 14C, contact 14008 moves along the path indicated by arrow 14010, and contact 14012 moves along the path indicated by arrow 14014. Because the magnitude of the rotational movement of contacts 14008 and 14012 in Figure 14C has not yet reached threshold RT, virtual object 11002 does not rotate in response to the input. In Figure 14D, the magnitude of the rotational movement of contacts 14008 and 14012 has increased above threshold RT, and virtual object 11002 rotates in response to the input (relative to the position of virtual object 11002 shown in Figure 14B). When the magnitude of the rotational movement increases above threshold RT, the movement magnitude required to scale virtual object 11002 increases (e.g., scale threshold ST increases from ST to ST', as indicated by scale movement meter 14004), and the movement magnitude required to translate virtual object 11002 increases (e.g., translation threshold TT increases from TT to TT', as indicated by translation movement meter 14002). In Figure 14E, contacts 14008 and 14012 continue to move along the rotational paths indicated by arrows 14010 and 14014, respectively, and virtual object 11002 continues to rotate in response to the input. In Figure 14F, contacts 14008 and 14012 have lifted off from touch screen 112.
Figures 14G to 14I show an input for scaling virtual object 11002 (e.g., increasing its size) in the user interface that includes the field of view 6036 of one or more cameras. The input for increasing the size of virtual object 11002 includes a gesture in which a first contact 14016 moves along the path indicated by arrow 14018 and a second contact 14020 moves along the path indicated by arrow 14022 (e.g., such that the distance between contact 14016 and contact 14020 increases). In Figure 14G, contacts 14016 and 14020 with touch screen 112 are detected. In Figure 14H, contact 14016 moves along the path indicated by arrow 14018, and contact 14020 moves along the path indicated by arrow 14022. Because the magnitude of the movement of contact 14016 away from contact 14020 in Figure 14H has not yet reached threshold ST, the size of virtual object 11002 is not adjusted in response to the input. In Figure 14I, the magnitude of the scaling movement of contacts 14016 and 14020 has increased above threshold ST, and the size of virtual object 11002 has increased in response to the input (relative to the size of virtual object 11002 shown in Figure 14H). When the magnitude of the scaling movement increases above threshold ST, the movement magnitude required to rotate virtual object 11002 increases (e.g., rotation threshold RT increases from RT to RT', as indicated by rotation movement meter 14006), and the movement magnitude required to translate virtual object 11002 increases (e.g., translation threshold TT increases from TT to TT', as indicated by translation movement meter 14002). In Figure 14J, contacts 14016 and 14020 have lifted off from touch screen 112.
Figures 14K to 14M show an input for translating virtual object 11002 (e.g., moving virtual object 11002 leftward) in the user interface that includes the field of view 6036 of one or more cameras. The input for moving virtual object 11002 includes a gesture in which a first contact 14024 moves along the path indicated by arrow 14026 and a second contact 14028 moves along the path indicated by arrow 14030 (e.g., such that contacts 14024 and 14028 move leftward). In Figure 14K, contacts 14024 and 14028 with touch screen 112 are detected. In Figure 14L, contact 14024 moves along the path indicated by arrow 14026, and contact 14028 moves along the path indicated by arrow 14030. Because the magnitude of the leftward movement of contacts 14024 and 14028 in Figure 14L has not yet reached threshold TT, virtual object 11002 does not move in response to the input. In Figure 14M, the magnitude of the leftward movement of contacts 14024 and 14028 has increased above threshold TT, and virtual object 11002 moves in the direction of movement of contacts 14024 and 14028. When the magnitude of the translation movement increases above threshold TT, the movement magnitude required to scale virtual object 11002 increases (e.g., scale threshold ST increases from ST to ST', as indicated by scale movement meter 14004), and the movement magnitude required to rotate virtual object 11002 increases (e.g., rotation threshold RT increases from RT to RT', as indicated by rotation movement meter 14006). In Figure 14N, contacts 14024 and 14028 have lifted off from touch screen 112.
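The pattern common to the three examples above (once one manipulation's magnitude crosses its threshold, the thresholds of the other two manipulations are raised) can be sketched as a small state machine. This is an illustrative Python sketch, not code from the patent; the class name, threshold values, and raise factor are assumptions.

```python
# Sketch of the threshold-raising behavior of Figures 14B-14N: when one
# manipulation (rotate, scale, or translate) is engaged by exceeding its
# threshold, the remaining manipulations' thresholds are increased,
# making accidental mixed manipulations harder to trigger.

class ManipulationThresholds:
    def __init__(self, rt=10.0, st=10.0, tt=10.0, raise_factor=2.0):
        self.thresholds = {"rotate": rt, "scale": st, "translate": tt}
        self.raise_factor = raise_factor
        self.engaged = set()

    def feed(self, kind, magnitude):
        """Return True if the manipulation of the given kind is performed."""
        if kind in self.engaged:
            return True  # already engaged: the object responds freely
        if magnitude > self.thresholds[kind]:
            self.engaged.add(kind)
            # Raise the thresholds of the not-yet-engaged manipulations
            # (e.g., ST -> ST' and TT -> TT' once rotation is engaged).
            for other in self.thresholds:
                if other not in self.engaged:
                    self.thresholds[other] *= self.raise_factor
            return True
        return False
```

With the assumed values, a rotational magnitude of 12 engages rotation and doubles the scale and translation thresholds, so a subsequent scaling magnitude of 15 (above the original ST but below ST') does not scale the object, matching Figures 14Q to 14R.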
Figures 14O to 14Z show an input that includes gestures for translating virtual object 11002 (e.g., moving virtual object 11002 rightward), scaling virtual object 11002 (e.g., increasing the size of virtual object 11002), and rotating virtual object 11002. In Figure 14O, contacts 14032 and 14036 with touch screen 112 are detected. In Figures 14O to 14P, contact 14032 moves along the path indicated by arrow 14034, and contact 14036 moves along the path indicated by arrow 14038. The magnitude of the rightward movement of contacts 14032 and 14036 has increased above threshold TT, and virtual object 11002 moves in the direction of movement of contacts 14032 and 14036. Because the movement of contacts 14032 and 14036 has satisfied threshold TT, the movement magnitude required to scale virtual object 11002 increases to ST', and the movement magnitude required to rotate virtual object 11002 increases to RT'. After threshold TT has been satisfied (as indicated by high-water mark 14043 shown on translation movement meter 14002 in Figure 14Q), any lateral movement of contacts 14032 and 14036 causes lateral movement of virtual object 11002.
In Figures 14Q to 14R, contact 14032 moves along the path indicated by arrow 14040, and contact 14036 moves along the path indicated by arrow 14042. In Figure 14R, the magnitude of the movement of contact 14032 away from contact 14036 has exceeded the original scale threshold ST, but has not yet reached the increased scale threshold ST'. While the increased scale threshold ST' is in effect, scaling does not occur until the magnitude of the movement of contact 14032 away from contact 14036 increases above the increased scale threshold ST'; therefore, from Figure 14Q to Figure 14R, the size of virtual object 11002 does not change. In Figures 14R to 14S, as contact 14032 moves along the path indicated by arrow 14044 and contact 14036 moves along the path indicated by arrow 14046, the distance between contact 14032 and contact 14036 continues to increase. In Figure 14S, the magnitude of the movement of contact 14032 away from contact 14036 has exceeded the increased scale threshold ST', and the size of virtual object 11002 has increased. After threshold ST' has been satisfied (as indicated by high-water mark 14047 shown on scale movement meter 14004 in Figure 14T), any scaling movement of contacts 14032 and 14036 causes scaling of virtual object 11002.
In Figures 14T to 14U, contact 14032 moves along the path indicated by arrow 14048, and contact 14036 moves along the path indicated by arrow 14050. Because threshold TT has been satisfied (as indicated by high-water mark 14043 shown on translation movement meter 14002), virtual object 11002 moves freely in the direction of the lateral movement of contacts 14032 and 14036.
In Figures 14V to 14W, contact 14032 moves along the path indicated by arrow 14052, and contact 14036 moves along the path indicated by arrow 14054. The movement of contacts 14032 and 14036 includes translation movement (contacts 14032 and 14036 move leftward) and scaling movement (movement that decreases the distance between contact 14032 and contact 14036 (e.g., a pinch gesture)). Because translation threshold TT has been satisfied (as indicated by high-water mark 14043 shown on translation movement meter 14002), virtual object 11002 moves freely in the direction of the lateral movement of contacts 14032 and 14036, and because the increased scale threshold ST' has been satisfied (as indicated by high-water mark 14047 shown on scale movement meter 14004), virtual object 11002 is scaled freely in response to movement of contact 14032 toward contact 14036. From Figure 14V to Figure 14W, the size of virtual object 11002 has decreased, and virtual object 11002 has moved leftward in response to the movement of contact 14032 along the path indicated by arrow 14052 and the movement of contact 14036 along the path indicated by arrow 14054.
In Figures 14X to 14Z, contact 14032 moves rotationally in the counterclockwise direction along the path indicated by arrow 14056, and contact 14036 moves rotationally in the counterclockwise direction along the path indicated by arrow 14058. In Figure 14Y, the magnitude of the rotational movement of contacts 14032 and 14036 has exceeded the original rotation threshold RT, but has not yet reached the increased rotation threshold RT'. While the increased rotation threshold RT' is in effect, rotation of virtual object 11002 does not occur until the magnitude of the rotational movement of contacts 14032 and 14036 increases above the increased rotation threshold RT'; therefore, from Figure 14X to Figure 14Y, virtual object 11002 does not rotate. In Figures 14Y to 14Z, as contact 14032 moves along the path indicated by arrow 14060 and contact 14036 moves along the path indicated by arrow 14062, contacts 14032 and 14036 continue their rotational movement in the counterclockwise direction. In Figure 14Z, the magnitude of the rotational movement of contacts 14032 and 14036 has exceeded the increased rotation threshold RT', and virtual object 11002 rotates in response to the input.
Figures 14AA to 14AD show a flowchart of operations for increasing a second threshold movement magnitude required for a second object manipulation behavior, in accordance with a determination that a first object manipulation behavior has satisfied a first threshold movement magnitude. The operations described with reference to Figures 14AA to 14AD are performed at an electronic device (e.g., device 300 of Figure 3, or portable multifunction device 100 of Figure 1A) with a display generation component (e.g., a display, a projector, a heads-up display, etc.) and a touch-sensitive surface (e.g., a touch-sensitive surface, or a touch-screen display that serves as both the display generation component and the touch-sensitive surface). Some operations described with reference to Figures 14AA to 14AD are, optionally, combined, and/or the order of some operations is, optionally, changed.
At operation 14066, a first part of a user input that includes movement of one or more contacts is detected. At operation 14068, it is determined whether the movement of the one or more contacts (e.g., at a position corresponding to virtual object 11002) increases above an object rotation threshold (e.g., rotation threshold RT indicated by rotation movement meter 14006). In accordance with a determination that the movement of the one or more contacts increases above the object rotation threshold (e.g., as described with reference to Figures 14B to 14D), the flow proceeds to operation 14070. In accordance with a determination that the movement of the one or more contacts does not increase above the object rotation threshold, the flow proceeds to operation 14074.
At operation 14070, the object (e.g., virtual object 11002) is rotated based on the first part of the user input (e.g., as described with reference to Figures 14B to 14D). At operation 14072, the object translation threshold is increased (e.g., from TT to TT', as described with reference to Figure 14D), and the object scale threshold is increased (e.g., from ST to ST', as described with reference to Figure 14D). The flow proceeds from operation 14072 to operation 14086 of Figure 14AB, as indicated at A.
At operation 14074, it is determined whether the movement of the one or more contacts (e.g., at a position corresponding to virtual object 11002) increases above an object translation threshold (e.g., translation threshold TT indicated by translation movement meter 14002). In accordance with a determination that the movement of the one or more contacts increases above the object translation threshold (e.g., as described with reference to Figures 14K to 14M), the flow proceeds to operation 14076. In accordance with a determination that the movement of the one or more contacts does not increase above the object translation threshold, the flow proceeds to operation 14080.
At operation 14076, the object (e.g., virtual object 11002) is translated based on the first part of the user input (e.g., as described with reference to Figures 14K to 14M). At operation 14078, the object rotation threshold is increased (e.g., from RT to RT', as described with reference to Figure 14M), and the object scale threshold is increased (e.g., from ST to ST', as described with reference to Figure 14M). The flow proceeds from operation 14078 to operation 14100 of Figure 14AC, as indicated at B.
At operation 14080, it is determined whether the movement of the one or more contacts (e.g., at a position corresponding to virtual object 11002) increases above an object scale threshold (e.g., scale threshold ST indicated by scale movement meter 14004). In accordance with a determination that the movement of the one or more contacts increases above the object scale threshold (e.g., as described with reference to Figures 14G to 14I), the flow proceeds to operation 14082. In accordance with a determination that the movement of the one or more contacts does not increase above the object scale threshold, the flow proceeds to operation 14085.
At operation 14082, the object (e.g., virtual object 11002) is scaled based on the first part of the user input (e.g., as described with reference to Figures 14G to 14I). At operation 14084, the object rotation threshold is increased (e.g., from RT to RT', as described with reference to Figure 14I), and the object translation threshold is increased (e.g., from TT to TT', as described with reference to Figure 14I). The flow proceeds from operation 14084 to operation 14114 of Figure 14AD, as indicated at C.
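The branching for the first part of the input (operations 14066 to 14084) can be sketched as a single dispatch function. This is an illustrative Python sketch, not code from the patent; the function name, threshold values, and raise factor are assumptions, and the operation numbers appear only as comments mapping the branches back to the flowchart.

```python
# Sketch of the first-part dispatch of Figures 14AA: test the rotation,
# translation, and scale thresholds in order; whichever is exceeded first
# selects the manipulation and raises the other two thresholds.

def handle_first_part(rotation_mag, translation_mag, scale_mag,
                      rt=10.0, tt=10.0, st=10.0, raise_factor=2.0):
    """Return (action, raised_thresholds) for the first part of a user input."""
    if rotation_mag > rt:            # operation 14068 -> 14070/14072
        return "rotate", {"tt": tt * raise_factor, "st": st * raise_factor}
    if translation_mag > tt:         # operation 14074 -> 14076/14078
        return "translate", {"rt": rt * raise_factor, "st": st * raise_factor}
    if scale_mag > st:               # operation 14080 -> 14082/14084
        return "scale", {"rt": rt * raise_factor, "tt": tt * raise_factor}
    return None, {}                  # operation 14085: await additional input
```

With the assumed values, a rotational magnitude above RT selects rotation and reports the increased TT' and ST', while input below all three thresholds selects nothing, corresponding to detecting an additional part of the input at operation 14085.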
At operation 14085, an additional part of the user input that includes movement of the one or more contacts is detected. The flow proceeds from operation 14085 to operation 14066.
In Figure 14 AB, at operation 14086, user's input of movements of the detection including one or more contacts is in addition
Part.Process advances to operation 14088 from operation 14086.
At operation 14088, determine whether the movement of one or more contacts is moving in rotation.According to one or more determining
The movement of a contact is moving in rotation, and process advances to operation 14090.It is not rotation according to the movement for determining one or more contacts
Transfer is dynamic, and process advances to operation 14092.
At operation 14090, the other part based on user's input is come target rotation (for example, virtual objects 11002) (example
Such as, as described in 4D to Figure 14 E referring to Fig.1).Since threshold rotating value has previously been met, object is defeated according to other rotation
Enter to rotate freely.
At operation 14092, a determination is made as to whether the movement of the one or more contacts increases above the increased object translation threshold (e.g., the translation threshold TT' indicated by translation movement meter 14002 in Figure 14D). In accordance with a determination that the movement of the one or more contacts increases above the increased object translation threshold, the process advances to operation 14094. In accordance with a determination that the movement of the one or more contacts does not increase above the increased object translation threshold, the process advances to operation 14096.
At operation 14094, the object (e.g., virtual object 11002) is translated based on the additional portion of the user input.
At operation 14096, a determination is made as to whether the movement of the one or more contacts increases above the increased object scaling threshold (e.g., the scaling threshold ST' indicated by scaling movement meter 14004 in Figure 14D). In accordance with a determination that the movement of the one or more contacts increases above the increased object scaling threshold, the process advances to operation 14098. In accordance with a determination that the movement of the one or more contacts does not increase above the increased object scaling threshold, the process returns to operation 14086.
At operation 14098, the object (e.g., virtual object 11002) is scaled based on the additional portion of the user input.
In Figure 14AC, at operation 14100, an additional portion of the user input that includes movement of the one or more contacts is detected. The process advances from operation 14100 to operation 14102.

At operation 14102, a determination is made as to whether the movement of the one or more contacts is a translational movement. In accordance with a determination that the movement of the one or more contacts is a translational movement, the process advances to operation 14104. In accordance with a determination that the movement of the one or more contacts is not a translational movement, the process advances to operation 14106.
At operation 14104, the object (e.g., virtual object 11002) is translated based on the additional portion of the user input. Because the translation threshold has previously been met, the object translates freely in accordance with the additional translation input.
At operation 14106, a determination is made as to whether the movement of the one or more contacts increases above the increased object rotation threshold (e.g., the rotation threshold RT' indicated by rotational movement meter 14006 in Figure 14M). In accordance with a determination that the movement of the one or more contacts increases above the increased object rotation threshold, the process advances to operation 14108. In accordance with a determination that the movement of the one or more contacts does not increase above the increased object rotation threshold, the process advances to operation 14110.
At operation 14108, the object (e.g., virtual object 11002) is rotated based on the additional portion of the user input.
At operation 14110, a determination is made as to whether the movement of the one or more contacts increases above the increased object scaling threshold (e.g., the scaling threshold ST' indicated by scaling movement meter 14004 in Figure 14M). In accordance with a determination that the movement of the one or more contacts increases above the increased object scaling threshold, the process advances to operation 14112. In accordance with a determination that the movement of the one or more contacts does not increase above the increased object scaling threshold, the process returns to operation 14100.
At operation 14112, the object (e.g., virtual object 11002) is scaled based on the additional portion of the user input.
In Figure 14AD, at operation 14114, an additional portion of the user input that includes movement of the one or more contacts is detected. The process advances from operation 14114 to operation 14116.

At operation 14116, a determination is made as to whether the movement of the one or more contacts is a scaling movement. In accordance with a determination that the movement of the one or more contacts is a scaling movement, the process advances to operation 14118. In accordance with a determination that the movement of the one or more contacts is not a scaling movement, the process advances to operation 14120.
At operation 14118, the object (e.g., virtual object 11002) is scaled based on the additional portion of the user input. Because the scaling threshold has previously been met, the object scales freely in accordance with the additional scaling input.
At operation 14120, a determination is made as to whether the movement of the one or more contacts increases above the increased object rotation threshold (e.g., the rotation threshold RT' indicated by rotational movement meter 14006 in Figure 14I). In accordance with a determination that the movement of the one or more contacts increases above the increased object rotation threshold, the process advances to operation 14122. In accordance with a determination that the movement of the one or more contacts does not increase above the increased object rotation threshold, the process advances to operation 14124.
At operation 14122, the object (e.g., virtual object 11002) is rotated based on the additional portion of the user input.
At operation 14124, a determination is made as to whether the movement of the one or more contacts increases above the increased object translation threshold (e.g., the translation threshold TT' indicated by translation movement meter 14002 in Figure 14I). In accordance with a determination that the movement of the one or more contacts increases above the increased object translation threshold, the process advances to operation 14126. In accordance with a determination that the movement of the one or more contacts does not increase above the increased object translation threshold, the process returns to operation 14114.
Figures 15A-15AI illustrate example user interfaces for generating an audio alert in accordance with a determination that movement of the device has caused a virtual object to move outside the displayed field of view of one or more device cameras. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For ease of explanation, some of the embodiments are discussed with reference to operations performed on a device with touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451 in response to detecting the contacts on touch-sensitive surface 451 while displaying the user interfaces shown in the figures, along with a focus selector, on display 450.
Figures 15A-15AI illustrate user interfaces and device operations that occur while accessibility features are active. In some embodiments, the accessibility features include a mode in which a reduced number of inputs, or alternative inputs, may be used to access device features (e.g., so that users with a limited ability to provide the input gestures described above can more easily access device features). For example, the accessibility mode is a switch control mode in which a first input gesture (e.g., a swipe input) is used to advance through, or reverse through, the available device operations, and a selection input (e.g., a double-tap input) is used to perform the currently indicated operation. As the user interacts with the device, audio alerts are generated (e.g., to provide the user with feedback indicating that an operation has been performed, to indicate the current display state of virtual object 11002 relative to the staging user interface or relative to the field of view of the one or more device cameras, etc.).
In Figure 15A, instant messaging user interface 5008 includes a two-dimensional representation of three-dimensional virtual object 11002. A selection cursor 15001 is shown surrounding virtual object 11002 (e.g., to indicate that the currently selected operation is an operation to be performed on virtual object 11002). An input by contact 15002 (e.g., a double-tap input) is detected for performing the currently indicated operation (e.g., displaying a three-dimensional representation of virtual object 11002 in staging user interface 6010). In response to the input, display of staging user interface 6010 replaces display of instant messaging user interface 5008, as shown in Figure 15B.
In Figure 15B, virtual object 11002 is displayed in staging user interface 6010. An audio alert is generated, as indicated at 15008 (e.g., via device speaker 111), to indicate the state of the device. For example, audio alert 15008 includes an announcement, as indicated at 15010: "Chair is now shown in staging view."
In Figure 15B, selection cursor 15001 is shown surrounding share control 6020 (e.g., to indicate that the currently selected operation is a share operation). An input by contact 15004 is detected (e.g., a rightward swipe along the path indicated by arrow 15006). In response to the input, the selection advances to the next operation.
In Figure 15C, a tilt-up control 15012 is displayed (e.g., to indicate that the currently selected operation is an operation for tilting the displayed virtual object 11002 upward). An audio alert is generated, as indicated at 15014, to indicate the state of the device. For example, the audio alert includes an announcement, as indicated at 15016: "Selected: tilt up button." An input by contact 15018 is detected (e.g., a rightward swipe along the path indicated by arrow 15020). In response to the input, the selection advances to the next operation.
In Figure 15D, a tilt-down control 15022 is displayed (e.g., to indicate that the currently selected operation is an operation for tilting the displayed virtual object 11002 downward). An audio alert is generated, as indicated at 15024, to indicate the state of the device. For example, the audio alert includes an announcement, as indicated at 15026: "Selected: tilt down button." An input by contact 15028 (e.g., a double-tap input) is detected. In response to the input, the selected operation is performed (e.g., tilting virtual object 11002 downward in the staging view).
In Figure 15E, virtual object 11002 is tilted downward in the staging view. An audio alert is generated, as indicated at 15030, to indicate the state of the device. For example, the audio alert includes an announcement, as indicated at 15032: "Chair tilted down 5 degrees. Chair is now tilted toward screen 10 degrees."
In Figure 15F, an input by contact 15034 is detected (e.g., a rightward swipe along the path indicated by arrow 15036). In response to the input, the selection advances to the next operation.
In Figure 15G, a rotate clockwise control 15038 is displayed (e.g., to indicate that the currently selected operation is an operation for rotating the displayed virtual object 11002 clockwise). Audio alert 15040 includes an announcement, as indicated at 15042: "Selected: rotate clockwise button." An input by contact 15044 is detected (e.g., a rightward swipe along the path indicated by arrow 15046). In response to the input, the selection advances to the next operation.
In Figure 15H, a rotate counterclockwise control 15048 is displayed (e.g., to indicate that the currently selected operation is an operation for rotating the displayed virtual object 11002 counterclockwise). Audio alert 15050 includes an announcement, as indicated at 15052: "Selected: rotate counterclockwise button." An input by contact 15054 (e.g., a double-tap input) is detected. In response to the input, the selected operation is performed (e.g., rotating virtual object 11002 counterclockwise in the staging view, as indicated in Figure 15I).
In Figure 15I, audio alert 15056 includes an announcement, as indicated at 15058: "Chair has rotated counterclockwise 5 degrees. Chair is now rotated 5 degrees away from screen."
In Figure 15J, an input by contact 15060 is detected (e.g., a rightward swipe along the path indicated by arrow 15062). In response to the input, the selection advances to the next operation.
In Figure 15K, a zoom control 15064 is displayed (e.g., to indicate that the currently selected operation is an operation for scaling the displayed virtual object 11002). Audio alert 15066 includes an announcement, as indicated at 15068: "Scale: adjustable." The keyword "adjustable" with the control name in the announcement indicates that swipe inputs (e.g., vertical swipe inputs) can be used to operate the control. For example, as contact 15070 moves upward along the path indicated by arrow 15072, the contact provides an upward swipe input. In response to the input, a zoom operation is performed (e.g., the size of virtual object 11002 increases, as indicated from Figure 15K to Figure 15L).
In Figure 15L, audio alert 15074 includes an announcement, as indicated at 15076: "Chair is now adjusted to 150% of original size." An input for decreasing the size of virtual object 11002 (e.g., a downward swipe input) is provided by contact 15078 moving downward along the indicated path. In response to the input, a zoom operation is performed (e.g., the size of virtual object 11002 decreases, as indicated from Figure 15L to Figure 15M).
In Figure 15M, audio alert 15082 includes an announcement, as indicated at 15084: "Chair is now adjusted to 100% of original size." Because virtual object 11002 has been adjusted to the size at which it was initially displayed in staging view 6010, a tactile output occurs (as indicated at 15086) (e.g., to provide feedback indicating that virtual object 11002 has returned to its original size).
In Figure 15N, an input by contact 15088 is detected (e.g., a rightward swipe along the path indicated by arrow 15090). In response to the input, the selection advances to the next operation.
In Figure 15O, selection cursor 15001 is shown surrounding back control 6016 (e.g., to indicate that the currently selected operation is an operation for returning to the previous user interface). Audio alert 15092 includes an announcement, as indicated at 15094: "Selected: back button." An input by contact 15096 is detected (e.g., a rightward swipe along the path indicated by arrow 15098). In response to the input, the selection advances to the next operation.
In Figure 15P, selection cursor 15001 is shown surrounding toggle control 6018 (e.g., to indicate that the currently selected operation is an operation for switching between displaying staging user interface 6010 and displaying a user interface that includes field of view 6036 of the cameras). Audio alert 15098 includes an announcement, as indicated at 15100: "Selected: world view/staging view toggle." An input by contact 15102 (e.g., a double-tap input) is detected. In response to the input, display of the user interface that includes field of view 6036 of the cameras replaces display of staging user interface 6010 (as indicated in Figure 15Q).
Figures 15Q-15T illustrate a calibration sequence that occurs while field of view 6036 of the cameras is displayed (e.g., because a plane corresponding to virtual object 11002 has not yet been detected in field of view 6036). During the calibration sequence, a semi-transparent representation of virtual object 11002 is displayed, field of view 6036 of the cameras is blurred, and a prompt that includes an animated image (including a representation 12004 of device 100 and a representation 12010 of a plane) is displayed to prompt the user to move the device. In Figure 15Q, audio alert 15102 includes an announcement, as indicated at 15104: "Move device to detect a plane." From Figure 15Q to Figure 15R, device 100 moves relative to physical environment 5002 (e.g., as indicated by the changed position of table 5004 in field of view 6036 of the cameras). As a result of the movement of device 100, calibration user interface object 12014 is displayed, as indicated in Figure 15S.
In Figure 15S, audio alert 15106 includes an announcement, as indicated at 15108: "Move device to detect a plane." From Figure 15S to Figure 15T, as device 100 moves relative to physical environment 5002 (e.g., as indicated by the changed position of table 5004 in field of view 6036 of the cameras), calibration user interface object 12014 rotates. In Figure 15T, sufficient movement has occurred to detect a plane corresponding to virtual object 11002 in field of view 6036 of the cameras, and audio alert 15110 includes an announcement, as indicated at 15112: "Plane detected." From Figure 15U to Figure 15V, the translucency of virtual object 11002 decreases and virtual object 11002 is placed on the detected plane.
In Figure 15V, audio alert 15114 includes an announcement, as indicated at 15116: "Chair is now projected in the world, 100% visible, occupying 10% of the screen." A tactile output generator outputs a tactile output indicating that virtual object 11002 has been placed on the plane (as indicated at 15118). Virtual object 11002 is displayed at a fixed position relative to physical environment 5002.
From Figure 15V to Figure 15W, device 100 moves relative to physical environment 5002 (e.g., as indicated by the changed position of table 5004 in field of view 6036 of the cameras), such that virtual object 11002 is no longer visible in field of view 6036 of the cameras. Because virtual object 11002 has moved out of field of view 6036 of the cameras, audio alert 15122 includes an announcement, as indicated at 15124: "Chair is not on screen."
From Figure 15W to Figure 15X, device 100 has moved relative to physical environment 5002, such that in Figure 15X virtual object 11002 is again visible in field of view 6036 of the cameras. Because virtual object 11002 has moved into field of view 6036 of the cameras, audio alert 15118 is generated, which includes an announcement, as indicated at 15120: "Chair is now projected in the world, 100% visible, occupying 10% of the screen."
From Figure 15X to Figure 15Y, device 100 has moved relative to physical environment 5002 (e.g., such that in Figure 15Y, device 100 is "closer to" virtual object 11002 as projected in field of view 6036 of the cameras, and virtual object 11002 is only partially visible in field of view 6036 of the cameras). Because virtual object 11002 has partially moved out of field of view 6036 of the cameras, audio alert 15126 includes an announcement, as indicated at 15128: "Chair 90% visible, occupying 20% of the screen."
In some embodiments, an input provided at a position corresponding to virtual object 11002 causes an audio message that includes verbal information about virtual object 11002 to be provided. In contrast, when an input is provided at a position away from virtual object 11002 and away from the controls, the audio message that includes verbal information about virtual object 11002 is not provided. In Figure 15Z, an audio output 15130 (e.g., a "click" or "buzz") occurs, indicating that contact 15132 has been detected at a position that does not correspond to a control in the user interface or to the position of virtual object 11002. In Figure 15AA, an input by contact 15134 is detected at a position corresponding to the position of virtual object 11002. In response to the input, an audio alert 15136 corresponding to virtual object 11002 (e.g., indicating the state of virtual object 11002) is generated, which includes an announcement, as indicated at 15138: "Chair 90% visible, occupying 20% of the screen."
Figures 15AB-15AI illustrate inputs for selecting and performing operations in switch control mode while the user interface that includes field of view 6036 of the cameras is displayed.
In Figure 15AB, an input by contact 15140 is detected (e.g., a rightward swipe along the path indicated by arrow 15142). In response to the input, an operation is selected, as indicated in Figure 15AC.
In Figure 15AC, a rightward lateral-movement control 15144 is displayed (e.g., to indicate that the currently selected operation is an operation for moving virtual object 11002 to the right). Audio alert 15146 includes an announcement, as indicated at 15148: "Selected: move right button." An input by contact 15150 (e.g., a double-tap input) is detected. In response to the input, the selected operation is performed (e.g., moving virtual object 11002 to the right in field of view 6036 of the cameras, as indicated in Figure 15AD).
In Figure 15AD, the movement of virtual object 11002 is reported by audio alert 15152, which includes an announcement, as indicated at 15154: "Chair 100% visible, occupying 30% of the screen."
In Figure 15AE, an input by contact 15156 is detected (e.g., a rightward swipe along the path indicated by arrow 15158). In response to the input, the selection advances to the next operation.
In Figure 15AF, a leftward lateral-movement control 15160 is displayed (e.g., to indicate that the currently selected operation is an operation for moving virtual object 11002 to the left). Audio alert 15162 includes an announcement, as indicated at 15164: "Selected: move left." An input by contact 15166 is detected (e.g., a rightward swipe along the path indicated by arrow 15168). In response to the input, the selection advances to the next operation.
In Figure 15AG, a rotate clockwise control 15170 is displayed (e.g., to indicate that the currently selected operation is an operation for rotating virtual object 11002 clockwise). Audio alert 15172 includes an announcement, as indicated at 15174: "Selected: rotate clockwise." An input by contact 15176 is detected (e.g., a rightward swipe along the path indicated by arrow 15178). In response to the input, the selection advances to the next operation.
In Figure 15AH, a rotate counterclockwise control 15180 is displayed (e.g., to indicate that the currently selected operation is an operation for rotating virtual object 11002 counterclockwise). Audio alert 15182 includes an announcement, as indicated at 15184: "Selected: rotate counterclockwise." An input by contact 15186 (e.g., a double-tap input) is detected. In response to the input, the selected operation is performed (e.g., rotating virtual object 11002 counterclockwise, as indicated in Figure 15AI).
In Figure 15AI, audio alert 15190 includes an announcement, as indicated at 15192: "Chair has rotated counterclockwise 5 degrees. Chair is now rotated zero degrees relative to screen."
In some embodiments, a reflection is generated on at least one surface (e.g., a lower surface) of an object (e.g., virtual object 11002). The reflection is generated using image data captured by one or more cameras of device 100. For example, the reflection is based on at least a portion of the captured image data (e.g., an image, a set of images, and/or a video) corresponding to a horizontal plane (e.g., floor plane 5038) detected in field of view 6036 of the one or more cameras. In some embodiments, generating the reflection includes generating a spherical model that includes the captured image data (e.g., by mapping the captured image data onto a virtual sphere model).
In some embodiments, the reflection generated on the surface of the object includes a reflection gradient (e.g., such that portions of the surface closer to the plane have a higher reflectivity magnitude than portions of the surface farther from the plane). In some embodiments, the reflectivity magnitude of the reflection generated on the surface of the object is based on a reflectance value of a texture corresponding to the surface. For example, no reflection is generated at non-reflective portions of the surface.
In some embodiments, the reflection is adjusted over time. For example, when an input for moving and/or scaling the object is received, the reflection is adjusted (e.g., as the object moves, the reflection is adjusted on the portion of the object at a position corresponding to the reflection plane). In some embodiments, the reflection is not adjusted when the object is rotated (e.g., about the z-axis).
In some embodiments, before the object is displayed at the determined position (e.g., on a plane corresponding to the object that is detected in field of view 6036 of the cameras), no reflection is generated on the surface of the object. For example, while a semi-transparent representation of the object is displayed (e.g., as described with reference to Figures 11G-11H), and/or while calibration is being performed (e.g., as described with reference to Figures 12B-12I), no reflection is generated on the surface of the object.
In some embodiments, a reflection of the object is generated on one or more planes detected in field of view 6036 of the cameras. In some embodiments, no reflection of the object is generated in field of view 6036 of the cameras.
Figures 16A-16G are flow diagrams illustrating method 16000 of displaying a virtual object with different visual properties, in a user interface that includes a field of view of one or more cameras, in accordance with whether object placement criteria are met. Method 16000 is performed at an electronic device (e.g., device 300 of Figure 3, or portable multifunction device 100 of Figure 1A) with a display generation component (e.g., a display, a projector, a heads-up display, etc.), one or more input devices (e.g., a touch-sensitive surface, or a touch-screen display that serves both as the display generation component and the touch-sensitive surface), and one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite the display and the touch-sensitive surface). In some embodiments, the display is a touch-screen display and the touch-sensitive surface is on, or integrated with, the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 16000 are, optionally, combined and/or the order of some operations is, optionally, changed.
The device receives (16002) a request to display a virtual object (e.g., a representation of a three-dimensional model) in a first user interface region (e.g., an augmented reality viewer user interface) that includes at least a portion of a field of view of the one or more cameras (e.g., while displaying a staging user interface that includes a movable representation of the virtual object, and before displaying the field of view of the cameras). For example, the request is an input by a contact detected on the representation of the virtual object on the touch-screen display, or an input by a contact detected on an affordance (an "AR view" or "tap on world view" button) that is displayed concurrently with the representation of the virtual object, where the affordance is configured to trigger display of the AR view when invoked by a first contact. For example, the request is an input to display virtual object 11002 in field of view 6036 of the one or more cameras, as described with reference to Figure 11F.
In response to the request to display the virtual object in the first user interface region (e.g., a request to display the virtual object in a view of the physical environment surrounding the device), the device displays (16004), via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras that is included in the first user interface region (e.g., the field of view of the one or more cameras is displayed in response to the request to display the virtual object in the first user interface region), wherein the field of view of the one or more cameras is a view of the physical environment in which the one or more cameras are located. For example, as described with reference to Figure 11G, virtual object 11002 is displayed in field of view 6036 of the one or more cameras, which is a view of the physical environment 5002 in which the one or more cameras are located. Displaying the representation of the virtual object includes: in accordance with a determination that object placement criteria are not met, where the object placement criteria require that a placement location (e.g., a plane) for the virtual object be identifiable in the field of view of the one or more cameras in order for the object placement criteria to be met (e.g., the object placement criteria are not met when the device has not yet identified a position or plane in the first user interface region at which to place the virtual object relative to the field of view of the one or more cameras (e.g., plane identification is still in progress, or there is insufficient image data to identify a plane)), displaying the representation of the virtual object with a first set of visual properties (e.g., a first translucency level, a first brightness level, a first saturation level, etc.) and with a first orientation that is independent of which portion of the physical environment is displayed in the field of view of the one or more cameras (e.g., the virtual object floats in the field of view of the cameras with an orientation relative to a predefined plane, which orientation is independent of the physical environment (e.g., the orientation in which it was arranged in the staging view) and independent of changes that occur in the field of view of the cameras (e.g., changes caused by movement of the device relative to the physical environment)). For example, in Figures 11G-11H, because a placement location for virtual object 11002 has not yet been identified in field of view 6036 of the cameras, a translucent version of virtual object 11002 is displayed. As the device moves (as shown from Figure 11G to Figure 11H), the orientation of virtual object 11002 does not change. In some embodiments, the object placement criteria include a requirement that the field of view be stabilized and provide a static view of the physical environment (e.g., camera movement is below a threshold amount during at least a threshold amount of time, and/or at least a predefined amount of time has elapsed since the request was received, and/or the cameras have been calibrated to perform plane detection because the device has previously undergone sufficient movement). In accordance with a determination that the object placement criteria are met (e.g., the object placement criteria are met when the device has identified a position or plane in the first user interface at which to place the virtual object relative to the field of view of the one or more cameras), the device displays the representation of the virtual object with a second set of visual properties (e.g., a second translucency level, a second brightness level, a second saturation level, etc.) that is different from the first set of visual properties, and with a second orientation that corresponds to a plane in the physical environment detected in the field of view of the one or more cameras. For example, in Figure 11I, because a placement location for virtual object 11002 has been identified in field of view 6036 of the cameras (e.g., a plane corresponding to floor surface 5038 in physical environment 5002), a non-translucent version of virtual object 11002 is displayed. The orientation of virtual object 11002 (e.g., its position on touch-screen display 112) changes from the first orientation shown in Figure 11H to the second orientation shown in Figure 11I. As the device moves (as shown from Figure 11I to Figure 11J), the orientation of virtual object 11002 changes (because virtual object 11002 is now displayed with a fixed orientation relative to physical environment 5002). Displaying the virtual object with the first set of visual properties or the second set of visual properties, depending on whether the object placement criteria are met, provides visual feedback for the user (e.g., to indicate that the request to display the virtual object has been received, but that additional time and/or calibration information is needed before the virtual object can be placed in the field of view of the one or more cameras). Providing improved visual feedback for the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and avoid attempting to provide inputs for manipulating the virtual object before the object is placed with the second orientation corresponding to the plane), which additionally reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while the representation of the virtual object is displayed with the first set of visual properties and the first orientation, the device detects (16006) that object-placement criteria are met (e.g., a plane for placing the virtual object is identified while the virtual object is displayed in a translucent state hovering over a view of the physical environment surrounding the device). Detecting that the object-placement criteria are met while the virtual object is displayed with the first set of visual properties (e.g., in a translucent state), without requiring further user input to initiate detection of the object-placement criteria, reduces the number of inputs needed for object placement. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to detecting that the object-placement criteria are met, the device displays (16008), via the display generation component, an animated transition that shows the representation of the virtual object moving (e.g., rotating, scaling, translating, and/or a combination of the above) from the first orientation to the second orientation, and changing from the first set of visual properties to the second set of visual properties. For example, once a plane for placing the virtual object is identified in the field of view of the cameras, the virtual object is placed onto that plane while its orientation, size, and translucency (etc.) are visually adjusted. Displaying an animated transition from the first orientation to the second orientation (e.g., without requiring further user input to reorient the virtual object in the first user interface) reduces the number of inputs needed for object placement. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
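Purely as an illustration (none of this code appears in the patent; the function, parameter names, and the linear-interpolation choice are all hypothetical), the animated transition of step 16008 can be pictured as interpolating orientation and opacity between the hovering state and the placed state:

```python
# Hypothetical sketch of the animated transition of step 16008: linearly
# interpolating an orientation angle (degrees) and an opacity value from the
# translucent hovering state to the opaque placed state. Illustrative only.

def transition_frame(t, start_angle, end_angle, start_opacity=0.5, end_opacity=1.0):
    """Return (angle, opacity) at animation progress t in [0, 1]."""
    t = max(0.0, min(1.0, t))  # clamp progress to the animation's duration
    angle = start_angle + (end_angle - start_angle) * t
    opacity = start_opacity + (end_opacity - start_opacity) * t
    return angle, opacity

print(transition_frame(0.0, 0, 90))   # (0.0, 0.5)  hovering, translucent
print(transition_frame(1.0, 0, 90))   # (90.0, 1.0) placed, opaque
```

A real implementation would interpolate full 3D rotation, scale, and translation; a single angle and opacity are used here only to make the shape of the transition concrete.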
In some embodiments, detecting that the object-placement criteria are met includes one or more of the following operations (16010): detecting that a plane has been identified in the field of view of the one or more cameras; detecting, for at least a threshold amount of time, less than a threshold amount of movement between the device and the physical environment in the field of view of the cameras (e.g., resulting in a substantially static view of the physical environment); and detecting that at least a predetermined amount of time has elapsed since the request to display the virtual object in the first user interface region was received. Detecting that the object-placement criteria are met (e.g., by detecting a plane in the field of view of the one or more cameras, without requiring user input to detect the plane) reduces the number of inputs needed for object placement. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
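As an illustrative sketch only (this code is not part of the patent; all names and threshold values are hypothetical, and the three example criteria are combined conjunctively here even though the text permits any subset), the check in step 16010 might look like:

```python
# Illustrative sketch of the object-placement criteria of step 16010.
# All names and thresholds are hypothetical.

MOVEMENT_THRESHOLD = 0.01      # max device movement treated as a "static view"
STABLE_TIME_THRESHOLD = 0.5    # seconds the view must remain static
MIN_TIME_SINCE_REQUEST = 1.0   # seconds since the display request was received

def placement_criteria_met(plane_detected, recent_movement, stable_duration,
                           time_since_request):
    """Return True when the virtual object may drop onto a detected plane."""
    static_view = (recent_movement < MOVEMENT_THRESHOLD
                   and stable_duration >= STABLE_TIME_THRESHOLD)
    return (plane_detected
            and static_view
            and time_since_request >= MIN_TIME_SINCE_REQUEST)

print(placement_criteria_met(True, 0.001, 0.8, 1.5))   # True
print(placement_criteria_met(True, 0.05, 0.8, 1.5))    # False: camera still moving
print(placement_criteria_met(False, 0.001, 0.8, 1.5))  # False: no plane yet
```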
In some embodiments, while displaying the representation of the virtual object with the first set of visual properties and the first orientation over a first portion of the physical environment captured in the field of view of the one or more cameras (e.g., a first portion of the physical environment that is visible to the user through the translucent virtual object) (e.g., while the virtual object is displayed in a translucent state hovering over a view of the physical environment surrounding the device), the device detects (16012) first movement of the one or more cameras (e.g., rotation and/or translation of the device relative to the physical environment surrounding the device). For example, in Figures 11G-11H, while the semi-transparent representation of virtual object 11002 is displayed, the one or more cameras move (as indicated, e.g., by the changed position of table 5004 in the field of view 6036 of the cameras). The wall and table of the physical environment, captured in the field of view 6036 of the cameras and displayed in the user interface, are visible through translucent virtual object 11002. In response to detecting the first movement of the one or more cameras, the device displays (16014) the representation of the virtual object with the first set of visual properties and the first orientation over a second portion of the physical environment captured in the field of view of the one or more cameras, where the second portion of the physical environment is different from the first portion of the physical environment. For example, while the translucent version of the virtual object is displayed hovering over the physical environment shown in the field of view of the cameras, when the device moves relative to the physical environment, the view of the physical environment in the field of view of the cameras (e.g., behind the translucent virtual object) shifts and scales. As a result, during movement of the device, the translucent version of the virtual object comes to overlay different portions of the physical environment represented in the field of view, as a result of the panning and zooming of the view of the physical environment in the field of view of the cameras. For example, in Figure 11H, the field of view 6036 of the cameras displays a second portion of physical environment 5002 that is different from the first portion of physical environment 5002 shown in Figure 11G. The orientation of the semi-transparent representation of virtual object 11002 does not change with the movement of the one or more cameras that occurs from Figure 11G to Figure 11H. Displaying the virtual object with the first orientation in response to detecting movement of the one or more cameras provides visual feedback to the user (e.g., indicating that the virtual object has not yet been placed at a fixed position relative to the physical environment and therefore does not move as portions of the physical environment captured in the field of view of the one or more cameras change in accordance with the movement of the one or more cameras). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to avoid attempting to provide inputs for manipulating the virtual object before the object has been placed with the second orientation corresponding to the plane), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while the representation of the virtual object is displayed with the second set of visual properties and the second orientation over a third portion of the physical environment captured in the field of view of the one or more cameras (e.g., a direct view of the third portion of the physical environment (e.g., a portion of the detected plane that supports the virtual object) is blocked by the virtual object) (e.g., after the object-placement criteria have been met and the virtual object has been placed on the detected plane in the physical environment in the field of view of the cameras), the device detects (16016) second movement of the one or more cameras (e.g., rotation and/or translation of the device relative to the physical environment surrounding the device). For example, in Figures 11I-11J, while the non-translucent representation of virtual object 11002 is displayed, the one or more cameras move (as indicated, e.g., by the changed position of table 5004 in the field of view 6036 of the cameras). In response to detecting the second movement of the device, while the second orientation continues to correspond to the plane in the physical environment detected in the field of view of the one or more cameras as the physical environment captured in the field of view of the one or more cameras moves (e.g., shifts and scales) in accordance with the second movement of the device, the device maintains (16018) display of the representation of the virtual object with the second set of visual properties and the second orientation over the third portion of the physical environment captured in the field of view of the one or more cameras. For example, after the non-translucent version of the virtual object has dropped into its resting position on the plane detected in the physical environment shown in the field of view of the cameras, the position and orientation of the virtual object are fixed relative to the physical environment in the field of view of the cameras, and when the device moves relative to the physical environment, the virtual object shifts and scales with the physical environment in the field of view of the cameras (e.g., in Figures 11I-11J, when the movement of the one or more cameras occurs, the non-translucent representation of virtual object 11002 remains fixed in its orientation relative to the floor plane in physical environment 5002). Maintaining display of the virtual object in the second orientation in response to detecting movement of the one or more cameras provides visual feedback to the user (e.g., indicating that the virtual object has been placed at a fixed position relative to the physical environment and therefore moves as portions of the physical environment captured in the field of view of the one or more cameras change in accordance with the movement of the one or more cameras). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide suitable inputs directed to the virtual object placed in the second orientation corresponding to the plane), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
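The contrast between the two paragraphs above (screen-anchored before placement, world-anchored after) can be sketched, purely illustratively (nothing here comes from the patent; the 2D coordinates and all names are hypothetical), as:

```python
# Illustrative sketch of the two anchoring behaviors: before placement the
# representation is fixed relative to the display (screen-anchored); after
# placement it is fixed relative to the physical environment (world-anchored),
# so its on-screen position shifts as the cameras pan. Names are hypothetical.

def screen_position(anchored_to_world, object_screen_pos, object_world_pos,
                    camera_offset):
    """Return the on-screen position of the virtual object.

    camera_offset: how far the camera view has panned since placement.
    """
    if anchored_to_world:
        # World-anchored: the object shifts on screen opposite to camera pan.
        return (object_world_pos[0] - camera_offset[0],
                object_world_pos[1] - camera_offset[1])
    # Screen-anchored: camera movement leaves the representation in place.
    return object_screen_pos

# Translucent, unplaced object stays put while the camera pans:
print(screen_position(False, (100, 200), (100, 200), (30, 0)))  # (100, 200)
# Placed object moves with the physical environment:
print(screen_position(True, (100, 200), (100, 200), (30, 0)))   # (70, 200)
```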
In some embodiments, in accordance with a determination that the object-placement criteria are met (e.g., the object-placement criteria are met when the device has identified, in the first user interface, a position or plane for placing the virtual object relative to the field of view of the one or more cameras), the device generates (16020) a tactile output (e.g., using one or more tactile output generators of the device) in conjunction with displaying the representation of the virtual object with the second set of visual properties (e.g., a reduced translucency level, or a higher luminance level, or a higher saturation level, etc.) and with the second orientation, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras (e.g., generation of the tactile output is synchronized with completion of the transition to the non-translucent appearance of the virtual object and with completion of the rotation and translation of the virtual object to its resting landing position on the plane detected in the physical environment). For example, as shown in Figure 11I, a tactile output, as indicated at 11010, is generated in conjunction with displaying the non-translucent representation of virtual object 11002 attached to the plane (e.g., floor surface 5038) corresponding to virtual object 11002. Generating a tactile output in accordance with a determination that the object-placement criteria are met provides improved haptic feedback to the user (e.g., indicating that the operation for placing the virtual object was performed successfully). Providing improved feedback to the user enhances the operability of the device (e.g., by providing sensory information that allows the user to perceive that the object-placement criteria have been met, without cluttering the user interface with displayed information) and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while displaying the representation of the virtual object with the second set of visual properties and with the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras, the device receives (16022) an update regarding at least the position or orientation of the plane in the physical environment detected in the field of view of the one or more cameras (e.g., an updated plane position and orientation based on additional data accumulated after the initial plane-detection result was used to place the virtual object, or based on the result of a more accurate or more time-consuming calculation method (e.g., one with fewer approximations, etc.)). In response to receiving the update regarding at least the position or orientation of the plane in the physical environment detected in the field of view of the one or more cameras, the device adjusts (16024) at least the position and/or orientation of the representation of the virtual object in accordance with the update (e.g., the virtual object is gradually moved (e.g., translated and rotated) toward the updated plane). Adjusting the position and/or orientation of the virtual object in response to receiving the update regarding the plane in the physical environment (e.g., without requiring user input to place the virtual object relative to the plane) reduces the number of inputs needed to adjust the virtual object. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
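One way to picture the "gradually moved toward the updated plane" behavior of step 16024, as a sketch only (not from the patent; the single-axis model, blend factor, and all names are hypothetical), is an exponential approach toward the refined plane estimate:

```python
# Hypothetical sketch of step 16024: nudging the placed object a fraction of
# the way toward an updated plane estimate on each refinement, rather than
# jumping, so plane refinements are absorbed gradually. Illustrative only.

def adjust_toward_update(current_y, updated_plane_y, blend=0.25):
    """Move the object's height a fraction of the way toward the new plane."""
    return current_y + (updated_plane_y - current_y) * blend

y = 0.0
for _ in range(3):           # three refinement steps toward a plane at y = 1.0
    y = adjust_toward_update(y, 1.0)
print(round(y, 4))           # 0.5781 — gradually approaching the update
```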
In some embodiments, the first set of visual properties includes (16026) a first size and a first translucency level (e.g., before dropping into the AR view, the object has a fixed size relative to the display and a high fixed translucency level), and the second set of visual properties includes (16028) a second size that is different from the first size (e.g., once dropped into the AR view, the object is displayed with a simulated physical size that depends on its dimensions and on its landing position in the physical environment) and a second translucency level that is lower than the first translucency level (e.g., more opaque) (e.g., the object is no longer translucent in the AR view). For example, in Figure 11H, the semi-transparent representation of virtual object 11002 is displayed with a first size, and in Figure 11I, the non-translucent representation of virtual object 11002 is displayed with a second (smaller) size. Displaying the virtual object with the first size and first translucency level or with the second size and second translucency level, depending on whether the object-placement criteria are met, provides visual feedback to the user (e.g., indicating that a request to display the virtual object has been received, but that additional time and/or calibration information is needed in order to place the virtual object in the field of view of the one or more cameras). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide suitable inputs and to avoid attempting to provide inputs for manipulating the virtual object before the object has been placed with the second orientation corresponding to the plane), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while the virtual object is displayed in a respective user interface that does not include at least a portion of the field of view of the one or more cameras (e.g., a staging user interface) (e.g., the virtual object is oriented relative to a virtual stage with an orientation that is unrelated to the physical environment around the device), a request is received (16030) to display the virtual object in the first user interface region (e.g., the AR view) that includes at least a portion of the field of view of the one or more cameras. The first orientation corresponds to the orientation that the virtual object had while displayed in the respective user interface when the request was received. For example, as described with reference to Figure 11F, while staging user interface 6010 (which does not include the field of view of the cameras) is displayed, a request is received to display virtual object 11002 in a user interface that includes the field of view 6036 of the cameras. The orientation of virtual object 11002 in Figure 11G, in which virtual object 11002 is displayed in the user interface that includes the field of view 6036 of the cameras, corresponds to the orientation of virtual object 11002 in Figure 11F, in which virtual object 11002 is displayed in staging user interface 6010. Displaying the virtual object in the first user interface (e.g., the displayed augmented-reality view) with an orientation corresponding to the orientation of the virtual object when it was displayed in the previously displayed interface (e.g., the staging user interface) provides visual feedback to the user (e.g., indicating that object-manipulation inputs provided while the staging user interface was displayed can be used to establish the orientation of the object in the AR view). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide suitable inputs and to avoid attempting to provide inputs for manipulating the virtual object before the object has been placed with the second orientation corresponding to the plane), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first orientation corresponds to (16032) a predefined orientation (e.g., a default orientation, such as the orientation with which the virtual object is displayed when it is initially shown in a respective user interface that does not include at least a portion of the field of view of the one or more cameras). Displaying the virtual object with the first set of visual properties and with the predefined orientation in the first user interface (e.g., the displayed augmented-reality view) reduces power usage and improves the battery life of the device (e.g., by allowing a pre-generated semi-transparent representation of the virtual object to be displayed, rather than rendering a semi-transparent representation according to an orientation established in the staging user interface).
In some embodiments, while the virtual object is displayed in the first user interface region (e.g., the AR view) with the second set of visual properties and with the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras, the device detects (16034) a request (e.g., as a result of a scaling input (e.g., a pinch or depinch gesture directed to the virtual object)) to change the simulated physical size of the virtual object from a first simulated physical size to a second simulated physical size relative to the physical environment captured in the field of view of the one or more cameras (e.g., from 80% of a default size to 120% of the default size, or vice versa). For example, an input for reducing the simulated physical size of virtual object 11002 is a pinch gesture, as described with reference to Figures 11N-11P. In response to detecting the request to change the simulated physical size of the virtual object: the device gradually changes (16036) the displayed size of the representation of the virtual object in the first user interface region in accordance with a gradual change of the simulated physical size of the virtual object from the first simulated physical size to the second simulated physical size (e.g., the displayed size of the virtual object grows or shrinks, while the displayed size of the physical environment captured in the field of view of the one or more cameras remains unchanged); and, during the gradual change of the displayed size of the representation of the virtual object in the first user interface region, in accordance with a determination that the simulated physical size of the virtual object has reached a predefined simulated physical size (e.g., 100% of the default size), the device generates a tactile output to indicate that the simulated physical size of the virtual object has reached the predefined simulated physical size. For example, as described with reference to Figures 11N-11P, in response to the pinch gesture input, the displayed size of the representation of virtual object 11002 is gradually reduced. In Figure 11O, when the displayed size of the representation of virtual object 11002 reaches 100% of the size of virtual object 11002 (e.g., the size of virtual object 11002 when it was initially displayed in the user interface that includes the field of view 6036 of the one or more cameras, as indicated in Figure 11I), a tactile output, as indicated at 11024, is generated. Generating a tactile output in accordance with a determination that the simulated physical size of the virtual object has reached the predefined simulated physical size provides feedback to the user (e.g., indicating that no further input is needed to return the simulated size of the virtual object to the predefined size). Providing improved haptic feedback enhances the operability of the device (e.g., by providing sensory information that allows the user to perceive that the predefined simulated physical size of the virtual object has been reached, without cluttering the user interface with displayed information), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
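As a purely illustrative sketch of steps 16034-16036 (not from the patent; all names, the scale sequence, and the one-shot haptic rule are hypothetical), the haptic can be modeled as firing once when the simulated size crosses the predefined 100% size during a pinch:

```python
# Illustrative sketch: during a pinch-driven scale change, emit a haptic
# event exactly when the simulated physical size reaches the predefined
# 100% size. All names and values are hypothetical.

PREDEFINED_SCALE = 1.0   # 100% of the object's default simulated size

def scale_steps(scales):
    """Yield (scale, haptic) pairs; the haptic fires once when 100% is hit."""
    fired = False
    out = []
    for s in scales:
        haptic = (not fired) and abs(s - PREDEFINED_SCALE) < 1e-9
        if haptic:
            fired = True
        out.append((s, haptic))
    return out

# A pinch gesture shrinking the object from 120% through 100% to 80%:
print(scale_steps([1.2, 1.1, 1.0, 0.9, 0.8]))
# [(1.2, False), (1.1, False), (1.0, True), (0.9, False), (0.8, False)]
```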
In some embodiments, while the virtual object is displayed in the first user interface region (e.g., the AR view) with a second simulated physical size that is different from the predefined simulated physical size of the virtual object (e.g., 120% of the default size or 80% of the default size, as the result of a scaling input (e.g., a pinch or depinch gesture directed to the virtual object)), the device detects (16038) a request to return the virtual object to the predefined simulated physical size (e.g., detecting a tap or double tap on the touch screen (e.g., on the virtual object, or alternatively, outside the virtual object)). For example, after a pinch input has reduced the size of virtual object 11002 (as described with reference to Figures 11N-11P), a double-tap input is detected at a location corresponding to virtual object 11002 (as described with reference to Figure 11R). In response to detecting the request to return the virtual object to the predefined simulated physical size, the device changes (16040) the displayed size of the representation of the virtual object in the first user interface region in accordance with the change of the simulated physical size of the virtual object to the predefined simulated physical size (e.g., the displayed size of the virtual object increases or decreases, while the displayed size of the physical environment captured in the field of view of the one or more cameras remains unchanged). For example, in response to the double-tap input described with reference to Figure 11R, the size of virtual object 11002 returns to the size of virtual object 11002 as displayed in Figure 11I (the size of virtual object 11002 when it was initially displayed in the user interface that includes the field of view 6036 of the one or more cameras). In some embodiments, in accordance with a determination that the simulated physical size of the virtual object has reached the predefined simulated physical size (e.g., 100% of the default size), the device generates a tactile output to indicate that the simulated physical size of the virtual object has reached the predefined simulated physical size. Changing the displayed size of the virtual object to the predefined size in response to detecting the request to return the virtual object to the predefined simulated physical size (e.g., by providing an option to adjust the displayed size precisely to the predefined simulated physical size, rather than requiring the user to estimate when enough size-adjustment input has been provided for the virtual object to be displayed with the predefined simulated physical size) reduces the number of inputs needed to display an object with a predefined size. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
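Steps 16038-16040 can be sketched, as an illustration only (not from the patent; the function name, return shape, and haptic rule are hypothetical), as a snap-back handler:

```python
# Hypothetical sketch of steps 16038-16040: a double tap returns the virtual
# object's simulated physical size to the predefined size, with a haptic
# generated when the predefined size is reached. Illustrative only.

PREDEFINED_SIZE = 1.0

def handle_double_tap(current_size):
    """Return (new_size, haptic_generated) after a size-reset request."""
    if current_size == PREDEFINED_SIZE:
        return current_size, False        # already at 100%; nothing to do
    return PREDEFINED_SIZE, True          # snap back and signal the reset

print(handle_double_tap(0.8))   # (1.0, True)
print(handle_double_tap(1.0))   # (1.0, False)
```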
In some embodiments, the device selects the plane that is used to set the second orientation of the representation of the virtual object with the second set of visual properties in accordance with a respective position and orientation of the one or more cameras relative to the physical environment (e.g., the current position and orientation when the object-placement criteria are met), where selecting the plane includes (16042): in accordance with a determination that the object-placement criteria are met while the representation of the virtual object is displayed over a first portion of the physical environment captured in the field of view of the one or more cameras (e.g., the base of the translucent object overlaps a plane in the first portion of the physical environment) (e.g., as a result of the device being pointed in a first direction in the physical environment), selecting a first plane of a plurality of planes detected in the physical environment in the field of view of the one or more cameras (e.g., in accordance with a greater degree of proximity between the base of the object on the display and the first plane, or a greater degree of proximity in the physical world between the first plane and the first portion of the physical environment) as the plane that is used to set the second orientation of the representation of the virtual object with the second set of visual properties; and in accordance with a determination that the object-placement criteria are met while the representation of the virtual object is displayed over a second portion of the physical environment captured in the field of view of the one or more cameras (e.g., the base of the translucent object overlaps a plane in the second portion of the physical environment) (e.g., as a result of the device being pointed in a second direction in the physical environment), selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras (e.g., in accordance with a greater degree of proximity between the base of the object on the display and the second plane, or a greater degree of proximity in the physical world between the second plane and the second portion of the physical environment) as the plane that is used to set the second orientation of the representation of the virtual object with the second set of visual properties, where the first portion of the physical environment is different from the second portion of the physical environment, and the first plane is different from the second plane. Selecting the first plane or the second plane as the plane relative to which the virtual object will be placed (e.g., without requiring user input to specify which of a number of detected planes is the plane relative to which the virtual object is to be placed) reduces the number of inputs needed to select a plane. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
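The proximity-based selection of step 16042 can be sketched, purely as an illustration (not from the patent; the 2D screen-distance model and all names are hypothetical), as picking the detected plane nearest the base of the hovering representation:

```python
# Illustrative sketch of step 16042: among several detected planes, choose
# the one closest (on screen) to the base of the hovering object's
# representation when the placement criteria are met. Hypothetical names;
# simple 2D screen distances stand in for the real proximity measure.

import math

def select_plane(object_base, planes):
    """planes: dict of name -> on-screen center; return the nearest plane."""
    return min(planes, key=lambda name: math.dist(object_base, planes[name]))

planes = {"floor": (160, 420), "table": (300, 240)}
# Pointing the device at the floor puts the object's base near the floor:
print(select_plane((170, 400), planes))   # floor
# Pointing at the table makes the table plane the nearest candidate:
print(select_plane((290, 250), planes))   # table
```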
In some embodiments, while displaying the virtual object with the second set of visual properties and the second orientation in the first user interface region (e.g., the AR view), the device displays (16044) a snapshot affordance (e.g., a camera shutter button). In response to activation of the snapshot affordance, the device captures (16046) a snapshot image that includes a current view of the representation of the virtual object, with the representation of the virtual object located at the placement location in the physical environment in the field of view of the one or more cameras, with the second set of visual properties and the second orientation, the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras. Displaying a snapshot affordance for capturing a snapshot image of the current view of the object reduces the number of inputs needed to capture a snapshot image of the object. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the device displays (16048), in the first user interface region together with the representation of the virtual object with the second set of visual properties, one or more control affordances (e.g., an affordance for switching back to the staging user interface, an affordance for exiting the AR viewer, an affordance for capturing a snapshot, etc.). For example, in Figure 11J, a set of controls is displayed that includes back control 6016, toggle control 6018, and share control 6020. While displaying the one or more control affordances together with the representation of the virtual object with the second set of visual properties, the device detects (16050) that control-fading criteria are met (e.g., no user input has been detected on the touch-sensitive surface for a threshold amount of time (e.g., with or without movement of the device and updates to the field of view of the cameras)). In response to detecting that the control-fading criteria are met, the device ceases (16052) to display the one or more control affordances while continuing to display, in the first user interface region that includes the field of view of the one or more cameras, the representation of the virtual object with the second set of visual properties. For example, as described with reference to Figures 11K to 11L, when no user input is detected for the threshold amount of time, controls 6016, 6018, and 6020 gradually fade out and cease to be displayed. In some embodiments, after the control affordances have faded away, a tap input on the touch-sensitive surface or an interaction with the virtual object causes the device to again display the control affordances together with the representation of the virtual object in the first user interface region. Automatically ceasing to display the controls in response to determining that the fading criteria are met reduces the number of inputs needed to cease displaying the controls. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
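The control-fading behavior amounts to a timer that is reset by any user input and hides the controls once a threshold of inactivity passes. A minimal sketch, with an assumed three-second threshold (the patent does not specify a value):

```python
class ControlFadeTimer:
    """Hide control affordances after a period with no user input."""

    def __init__(self, threshold_seconds=3.0):
        self.threshold = threshold_seconds
        self.last_input_time = 0.0

    def register_input(self, now):
        # Any tap or interaction with the virtual object resets the
        # timer, which also brings faded controls back.
        self.last_input_time = now

    def controls_visible(self, now):
        return (now - self.last_input_time) < self.threshold

fader = ControlFadeTimer(threshold_seconds=3.0)
fader.register_input(now=10.0)
assert fader.controls_visible(now=12.0)      # within threshold: shown
assert not fader.controls_visible(now=14.0)  # fading criteria met: hidden
fader.register_input(now=14.5)               # a tap redisplays the controls
assert fader.controls_visible(now=15.0)
```

The actual device would animate the fade rather than toggling visibility, but the visibility decision reduces to this comparison.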
In some embodiments, in response to the request to display the virtual object in the first user interface region: before displaying the representation of the virtual object over at least a portion of the field of view of the one or more cameras in the first user interface region, in accordance with a determination that calibration criteria are not met (e.g., because a sufficient number of images from distinct viewing angles is not available to generate size and spatial relationship data for the physical environment captured in the field of view of the one or more cameras), the device displays (16054), to the user, a prompt to move the device relative to the physical environment (e.g., displays a visual prompt for moving the device, and optionally displays a calibration user interface object (e.g., a bouncy wireframe ball or cube that moves in the first user interface region in accordance with movement of the device) (e.g., the calibration user interface object is overlaid on a blurred image of the field of view of the one or more cameras), as described in greater detail below with reference to method 17000). Displaying, to the user, a prompt to move the device relative to the physical environment provides visual feedback to the user (e.g., to indicate that movement of the device is needed to obtain information about the field of view of the cameras for placing the virtual object). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide calibration inputs), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
It should be understood that the particular order in which the operations in Figures 16A to 16G have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 17000, 18000, 19000, and 20000) are also applicable in an analogous manner to method 16000 described above with respect to Figures 16A to 16G. For example, the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described above with reference to method 16000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 17000, 18000, 19000, and 20000). For brevity, these details are not repeated here.
Figures 17A to 17D are flow diagrams illustrating a method 17000 of displaying a calibration user interface object that is dynamically animated in accordance with movement of one or more cameras of a device. Method 17000 is performed at an electronic device (e.g., device 300 of Figure 3, or portable multifunction device 100 of Figure 1A) having a display generation component (e.g., a display, a projector, a heads-up display, etc.), one or more input devices (e.g., a touch-sensitive surface, or a touch-screen display that serves both as the display generation component and the touch-sensitive surface), one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite the display and the touch-sensitive surface), and one or more attitude sensors (e.g., accelerometers, gyroscopes, and/or magnetometers) for detecting changes in attitude (e.g., orientation (e.g., rotation, yaw, and/or tilt angle) and position relative to the surrounding physical environment) of the device that includes the one or more cameras. Some operations in method 17000 are optionally combined, and/or the order of some operations is optionally changed.
The device receives (17002) a request to display an augmented reality view of a physical environment (e.g., the physical environment around the device that includes the one or more cameras) in a first user interface region that includes a representation of the field of view of the one or more cameras (e.g., the field of view captures at least a portion of the physical environment). In some embodiments, the request is a tap input, detected on a button, for switching from a staging view of a virtual object to an augmented reality view of the virtual object. In some embodiments, the request is a selection of an augmented reality affordance displayed next to a representation of the virtual object in a two-dimensional user interface. In some embodiments, the request is activation of an augmented reality measurement application (e.g., a measurement application that facilitates measurement of the physical environment). For example, the request is a tap input, detected at toggle 6018, for displaying virtual object 11002 in the field of view 6036 of the one or more cameras, as described with reference to Figure 12A.
In response to receiving the request to display the augmented reality view of the physical environment, the device displays (17004) the representation of the field of view of the one or more cameras (e.g., while the calibration criteria are not met, the device displays a blurred version of the physical environment in the field of view of the one or more cameras). For example, the device displays a blurred representation of the field of view 6036 of the one or more cameras, as shown in Figure 12E-1. In accordance with a determination that calibration criteria for the augmented reality view of the physical environment are not met (e.g., because image data of sufficient quantity (e.g., from distinct viewing angles) is not available to generate size and spatial relationship data for the physical environment captured in the field of view of the one or more cameras, because a plane corresponding to the virtual object has not been detected in the field of view of the one or more cameras, and/or because there is insufficient information to start or continue plane detection based on the image data available from the cameras), the device displays (e.g., via the display generation component, and in the first user interface region that includes the representation (e.g., the blurred version) of the field of view of the one or more cameras) a calibration user interface object (e.g., a scanning prompt object, such as a bouncy cube or a wireframe object) that is dynamically animated in accordance with movement of the one or more cameras in the physical environment. For example, calibration user interface object 12014 is displayed in Figures 12E-1 to 12I-1. Animation of the calibration user interface object in accordance with movement of the one or more cameras is described, for example, with reference to Figures 12E-1 to 12F-1. In some embodiments, when an initial portion of an input corresponding to the request to display the representation of the augmented reality view is received, the field of view of the one or more cameras is analyzed to detect one or more planes (e.g., floor, wall, table, etc.) in the field of view of the one or more cameras. In some embodiments, the analysis occurs before the request is received (e.g., while the virtual object is displayed in the staging view). Displaying the calibration user interface object includes: while displaying the calibration user interface object, detecting, via the one or more attitude sensors, a change in attitude (e.g., a change in position and/or orientation (e.g., rotation, tilt, or yaw angle)) of the one or more cameras in the physical environment; and, in response to detecting the change in attitude of the one or more cameras in the physical environment, adjusting at least one display parameter (e.g., orientation on the display, size, rotation, or position) of the calibration user interface object (e.g., a scanning prompt object, such as a bouncy cube or a wireframe object) in accordance with the detected change in attitude of the one or more cameras in the physical environment. For example, Figures 12E-1 to 12F-1, which correspond respectively to Figures 12E-2 to 12F-2, show lateral movement of device 100 relative to physical environment 5002 and a corresponding change in the displayed field of view 6036 of the one or more cameras of the device. In Figures 12E-2 to 12F-2, calibration user interface object 12014 rotates in response to the movement of the one or more cameras.
While displaying the calibration user interface object that moves on the display in accordance with the detected changes in attitude of the one or more cameras in the physical environment (e.g., a scanning prompt object, such as a bouncy cube or a wireframe object), the device detects (17006) that the calibration criteria are met. For example, as described with reference to Figures 12E to 12J, in response to the movement of the device that occurred from Figures 12E-1 to 12I-1, the device determines that the calibration criteria are met.
In response to detecting that the calibration criteria are met, the device ceases (17008) to display the calibration user interface object (e.g., a scanning prompt object, such as a bouncy cube or a wireframe object). In some embodiments, after the device ceases to display the calibration user interface object, the device displays the representation of the field of view of the cameras without blurring. In some embodiments, the representation of the virtual object is displayed over the non-blurred representation of the field of view of the cameras. For example, in Figure 12J, in response to the movement of the device described with reference to Figures 12E-1 to 12I-1, calibration user interface object 12014 is no longer displayed, and virtual object 11002 is displayed over the non-blurred representation 6036 of the field of view of the cameras. Adjusting a display parameter of the calibration user interface object in accordance with movement of the one or more cameras (e.g., the device cameras that capture the physical environment of the device) provides visual feedback to the user (e.g., to indicate that movement of the device is needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user move the device in a manner that provides the information needed to meet the calibration criteria), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the request to display the augmented reality view of the physical environment (e.g., the physical environment around the device that includes the one or more cameras) in the first user interface region that includes the representation of the field of view of the one or more cameras includes (17010) a request to display a representation of a virtual three-dimensional object (e.g., a virtual object with a three-dimensional model) in the augmented reality view of the physical environment. In some embodiments, the request is a tap input, detected on a button, for switching from a staging view of the virtual object to an augmented reality view of the virtual object. In some embodiments, the request is a selection of an augmented reality affordance displayed next to a representation of the virtual object in a two-dimensional user interface. For example, in Figure 12A, an input by contact 12002 at a position that corresponds to toggle control 6018 is a request to display virtual object 11002 in a user interface that includes the field of view 6036 of the cameras, as shown in Figure 12B. Displaying the augmented reality view of the physical environment in response to a request to display the virtual object in the augmented reality view reduces the number of inputs needed (e.g., to display both the view of the physical environment and the virtual object). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, after ceasing to display the calibration user interface object (e.g., after the calibration criteria have been met), the device displays (17012) a representation of a virtual three-dimensional object in the first user interface region that includes the representation of the field of view of the one or more cameras. In some embodiments, in response to the request, after calibration is fully complete and the field of view of the cameras is displayed without blurring, the virtual object is dropped to a predefined position and/or orientation relative to a predefined plane identified in the field of view of the one or more cameras (e.g., a physical surface, such as a vertical wall or a horizontal floor, that can serve as a supporting plane for the three-dimensional representation of the virtual object). For example, in Figure 12J, the device has ceased to display calibration user interface object 12014 shown in Figures 12E to 12I, and virtual object 11002 is displayed in the user interface that includes the field of view 6036 of the cameras. Displaying the virtual object in the displayed augmented reality view after ceasing to display the calibration user interface object provides visual feedback (e.g., to indicate that the calibration criteria have been met). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable inputs and avoid attempting to provide inputs for manipulating the virtual object before the calibration criteria are met), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while displaying the calibration user interface object (e.g., before the calibration criteria are met), the device displays (17014) the representation of the virtual three-dimensional object in the first user interface region (e.g., behind the calibration user interface object), where, during movement of the one or more cameras in the physical environment (e.g., while the calibration user interface object moves in the first user interface region in accordance with the movement of the one or more cameras), the representation of the virtual three-dimensional object remains at a fixed position in the first user interface region (e.g., the virtual three-dimensional object is not placed at a position in the physical environment). For example, in Figures 12E-1 to 12I-1, a representation of virtual object 11002 is displayed while calibration user interface object 12014 is displayed. While device 100, which includes the one or more cameras, moves (e.g., as shown in Figures 12E-1 to 12F-1 and the corresponding Figures 12E-2 to 12F-2), virtual object 11002 remains at a fixed position in the user interface that includes the field of view 6036 of the one or more cameras. Displaying the virtual object while displaying the calibration user interface object provides visual feedback (e.g., to indicate the object for which calibration is being performed). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide calibration inputs that correspond to the plane relative to which the virtual object will be placed), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the request to display the augmented reality view of the physical environment (e.g., the physical environment around the device that includes the one or more cameras) in the first user interface region that includes the representation of the field of view of the one or more cameras includes (17016) a request to display the representation of the field of view of the one or more cameras (e.g., with one or more user interface objects and/or controls (e.g., an outline of a plane, an object, a pointer, an icon, a label, etc.) displayed simultaneously), without requiring that a representation of any virtual three-dimensional object (e.g., a virtual object with a three-dimensional model) be displayed in the physical environment captured in the field of view of the one or more cameras. In some embodiments, the request is a selection of an augmented reality affordance displayed next to a representation of the virtual object in a two-dimensional user interface. In some embodiments, the request is activation of an augmented reality measurement application (e.g., a measurement application that facilitates measurement of the physical environment). Responding to a request to display the representation of the field of view of the one or more cameras without a request to display a representation of any virtual three-dimensional object provides feedback (e.g., by using the same calibration user interface object to indicate that calibration is needed, regardless of whether a virtual object is displayed). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to receiving the request to display the augmented reality view of the physical environment, the device displays (17018) the representation of the field of view of the one or more cameras (e.g., while the calibration criteria are not met, a blurred version of the physical environment in the field of view of the one or more cameras is displayed), and, in accordance with a determination that the calibration criteria for the augmented reality view of the physical environment are met (e.g., because image data of sufficient quantity (e.g., from distinct viewing angles) is available to generate size and spatial relationship data for the physical environment captured in the field of view of the one or more cameras, because a plane corresponding to the virtual object has been detected in the field of view of the one or more cameras, and/or because there is sufficient information to start or continue plane detection based on the image data available from the cameras), the device forgoes display of the calibration user interface object (e.g., a scanning prompt object, such as a bouncy cube or a wireframe object). In some embodiments, while the virtual three-dimensional object is displayed in the staging user interface, scanning of the physical environment to detect planes is started, so that, in some circumstances (e.g., when the field of view of the cameras has been moved sufficiently to provide enough data to detect one or more planes in the physical space), the device is able to detect one or more planes in the physical space before displaying the augmented reality view, such that display of the calibration user interface is not needed. Forgoing display of the calibration user interface object in accordance with a determination that the calibration criteria for the augmented reality view of the physical environment are met provides visual feedback to the user (e.g., the absence of the calibration user interface object indicates that the calibration criteria have been met and that movement of the device is not needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid unnecessary movement of the device for calibration purposes), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
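The decision described above — show the calibration cue only when background scanning during staging has not already satisfied the criteria — can be sketched as a small gating function. The predicate names and the sample-count threshold below are assumptions for illustration, not terms from the patent:

```python
def should_show_calibration_object(plane_detected, image_samples, needed=10):
    """Forgo the calibration cue when scanning that began during the
    staging view already satisfies the calibration criteria (a plane
    was detected, or enough image data has been gathered)."""
    return not (plane_detected or image_samples >= needed)

# A plane was found while the object was still in the staging view:
assert should_show_calibration_object(plane_detected=True, image_samples=2) is False
# No plane yet and too little image data: show the calibration cue.
assert should_show_calibration_object(plane_detected=False, image_samples=2) is True
# No plane, but ample image data to continue plane detection: forgo the cue.
assert should_show_calibration_object(plane_detected=False, image_samples=12) is False
```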
In some embodiments, while displaying the calibration user interface object (e.g., before the calibration criteria are met), the device displays (17020), in the first user interface region, a text object (e.g., a textual description of a currently detected error state and/or a text prompt requesting a user action (e.g., to correct the detected error state)) that provides (e.g., next to the calibration user interface object) information about actions the user can take to improve calibration of the augmented reality view. In some embodiments, the text object provides the user with a prompt for movement of the device (e.g., consistent with a currently detected error state), such as "Too much movement," "Low detail," "Move a little closer," etc. In some embodiments, the device updates the text object in accordance with movement of the user during the calibration process and based on new error states detected as a result of the user's actions. Displaying text while displaying the calibration user interface object provides visual feedback to the user (e.g., providing verbal instructions about the type of movement needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable inputs and reducing user mistakes when operating/interacting with the device), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
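The text object described above maps the currently detected error state to a prompt string, updating as the user's movement changes the error state. A minimal sketch follows; the numeric thresholds and input names are invented for illustration, while the prompt strings come from the examples in the text:

```python
def calibration_prompt(speed, detail_score, distance_m):
    """Pick the text prompt for the most salient current error state,
    or None when no error state is detected."""
    if speed > 1.0:          # device is moving too fast to track features
        return "Too much movement"
    if detail_score < 0.2:   # scene lacks visual texture for tracking
        return "Low detail"
    if distance_m > 3.0:     # too far from the surface being scanned
        return "Move a little closer"
    return None              # no error state: no text prompt needed

assert calibration_prompt(speed=1.5, detail_score=0.9, distance_m=1.0) == "Too much movement"
assert calibration_prompt(speed=0.2, detail_score=0.1, distance_m=1.0) == "Low detail"
assert calibration_prompt(speed=0.2, detail_score=0.9, distance_m=4.0) == "Move a little closer"
assert calibration_prompt(speed=0.2, detail_score=0.9, distance_m=1.0) is None
```

Re-evaluating this function on each frame gives the updating behavior described in the embodiment: as the user corrects one error state, the prompt either changes to the next detected error state or disappears.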
In some embodiments, in response to detecting that the calibration criteria are met (e.g., the criteria are met before the calibration user interface object is displayed, or the criteria are met after the calibration user interface object has been displayed and animated for a period of time), the device (e.g., after ceasing to display the calibration user interface object, if the calibration user interface object was initially displayed) displays (17022) a visual indication of a plane detected in the physical environment captured in the field of view of the one or more cameras (e.g., displays an outline around the detected plane, or highlights the detected plane). For example, in Figure 12J, a plane (floor surface 5038) is highlighted to indicate that the plane has been detected in the physical environment 5002 captured in the displayed field of view 6036 of the one or more cameras. Displaying a visual indication of the detected plane provides visual feedback (e.g., to indicate that a plane has been detected in the physical environment captured by the device cameras). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable inputs and reducing user mistakes when operating/interacting with the device), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to receiving the request to display the augmented reality view of the physical environment: in accordance with a determination that the calibration criteria are not met, and before displaying the calibration user interface object, the device displays (17024) (e.g., via the display generation component, and in the first user interface region that includes the representation (e.g., the blurred version) of the field of view of the one or more cameras) an animated prompt object (e.g., a scanning prompt object, such as a bouncy cube or a wireframe object) that includes a representation of a device moving relative to a representation of a plane (e.g., movement of the representation of the device relative to the representation of the plane indicates the device movement that the user is asked to perform). For example, the animated prompt object includes a representation 12004 of device 100 moving relative to a representation 12010 of a plane, as described with reference to Figures 12B to 12D. In some embodiments, when the device detects movement of the device, the device ceases to display the animated prompt object (e.g., indicating that the user has begun to move the device in a manner that will allow calibration to proceed). In some embodiments, when the device detects movement of the device, and before calibration has been completed, the device replaces display of the animated prompt object with the calibration user interface object to further guide the user with respect to calibration of the device. For example, as described with reference to Figures 12C to 12E, when movement of the device is detected (e.g., as shown in Figures 12C to 12D), the animated prompt that includes the representation 12004 of device 100 ceases to be displayed, and calibration user interface object 12014 is displayed in Figure 12E. Displaying an animated prompt object that includes a representation of a device moving relative to a representation of a plane provides visual feedback to the user (e.g., demonstrating the type of device movement needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user move the device in a manner that provides the information needed to meet the calibration criteria), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
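The sequence described in this embodiment — animated how-to prompt, then the interactive calibration object once the user starts moving, then the AR view once calibration completes — behaves like a small state machine. The state names below are invented labels for the three display phases; the transitions follow the description above:

```python
def next_prompt_state(state, device_moved, calibration_met):
    """Transitions between the pre-AR prompt phases:
    ANIMATED_PROMPT -> CALIBRATION_OBJECT on first detected device movement,
    CALIBRATION_OBJECT -> AR_VIEW once the calibration criteria are met."""
    if state == "ANIMATED_PROMPT" and device_moved:
        state = "CALIBRATION_OBJECT"
    if state == "CALIBRATION_OBJECT" and calibration_met:
        state = "AR_VIEW"
    return state

state = "ANIMATED_PROMPT"
state = next_prompt_state(state, device_moved=False, calibration_met=False)
assert state == "ANIMATED_PROMPT"     # no movement yet: keep the how-to animation
state = next_prompt_state(state, device_moved=True, calibration_met=False)
assert state == "CALIBRATION_OBJECT"  # user began moving: show the cue
state = next_prompt_state(state, device_moved=False, calibration_met=True)
assert state == "AR_VIEW"             # criteria met: cease the cue, show AR
```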
In some embodiments, adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes (17026): in accordance with a first movement magnitude of the one or more cameras in the physical environment, moving the calibration user interface object by a first amount; and, in accordance with a second movement magnitude of the one or more cameras in the physical environment, moving the calibration user interface object by a second amount, where the first amount is different from (e.g., greater than) the second amount, and the first movement magnitude is different from (e.g., greater than) the second movement magnitude (e.g., the first movement magnitude and the second movement magnitude are measured based on movement in the same direction in the physical environment). Moving the calibration user interface object by an amount that corresponds to the movement magnitude of the one or more (device) cameras provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object is guidance for the device movement needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable inputs and reducing user mistakes when operating/interacting with the device), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
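The relation described in (17026) is simply that the cue's movement scales monotonically with the camera's movement magnitude: a larger device motion produces a larger motion of the object. A one-line proportional mapping illustrates this; the linear form and the `gain` value are assumptions (any monotonic mapping would satisfy the claim):

```python
def object_rotation_delta(camera_move_magnitude, gain=2.0):
    """Larger camera movement produces a larger movement of the cue;
    equal magnitudes in the same direction produce equal amounts."""
    return gain * camera_move_magnitude

small = object_rotation_delta(1.0)   # first (smaller) movement magnitude
large = object_rotation_delta(3.0)   # second (larger) movement magnitude
assert small == 2.0 and large == 6.0
assert large > small  # distinct magnitudes map to distinct amounts
```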
In some embodiments, adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes (17028): in accordance with a determination that the detected change in attitude of the one or more cameras corresponds to movement of a first type (e.g., lateral movement, such as lateral movement to the left, to the right, or back and forth) (and does not correspond to movement of a second type (e.g., vertical movement, such as movement upward, downward, or up and down)), moving the calibration user interface object based on the movement of the first type (e.g., moving the calibration user interface object in a first manner (e.g., rotating the calibration user interface object about a vertical axis through the calibration user interface object)); and, in accordance with a determination that the detected change in attitude of the one or more cameras corresponds to movement of the second type (and does not correspond to movement of the first type), forgoing moving the calibration user interface object based on the movement of the second type (e.g., forgoing moving the calibration user interface object in the first manner, or keeping the calibration user interface object stationary). For example, lateral movement of device 100, which includes the one or more cameras (e.g., as described with reference to Figures 12F-1 to 12G-1 and Figures 12F-2 to 12G-2), causes calibration user interface object 12014 to rotate, whereas vertical movement of device 100 (e.g., as described with reference to Figures 12G-1 to 12H-1 and Figures 12G-2 to 12H-2) does not cause calibration user interface object 12014 to rotate. Forgoing movement of the calibration user interface object in accordance with a determination that the detected change in attitude of the device cameras corresponds to movement of the second type provides visual feedback (e.g., indicating to the user that movement of the second type of the one or more cameras is not needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid providing unnecessary inputs), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
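The first-type/second-type distinction in (17028) amounts to classifying each detected device motion and responding only to the lateral component. A minimal sketch of that classification, with an invented `gain` and a simple dominant-axis test standing in for whatever classifier the device actually uses:

```python
def rotation_from_device_motion(dx, dy, gain=1.5):
    """Rotate the cue for lateral (x) motion; forgo rotation for
    vertical (y) motion, keeping the cue stationary."""
    if abs(dx) > abs(dy):      # movement is predominantly lateral
        return gain * dx       # rotate about the object's vertical axis
    return 0.0                 # predominantly vertical: no rotation

assert rotation_from_device_motion(dx=2.0, dy=0.1) == 3.0  # lateral: rotates
assert rotation_from_device_motion(dx=0.0, dy=2.0) == 0.0  # vertical: stationary
```

Ignoring the vertical component is itself the feedback: the cue visibly responds only to the kind of movement that helps calibration.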
In some embodiments, adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes (17030): moving (e.g., rotating and/or tilting) the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment without changing a characteristic display position of the calibration user interface object in the first user interface region (e.g., the position of the geometric center of the calibration user interface object, or of an axis of the calibration user interface object, on the display) (e.g., the calibration user interface object is anchored to a fixed position on the display while the physical environment moves behind the calibration user interface object in the field of view of the one or more cameras). For example, in Figures 12E-1 to 12I-1, calibration user interface object 12014 rotates while remaining at a fixed position relative to display 112. Moving the calibration user interface object without changing the characteristic display position of the calibration user interface object provides visual feedback (e.g., indicating that the calibration user interface object is distinct from a virtual object placed at a position relative to the displayed augmented reality environment). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide suitable inputs and reducing user input errors), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in the pose of the one or more cameras in the physical environment includes (17032): rotating the calibration user interface object about an axis that is perpendicular to a direction of movement of the one or more cameras in the physical environment (e.g., when the device (e.g., including the cameras) moves back and forth in an x-y plane, the calibration user interface object rotates about the z-axis; or when the device (e.g., including the cameras) moves from side to side along the x-axis (e.g., with the x-axis defined as the horizontal direction relative to the physical environment and lying, for example, in the plane of the touch-screen display), the calibration user interface object rotates about the y-axis). For example, in Figures 12E-1 to 12G-1, calibration user interface object 12014 rotates about a vertical axis that is perpendicular to the lateral movement of the device shown in Figures 12E-2 to 12G-2. Rotating the calibration user interface object about an axis perpendicular to the movement of the device cameras provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object serves as guidance for the device movement needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors when operating/interacting with the device), which, additionally, by enabling the user to use the device more quickly and efficiently, reduces power usage and improves the battery life of the device.
In some embodiments, adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in the pose of the one or more cameras in the physical environment includes (17034): moving the calibration user interface object at a speed that is determined in accordance with a rate of change detected in the field of view of the one or more cameras (e.g., a speed of movement of the physical environment). Moving the calibration user interface object at a speed determined in accordance with the change in the pose of the device cameras provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object serves as guidance for the device movement needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors when operating/interacting with the device), which, additionally, by enabling the user to use the device more quickly and efficiently, reduces power usage and improves the battery life of the device.
In some embodiments, adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in the pose of the one or more cameras in the physical environment includes (17036): moving the calibration user interface object in a direction that is determined in accordance with a direction of change detected in the field of view of the one or more cameras (e.g., a direction of movement of the physical environment) (e.g., the device rotates the calibration user interface object clockwise for right-to-left movement of the device and counterclockwise for left-to-right movement of the device; or the device rotates the calibration user interface object counterclockwise for right-to-left movement of the device and clockwise for left-to-right movement of the device). Moving the calibration user interface object in a direction determined in accordance with the change in the pose of the device cameras provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object serves as guidance for the device movement needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors when operating/interacting with the device), which, additionally, by enabling the user to use the device more quickly and efficiently, reduces power usage and improves the battery life of the device.
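The two behaviors just described, speed following the rate of the detected change (17034) and direction following the direction of the detected change (17036), reduce to a signed per-frame step. The following Python sketch is a hypothetical illustration only; the gain constant and the reduction of the field-of-view change to a single signed scalar are assumptions.

```python
# Hypothetical sketch: one frame's angular step for the calibration object.
# The step's magnitude tracks the magnitude of the change detected in the
# cameras' field of view, and its sign tracks the direction of that change.
def calibration_rotation_step(fov_shift):
    """fov_shift: signed change detected in the field of view this frame."""
    GAIN = 0.2  # assumed degrees of rotation per unit of detected change
    return GAIN * fov_shift

fast = calibration_rotation_step(50.0)   # faster movement -> larger step
slow = calibration_rotation_step(10.0)
back = calibration_rotation_step(-10.0)  # opposite direction -> opposite sign
```

A faster detected change yields a proportionally larger rotation step, and reversing the direction of movement reverses the direction of rotation.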
It should be understood that the particular order in which the operations in Figures 17A to 17D have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 18000, 19000, and 20000) are also applicable in an analogous manner to method 17000 described above with respect to Figures 17A to 17D. For example, the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described above with reference to method 17000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 16000, 18000, 19000, and 20000). For brevity, these details are not repeated here.
Figures 18A to 18I are flow diagrams illustrating method 18000 of constraining rotation of a virtual object about an axis. Method 18000 is performed at an electronic device (e.g., device 300 of Figure 3, or portable multifunction device 100 of Figure 1A) having a display generation component (e.g., a display, a projector, a heads-up display, etc.), one or more input devices (e.g., a touch-sensitive surface, or a touch-screen display that serves both as the display generation component and the touch-sensitive surface), one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite the display and the touch-sensitive surface), and one or more pose sensors (e.g., accelerometers, gyroscopes, and/or magnetometers) for detecting changes in the pose of the device that includes the one or more cameras (e.g., the orientation (e.g., roll, yaw, and/or pitch angle) and position relative to the surrounding physical environment). Some operations in method 18000 are, optionally, combined, and/or the order of some operations is, optionally, changed.
The device displays (18002), by the display generation component, a representation of a first perspective of a virtual three-dimensional object in a first user interface region (e.g., a staging user interface or an augmented reality user interface). For example, virtual object 11002 is displayed in staging user interface 6010, as shown in Figure 13B.
While displaying the representation of the first perspective of the virtual three-dimensional object in the first user interface region on the display, the device detects (18004) a first input (e.g., a swipe input by one or two finger contacts on the touch-sensitive surface, or a pivot input (e.g., a two-finger rotation, or one finger contact pivoting around another finger contact)) that corresponds to a request to rotate the virtual three-dimensional object relative to the display (e.g., relative to a display plane corresponding to the display generation component, such as the plane of the touch-screen display) in order to display a portion of the virtual three-dimensional object that is not visible from the first perspective of the virtual three-dimensional object. For example, the input is the input described with reference to Figures 13B to 13C, or the input described with reference to Figures 13E to 13F.
In response to detecting the first input (18006): in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a first axis (e.g., a first axis parallel to the display plane (e.g., the x-y plane) in the horizontal direction, such as the x-axis), the device rotates the virtual three-dimensional object relative to the first axis by an amount that is determined based on a magnitude of the first input (e.g., a speed and/or distance of a swipe input along a vertical axis (e.g., the y-axis) of the touch-sensitive surface (e.g., an x-y plane corresponding to the x-y plane of the display)) and that is constrained by a movement limit that restricts rotation of the virtual three-dimensional object relative to the first axis to no more than a threshold amount of rotation (e.g., rotation about the first axis is limited to a range of +/- 30 degrees about the first axis, and rotation beyond that range is prohibited regardless of the magnitude of the first input). For example, as described with reference to Figures 13E to 13G, rotation of virtual object 11002 is limited by a constraint. In accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a second axis that is different from the first axis (e.g., a second axis parallel to the display plane (e.g., the x-y plane) in the vertical direction, such as the y-axis), the device rotates the virtual three-dimensional object relative to the second axis by an amount that is determined based on a magnitude of the first input (e.g., a speed and/or distance of a swipe input along a horizontal axis (e.g., the x-axis) of the touch-sensitive surface (e.g., an x-y plane corresponding to the x-y plane of the display)), wherein, for an input with a magnitude above a respective threshold, the device rotates the virtual three-dimensional object relative to the second axis by more than the threshold amount of rotation. In some embodiments, for rotation relative to the second axis, the device applies a constraint on the rotation with a limit that is larger than that of the constraint on rotation relative to the first axis (e.g., allowing the three-dimensional object to rotate by 60 degrees rather than 30 degrees). In some embodiments, for rotation relative to the second axis, the device applies no constraint on the rotation, so that the three-dimensional object can rotate freely about the second axis (e.g., for an input with a sufficiently high magnitude, such as a fast or long swipe input including movements of one or more contacts, the three-dimensional object can be rotated relative to the second axis by more than 360 degrees). For example, the amount of rotation about the y-axis that occurs for virtual object 11002 in response to the input described with reference to Figures 13B to 13C is greater than the amount of rotation about the x-axis that occurs for virtual object 11002 in response to the input described with reference to Figures 13E to 13G. Determining, in accordance with whether the input is a request to rotate the object about the first axis or about the second axis, whether the rotation of the object is constrained to a threshold amount or may exceed the threshold amount improves the ability to control different types of rotation operations. Providing additional control options without cluttering the user interface with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient.
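The asymmetric constraint described in (18006), a hard limit about the first axis and free rotation about the second, can be sketched as follows. This is a hypothetical Python illustration, not the patent's implementation; the function name and the +/-30 degree limit (taken from the example in the text) are assumptions about how such a limit might be coded.

```python
# Hypothetical sketch of the axis-dependent constraint: rotation about the
# first axis (pitch) is clamped to a +/-30 degree range regardless of input
# magnitude, while rotation about the second axis (yaw) is unconstrained.
def apply_rotation(pitch_deg, yaw_deg, d_pitch, d_yaw):
    PITCH_LIMIT = 30.0  # threshold amount of rotation for the first axis
    new_pitch = max(-PITCH_LIMIT, min(PITCH_LIMIT, pitch_deg + d_pitch))
    new_yaw = yaw_deg + d_yaw  # may exceed 360 degrees for a large swipe
    return new_pitch, new_yaw

pitch, yaw = apply_rotation(0.0, 0.0, d_pitch=80.0, d_yaw=400.0)
```

Even a request for 80 degrees of pitch is clamped to the 30 degree limit, while the 400 degree yaw request passes through unmodified.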
In some embodiments, in response to detecting the first input (18008): in accordance with a determination that the first input includes a first movement of a contact across the touch-sensitive surface in a first direction (e.g., the y-direction, a vertical direction on the touch-sensitive surface), and a determination that the first movement of the contact in the first direction meets first criteria for rotating the representation of the virtual object relative to the first axis, wherein the first criteria include a requirement that the first input include more than a first threshold amount of movement in the first direction in order for the first criteria to be met (e.g., the device does not initiate rotation of the three-dimensional object about the first axis until the device detects more than the first threshold amount of movement in the first direction), the device determines that the first input corresponds to a request to rotate the three-dimensional object about the first axis (e.g., the x-axis, a horizontal axis parallel to the display, or a horizontal axis through the virtual object); and, in accordance with a determination that the first input includes a second movement of a contact across the touch-sensitive surface in a second direction (e.g., the x-direction, a horizontal direction on the touch-sensitive surface), and a determination that the second movement of the contact in the second direction meets second criteria for rotating the representation of the virtual object relative to the second axis, wherein the second criteria include a requirement that the first input include more than a second threshold amount of movement in the second direction in order for the second criteria to be met (e.g., the device does not initiate rotation of the three-dimensional object about the second axis until the device detects more than the second threshold amount of movement in the second direction), the device determines that the first input corresponds to a request to rotate the three-dimensional object about the second axis (e.g., a vertical axis parallel to the display, or a vertical axis through the virtual object), wherein the first threshold is greater than the second threshold (e.g., the user needs to swipe by a greater amount in the vertical direction to trigger rotation about the horizontal axis (e.g., tilting the object toward or away from the user) than in the horizontal direction to trigger rotation about the vertical axis (e.g., spinning the object)). Determining, in accordance with whether the input is a request to rotate the object about the first axis or about the second axis, whether the rotation of the object is constrained to a threshold amount or may exceed the threshold amount improves the ability to control different types of rotation operations with inputs that correspond to requests to rotate the object. Providing additional control options without cluttering the user interface with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient.
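The asymmetric movement thresholds in (18008) amount to a small classifier over the swipe's displacement. The following Python sketch is hypothetical; the specific threshold values are assumptions, chosen only to satisfy the stated requirement that the first threshold exceed the second.

```python
# Hypothetical sketch: map a swipe displacement to a rotation request.
# A vertical swipe must exceed a larger threshold to start rotation about
# the first (horizontal) axis than a horizontal swipe needs to start
# rotation about the second (vertical) axis.
def classify_rotation_request(dx, dy):
    FIRST_THRESHOLD = 30.0   # assumed vertical movement required (points)
    SECOND_THRESHOLD = 10.0  # assumed horizontal movement required; smaller
    if abs(dy) > FIRST_THRESHOLD:
        return "first-axis"
    if abs(dx) > SECOND_THRESHOLD:
        return "second-axis"
    return "none"
```

A 20-point horizontal swipe already triggers second-axis rotation, whereas a 20-point vertical swipe triggers nothing, reflecting that tilting the object is made deliberately harder to initiate than spinning it.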
In some embodiments (18010), rotation of the virtual three-dimensional object relative to the first axis occurs with a first degree of correspondence between a characteristic value of a first input parameter of the first input (e.g., swipe distance or swipe speed) and an amount of rotation applied to the virtual three-dimensional object about the first axis, and rotation of the virtual three-dimensional object relative to the second axis occurs with a second degree of correspondence between the characteristic value of the first input parameter of the input (e.g., swipe distance or swipe speed) and an amount of rotation applied to the virtual three-dimensional object about the second axis, wherein the first degree of correspondence involves less rotation of the virtual three-dimensional object relative to the first input parameter than the second degree of correspondence (e.g., rotation about the first axis has more friction or resistance than rotation about the second axis). For example, a first amount of rotation of virtual object 11002 occurs in response to a swipe input with a swipe distance d1 for rotation about the y-axis (as described with reference to Figures 13B to 13C), and a second amount of rotation of virtual object 11002, smaller than the first amount of rotation, occurs in response to a swipe input with the swipe distance d1 for rotation about the x-axis (as described with reference to Figures 13E to 13G). Rotating the virtual object with a larger or a smaller amount of rotation in response to an input, depending on whether the input is a request to rotate the object about the first axis or about the second axis, improves the ability to control different types of rotation operations with inputs that correspond to requests to rotate the object. Providing additional control options without cluttering the user interface with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient.
In some embodiments, the device detects (18012) an end of the first input (e.g., the input includes movements of one or more contacts on the touch-sensitive surface, and detecting the end of the first input includes detecting lift-off of the one or more contacts from the touch-sensitive surface). After (e.g., in response to) detecting the end of the first input, the device continues (18014) to rotate the three-dimensional object based on a magnitude of the first input before the end of the input was detected (e.g., based on a speed of movement of the contact before lift-off of the contact), including: in accordance with a determination that the three-dimensional object was rotating relative to the first axis, slowing the rotation of the object relative to the first axis by a first amount, the first amount being proportional to the magnitude of the rotation of the three-dimensional object relative to the first axis (e.g., slowing the rotation of the three-dimensional object about the first axis based on a first simulated physical parameter, such as simulated friction with a first coefficient of friction); and, in accordance with a determination that the three-dimensional object was rotating relative to the second axis, slowing the rotation of the object relative to the second axis by a second amount, the second amount being proportional to the magnitude of the rotation of the three-dimensional object relative to the second axis (e.g., slowing the rotation of the three-dimensional object about the second axis based on a second simulated physical parameter, such as simulated friction with a second coefficient of friction that is smaller than the first coefficient of friction), wherein the second amount is different from the first amount. For example, in Figures 13C to 13D, virtual object 11002 continues to rotate after lift-off of contact 13002, which caused the rotation of virtual object 11002, as described with reference to Figures 13B to 13C. In some embodiments, the second amount is greater than the first amount. In some embodiments, the second amount is less than the first amount. Slowing the rotation of the virtual object by a first amount or by a second amount after detecting the end of the input, in accordance with whether the input was a request to rotate the object about the first axis or about the second axis, provides visual feedback indicating that rotation operations are applied in different ways for rotating the virtual object about the first axis and about the second axis. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs), which, additionally, by enabling the user to use the device more quickly and efficiently, reduces power usage and improves the battery life of the device.
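The per-axis simulated friction in (18012)-(18014) can be sketched as a per-frame velocity decay. This Python illustration is hypothetical and not the patent's implementation; the decay model, the frame rate, and the stopping cutoff are all assumptions.

```python
# Hypothetical sketch: after lift-off, the object coasts, with its angular
# velocity decayed each frame by a simulated coefficient of friction. A
# larger coefficient (first axis) yields less extra rotation than a smaller
# coefficient (second axis) for the same lift-off velocity.
def coast_after_liftoff(angular_velocity, friction, dt=1.0 / 60):
    """Return the total extra rotation accumulated after lift-off."""
    total = 0.0
    while abs(angular_velocity) > 0.01:  # stop once essentially at rest
        total += angular_velocity * dt
        angular_velocity *= 1.0 - friction  # per-frame friction decay
    return total

first_axis = coast_after_liftoff(10.0, friction=0.2)    # higher friction
second_axis = coast_after_liftoff(10.0, friction=0.05)  # lower friction
```

With identical lift-off velocities, the second axis coasts farther than the first, matching the described behavior of a smaller second coefficient of friction.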
In some embodiments, the device detects (18016) an end of the first input (e.g., the input includes movements of one or more contacts on the touch-sensitive surface, and detecting the end of the first input includes detecting lift-off of the one or more contacts from the touch-sensitive surface). After (e.g., in response to) detecting the end of the first input (18018): in accordance with a determination that the three-dimensional object has rotated relative to the first axis by more than a respective threshold amount of rotation, the device reverses at least a portion of the rotation of the three-dimensional object relative to the first axis; and, in accordance with a determination that the three-dimensional object has rotated relative to the first axis by less than the respective threshold amount of rotation, the device forgoes reversing the rotation of the three-dimensional object relative to the first axis (e.g., the device stops the rotation of the three-dimensional object relative to the first axis, and/or continues the rotation of the three-dimensional object relative to the first axis in the direction of movement of the input, with the magnitude of the rotation determined by the magnitude of the input before the end of the input was detected). For example, after the rotation of virtual object 11002 exceeds a threshold amount of rotation, as described with reference to Figures 13E to 13G, the rotation of virtual object 11002 is reversed, as shown in Figures 13G to 13H. In some embodiments, the amount by which the rotation of the three-dimensional object is reversed is determined based on how far the rotation of the three-dimensional object exceeded the respective threshold amount of rotation (e.g., if the three-dimensional object was rotated beyond the respective threshold amount of rotation by a larger amount, the rotation of the three-dimensional object relative to the first axis is reversed by a larger amount, whereas, if the three-dimensional object was rotated beyond the respective threshold amount of rotation by a smaller amount, the rotation relative to the first axis is reversed by a smaller amount). In some embodiments, the reversal of the rotation is driven by a simulated physical parameter, such as a simulated buoyancy effect, that pulls the three-dimensional object with greater force when the object is rotated farther beyond the respective threshold amount of rotation relative to the first axis. In some embodiments, the direction of the reversal of the rotation is determined based on the direction of the rotation beyond the respective threshold amount of rotation relative to the first axis (e.g., if the three-dimensional object was rotated such that the top of the object moved backward into the display, the reversal of the rotation rotates the top of the object forward out of the display; if the three-dimensional object was rotated such that the top of the object moved forward out of the display, the reversal of the rotation rotates the top of the object back into the display; if the three-dimensional object was rotated such that the right side of the object moved backward into the display, the reversal of the rotation rotates the right side of the object forward out of the display; and/or, if the three-dimensional object was rotated such that the left side of the object moved forward out of the display, the reversal of the rotation rotates the left side of the object back into the display). In some embodiments, for example, in the case where rotation relative to the second axis is constrained to a respective angular range, similar rubber-banding (e.g., conditional reversal of rotation) is performed for rotation about the second axis. In some embodiments, for example, in the case where rotation relative to the second axis is not constrained, such that the device allows the three-dimensional object to rotate through 360 degrees, rubber-banding is not performed for rotation about the second axis (e.g., because the device does not apply a threshold amount of rotation to rotation relative to the second axis). Reversing at least a portion of the rotation of the three-dimensional object relative to the first axis after detecting the end of the input, or forgoing reversing a portion of the rotation of the three-dimensional object relative to the first axis, in accordance with whether the object was rotated by more than the threshold amount of rotation, provides visual feedback indicating the threshold amount of rotation that applies to rotation of the virtual object. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid attempting to provide inputs to rotate the virtual object by more than the threshold amount of rotation), which, additionally, by enabling the user to use the device more quickly and efficiently, reduces power usage and improves the battery life of the device.
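The conditional rubber-band reversal of (18016)-(18018) can be sketched as a settle step applied once the input ends. This Python illustration is hypothetical; the threshold value and the fraction of overshoot that is pulled back are assumptions, chosen to exhibit the stated properties (no reversal under the threshold, larger reversal for larger overshoot, direction opposing the overshoot).

```python
# Hypothetical sketch: after the input ends, partially reverse rotation
# about the first axis when it exceeds the threshold amount of rotation;
# rotation within the threshold is left alone (the reversal is forgone).
def settle_rotation(rotation_deg):
    THRESHOLD = 30.0  # assumed threshold amount of rotation (degrees)
    PULL_BACK = 0.5   # assumed fraction of the overshoot that is reversed
    overshoot = abs(rotation_deg) - THRESHOLD
    if overshoot <= 0:
        return rotation_deg  # under the threshold: forgo the reversal
    sign = 1.0 if rotation_deg > 0 else -1.0
    return sign * (THRESHOLD + overshoot * (1.0 - PULL_BACK))
```

A 20 degree rotation is untouched, a 50 degree rotation settles back to 40 degrees, and a 70 degree rotation is pulled back by a larger amount than the 50 degree one, the "pulled with greater force when rotated farther" behavior described above.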
In some embodiments (18020), in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a third axis that is different from the first axis and the second axis (e.g., a third axis perpendicular to the plane of the display (e.g., the x-y plane), such as the z-axis), the device forgoes rotating the virtual three-dimensional object relative to the third axis (e.g., rotation about the z-axis is prohibited, and requests to rotate the object about the z-axis are ignored by the device). In some embodiments, the device provides an alert (e.g., a tactile output indicating failure of the input). Forgoing rotation of the virtual object in accordance with a determination that the rotation input corresponds to a request to rotate the virtual object about the third axis provides visual feedback indicating that rotation about the third axis is restricted. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid attempting to provide inputs for rotating the virtual object about the third axis), which, additionally, by enabling the user to use the device more quickly and efficiently, reduces power usage and improves the battery life of the device.
In some embodiments, the device displays (18022) a representation of a shadow cast by the virtual three-dimensional object while displaying the representation of the first perspective of the virtual three-dimensional object in the first user interface region (e.g., the staging user interface). The device changes a shape of the representation of the shadow in accordance with the rotation of the virtual three-dimensional object relative to the first axis and/or the second axis. For example, as virtual object 11002 rotates, the shape of shadow 13006 of virtual object 11002 differs from Figure 13B to Figure 13F. In some embodiments, the shadow shifts and changes shape to indicate the current orientation of the virtual object relative to an invisible ground plane in the staging user interface that supports a predefined bottom side of the virtual object. In some embodiments, a surface of the virtual three-dimensional object appears to reflect light from a simulated light source located in a predefined direction in the virtual space represented in the staging user interface. Changing the shape of the shadow in accordance with the rotation of the virtual object provides visual feedback (e.g., indicating the orientation of the virtual object relative to a virtual plane (e.g., a stage in the staging view)). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user determine the appropriate direction for a swipe input to cause rotation about the first axis or the second axis), which, additionally, by enabling the user to use the device more quickly and efficiently, reduces power usage and improves the battery life of the device.
In some embodiments, while rotating the virtual three-dimensional object in the first user interface region (18024): in accordance with a determination that the virtual three-dimensional object is displayed with a second perspective that reveals a predefined bottom of the virtual three-dimensional object, the device forgoes displaying the representation of the shadow with the representation of the second perspective of the virtual three-dimensional object. For example, the device does not display the shadow of the virtual object when the virtual object is viewed from below (e.g., as described with reference to Figures 13G to 13I). Forgoing display of the shadow of the virtual object in accordance with a determination that a bottom of the virtual object is displayed provides visual feedback (e.g., indicating that the object has been rotated into a position that no longer corresponds to the virtual plane (e.g., the stage of the staging view)). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient, which, additionally, by enabling the user to use the device more quickly and efficiently, reduces power usage and improves the battery life of the device.
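The shadow behaviors of (18022) and (18024) combine into a single orientation-dependent rule. The Python sketch below is a hypothetical simplification: it reduces the shadow's changing shape to a single width scale and uses a 90 degree pitch as the (assumed) point at which the predefined bottom is revealed.

```python
import math

# Hypothetical sketch: the shadow's shape (reduced here to a width scale)
# follows the object's tilt, and the shadow is forgone entirely once the
# object's predefined bottom is revealed (viewed from below).
def shadow_for(pitch_deg):
    """Return (visible, width_scale) for the object's shadow."""
    if pitch_deg > 90.0:   # second perspective: underside shown
        return False, 0.0  # forgo displaying the shadow
    return True, math.cos(math.radians(pitch_deg))  # flatter tilt, narrower shadow
```

An upright object casts a full-width shadow, a 60 degree tilt halves it, and a 120 degree tilt (underside shown) suppresses it entirely.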
In some embodiments, after the virtual three-dimensional object has been rotated in the first user interface region (e.g., the staging view), the device detects (18026) a second input that corresponds to a request to reset the virtual three-dimensional object in the first user interface region (e.g., the second input is a double tap on the first user interface region). In response to detecting the second input, the device displays (18028) (e.g., by rotating and resizing the virtual object) a representation of the virtual three-dimensional object in the first user interface region at a predefined original perspective (e.g., the first perspective, or a default starting perspective that is different from the first perspective (e.g., when the first perspective is a display perspective after user manipulation in the staging user interface)) (e.g., in response to the double tap, the device resets the orientation of the virtual object to a predefined original orientation (e.g., upright, with the front side facing the user and the bottom side resting on a predefined ground plane)). For example, Figures 13I-13J illustrate an input that changes the perspective of virtual object 11002 from a changed perspective (e.g., the result of the rotation input described with reference to Figures 13B-13G) to the original perspective in Figure 13J (which is the same as the perspective of virtual object 11002 shown in Figure 13A). In some embodiments, in response to detecting the second input that corresponds to the instruction to reset the virtual three-dimensional object, the device also resets the size of the virtual three-dimensional object to reflect the default display size of the virtual three-dimensional object. In some embodiments, a double-tap input resets both the orientation and the size of the virtual object in the staging user interface, while a double-tap input resets only the size, and not the orientation, of the virtual object in the augmented reality user interface. In some embodiments, the device requires that the double tap be directed at the virtual object in order to reset the size of the virtual object in the augmented reality user interface, while the device resets the orientation and size of the virtual object in response to a double tap detected on the virtual object as well as a double tap detected around the virtual object. In the augmented reality view, a single-finger swipe drags the virtual object rather than rotating it (e.g., unlike the staging view). Displaying the predefined original perspective of the virtual object in response to detecting a request to reset the virtual object enhances the operability of the device and makes the user-device interface more efficient (e.g., by providing an option to reset the object, rather than requiring the user to estimate the inputs for adjusting object properties that would return the object to the predefined original perspective). Reducing the number of inputs needed to perform an operation improves the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
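The mode-dependent reset behavior described above (a double tap restores both orientation and size in the staging view, but only size, and only when directed at the object, in the augmented reality view) can be sketched as follows. This is a minimal illustrative model, not the claimed implementation; the class and attribute names are assumptions.

```python
# Hypothetical sketch of the double-tap reset behavior described above.
# All names and values are illustrative, not taken from the patent.

DEFAULT_ORIENTATION = (0.0, 0.0, 0.0)  # upright, front side facing the user
DEFAULT_SIZE = 1.0                     # predefined default display size

class VirtualObjectView:
    def __init__(self, mode):
        self.mode = mode               # "staging" or "ar"
        self.orientation = DEFAULT_ORIENTATION
        self.size = DEFAULT_SIZE

    def double_tap(self, on_object):
        if self.mode == "staging":
            # Staging view: reset orientation and size, whether the tap
            # lands on the object or merely around it.
            self.orientation = DEFAULT_ORIENTATION
            self.size = DEFAULT_SIZE
        elif self.mode == "ar" and on_object:
            # AR view: the double tap must be directed at the object,
            # and only the size is reset, not the orientation.
            self.size = DEFAULT_SIZE
```

Under this sketch, a stray double tap on empty space in the AR view leaves the object untouched, which matches the asymmetry between the two views described above.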
In some embodiments, while the virtual three-dimensional object is displayed in the first user interface region (e.g., the staging user interface), the device detects (18030) a third input that corresponds to a request to resize the virtual three-dimensional object (e.g., the third input is a pinch or de-pinch gesture directed at the representation of the virtual object in the first user interface region, the third input having a magnitude that meets criteria (e.g., original or enhanced criteria for initiating a resize operation (described in greater detail below with reference to method 19000))). In response to detecting the third input, the device adjusts (18032) the size of the representation of the virtual three-dimensional object in the first user interface region in accordance with the magnitude of the input. For example, in response to an input that includes a de-pinch gesture (e.g., as described with reference to Figures 6N-6O), the size of virtual object 11002 is decreased. In some embodiments, while adjusting the size of the representation of the virtual three-dimensional object, the device displays an indicator to indicate the current zoom level of the virtual object. In some embodiments, the device ceases to display the zoom-level indicator when the third input ends. Adjusting the size of the virtual object in accordance with the magnitude of the input for resizing the object enhances the operability of the device (e.g., by providing the option to resize the object by a desired amount). Reducing the number of inputs needed to perform an operation improves the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while adjusting the size of the representation of the virtual three-dimensional object in the first user interface region (e.g., the staging user interface), the device detects (18034) that the size of the virtual three-dimensional object has reached the predefined default display size of the virtual three-dimensional object. In response to detecting that the size of the virtual three-dimensional object has reached the predefined default display size of the virtual three-dimensional object, the device generates (18036) a tactile output (e.g., a discrete tactile output) to indicate that the virtual three-dimensional object is displayed at the predefined default display size. Figure 11O provides an example of tactile output 11024, which is provided in response to detecting that the size of virtual object 11002 has reached the previous predefined size of virtual object 11002 (e.g., as described with reference to Figures 11M-11O). In some embodiments, the device generates the same tactile output when the size of the virtual object is reset to the default display size in response to a double-tap input. Generating a tactile output in accordance with a determination that the size of the virtual object has reached the predefined default display size provides feedback to the user (e.g., indicating that no further input is needed to return the simulated size of the virtual object to the predefined size). Providing improved tactile feedback enhances the operability of the device (e.g., by providing sensory information that allows the user to perceive that the predefined simulated physical size of the virtual object has been reached, without cluttering the user interface with displayed information), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
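The interplay of magnitude-driven resizing (18032) and the discrete tactile output at the default size (18036) can be sketched as follows. The linear mapping from gesture magnitude to scale factor and the crossing test are illustrative assumptions, not the claimed method.

```python
# Hypothetical sketch: resize in proportion to the pinch magnitude, and
# report a discrete tactile output (modeling tactile output 11024) when
# the size reaches or passes through the predefined default display size.

DEFAULT_DISPLAY_SIZE = 1.0  # illustrative value

def apply_pinch(current_size, pinch_scale):
    """Scale the object by the gesture magnitude.

    pinch_scale > 1.0 models a de-pinch (spread); < 1.0 models a pinch.
    Returns (new_size, haptic), where haptic is True when the size lands
    on or crosses the default display size during this update.
    """
    new_size = current_size * pinch_scale
    d = DEFAULT_DISPLAY_SIZE
    haptic = (new_size == d) or ((current_size - d) * (new_size - d) < 0)
    return new_size, haptic
```

A resize that merely approaches the default without reaching it produces no tactile output, which matches the "discrete" character of the feedback described above.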
In some embodiments, a visual indication of the zoom level (e.g., a slider with a value that corresponds to the current zoom level) is displayed in the first user interface region (e.g., the staging user interface). As the size of the representation of the virtual three-dimensional object is adjusted, the visual indication of the zoom level is adjusted in accordance with the adjusted size of the representation of the virtual three-dimensional object.
In some embodiments, while displaying the representation of the third perspective of the virtual three-dimensional object in the first user interface region (e.g., the staging user interface), the device detects (18042) a fourth input that corresponds to a request to display the virtual three-dimensional object in a second user interface region that includes a field of view of one or more cameras (e.g., cameras embedded in the device). In response to detecting the fourth input, the device displays (18044), via the display generation component, the representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the second user interface region (e.g., displaying the field of view of the one or more cameras in response to the request to display the virtual object in the second user interface region), wherein the field of view of the one or more cameras is a view of the physical environment in which the one or more cameras are located. Displaying the representation of the virtual object includes: rotating the virtual three-dimensional object about a first axis (e.g., an axis that is parallel to the plane of the display (e.g., the x-y plane) in the horizontal direction, such as the x-axis) to a predefined angle (e.g., to a default yaw angle, such as 0 degrees; or to an angle that is aligned with (e.g., parallel to) a plane detected in the physical environment captured in the field of view of the one or more cameras). In some embodiments, the device displays an animation of the three-dimensional object gradually rotating to the predefined angle relative to the first axis. A current angle of the virtual three-dimensional object relative to a second axis (e.g., an axis that is parallel to the plane of the display (e.g., the x-y plane) in the vertical direction, such as the y-axis) is maintained. Rotating the virtual object to the predefined angle about the first axis in response to a request to display the virtual object in the field of view of the one or more cameras (e.g., without further input to reposition the virtual object to a predefined orientation relative to the plane) enhances the operability of the device. Reducing the number of inputs needed to perform an operation improves the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
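When the object is handed from the staging view into the camera view, the rotation just described snaps the angle about the first (horizontal) axis to a predefined value while preserving the user's angle about the second (vertical) axis. A hypothetical sketch, with an illustrative tuple layout and a 0-degree default:

```python
# Illustrative: the predefined first-axis angle might be a default of 0
# degrees, or an angle aligned with a plane detected by the cameras.
PREDEFINED_FIRST_AXIS_ANGLE = 0.0

def enter_camera_view(orientation):
    """orientation is (angle_about_first_axis, angle_about_second_axis).

    Snap the first-axis angle to the predefined value so the object sits
    flush with the detected plane; preserve the second-axis angle that
    the user chose in the staging view.
    """
    _first, second = orientation
    return (PREDEFINED_FIRST_AXIS_ANGLE, second)
```

The point of the asymmetry is that the vertical-axis rotation carries user intent (which side of the object faces the viewer), while the horizontal-axis tilt must match the physical plane.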
In some embodiments, while displaying the representation of the fourth perspective of the virtual three-dimensional object in the first user interface region (e.g., the staging user interface), the device detects (18046) a fifth input that corresponds to a request to return to a two-dimensional user interface that includes a two-dimensional representation of the virtual three-dimensional object. In response to detecting the fifth input, the device (18048): rotates the virtual three-dimensional object (e.g., before displaying the two-dimensional representation of the virtual three-dimensional object and the two-dimensional user interface) to display a perspective of the virtual three-dimensional object that corresponds to the two-dimensional representation of the virtual three-dimensional object; and, after rotating the virtual three-dimensional object to display the respective perspective that corresponds to the two-dimensional representation of the virtual three-dimensional object, displays the two-dimensional representation of the virtual three-dimensional object. In some embodiments, the device displays an animation of the three-dimensional object gradually rotating to display the perspective that corresponds to the two-dimensional representation of the virtual three-dimensional object. In some embodiments, the device also resizes the virtual three-dimensional object, during or after the rotation, to match the size of the two-dimensional representation of the virtual three-dimensional object shown in the two-dimensional user interface. In some embodiments, an animated transition is displayed that shows the rotated virtual three-dimensional object moving toward the position of the two-dimensional representation (e.g., a thumbnail of the virtual object) in the two-dimensional user interface and settling at that position. Rotating the virtual three-dimensional object to a perspective that corresponds to the two-dimensional representation of the virtual three-dimensional object, in response to an input for returning to display of the two-dimensional representation of the virtual three-dimensional object, provides visual feedback (e.g., indicating that the displayed object is two-dimensional). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and avoid attempting to provide input for rotating the two-dimensional object about an axis for which rotation of the two-dimensional object is not available), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, prior to displaying the representation of the first perspective of the virtual three-dimensional object, the device displays (18050) a user interface that includes a representation of the virtual three-dimensional object (e.g., a thumbnail or icon), the representation including a representation of a view of the virtual three-dimensional object from a respective perspective (e.g., a static representation, such as a two-dimensional image that corresponds to the virtual three-dimensional object). While displaying the representation of the virtual three-dimensional object, the device detects (18052) a request to display the virtual three-dimensional object (e.g., a tap input or other selection input directed at the representation of the virtual three-dimensional object). In response to detecting the request to display the virtual three-dimensional object, the device replaces (18054) display of the representation of the virtual three-dimensional object with the virtual three-dimensional object rotated to match the respective perspective of the representation of the virtual three-dimensional object. Figures 11A-11E provide an example of user interface 5060 in which a representation of virtual object 11002 is displayed. In response to a request to display virtual object 11002, as described with reference to Figure 11A, display of virtual object 11002 in staging user interface 6010 replaces display of user interface 5060, as shown in Figure 11E. In Figure 11E, the perspective of virtual object 11002 is the same as the perspective of the representation of virtual object 11002 in Figure 11A. In some embodiments, the representation of the virtual three-dimensional object is enlarged before being replaced by the virtual three-dimensional object (e.g., enlarged to a size that matches the size of the virtual three-dimensional object). In some embodiments, the virtual three-dimensional object is initially displayed at the size of the representation of the virtual three-dimensional object and is then enlarged. In some embodiments, during the transition from the representation of the virtual three-dimensional object to the virtual three-dimensional object, the device gradually enlarges the representation of the virtual three-dimensional object, cross-fades the representation of the virtual three-dimensional object with the virtual three-dimensional object, and then gradually enlarges the virtual three-dimensional object, forming a smooth transition between the representation of the virtual three-dimensional object and the virtual three-dimensional object. In some embodiments, the initial position of the virtual three-dimensional object is selected to correspond to the position of the representation of the virtual three-dimensional object. In some embodiments, the representation of the virtual three-dimensional object is moved to a position that corresponds to the position selected for displaying the virtual three-dimensional object. Replacing display of the (two-dimensional) representation of the virtual three-dimensional object with the virtual three-dimensional object rotated to match the perspective of the (two-dimensional) representation provides visual feedback (e.g., indicating that the three-dimensional object is the same object as the two-dimensional representation of the virtual three-dimensional object). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, prior to displaying the first user interface, the device displays (18056) a two-dimensional user interface that includes a two-dimensional representation of the virtual three-dimensional object. While displaying the two-dimensional user interface that includes the two-dimensional representation of the virtual three-dimensional object, the device detects (18058) a first portion of a touch input (e.g., an increase in contact intensity) that meets preview criteria (e.g., the preview criteria require that the intensity of the press input exceeds a first intensity threshold (e.g., a light-press intensity threshold) and/or that the duration of the press input exceeds a first duration threshold) on the touch-sensitive surface at a position that corresponds to the two-dimensional representation of the virtual three-dimensional object. In response to detecting the first portion of the touch input that meets the preview criteria, the device displays (18060) a preview of the virtual three-dimensional object, the preview being larger than the two-dimensional representation of the virtual three-dimensional object (e.g., the preview is animated to show different perspectives of the virtual three-dimensional object). In some embodiments, the device displays an animation of the three-dimensional object gradually enlarging (e.g., based on the duration or pressure of the input, or at a predetermined animation rate). Displaying a preview of the virtual three-dimensional object (e.g., without replacing display of the currently displayed user interface with a different user interface) enhances the operability of the device (e.g., by enabling the user to view the virtual three-dimensional object and return to viewing the two-dimensional representation of the virtual three-dimensional object without providing inputs for navigating between user interfaces). Reducing the number of inputs needed to perform an operation improves the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while displaying the preview of the virtual three-dimensional object, the device detects (18062) a second portion of the touch input (e.g., by the same continuously maintained contact). In response to detecting the second portion of the touch input (18064): in accordance with a determination that the second portion of the touch input meets menu-display criteria (e.g., the menu-display criteria require that the contact moves in a predefined direction (e.g., upward) by more than a threshold amount), the device displays a plurality of selectable options (e.g., a share menu) that correspond to a plurality of operations associated with the virtual object (e.g., sharing options, such as various means of sharing the virtual object with another device or user); and in accordance with a determination that the second portion of the touch input meets staging criteria (e.g., the staging criteria require that the intensity of the contact exceeds a second threshold intensity (e.g., a deep-press intensity threshold) that is greater than the first threshold intensity), the device replaces display of the two-dimensional user interface that includes the two-dimensional representation of the virtual three-dimensional object with the first user interface that includes the virtual three-dimensional object. Displaying a menu associated with the virtual object, or replacing display of the two-dimensional user interface that includes the two-dimensional representation of the virtual three-dimensional object with the first user interface that includes the virtual three-dimensional object, depending on whether the staging criteria are met, enables multiple different types of operations to be performed in response to the input. Enabling multiple different types of operations to be performed with a first type of input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
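The branching of the second portion of the touch input, where upward movement past a threshold shows the share menu and a deep press past a second intensity threshold replaces the 2D user interface with the staging interface, can be sketched as follows. The numeric thresholds are illustrative assumptions only.

```python
# Hypothetical sketch of the second-portion disambiguation described above.
# Threshold values are made up for illustration.
LIGHT_PRESS = 0.3          # first intensity threshold (gates the preview)
DEEP_PRESS = 0.6           # second, greater intensity threshold (staging)
MENU_SWIPE_DISTANCE = 40   # upward-movement threshold, in points

def classify_second_portion(intensity, upward_movement):
    """Return which behavior the second portion of the touch input triggers."""
    if upward_movement > MENU_SWIPE_DISTANCE:
        return "show-menu"       # menu-display criteria met
    if intensity > DEEP_PRESS:
        return "enter-staging"   # staging criteria met
    return "preview"             # neither met: keep showing the preview
```

In this sketch the two criteria are tested on independent parameters (movement versus intensity) of the same continuously maintained contact, which is what lets one input drive multiple operation types.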
In some embodiments, the first user interface includes (18066) a plurality of controls (e.g., a button for switching to the world view, a button for returning, etc.). Prior to displaying the first user interface, the device displays (18068) a two-dimensional user interface that includes the two-dimensional representation of the virtual three-dimensional object. In response to detecting the request to display the virtual three-dimensional object in the first user interface, the device (18070) displays the virtual three-dimensional object in the first user interface without displaying a set of one or more controls associated with the virtual three-dimensional object; and, after displaying the virtual three-dimensional object in the first user interface, the device displays the set of one or more controls. For example, as described with reference to Figures 11A-11E, before staging user interface 6010 is displayed, user interface 5060, which includes a two-dimensional representation of virtual object 11002, is displayed. In response to a request to display virtual object 11002 in staging user interface 6010 (as described with reference to Figure 11A), virtual object 11002 is displayed (as shown in Figures 11B-11C) without controls 6016, 6018, and 6020 of staging user interface 6010. In Figures 11D-11E, controls 6016, 6018, and 6020 of staging user interface 6010 fade into view in the user interface. In some embodiments, the set of controls includes a control for displaying the virtual three-dimensional object in an augmented reality environment in which the virtual three-dimensional object is placed at a fixed position relative to a plane detected in the field of view of one or more cameras of the device. In some embodiments, in response to detecting the request to display the virtual three-dimensional object in the first user interface: in accordance with a determination that the virtual three-dimensional object is not ready to be displayed in the first user interface (e.g., the three-dimensional model of the virtual object has not been fully loaded when the first user interface is ready to be displayed) (e.g., the load time of the virtual object exceeds a threshold amount of time (e.g., is evident and perceivable to the user)), the device displays a portion of the first user interface (e.g., a background window of the first user interface) without displaying the plurality of controls in the first user interface; in accordance with a determination that the virtual three-dimensional object is ready to be displayed in the first user interface (e.g., after the portion of the first user interface has been displayed without the controls), the device displays (e.g., fades in) the virtual three-dimensional object in the first user interface; and, after displaying the virtual three-dimensional object in the first user interface, the device displays (e.g., fades in) the controls. In response to detecting the request to display the virtual three-dimensional object in the first user interface, and in accordance with a determination that the virtual three-dimensional object is ready to be displayed (e.g., the three-dimensional model of the virtual object has already been loaded when the first user interface is ready to be displayed (e.g., the load time of the virtual object is less than a threshold amount of time (e.g., is negligible and imperceptible to the user))): the device displays the first user interface with the plurality of controls in the first user interface; and the device displays (e.g., without fading in) the virtual three-dimensional object in the first user interface with the plurality of controls. In some embodiments, when the staging user interface is to return to the two-dimensional user interface (e.g., in response to a "back" request), the controls fade out first, before the virtual three-dimensional object transitions into the two-dimensional representation of the virtual three-dimensional object. Displaying the controls after displaying the virtual three-dimensional object in the user interface provides visual feedback (e.g., indicating that the controls for manipulating the virtual object are not available during the amount of time needed to load the virtual object). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid providing inputs for manipulating the object while manipulation operations are unavailable during the load time of the virtual object), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
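The load-time-dependent sequencing just described, where a slow load shows the background first, then fades in the object, then the controls, while a fast load presents everything at once, might be sketched as follows; the threshold value and step names are assumptions for illustration.

```python
# Hypothetical sketch of the staging-interface presentation order.
LOAD_TIME_THRESHOLD = 0.2  # seconds; above this, loading is perceptible

def staging_presentation_steps(load_time):
    """Return the order in which parts of the staging UI are presented."""
    if load_time > LOAD_TIME_THRESHOLD:
        # Slow load: show the background window, fade in the object once
        # its model is loaded, and only then fade in the controls.
        return ["background", "fade-in-object", "fade-in-controls"]
    # Fast (imperceptible) load: interface, object, and controls together.
    return ["interface-with-controls-and-object"]
```

Deferring the controls until the object is visible is what signals to the user that manipulation is unavailable while the model loads.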
It should be understood that the particular order in which the operations in Figures 18A-18I have been described is merely an example and is not intended to indicate that the described order is the only order in which these operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 19000, and 20000) are also applicable in an analogous manner to method 18000 described above with respect to Figures 18A-18I. For example, the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described above with reference to method 18000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 17000, 18000, 19000, and 20000). For brevity, these details are not repeated here.
Figures 19A-19H are flow diagrams illustrating method 19000 of increasing, in accordance with a determination that a first object manipulation behavior has met a first threshold movement magnitude, a second threshold movement magnitude required for a second object manipulation behavior. Method 19000 is performed at an electronic device (e.g., device 300 of Figure 3, or portable multifunction device 100 of Figure 1A) having a display generation component (e.g., a display, a projector, a heads-up display, etc.) and a touch-sensitive surface (e.g., a touch-sensitive surface, or a touch-screen display that serves both as the display generation component and the touch-sensitive surface). Some operations in method 19000 are optionally combined, and/or the order of some operations is optionally changed.
The device displays (19002), via the display generation component, a first user interface region that includes a user interface object associated with a plurality of object manipulation behaviors (e.g., a user interface region that includes a representation of a virtual object), the plurality of object manipulation behaviors including a first object manipulation behavior (e.g., rotation of the user interface object about a respective axis) that is performed in response to input that meets first gesture-recognition criteria (e.g., rotation criteria) and a second object manipulation behavior (e.g., one of translation of the user interface object and scaling of the user interface object) that is performed in response to input that meets second gesture-recognition criteria (e.g., one of translation criteria and scaling criteria). For example, displayed virtual object 11002 is associated with manipulation behaviors that include rotation about a respective axis (e.g., as described with reference to Figures 14B-14E), translation (e.g., as described with reference to Figures 14K-14M), and scaling (e.g., as described with reference to Figures 14G-14I).
While displaying the first user interface region, the device detects (19004) a first portion of an input directed at the user interface object (e.g., the device detects one or more contacts on the touch-sensitive surface at a position that corresponds to the displayed position of the user interface object), including detecting movement of the one or more contacts across the touch-sensitive surface; and, while the one or more contacts are detected on the touch-sensitive surface, the device evaluates the movement of the one or more contacts against both the first gesture-recognition criteria and the second gesture-recognition criteria.
In response to detecting the first part of input, equipment updates the outer of user interface object based on the first part of input
It sees, including (19006): meeting first gesture before meeting second gesture criterion of identification according to the first part of determining input and know
Other standard: first part's (for example, the direction of the first part based on input and/or magnitude) based on input is according to the first object
The appearance (for example, rotator user interface object) of manipulative behavior change user interface object;And (for example, not according to second
In the case where the appearance of object manipulation behavior change user interface object) pass through the threshold value (example of increase second gesture criterion of identification
Such as, threshold value needed for increasing the moving parameter (for example, moving distance, speed etc.) in second gesture criterion of identification) Lai Gengxin the
Two gesture criterion of identification.For example, virtual objects 1102 (contract according to the determining rotation standard that met meeting in Figure 14 E
Before putting standard) it rotates, and the threshold value ST for scaling standard increases to ST '.In some embodiments, for identification
Before the standard of gesture for target rotation obtains satisfaction, by the mark for meeting the gesture for translating or scaling for identification
Quasi- (assuming that unmet before the standard for being used to translate or scale) initiates translation to object or zoom operations are relatively easy.One
The standard that denier is used for the gesture of target rotation for identification is met, and initiation just becomes more the translation of object or zoom operations
difficult (e.g., the criteria for panning and zooming are updated to have increased movement-parameter thresholds), and object manipulation is biased toward the manipulation behavior corresponding to the gesture that has been recognized for manipulating the object. In accordance with a determination that the input meets the second gesture-recognition criteria before meeting the first gesture-recognition criteria: the device changes, based on the first portion of the input (e.g., based on a direction and/or magnitude of the first portion of the input), the appearance of the user interface object in accordance with the second object-manipulation behavior (e.g., translates the user interface object or resizes the user interface object); and (e.g., without changing the appearance of the user interface object in accordance with the first object-manipulation behavior) updates the first gesture-recognition criteria by increasing a threshold of the first gesture-recognition criteria (e.g., increasing the threshold required of a movement parameter (e.g., movement distance, speed, etc.) in the first gesture-recognition criteria). For example, in Figure 14I, the size of virtual object 1102 is increased in accordance with a determination that the scaling criteria are met (before the rotation criteria are met), and the threshold RT of the rotation criteria is increased to RT'. In some embodiments, before the criteria for recognizing a gesture for translating or scaling an object are met, it is relatively easy to initiate a rotation operation on the object by meeting the criteria for recognizing a gesture for rotating (provided the criteria for recognizing a gesture for rotating an object have not been met before). Once the criteria for recognizing a gesture for translating or scaling the object have been met, initiating a rotation operation on the object becomes more difficult (e.g., the criteria for rotating the object are updated to have an increased movement-parameter threshold), and object-manipulation behavior is biased toward the manipulation behavior corresponding to the gesture that has been recognized for manipulating the object. In some embodiments, the appearance of the user interface object is changed dynamically and continuously (e.g., displaying different sizes, positions, viewing angles, reflections, shadows, etc.) in accordance with the value of a corresponding movement parameter of the input. In some embodiments, the device follows a preset correspondence (e.g., a respective correspondence for each type of manipulation behavior) between a movement parameter (e.g., a respective movement parameter for each type of manipulation behavior) and the change made to the appearance of the user interface object (e.g., a respective aspect of the appearance for each type of manipulation behavior). Increasing the first threshold of input movement required for the first object manipulation when the input movement has increased above the second threshold for the second object manipulation enhances the operability of the device (e.g., by helping the user avoid unintentionally performing the second object manipulation while attempting to provide input for performing the first object manipulation). Improving the user's ability to control different types of object manipulation enhances the operability of the device and makes the user-device interface more efficient.
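The threshold-raising ("biasing") scheme described above — once one gesture's criteria are met, the rival gesture's movement threshold is increased (e.g., RT to RT') — can be sketched roughly as follows. This is a minimal illustration, not the patented implementation; the class names and threshold values are invented.

```python
# Hypothetical sketch of the biasing scheme: once one gesture's criteria are
# met, the rival gesture's movement-parameter threshold is raised, favoring
# the recognized manipulation. All names and numbers are illustrative.

class GestureCriteria:
    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold  # movement-parameter threshold (e.g., RT)
        self.met = False

    def feed(self, total_movement):
        """total_movement: cumulative movement-parameter value since the
        gesture began. Returns True once the current threshold is crossed."""
        if total_movement >= self.threshold:
            self.met = True
        return self.met


class ManipulationRecognizer:
    def __init__(self):
        self.scale = GestureCriteria("scale", threshold=50)    # e.g., pinch distance, points
        self.rotate = GestureCriteria("rotate", threshold=12)  # e.g., degrees (RT)

    def feed_scale(self, pinch_total):
        if self.scale.feed(pinch_total) and not self.rotate.met:
            self.rotate.threshold = 18  # scaling recognized first: RT -> RT'
        return self.scale.met

    def feed_rotate(self, degrees_total):
        if self.rotate.feed(degrees_total) and not self.scale.met:
            self.scale.threshold = 90  # rotation recognized first: raise scale threshold
        return self.rotate.met
```

With these made-up numbers, 14 degrees of twist is enough to start a rotation when it arrives before any pinch is recognized, but the same 14 degrees is ignored after a pinch has been recognized, because the rotation threshold has been raised to 18 degrees.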
In some embodiments, after updating the appearance of the user interface object based on the first portion of the input, the device detects (19008) a second portion of the input (e.g., by the same contacts continuously maintained from the first portion of the input, or by different contacts detected after termination (e.g., lift-off) of the contacts in the first portion of the input). In some embodiments, the second portion of the input is detected based on continuously detected input directed to the user interface object. In response to detecting the second portion of the input, the device updates (19010) the appearance of the user interface object based on the second portion of the input, including: in accordance with a determination that the first portion of the input met the first gesture-recognition criteria and the second portion of the input does not meet the updated second gesture-recognition criteria: (e.g., without regard to whether the second portion of the input meets the first gesture-recognition criteria or the original second gesture-recognition criteria) changing the appearance of the user interface object in accordance with the first object-manipulation behavior based on the second portion of the input (e.g., based on a direction and/or magnitude of the second portion of the input), rather than changing the appearance of the user interface object in accordance with the second object-manipulation behavior (e.g., even if the second portion of the input does meet the original, pre-update second gesture-recognition criteria); and in accordance with a determination that the first portion of the input met the second gesture-recognition criteria and the second portion of the input does not meet the updated first gesture-recognition criteria: (e.g., without regard to whether the second portion of the input meets the second gesture-recognition criteria or the original first gesture-recognition criteria) changing the appearance of the user interface object in accordance with the second object-manipulation behavior based on the second portion of the input (e.g., based on a direction and/or magnitude of the second portion of the input), rather than changing the appearance of the user interface object in accordance with the first object-manipulation behavior (e.g., even if the second portion of the input does meet the original, pre-update first gesture-recognition criteria).
In some embodiments (19012), when the appearance of the user interface object is changed in accordance with the first object-manipulation behavior based on the second portion of the input, after the first portion of the input has met the first gesture-recognition criteria, the second portion of the input includes input that met the second gesture-recognition criteria before the second gesture-recognition criteria were updated (e.g., input meeting the original threshold of the movement parameter in the second gesture-recognition criteria, before the threshold was increased) (e.g., the second portion of the input does not include input that meets the updated second gesture-recognition criteria).
In some embodiments (19014), when the appearance of the user interface object is changed in accordance with the second object-manipulation behavior based on the second portion of the input, after the first portion of the input has met the second gesture-recognition criteria, the second portion of the input includes input that met the first gesture-recognition criteria before the first gesture-recognition criteria were updated (e.g., input meeting the original threshold of the movement parameter in the first gesture-recognition criteria, before the threshold was increased) (e.g., the second portion of the input does not include input that meets the updated first gesture-recognition criteria).
In some embodiments (19016), when the appearance of the user interface object is changed in accordance with the first object-manipulation behavior based on the second portion of the input, after the first portion of the input has met the first gesture-recognition criteria, the second portion of the input does not include input that meets the first gesture-recognition criteria (e.g., input meeting the original threshold of the movement parameter in the first gesture-recognition criteria). For example, after the first gesture-recognition criteria have been met once, the input no longer needs to continue to meet the first gesture-recognition criteria in order to cause the first object-manipulation behavior.
In some embodiments (19018), when the appearance of the user interface object is changed in accordance with the second object-manipulation behavior based on the second portion of the input, after the first portion of the input has met the second gesture-recognition criteria, the second portion of the input does not include input that meets the second gesture-recognition criteria (e.g., input meeting the original threshold of the movement parameter in the second gesture-recognition criteria). For example, after the second gesture-recognition criteria have been met once, the input no longer needs to continue to meet the second gesture-recognition criteria in order to cause the second object-manipulation behavior. Performing the first object-manipulation behavior when the second portion of the input includes movement that increases above the increased threshold enhances the operability of the device (e.g., by providing the user with the ability to intentionally perform the second object manipulation, after performing the first object manipulation by meeting the increased criteria, without requiring a new input from the user). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
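One way to read (19016) and (19018) is that recognition latches: after a gesture's criteria have been met once, later portions of the input drive that manipulation behavior directly, without re-crossing any threshold. A minimal sketch of that latching behavior, with all names and numbers invented:

```python
# Hypothetical sketch of "latched" recognition per (19016)/(19018): once the
# gesture-recognition criteria are met, every subsequent movement delta is
# routed to the recognized manipulation behavior, even deltas far below the
# original threshold. Illustrative only.

def run_gesture(deltas, threshold):
    """Accumulate movement until `threshold` is crossed, then latch and apply
    every subsequent delta to the object. Returns the applied deltas."""
    applied = []
    accumulated = 0.0
    latched = False
    for d in deltas:
        if not latched:
            accumulated += abs(d)
            if accumulated >= threshold:
                latched = True  # criteria met once; no need to re-meet them
            else:
                continue        # still below threshold: appearance unchanged
        applied.append(d)       # latched: even tiny deltas manipulate the object
    return applied
```

With these numbers, run_gesture([5, 5, 5, 1, -2], threshold=12) applies [5, 1, -2]: manipulation begins on the delta that crosses the threshold, and the subsequent 1-point and 2-point movements, which would never meet the 12-point threshold on their own, still drive the recognized behavior.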
In some embodiments, updating the appearance of the user interface object based on the second portion of the input includes (19020): in accordance with a determination that the first portion of the input met the second gesture-recognition criteria and the second portion of the input meets the updated first gesture-recognition criteria: changing the appearance of the user interface object in accordance with the first object-manipulation behavior based on the second portion of the input, and changing the appearance of the user interface object in accordance with the second object-manipulation behavior based on the second portion of the input; and in accordance with a determination that the first portion of the input met the first gesture-recognition criteria and the second portion of the input meets the updated second gesture-recognition criteria: changing the appearance of the user interface object in accordance with the first object-manipulation behavior based on the second portion of the input, and changing the appearance of the user interface object in accordance with the second object-manipulation behavior based on the second portion of the input. For example, after the first gesture-recognition criteria are met first and the input then meets the updated second gesture-recognition criteria, the input can now cause both the first object-manipulation behavior and the second object-manipulation behavior. Similarly, after the second gesture-recognition criteria are met first and the input then meets the updated first gesture-recognition criteria, the input can now cause both the first object-manipulation behavior and the second object-manipulation behavior. Updating the object in accordance with both the first object-manipulation behavior and the second object-manipulation behavior, in response to a portion of the input detected after the second gesture-recognition criteria and the updated first gesture-recognition criteria have been met, enhances the operability of the device (e.g., by providing the user with the ability to freely manipulate the object using both the first object manipulation and the second object manipulation after meeting the increased thresholds, without requiring a new input from the user). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
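The two-stage scheme of (19020) — one gesture is recognized first, the rival gesture must then clear its raised threshold, and afterwards the input drives both behaviors freely — might be sketched as follows. The class, method names, and threshold values are all assumptions made for illustration.

```python
# Hypothetical sketch of (19020): after the first criteria are met and the
# *updated* (raised) rival criteria are also met, the input drives both
# manipulation behaviors at once ("free manipulation"). Illustrative only.

class TwoGestureSession:
    def __init__(self, scale_thresholds=(50, 90), rotate_thresholds=(12, 18)):
        self.scale_orig, self.scale_raised = scale_thresholds
        self.rotate_orig, self.rotate_raised = rotate_thresholds
        self.scale_met = False
        self.rotate_met = False

    def _scale_threshold(self):
        # The rival threshold is raised once rotation was recognized first.
        return self.scale_raised if self.rotate_met else self.scale_orig

    def _rotate_threshold(self):
        return self.rotate_raised if self.scale_met else self.rotate_orig

    def feed(self, pinch_total, twist_total):
        """Feed cumulative pinch/twist movement; return the set of behaviors
        the current portion of the input is allowed to drive."""
        if not self.scale_met and pinch_total >= self._scale_threshold():
            self.scale_met = True
        if not self.rotate_met and twist_total >= self._rotate_threshold():
            self.rotate_met = True
        behaviors = set()
        if self.scale_met:
            behaviors.add("scale")
        if self.rotate_met:
            behaviors.add("rotate")
        return behaviors
```

With these numbers, feed(60, 0) recognizes scaling; feed(60, 14) still returns only {"scale"}, because the rotation threshold has been raised to 18; feed(60, 20) returns {"scale", "rotate"}, after which every portion of the input drives both behaviors.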
In some embodiments, after updating the appearance of the user interface object based on the second portion of the input (e.g., after the first gesture-recognition criteria and the updated second gesture-recognition criteria have been met, or after the second gesture-recognition criteria and the updated first gesture-recognition criteria have been met), the device detects (19022) a third portion of the input (e.g., by the same contacts continuously maintained through the first and second portions of the input, or by different contacts detected after termination (e.g., lift-off) of the contacts in the first and second portions of the input). In response to detecting the third portion of the input, the device updates (19024) the appearance of the user interface object based on the third portion of the input, including: changing the appearance of the user interface object in accordance with the first object-manipulation behavior based on the third portion of the input; and changing the appearance of the user interface object in accordance with the second object-manipulation behavior based on the third portion of the input. For example, after the first gesture-recognition criteria and the updated second gesture-recognition criteria have been met, or after both the second gesture-recognition criteria and the updated first gesture-recognition criteria have been met, the input can subsequently cause both the first object-manipulation behavior and the second object-manipulation behavior, without regard to the thresholds in the original or updated first and second gesture-recognition criteria. Updating the object in accordance with both the first object-manipulation behavior and the second object-manipulation behavior, in response to a portion of the input detected after the second gesture-recognition criteria and the updated first gesture-recognition criteria have been met, enhances the operability of the device (e.g., by providing the user with the ability to freely manipulate the object using both the first object manipulation and the second object manipulation after the intent to perform the first object-manipulation type has been demonstrated by meeting the increased threshold, without requiring a new input from the user). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments (19026), the third portion of the input does not include input that meets the first gesture-recognition criteria or input that meets the second gesture-recognition criteria. For example, after the first gesture-recognition criteria and the updated second gesture-recognition criteria have been met, or after both the second gesture-recognition criteria and the updated first gesture-recognition criteria have been met, the input can subsequently cause both the first object-manipulation behavior and the second object-manipulation behavior, without regard to the thresholds in the original or updated first and second gesture-recognition criteria. Updating the object in accordance with both the first object-manipulation behavior and the second object-manipulation behavior, in response to a portion of the input detected after the second gesture-recognition criteria and the updated first gesture-recognition criteria have been met, enhances the operability of the device (e.g., by providing the user with the ability to freely manipulate the object using both the first object manipulation and the second object manipulation after the increased criteria have been met, without requiring a new input from the user). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the plurality of object-manipulation behaviors include (19028) a third object-manipulation behavior (e.g., rotating the user interface object around a respective axis) that is performed in response to input meeting third gesture-recognition criteria (e.g., scaling criteria). Updating the appearance of the user interface object based on the first portion of the input includes (19030): in accordance with a determination that the first portion of the input meets the first gesture-recognition criteria before meeting the second gesture-recognition criteria or the third gesture-recognition criteria: changing, based on the first portion of the input (e.g., based on a direction and/or magnitude of the first portion of the input), the appearance of the user interface object in accordance with the first object-manipulation behavior (e.g., rotating the user interface object); and (e.g., without changing the appearance of the user interface object in accordance with the second object-manipulation behavior) updating the second gesture-recognition criteria by increasing a threshold of the second gesture-recognition criteria (e.g., increasing the threshold required of a movement parameter (e.g., movement distance, speed, etc.) in the second gesture-recognition criteria), and likewise updating the third gesture-recognition criteria by increasing a threshold of the third gesture-recognition criteria. For example, before the criteria for recognizing a gesture for rotating an object are met, it is relatively easy to initiate a translation or zoom operation on the object by meeting the criteria for recognizing a gesture for translating or scaling (provided those criteria have not already been met); once the criteria for recognizing the gesture for rotating the object have been met, initiating a translation or zoom operation on the object becomes more difficult (e.g., the criteria for panning and zooming are updated to have increased movement-parameter thresholds), and object manipulation is biased toward the manipulation behavior corresponding to the gesture that has been recognized for manipulating the object. In accordance with a determination that the first portion of the input meets the second gesture-recognition criteria before meeting the first gesture-recognition criteria or the third gesture-recognition criteria: the device changes, based on the first portion of the input (e.g., based on a direction and/or magnitude of the first portion of the input), the appearance of the user interface object in accordance with the second object-manipulation behavior (e.g., translates the user interface object or resizes the user interface object); and (e.g., without changing the appearance of the user interface object in accordance with the first object-manipulation behavior) updates the first gesture-recognition criteria by increasing a threshold of the first gesture-recognition criteria (e.g., increasing the threshold required of a movement parameter (e.g., movement distance, speed, etc.) in the first gesture-recognition criteria), and likewise updates the third gesture-recognition criteria by increasing a threshold of the third gesture-recognition criteria. For example, before the criteria for recognizing a gesture for translating or scaling an object are met, it is relatively easy to initiate a rotation operation on the object by meeting the criteria for recognizing a gesture for rotating (provided the criteria for recognizing a gesture for rotating an object have not already been met); once the criteria for recognizing a gesture for translating or scaling the object have been met, initiating a rotation operation on the object becomes more difficult (e.g., the criteria for rotating the object are updated to have an increased movement-parameter threshold), and object-manipulation behavior is biased toward the manipulation behavior corresponding to the gesture that has been recognized for manipulating the object. In accordance with a determination that the first portion of the input meets the third gesture-recognition criteria before meeting the first gesture-recognition criteria or the second gesture-recognition criteria: the device changes, based on the first portion of the input (e.g., based on a direction and/or magnitude of the first portion of the input), the appearance of the user interface object in accordance with the third object-manipulation behavior (e.g., resizes the user interface object); and (e.g., without changing the appearance of the user interface object in accordance with the first object-manipulation behavior or the second object-manipulation behavior) updates the first gesture-recognition criteria by increasing a threshold of the first gesture-recognition criteria and updates the second gesture-recognition criteria by increasing a threshold of the second gesture-recognition criteria, so that object manipulation is again biased toward the manipulation behavior corresponding to the gesture that has been recognized for manipulating the object. In some embodiments, the appearance of the user interface object is changed dynamically and continuously (e.g., displaying different sizes, positions, viewing angles, reflections, shadows, etc.) in accordance with the value of a corresponding movement parameter of the input. In some embodiments, the device follows a preset correspondence (e.g., a respective correspondence for each type of manipulation behavior) between a movement parameter (e.g., a respective movement parameter for each type of manipulation behavior) and the change made to the appearance of the user interface object (e.g., a respective aspect of the appearance for each type of manipulation behavior). Updating the object in accordance with the third object-manipulation behavior, in response to a portion of the input detected only when the respective third gesture-recognition criteria are met, enhances the operability of the device (e.g., by helping the user avoid unintentionally performing the third object manipulation while attempting to provide input for performing the first object manipulation or the second object manipulation). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
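The three-behavior variant of (19030) generalizes naturally: the moment any one gesture's criteria are met, the thresholds of the other gestures are raised. The sketch below is a hypothetical generalization, reusing the example threshold values given for two-finger gestures later in this description (19044); the structure and names are invented.

```python
# Hypothetical generalization of (19030) to three behaviors: once any one
# gesture's criteria are met, the thresholds of every not-yet-recognized
# rival gesture are raised, biasing manipulation toward the recognized
# gesture. Threshold values reuse the examples in (19044); names invented.

ORIGINAL = {"translate": 40, "scale": 50, "rotate": 12}
RAISED   = {"translate": 70, "scale": 90, "rotate": 18}

class MultiGestureRecognizer:
    def __init__(self):
        self.thresholds = dict(ORIGINAL)
        self.recognized = set()

    def feed(self, gesture, total_movement):
        """Feed cumulative movement for one gesture; returns True once that
        gesture's (possibly raised) criteria have been met."""
        if gesture in self.recognized:
            return True  # latched: no need to re-meet the criteria
        if total_movement >= self.thresholds[gesture]:
            self.recognized.add(gesture)
            # Raise the thresholds of every not-yet-recognized rival gesture.
            for other in self.thresholds:
                if other not in self.recognized:
                    self.thresholds[other] = RAISED[other]
            return True
        return False
```

After feed("scale", 55) succeeds, feed("rotate", 14) fails because the rotation threshold is now 18 degrees, but feed("translate", 75) still succeeds at the raised 70-point threshold, and a later feed("rotate", 20) clears the raised rotation threshold as well, at which point all three behaviors are available.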
In some embodiments, the plurality of object-manipulation behaviors include (19032) a third object-manipulation behavior that is performed in response to input meeting third gesture-recognition criteria; the first portion of the input does not meet the third gesture-recognition criteria before meeting the first gesture-recognition criteria or the second gesture-recognition criteria; after the first portion of the input meets the first gesture-recognition criteria or the second gesture-recognition criteria, the device updates the third gesture-recognition criteria by increasing a threshold of the third gesture-recognition criteria; and the second portion of the input, detected before the updated first gesture-recognition criteria or the updated second gesture-recognition criteria are met, does not meet the updated third gesture-recognition criteria (e.g., after the first portion of the input meets one of the first gesture-recognition criteria or the second gesture-recognition criteria, the device updates the third gesture-recognition criteria by increasing the threshold of the third gesture-recognition criteria). In response to detecting the third portion of the input (19034): in accordance with a determination that the third portion of the input meets the updated third gesture-recognition criteria (e.g., without regard to whether the third portion of the input meets the first gesture-recognition criteria or the second gesture-recognition criteria (e.g., updated or original)), the device changes, based on the third portion of the input (e.g., based on a direction and/or magnitude of the third portion of the input), the appearance of the user interface object in accordance with the third object-manipulation behavior (e.g., while changing the appearance of the user interface object in accordance with the first object-manipulation behavior and the second object-manipulation behavior (e.g., even if the third portion of the input does not meet the original first gesture-recognition criteria and second gesture-recognition criteria)); and in accordance with a determination that the third portion of the input does not meet the updated third gesture-recognition criteria, the device forgoes changing, based on the third portion of the input, the appearance of the user interface object in accordance with the third object-manipulation behavior (e.g., while changing the appearance of the user interface object in accordance with the first object-manipulation behavior and the second object-manipulation behavior (e.g., even if the third portion of the input does not meet the original first gesture-recognition criteria and second gesture-recognition criteria)). Updating the object in accordance with the first object-manipulation behavior, the second object-manipulation behavior, and the third object-manipulation behavior, in response to detecting a portion of the input after the second gesture-recognition criteria, the updated first gesture-recognition criteria, and the updated third gesture-recognition criteria have been met, enhances the operability of the device (e.g., by providing the user with the ability to freely manipulate the object using the first object-manipulation type, the second object-manipulation type, and the third object-manipulation type after the intent to perform all three object-manipulation types has been established by meeting the increased thresholds, without requiring a new input from the user). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments (19036), the third portion of the input meets the updated third gesture-recognition criteria. After updating the appearance of the user interface object based on the third portion of the input (e.g., after the first gesture-recognition criteria, the updated second gesture-recognition criteria, and the third gesture-recognition criteria have been met, or after the second gesture-recognition criteria, the updated first gesture-recognition criteria, and the third gesture-recognition criteria have been met), the device detects (19038) a fourth portion of the input (e.g., by the same contacts continuously maintained through the first, second, and third portions of the input, or by different contacts detected after termination (e.g., lift-off) of the contacts in the first, second, and third portions of the input). In response to detecting the fourth portion of the input, the device updates (19040) the appearance of the user interface object based on the fourth portion of the input, including: changing the appearance of the user interface object in accordance with the first object-manipulation behavior based on the fourth portion of the input; changing the appearance of the user interface object in accordance with the second object-manipulation behavior based on the fourth portion of the input; and changing the appearance of the user interface object in accordance with the third object-manipulation behavior based on the fourth portion of the input. For example, after the first gesture-recognition criteria, the updated second gesture-recognition criteria, and the third gesture-recognition criteria have been met, or after the second gesture-recognition criteria, the updated first gesture-recognition criteria, and the third gesture-recognition criteria have been met, the input can subsequently cause all three types of manipulation behavior, without regard to the thresholds in the original or updated first, second, and third gesture-recognition criteria.
In some embodiments, the fourth portion of the input does not include (19042): input that meets the first gesture-recognition criteria, input that meets the second gesture-recognition criteria, or input that meets the third gesture-recognition criteria. For example, after the first gesture-recognition criteria, the updated second gesture-recognition criteria, and the third gesture-recognition criteria have been met, or after the second gesture-recognition criteria, the updated first gesture-recognition criteria, and the third gesture-recognition criteria have been met, the input can subsequently cause all three types of manipulation behavior, without regard to the thresholds in the original or updated first, second, and third gesture-recognition criteria. Requiring multiple simultaneously detected contacts for a gesture enhances the operability of the device (e.g., by helping the user avoid unintentionally performing an object manipulation when fewer than the required number of simultaneously detected contacts provide input). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments (19044), the first gesture-recognition criteria and the second gesture-recognition criteria (and the third gesture-recognition criteria) require a first number of concurrently detected contacts (e.g., two contacts) in order to be met. In some embodiments, a single-finger gesture can be used for translation, and the single-finger translation threshold is lower than the two-finger translation threshold. In some embodiments, the original and updated movement thresholds set for the two-finger translation gesture are movement of the centroid of the contacts by 40 points and 70 points, respectively. In some embodiments, the original and updated movement thresholds set for the two-finger rotation gesture are rotational movement of the contacts by 12 degrees and 18 degrees, respectively. In some embodiments, the original and updated movement thresholds set for the two-finger zoom gesture are changes of 50 points and 90 points, respectively, in the distance between the contacts. In some embodiments, the threshold set for the single-finger drag gesture is 30 points.
In some embodiments (19046), the first object manipulation behavior changes the zoom level or display size of the user interface object (e.g., resizing the object by a pinch gesture (e.g., movement of the contacts toward or away from each other after the pinch gesture is recognized in accordance with the first gesture-recognition criteria (e.g., original or updated))), and the second object manipulation behavior changes the rotation angle of the user interface object (e.g., changing the viewing perspective of the user interface object about an external axis or an internal axis by a twist/rotate gesture (e.g., movement of the contacts about a common orbit after the twist/rotate gesture is recognized in accordance with the second gesture-recognition criteria (e.g., original or updated))). For example, the first object manipulation behavior changes the display size of virtual object 11002, as described with reference to Figures 14G-14I, and the second object manipulation behavior changes the rotation angle of virtual object 11002, as described with reference to Figures 14B-14E. In some embodiments, the second object manipulation behavior changes the zoom level or display size of the user interface object (e.g., resizing the object by a pinch gesture (e.g., movement of the contacts toward or away from each other after the pinch gesture is recognized in accordance with the second gesture-recognition criteria (e.g., original or updated))), and the first object manipulation behavior changes the rotation angle of the user interface object (e.g., changing the viewing perspective of the user interface object about an external axis or an internal axis by a twist/rotate gesture (e.g., movement of the contacts about a common orbit after the twist/rotate gesture is recognized in accordance with the first gesture-recognition criteria (e.g., original or updated))).
In some embodiments (19048), the first object manipulation behavior changes the zoom level or display size of the user interface object (e.g., resizing the object by a pinch gesture (e.g., movement of the contacts toward or away from each other after the pinch gesture is recognized in accordance with the first gesture-recognition criteria (e.g., original or updated))), and the second object manipulation behavior changes the position of the user interface object in the first user interface region (e.g., dragging the user interface object by a single-finger or two-finger drag gesture (e.g., movement of the contact in a respective direction after the drag gesture is recognized in accordance with the second gesture-recognition criteria (e.g., original or updated))). For example, the first object manipulation behavior changes the display size of virtual object 11002, as described with reference to Figures 14G-14I, and the second object manipulation behavior changes the position of virtual object 11002 in the user interface, as described with reference to Figures 14B-14E. In some embodiments, the second object manipulation behavior changes the zoom level or display size of the user interface object (e.g., resizing the object by a pinch gesture (e.g., movement of the contacts toward or away from each other after the pinch gesture is recognized in accordance with the second gesture-recognition criteria (e.g., original or updated))), and the first object manipulation behavior changes the position of the user interface object in the first user interface region (e.g., dragging the user interface object by a single-finger or two-finger drag gesture (e.g., movement of the contact in a respective direction after the drag gesture is recognized in accordance with the first gesture-recognition criteria (e.g., original or updated))).
In some embodiments (19050), the first object manipulation behavior changes the position of the user interface object in the first user interface region (e.g., dragging the object by a single-finger or two-finger drag gesture (e.g., movement of the contact in a respective direction after the drag gesture is recognized in accordance with the first gesture-recognition criteria (e.g., original or updated))), and the second object manipulation behavior changes the rotation angle of the user interface object (e.g., changing the viewing perspective of the user interface object about an external axis or an internal axis by a twist/rotate gesture (e.g., movement of the contacts about a common orbit after the twist/rotate gesture is recognized in accordance with the second gesture-recognition criteria (e.g., original or updated))). For example, the first object manipulation behavior changes the position of virtual object 11002, as described with reference to Figures 14B-14E, and the second object manipulation behavior changes the rotation angle of virtual object 11002 in the user interface, as described with reference to Figures 14B-14E. In some embodiments, the second object manipulation behavior changes the position of the user interface object in the first user interface region (e.g., dragging the object by a single-finger or two-finger drag gesture (e.g., movement of the contact in a respective direction after the drag gesture is recognized in accordance with the second gesture-recognition criteria (e.g., original or updated))), and the first object manipulation behavior changes the rotation angle of the user interface object (e.g., changing the viewing perspective of the user interface object about an external axis or an internal axis by a twist/rotate gesture (e.g., movement of the contacts about a common orbit after the twist/rotate gesture is recognized in accordance with the first gesture-recognition criteria (e.g., original or updated))).
In some embodiments (19052), the first portion of the input and the second portion of the input are provided by multiple continuously maintained contacts. The device re-establishes (19054) the first gesture-recognition criteria and the second gesture-recognition criteria (e.g., with the original thresholds) for initiating additional first object manipulation behavior and second object manipulation behavior after detecting liftoff of the multiple continuously maintained contacts. For example, after liftoff of the contacts, the device re-establishes the gesture-recognition thresholds for rotation, translation, and zooming for a newly detected touch input. Re-establishing the movement thresholds for input after the input is terminated by liftoff of the contacts enhances the operability of the device (e.g., by resetting movement thresholds that were increased, thereby reducing the extent of input needed to perform object manipulation each time a new input is provided). Reducing the extent of input needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
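A minimal sketch of the re-establishing behavior in (19054): once all contacts lift off, the next input starts again from the original thresholds. The names and the 1.75x raise factor are assumptions for illustration (1.75 × 40 points = 70 points, matching the translation example given earlier).

```python
# Original thresholds from the example above (pan/zoom in points, rotate in degrees).
ORIGINAL_THRESHOLDS = {"pan": 40.0, "rotate": 12.0, "zoom": 50.0}

class GestureSession:
    """Hypothetical per-input session whose thresholds reset on liftoff."""

    def __init__(self):
        self.thresholds = dict(ORIGINAL_THRESHOLDS)

    def raise_thresholds(self, factor=1.75):
        # Called once a first gesture is recognized mid-input: the remaining
        # gestures now require larger movements to trigger.
        self.thresholds = {k: v * factor for k, v in self.thresholds.items()}

    def on_liftoff(self):
        # All contacts lifted: the next touch input is evaluated against the
        # original rotation, translation, and zoom recognition thresholds.
        self.thresholds = dict(ORIGINAL_THRESHOLDS)
```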
In some embodiments (19056), the first gesture-recognition criteria correspond to rotation about a first axis, and the second gesture-recognition criteria correspond to rotation about a second axis that is orthogonal to the first axis. In some embodiments, instead of updating the thresholds for different types of gestures, the updating also applies to thresholds set for manipulation behaviors that are different subtypes of the type of manipulation behavior corresponding to the recognized gesture type (e.g., a twist/rotate gesture) (e.g., rotation about the first axis as opposed to rotation about a different axis). For example, once rotation about the first axis is recognized and performed, the rotation threshold for rotation about a different axis is updated (e.g., increased) and must be overcome by subsequent input in order to trigger rotation about the different axis. Increasing the movement threshold of the input needed to rotate the object about the second axis when the movement of the input increases above the movement threshold of the input needed to rotate the object about the first axis enhances the operability of the device (e.g., by helping the user avoid rotating the object about the second axis while attempting to rotate the object about the first axis). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
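The per-axis variant in (19056) can be sketched as follows. The class name is invented, and the 12-/18-degree values reuse the rotation example from earlier in this description; they are illustrative assumptions, not the patent's implementation.

```python
class AxisRotationRecognizer:
    """Hypothetical sketch: once rotation about one axis is recognized,
    the orthogonal axis requires a larger twist to trigger."""

    def __init__(self, original=12.0, updated=18.0):
        self.original, self.updated = original, updated
        self.active_axis = None

    def try_rotate(self, axis, degrees):
        """Return True if `degrees` of twist triggers rotation about `axis`."""
        if self.active_axis is None:
            # No axis recognized yet: the original threshold applies.
            if degrees >= self.original:
                self.active_axis = axis
                return True
            return False
        if axis == self.active_axis:
            return True                    # already rotating about this axis
        # Orthogonal axis must overcome the increased (updated) threshold.
        return degrees >= self.updated
```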
It should be understood that the particular order in which the operations in Figures 19A-19H are described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 20000) are also applicable in an analogous manner to method 19000 described above with respect to Figures 19A-19H. For example, the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described above with reference to method 19000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 20000). For brevity, these details are not repeated here.
Figures 20A-20F are flow diagrams illustrating a method 20000 of generating an audio alert in accordance with a determination that movement of the device has moved a virtual object outside of the displayed field of view of one or more device cameras. Method 20000 is performed at an electronic device (e.g., device 300 of Figure 3, or portable multifunction device 100 of Figure 1A) that has a display generation component (e.g., a display, a projector, a heads-up display, etc.), one or more input devices (e.g., a touch-sensitive surface, or a touch-screen display that serves as both the display generation component and the touch-sensitive surface), one or more audio output generators, and one or more cameras. Some operations in method 20000 are, optionally, combined and/or the order of some operations is, optionally, changed.
The device displays (20002), via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of the field of view of one or more cameras (e.g., in response to a request to place the virtual object in an augmented reality view of the physical environment surrounding the device that includes the cameras (e.g., in response to a tap on a "world" button displayed together with a staging view of the virtual object)) (e.g., the first user interface region is a user interface that displays an augmented reality view of the physical environment surrounding the device that includes the cameras), wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected in the physical environment captured in the field of view of the one or more cameras (e.g., the virtual object is displayed on the display at an orientation and a position such that a fixed angle between the representation of the virtual object and the plane is maintained (e.g., the virtual object appears to remain at a fixed location on the plane, or to roll along the plane, in the field of view)). For example, as shown in Figure 15V, virtual object 11002 is displayed in a user interface region that includes the field of view 6036 of the one or more cameras.
The device detects (20004) movement of the device that adjusts the field of view of the one or more cameras (e.g., lateral movement and/or rotation of the device that includes the one or more cameras). For example, as described with reference to Figures 15V-15W, movement of device 100 adjusts the field of view of the one or more cameras.
In response to detecting the movement of the device that adjusts the field of view of the one or more cameras (20006): while the field of view of the one or more cameras is being adjusted, the device adjusts the display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship (e.g., orientation and/or position) between the virtual object and the plane detected in the field of view of the one or more cameras, and, in accordance with a determination that the movement of the device causes more than a threshold amount (e.g., 100%, 50%, or 20%) of the virtual object to move outside of the displayed portion of the field of view of the one or more cameras (e.g., because the spatial relationship between the representation of the virtual object and the plane detected in the physical environment captured in the field of view of the one or more cameras is kept fixed during the movement of the device relative to the physical environment), the device generates a first audio alert via the one or more audio output generators (e.g., a verbal announcement indicating that more than the threshold amount of the virtual object is no longer displayed in the camera view). For example, as described with reference to Figure 15W, audio alert 15118 is generated in response to movement of device 100 that causes virtual object 11002 to move outside of the displayed portion of the field of view 6036 of the one or more cameras. Generating audio output in accordance with a determination that movement of the device causes the virtual object to move outside of the displayed augmented reality view provides the user with feedback indicating the extent to which the movement of the device affects the display of the virtual object relative to the augmented reality view. Providing improved feedback for the user enhances the operability of the device (e.g., by allowing the user to perceive whether the virtual object has moved out of the display, without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, outputting the first audio alert includes (20008) generating audio output that indicates an amount of the virtual object that remains visible in the displayed portion of the field of view of the one or more cameras (e.g., the amount of the virtual object that remains visible is measured relative to the overall size of the virtual object as seen from the current viewing perspective (e.g., 20%, 25%, 50%, etc.)) (e.g., the audio output says, "object x is 20% visible."). For example, in response to movement of device 100 that causes virtual object 11002 to move partially outside of the displayed portion of the field of view 6036 of the one or more cameras, as described with reference to Figures 15X-15Y, audio alert 15126 is generated to include announcement 15128, which indicates, "chair 90% visible, occupying 20% of the screen." Generating audio output that indicates the amount of the virtual object that is visible in the displayed augmented reality view provides the user with feedback (e.g., indicating the extent to which movement of the device changes the visibility of the virtual object). Providing improved feedback for the user (e.g., by allowing the user to perceive whether the virtual object has moved out of the display, without cluttering the display with additional displayed information and without requiring the user to view the display) enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
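The two quantities announced in (20008) and (20010) — the fraction of the object that remains visible, and the fraction of the screen it occupies — can be composed into an announcement string as sketched below. The function name and area-based measurement are assumptions for illustration; the text only specifies the spoken result.

```python
def visibility_announcement(name, visible_area, total_area, screen_area):
    """Compose a spoken alert like 'chair is 90% visible, occupying 20% of the screen.'

    visible_area: on-screen area of the object from the current viewing perspective
    total_area:   the object's full projected area at that perspective
    screen_area:  area of the displayed portion of the camera field of view
    """
    visible_pct = round(100 * visible_area / total_area)   # amount of object visible
    screen_pct = round(100 * visible_area / screen_area)   # amount of screen occupied
    return f"{name} is {visible_pct}% visible, occupying {screen_pct}% of the screen."
```

On an actual device this string would be handed to the audio output generator (e.g., a screen-reader announcement) rather than returned.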
In some embodiments, outputting the first audio alert includes (20010) generating audio output that indicates an amount of the displayed portion of the field of view that is occupied by the virtual object (e.g., the amount of the augmented reality view of the physical environment that is occupied by the virtual object (e.g., 20%, 25%, 50%, etc.)) (e.g., the audio output includes an announcement that says, "object x occupies 15% of the world view"). In some embodiments, the audio output also includes a description of the action performed by the user that caused the change in the display state of the virtual object. For example, the audio output includes an announcement that says, "device moved left; object x is 20% visible, occupying 15% of the world view." For example, in Figure 15Y, audio alert 15126 is generated, which includes announcement 15128 indicating, "chair 90% visible, occupying 20% of the screen." Generating audio output that indicates the amount of the augmented reality view occupied by the virtual object provides the user with feedback (e.g., indicating the extent to which movement of the device changes the extent to which the augmented reality view is occupied). Providing improved feedback for the user enhances the operability of the device (e.g., by providing information that allows the user to perceive the size of the virtual object relative to the display, without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the device detects (20012) an input by a contact on the touch-sensitive surface at a location that corresponds to the representation of the field of view of the one or more cameras (e.g., detecting a tap input or a double-tap input on a portion of the touch screen that displays the augmented reality view of the physical environment). In response to detecting the input, and in accordance with a determination that the input was detected at a first location on the touch-sensitive surface that corresponds to a first portion of the field of view of the one or more cameras that is not occupied by the virtual object, the device generates (20014) a second audio alert (e.g., a click or a buzz indicating failure to locate a virtual object in the tapped region). For example, as described with reference to Figure 15Z, the device generates audio alert 15130 in response to an input detected at a location on touch screen 112 that corresponds to a portion of the field of view 6036 of the one or more cameras that is not occupied by virtual object 11002. In some embodiments, in response to detecting the input, in accordance with a determination that the input was detected at a second location that corresponds to a second portion of the field of view that is occupied by the virtual object, the device forgoes generating the second audio alert. In some embodiments, instead of generating the second audio alert indicating that the user has failed to locate the virtual object, the device generates a different audio alert indicating that the user has located the virtual object. In some embodiments, instead of generating the second audio alert, the device outputs an audible announcement describing an operation performed on the virtual object (e.g., "object x is selected.", "the size of object x is reset to the default size.", "object x is rotated to the default orientation", etc.) or the state of the virtual object (e.g., "object x, 20% visible, occupying 15% of the world view."). Generating audio output in response to an input detected at a location corresponding to a portion of the displayed augmented reality view that is not occupied by the virtual object provides the user with feedback (e.g., indicating that input must be provided at a different location (e.g., to obtain information about the virtual object and/or to perform an operation)). Providing improved feedback for the user enhances the operability of the device (e.g., by providing information that allows the user to perceive whether the input successfully connected with the virtual object, without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
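The branch in (20012)/(20014) is essentially a hit test on the object's on-screen bounds. A minimal sketch, under the assumption (not stated in the text) that the object's footprint is approximated by an axis-aligned rectangle in screen coordinates:

```python
def handle_tap(tap_xy, object_rect):
    """Return which alert a tap produces.

    tap_xy:      (x, y) of the contact on the touch-sensitive surface
    object_rect: (left, top, right, bottom) screen bounds of the virtual object
    """
    x, y = tap_xy
    left, top, right, bottom = object_rect
    if left <= x <= right and top <= y <= bottom:
        # Tap landed on the object: forgo the failure alert and instead
        # announce selection or the object's state.
        return "object selected"
    # Tap landed on empty augmented reality view: second audio alert.
    return "buzz"
```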
In some embodiments, outputting the first audio alert includes generating (20016) audio output that indicates an operation performed on the virtual object (e.g., before generating the audio output, the device determines the currently selected operation, and performs the operation in response to an input (e.g., a double tap) by which the user confirms the intent to perform the currently selected operation), and a resulting state of the virtual object after the operation is performed. For example, the audio output includes an announcement that says, "device moved left; object x is 20% visible, occupying 15% of the world view," "object x rotated clockwise by 30 degrees; object is rotated by 50 degrees about the y-axis," or "object x enlarged by 20%, occupying 50% of the world view." For example, as described with reference to Figures 15AH-15AI, in response to performance of a rotation operation with respect to virtual object 11002, audio alert 15190 is generated, which includes announcement 15192 indicating, "chair rotated counterclockwise by five degrees. Chair is now rotated zero degrees relative to the screen." Generating audio output that indicates the operation performed on the virtual object provides the user with feedback indicating how the input provided by the user affected the virtual object. Providing improved feedback for the user enhances the operability of the device (e.g., by providing information that allows the user to perceive how the operation changed the virtual object, without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments (20018), the audio output of the first audio alert describes the resulting state of the virtual object after the operation is performed relative to a reference frame that corresponds to the physical environment captured in the field of view of the one or more cameras (e.g., after the object is manipulated (e.g., in response to a touch-based gesture or movement of the device), the device generates a verbal description of the new state of the object (e.g., rotated by 30 degrees, rotated by 60 degrees, or moved to the left, relative to the initial position/orientation of the virtual object when the virtual object was initially placed in the augmented reality view of the physical environment)). For example, as described with reference to Figures 15AH-15AI, in response to performance of the rotation operation with respect to virtual object 11002, audio alert 15190 is generated, which includes announcement 15192 indicating, "chair rotated counterclockwise by five degrees. Chair is now rotated zero degrees relative to the screen." In some embodiments, the operation includes movement of the device relative to the physical environment (e.g., causing movement of the virtual object relative to the portion of the physical environment captured in the field of view of the one or more cameras), and the verbal description describes the new state of the virtual object in response to the movement of the device relative to the physical environment. Generating audio output indicating the state of the virtual object after the operation is performed on the object provides the user with feedback that allows the user to perceive how the operation changed the virtual object. Providing improved feedback for the user enhances the operability of the device (e.g., by providing information that allows the user to perceive how the operation changed the virtual object, without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the device detects (20020) additional movement of the device (e.g., lateral movement and/or rotation of the device that includes the one or more cameras) that further adjusts the field of view of the one or more cameras after the first audio alert is generated. For example, as described with reference to Figures 15W-15X, movement of device 100 further adjusts the field of view of the one or more cameras (after the adjustment of the field of view of the one or more cameras that occurred in response to the movement of device 100 from Figure 15V to Figure 15W). In response to detecting the additional movement of the device that further adjusts the field of view of the one or more cameras (20022): while the field of view of the one or more cameras is being further adjusted, the device adjusts the display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship (e.g., orientation and/or position) between the virtual object and the plane detected in the field of view of the one or more cameras, and, in accordance with a determination that the additional movement of the device causes more than a second threshold amount (e.g., 50%, 80%, or 100%) of the virtual object to move within the displayed portion of the field of view of the one or more cameras (e.g., because the spatial relationship between the representation of the virtual object and the plane detected in the physical environment captured in the field of view of the one or more cameras is kept fixed during the movement of the device relative to the physical environment), the device generates the first audio alert via the one or more audio output generators (e.g., audio output including an announcement indicating that more than the threshold amount of the virtual object has moved back into the camera view). For example, as described with reference to Figure 15X, audio alert 15122 (e.g., including an announcement, "chair is now projected in the world, 100% visible, occupying 10% of the screen") is generated in response to movement of device 100 that causes virtual object 11002 to move within the displayed portion of the field of view 6036 of the one or more cameras. Generating audio output in accordance with a determination that movement of the device causes the virtual object to move within the displayed augmented reality view provides the user with feedback indicating the extent to which the movement of the device affects the display of the virtual object relative to the augmented reality view. Providing improved feedback for the user enhances the operability of the device (e.g., by allowing the user to perceive whether the virtual object has moved into the display, without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
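The pair of alerts in (20006) and (20022) amounts to a small state machine that fires on threshold crossings in each direction. A sketch under assumptions: the names are invented, and the 50%/80% values are taken from the example thresholds in the text.

```python
class VisibilityAlerts:
    """Hypothetical sketch: announce when more than exit_threshold of the object
    leaves the displayed field of view, and again when more than enter_threshold
    of it comes back."""

    def __init__(self, exit_threshold=0.5, enter_threshold=0.8):
        self.exit_threshold = exit_threshold      # fraction moved out of view
        self.enter_threshold = enter_threshold    # fraction moved back into view
        self.offscreen = False

    def update(self, visible_fraction):
        """Call as the device moves; returns an alert string on a crossing, else None."""
        if not self.offscreen and (1 - visible_fraction) > self.exit_threshold:
            self.offscreen = True
            return "object moved out of view"
        if self.offscreen and visible_fraction > self.enter_threshold:
            self.offscreen = False
            return "object moved back into view"
        return None
```

Tracking the offscreen state keeps the device from repeating the same alert on every frame of a continuous movement.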
It in some embodiments, is when the expression for showing virtual objects in the first interface region and currently void
When the first object manipulation type of multiple object manipulation types that quasi- Object Selection is suitable for virtual objects, equipment is detected
(20024) request of another object manipulation type suitable for virtual objects is switched to (for example, detection is on touch sensitive surface and aobvious
Show at a part of corresponding position of the first interface region of the expression of the visual field of one or more cameras by contact into
a swipe input (e.g., including movement of the contact in a horizontal direction)). For example, as described with reference to Figure 15AG, while the clockwise rotation control 15170 is currently selected, a swipe input is detected to switch to the counterclockwise rotation control 15180 (for rotating virtual object 15160 counterclockwise). In response to detecting the request to switch to another object manipulation type applicable to the virtual object, the device generates (20026) an audio output that names a second object manipulation type of the multiple object manipulation types applicable to the virtual object (e.g., the audio output includes an announcement such as "rotate object about the x-axis", "adjust object size", or "move object in the plane"), where the second object manipulation type is different from the first object manipulation type. For example, in Figure 15AH, in response to detecting the request described with reference to Figure 15AG, audio alert 15182 is generated, including announcement 15184 ("Selected: rotate counterclockwise"). In some embodiments, the device traverses a predefined list of object manipulation types applicable to the virtual object in response to consecutive swipe inputs in the same direction. In some embodiments, in response to detecting a swipe input in a direction opposite to that of the immediately preceding swipe input, the device generates an audio output that includes an announcement naming the previously announced object manipulation type applicable to the virtual object (e.g., the object manipulation type that was announced before the most recently announced one). In some embodiments, the device does not display a corresponding control for each object manipulation type applicable to the virtual object (e.g., does not display a button or control for an operation (e.g., rotation, resizing, translation, etc.) that is initiated by gesture). Generating an audio output in response to the request to switch object manipulation types provides the user with feedback indicating that the switching operation has been performed. Providing improved feedback to the user enhances the operability of the device (e.g., by providing information confirming that the switching input was successfully executed, without cluttering the display with additional displayed information and without requiring the user to look at the display), and makes the user-device interface more efficient, which additionally reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
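The wrap-around traversal of the predefined manipulation-type list described above, including re-announcing the previous type on an opposite-direction swipe, can be sketched as follows. This is a minimal illustrative sketch, not the disclosed implementation; the type names, announcement wording, and class name are hypothetical.

```python
# Hypothetical sketch of the swipe-driven selection cycle: each swipe advances
# (or steps back) through a predefined list and produces an announcement.

MANIPULATION_TYPES = [
    "rotate clockwise",
    "rotate counterclockwise",
    "adjust size",
    "move in plane",
]

class ManipulationSelector:
    def __init__(self, types):
        self.types = types
        self.index = 0

    def swipe(self, direction):
        """direction is +1 for a swipe, -1 for a swipe in the opposite
        direction. Stepping back naturally re-announces the previously
        named manipulation type. The list wraps around at either end."""
        self.index = (self.index + direction) % len(self.types)
        return f"Selected: {self.types[self.index]}"  # stands in for audio output

selector = ManipulationSelector(MANIPULATION_TYPES)
print(selector.swipe(+1))  # Selected: rotate counterclockwise
print(selector.swipe(+1))  # Selected: adjust size
print(selector.swipe(-1))  # Selected: rotate counterclockwise
```

Note that no per-type button needs to be displayed for this to work: the selection state lives entirely in the index, matching the text's observation that controls need not be shown for every manipulation type.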
In some embodiments, after generating (20028) the audio output that names the second object manipulation type of the multiple object manipulation types applicable to the virtual object (e.g., an audio output that includes an announcement such as "rotate object about the x-axis", "adjust object size", or "move object in the plane"), the device detects a request to perform an object manipulation behavior that corresponds to the currently selected object manipulation type (e.g., detecting a double-tap input by a contact at a location on the touch-sensitive surface that corresponds to a portion of the first user interface region that displays the representation of the field of view of the one or more cameras). For example, as described with reference to Figure 15AH, a double-tap input is detected to rotate virtual object 11002 counterclockwise. In response to detecting the request to perform the object manipulation behavior that corresponds to the currently selected object manipulation type, the device performs (20030) the object manipulation behavior that corresponds to the second object manipulation type (e.g., rotating the virtual object by 5 degrees about the y-axis, increasing the size of the object by 5%, or moving the object by 20 pixels in the plane) (e.g., adjusting the display of the representation of the virtual object in the first user interface region in accordance with the second object manipulation type). For example, in Figure 15AI, virtual object 11002 is rotated counterclockwise in response to detecting the request described with reference to Figure 15AH. In some embodiments, in addition to performing the object manipulation behavior that corresponds to the second object manipulation type, the device also outputs an audio output that includes an announcement indicating the object manipulation behavior performed with respect to the virtual object and the resulting state of the virtual object after the object manipulation behavior has been performed. For example, in Figure 15AI, audio output 15190 is generated, which includes announcement 15192 ("Chair rotated five degrees counterclockwise. Chair is now rotated zero degrees relative to the screen."). Performing an object manipulation operation in response to an input detected while an operation is selected provides additional control options for performing the operation (e.g., allowing the user to perform the operation by providing a double-tap input rather than requiring a multi-contact gesture). Providing additional control options for input without cluttering the user interface with additional displayed controls enhances the operability of the device (e.g., by providing users with limited ability to provide multi-contact gestures with an option for manipulating objects), and makes the user-device interface more efficient, which additionally reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
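The pattern above — apply one increment of the currently selected manipulation, then announce the action and the resulting state — can be sketched as follows. The step sizes (5 degrees, 5%, 20 pixels) follow the examples in the text; the function and field names are hypothetical.

```python
# Illustrative sketch: a double-tap applies the currently selected manipulation
# type once and yields an announcement describing the action and result state.

class VirtualObject:
    def __init__(self, name):
        self.name = name
        self.rotation_deg = 0   # rotation relative to the screen
        self.scale_pct = 100
        self.position_px = 0

def perform_selected(obj, selected_type):
    """Apply one increment of the selected manipulation and return the
    announcement text (which stands in for the device's audio output)."""
    if selected_type == "rotate counterclockwise":
        obj.rotation_deg -= 5
        return (f"{obj.name} rotated five degrees counterclockwise. "
                f"{obj.name} is now rotated {abs(obj.rotation_deg)} degrees "
                f"relative to the screen.")
    if selected_type == "adjust size":
        obj.scale_pct += 5
        return f"{obj.name} scaled to {obj.scale_pct} percent."
    if selected_type == "move in plane":
        obj.position_px += 20
        return f"{obj.name} moved 20 pixels."
    raise ValueError(f"unknown manipulation type: {selected_type}")

chair = VirtualObject("chair")
print(perform_selected(chair, "rotate counterclockwise"))
```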
In some embodiments, in response to detecting the request to switch to another object manipulation type applicable to the virtual object (20032): in accordance with a determination that the second object manipulation type is a continuously adjustable manipulation type, the device generates an audio alert, along with the audio output naming the second object manipulation type, to indicate that the second object manipulation type is a continuously adjustable manipulation type (e.g., an audio output saying "adjustable" is output after the audio announcement naming the second object manipulation type (e.g., "rotate object clockwise about the y-axis")); the device detects a request to perform an object manipulation behavior that corresponds to the second object manipulation type, including detecting a swipe input at a location on the touch-sensitive surface that corresponds to a portion of the first user interface region that displays the representation of the field of view of the one or more cameras (e.g., the swipe input is detected after a double-tap input by a contact is detected at a location on the touch-sensitive surface that corresponds to a portion of the first user interface region that displays the representation of the field of view of the one or more cameras); and in response to detecting the request to perform the object manipulation behavior that corresponds to the second object manipulation type, the device performs the object manipulation behavior that corresponds to the second object manipulation type by an amount that corresponds to a magnitude of the swipe input (e.g., rotating the virtual object by 5 degrees or 10 degrees about the y-axis, increasing the size of the object by 5% or 10%, or moving the object by 20 pixels or 40 pixels in the plane, depending on whether the magnitude of the swipe input is a first amount or a second amount greater than the first amount). For example, as described with reference to Figures 15J to 15K, while the clockwise rotation control 15038 is currently selected, a swipe input is detected to switch to the scale control 15064. Audio alert 15066 is generated, which includes announcement 15068 ("Scale: adjustable"). As described with reference to Figures 15K to 15L, a swipe input is detected for enlarging virtual object 11002, and in response to the input, a scaling operation is performed on virtual object 11002 (in the illustrative example of Figures 15K to 15L, the input for the continuously adjustable manipulation is detected while staging view interface 6010 is displayed, but it will be recognized that a similar input could be detected at a location on the touch-sensitive surface that corresponds to a portion of the first user interface region that displays the representation of the field of view of the one or more cameras). In some embodiments, in addition to performing the second object manipulation behavior, the device also outputs an audio announcement that indicates the amount of the object manipulation behavior performed with respect to the virtual object and the resulting state of the virtual object after the object manipulation behavior has been performed. Performing the object manipulation operation in response to a swipe input provides additional control options for performing the operation (e.g., allowing the user to perform the operation by providing a swipe input rather than requiring a multi-contact input). Providing additional control options for input without cluttering the user interface with additional displayed controls enhances the operability of the device (e.g., by providing users with limited ability to provide multi-contact gestures with an option for manipulating objects), and makes the user-device interface more efficient, which additionally reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
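The continuously adjustable behavior described above — a larger swipe magnitude producing a proportionally larger adjustment — can be sketched with a simple mapping. The scaling factors and parameter names here are hypothetical, not taken from the disclosure.

```python
# Sketch of a continuously adjustable manipulation: the adjustment amount
# scales linearly with the swipe magnitude (e.g., 5 vs. 10 degrees of
# rotation for a short vs. long swipe).

def adjustment_amount(swipe_magnitude_px, unit_amount, unit_swipe_px=50):
    """Map a swipe magnitude in pixels to a manipulation amount, where a
    swipe of unit_swipe_px pixels produces unit_amount of adjustment."""
    return unit_amount * (swipe_magnitude_px / unit_swipe_px)

# A 50 px swipe rotates by 5 degrees; a 100 px swipe rotates by 10 degrees.
assert adjustment_amount(50, unit_amount=5) == 5.0
assert adjustment_amount(100, unit_amount=5) == 10.0
# The same mapping can drive resizing (5% vs. 10%) or movement (20 vs. 40 px).
assert adjustment_amount(100, unit_amount=20, unit_swipe_px=100) == 20.0
```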
In some embodiments, before displaying the representation of the virtual object in the first user interface region, the device displays (20034) a representation of the virtual object in a second user interface region (e.g., a staging user interface), where the second user interface region does not include a representation of the field of view of the one or more cameras (e.g., the second user interface region is a staging user interface in which the virtual object can be manipulated (e.g., rotated, resized, and moved) without maintaining a fixed relationship to a plane detected in the physical environment captured in the field of view of the cameras). While displaying the representation of the virtual object in the second user interface region, with a first operation of multiple operations applicable to the virtual object currently selected for the virtual object, the device detects (20036) a request to switch to another operation applicable to the virtual object (e.g., including a request to switch an object manipulation type applicable to the virtual object in the second user interface region (e.g., resizing, rotation, tilting, etc.) or a user interface operation applicable to the virtual object in the second user interface region (e.g., returning to a 2D user interface, or dropping the object into an augmented reality view of the physical environment)) (e.g., detecting the request includes detecting a swipe input by a contact at a location on the touch-sensitive surface that corresponds to the first user interface region (e.g., including movement of the contact in a horizontal direction)). For example, as described with reference to Figures 15F to 15G, while staging user interface 6010 is displayed and the tilt-down control 15022 is currently selected, a swipe input is detected to switch to the clockwise rotation control 15038. In response to detecting the request to switch to another operation applicable to the virtual object in the second user interface region, the device generates (20038) an audio output that names a second operation of the multiple operations applicable to the virtual object (e.g., the audio output includes an announcement such as "rotate object about the x-axis", "adjust object size", "tilt object toward the display", or "display object in augmented reality view"), where the second operation is different from the first operation. In some embodiments, the device traverses a predefined list of applicable operations in response to consecutive swipe inputs in the same direction. For example, in Figure 15G, in response to detecting the request described with reference to Figure 15F, audio alert 15040 is generated, including announcement 15042 ("Selected: rotate clockwise button"). Generating an audio output naming the selected operation type in response to a request to switch operation types provides the user with feedback indicating that the switching input has been successfully received. Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows the user to perceive when the selected control has changed, without cluttering the display with additional displayed information and without requiring the user to look at the display), and makes the user-device interface more efficient, which additionally reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, before displaying the representation of the virtual object in the first user interface region (20040): while the representation of the virtual object is displayed in a second user interface region (e.g., a staging user interface) that does not include a representation of the field of view of the one or more cameras (e.g., the second user interface region is a staging user interface in which the virtual object can be manipulated (e.g., rotated, resized, and moved) without maintaining a fixed relationship to a plane in the physical environment), the device detects a request to display the representation of the virtual object in the first user interface region that includes the representation of the field of view of the one or more cameras (e.g., a double-tap input is detected while the currently selected operation is "display object in augmented reality view" and after the device has output an audio announcement naming the currently selected operation in response to a swipe input (e.g., a swipe input received just before the double-tap input)). For example, as described with reference to Figures 15P to 15V, while staging user interface 6010 is displayed and toggle control 6018 is selected, a double-tap input is detected to display the representation of virtual object 11002 in the user interface region that includes the representation of the field of view 6036 of the one or more cameras. In response to detecting the request to display the representation of the virtual object in the first user interface region that includes the representation of the field of view of the one or more cameras: the device displays the representation of the virtual object in the first user interface region in accordance with a first spatial relationship between the representation of the virtual object and a plane detected in the physical environment captured in the field of view of the one or more cameras (e.g., when the virtual object is dropped into the physical environment represented in the augmented reality view, the rotation angle and size of the virtual object in the staging view are maintained in the augmented reality view, while the tilt angle is reset in the augmented reality view in accordance with the orientation of the plane detected in the physical environment captured in the field of view); and the device generates a fourth audio alert indicating that the virtual object has been placed in the augmented reality view relative to the physical environment captured in the field of view of the one or more cameras. For example, as described with reference to Figure 15V, in response to the input for displaying the representation of virtual object 11002 in the user interface region that includes the representation of the field of view 6036 of the one or more cameras, the representation of virtual object 11002 is displayed in the user interface region that includes the field of view 6036 of the one or more cameras, and audio alert 15114 is generated, which includes announcement 15116 ("Chair is now projected in the world, 100% visible, occupying 10% of the screen"). Generating an audio output in response to a request to place the object in the augmented reality view provides the user with feedback indicating that the operation of placing the virtual object has been successfully performed. Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows the user to perceive that the object is displayed in the augmented reality view, without cluttering the display with additional displayed information and without requiring the user to look at the display), and makes the user-device interface more efficient, which additionally reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, a third audio alert indicates (20042) information about the appearance of the virtual object relative to the portion of the field of view of the one or more cameras (e.g., the third audio alert includes an audio output that includes an announcement saying "object x is placed in the world, object x is 30% visible, occupying 90% of the screen"). For example, as described with reference to Figure 15V, audio alert 15114 is generated, which includes announcement 15116 ("Chair is now projected in the world, 100% visible, occupying 10% of the screen"). Generating an audio output indicating the appearance of the virtual object as visible in the displayed augmented reality view provides feedback to the user (e.g., indicating the degree to which the placement of the object in the augmented reality view affects the appearance of the virtual object). Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows the user to perceive how the object is displayed in the augmented reality view, without cluttering the display with additional displayed information and without requiring the user to look at the display), and makes the user-device interface more efficient, which additionally reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
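One plausible way to derive the two percentages in such an announcement ("30% visible, occupying 90% of the screen") is from the on-screen bounding rectangles of the object and the display. This is an assumption for illustration; the disclosure does not specify the computation, and the (x, y, width, height) rectangle representation is hypothetical.

```python
# Sketch: derive "percent visible" and "percent of screen occupied" from
# axis-aligned bounding rectangles given as (x, y, width, height).

def rect_area(r):
    return r[2] * r[3]

def intersection_area(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return w * h if w > 0 and h > 0 else 0

def appearance_percentages(object_rect, screen_rect):
    visible = intersection_area(object_rect, screen_rect)
    pct_visible = round(100 * visible / rect_area(object_rect))
    pct_of_screen = round(100 * visible / rect_area(screen_rect))
    return pct_visible, pct_of_screen

# An object fully on screen that covers a tenth of the display:
assert appearance_percentages((0, 0, 10, 10), (0, 0, 40, 25)) == (100, 10)
```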
In some embodiments, in conjunction with the placement of the virtual object in the augmented reality view relative to the physical environment captured in the field of view of the one or more cameras, the device generates (20044) a tactile output. For example, when the object is placed on a plane detected in the camera view, the device generates a tactile output indicating that the object has landed on the plane. In some embodiments, the device generates a tactile output when the object reaches a predefined default size during resizing of the object. In some embodiments, the device generates a tactile output for each operation performed with respect to the virtual object (e.g., for each rotation by a predefined angular amount, for dragging the virtual object onto a different plane, for resetting the object to its original orientation and/or size, etc.). In some embodiments, these tactile outputs precede the corresponding audio alerts that describe the performed operations and the resulting states of the virtual object. For example, as described with reference to Figure 15V, tactile output 15118 is generated in conjunction with the placement of virtual object 11002 in the field of view 6036 of the one or more cameras. Generating a tactile output in conjunction with placing the virtual object relative to the physical environment captured by the one or more cameras provides feedback to the user (e.g., indicating that the operation of placing the virtual object has been successfully performed). Providing improved feedback to the user enhances the operability of the device (e.g., by providing sensory information that allows the user to perceive that the placement of the virtual object has occurred, without cluttering the user interface with displayed information), and makes the user-device interface more efficient, which additionally reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
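The ordering constraint described above — the tactile output precedes the audio alert describing the result — can be made explicit in a tiny sketch. The event and function names are hypothetical.

```python
# Sketch: for a placement event, emit the tactile output first, then the
# descriptive audio alert, matching the ordering described in the text.

def feedback_for_placement(object_name, plane_name):
    """Return the feedback events in order: tactile first, then audio."""
    return [
        ("tactile", f"{object_name} landed on {plane_name}"),
        ("audio", f"{object_name} is now projected in the world"),
    ]

events = feedback_for_placement("chair", "floor plane")
kinds = [kind for kind, _ in events]
assert kinds == ["tactile", "audio"]  # tactile output precedes the audio alert
```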
In some embodiments, the device displays (20046) a first control at a first location in the first user interface region (e.g., among multiple controls displayed at different locations in the first user interface region) while displaying the representation of the field of view of the one or more cameras. In accordance with a determination that control-fade criteria are met (e.g., the control-fade criteria are met when no touch input is detected on the touch-sensitive surface for at least a threshold amount of time while the first user interface region is displayed), the device ceases (20048) to display the first control in the first user interface region (e.g., along with all other controls in the first user interface region) while maintaining display of the representation of the field of view of the one or more cameras in the first user interface region (e.g., the controls are not redisplayed when the user moves the device relative to the physical environment). While displaying the first user interface region without displaying the first control in the first user interface region, the device detects (20050) a touch input on the touch-sensitive surface at a location that corresponds to the first location in the first user interface region. In response to detecting the touch input, the device generates (20052) a fifth audio alert that includes an audio output specifying an operation that corresponds to the first control (e.g., "return to staging view" or "rotate object about the y-axis"). In some embodiments, in response to detecting the touch input, the device also redisplays the first control at the first location. In some embodiments, once the user knows the locations of the controls on the display, redisplaying a control and making it the currently selected control upon a touch input at the control's usual location on the display provides a more efficient way to access the control than browsing the available controls with a series of swipe inputs. Automatically ceasing to display the control in accordance with a determination that the control-fade criteria are met reduces the number of inputs needed to cease displaying the control. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
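The control-fade behavior described above — hide the controls after a threshold of touch inactivity, then redisplay and announce a control when its location is touched — can be sketched as follows. The threshold value, class name, and location keys are hypothetical.

```python
# Sketch of control fading: controls disappear after FADE_THRESHOLD_S seconds
# without a touch; touching a hidden control's location redisplays the
# controls and yields the announcement for that control's operation.

FADE_THRESHOLD_S = 5.0

class ControlPanel:
    def __init__(self, controls):
        self.controls = controls            # {location: operation name}
        self.visible = True
        self.last_touch_time = 0.0

    def tick(self, now):
        """Hide the controls once no touch has occurred for the threshold."""
        if self.visible and now - self.last_touch_time >= FADE_THRESHOLD_S:
            self.visible = False

    def touch(self, location, now):
        self.last_touch_time = now
        if not self.visible:
            self.visible = True             # redisplay the faded controls
            op = self.controls.get(location)
            return op                       # stands in for the audio alert
        return None

panel = ControlPanel({"top-left": "return to staging view"})
panel.tick(now=6.0)                         # threshold elapsed: controls fade
assert panel.visible is False
assert panel.touch("top-left", now=7.0) == "return to staging view"
assert panel.visible is True
```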
It should be understood that the particular order in which the operations in Figures 20A to 20F have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art will recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 19000) are also applicable in an analogous manner to method 20000 described above with respect to Figures 20A to 20F. For example, the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described above with reference to method 20000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 19000). For brevity, these details are not repeated here.
The operations described above with reference to Figures 8A to 8E, Figures 9A to 9D, Figures 10A to 10D, Figures 16A to 16G, Figures 17A to 17D, Figures 18A to 18I, Figures 19A to 19H, and Figures 20A to 20F are, optionally, implemented by the components depicted in Figures 1A to 1B. For example, display operations 802, 806, 902, 906, 910, 1004, 1008, 16004, 17004, 18002, 19002, and 20002; detection operations 804, 904, 908, 17006, 18004, 19004, and 20004; changing operation 910; receiving operations 1002, 1006, 16002, and 17002; ceasing operation 17008; rotating operation 18006; updating operation 19006; adjusting operation 20006; and generating operation 20006 are, optionally, implemented by event sorter 170, event recognizer 180, and event handler 190. Event monitor 171 in event sorter 170 detects a contact on touch-sensitive display 112, and event dispatcher module 174 delivers the event information to application 136-1. A respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186 and determines whether a first contact at a first location on the touch-sensitive surface (or whether a rotation of the device) corresponds to a predefined event or sub-event, such as selection of an object on a user interface, or rotation of the device from one orientation to another. When a respective predefined event or sub-event is detected, event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. Event handler 190 optionally uses or calls data updater 176 or object updater 177 to update the application internal state 192. In some embodiments, event handler 190 accesses a respective GUI updater 178 to update what is displayed by the application. Similarly, it would be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted in Figures 1A to 1B.
The foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best use the invention and the various described embodiments with various modifications as are suited to the particular use contemplated.
Claims (209)
1. A method, comprising:
at a device having a display, a touch-sensitive surface, and one or more cameras:
displaying a representation of a virtual object in a first user interface region on the display;
while displaying the representation of the virtual object in the first user interface region on the display, detecting a first input by a contact at a location on the touch-sensitive surface that corresponds to the representation of the virtual object on the display;
in response to detecting the first input by the contact:
in accordance with a determination that the first input by the contact meets first criteria:
displaying a second user interface region on the display, including replacing display of at least a portion of the first user interface region with a representation of a field of view of the one or more cameras; and
continuously displaying the representation of the virtual object while switching from displaying the first user interface region to displaying the second user interface region.
2. The method of claim 1, wherein the first criteria include a criterion that is met when the contact remains at the location on the touch-sensitive surface that corresponds to the representation of the virtual object with less than a threshold amount of movement for at least a predefined amount of time.
3. The method of claim 1, wherein:
the device includes one or more sensors to detect intensities of contacts with the touch-sensitive surface; and
the first criteria include a criterion that is met when a characteristic intensity of the contact increases above a first intensity threshold.
4. The method of claim 1, wherein the first criteria include a criterion that is met when movement of the contact meets predefined movement criteria.
5. The method of any of claims 1 to 4, wherein:
the device includes one or more tactile output generators; and
the method includes, in response to detecting the first input by the contact and in accordance with a determination that the first input by the contact has met the first criteria, outputting, with the one or more tactile output generators, a tactile output indicating that the first input meets the first criteria.
6. The method of any of claims 1 to 5, including:
in response to detecting at least an initial portion of the first input, analyzing the field of view of the one or more cameras to detect one or more planes in the field of view of the one or more cameras; and
after detecting a respective plane in the field of view of the one or more cameras, determining a size and/or a position of the representation of the virtual object based on a position of the respective plane relative to the field of view of the one or more cameras.
7. The method of claim 6, wherein the analysis of the field of view of the one or more cameras to detect the one or more planes in the field of view of the one or more cameras is initiated in response to detecting the contact at the location on the touch-sensitive surface that corresponds to the representation of the virtual object on the display.
8. The method of claim 6, wherein the analysis of the field of view of the one or more cameras to detect the one or more planes in the field of view of the one or more cameras is initiated in response to detecting that the first input by the contact meets the first criteria.
9. The method of claim 6, wherein the analysis of the field of view of the one or more cameras to detect the one or more planes in the field of view of the one or more cameras is initiated in response to detecting that an initial portion of the first input meets plane-detection trigger criteria and does not meet the first criteria.
10. The method of any of claims 6 to 9, including:
displaying the representation of the virtual object in the second user interface region in a manner such that the virtual object is oriented at a predefined angle relative to the respective plane detected in the field of view of the one or more cameras.
11. The method of claim 10, wherein:
the device includes one or more tactile output generators; and
the method includes, in response to detecting the respective plane in the field of view of the one or more cameras, outputting, with the one or more tactile output generators, a tactile output indicating that the respective plane has been detected in the field of view of the one or more cameras.
12. The method of any of claims 10 to 11, wherein:
the device includes one or more tactile output generators; and
the method includes:
while switching from displaying the first user interface region to displaying the second user interface region, displaying an animation of the representation of the virtual object transitioning to a predefined position relative to the respective plane in the second user interface region; and
in conjunction with displaying the representation of the virtual object at the predefined angle relative to the respective plane, outputting, with the one or more tactile output generators, a tactile output indicating that the virtual object is displayed in the second user interface region at the predefined angle relative to the respective plane.
13. The method of claim 12, wherein the tactile output has a tactile output profile that corresponds to a characteristic of the virtual object.
14. The method of any of claims 10 to 13, including:
while displaying the representation of the virtual object in the second user interface region, detecting movement of the device that adjusts the field of view of the one or more cameras; and
in response to detecting the movement of the device, adjusting the representation of the virtual object in the second user interface region in accordance with a fixed spatial relationship between the virtual object and the respective plane in the field of view of the one or more cameras while the field of view of the one or more cameras is adjusted.
15. according to claim 1 to method described in any one of 14, comprising: be shown in from display first user interface
Region is switched to the animation that the expression of the virtual objects is continuously displayed when showing the second user interface zone.
16. according to claim 1 to method described in any one of 15, comprising:
When showing the second user interface zone on the display, the second input that detection is carried out by the second contact,
Wherein second input includes the second contact moving along first path on the display;And
In response to detecting second input carried out by second contact, along corresponding with the first path second
The expression of the virtual objects in the mobile second user interface zone in path.
17. The method of claim 16, including, while the representation of the virtual object moves along the second path, adjusting a size of the representation of the virtual object based on the movement of the contact and a respective plane that corresponds to the virtual object.
18. The method of claim 16, comprising:
while the representation of the virtual object moves along the second path, maintaining a first size of the representation of the virtual object;
detecting termination of the second input by the second contact; and
in response to detecting the termination of the second input by the second contact:
placing the representation of the virtual object at a drop-off position in the second user interface region; and
displaying, at the drop-off position in the second user interface region, the representation of the virtual object at a second size that is different from the first size.
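Claims 16 to 18 describe a drag gesture in which the object's on-screen size is held constant while it follows the contact, then changes when the object is dropped onto a plane. A minimal, hedged sketch of that behavior (all names and constants are illustrative, not from the patent):

```python
class DragState:
    """Sketch of the drag-then-drop sizing behavior of claims 16-18."""
    FIRST_SIZE = 120.0  # on-screen size (points) held constant while dragging

    def __init__(self):
        self.position = None

    def move(self, point):
        # While the contact moves along the first path, the representation
        # follows a corresponding second path at an unchanged first size.
        self.position = point
        return self.FIRST_SIZE

    def end(self, plane_scale: float) -> float:
        # On termination of the input, the object is placed at the drop-off
        # position and shown at a second, plane-dependent size (e.g. scaled
        # by the apparent depth of the plane it lands on).
        return self.FIRST_SIZE * plane_scale
```

The key design point is that resizing is deferred to the drop: during the drag the user tracks the object at a stable size, and only the final placement commits it to the scale of the underlying plane.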
19. The method of any one of claims 16 to 18, including, in accordance with a determination that the movement of the second contact on the display along the first path meets second criteria:
ceasing to display the second user interface region that includes the representation of the field of view of the one or more cameras; and
redisplaying the first user interface region with the representation of the virtual object.
20. The method of claim 19, comprising:
in conjunction with redisplaying the first user interface region, displaying an animated transition from displaying the representation of the virtual object in the second user interface region to displaying the representation of the virtual object in the first user interface region.
21. The method of any one of claims 16 to 20, including, while the second contact moves along the first path, changing a visual appearance of one or more respective planes, identified in the field of view of the one or more cameras, that correspond to a current location of the contact.
22. The method of any one of claims 1 to 21, including, in response to detecting the first input by the contact, in accordance with a determination that the first input by the contact meets third criteria, displaying a third user interface region on the display, including replacing display of at least a portion of the first user interface region.
23. The method of any one of claims 1 to 22, including, in accordance with a determination that the first input by the contact does not meet the first criteria, maintaining display of the first user interface region without replacing display of at least a portion of the first user interface region with the representation of the field of view of the one or more cameras.
24. A computer system, comprising:
a display;
a touch-sensitive surface;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
displaying a representation of a virtual object in a first user interface region on the display;
while displaying the representation of the virtual object in the first user interface region on the display, detecting a first input by a contact on the touch-sensitive surface at a location that corresponds to the representation of the virtual object on the display; and
in response to detecting the first input by the contact:
in accordance with a determination that the first input by the contact meets first criteria:
displaying a second user interface region on the display, including replacing display of at least a portion of the first user interface region with a representation of a field of view of the one or more cameras; and
continuously displaying the representation of the virtual object while switching from displaying the first user interface region to displaying the second user interface region.
25. A computer-readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a computer system with a display, a touch-sensitive surface, and one or more cameras, cause the computer system to:
display a representation of a virtual object in a first user interface region on the display;
while displaying the representation of the virtual object in the first user interface region on the display, detect a first input by a contact on the touch-sensitive surface at a location that corresponds to the representation of the virtual object on the display; and
in response to detecting the first input by the contact:
in accordance with a determination that the first input by the contact meets first criteria:
display a second user interface region on the display, including replacing display of at least a portion of the first user interface region with a representation of a field of view of the one or more cameras; and
continuously display the representation of the virtual object while switching from displaying the first user interface region to displaying the second user interface region.
26. A computer system, comprising:
a display;
a touch-sensitive surface;
one or more cameras;
means for displaying a representation of a virtual object in a first user interface region on the display;
means, enabled while displaying the representation of the virtual object in the first user interface region on the display, for detecting a first input by a contact on the touch-sensitive surface at a location that corresponds to the representation of the virtual object on the display; and
means, enabled in response to detecting the first input by the contact, including:
means, enabled in accordance with a determination that the first input by the contact meets first criteria, for:
displaying a second user interface region on the display, including replacing display of at least a portion of the first user interface region with a representation of a field of view of the one or more cameras; and
continuously displaying the representation of the virtual object while switching from displaying the first user interface region to displaying the second user interface region.
27. An information processing apparatus for use in a computer system with a display, a touch-sensitive surface, and one or more cameras, comprising:
means for displaying a representation of a virtual object in a first user interface region on the display;
means, enabled while displaying the representation of the virtual object in the first user interface region on the display, for detecting a first input by a contact on the touch-sensitive surface at a location that corresponds to the representation of the virtual object on the display; and
means, enabled in response to detecting the first input by the contact, including:
means, enabled in accordance with a determination that the first input by the contact meets first criteria, for:
displaying a second user interface region on the display, including replacing display of at least a portion of the first user interface region with a representation of a field of view of the one or more cameras; and
continuously displaying the representation of the virtual object while switching from displaying the first user interface region to displaying the second user interface region.
28. A computer system, comprising:
a display;
a touch-sensitive surface;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 1 to 23.
29. A computer-readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a computer system with a display, a touch-sensitive surface, and one or more cameras, cause the computer system to perform any of the methods of claims 1 to 23.
30. A graphical user interface on a computer system with a display, a touch-sensitive surface, one or more cameras, memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 1 to 23.
31. A computer system, comprising:
a display;
a touch-sensitive surface;
one or more cameras; and
means for performing any of the methods of claims 1 to 23.
32. An information processing apparatus for use in a computer system with a display, a touch-sensitive surface, and one or more cameras, comprising:
means for performing any of the methods of claims 1 to 23.
33. A method, comprising:
at a device with a display, a touch-sensitive surface, and one or more cameras:
displaying a first representation of a virtual object in a first user interface region on the display;
while displaying the first representation of the virtual object in the first user interface region on the display, detecting a first input by a first contact on the touch-sensitive surface at a location that corresponds to the first representation of the virtual object on the display;
in response to detecting the first input by the first contact, and in accordance with a determination that the first input by the first contact meets first criteria, displaying a second representation of the virtual object in a second user interface region that is different from the first user interface region;
while displaying the second representation of the virtual object in the second user interface region, detecting a second input; and
in response to detecting the second input:
in accordance with a determination that the second input corresponds to a request to manipulate the virtual object in the second user interface region, changing a display property of the second representation of the virtual object in the second user interface region based on the second input; and
in accordance with a determination that the second input corresponds to a request to display the virtual object in an augmented reality environment, displaying a third representation of the virtual object together with a representation of a field of view of the one or more cameras.
34. The method of claim 33, wherein the first criteria include a criterion that is met when the first input includes a tap input by the first contact at a location on the touch-sensitive surface that corresponds to a virtual object indicator.
35. The method of claim 33, wherein the first criteria include a criterion that is met when the first contact is maintained, with less than a threshold amount of movement, at the location on the touch-sensitive surface that corresponds to the first representation of the virtual object for at least a predefined threshold amount of time.
36. The method of claim 33, wherein:
the device includes one or more sensors to detect intensity of contacts with the touch-sensitive surface; and
the first criteria include a criterion that is met when a characteristic intensity of the first contact increases above a first intensity threshold.
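Claims 34 to 36 recite alternative forms of the "first criteria": a tap, a touch-and-hold with little movement, or a press whose characteristic intensity exceeds a threshold. A hedged sketch of how such a discriminator might be structured (all threshold values are illustrative assumptions, not values from the patent):

```python
# Illustrative thresholds -- not taken from the patent.
TAP_MAX_DURATION = 0.3     # seconds: upper bound for a tap
HOLD_MIN_DURATION = 0.5    # seconds: predefined threshold time for a hold
MOVE_THRESHOLD = 10.0      # points: allowed contact movement for tap/hold
INTENSITY_THRESHOLD = 0.8  # normalized first intensity threshold

def first_criteria_met(duration: float, movement: float,
                       peak_intensity: float) -> bool:
    """Return True if any of the claim 34-36 style criteria is satisfied."""
    tap = duration <= TAP_MAX_DURATION and movement < MOVE_THRESHOLD
    hold = duration >= HOLD_MIN_DURATION and movement < MOVE_THRESHOLD
    deep_press = peak_intensity > INTENSITY_THRESHOLD
    return tap or hold or deep_press
```

A swipe (long movement, ordinary intensity) satisfies none of the three branches, which is consistent with claim 37 routing such movement to scrolling instead.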
37. The method of claim 33, comprising:
in response to detecting the first input by the first contact, and in accordance with a determination that the first input by the first contact meets second criteria, wherein the second criteria require that the first input include movement of the first contact across the touch-sensitive surface by more than a threshold distance, scrolling the first user interface region in a direction that corresponds to the direction of movement of the first contact.
38. The method of any one of claims 33 to 37, comprising:
in response to detecting the first input by the first contact, and in accordance with a determination that the first input by the first contact meets third criteria, displaying the third representation of the virtual object together with the representation of the field of view of the one or more cameras.
39. The method of claim 38, wherein:
the device includes one or more device orientation sensors;
the method includes, in response to detecting the first input by the first contact, determining a current device orientation of the device with the one or more device orientation sensors; and
the third criteria require that the current device orientation be within a first orientation range in order for the third criteria to be met.
40. The method of any one of claims 33 to 39, wherein at least one display property of the second representation of the virtual object is applied to the third representation of the virtual object.
41. The method of any one of claims 33 to 40, comprising:
in response to detecting at least an initial portion of the first input by the first contact:
activating the one or more cameras; and
analyzing the field of view of the one or more cameras to detect one or more planes in the field of view of the one or more cameras.
42. The method of claim 41, wherein:
the device includes one or more tactile output generators; and
the method includes, in response to detecting a respective plane in the field of view of the one or more cameras, outputting a tactile output with the one or more tactile output generators to indicate that the respective plane has been detected in the field of view of the one or more cameras.
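Claims 41 and 42 together describe activating the cameras on the initial portion of an input, analyzing frames for planes, and emitting one tactile output per newly detected plane. A minimal sketch under those assumptions (the class, its fields, and the callback are hypothetical):

```python
class PlaneDetector:
    """Sketch of claims 41-42: camera activation, plane detection, haptics."""

    def __init__(self, haptics):
        self.haptics = haptics        # callable invoked per tactile output
        self.cameras_active = False
        self.detected_planes = set()

    def on_input_started(self):
        # Activate the one or more cameras when at least an initial
        # portion of the first input is detected.
        self.cameras_active = True

    def analyze_frame(self, planes_in_frame) -> int:
        # Emit a tactile output only for planes not reported before,
        # so repeated frames do not re-trigger haptics.
        if not self.cameras_active:
            return 0
        new = set(planes_in_frame) - self.detected_planes
        self.detected_planes |= new
        for _ in new:
            self.haptics()
        return len(new)
```

Deduplicating detections is the design choice that keeps the haptic feedback meaningful: the user feels one pulse per plane, not one per video frame.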
43. The method of any one of claims 33 to 42, wherein a size of the third representation of the virtual object on the display is determined based on a simulated real-world size of the virtual object and a distance between the one or more cameras and a location in the field of view of the one or more cameras with which the third representation of the virtual object has a fixed spatial relationship.
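Claim 43 sizes the AR representation from the object's simulated real-world size and its distance from the cameras. A standard pinhole-projection approximation captures this relationship; the focal length here is an assumed camera intrinsic, not a value from the patent:

```python
def displayed_size_px(real_world_size_m: float, distance_m: float,
                      focal_px: float = 1500.0) -> float:
    """Apparent on-screen size of an object at a given camera distance.

    Pinhole model: size_px = real_size * focal_length / distance, so the
    representation shrinks proportionally as the anchor location recedes.
    """
    if distance_m <= 0:
        raise ValueError("object must be in front of the camera")
    return real_world_size_m * focal_px / distance_m
```

The useful invariant is scale consistency: an object twice as large at twice the distance projects to the same on-screen size, which is what makes the AR placement read as "real-world sized."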
44. The method of any one of claims 33 to 43, wherein the second input that corresponds to the request to display the virtual object in the augmented reality environment includes an input that drags the second representation of the virtual object.
45. The method of any one of claims 33 to 44, including, while displaying the second representation of the virtual object in the second user interface region, detecting a fourth input that meets respective criteria for redisplaying the first user interface region; and
in response to detecting the fourth input:
ceasing to display the second representation of the virtual object in the second user interface region; and
redisplaying the first representation of the virtual object in the first user interface region.
46. The method of any one of claims 33 to 45, comprising:
while displaying the third representation of the virtual object together with the representation of the field of view of the one or more cameras, detecting a fifth input that meets respective criteria for redisplaying the second user interface region; and
in response to detecting the fifth input:
ceasing to display the third representation of the virtual object with the representation of the field of view of the one or more cameras; and
redisplaying the second representation of the virtual object in the second user interface region.
47. The method of any one of claims 33 to 46, including, while displaying the third representation of the virtual object together with the representation of the field of view of the one or more cameras, detecting a sixth input that meets respective criteria for redisplaying the first user interface region; and
in response to detecting the sixth input:
ceasing to display the third representation of the virtual object with the representation of the field of view of the one or more cameras; and
redisplaying the first representation of the virtual object in the first user interface region.
48. The method of any one of claims 33 to 47, comprising:
in response to detecting the first input by the first contact, and in accordance with a determination that the input by the contact meets the first criteria, continuously displaying the virtual object while transitioning from displaying the first user interface region to displaying the second user interface region, including displaying an animation of the first representation of the virtual object in the first user interface region transitioning into the second representation of the virtual object in the second user interface region.
49. The method of any one of claims 33 to 48, comprising:
in response to detecting the second input by the second contact, and in accordance with a determination that the second input by the second contact corresponds to the request to display the virtual object in the augmented reality environment, continuously displaying the virtual object while transitioning from displaying the second user interface region to displaying a third user interface region that includes the field of view of the one or more cameras, including displaying an animation of the second representation of the virtual object in the second user interface region transitioning into the third representation of the virtual object in the third user interface region that includes the field of view of the one or more cameras.
50. A computer system, comprising:
a display;
a touch-sensitive surface;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
displaying a first representation of a virtual object in a first user interface region on the display;
while displaying the first representation of the virtual object in the first user interface region on the display, detecting a first input by a first contact on the touch-sensitive surface at a location that corresponds to the first representation of the virtual object on the display;
in response to detecting the first input by the first contact, and in accordance with a determination that the input by the first contact meets first criteria, displaying a second representation of the virtual object in a second user interface region that is different from the first user interface region;
while displaying the second representation of the virtual object in the second user interface region, detecting a second input; and
in response to detecting the second input:
in accordance with a determination that the second input corresponds to a request to manipulate the virtual object in the second user interface region, changing a display property of the second representation of the virtual object in the second user interface region based on the second input; and
in accordance with a determination that the second input corresponds to a request to display the virtual object in an augmented reality environment, displaying a third representation of the virtual object together with a representation of a field of view of the one or more cameras.
51. A computer-readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a computer system with a display, a touch-sensitive surface, and one or more cameras, cause the computer system to:
display a first representation of a virtual object in a first user interface region on the display;
while displaying the first representation of the virtual object in the first user interface region on the display, detect a first input by a first contact on the touch-sensitive surface at a location that corresponds to the first representation of the virtual object on the display;
in response to detecting the first input by the first contact, and in accordance with a determination that the input by the first contact meets first criteria, display a second representation of the virtual object in a second user interface region that is different from the first user interface region;
while displaying the second representation of the virtual object in the second user interface region, detect a second input; and
in response to detecting the second input:
in accordance with a determination that the second input corresponds to a request to manipulate the virtual object in the second user interface region, change a display property of the second representation of the virtual object in the second user interface region based on the second input; and
in accordance with a determination that the second input corresponds to a request to display the virtual object in an augmented reality environment, display a third representation of the virtual object together with a representation of a field of view of the one or more cameras.
52. A computer system, comprising:
a display;
a touch-sensitive surface;
one or more cameras;
means for displaying a first representation of a virtual object in a first user interface region on the display;
means, enabled while displaying the first representation of the virtual object in the first user interface region on the display, for detecting a first input by a first contact on the touch-sensitive surface at a location that corresponds to the first representation of the virtual object on the display;
means, enabled in response to detecting the first input by the first contact and in accordance with a determination that the input by the first contact meets first criteria, for displaying a second representation of the virtual object in a second user interface region that is different from the first user interface region;
means, enabled while displaying the second representation of the virtual object in the second user interface region, for detecting a second input; and
means, enabled in response to detecting the second input, including:
means, enabled in accordance with a determination that the second input corresponds to a request to manipulate the virtual object in the second user interface region, for changing a display property of the second representation of the virtual object in the second user interface region based on the second input; and
means, enabled in accordance with a determination that the second input corresponds to a request to display the virtual object in an augmented reality environment, for displaying a third representation of the virtual object together with a representation of a field of view of the one or more cameras.
53. An information processing apparatus for use in a computer system with a display, a touch-sensitive surface, and one or more cameras, comprising:
means for displaying a first representation of a virtual object in a first user interface region on the display;
means, enabled while displaying the first representation of the virtual object in the first user interface region on the display, for detecting a first input by a first contact on the touch-sensitive surface at a location that corresponds to the first representation of the virtual object on the display;
means, enabled in response to detecting the first input by the first contact and in accordance with a determination that the input by the first contact meets first criteria, for displaying a second representation of the virtual object in a second user interface region that is different from the first user interface region;
means, enabled while displaying the second representation of the virtual object in the second user interface region, for detecting a second input; and
means, enabled in response to detecting the second input, including:
means, enabled in accordance with a determination that the second input corresponds to a request to manipulate the virtual object in the second user interface region, for changing a display property of the second representation of the virtual object in the second user interface region based on the second input; and
means, enabled in accordance with a determination that the second input corresponds to a request to display the virtual object in an augmented reality environment, for displaying a third representation of the virtual object together with a representation of a field of view of the one or more cameras.
54. A computer system, comprising:
a display;
a touch-sensitive surface;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 33 to 49.
55. A computer-readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a computer system with a display, a touch-sensitive surface, and one or more cameras, cause the computer system to perform any of the methods of claims 33 to 49.
56. A graphical user interface on a computer system with a display, a touch-sensitive surface, one or more cameras, memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 33 to 49.
57. A computer system, comprising:
a display;
a touch-sensitive surface;
one or more cameras; and
means for performing any of the methods of claims 33 to 49.
58. An information processing apparatus for use in a computer system with a display, a touch-sensitive surface, and one or more cameras, comprising:
means for performing any of the methods of claims 33 to 49.
59. A method, comprising:
at a device with a display and a touch-sensitive surface:
receiving a request to display a first user interface that includes a first item;
in response to the request to display the first user interface, displaying the first user interface with a representation of the first item, including:
in accordance with a determination that the first item corresponds to a respective virtual three-dimensional object, displaying the representation of the first item with a visual indication that the first item corresponds to a first respective virtual three-dimensional object; and
in accordance with a determination that the first item does not correspond to a respective virtual three-dimensional object, displaying the representation of the first item without the visual indication;
after displaying the representation of the first item, receiving a request to display a second user interface that includes a second item; and
in response to the request to display the second user interface, displaying the second user interface with a representation of the second item, including:
in accordance with a determination that the second item corresponds to a respective virtual three-dimensional object, displaying the representation of the second item with the visual indication that the second item corresponds to a second respective virtual three-dimensional object; and
in accordance with a determination that the second item does not correspond to a respective virtual three-dimensional object, displaying the representation of the second item without the visual indication.
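Claim 59 boils down to a per-item decision: render the item's representation with a system-level visual indication when it corresponds to a virtual 3D object, and without it otherwise. A hedged sketch of that decision (the item fields and the "[AR]" glyph are illustrative assumptions):

```python
def render_item(item: dict) -> str:
    """Render an item label, appending a shared 3D-capable indication.

    An item "corresponds to a virtual three-dimensional object" here when
    it carries a (hypothetical) 'virtual_3d_object' reference; the same
    indication is applied system-wide, regardless of which application
    hosts the item (cf. claim 64).
    """
    name = item["name"]
    has_3d = item.get("virtual_3d_object") is not None
    return f"{name} [AR]" if has_3d else name
```

Centralizing this check is what gives the indication its consistency across browser, mail, messaging, file, and map user interfaces in claims 65 to 69.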
60. The method of claim 59, wherein:
the device includes one or more device orientation sensors; and
displaying the representation of the first item with the visual indication that the first item corresponds to the first respective virtual three-dimensional object includes:
in response to detecting movement of the device that causes a change from a first device orientation to a second device orientation, displaying movement of the first item that corresponds to the change from the first device orientation to the second device orientation.
61. The method of any one of claims 59 to 60, wherein displaying the representation of the first item with the visual indication that the first item corresponds to the first respective virtual three-dimensional object includes:
in response to detecting a first input by a first contact that scrolls the first user interface while the representation of the first item is displayed in the first user interface:
translating the representation of the first item on the display in accordance with the scrolling of the first user interface; and
rotating the representation of the first item relative to a plane defined by the first user interface in accordance with a direction of the scrolling of the first user interface.
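Claim 61 couples the item's translation to the scroll offset and its rotation to the scroll direction. A minimal sketch of one plausible mapping (the gain and clamp constants are invented for illustration):

```python
TILT_GAIN = 0.1   # degrees of rotation per point of scroll (assumed)
MAX_TILT = 15.0   # clamp so the item never tilts past this angle (assumed)

def scroll_transform(y_offset: float, scroll_delta: float):
    """Translate the item with the scrolled content and tilt it.

    The tilt sign follows the scroll direction, so scrolling up and
    scrolling down rotate the item in opposite directions relative to
    the plane of the user interface.
    """
    new_offset = y_offset + scroll_delta  # translate with the content
    tilt = max(-MAX_TILT, min(MAX_TILT, scroll_delta * TILT_GAIN))
    return new_offset, tilt
```

Clamping the tilt keeps the effect a subtle parallax cue rather than a full rotation, which is one way the visual indication can hint at three-dimensionality without obscuring the item.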
62. The method of any one of claims 59 to 61, including, while displaying the representation of the first item with the visual indication in the first user interface, displaying a representation of a third item, wherein the representation of the third item is displayed without the visual indication to indicate that the third item does not correspond to a virtual three-dimensional object.
63. The method of any one of claims 59 to 62, including, while displaying the representation of the second item with the visual indication in the second user interface, displaying a representation of a fourth item, wherein the representation of the fourth item is displayed without the visual indication to indicate that the fourth item does not correspond to a respective virtual three-dimensional object.
64. The method of any one of claims 59 to 63, wherein:
the first user interface corresponds to a first application;
the second user interface corresponds to a second application that is different from the first application; and
the representation of the first item displayed with the visual indication and the representation of the second item displayed with the visual indication share a predefined set of visual characteristics and/or behavioral characteristics.
65. The method of any one of claims 59 to 64, wherein the first user interface is an internet browser application user interface, and the first item is an element of a webpage.
66. The method of any one of claims 59 to 64, wherein the first user interface is an email application user interface, and the first item is an attachment of an email.
67. The method of any one of claims 59 to 64, wherein the first user interface is a messaging application user interface, and the first item is an attachment or element in a message.
68. The method of any one of claims 59 to 64, wherein the first user interface is a file management application user interface, and the first item is a file preview object.
69. The method of any one of claims 59 to 64, wherein the first user interface is a map application user interface, and the first item is a representation of a point of interest in a map.
70. The method of any one of claims 59 to 69, wherein the visual indication that the first item corresponds to a respective virtual three-dimensional object includes an animation of the first item that occurs without requiring an input directed to the representation of the respective three-dimensional object.
71. The method of any of claims 59 to 70, wherein:
the device includes one or more cameras; and
the method includes:
while displaying the representation of the second item with the visual indication that the second item corresponds to a respective virtual three-dimensional object, detecting a second input by a second contact at a location on the touch-sensitive surface that corresponds to the representation of the second item; and,
in response to detecting the second input by the second contact, and in accordance with a determination that the second input by the second contact meets first criteria:
displaying a third user interface region on the display, including replacing display of at least a portion of the second user interface with a representation of the field of view of the one or more cameras; and
continuously displaying the second virtual three-dimensional object while switching from displaying the second user interface to displaying the third user interface region.
72. The method of any of claims 59 to 70, wherein:
the device includes one or more cameras; and
the method includes:
while displaying the representation of the second item with the visual indication that the second item corresponds to a second virtual three-dimensional object, detecting a third input by a third contact at a location on the touch-sensitive surface that corresponds to the representation of the second item;
in response to detecting the third input by the third contact, and in accordance with a determination that the third input by the third contact meets first criteria, displaying the second virtual three-dimensional object in a fourth user interface that is different from the second user interface;
while displaying the second virtual three-dimensional object in the fourth user interface, detecting a fourth input; and
in response to detecting the fourth input:
in accordance with a determination that the fourth input corresponds to a request to manipulate the second virtual three-dimensional object in the fourth user interface, changing a display property of the second virtual three-dimensional object in the fourth user interface based on the fourth input; and
in accordance with a determination that the fourth input corresponds to a request to display the second virtual object in an augmented reality environment, displaying the second virtual three-dimensional object together with a representation of the field of view of the one or more cameras.
73. A computer system, comprising:
a display;
a touch-sensitive surface;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a request to display a first user interface that includes a first item;
in response to the request to display the first user interface, displaying the first user interface with a representation of the first item:
in accordance with a determination that the first item corresponds to a respective virtual three-dimensional object, displaying the representation of the first item with a visual indication that the first item corresponds to a first virtual three-dimensional object; and
in accordance with a determination that the first item does not correspond to a respective virtual three-dimensional object, displaying the representation of the first item without the visual indication;
after displaying the representation of the first item, receiving a request to display a second user interface that includes a second item; and
in response to the request to display the second user interface, displaying the second user interface with a representation of the second item:
in accordance with a determination that the second item corresponds to a respective virtual three-dimensional object, displaying the representation of the second item with a visual indication that the second item corresponds to a second virtual three-dimensional object; and
in accordance with a determination that the second item does not correspond to a respective virtual three-dimensional object, displaying the representation of the second item without the visual indication.
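The branch logic recited in claim 73 (and the parent method claims) can be illustrated with a minimal sketch. Everything here is assumed for illustration: the `Item` type, the `model_3d` field standing in for the claimed "corresponding virtual three-dimensional object", and the `[AR]` glyph standing in for the claimed "visual indication"; the claims do not specify any particular data model or indicator.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of an item shown in a user interface. `model_3d`
# stands in for the claim's "corresponding virtual three-dimensional
# object" (None means the item has no such object).
@dataclass
class Item:
    title: str
    model_3d: Optional[str] = None  # e.g. an asset identifier, or None

def render_item(item: Item) -> str:
    """Render the item's representation, appending a visual indication
    (here, an illustrative glyph) only when the item corresponds to a
    virtual 3D object -- the claim's two branches."""
    if item.model_3d is not None:
        return f"{item.title} [AR]"  # representation WITH the visual indication
    return item.title                # representation WITHOUT it

print(render_item(Item("chair.usdz", model_3d="chair")))  # chair.usdz [AR]
print(render_item(Item("notes.txt")))                     # notes.txt
```

The same check runs for each user interface (first, second, ...), which is why the claims repeat the determination per item rather than per interface.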
74. A computer-readable storage medium storing one or more programs, the one or more programs including instructions that, when executed by a computer system with a display and a touch-sensitive surface, cause the computer system to:
receive a request to display a first user interface that includes a first item;
in response to the request to display the first user interface, display the first user interface with a representation of the first item:
in accordance with a determination that the first item corresponds to a respective virtual three-dimensional object, displaying the representation of the first item with a visual indication that the first item corresponds to a first virtual three-dimensional object; and
in accordance with a determination that the first item does not correspond to a respective virtual three-dimensional object, displaying the representation of the first item without the visual indication;
after displaying the representation of the first item, receive a request to display a second user interface that includes a second item; and
in response to the request to display the second user interface, display the second user interface with a representation of the second item:
in accordance with a determination that the second item corresponds to a respective virtual three-dimensional object, displaying the representation of the second item with a visual indication that the second item corresponds to a second virtual three-dimensional object; and
in accordance with a determination that the second item does not correspond to a respective virtual three-dimensional object, displaying the representation of the second item without the visual indication.
75. A computer system, comprising:
a display;
a touch-sensitive surface;
means for receiving a request to display a first user interface that includes a first item;
means, enabled in response to the request to display the first user interface, for displaying the first user interface with a representation of the first item, including:
means, enabled in accordance with a determination that the first item corresponds to a respective virtual three-dimensional object, for displaying the representation of the first item with a visual indication that the first item corresponds to a first virtual three-dimensional object; and
means, enabled in accordance with a determination that the first item does not correspond to a respective virtual three-dimensional object, for displaying the representation of the first item without the visual indication;
means, enabled after displaying the representation of the first item, for receiving a request to display a second user interface that includes a second item; and
means, enabled in response to the request to display the second user interface, for displaying the second user interface with a representation of the second item, including:
means, enabled in accordance with a determination that the second item corresponds to a respective virtual three-dimensional object, for displaying the representation of the second item with a visual indication that the second item corresponds to a second virtual three-dimensional object; and
means, enabled in accordance with a determination that the second item does not correspond to a respective virtual three-dimensional object, for displaying the representation of the second item without the visual indication.
76. An information processing apparatus for use in a computer system with a display and a touch-sensitive surface, comprising:
means for receiving a request to display a first user interface that includes a first item;
means, enabled in response to the request to display the first user interface, for displaying the first user interface with a representation of the first item, including:
means, enabled in accordance with a determination that the first item corresponds to a respective virtual three-dimensional object, for displaying the representation of the first item with a visual indication that the first item corresponds to a first virtual three-dimensional object; and
means, enabled in accordance with a determination that the first item does not correspond to a respective virtual three-dimensional object, for displaying the representation of the first item without the visual indication;
means, enabled after displaying the representation of the first item, for receiving a request to display a second user interface that includes a second item; and
means, enabled in response to the request to display the second user interface, for displaying the second user interface with a representation of the second item, including:
means, enabled in accordance with a determination that the second item corresponds to a respective virtual three-dimensional object, for displaying the representation of the second item with a visual indication that the second item corresponds to a second virtual three-dimensional object; and
means, enabled in accordance with a determination that the second item does not correspond to a respective virtual three-dimensional object, for displaying the representation of the second item without the visual indication.
77. A computer system, comprising:
a display;
a touch-sensitive surface;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 59 to 72.
78. A computer-readable storage medium storing one or more programs, the one or more programs including instructions that, when executed by a computer system with a display and a touch-sensitive surface, cause the computer system to perform any of the methods of claims 59 to 72.
79. A graphical user interface on a computer system with a display, a touch-sensitive surface, memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 59 to 72.
80. A computer system, comprising:
a display;
a touch-sensitive surface; and
means for performing any of the methods of claims 59 to 72.
81. An information processing apparatus for use in a computer system with a display and a touch-sensitive surface, comprising:
means for performing any of the methods of claims 59 to 72.
82. A method, comprising:
at a device with a display generation component, one or more input devices, and one or more cameras:
receiving a request to display a virtual object in a first user interface region that includes at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface region, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface region, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object includes:
in accordance with a determination that object placement criteria are not met, displaying the representation of the virtual object with a first set of visual properties and a first orientation, wherein the object placement criteria require that a placement location for the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be met, and the first orientation is independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are met, displaying the representation of the virtual object with a second set of visual properties that are different from the first set of visual properties and a second orientation that corresponds to a plane in the physical environment detected in the field of view of the one or more cameras.
83. The method of claim 82, including:
while displaying the representation of the virtual object with the first set of visual properties and the first orientation, detecting that the object placement criteria are met.
84. The method of claim 83, including:
in response to detecting that the object placement criteria are met, displaying, via the display generation component, an animated transition that shows the representation of the virtual object moving from the first orientation to the second orientation and changing from having the first set of visual properties to having the second set of visual properties.
85. The method of any of claims 83 to 84, wherein detecting that the object placement criteria are met includes one or more of:
detecting that a plane has been identified in the field of view of the one or more cameras;
detecting less than a threshold amount of movement between the device and the physical environment for at least a threshold amount of time; and
detecting that at least a predetermined amount of time has elapsed since receiving the request to display the virtual object in the first user interface region.
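The disjunctive check in claim 85 can be sketched as a small predicate. The function name, parameter names, and all threshold values below are assumptions for illustration; the claim only requires unspecified "threshold" and "predetermined" values.

```python
# Illustrative thresholds -- the claim does not fix any numbers.
STILL_DURATION = 0.5   # seconds the device must stay nearly motionless
TIMEOUT = 3.0          # seconds after the request to display the object

def placement_criteria_met(plane_found: bool,
                           still_for: float,
                           elapsed_since_request: float) -> bool:
    """Return True when any of the claim's example conditions holds:
    a plane has been identified, the device has stayed nearly still
    for long enough, or enough time has passed since the request."""
    return (plane_found
            or still_for >= STILL_DURATION
            or elapsed_since_request >= TIMEOUT)

# While this returns False the object would be drawn with the first set
# of visual properties (per claim 82); once True, it transitions to the
# plane-anchored appearance with the second set.
print(placement_criteria_met(False, 0.1, 1.0))  # False
print(placement_criteria_met(True, 0.0, 0.0))   # True
```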
86. The method of any of claims 82 to 85, including:
while displaying the representation of the virtual object with the first set of visual properties and the first orientation over a first portion of the physical environment captured in the field of view of the one or more cameras, detecting first movement of the one or more cameras; and
in response to detecting the first movement of the one or more cameras, displaying the representation of the virtual object with the first set of visual properties and the first orientation over a second portion of the physical environment captured in the field of view of the one or more cameras, wherein the second portion of the physical environment is different from the first portion of the physical environment.
87. The method of any of claims 82 to 86, including:
while displaying the representation of the virtual object with the second set of visual properties and the second orientation over a third portion of the physical environment captured in the field of view of the one or more cameras, detecting second movement of the one or more cameras; and
in response to detecting the second movement of the device, while the physical environment captured in the field of view of the one or more cameras moves in accordance with the second movement of the device, and while the second orientation continues to correspond to the plane in the physical environment detected in the field of view of the one or more cameras, maintaining display of the representation of the virtual object with the second set of visual properties and the second orientation over the third portion of the physical environment captured in the field of view of the one or more cameras.
88. The method of any of claims 82 to 87, including:
in accordance with a determination that the object placement criteria are met, generating a tactile output in conjunction with displaying the representation of the virtual object with the second set of visual properties and the second orientation, the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras.
89. The method of any of claims 82 to 88, including:
while displaying the representation of the virtual object with the second set of visual properties and with the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras, receiving an update of at least a position or an orientation of the plane in the physical environment detected in the field of view of the one or more cameras; and
in response to receiving the update of at least the position or the orientation of the plane in the physical environment detected in the field of view of the one or more cameras, adjusting at least a position and/or an orientation of the representation of the virtual object in accordance with the update.
90. The method of any of claims 82 to 89, wherein:
the first set of visual properties includes a first size and a first level of translucency; and
the second set of visual properties includes a second size that is different from the first size, and a second level of translucency that is lower than the first level of translucency.
91. The method of any of claims 82 to 90, wherein:
the request to display the virtual object in the first user interface region that includes at least a portion of the field of view of the one or more cameras is received while the virtual object is displayed in a respective user interface that does not include at least a portion of the field of view of the one or more cameras, and
the first orientation corresponds to an orientation of the virtual object as displayed in the respective user interface when the request was received.
92. The method of any of claims 82 to 90, wherein the first orientation corresponds to a predefined orientation.
93. The method of any of claims 82 to 92, including:
while displaying the virtual object in the first user interface region with the second set of visual properties and with the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras, detecting a request to change a simulated physical size of the virtual object from a first simulated physical size to a second simulated physical size relative to the physical environment captured in the field of view of the one or more cameras; and
in response to detecting the request to change the simulated physical size of the virtual object:
gradually changing a displayed size of the representation of the virtual object in the first user interface region in accordance with a gradual change of the simulated physical size of the virtual object from the first simulated physical size to the second simulated physical size; and
while the displayed size of the representation of the virtual object in the first user interface region is gradually changing, in accordance with a determination that the simulated physical size of the virtual object has reached a predefined simulated physical size, generating a tactile output to indicate that the simulated physical size of the virtual object has reached the predefined simulated physical size.
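The gradual size change with a one-shot tactile output in claim 93 can be sketched as a per-frame step function. The convention that a scale of 1.0 represents the "predefined simulated physical size", along with the function and parameter names, is an assumption for illustration only.

```python
def step_scale(current: float, target: float, rate: float = 0.1):
    """One animation step of the gradual size change in claim 93.

    Returns the new simulated scale and whether a tactile output should
    fire because this step reached or crossed the predefined size
    (assumed here to be 1.0, i.e. the object's real-world size).
    """
    PREDEFINED = 1.0
    step = max(-rate, min(rate, target - current))  # clamp to the rate limit
    new = current + step
    # Fire the haptic exactly on the step that reaches/crosses 1.0,
    # and not again while the scale sits at or moves past it.
    haptic = (current != PREDEFINED
              and min(current, new) <= PREDEFINED <= max(current, new))
    return new, haptic

# A pinch gesture scaling from 0.85 toward 1.25 fires the haptic once,
# on the step that crosses 1.0:
scale = 0.85
for _ in range(4):
    scale, haptic = step_scale(scale, 1.25)
    if haptic:
        print(f"haptic at scale {scale:.2f}")
```

Claim 94's "return to predefined size" request would then simply animate `target = 1.0` through the same step function.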
94. The method of claim 93, including:
while displaying the virtual object in the first user interface region at the second simulated physical size that is different from the predefined simulated physical size of the virtual object, detecting a request to return the virtual object to the predefined simulated physical size; and
in response to detecting the request to return the virtual object to the predefined simulated physical size, changing the displayed size of the representation of the virtual object in the first user interface region in accordance with a change of the simulated physical size of the virtual object to the predefined simulated physical size.
95. The method of any of claims 82 to 94, including:
selecting a plane that is used to set the second orientation of the representation of the virtual object with the second set of visual properties, in accordance with a respective position and orientation of the one or more cameras relative to the physical environment, wherein selecting the plane includes:
in accordance with a determination that the object placement criteria are met while the representation of the virtual object is displayed over a first portion of the physical environment captured in the field of view of the one or more cameras, selecting a first plane of a plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane used to set the second orientation of the representation of the virtual object with the second set of visual properties; and
in accordance with a determination that the object placement criteria are met while the representation of the virtual object is displayed over a second portion of the physical environment captured in the field of view of the one or more cameras, selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane used to set the second orientation of the representation of the virtual object with the second set of visual properties, wherein the first portion of the physical environment is different from the second portion of the physical environment, and the first plane is different from the second plane.
96. The method of any of claims 82 to 95, including:
displaying a snapshot affordance while the virtual object is displayed in the first user interface region with the second set of visual properties and the second orientation; and
in response to activation of the snapshot affordance, capturing a snapshot image that includes a current view of the representation of the virtual object, placed at the placement location in the physical environment in the field of view of the one or more cameras, with the second set of visual properties and the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras.
97. The method of any of claims 82 to 96, including:
displaying one or more control affordances together with the representation of the virtual object with the second set of visual properties in the first user interface region;
while displaying the one or more control affordances together with the representation of the virtual object with the second set of visual properties, detecting that control-fading criteria are met; and
in response to detecting that the control-fading criteria are met, ceasing to display the one or more control affordances while continuing to display, in the first user interface region that includes the field of view of the one or more cameras, the representation of the virtual object with the second set of visual properties.
98. The method of any of claims 82 to 97, including:
in response to the request to display the virtual object in the first user interface region: before displaying the representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface region, in accordance with a determination that calibration criteria are not met, displaying a prompt for the user to move the device relative to the physical environment.
99. A computer system, comprising:
a display generation component;
one or more input devices;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a request to display a virtual object in a first user interface region that includes at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface region, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface region, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object includes:
in accordance with a determination that object placement criteria are not met, displaying the representation of the virtual object with a first set of visual properties and a first orientation, wherein the object placement criteria require that a placement location for the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be met, and the first orientation is independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are met, displaying the representation of the virtual object with a second set of visual properties that are different from the first set of visual properties and a second orientation that corresponds to a plane in the physical environment detected in the field of view of the one or more cameras.
100. A computer-readable storage medium storing one or more programs, the one or more programs including instructions that, when executed by a computer system with a display generation component, one or more input devices, and one or more cameras, cause the computer system to:
receive a request to display a virtual object in a first user interface region that includes at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface region, display, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface region, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object includes:
in accordance with a determination that object placement criteria are not met, displaying the representation of the virtual object with a first set of visual properties and a first orientation, wherein the object placement criteria require that a placement location for the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be met, and the first orientation is independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are met, displaying the representation of the virtual object with a second set of visual properties that are different from the first set of visual properties and a second orientation that corresponds to a plane in the physical environment detected in the field of view of the one or more cameras.
101. A computer system, comprising:
a display generation component;
one or more input devices;
one or more cameras;
means for receiving a request to display a virtual object in a first user interface region that includes at least a portion of a field of view of the one or more cameras; and
means, enabled in response to the request to display the virtual object in the first user interface region, for displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface region, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein the means for displaying the representation of the virtual object includes:
means, enabled in accordance with a determination that object placement criteria are not met, for displaying the representation of the virtual object with a first set of visual properties and a first orientation, wherein the object placement criteria require that a placement location for the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be met, and the first orientation is independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
means, enabled in accordance with a determination that the object placement criteria are met, for displaying the representation of the virtual object with a second set of visual properties that are different from the first set of visual properties and a second orientation that corresponds to a plane in the physical environment detected in the field of view of the one or more cameras.
102. An information processing apparatus for use in a computer system with a display generation component, one or more input devices, and one or more cameras, comprising:
means for receiving a request to display a virtual object in a first user interface region, the first user interface region including at least a portion of a field of view of the one or more cameras; and
means, enabled in response to the request to display the virtual object in the first user interface region, for displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras that is included in the first user interface region, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein the enabled means for displaying the representation of the virtual object includes:
means, enabled in accordance with a determination that object-placement criteria are not met, for displaying the representation of the virtual object with a first set of visual properties and a first orientation, wherein the object-placement criteria require that a placement location for the virtual object be identified in the field of view of the one or more cameras in order for the object-placement criteria to be met, and the first orientation is independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
means, enabled in accordance with a determination that the object-placement criteria are met, for displaying the representation of the virtual object with a second set of visual properties and a second orientation, wherein the second set of visual properties is different from the first set of visual properties, and the second orientation corresponds to a plane in the physical environment detected in the field of view of the one or more cameras.
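The two branches of the placement logic recited above reduce to a simple appearance decision: one look while no placement location has been identified, a different look (and a plane-anchored orientation) once one has. A minimal sketch of that decision, using hypothetical property names (`opacity`, a string-valued `orientation`) that are only illustrative, not taken from the claims:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Appearance:
    opacity: float        # visual property: e.g. semi-transparent while unplaced
    orientation: str      # screen-fixed vs. aligned to a detected plane

def appearance_for_virtual_object(detected_plane: Optional[str]) -> Appearance:
    """Choose visual properties and orientation for the virtual object.

    The object-placement criteria are met only when a placement location
    (here modeled as a detected plane) has been identified in the camera feed.
    """
    if detected_plane is None:
        # Criteria not met: first set of visual properties, orientation
        # independent of what the cameras currently show.
        return Appearance(opacity=0.5, orientation="screen-fixed")
    # Criteria met: second set of visual properties, orientation tied to
    # the plane detected in the physical environment.
    return Appearance(opacity=1.0, orientation=f"aligned-to:{detected_plane}")
```

The key design point the claims capture is that both the visual properties and the orientation change together when the criteria flip from unmet to met.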
103. A computer system, comprising:
a display generation component;
one or more input devices;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, and the one or more programs include instructions for performing any of the methods of claims 82 to 98.
104. A computer-readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a computer system with a display generation component, one or more input devices, and one or more cameras, cause the computer system to perform any of the methods of claims 82 to 98.
105. A graphical user interface on a computer system with a display generation component, one or more input devices, one or more cameras, memory, and one or more processors for executing one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 82 to 98.
106. A computer system, comprising:
a display generation component;
one or more input devices;
one or more cameras; and
means for performing any of the methods of claims 82 to 98.
107. An information processing apparatus for use in a computer system with a display generation component, one or more input devices, and one or more cameras, comprising:
means for performing any of the methods of claims 82 to 98.
108. A method, comprising:
at a device having a display generation component, one or more input devices, one or more cameras, and one or more attitude sensors for detecting changes in attitude of the device that includes the one or more cameras:
receiving a request to display an augmented reality view of a physical environment in a first user interface region, the first user interface region including a representation of a field of view of the one or more cameras;
in response to receiving the request to display the augmented reality view of the physical environment, displaying the representation of the field of view of the one or more cameras, and, in accordance with a determination that calibration criteria for the augmented reality view of the physical environment are not met, displaying a calibration user interface object that is dynamically animated in accordance with movement of the one or more cameras in the physical environment, wherein displaying the calibration user interface object includes:
while displaying the calibration user interface object, detecting, via the one or more attitude sensors, a change in attitude of the one or more cameras in the physical environment; and
in response to detecting the change in attitude of the one or more cameras in the physical environment, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment;
while displaying the calibration user interface object that moves on the display in accordance with the detected change in attitude of the one or more cameras in the physical environment, detecting that the calibration criteria are met; and
in response to detecting that the calibration criteria are met, ceasing to display the calibration user interface object.
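The calibration flow in claim 108 is a loop: while the calibration criteria are unmet, device-pose deltas drive a display parameter of the calibration object; once enough motion has been observed, the object is dismissed. A minimal sketch under assumed specifics (pose deltas as scalars, accumulated motion as the calibration criterion, a rotation angle as the display parameter — none of these details come from the claims):

```python
def run_calibration(pose_samples, required_motion=1.0):
    """Drive a calibration UI object from device pose changes.

    Returns one (state, angle) pair per pose sample: the object is shown
    and animated ("calibrating") until the accumulated motion satisfies
    the calibration criterion, after which it is dismissed ("hidden").
    """
    states = []
    accumulated = 0.0
    angle = 0.0
    for delta in pose_samples:              # detected change in camera attitude
        if accumulated >= required_motion:
            states.append(("hidden", angle))  # criteria met: stop showing object
            continue
        angle += 30.0 * delta               # adjust display parameter per delta
        accumulated += abs(delta)
        states.append(("calibrating", angle))
    return states
```

For example, with samples `[0.4, 0.4, 0.4, 0.1]` the object animates for the first three samples and is hidden on the fourth, once the accumulated motion crosses the threshold.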
109. The method of claim 108, wherein the request to display the augmented reality view of the physical environment in the first user interface region that includes the representation of the field of view of the one or more cameras includes a request to display a representation of a virtual three-dimensional object in the augmented reality view of the physical environment.
110. The method of claim 109, including:
after ceasing to display the calibration user interface object, displaying the representation of the virtual three-dimensional object in the first user interface region that includes the representation of the field of view of the one or more cameras.
111. The method of any of claims 109 to 110, including:
displaying the calibration user interface object while displaying the representation of the virtual three-dimensional object in the first user interface region, wherein, during the movement of the one or more cameras in the physical environment, the representation of the virtual three-dimensional object is maintained at a fixed location in the first user interface region.
112. The method of claim 108, wherein the request to display the augmented reality view of the physical environment in the first user interface region that includes the representation of the field of view of the one or more cameras includes a request to display the representation of the field of view of the one or more cameras without a request to display a representation of any virtual three-dimensional object in the physical environment captured in the field of view of the one or more cameras.
113. The method of any of claims 108 to 112, including:
in response to receiving the request to display the augmented reality view of the physical environment, displaying the representation of the field of view of the one or more cameras, and, in accordance with a determination that the calibration criteria for the augmented reality view of the physical environment are met, forgoing display of the calibration user interface object.
114. The method of any of claims 108 to 113, including:
displaying the calibration user interface object while displaying a text object in the first user interface region, wherein the calibration user interface object provides information regarding movement that can be taken by the user to improve calibration of the augmented reality view.
115. The method of any of claims 108 to 114, including:
in response to detecting that the calibration criteria are met, displaying a visual indication of a plane detected in the physical environment captured in the field of view of the one or more cameras.
116. The method of any of claims 108 to 115, including:
in response to receiving the request to display the augmented reality view of the physical environment:
in accordance with a determination that the calibration criteria are not met, and before displaying the calibration user interface object, displaying an animated prompt object, the animated prompt object including a representation of the device moving relative to a representation of a plane.
117. The method of any of claims 108 to 116, wherein adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes:
moving the calibration user interface object by a first amount in accordance with a first movement magnitude of the one or more cameras in the physical environment; and
moving the calibration user interface object by a second amount in accordance with a second movement magnitude of the one or more cameras in the physical environment, wherein the first amount is different from the second amount, and the first movement magnitude is different from the second movement magnitude.
118. The method of any of claims 108 to 117, wherein adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes:
in accordance with a determination that the detected change in attitude of the one or more cameras corresponds to movement of a first type, moving the calibration user interface object based on the movement of the first type; and
in accordance with a determination that the detected change in attitude of the one or more cameras corresponds to movement of a second type, forgoing moving the calibration user interface object based on the movement of the second type.
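Claims 117 and 118 together describe a mapping from detected camera movement to calibration-object movement: the amount scales with the movement magnitude, but only for movement of a qualifying type. A minimal sketch, under the assumption (purely illustrative, not stated in the claims) that tilting is the first type and lateral translation is the ignored second type:

```python
def calibration_object_displacement(movement_type, magnitude):
    """Map a detected camera movement to calibration-object movement.

    Movement of a first type ("tilt") drives the object by an amount
    proportional to its magnitude; movement of a second type ("translate")
    does not move the object at all.
    """
    if movement_type == "tilt":          # first type: animate the object
        return 2.0 * magnitude           # larger magnitude -> larger amount
    if movement_type == "translate":     # second type: forgo moving it
        return 0.0
    raise ValueError(f"unknown movement type: {movement_type}")
```

This gives the user differentiated feedback: motions that help calibration animate the object, while motions that do not are visibly inert.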
119. The method of any of claims 108 to 118, wherein adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes:
moving the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment without changing a characteristic display location of the calibration user interface object over the first user interface region.
120. The method of any of claims 108 to 119, wherein adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes:
rotating the calibration user interface object about an axis perpendicular to a direction of movement of the one or more cameras in the physical environment.
121. The method of any of claims 108 to 120, wherein adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes:
moving the calibration user interface object at a speed determined in accordance with a rate of change detected in the field of view of the one or more cameras.
122. The method of any of claims 108 to 121, wherein adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes:
moving the calibration user interface object in a direction determined in accordance with a direction of change detected in the field of view of the one or more cameras.
123. A computer system, comprising:
a display generation component;
one or more input devices;
one or more cameras;
one or more attitude sensors;
one or more processors; and
memory, the memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a request to display an augmented reality view of a physical environment in a first user interface region, the first user interface region including a representation of a field of view of the one or more cameras;
in response to receiving the request to display the augmented reality view of the physical environment, displaying the representation of the field of view of the one or more cameras, and, in accordance with a determination that calibration criteria for the augmented reality view of the physical environment are not met, displaying a calibration user interface object that is dynamically animated in accordance with movement of the one or more cameras in the physical environment, wherein displaying the calibration user interface object includes:
while displaying the calibration user interface object, detecting, via the one or more attitude sensors, a change in attitude of the one or more cameras in the physical environment; and
in response to detecting the change in attitude of the one or more cameras in the physical environment, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment;
while displaying the calibration user interface object that moves on the display in accordance with the detected change in attitude of the one or more cameras in the physical environment, detecting that the calibration criteria are met; and
in response to detecting that the calibration criteria are met, ceasing to display the calibration user interface object.
124. A computer-readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a computer system with a display generation component, one or more input devices, one or more cameras, and one or more attitude sensors, cause the computer system to:
receive a request to display an augmented reality view of a physical environment in a first user interface region, the first user interface region including a representation of a field of view of the one or more cameras;
in response to receiving the request to display the augmented reality view of the physical environment, display the representation of the field of view of the one or more cameras, and, in accordance with a determination that calibration criteria for the augmented reality view of the physical environment are not met, display a calibration user interface object that is dynamically animated in accordance with movement of the one or more cameras in the physical environment, wherein displaying the calibration user interface object includes:
while displaying the calibration user interface object, detecting, via the one or more attitude sensors, a change in attitude of the one or more cameras in the physical environment; and
in response to detecting the change in attitude of the one or more cameras in the physical environment, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment;
while displaying the calibration user interface object that moves on the display in accordance with the detected change in attitude of the one or more cameras in the physical environment, detect that the calibration criteria are met; and
in response to detecting that the calibration criteria are met, cease to display the calibration user interface object.
125. A computer system, comprising:
a display generation component;
one or more input devices;
one or more cameras;
one or more attitude sensors;
means for receiving a request to display an augmented reality view of a physical environment in a first user interface region, the first user interface region including a representation of a field of view of the one or more cameras;
means, enabled in response to receiving the request to display the augmented reality view of the physical environment, for: displaying the representation of the field of view of the one or more cameras, and, in accordance with a determination that calibration criteria for the augmented reality view of the physical environment are not met, displaying a calibration user interface object that is dynamically animated in accordance with movement of the one or more cameras in the physical environment, wherein displaying the calibration user interface object includes:
while displaying the calibration user interface object, detecting, via the one or more attitude sensors, a change in attitude of the one or more cameras in the physical environment; and
in response to detecting the change in attitude of the one or more cameras in the physical environment, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment;
means, enabled while displaying the calibration user interface object that moves on the display in accordance with the detected change in attitude of the one or more cameras in the physical environment, for detecting that the calibration criteria are met; and
means, enabled in response to detecting that the calibration criteria are met, for ceasing to display the calibration user interface object.
126. An information processing apparatus for use in a computer system with a display generation component, one or more input devices, one or more cameras, and one or more attitude sensors, comprising:
means for receiving a request to display an augmented reality view of a physical environment in a first user interface region, the first user interface region including a representation of a field of view of the one or more cameras;
means, enabled in response to receiving the request to display the augmented reality view of the physical environment, for: displaying the representation of the field of view of the one or more cameras, and, in accordance with a determination that calibration criteria for the augmented reality view of the physical environment are not met, displaying a calibration user interface object that is dynamically animated in accordance with movement of the one or more cameras in the physical environment, wherein displaying the calibration user interface object includes:
while displaying the calibration user interface object, detecting, via the one or more attitude sensors, a change in attitude of the one or more cameras in the physical environment; and
in response to detecting the change in attitude of the one or more cameras in the physical environment, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment;
means, enabled while displaying the calibration user interface object that moves on the display in accordance with the detected change in attitude of the one or more cameras in the physical environment, for detecting that the calibration criteria are met; and
means, enabled in response to detecting that the calibration criteria are met, for ceasing to display the calibration user interface object.
127. A computer system, comprising:
a display generation component;
one or more input devices;
one or more cameras;
one or more attitude sensors;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, and the one or more programs include instructions for performing any of the methods of claims 108 to 122.
128. A computer-readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a computer system with a display generation component, one or more input devices, one or more cameras, and one or more attitude sensors, cause the computer system to perform any of the methods of claims 108 to 122.
129. A graphical user interface on a computer system with a display generation component, one or more input devices, one or more cameras, one or more attitude sensors, memory, and one or more processors for executing one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 108 to 122.
130. A computer system, comprising:
a display generation component;
one or more input devices;
one or more cameras;
one or more attitude sensors; and
means for performing any of the methods of claims 108 to 122.
131. An information processing apparatus for use in a computer system with a display generation component, one or more input devices, one or more cameras, and one or more attitude sensors, comprising:
means for performing any of the methods of claims 108 to 122.
132. A method, comprising:
at a device having a display generation component and one or more input devices including a touch-sensitive surface:
displaying, via the display generation component, a representation of a first perspective of a virtual three-dimensional object in a first user interface region;
while displaying the representation of the first perspective of the virtual three-dimensional object in the first user interface region on the display, detecting a first input that corresponds to a request to rotate the virtual three-dimensional object relative to the display so as to display a portion of the virtual three-dimensional object that is not visible from the first perspective of the virtual three-dimensional object;
in response to detecting the first input:
in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a first axis, rotating the virtual three-dimensional object relative to the first axis by an amount that is determined based on a magnitude of the first input and that is constrained by a limit that restricts rotation of the virtual three-dimensional object relative to the first axis by more than a threshold amount of rotation; and
in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a second axis that is different from the first axis, rotating the virtual three-dimensional object relative to the second axis by an amount determined based on the magnitude of the first input, wherein, for an input with a magnitude above a respective threshold, the device rotates the virtual three-dimensional object relative to the second axis by more than the threshold amount of rotation.
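The asymmetry claim 132 recites is that rotation about the first axis is clamped to a threshold, while rotation about the second axis is not. A minimal sketch, with assumed specifics (degrees, a ±60° pitch limit, a 1:1 input-to-degrees mapping) chosen only for illustration:

```python
def apply_rotation(request_axis, input_magnitude, pitch_limit=60.0,
                   degrees_per_unit=1.0):
    """Rotate a virtual 3D object in response to a drag input.

    Rotation about the first axis is clamped so the object cannot be
    rotated past +/-pitch_limit degrees; rotation about the second axis
    is unconstrained, so a large enough input exceeds the same threshold.
    """
    amount = degrees_per_unit * input_magnitude
    if request_axis == "first":
        # Constrain by the limit that restricts rotation beyond the threshold.
        return max(-pitch_limit, min(pitch_limit, amount))
    if request_axis == "second":
        return amount                      # no threshold applies here
    raise ValueError(f"unknown axis: {request_axis}")
```

With this mapping, an input of magnitude 90 yields 60° about the first axis but the full 90° about the second, matching the claimed behavior.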
133. The method of claim 132, including:
in response to detecting the first input:
in accordance with a determination that the first input includes first movement of a contact across the touch-sensitive surface in a first direction, and a determination that the first movement of the contact in the first direction meets first criteria for rotating the representation of the virtual object relative to the first axis, wherein the first criteria include a requirement that the first input include more than a first threshold amount of movement in the first direction in order for the first criteria to be met, determining that the first input corresponds to a request to rotate the three-dimensional object about the first axis; and
in accordance with a determination that the first input includes second movement of the contact across the touch-sensitive surface in a second direction, and a determination that the second movement of the contact in the second direction meets second criteria for rotating the representation of the virtual object relative to the second axis, wherein the second criteria include a requirement that the first input include more than a second threshold amount of movement in the second direction in order for the second criteria to be met, determining that the first input corresponds to a request to rotate the three-dimensional object about the second axis, wherein the first threshold is greater than the second threshold.
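Claim 133's gesture disambiguation comes down to two movement thresholds, with the first-axis threshold deliberately larger. A minimal sketch, assuming (only for illustration) that the first direction is vertical and the second horizontal, with hypothetical threshold values:

```python
def classify_rotation_request(dx, dy, first_threshold=20.0, second_threshold=5.0):
    """Decide which axis a touch movement requests rotation about.

    Movement in the first direction (vertical, dy) must exceed a larger
    threshold than movement in the second direction (horizontal, dx), so
    the constrained first-axis rotation is harder to trigger accidentally.
    """
    if abs(dy) > first_threshold:
        return "first-axis"
    if abs(dx) > second_threshold:
        return "second-axis"
    return None  # neither set of criteria met; no rotation begins
```

Making the first threshold the greater one biases ambiguous drags toward the unconstrained second-axis rotation, which is the more common intent.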
134. The method of any of claims 132 to 133, wherein:
the rotation of the virtual three-dimensional object relative to the first axis occurs with a first degree of correspondence between a characteristic value of a first input parameter of the first input and an amount of rotation relative to the first axis that is applied to the virtual three-dimensional object;
the rotation of the virtual three-dimensional object relative to the second axis occurs with a second degree of correspondence between the characteristic value of the first input parameter of the first input and an amount of rotation relative to the second axis that is applied to the virtual three-dimensional object; and
compared to the second degree of correspondence, the first degree of correspondence involves a smaller rotation of the virtual three-dimensional object relative to the first input parameter.
135. The method of any of claims 132 to 134, including:
detecting an end of the first input; and
after detecting the end of the first input, continuing to rotate the three-dimensional object based on the magnitude of the first input prior to detecting the end of the input, including:
in accordance with a determination that the three-dimensional object was rotating relative to the first axis, slowing the rotation of the object relative to the first axis by a first amount, the first amount being proportional to the magnitude of the rotation of the three-dimensional object relative to the first axis; and
in accordance with a determination that the three-dimensional object was rotating relative to the second axis, slowing the rotation of the object relative to the second axis by a second amount, the second amount being proportional to the magnitude of the rotation of the three-dimensional object relative to the second axis, wherein the second amount is different from the first amount.
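The post-input behavior in claim 135 is a simple inertial decay: rotation continues after the touch ends, slowing each step by an amount proportional to the remaining speed, with a different deceleration per axis. A minimal sketch, with assumed friction factors (0.5 and 0.2) that are purely illustrative:

```python
def inertial_rotation(axis, initial_velocity, steps=5):
    """Continue rotating after the input ends.

    Each step, the rotation speed is reduced by an amount proportional to
    the current speed, using a different friction factor per axis; the
    return value is the total extra rotation accumulated over `steps`.
    """
    friction = {"first": 0.5, "second": 0.2}[axis]   # assumed per-axis factors
    velocity, total = initial_velocity, 0.0
    for _ in range(steps):
        velocity -= friction * velocity   # slow-down proportional to speed
        total += velocity
    return total
```

Because the first axis decays faster, the same release velocity yields noticeably less follow-through rotation about the constrained axis than about the free one.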
136. The method of any of claims 132 to 135, including:
detecting an end of the first input; and
after detecting the end of the first input:
in accordance with a determination that the three-dimensional object has rotated relative to the first axis by more than a respective threshold amount of rotation, reversing at least a portion of the rotation of the three-dimensional object relative to the first axis; and
in accordance with a determination that the three-dimensional object has rotated relative to the first axis by less than the respective threshold amount of rotation, forgoing reversing the rotation of the three-dimensional object relative to the first axis.
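The settle step in claim 136 is a snap-back: if the drag carried the first-axis rotation past the threshold, part of that rotation is reversed once the input ends; otherwise the angle is left alone. A minimal sketch under the assumption (illustrative only) that the object settles exactly at the threshold:

```python
def settle_rotation(angle, threshold=60.0):
    """Resolve the first-axis rotation after the input ends.

    Rotation beyond +/-threshold degrees is partially reversed back to the
    threshold; rotation within the threshold is kept as-is.
    """
    if angle > threshold:
        return threshold      # reverse the overshoot
    if angle < -threshold:
        return -threshold
    return angle              # forgo reversing: within the allowed range
```

The effect is a rubber-band feel: the user can drag past the limit, but the object springs back when released.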
137. The method of any of claims 132 to 134, wherein:
in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a third axis that is different from the first axis and the second axis, the device forgoes rotating the virtual three-dimensional object relative to the third axis.
138. The method of any of claims 132 to 137, including:
while displaying the representation of the first perspective of the virtual three-dimensional object in the first user interface region, displaying a representation of a shadow cast by the virtual three-dimensional object; and
changing a shape of the representation of the shadow in accordance with rotation of the virtual three-dimensional object relative to the first axis and/or the second axis.
139. The method of claim 138, including:
while rotating the virtual three-dimensional object in the first user interface region:
in accordance with a determination that the virtual three-dimensional object is displayed from a second perspective that reveals a predefined bottom of the virtual three-dimensional object, forgoing displaying the representation of the shadow together with the representation of the second perspective of the virtual three-dimensional object.
140. The method of any of claims 132 to 139, including:
after rotating the virtual three-dimensional object in the first user interface region, detecting a second input that corresponds to a request to reset the virtual three-dimensional object in the first user interface region; and
in response to detecting the second input, displaying a representation of a predefined original perspective of the virtual three-dimensional object in the first user interface region.
141. The method of any of claims 132 to 140, including:
while displaying the virtual three-dimensional object in the first user interface region, detecting a third input that corresponds to a request to resize the virtual three-dimensional object; and
in response to detecting the third input, adjusting a size of the representation of the virtual three-dimensional object in the first user interface region in accordance with a magnitude of the input.
142. The method of claim 141, including:
while adjusting the size of the representation of the virtual three-dimensional object in the first user interface region, detecting that the size of the virtual three-dimensional object has reached a predefined default display size of the virtual three-dimensional object; and
in response to detecting that the size of the virtual three-dimensional object has reached the predefined default display size of the virtual three-dimensional object, generating a tactile output to indicate that the virtual three-dimensional object is displayed at the predefined default display size.
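The haptic behavior in claim 142 amounts to firing a tactile output exactly when a resize gesture brings the object back to its predefined default display size. A minimal sketch over a sequence of size samples, with the default size and edge-triggering detail assumed for illustration:

```python
def resize_with_haptics(sizes, default_size=1.0):
    """Track a resize gesture and report when a tactile output should fire.

    Returns one entry per size sample: "haptic" exactly when the object's
    size reaches the predefined default display size (edge-triggered, so a
    size held at the default does not fire repeatedly), else None.
    """
    events = []
    previous = None
    for size in sizes:
        if size == default_size and previous != default_size:
            events.append("haptic")      # reached the default display size
        else:
            events.append(None)
        previous = size
    return events
```

Edge-triggering is the design choice worth noting: the feedback marks the crossing of the default size, not the state of being at it.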
143. The method of any of claims 132 to 142, wherein the device includes one or more cameras, and the method includes:
while displaying a representation of a third perspective of the virtual three-dimensional object in the first user interface region, detecting a fourth input that corresponds to a request to display the virtual three-dimensional object in a second user interface region, the second user interface region including a field of view of the one or more cameras; and
in response to detecting the fourth input, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras that is included in the second user interface region, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object includes:
rotating the virtual three-dimensional object to a predefined angle about the first axis; and
maintaining a current angle of the virtual three-dimensional object relative to the second axis.
144. The method of any of claims 132 to 143, including:
while displaying a representation of a fourth perspective of the virtual three-dimensional object in the first user interface region, detecting a fifth input that corresponds to a request to return to a two-dimensional user interface that includes a two-dimensional representation of the virtual three-dimensional object; and
in response to detecting the fifth input:
rotating the virtual three-dimensional object to display a perspective of the virtual three-dimensional object that corresponds to the two-dimensional representation of the virtual three-dimensional object; and
after the virtual three-dimensional object has rotated to display the respective perspective that corresponds to the two-dimensional representation of the virtual three-dimensional object, displaying the two-dimensional representation of the virtual three-dimensional object.
145. The method of any of claims 132 to 144, including:
before displaying the representation of the first perspective of the virtual three-dimensional object, displaying a user interface that includes a representation of the virtual three-dimensional object, wherein the representation of the virtual three-dimensional object is viewed from a respective perspective;
while displaying the representation of the virtual three-dimensional object, detecting a request to display the virtual three-dimensional object; and
in response to detecting the request to display the virtual three-dimensional object, replacing the display of the representation of the virtual three-dimensional object with the virtual three-dimensional object rotated to match the respective perspective of the representation.
146. The method of any of claims 132 to 145, including:
before displaying the first user interface, displaying a two-dimensional user interface that includes the two-dimensional representation of the virtual three-dimensional object;
while displaying the two-dimensional user interface that includes the two-dimensional representation of the virtual three-dimensional object, detecting a first portion of a touch input that meets preview criteria at a location on the touch-sensitive surface that corresponds to the two-dimensional representation of the virtual three-dimensional object; and
in response to detecting the first portion of the touch input that meets the preview criteria, displaying a preview of the virtual three-dimensional object that is larger than the two-dimensional representation of the virtual three-dimensional object.
147. The method of claim 146, comprising:
while displaying the preview of the virtual three-dimensional object, detecting a second portion of the touch input; and
in response to detecting the second portion of the touch input:
in accordance with a determination that the second portion of the touch input meets menu-display criteria, displaying a plurality of selectable options that correspond to a plurality of operations associated with the virtual object; and
in accordance with a determination that the second portion of the touch input meets staging criteria, replacing display of the two-dimensional user interface that includes the two-dimensional representation of the virtual three-dimensional object with the first user interface that includes the virtual three-dimensional object.
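The two-stage touch flow of claims 146 and 147 (a first portion of a touch shows an enlarged preview; the second portion either opens an options menu or "stages" the 3D object) can be sketched as a simple dispatcher. This is a minimal illustration, not the patented implementation: the concrete criteria (a hold duration for the menu, an upward-swipe distance for staging) and all names are assumptions.

```python
# Hypothetical dispatch of the second portion of a touch input after a
# preview is shown (claims 146-147). The numeric criteria are assumptions.
def handle_second_portion(duration_s: float, upward_swipe_px: float) -> str:
    """Decide what the second portion of the touch input triggers."""
    MENU_HOLD_S = 0.5       # assumed menu-display criterion
    STAGE_SWIPE_PX = 80.0   # assumed staging criterion
    if upward_swipe_px >= STAGE_SWIPE_PX:
        # Staging: replace the 2D user interface with the 3D object view.
        return "stage-3d-object"
    if duration_s >= MENU_HOLD_S:
        # Menu display: show selectable options for operations on the object.
        return "show-options-menu"
    return "dismiss-preview"


print(handle_second_portion(0.6, 10.0))   # meets the assumed menu criterion
print(handle_second_portion(0.1, 100.0))  # meets the assumed staging criterion
```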
148. The method of any one of claims 132 to 147, wherein the first user interface includes a plurality of controls, and the method includes:
before displaying the first user interface, displaying a two-dimensional user interface that includes the two-dimensional representation of the virtual three-dimensional object; and
in response to detecting a request to display the virtual three-dimensional object in the first user interface:
displaying the virtual three-dimensional object in the first user interface without displaying a set of one or more controls associated with the virtual three-dimensional object; and
after displaying the virtual three-dimensional object in the first user interface, displaying the set of one or more controls.
149. A computer system, comprising:
a display generation component;
one or more input devices including a touch-sensitive surface;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
displaying, via the display generation component, a representation of a first perspective of a virtual three-dimensional object in a first user interface region;
while displaying the representation of the first perspective of the virtual three-dimensional object in the first user interface region on the display, detecting a first input that corresponds to a request to rotate the virtual three-dimensional object relative to the display so as to display a portion of the virtual three-dimensional object that is not visible from the first perspective of the virtual three-dimensional object; and
in response to detecting the first input:
in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a first axis, rotating the virtual three-dimensional object relative to the first axis by an amount that is determined based on a magnitude of the first input and that is constrained by a limit restricting rotation of the virtual three-dimensional object relative to the first axis beyond a threshold amount of rotation; and
in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a second axis that is different from the first axis, rotating the virtual three-dimensional object relative to the second axis by an amount that is determined based on the magnitude of the first input, wherein, for an input with a magnitude above a respective threshold, the device rotates the virtual three-dimensional object relative to the second axis by more than the threshold amount of rotation.
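The axis-dependent constraint recited in claims 149 to 152 (rotation about the first axis is clamped to a threshold, while rotation about the second axis is not) can be sketched as follows. This is an illustrative assumption-laden sketch, not the patented implementation: the clamp value, the linear input-to-degrees mapping, and all names are invented for the example.

```python
# Hypothetical sketch of the claimed axis-dependent rotation constraint.
FIRST_AXIS_LIMIT_DEG = 30.0  # assumed clamp for rotation about the first axis
DEGREES_PER_POINT = 0.5      # assumed mapping from input magnitude to degrees


def rotation_amount(axis: str, input_magnitude: float) -> float:
    """Return the rotation (degrees) applied for an input of the given magnitude."""
    amount = input_magnitude * DEGREES_PER_POINT
    if axis == "first":
        # Constrained: rotation about the first axis never exceeds the threshold.
        return min(amount, FIRST_AXIS_LIMIT_DEG)
    # Unconstrained: a large input may rotate past the threshold about the
    # second axis, as the claim requires.
    return amount


print(rotation_amount("first", 200.0))   # clamped to the limit
print(rotation_amount("second", 200.0))  # exceeds the threshold amount
```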
150. A computer-readable storage medium storing one or more programs, the one or more programs including instructions that, when executed by a computer system having a display generation component and one or more input devices including a touch-sensitive surface, cause the computer system to:
display, via the display generation component, a representation of a first perspective of a virtual three-dimensional object in a first user interface region;
while displaying the representation of the first perspective of the virtual three-dimensional object in the first user interface region on the display, detect a first input that corresponds to a request to rotate the virtual three-dimensional object relative to the display so as to display a portion of the virtual three-dimensional object that is not visible from the first perspective of the virtual three-dimensional object; and
in response to detecting the first input:
in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a first axis, rotate the virtual three-dimensional object relative to the first axis by an amount that is determined based on a magnitude of the first input and that is constrained by a limit restricting rotation of the virtual three-dimensional object relative to the first axis beyond a threshold amount of rotation; and
in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a second axis that is different from the first axis, rotate the virtual three-dimensional object relative to the second axis by an amount that is determined based on the magnitude of the first input, wherein, for an input with a magnitude above a respective threshold, the device rotates the virtual three-dimensional object relative to the second axis by more than the threshold amount of rotation.
151. A computer system, comprising:
a display generation component;
one or more input devices including a touch-sensitive surface;
means for displaying, via the display generation component, a representation of a first perspective of a virtual three-dimensional object in a first user interface region;
means, enabled while displaying the representation of the first perspective of the virtual three-dimensional object in the first user interface region on the display, for detecting a first input that corresponds to a request to rotate the virtual three-dimensional object relative to the display so as to display a portion of the virtual three-dimensional object that is not visible from the first perspective of the virtual three-dimensional object; and
means, enabled in response to detecting the first input, for:
in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a first axis, rotating the virtual three-dimensional object relative to the first axis by an amount that is determined based on a magnitude of the first input and that is constrained by a limit restricting rotation of the virtual three-dimensional object relative to the first axis beyond a threshold amount of rotation; and
in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a second axis that is different from the first axis, rotating the virtual three-dimensional object relative to the second axis by an amount that is determined based on the magnitude of the first input, wherein, for an input with a magnitude above a respective threshold, the device rotates the virtual three-dimensional object relative to the second axis by more than the threshold amount of rotation.
152. An information processing apparatus for use in a computer system having a display generation component and one or more input devices including a touch-sensitive surface, comprising:
means for displaying, via the display generation component, a representation of a first perspective of a virtual three-dimensional object in a first user interface region;
means, enabled while displaying the representation of the first perspective of the virtual three-dimensional object in the first user interface region on the display, for detecting a first input that corresponds to a request to rotate the virtual three-dimensional object relative to the display so as to display a portion of the virtual three-dimensional object that is not visible from the first perspective of the virtual three-dimensional object; and
means, enabled in response to detecting the first input, for:
in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a first axis, rotating the virtual three-dimensional object relative to the first axis by an amount that is determined based on a magnitude of the first input and that is constrained by a limit restricting rotation of the virtual three-dimensional object relative to the first axis beyond a threshold amount of rotation; and
in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a second axis that is different from the first axis, rotating the virtual three-dimensional object relative to the second axis by an amount that is determined based on the magnitude of the first input, wherein, for an input with a magnitude above a respective threshold, the device rotates the virtual three-dimensional object relative to the second axis by more than the threshold amount of rotation.
153. A computer system, comprising:
a display generation component;
one or more input devices including a touch-sensitive surface;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for performing any one of the methods of claims 132 to 148.
154. A computer-readable storage medium storing one or more programs, the one or more programs including instructions that, when executed by a computer system having a display generation component and one or more input devices including a touch-sensitive surface, cause the computer system to perform any one of the methods of claims 132 to 148.
155. A graphical user interface on a computer system having a display generation component, one or more input devices including a touch-sensitive surface, a memory, and one or more processors for executing one or more programs stored in the memory, the graphical user interface comprising a user interface displayed in accordance with any one of the methods of claims 132 to 148.
156. A computer system, comprising:
a display generation component;
one or more input devices including a touch-sensitive surface; and
means for performing any one of the methods of claims 132 to 148.
157. An information processing apparatus for use in a computer system having a display generation component, one or more input devices including a touch-sensitive surface, and one or more cameras, comprising:
means for performing any one of the methods of claims 132 to 148.
158. A method, comprising:
at a device having a display generation component and a touch-sensitive surface:
displaying, via the display generation component, a first user interface region that includes a user interface object associated with a plurality of object manipulation behaviors, the plurality of object manipulation behaviors including a first object manipulation behavior that is performed in response to input meeting first gesture-recognition criteria and a second object manipulation behavior that is performed in response to input meeting second gesture-recognition criteria;
while displaying the first user interface region, detecting a first portion of an input directed to the user interface object, including detecting movement of one or more contacts on the touch-sensitive surface and, while the one or more contacts are detected on the touch-sensitive surface, evaluating the movement of the one or more contacts against both the first gesture-recognition criteria and the second gesture-recognition criteria; and
in response to detecting the first portion of the input, updating an appearance of the user interface object based on the first portion of the input, including:
in accordance with a determination that the input meets the first gesture-recognition criteria before meeting the second gesture-recognition criteria:
changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the first portion of the input; and
updating the second gesture-recognition criteria by increasing a threshold of the second gesture-recognition criteria; and
in accordance with a determination that the input meets the second gesture-recognition criteria before meeting the first gesture-recognition criteria:
changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the first portion of the input; and
updating the first gesture-recognition criteria by increasing a threshold of the first gesture-recognition criteria.
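The threshold-escalation scheme of claim 158 (whichever gesture is recognized first raises the competing gesture's recognition threshold, making accidental activation of the competing behavior harder) can be sketched as follows. All names, base thresholds, and the escalation factor are illustrative assumptions, not the claimed implementation.

```python
# Hypothetical sketch of competing gesture criteria with threshold escalation.
class GestureCriteria:
    """A single gesture-recognition criterion with a mutable threshold."""

    def __init__(self, threshold: float):
        self.threshold = threshold

    def met_by(self, magnitude: float) -> bool:
        return magnitude >= self.threshold


def process_first_portion(scale_mag, rotate_mag, scale_criteria, rotate_criteria):
    """Evaluate one input portion against both criteria; escalate the loser.

    Returns which manipulation behavior was recognized, or None.
    """
    if scale_criteria.met_by(scale_mag) and not rotate_criteria.met_by(rotate_mag):
        # Scale recognized first: raise the competing rotate threshold.
        rotate_criteria.threshold *= 2
        return "scale"
    if rotate_criteria.met_by(rotate_mag) and not scale_criteria.met_by(scale_mag):
        # Rotate recognized first: raise the competing scale threshold.
        scale_criteria.threshold *= 2
        return "rotate"
    return None


scale = GestureCriteria(10.0)
rotate = GestureCriteria(10.0)
print(process_first_portion(15.0, 5.0, scale, rotate))  # scale wins
print(rotate.threshold)  # rotate now needs a larger movement to engage
```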
159. The method of claim 158, comprising:
after updating the appearance of the user interface object based on the first portion of the input, detecting a second portion of the input; and
in response to detecting the second portion of the input, updating the appearance of the user interface object based on the second portion of the input, including:
in accordance with a determination that the first portion of the input met the first gesture-recognition criteria and the second portion of the input does not meet the updated second gesture-recognition criteria: changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the second portion of the input, without changing the appearance of the user interface object in accordance with the second object manipulation behavior; and
in accordance with a determination that the first portion of the input met the second gesture-recognition criteria and the second portion of the input does not meet the updated first gesture-recognition criteria: changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the second portion of the input, without changing the appearance of the user interface object in accordance with the first object manipulation behavior.
160. The method of claim 159, wherein, after the first portion of the input meets the first gesture-recognition criteria, the appearance of the user interface object is changed in accordance with the first object manipulation behavior based on the second portion of the input, the second portion of the input including input that meets the second gesture-recognition criteria as they stood before the second gesture-recognition criteria were updated.
161. The method of any one of claims 159 to 160, wherein, after the first portion of the input meets the second gesture-recognition criteria, the appearance of the user interface object is changed in accordance with the second object manipulation behavior based on the second portion of the input, the second portion of the input including input that meets the first gesture-recognition criteria as they stood before the first gesture-recognition criteria were updated.
162. The method of claim 159, wherein, after the first portion of the input meets the first gesture-recognition criteria, the appearance of the user interface object is changed in accordance with the first object manipulation behavior based on the second portion of the input, the second portion of the input not including input that meets the first gesture-recognition criteria.
163. The method of claim 159 or 162, wherein, after the first portion of the input meets the second gesture-recognition criteria, the appearance of the user interface object is changed in accordance with the second object manipulation behavior based on the second portion of the input, the second portion of the input not including input that meets the second gesture-recognition criteria.
164. The method of claim 159, wherein updating the appearance of the user interface object based on the second portion of the input includes:
in accordance with a determination that the first portion of the input met the second gesture-recognition criteria and the second portion of the input meets the updated first gesture-recognition criteria:
changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the second portion of the input; and
changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the second portion of the input; and
in accordance with a determination that the first portion of the input met the first gesture-recognition criteria and the second portion of the input meets the updated second gesture-recognition criteria:
changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the second portion of the input; and
changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the second portion of the input.
165. The method of claim 164, comprising:
after updating the appearance of the user interface object based on the second portion of the input, detecting a third portion of the input; and
in response to detecting the third portion of the input, updating the appearance of the user interface object based on the third portion of the input, including:
changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the third portion of the input; and
changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the third portion of the input.
166. The method of claim 165, wherein the third portion of the input does not include input that meets the first gesture-recognition criteria or input that meets the second gesture-recognition criteria.
167. The method of any one of claims 164 to 166, wherein:
the plurality of object manipulation behaviors includes a third object manipulation behavior that is performed in response to input meeting third gesture-recognition criteria; and
updating the appearance of the user interface object based on the first portion of the input includes:
in accordance with a determination that the first portion of the input meets the first gesture-recognition criteria before meeting the second gesture-recognition criteria or the third gesture-recognition criteria:
changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the first portion of the input;
updating the second gesture-recognition criteria by increasing a threshold of the second gesture-recognition criteria; and
updating the third gesture-recognition criteria by increasing a threshold of the third gesture-recognition criteria;
in accordance with a determination that the input meets the second gesture-recognition criteria before meeting the first gesture-recognition criteria or the third gesture-recognition criteria:
changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the first portion of the input;
updating the first gesture-recognition criteria by increasing a threshold of the first gesture-recognition criteria; and
updating the third gesture-recognition criteria by increasing a threshold of the third gesture-recognition criteria; and
in accordance with a determination that the input meets the third gesture-recognition criteria before meeting the first gesture-recognition criteria or the second gesture-recognition criteria:
changing the appearance of the user interface object in accordance with the third object manipulation behavior based on the first portion of the input;
updating the first gesture-recognition criteria by increasing a threshold of the first gesture-recognition criteria; and
updating the second gesture-recognition criteria by increasing a threshold of the second gesture-recognition criteria.
168. The method of any one of claims 164 to 166, wherein:
the plurality of object manipulation behaviors includes a third object manipulation behavior that is performed in response to input meeting third gesture-recognition criteria,
the first portion of the input did not meet the third gesture-recognition criteria before meeting the first gesture-recognition criteria or the second gesture-recognition criteria,
after the first portion of the input met the first gesture-recognition criteria or the second gesture-recognition criteria, the device updated the third gesture-recognition criteria by increasing a threshold of the third gesture-recognition criteria, and
the second portion of the input did not meet the updated third gesture-recognition criteria before meeting the updated first gesture-recognition criteria or the updated second gesture-recognition criteria; and
the method includes:
in response to detecting a third portion of the input:
in accordance with a determination that the third portion of the input meets the updated third gesture-recognition criteria, changing the appearance of the user interface object in accordance with the third object manipulation behavior based on the third portion of the input; and
in accordance with a determination that the third portion of the input does not meet the updated third gesture-recognition criteria, forgoing changing the appearance of the user interface object in accordance with the third object manipulation behavior based on the third portion of the input.
169. The method of any one of claims 164 to 168, wherein the third portion of the input meets the updated third gesture-recognition criteria, and the method includes:
after updating the appearance of the user interface object based on the third portion of the input, detecting a fourth portion of the input; and
in response to detecting the fourth portion of the input, updating the appearance of the user interface object based on the fourth portion of the input, including:
changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the fourth portion of the input;
changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the fourth portion of the input; and
changing the appearance of the user interface object in accordance with the third object manipulation behavior based on the fourth portion of the input.
170. The method of claim 169, wherein the fourth portion of the input does not include:
input that meets the first gesture-recognition criteria,
input that meets the second gesture-recognition criteria, or
input that meets the third gesture-recognition criteria.
171. The method of any one of claims 158 to 170, wherein the first gesture-recognition criteria and the second gesture-recognition criteria both require a first number of concurrently detected contacts in order to be met.
172. The method of any one of claims 158 to 171, wherein the first object manipulation behavior changes a zoom level or display size of the user interface object, and the second object manipulation behavior changes a rotation angle of the user interface object.
173. The method of any one of claims 158 to 171, wherein the first object manipulation behavior changes a zoom level or display size of the user interface object, and the second object manipulation behavior changes a position of the user interface object within the first user interface region.
174. The method of any one of claims 158 to 171, wherein the first object manipulation behavior changes a position of the user interface object within the first user interface region, and the second object manipulation behavior changes a rotation angle of the user interface object.
175. The method of any one of claims 158 to 174, wherein the first portion of the input and the second portion of the input are provided by a plurality of continuously maintained contacts, and the method includes:
re-establishing the first gesture-recognition criteria and the second gesture-recognition criteria after detecting liftoff of the plurality of continuously maintained contacts, in order to initiate additional first object manipulation behaviors and second object manipulation behaviors.
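The liftoff behavior of claim 175 (escalated thresholds persist only while the contacts are maintained, and both criteria are re-established once the contacts lift off) can be sketched as a small session object. Base values and the escalation factor are illustrative assumptions, not the claimed values.

```python
# Hypothetical sketch of claim 175: thresholds escalated mid-gesture are
# restored to their base values when all maintained contacts lift off.
BASE_THRESHOLDS = {"first": 10.0, "second": 10.0}  # assumed base criteria


class GestureSession:
    def __init__(self):
        self.thresholds = dict(BASE_THRESHOLDS)

    def escalate(self, criteria: str, factor: float = 2.0) -> None:
        # Raise a competing gesture's threshold while contacts persist
        # (the mechanism of claims 158 and 167).
        self.thresholds[criteria] *= factor

    def on_liftoff(self) -> None:
        # Re-establish the original criteria so the next touch sequence
        # starts a fresh recognition competition.
        self.thresholds = dict(BASE_THRESHOLDS)


session = GestureSession()
session.escalate("second")
print(session.thresholds["second"])  # 20.0 while the contacts are held
session.on_liftoff()
print(session.thresholds["second"])  # 10.0 after liftoff
```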
176. The method of any one of claims 158 to 175, wherein the first gesture-recognition criteria correspond to rotation about a first axis, and the second gesture-recognition criteria correspond to rotation about a second axis that is orthogonal to the first axis.
177. A computer system, comprising:
a display generation component;
a touch-sensitive surface;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
displaying, via the display generation component, a first user interface region that includes a user interface object associated with a plurality of object manipulation behaviors, the plurality of object manipulation behaviors including a first object manipulation behavior that is performed in response to input meeting first gesture-recognition criteria and a second object manipulation behavior that is performed in response to input meeting second gesture-recognition criteria;
while displaying the first user interface region, detecting a first portion of an input directed to the user interface object, including detecting movement of one or more contacts on the touch-sensitive surface and, while the one or more contacts are detected on the touch-sensitive surface, evaluating the movement of the one or more contacts against both the first gesture-recognition criteria and the second gesture-recognition criteria; and
in response to detecting the first portion of the input, updating an appearance of the user interface object based on the first portion of the input, including:
in accordance with a determination that the input meets the first gesture-recognition criteria before meeting the second gesture-recognition criteria:
changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the first portion of the input; and
updating the second gesture-recognition criteria by increasing a threshold of the second gesture-recognition criteria; and
in accordance with a determination that the input meets the second gesture-recognition criteria before meeting the first gesture-recognition criteria:
changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the first portion of the input; and
updating the first gesture-recognition criteria by increasing a threshold of the first gesture-recognition criteria.
178. A computer-readable storage medium storing one or more programs, the one or more programs including instructions that, when executed by a computer system having a display generation component and a touch-sensitive surface, cause the computer system to:
display, via the display generation component, a first user interface region that includes a user interface object associated with a plurality of object manipulation behaviors, the plurality of object manipulation behaviors including a first object manipulation behavior that is performed in response to input meeting first gesture-recognition criteria and a second object manipulation behavior that is performed in response to input meeting second gesture-recognition criteria;
while displaying the first user interface region, detect a first portion of an input directed to the user interface object, including detecting movement of one or more contacts on the touch-sensitive surface and, while the one or more contacts are detected on the touch-sensitive surface, evaluating the movement of the one or more contacts against both the first gesture-recognition criteria and the second gesture-recognition criteria; and
in response to detecting the first portion of the input, update an appearance of the user interface object based on the first portion of the input, including:
in accordance with a determination that the input meets the first gesture-recognition criteria before meeting the second gesture-recognition criteria:
changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the first portion of the input; and
updating the second gesture-recognition criteria by increasing a threshold of the second gesture-recognition criteria; and
in accordance with a determination that the input meets the second gesture-recognition criteria before meeting the first gesture-recognition criteria:
changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the first portion of the input; and
updating the first gesture-recognition criteria by increasing a threshold of the first gesture-recognition criteria.
179. A computer system, comprising:
a display generation component;
a touch-sensitive surface;
means for displaying, via the display generation component, a first user interface region, the first user interface region including a user interface object associated with a plurality of object manipulation behaviors, the plurality of object manipulation behaviors including a first object manipulation behavior that is performed in response to input meeting first gesture-recognition criteria and a second object manipulation behavior that is performed in response to input meeting second gesture-recognition criteria;
means, enabled while displaying the first user interface region, for: detecting a first portion of an input directed to the user interface object, including detecting movement of one or more contacts on the touch-sensitive surface; and, while the one or more contacts are detected on the touch-sensitive surface, evaluating the movement of the one or more contacts against the first gesture-recognition criteria and the second gesture-recognition criteria; and
means, enabled in response to detecting the first portion of the input, for updating an appearance of the user interface object based on the first portion of the input, including:
in accordance with a determination that the input meets the first gesture-recognition criteria before meeting the second gesture-recognition criteria:
changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the first portion of the input; and
updating the second gesture-recognition criteria by increasing a threshold of the second gesture-recognition criteria; and
in accordance with a determination that the input meets the second gesture-recognition criteria before meeting the first gesture-recognition criteria:
changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the first portion of the input; and
updating the first gesture-recognition criteria by increasing a threshold of the first gesture-recognition criteria.
180. An information processing apparatus for use in a computer system with a display generation component and a touch-sensitive surface, comprising:
means for displaying, via the display generation component, a first user interface region that includes a user interface object associated with a plurality of object manipulation behaviors, including a first object manipulation behavior that is performed in response to inputs that meet first gesture recognition criteria and a second object manipulation behavior that is performed in response to inputs that meet second gesture recognition criteria;
means, enabled while displaying the first user interface region, for: detecting a first portion of an input directed to the user interface object, including detecting one or more contacts on the touch-sensitive surface and, while the one or more contacts are detected on the touch-sensitive surface, evaluating movement of the one or more contacts against both the first gesture recognition criteria and the second gesture recognition criteria; and
means, enabled in response to detecting the first portion of the input, for updating an appearance of the user interface object based on the first portion of the input, including:
in accordance with a determination that the first portion of the input meets the first gesture recognition criteria before meeting the second gesture recognition criteria:
changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the first portion of the input; and
updating the second gesture recognition criteria by increasing a threshold of the second gesture recognition criteria; and
in accordance with a determination that the first portion of the input meets the second gesture recognition criteria before meeting the first gesture recognition criteria:
changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the first portion of the input; and
updating the first gesture recognition criteria by increasing a threshold of the first gesture recognition criteria.
181. A computer system, comprising:
a display generation component;
a touch-sensitive surface;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 158 to 176.
182. A computer-readable storage medium storing one or more programs, the one or more programs including instructions that, when executed by a computer system with a display generation component and a touch-sensitive surface, cause the computer system to perform any of the methods of claims 158 to 176.
183. A graphical user interface on a computer system with a display generation component, one or more input devices, one or more cameras, memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 158 to 176.
184. A computer system, comprising:
a display generation component;
a touch-sensitive surface; and
means for performing any of the methods of claims 158 to 176.
185. An information processing apparatus for use in a computer system with a display generation component and a touch-sensitive surface, comprising:
means for performing any of the methods of claims 158 to 176.
186. A method, comprising:
at a device having a display generation component, one or more input devices, one or more audio output generators, and one or more cameras:
displaying, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of the one or more cameras, wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected in a physical environment captured in the field of view of the one or more cameras;
detecting movement of the device that adjusts the field of view of the one or more cameras; and
in response to detecting the movement of the device that adjusts the field of view of the one or more cameras:
adjusting the display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected in the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted; and,
in accordance with a determination that the movement of the device causes more than a threshold amount of the virtual object to move outside of a displayed portion of the field of view of the one or more cameras, generating a first audio alert via the one or more audio output generators.
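The off-screen check in the claim above can be sketched as a screen-space visibility test; the rectangle convention, function names, and the 50% threshold are illustrative assumptions, not the patented implementation.

```python
# Sketch: project the virtual object's screen-space bounds against the displayed
# camera viewport, and fire the audio alert when more than a threshold fraction
# of the object has moved off screen.

def visible_fraction(obj_rect, viewport):
    """Both rects are (x, y, w, h); returns the fraction of obj_rect inside viewport."""
    ox, oy, ow, oh = obj_rect
    vx, vy, vw, vh = viewport
    ix = max(0.0, min(ox + ow, vx + vw) - max(ox, vx))  # horizontal overlap
    iy = max(0.0, min(oy + oh, vy + vh) - max(oy, vy))  # vertical overlap
    return (ix * iy) / (ow * oh)

def should_alert(obj_rect, viewport, off_screen_threshold=0.5):
    # Alert when more than the threshold amount of the object is outside the view.
    return (1.0 - visible_fraction(obj_rect, viewport)) > off_screen_threshold

viewport = (0, 0, 100, 100)
assert not should_alert((10, 10, 20, 20), viewport)   # fully visible, no alert
assert should_alert((90, 90, 20, 20), viewport)       # about 75% off screen, alert
```

The same `visible_fraction` value could drive the variants in claims 187 and 188, which have the alert convey how much of the object remains visible, or how much of the displayed view it occupies.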
187. The method of claim 186, wherein outputting the first audio alert includes generating an audio output that indicates an amount of the virtual object that remains visible in the displayed portion of the field of view of the one or more cameras.
188. The method of any of claims 186 to 187, wherein outputting the first audio alert includes generating an audio output that indicates an amount of the displayed portion of the field of view that is occupied by the virtual object.
189. The method of any of claims 186 to 188, including:
detecting an input by a contact at a location on the touch-sensitive surface that corresponds to the representation of the field of view of the one or more cameras; and
in response to detecting the input, and in accordance with a determination that the input is detected at a first location on the touch-sensitive surface that corresponds to a first portion of the field of view of the one or more cameras that is not occupied by the virtual object, generating a second audio alert.
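Claim 189's "tap missed the object" feedback reduces to a hit test. A minimal sketch, assuming the same (x, y, w, h) rectangle convention; the names and return values are illustrative:

```python
# Sketch: a tap on a part of the camera view not occupied by the virtual object
# produces a distinct "second" audio alert instead of manipulating the object.

def contains(rect, point):
    x, y, w, h = rect
    px, py = point
    return x <= px <= x + w and y <= py <= y + h

def alert_for_tap(tap_point, object_rect):
    # Second audio alert only when the tap misses the virtual object.
    return None if contains(object_rect, tap_point) else "second_audio_alert"

assert alert_for_tap((5, 5), (40, 40, 20, 20)) == "second_audio_alert"  # miss
assert alert_for_tap((50, 50), (40, 40, 20, 20)) is None                # hit
```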
190. The method of any of claims 186 to 189, wherein outputting the first audio alert includes generating an audio output that indicates an operation performed with respect to the virtual object and a resulting state of the virtual object after performance of the operation.
191. The method of claim 190, wherein, in the audio output of the first audio alert, the resulting state of the virtual object after performance of the operation is described relative to a reference frame corresponding to the physical environment captured in the field of view of the one or more cameras.
192. The method of any of claims 186 to 191, including:
detecting further movement of the device that further adjusts the field of view of the one or more cameras after the first audio alert is generated; and
in response to detecting the further movement of the device that further adjusts the field of view of the one or more cameras:
adjusting the display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected in the field of view of the one or more cameras as the field of view of the one or more cameras is further adjusted; and,
in accordance with a determination that the further movement of the device causes more than a second threshold amount of the virtual object to move into the displayed portion of the field of view of the one or more cameras, generating a third audio alert via the one or more audio output generators.
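Claim 192 pairs the exit alert with a re-entry alert driven by a second threshold, which behaves like hysteresis: the object must come back into view by a distinct amount before the "back in view" (third) alert fires. A sketch under illustrative thresholds:

```python
# Sketch: stateful visibility alerter with separate exit and re-entry thresholds,
# so the alerts do not chatter when the object hovers near the screen edge.

class VisibilityAlerter:
    def __init__(self, exit_threshold=0.5, enter_threshold=0.5):
        self.exit_threshold = exit_threshold      # off-screen fraction triggering the first alert
        self.enter_threshold = enter_threshold    # visible fraction triggering the third alert
        self.off_screen = False

    def update(self, visible):
        """visible: fraction of the object currently in the displayed field of view."""
        if not self.off_screen and (1.0 - visible) > self.exit_threshold:
            self.off_screen = True
            return "first_audio_alert"
        if self.off_screen and visible > self.enter_threshold:
            self.off_screen = False
            return "third_audio_alert"
        return None

a = VisibilityAlerter()
assert a.update(0.2) == "first_audio_alert"   # mostly off screen
assert a.update(0.3) is None                  # still below the re-entry threshold
assert a.update(0.8) == "third_audio_alert"   # came back into view
```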
193. The method of any of claims 186 to 192, including:
while displaying the representation of the virtual object in the first user interface region and while a first object manipulation type of a plurality of object manipulation types applicable to the virtual object is currently selected for the virtual object, detecting a request to switch to another object manipulation type that is applicable to the virtual object; and
in response to detecting the request to switch to another object manipulation type that is applicable to the virtual object, generating an audio output that names a second object manipulation type of the plurality of object manipulation types applicable to the virtual object, wherein the second object manipulation type is distinct from the first object manipulation type.
194. The method of claim 193, including:
after generating the audio output that names the second object manipulation type of the plurality of object manipulation types applicable to the virtual object, detecting a request to perform an object manipulation behavior that corresponds to the currently selected object manipulation type; and
in response to detecting the request to perform the object manipulation behavior that corresponds to the currently selected object manipulation type, performing an object manipulation behavior that corresponds to the second object manipulation type.
195. The method of any of claims 193 to 194, including:
in response to detecting the request to switch to another object manipulation type that is applicable to the virtual object:
in accordance with a determination that the second object manipulation type is a continuously adjustable manipulation type, generating, in conjunction with the audio output that names the second object manipulation type, an audio alert indicating that the second object manipulation type is a continuously adjustable manipulation type;
detecting a request to perform the object manipulation behavior that corresponds to the second object manipulation type, including detecting a swipe input at a location on the touch-sensitive surface that corresponds to a portion of the first user interface region showing the representation of the field of view of the one or more cameras; and
in response to detecting the request to perform the object manipulation behavior that corresponds to the second object manipulation type, performing the object manipulation behavior that corresponds to the second object manipulation type by an amount that corresponds to a magnitude of the swipe input.
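Claims 193 to 195 describe a cycle-and-announce interaction: switching manipulation types speaks the next type's name (and whether it is continuously adjustable), and a subsequent swipe applies that manipulation scaled by the swipe's magnitude. A sketch with an illustrative type list and scale factor:

```python
# Sketch: cycling through manipulation types with spoken announcements, plus a
# swipe-magnitude-to-adjustment mapping for continuously adjustable types.

MANIPULATION_TYPES = [
    ("move", True), ("rotate", True), ("scale", True), ("reset", False),
]  # (name, continuously_adjustable); list contents are illustrative

class ManipulationSelector:
    def __init__(self):
        self.index = 0

    def switch_to_next(self):
        """Returns the announcement text for the newly selected type."""
        self.index = (self.index + 1) % len(MANIPULATION_TYPES)
        name, adjustable = MANIPULATION_TYPES[self.index]
        return name + (", adjustable" if adjustable else "")

    def apply_swipe(self, magnitude, per_point=0.5):
        """Maps swipe magnitude to an adjustment amount for the current type."""
        name, adjustable = MANIPULATION_TYPES[self.index]
        return (name, magnitude * per_point if adjustable else None)

s = ManipulationSelector()
assert s.switch_to_next() == "rotate, adjustable"   # announcement names the new type
assert s.apply_swipe(40.0) == ("rotate", 20.0)      # amount follows swipe magnitude
```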
196. The method of any of claims 186 to 195, including:
before displaying the representation of the virtual object in the first user interface region, displaying the representation of the virtual object in a second user interface region, wherein the second user interface region does not include the representation of the field of view of the one or more cameras;
while displaying the representation of the virtual object in the second user interface region and while a first operation of a plurality of operations applicable to the virtual object is currently selected for the virtual object, detecting a request to switch to another operation that is applicable to the virtual object; and
in response to detecting the request to switch to another operation that is applicable to the virtual object in the second user interface region, generating an audio output that names a second operation of the plurality of operations applicable to the virtual object, wherein the second operation is distinct from the first operation.
197. The method of any of claims 186 to 196, including:
before displaying the representation of the virtual object in the first user interface region: while displaying the representation of the virtual object in the second user interface region that does not include the representation of the field of view of the one or more cameras, detecting a request to display the representation of the virtual object in the first user interface region that includes the representation of the field of view of the one or more cameras; and
in response to detecting the request to display the representation of the virtual object in the first user interface region that includes the representation of the field of view of the one or more cameras:
displaying the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the representation of the virtual object and the plane detected in the physical environment captured in the field of view of the one or more cameras; and
generating a fourth audio alert indicating that the virtual object has been placed in the augmented reality view relative to the physical environment captured in the field of view of the one or more cameras.
198. The method of claim 197, wherein the fourth audio alert indicates information about an appearance of the virtual object relative to the displayed portion of the field of view of the one or more cameras.
199. The method of claim 198, including:
generating a tactile output in conjunction with the placement of the virtual object, in the augmented reality view, relative to the physical environment captured in the field of view of the one or more cameras.
200. The method of any of claims 186 to 198, including:
displaying a first control at a first location in the first user interface region while displaying the representation of the field of view of the one or more cameras;
in accordance with a determination that control-fading criteria are met for the first control, ceasing to display the first control in the first user interface region while maintaining display of the representation of the field of view of the one or more cameras in the first user interface region;
while displaying the first user interface region without displaying the first control in the first user interface region, detecting a touch input at a location on the touch-sensitive surface that corresponds to the first location in the first user interface region; and
in response to detecting the touch input, generating a fifth audio alert that includes an audio output specifying an operation that corresponds to the first control.
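Claim 200's behavior, where a faded control still announces its operation when its former location is touched, can be sketched with a control registry and a visibility flag. Control names and the touch routing are illustrative assumptions:

```python
# Sketch: after a control fades out, a touch at its former location produces an
# audio description of the operation the control performs.

class Control:
    def __init__(self, rect, operation):
        self.rect = rect            # (x, y, w, h) in user-interface coordinates
        self.operation = operation  # description spoken in the fifth audio alert
        self.visible = True

def audio_for_touch(controls, point):
    x, y = point
    for c in controls:
        cx, cy, cw, ch = c.rect
        if cx <= x <= cx + cw and cy <= y <= cy + ch and not c.visible:
            # Fifth audio alert: speak the operation of the faded control.
            return "performs: " + c.operation
    return None

share = Control((0, 0, 40, 40), "share object")
share.visible = False               # the control-fading criteria were met
assert audio_for_touch([share], (10, 10)) == "performs: share object"
assert audio_for_touch([share], (200, 200)) is None
```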
201. A computer system, comprising:
a display generation component;
one or more input devices;
one or more audio output generators;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
displaying, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of the one or more cameras, wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected in a physical environment captured in the field of view of the one or more cameras;
detecting movement of the device that adjusts the field of view of the one or more cameras; and
in response to detecting the movement of the device that adjusts the field of view of the one or more cameras:
adjusting the display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected in the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted; and,
in accordance with a determination that the movement of the device causes more than a threshold amount of the virtual object to move outside of a displayed portion of the field of view of the one or more cameras, generating a first audio alert via the one or more audio output generators.
202. A computer-readable storage medium storing one or more programs, the one or more programs including instructions that, when executed by a computer system with a display generation component, one or more input devices, one or more audio output generators, and one or more cameras, cause the computer system to:
display, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of the one or more cameras, wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected in a physical environment captured in the field of view of the one or more cameras;
detect movement of the device that adjusts the field of view of the one or more cameras; and
in response to detecting the movement of the device that adjusts the field of view of the one or more cameras:
adjust the display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected in the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted; and,
in accordance with a determination that the movement of the device causes more than a threshold amount of the virtual object to move outside of a displayed portion of the field of view of the one or more cameras, generate a first audio alert via the one or more audio output generators.
203. A computer system, comprising:
a display generation component;
one or more input devices;
one or more audio output generators;
one or more cameras;
means for displaying, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of the one or more cameras, wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected in a physical environment captured in the field of view of the one or more cameras;
means for detecting movement of the device that adjusts the field of view of the one or more cameras; and
means, enabled in response to detecting the movement of the device that adjusts the field of view of the one or more cameras, for:
adjusting the display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected in the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted; and,
in accordance with a determination that the movement of the device causes more than a threshold amount of the virtual object to move outside of a displayed portion of the field of view of the one or more cameras, generating a first audio alert via the one or more audio output generators.
204. An information processing apparatus for use in a computer system with a display generation component, one or more input devices, one or more audio output generators, and one or more cameras, comprising:
means for displaying, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of the one or more cameras, wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected in a physical environment captured in the field of view of the one or more cameras;
means for detecting movement of the device that adjusts the field of view of the one or more cameras; and
means, enabled in response to detecting the movement of the device that adjusts the field of view of the one or more cameras, for:
adjusting the display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected in the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted; and,
in accordance with a determination that the movement of the device causes more than a threshold amount of the virtual object to move outside of a displayed portion of the field of view of the one or more cameras, generating a first audio alert via the one or more audio output generators.
205. A computer system, comprising:
a display generation component;
one or more input devices;
one or more cameras;
one or more audio output generators;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 186 to 200.
206. A computer-readable storage medium storing one or more programs, the one or more programs including instructions that, when executed by a computer system with a display generation component, one or more input devices, one or more audio output generators, and one or more cameras, cause the computer system to perform any of the methods of claims 186 to 200.
207. A graphical user interface on a computer system with a display generation component, one or more input devices, one or more audio output generators, one or more cameras, memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 186 to 200.
208. A computer system, comprising:
a display generation component;
one or more input devices;
one or more audio output generators;
one or more cameras; and
means for performing any of the methods of claims 186 to 200.
209. An information processing apparatus for use in a computer system with a display generation component, one or more input devices, one or more audio output generators, and one or more cameras, comprising:
means for performing any of the methods of claims 186 to 200.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911078900.7A CN110851053A (en) | 2018-01-24 | 2018-09-29 | Apparatus, method and graphical user interface for system level behavior of 3D models |
Applications Claiming Priority (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862621529P | 2018-01-24 | 2018-01-24 | |
US62/621,529 | 2018-01-24 | ||
US201862679951P | 2018-06-03 | 2018-06-03 | |
US62/679,951 | 2018-06-03 | ||
DKPA201870346 | 2018-06-11 | ||
DKPA201870347A DK201870347A1 (en) | 2018-01-24 | 2018-06-11 | Devices, Methods, and Graphical User Interfaces for System-Wide Behavior for 3D Models |
DKPA201870348A DK180842B1 (en) | 2018-01-24 | 2018-06-11 | Devices, procedures, and graphical user interfaces for System-Wide behavior for 3D models |
DKPA201870346A DK201870346A1 (en) | 2018-01-24 | 2018-06-11 | Devices, Methods, and Graphical User Interfaces for System-Wide Behavior for 3D Models |
DKPA201870348 | 2018-06-11 | ||
DKPA201870347 | 2018-06-11 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911078900.7A Division CN110851053A (en) | 2018-01-24 | 2018-09-29 | Apparatus, method and graphical user interface for system level behavior of 3D models |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110069190A true CN110069190A (en) | 2019-07-30 |
Family
ID=67365888
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811165504.3A Pending CN110069190A (en) Devices, methods, and graphical user interfaces for system-wide behavior for 3D models
CN201911078900.7A Pending CN110851053A (en) | 2018-01-24 | 2018-09-29 | Apparatus, method and graphical user interface for system level behavior of 3D models |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911078900.7A Pending CN110851053A (en) | 2018-01-24 | 2018-09-29 | Apparatus, method and graphical user interface for system level behavior of 3D models |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP6745852B2 (en) |
CN (2) | CN110069190A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110865704A (en) * | 2019-10-21 | 2020-03-06 | 浙江大学 | Gesture interaction device and method for 360-degree suspended light field three-dimensional display system |
CN111340962A (en) * | 2020-02-24 | 2020-06-26 | 维沃移动通信有限公司 | Control method, electronic device, and storage medium |
US10939047B2 (en) | 2019-07-22 | 2021-03-02 | Himax Technologies Limited | Method and apparatus for auto-exposure control in a depth sensing system |
TWI722542B (en) * | 2019-08-22 | 2021-03-21 | 奇景光電股份有限公司 | Method and apparatus for performing auto-exposure control in depth sensing system including projector |
US20230260202A1 (en) * | 2022-02-11 | 2023-08-17 | Shopify Inc. | Augmented reality enabled dynamic product presentation |
CN117354622A (en) * | 2020-06-03 | 2024-01-05 | 苹果公司 | Camera and visitor user interface |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2021161719A1 (en) * | 2020-02-12 | 2021-08-19 | ||
CN111672121A (en) * | 2020-06-11 | 2020-09-18 | 腾讯科技(深圳)有限公司 | Virtual object display method and device, computer equipment and storage medium |
JP6801138B1 (en) | 2020-07-16 | 2020-12-16 | 株式会社バーチャルキャスト | Terminal device, virtual object operation method, and virtual object operation program |
JP6919050B1 (en) * | 2020-12-16 | 2021-08-11 | 株式会社あかつき | Game system, program and information processing method |
CN112419511B (en) * | 2020-12-26 | 2024-02-13 | 董丽萍 | Three-dimensional model file processing method and device, storage medium and server |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104486430A (en) * | 2014-12-18 | 2015-04-01 | 北京奇虎科技有限公司 | Method, device and client for realizing data sharing in mobile browser client |
CN105824412A (en) * | 2016-03-09 | 2016-08-03 | 北京奇虎科技有限公司 | Method and device for presenting customized virtual special effects on mobile terminal |
CN107071392A (en) * | 2016-12-23 | 2017-08-18 | 网易(杭州)网络有限公司 | Image processing method and device |
US20170270715A1 (en) * | 2016-03-21 | 2017-09-21 | Megan Ann Lindsay | Displaying three-dimensional virtual objects based on field of view |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080071559A1 (en) * | 2006-09-19 | 2008-03-20 | Juha Arrasvuori | Augmented reality assisted shopping |
JP5573238B2 (en) * | 2010-03-04 | 2014-08-20 | ソニー株式会社 | Information processing apparatus, information processing method and program |
JP5799521B2 (en) * | 2011-02-15 | 2015-10-28 | ソニー株式会社 | Information processing apparatus, authoring method, and program |
JP5942456B2 (en) * | 2012-02-10 | 2016-06-29 | ソニー株式会社 | Image processing apparatus, image processing method, and program |
US10078384B2 (en) * | 2012-11-20 | 2018-09-18 | Immersion Corporation | Method and apparatus for providing haptic cues for guidance and alignment with electrostatic friction |
US9286727B2 (en) * | 2013-03-25 | 2016-03-15 | Qualcomm Incorporated | System and method for presenting true product dimensions within an augmented real-world setting |
WO2016036412A1 (en) * | 2014-09-02 | 2016-03-10 | Apple Inc. | Remote camera user interface |
TWI567691B (en) * | 2016-03-07 | 2017-01-21 | 粉迷科技股份有限公司 | Method and system for editing scene in three-dimensional space |
US20200319834A1 (en) * | 2016-05-31 | 2020-10-08 | Sony Corporation | Information processing device, information processing method, and program |
- 2018-09-28 JP JP2018183940A patent/JP6745852B2/en active Active
- 2018-09-29 CN CN201811165504.3A patent/CN110069190A/en active Pending
- 2018-09-29 CN CN201911078900.7A patent/CN110851053A/en active Pending
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10939047B2 (en) | 2019-07-22 | 2021-03-02 | Himax Technologies Limited | Method and apparatus for auto-exposure control in a depth sensing system |
TWI722542B (en) * | 2019-08-22 | 2021-03-21 | 奇景光電股份有限公司 | Method and apparatus for performing auto-exposure control in depth sensing system including projector |
CN110865704A (en) * | 2019-10-21 | 2020-03-06 | 浙江大学 | Gesture interaction device and method for 360-degree suspended light field three-dimensional display system |
CN111340962A (en) * | 2020-02-24 | 2020-06-26 | 维沃移动通信有限公司 | Control method, electronic device, and storage medium |
CN111340962B (en) * | 2020-02-24 | 2023-08-15 | 维沃移动通信有限公司 | Control method, electronic device and storage medium |
CN117354622A (en) * | 2020-06-03 | 2024-01-05 | 苹果公司 | Camera and visitor user interface |
US20230260202A1 (en) * | 2022-02-11 | 2023-08-17 | Shopify Inc. | Augmented reality enabled dynamic product presentation |
US11941750B2 (en) * | 2022-02-11 | 2024-03-26 | Shopify Inc. | Augmented reality enabled dynamic product presentation |
Also Published As
Publication number | Publication date |
---|---|
CN110851053A (en) | 2020-02-28 |
JP6745852B2 (en) | 2020-08-26 |
JP2019128941A (en) | 2019-08-01 |
Similar Documents
Publication | Title |
---|---|
CN110069190A (en) | Devices, methods, and graphical user interfaces for system-level behaviors for 3D models |
CN109061985B (en) | User interfaces for camera effects |
CN108139863B (en) | Devices, methods, and graphical user interfaces for providing feedback during interaction with an intensity-sensitive button |
CN108353126B (en) | Method, electronic device, and computer-readable storage medium for processing camera content |
CN108351750B (en) | Devices, methods, and graphical user interfaces for processing intensity information associated with touch inputs |
JP7033152B2 (en) | User interface camera effects |
CN106489112B (en) | Devices and methods for manipulating user interface objects using visual and/or tactile feedback |
CN104471521B (en) | Devices, methods, and graphical user interfaces for providing feedback for changing activation states of a user interface object |
CN104487929B (en) | Devices, methods, and graphical user interfaces for displaying additional information in response to a user contact |
CN104903834B (en) | Devices, methods, and graphical user interfaces for transitioning between touch input to display output relationships |
CN108762605B (en) | Device configuration user interface |
CN104903835B (en) | Devices, methods, and graphical user interfaces for forgoing generation of tactile output for a multi-contact gesture |
CN107690619B (en) | Devices and methods for processing touch inputs over multiple regions of a touch-sensitive surface |
CN104487927B (en) | Devices, methods, and graphical user interfaces for selecting user interface objects |
CN109643217A (en) | Devices, methods, and user interfaces for interacting with user interface objects via proximity-based and contact-based inputs |
CN105264479B (en) | Devices, methods, and graphical user interfaces for navigating user interface hierarchies |
CN109844711A (en) | Devices, methods, and graphical user interfaces for a unified annotation layer for annotating content displayed on a device |
CN109219796A (en) | Digital touch on live video |
CN109240500A (en) | Devices, methods, and graphical user interfaces for providing haptic feedback |
CN107430488A (en) | Activity-based thresholds and feedback |
CN106462321A (en) | Application menu for a video system |
CN108052264A (en) | Devices, methods, and graphical user interfaces for moving and placing user interface objects |
CN107408012A (en) | Controlling system zoom magnification using a rotatable input mechanism |
CN108287651A (en) | Method and apparatus for providing haptic feedback for operations performed in a user interface |
CN109643214A (en) | Devices, methods, and graphical user interfaces for force-sensitive gestures on the back of a device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||