CN106062514A - Input/output functions related to a portable device in an automotive environment - Google Patents
- Publication number
- CN106062514A (Application CN201580011364.2A)
- Authority
- CN
- China
- Prior art keywords
- head
- end unit
- instruction
- portable
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3629—Guidance using speech or audio output, e.g. text-to-speech
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3641—Personalized guidance, e.g. limited guidance on previously travelled routes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3664—Details of the user input interface, e.g. buttons, knobs or sliders, including those provided on a touch screen; remote controllers; input using gestures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3688—Systems comprising multiple parts or multiple output devices (not client-server), e.g. detachable faceplates, key fobs or multiple output screens
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/80—Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computer Networks & Wireless Communication (AREA)
- Multimedia (AREA)
- Navigation (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- User Interface Of Digital Computer (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
Abstract
To facilitate various functionality related to interactions between a portable device and a vehicle head unit, systems and methods (i) efficiently provide audio navigation instructions to a vehicle head unit; (ii) enable data exchange between a portable device which is not in direct communication with a vehicle head unit and the vehicle head unit; and (iii) provide visual output in response to user gestures in an automotive environment.
Description
Technical field
This application relates generally to various functions associated with interactions between a portable device and a vehicle head unit.
Background
Today, many automakers provide various components in the head unit of a vehicle, such as a display, speakers, a microphone, hardware input controls, etc. Some head units also support short-range communication with external devices such as smartphones. However, a head unit typically supports only a very limited set of communication schemes, for example, a direct connection between the head unit and a smartphone via a short-range link such as Bluetooth®.
Additionally, a modern automotive user interface (UI) in the head unit of a vehicle can include hardware buttons, speakers, a microphone, and a screen that displays warnings, vehicle status updates, navigation route directions, digital maps, and the like. As more and more functions become accessible via the head unit of a car, developers of new functions face the challenge of providing corresponding controls in a safe and intuitive manner. In general, hardware buttons in a head unit are small, and operating these buttons can distract the driver. On the other hand, when the head unit includes a touchscreen, large software buttons take up precious screen real estate (and, for the same reason as small hardware buttons, small software buttons are difficult to operate).
Additionally, many navigation systems operating in portable devices or in vehicle head units provide navigation route directions, and some of these systems generate audio announcements based on those directions. In general, existing navigation systems generate route directions and audio announcements based only on the navigation route. Thus, the directions include the same level of detail when the driver is close to home as when the driver is in an unfamiliar area. Some drivers find overly detailed directions for an area they know well so annoying that they turn off navigation for at least part of the route, or forgo the voice assistance of automatic navigation. As a result, they may miss suggestions about the best route (which depend on current traffic), estimated arrival times, and other useful information. Moreover, a driver who is listening to music or news in the car is likely to find long announcements bothersome, even when long announcements would otherwise be appropriate because the area is unfamiliar to them.
Summary of the invention
In general, a "primary" portable device, such as a smartphone, receives data from another, "secondary" portable device via a short-range communication link and supplies the received data to the head unit of a vehicle. The data can include route directions, audio packets, map images, and so on. The primary portable device can also forward data from the head unit to the secondary device. In this manner, the primary portable device provides a communication link between the head unit and a secondary device that may be unable to establish a direct connection to the head unit for a variety of reasons (e.g., security restrictions, protocol incompatibility, or an exceeded limit on the number of concurrent connections).
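The relay role of the primary device can be sketched as follows. This is a hypothetical illustration, not code from the disclosure; the `FakeLink`-style `send()`/`receive()` interface and all names are invented, standing in for the short-range transports (e.g., USB to the head unit, Bluetooth to the secondary device).

```python
# Hypothetical sketch of the primary-device relay described above: the
# primary device accepts data from a secondary device over one
# short-range link and forwards it to the head unit over another,
# and forwards head-unit events back to the secondary device.

class PrimaryDeviceRelay:
    def __init__(self, head_unit_link, secondary_link):
        # Each "link" is assumed to expose send()/receive() over its
        # short-range transport; the transports themselves are not modeled.
        self.head_unit_link = head_unit_link
        self.secondary_link = secondary_link

    def forward_to_head_unit(self):
        # e.g., audio packets or map images from the secondary device
        data = self.secondary_link.receive()
        self.head_unit_link.send(data)

    def forward_to_secondary(self):
        # e.g., a "volume up" command entered via the head unit
        event = self.head_unit_link.receive()
        self.secondary_link.send(event)
```

The same object serves both directions, which mirrors the bidirectional link the disclosure describes.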
In some embodiments, a gesture-based UI of a motor vehicle, implemented in a portable device or a vehicle head unit, responds to a "flick" or "swipe" gesture by advancing an ordered or otherwise structured set of items through the viewport by a certain number of items, independently of how quickly or slowly the driver performs the gesture. For example, when only a subset of N items fits on the screen at any one time, the UI initially displays items I1, I2, ..., IN and, in response to a flick gesture of any speed, advances the list to display items IN+1, IN+2, ..., I2N, allowing the user to step through the list. Thus, the driver need not worry about flicking too fast, so that the list advances too far, or flicking too slowly, so that the list does not advance far enough and mostly the same items remain on the screen. Depending on the embodiment, the items can be search results, automatic suggestions for a certain category (e.g., gas stations within a 15-mile radius), map tiles composing a digital map image, and so on.
In one embodiment, one of the portable device and/or the vehicle head unit includes a navigation system. To provide navigation route directions to the driver effectively, the navigation system implemented in the portable device and/or the vehicle head unit dynamically varies the length of individual audio instructions in view of one or more factors, such as the user's familiarity with the route, the current audio level in the vehicle, and the current state of the vehicle (e.g., moving, stationary, turn signal activated). In some embodiments, the navigation system also varies the interval between successive instructions based on these factors. For example, when the driver is familiar with a portion of the route, the navigation system can omit an audio instruction or provide a shorter audio instruction. On the other hand, when the driver is unfamiliar with that portion of the route, the system can provide a longer audio instruction. Further, if the portable device or the head unit is currently playing music, the navigation system can reduce the duration of audio instructions by controlling the level of detail, so as to minimize the inconvenience to the driver and passengers.
An example embodiment of the techniques of this disclosure is a method for effectively providing audio navigation instructions to a head unit of a vehicle. The method includes determining, by one or more computing devices, a current operational state of the head unit. The method further includes determining, by the one or more computing devices, a maneuver on a navigation route that the driver of the vehicle is following. Further, the method includes generating, by the one or more computing devices, an audio instruction describing the maneuver, and causing the audio instruction to be provided to the head unit via a communication link. Generating the audio instruction includes selecting a level of detail for the audio instruction based at least in part on (i) the driver's familiarity with the segment of the navigation route where the maneuver occurs and (ii) the current operational state of the head unit.
Another embodiment of these techniques is a portable computing device that includes one or more processors, a communication interface to a vehicle head unit, and a non-transitory computer-readable memory storing instructions. When executed on the one or more processors, the instructions cause the portable computing device to obtain navigation route directions for navigating the driver of a vehicle along a navigation route to a certain destination, where each of the multiple navigation route directions describes a corresponding maneuver. The instructions further cause the portable device to determine, via the interface, an operational state of at least one of the head unit or the vehicle, determine, for a selected navigation route direction, the user's level of familiarity with the segment of the navigation route where the corresponding maneuver occurs, and generate an audio instruction for the selected navigation route direction. To generate the audio instruction, the instructions cause the portable device to determine a level of detail for the audio instruction based at least on the determined operational state and the determined level of familiarity with the segment.
Yet another embodiment of these techniques is a computing system that includes a navigation service module, a register that stores a current operational state of a vehicle head unit, a familiarity scoring engine, and a speech generation system. The navigation service module is configured to generate navigation route directions for navigating the driver of a vehicle along a navigation route to a certain destination, where each of the navigation route directions describes a corresponding maneuver. The familiarity scoring engine is configured to generate, for a selected one of the navigation route directions, a familiarity metric indicating the driver's estimated familiarity with the segment of the route where the corresponding maneuver occurs. The speech generation system is configured to (i) receive the familiarity metric and the current operational state of the head unit from the register to determine a level of detail for an audio instruction, and (ii) generate an audio instruction for the maneuver with the determined level of detail.
In another example embodiment, a method for providing a set of items via an automotive user interface (UI) configured to receive gesture-based user input includes receiving an ordered set of items. The method also includes causing a first subset of the items to be displayed along a certain axis via the automotive UI, detecting that a gesture having a component of motion along the axis is applied to the automotive UI, and, in response to the gesture, causing a second subset of the items to be displayed via the automotive UI, such that each of the first subset and the second subset includes multiple items, and where the second subset consists of the N items immediately following the items in the first subset. According to the method, the positioning of the second subset on the automotive UI is independent of the speed of the motion component of the gesture.
Yet another embodiment of these techniques is a portable computing device including: one or more processors; a short-range communication interface by which the portable computing device is coupled to the head unit of a vehicle, to receive input from an automotive user interface (UI) implemented in the head unit and to provide output to the automotive UI; and a non-transitory computer-readable memory storing instructions. The instructions are configured to execute on the one or more processors to: (i) receive multiple ordered items I1, I2, ..., IM, (ii) provide, via the automotive UI, an initial subset of N consecutive items I1, I2, ..., IN to the head unit for display, (iii) receive an indication of a flick gesture detected via the automotive UI, and (iv) in response to the received indication, and independently of the speed of the flick gesture, provide the head unit with a new subset of N consecutive items I1+O, I2+O, ..., IN+O, offset from the initial subset by a certain fixed number O.
In addition, another embodiment is a system for providing output in response to user gestures in an automotive environment. The system includes: one or more processors; a user interface (UI) communicatively coupled to the one or more processors and configured to display content to the driver of a vehicle and to receive gesture-based input from the driver; and a non-transitory computer-readable memory having instructions stored thereon. When executed on the one or more processors, the instructions cause the one or more processors to: (i) display a first subset of multiple ordered items along an axis via the user interface, (ii) detect, via the user interface, a gesture having a component of motion directed along the axis, (iii) in response to the gesture, and independently of the speed of the motion component, select a second subset of the multiple ordered items for display via the user interface, where each of the first subset and the second subset includes multiple items, and where the second subset includes items immediately following the items in the first subset, and (iv) display the second subset via the user interface.
Additionally, another embodiment of these techniques is a method, executed by one or more processors, for implementing data exchange between a portable device and an external output device. The method includes: establishing a first short-range communication link between a first portable user device and the head unit of a vehicle; establishing a second short-range communication link between the first portable user device and a second portable user device, such that the second short-range communication link is a wireless link; and causing the first portable user device to (i) receive data from the second portable device via the second short-range communication link, and (ii) send the data to the head unit via the first short-range communication link.
Another example embodiment of these techniques is a portable computing device including: one or more processors; an interface configured to communicatively couple the portable computing device, via a first communication link and a second communication link, to the head unit of a vehicle and to a nearby portable computing device; and a non-transitory computer-readable memory storing instructions. When executed on the one or more processors, the instructions cause the portable computing device to receive data from the nearby portable computing device via the second communication link and forward the received data to the head unit via the first communication link.
Yet another example embodiment of these techniques is a portable computing device including: one or more processors; a device interface configured to communicatively couple the portable computing device to a nearby computing device; and a non-transitory computer-readable memory storing instructions. When executed on the one or more processors, the instructions cause the portable computing device to detect a nearby portable computing device that has access to a resource in the head unit of a vehicle, where the resource includes at least one of an audio output device or a display device, establish a communication link to the nearby portable computing device, and transmit data to the head unit of the vehicle via the communication link.
Brief description of the drawings
Fig. 1A illustrates a first example environment in which the techniques of this disclosure can be used to generate audio navigation instructions of variable length;
Fig. 1B illustrates a second example environment in which the techniques of this disclosure can be used to transmit data from a portable device to the head unit of a vehicle via another portable device;
Fig. 1C illustrates a third example environment in which the techniques of this disclosure can be used to process automotive UI gestures;
Fig. 2A illustrates a first block diagram of an example portable device and an example head unit that can operate in the system of Fig. 1A;
Fig. 2B illustrates a second block diagram of an example pair of portable devices and an example head unit that can operate in the system of Fig. 1B;
Fig. 2C illustrates a third block diagram of an example portable device and an example head unit that can operate in the system of Fig. 1C;
Fig. 3A illustrates a block diagram of a first example communication system in which the portable device and head unit of Fig. 2A can operate;
Fig. 3B illustrates a block diagram of a second example communication system in which the pair of portable devices and the head unit of Fig. 2B can operate;
Fig. 4 illustrates a message sequence diagram of an example exchange of information between the components shown in Fig. 2B, for establishing a connection between a portable device and a head unit via another portable device;
Fig. 5 illustrates a combined block and logic diagram showing the generation of audio navigation instructions of variable length;
Fig. 6A schematically illustrates discrete paging of a list of items in response to a flick gesture, which can be implemented in the system of Fig. 1C;
Fig. 6B schematically illustrates discrete paging of a tile-based digital map in response to a flick gesture, which can be implemented in the system of Fig. 1C;
Fig. 7 is a flow diagram of an example method for generating audio instructions of variable length, which can be implemented in the portable device and/or head unit of Fig. 2A;
Fig. 8 is a flow diagram of an example method for establishing a connection between portable devices located in the same vehicle, which can be implemented in the example authorization server of Fig. 3B;
Fig. 9 is a flow diagram of an example method for establishing connections with a head unit and a portable device located in the same vehicle, which can be implemented in a portable device of Figs. 1B, 2B, and 3B;
Fig. 10 is a flow diagram of an example method for establishing a connection with a head unit via another portable device, which can be implemented in one of the portable devices of Figs. 1B, 2B, and 3B;
Fig. 11 is a flow diagram of another example method for establishing a connection between portable devices located in the same vehicle, which can be implemented in the example authorization server of Fig. 3B; and
Fig. 12 is a flow diagram of an example method for advancing through an ordered set of items in an automotive UI in response to a flick gesture, which can be implemented in the system of Fig. 1C.
Detailed description of the invention
A portable device (e.g., a smartphone) connected directly to the head unit of a vehicle can be configured as an access point, via which other portable devices communicate with the head unit. For convenience, the portable device connected directly to the head unit is referred to below as the primary device, and a portable device connected to the head unit via the primary device is referred to as a secondary device. In a sense, the primary device operates as a master device, and the secondary device operates as a slave device.
In an example embodiment, the primary device announces the available resources of the head unit (e.g., speakers, screen, physical control inputs, etc.). If a candidate secondary device is within a certain range of the primary device, a user interface element such as a speaker icon appears on the screen of the candidate secondary device. The user of the candidate secondary device can then request a communication link with the primary device via the user interface of the candidate secondary device. The primary device can accept or reject the request to establish a connection between the two devices.
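The announcement and accept/reject handshake might be modeled as below. All message shapes, names, and the allow-list policy are invented for illustration; the disclosure does not specify a protocol.

```python
# Hypothetical sketch of the discovery/handshake: the primary device
# announces head-unit resources, and accepts or rejects connection
# requests from candidate secondary devices according to a policy
# (here, a simple allow-list).

class PrimaryDevice:
    def __init__(self, resources, allowed_ids):
        self.resources = resources          # e.g., ["speaker", "screen"]
        self.allowed_ids = set(allowed_ids)
        self.connected = set()

    def announce(self):
        # Broadcast the available head-unit resources to devices in range.
        return {"type": "announce", "resources": self.resources}

    def handle_request(self, secondary_id):
        # Accept or reject a link request from a candidate secondary device.
        if secondary_id in self.allowed_ids:
            self.connected.add(secondary_id)
            return "accept"
        return "reject"
```

In practice the accept/reject decision could also be made interactively by the driver, as in the example scenario below involving a passenger's smartphone.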
Once a connection is established, the secondary device can send data (e.g., audio data packets, images representing a digital map, etc.) to the primary device for forwarding to the head unit. Further, the primary device can forward commands or events input via the head unit (e.g., "volume up") to the secondary device. In this manner, the primary device can establish a bidirectional communication link between the secondary device and the head unit.
Further, in some cases the primary device can allow multiple secondary devices to communicate with the head unit, even when the head unit itself supports only one communication link with a portable device. Thus, one secondary device can provide an audio stream to the head unit via the primary device while another secondary device provides navigation instructions and map images to the head unit. The primary device can be configured to enforce a desired access policy for communications with the head unit.
In an example scenario, the primary device is a smartphone connected to the head unit via a universal serial bus (USB) cable. A passenger wishes to send turn-by-turn navigation directions from her smartphone to the head unit, in order to use the display and speakers built into the head unit. The driver configures his smartphone to allow it to be discovered by the passenger's smartphone. The passenger then operates her smartphone to locate the driver's smartphone, request a short-range smartphone-to-smartphone communication link, and, with the driver's permission, establish that link, so that the driver's smartphone operates as the primary device and the passenger's smartphone operates as the secondary device. The passenger then starts a navigation application on her smartphone, and the driver's smartphone forwards the data packets from the passenger's smartphone to the head unit.
Furthermore, at least some of the techniques of this disclosure for processing gesture input in an automotive UI can be implemented in an environment that includes a portable device and a vehicle with a head unit. In an example embodiment, the portable device provides interactive map and navigation data to a head unit equipped with a touchscreen. The head unit detects gesture-based input that the driver applies to the touchscreen and supplies an indication of the detected input to the portable device, which updates the display of the map and navigation data on the touchscreen according to the detected input. More specifically, in response to detecting a flick gesture, the portable device advances the ordered set of items by a certain number of items, regardless of the speed of the flick gesture. In this manner, the portable device eliminates a high-cognitive-load task and allows the driver of the vehicle to page through a list or array of items more safely, with minimal distraction, and without missing information because of an unintentionally high gesture speed.
For clarity, at least some of the examples below focus on an embodiment in which the portable device implements the gesture processing functions but displays the structured set of items, and receives gesture input, via a touchscreen embedded in the head unit of a car. In another embodiment, however, the head unit receives and processes gesture-based input without relying on the portable device 10 or another external device. In yet another embodiment, the user applies the flick gesture directly to the portable device, and the portable device, without exporting the display to the head unit, adjusts the display of the structured set of items in response to the flick gesture. More generally, the techniques of this disclosure can be implemented in one or several devices temporarily or permanently disposed in a vehicle.
Further, although the gesture-based input discussed in the examples below is entered via a touchscreen, the techniques of this disclosure are, in general, not limited to two-dimensional surface gestures. Gesture input in other embodiments can include three-dimensional (3D) gestures, such as a trajectory of the portable device in 3D space that fits a certain pattern (e.g., a forward or backward flicking motion while the portable device is in the driver's hand). In these embodiments, regardless of how quickly or how gently the driver moves the portable device, the display of the structured set of items provided via the head unit and/or the portable device can advance by a certain number of items in response to the 3D gesture. Further, in some embodiments the 3D gesture can be detected via a video camera and/or other sensors and processed according to computer vision techniques.
In yet another embodiment, the techniques for dynamically varying the length of audio navigation instructions (and the length of the interval between two consecutive audio instructions) during a navigation session can be implemented in a portable device, the head unit of a car, one or several network servers, or a system including several of these devices. For clarity, however, at least some of the examples below focus primarily on an embodiment in which a navigation application executes on a portable user device, uses navigation data and familiarity signals received from one or several web servers to generate audio navigation instructions (for brevity, "audio instructions"), and provides the instructions to the head unit of a car.
Example hardware and software components
Referring to Fig. 1A, a first example environment 1, in which the techniques outlined above can be implemented to dynamically vary the length of audio instructions, includes a portable device 10 and a vehicle 12 with a head unit 14. The portable device 10 can be, for example, a smartphone or a tablet computer. The portable device 10 communicates with the head unit 14 of the vehicle 12 via a communication link 16, which can be wired (e.g., universal serial bus (USB)) or wireless (e.g., Bluetooth, Wi-Fi Direct). The portable device 10 can also communicate with various content providers, servers, etc. via a wireless communication network, such as a fourth-generation or third-generation cellular network (4G or 3G, respectively).
In operation, the portable device 10 obtains navigation data, generally in the form of a sequence of instructions or maneuvers for navigating the driver from point A to point B. As discussed in more detail below, the portable device 10 can receive the navigation data from a navigation service via a communication network or, depending on the embodiment, can generate the navigation data locally. Based on such factors as the driver's familiarity with the route, the current audio level in the vehicle 12, and the current state of the vehicle 12, the portable device 10 generates audio instructions at different levels of detail. For example, when it is determined with some confidence that the driver is very familiar with the route, the portable device 10 can shorten or even omit some audio instructions. As another example, if the head unit 14 reports that the driver has already activated the left turn signal, the portable device can omit the audio instruction for a left turn.
In addition to generating condensed audio instructions describing maneuvers, or omitting audio instructions, the portable device 10 in some cases can adjust the spacing between audio instructions. For example, the portable device 10 may determine that the descriptions of several maneuvers can be combined so as to direct the driver to "Highway 94," with which the driver is familiar, and the portable device 10 accordingly can combine the several descriptions to form a single audio instruction such as "head east, then make a right onto Highway 94."
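The combining step above can be illustrated with a short sketch. The list-of-strings data shape and the joining rule are assumptions made purely for illustration; the passage does not specify how maneuver descriptions are represented.

```python
# Illustrative sketch: collapse the run of maneuvers leading onto a road the
# driver knows into one combined audio instruction.

def combine_maneuvers(maneuvers: list[str], familiar_road: str) -> list[str]:
    """Merge every maneuver up to and including the familiar road into one."""
    for i, m in enumerate(maneuvers):
        if familiar_road in m:
            merged = ", then ".join(maneuvers[: i + 1])
            return [merged] + maneuvers[i + 1:]
    return maneuvers  # familiar road not on this route: leave unchanged

out = combine_maneuvers(
    ["Head east", "Turn right onto Highway 94", "Take exit 42"],
    familiar_road="Highway 94",
)
print(out)  # ['Head east, then Turn right onto Highway 94', 'Take exit 42']
```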
Implementations of these techniques may require that, for the portable device 10 to use information related to the driver's familiarity with a route, or other information specific to the driver, he or she select certain settings and/or install certain applications.
The head unit 14 can include a display 18 for presenting navigation information such as a digital map. In some implementations, the display 18 is a touchscreen and includes a software keyboard for entering text input, which can include the name or address of a destination, a point of origin, etc. Hardware input controls 20 and 22, on the head unit 14 and the steering wheel, respectively, can be used for entering alphanumeric characters or for performing other functions for requesting navigation directions. The head unit 14 also can include audio input and output components such as a microphone 24 and speakers 26. The speakers 26 can be used to play the audio instructions sent from the portable device 10.
Referring to FIG. 1B, a second example environment 13, in which the techniques outlined above can be implemented for transmitting data from another portable device to the head unit of a vehicle, includes a primary device 10, at least one secondary device 11, and a vehicle 12 with a head unit 14. Each of the primary device 10 and the secondary device 11 can be a smartphone, a tablet computer, a wearable computer, etc. Similar to FIG. 1A, the primary device 10 communicates with the head unit 14 of the vehicle 12 via a communication link 16, which can be wired (e.g., USB) or wireless (e.g., Bluetooth, Wi-Fi Direct). Likewise, the primary device 10 and the secondary device 11 can communicate via a short-range wireless or wired communication link. Each of the primary device 10 and the secondary device 11 also can communicate with various content providers, servers, etc. via a wireless communication network such as a fourth- or third-generation cellular network (not shown to avoid clutter).
In operation, the secondary device 11 transmits data to the primary device 10, which in turn provides the transmitted data to the head unit 14. In the example of FIG. 1B, the transmitted data includes a digital map image, which the head unit 14 displays via the display 18. In some implementations, the display 18 is a touchscreen and includes a software keyboard for entering text input. Another type of the display 18 can be, for example, a non-touch screen provided along with an input device such as a rotary controller, or a separate touchpad. In general, the display 18 need not be capable of displaying both text and images; a head unit in another vehicle can include, for example, a simple display capable of displaying only alphanumeric characters on one or several lines.
The head unit 14 can include hardware input controls such as buttons, knobs, etc. These controls can be disposed on the head unit 14 or elsewhere in the vehicle 12. For example, the vehicle 12 in FIG. 1B includes navigation controls 20 on the head unit 14 and steering-wheel controls 22 communicatively coupled to the head unit 14. If desired, the controls 20 and 22 can be mapped to various navigation control functions on the primary device 10. In some implementations, the controls 20 and 22 also can be used for entering alphanumeric characters.
The vehicle 12 also can include an audio input component such as a microphone 24 and an audio output component such as speakers 26. Similar to the hardware controls 20 and 22, the microphone 24 and the speakers 26 can be disposed directly on the head unit 14 or elsewhere in the vehicle 12.
Referring to FIG. 1C, a third example environment 15, in which the techniques outlined above can be implemented for processing automotive UI gestures, includes a portable device 10 and a vehicle 12 with a head unit 14. The portable device 10 can be a smartphone, a tablet computer, a wearable computer, etc. The portable device 10 can communicate with the head unit 14 of the vehicle 12 via a communication link 16, which can be wired, e.g., Universal Serial Bus (USB), or wireless, e.g., Bluetooth or Wi-Fi Direct. The portable device 10 also can communicate with various content providers, servers, etc. via a wireless communication network such as a fourth- or third-generation cellular network (4G or 3G, respectively).
The head unit 14 can include hardware input controls such as buttons, knobs, etc. These controls can be disposed on the head unit 14 or elsewhere in the vehicle 12. For example, the vehicle 12 in FIG. 1C includes hardware controls 20 on the head unit 14 and hardware controls 22, also communicatively coupled to the head unit 14, on the steering wheel. The controls 20 and 22 can be mapped to various navigation control functions on the portable device 10. For example, a "volume up" button can be mapped to a "next navigation instruction" function of mapping and navigation software running on the portable device 10. In some implementations, the controls 20 and 22 can be used for entering alphanumeric characters.
In addition, the vehicle 12 can include audio input and output components such as a microphone 24 and speakers 26. Similar to the hardware controls 20 and 22, the microphone 24 and the speakers 26 can be disposed directly on the head unit 14 or elsewhere in the vehicle 12.
Although the touchscreen 18 in FIG. 1C is embedded in the head unit 14, in general a touch surface can be disposed in any suitable manner, e.g., on the steering wheel or the windshield of the vehicle 12, on the portable device 10, on a separate dedicated device, etc.
In an example scenario, the portable device 10 can execute a mapping and navigation software module that provides the head unit 14 with a digital map divided into several map "tiles." Each map tile can be, for example, an image in a bitmap format. The head unit 14 receives the map tiles, assembles the map tiles into a map image, and displays the map image on the touchscreen 18. For clarity, FIG. 1C schematically illustrates the division into several tiles of the digital map being displayed on the touchscreen 18. It will be understood, however, that in an example implementation the user does not see the seams between the tiles, and the head unit 14 presents the digital map as a single image.
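The tile-assembly step can be sketched with toy "bitmaps" represented as lists of pixel rows. The tile size and the (column, row) keying scheme are assumptions for illustration; the passage only says the head unit pastes bitmap tiles into one seamless image.

```python
# Sketch: paste TILE x TILE map tiles, keyed by (col, row), into one image.

TILE = 2  # assumed tile width/height in pixels

def assemble(tiles: dict[tuple[int, int], list[list[int]]], cols: int, rows: int):
    """Return a (rows*TILE) x (cols*TILE) image with each tile in place."""
    image = [[0] * (cols * TILE) for _ in range(rows * TILE)]
    for (cx, cy), tile in tiles.items():
        for y in range(TILE):
            for x in range(TILE):
                image[cy * TILE + y][cx * TILE + x] = tile[y][x]
    return image

tiles = {
    (0, 0): [[1, 1], [1, 1]],  # left tile
    (1, 0): [[2, 2], [2, 2]],  # right tile
}
print(assemble(tiles, cols=2, rows=1))  # [[1, 1, 2, 2], [1, 1, 2, 2]]
```

Because adjacent tiles share exact pixel boundaries, the assembled image has no visible seams, consistent with the text.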
For example, when the user (typically the driver of the vehicle 12) places a finger on the touchscreen 18 and flicks the map image to the right, the head unit 14 reports the flick gesture to the portable device 10. In response, the portable device 10 provides new map tiles to the head unit 14 for display. More specifically, the portable device 10 can advance through an array of map tiles so that, regardless of how rapidly or how gently the driver flicks the map image, the head unit 14 displays the tiles adjacent to the currently displayed tiles. These and other implementations are discussed in more detail with reference to FIGS. 6A and 6B.
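The "advance by exactly one tile per flick" behavior can be expressed by collapsing the flick velocity to its sign, so a hard fling and a gentle flick produce the same one-tile step. The grid width and clamping below are assumed values.

```python
# Sketch: discretized paging through the tile array in response to a flick.

GRID_COLS = 5  # assumed width of the tile array

def advance_tile(current_col: int, flick_velocity: float) -> int:
    """Return the new tile column after a horizontal flick.

    Only the sign of the velocity matters; magnitude is ignored, so the
    viewport always moves to the adjacent tile, never past it.
    """
    step = 1 if flick_velocity > 0 else -1
    return max(0, min(GRID_COLS - 1, current_col + step))

assert advance_tile(2, flick_velocity=900.0) == 3   # fast fling: one tile
assert advance_tile(2, flick_velocity=5.0) == 3     # gentle flick: same result
assert advance_tile(0, flick_velocity=-50.0) == 0   # clamped at the edge
```

Discarding the velocity magnitude is what makes the interaction predictable (and arguably safer) for a driver, which is the stated motivation.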
Referring next to FIG. 2A, a first example implementation of the portable device 10 and the head unit 14 is discussed. As discussed above, the head unit 14 can include the display 18, hardware controls 20 and 22, an audio input unit 24, and an audio output unit 26. The head unit also can include a processor 25, a set of one or several sensors 28, and one or several short-range communication units 30B.
The set of sensors 28 can include, for example, a global positioning system (GPS) module to determine the current position of the vehicle in which the head unit 14 is installed, an inertial measurement unit (IMU) to measure the speed, acceleration, and current orientation of the vehicle, a device to determine whether the turn signal lever is pushed up or down, etc. Although FIG. 2A depicts the set of sensors inside the head unit 14, it is noted that the sensors 28 need not be integral components of the head unit 14. Rather, a vehicle can include any number of sensors in various locations, and the head unit 14 can receive data from these sensors during operation. In operation, the sensors 28 can be used to determine a state of the vehicle 12.
The short-range communication unit 30B allows the head unit 14 to communicate with the portable device 10. The short-range communication unit 30B can support wired or wireless communication, such as USB, Bluetooth, Wi-Fi Direct, Near Field Communication (NFC), etc.
The processor 25 can operate to format messages transmitted between the head unit 14 and the portable device 10, process data from the sensors 28 and the audio input 24, display map images via the display 18, play audio instructions via the audio output, etc.
The portable device 10 can include a short-range communication unit 30A for communicating with the head unit 14. Similar to the unit 30B, the short-range communication unit 30A can support one or more communication schemes such as USB, Bluetooth, Wi-Fi Direct, etc. The portable device 10 can include audio input and output components such as a microphone 32 and a speaker 33. Further, the portable device 10 includes one or more processors or CPUs 34, a GPS module 36, a memory 38, and a cellular communication unit 50 to transmit and receive data via a 3G cellular network, a 4G cellular network, or any other suitable network. The portable device 10 also can include additional sensors (e.g., an accelerometer, a gyroscope) or, conversely, the portable device 10 can rely on sensor data provided by the head unit 14. In one implementation, to improve accuracy during real-time navigation, the portable device 10 relies on the positioning data provided by the head unit 14 rather than on the output of the GPS module 36.
The memory 38 can store contacts 40 and other personal data of the driver. As illustrated in FIG. 2A, the memory also can store instructions of an operating system 42 and of a speech generation system 44 operating as part of a navigation service application 48, which invokes a navigation API 46 during operation. The speech generation system 44 can generate audio instructions, which can be played via the speaker 33 on the portable device 10 or via the speakers 26 in the head unit 14. In some embodiments, the audio instructions can be generated at a remote server such as a navigation server. The speech generation system 44 then can receive the generated audio instructions and play them via the speaker 33 on the portable device 10 or via the speakers 26 in the head unit 14.
The software components 42, 44, and 48 can include compiled instructions and/or instructions in any suitable programming language interpretable at runtime. In any case, the software components 42, 44, and 48 execute on the one or more processors 34. In one embodiment, the navigation service application 48 is provided as a service of, or otherwise as a native component of, the operating system 42. In another embodiment, the navigation service application 48 is an application compatible with the operating system 42, but possibly provided separately from the operating system 42 by a different software vendor.
The navigation API 46 generally can be provided in different versions for different respective operating systems. For example, the maker of the portable device 10 can provide a software development kit (SDK) including the navigation API 46 for the Android™ platform, another SDK for the iOS™ platform, etc.
Referring to FIG. 2B, an example implementation of the primary device 10, the secondary device 11, and the head unit 14 is discussed. As illustrated in FIGS. 1A-1C and 2A-2C, the head unit 14 includes the display 18, hardware controls 20 and 22, an audio input unit 24, and an audio output unit 26. The head unit 14 also can include a processor 25, a set of one or several sensors 28, and one or several short-range communication units 30B.
The set of sensors 28 can include, for example, a global positioning system (GPS) module to determine the current position of the vehicle in which the head unit 14 is installed, an inertial measurement unit (IMU) to measure the speed, acceleration, and current orientation of the vehicle, a barometer to determine the altitude of the vehicle, etc. Although FIG. 2B depicts the set of sensors 28 inside the head unit 14, it is noted that the sensors 28 need not be integral components of the head unit 14. Rather, a vehicle can include any number of sensors in various locations, and the head unit 14 can receive data from these sensors during operation.
Depending on the implementation, the processor 25 can be a general-purpose processor that executes instructions stored on a computer-readable memory (not shown) or an application-specific integrated circuit (ASIC) that implements the functionality of the head unit 14. In any case, the processor 25 can operate to format messages from the head unit 14 to the primary device 10, receive and process messages from the primary device 10, display map images via the display 18, play back audio messages via the audio output 26, etc.
With continued reference to FIG. 2B, the primary device 10 also includes one or more processors or CPUs 34, a GPS module 36, a memory 38, and a cellular communication unit 50 to transmit and receive data via a 3G cellular network, a 4G cellular network, or any other suitable network. The primary device 10 also can include additional components such as a graphics processing unit (GPU). In general, the primary device 10 also can include additional sensors (e.g., an accelerometer, a gyroscope) or, conversely, the primary device 10 can rely on sensor data provided by the head unit 14. In one implementation, to improve accuracy during real-time navigation, the primary device 10 relies on the positioning data provided by the head unit 14 rather than on the output of the GPS module 36.
The one or several short-range communication units 30A allow the primary device 10 to communicate with the head unit 14 and with the secondary device 11. The short-range communication unit 30A can support wired or wireless communication, such as USB, Bluetooth, Wi-Fi Direct, Near Field Communication (NFC), etc. In some scenarios, the primary device 10 establishes different types of connections with the head unit 14 and the secondary device 11. For example, the primary device 10 can communicate with the head unit 14 via a USB connection and with the secondary device 11 via a Bluetooth connection.
The memory 38 can store contacts 40 and other personal data of the user. As illustrated in FIG. 2B, in one embodiment, the memory 38 also stores computer-readable instructions that implement an authorization module 45 for establishing connections and facilitating communication between the primary device 10 and the secondary device 11, and a mapping module 47 that generates, or obtains from a network server, digital map images, navigation instructions, etc. The software components 45 and 47 can include compiled instructions and/or instructions written in any suitable programming language interpretable at runtime. In any case, the software components 45 and 47 execute on the one or more processors 34.
In some implementations, the authorization module 55 includes the same software instructions as the authorization module 45. In other implementations, the authorization modules 45 and 55 implement the same set of functions but include different instructions for different platforms. Example functionality of the authorization modules 45 and 55 is discussed in more detail below. Although, for simplicity, the secondary device 11 is depicted as having only the authorization module 55, it will be understood that the secondary device 11 can have an architecture identical or similar to that of the primary device 10. Moreover, although only one secondary device 11 is depicted, the described system can be implemented with more than one secondary device.
A third example implementation of the portable device 10 and the head unit 14 is briefly considered with reference to FIG. 2C. As indicated above, the head unit 14 can include the touchscreen 18, hardware controls 20 and 22, an audio input unit 24, and an audio output unit 26. The head unit 14 also can include one or more processors 25, a set of one or several sensors 28, and one or several short-range communication units 30B. Each of the short-range communication units 30B allows the head unit 14 to communicate with the portable device 10. The short-range communication units 30B can support wired or wireless communication, such as USB, Bluetooth, Wi-Fi Direct, Near Field Communication (NFC), etc.
The set of sensors 28 can include, for example, a global positioning system (GPS) module to determine the current position of the vehicle in which the head unit 14 is installed, an inertial measurement unit (IMU) to measure the speed, acceleration, and current orientation of the vehicle, a barometer to determine the altitude of the vehicle, etc. Although FIG. 2C depicts the set of sensors 28 inside the head unit 14, it is noted that the sensors 28 need not be integral components of the head unit 14. Rather, a vehicle can include any number of sensors in various locations, and the head unit 14 can receive data from these sensors during operation.
Depending on the implementation, the processor 25 can be a general-purpose processor that executes instructions stored on a computer-readable memory 27 or an application-specific integrated circuit (ASIC) that implements the functionality of the head unit 14. In any case, the processor 25 can operate to format messages from the head unit 14 to the portable device 10, receive and process messages from the portable device 10, display map images via the display 18, play back audio messages via the audio output 26, etc.
The portable device 10 can include one or more short-range communication units 30A for communicating with the head unit 14. Similar to the short-range communication units 30B, the short-range communication units 30A can support one or more short-range communication schemes. The portable device 10 also can include one or more processors or CPUs 34, a GPS module 36, a memory 38, and a cellular communication unit 50 to transmit and receive data via a 3G cellular network, a 4G cellular network, or any other suitable network. The portable device 10 also can include additional components such as a voice input device 32, an audio output device 33, a touchscreen 31, or other user interface components.
The memory 38 can store contacts 40 and other personal data of the user. As illustrated in FIG. 2C, the memory 38 also can store instructions of an operating system (OS) 42 and of a navigation service application 48 that executes on the OS 42. In operation, the navigation service application 48 can format requests for map data and transmit the requests via a long-range communication network to a map data server, receive the map data (e.g., in a vector format, a raster format, or both), generate digital map tile images based on the map data, and provide these map tile images to the head unit 14. Similarly, the navigation service application 48 can receive search results responsive to user queries, navigation directions, and other information that can be provided, as images, text, and/or audio, to the head unit 14.
In one embodiment, the navigation service application 48 is provided as a service of, or otherwise as a native component of, the operating system 42. In another embodiment, the navigation service application 48 is an application compatible with the operating system 42, but possibly provided separately from the operating system 42 by a different software vendor. Further, in some implementations, the functionality of the navigation service application 48 is implemented by a software component operating in another software application, such as a web browser.
The memory 38 also can store a navigation API 46 that allows other software applications executing on the portable device 10 to access the functionality of the navigation service application 48. For example, the maker of the head unit 14 can develop an application that runs on the OS 42 and invokes the navigation API 46 to obtain navigation data, map data, etc.
In general, the software components 46 and 48 can include compiled instructions and/or instructions written in any suitable programming language interpretable at runtime. In any case, the software components 46 and 48 execute on the one or more processors 34.
As illustrated in FIG. 2C, the navigation service application 48 can implement a paging gesture controller 49 configured to process gestures received via the touchscreen 18 or, in other scenarios, via the user interface of the portable device 10. Example functionality of the paging gesture controller 49 is discussed further with reference to FIGS. 6A, 6B, and 12. It will be understood that, although in the example implementation of FIG. 2C the paging gesture controller 49 operates as a component of the navigation service application 48, in general the paging gesture controller 49 can operate in any suitable software architecture to process gesture-based user input and display a structured set of UI items, via the head unit of a vehicle or via a portable device, in a manner that is intuitive and safe for the driver.
FIG. 3A illustrates a first example communication system in which the portable device 10 can operate to obtain navigation data in response to a user request submitted via the head unit 14 or the portable device 10. For ease of illustration, the portable device 10 and the head unit 14 are shown in FIG. 3A in a simplified manner, i.e., without some of the components illustrated in FIG. 2A and/or discussed elsewhere in this disclosure.
The portable device 10 has access to a wide-area communication network 52, such as the Internet, via a long-range wireless communication link (e.g., a cellular link). Referring back to FIG. 2A, the portable device 10 can access the communication network 52 via the cellular communication unit 50. In the example configuration of FIG. 3A, the portable device 10 communicates with a navigation server 54 that provides navigation data and map data, a suggestions server 56 that generates suggestions based on partial user input, and a familiarity server 58 in which a familiarity scoring engine 62 analyzes user data, according to such signals as the user's past navigation requests and the user's home location, to estimate the driver's familiarity with a route or a location (in at least some of the embodiments, provided that the user selects certain settings and/or installs certain applications). For each maneuver, the familiarity scoring engine 62 can generate a metric, e.g., a score in the range [0, 100], to reflect the estimated probability that the driver is familiar with the portion of the route to which the maneuver corresponds.
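The kind of [0, 100] metric attributed to the familiarity scoring engine 62 could be composed from the signals the passage names (past navigation requests, proximity to the home location). The specific features, weights, and functional form below are invented for illustration only.

```python
# Hypothetical sketch of a [0, 100] familiarity score combining two of the
# signals mentioned in the text: repeated traversals and nearness to home.
import math

def familiarity_score(times_routed: int, miles_from_home: float) -> int:
    """Score in [0, 100]: grows with past traversals, decays far from home."""
    repetition = 1.0 - math.exp(-0.5 * times_routed)  # saturates toward 1
    locality = 1.0 / (1.0 + miles_from_home / 20.0)   # ~1 near home, -> 0 far away
    return round(100 * (0.7 * repetition + 0.3 * locality))

print(familiarity_score(times_routed=10, miles_from_home=2.0))   # high: well-known
print(familiarity_score(times_routed=0, miles_from_home=500.0))  # low: unknown
```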
Referring to FIG. 2A, in some implementations the speech generation system 44 can be part of the navigation server 54, the portable device 10, or a combination of the navigation server 54 and the portable device 10. For example, in some embodiments, a portion of the speech generation system 44 included in the portable device 10 can receive audio navigation instructions generated by a portion of the speech generation system 44 included in the navigation server 54 or in an audio generation server (not shown). The speech generation system 44 then can play the received audio navigation instructions on the portable device 10. Further, the familiarity scoring engine 62 can be implemented in the portable device 10 rather than in a network server.
More generally, the portable device 10 can communicate with any number of suitable servers. For example, in another embodiment, the navigation server 54 provides directions and other navigation data, while a separate map server provides map data (e.g., in a vector graphics format), a traffic data server provides traffic updates along the route, a weather data server provides weather data and/or alerts, an audio generation server generates audio navigation instructions, etc.
According to an example scenario, the driver requests navigation information by pressing an appropriate button on the head unit of the vehicle and entering a destination. The head unit provides the request to the portable device, which in turn requests the navigation data from the navigation server. With joint reference to FIGS. 1A, 2A, and 3A, which illustrate a more specific example, the head unit 14 can provide the request to the portable device 10, where a software application working with the connection service of the head unit 14 invokes the API 46 to provide the destination to the navigation server 54. The navigation server 54 then sends the navigation data, in the form of descriptions of a sequence of maneuvers, to the speech generation system 44, which generates audio instructions at varying levels of detail. The portable device 10 then provides the audio instructions to the head unit 14 for audio playback.
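The request path just described (head unit → portable device → navigation server → speech generation → head unit playback) can be sketched end to end. Every function body below is a stand-in assumption; the passage does not prescribe these interfaces.

```python
# Illustrative sketch of the scenario above, with the servers mocked out.

def navigation_server(destination: str) -> list[str]:
    """Stand-in for navigation server 54: return a maneuver sequence."""
    return ["Head north", "Turn right", f"Arrive at {destination}"]

def speech_generation(maneuvers: list[str], familiarity: int) -> list[str]:
    """Stand-in for speech generation system 44: vary the level of detail."""
    if familiarity > 80:           # familiar route: keep only the arrival
        return maneuvers[-1:]
    return maneuvers

def handle_head_unit_request(destination: str, familiarity: int) -> list[str]:
    """What the portable device 10 might do with a request from head unit 14."""
    maneuvers = navigation_server(destination)
    return speech_generation(maneuvers, familiarity)

print(handle_head_unit_request("Sydney Opera House", familiarity=20))
# ['Head north', 'Turn right', 'Arrive at Sydney Opera House']
```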
In other implementations, the portable device 10 can generate video of the map data (which can include static images or a video stream) and transmit the video to the head unit 14. The head unit 14 then can receive touch events from the user on the display 18. In this implementation, the head unit 14 does not interpret the touch events, but instead transmits the touch events in "raw" form. For example, the user can tap a portion of the display 18 corresponding to a point of interest to select a destination, or the user can perform a series of swipe gestures to switch between previous destinations stored on the portable device 10. The "raw" touch events can be sent to the portable device 10, which interprets the "raw" touch events to determine the navigation information requested by the user. For example, the portable device 10 can generate video including a map of Sydney, Australia, and transmit the video to the head unit 14. The user then can touch the upper-right corner of the display 18 corresponding to the Sydney Opera House. As a result, the head unit 14 can send the "raw" touch event (e.g., a touch on the upper-right corner of the display) to the portable device 10, and, based on the "raw" touch event, the portable device can determine that the user has requested navigation directions to the Sydney Opera House.
It will be understood that, in other implementations, the driver or a passenger can provide the destination via the audio input 32 of the portable device 10 or the audio input 24 of the head unit 14 (and, if desired, the origin as well, when the origin differs from the current location). Further, in some implementations, the navigation service 48 can determine directions for a route using data stored on the portable device 10.
FIG. 3B illustrates a second example communication system in which the secondary device 11 can operate to transmit data to the head unit 14 via the primary device 10. For ease of illustration, the primary device 10 and the head unit 14 are shown in FIG. 3B in a simplified manner.
In this implementation, the primary device 10 and the secondary device 11 have access to a wide-area communication network 52, such as the Internet, via long-range wireless communication links (e.g., cellular links). Referring back to FIG. 2B, the primary device 10 and the secondary device 11 can access the communication network 52 via respective instances of the cellular communication unit 50. In the example configuration of FIG. 3B, the primary device 10 and the secondary device 11 have access to an authorization server 59, which generates connection parameters and transmits the connection parameters to the primary device 10 and the secondary device 11 over the wide-area network 52.
To consider an example scenario, referring again to FIGS. 1B, 2B, and 3B, a secondary device 11 controlled by a passenger of the vehicle transmits data to the head unit 14 via a primary device 10 controlled by the driver of the vehicle. The primary device 10 connects to the head unit 14 and advertises one or more available head unit resources, such as a display, speakers, hardware input controls, etc. The secondary device 11 sends a request to establish a connection with the primary device 10 to the authorization server 59. The authorization server 59 transmits an authorization request to receive the driver's permission to establish the connection between the primary device 10 and the secondary device 11. The driver provides input indicating that the driver allows the connection, and the connection between the primary device 10 and the secondary device 11 is established.
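The authorization flow above can be condensed into a small sketch, with the authorization server modeled as a registry of advertised resources and the driver's decision passed in as a callback. All names are hypothetical; the actual message formats are not specified in this passage.

```python
# Compact sketch of the advertise/request/approve handshake described above.

advertised = {}  # device_id -> list of advertised head-unit resources

def advertise(primary_id: str, resources: list[str]) -> None:
    """Primary device announces available head-unit resources (events 402/404)."""
    advertised[primary_id] = resources

def request_connection(secondary_id: str, primary_id: str, driver_allows) -> bool:
    """Secondary asks to connect (414); the server relays the request to the
    driver, whose accept/reject decision (416/418) is modeled as a callback."""
    if primary_id not in advertised:
        return False                     # nothing advertised: nothing to join
    return driver_allows(secondary_id)   # driver accepts or rejects

advertise("drivers-phone", ["display", "speakers"])
ok = request_connection("johns-phone", "drivers-phone", driver_allows=lambda s: True)
print(ok)  # True
```

Routing the approval through the driver's device, rather than auto-accepting, is the safety property the scenario emphasizes.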
Example sequence diagram for implementing communication between the secondary device and the head unit
For further clarity, FIG. 4 depicts an example message sequence diagram 400 corresponding to the scenario above. Each vertical line schematically represents the timeline of the corresponding component, where events depicted lower on the page occur after events depicted higher on the page. The flow of information between the components is indicated by arrows. An arrow in various cases can represent a message propagated between different physical devices, a message propagated between tasks running on the same device, a function call from one software layer to another software layer, a callback function invoked in response to a triggering event, etc. Further, a single arrow in some cases can represent a sequence of function calls and/or messages.
As illustrated in FIG. 4, the primary device 10 advertises the available resources of the head unit to the authorization server 59 (event 402). For example, the driver can submit an input indicating that the driver wishes to advertise the resources, or a setting of the primary device 10 may indicate that the available resources are to be advertised under certain conditions. In some embodiments, the primary device 10 can advertise the available resources of the head unit 14 via a social networking service.
The authorization server 59 receives the resource advertisement message (402) and stores some or all of the identifier of the primary device 10, an indication of the available resources, and the location of the primary device 10 (event 404). The secondary device 11 sends a request for available head unit resources to the authorization server 59 (event 406). The authorization server 59 receives this request along with the device identifier of the secondary device 11 and the location of the secondary device 11. The authorization server 59 determines whether there is a primary device advertising available head unit resources within a certain range of the secondary device 11. In the illustrated scenario, the authorization server 59 determines that the primary device 10 is advertising available head unit resources within the relevant range, and sends a response 408 to the secondary device 11. The response 408 can indicate the available resources and the device identifier of the primary device 10.
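The proximity check attributed to the authorization server can be sketched as a range query over stored advertisements. The flat x/y coordinates and the 100 m range below are assumptions made purely for this example.

```python
# Illustrative sketch: find advertising primaries within range of a secondary.
import math

RANGE_M = 100.0

# Stored advertisements (event 404): device id -> (x, y) position in meters.
primaries = {"drivers-phone": (10.0, 20.0), "far-phone": (5000.0, 0.0)}

def nearby_primaries(secondary_pos: tuple[float, float]) -> list[str]:
    """Return ids of advertising primaries within RANGE_M of the secondary."""
    sx, sy = secondary_pos
    return [
        dev for dev, (px, py) in primaries.items()
        if math.hypot(px - sx, py - sy) <= RANGE_M
    ]

print(nearby_primaries((12.0, 25.0)))  # ['drivers-phone']
```

A real server would use geodetic coordinates and a spatial index, but the matching logic of event 406/408 reduces to this kind of range filter.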
In response to receiving the response 408 from the authorization server 59, the secondary device 11 in this example launches a UI element on the screen (event 410). For example, if the advertised available resource is a speaker, an interactive speaker icon may appear on the display of the secondary device 11. The passenger can select the speaker icon to choose to stream music from the secondary device 11 to the head unit 14 via the primary device 10.
In some embodiments, the main device 10 also announces the available resources locally, to portable devices within a certain distance. Similarly, the secondary device 11 can attempt to discover main devices in its vicinity. In these embodiments, the secondary device 11 receives the transmission announcing the available resources of the head-end unit 14 and transmits the device identifiers of the main device 10 and the secondary device 11 to the authorization server 59. Turning briefly to Fig. 1B, a user interface icon 29 can be displayed on the screen of the secondary device 11. Additionally, the screen of the secondary device 11 can display a dialog presenting the nearby devices with available head-end unit resources.
Referring again to the message sequence chart of Fig. 4, the passenger submits input (412) indicating that the passenger wishes to use the available resources announced by the main device 10. For example, the user can tap an icon, select the main device 10 from a list of nearby available devices, etc. The secondary device 11 processes the user input 412 and transmits a connection request, which includes the device identifier of the main device 10, to the authorization server 59 (event 414).
With continued reference to the example scenario of Fig. 4, the authorization server 59 receives the connection request 414 and transmits an authorization request 416 to the main device 10. The authorization request 416 can include a description of the secondary device 11 (e.g., "John's phone"), so that the driver can confirm that the correct secondary device 11 is being connected. Again turning briefly to Fig. 1B, an example dialog is displayed on the screen of the main device 10, requesting that the user accept or reject the connection request from the secondary device 11.
The driver then indicates that she allows a connection to be established between the main device 10 and the secondary device 11 (event 418). In response to event 418, the main device transmits an authorization permission 420 to the authorization server 59. The authorization server 59 receives the authorization permission 420 and determines connection parameters (event 422), which can include an indication of the type of connection to be established between the devices 10 and 11 (e.g., Bluetooth, Wi-Fi Direct, infrared), a time interval during which the connection must be established, etc. The authorization server 59 transmits the connection parameters to the main device 10 and the secondary device 11 (event 426).
The main device 10 receives the connection parameters and establishes a connection with the secondary device 11 (event 428). Once the connection is established, the secondary device 11 can transmit data to the head-end unit 14 via the main device 10. In some embodiments, the authorization is symmetric, so that if the main device 10 later becomes a secondary device, the devices 10 and 11 can exchange data without further authorization.
Example logic for dynamically varying the length and spacing of audio instructions
With reference to Fig. 2A and the techniques for dynamically varying the length of audio instructions, Fig. 5 schematically illustrates how the speech generation system 44 determines a suitable level of detail for audio navigation instructions in an example scenario. Some of the blocks in Fig. 5 represent hardware and/or software components (e.g., blocks 44 and 62), other blocks represent data structures or the memories, registers, or state variables that store these data structures (e.g., blocks 74, 76, and 90), and still other blocks represent output data (e.g., blocks 80 through 88). Input signals are represented by arrows labeled with the corresponding signal names.
Similar to the examples above, the terms "user" and "driver" are used interchangeably, with the understanding that, for example, if the portable device of a passenger of the car is used for navigation, the audio navigation instructions can be generated for, and personalized to, that passenger.
For example, the system of Fig. 5 receives detailed route directions for a route from the navigation server 54 of Fig. 3A, or from a navigation engine operating locally on the same device. In this example, the detailed route directions 90 consist of descriptions of maneuvers 1 through 5, but in general the detailed route directions 90 can include any number of maneuvers.
As shown in Fig. 5, the familiarity rating engine 62 receives the descriptions of the maneuvers along with user-specific data, such as user identification data, past driving data, and an indication of the distance between the user's position and her home. For example, some or all of this data may come from a user profile maintained by an online service that provides navigation data. The online service also may allow the user to store personal preferences, such as preferred routes, toll/no-toll road preferences, etc. Additionally, the user can store a home location, which can be selected to direct the user home, or which can be used to determine the distance between a maneuver and the user's home. The user profile can also reflect the user's previous navigation requests.
The familiarity rating engine 62 uses the descriptions of the maneuvers and the user-specific data to generate a familiarity score for each maneuver. For example, if a maneuver is reflected in the user's past driving data, and if it is also determined that the user is close to home (e.g., within two miles), the familiarity score may be very high. In some implementations, if the familiarity score exceeds a certain threshold, the familiarity rating engine 62 generates a "familiar" signal indicating that the user is familiar with the maneuver, and otherwise generates an "unfamiliar" signal indicating that the user is unfamiliar with the maneuver. In other implementations, the familiarity rating engine 62 can send the "raw" familiarity score directly to the speech generation system 44.
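To make the thresholding concrete, the following sketch shows one way a familiarity engine could combine past driving data and distance from home into a score and emit a "familiar"/"unfamiliar" signal. The weights, threshold value, and function names are all hypothetical; the specification only says that a score above "a certain threshold" yields the "familiar" signal.

```python
FAMILIARITY_THRESHOLD = 0.6  # hypothetical cutoff; the text only says "a certain threshold"

def familiarity_score(maneuver_id, past_maneuver_ids, miles_from_home):
    """Combine two illustrative signals into a score in [0, 1]."""
    score = 0.0
    if maneuver_id in past_maneuver_ids:   # maneuver appears in past driving data
        score += 0.5
    if miles_from_home <= 2.0:             # user is close to home (e.g., within 2 miles)
        score += 0.5
    return score

def familiarity_signal(maneuver_id, past_maneuver_ids, miles_from_home):
    """Threshold the raw score into the two signals described above."""
    score = familiarity_score(maneuver_id, past_maneuver_ids, miles_from_home)
    return "familiar" if score > FAMILIARITY_THRESHOLD else "unfamiliar"
```

In the variant that forwards the "raw" score, `familiarity_signal` would be skipped and `familiarity_score` would be delivered directly to the speech generation system.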
In some cases, the familiarity rating engine 62 can receive a signal indicating whether the driver owns the vehicle or is renting it. For example, referring back to Fig. 2A, the head-end unit 14 can supply identifying information (e.g., a VIN, a machine address of a communication port on the head-end unit 14, a serial number) to the portable device 10. The portable device 10 may determine whether it has received this identifying information before, and based on this adjust the probability that the vehicle is a rental. More specifically, the portable device 10 can make this determination by comparing the identifying information received from the head-end unit 14 with identifying information in the user profile. In another embodiment, the portable device 10 receives other parameters from the head-end unit 14 that indirectly suggest that the user may never have driven this vehicle before. For example, the portable device 10 can compare the previous navigation requests reflected in the user profile with the previous routes stored in the head-end unit 14. Based on this comparison, the portable device 10 can adjust the probability that the vehicle is a rental.
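As a rough illustration of this probability adjustment, the sketch below updates a prior "this vehicle is a rental" probability from the two signals described above. The update rules and multipliers are invented for illustration; the specification describes adjusting the probability but gives no concrete formula.

```python
def rental_probability(prior, head_unit_id, known_ids,
                       head_unit_routes=None, profile_requests=None):
    """Adjust the prior probability that the vehicle is a rental.

    Illustrative heuristic only: multipliers and the +0.1 bump are
    placeholders, not values from the specification.
    """
    p = prior
    if head_unit_id in known_ids:
        p *= 0.2                 # seen this vehicle before: much less likely a rental
    else:
        p = min(1.0, p * 2.0)    # unknown vehicle: more likely a rental
    # Indirect signal: no overlap between the profile's past navigation
    # requests and the routes stored in the head-end unit.
    if head_unit_routes is not None and profile_requests is not None:
        if not set(head_unit_routes) & set(profile_requests):
            p = min(1.0, p + 0.1)
    return p
```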
If the vehicle is a rental, the familiarity rating engine 62 in some cases can classify the location as unfamiliar to the user. In other words, when determining whether to generate a "familiar" or an "unfamiliar" signal, the familiarity rating engine 62 can use this determination as one of several signals.
In addition to the "familiar" and "unfamiliar" signals for the various maneuvers, the speech generation system 44, when generating each audio instruction, can also receive an indication of the current state of the head-end unit from a register 74 and an indication of the current state of the vehicle from a register 76. For example, if the speaker of the head-end unit is playing music, the vehicle head-end unit state 74 may be "audio playback." If no audio is currently coming from the head-end unit, the state may be "idle." Additionally, there may be separate states according to the volume of the audio playback, such as "audio high" or "audio low." In some implementations, the instruction can be played at a higher or lower volume according to the volume of the audio playback. For example, if the head-end unit is in the "audio low" state, the speech generation system 44 can generate the audio instruction at a relatively low volume to reduce driver distraction. In the example scenario of Fig. 5, the vehicle head-end unit state 74 can be determined separately for the time interval corresponding to each maneuver. Thus, the head-end unit is in the "idle" state for maneuver 1, in the "audio playback" state for maneuver 2, and back in the "idle" state for maneuvers 3 through 5.
Referring back to Fig. 2A, the state 76 of the vehicle can be determined via sensors in the head-end unit 14, sensors in the portable device 10, and/or the audio inputs 24 and 32 of the head-end unit 14 and the portable device 10, respectively. For example, depending on whether the vehicle is currently moving, the state 76 of the vehicle may be "vehicle moving" or "vehicle stationary." There may also be separate states according to the speed of the vehicle. In some implementations, if the vehicle is traveling at high speed and there is only a short distance before the next maneuver, the speech generation system 44 can generate shorter route directions. Additionally, if one of the turn signals is flashing, the state of the vehicle may also be "turn indicator on." In some implementations, the state of the vehicle can be a combination of the speed of the vehicle and the state of the turn signals.
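The three inputs just described (familiarity, head-end unit state, vehicle state) drive the choice of instruction length. The following decision table is a hypothetical distillation of the Fig. 5 scenario walked through below; the actual system may weigh these inputs quite differently.

```python
def instruction_length(familiar, head_unit_state, vehicle_state):
    """Pick 'long' or 'short' for an audio instruction.

    Hypothetical decision table distilled from the Fig. 5 scenario;
    not the specification's actual rule set.
    """
    if (not familiar and head_unit_state == "idle"
            and vehicle_state == "vehicle stationary"):
        return "long"   # maneuver 1: unfamiliar, nothing competing for attention
    if not familiar and head_unit_state == "audio playback":
        return "short"  # maneuver 2: user is listening to music while driving
    if familiar:
        return "short"  # maneuvers 3-4: familiar maneuvers get shortened
    return "long"
```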
In the example scenario of Fig. 5, the familiarity rating engine 62 generates an "unfamiliar" signal 64 for maneuver 1. At this point, the vehicle head-end unit is in the "idle" state and the state of the vehicle is "vehicle stationary." As a result, the speech generation system 44 generates a "long" or full audio instruction 80 corresponding to the full-length text description of maneuver 1 included in the detailed route directions 90. For example, the audio instruction 80 can be "In 300 meters, turn left onto Main Street."
For maneuver 2, the familiarity rating engine 62 also generates an "unfamiliar" signal 66. However, the state of the vehicle head-end unit is now "audio playback," and the state of the vehicle is "vehicle moving." In this case, the speech generation system 44 determines that the user has no time for a very long instruction, because the vehicle is moving, and that the user is listening to music and probably does not wish to be disturbed. Accordingly, the speech generation system 44 generates a short audio instruction 82 that omits some of the text of the full-length description of maneuver 2.
In general, an instruction can be shortened in any suitable manner, which may be language-specific. In an example implementation, the speech generation system 44 shortens an audio instruction, where appropriate, by removing nonessential information, such as an indication of the distance between the current position of the vehicle and the position of the imminent maneuver, or the indication of road type that follows the proper name of a road ("Main" rather than "Main Street"). For example, the detailed audio instruction describing maneuver 2 could be "In 600 meters, turn right onto Central Avenue," and the speech generation system 44 can output "Turn right on Central" as the short audio instruction 82.
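One toy, English-specific way to implement this kind of shortening is sketched below: strip a leading "In &lt;distance&gt;," clause and a trailing road-type word. A real implementation would presumably work from the structured maneuver description rather than pattern-matching rendered text; the regexes and road-type list here are assumptions.

```python
import re

def shorten_instruction(full_text):
    """Shorten a full audio instruction by dropping nonessential parts.

    Removes a leading distance clause and a trailing road-type word,
    as in the 'Main' vs. 'Main Street' example above.
    """
    text = re.sub(r"^In \d+ (?:meters|miles|feet),\s*", "", full_text)
    text = re.sub(r"\s+(Street|Avenue|Road|Boulevard)$", "", text)
    return text[0].upper() + text[1:]
```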
For maneuver 3, the familiarity rating engine 62 generates a "familiar" signal 68. For example, maneuver 3 can be part of one of the user's preferred routes, as indicated by the user profile. Although the head-end unit is in the "idle" state, the speech generation system 44 generates a short audio instruction 84 due to the user's familiarity and because the vehicle is moving. Before generating the audio instruction, however, the speech generation system 44 also checks the next maneuver to determine whether both maneuvers are familiar to the user, in which case the two maneuvers can be combined into a shortened audio instruction describing both.
Further, the familiarity rating engine 62 generates a "familiar" signal 70 for maneuver 4. The speech generation system 44 then generates a short audio instruction 86 describing maneuver 4, and reduces the interval between instructions 84 and 86 to zero. In other words, the speech generation system 44 combines the short instructions 84 and 86 into a single instruction. For example, the combined audio instruction 84, 86 could be "Turn right on Elm Street and in 500 meters merge onto Highway 34." The speech generation system 44 may then continue looking ahead at additional maneuvers to potentially combine more instructions, until it reaches a maneuver for which the familiarity rating engine 62 generates an "unfamiliar" signal.
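The look-ahead combining behavior can be sketched as a simple grouping pass over the ordered maneuvers. The data model (id lists and signal/phrasing maps) is illustrative; the specification describes the behavior, not a representation.

```python
def combine_instructions(maneuvers, signals, shorts):
    """Merge runs of consecutive 'familiar' maneuvers into combined instructions.

    `maneuvers` is an ordered list of maneuver ids, `signals` maps each id
    to 'familiar'/'unfamiliar', and `shorts` maps each id to its phrasing.
    """
    result, i = [], 0
    while i < len(maneuvers):
        group = [maneuvers[i]]
        # Look ahead: keep absorbing maneuvers while the run stays familiar.
        while (signals[group[-1]] == "familiar" and i + 1 < len(maneuvers)
               and signals[maneuvers[i + 1]] == "familiar"):
            i += 1
            group.append(maneuvers[i])
        result.append(" and ".join(shorts[m] for m in group))
        i += 1
    return result
```

Running this on the Fig. 5 scenario (maneuvers 3 and 4 familiar, the rest unfamiliar) yields four spoken instructions instead of five, with maneuvers 3 and 4 merged.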
With continued reference to Fig. 5, the speech generation system 44 receives an "unfamiliar" signal 72 for maneuver 5 and determines that the vehicle head-end unit is in the "idle" state. The speech generation system 44 further determines that the turn indicator consistent with maneuver 5 is activated (e.g., by receiving a corresponding indication from the head-end unit). For example, if maneuver 5 involves a left turn and the left turn indicator is on relatively shortly before the turn, the speech generation system 44 may determine that the driver probably knows about the upcoming turn, and the audio instruction can be shortened. However, if maneuver 5 does not involve a turn, the "turn indicator on" state is irrelevant to the audio instruction and may simply be left over from an earlier maneuver. Additionally, if maneuver 5 is a confirmation instruction, such as "In 300 meters, turn left" following the previous instruction "In one mile, turn left," the speech generation system 44 can skip this audio instruction entirely.
Example schematic diagrams for processing gesture input
Referring now to Fig. 6A, and with continued reference to Figs. 1C and 2C and the techniques for processing gesture input in a motor vehicle UI, in an example scenario the paging gesture controller 49 processes gesture input and controls the display of items A through I via the touchscreen 18. For ease of illustration, items A through I in this example are rendered as graphical and/or text elements of generally the same size. According to one implementation, the paging gesture controller 49 receives parameters describing the dimensions (e.g., length, width) of the touchscreen 18 in order to determine how many of items A through I can fit on the touchscreen. In the example shown in Fig. 6A, the paging gesture controller 49 determines that at most three items can be displayed on the touchscreen 18.
For example, each of items A through I can be a snippet describing a point of interest matching certain criteria. As a more specific example, the driver may have requested the display of coffee shops along the route to the selected destination. Each of items A through I therefore can include the address of a coffee shop, a photograph of the coffee shop, business hours, etc. The navigation service application 48 can receive the data describing items A through I and organize the data into an ordered list, so that item B follows item A, item C follows item B, etc.
The paging gesture controller 49 can update the display of subsets of items A through I in response to gesture-based input received via the touchscreen 18. More specifically, the paging gesture controller 49 updates display layout 102 to display layout 104 in response to a flick or swipe gesture 110, and then updates display layout 104 to display layout 106 in response to a subsequent flick gesture 112. The swipe gestures 110 and 112 are applied in substantially the same horizontal direction, but the speed of swipe gesture 110 is significantly greater than the speed of swipe gesture 112, as represented in Fig. 6A by the respective lengths of arrows 110 and 112.
In the initial display layout 102, the displayed set 120 of items includes items A, B, and C. When the user applies the relatively fast flick gesture 110, the paging gesture controller 49 determines the direction of the gesture 110 and advances the list to display a new set 130 including items D, E, and F. The user then applies the relatively slow flick gesture 112, and the paging gesture controller 49 advances the list to display a new set 140 including items G, H, and I. Thus, in both cases, the paging gesture controller 49 ensures that a new set of items is displayed in response to the flick gesture, and that regardless of how fast the flick gesture is, no items are missed when advancing to the new set.
In this example, the paging gesture controller 49 further takes into account the size of the touchscreen 18, or the display area currently available on the touchscreen 18, to determine how far the list should advance in response to a flick gesture. Similarly, if the user applies a flick gesture to the touchscreen of the portable device 10, the paging gesture controller 49 can determine, in view of the size of the touchscreen of the portable device 10, how many items can be displayed at a time. Thus, the paging gesture controller 49 may traverse the set of items A through I two at a time, displaying items (A, B), then (C, D), then (E, F), etc., in response to successive flick gestures.
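A minimal sketch of this speed-independent paging behavior is shown below: the page size is derived from the screen dimensions, and the flick velocity never changes how far the list advances. Class and parameter names are illustrative.

```python
class PagingController:
    """Advance an ordered list by a fixed page size per flick, ignoring speed."""

    def __init__(self, items, screen_width, item_width):
        self.items = items
        self.page = max(1, screen_width // item_width)  # items that fit on screen
        self.start = 0

    def visible(self):
        return self.items[self.start:self.start + self.page]

    def flick(self, velocity):
        # `velocity` is intentionally unused: fast and slow flicks advance equally.
        if self.start + self.page < len(self.items):
            self.start += self.page
        return self.visible()
```

With a screen that fits three items, successive flicks of very different speeds step through (A, B, C), (D, E, F), (G, H, I), mirroring the Fig. 6A scenario.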
In the example of Fig. 6A, the sets 120, 130, and 140 are non-overlapping. However, in other implementations, these sets can overlap, so as to provide the driver, in a controlled manner, with additional assurance that no items are missed. Such an implementation is discussed in more detail with reference to Fig. 6B.
Referring now to Fig. 6B, and still referring to Figs. 1C and 2C, the navigation service 48 can display, via the touchscreen 18, an interactive digital map composed of map tiles 1-A, 1-B, ..., 5-G. A map tile can be implemented as a square image of a certain fixed size at a particular zoom level. In this example scenario, the series of display layouts 200 includes an initial display layout 202, a second display layout 204 generated in response to a flick gesture 210, and a third display layout 206 generated in response to a subsequent flick gesture 212.
The initial display layout 202 includes a map tile array 220, which includes a first row of tiles 1-A, 1-B, and 1-C, a second row of tiles 2-A, 2-B, and 2-C, etc. In response to the relatively slow flick gesture 210, the paging gesture controller 49 displays a new map tile array 230 that shares only column C with the map tile array 220 (i.e., map tiles 1-C, 2-C, ..., 5-C) and includes new columns D and E. Further, in response to the comparatively faster flick gesture 212, the paging gesture controller 49 displays a new map tile array 240 that shares only column E with the map tile array 230 (i.e., map tiles 1-E, 2-E, ..., 5-E) and includes new columns F and G.
Similar to the scheme of Fig. 6A, the paging gesture controller 49 in Fig. 6B advances the map tile array by the same fixed amount in response to flick gestures of significantly different speeds, where the fixed amount depends on the size of the touchscreen 18. In this scheme, however, the paging gesture controller 49 overlaps successively generated results to give the user additional assurance that no part of the digital map is inadvertently missed because a flick was too fast. Moreover, the driver need not try to flick fast enough to fully advance the list, because the paging gesture controller 49 advances the map tile array by the fixed amount even if the gesture is slow.
If desired, each column of map tiles in Fig. 6B can be treated similarly to items A through I of Fig. 6A. Thus, the paging gesture controller 49 can be regarded as operating on a one-dimensional list rather than a two-dimensional array. However, if flick gestures are applied to the tile-based digital map vertically rather than horizontally, the rows of map tiles, rather than the columns, should be regarded as defining the items.
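Treating each column as a list item, the overlapping advance of Fig. 6B reduces to a fixed step with a one-column overlap, as sketched below. The `visible`/`overlap` parameter names are illustrative assumptions.

```python
def advance_columns(columns, start, visible=3, overlap=1):
    """Advance a tiled map by a fixed number of columns per flick.

    With visible=3 and overlap=1, each flick keeps the last displayed
    column on screen (e.g., columns A-C, then C-E, then E-G), regardless
    of how fast the flick was.
    """
    step = visible - overlap
    new_start = min(start + step, max(0, len(columns) - visible))
    return new_start, columns[new_start:new_start + visible]
```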
Example flow diagram for dynamically varying the length of audio instructions
Referring now to Fig. 7, an example method for generating audio instructions by the speech generation system 44 of Fig. 2A (or another suitable system) is shown. For example, the method can be implemented as a set of instructions stored on a computer-readable memory and executable on one or more processors of the portable device 10. More generally, the method of Fig. 7 can be implemented in a user device, in a network server, or partly in a user device and partly in a network server.
The method begins at block 702, where a description of a set of maneuvers is received. Depending on the implementation, the description can be received from another device (e.g., a navigation server accessible via a communication network) or from another software component on the same device. The description of the maneuvers can be provided in any suitable format, including an alphanumeric string in which the descriptions of the individual maneuvers are separated by semicolons.
At block 704, a subset of the maneuvers received at block 702 is selected. In many cases, the subset includes only a single maneuver. However, when the corresponding audio instructions are combined, the subset can include multiple maneuvers. Also at block 704, using the techniques discussed above or other suitable techniques, the user's familiarity with the route segments corresponding to the maneuvers in the subset is determined.
At blocks 706 and 708, the state of the vehicle head-end unit and the state of the vehicle, respectively, are determined. Next, the method uses the results of the determinations at blocks 704, 706, and 708 to determine, at block 710, whether an audio instruction is needed. As discussed above, the audio instruction sometimes can be omitted. If no audio instruction is needed, the flow proceeds to block 716, where it is determined whether another subset of maneuvers should also be considered.
Otherwise, if it is determined that an audio navigation instruction is needed, the flow proceeds to block 712, where the duration of one or more audio instructions for the subset is determined. At block 712, the method can also determine whether the next maneuver should be regarded as part of the subset, or whether there should be an interval between the audio instruction for the one or more maneuvers in the subset and the audio instruction for the subsequent maneuver.
The method then proceeds to block 714, where an audio instruction is generated for each maneuver or combination of maneuvers.
At block 716, it is determined whether every maneuver has been considered as part of some subset. If no maneuvers remain, the method ends. Otherwise, the flow returns to block 704 to select the next subset of maneuvers.
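The loop structure of blocks 702 through 716 can be sketched as follows. The predicate used at block 710 and the subset-growing rule at block 712 are stubs standing in for the familiarity, head-end-unit-state, and vehicle-state techniques described above; they are not the specification's actual decision logic.

```python
def generate_instructions(maneuvers, is_familiar, head_unit_state, vehicle_state):
    """Skeleton of the flow of Fig. 7 (blocks 702-716), with stub decisions."""
    instructions = []
    i = 0
    while i < len(maneuvers):                      # blocks 704/716: next subset
        subset = [maneuvers[i]]
        familiar = is_familiar(maneuvers[i])       # block 704: familiarity
        # Blocks 706/708/710: decide whether an instruction is needed at all
        # (placeholder rule: skip familiar maneuvers while music plays in motion).
        needed = not (familiar and vehicle_state == "vehicle moving"
                      and head_unit_state == "audio playback")
        if needed:
            # Block 712: absorb the next maneuver into the subset if also familiar.
            while familiar and i + 1 < len(maneuvers) and is_familiar(maneuvers[i + 1]):
                i += 1
                subset.append(maneuvers[i])
            # Block 714: generate one instruction for the subset.
            instructions.append(" and ".join(subset))
        i += 1
    return instructions
```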
Example flow diagrams for implementing communications between a secondary device and a vehicle head-end unit
Referring now to Fig. 8, an example method 800 for establishing a connection between a main device and a secondary device can be implemented as a set of instructions stored on a computer-readable memory and executable on one or more processors. In an example embodiment, the method 800 is implemented in the authorization server 59 of Fig. 3B.
The method begins at block 802, where a communication link is established between the head-end unit and the main device. In a typical scenario, the communication link is a short-range communication link, such as a USB connection, a Bluetooth wireless connection, etc. Next, at block 804, it is determined whether the main device should announce the available resources of the head-end unit. For example, the announced resources of the head-end unit can be a display, speakers, hardware input controls, etc.
At block 806, it is determined whether the main device accepts a communication link with the secondary device. In a typical scenario, the driver submits input accepting the communication link via the main device. At block 808, a communication link is established between the main device and the secondary device, after which the method 800 ends.
With reference to Fig. 9, an example method 900 for establishing a connection between a main device and a secondary device can be implemented in a portable device that has access to the head-end unit of a car. Like the method 800, the method 900 can be implemented as a set of computer-readable instructions stored on a computer-readable memory and executable on one or more processors.
The method begins at block 902, where the candidate main device announces the available resources of the head-end unit. At block 904, the candidate main device receives an authorization request from the authorization server. In a typical scenario, the authorization request includes the device identifier and/or additional descriptors of the device requesting authorization to connect. The driver can use the main device to submit user input accepting the authorization request. In some embodiments, the main device can announce the available resources of the head-end unit via a social networking service.
At block 906, the candidate main device confirms the authorization request by sending an authorization permission to the authorization server. At block 908, the candidate main device receives connection parameters from the authorization server. Next, at block 910, the candidate main device uses the connection parameters to establish a connection with the secondary device, and begins operating as the main device. Once the connection is established, at block 912, the main device can transfer data between the head-end unit and the secondary device. Depending on the implementation, the transfer is unidirectional (e.g., from the secondary device to the head-end unit) or bidirectional (e.g., from the secondary device to the head-end unit and from the head-end unit to the secondary device). Further, in some embodiments, the main device receives state updates, user commands, etc. from the head-end unit, and generates messages for the secondary device according to the communication scheme defined between the main device and the secondary device. In other words, if desired, the secondary device and the main device can implement robust functionality to support communication between the secondary device and the head-end unit. The method ends after block 912.
Referring now to Fig. 10, an example method 1000 for establishing a connection with the head-end unit of a vehicle via a nearby portable device can be implemented as a set of computer-readable instructions stored on a computer-readable memory and executable on one or more processors of, for example, the secondary device 11.
The method begins at block 1002, where the secondary device detects a nearby device that has available head-end unit resources. In a typical scenario, the secondary device sends a request for available resources in its vicinity to the authorization server. The authorization server responds to the request by providing the secondary device with the device identifiers of nearby devices announcing available resources.
At block 1004, the secondary device sends an authorization request to the authorization server; the authorization request includes the device identifier of the main device to which the secondary device requests permission to connect. Next, at block 1006, the secondary device receives connection parameters from the authorization server and establishes a connection with the main device. At block 1008, the secondary device can exchange data with the head-end unit of the vehicle via the main device. The method ends after block 1008.
Referring now to Fig. 11, an example method 1100 for establishing a connection between a pair of portable devices in the same vehicle can be implemented as a set of instructions stored on a computer-readable memory and executable by one or more processors. In an example embodiment, the method 1100 is implemented in the authorization server 59 of Fig. 3B.
The method begins at block 1102, where a message is received from a candidate main device announcing the available resources of a head-end unit. In one implementation, the authorization server stores the device identifier of the candidate main device and a descriptor of the announced resources. After a candidate secondary device "discovers" the candidate main device using short-range communication or via the network server, an authorization request from the candidate secondary device is received at block 1104. The authorization request can include the device identifier of the candidate main device to which the candidate secondary device requests permission to connect.
Next, at block 1106, the device identifier and available resources of the nearby candidate main device are sent to the candidate secondary device. At block 1108, an authorization permission is received from the candidate main device. For example, the user of the candidate main device can accept the connection via a user interface. At block 1110, connection parameters are determined, and at block 1112, the connection parameters are sent to the main device and the secondary device. The method ends after block 1112.
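The announce / request / authorize / parameters sequence of method 1100 can be sketched as a tiny in-memory broker. Message transport, device positions, and proximity checks are omitted, and all names and the returned parameter values are illustrative assumptions.

```python
class AuthorizationServer:
    """Sketch of method 1100 (blocks 1102-1112) as an in-memory broker."""

    def __init__(self):
        self.announced = {}  # main-device id -> announced resources (block 1102)

    def announce(self, main_id, resources):
        self.announced[main_id] = resources

    def request_connection(self, secondary_id, main_id):
        # Block 1104: a secondary asks to connect to an announcing main device.
        if main_id not in self.announced:
            return None
        # Block 1106: tell the secondary what is available.
        return {"main": main_id, "resources": self.announced[main_id],
                "secondary": secondary_id}

    def authorize(self, main_id, accepted):
        # Blocks 1108-1112: on acceptance, determine and return connection
        # parameters (hypothetical values; see event 422 above).
        if not accepted:
            return None
        return {"type": "Bluetooth", "window_s": 30}
```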
Example flow diagram for processing motor vehicle UI gestures
Fig. 12 illustrates an example method 1200 for processing motor vehicle UI gestures, which can be implemented as a set of computer-readable instructions written in any suitable programming language and stored on a non-transitory computer-readable storage medium (e.g., the memory 38 or the memory 27 of Fig. 2C). In an example embodiment, the method 1200 is implemented in the paging gesture controller 49 of Fig. 2C.
At block 1202, an ordered set of items is received. As discussed above, the ordered set can be organized along one dimension (e.g., a list of search results arranged in order of relevance), two dimensions (e.g., an array of map tiles arranged as a grid), or higher dimensions. Each item can include graphical content, textual content, etc.
At block 1204, a first subset of the items is displayed via the motor vehicle UI along at least one axis. For example, items A through I in Fig. 6A are arranged along a horizontal axis, and the map tiles in Fig. 6B are arranged along both a horizontal axis and a vertical axis. More generally, items can be arranged along a single axis or along multiple axes with any suitable orientation. The number of items in the first subset, and in subsequently selected subsets, can depend, for example, on the size of the screen.
A gesture having a component of motion along the at least one axis is received at block 1206. The gesture can be a flick gesture applied horizontally, vertically, diagonally, etc. Further, the gesture can have motion parameters in two or three dimensions. More specifically, the gesture can be detected via a touchscreen or in the 3D space of the automotive environment.
Next, at block 1208, a new subset of items is selected for display independently of the speed of the gesture. The new subset can be made up of several items that immediately follow the previously displayed items. Depending on the embodiment, the new subset can have some overlap, or no overlap, with the previously displayed subset.
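The paging behavior of blocks 1202 through 1208 can be sketched in a few lines. This is a minimal illustration under assumed names (`next_subset`, `page_size`, and `overlap` are not from the disclosure); the key property shown is that the new subset is advanced by a fixed offset regardless of how fast the flick gesture was:

```python
# Hypothetical sketch of method 1200: page through an ordered set of items
# in fixed steps, ignoring the speed of the flick gesture (block 1208).
# The overlap parameter models embodiments in which the new subset
# partially overlaps the previously displayed one.

def next_subset(items, start, page_size, overlap=0):
    """Return the next page of items to display and its start index."""
    offset = page_size - overlap  # fixed advance, independent of gesture speed
    # Clamp so the last page is still a full page where possible.
    new_start = min(start + offset, max(len(items) - page_size, 0))
    return items[new_start:new_start + page_size], new_start
```

For example, with twelve items and a page size of four, a flick from the first page always advances to items five through eight, whether the flick was slow or fast.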
Additional considerations
The following additional considerations apply to the foregoing discussion. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter of this disclosure.
Additionally, certain embodiments are described herein as including logic or a number of components, modules, or mechanisms. Modules may constitute either software modules (e.g., code embodied on a machine-readable medium or in a transmission signal, wherein the code is executed by a processor) or hardware modules. A hardware module is a tangible unit capable of performing certain operations, and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations described herein.
In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
Accordingly, the term "hardware module" should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. As used herein, "hardware-implemented module" refers to a hardware module. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time.
Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple such hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) connecting the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information).
The methods 700, 800, 900, 1000, 1100, and 1200 may include one or more functional blocks, modules, individual functions, or routines in the form of tangible computer-executable instructions that are stored in a non-transitory computer-readable storage medium and executed using a processor of a computing device (e.g., a server, a personal computer, a smart phone, a portable device, a "secondary" portable device, a vehicle head unit, a tablet computer, a head-mounted display, a smart watch, a mobile computing device, or another personal computing device, as described herein). The methods 700, 800, 900, 1000, 1100, and 1200 may be included as part of any backend server (e.g., a navigation server, a familiarity scoring server, an authorization server, or any other type of server computing device), as part of portable device modules or vehicle head unit modules of the automotive environment, for example, or as part of modules external to such an environment. Though the figures may be described with reference to other figures for ease of explanation, the methods 700, 800, 900, 1000, 1100, and 1200 can be utilized with other objects and user interfaces. Furthermore, although the explanation above describes steps of the methods 700, 800, 900, 1000, 1100, and 1200 being performed by specific devices (e.g., the portable device 10, the secondary device 11, and the head unit), this is done for illustration purposes only. The blocks of the methods 700, 800, 900, 1000, 1100, and 1200 may be performed by one or more devices or other parts of the automotive environment.
The various operations of the example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules.
Similarly, the methods or routines described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented hardware modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but also deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment, or as a server farm), while in other embodiments the processors may be distributed across a number of locations.
The one or more processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as software as a service (SaaS). For example, as indicated above, at least some of the operations may be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (e.g., the Internet) and via one or more suitable interfaces (e.g., application programming interfaces (APIs)).
The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but also deployed across a number of machines. In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations.
Still further, the figures depict some embodiments of the automotive environment for purposes of illustration only. One of ordinary skill in the art will readily recognize from the foregoing discussion that alternative embodiments of the structures and methods described herein may be employed without departing from the principles described.
Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for the automotive environment through the principles disclosed herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes, and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation, and details of the methods and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims.
Claims (60)
1. A method for efficiently providing audio navigation instructions to a head unit of a vehicle, the method comprising:
determining, by one or more computing devices, a current operational state of the head unit;
determining, by the one or more computing devices, a maneuver on a navigation route which a driver of the vehicle is following;
generating, by the one or more computing devices, an audio instruction describing the maneuver, including: selecting a level of detail of the audio instruction based at least in part on (i) the driver's familiarity with a segment of the navigation route at which the maneuver occurs and (ii) the current operational state of the head unit; and
causing, by the one or more computing devices, the audio instruction to be provided to the head unit via a communication link.
2. The method according to claim 1, wherein determining the current operational state of the head unit includes determining, by the one or more computing devices, whether the head unit is currently outputting audio.
3. The method according to claim 2, wherein generating the audio instruction includes:
generating, by the one or more computing devices, a more detailed audio instruction in response to determining that the head unit is not currently outputting audio, and
generating, by the one or more computing devices, a less detailed audio instruction in response to determining that the head unit is currently outputting audio.
4. The method according to claim 1, further comprising:
determining, by the one or more computing devices, a current operational state of the vehicle,
wherein selecting the level of detail of the audio instruction is further based on the current operational state of the vehicle.
5. The method according to claim 4, wherein determining the current operational state of the vehicle includes determining whether a turn indicator is activated, and wherein selecting the level of detail of the audio instruction includes: generating a shortened audio instruction if the maneuver indicates a turn indicated by the turn indicator, and generating a full-length audio instruction if the maneuver does not indicate the turn indicated by the turn indicator.
6. The method according to claim 1, wherein the maneuver is a first maneuver, the method further comprising:
determining, by the one or more computing devices, a second maneuver on the navigation route, wherein the second maneuver immediately follows the first maneuver;
wherein the generated audio instruction describes the first maneuver and the second maneuver, and wherein the generated audio instruction is provided via the head unit as a single uninterrupted announcement.
7. The method according to claim 1, wherein selecting the level of detail of the audio instruction includes determining whether to include an indication of the remaining distance between a current location of the vehicle and a location of the maneuver.
8. The method according to claim 1, further comprising:
estimating, by the one or more computing devices, the driver's familiarity with the route segment based on past navigation route guidance provided to the driver.
9. The method according to claim 8, wherein estimating the driver's familiarity with the route segment further includes: estimating, by the one or more computing devices, whether the driver owns the vehicle or whether the driver has rented the vehicle.
10. A portable computing device comprising:
one or more processors;
an interface to communicate with a head unit of a vehicle; and
a non-transitory computer-readable memory storing thereon instructions that, when executed on the one or more processors, cause the portable computing device to:
obtain a plurality of navigation directions for guiding a driver of the vehicle along a navigation route to a destination, wherein each of the plurality of navigation directions describes a respective maneuver,
determine, via the interface, an operational state of at least one of the head unit or the vehicle,
for a selected one of the plurality of navigation directions, determine a level of familiarity of a user of the portable device with a segment of the navigation route at which the respective maneuver occurs, and
generate an audio instruction for the selected navigation direction, including: determine a level of detail of the audio instruction based at least on the determined operational state and the determined level of familiarity with the segment.
11. The portable computing device according to claim 10, wherein, to determine the operational state of the head unit, the instructions determine whether the head unit is currently outputting audio.
12. The portable computing device according to claim 10, wherein the instructions further cause the portable device to provide the generated audio instruction to the head unit, via the interface, for playback.
13. The portable computing device according to claim 10, further comprising a speaker, wherein the instructions further cause the portable device to play back the generated audio instruction via the speaker.
14. The portable computing device according to claim 10, wherein, to select the level of detail of the audio instruction, the instructions determine whether to include an indication of the remaining distance between a current location of the vehicle and a location of the maneuver.
15. A computing system comprising:
a navigation service module configured to generate a plurality of navigation directions for guiding a driver of a vehicle along a navigation route to a destination, wherein each of the plurality of navigation directions describes a respective maneuver;
a register that stores a current operational state of a head unit of the vehicle;
a familiarity scoring engine configured to generate, for a selected one of the plurality of navigation directions, a familiarity metric indicative of the driver's estimated familiarity with a stretch of road at which the respective maneuver occurs; and
a speech generation system configured to (i) receive the familiarity metric and the current operational state of the head unit from the register to determine a level of detail of an audio instruction, and (ii) generate the audio instruction for the maneuver with the determined level of detail.
16. The computing system according to claim 15, further comprising a register that stores a current operational state of the vehicle, wherein the speech generation system is further configured to receive the current operational state of the vehicle to determine the level of detail of the audio instruction.
17. The computing system according to claim 15, wherein the speech generation system is configured to:
generate a detailed audio instruction when the current operational state of the head unit indicates that the head unit is not currently outputting audio, and
generate a shortened audio instruction when the current operational state of the head unit indicates that the head unit is currently outputting audio.
18. The computing system according to claim 17, wherein, to generate the shortened audio instruction, the speech generation system is configured to omit an indication of the remaining distance between a current location of the vehicle and a location of the maneuver, and wherein a detailed audio instruction corresponding to the same maneuver includes the indication of the distance.
19. The computing system according to claim 15, wherein the speech generation system is further configured to:
generate a first shortened audio instruction for the maneuver,
generate a second shortened audio instruction for a subsequent maneuver, and
combine the first shortened audio instruction and the second shortened audio instruction into a single uninterrupted announcement.
20. The computing system according to claim 14, wherein the familiarity scoring engine generates the familiarity metric based at least in part on past navigation route guidance provided to the driver.
21. A portable computing device comprising:
one or more processors;
an interface configured to communicatively couple the portable computing device, via a first communication link and a second communication link, to a head unit of a vehicle and to a nearby portable computing device, respectively; and
a non-transitory computer-readable memory storing thereon instructions that, when executed on the one or more processors, cause the portable computing device to:
receive data from the nearby portable computing device via the second communication link, and
forward the received data to the head unit via the first communication link.
22. The portable computing device according to claim 21, wherein the interface is a first interface, the portable computing device further comprising:
a second interface configured to communicatively couple the portable computing device to an authorization server via a wide area communication network;
wherein the instructions further cause the portable computing device to receive, from the authorization server, parameters for establishing the second communication link.
23. The portable computing device according to claim 22, wherein the instructions further cause the portable device to:
determine that a resource is available at the head unit, wherein the resource includes at least one of an audio output device and a display device, and
provide an indication of the available resource to the authorization server, wherein the authorization server provides the indication of the available resource to the nearby portable computing device.
24. The portable computing device according to claim 21, further comprising a user interface configured to:
display a request from the nearby portable computing device to establish the second communication link, and
receive, from a user, a confirmation that the second communication link should be established.
25. The portable computing device according to claim 24, wherein the user interface is further configured to:
prior to displaying the request from the nearby portable computing device, receive, from the user, a command to announce to nearby devices a resource available at the head unit.
26. A method for enabling data exchange between portable devices and an external output device, the method comprising:
establishing, by one or more processors, a first short-range communication link between a first portable user device and a head unit of a vehicle;
establishing, by the one or more processors, a second short-range communication link between the first portable user device and a second portable user device, wherein the second short-range communication link is a wireless link; and
causing, by the one or more processors, the first portable user device to (i) receive data from the second portable device via the second short-range communication link, and (ii) transmit the data to the head unit via the first short-range communication link.
27. The method according to claim 26, further comprising:
receiving an indication that the first portable user device is (i) proximate to the head unit and (ii) available to establish a communication link with the head unit; and
notifying, by the one or more processors, the second portable device that the first portable user device is available for establishing the second short-range communication link.
28. The method according to claim 27, wherein receiving the indication that the first portable user device is available to establish a communication link with the head unit includes: receiving an indication that a user of the first portable device has configured the first portable device to announce its availability for coupling portable devices to the head unit.
29. The method according to claim 26, further comprising causing, by the one or more processors, the first portable user device to (i) receive a command from the head unit via the first short-range communication link, and (ii) transmit the received command to the second portable device via the second short-range communication link.
30. The method according to claim 26, further comprising causing the first portable user device and the second portable device to negotiate the second communication link via an online service, wherein the first portable user device communicates with the online service via a first long-range communication link, and the second portable user device communicates with the online service via a second long-range communication link.
31. The method according to claim 26, wherein causing the first portable user device to receive the data from the second portable device and transmit the data to the head unit includes: streaming digital audio packets from the second portable device to the head unit via the first portable user device.
32. The method according to claim 26, wherein each of the first portable user device and the second portable user device is a personal communication device operated by a respective user.
33. The method according to claim 26, wherein the head unit is configured to establish a communication link with only one portable device at a time.
34. A portable computing device comprising:
one or more processors;
a device interface configured to communicatively couple the portable computing device to a nearby computing device; and
a non-transitory computer-readable memory storing thereon instructions that, when executed on the one or more processors, cause the portable computing device to:
detect a nearby portable computing device having access to a resource of a head unit of a vehicle, wherein the resource includes at least one of an audio output device or a display device,
establish, via the device interface, a communication link to the nearby portable computing device, and
transmit data to the head unit of the vehicle via the communication link.
35. The portable computing device according to claim 34, further comprising:
a user interface configured to receive user input and provide user output;
wherein the instructions further cause the portable computing device to:
provide, via the user interface, an indication that the nearby portable computing device has been detected, and
receive, via the user interface, a user request to communicatively couple the portable computing device to the head unit via the nearby portable computing device, wherein the communication link is established in response to the user request.
36. The portable computing device according to claim 34, further comprising:
a network interface configured to communicatively couple the portable computing device to an authorization server via a wide area communication network;
wherein, to establish the communication link to the nearby portable computing device, the instructions cause the portable computing device to request, via the authorization server, a connection to the nearby portable computing device.
37. The portable computing device according to claim 34, wherein, to detect the nearby portable computing device, the instructions cause the portable computing device to receive a message announcing the resource.
38. The portable computing device according to claim 37, wherein the message announcing the resource indicates (i) an identity of a user operating the nearby portable computing device and (ii) a type of the resource available at the head unit.
39. The portable computing device according to claim 34, wherein the instructions are first instructions, the non-transitory computer-readable memory further storing thereon second instructions that, when executed on the one or more processors, cause the portable computing device to:
generate a digital map of a geographic area, wherein the data transmitted via the communication link to the head unit of the vehicle includes the digital map.
40. The portable computing device according to claim 34, wherein the data transmitted via the communication link to the head unit of the vehicle includes digital audio packets.
41. A method for providing a structured set of items via an automotive user interface (UI), the automotive UI being configured to receive gesture-based user input, the method comprising:
receiving, by one or more processors, an ordered plurality of items;
causing, by the one or more processors, a first subset of the plurality of items to be displayed via the automotive UI along an axis;
detecting, by the one or more processors, a gesture having a component of motion along the axis applied to the automotive UI; and
in response to the gesture, causing, by the one or more processors, a second subset of the plurality of items to be displayed via the automotive UI independently of a speed of the component of motion of the gesture, wherein each of the first subset and the second subset includes multiple items, and wherein the second subset includes items that follow the items of the first subset.
42. The method according to claim 41, wherein the ordered plurality of items is an ordered list of search results, and wherein causing the first subset and the second subset to be displayed via the automotive UI includes generating an equal-sized snippet for each item.
43. The method according to claim 41, wherein each of the ordered plurality of items is one of a row or a column of a two-dimensional array of equal-sized map tiles that make up a digital map, wherein each map tile is a respective digital image.
44. The method according to claim 43, wherein causing the second subset of the plurality of items to be displayed includes: selecting the second subset to include multiple rows or multiple columns not included in the first subset and at least one row or at least one column included in the first subset, wherein each of the first subset and the second subset includes an equal number of rows or columns.
45. The method according to claim 41, further comprising: determining, by the one or more processors, a size of each subset based on an amount of space available for display in the automotive UI.
46. The method according to claim 41, wherein the automotive UI includes a touchscreen disposed in a head unit of the vehicle.
47. The method according to claim 46, wherein the one or more processors operate in a portable device coupled to the head unit via a short-range communication link, the method further comprising:
causing, by the one or more processors, an interpretation of the gesture to be provided to the portable device; and
causing, by the one or more processors, the first subset and the second subset to be provided to the head unit at respective times for display on the touchscreen.
48. A portable computing device, comprising:
one or more processors;
a short-range communication interface that couples the portable computing device to a head unit of a vehicle to receive input from a vehicle user interface (UI) implemented in the head unit of the vehicle and to provide output to the vehicle UI; and
a non-transitory computer-readable memory storing thereon instructions configured to execute on the one or more processors to:
receive an ordered plurality of items I1, I2, …, IM,
provide an initial subset of N consecutive items I1, I2, …, IN to the head unit for display via the vehicle UI,
receive an indication of a fling gesture detected via the vehicle UI, and
in response to the received indication and independently of a speed of the fling gesture, provide to the head unit a new subset of N consecutive items I1+O, I2+O, …, IN+O, offset from the initial subset by a certain fixed number O.
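Claim 48 recites paging through an ordered list by a fixed offset O on each fling, with the fling's speed deliberately ignored. A minimal sketch of that behavior (the function name and the particular values of N and O are illustrative, not from the patent):

```python
def page_items(items, start, n_visible, offset):
    """Return the next window of n_visible consecutive items,
    advanced by a fixed offset from the current start index.

    The advance is identical for every fling: the gesture's speed
    plays no role, as recited in claim 48.
    """
    new_start = min(start + offset, max(len(items) - n_visible, 0))
    return items[new_start:new_start + n_visible], new_start

# Ordered items I1..I10, with N = 3 visible and fixed offset O = 3.
items = [f"I{i}" for i in range(1, 11)]
window, start = items[0:3], 0                    # initial subset I1..I3
window, start = page_items(items, start, 3, 3)   # next subset I4..I6
```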
49. The portable computing device of claim 48, wherein the instructions are configured to:
receive, via the short-range communication interface, a parameter describing a size of available screen space in the vehicle UI, and
determine the fixed number O based on the received parameter.
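Claim 49 derives the fixed offset O from a parameter describing the head unit's available screen space. One plausible reading is that O equals the number of items that fit on screen, so each fling replaces a full page; the pixel values below are assumed for illustration:

```python
def offset_from_screen_space(screen_height_px, item_height_px):
    # One list item per item_height_px of available space;
    # advancing by that count swaps out the entire visible page.
    return max(screen_height_px // item_height_px, 1)

# A hypothetical head unit reports 480 px of usable list space,
# and each fixed-size search result occupies a 96 px row.
O = offset_from_screen_space(480, 96)
```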
50. The portable computing device of claim 48, further comprising a long-range communication interface that receives the ordered plurality of items from a network server.
51. The portable computing device of claim 48, wherein the ordered plurality of items is an ordered list of search results, each search result provided in a fixed-size format via the vehicle UI.
52. The portable computing device of claim 48, wherein each of the ordered plurality of items is one of a row or a column in a two-dimensional array of equally sized map tiles that make up a digital map, wherein each map tile is a respective digital image.
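In claims 52 and 60, each "item" is an entire row or column of equally sized map tiles, so paging by O shifts the visible tile grid by O columns (or rows). A small sketch under that reading (the grid dimensions and tile identifiers are illustrative):

```python
def visible_columns(tile_grid, first_col, n_cols):
    """tile_grid[r][c] holds the tile image for row r, column c.
    Each displayed 'item' is a full column of tiles."""
    return [row[first_col:first_col + n_cols] for row in tile_grid]

# A 3x8 grid of tile identifiers; after one fling advanced the view
# by a fixed offset of 2 from columns 0..2, columns 2..4 are shown.
grid = [[f"t{r}{c}" for c in range(8)] for r in range(3)]
view = visible_columns(grid, 2, 3)
```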
53. The portable computing device of claim 48, wherein the short-range communication interface is configured to receive the indication of the fling gesture, the indication including (i) an indication of a direction of at least one motion and (ii) an indication of a speed of the at least one motion.
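Claim 53 recites that the indication of the fling gesture carries both a direction of motion and its speed. A sketch of such a message as it might cross the short-range link (the field and method names are assumptions, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class FlingIndication:
    direction: str        # e.g. "up", "down", "left", "right"
    speed_px_per_s: float

    def is_along(self, axis: str) -> bool:
        # True if the motion component lies along the given axis.
        vertical = self.direction in ("up", "down")
        return (axis == "vertical") == vertical

ind = FlingIndication(direction="down", speed_px_per_s=850.0)
# Per claim 48, the receiver may read the direction to pick the
# paging axis while ignoring speed_px_per_s entirely.
```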
54. A system for providing output in response to a user gesture in an automotive environment, the system comprising:
one or more processors;
a user interface (UI) communicatively coupled to the one or more processors and configured to display content to a driver of a vehicle and to receive gesture-based input from the driver; and
a non-transitory computer-readable memory storing thereon instructions that, when executed on the one or more processors, cause the one or more processors to:
display a first subset of an ordered plurality of items along an axis via the user interface,
detect a gesture via the user interface, the gesture having a component of motion directed along the axis,
in response to the gesture and independently of a speed of the component of motion, select a second subset of the ordered plurality of items for display via the user interface, wherein each of the first subset and the second subset includes multiple items, and wherein the second subset includes items subsequent to the items in the first subset, and
display the second subset via the user interface.
55. The system of claim 54, wherein the user interface includes a touchscreen embedded in a head unit of the vehicle.
56. The system of claim 55, wherein the one or more processors and the computer-readable memory are embedded in the head unit.
57. The system of claim 55, wherein the one or more processors and the computer-readable memory are implemented in a portable device, the system further comprising:
a short-range communication interface that couples the portable device to the head unit.
58. The system of claim 54, further comprising:
a long-range communication interface that receives the ordered plurality of items from a network server.
59. The system of claim 54, wherein the ordered plurality of items is an ordered list of search results, each search result provided in a fixed-size format via the UI.
60. The system of claim 54, wherein each of the ordered plurality of items is one of a row or a column in a two-dimensional array of equally sized map tiles that make up a digital map, wherein each map tile is a respective digital image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210377709.8A CN114756124A (en) | 2014-01-03 | 2015-01-02 | Interaction between a portable device and a vehicle head unit |
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201461923484P | 2014-01-03 | 2014-01-03 | |
US61/923,484 | 2014-01-03 | ||
US201461923882P | 2014-01-06 | 2014-01-06 | |
US61/923,882 | 2014-01-06 | ||
US201461924418P | 2014-01-07 | 2014-01-07 | |
US61/924,418 | 2014-01-07 | ||
PCT/US2015/010014 WO2015103457A2 (en) | 2014-01-03 | 2015-01-02 | Input/output functions related to a portable device in an automotive environment |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210377709.8A Division CN114756124A (en) | 2014-01-03 | 2015-01-02 | Interaction between a portable device and a vehicle head unit |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106062514A true CN106062514A (en) | 2016-10-26 |
CN106062514B CN106062514B (en) | 2022-04-19 |
Family
ID=53494232
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210377709.8A Pending CN114756124A (en) | 2014-01-03 | 2015-01-02 | Interaction between a portable device and a vehicle head unit |
CN201580011364.2A Active CN106062514B (en) | 2014-01-03 | 2015-01-02 | Interaction between a portable device and a vehicle head unit |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210377709.8A Pending CN114756124A (en) | 2014-01-03 | 2015-01-02 | Interaction between a portable device and a vehicle head unit |
Country Status (4)
Country | Link |
---|---|
US (2) | US20150192426A1 (en) |
EP (1) | EP3090235B1 (en) |
CN (2) | CN114756124A (en) |
WO (1) | WO2015103457A2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109947256A (en) * | 2019-03-27 | 2019-06-28 | 思特沃克软件技术(北京)有限公司 | A method and vehicle touch screen for reducing the time a driver gazes at the touch screen |
CN112368547A (en) * | 2018-11-02 | 2021-02-12 | 谷歌有限责任公司 | Context-aware navigation voice assistant |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9930474B2 (en) * | 2014-07-08 | 2018-03-27 | Denso International America, Inc. | Method and system for integrating wearable glasses to vehicle |
US20160076903A1 (en) * | 2014-09-12 | 2016-03-17 | Microsoft Corporation | User Geographic Area Familiarity Based Navigation Instructions |
KR101673305B1 (en) * | 2014-12-11 | 2016-11-22 | 현대자동차주식회사 | Head unit for providing streaming service between different device and streaming control method the same, and computer-readable medium storing program for executing the same |
KR101630726B1 (en) * | 2014-12-11 | 2016-06-17 | 현대자동차주식회사 | Method for recongnizing driver using mobile device and vehicle for the same |
US20180266842A1 (en) * | 2015-01-09 | 2018-09-20 | Harman International Industries, Incorporated | Techniques for adjusting the level of detail of driving instructions |
US9912754B2 (en) * | 2015-05-01 | 2018-03-06 | GM Global Technology Operations LLC | Vehicular data isolation device |
US9778054B2 (en) * | 2015-11-03 | 2017-10-03 | Toyota Motor Engineering & Manufacturing North America, Inc. | Vehicle navigation systems and methods for presenting driving directions on familiar routes |
KR101755913B1 (en) * | 2015-12-03 | 2017-07-07 | 현대자동차주식회사 | Apparatus for device control in vehicle using streering wheel and method thereof |
KR101788188B1 (en) * | 2016-01-05 | 2017-10-19 | 현대자동차주식회사 | Method for changing sound output mode considering streamed audio from smart device and apparatus for carrying out the same |
US10123155B2 (en) * | 2016-01-20 | 2018-11-06 | Livio, Inc. | Secondary-connected device companion application control of a primary-connected device |
US20170255339A1 (en) * | 2016-03-07 | 2017-09-07 | Myine Electronics, Inc. | Primary-connected device control from vehicle computing platforms and secondary-connected devices |
US10194013B2 (en) * | 2016-06-12 | 2019-01-29 | Apple Inc. | Instrument cluster metadata to support second screen |
US10027759B2 (en) | 2016-08-05 | 2018-07-17 | Toyota Motor Engineering & Manufacturing North America, Inc. | Vehicle human-machine interface (HMI) device operation of a handheld mobile device |
US10449968B2 (en) * | 2016-09-23 | 2019-10-22 | Ford Motor Company | Methods and apparatus for adaptively assisting developmentally disabled or cognitively impaired drivers |
US11486717B1 (en) * | 2017-03-13 | 2022-11-01 | Mapbox, Inc. | Generating navigation instructions based on digital map context |
DE102017209955A1 (en) * | 2017-06-13 | 2018-12-13 | Bayerische Motoren Werke Aktiengesellschaft | System and method for dynamic, personalized navigation adaptation in the vehicle |
US10567512B2 (en) * | 2017-10-13 | 2020-02-18 | GM Global Technology Operations LLC | Systems and methods to aggregate vehicle data from infotainment application accessories |
JP6969311B2 (en) * | 2017-11-16 | 2021-11-24 | トヨタ自動車株式会社 | Information processing equipment |
CN109903758B (en) * | 2017-12-08 | 2023-06-23 | 阿里巴巴集团控股有限公司 | Audio processing method and device and terminal equipment |
US11567632B2 (en) * | 2018-07-03 | 2023-01-31 | Apple Inc. | Systems and methods for exploring a geographic region |
US10907986B2 (en) | 2018-08-28 | 2021-02-02 | Here Global B.V. | User familiarization with a novel route for reducing cognitive load associated with navigation |
US11047697B2 (en) | 2018-08-28 | 2021-06-29 | Here Global B.V. | User familiarization with a novel route for reducing cognitive load associated with navigation |
US11029171B2 (en) | 2018-08-28 | 2021-06-08 | Here Global B.V. | User familiarization with a novel route for reducing cognitive load associated with navigation |
US10904686B2 (en) * | 2019-03-29 | 2021-01-26 | Mitsubishi Heavy Industries, Ltd. | Method of acoustic tuning in aircraft cabin |
US11269351B2 (en) | 2019-12-05 | 2022-03-08 | International Business Machines Corporation | Modifying navigation commands |
US11768083B2 (en) | 2020-05-15 | 2023-09-26 | Apple Inc. | User interfaces for providing navigation directions |
US11846515B2 (en) | 2020-06-11 | 2023-12-19 | Apple Inc. | User interfaces for customized navigation routes |
WO2022055495A1 (en) | 2020-09-11 | 2022-03-17 | Google Llc | Detecting and improving simultaneous navigation sessions on multiple devices |
JP2023547324A (en) | 2020-10-22 | 2023-11-10 | グーグル エルエルシー | Content-aware navigation instructions |
EP4244578A1 (en) | 2020-12-16 | 2023-09-20 | Google LLC | Sharing a navigation session to minimize driver distraction |
US20220390248A1 (en) | 2021-06-07 | 2022-12-08 | Apple Inc. | User interfaces for maps and navigation |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130059538A1 (en) * | 2011-09-02 | 2013-03-07 | Toyota Motor Engineering & Manufacturing North America, Inc. | Vehicle multimedia head unit with two bluetooth antennas and two receivers |
Family Cites Families (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5864330A (en) * | 1993-06-29 | 1999-01-26 | International Business Machines Corp. | Method and apparatus for providing a two-dimensional position-sensitive scroll icon in a data processing system user interface |
US6397145B1 (en) * | 2000-03-06 | 2002-05-28 | Magellan Dis, Inc. | Navigation system with complex maneuver instruction |
US7269504B2 (en) * | 2004-05-12 | 2007-09-11 | Motorola, Inc. | System and method for assigning a level of urgency to navigation cues |
JP4855654B2 (en) * | 2004-05-31 | 2012-01-18 | ソニー株式会社 | On-vehicle device, on-vehicle device information providing method, on-vehicle device information providing method program, and on-vehicle device information providing method program |
US7424363B2 (en) * | 2004-08-20 | 2008-09-09 | Robert Bosch Corporation | Method and system for adaptive navigation using a driver's route knowledge |
JP4736996B2 (en) * | 2006-07-31 | 2011-07-27 | 株式会社デンソー | Map display control device and map display control program |
US20080147308A1 (en) * | 2006-12-18 | 2008-06-19 | Damian Howard | Integrating Navigation Systems |
US20080167812A1 (en) * | 2007-01-10 | 2008-07-10 | Pieter Geelen | Navigation device and method for fuel pricing display |
US9740386B2 (en) * | 2007-06-13 | 2017-08-22 | Apple Inc. | Speed/positional mode translations |
US9336695B2 (en) * | 2008-10-13 | 2016-05-10 | Yahoo! Inc. | Method and system for providing customized regional maps |
US8457353B2 (en) * | 2010-05-18 | 2013-06-04 | Microsoft Corporation | Gestures and gesture modifiers for manipulating a user-interface |
US8885498B2 (en) * | 2010-12-23 | 2014-11-11 | Deutsche Telekom Ag | Network traffic aggregation method and device for in-vehicle telematics systems using tethering and peer-to-peer networking of mobile devices |
US8863256B1 (en) * | 2011-01-14 | 2014-10-14 | Cisco Technology, Inc. | System and method for enabling secure transactions using flexible identity management in a vehicular environment |
JP5805601B2 (en) * | 2011-09-30 | 2015-11-04 | 京セラ株式会社 | Apparatus, method, and program |
US8791835B2 (en) * | 2011-10-03 | 2014-07-29 | Wei Zhang | Methods for road safety enhancement using mobile communication device |
DE112012004785T5 (en) * | 2011-11-16 | 2014-08-07 | Flextronics Ap, Llc | Feature recognition for configuring a vehicle console and associated devices |
US9079499B1 (en) * | 2011-11-22 | 2015-07-14 | Sara Elyse Raubvogel | Automatic activation of turn signals in a vehicle |
US9892098B2 (en) * | 2011-12-29 | 2018-02-13 | Intel Corporation | HTML tag for improving page navigation user experience |
US20130226369A1 (en) * | 2012-02-23 | 2013-08-29 | Sirius XM Radio, Inc. | Portable vehicle telematics systems and methods |
US10156455B2 (en) * | 2012-06-05 | 2018-12-18 | Apple Inc. | Context-aware voice guidance |
US9453734B2 (en) * | 2012-06-05 | 2016-09-27 | Apple Inc. | Smart loading of map tiles |
US20140098008A1 (en) * | 2012-10-04 | 2014-04-10 | Ford Global Technologies, Llc | Method and apparatus for vehicle enabled visual augmentation |
2015
- 2015-01-02 US US14/588,487 patent/US20150192426A1/en not_active Abandoned
- 2015-01-02 CN CN202210377709.8A patent/CN114756124A/en active Pending
- 2015-01-02 EP EP15733075.4A patent/EP3090235B1/en active Active
- 2015-01-02 CN CN201580011364.2A patent/CN106062514B/en active Active
- 2015-01-02 WO PCT/US2015/010014 patent/WO2015103457A2/en active Application Filing
2017
- 2017-06-21 US US15/628,859 patent/US20170284822A1/en not_active Abandoned
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130059538A1 (en) * | 2011-09-02 | 2013-03-07 | Toyota Motor Engineering & Manufacturing North America, Inc. | Vehicle multimedia head unit with two bluetooth antennas and two receivers |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112368547A (en) * | 2018-11-02 | 2021-02-12 | 谷歌有限责任公司 | Context-aware navigation voice assistant |
CN112368547B (en) * | 2018-11-02 | 2024-04-30 | 谷歌有限责任公司 | Context-aware navigation voice assistant |
US12038300B2 (en) | 2018-11-02 | 2024-07-16 | Google Llc | Context aware navigation voice assistant |
CN109947256A (en) * | 2019-03-27 | 2019-06-28 | 思特沃克软件技术(北京)有限公司 | A method and vehicle touch screen for reducing the time a driver gazes at the touch screen |
Also Published As
Publication number | Publication date |
---|---|
US20150192426A1 (en) | 2015-07-09 |
EP3090235A2 (en) | 2016-11-09 |
WO2015103457A8 (en) | 2016-02-18 |
CN106062514B (en) | 2022-04-19 |
WO2015103457A2 (en) | 2015-07-09 |
US20170284822A1 (en) | 2017-10-05 |
EP3090235B1 (en) | 2024-06-19 |
EP3090235A4 (en) | 2018-02-07 |
CN114756124A (en) | 2022-07-15 |
WO2015103457A3 (en) | 2015-11-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106062514A (en) | Input/output functions related to a portable device in an automotive environment | |
CN102918360B (en) | Navigation or mapping device and method | |
KR102521834B1 (en) | Method of providing image to vehicle, and electronic device therefor | |
CN105051494B (en) | Mapping application with several user interfaces | |
CN110023178B (en) | Directing autonomous vehicles near a destination using an intent signal | |
JP6606494B2 (en) | Apparatus and method for displaying navigation instructions | |
TWI410906B (en) | Method for guiding route using augmented reality and mobile terminal using the same | |
EP2188715B1 (en) | Communications apparatus, system and method of providing a user interface | |
CN110457034A (en) | Generate the navigation user interface for being used for third party application | |
CN107708066A (en) | Operate geo-positioning system | |
WO2007056449A2 (en) | Mapping in mobile devices | |
CN104899237B (en) | Map application with improved research tool | |
CN107810387A (en) | Mobile geographic application in automotive environment | |
CN101903747A (en) | Navigation device & method | |
JP5494318B2 (en) | Mobile terminal and communication system | |
CN109643317A (en) | System and method for relative representation and disambiguation of spatial objects in an interface | |
CN109029480B (en) | Map application with improved navigation tool | |
KR101746503B1 (en) | Mobile terminal and method for controlling the same | |
JP2016024166A (en) | Electronic device, neighboring parking lot search method of the same, and neighboring parking lot search program thereof | |
JP2012251791A (en) | Information terminal having navigation function | |
JP2012173686A (en) | Map display device and navigation apparatus | |
US20200011698A1 (en) | Navigation system and navigation program | |
CN104019826A (en) | Automatic navigation method and system based on touch control | |
KR20170004449A (en) | Mobile terminal and method for controlling the same | |
WO2018179771A1 (en) | Navigation system and navigation program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | Address after: California, USA; Applicant after: Google LLC; Address before: California, USA; Applicant before: Google Inc. |
CB02 | Change of applicant information | ||
GR01 | Patent grant | ||
GR01 | Patent grant |