CN116007642A - Destination navigation method and device - Google Patents

Info

Publication number
CN116007642A
Authority
CN
China
Prior art keywords
address
vehicle
user
destination
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111235917.6A
Other languages
Chinese (zh)
Inventor
彭璐
赵安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Device Co Ltd
Original Assignee
Huawei Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Device Co Ltd filed Critical Huawei Device Co Ltd
Priority to CN202111235917.6A priority Critical patent/CN116007642A/en
Priority to PCT/CN2022/117935 priority patent/WO2023065879A1/en
Publication of CN116007642A publication Critical patent/CN116007642A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/36 - Input/output arrangements for on-board computers

Abstract

In the destination navigation method and device provided by this application, a terminal device collects data containing address information and determines the destination the user intends to travel to before the user gets into the vehicle. After the user gets into the vehicle, the vehicle-mounted device can navigate directly to the destination predetermined by the terminal device, so that the user does not need to enter the destination information manually, which reduces user operations and improves user experience.

Description

Destination navigation method and device
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a destination navigation method and apparatus.
Background
With the development of internet technology and the diversification of users' travel modes, users often use the navigation function of an intelligent vehicle when driving to a destination.
Currently, a user typically enters a destination in a map application, such as Gaode Map, searches for it, and then taps "Start navigation" once the destination is found, and drives to the destination according to the navigation prompts. This approach requires the user to actively enter the destination, involves cumbersome operations, and results in a poor user experience.
Disclosure of Invention
The present application provides a destination navigation method and device for reducing user operations and improving user experience.
In a first aspect, the present application provides a destination navigation method applicable to a vehicle-mounted device. Specifically, the method may include: first, the vehicle-mounted device establishes a connection with a terminal device; then, the vehicle-mounted device obtains, from the terminal device, first destination information for the user's trip; finally, the vehicle-mounted device displays a first interface that includes a navigation interface for navigating from the starting location to the first destination.
It should be noted that the method may also be applied to devices other than the vehicle-mounted device, which is not specifically limited in this application.
With this technical solution, after the user gets into the vehicle, the vehicle-mounted device can navigate directly to the destination predetermined by the terminal device. The user does not need to enter destination information manually, which reduces user operations and improves user experience.
In one possible design, the first destination information originates from a first application (APP).
The method further includes: the vehicle-mounted device obtains second destination information from the terminal device, where the second destination information originates from a second APP.
With this technical solution, the vehicle-mounted device can obtain different destination information from different APPs on the terminal device; that is, the destination information for the user's trip can be determined from information obtained by different APPs.
In one possible design, the first destination information is derived from at least one of a link shared by a friend, a map search and navigation, and a voice query.
With this technical solution, when the user receives a link shared by a friend, searches for and navigates to a destination on a map, or queries a destination by voice, it can be determined that the user has a relatively strong intention to travel in these scenarios, and the destination in these scenarios can be recommended to the user as the travel destination.
In one possible design, the first destination information is determined by a candidate address, the candidate address being derived from at least one of a link, a voice message, a chat log, a text message, a browser, and a map.
With this technical solution, candidate addresses for the user's trip can be obtained from multiple dimensions, and the destination information for the trip is then determined based on the candidate addresses; obtaining candidate addresses from multiple dimensions improves the accuracy of determining the destination.
In one possible design, the first destination information is obtained by the terminal device by acquiring a first candidate address and correcting the first candidate address.
In one possible design, before the vehicle-mounted device displays the first interface, the method further includes: the vehicle-mounted device outputs first prompt information, where the first prompt information is used to prompt the user to confirm whether to navigate to the first destination.
With this technical solution, the vehicle-mounted device can output prompt information before displaying the navigation interface, so as to remind the user to confirm whether to navigate to the destination, thereby improving user experience.
In one possible design, the vehicle-mounted device outputs a first prompt message, including: the vehicle-mounted device displays first prompt information, wherein the first prompt information comprises first destination information. The method further comprises the steps of: the vehicle-mounted device displays second prompt information, wherein the second prompt information is used for reminding a user of time information required by navigation driving to the first destination.
Through the technical scheme, the vehicle-mounted equipment can display the first prompt information through the display screen, and a user can operate on the display screen so as to determine the destination information of travel. Meanwhile, time information can be displayed so as to remind the user. For example, a card may be displayed on a display screen, which may include a destination and time information required for navigation, and a user may click on the card to confirm the destination. Of course, the user may also change the navigation route when the navigation duration is abnormal.
In one possible design, the vehicle-mounted device outputs a first prompt message, including: the vehicle-mounted equipment sends out first prompt information through a loudspeaker.
Through the technical scheme, the vehicle-mounted equipment can carry out voice interaction with the user through the voice assistant so as to confirm destination information, for example, after the user gets on the vehicle, a reminding message can be sent out: is it to navigate to a company? The user may then reply to the voice assistant's message to determine the destination of the trip.
In one possible design, the user is reminded of the first destination information through a first reminding mode, and reminded of the second destination information through a second reminding mode.
With this technical solution, if the trip includes multiple destinations (for example, the user first takes a child to school on the way to work and then drives from the school to the company), the user can be reminded through different reminding modes. Alternatively, when the user travels at different times to different destinations, the user can be reminded through different reminding modes. In particular, when the user's travel destination changes temporarily, the user can be reminded through a different reminding mode.
In a second aspect, the present application provides a destination navigation method applied to a terminal device. The method includes: the terminal device determines first destination information for the user's trip; the terminal device establishes a connection with a vehicle-mounted device and sends the first destination information to the vehicle-mounted device, so that the vehicle-mounted device navigates to the first destination.
With this technical solution, the terminal device can determine the destination information for the trip before the user gets into the vehicle and send the destination to the vehicle-mounted device after the user gets in, so that the user does not need to enter the destination information manually after getting in, which improves user experience.
In one possible design, the terminal device determines first destination information of a user trip, including: the terminal equipment acquires a candidate address, wherein the candidate address is derived from at least one of a link, voice information, chat records, a short message, a browser and a map; and the terminal equipment corrects the candidate address to obtain the first destination information.
In one possible design, the first destination information originates from a first application APP. The method further comprises the steps of: the terminal equipment acquires second destination information from a second APP.
In one possible design, the first destination information is derived from at least one of a friend sharing link, map search navigation, and voice query.
In one possible design, the terminal device correcting the candidate address to obtain the first destination information includes:
the terminal device determines the first destination information for the user's trip according to the user's frequently visited driving addresses, a time interval, and the distance between the frequently visited driving addresses and the candidate address, where the time interval is the interval between the occurrence time of the event corresponding to the candidate address and the current time.
With this technical solution, the terminal device can perform high-precision address correction based on the addresses the user frequently drives to, the distance between those addresses and the candidate address, and the occurrence time of the event, thereby improving the accuracy of determining the destination.
In one possible design, the terminal device determining the first destination information for the user's trip according to the user's frequently visited driving addresses, the time interval, and the distance between the frequently visited driving addresses and the candidate address includes:
the terminal device determines a first confidence of the candidate address according to the user's frequently visited driving addresses; the terminal device weights the first confidence based on the time interval to obtain a second confidence of the candidate address; the terminal device weights the second confidence according to the distance between the user's frequently visited driving addresses and the candidate address to obtain a third confidence of the candidate address; and the terminal device takes the address information with the highest third confidence as the first destination information for the user's trip.
With this technical solution, the terminal device determines the confidence of each candidate address and then takes the address information with the highest confidence as the destination information, so that a more accurate destination address can be obtained.
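By way of illustration only, the three-stage weighting described above can be sketched as follows; the dictionary keys, function name, and decay constant are assumptions made for this sketch and are not part of the claimed method.

```python
import math

def pick_destination(candidates, now_minutes, k=0.1):
    # Toy three-stage weighting: a first confidence from matching against the
    # user's frequently visited driving addresses, weighted by time decay to
    # give a second confidence, then weighted by distance to give a third
    # confidence; the address with the highest third confidence is returned.
    best_address, best_score = None, -1.0
    for c in candidates:
        first = c["frequent_address_match"]                           # first confidence
        second = first * math.exp(-k * (now_minutes - c["event_minutes"]))  # time weighting
        third = second * c["distance_weight"]                         # distance weighting
        if third > best_score:
            best_address, best_score = c["address"], third
    return best_address
```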
In a third aspect, the present application provides an in-vehicle apparatus including a display screen; one or more processors; one or more memories; one or more sensors; a plurality of applications; and one or more computer programs; wherein the one or more computer programs are stored in the one or more memories, the one or more computer programs comprising instructions that, when executed by the one or more processors, cause the in-vehicle device to perform the method of any of the above-described first aspects and possible designs of the first aspect thereof.
In a fourth aspect, the present application provides a terminal device, where the terminal device includes a display screen; one or more processors; one or more memories; one or more sensors; a plurality of applications; and one or more computer programs; wherein the one or more computer programs are stored in the one or more memories, the one or more computer programs comprising instructions which, when executed by the one or more processors, cause the terminal device to perform the method of any of the above second aspects and possible designs of the second aspect.
In a fifth aspect, the present application also provides a computer readable storage medium having instructions stored therein, which when run on an on-board device, cause the on-board device to perform the method of the first aspect and any one of the possible designs of the first aspect thereof.
In a sixth aspect, the present application also provides a computer readable storage medium having instructions stored therein which, when run on a terminal device, cause the terminal device to perform the second aspect and any one of the possible designs of the second aspect thereof.
In a seventh aspect, embodiments of the present application provide a computer program product, which when run on an in-vehicle device, causes the in-vehicle device to perform the method of the first aspect of the embodiments of the present application and any one of the possible designs of the first aspect thereof.
In an eighth aspect, embodiments of the present application provide a computer program product which, when run on a terminal device, causes the terminal device to perform the method of the second aspect of embodiments of the present application and any one of the possible designs of the second aspect thereof.
For the technical effects that can be achieved by the second to eighth aspects and their possible designs, reference may be made to the technical effects achievable by the first aspect and its possible designs, and details are not repeated here.
Drawings
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 2A is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 2B is a block diagram of a software architecture according to an embodiment of the present application;
FIG. 3 is a flowchart of a destination navigation method according to an embodiment of the present application;
fig. 4A is a schematic flow chart of a destination determining method according to an embodiment of the present application;
fig. 4B to fig. 4D are schematic views of a display interface of a vehicle machine according to an embodiment of the present application;
fig. 5A to 5B are schematic views of a display interface according to an embodiment of the present application;
FIG. 6 is a flowchart of another destination navigation method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described in detail below with reference to the drawings in the following embodiments of the present application.
At present, when using the navigation function in a vehicle, the user generally has to actively enter address information and then select the specific address from the search results. This approach relies on the user entering address information, involves cumbersome operations, and results in a poor user experience.
In view of this, embodiments of the present application provide a destination navigation method that identifies the destination the user intends to go to before the user gets into the vehicle. After getting in, the user does not need to enter the destination manually, and navigation can be performed directly to the identified destination, which reduces user operations and improves user experience.
An application program (application for short) involved in the embodiments of this application is a software program capable of implementing one or more specific functions. Typically, a plurality of applications may be installed on an electronic device, such as a camera application, an SMS application, a mailbox application, a video application, a music application, a map application, and so on. An application mentioned below may be an application installed when the electronic device leaves the factory, or an application downloaded from the network or obtained from another electronic device by the user during use of the electronic device.
Furthermore, in the following embodiments, "at least one" means one or more, and "a plurality" means two or more. In addition, it should be understood that in the description of this application, words such as "first" and "second" are used merely to distinguish between the objects described.
First, an application scenario of the present application will be described. Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application. Referring to fig. 1, the application scenario may include a terminal device 10 and an automobile 20. The terminal device 10 may include a plurality of devices, for example a mobile phone 11, a tablet computer 12, a notebook computer 13, a smart watch 14, and the like, and the automobile 20 may include a vehicle-mounted device, such as a vehicle machine 21. It should be understood that fig. 1 shows only four terminal devices as an example; in practical applications, the terminal device 10 may also be a smart screen, a Bluetooth speaker, etc., and of course the terminal device 10 may include more or fewer devices, which is not limited in the embodiments of this application.
In some embodiments, the connection between the terminal device 10 and the vehicle machine 21 may be established through a communication network. By way of example, the communication network may be a local area network, such as a wireless fidelity (Wi-Fi) hotspot network, a Wi-Fi peer-to-peer (Wi-Fi P2P) network, a Bluetooth (BT) network, a ZigBee network, or a near field communication (NFC) network, etc. As a possible implementation, a wireless connection may also be established between the plurality of electronic devices based on a mobile network, for example a mobile network established based on 2G, 3G, 4G, 5G, and subsequent standard protocols. As a possible implementation, the plurality of electronic devices may also establish a connection with at least one server through a mobile network, with data, and/or messages, and/or information, and/or signaling, and/or instructions transmitted between the respective devices through the server.
In other embodiments, the terminal device 10 and the vehicle 21 may also be connected by a wired connection. By way of example, the wired connection may be a universal serial bus (universal serial bus, USB) line, patch cord, or the like.
In still other embodiments, the connection between the terminal device 10 and the vehicle machine 21 may also be established through the same account (for example, the same account is logged in on both devices), or the like.
It should be noted that the above connection methods are also applicable to connections between the terminal devices 10 themselves; for example, a connection between the mobile phone 11 and the tablet computer 12, or between the notebook computer 13 and the smart watch 14, may also be established by logging in to the same account, which is not limited in this application.
In this embodiment of the present application, as one possible implementation, the terminal device 10 may acquire candidate address information from a plurality of application programs (for example, an SMS application, a map application, a browser application, a WeChat application, etc.) and then send the candidate address information to the vehicle machine 21. The vehicle machine 21 calculates a confidence level for each piece of candidate address information sent by the terminal device 10 based on time information and the user's frequently visited address information, and then selects the address information with the highest confidence level as the destination address.
As another possible implementation, the terminal device 10 may acquire candidate address information from a plurality of application programs, calculate a confidence level for each piece of candidate address information based on the time information and the user's frequently visited address information, and then send the calculated confidence levels to the vehicle machine 21, which selects the address information with the highest confidence level as the destination address. Alternatively, after calculating the confidence levels of the candidate address information, the terminal device 10 may itself select the address information with the highest confidence level and send only that address information to the vehicle machine 21, which then navigates using it as the destination address. Of course, the data collected by the terminal device may also be uploaded to the cloud, which is not limited in this application.
The structure of the terminal device in the scenario shown in fig. 1 will be described below by taking a mobile phone as an example.
As shown in fig. 2A, the mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a display screen 190, and the like. The sensor module 180 may include a pressure sensor 180A, a touch sensor 180B, among others. Of course, the sensor module 180 may also include a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, an ambient light sensor, a bone conduction sensor, and the like.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors. The controller may be a neural center or a command center of the mobile phone 100. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution. A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the mobile phone 100, or to transfer data between the mobile phone 100 and a peripheral device. The charge management module 140 is configured to receive a charging input from a charger. The power management module 141 is used to connect the battery 142, the charge management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display screen 190, the camera, the wireless communication module 160, and the like.
The wireless communication function of the mobile phone 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied to the handset 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including WLAN (Wi-Fi network), BT, global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), NFC, infrared (IR), etc. applied on the handset 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2. For example, in the embodiment of the present application, a communication connection may be established between the mobile phone/other terminal device and the vehicle machine through BT, WLAN, or USB line.
In some embodiments, the antenna 1 of the mobile phone 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the mobile phone 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), the fifth-generation (5G) mobile communication system, future communication systems such as the sixth-generation (6G) system, BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
The display screen 190 is used to display the display interface of an application, and the like. The display screen 190 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the mobile phone 100 may include 1 or N display screens 190, where N is a positive integer greater than 1.
The internal memory 121 may be used to store computer executable program code, which includes instructions. The processor 110 executes the various functional applications and data processing of the mobile phone 100 by executing the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, the software code of at least one application program (e.g. an iQIYI application, a WeChat application, etc.), and the like. The data storage area may store data (e.g. images, videos, etc.) generated during use of the mobile phone 100. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capabilities of the handset 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as pictures and videos are stored in an external memory card.
The handset 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 190. There are various types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates with conductive material; when a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the mobile phone 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 190, the mobile phone 100 detects the intensity of the touch operation based on the pressure sensor 180A. The mobile phone 100 may also calculate the position of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a first pressure threshold acts on the SMS application icon, an instruction to view the SMS message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the SMS application icon, an instruction to create a new SMS message is executed.
The touch sensor 180B is also referred to as a "touch device". The touch sensor 180B may be disposed on the display screen 190, and the touch sensor 180B and the display screen 190 form a touch screen, also referred to as a "touchscreen". The touch sensor 180B is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 190. In other embodiments, the touch sensor 180B may also be disposed on the surface of the mobile phone 100 at a position different from the display screen 190.
It will be appreciated that the components shown in fig. 2A do not constitute a specific limitation on the mobile phone; the mobile phone may include more or fewer components than shown (for example, it may also include keys, a camera, etc.), or combine certain components, or split certain components, or use a different arrangement of components.
Fig. 2B is a schematic diagram of a software architecture according to an embodiment of the application. Referring to fig. 2B, the software architecture may include two parts, namely a terminal device (for example, a mobile phone) and a vehicle. The mobile phone side can comprise a data acquisition module, a data mining module, a data storage module and a data synchronization module. The data collected by the data collection module can comprise data such as voice, text, address link, short message and the like, and the data mining module can comprise an address recognition module, a semantic analysis module and a confidence calculation module. The data storage module can be used for storing the data acquired by the data acquisition module and the data included in the data mining module. The data synchronization module is used for realizing the synchronization of the data on the terminal equipment and the data on the vehicle machine, for example, the data synchronization module can be used for synchronizing the data stored on the terminal equipment to the vehicle machine.
The data acquisition module is used for acquiring data containing address information. In some embodiments, the data acquisition module may acquire data including address information in real-time from a plurality of applications in the application layer. In other embodiments, the data acquisition module may also acquire data containing address information from other terminal devices.
The semantic analysis module is used to perform semantic analysis on the collected data and parse out address information; the address recognition module is used to perform address recognition on the collected data; and the confidence calculation module is used to calculate the confidence of the address information.
In this embodiment of the present application, when a user uses an application, the data acquisition module may acquire data including address information in the application, then parse the data including the address information, identify a candidate address, and then the confidence calculation module may calculate the confidence of the candidate address, and use the candidate address with the highest confidence as the destination address.
For convenience of description, the semantic analysis module, the address identification module, and the confidence calculation module may be collectively referred to as a "data processing module". In some embodiments, the data processing module may process the data while the data acquisition module is collecting it; alternatively, the data processing module may process the collected data only once the data acquisition module has been collecting for a set duration (for example, 10 minutes) or has collected a set amount of data (for example, 20 pieces of data), which is not limited in this application.
In addition, the data collection module may begin collecting data when a keyword (e.g. navigation, address, etc.) is detected or when a specific event is detected (e.g. the user opens a map application or taps the search box of a browser, etc.), so that data containing address information can be collected. A sketch of this buffering behaviour is given below.
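As an illustration of the buffering behaviour just described, the following Python sketch collects items only when a trigger keyword is seen and hands a batch to the data processing module once a set duration or count is reached; the class, thresholds, and keyword list are assumptions for illustration only.

```python
import time

TRIGGER_KEYWORDS = ("navigate", "navigation", "address", "go to")  # assumed trigger words

class DataCollector:
    # Buffers collected items and hands a batch to the data processing module
    # once a set duration (e.g. 10 minutes) or a set count (e.g. 20 items) is
    # reached, mirroring the behaviour described above.
    def __init__(self, max_age_seconds=600, max_items=20):
        self.buffer = []
        self.window_start = time.time()
        self.max_age_seconds = max_age_seconds
        self.max_items = max_items

    def on_text(self, source, text):
        # Collection starts only when a trigger keyword (or a specific event
        # such as opening the map application) is detected.
        if any(keyword in text.lower() for keyword in TRIGGER_KEYWORDS):
            self.buffer.append({"time": time.time(), "source": source, "text": text})
        aged_out = time.time() - self.window_start >= self.max_age_seconds
        if len(self.buffer) >= self.max_items or aged_out:
            batch, self.buffer = self.buffer, []
            self.window_start = time.time()
            return batch  # to be handed to the data processing module
        return None
```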
In some embodiments, the functions of data collection, address identification, and confidence calculation may be implemented through system applications or through interfaces of the application framework layer. That is, the data collection module, the address identification module, the semantic analysis module, and the confidence calculation module may be located in a system application (such as an Android system application) or at the application framework layer. It should be understood that the system in the embodiments of the present application may also be another operating system, which is not limited thereto.
The vehicle machine may include a vehicle-mounted data acquisition module, a vehicle-mounted data storage module, a data mining module, and a data synchronization module. The data mining module on the vehicle machine may include a frequently-visited-address identification module, a confidence calculation module, and an address correction module. The address correction module is configured to correct addresses on the vehicle machine according to the candidate address and the user's frequently visited addresses, so as to obtain high-precision destination address information. The data synchronization module is used to synchronize data between the terminal device and the vehicle machine; for example, it may synchronize data stored on the vehicle machine to the mobile phone.
In some embodiments, the vehicle machine may learn the frequently visited addresses from address information entered by the user in a map application on the vehicle machine, or from address information entered by the user on the mobile phone, which then establishes a connection with the vehicle machine and navigates to the destination through it. In other embodiments, the GPS (global positioning system) location information of the vehicle and the times at which the user gets into and out of the vehicle may be used to obtain the GPS locations of the user's driving destinations, and the corresponding address information may then be obtained from those GPS locations, thereby obtaining the user's frequently visited driving destinations.
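As an illustration of learning frequently visited driving destinations from the vehicle's GPS positions and the times the user leaves the vehicle, a minimal sketch is given below; the grid size, visit threshold, and function name are assumptions, and mapping the returned coordinates back to address text (reverse geocoding) is a separate step not shown.

```python
from collections import Counter

def frequent_destinations(alight_positions, min_visits=3, grid_deg=0.001):
    # Bucket the GPS positions recorded when the user leaves the vehicle onto
    # a coarse latitude/longitude grid (roughly 100 m) and keep buckets that
    # were visited at least `min_visits` times as frequent driving destinations.
    buckets = Counter(
        (round(lat / grid_deg), round(lon / grid_deg)) for lat, lon in alight_positions
    )
    return [
        (key[0] * grid_deg, key[1] * grid_deg)
        for key, count in buckets.items()
        if count >= min_visits
    ]
```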
The following describes a destination navigation method according to an embodiment of the present application. Fig. 3 is a flowchart of the method, which may include the following steps:
s301: the mobile phone acquires data containing address information.
In some embodiments, the mobile phone may collect data containing address information from multiple dimensions, such as multiple applications; for example, the mobile phone may obtain data containing address information from chat records, SMS messages, maps, browsers, etc. It should be noted that the mobile phone may collect data containing address information in real time, and before the user gets into the vehicle (or leaves home), the mobile phone may obtain the data containing address information collected over a period of time.
In other embodiments, the mobile phone may obtain data containing address information from other terminal devices. The other terminal devices and the mobile phone may be logged in with the same account. For the other terminal devices, the method of collecting data containing address information is the same as on the mobile phone side, and this embodiment is described using the mobile phone as an example only. After the other terminal devices collect data containing address information, they can send it to the mobile phone. It should be understood that the other terminal devices may send feedback messages to the mobile phone at fixed time intervals, where the feedback messages include the data containing address information, or they may send the data containing address information to the mobile phone one after another in a certain order, which is not limited in this application. Of course, if a terminal device does not collect any data containing address information within a fixed time interval, its feedback message does not include such data.
By way of example, taking the terminal devices in the application scenario shown in fig. 1 as an example, assume that the other terminal devices include the tablet computer 12, the notebook computer 13, and the smart watch 14; each of them may send data containing address information to the mobile phone 11 every 5 minutes. Alternatively, whichever terminal device collects data containing address information first sends it to the mobile phone first; for example, if the tablet computer 12 collects data containing address information at 7:30 a.m. and the notebook computer 13 collects data containing address information at 7:35 a.m., the tablet computer 12 sends its data to the mobile phone 11 first, and the notebook computer 13 sends its data later.
Fig. 4A is a schematic flow chart of a destination determining method according to an embodiment of the present application. Referring to fig. 4A, the mobile phone and other terminal devices (e.g. the so-called "8+N" devices, such as watches, speakers, etc.) may collect data. Specifically, the mobile phone can collect data from multiple dimensions, such as a voice assistant, chat information, an input method, a clipboard, applications such as a browser, SMS messages, a calendar, and the like, and thereby obtain data containing address information in different dimensions, such as voice, chat record text, input method text, clipboard text, browser search terms, addresses parsed from SMS messages, map navigation addresses, and addresses shared by friends. The data containing address information collected in different dimensions is described below with specific scenario examples.
1. Chat records
(1) Five minutes before getting into the vehicle, user A tells user B in a WeChat chat: "I want to go to the company."
(2) Ten minutes before getting into the vehicle, user A shares an address link with user B, such as: Tian'an Yungu Building A, No. 93 Xuegang Road, Longgang.
2. Browser
User A searches in the browser: "How do I get to Tian'an Yungu?"
3. Map
Twenty minutes before getting into the vehicle, user A searches for and navigates to "Tian'an Yungu 1" in the map.
4. Short message
Before getting into the vehicle, user A sends an SMS message to user B, or receives one from user B, whose content may be: Tian'an Yungu Building A, No. 93 Xuegang Road, Longgang.
5. Voice
Five minutes before getting into the vehicle, user A says to the smart speaker: "I am going to work"; or says to the voice assistant: "Xiaoyi, navigate to Tian'an Yungu."
It should be noted that the above sources of data containing address information are only examples; the data may also be obtained from other applications, such as a calendar (for example, the user adds a schedule entry: on August 21, go to Bantian Hospital to collect a test report), which is not specifically limited here. It should be understood that, in these scenarios, the content searched for or sent by the user in a chat record, the browser, or an SMS message may be content entered through the input method or pasted from the clipboard, which is not limited in this application. In addition, the solution of this application collects only soft-sensing data, works across operating systems and hardware devices, and is therefore highly universal.
Based on the scenarios containing address information described above, the mobile phone can collect text data and voice data containing address information, together with the occurrence time of each piece of data; for example, user A shares an address link with user B at 7:20 a.m., and user A searches for and navigates to certain address information through Gaode Map at 7:10 a.m. After collecting the data containing address information, the mobile phone may store the event corresponding to the data and the occurrence time of the event, for example in the form shown in Table 1 below. Of course, the mobile phone may also store the collected data containing address information in order of the occurrence time of the events, which is not limited in this application.
TABLE 1
Time          Event
7:00 a.m.     Browser search
7:10 a.m.     Map address search
7:20 a.m.     Address link shared with a friend
7:25 a.m.     Voice to the speaker: "I am going to work"
It should be understood that the time information in the above table and the events occurring at the corresponding times are merely illustrative, and the present application is not limited thereto.
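As a minimal illustration only, each row of the event log in Table 1 could be kept as a small record such as the following; the field names are assumptions and do not correspond to any data structure disclosed in this application.

```python
from dataclasses import dataclass

@dataclass
class AddressEvent:
    # One row of the event log illustrated in Table 1.
    occurred_at: str   # e.g. "7:20 a.m."
    event: str         # e.g. "Address link shared with a friend"
    raw_text: str      # the collected text or transcribed speech
```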
S302: and the mobile phone determines the candidate address according to the acquired data.
For convenience of description, the address corresponding to the data including the address information acquired in S301 may be denoted as a candidate address, and the candidate address may be at least one.
In some embodiments of the present application, the mobile phone may parse the data containing the address information to obtain the address information included therein. Taking fig. 4A as an example, for voice, the voice may be converted into text, and then steps such as word segmentation, keyword extraction, semantic understanding, address recognition and the like are performed on the text to obtain address information. For example, for the voice "i am going to work" the keywords are available: working, and then carrying out semantic understanding to obtain: going to company for work, so the candidate addresses in the voice "I go to work" are: companies. As another example, for text, such as text searched in a browser: "how to get to the day An Yungu", the mobile phone can segment the text, and keyword extraction, semantic understanding and address recognition obtain candidate addresses in "how to get to the day An Yungu" as follows: day An Yungu.
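By way of illustration only, a toy rule-based extractor in the spirit of the parsing described above might look as follows; the alias table and the regular expression are simplifying assumptions and stand in for the word segmentation, keyword extraction, semantic understanding, and address recognition actually described.

```python
import re

# Assumed toy rules; the real pipeline uses word segmentation, keyword
# extraction, semantic understanding and address recognition.
SEMANTIC_ALIASES = {"go to work": "company", "go home": "home"}
ADDRESS_PATTERN = re.compile(r"(?:navigate to|get to|go to)\s+(.+?)[\?\.!]?$", re.I)

def extract_candidate(text):
    # Return a candidate address from one piece of collected text, or None.
    lowered = text.lower().strip()
    for phrase, place in SEMANTIC_ALIASES.items():
        if phrase in lowered:
            return place            # e.g. "I am going to work" -> "company"
    match = ADDRESS_PATTERN.search(text.strip())
    if match:
        return match.group(1)       # e.g. "How do I get to Tian'an Yungu" -> "Tian'an Yungu"
    return None
```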
S303: and the mobile phone determines the destination address according to the candidate address.
Continuing to take fig. 4A as an example, after obtaining the candidate address, the mobile phone may correct the candidate address based on the address frequently removed by the user driving, the occurrence time information of the event, the distance, the access number, and the like, to obtain the destination address. The process of how the destination address is obtained is described in detail below.
As a possible implementation manner, after the mobile phone performs address identification on at least one data containing address information to obtain at least one candidate address, the confidence level of the at least one candidate address can be determined based on the address frequently removed by driving of the user. The address of the user driving frequently-removed may be an address manually input and stored by the user on the vehicle or on the mobile phone, or a GPS position information of the user driving frequently-removed address obtained according to GPS position information of a vehicle and time information of getting on/off the vehicle by the user, so as to obtain the address of the user driving frequently-removed address, or may also be learned by the mobile phone within a preset time period (for example, 1 month) according to the address input by the user, or may also be learned by the mobile phone according to data on the vehicle (for example, address information is input on the vehicle by the user after getting on the vehicle, connection between the mobile phone and the vehicle is established, and the mobile phone can obtain the data on the vehicle), or the like.
Illustratively, the hypothetical candidate address includes: (1) day An Yungu; (2) company; (3) day An Yungu A seat three addresses. Suppose that the user driving constant address includes two: (1) day An Yungu A parking lot 101 parking spaces; (2) Shenzhen Bay No. 2B 2 underground parking garage. The mobile phone may match the three candidate addresses with the two frequently-removed addresses by using a keyword to obtain a first confidence coefficient of each candidate address, for example, a first confidence coefficient of day An Yungu is: 0.8, the first confidence of the company is 0.6, and the first confidence of the day An Yungu a seat is 0.9. In practical applications, names such as "company", "home", and the like in the candidate address need to be replaced with corresponding address information for calculation.
In the method, address information is obtained by combining soft sensing data such as voice, text, search words, navigation address, friend sharing address links and the like, matching degree calculation is carried out with the user historical address, address misrecognition can be avoided, and accuracy of address recognition is improved.
It should be understood that "day An Yungu a parking lot 101" in the above example may be an address of a company, and "shenzhen bay a 2B 2 underground parking lot" may be a location of a home, which is not limited in this application.
Further, after the mobile phone obtains the first confidence of a candidate address from the user's frequently visited addresses, it can obtain a second confidence of the candidate address from the duration between the current time (i.e. the time the user gets into the vehicle) and the occurrence time of the event. The second confidence is a confidence that takes time into account on the basis of the first confidence.
As one possible implementation, the second confidence may be calculated through a time decay function: the further the event occurrence time is from the current time, the lower the confidence coefficient; the closer it is, the higher the confidence coefficient. For example, suppose the events collected before the user gets into the vehicle include: Event 1: five minutes before getting in, i.e. at 7:30 a.m., user A tells user B in a WeChat chat: "I am now going to the company"; Event 2: user A shares an address link with user B at 7:20 a.m.; Event 3: user A searches and navigates through Gaode Map at 7:35 a.m. Among these three events, the confidence coefficient of the address information in Event 2 is smaller than that in Event 1, and the confidence coefficient of the address information in Event 1 is smaller than that in Event 3.
Illustratively, the second confidence may be calculated by the following formula:
s_i = exp(-k * t_i) * m_i, where k is a constant,
t_i denotes the time interval between the occurrence time of the event and the current time, m_i denotes the first confidence, and s_i denotes the second confidence.
In the above formula, the confidence coefficient is C_i = exp(-k * t_i), i.e. s_i = C_i * m_i. Since the confidence coefficient is related to the time difference (the difference between the time the user gets into the vehicle and the occurrence time of the event), in the embodiments of this application the confidence coefficient may also be set according to the time difference. Illustratively, when the time difference is less than 10 minutes, the confidence coefficient C_i may take the value 1; when the time difference is greater than 10 minutes and less than or equal to 30 minutes, C_i may take the value 0.8; and when the time difference is greater than 30 minutes, C_i may take the value 0.5. It should be understood that the above values of the confidence coefficient are merely examples, and the present application is not limited thereto.
Since the same address may appear multiple times among the candidate addresses, the confidences of identical addresses may be summed to obtain a third confidence for that address, for example by the following formula:
w = sum(s_i)
where s_i denotes the second confidence, and w denotes the third confidence obtained by summing the confidences of the same address.
It should be appreciated that the third confidence may be the same as or different from the second confidence: when a candidate address appears only once, its third confidence is the same as its second confidence; when the same candidate address appears multiple times, its third confidence differs from its second confidence.
The third confidence of each candidate address can be calculated by the above formula; for example, the third confidence of Tian'an Yungu is 0.7, the third confidence of the company is 0.85, and the third confidence of Tian'an Yungu Building A is 0.95.
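The time weighting and the summation over identical addresses can be illustrated with the following sketch, which mirrors the piecewise coefficient values and the formulas above; the function and variable names are assumptions.

```python
def confidence_coefficient(minutes_since_event):
    # Piecewise values of C_i from the example above; the general form is the
    # exponential decay C_i = exp(-k * t_i).
    if minutes_since_event < 10:
        return 1.0
    if minutes_since_event <= 30:
        return 0.8
    return 0.5

def third_confidences(observations):
    # observations: (address, first confidence m_i, minutes since the event t_i).
    # Computes s_i = C_i * m_i and sums the confidences of identical addresses
    # to obtain the third confidence of each address.
    totals = {}
    for address, m_i, t_i in observations:
        totals[address] = totals.get(address, 0.0) + confidence_coefficient(t_i) * m_i
    return totals
```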
Further, after obtaining the third confidence coefficient of the candidate address, the mobile phone may weight the third confidence coefficient based on the distance information, the access times, and the like, that is, correct the candidate address continuously to obtain the destination address. In some embodiments, the handset may query a detailed list of addresses corresponding to the destination address through the mapping application, and then revise the candidate address based on the distance between the user frequently-visited address and the addresses in the list. Taking the example of the day An Yungu a seat in the candidate addresses as an example, the mobile phone can query the day An Yungu a seat in the map, and then can find the detailed address list corresponding to the day An Yungu a seat, and at this time, the driving frequently-going parking addresses of the user, such as the distance between the "day An Yungu 1 a seat parking lot 101 number parking spaces" and each address in the list, can be calculated.
As one possible implementation, in an embodiment of the present application, a distance threshold (for example, 200 meters) may be set, and some addresses may be filtered out by the distance threshold, for example, addresses that have the same name but are far from the address the user frequently drives to. For example, searching for "Tian'an Yungu Block A" may return: Tian'an Yungu in Guangming District, Tian'an Yungu in Longgang District, and so on. When the distance between the user's frequently visited driving address and an address in the list is greater than 200 meters, that address may be filtered out. Assuming that the frequently visited driving address "parking space No. 101 of the parking lot of Tian'an Yungu Building 1, Block A" (in Longgang District) is closest to "Tian'an Yungu in Longgang District", the candidate "Tian'an Yungu Block A" is corrected to "parking space No. 101 of the parking lot of Tian'an Yungu Building 1, Block A". With the scheme of the present application, a more precise address, such as a specific parking space, can be obtained, so that the destination information is more accurate.
As another possible implementation, after some addresses are filtered out by the distance threshold, the remaining addresses may be further filtered based on the number of visits. For example, if two addresses remain within the distance threshold and one of them needs to be selected so that the candidate address can be corrected to it, the comparison can be performed according to the number of visits. Assuming that the addresses satisfying the threshold include D1 and D2, if the user passes D1 more times than D2 in one month, the confidence coefficient of D1 may be set higher than that of D2. In this way, the candidate address can be corrected to the address with the highest confidence, for example, to D1.
Through the above method, the corrected address with the highest confidence can be obtained, and this address is taken as the destination address.
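The correction step can be sketched as follows; this is a simplified illustration in which distance_m is assumed to be provided by a map or geocoding service, and all names and default values are illustrative assumptions:

```python
def correct_candidate(candidate, detailed_addresses, frequent_address,
                      distance_m, visit_counts, distance_threshold_m=200):
    """Refine a coarse candidate address (e.g. an office park name) into a detailed
    address (e.g. a specific parking space):
      1. keep only detailed addresses within distance_threshold_m of the user's
         frequently visited driving address;
      2. if several remain, prefer the one the user has visited most often.
    distance_m(a, b) is assumed to return the distance in meters between two
    addresses; its implementation (map SDK, geocoding, ...) is outside this sketch."""
    nearby = [addr for addr in detailed_addresses
              if distance_m(frequent_address, addr) <= distance_threshold_m]
    if not nearby:
        return candidate  # nothing close enough: keep the original candidate
    # Break ties by the number of times the user has visited each address recently.
    return max(nearby, key=lambda addr: visit_counts.get(addr, 0))
```

In a real implementation, the detailed address list would come from the map application and the visit counts from the user's driving history, as described above.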
S304: the mobile phone sends the destination address to the car machine.
In some embodiments, before the user gets on the vehicle, the mobile phone can obtain the destination to be visited by the user, and then after the user gets on the vehicle, the destination address can be directly sent to the vehicle, so that the user does not need to manually input the destination, the user operation can be reduced, and the user experience can be improved.
Specifically, the triggering condition that the mobile phone sends the destination address to the car machine may include the following cases:
First case: after the user gets on the vehicle, a camera on the vehicle can capture a face image of the user and perform face recognition authentication. After the authentication passes, the mobile phone can establish a connection with the vehicle and exchange data with it; for example, the mobile phone can send the determined destination address to the vehicle.
Second case: when the user gets on the car, the user can log in to an account on the car machine, and this account is the same as the account logged in on the mobile phone. After the account on the car machine is logged in successfully, the mobile phone can establish a connection with the car machine and send the determined destination address to the car machine.
Third case: the mobile phone and the car machine are connected through a short-range communication technology, for example, Bluetooth. During or after the connection is established, the mobile phone and the car machine can synchronize data; for example, the mobile phone can send the destination data to the car machine.
Of course, the above trigger conditions are merely illustrative; in practical applications, other trigger conditions may also be used. For example, the mobile phone and the vehicle may also synchronize data periodically (for example, every 1 minute), which is not limited in this application.
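A minimal sketch of this trigger logic (the function and parameter names are illustrative assumptions; any one of the conditions described above is treated as sufficient to send the destination address):

```python
def should_send_destination(face_authenticated: bool = False,
                            same_account_logged_in: bool = False,
                            short_range_connection_established: bool = False) -> bool:
    """Return True if any trigger condition for sending the destination address
    from the mobile phone to the car machine is met."""
    return any((face_authenticated,
                same_account_logged_in,
                short_range_connection_established))

# Example: a Bluetooth connection alone is enough to trigger the transfer.
print(should_send_destination(short_range_connection_established=True))  # True
```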
S305: the car machine outputs prompt information.
In some embodiments, the prompt information may be in the form of a card, such as a navigation card displayed on the car machine. FIG. 4B is a schematic diagram of a display interface of a car machine according to an embodiment of the present application. When the user gets on the vehicle, an interface 400 may be displayed on the car machine, and the interface 400 may include a navigation card 401. Of course, the interface 400 may also include application icons, such as the icon of a map application or the icon of the Xiaoyi Suggestions application. The navigation card 401 may display a destination name 4011, such as "navigate to the company", and a time prompt 4012, such as "If you drive to the company now, it is expected to take 20 minutes more than usual". After the user clicks the navigation card 401, the car machine may open the map application in response to the click operation and display a plurality of routes for navigating to the company, such as the display interface 410. The user may select one route from the plurality of routes shown in the interface 410 and then click the "start navigation" button 411, and the car machine may start the navigation function in response to the click operation on the "start navigation" button.
In other embodiments, after the user clicks the navigation card 401, the car machine may directly enter the navigation interface in response to the click operation. That is, the user does not need to select a route; the car machine can select a route according to a preset rule and then navigate along the selected route. For example, the preset rule may be to select the route with the shortest driving time, or the route that the user drives most frequently, which is not limited in this application.
Xiaoyi Suggestions can dynamically recommend services to the user according to the user's usage habits; it can display the applications commonly used by the user, and can also display content such as cards and services based on the user's habits.
It should be understood that if the address of the company is stored on the vehicle and the address is "parking space No. 101 of the Tian'an Yungu Block A parking lot", then when the destination address is the address of the company, "navigate to the company" can be displayed; if the address of the company is not stored on the vehicle, the display interface of the vehicle can display "navigate to parking space No. 101 of the Tian'an Yungu Block A parking lot", which is not limited in this application. The addresses of "home" and "company" may be set on the vehicle, may be learned by the vehicle itself, or may be set by the user on a mobile phone or other terminal device and synchronized to the vehicle, which is not limited.
As one possible implementation, the car machine may calculate the navigation duration from the current location to the destination, and if the navigation duration is abnormal (for example, more than 20% longer than usual), display a time prompt, such as the prompt 4012 shown in FIG. 4B: "If you drive to the company now, it is expected to take 20 minutes more than usual". Of course, the above time information is merely illustrative; for example, "expected to take 50 minutes" or "expected to arrive at 9:20" may be displayed instead, which is not limited. In practical applications, the display interface of the car machine may show more or less content than illustrated, which is not limited in this application. For example, if the time required to drive from the current location to the destination is close to the usual driving time, for example, the difference is within 5 minutes, the time prompt 4012 may not be displayed.
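The duration check can be sketched as follows; the 20% ratio and the 5-minute window follow the examples above, while the function name, parameter names, and the wording of the returned prompt are illustrative assumptions:

```python
from typing import Optional

def time_prompt(estimated_minutes: float, usual_minutes: float,
                ratio_threshold: float = 0.2,
                small_difference_minutes: float = 5) -> Optional[str]:
    """Return the text of a time prompt such as 4012, or None if no prompt is needed."""
    difference = estimated_minutes - usual_minutes
    if abs(difference) <= small_difference_minutes:
        return None  # close to the usual duration: no time prompt
    if difference > usual_minutes * ratio_threshold:
        return (f"If you drive there now, it is expected to take about "
                f"{difference:.0f} minutes more than usual.")
    return None

# An 80-minute estimate against a usual 60-minute drive triggers the prompt.
print(time_prompt(80, 60))
```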
In other embodiments, the prompt information may be a voice prompt, for example: "Please confirm: navigate to the company?" At this point, the user may interact with the voice assistant on the vehicle, for example by replying "navigate to the company" or "yes". The voice assistant on the vehicle then receives the user's reply, and the vehicle can call the map application to display a plurality of routes for navigating to the company, from which the user selects one for navigation.
For example, assume that 5 minutes before getting on the vehicle the user says to the voice assistant on the mobile phone: "Xiaoyi, navigate to Bantian Primary School". Because this happens close to the boarding time, the user's intention to go to Bantian Primary School is considered stronger, that is, the possibility that the user will navigate to Bantian Primary School is the highest, and when the user gets on the car, the car machine can output a voice prompt such as "Navigate to Bantian Primary School?". Alternatively, if the user shares an address link with a friend 10 minutes before getting on the car, the possibility of going to the shared address may also be considered higher, and after the user gets on the car, a voice prompt such as "Navigate to the xxx amusement park?" can be output.
In still other embodiments, after getting on the car, the user may click the Gaode map application icon on the car machine. The car machine may respond to the user's click operation by opening the interface of the Gaode map application, and if the user clicks the address search box, the address search interface may display a destination address list, for example, the interface shown in FIG. 4C. The destination address list is a full or partial list of the candidate addresses sorted by the second confidence from high to low, and the address in the first position is the candidate address with the highest second confidence. For example, in the destination address list of the interface shown in FIG. 4C, "Tian'an Yungu (company)" is in the first position, "Bantian Primary School" is in the second position, and "Shenzhen Bay No. 1 (home)" is in the third position.
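As an illustrative sketch of how such a list could be produced (the function name and the confidence values are assumptions chosen to reproduce the ordering shown in FIG. 4C):

```python
def destination_address_list(candidate_confidences, max_items=None):
    """Sort candidate addresses by their second confidence, highest first,
    and optionally truncate the list for display in the search interface."""
    ordered = sorted(candidate_confidences.items(),
                     key=lambda item: item[1], reverse=True)
    addresses = [address for address, _ in ordered]
    return addresses[:max_items] if max_items is not None else addresses

# Hypothetical confidences; the resulting order matches the list in FIG. 4C.
print(destination_address_list({
    "Tian'an Yungu (company)": 0.85,
    "Bantian Primary School": 0.60,
    "Shenzhen Bay No. 1 (home)": 0.40,
}))
```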
In still other embodiments, the user's driving route can be learned from the location information the user inputs on the mobile phone or in the map application of the vehicle. For example, on the way to the company for work each day, a user may first drive to "Bantian Primary School" and then continue from "Bantian Primary School" to the company. After the user gets on the car, if the user clicks the Gaode map application icon, the address search interface of the application may display a destination address list according to the user's habit, for example, the first address is Bantian Primary School and the second address is Tian'an Yungu (company), so that when the user clicks Bantian Primary School, the car machine navigates to Bantian Primary School, switches the destination to the company after Bantian Primary School is reached, and then navigates from Bantian Primary School to the company.
In still other embodiments, the destination address may be obtained by combining multiple dimensions. For example, user A adds a calendar entry on August 21: "go to Bantian Hospital to pick up the test report", with a reminder time of 10:00-10:10 in the morning; then, at 9:55 in the morning on August 21, user A sends a WeChat message to user B: "I wanted to pick up the test report, but something has come up at the company that I need to handle, so I may not make it today." The mobile phone performs semantic understanding on the message sent by user A and learns that the user will not go to the hospital for now and needs to go to the company, that is, the destination is the company. Thus, after the user gets on the vehicle, a prompt such as "navigate to the company" can be displayed, or the voice assistant can issue a prompt about going to the company. Of course, the voice assistant may also ask "go to the company or the hospital?", and the user can then interact with the voice assistant in response to its message.
It should be understood that FIG. 4C and FIG. 4D are merely illustrative, and the destination address list may display more or fewer addresses in an actual product implementation.
The above method is described below taking a specific scenario as an example.
Scene 1: map navigation
Referring to FIG. 5A, assume that 5 minutes before getting on the vehicle the user opens the Gaode map to check the road conditions on the route to the company (for example, to check whether there is a traffic jam). The mobile phone can collect this data and consider the user's intention to go to that address to be stronger, so the address can be used as the destination of the user's current trip. After the user gets on the vehicle, the car machine may output a prompt; for example, the car machine may display the interface 500 in FIG. 5A, where the interface 500 may include a prompt 501 output by the voice assistant, for example: "Navigate to the company?" At this point, the user may interact with the voice assistant to confirm the destination to navigate to.
Scene 2: friend sharing address
As shown in FIG. 5B, suppose the user chats with a friend through WeChat 10 minutes before getting on the vehicle, and the friend shares an address link with user A, for example, the address of Shenzhen Bay Park. After the mobile phone collects this data, it can be considered that the user's intention to go to that address is stronger, and the address is recorded as the destination address. After the user gets on the vehicle, prompt information may be displayed on the car machine, for example, the interface 510 shown in FIG. 5B. The interface 510 may include a navigation card 511, and the navigation card 511 may include a prompt such as "navigate to Shenzhen Bay Park" 512. The user then clicks the navigation card 511, and the car machine can automatically select a route and navigate to Shenzhen Bay Park.
Through the above embodiments, the destination to which the user intends to go can be inferred before the user gets on the vehicle, and after the user gets on the vehicle, destination prompt information can be output on the car machine. That is, the user does not need to manually input destination information, so user operations are reduced and user experience can be improved.
Based on the above embodiments, the present application further provides a destination navigation method flowchart, referring to fig. 6, the method may include the following steps:
s601: the mobile phone collects data containing address information.
S602: and the mobile phone determines candidate addresses according to the acquired data.
S603: and the mobile phone sends the candidate address to the car machine.
It should be noted that the mobile phone may obtain data containing address information from other terminal devices, collate the data in a unified manner to obtain the candidate addresses, and finally send the candidate addresses to the car machine. Alternatively, the mobile phone and the other terminal devices may each send their data containing address information to the car machine, which is not limited in this application.
S604: the car machine corrects the candidate addresses to obtain the destination address.
S605: the car machine outputs prompt information.
It should be understood that for the specific implementation of S601 to S602 and S605 in the embodiment shown in FIG. 6, reference may be made to the detailed descriptions of S301 to S302 and S305 in the embodiment shown in FIG. 3, which are not repeated here. The difference between the embodiments shown in FIG. 6 and FIG. 3 lies in where the candidate address is corrected: in the embodiment shown in FIG. 3, the mobile phone first corrects the candidate address to obtain the destination address and then sends the destination address to the car machine, whereas in the embodiment shown in FIG. 6, the mobile phone first sends the candidate address to the car machine, and the car machine then corrects the candidate address to obtain the destination address.
In the embodiments provided in the present application, the method provided in the embodiments of the present application is described from the point of view that the electronic device is the execution subject. In order to implement the functions in the methods provided in the embodiments of the present application, the electronic device may include a hardware structure and/or a software module, where the functions are implemented in the form of a hardware structure, a software module, or a hardware structure plus a software module. Some of the functions described above are performed in a hardware configuration, a software module, or a combination of hardware and software modules, depending on the specific application of the solution and design constraints.
As shown in FIG. 7, some other embodiments of the present application disclose a device, which may be a device with a display screen, for example, the vehicle-mounted device (car machine) or the terminal device in the foregoing embodiments. Referring to FIG. 7, the apparatus 700 includes: a transceiver 701 and a display screen 702; one or more processors 703; one or more memories 704; one or more sensors 705 (not shown in the figure); and one or more computer programs 706 (not shown in the figure), where the above components may be connected through one or more communication buses 707.
The transceiver 701 is used for the vehicle-mounted device to interact with the terminal device, and the display screen 702 is used for displaying a display interface of an application or displaying prompt information. The memory 704 has stored therein one or more computer programs, including instructions; the processor 703 invokes the instructions stored in the memory 704 so that the device 700 may perform the methods of the above-described embodiments.
In the embodiments of the present application, the processor 703 may be a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, where the methods, steps and logic blocks disclosed in the embodiments of the present application may be implemented or performed. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution. The software module may be located in the memory 704, the processor 703 reading the program instructions in the memory 704 and performing the steps of the method described above in connection with its hardware.
In the embodiment of the present application, the memory 704 may be a nonvolatile memory, such as a hard disk (HDD) or a Solid State Drive (SSD), or may be a volatile memory (RAM). The memory may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to such. The memory in the embodiments of the present application may also be a circuit or any other device capable of implementing a memory function, for storing instructions and/or data.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and units described above may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
Based on the above embodiments, the present application further provides a computer storage medium having stored therein a computer program which, when executed by a computer, causes the computer to perform the method provided in the above embodiments.
Also provided in embodiments of the present application is a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method provided in the above embodiments.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by instructions. These instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

Claims (17)

1. A destination navigation method, comprising:
the vehicle-mounted device establishes a connection with the terminal equipment;
the vehicle-mounted device acquires, from the terminal equipment, first destination information of user travel;
the vehicle-mounted device displays a first interface, the first interface comprising a navigation interface for navigating from a starting location to the first destination.
2. The method of claim 1, wherein the first destination information originates from a first application APP;
the method further comprises the steps of: the vehicle-mounted device acquires second destination information from the terminal device, wherein the second destination information is derived from a second APP.
3. The method of claim 2, wherein the first destination information is derived from at least one of a friend sharing link, map search navigation, and voice query.
4. The method of claim 1 or 2, wherein the first destination information is determined by a candidate address, the candidate address being derived from at least one of a link, a voice message, a chat log, a text message, a browser, a map.
5. The method of claim 4, wherein the first destination information is obtained by the terminal device by obtaining a first candidate address and correcting the first candidate address.
6. The method of any of claims 1-5, wherein prior to the vehicle-mounted device displaying the first interface, the method further comprises:
the vehicle-mounted device outputs first prompt information, and the first prompt information is used for prompting a user to confirm whether to navigate to the first destination.
7. The method of claim 6, wherein the vehicle-mounted device outputs a first prompt message comprising:
the vehicle-mounted equipment displays first prompt information, wherein the first prompt information comprises first destination information;
the method further comprises the steps of: the vehicle-mounted device displays second prompt information, and the second prompt information is used for reminding a user of time information required by navigation driving to the first destination.
8. The method of claim 6, wherein the vehicle-mounted device outputs a first prompt message comprising:
the vehicle-mounted device sends out first prompt information through a loudspeaker.
9. The method of claim 2, wherein the first destination information alerts the user via a first alert mode and the second destination information alerts the user via a second alert mode.
10. A destination navigation method, comprising:
the terminal equipment determines first destination information of user travel;
the terminal equipment establishes connection with the vehicle-mounted equipment and sends the first destination information to the vehicle-mounted equipment so as to enable the vehicle-mounted equipment to navigate to the first destination.
11. The method of claim 10, wherein the terminal device determining first destination information for the user travel comprises:
the terminal equipment acquires a candidate address, wherein the candidate address is derived from at least one of a link, voice information, chat records, a short message, a browser and a map;
and the terminal equipment corrects the candidate address to obtain the first destination information.
12. The method of claim 10 or 11, wherein the first destination information originates from a first application APP;
The method further comprises the steps of: the terminal equipment acquires second destination information from a second APP.
13. The method of claim 12, wherein the first destination information is derived from at least one of a friend sharing link, map search navigation, and voice query.
14. The method of claim 11, wherein the terminal device correcting the candidate address to obtain the first destination information comprises:
the terminal equipment determines the first destination information of user travel according to the user's frequently visited driving address, a time interval, and distance information between the frequently visited driving address and the candidate address, wherein the time interval is the time interval between the event occurrence time corresponding to the candidate address and the current time.
15. The method of claim 14, wherein the terminal equipment determining the first destination information of user travel based on the user's frequently visited driving address, the time interval, and the distance information between the frequently visited driving address and the candidate address comprises:
the terminal equipment determines a first confidence of the candidate address according to the user's frequently visited driving address;
the terminal equipment weights the first confidence based on the time interval to obtain a second confidence of the candidate address;
the terminal equipment weights the second confidence according to the distance between the user's frequently visited driving address and the candidate address to obtain a third confidence of the candidate address; and
the terminal equipment takes the address information whose third confidence is the highest as the first destination information of user travel.
16. A vehicle-mounted apparatus, characterized in that the vehicle-mounted apparatus includes a display screen; one or more processors; one or more memories; one or more sensors; a plurality of applications; and one or more computer programs;
wherein the one or more computer programs are stored in the one or more memories, the one or more computer programs comprising instructions that, when executed by the one or more processors, cause the in-vehicle device to perform the method of any of claims 1-15.
17. A computer readable storage medium having instructions stored therein, which when run on an in-vehicle device cause the in-vehicle device to perform the method of any one of claims 1 to 15.
CN202111235917.6A 2021-10-22 2021-10-22 Destination navigation method and device Pending CN116007642A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111235917.6A CN116007642A (en) 2021-10-22 2021-10-22 Destination navigation method and device
PCT/CN2022/117935 WO2023065879A1 (en) 2021-10-22 2022-09-08 Destination navigation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111235917.6A CN116007642A (en) 2021-10-22 2021-10-22 Destination navigation method and device

Publications (1)

Publication Number Publication Date
CN116007642A true CN116007642A (en) 2023-04-25

Family

ID=86021901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111235917.6A Pending CN116007642A (en) 2021-10-22 2021-10-22 Destination navigation method and device

Country Status (2)

Country Link
CN (1) CN116007642A (en)
WO (1) WO2023065879A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3101392B1 (en) * 2013-03-15 2021-12-15 Apple Inc. Mapping application with turn-by-turn navigation mode for output to vehicle display
CN110388935B (en) * 2013-03-15 2023-04-28 苹果公司 Acquiring addresses
CN104034343A (en) * 2014-06-30 2014-09-10 深圳市众鸿科技股份有限公司 Navigation method, navigation system, vehicle-mounted terminal and acquisition method for navigation information of vehicle-mounted terminal
JP6436010B2 (en) * 2015-07-27 2018-12-12 株式会社デンソー Cooperation system, program and portable terminal for vehicle device and portable terminal
CN111089603B (en) * 2018-10-23 2023-07-25 博泰车联网科技(上海)股份有限公司 Navigation information prompting method based on social application communication content and vehicle
CN109379275A (en) * 2018-11-23 2019-02-22 重庆长安汽车股份有限公司 The system and method for equipment remote interaction and vehicle mounted guidance is realized based on wechat public platform
CN112284409B (en) * 2020-10-23 2024-03-08 上海博泰悦臻网络技术服务有限公司 Method, system and storage medium for navigation based on social software sharing information

Also Published As

Publication number Publication date
WO2023065879A1 (en) 2023-04-27

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination