CN113267189A - Intelligent navigation system and method based on Internet of things and big data


Info

Publication number
CN113267189A
CN113267189A (application number CN202110525846.7A)
Authority
CN
China
Prior art keywords
navigation
voice
key point
user
path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110525846.7A
Other languages
Chinese (zh)
Other versions
CN113267189B (en)
Inventor
杨皓淳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority claimed to CN202110525846.7A
Publication of CN113267189A
Application granted; publication of CN113267189B
Current legal status: Active

Links

Images

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34: Route searching; Route guidance
    • G01C21/36: Input/output arrangements for on-board computers
    • G01C21/3626: Details of the output of route guidance instructions
    • G01C21/3629: Guidance using speech or audio output, e.g. text-to-speech

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)

Abstract

The embodiment of the invention discloses an intelligent navigation system and method based on the Internet of Things and big data. The method comprises the steps of: acquiring a navigation path corresponding to a user identifier, the navigation path comprising a path starting point and a path end point; querying all navigation key points in the navigation path, each navigation key point corresponding to a voice; for each navigation key point, determining a search formula corresponding to the navigation key point according to the navigation key point and the navigation path; querying a decision-making navigation voice library with the search formulas to obtain the voice corresponding to each navigation key point and generate a voice packet; and sending the voice packet to a vehicle-mounted navigation terminal to trigger the terminal to navigate for the user by playing the voices in the voice packet. The embodiment of the invention can deliver navigation broadcasts to the user in time and reduce the probability that the user deviates from the navigation route.

Description

Intelligent navigation system and method based on Internet of things and big data
Technical Field
The embodiment of the invention relates to the field of vehicles, in particular to an intelligent navigation system and method based on the Internet of things and big data.
Background
During navigation, when a vehicle reaches a fork, the user often cannot determine the correct route in time and deviates from the navigation route. To reduce the probability of this situation, the embodiment of the invention provides an intelligent navigation system and method based on the Internet of Things and big data.
Disclosure of Invention
The embodiment of the invention provides an intelligent navigation system and method based on the Internet of things and big data.
On one hand, the embodiment of the invention provides an intelligent navigation method based on the Internet of things and big data, which is applied to a vehicle-mounted navigation server and comprises the following steps:
acquiring a navigation path corresponding to a user identifier, wherein the navigation path comprises a path starting point and a path end point;
inquiring all navigation key points in the navigation path, wherein each navigation key point corresponds to a voice;
aiming at each navigation key point, determining a search formula corresponding to the navigation key point according to the navigation key point and the navigation path;
querying a decision-making navigation voice library with the search formulas to obtain the voice corresponding to each navigation key point and generate a voice packet;
and sending the voice packet to a vehicle-mounted navigation terminal so as to trigger the vehicle-mounted navigation terminal to navigate for a user by playing the voice in the voice packet.
In one embodiment, the vehicle-mounted navigation terminal broadcasts voices according to the traveling condition. Each voice carries a broadcast position, and a voice is broadcast when the terminal travels to its broadcast position. The traveling position of the terminal is determined by GPS; when the GPS signal is weaker than a preset intensity, the traveling position is determined via the Internet of Things instead.
In one embodiment, querying the decision-making navigation voice library with the search formulas to obtain the voice corresponding to each navigation key point and generate a voice packet includes:
determining a target customized voice library according to the user identification;
in the target customized voice library, inquiring based on the search formula to obtain an inquiry result;
if the query result is not empty, determining the voice in the query result as the voice corresponding to the navigation key point; and if the query result is empty, querying the general voice library based on the search formula to obtain the voice corresponding to the navigation key point.
In one embodiment, the user can selectively expand the corresponding target customized voice library according to the actual navigation situation: triggering a navigation key point selection control in the navigation interface determines a target navigation key point, and the voice corresponding to that target navigation key point is then recorded.
In one embodiment, each user corresponds to one customized voice library, and the target customized voice library corresponding to the user is determined according to the user identifier, wherein the target customized voice library has higher priority than the general voice library.
On the other hand, the embodiment of the invention provides an intelligent navigation system based on the internet of things and big data, the system comprises a vehicle-mounted navigation terminal and a vehicle-mounted navigation server, and the vehicle-mounted navigation server comprises:
the navigation path determining module is used for acquiring a navigation path corresponding to the user identifier, wherein the navigation path comprises a path starting point and a path ending point;
the query module is used for querying all navigation key points in the navigation path, and each navigation key point corresponds to a voice;
a search formula determining module, configured to determine, for each navigation key point, a search formula corresponding to the navigation key point according to the navigation key point and the navigation path;
the voice query module is used for querying the decision-making navigation voice library with the search formulas to obtain the voice corresponding to each navigation key point and generate a voice packet;
and the navigation support module is used for sending the voice packet to the vehicle-mounted navigation terminal so as to trigger the vehicle-mounted navigation terminal to navigate for the user by playing the voice in the voice packet.
In one embodiment, the voice query module includes:
the target customized voice library determining unit is used for determining a target customized voice library according to the user identification;
the voice query unit is used for performing query based on the search formula in the target customized voice library to obtain a query result;
a query result determining unit, configured to determine, if the query result is not empty, a voice in the query result as a voice corresponding to the navigation key point; and if the query result is empty, querying the general voice library based on the search formula to obtain the voice corresponding to the navigation key point.
In another aspect, an embodiment of the present invention provides a computer-readable storage medium, where at least one instruction or at least one program is stored in the computer-readable storage medium, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the above intelligent navigation method based on the internet of things and big data.
In another aspect, an embodiment of the present invention provides an electronic device, which includes at least one processor, and a memory communicatively connected to the at least one processor; the memory stores instructions executable by the at least one processor, and the at least one processor implements the intelligent navigation method based on the internet of things and big data by executing the instructions stored by the memory.
The embodiment of the invention provides an intelligent navigation system and method based on the Internet of Things and big data, which can reduce the probability that a user deviates from the navigation route.
Drawings
In order to more clearly illustrate the technical solutions and advantages of the embodiments of the present invention or the related art, the drawings used in their description are briefly introduced below. Obviously, the following drawings show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of an intelligent navigation method provided by an embodiment of the invention;
FIG. 2 is a flow chart of a method for generating a voice packet according to an embodiment of the present invention;
FIG. 3 is a flow chart of a voice entry method based on dual-level authentication according to an embodiment of the present invention;
FIG. 4 is a flowchart of a training process for an instruction sequence matching model according to an embodiment of the present invention;
fig. 5 is a block diagram of an intelligent navigation system based on the internet of things and big data according to an embodiment of the present invention.
Detailed Description
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
The embodiment of the invention discloses an intelligent navigation system and method based on the Internet of Things and big data. The implementation subjects involved in the method can all communicate via the Internet, and the data they generate can be processed with big data technology; for example, data acquisition, data management, and data processing can all be implemented with the relevant big data techniques, which are not detailed in the embodiments of the present invention. The navigation system provided by the embodiment of the invention comprises a vehicle-mounted navigation terminal and a vehicle-mounted navigation server, and can provide the user with navigation voice based on the Internet of Things and big data through the vehicle-mounted navigation terminal, thereby improving the competitiveness of vehicle-mounted navigation.
As shown in fig. 1, the method is applied to a vehicle navigation server, and includes:
s101, acquiring a navigation path corresponding to a user identifier, wherein the navigation path comprises a path starting point and a path ending point.
And S102, inquiring all navigation key points in the navigation path, wherein each navigation key point corresponds to a voice.
In the embodiment of the invention, each navigation key point can correspond to one piece of decision-making navigation voice. Points where deviation is likely can be designated as navigation key points; at such a point the user typically must make a path decision. For example, at a junction the user must decide which road ahead to take; at a turning point, which direction to turn; at an elevated-road entrance, whether to enter the elevated road. By broadcasting the decision-making navigation voice corresponding to a navigation key point, the decision is effectively made for the user, so that the user knows in time how to proceed.
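As a toy illustration of how such points might be picked out of a path (the point types and field names here are illustrative assumptions, not specified by the patent), the key points could be filtered as follows:

```python
# Hypothetical sketch: select the points of a navigation path where the
# driver must make a decision (junction, turn, elevated-road entrance).
# The point types and dict fields are illustrative assumptions.
DECISION_TYPES = {"junction", "turn", "elevated_entrance"}

def find_key_points(path_points):
    """Return the subset of path points that count as navigation key points."""
    return [p for p in path_points if p.get("type") in DECISION_TYPES]
```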
S103, aiming at each navigation key point, determining a search formula corresponding to the navigation key point according to the navigation key point and the navigation path.
In the embodiment of the invention, a first road section entering the navigation key point and a second road section leaving the navigation key point can be determined from the navigation path, and the search formula is uniquely determined by the first identifier of the first road section, the identifier of the navigation key point, and the second identifier of the second road section.
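A minimal sketch of this construction, assuming road sections are listed in travel order and carry hypothetical `id`/`from`/`to` fields, might look like:

```python
def build_search_formula(path_segments, key_point_id):
    """Find the section entering and the section leaving a key point and
    combine the three identifiers into one unique search key."""
    for entering, leaving in zip(path_segments, path_segments[1:]):
        if entering["to"] == key_point_id and leaving["from"] == key_point_id:
            return (entering["id"], key_point_id, leaving["id"])
    return None  # key point not found on this path
```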
S104, querying a decision-making navigation voice library with the search formulas to obtain the voice corresponding to each navigation key point and generate a voice packet.
And S105, sending the voice packet to a vehicle-mounted navigation terminal to trigger the vehicle-mounted navigation terminal to navigate a user by playing the voice in the voice packet.
In one embodiment, the vehicle-mounted navigation terminal broadcasts voices according to the traveling condition. Each voice carries a broadcast position, and a voice is broadcast when the terminal travels to its broadcast position. The traveling position of the terminal is determined by GPS; when the GPS signal is weaker than a preset intensity, the traveling position is determined via the Internet of Things instead.
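The positioning fallback could be sketched as follows; the two position readers and the intensity threshold are placeholders, not part of the patent:

```python
MIN_GPS_STRENGTH = 3.0  # assumed preset intensity threshold

def current_position(read_gps, read_iot_position):
    """Prefer the GPS fix; fall back to IoT-based positioning when the
    GPS signal is weaker than the preset intensity."""
    strength, position = read_gps()
    if strength >= MIN_GPS_STRENGTH:
        return position
    return read_iot_position()
```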
In one embodiment, the decision-making navigation voice library may contain a customized voice library managed per user according to user instructions, as well as a general voice library serving all users. Querying the decision-making navigation voice library with the search formulas to obtain the voice corresponding to each navigation key point and generate a voice packet, as shown in fig. 2, includes:
s1041, determining a target customized voice library according to the user identification.
Customized voice libraries correspond to users one to one: each user has one customized voice library, and the target customized voice library for a user can be determined from the user identifier. The target customized voice library has higher priority than the general voice library. If the user deviated from the route during a historical trip, the user can record a customized decision-making voice into the target customized voice library, so that later broadcasts at that point use the voice from the target customized voice library, unlike the prior art, which uses only a general voice library.
And S1042, in the target customized voice library, inquiring based on the search formula to obtain an inquiry result.
S1043, if the query result is not empty, determining the voice in the query result as the voice corresponding to the navigation key point; and if the query result is empty, querying the general voice library based on the search formula to obtain the voice corresponding to the navigation key point.
Since the target customized voice library accumulates customized voices gradually, a search formula may fail to hit it; in that case the general voice library is queried, ensuring that no navigation key point is left without a voice and no broadcast is missed.
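Modeling the two libraries as plain dictionaries keyed by search formula (an assumption made purely for illustration), the priority lookup reduces to:

```python
def lookup_voice(search_formula, customized_library, general_library):
    """Query the user's customized library first; fall back to the general
    library so that no key point goes unannounced."""
    voice = customized_library.get(search_formula)
    if voice is not None:
        return voice
    return general_library.get(search_formula)
```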
In one embodiment, a user can selectively expand the corresponding target customized voice library according to the actual navigation situation: by triggering a navigation key point selection control in the navigation interface, the user determines a target navigation key point and records the voice corresponding to it. To ensure that the target customized voice library provides reasonable voices for the user without unduly slowing voice entry, the embodiment of the invention provides a voice entry method based on dual-level verification. The voice entry method based on dual-level verification, as shown in fig. 3, includes:
s201, determining a first voice which is input by a user aiming at a target navigation key point.
S202, analyzing the first voice based on the voice matching model to obtain a first analysis result.
In the embodiment of the invention, a user audio portrait (voiceprint profile) corresponding to the user identifier can be stored. Analyzing the first voice with the voice matching model yields the audio features of the first voice, and the similarity between these features and the user audio portrait is the first analysis result. If the similarity exceeds a preset threshold, the user is judged to be legal.
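Representing both the extracted audio features and the stored user audio portrait as plain vectors, the threshold comparison could be sketched with cosine similarity (the similarity measure and threshold are assumptions; the patent does not specify them):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def user_is_legal(audio_features, audio_portrait, threshold=0.8):
    """First analysis result: similarity above the preset threshold."""
    return cosine_similarity(audio_features, audio_portrait) > threshold
```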
S203, analyzing the first voice based on the instruction sequence matching model to obtain a second analysis result.
S204, if the first analysis result represents that the user is legal and the second analysis result represents that the first voice is legal, receiving a second voice recorded by the user for the target navigation key point and storing the second voice in the target customized voice library.
That is, the user's customized voice can be collected directly into the target customized voice library.
S205, if the first analysis result represents that the user is legal but the second analysis result represents that the first voice is illegal, receiving the second voice recorded by the user for the target navigation key point, performing a decision-making navigation semantic check on the second voice, and storing the second voice in the target customized voice library only if the check passes.
The decision-making navigation semantic check verifies whether the second voice actually describes navigation information; for example, a second voice unrelated to navigation fails the check. The embodiment of the invention does not limit the specific criterion of the check.
In the embodiment of the invention, the first analysis result can represent whether the user is the owner of the target customized voice library; only the owner may enter the second voice into it. The second analysis result represents whether the user has entered a VIP instruction, which is available only to VIP users. A valid VIP instruction allows the second voice to bypass the decision-making navigation semantic check and be entered directly into the target customized voice library, which improves entry speed and entry success rate. The VIP instruction is akin to a back-door privilege for the VIP user: because the second voice is entered without being checked, there is no guarantee that it is still decision-making navigation voice; it may merely be content the VIP user wants the vehicle-mounted navigation terminal to broadcast, yet the navigation system still treats it as "decision-making navigation voice" and broadcasts it. This gives the user a novel, owner-like experience, significantly enhances user stickiness and the VIP user's sense of privilege, lets the VIP user enjoy fully customized voice service that need not be strongly related to navigation, and also enriches the navigation content and raises the intelligence of the navigation service.
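Combining steps S204 and S205, the branch logic of the dual-level entry could be sketched as follows, with the semantic check left as a caller-supplied predicate (all names here are illustrative):

```python
def try_store_voice(user_is_legal, first_voice_is_legal, second_voice, key,
                    semantic_check, customized_library):
    """Store the second voice when the owner is verified and either the VIP
    instruction was accepted (bypass) or the semantic check passes."""
    if not user_is_legal:
        return False  # not the owner of the customized voice library
    if first_voice_is_legal or semantic_check(second_voice):
        customized_library[key] = second_voice
        return True
    return False
```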
To ensure that only a VIP user who enters the correct instruction can exercise the VIP privilege, to control the degree of customization of the target customized voice library, to prevent the library from being tampered with under an incorrect instruction or by a non-VIP user, and to improve its security, the embodiment of the invention can analyze the first voice with a customized instruction sequence matching model to obtain the second analysis result. Each VIP user corresponds to exactly one instruction sequence matching model, which guarantees that a second analysis result representing a legal first voice is produced only when that VIP user enters the correct instruction. In other words, one instruction sequence matching model is trained for each VIP user. To achieve this, the embodiment of the invention details the training process of the instruction sequence matching model corresponding to a user, as shown in fig. 4, including:
s301, a pre-trained voice analysis model is obtained, the voice analysis model obtains text information corresponding to voice according to the voice input by a target user, and loss generated by the voice analysis model is smaller than a preset first threshold value.
The training process of the instruction sequence matching model depends on a voice analysis model, i.e., a trained model that meets the application standard. The voice analysis model is trained on the speech of the target user, who is the corresponding VIP user: it matches the target user's audio portrait, can accurately recognize the target user's speech, and produces recognized text of high accuracy.
S302, obtaining a preset number of target instruction words selected by a target user in an instruction word set.
S303, generating a plurality of sample texts, each containing at least one target instruction word; acquiring a sample audio recorded by the user according to each sample text; generating a sample target instruction word sequence according to the order in which the target instruction words appear in the sample text; and determining each sample audio and its corresponding sample target instruction word sequence as a training sample. The sample texts are generated so that, across the set of all training samples, every target instruction word is covered.
S304, a pre-trained semantic analysis model is obtained, the semantic analysis model obtains an instruction word analysis result according to an input text, and loss generated by the semantic analysis model is smaller than a preset second threshold value.
The embodiment of the invention does not limit the specific source of the semantic analysis model: it may be trained in-house or taken from open source, as long as it meets the requirement of high accuracy. The semantic analysis model segments any input text into words, obtains a word vector for each word, and determines the instruction word analysis result, which may take the form of an instruction word sequence, from the obtained word vectors.
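As a toy stand-in for the instruction-word analysis (real word segmentation and word-vector models are far richer; whitespace splitting here is an illustrative simplification):

```python
def parse_instruction_words(text, instruction_vocabulary):
    """Return the instruction words of a text in their order of appearance.
    Stand-in for the semantic analysis model's instruction word result."""
    return [word for word in text.split() if word in instruction_vocabulary]
```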
S305, inputting the sample audio in the training sample into the voice analysis model, transmitting the output of the voice analysis model to the semantic analysis model, obtaining a sample instruction word analysis result output by the semantic analysis model, and adjusting a target parameter in the semantic analysis model according to the difference between the sample instruction word analysis result and a sample target instruction word sequence in the training sample, wherein the target parameter is a word vector corresponding to a word in an instruction word in the semantic analysis model.
According to the embodiment of the invention, a loss can be computed from the difference between the sample instruction word analysis result and the sample target instruction word sequence of the training sample, and the target parameter is adjusted by back-propagation with gradient descent, the target parameter being the word vectors of the words appearing in instruction words within the semantic analysis model.
S306, determining the sequential connection of the voice analysis model, the parameter-adjusted semantic analysis model, and a judgment model as the instruction sequence matching model. The judgment model checks whether the output of the parameter-adjusted semantic analysis model is consistent with the preset instruction of the target user, and its judgment result is determined as the second analysis result.
The preset instruction is an instruction preset by the target user, and if the first voice of the target user comprises the preset instruction, the first voice can be considered to be legal.
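The three-stage connection of S306 can be sketched as a simple function composition, with the two trained models passed in as callables (the lambdas in the usage below are trivial stand-ins for the real models):

```python
def instruction_sequence_matches(audio, speech_model, semantic_model,
                                 preset_instruction):
    """Chain: voice analysis -> semantic analysis -> judgment.
    The second analysis result is whether the recovered instruction word
    sequence equals the target user's preset instruction."""
    text = speech_model(audio)                # audio -> recognized text
    instruction_words = semantic_model(text)  # text -> instruction word sequence
    return instruction_words == preset_instruction
```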
An embodiment of the present invention further provides a navigation system, as shown in fig. 5, including a vehicle-mounted navigation terminal and a vehicle-mounted navigation server, where the vehicle-mounted navigation server includes:
the navigation path determining module is used for acquiring a navigation path corresponding to the user identifier, wherein the navigation path comprises a path starting point and a path ending point;
the query module is used for querying all navigation key points in the navigation path, and each navigation key point corresponds to a voice;
a search formula determining module, configured to determine, for each navigation key point, a search formula corresponding to the navigation key point according to the navigation key point and the navigation path;
the voice query module is used for querying the decision-making navigation voice library with the search formulas to obtain the voice corresponding to each navigation key point and generate a voice packet;
and the navigation support module is used for sending the voice packet to the vehicle-mounted navigation terminal so as to trigger the vehicle-mounted navigation terminal to navigate for the user by playing the voice in the voice packet.
In one embodiment, the voice query module includes:
the target customized voice library determining unit is used for determining a target customized voice library according to the user identification;
the voice query unit is used for performing query based on the search formula in the target customized voice library to obtain a query result;
a query result determining unit, configured to determine, if the query result is not empty, a voice in the query result as a voice corresponding to the navigation key point; and if the query result is empty, querying the general voice library based on the search formula to obtain the voice corresponding to the navigation key point.
The system and the method in the embodiment of the invention are based on the same concept, and are not described herein again.
In another aspect, an embodiment of the present invention provides a computer-readable storage medium, where at least one instruction or at least one program is stored in the computer-readable storage medium, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the above intelligent navigation method based on the internet of things and big data.
In another aspect, an embodiment of the present invention provides an electronic device, which includes at least one processor, and a memory communicatively connected to the at least one processor; the memory stores instructions executable by the at least one processor, and the at least one processor implements the intelligent navigation method based on the internet of things and big data by executing the instructions stored by the memory.
The above description is only a preferred embodiment of the present invention, and should not be taken as limiting the embodiments of the present invention, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the embodiments of the present invention should be included in the scope of the present invention.

Claims (9)

1. An intelligent navigation method based on the Internet of things and big data is applied to a vehicle-mounted navigation server, and the method comprises the following steps:
acquiring a navigation path corresponding to a user identifier, wherein the navigation path comprises a path starting point and a path end point;
inquiring all navigation key points in the navigation path, wherein each navigation key point corresponds to a voice;
aiming at each navigation key point, determining a search formula corresponding to the navigation key point according to the navigation key point and the navigation path;
querying a decision-making navigation voice library with the search formulas to obtain the voice corresponding to each navigation key point and generate a voice packet;
and sending the voice packet to a vehicle-mounted navigation terminal so as to trigger the vehicle-mounted navigation terminal to navigate for a user by playing the voice in the voice packet.
2. The method of claim 1, wherein:
the vehicle navigation terminal carries out voice broadcast according to the advancing condition, each voice comprises a broadcast position, when the vehicle navigation terminal travels to the broadcast position, the voice is broadcast, the vehicle navigation terminal traveling position is determined based on a GPS, and when a GPS signal is smaller than preset intensity, the vehicle navigation terminal traveling position is determined based on the Internet of things.
3. The method according to claim 1 or 2, wherein querying the decision-making navigation voice library with the search formulas to obtain the voice corresponding to each navigation key point and generate a voice packet comprises:
determining a target customized voice library according to the user identifier;
querying the target customized voice library based on the search formula to obtain a query result;
if the query result is not empty, determining the voice in the query result as the voice corresponding to the navigation key point; and if the query result is empty, querying a general voice library based on the search formula to obtain the voice corresponding to the navigation key point.
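The two-tier lookup of claim 3 can be sketched as a customized-first query with a general-library fallback. All names and sample entries below are hypothetical:

```python
def query_voice(search_formula, customized_library, general_library):
    """Claim 3 lookup order: try the user's customized voice library
    first; if the query result is empty, query the general library."""
    result = customized_library.get(search_formula)
    if result is not None:  # non-empty query result: use the customized voice
        return result
    return general_library.get(search_formula)  # empty result: fall back

# Hypothetical libraries: the customized one overrides only "turn-1".
customized = {"A->B@turn-1": "custom-turn-1.mp3"}
general = {"A->B@turn-1": "generic-turn-1.mp3",
           "A->B@exit-2": "generic-exit-2.mp3"}
voice_hit = query_voice("A->B@turn-1", customized, general)
voice_fallback = query_voice("A->B@exit-2", customized, general)
```

This ordering is what gives the customized library its higher priority over the general library.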
4. The method of claim 1, wherein:
wherein the target customized voice library can be selectively expanded through the navigation interface according to actual navigation conditions: a target navigation key point is determined by triggering a navigation key point selection control in the navigation interface, and the voice corresponding to the target navigation key point is recorded.
5. The method of claim 4, wherein:
wherein each user corresponds to one customized voice library; the target customized voice library corresponding to the user is determined according to the user identifier, and the priority of the target customized voice library is higher than that of the general voice library.
6. An intelligent navigation system based on the Internet of Things and big data, characterized in that the system comprises a vehicle-mounted navigation terminal and a vehicle-mounted navigation server, the vehicle-mounted navigation server comprising:
a navigation path determining module, configured to acquire a navigation path corresponding to a user identifier, wherein the navigation path comprises a path starting point and a path end point;
a query module, configured to query all navigation key points in the navigation path, wherein each navigation key point corresponds to a voice;
a search formula determining module, configured to determine, for each navigation key point, a search formula corresponding to the navigation key point according to the navigation key point and the navigation path;
a voice query module, configured to query a navigation voice library according to each search formula to obtain the voice corresponding to each navigation key point and generate a voice packet;
and a navigation support module, configured to send the voice packet to the vehicle-mounted navigation terminal so as to trigger the vehicle-mounted navigation terminal to navigate for the user by playing the voice in the voice packet.
7. The system of claim 6, wherein the voice query module comprises:
a target customized voice library determining unit, configured to determine a target customized voice library according to the user identifier;
a voice query unit, configured to query the target customized voice library based on the search formula to obtain a query result;
and a query result determining unit, configured to determine, if the query result is not empty, the voice in the query result as the voice corresponding to the navigation key point; and, if the query result is empty, to query the general voice library based on the search formula to obtain the voice corresponding to the navigation key point.
8. An electronic device, comprising a processor and a memory, wherein the memory stores at least one instruction or at least one program, and the at least one instruction or program is loaded and executed by the processor to implement the intelligent navigation method based on the Internet of Things and big data according to any one of claims 1 to 5.
9. A computer-readable storage medium, wherein at least one instruction or at least one program is stored in the storage medium, and the at least one instruction or program is loaded and executed by a processor to implement the intelligent navigation method based on the Internet of Things and big data according to any one of claims 1 to 5.
CN202110525846.7A 2021-05-14 2021-05-14 Intelligent navigation system and method based on Internet of things and big data Active CN113267189B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110525846.7A CN113267189B (en) 2021-05-14 2021-05-14 Intelligent navigation system and method based on Internet of things and big data

Publications (2)

Publication Number Publication Date
CN113267189A true CN113267189A (en) 2021-08-17
CN113267189B CN113267189B (en) 2022-05-03

Family

ID=77230797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110525846.7A Active CN113267189B (en) 2021-05-14 2021-05-14 Intelligent navigation system and method based on Internet of things and big data

Country Status (1)

Country Link
CN (1) CN113267189B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1920482A (en) * 2005-08-23 2007-02-28 厦门雅迅网络股份有限公司 Processing of navigation track in vehicle online type navigation system and transportation method thereof
CN101033981A (en) * 2007-03-19 2007-09-12 江苏新科数字技术有限公司 Navigator used for simulating guiding path and operation method thereof
CN101368827A (en) * 2007-08-16 2009-02-18 北京灵图软件技术有限公司 Communication navigation method, apparatus and communication navigation system
CN106441341A (en) * 2015-08-12 2017-02-22 高德软件有限公司 Navigation method and navigation device
WO2019007071A1 (en) * 2017-07-05 2019-01-10 乐高乐佳(北京)信息技术有限公司 Key point-based oriented navigation method, device and apparatus

Similar Documents

Publication Publication Date Title
US10818286B2 (en) Communication system and method between an on-vehicle voice recognition system and an off-vehicle voice recognition system
US11153733B2 (en) Information providing system and information providing method
CN107615345B (en) Movement assistance device, movement assistance server, and movement assistance system
CN105070288A (en) Vehicle-mounted voice instruction recognition method and device
CN111508482A (en) Semantic understanding and voice interaction method, device, equipment and storage medium
US20130325478A1 (en) Dialogue apparatus, dialogue system, and dialogue control method
US20090187406A1 (en) Voice recognition system
US20150187351A1 (en) Method and system for providing user with information in vehicle
US8583441B2 (en) Method and system for providing speech dialogue applications
US20130297210A1 (en) Route guidance apparatus and method with voice recognition
CN110472029B (en) Data processing method, device and computer readable storage medium
US20200191583A1 (en) Matching method, matching server, matching system, and storage medium
JP2009064186A (en) Interactive system for vehicle
CN113267189B (en) Intelligent navigation system and method based on Internet of things and big data
CN110121086B (en) Planning method for online playing content and cloud server
CN107170447B (en) Sound processing system and sound processing method
US10593323B2 (en) Keyword generation apparatus and keyword generation method
US20220208187A1 (en) Information processing device, information processing method, and storage medium
CN111089603B (en) Navigation information prompting method based on social application communication content and vehicle
US11874129B2 (en) Apparatus and method for servicing personalized information based on user interest
US11620994B2 (en) Method for operating and/or controlling a dialog system
JP6324249B2 (en) Electronic device, voice recognition system, and voice recognition program
JP7449852B2 (en) Information processing device, information processing method, and program
JP2022146261A (en) Guidance device
CN117213519A (en) Navigation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant