CN108827338B - Voice navigation method and related product - Google Patents

Voice navigation method and related product

Info

Publication number
CN108827338B
CN108827338B
Authority
CN
China
Prior art keywords
target
voice
wearable device
determining
navigation route
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810574609.8A
Other languages
Chinese (zh)
Other versions
CN108827338A (en)
Inventor
张伟正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810574609.8A
Publication of CN108827338A
Application granted
Publication of CN108827338B


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34: Route searching; Route guidance
    • G01C21/36: Input/output arrangements for on-board computers
    • G01C21/3626: Details of the output of route guidance instructions
    • G01C21/3629: Guidance using speech or audio output, e.g. text-to-speech

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Telephone Function (AREA)

Abstract

The application discloses a voice navigation method and related products, applied to a wearable device. The wearable device comprises a processing circuit and a communication circuit, a sensor, and an audio component connected to the processing circuit. The method comprises the following steps: acquiring a target position through the Internet of Things; acquiring a current position; generating a navigation route between the current position and the target position; and playing the navigation route by voice. By adopting the embodiments of the application, voice navigation can be realized through the wearable device, the functions of the wearable device are enriched, and the user experience is improved.

Description

Voice navigation method and related product
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a voice navigation method and a related product.
Background
As wireless technology matures, wireless headsets are connected to electronic devices such as mobile phones in more and more scenarios. Through a wireless headset, people can listen to music, make calls, and use various other functions. However, current wireless headsets have a single function, which limits the user experience.
Disclosure of Invention
The embodiment of the application provides a voice navigation method and a related product, which can realize voice navigation through wearable equipment, enrich the functions of the wearable equipment and improve the user experience.
In a first aspect, embodiments of the present application provide a wearable device including a processing circuit, and a communication circuit, a sensor, and an audio component connected to the processing circuit, wherein,
the communication circuit is used for acquiring a target position through the Internet of things;
the sensor is used for acquiring the current position;
the processing circuit is configured to generate a navigation route between the current location and the target location;
the audio component is also used for playing the navigation route in voice.
In a second aspect, an embodiment of the present application provides a voice navigation method, which is applied to a wearable device, and includes:
acquiring a target position through the Internet of things;
acquiring a current position;
generating a navigation route between the current location and the target location;
and playing the navigation route by voice.
In a third aspect, an embodiment of the present application provides a voice navigation apparatus, which is applied to a wearable device, and includes an obtaining unit, a generating unit, and a playing unit, where,
the acquisition unit is used for acquiring a target position through the Internet of things; and obtaining a current position;
the generating unit is used for generating a navigation route between the current position and the target position;
the playing unit is used for playing the navigation route in a voice mode.
In a fourth aspect, embodiments of the present application provide a wearable device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps of any of the methods of the second aspect of the embodiments of the present application.
In a fifth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps described in any one of the methods in the second aspect of the present application.
In a sixth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in any one of the methods of the second aspect of the present application. The computer program product may be a software installation package.
It can be seen that the voice navigation method and related products described in the embodiments of the present application are applied to a wearable device worn on the head of a user: a target position is obtained through the Internet of Things, a current position is obtained, a navigation route between the current position and the target position is generated, and the navigation route is played by voice. In this way, voice navigation can be realized through the wearable device, the functions of the wearable device are enriched, and the user experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1A is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application;
FIG. 1B is a schematic flow chart illustrating a voice navigation method disclosed in an embodiment of the present application;
FIG. 1C is a schematic illustration of a positioning demonstration disclosed in an embodiment of the present application;
FIG. 2 is a flow chart illustrating another voice guidance method disclosed in an embodiment of the present application;
FIG. 3 is a flow chart of another voice navigation method disclosed in the embodiments of the present application;
fig. 4 is a schematic structural diagram of another wearable device disclosed in the embodiments of the present application;
fig. 5 is a schematic structural diagram of a voice navigation apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The wearable device may include at least one of: wireless earphones, brain wave acquisition devices, Augmented Reality (AR)/Virtual Reality (VR) devices, smart glasses, and the like, wherein the wireless earphones may implement communication by: wireless fidelity (Wi-Fi) technology, bluetooth technology, visible light communication technology, invisible light communication technology (infrared communication technology, ultraviolet communication technology), and the like. In the embodiment of the present application, a wireless headset is taken as an example, and the wireless headset includes a left earplug and a right earplug, where the left earplug can be taken as an independent component, and the right earplug can also be taken as an independent component.
The electronic devices involved in the embodiments of the present application may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem with wireless communication functions, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal equipment (terminal device), and so on. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
Optionally, the wireless headset may be an ear-hook headset, an ear-plug headset, or a headset, which is not limited in the embodiments of the present application.
The wireless headset may be housed in a headset case, which may include: two receiving cavities (a first receiving cavity and a second receiving cavity) sized and shaped to receive a pair of wireless headsets (a left earbud and a right earbud); one or more earphone housing magnetic components disposed within the case for magnetically attracting and respectively magnetically securing a pair of wireless earphones into the two receiving cavities. The earphone box may further include an earphone cover. Wherein the first receiving cavity is sized and shaped to receive a first wireless headset and the second receiving cavity is sized and shaped to receive a second wireless headset.
The wireless headset may include a headset housing; a rechargeable battery (e.g., a lithium battery) disposed within the headset housing; a plurality of metal contacts, disposed on an exterior surface of the headset housing, for connecting the battery to a charging device; and a speaker assembly including a directional sound port and a driver unit, the driver unit including a magnet, a voice coil, and a diaphragm and being configured to emit sound through the directional sound port.
In one possible implementation, the wireless headset may further include a touch area, which may be located on an outer surface of the headset housing, and at least one touch sensor is disposed in the touch area for detecting a touch operation, and the touch sensor may include a capacitive sensor. When a user touches the touch area, the at least one capacitive sensor may detect a change in self-capacitance to recognize a touch operation.
In one possible implementation, the wireless headset may further include an acceleration sensor and a triaxial gyroscope, the acceleration sensor and the triaxial gyroscope may be disposed within the headset housing, and the acceleration sensor and the triaxial gyroscope are used to identify a picking up action and a taking down action of the wireless headset.
In a possible implementation manner, the wireless headset may further include at least one air pressure sensor, and the air pressure sensor may be disposed on a surface of the headset housing and configured to detect air pressure in the ear after the wireless headset is worn. The wearing tightness of the wireless earphone can be detected through the air pressure sensor. When it is detected that the wireless earphone is worn loosely, the wireless earphone can send prompt information to an electronic device connected with the wireless earphone so as to prompt a user that the wireless earphone has a risk of falling.
The following describes embodiments of the present application in detail.
Referring to fig. 1A, fig. 1A is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application, the wearable device 100 includes a storage and processing circuit 110, and a sensor 170 and an audio component 140 connected to the storage and processing circuit 110, wherein:
the wearable device 100 may include control circuitry, which may include storage and processing circuitry 110. The storage and processing circuitry 110 may be a memory, such as a hard drive memory, a non-volatile memory (e.g., flash memory or other electronically programmable read-only memory used to form a solid state drive, etc.), a volatile memory (e.g., static or dynamic random access memory, etc.), etc., and the embodiments of the present application are not limited thereto. The processing circuitry in the storage and processing circuitry 110 may be used to control the operation of the wearable device 100. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry 110 may be used to run software in the wearable device 100, such as an Internet browsing application, a Voice Over Internet Protocol (VOIP) phone call application, an email application, a media playing application, operating system functions, and so forth. Such software may be used to perform control operations such as camera-based image capture, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functionality based on status indicators such as status indicator lights of light emitting diodes, touch event detection based on touch sensors, functionality associated with displaying information on multiple (e.g., layered) displays, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in wearable device 100, to name a few, embodiments of the present application are not limited.
The wearable device 100 may also include input-output circuitry 150. The input-output circuitry 150 may be used to enable the wearable device 100 to input and output data, i.e., to allow the wearable device 100 to receive data from an external device and to output data from the wearable device 100 to an external device. The input-output circuitry 150 may further include a sensor 170. The sensor 170 may include an ambient light sensor, a proximity sensor based on light and capacitance, a touch sensor (e.g., an optical touch sensor and/or a capacitive touch sensor, where the touch sensor may be part of a touch display screen or used independently as a touch sensor structure), an acceleration sensor, an ultrasonic sensor, and other sensors. The ultrasonic sensor may include at least one receiver and a microphone: the microphone emits ultrasonic waves, the receiver receives them, and together they form the ultrasonic sensor.
Input-output circuitry 150 may also include one or more displays, such as display 130. Display 130 may include one or a combination of liquid crystal displays, organic light emitting diode displays, electronic ink displays, plasma displays, displays using other display technologies. Display 130 may include an array of touch sensors (i.e., display 130 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The audio component 140 may be used to provide audio input and output functionality for the wearable device 100. The audio components 140 in the wearable device 100 may include speakers, microphones, buzzers, tone generators, and other components for generating and detecting sounds.
The communication circuit 120 may be used to provide the wearable device 100 with the ability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals. The wireless communication circuitry in communication circuitry 120 may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless Communication circuitry in Communication circuitry 120 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving Near Field coupled electromagnetic signals. For example, the communication circuit 120 may include a near field communication antenna and a near field communication transceiver. The communications circuitry 120 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuitry and antenna, and so forth.
The wearable device 100 may further include a battery, power management circuitry, and other input-output units 160. The input-output unit 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, and the like.
A user may input commands through the input-output circuitry 150 to control the operation of the wearable device 100, and may use the output data of the input-output circuitry 150 to receive status information and other outputs from the wearable device 100.
Based on the wearable device described in fig. 1A above, the following functions may be implemented:
the communication circuit 120 is configured to obtain a target location through the internet of things;
the sensor 170 is configured to acquire a current position;
the processing circuit is configured to generate a navigation route between the current location and the target location;
the audio component 140 is further configured to play the navigation route in voice.
It can be seen that, the wearable device described in the embodiment of the application is worn on the head of a user, acquires a target position through the internet of things, acquires a current position, generates a navigation route between the current position and the target position, and plays the navigation route through voice, so that voice navigation can be realized through the wearable device, functions of the wearable device are enriched, and user experience is improved.
In one possible example, the sensor 170 is further specifically configured to acquire a target environmental parameter;
in terms of the voice playing the navigation route, the audio component 140 is specifically configured to:
determining a target playing parameter corresponding to the target environment parameter;
and playing the navigation route according to the target playing parameter in a voice mode.
In one possible example, the wearable device includes a first voice component and a second voice component;
when the wearable device plays the target audio, in terms of the voice playing the navigation route, the audio component 140 is specifically configured to:
playing the target audio through the first voice component, and playing the navigation route by voice through the second voice component.
In one possible example, in connection with the generating the navigation route between the current location and the target location, the processing circuitry is specifically configured to:
determining an average distance of at least one navigation path between the current location and the target location;
determining a target trip mode corresponding to the average distance according to a preset mapping relation between the distance and the trip mode;
and generating a navigation route between the current position and the target position according to the target travel mode.
In one possible example, in terms of obtaining the target location through the internet of things, the communication circuit 120 is specifically configured to:
receiving a search result of searching for the Internet of things node by a target user aiming at the Internet of things, wherein the search result comprises a plurality of Internet of things nodes;
selecting three target Internet of things nodes from the plurality of Internet of things nodes, wherein the three target Internet of things nodes are not positioned on the same straight line, and the position of each target Internet of things node is a known quantity;
acquiring a signal intensity value of each target internet of things node in the three target internet of things nodes to obtain three signal intensity values;
and determining the target position according to the three signal strength values and the positions of the three target Internet of things nodes.
Based on the wearable device described in fig. 1A, the following voice navigation method can be implemented:
the communication circuit 120 obtains a target location through the internet of things;
the sensor 170 acquires a current position;
the processing circuit generates a navigation route between the current location and the target location;
the audio component 140 plays the navigation route audibly.
Referring to fig. 1B, fig. 1B is a schematic flow chart of a voice navigation method according to an embodiment of the present application. The voice navigation method is applied to the wearable device shown in fig. 1A, the wearable device is worn on the head of a user, and the voice navigation method comprises the following steps.
101. And acquiring the target position through the Internet of things.
The embodiments of the application can be applied to an indoor navigation environment. In such an environment, the wearable device can be connected to the Internet of Things through a network, and an Internet of Things node can be at least one of the following: a router, a server, a monitoring platform, a gateway, an electronic device, etc. The indoor navigation environment may be at least one of the following: a train station, an airport, a mall, a supermarket, a museum, a hospital, a school, a bus stop, etc., which is not limited herein. Specifically, for example, a network connection is established between the wearable device and an electronic device and a target position transmitted by the electronic device is received; alternatively, the target position may be input by the user by voice.
Optionally, in the step 101, obtaining the target location through the internet of things may include the following steps:
111. receiving a search result of searching for the Internet of things node by a target user aiming at the Internet of things, wherein the search result comprises a plurality of Internet of things nodes;
112. selecting three target Internet of things nodes from the plurality of Internet of things nodes, wherein the three target Internet of things nodes are not positioned on the same straight line, and the position of each target Internet of things node is a known quantity;
113. acquiring a signal intensity value of each target internet of things node in the three target internet of things nodes to obtain three signal intensity values;
114. and determining the target position according to the three signal strength values and the positions of the three target Internet of things nodes.
Here, the target user is located at the navigation destination, and the target user may be an electronic device. In an indoor navigation environment, the target user can search for Internet of Things nodes to obtain a search result, which may include a plurality of Internet of Things nodes and a signal strength value corresponding to each node. Because the positions of some Internet of Things nodes change, three target Internet of Things nodes are selected from the plurality of nodes such that the three target nodes are not on the same straight line and the position of each target node is a known quantity. The signal strength value of each of the three target nodes is acquired, yielding three signal strength values. A mapping relation between signal strength value and distance can be preset, so three distance values can be obtained. The three target nodes can then be mapped onto an indoor map and, with each target node as a center, a circle is drawn with the corresponding distance value as radius, giving three circles; the map position corresponding to the intersection area shared by the three circles is taken as the target position. For example, as shown in fig. 1C, a1, a2, and a3 are the three target Internet of Things nodes, r1 is the distance value corresponding to a1 (the distance between the target user and a1), r2 is the distance value corresponding to a2, and r3 is the distance value corresponding to a3. Three circles are thus obtained, and the position of their intersection area is taken as the target position.
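To make the positioning concrete, the following is a minimal sketch, not part of the patent text: it assumes a log-distance path-loss model for the preset mapping between signal strength and distance (the node coordinates, transmit power, and path-loss exponent are illustrative assumptions), and it finds the common intersection of the three circles by linearizing the circle equations.

```python
# Known node positions (assumed example coordinates on an indoor map).
nodes = {"a1": (0.0, 0.0), "a2": (10.0, 0.0), "a3": (5.0, 8.0)}

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Convert a signal strength value to a distance estimate.

    The patent only requires a preset mapping between signal strength
    and distance; a log-distance path-loss model is one common choice.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Locate the point shared by three circles (center, radius).

    Subtracting the circle equations pairwise yields two linear
    equations in (x, y), solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # zero iff the three centers are collinear
    if abs(det) < 1e-9:
        raise ValueError("nodes must not lie on the same straight line")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Example: signal strengths observed for the three target IoT nodes.
rssi = {"a1": -50.0, "a2": -55.0, "a3": -60.0}
radii = {k: rssi_to_distance(v) for k, v in rssi.items()}
x, y = trilaterate(nodes["a1"], radii["a1"],
                   nodes["a2"], radii["a2"],
                   nodes["a3"], radii["a3"])
print(f"estimated target position: ({x:.2f}, {y:.2f})")
```

Because the three node positions are not collinear, the 2x2 linear system has a unique solution, which is why the method requires the three target nodes not to lie on one straight line.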
Optionally, in the step 101, the obtaining of the target location through the internet of things may include the following steps:
121. acquiring a target voice signal sent by a target user through the Internet of things;
122. analyzing the target voice signal to obtain a plurality of target pronunciation characteristics;
123. determining a language type corresponding to each target pronunciation feature in the plurality of target pronunciation features according to a preset mapping relation between the pronunciation features and the language types to obtain a plurality of language types;
124. selecting the language type with the most occurrence times from the plurality of language types as a target language type;
125. acquiring a target analysis model corresponding to the target language type;
126. and analyzing the target voice signal according to the target analysis model to obtain target content, and extracting the target position from the target content.
The language type may be a national or regional language and may include, for example, at least one of Mandarin, English, Spanish, Arabic, Russian, Sichuanese, Chongqing dialect, and the like, which is not limited herein. The wearable device may acquire a target speech signal through a microphone, for example when a user inputs a piece of speech, and may then parse the target speech signal to obtain a plurality of target pronunciation features. A pronunciation feature can uniquely identify the language of a certain country or region and can also distinguish different regional languages, for example Sichuanese from Chongqing dialect. A mapping relation between pronunciation features and language types may be pre-stored in the wearable device, so the language type corresponding to each of the plurality of target pronunciation features can be determined, yielding a plurality of language types, and the language type that occurs most often among them is selected as the target language type. A mapping relation between language types and parsing models may also be pre-stored in the wearable device; according to this mapping relation, a target parsing model corresponding to the target language type can be obtained. Different language types have different parsing models, for example language A corresponds to parsing model A and language B corresponds to parsing model B. The target speech signal can then be parsed according to the target parsing model to obtain target content, and the target position is extracted from the target content.
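The majority-vote selection of the language type can be sketched as follows. The feature table, the per-language model stubs, and the prefix-based position extraction are hypothetical placeholders; the patent only specifies that the mappings are pre-stored.

```python
from collections import Counter

# Hypothetical preset mapping between pronunciation features and
# language types (the patent assumes such a table is pre-stored).
FEATURE_TO_LANGUAGE = {
    "retroflex_final": "Mandarin",
    "rhotic_vowel": "English",
    "tone_contour_4": "Sichuanese",
}

# Hypothetical per-language parsing models; real implementations would
# be speech-recognition models trained per language type.
PARSING_MODELS = {
    "Mandarin": lambda signal: "navigate to Gate B12",
    "English": lambda signal: "navigate to Gate B12",
    "Sichuanese": lambda signal: "navigate to Gate B12",
}

def extract_target_position(voice_signal, pronunciation_features):
    # Map each extracted pronunciation feature to a language type.
    languages = [FEATURE_TO_LANGUAGE[f]
                 for f in pronunciation_features
                 if f in FEATURE_TO_LANGUAGE]
    # Select the language type that occurs most often.
    target_language = Counter(languages).most_common(1)[0][0]
    # Parse the voice signal with the model for that language type
    # and extract the target position from the parsed content.
    target_content = PARSING_MODELS[target_language](voice_signal)
    return target_content.removeprefix("navigate to ").strip()
```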
102. And acquiring the current position.
The wearable device may obtain the current location through a Global Positioning System (GPS) or a wireless fidelity (Wi-Fi) positioning technology.
103. Generating a navigation route between the current location and the target location.
After the current position and the target position are determined, a navigation route between the current position and the target position can be generated through a path generation algorithm.
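The text does not name a particular path generation algorithm; Dijkstra's shortest-path search over a graph of map waypoints is one conventional choice. The sketch below assumes that choice, with a hypothetical indoor graph whose edge weights are corridor lengths in meters.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over an adjacency map
    {node: [(neighbor, edge_length_m), ...]}; returns the node list
    of one shortest navigation route from start to goal."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(heap, (nd, neighbor))
    # Walk predecessors back from the goal to recover the route.
    route, node = [goal], goal
    while node != start:
        node = prev[node]
        route.append(node)
    return route[::-1]

# Tiny example map: corridor junctions with edge lengths in meters.
graph = {
    "current": [("hall", 30.0), ("lift", 50.0)],
    "hall": [("lift", 10.0), ("target", 60.0)],
    "lift": [("target", 40.0)],
}
print(shortest_route(graph, "current", "target"))
# -> ['current', 'hall', 'lift', 'target']  (30 + 10 + 40 = 80 m)
```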
Optionally, in step 103, generating a navigation route between the current location and the target location may include the following steps:
31. determining an average distance of at least one navigation path between the current location and the target location;
32. determining a target trip mode corresponding to the average distance according to a preset mapping relation between the distance and the trip mode;
33. and generating a navigation route between the current position and the target position according to the target travel mode.
After the current position and the target position are determined, navigation routes between them can be generated, giving at least one navigation route, each corresponding to a distance. The distances of the multiple navigation routes are averaged to obtain an average distance. A mapping relation between preset distances and travel modes can be stored in the wearable device in advance; the target travel mode corresponding to the average distance is then determined according to this mapping relation. The travel mode may be at least one of the following: taxi, bus, bicycle, walking, taxi + bus, etc., which is not limited herein. Finally, a navigation route between the current position and the target position is generated according to the target travel mode.
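A short sketch of steps 31-33 follows; the distance thresholds and travel modes in the preset mapping are illustrative assumptions, since the patent keeps the mapping abstract.

```python
# Assumed preset mapping between average distance and travel mode;
# the patent does not fix concrete thresholds.
DISTANCE_TO_MODE = [
    (1_000.0, "walking"),          # up to 1 km
    (5_000.0, "bicycle"),          # up to 5 km
    (15_000.0, "bus"),             # up to 15 km
    (float("inf"), "taxi + bus"),  # anything farther
]

def travel_mode(average_distance_m):
    for limit, mode in DISTANCE_TO_MODE:
        if average_distance_m <= limit:
            return mode

routes_m = [3_200.0, 3_600.0, 4_000.0]  # distances of candidate routes
avg = sum(routes_m) / len(routes_m)     # average distance: 3,600 m
print(travel_mode(avg))                 # -> "bicycle"
```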
104. And playing the navigation route by voice.
The wearable device can play the navigation route by voice at preset time intervals, where the preset time interval can be set by the user or defaulted by the system. The wearable device can also position itself in real time and play the navigation route by voice every preset movement distance, where the preset movement distance can be set by the user or defaulted by the system.
Optionally, in step 104, the voice playing the navigation route may include the following steps:
41. acquiring target environment parameters;
42. determining a target playing parameter corresponding to the target environment parameter;
43. and playing the navigation route according to the target playing parameter in a voice mode.
Here, the sensor of the wearable device may be an environmental sensor, which may be at least one of the following: a position sensor, a humidity sensor, a temperature sensor, an external sound detection sensor, and the like. The target environmental parameter may be acquired by the environmental sensor and may include at least one of: location, humidity, temperature, external noise, etc. The playing parameters may include at least one of: volume, sound effects, speech rate, etc. The wearable device can pre-store a mapping relation between environmental parameters and playing parameters; after the target environmental parameter is obtained, the target playing parameter corresponding to it is determined according to this mapping relation, and the navigation route is played by voice according to the target playing parameter.
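Steps 41-43 can be sketched as a lookup from an environmental parameter to playing parameters; the noise thresholds, volumes, and speech rates below are assumed values for illustration.

```python
from dataclasses import dataclass

@dataclass
class PlaybackParams:
    volume: int         # 0-100
    speech_rate: float  # 1.0 = normal

def playback_params(noise_db):
    """Assumed preset mapping from ambient noise to playing parameters:
    louder surroundings get a higher volume and a slower speech rate."""
    if noise_db < 40:  # quiet room
        return PlaybackParams(volume=30, speech_rate=1.0)
    if noise_db < 70:  # street / mall
        return PlaybackParams(volume=60, speech_rate=0.9)
    return PlaybackParams(volume=85, speech_rate=0.8)  # very noisy

print(playback_params(65))  # -> PlaybackParams(volume=60, speech_rate=0.9)
```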
Optionally, in step 104, the wearable device includes a first voice component and a second voice component;
when the wearable device plays the target audio, playing the navigation route by voice includes:
playing the target audio through the first voice component, and playing the navigation route by voice through the second voice component.
Here, the target audio may be at least one of: music, radio, call voice, etc. The wearable device may comprise a first voice component and a second voice component; for example, a wireless headset comprises a left earbud and a right earbud, where the left earbud may serve as the first voice component and the right earbud as the second voice component. During navigation, when the wearable device is playing the target audio, the first voice component is used to play the target audio and the second voice component is used to play the navigation route by voice.
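A sketch of this split playback follows. The Earbud class is a hypothetical stand-in for the real audio channels, since the patent does not define a software interface.

```python
class Earbud:
    """Hypothetical stand-in for one wireless earbud's audio channel."""
    def __init__(self, side):
        self.side = side

    def play(self, stream):
        print(f"{self.side} earbud playing: {stream}")

left, right = Earbud("left"), Earbud("right")  # first / second voice component

def play_during_navigation(target_audio, navigation_prompt):
    # While target audio (e.g. music) is playing, keep it on the first
    # voice component and speak the navigation route on the second.
    left.play(target_audio)
    right.play(navigation_prompt)

play_during_navigation("music stream", "turn left in 50 meters")
```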
Optionally, in step 104, the voice playing the navigation route may include the following steps:
a1, determining the target fit degree between the wearable device and the ear;
a2, determining a first volume corresponding to the target fitting degree according to a mapping relation between a preset fitting degree and the volume of the wearable device;
a3, controlling the wearable device to play the navigation route in the first volume voice.
The fit degree expresses how tightly the wearable device fits against the ear and can be expressed as a specific numerical value. The wearable device may be provided with a sensor for detecting the fit degree between the wearable device and the ear, and the sensor may comprise at least one of: a pressure sensor, an air pressure sensor, an ultrasonic sensor, a distance sensor, and the like. In specific implementation, a mapping relation between fit degree and volume of the wearable device can be stored in the wearable device in advance; the first volume corresponding to the target fit degree is then determined according to this mapping relation, and at the target fit degree the wearable device can be controlled to play the navigation route by voice at the first volume.
In practical applications, taking a wireless headset at a given volume setting as an example, the more tightly the headset is attached to the ear, the louder it sounds to the user, and the more loosely it is attached, the quieter it sounds.
Optionally, the wearable device includes a pressure sensor, and the step a1 of determining the target fit between the wearable device and the ear may include the steps of:
a11, detecting a target pressure value between the wearable device and the ear;
a12, determining the target fit degree corresponding to the target pressure value according to a preset mapping relation between pressure value and fit degree.
At least one pressure sensor may be disposed where the wearable device contacts the ear, and the at least one pressure sensor may detect a target pressure value between the wearable device and the ear. The target pressure value may be the pressure value of any one of the at least one pressure sensor, the average pressure value over all of them, the maximum pressure value detected, or the minimum pressure value detected, etc. A mapping relation between pressure value and fit degree can be pre-stored in the wearable device, and the target fit degree corresponding to the target pressure value is then determined according to this mapping relation, for example as in the following table:
Pressure value    Fit degree
a~b               K1
b~c               K2
c~d               K3
Wherein a < b < c < d, K1, K2, and K3 are numbers greater than 0.
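The two lookups, pressure value to fit degree (the table above) and fit degree to first volume, can be chained in one sketch; the numeric boundaries standing in for a~d and the values standing in for K1~K3 are assumptions.

```python
# Assumed example boundaries a < b < c < d (in newtons) and fit degrees
# K1 < K2 < K3; the patent keeps these symbolic.
PRESSURE_TO_FIT = [((0.1, 0.3), 1.0),   # a~b -> K1 (loose)
                   ((0.3, 0.6), 2.0),   # b~c -> K2
                   ((0.6, 1.0), 3.0)]   # c~d -> K3 (tight)

# Assumed preset mapping between fit degree and volume: the tighter the
# fit, the lower the volume setting needed (see the text above).
FIT_TO_VOLUME = {1.0: 80, 2.0: 60, 3.0: 40}

def fit_degree(pressure_value):
    for (low, high), k in PRESSURE_TO_FIT:
        if low <= pressure_value < high:
            return k
    raise ValueError("pressure outside the calibrated range")

def first_volume(pressure_value):
    return FIT_TO_VOLUME[fit_degree(pressure_value)]

print(first_volume(0.45))  # fit degree K2 -> volume 60
```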
Optionally, the wearable device includes an air pressure sensor, and the determining the target fit between the wearable device and the ear in step a1 may include the following steps:
a21, detecting a target air pressure value between the wearable device and the ear;
a22, determining the target fit degree corresponding to the target air pressure value according to a preset mapping relation between air pressure value and fit degree.
The wearable device comprises an air pressure sensor, and the target air pressure value between the wearable device and the ear is detected through the air pressure sensor. A mapping relation between air pressure value and fit degree can be stored in the wearable device in advance, and the target fit degree corresponding to the target air pressure value is then determined according to this mapping relation.
Optionally, the wearable device comprises a first voice component and a second voice component; the step a1 of determining the target fit between the wearable device and the ear may include the following steps:
a31, determining a target distance between the first voice component and the second voice component;
a32, determining the target fit degree corresponding to the target distance according to a preset mapping relation between distance and fit degree.
Here, the wearable device may comprise a first voice component and a second voice component; for example, a wireless headset may comprise two earbuds, each provided with part of an ultrasonic sensor, e.g. the left earbud with a transmitter and the right earbud with a receiver, and the target distance between the first voice component and the second voice component is measured through the two earbuds. A mapping relation between distance and fit degree can be stored in the wearable device in advance, and the target fit degree corresponding to the target distance is then determined according to this mapping relation.
Optionally, a mapping relationship set is pre-stored in the wearable device, where the mapping relationship set includes a plurality of mapping relationships, each being a mapping relationship between preset fit degree and volume of the wearable device;
between the above steps a1 and a2, the following steps may be further included:
b1, acquiring current environment parameters;
b2, determining a target mapping relation corresponding to the current environmental parameter according to the corresponding relation between the preset environmental parameter and the mapping relation;
in the step a2, determining the first volume corresponding to the target fit degree according to a mapping relationship between preset fit degree and volume of the wearable device may be implemented as follows:
and determining the first volume corresponding to the target fit degree according to the target mapping relation.
The wearable device may store a mapping relationship set in advance, where the mapping relationship set may include a plurality of mapping relationships, each being a mapping relationship between preset fit degree and volume of the wearable device. The sensor of the wearable device may be an environmental sensor, which may be at least one of: a position sensor, a humidity sensor, a temperature sensor, an external sound detection sensor, and the like. The current environmental parameters can be acquired by the environmental sensor. The wearable device may pre-store a correspondence between environmental parameters and mapping relationships and determine the target mapping relationship corresponding to the current environmental parameter according to this correspondence. Further, the first volume corresponding to the target fit degree can be determined according to the target mapping relationship. A mapping table between environmental parameters and mapping relationships is provided as follows:
Environmental parameter      Mapping relationship
Environmental parameter 1    Mapping relationship 1
Environmental parameter 2    Mapping relationship 2
...                          ...
Environmental parameter n    Mapping relationship n
Thus, different mapping relationships can be adopted under different environmental parameters; for example, if the external environment is noisy, the mapping relationship used differs from the one used in a quiet environment. The embodiments of the present application can provide a mapping relationship corresponding to each environment, so that a volume suited to the environment is obtained.
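A sketch of choosing the mapping by environment (steps B1-B2) and then reading off the first volume (step A2); the two example mappings and the noise threshold are assumptions.

```python
# Assumed per-environment mappings between fit degree and volume: a
# noisy environment uses uniformly higher volumes than a quiet one.
MAPPINGS = {
    "quiet": {1.0: 60, 2.0: 45, 3.0: 30},
    "noisy": {1.0: 90, 2.0: 75, 3.0: 60},
}

def select_mapping(noise_db):
    # Correspondence between environmental parameter and mapping.
    return MAPPINGS["noisy"] if noise_db >= 60 else MAPPINGS["quiet"]

def select_first_volume(noise_db, target_fit):
    target_mapping = select_mapping(noise_db)  # steps B1-B2
    return target_mapping[target_fit]          # step A2

print(select_first_volume(72, 2.0))  # noisy environment, fit K2 -> volume 75
```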
Optionally, after the step a3, the following steps may be further included:
a4, monitoring the target variation of the target fit degree;
a5, when the absolute value of the target variation is larger than a preset threshold, determining a target volume adjustment parameter corresponding to the target variation according to a mapping relation between preset variation and volume adjustment parameters;
a6, determining a second volume according to the first volume and the target volume adjusting parameter;
a7, controlling the wearable device to play the navigation route in the second volume voice.
Here, the wearable device can monitor the target variation of the target fit degree through a sensor; the target variation is the change in fit degree. In practical applications, taking a wireless headset as an example, wearing the headset for a long time, or moving during exercise, easily loosens the fit; conversely, pushing the earbud back in increases the fit degree. The target variation can be obtained through a sensor; for example, if the sensor includes a pressure sensor, the target variation can be determined from the change in pressure value. The preset threshold can be set by the user or defaulted by the system. The volume adjustment parameter may be "+" volume (volume up) or "-" volume (volume down). A mapping relation between preset variation and volume adjustment parameter may be preset in the wearable device, and when the absolute value of the target variation is greater than the preset threshold, the target volume adjustment parameter corresponding to the target variation is determined according to this mapping relation. Once the target volume adjustment parameter is determined, a second volume can be determined from the first volume and the target volume adjustment parameter, for example second volume = first volume + target volume adjustment parameter: if the target fit degree has increased, the second volume is lower than the first volume, and if the target fit degree has decreased, the second volume is higher than the first volume.
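Steps A4-A7 can be sketched as follows; the threshold and the variation-to-adjustment mapping are assumed, and the sign convention matches the text above (tighter fit lowers the volume, looser fit raises it).

```python
PRESET_THRESHOLD = 0.5  # assumed minimum fit-degree change worth acting on

# Assumed preset mapping between fit-degree variation and volume
# adjustment parameter: a looser fit (negative change) turns the volume
# up, a tighter fit (positive change) turns it down.
def volume_adjustment(change):
    step = 10 * round(abs(change))  # coarse, illustrative step size
    return -step if change > 0 else step

def adjust_volume(first_volume, old_fit, new_fit):
    change = new_fit - old_fit      # target variation (step A4)
    if abs(change) <= PRESET_THRESHOLD:
        return first_volume         # no adjustment needed
    second = first_volume + volume_adjustment(change)  # steps A5-A6
    return max(0, min(100, second)) # clamp to a valid volume range

# Earbud worked loose during exercise: fit drops from K3 to K2.
print(adjust_volume(40, old_fit=3.0, new_fit=2.0))  # -> 50 (louder)
```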
It can be seen that the voice navigation method described in the embodiments of the application is applied to a wearable device worn on the head of a user: a target position is obtained through the Internet of Things, a current position is obtained, a navigation route between the current position and the target position is generated, and the navigation route is played by voice. In this way, voice navigation can be realized through the wearable device, the functions of the wearable device are enriched, and the user experience is improved.
Referring to fig. 2, fig. 2 is a schematic flowchart of a voice navigation method disclosed in an embodiment of the present application, and the method is applied to the wearable device shown in fig. 1A, where the wearable device includes a first voice component and a second voice component; the wearable device is worn on the head of a user, and the voice navigation method comprises the following steps.
201. And acquiring the target position through the Internet of things.
202. And acquiring the current position.
203. Generating a navigation route between the current location and the target location.
204. When the wearable device plays the target audio, a first voice component is adopted to play the target audio, and the second voice component is adopted to play the navigation route in a voice mode.
The voice navigation method described in this embodiment of the application is applied to a wearable device worn on the head of a user: a target position is obtained through the Internet of Things, a current position is obtained, a navigation route between the current position and the target position is generated, and when the wearable device is playing target audio, the first voice component is used to play the target audio while the second voice component plays the navigation route by voice. In this way, voice navigation can be realized through the wearable device, the functions of the wearable device are enriched, the user can listen to music while being navigated by voice, and the user experience is improved.
Referring to fig. 3, fig. 3 is a flowchart illustrating a voice navigation method according to an embodiment of the present application, applied to the wearable device shown in fig. 1A, wherein the wearable device is worn on the head of a user, and the voice navigation method includes the following steps.
301. And acquiring the target position through the Internet of things.
302. And acquiring the current position.
303. Generating a navigation route between the current location and the target location.
304. Determining a target fit between the wearable device and the ear.
305. And determining a first volume corresponding to the target fitting degree according to a mapping relation between a preset fitting degree and the volume of the wearable device.
306. And playing the navigation route according to the first volume voice.
It can be seen that the voice navigation method described in this embodiment of the application is applied to a wearable device worn on the head of a user: a target position is obtained through the Internet of Things, a current position is obtained, a navigation route between the current position and the target position is generated, the target fit degree between the wearable device and the ear is determined, the first volume corresponding to the target fit degree is determined according to a preset mapping relation between fit degree and volume of the wearable device, and the navigation route is played by voice at the first volume. In this way, the playing volume is adapted to how tightly the wearable device fits the ear, and the user experience is improved.
Referring to fig. 4, fig. 4 is a schematic structural diagram of another wearable device disclosed in the embodiments of the present application. As shown in the figure, the wearable device includes a processor, a memory, a communication interface, and one or more programs, and is worn on the head of a user, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the following steps:
acquiring a target position through the Internet of things;
acquiring a current position;
generating a navigation route between the current location and the target location;
and playing the navigation route by voice.
It can be seen that, the wearable device described in the embodiment of the application is worn on the head of a user, acquires a target position through the internet of things, acquires a current position, generates a navigation route between the current position and the target position, and plays the navigation route through voice, so that voice navigation can be realized through the wearable device, functions of the wearable device are enriched, and user experience is improved.
In one possible example, in the aspect of the voice playing the navigation route, the program includes instructions for performing the following steps:
acquiring target environment parameters;
determining a target playing parameter corresponding to the target environment parameter;
and playing the navigation route according to the target playing parameter in a voice mode.
In one possible example, the wearable device includes a first voice component and a second voice component;
when the wearable device plays the target audio, playing the navigation route by voice includes:
playing the target audio through the first voice component, and playing the navigation route by voice through the second voice component.
In one possible example, in said generating a navigation route between said current location and said target location, the above program includes instructions for performing the steps of:
determining an average distance of at least one navigation path between the current location and the target location;
determining a target trip mode corresponding to the average distance according to a preset mapping relation between the distance and the trip mode;
and generating a navigation route between the current position and the target position according to the target travel mode.
In one possible example, in the obtaining of the target location through the internet of things, the program includes instructions for:
receiving a search result of searching for the Internet of things node by a target user aiming at the Internet of things, wherein the search result comprises a plurality of Internet of things nodes and a signal intensity value corresponding to each Internet of things node;
selecting three target Internet of things nodes from the plurality of Internet of things nodes, wherein the three target Internet of things nodes are not positioned on the same straight line, and the position of each target Internet of things node is a known quantity;
acquiring a signal intensity value of each target internet of things node in the three target internet of things nodes to obtain three signal intensity values;
and determining the target position according to the three signal strength values and the positions of the three target Internet of things nodes.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the wearable device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the above functions. Those of skill in the art will readily appreciate that the present application is capable of hardware or a combination of hardware and computer software implementing the various illustrative elements and algorithm steps described in connection with the embodiments provided herein. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the wearable device may be divided into the functional units according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a voice navigation apparatus, which is applied to a wearable device worn on the head of a user. The voice navigation apparatus includes an obtaining unit 501, a generating unit 502, and a playing unit 503, wherein,
the acquiring unit 501 is configured to acquire a target location through the internet of things; and obtaining a current position;
the generating unit 502 is configured to generate a navigation route between the current location and the target location;
the playing unit 503 is configured to play the navigation route in voice.
It can be seen that the voice navigation apparatus described in this embodiment of the application is applied to a wearable device worn on the head of a user: the target position is obtained through the Internet of Things, the current position is obtained, the navigation route between the current position and the target position is generated, and the navigation route is played by voice. In this way, voice navigation can be realized through the wearable device, the functions of the wearable device are enriched, and the user experience is improved.
In one possible example, in terms of playing the navigation route by the voice, the playing unit 503 is specifically configured to:
acquiring target environment parameters;
determining a target playing parameter corresponding to the target environment parameter;
and playing the navigation route according to the target playing parameter in a voice mode.
In one possible example, the wearable device includes a first voice component and a second voice component; when the wearable device plays the target audio, in terms of the voice playing the navigation route, the playing unit 503 is specifically configured to:
playing the target audio through the first voice component, and playing the navigation route by voice through the second voice component.
In one possible example, in terms of the generating the navigation route between the current location and the target location, the generating unit 502 is specifically configured to:
determining an average distance of at least one navigation path between the current location and the target location;
determining a target trip mode corresponding to the average distance according to a preset mapping relation between the distance and the trip mode;
and generating a navigation route between the current position and the target position according to the target travel mode.
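As a sketch of the distance-to-travel-mode selection, the following assumes hypothetical thresholds for walking, cycling, and driving; the patent only recites a preset mapping between distance and travel mode:

```python
# Sketch of route generation by travel mode; the distance thresholds and the
# walking/cycling/driving modes are assumptions, not recited by the patent.
def average_distance(paths_km: list[float]) -> float:
    return sum(paths_km) / len(paths_km)

def target_travel_mode(avg_km: float) -> str:
    # Hypothetical preset mapping between distance and travel mode.
    preset = [(1.5, "walking"), (8.0, "cycling"), (float("inf"), "driving")]
    for distance_limit, mode in preset:
        if avg_km <= distance_limit:
            return mode

def generate_navigation_route(current, target, candidate_paths_km):
    mode = target_travel_mode(average_distance(candidate_paths_km))
    # A real implementation would ask a routing service for a route in `mode`.
    return {"from": current, "to": target, "mode": mode}

print(generate_navigation_route("lobby", "gate B12", [2.0, 2.6]))
```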
In one possible example, in terms of obtaining the target location through the internet of things, the obtaining unit 501 is specifically configured to:
receive a search result of a search for internet-of-things nodes performed by a target user on the internet of things, wherein the search result includes a plurality of internet-of-things nodes and a signal strength value corresponding to each internet-of-things node;
select three target internet-of-things nodes from the plurality of internet-of-things nodes, wherein the three target internet-of-things nodes are not located on the same straight line and the position of each target internet-of-things node is known;
acquire the signal strength value of each of the three target internet-of-things nodes to obtain three signal strength values;
and determine the target location according to the three signal strength values and the positions of the three target internet-of-things nodes.
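The three-node positioning described above is standard RSSI trilateration: each signal strength value is converted to a distance, and the position is solved from the three range circles. A sketch under assumed radio constants (the transmit power and path-loss exponent are not specified by the patent):

```python
# Sketch of the three-node positioning step: RSSI is converted to distance
# with a log-distance path-loss model, then the position is solved by
# trilateration. The constants tx_power_dbm and n are assumptions.
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, n=2.5):
    """Log-distance path loss: rssi = tx_power - 10*n*log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Solve for (x, y) given three non-collinear anchors and their ranges."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise gives a 2x2 linear system.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # non-zero because the nodes are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

nodes = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]           # known node positions
ranges = [rssi_to_distance(r) for r in (-55, -60, -58)]  # measured RSSI values
print(trilaterate(*nodes, *ranges))
```

Requiring the three nodes to be non-collinear is what guarantees the determinant above is non-zero, so the system always has a unique solution.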
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps of any method described in the above method embodiments; the computer includes a wearable device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps of any method described in the above method embodiments. The computer program product may be a software installation package, and the computer includes a wearable device.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of combined actions. However, those skilled in the art will recognize that the present application is not limited by the order of the actions described, as some steps may be performed in other orders or concurrently. Those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a division of logical functions, and other divisions may be adopted in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, or a magnetic or optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware. The program may be stored in a computer-readable memory, and the memory may include a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The embodiments of the present application have been described in detail above, and specific examples have been used herein to illustrate the principles and implementations of the present application; the above description of the embodiments is only intended to help understand the method and core concept of the present application. Meanwhile, a person skilled in the art may make changes to the specific implementation and application scope according to the idea of the present application. In view of the above, the content of this specification should not be construed as a limitation on the present application.

Claims (7)

1. A wearable device, comprising a processing circuit, and a communication circuit, a sensor, and an audio component connected to the processing circuit, wherein:
the communication circuit is configured to establish a network connection between the wearable device and the internet of things in an indoor navigation environment, and to obtain a target location through the internet of things;
the sensor is configured to obtain a current location;
the processing circuit is configured to generate a navigation route between the current location and the target location;
the audio component is configured to play the navigation route by voice, specifically by: determining a target fitting degree between the wearable device and an ear; determining a first volume corresponding to the target fitting degree according to a preset mapping relation between fitting degree and volume of the wearable device; and controlling the wearable device to play the navigation route by voice at the first volume;
monitoring a target variation of the target fitting degree;
when the absolute value of the target variation is larger than a preset threshold, determining a target volume adjustment parameter corresponding to the target variation according to a preset mapping relation between variation and volume adjustment parameter;
determining a second volume according to the first volume and the target volume adjustment parameter;
and controlling the wearable device to play the navigation route by voice at the second volume;
wherein the wearable device comprises a first voice component and a second voice component;
when the wearable device is playing a target audio, in terms of playing the navigation route by voice, the audio component is specifically configured to:
play the target audio through the first voice component, and play the navigation route by voice through the second voice component;
wherein, in terms of obtaining the target location through the internet of things, the communication circuit is specifically configured to:
acquire a target voice signal sent by a target user through the internet of things;
analyze the target voice signal to obtain a plurality of target pronunciation features;
determine a language type corresponding to each of the plurality of target pronunciation features according to a preset mapping relation between pronunciation features and language types, to obtain a plurality of language types;
select, from the plurality of language types, the language type occurring most frequently as a target language type;
acquire a target analysis model corresponding to the target language type;
and analyze the target voice signal according to the target analysis model to obtain target content, and extract the target location from the target content;
wherein the first voice component and the second voice component are respectively disposed at the left ear and the right ear of the user, and the determining the target fitting degree between the wearable device and the ear comprises:
determining a target distance between the first voice component and the second voice component;
and determining the target fitting degree corresponding to the target distance according to a preset mapping relation between distance and fitting degree.
2. The wearable device according to claim 1, wherein, in terms of generating the navigation route between the current location and the target location, the processing circuit is specifically configured to:
determine an average distance of at least one navigation path between the current location and the target location;
determine a target travel mode corresponding to the average distance according to a preset mapping relation between distance and travel mode;
and generate the navigation route between the current location and the target location according to the target travel mode.
3. A voice navigation method, applied to a wearable device, the method comprising:
establishing, in an indoor navigation environment, a network connection between the wearable device and the internet of things, and obtaining a target location through the internet of things;
obtaining a current location;
generating a navigation route between the current location and the target location;
playing the navigation route by voice, which specifically comprises: determining a target fitting degree between the wearable device and an ear; determining a first volume corresponding to the target fitting degree according to a preset mapping relation between fitting degree and volume of the wearable device; and controlling the wearable device to play the navigation route by voice at the first volume;
monitoring a target variation of the target fitting degree;
when the absolute value of the target variation is larger than a preset threshold, determining a target volume adjustment parameter corresponding to the target variation according to a preset mapping relation between variation and volume adjustment parameter;
determining a second volume according to the first volume and the target volume adjustment parameter;
and controlling the wearable device to play the navigation route by voice at the second volume;
wherein the wearable device comprises a first voice component and a second voice component;
when the wearable device is playing a target audio, the playing the navigation route by voice comprises:
playing the target audio through the first voice component, and playing the navigation route by voice through the second voice component;
wherein the obtaining the target location through the internet of things comprises:
acquiring a target voice signal sent by a target user through the internet of things;
analyzing the target voice signal to obtain a plurality of target pronunciation features;
determining a language type corresponding to each of the plurality of target pronunciation features according to a preset mapping relation between pronunciation features and language types, to obtain a plurality of language types;
selecting, from the plurality of language types, the language type occurring most frequently as a target language type;
acquiring a target analysis model corresponding to the target language type;
and analyzing the target voice signal according to the target analysis model to obtain target content, and extracting the target location from the target content;
wherein the first voice component and the second voice component are respectively disposed at the left ear and the right ear of the user, and the determining the target fitting degree between the wearable device and the ear comprises:
determining a target distance between the first voice component and the second voice component;
and determining the target fitting degree corresponding to the target distance according to a preset mapping relation between distance and fitting degree.
4. The method according to claim 3, wherein the generating the navigation route between the current location and the target location comprises:
determining an average distance of at least one navigation path between the current location and the target location;
determining a target travel mode corresponding to the average distance according to a preset mapping relation between distance and travel mode;
and generating the navigation route between the current location and the target location according to the target travel mode.
5. A voice navigation apparatus, applied to a wearable device, the apparatus comprising an obtaining unit, a generating unit, and a playing unit, wherein:
the obtaining unit is configured to establish a network connection between the wearable device and the internet of things in an indoor navigation environment, to obtain a target location through the internet of things, and to obtain a current location;
the generating unit is configured to generate a navigation route between the current location and the target location;
the playing unit is configured to play the navigation route by voice, specifically by: determining a target fitting degree between the wearable device and an ear; determining a first volume corresponding to the target fitting degree according to a preset mapping relation between fitting degree and volume of the wearable device; and controlling the wearable device to play the navigation route by voice at the first volume;
monitoring a target variation of the target fitting degree;
when the absolute value of the target variation is larger than a preset threshold, determining a target volume adjustment parameter corresponding to the target variation according to a preset mapping relation between variation and volume adjustment parameter;
determining a second volume according to the first volume and the target volume adjustment parameter;
and controlling the wearable device to play the navigation route by voice at the second volume;
wherein the wearable device comprises a first voice component and a second voice component;
when the wearable device is playing a target audio, the playing the navigation route by voice comprises:
playing the target audio through the first voice component, and playing the navigation route by voice through the second voice component;
wherein the obtaining the target location through the internet of things comprises:
acquiring a target voice signal sent by a target user through the internet of things;
analyzing the target voice signal to obtain a plurality of target pronunciation features;
determining a language type corresponding to each of the plurality of target pronunciation features according to a preset mapping relation between pronunciation features and language types, to obtain a plurality of language types;
selecting, from the plurality of language types, the language type occurring most frequently as a target language type;
acquiring a target analysis model corresponding to the target language type;
and analyzing the target voice signal according to the target analysis model to obtain target content, and extracting the target location from the target content;
wherein the first voice component and the second voice component are respectively disposed at the left ear and the right ear of the user, and the determining the target fitting degree between the wearable device and the ear comprises:
determining a target distance between the first voice component and the second voice component;
and determining the target fitting degree corresponding to the target distance according to a preset mapping relation between distance and fitting degree.
6. A wearable device, comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps of the method according to claim 3 or 4.
7. A computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to claim 3 or 4.
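To make the fit-degree-based volume control of claims 1, 3, and 5 concrete, the following sketch derives the target fitting degree from the inter-component distance, maps it to a first volume, and recomputes the volume when the fitting degree changes by more than a threshold. The nominal head width, every mapping table, and the threshold are assumptions; the claims only recite preset mapping relations:

```python
# Sketch of the claimed volume control. The nominal 15 cm head width, every
# mapping, and the 0.2 threshold are assumptions, not recited by the claims.
def target_fitting_degree(component_distance_cm: float) -> float:
    # Hypothetical preset mapping between distance and fitting degree:
    # a spacing near the assumed 15 cm head width means a tight fit (~1.0).
    return max(0.0, 1.0 - abs(component_distance_cm - 15.0) / 15.0)

def first_volume(fit: float) -> float:
    # Hypothetical preset mapping: the looser the fit, the higher the volume.
    return min(1.0, 1.0 - 0.5 * fit)

def second_volume(volume: float, fit_change: float, threshold: float = 0.2) -> float:
    if abs(fit_change) <= threshold:
        return volume  # variation too small: keep playing at the first volume
    # Hypothetical mapping between variation and volume adjustment parameter.
    adjustment = -0.5 * fit_change  # fit dropped -> raise the volume
    return min(1.0, max(0.0, volume + adjustment))

fit_before = target_fitting_degree(16.0)                 # device worn snugly
volume = first_volume(fit_before)                        # ~0.53
fit_after = target_fitting_degree(20.0)                  # device shifted outwards
volume = second_volume(volume, fit_after - fit_before)   # ~0.67: louder playback
```

The threshold keeps the volume from chasing every small wobble of the device; only a sustained loosening or tightening triggers an adjustment.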
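The language-type selection in the same claims amounts to a majority vote over per-feature classifications. A minimal sketch, assuming the pronunciation features have already been extracted and using a hypothetical feature-to-language mapping:

```python
# Minimal sketch of the claimed majority-vote language selection. The
# feature names and the mapping table are illustrative placeholders.
from collections import Counter

FEATURE_TO_LANGUAGE = {          # hypothetical preset mapping
    "retroflex": "zh", "tone_contour": "zh",
    "th_fricative": "en", "stress_timing": "en",
}

def target_language(features: list[str]) -> str:
    """The language type occurring most often among the features wins."""
    votes = [FEATURE_TO_LANGUAGE[f] for f in features if f in FEATURE_TO_LANGUAGE]
    return Counter(votes).most_common(1)[0][0]

print(target_language(["retroflex", "tone_contour", "th_fricative"]))  # -> "zh"
```

The selected language type would then pick the analysis model used to parse the voice signal and extract the target location, as recited in the claims.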
CN201810574609.8A 2018-06-06 2018-06-06 Voice navigation method and related product Expired - Fee Related CN108827338B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810574609.8A CN108827338B (en) 2018-06-06 2018-06-06 Voice navigation method and related product


Publications (2)

Publication Number Publication Date
CN108827338A CN108827338A (en) 2018-11-16
CN108827338B (en) 2021-06-25

Family

ID=64144039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810574609.8A Expired - Fee Related CN108827338B (en) 2018-06-06 2018-06-06 Voice navigation method and related product

Country Status (1)

Country Link
CN (1) CN108827338B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020086658A (en) * 2018-11-19 2020-06-04 ナブテスコ株式会社 Information processing apparatus, information processing system, information processing method, and similarity determination method
CN111148167A (en) * 2019-03-18 2020-05-12 广东小天才科技有限公司 Operator network switching method of wearable device and wearable device
CN113834478A (en) * 2020-06-23 2021-12-24 阿里巴巴集团控股有限公司 Travel method, target object guiding method and wearable device


Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8401200B2 (en) * 2009-11-19 2013-03-19 Apple Inc. Electronic device and headset with speaker seal evaluation capabilities
CN101917656A (en) * 2010-08-30 2010-12-15 鸿富锦精密工业(深圳)有限公司 Automatic volume adjustment device and method
US9042588B2 (en) * 2011-09-30 2015-05-26 Apple Inc. Pressure sensing earbuds and systems and methods for the use thereof
US9453902B2 (en) * 2013-03-13 2016-09-27 Intel Corporation Dead zone location detection apparatus and method
CN104280038A (en) * 2013-07-12 2015-01-14 中国电信股份有限公司 Navigation method and navigation device
CN104507003A (en) * 2014-11-28 2015-04-08 广东好帮手电子科技股份有限公司 A method and a system for adjusting a volume intelligently according to a noise in a vehicle
CN105744410A (en) * 2014-12-10 2016-07-06 曾辉赛 Headset with memory-prompting function suitable for old people
CN104702763A (en) * 2015-03-04 2015-06-10 乐视致新电子科技(天津)有限公司 Method, device and system for adjusting volume
US9677901B2 (en) * 2015-03-10 2017-06-13 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for providing navigation instructions at optimal times
CN105246000A (en) * 2015-10-28 2016-01-13 维沃移动通信有限公司 Method for improving sound quality of headset and mobile terminal
CN107843250A (en) * 2017-10-17 2018-03-27 三星电子(中国)研发中心 Vibration air navigation aid, device and wearable device for wearable device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101988835A (en) * 2009-07-30 2011-03-23 黄金富 Navigation directing system adopting electronic compass for pedestrians and corresponding method
CN104902359A (en) * 2014-03-06 2015-09-09 昆山研达电脑科技有限公司 Navigation earphones
CN107403232A (en) * 2016-05-20 2017-11-28 北京搜狗科技发展有限公司 A kind of navigation control method, device and electronic equipment
CN106896528A (en) * 2017-03-15 2017-06-27 苏州创必成电子科技有限公司 Bluetooth spectacles with ear-phone function

Also Published As

Publication number Publication date
CN108827338A (en) 2018-11-16

Similar Documents

Publication Publication Date Title
CN108810693B (en) Wearable device and device control device and method thereof
CN109511037B (en) Earphone volume adjusting method and device and computer readable storage medium
CN109040887A (en) Principal and subordinate&#39;s earphone method for handover control and Related product
CN108668009B (en) Input operation control method, device, terminal, earphone and readable storage medium
WO2018045536A1 (en) Sound signal processing method, terminal, and headphones
EP3598435B1 (en) Method for processing information and electronic device
WO2020019847A1 (en) Method for switching main headset, and related device
CN108966067B (en) Play control method and related product
CN108886653B (en) Earphone sound channel control method, related equipment and system
CN108827338B (en) Voice navigation method and related product
CN108540660B (en) Voice signal processing method and device, readable storage medium and terminal
CN108777827B (en) Wireless earphone, volume adjusting method and related product
CN109918039A (en) A kind of volume adjusting method and mobile terminal
CN107182011B (en) Audio playing method and system, mobile terminal and WiFi earphone
JP2018078398A (en) Autonomous assistant system using multifunctional earphone
CN106506437B (en) Audio data processing method and device
CN107863110A (en) Safety prompt function method, intelligent earphone and storage medium based on intelligent earphone
CN108737923A (en) Volume adjusting method and related product
CN109039355B (en) Voice prompt method and related product
CN107786714B (en) Sound control method, apparatus and system based on vehicle-mounted multimedia equipment
CN106126160A (en) A kind of effect adjusting method and user terminal
CN114125639A (en) Audio signal processing method and device and electronic equipment
CN110460721A (en) A kind of starting method, device and mobile terminal
CN109873894B (en) Volume adjusting method and mobile terminal
WO2022057365A1 (en) Noise reduction method, terminal device, and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210625