CN109039355B - Voice prompt method and related product - Google Patents


Info

Publication number
CN109039355B
CN109039355B
Authority
CN
China
Prior art keywords
vehicle
determining
road condition
driving
condition state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810550406.5A
Other languages
Chinese (zh)
Other versions
CN109039355A (en)
Inventor
张伟正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810550406.5A
Publication of CN109039355A
Application granted
Publication of CN109039355B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 1/00 Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B 1/38 Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
    • H04B 1/3827 Portable transceivers
    • H04B 1/385 Transceivers carried on the body, e.g. in helmets
    • H04B 2001/3872 Transceivers carried on the body, e.g. in helmets, with extendable microphones or earphones
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01P MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P 3/00 Measuring linear or angular speed; Measuring differences of linear or angular speeds
    • G01P 3/36 Devices characterised by the use of optical means, e.g. using infrared, visible, or ultraviolet light
    • G01P 3/38 Devices characterised by the use of optical means, e.g. using infrared, visible, or ultraviolet light, using photographic means
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]

Abstract

The application discloses a voice prompt method and a related product, applied to a wearable device that comprises a processing circuit, a communication circuit, and an audio component, the communication circuit and the audio component being connected to the processing circuit. The method comprises the following steps: after the wearable device is connected to a vehicle-mounted device, acquiring driving information collected by the vehicle-mounted device; determining a road condition state from the driving information; and issuing a voice prompt when the road condition state is a preset state. With the embodiments of the application, the wearable device can establish a network connection with the vehicle-mounted device to acquire driving information, determine the road condition state from that information, and issue voice prompts accordingly, enriching the functions of the wearable device and improving the user experience.

Description

Voice prompt method and related product
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a voice prompt method and a related product.
Background
As wireless technology has matured, wireless earphones are connected to electronic devices such as mobile phones over wireless links in more and more scenarios. Through a wireless earphone, users can listen to music, make calls, and so on. However, current wireless earphones offer only this narrow range of functions, which degrades the user experience.
Disclosure of Invention
The embodiments of the application provide a voice prompt method and a related product, which can provide road condition prompts, enrich the functions of wearable devices, and improve the user experience.
In a first aspect, embodiments of the present application provide a wearable device including a processing circuit, and a communication circuit and an audio component connected to the processing circuit, wherein,
the communication circuit is used for acquiring the driving information acquired by the vehicle-mounted equipment after the wearable equipment is connected with the vehicle-mounted equipment;
the processing circuit is used for determining the road condition state according to the driving information;
and the audio component is used for issuing a voice prompt when the road condition state is a preset state.
In a second aspect, an embodiment of the present application provides a voice prompt method, which is applied to a wearable device, and the method includes:
after the wearable equipment is connected with the vehicle-mounted equipment, acquiring driving information acquired by the vehicle-mounted equipment;
determining a road condition state according to the driving information;
and when the road condition state is a preset state, issuing a voice prompt.
In a third aspect, an embodiment of the present application provides a voice prompt apparatus, which is applied to a wearable device, where the voice prompt apparatus includes an obtaining unit, a determining unit, and a prompting unit, where:
the acquisition unit is used for acquiring the driving information acquired by the vehicle-mounted equipment after the wearable equipment is connected with the vehicle-mounted equipment;
the determining unit is used for determining the road condition state according to the driving information;
and the prompting unit is used for issuing a voice prompt when the road condition state is a preset state.
In a fourth aspect, embodiments of the present application provide a wearable device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps of any of the methods of the second aspect of the embodiments of the present application.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program for electronic data exchange, the computer program causing a computer to perform some or all of the steps described in any one of the methods of the second aspect of the embodiments of the present application.
In a sixth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in any one of the methods of the second aspect of the present application. The computer program product may be a software installation package.
It can be seen that the voice prompt method and related product described in the embodiments of the present application are applied to a wearable device: after the wearable device is connected to a vehicle-mounted device, it acquires the driving information collected by that device, determines the road condition state from the driving information, and issues a voice prompt when the road condition state is a preset state. This enriches the functions of the wearable device and improves the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to describe the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1A is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application;
fig. 1B is a schematic flowchart of a voice prompt method disclosed in an embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating another voice prompt method disclosed in the embodiments of the present application;
FIG. 3 is a flow chart illustrating another method for voice prompting disclosed in the embodiments of the present application;
fig. 4 is a schematic structural diagram of another wearable device disclosed in the embodiments of the present application;
fig. 5 is a schematic structural diagram of a voice prompt apparatus disclosed in an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The wearable device may include at least one of: wireless earphones, brain wave acquisition devices, Augmented Reality (AR)/Virtual Reality (VR) devices, smart glasses, and the like. The wireless earphones may communicate via wireless fidelity (Wi-Fi), Bluetooth, visible light communication, or invisible light communication (infrared or ultraviolet communication) technology, among others. The embodiments of the present application take a wireless headset as an example; the wireless headset includes a left earplug and a right earplug, each of which can operate as an independent component.
Optionally, the wireless headset may be an ear-hook headset, an ear-plug headset, or a headset, which is not limited in the embodiments of the present application.
The wireless headset may be housed in a headset case, which may include: two receiving cavities (a first receiving cavity and a second receiving cavity) sized and shaped to receive a pair of wireless headsets (a left earbud and a right earbud); one or more earphone housing magnetic components disposed within the case for magnetically attracting and respectively magnetically securing a pair of wireless earphones into the two receiving cavities. The earphone box may further include an earphone cover. Wherein the first receiving cavity is sized and shaped to receive a first wireless headset and the second receiving cavity is sized and shaped to receive a second wireless headset.
The wireless headset may include a headset housing, a rechargeable battery (e.g., a lithium battery) disposed within the housing, a plurality of metal contacts for connecting the battery to a charging device, and a speaker assembly including a directional sound port. A driver unit comprising a magnet, a voice coil, and a diaphragm emits sound through the directional sound port, and the metal contacts are disposed on an exterior surface of the headset housing.
In one possible implementation, the wireless headset may further include a touch area, which may be located on an outer surface of the headset housing, and at least one touch sensor is disposed in the touch area for detecting a touch operation, and the touch sensor may include a capacitive sensor. When a user touches the touch area, the at least one capacitive sensor may detect a change in self-capacitance to recognize a touch operation.
In one possible implementation, the wireless headset may further include an acceleration sensor and a triaxial gyroscope, the acceleration sensor and the triaxial gyroscope may be disposed within the headset housing, and the acceleration sensor and the triaxial gyroscope are used to identify a picking up action and a taking down action of the wireless headset.
In a possible implementation, the wireless headset may further include at least one air pressure sensor disposed on a surface of the headset housing to detect the air pressure in the ear after the headset is worn; how tightly the headset is worn can be inferred from this pressure. When loose wear is detected, the wireless headset can send prompt information to a connected electronic device to warn the user that the headset is at risk of falling out.
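The looseness check described above can be sketched as a simple threshold test. This is a minimal illustration only: the pressure-drop threshold, function names, and return strings are assumptions, not details from the patent.

```python
# Hypothetical sketch: detecting loose wear from in-ear air pressure.
# The threshold value is an illustrative assumption.
LOOSE_PRESSURE_DELTA = 120.0  # Pa below the sealed baseline

def is_worn_loosely(baseline_pa: float, current_pa: float) -> bool:
    """A sealed ear canal holds pressure near the baseline captured when
    the earbud was inserted; a large drop suggests the seal is broken
    and the earbud may fall out."""
    return (baseline_pa - current_pa) > LOOSE_PRESSURE_DELTA

def check_fit(baseline_pa: float, current_pa: float) -> str:
    """Return the prompt decision for the connected electronic device."""
    if is_worn_loosely(baseline_pa, current_pa):
        return "prompt: earbud may fall"
    return "ok"
```

In practice the baseline would be resampled each time the headset is put on, since ambient pressure varies with altitude and weather.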
The following describes embodiments of the present application in detail.
Referring to fig. 1A, fig. 1A is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application, the wearable device 100 includes a storage and processing circuit 110, and a communication circuit 120 and an audio component 140 connected to the storage and processing circuit 110, wherein:
the wearable device 100 may include control circuitry, which may include storage and processing circuitry 110. The storage and processing circuitry 110 may be a memory, such as a hard drive memory, a non-volatile memory (e.g., flash memory or other electronically programmable read-only memory used to form a solid state drive, etc.), a volatile memory (e.g., static or dynamic random access memory, etc.), etc., and the embodiments of the present application are not limited thereto. The processing circuitry in the storage and processing circuitry 110 may be used to control the operation of the wearable device 100. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry 110 may be used to run software in the wearable device 100, such as an Internet browsing application, a Voice Over Internet Protocol (VOIP) phone call application, an email application, a media playing application, operating system functions, and so forth. Such software may be used to perform control operations such as camera-based image capture, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functionality based on status indicators such as light emitting diode status indicator lights, touch event detection based on touch sensors, functionality associated with displaying information on multiple (e.g., layered) displays, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the wearable device 100; the embodiments of the present application are not limited in this respect.
The wearable device 100 may also include input-output circuitry 150, which enables the wearable device 100 to receive data from external devices and to output its own data to them. The input-output circuitry 150 may further include a sensor 170. The sensors 170 may include ambient light sensors, proximity sensors based on light and capacitance, touch sensors (e.g., optical and/or capacitive touch sensors, which may be part of a touch display screen or used independently as a touch sensor structure), acceleration sensors, ultrasonic sensors, and other sensors. The ultrasonic sensor may include at least one receiver and a microphone: the microphone emits ultrasonic waves and the receiver receives them, the two together forming the ultrasonic sensor.
Input-output circuitry 150 may also include one or more displays, such as display 130. Display 130 may include one or a combination of liquid crystal displays, organic light emitting diode displays, electronic ink displays, plasma displays, displays using other display technologies. Display 130 may include an array of touch sensors (i.e., display 130 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The audio component 140 may be used to provide audio input and output functionality for the wearable device 100. The audio components 140 in the wearable device 100 may include speakers, microphones, buzzers, tone generators, and other components for generating and detecting sounds.
The communication circuit 120 may be used to provide the wearable device 100 with the ability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals. The wireless communication circuitry in communication circuitry 120 may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless Communication circuitry in Communication circuitry 120 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving Near Field coupled electromagnetic signals. For example, the communication circuit 120 may include a near field communication antenna and a near field communication transceiver. The communications circuitry 120 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuitry and antenna, and so forth.
The wearable device 100 may further include a battery, power management circuitry, and other input-output units 160. The input-output unit 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, and the like.
A user may input commands through the input-output circuitry 150 to control operation of the wearable device 100, and may use output data of the input-output circuitry 150 to enable receiving status information and other outputs from the wearable device 100.
Based on the wearable device described in fig. 1A above, the following functions may be implemented:
the communication circuit 120 is configured to acquire driving information acquired by a vehicle-mounted device after the wearable device is connected to the vehicle-mounted device;
the processing circuit is used for determining the road condition state according to the driving information;
the audio component 140 is configured to perform voice prompt when the road condition status is a preset status.
It can be seen that the wearable device described in this embodiment of the application acquires the driving information collected by the vehicle-mounted device once connected to it, determines the road condition state from that information, and issues a voice prompt when the state is a preset state. The wearable device can thus establish a network connection with the vehicle-mounted device to acquire driving information and prompt the user by voice according to the road condition, enriching the functions of the wearable device and improving the user experience.
In one possible example, in the aspect of determining the road condition state according to the driving information, the processing circuit is specifically configured to:
determining lane changing frequency of the vehicle according to the driving information;
and determining the road condition state according to the vehicle lane change frequency.
In one possible example, in the aspect of determining the road condition state according to the driving information, the processing circuit is specifically configured to:
extracting two adjacent frames of driving images from the driving information;
determining the average speed of surrounding vehicles according to the two driving images;
acquiring the current vehicle speed;
determining a difference between the current vehicle speed and the surrounding vehicle average speed;
and determining the road condition state according to the difference.
In one possible example, in the determining the average speed of the surrounding vehicle from the two driving images, the processing circuit is specifically configured to:
determining at least one target vehicle appearing in the two driving images;
mapping the two frames of driving images into a 3D map;
marking a location of the at least one target vehicle in the 3D map;
determining the average speed of the surrounding vehicles according to the position of the at least one target vehicle.
In one possible example, in the aspect of performing voice prompt, the audio component 140 is specifically configured to:
acquiring target environment parameters;
determining a target playing parameter corresponding to the target environment parameter;
and carrying out voice prompt according to the target playing parameters.
In one possible example, based on the wearable device described in fig. 1A above, the following voice prompt method may be implemented:
the communication circuit 120 acquires driving information acquired by the vehicle-mounted device after the wearable device is connected to the vehicle-mounted device;
the processing circuit determines the road condition state according to the driving information;
the audio component 140 performs a voice prompt when the road condition status is a preset status.
Referring to fig. 1B, fig. 1B is a schematic flow chart of a voice prompt method according to an embodiment of the present application. The voice prompting method is applied to the wearable device shown in fig. 1A, the wearable device is worn on the head of a user, and the voice prompting method comprises the following steps.
101. After the wearable device is connected with the vehicle-mounted device, the driving information collected by the vehicle-mounted device is obtained.
Wherein, the vehicle-mounted equipment can comprise at least one of the following: a vehicle event data recorder, a vehicle network, a vehicle refrigerator, a vehicle computer, a vehicle charger, etc. The driving information may include at least one of: driving images, navigation information, vehicle-mounted broadcast information, and the like. The vehicle-mounted equipment can collect the driving information, and after the wearable equipment is connected with the vehicle-mounted equipment, the wearable equipment can acquire the driving information collected by the vehicle-mounted equipment.
102. And determining the road condition state according to the driving information.
In the driving process, the vehicle-mounted equipment can record driving information, and further can determine the road condition state according to the driving information. In the embodiment of the present application, the road condition status may be at least one of the following: a normal state, an overspeed state, a slow-moving state, a congestion state, a traffic accident state, a dangerous state, etc., without limitation.
Optionally, in the step 102, determining the road condition state according to the driving information may include the following steps:
211. determining lane changing frequency of the vehicle according to the driving information;
212. and determining the road condition state according to the vehicle lane change frequency.
The driving information includes driving images, which can be analyzed to obtain the vehicle lane change frequency. Specifically, turn signals in the driving images can be recognized, classified, and counted, and the lane change frequency determined from the number of each kind of turn signal. The wearable device stores in advance a mapping between vehicle lane change frequency and road condition state, and determines the road condition state corresponding to the measured lane change frequency from this mapping. An example of such a mapping is as follows:
Vehicle lane change frequency    Road condition state
(f1, f2)                         A
(f2, f3)                         B
(f3, f4)                         C
Here f1 < f2 < f3 < f4 and all are greater than 0; for example, if the vehicle lane change frequency falls within (f1, f2), the road condition state is A. In a concrete implementation, a high lane change frequency may indicate that a traffic accident has occurred ahead.
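The table above is a simple interval lookup. The sketch below illustrates it; the numeric thresholds, the units (lane changes per minute), and the state labels are placeholders, since the patent leaves f1 through f4 abstract.

```python
# Illustrative thresholds for the frequency-to-state mapping; the patent
# does not specify values, so these are assumptions for demonstration.
F1, F2, F3, F4 = 0.5, 2.0, 5.0, 10.0  # lane changes per minute (assumed)

def road_condition_from_lane_changes(freq: float) -> str:
    """Look up the road condition state for an observed lane change
    frequency, following the interval table above."""
    if F1 < freq <= F2:
        return "A"
    if F2 < freq <= F3:
        return "B"
    if F3 < freq <= F4:
        return "C"
    return "unknown"  # outside the mapped ranges
```

How the interval endpoints are assigned (open vs. half-open) is arbitrary here; a real implementation would fix that convention when the mapping table is built.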
Optionally, in the step 102, determining the road condition state according to the driving information may include the following steps:
221. extracting two adjacent frames of driving images from the driving information;
222. determining the average speed of surrounding vehicles according to the two driving images;
223. acquiring the current vehicle speed;
224. determining a difference between the current vehicle speed and the surrounding vehicle average speed;
225. and determining the road condition state according to the difference.
Two adjacent frames of driving images can be extracted from the driving information; any two adjacent frames will do. The average speed of the surrounding vehicles can then be determined from these two frames. The wearable device can also obtain the current vehicle speed, for example from the vehicle-mounted device, or via an acceleration sensor among the wearable device's sensors. The wearable device may pre-store a mapping between the speed difference and the road condition state; after determining the difference between the current vehicle speed and the average speed of the surrounding vehicles, it determines the corresponding road condition state from this mapping. In a specific implementation, if the average speed of the surrounding vehicles is lower than the current vehicle speed, the current vehicle may be overspeeding; if it is higher, a dangerous state may be indicated.
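Steps 223 through 225 amount to comparing the own speed against the surrounding average and classifying the gap. A minimal sketch follows; the 15 km/h threshold and the state names are illustrative assumptions, not values from the patent.

```python
def road_condition_from_speed_gap(current_kmh: float,
                                  surrounding_avg_kmh: float) -> str:
    """Classify the road condition from the difference between the current
    vehicle speed and the average speed of surrounding vehicles.
    The +/-15 km/h band is an assumed placeholder threshold."""
    diff = current_kmh - surrounding_avg_kmh
    if diff > 15.0:
        return "overspeed"   # much faster than surrounding traffic
    if diff < -15.0:
        return "dangerous"   # surrounding traffic much faster than us
    return "normal"
```

As with the lane-change table, a deployed system would store this as a pre-computed mapping from difference ranges to states rather than hard-coded branches.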
Optionally, in the step 222, determining the average speed of the surrounding vehicle according to the two driving images may include the following steps:
2221. determining at least one target vehicle appearing in the two driving images;
2222. mapping the two frames of driving images into a 3D map;
2223. marking a location of the at least one target vehicle in the 3D map;
2224. determining the average speed of the surrounding vehicles according to the position of the at least one target vehicle.
Because a vehicle continuously overtakes, or is overtaken by, other vehicles while driving, at least one target vehicle appearing in both frames can be determined (the driving direction of each target vehicle being consistent). The two frames of driving images can be mapped into a 3D map and the position of each target vehicle marked in it, so that for any target vehicle two positions are obtained. The distance between the two positions is measured in the 3D map; since a time difference also exists between the two frames, the speed of the target vehicle equals that distance divided by the time difference. Averaging the speeds of all target vehicles yields the average speed of the surrounding vehicles.
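The distance-over-time computation above can be sketched as follows. The representation is an assumption for illustration: each frame's marked positions are modeled as a dictionary mapping a vehicle identifier to a 2D position in the 3D map's ground plane (metres), with the frame interval given in seconds.

```python
import math

def average_surrounding_speed(positions_t1: dict,
                              positions_t2: dict,
                              dt_s: float) -> float:
    """positions_t1 / positions_t2 map vehicle id -> (x, y) in metres,
    as marked in the 3D map for two adjacent frames. Each vehicle seen
    in both frames contributes speed = distance / time difference; the
    result is the mean over those vehicles (0.0 if none match)."""
    speeds = []
    for vid, (x1, y1) in positions_t1.items():
        if vid not in positions_t2:
            continue  # vehicle not visible in both frames
        x2, y2 = positions_t2[vid]
        speeds.append(math.hypot(x2 - x1, y2 - y1) / dt_s)
    return sum(speeds) / len(speeds) if speeds else 0.0
```

Note this gives speeds relative to the map frame only if the own vehicle's motion has already been compensated when mapping the images into the 3D map; otherwise the computed values are speeds relative to the own vehicle.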
103. And when the road condition state is a preset state, carrying out voice prompt.
The preset state can be set by system default or by the user, and voice prompt is performed when the road condition state matches the preset state. Specifically, the user can be prompted that the vehicle is in the preset state, and of course a solution corresponding to the preset state can also be prompted.
Optionally, the step 103 of performing voice prompt may include the following steps:
31. acquiring target environment parameters;
32. determining a target playing parameter corresponding to the target environment parameter;
33. and carrying out voice prompt according to the target playing parameters.
Wherein, the sensor of the wearable device may be an environmental sensor, and the environmental sensor may be at least one of the following: a position sensor, a humidity sensor, a temperature sensor, an external sound detection sensor, and the like. The target environmental parameter may be acquired by an environmental sensor, and may include at least one of: location, humidity, temperature, external noise, etc. The playing parameters may include at least one of the following: volume, sound effects, speech rate, etc. The wearable device can pre-store the mapping relation between the environmental parameters and the playing parameters, and then after the target environmental parameters are obtained, the target playing parameters corresponding to the target environmental parameters can be determined according to the mapping relation, and voice prompt is carried out according to the target playing parameters.
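The environment-parameter-to-playing-parameter lookup above can be sketched as follows, using external noise as the environment parameter. The noise thresholds and parameter values are illustrative assumptions, not values from the patent:

```python
# Hypothetical mapping between the measured external noise level and the
# playing parameters (volume, speech rate) used for the voice prompt.

def playing_params(noise_db):
    """Choose the target playing parameters from the external noise level."""
    if noise_db >= 70:                              # noisy environment
        return {"volume": 9, "speech_rate": 0.9}    # louder, slightly slower
    if noise_db >= 50:
        return {"volume": 6, "speech_rate": 1.0}
    return {"volume": 4, "speech_rate": 1.0}        # quiet environment
```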
Optionally, in step 103, the wearable device includes a first voice component and a second voice component;
when the wearable device plays the target audio, the performing voice prompt includes:
playing the target audio by adopting a first voice component; and the second voice component is adopted for voice prompt.
Wherein the target audio may be at least one of: music, radio, talk voice, etc. The wearable device may comprise a first voice component and a second voice component, e.g. the wireless headset comprises a left ear plug and a right ear plug, the left ear plug may be considered as the first voice component and the right ear plug may be considered as the second voice component. In the voice prompt process, when the wearable device plays the target audio, the first voice component is adopted to play the target audio, and the second voice component is adopted to perform voice prompt.
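The routing behaviour can be sketched as follows. The component names are illustrative assumptions based on the wireless-headset example (left earbud as first voice component, right earbud as second):

```python
# Hypothetical routing: when target audio (music, radio, call voice) is
# playing, keep it on the first voice component and deliver the voice
# prompt on the second, so the user hears both.

def route_audio(playing_target_audio, components=("left_earbud", "right_earbud")):
    first, second = components
    if playing_target_audio:
        return {first: "target_audio", second: "voice_prompt"}
    return {first: "voice_prompt", second: "voice_prompt"}
```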
Optionally, the step 103 of performing voice prompt may include the following steps:
a1, determining the target fit degree between the wearable device and the ear;
a2, determining a first volume corresponding to the target fitting degree according to a mapping relation between a preset fitting degree and the volume of the wearable device;
a3, controlling the wearable device to perform voice prompt at the first volume.
The fitting degree is used for expressing the fitting tightness degree between the wearable device and the ear, and the fitting degree can be expressed by specific numerical values. The wearable device may be provided with a sensor for detecting a degree of fit between the wearable device and the ear, the sensor may comprise at least one of: pressure sensors, barometric pressure sensors, ultrasonic sensors, distance sensors, and the like. In the specific implementation, the mapping relation between the attaching degree and the volume of the wearable device can be stored in the wearable device in advance, then the first volume corresponding to the target attaching degree is determined according to the mapping relation, and under the target attaching degree, the wearable device can be controlled to perform voice prompt with the first volume.
In practical applications, taking a wireless headset playing at a specified volume as an example, the closer the headset is attached to the ear, the louder it sounds to the user, and the looser the attachment, the quieter it sounds.
Optionally, the wearable device includes a pressure sensor, and the step a1 of determining the target fit between the wearable device and the ear may include the steps of:
a11, detecting a target pressure value between the wearable device and the ear;
a12, determining the target fit degree corresponding to the target pressure value according to the preset mapping relation between the pressure value and the fit degree.
At least one pressure sensor may be disposed at a portion where the wearable device contacts with the ear, the at least one pressure sensor may detect a target pressure value between the wearable device and the ear, and the target pressure value may be a pressure value of any one of the at least one pressure sensor, or an average pressure value of all the at least one pressure sensor, or a maximum pressure value detected by the at least one pressure sensor, or a minimum pressure value detected by the at least one pressure sensor, or the like. The mapping relation between the pressure value and the fitting degree can be prestored in the wearable device, and then the target fitting degree corresponding to the target pressure value can be determined according to the mapping relation.
    Pressure value    Fit degree
    a~b               K1
    b~c               K2
    c~d               K3

Wherein a < b < c < d, and K1, K2, and K3 are numbers greater than 0.
Optionally, the wearable device includes an air pressure sensor, and the determining the target fit between the wearable device and the ear in step a1 may include the following steps:
a21, detecting a target air pressure value between the wearable device and the ear;
a22, determining the target fit degree corresponding to the target air pressure value according to the preset mapping relation between the air pressure value and the fit degree.
The wearable device comprises an air pressure sensor, and a target air pressure value between the wearable device and the ear is detected through the air pressure sensor. The mapping relation between the air pressure value and the fit degree can be stored in the wearable device in advance, and then the target fit degree corresponding to the target air pressure value can be determined according to the mapping relation.
Optionally, the wearable device comprises a first voice component and a second voice component; the step A1 of determining the target fit between the wearable device and the ear may include the following steps:
a31, determining a target distance between the first voice component and the second voice component;
a32, determining the target fit degree corresponding to the target distance according to the preset mapping relation between the distance and the fit degree.
Wherein the wearable device may comprise a first voice component and a second voice component; for example, a wireless headset may comprise two earbuds, each provided with an ultrasonic sensor (e.g. the left earbud with a transmitter and the right earbud with a receiver), and the target distance between the first voice component and the second voice component is measured by the two earbuds. The mapping relation between the distance and the fit degree can be stored in the wearable device in advance, and the target fit degree corresponding to the target distance can then be determined according to the mapping relation.
Optionally, a mapping relationship set is pre-stored in the wearable device, where the mapping relationship set includes a plurality of mapping relationships, and each mapping relationship is a mapping relationship between a preset degree of attachment and a volume of the wearable device;
between the above steps a1 and a2, the following steps may be further included:
b1, acquiring current environment parameters;
b2, determining a target mapping relation corresponding to the current environmental parameter according to the corresponding relation between the preset environmental parameter and the mapping relation;
in the step a2, determining the first volume corresponding to the target fitness according to a mapping relationship between a preset fitness and the volume of the wearable device, which may be implemented as follows:
and determining the first volume corresponding to the target fit degree according to the target mapping relation.
The wearable device may store a mapping relationship set in advance, where the mapping relationship set may include a plurality of mapping relationships, and each mapping relationship is a mapping relationship between a preset degree of attachment and a volume of the wearable device. The sensor of the wearable device may be an environmental sensor, and the environmental sensor may be at least one of: a position sensor, a humidity sensor, a temperature sensor, an external sound detection sensor, and the like. The current environmental parameters can be acquired by the environmental sensors. The wearable device may pre-store a corresponding relationship between the environmental parameter and the mapping relationship, and determine a target mapping relationship corresponding to the current environmental parameter according to the corresponding relationship. Further, a first volume corresponding to the target fit degree can be determined according to the target mapping relation. A mapping table between environment parameters and mapping relationships is provided as follows, specifically as follows:
(The mapping table between environment parameters and fit-degree-to-volume mapping relationships is reproduced in the original patent as images and cannot be recovered here.)
Thus, different mapping relations can be adopted under different environmental parameters. For example, if the external environment is noisy, the mapping relation differs from the one used in a quiet environment. The embodiment of the present application can therefore provide, for each environment, a mapping relation corresponding to it, so as to obtain a volume suited to the environment.
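Steps B1–B2 together with step A2 can be sketched as follows. The two example environments and all numeric values are illustrative assumptions standing in for the image-rendered table:

```python
# Hypothetical mapping-relationship set: one fit-degree -> volume mapping
# per environment. A loose fit (low fit degree) needs more volume,
# especially in a noisy environment. All values are assumed.

MAPPING_SET = {
    "noisy": {0.3: 9, 0.6: 7, 0.9: 5},
    "quiet": {0.3: 6, 0.6: 4, 0.9: 3},
}

def first_volume(environment, fit_degree):
    """Select the target mapping for the current environment (step B2),
    then look up the first volume for the target fit degree (step A2)."""
    mapping = MAPPING_SET[environment]
    return mapping[fit_degree]
```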
Optionally, after the step a3, the following steps may be further included:
a4, monitoring the target variation of the target fit degree;
a5, when the absolute value of the target variation is larger than a preset threshold, determining a target volume adjustment parameter corresponding to the target variation according to a mapping relation between preset variation and volume adjustment parameters;
a6, determining a second volume according to the first volume and the target volume adjusting parameter;
a7, controlling the wearable device to perform voice prompt at the second volume.
The wearable device can monitor, through a sensor, the target variation of the target fit degree, i.e. the change in fit degree. In practical application, taking a wireless headset as an example, when the headset is worn for a long time, or when the user is exercising, the fit degree easily decreases; conversely, if the user pushes the headset in, the fit degree increases. The target variation can be detected by a sensor; for example, if the sensor is a pressure sensor, the target variation can be determined from the change in pressure value. The preset threshold can be set by the user or by system default. The volume adjustment parameter may be "+" volume (volume up) or "-" volume (volume down); a mapping relationship between variation and volume adjustment parameter may be preset in the wearable device, and when the absolute value of the target variation is greater than the preset threshold, the target volume adjustment parameter corresponding to the target variation is determined according to this mapping. Once the target volume adjustment parameter is determined, a second volume can be determined from the first volume and the target volume adjustment parameter, for example, second volume = first volume + target volume adjustment parameter. If the target fit degree increases, the second volume is smaller than the first volume; if the target fit degree decreases, the second volume is larger than the first volume.
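Steps A4–A7 can be sketched as follows. The threshold and step size are illustrative assumptions; the sign convention follows the text (tighter fit lowers volume, looser fit raises it):

```python
# Hypothetical sketch of the fit-change -> volume-adjustment logic.
# Threshold (0.2) and adjustment step (2) are assumed values.

def second_volume(first_volume, fit_change, threshold=0.2, step=2):
    """Return the second volume given the first volume and the change in
    fit degree. A looser fit (negative change) raises the volume;
    a tighter fit (positive change) lowers it."""
    if abs(fit_change) <= threshold:
        return first_volume                     # change too small: keep volume
    adjustment = -step if fit_change > 0 else step
    return first_volume + adjustment            # second = first + adjustment
```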
It can be seen that, the voice prompt method described in the embodiment of the present application is applied to a wearable device, after the wearable device is connected to a vehicle-mounted device, driving information acquired by the vehicle-mounted device is acquired, a road condition state is determined according to the driving information, and when the road condition state is a preset state, voice prompt is performed.
Referring to fig. 2, fig. 2 is a schematic flowchart of a voice navigation method disclosed in an embodiment of the present application, and the method is applied to the wearable device shown in fig. 1A, where the wearable device includes a first voice component and a second voice component; the wearable device is worn on the head of a user, and the voice navigation method comprises the following steps.
201. After the wearable device is connected with the vehicle-mounted device, the driving information collected by the vehicle-mounted device is obtained.
202. And determining the road condition state according to the driving information.
203. And when the road condition state is a preset state, carrying out voice prompt.
204. When the wearable device plays the target audio, a first voice component is adopted to play the target audio, and the second voice component is adopted to perform voice prompt.
It can be seen that the voice prompt method described in the embodiment of the present application is applied to a wearable device, after the wearable device is connected to the vehicle-mounted device, driving information collected by the vehicle-mounted device is acquired, a road condition state is determined according to the driving information, when the road condition state is a preset state, voice prompt is performed, when the wearable device plays a target audio, the target audio is played by using a first voice component, and voice prompt is performed by using a second voice component, so that network connection can be established with the vehicle-mounted device to acquire the driving information, the road condition state is determined according to the driving information, voice prompt is performed according to the road condition state, and voice prompt can be performed while listening to music, functions of the wearable device are enriched, and user experience is improved.
Referring to fig. 3, fig. 3 is a flowchart illustrating a voice navigation method according to an embodiment of the present application, applied to the wearable device shown in fig. 1A, wherein the wearable device is worn on the head of a user, and the voice navigation method includes the following steps.
301. After the wearable device is connected with the vehicle-mounted device, the driving information collected by the vehicle-mounted device is obtained.
302. And determining the road condition state according to the driving information.
303. And when the road condition state is a preset state, determining the target fitting degree between the wearable equipment and the ear.
304. And determining a first volume corresponding to the target fitting degree according to a mapping relation between a preset fitting degree and the volume of the wearable device.
305. And performing voice prompt at the first volume.
It can be seen that the voice prompt method described in the embodiment of the present application is applied to a wearable device. After the wearable device is connected to the vehicle-mounted device, the driving information collected by the vehicle-mounted device is obtained, and the road condition state is determined according to the driving information. When the road condition state is a preset state, the target fit degree between the wearable device and the ear is determined, a first volume corresponding to the target fit degree is determined according to a mapping relationship between the preset fit degree and the volume of the wearable device, and voice prompt is performed at the first volume. In this way, a network connection can be established with the vehicle-mounted device to obtain the driving information, the road condition state is determined according to the driving information, voice prompt is performed according to the road condition state, and the volume of the voice prompt is determined according to the fit degree between the user's ear and the wearable device, thereby enriching the functions of the wearable device and improving the user experience.
Referring to fig. 4, fig. 4 is a schematic structural diagram of another wearable device disclosed in the embodiment of the present application, and as shown in the drawing, the wearable device includes a processor, a memory, a communication interface, and one or more programs, the wearable device further includes an ultrasonic sensor, the wearable device is worn on the head of a user, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for performing the following steps:
after the wearable equipment is connected with the vehicle-mounted equipment, acquiring driving information acquired by the vehicle-mounted equipment;
determining a road condition state according to the driving information;
and when the road condition state is a preset state, carrying out voice prompt.
It can be seen that, the wearable device described in the embodiment of the application acquires the driving information collected by the vehicle-mounted device after the wearable device is connected with the vehicle-mounted device, determines the road condition state according to the driving information, and performs voice prompt when the road condition state is the preset state, so that the network connection can be established with the vehicle-mounted device to acquire the driving information, determine the road condition state according to the driving information, perform voice prompt according to the road condition state, enrich the functions of the wearable device, and improve the user experience.
In one possible example, in the aspect of determining the road condition state according to the driving information, the program includes instructions for performing the following steps:
determining lane changing frequency of the vehicle according to the driving information;
and determining the road condition state according to the vehicle lane change frequency.
In one possible example, in the aspect of determining the road condition state according to the driving information, the program includes instructions for performing the following steps:
extracting two adjacent frames of driving images from the driving information;
determining the average speed of surrounding vehicles according to the two driving images;
acquiring the current vehicle speed;
determining a difference between the current vehicle speed and the surrounding vehicle average speed;
and determining the road condition state according to the difference.
In one possible example, in said determining the average speed of the surrounding vehicle from said two driving images, the program comprises instructions for:
determining at least one target vehicle appearing in the two driving images;
mapping the two frames of driving images into a 3D map;
marking a location of the at least one target vehicle in the 3D map;
determining the average speed of the surrounding vehicles according to the position of the at least one target vehicle.
In one possible example, in the voice prompting, the program includes instructions for:
acquiring target environment parameters;
determining a target playing parameter corresponding to the target environment parameter;
and carrying out voice prompt according to the target playing parameters.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the wearable device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the above functions. Those of skill in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments provided herein can be implemented as hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the wearable device may be divided into the functional units according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a voice prompt apparatus 500 disclosed in an embodiment of the present application, which is applied to a wearable device, the wearable device is worn on a head of a user, the voice prompt apparatus 500 includes an obtaining unit 501, a determining unit 502, and a prompting unit 503, where:
the acquiring unit 501 is configured to acquire driving information acquired by a vehicle-mounted device after the wearable device is connected to the vehicle-mounted device;
the determining unit 502 is configured to determine a road condition state according to the driving information;
the prompting unit 503 is configured to perform voice prompting when the road condition state is a preset state.
It can be seen that the voice prompt device described in the embodiment of the application is applied to wearable equipment, after the wearable equipment is connected with the vehicle-mounted equipment, the driving information collected by the vehicle-mounted equipment is acquired, the road condition state is determined according to the driving information, and when the road condition state is a preset state, voice prompt is performed.
In a possible example, in the aspect of determining the road condition state according to the driving information, the determining unit 502 is specifically configured to:
determining lane changing frequency of the vehicle according to the driving information;
and determining the road condition state according to the vehicle lane change frequency.
In a possible example, in the aspect of determining the road condition state according to the driving information, the determining unit 502 is specifically configured to:
extracting two adjacent frames of driving images from the driving information;
determining the average speed of surrounding vehicles according to the two driving images;
acquiring the current vehicle speed;
determining a difference between the current vehicle speed and the surrounding vehicle average speed;
and determining the road condition state according to the difference.
In one possible example, in the aspect of determining the average speed of the surrounding vehicle according to the two driving images, the determining unit 502 is specifically configured to:
determining at least one target vehicle appearing in the two driving images;
mapping the two frames of driving images into a 3D map;
marking a location of the at least one target vehicle in the 3D map;
determining the average speed of the surrounding vehicles according to the position of the at least one target vehicle.
In one possible example, in the aspect of performing voice prompt, the prompt unit 503 is specifically configured to:
acquiring target environment parameters;
determining a target playing parameter corresponding to the target environment parameter;
and carrying out voice prompt according to the target playing parameters.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to perform part or all of the steps of any one of the methods as described in the above method embodiments, and the computer includes a wearable device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising a wearable device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above-mentioned methods of the embodiments of the present application. The aforementioned memory comprises: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific implementation and application scope, and in view of the above, the content of the present specification should not be construed as a limitation to the present application.

Claims (11)

1. A wearable device comprising a processing circuit, and a communication circuit and an audio component connected to the processing circuit, wherein,
the communication circuit is used for acquiring the driving information acquired by the vehicle-mounted equipment after the wearable equipment is connected with the vehicle-mounted equipment;
the processing circuit is used for determining a road condition state according to the driving information, wherein the driving information comprises driving images, and the driving images are analyzed to obtain a vehicle lane change frequency, and specifically the method comprises the following steps: identifying steering direction lamps in the driving image, carrying out classified statistics on the steering direction lamps, counting the number of each steering direction lamp, determining the lane changing frequency of the vehicle according to the number of the steering direction lamps, and determining the road condition state corresponding to the lane changing frequency of the vehicle according to a pre-stored mapping relation between the lane changing frequency of the vehicle and the road condition state;
and the audio component is used for carrying out voice prompt when the road condition state is a preset state.
2. The wearable device of claim 1, wherein in determining the status of the road condition based on the driving information, the processing circuit is specifically configured to:
extracting two adjacent frames of driving images from the driving information;
determining the average speed of surrounding vehicles according to the two driving images;
acquiring the current vehicle speed;
determining a difference between the current vehicle speed and the surrounding vehicle average speed;
and determining the road condition state corresponding to the difference between the current vehicle speed and the average speed of the surrounding vehicles according to the mapping relation between the pre-stored difference and the road condition state.
3. The wearable device according to claim 2, wherein in the determining of the average speed of the surrounding vehicles from the two frames of driving images, the processing circuit is specifically configured to:
determining at least one target vehicle appearing in the two driving images;
mapping the two frames of driving images into a 3D map;
marking a location of the at least one target vehicle in the 3D map;
determining the average speed of the surrounding vehicles according to the position of the at least one target vehicle.
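The 3D-map step of claim 3 can be sketched once positions are available in map coordinates. The image-to-3D projection would come from camera calibration, which the patent does not detail; here each vehicle's position is assumed already given as (x, y, z) in metres, and all names are illustrative.

```python
# Sketch of claim 3: mark each target vehicle's position from two adjacent
# frames in a shared 3D map, then derive the average speed from the distance
# each vehicle moved between the frames.
import math

def mark_positions(frame_a, frame_b):
    """Pair up each target vehicle's 3D position in frame A and frame B,
    keeping only vehicles that appear in both frames."""
    return [(frame_a[vid], frame_b[vid]) for vid in frame_a if vid in frame_b]

def average_speed_kmh(position_pairs, frame_interval_s):
    speeds = []
    for pos_a, pos_b in position_pairs:
        dist = math.dist(pos_a, pos_b)              # metres moved between frames
        speeds.append(dist / frame_interval_s * 3.6)  # m/s -> km/h
    return sum(speeds) / len(speeds)

frame_a = {"car1": (0.0, 0.0, 0.0), "car2": (5.0, 2.0, 0.0)}
frame_b = {"car1": (1.0, 0.0, 0.0), "car2": (7.0, 2.0, 0.0)}
pairs = mark_positions(frame_a, frame_b)
print(average_speed_kmh(pairs, frame_interval_s=0.1))  # ~54.0 km/h
```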
4. The wearable device according to any one of claims 1-3, wherein, in performing the voice prompt, the audio component is specifically configured to:
acquire a target environment parameter;
determine a target playing parameter corresponding to the target environment parameter;
and perform the voice prompt according to the target playing parameter.
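Claim 4's environment-to-playback mapping can be sketched with ambient noise as the environment parameter. The patent names neither the parameter nor the playback settings, so the noise thresholds, volume levels, and speech rates below are purely illustrative assumptions.

```python
# Sketch of claim 4: choose playing (playback) parameters from a measured
# environment parameter before issuing the voice prompt. Thresholds and
# parameter values are hypothetical.

def target_playing_params(ambient_noise_db):
    """Map an environment parameter (ambient noise, dB) to playback parameters."""
    if ambient_noise_db < 40.0:
        return {"volume": 0.4, "speech_rate": 1.0}   # quiet cabin: low volume
    if ambient_noise_db < 70.0:
        return {"volume": 0.7, "speech_rate": 1.0}
    return {"volume": 1.0, "speech_rate": 0.9}       # loud: max volume, slower speech

params = target_playing_params(65.0)
print(params)  # prints {'volume': 0.7, 'speech_rate': 1.0}
```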
5. A voice prompt method, applied to a wearable device, the method comprising:
after the wearable device is connected with a vehicle-mounted device, acquiring driving information collected by the vehicle-mounted device;
determining a road condition state according to the driving information, wherein the driving information comprises driving images, and the driving images are analyzed to obtain a vehicle lane-change frequency, specifically: identifying turn signal lights in the driving images, classifying the turn signal lights and counting the number of each type of turn signal light, determining the vehicle lane-change frequency according to the counts of the turn signal lights, and determining the road condition state corresponding to the vehicle lane-change frequency according to a pre-stored mapping relation between vehicle lane-change frequency and road condition state;
and performing a voice prompt when the road condition state is a preset state.
6. The method according to claim 5, wherein the determining a road condition state according to the driving information comprises:
extracting two adjacent frames of driving images from the driving information;
determining the average speed of surrounding vehicles according to the two frames of driving images;
acquiring the current vehicle speed;
determining the difference between the current vehicle speed and the average speed of the surrounding vehicles;
and determining the road condition state corresponding to the difference according to a pre-stored mapping relation between speed differences and road condition states.
7. The method according to claim 6, wherein the determining the average speed of surrounding vehicles according to the two frames of driving images comprises:
determining at least one target vehicle appearing in both driving images;
mapping the two frames of driving images into a 3D map;
marking the position of the at least one target vehicle in the 3D map;
and determining the average speed of the surrounding vehicles according to the position of the at least one target vehicle.
8. The method according to any one of claims 5-7, wherein the performing a voice prompt comprises:
acquiring a target environment parameter;
determining a target playing parameter corresponding to the target environment parameter;
and performing the voice prompt according to the target playing parameter.
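Taken together, the method of claims 5-8 is a short pipeline on the wearable side: fetch driving information from the vehicle-mounted device, determine the road condition state, and prompt only when the state is a preset one. The sketch below stubs out the connection and image-analysis layers; the preset states and message wording are assumptions.

```python
# End-to-end sketch of the method of claims 5-8. determine_state() stands in
# for the image analysis of claims 5-7; speak() stands in for the audio
# component configured per claim 8. All names are illustrative.

PRESET_STATES = {"congested", "slowing"}   # hypothetical preset states

def determine_state(driving_info):
    # Placeholder for lane-change-frequency / speed-difference analysis.
    return driving_info.get("state", "smooth")

def voice_prompt_pipeline(driving_info, speak):
    state = determine_state(driving_info)
    if state in PRESET_STATES:             # prompt only for preset states
        speak(f"Caution: traffic ahead is {state}.")
    return state

messages = []
voice_prompt_pipeline({"state": "congested"}, messages.append)
print(messages)  # prints ['Caution: traffic ahead is congested.']
```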
9. A voice prompt device, applied to a wearable device, comprising an acquisition unit, a determination unit and a prompt unit, wherein:
the acquisition unit is configured to acquire driving information collected by a vehicle-mounted device after the wearable device is connected with the vehicle-mounted device;
the determination unit is configured to determine a road condition state according to the driving information, wherein the driving information comprises driving images, and the driving images are analyzed to obtain a vehicle lane-change frequency, specifically: identifying turn signal lights in the driving images, classifying the turn signal lights and counting the number of each type of turn signal light, determining the vehicle lane-change frequency according to the counts of the turn signal lights, and determining the road condition state corresponding to the vehicle lane-change frequency according to a pre-stored mapping relation between vehicle lane-change frequency and road condition state;
and the prompt unit is configured to perform a voice prompt when the road condition state is a preset state.
10. A wearable device, comprising a processor, a memory and a communication interface, wherein the memory stores one or more programs configured to be executed by the processor, the programs comprising instructions for performing the steps of the method according to any one of claims 5-8.
11. A computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 5-8.
CN201810550406.5A 2018-05-31 2018-05-31 Voice prompt method and related product Active CN109039355B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810550406.5A CN109039355B (en) 2018-05-31 2018-05-31 Voice prompt method and related product

Publications (2)

Publication Number Publication Date
CN109039355A 2018-12-18
CN109039355B 2020-05-08

Family

ID=64612000

Country Status (1)

Country Link
CN (1) CN109039355B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110211402A (en) * 2019-05-30 2019-09-06 努比亚技术有限公司 Wearable device road conditions based reminding method, wearable device and storage medium
CN115472039B (en) * 2021-06-10 2024-03-01 上海博泰悦臻网络技术服务有限公司 Information processing method and related product
CN114476085B (en) * 2022-02-09 2023-12-12 Oppo广东移动通信有限公司 Information prompting method and related device

Citations (2)

Publication number Priority date Publication date Assignee Title
CN104008664A (en) * 2014-04-18 2014-08-27 小米科技有限责任公司 Method and device for obtaining road condition information
CN105225474A (en) * 2015-08-19 2016-01-06 奇瑞汽车股份有限公司 Based on the traffic collision early warning system of intelligent wearable device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
KR20110049548A (en) * 2009-11-05 2011-05-12 엘지전자 주식회사 Navigation method of mobile terminal and apparatus thereof




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant