CN109064720B - Position prompting method and device, storage medium and electronic equipment - Google Patents
- Publication number
- CN109064720B CN109064720B CN201810682454.XA CN201810682454A CN109064720B CN 109064720 B CN109064720 B CN 109064720B CN 201810682454 A CN201810682454 A CN 201810682454A CN 109064720 B CN109064720 B CN 109064720B
- Authority
- CN
- China
- Prior art keywords
- voice signal
- information
- preset
- wearable device
- electronic device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/18—Status alarms
- G08B21/24—Reminder alarms, e.g. anti-loss alarms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/02—Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/20—Pattern transformations or operations aimed at increasing system robustness, e.g. against channel noise or different working conditions
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
Abstract
The embodiment of the application discloses a position prompting method and apparatus, a storage medium, and an electronic device. When a voice signal in the external environment is collected, the instruction to be executed included in the voice signal can be acquired. When the instruction to be executed is an instruction for triggering a position prompt, the voiceprint feature included in the voice signal is acquired, and it is judged whether the acquired voiceprint feature matches the preset voiceprint feature of a preset user. When the voiceprint feature does not match the preset voiceprint feature, auxiliary verification is carried out through a pre-associated wearable device, and when the verification passes, the position prompting operation is executed according to a preset mode. Compared with the prior art, in which the position prompting operation is executed only when the voiceprint feature passes verification, the method and apparatus avoid a failure to respond caused by changes in the user's voiceprint feature, and can improve the success rate of triggering the electronic device to perform a position prompt.
Description
Technical Field
The present application relates to the field of electronic device technologies, and in particular, to a position prompting method and apparatus, a storage medium, and an electronic device.
Background
At present, with the development of technology, the modes of human-computer interaction have become increasingly rich. In the related art, a user can control electronic devices such as mobile phones and tablet computers through voice. For example, when the user cannot find the electronic device, the electronic device may perform a position prompting operation according to a voice signal of the user, so as to guide the user to find it. After receiving a voice signal sent by the user, if the electronic device recognizes that the instruction to be executed included in the voice signal is an instruction for triggering a position prompt, the electronic device needs to extract a voiceprint feature from the voice signal and verify the extracted voiceprint feature, and executes the instruction to perform the position prompting operation only when the voiceprint feature passes verification. However, the voiceprint feature of the user is easily affected by various factors and changes, which causes the electronic device to fail to verify the voiceprint feature and to fail to recognize the user, so that the position prompting operation cannot be performed.
Disclosure of Invention
The embodiment of the application provides a position prompting method and device, a storage medium and electronic equipment, and can improve the success rate of triggering the electronic equipment to carry out position prompting.
In a first aspect, an embodiment of the present application provides a position prompting method, where the position prompting method includes:
when a voice signal in an external environment is collected, acquiring a command to be executed included in the voice signal;
when the instruction to be executed is an instruction for triggering position prompt, acquiring voiceprint features included by the voice signal, and judging whether the voiceprint features are matched with preset voiceprint features of a preset user or not;
when the voiceprint features are not matched with the preset voiceprint features, sending an identity authentication request to a pre-associated wearable device, wherein the identity authentication request is used for indicating the wearable device to identify whether the current wearer is the preset user or not, and returning an identification result;
and when receiving the recognition result that the current wearer is the preset user, executing position prompting operation according to a preset mode.
In a second aspect, an embodiment of the present application provides a position prompting apparatus, including:
the acquisition module is used for acquiring a command to be executed included in a voice signal when the voice signal in an external environment is acquired;
the verification module is used for acquiring the voiceprint features included by the voice signals and judging whether the voiceprint features are matched with the preset voiceprint features of a preset user or not when the instruction to be executed is an instruction for triggering position prompt;
the sending module is used for sending an identity authentication request to a pre-associated wearable device when the voiceprint feature is not matched with the preset voiceprint feature, wherein the identity authentication request is used for indicating the wearable device to identify whether the current wearer is the preset user or not and returning an identification result;
and the prompting module is used for executing position prompting operation according to a preset mode when receiving the identification result of the current wearer as the preset user.
In a third aspect, the present application provides a storage medium, on which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the steps in the position prompting method provided by the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device, which includes a processor and a memory, where the memory stores a computer program, and the processor is configured to execute the steps in the position prompting method provided in the embodiment of the present application by calling the computer program.
The electronic device in the embodiment of the application can acquire the instruction to be executed included in a voice signal when the voice signal in the external environment is collected. When the instruction to be executed is an instruction for triggering a position prompt, the voiceprint feature included in the voice signal is acquired, and it is judged whether the acquired voiceprint feature matches the preset voiceprint feature of a preset user. When the voiceprint feature does not match the preset voiceprint feature, auxiliary verification is carried out through a pre-associated wearable device, and when the verification passes, the position prompting operation is executed according to a preset mode. Compared with the prior art, in which the position prompting operation is executed only when the voiceprint feature passes verification, the method and device avoid a failure to respond caused by changes in the user's voiceprint feature, and can improve the success rate of triggering the electronic device to perform a position prompt.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a position prompting method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a prompt mode setting interface provided by an electronic device in an embodiment of the present application.
Fig. 3 is another schematic flow chart of a position prompting method according to an embodiment of the present application.
Fig. 4 is a schematic diagram of an electronic device executing a position prompting operation according to an embodiment of the present application.
Fig. 5 is another schematic diagram of an electronic device executing a position prompting operation according to an embodiment of the present application.
Fig. 6 is a schematic position diagram of an electronic device and a wearable device in an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a position prompting apparatus according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 9 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
In the description that follows, specific embodiments of the present application will be described with reference to steps and symbols executed by one or more computers, unless otherwise indicated. Accordingly, these steps and operations will be referred to, several times, as being performed by a computer, the computer performing operations involving a processing unit of the computer in electronic signals representing data in a structured form. This operation transforms the data or maintains it at locations in the computer's memory system, which may be reconfigured or otherwise altered in a manner well known to those skilled in the art. The data maintains a data structure that is a physical location of the memory that has particular characteristics defined by the data format. However, while the principles of the application have been described in language specific to above, it is not intended to be limited to the specific form set forth herein, and it will be recognized by those of ordinary skill in the art that various of the steps and operations described below may be implemented in hardware.
The term module, as used herein, may be considered a software object executing on the computing system. The various components, modules, engines, and services described herein may be viewed as objects implemented on the computing system. The apparatus and method described herein may be implemented in software, but may also be implemented in hardware, and are within the scope of the present application.
The terms "first", "second", and "third", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to only those steps or modules listed, but rather, some embodiments may include other steps or modules not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The execution body of the position prompting method may be the position prompting apparatus provided in the embodiment of the present application, or an electronic device integrated with the position prompting apparatus, where the position prompting apparatus may be implemented in hardware or software. The electronic device may be a smart phone, a tablet computer, a palmtop computer, a notebook computer, or a desktop computer.
Referring to fig. 1, fig. 1 is a schematic flow chart of a position prompting method according to an embodiment of the present disclosure. As shown in fig. 1, a flow of the position prompting method provided in the embodiment of the present application may be as follows:
101. when the voice signal in the external environment is collected, the instruction to be executed included in the voice signal is obtained.
It should be noted that the electronic device may collect the voice signal in the external environment in a plurality of different manners, for example, when the electronic device is not externally connected with a microphone, the electronic device may collect the voice in the external environment through a built-in microphone to obtain the voice signal; for another example, when the electronic device is externally connected with a microphone, the electronic device may collect voice in an external environment through the externally connected microphone to obtain a voice signal.
When the electronic device collects a voice signal in the external environment through a microphone (the microphone here may be built-in or external), if the microphone is an analog microphone, the electronic device collects an analog voice signal and needs to sample it to convert it into a digitized voice signal; for example, the electronic device can sample at a sampling frequency of 16 kHz. If the microphone is a digital microphone, the electronic device directly collects a digitized voice signal through the digital microphone, and no conversion is needed.
When acquiring the instruction to be executed included in the voice signal, the electronic device first judges whether a voice analysis engine exists locally; if so, the electronic device inputs the voice signal into the local voice analysis engine for voice analysis to obtain a voice analysis text. Performing voice analysis on the voice signal means converting the voice signal from audio into text.
Furthermore, when a plurality of speech analysis engines exist locally, the electronic device may select one speech analysis engine from the plurality of speech analysis engines to perform speech analysis on the speech signal in the following manner:
first, the electronic device may randomly select one speech analysis engine from a plurality of local speech analysis engines to perform speech analysis on the speech signal.
And secondly, the electronic equipment can select the voice analysis engine with the highest analysis success rate from the plurality of voice analysis engines to perform voice analysis on the voice signal.
And thirdly, the electronic equipment can select the voice analysis engine with the shortest analysis time length from the plurality of voice analysis engines to carry out voice analysis on the voice signal.
Fourthly, the electronic equipment can also select a voice analysis engine with the analysis success rate reaching the preset success rate and the shortest analysis time from the plurality of voice analysis engines to carry out voice analysis on the voice signal.
It should be noted that those skilled in the art may also select a speech analysis engine in a manner not listed above, or may combine multiple speech analysis engines to perform speech analysis on the speech signal. For example, the electronic device may perform speech analysis on the speech signal through two speech analysis engines at the same time, and when the speech analysis texts obtained by the two engines are the same, use that common text as the speech analysis text of the speech signal. For another example, the electronic device may perform speech analysis on the speech signal through at least three speech analysis engines, and when the speech analysis texts obtained by at least two of the engines are the same, use that common text as the speech analysis text of the speech signal.
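As an illustration only, the multi-engine agreement strategy described above can be sketched as a simple majority vote. This is a minimal sketch: the engine callables and the agreement threshold are placeholders, not part of the patent.

```python
from collections import Counter

def transcribe_with_voting(signal, engines, min_agreement=2):
    """Run several speech-analysis engines on the same signal and accept a
    transcript only when at least `min_agreement` engines produce the same
    text; return None when no transcript reaches the required agreement.

    `engines` is a list of callables mapping a signal to a transcript
    string; they stand in for whatever local engines are available.
    """
    transcripts = [engine(signal) for engine in engines]
    text, votes = Counter(transcripts).most_common(1)[0]
    return text if votes >= min_agreement else None
```

With three engines and `min_agreement=2`, this reproduces the "at least two texts are the same" rule described in the text.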
After the voice analysis text of the voice signal is obtained through analysis, the electronic equipment further obtains the instruction to be executed included in the voice signal from the voice analysis text.
The electronic device stores a plurality of instruction keywords in advance, and a single instruction keyword or a combination of multiple instruction keywords corresponds to one instruction. When acquiring the instruction to be executed included in the voice signal from the voice analysis text obtained through analysis, the electronic device first performs a word segmentation operation on the voice analysis text to obtain the word sequence corresponding to the voice analysis text, the word sequence including a plurality of words.
After obtaining the word sequence corresponding to the voice analysis text, the electronic device matches the instruction keywords against the word sequence, that is, finds the instruction keywords present in the word sequence, so as to match the corresponding instruction, and takes the matched instruction as the instruction to be executed of the voice signal. The matching of the instruction keywords may be complete matching and/or fuzzy matching.
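The keyword-combination matching described above can be sketched as follows. This minimal example assumes space-separated words (real word segmentation of Chinese text would need a dedicated segmenter) and shows complete matching only; the table contents are illustrative, not from the patent.

```python
def match_instruction(parsed_text, instruction_table):
    """Map a voice analysis text to an instruction by keyword lookup.

    `instruction_table` maps a frozenset of instruction keywords to an
    instruction name; every keyword of a combination must appear in the
    word sequence for the instruction to match (complete matching only --
    fuzzy matching would need a string-similarity measure on top).
    """
    words = set(parsed_text.split())      # naive, space-based word segmentation
    for keywords, instruction in instruction_table.items():
        if keywords <= words:             # all keywords found in the word sequence
            return instruction
    return None
```

A combination such as {"xiao-ou", "where"} then maps any utterance containing both words to the position-prompt instruction.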
After determining whether a voice analysis engine exists locally, if not, the electronic device sends the voice signal to a server (the server is a server providing voice analysis service), instructs the server to analyze the voice signal, and returns a voice analysis text obtained by analyzing the voice signal. After receiving the voice analysis text returned by the server, the electronic device can acquire the instruction to be executed included in the voice signal from the voice analysis text.
102. And when the instruction to be executed is an instruction for triggering position prompt, acquiring voiceprint features included by the voice signal, and judging whether the acquired voiceprint features are matched with preset voiceprint features of a preset user.
In the embodiment of the application, the electronic device performs Bluetooth pairing in advance with a wearable device of the owner (for example, a smart bracelet, a smart watch, smart jewelry, or smart glasses) according to an input operation received from the owner, and establishes an association relationship with the wearable device after pairing succeeds.
After the electronic device acquires the instruction to be executed included in the voice signal, it identifies whether the instruction to be executed is the instruction for triggering the position prompt, where the instruction for triggering the position prompt can be set according to data input by the owner; if the instruction to be executed is the instruction for triggering the position prompt, the speaker of the voice signal is preliminarily judged to be the owner. For example, the owner sets the instruction keyword combination "Xiao Ou" + "where" + "are you" as the instruction for triggering the position prompt. Correspondingly, when the electronic device receives the voice signal "Xiao Ou, where are you", it judges that the speaker of this voice signal is the owner and that the instruction to be executed included in this voice signal is the instruction for triggering the position prompt.
When the instruction to be executed included in the voice signal is recognized as an instruction for triggering the position prompt, the electronic equipment further acquires the voiceprint feature included in the voice signal. Wherein the voiceprint feature includes, but is not limited to, at least one feature component of a spectrum feature component, a cepstrum feature component, a formant feature component, a pitch feature component, a reflection coefficient feature component, a tone feature component, a speech rate feature component, an emotion feature component, a prosody feature component, and a rhythm feature component.
Then, the electronic device judges whether the acquired voiceprint feature matches the preset voiceprint feature of the preset user, where the preset user is the owner of the electronic device and the preset voiceprint feature can be entered by the owner in advance. Specifically, the electronic device can acquire the similarity between the voiceprint feature and the preset voiceprint feature, and judge whether the acquired similarity is greater than or equal to a preset voiceprint similarity (which can be set by those skilled in the art according to actual needs). When the acquired similarity is smaller than the preset voiceprint similarity, the electronic device determines that the voiceprint feature does not match the preset voiceprint feature, and determines that the speaker of the voice signal is not the owner.
The electronic device can acquire the distance between the voiceprint feature and the preset voiceprint feature, and take the acquired distance as the similarity between the two. Any feature distance (such as the Euclidean distance, Manhattan distance, or Chebyshev distance) can be selected by those skilled in the art according to actual needs to measure the distance between the voiceprint feature and the preset voiceprint feature.
For example, the cosine distance between the voiceprint feature and the preset voiceprint feature may be obtained according to the following formula:

$$e = \frac{\sum_{i=1}^{N} f_i\, g_i}{\sqrt{\sum_{i=1}^{N} f_i^2}\;\sqrt{\sum_{i=1}^{N} g_i^2}}$$

where $e$ represents the cosine distance between the voiceprint feature and the preset voiceprint feature, $f$ represents the voiceprint feature, $g$ represents the preset voiceprint feature, $N$ represents the dimension of the voiceprint feature (the dimension of the voiceprint feature is the same as that of the preset voiceprint feature), $f_i$ represents the feature component of the $i$-th dimension of the voiceprint feature, and $g_i$ represents the feature component of the $i$-th dimension of the preset voiceprint feature.
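A minimal sketch of this comparison follows. Note that the text's "cosine distance" is used as a similarity score (larger means more similar); the 0.9 threshold below is an assumed illustrative value, since the patent leaves the preset voiceprint similarity to the implementer.

```python
import math

def cosine_similarity(f, g):
    """Cosine score between a voiceprint feature vector f and the preset
    voiceprint feature vector g; both must have the same dimension N."""
    if len(f) != len(g):
        raise ValueError("feature dimensions must match")
    dot = sum(fi * gi for fi, gi in zip(f, g))
    norm_f = math.sqrt(sum(fi * fi for fi in f))
    norm_g = math.sqrt(sum(gi * gi for gi in g))
    return dot / (norm_f * norm_g)

def voiceprint_matches(f, g, threshold=0.9):
    # threshold plays the role of the "preset voiceprint similarity";
    # 0.9 is an assumed value, not specified by the patent
    return cosine_similarity(f, g) >= threshold
```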
103. And when the voiceprint features are not matched with the preset voiceprint features, sending an identity authentication request to the pre-associated wearable device, wherein the identity authentication request is used for indicating the wearable device to identify whether the current wearer is a preset user or not, and returning an identification result.
Based on the above description, it is easily understood that, when the voiceprint feature does not match the preset voiceprint feature, the speaker of the voice signal may not be the owner, but the possibility that the speaker is the owner is not excluded (for example, when the owner catches a cold, the voiceprint feature changes). At this time, the electronic device performs auxiliary verification through the pre-associated wearable device.
The electronic device performs Bluetooth pairing in advance with the wearable device of the owner (such as a smart bracelet, a smart watch, smart jewelry, or smart glasses) according to an input operation received from the owner, and establishes an association relationship with the wearable device after pairing succeeds.
When the electronic device determines that the voiceprint feature is not matched with the preset voiceprint feature, firstly, whether the current wearer of the wearable device is a speaker of the voice signal is identified.
As an optional embodiment, when recognizing whether the current wearer of the wearable device is the speaker of the voice signal, the electronic device may acquire a first loudness value at which it received the voice signal, and then send a loudness acquisition request to the other electronic devices (including the wearable device) within communication range, the loudness acquisition request being used to instruct the other electronic devices to return a second loudness value at which they received the voice signal. After receiving the second loudness values returned by the other electronic devices, the electronic device judges whether the second loudness value corresponding to the wearable device is greater than the second loudness values corresponding to the other electronic devices apart from the wearable device, and whether the second loudness value corresponding to the wearable device is greater than the first loudness value. When both judgment results are yes, it determines that the current wearer of the wearable device is the speaker of the voice signal; otherwise, it determines that the current wearer of the wearable device is not the speaker of the voice signal.
As another optional implementation, when recognizing whether the current wearer of the wearable device is the speaker of the voice signal, the electronic device may acquire a first receiving time at which it received the voice signal, and then send a receiving-time acquisition request to the other electronic devices (including the wearable device) within communication range, the receiving-time acquisition request being used to instruct the other electronic devices to return a second receiving time at which they received the voice signal. After receiving the second receiving times returned by the other electronic devices, the electronic device judges whether the second receiving time corresponding to the wearable device is earlier than the second receiving times corresponding to the other electronic devices apart from the wearable device, and whether it is earlier than the first receiving time (the voice signal reaches the device nearest the speaker first). If both judgment results are yes, it determines that the current wearer of the wearable device is the speaker of the voice signal; otherwise, it determines that the current wearer is not the speaker of the voice signal.
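The loudness-based check in the first embodiment can be sketched as follows; the device names and numeric readings are illustrative only.

```python
def wearer_is_speaker(first_loudness, loudness_by_device, wearable_id):
    """Decide whether the wearable's current wearer is the speaker: the
    wearable must have received the voice signal louder than every other
    nearby device AND louder than the phone itself (the first loudness
    value), since the speaker should be closest to the device they wear."""
    wearable_loudness = loudness_by_device[wearable_id]
    others = [v for dev, v in loudness_by_device.items() if dev != wearable_id]
    return all(wearable_loudness > v for v in others) and wearable_loudness > first_loudness
```

The receiving-time variant is symmetric, with "earlier than" replacing "greater than".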
When the current wearer of the wearable device is identified to be the speaker of the voice signal, the electronic device constructs an authentication request according to a message format agreed with the wearable device in advance, and sends the constructed authentication request to the wearable device, wherein the authentication request is used for indicating the wearable device to identify whether the current wearer is the preset user (namely the owner) or not, and returning an identification result.
After receiving an identity verification request from the electronic device, the wearable device can acquire gait data of the current wearer and compare the acquired gait data with preset gait data of the preset user. If the similarity between the acquired gait data and the preset gait data reaches a preset gait similarity (which can be set by a person skilled in the art according to actual needs, for example to 90%), the wearable device returns to the electronic device an identification result indicating that the current wearer is the preset user; otherwise, it returns an identification result indicating that the current wearer is not the preset user.
In addition, after receiving an authentication request from the electronic device, the wearable device can also acquire vein information of the current wearer and compare the acquired vein information with preset vein information of the preset user. If the similarity between the acquired vein information and the preset vein information reaches a preset vein similarity (which can likewise be set by a person skilled in the art according to actual needs, for example to 90%), the wearable device returns an identification result indicating that the current wearer is the preset user; otherwise, it returns an identification result indicating that the current wearer is not the preset user.
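The embodiment does not fix a particular similarity metric for the gait or vein comparison. A minimal sketch assuming cosine similarity over numeric feature vectors (the metric, the function names, and treating the 90% example as the default threshold are all illustrative assumptions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def wearer_is_preset_user(features, preset_features, threshold=0.90):
    """Return the identification result: True when the collected biometric
    features reach the preset similarity (90% in the example above)."""
    return cosine_similarity(features, preset_features) >= threshold
```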
104. And when receiving the recognition result that the current wearer is a preset user, executing position prompting operation according to a preset mode.
In the embodiment of the application, when the electronic device receives the identification result, returned by the wearable device, that the current wearer is the preset user, it executes the position prompting operation in the preset manner so as to guide the preset user (namely, the owner) to find the electronic device.
The manner in which the electronic device executes the position prompting operation can be set by default or according to data input by the user. For example, the default prompting manner of the electronic device is to light up the screen. In addition, referring to fig. 2, the electronic device further provides a prompting manner setting interface for the user to select a prompting manner according to actual needs. For example, when the user selects the position prompting manner "light up the screen and ring", if the electronic device receives the voice signal "Xiao Ou, where are you" sent by the user, it indicates its current position by lighting up the screen and ringing.
As can be seen from the above, when the electronic device in the embodiment of the application collects a voice signal in the external environment, it acquires the instruction to be executed included in the voice signal. When the instruction to be executed is the instruction for triggering the position prompt, it acquires the voiceprint feature included in the voice signal and judges whether the acquired voiceprint feature matches the preset voiceprint feature of the preset user. When the voiceprint feature does not match the preset voiceprint feature, auxiliary verification is carried out through the pre-associated wearable device, and when the verification passes, the position prompting operation is executed in the preset manner. Compared with the prior art, in which the position prompting operation is executed only when the voiceprint verification passes, this avoids a failure to respond caused by changes in the user's voiceprint, and can improve the success rate of triggering the electronic device to perform the position prompt.
Referring to fig. 3, fig. 3 is another schematic flow chart of a position prompting method according to an embodiment of the present application, and as shown in fig. 3, the position prompting method may include:
201. when a noisy speech signal in an external environment is collected, a historical noise signal corresponding to the noisy speech signal is obtained.
It is easily understood that various noises exist in the environment, such as noises generated by operating a computer, noises generated by knocking a keyboard, and the like in an office. Therefore, when the electronic device collects the voice signal, it is obviously difficult to collect a pure voice signal.
Correspondingly, when the electronic device is in a noisy environment, if the user sends out a voice signal, the electronic device collects a noisy voice signal in the external environment, the noisy voice signal is formed by combining the voice signal sent out by the user and a noise signal in the external environment, and if the user does not send out the voice signal, the electronic device only collects the noise signal in the external environment. The electronic equipment buffers the collected voice signals with noise and noise signals.
In the embodiment of the present application, when the electronic device collects a noisy speech signal in the external environment, it takes the start time of the noisy speech signal as an end time, obtains the noise signal of a preset duration collected before the noisy speech signal was received (the preset duration may be set to a suitable value by a person skilled in the art according to actual needs, for example 500 ms, and is not specifically limited in this embodiment of the present application), and takes that noise signal as the historical noise signal corresponding to the noisy speech signal.
For example, if the preset duration is configured to be 500 ms and the start time of the noisy speech signal is 17:48:56.500 on 19 June 2018, the electronic device acquires the 500 ms of noise signal buffered from 17:48:56.000 to 17:48:56.500 on 19 June 2018 and takes that noise signal as the historical noise signal corresponding to the noisy speech signal.
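In sample terms, taking the start time of the noisy speech as the end of the window reduces to a slice of the capture buffer. A minimal sketch (the flat sample buffer and the function name are illustrative assumptions):

```python
def historical_noise(buffer, sample_rate, speech_start_time, preset_duration=0.5):
    """Slice the `preset_duration` seconds of buffered audio that end at
    `speech_start_time`, the onset of the noisy speech signal.
    `buffer` holds the device's captured samples starting at t = 0 seconds."""
    end = int(speech_start_time * sample_rate)
    begin = max(0, end - int(preset_duration * sample_rate))
    return buffer[begin:end]
```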
202. And acquiring a noise signal during the acquisition of the voice signal with noise according to the historical noise signal.
After acquiring the historical noise signal corresponding to the voice signal with noise, the electronic equipment further acquires the noise signal during the acquisition of the voice signal with noise according to the acquired historical noise signal.
For example, the electronic device may predict noise distribution during the period of acquiring the noisy speech signal according to the acquired historical noise signal, so as to obtain a noise signal during the period of acquiring the noisy speech signal.
For another example, considering that noise is usually stable and changes little over a continuous period, the electronic device may use the acquired historical noise signal directly as the noise signal during the collection of the noisy speech signal. If the duration of the historical noise signal is greater than that of the noisy speech signal, a segment with the same duration as the noisy speech signal may be intercepted from the historical noise signal and used as the noise signal during the collection of the noisy speech signal. If the duration of the historical noise signal is less than that of the noisy speech signal, the historical noise signal can be copied, and the copies spliced together to obtain a noise signal with the same duration as the noisy speech signal, which is then used as the noise signal during the collection of the noisy speech signal.
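The intercept-or-splice rule above can be sketched as follows (the function name is an illustrative assumption; signals are plain sample lists):

```python
def fit_noise_to_speech(history, speech_len):
    """Intercept the historical noise when it is longer than the noisy
    speech, or copy and splice it when it is shorter, so the result has
    exactly `speech_len` samples."""
    if len(history) >= speech_len:
        return history[:speech_len]          # intercept a same-length segment
    copies = -(-speech_len // len(history))  # ceiling division: copies needed
    return (history * copies)[:speech_len]   # splice copies, trim to length
```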
203. And performing reverse phase superposition on the noise signal and the voice signal with the noise, and taking the noise-reduced voice signal obtained by superposition as the voice signal to be processed.
After acquiring the noise signal during the collection of the noisy speech signal, the electronic device first performs phase-inversion processing on the acquired noise signal, then superposes the phase-inverted noise signal on the noisy speech signal to cancel the noise component of the noisy speech signal, obtaining a noise-reduced speech signal, and uses the obtained noise-reduced speech signal as the speech signal to be processed in the subsequent steps.
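Sample by sample, the inverse-phase superposition is a negation followed by an addition. A minimal sketch (the function name is an illustrative assumption):

```python
def denoise(noisy_speech, noise_estimate):
    """Phase-invert the noise estimate and superpose it on the noisy
    speech; where the estimate matches the true noise, the noise
    component cancels sample by sample."""
    inverted = [-n for n in noise_estimate]
    return [s + i for s, i in zip(noisy_speech, inverted)]
```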
204. And acquiring an instruction to be executed included in the voice signal.
When the instruction to be executed included in the voice signal is acquired, the electronic equipment firstly judges whether a voice analysis engine exists locally, if so, the electronic equipment inputs the voice signal into the local voice analysis engine for voice analysis, and a voice analysis text is obtained. The voice analysis is performed on the voice signal, that is, the voice signal is converted from "audio" to "text". After the voice analysis text of the voice signal is obtained through analysis, the electronic equipment further obtains the instruction to be executed included in the voice signal from the voice analysis text.
The electronic device stores a plurality of instruction keywords in advance, where a single instruction keyword or a combination of instruction keywords corresponds to one instruction. When acquiring the instruction to be executed included in the voice signal from the voice analysis text obtained through analysis, the electronic device first performs a word segmentation operation on the voice analysis text to obtain the word sequence corresponding to the voice analysis text, the word sequence including a plurality of words.
After the word sequence corresponding to the voice analysis text is obtained, the electronic device matches the instruction keywords with the word sequence, that is, the instruction keywords in the word sequence are found out, so that the corresponding instruction is obtained through matching, and the instruction obtained through matching is used as an instruction to be executed of the voice signal. Wherein the matching search of the instruction keywords comprises complete matching and/or fuzzy matching.
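The complete-matching case described above can be sketched as a lookup over a keyword table. A minimal sketch (the table contents, the instruction name, and the use of "Xiao Ou" as the assistant's wake word are illustrative assumptions):

```python
# Hypothetical keyword table: a keyword combination maps to one instruction.
INSTRUCTION_TABLE = {
    frozenset({"Xiao Ou", "you", "where"}): "TRIGGER_POSITION_PROMPT",
}

def match_instruction(word_sequence):
    """Complete matching: an instruction is matched when every keyword of
    some combination appears in the segmented word sequence."""
    words = set(word_sequence)
    for keywords, instruction in INSTRUCTION_TABLE.items():
        if keywords <= words:  # all keywords found in the word sequence
            return instruction
    return None  # no instruction keyword combination matched
```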
If the electronic device determines that no voice analysis engine exists locally, it sends the voice signal to a server (a server providing a voice analysis service), instructing the server to analyze the voice signal and to return the voice analysis text obtained by analyzing the voice signal. After receiving the voice analysis text returned by the server, the electronic device can acquire the instruction to be executed included in the voice signal from the voice analysis text.
205. And when the instruction to be executed is an instruction for triggering position prompt, acquiring voiceprint features included by the voice signal, and judging whether the acquired voiceprint features are matched with preset voiceprint features of a preset user.
After acquiring the instruction to be executed included in the voice signal, the electronic device identifies whether the instruction to be executed is the instruction for triggering the position prompt, where the instruction for triggering the position prompt can be set according to input data of the owner; if the instruction to be executed is the instruction for triggering the position prompt, the speaker of the voice signal is preliminarily judged to be the owner. For example, the owner sets the instruction keyword combination "Xiao Ou" + "you" + "where" as the instruction for triggering the position prompt. Correspondingly, when the electronic device receives the voice signal "Xiao Ou, where are you", it judges the speaker of that voice signal to be the owner and takes the instruction to be executed included in the voice signal as the instruction for triggering the position prompt.
When the instruction to be executed included in the voice signal is recognized as an instruction for triggering the position prompt, the electronic equipment further acquires the voiceprint feature included in the voice signal. Wherein the voiceprint feature includes, but is not limited to, at least one feature component of a spectrum feature component, a cepstrum feature component, a formant feature component, a pitch feature component, a reflection coefficient feature component, a tone feature component, a speech rate feature component, an emotion feature component, a prosody feature component, and a rhythm feature component.
Then, the electronic device judges whether the acquired voiceprint feature matches the preset voiceprint feature of the preset user (the preset user being the owner of the electronic device, and the preset voiceprint feature being one the owner can enter in advance). Specifically, the electronic device can obtain the similarity between the voiceprint feature and the preset voiceprint feature and judge whether the obtained similarity is greater than or equal to a preset voiceprint similarity (which can be set by a person skilled in the art according to actual needs). When the obtained similarity is smaller than the preset voiceprint similarity, the electronic device determines that the voiceprint feature does not match the preset voiceprint feature, and determines that the speaker of the voice signal is not the owner.
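The branch this comparison drives (match confirms the owner directly; mismatch falls back to wearable-assisted verification) can be sketched as follows (the function name, result names, and 0.9 default are illustrative assumptions):

```python
def voiceprint_decision(similarity, preset_similarity=0.9):
    """Dispatch on the voiceprint comparison: a similarity at or above the
    preset threshold confirms the owner directly, while a lower similarity
    triggers the auxiliary verification through the wearable device."""
    if similarity >= preset_similarity:
        return "owner_confirmed"
    return "send_authentication_request_to_wearable"
```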
206. And when the voiceprint features are not matched with the preset voiceprint features, sending an identity authentication request to the pre-associated wearable device, wherein the identity authentication request is used for indicating the wearable device to identify whether the current wearer is a preset user or not, and returning an identification result.
In the embodiment of the application, the electronic device performs Bluetooth pairing with a wearable device of the owner (for example, a smart bracelet, smart watch, smart jewelry, smart glasses, or the like) in advance according to an input operation received from the owner, and establishes an association relationship with the wearable device after pairing succeeds.
Based on the above description, it is easily understood by those skilled in the art that when the voiceprint feature does not match the preset voiceprint feature, the speaker of the voice signal may not be the owner, but the possibility that the speaker is the owner cannot be excluded (for example, the owner's voiceprint may have changed). At this time, the electronic device performs auxiliary verification through the pre-associated wearable device.
When the electronic device determines that the voiceprint feature is not matched with the preset voiceprint feature, firstly, whether the current wearer of the wearable device is a speaker of the voice signal is identified.
When the current wearer of the wearable device is identified as the speaker of the voice signal, the electronic device constructs an authentication request according to a message format agreed with the wearable device in advance, and sends the constructed authentication request to the wearable device, where the authentication request instructs the wearable device to identify whether the current wearer is the preset user (namely, the owner) and to return an identification result.
207. And when receiving the recognition result that the current wearer is a preset user, executing position prompting operation according to a preset mode.
In the embodiment of the application, when the electronic device receives the identification result, returned by the wearable device, that the current wearer is the preset user, it executes the position prompting operation in the preset manner so as to guide the preset user (namely, the owner) to find the electronic device.
The manner in which the electronic device executes the position prompting operation can be set by default or according to data input by the user. For example, the default prompting manner of the electronic device is to light up the screen. In addition, referring to fig. 2, the electronic device further provides a prompting manner setting interface so that the user can select a prompting manner according to actual needs. For example, when the user selects the position prompting manner "light up the screen and ring", if the electronic device receives the voice signal "Xiao Ou, where are you" sent by the user, it indicates its current position by lighting up the screen and ringing.
In one embodiment, obtaining a noise signal during collection of a noisy speech signal from a historical noise signal comprises:
(1) performing model training by taking the historical noise signal as sample data to obtain a noise prediction model;
(2) and predicting a noise signal during the collection of the voice signal with noise according to the noise prediction model.
After the electronic equipment acquires the historical noise signal, the historical noise signal is used as sample data, model training is carried out according to a preset training algorithm, and a noise prediction model is obtained.
It should be noted that the training algorithm is a machine learning algorithm, and a machine learning algorithm can predict data by continuously performing feature learning; for example, the electronic device can predict the current noise distribution from the historical noise distribution. The machine learning algorithm may include a decision tree algorithm, a regression algorithm, a Bayesian algorithm, a neural network algorithm (which may include a deep neural network algorithm, a convolutional neural network algorithm, a recurrent neural network algorithm, and the like), a clustering algorithm, and so on; which training algorithm is selected as the preset training algorithm for model training can be chosen by a person skilled in the art according to actual needs.
For example, the preset training algorithm configured for the electronic device is a Gaussian mixture model algorithm (a regression-type algorithm). After acquiring the historical noise signal, the electronic device takes the historical noise signal as sample data, performs model training according to the Gaussian mixture model algorithm, obtains a Gaussian mixture model through training (the model includes a plurality of Gaussian units and is used for describing the noise distribution), and takes the Gaussian mixture model as the noise prediction model. Then, the electronic device takes the start time and the end time of the period during which the noisy speech signal was collected as the input of the noise prediction model, inputs them into the noise prediction model for processing, and the noise prediction model outputs the noise signal of that period.
In one embodiment, "performing the location hint operation in a preset manner" includes:
acquiring current first position information, and generating first position prompt information according to the first position information.
When the electronic device executes the position prompting operation in the preset manner, it first acquires its current position information and records it as first position information. The electronic device may identify whether it is currently in an outdoor environment or an indoor environment according to the strength of the received satellite positioning signal: for example, when the strength of the received satellite positioning signal is lower than a preset threshold, the electronic device is determined to be in an indoor environment, and when the strength is higher than or equal to the preset threshold, it is determined to be in an outdoor environment. In an outdoor environment, the electronic device may acquire the current first position information by using a satellite positioning technology; in an indoor environment, it may acquire the current first position information by using an indoor positioning technology.
After the current first location information is obtained, the obtained first location information may be spliced with first preset information (which may be set by a person skilled in the art according to actual needs, and this is not specifically limited in this embodiment of the present application), and the obtained spliced information is used as the first location prompt information.
For example, referring to fig. 4, assume that the first preset information is "Owner, I am in" and the acquired first position information is "the meeting room". The electronic device splices the first preset information and the first position information to obtain the first position prompt information "Owner, I am in the meeting room", and then outputs the obtained first position prompt information in a voice manner, so as to guide the user and help the user find the electronic device.
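The splicing of preset information and located position is plain string concatenation. A minimal sketch (the function name and the exact wording of the preset information are illustrative assumptions):

```python
def build_position_prompt(preset_info, position_info):
    """Splice the preset information with the acquired position
    information to form the position prompt information."""
    return f"{preset_info} {position_info}"
```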
In one embodiment, "performing the location hint operation in a preset manner" includes:
(1) sending a ranging request to the wearable device, and receiving ranging information returned by the wearable device according to the ranging request;
(2) acquiring a current distance value between the electronic device and the wearable device according to the received ranging information;
(3) and generating second position prompt information according to the acquired distance value, and outputting the generated second position prompt information in a voice mode.
The electronic equipment constructs a ranging request according to a message format agreed with the wearable equipment in advance, and sends the constructed ranging request to the wearable equipment, wherein the ranging request is used for indicating the wearable equipment to return ranging information.
After sending the ranging request to the wearable device, the electronic device receives ranging information returned by the wearable device according to the ranging request, where the ranging information includes related information that enables a distance value between the electronic device and the wearable device to be calculated, for example, the ranging information may be location information of a location where the wearable device is located.
After receiving the ranging information returned by the wearable device, the electronic device calculates the current distance value between itself and the wearable device according to the ranging information.
After the electronic equipment acquires the distance value between the electronic equipment and the wearable equipment, second position prompt information is generated according to the distance value and used for prompting the distance between the electronic equipment and the wearable equipment.
When the electronic device generates the second position prompt information, the obtained distance value may be spliced with second preset information (which may be set by a person skilled in the art according to actual needs, and this is not specifically limited in this embodiment of the present application), and the obtained spliced information is used as the second position prompt information.
For example, assume that the second preset information is "Owner, the distance between you and me is" and the obtained distance value is "10 meters". The electronic device splices the second preset information and the distance value to obtain the second position prompt information "Owner, the distance between you and me is 10 meters", and then outputs the obtained second position prompt information in a voice manner, as shown in fig. 5.
In one embodiment, "acquiring a current distance value between the electronic device and the wearable device according to the received ranging information" includes:
(1) acquiring a timestamp carried by the ranging information, wherein the timestamp is the sending time of the ranging information;
(2) and calculating the current distance value between the electronic device and the wearable device according to the sending time and the receiving time at which the ranging information was received.
The ranging information sent by the wearable device may be a blank frame, where the blank frame carries a timestamp, and the timestamp is the sending time at which the wearable device sent the blank frame. Correspondingly, after receiving the blank frame from the wearable device, the electronic device parses the timestamp carried by the blank frame to obtain the sending time of the blank frame. Since data interaction between the electronic device and the wearable device is realized at the physical layer in the form of electromagnetic waves, and the propagation speed of electromagnetic waves in the air is known, the electronic device can calculate the current distance value between itself and the wearable device from the sending time and the receiving time at which the ranging information was received, in combination with the propagation speed of the electromagnetic waves, according to the following formula:
L=(Tr-Tt)*C;
wherein L represents the distance value between the electronic device and the wearable device, Tr represents the receiving time at which the electronic device received the blank frame (i.e., the ranging information), Tt represents the sending time at which the wearable device transmitted the blank frame, and C represents the propagation speed of the electromagnetic waves in the air.
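A minimal sketch of the time-of-flight formula L = (Tr - Tt) * C (it assumes, as the formula itself does, that the two devices' clocks are synchronised; the names are illustrative):

```python
C = 299_792_458.0  # propagation speed of electromagnetic waves, in m/s

def tof_distance(t_transmit, t_receive):
    """L = (Tr - Tt) * C: the one-way time of flight of the blank frame,
    scaled by the propagation speed, gives the device-to-wearable distance."""
    return (t_receive - t_transmit) * C
```

A 100 ns flight time, for instance, corresponds to roughly 30 m.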
In one embodiment, "acquiring a current distance value between the electronic device and the wearable device according to the received ranging information" includes:
(1) when the distance measurement information comprises second position information of the wearable device, acquiring current third position information;
(2) and calculating the current distance value between the electronic device and the wearable device according to the second position information and the third position information.
After receiving a ranging request from the electronic device, the wearable device performs positioning operation according to the ranging request, acquires position information of the position where the wearable device is located, and records the position information as second position information. And then packaging the second position information into a data frame, and sending the data frame as ranging information to the electronic equipment.
Correspondingly, after receiving the ranging information sent by the wearable device in the form of a data frame, the electronic device extracts the second position information of the wearable device from the data frame. In addition, the electronic device also acquires its current third position information. As above, the electronic device may identify whether it is currently in an outdoor environment or an indoor environment according to the strength of the received satellite positioning signal (for example, a strength lower than a preset threshold indicates an indoor environment, and a strength higher than or equal to the preset threshold indicates an outdoor environment). In an outdoor environment, the electronic device may acquire the current third position information by using a satellite positioning technology; in an indoor environment, it may acquire the current third position information by using an indoor positioning technology.
It should be noted that the second position information and the third position information are expressed in the same coordinate system: the second position information is the coordinate of the second position where the wearable device is located, and the third position information is the coordinate of the third position where the electronic device is located. The electronic device calculates the current distance value between itself and the wearable device according to the following formula:
L=sqrt((X1-X2)^2+(Y1-Y2)^2);
wherein L represents the distance between the electronic device and the wearable device, X1 represents the abscissa of the second position where the wearable device is located, Y1 represents the ordinate of the second position where the wearable device is located, X2 represents the abscissa of the third position where the electronic device is located, and Y2 represents the ordinate of the third position where the electronic device is located, as shown in fig. 6.
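A minimal sketch of the coordinate-based distance computation (the function name is an illustrative assumption; both positions must be expressed in the same coordinate system, as noted above):

```python
import math

def coordinate_distance(x1, y1, x2, y2):
    """Distance between the wearable device at (X1, Y1) and the electronic
    device at (X2, Y2) in a shared planar coordinate system."""
    return math.hypot(x1 - x2, y1 - y2)
```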
In one embodiment, the method further comprises the following steps:
(1) generating navigation path information according to the second position information and the third position information;
(2) and sending the navigation path information to the wearable device, and instructing the wearable device to output the navigation path information.
After the electronic device acquires the second position information of the wearable device and the third position information of the electronic device, the distance value between the electronic device and the wearable device is calculated according to the second position information and the third position information, and navigation path information from the wearable device to the electronic device is generated by adopting a preset navigation algorithm according to the second position information and the third position information. It should be noted that what kind of navigation algorithm is adopted as the preset navigation algorithm may be selected by a person skilled in the art according to actual needs, and the present application is not limited to this specifically.
The electronic device, after generating navigation path information from the wearable device to the electronic device, sends the navigation path information to the wearable device and instructs the wearable device to output the navigation path information. The wearable device can output navigation path information in a voice and/or image mode to guide a user to find the electronic device.
In one embodiment, a position prompting device is further provided. Referring to fig. 7, fig. 7 is a schematic structural diagram of a position indicating device 400 according to an embodiment of the present disclosure. The position prompting device is applied to an electronic device, and includes an obtaining module 401, a verifying module 402, a sending module 403, and a prompting module 404, as follows:
the obtaining module 401 is configured to obtain an instruction to be executed included in the voice signal when the voice signal in the external environment is collected.
The verification module 402 is configured to, when the instruction to be executed is an instruction for triggering a position prompt, acquire a voiceprint feature included in the voice signal, and determine whether the acquired voiceprint feature matches a preset voiceprint feature of a preset user.
A sending module 403, configured to send an authentication request to the pre-associated wearable device when the voiceprint feature does not match the preset voiceprint feature, where the authentication request is used to instruct the wearable device to identify whether the current wearer is the preset user and to return an identification result.
And the prompt module 404 is configured to execute a position prompt operation according to a preset mode when receiving an identification result that the current wearer is a preset user.
In an embodiment, the prompt module 404 may be configured to:
acquiring current first position information, and generating first position prompt information according to the first position information.
In an embodiment, the prompt module 404 may be configured to:
sending a ranging request to the wearable device, and receiving ranging information returned by the wearable device according to the ranging request;
acquiring the current distance value from the wearable device according to the received ranging information;
and generating second position prompt information according to the acquired distance value, and outputting the generated second position prompt information in a voice mode.
In an embodiment, the prompt module 404 may be configured to:
acquiring a timestamp carried by the ranging information, wherein the timestamp is the sending time of the ranging information;
and calculating the current distance value from the wearable device according to the sending time and the receiving time of the ranging information.
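The timestamp-based calculation above is a one-way time-of-flight measurement. A minimal sketch follows, assuming an RF ranging signal propagating at the speed of light and synchronized clocks on the two devices (both assumptions are illustrative; the embodiment does not specify the signal type):

```python
SPEED_OF_LIGHT_M_S = 299_792_458  # assumed propagation speed of the ranging signal

def distance_from_timestamps(send_time_s, receive_time_s):
    # Distance = one-way time of flight * propagation speed.
    # Assumes the wearable device's clock (which stamped send_time_s)
    # is synchronized with the electronic device's clock.
    time_of_flight = receive_time_s - send_time_s
    return time_of_flight * SPEED_OF_LIGHT_M_S
```

In practice round-trip schemes are often preferred precisely because they remove the clock-synchronization assumption.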
In an embodiment, the prompt module 404 may be configured to:
when the ranging information includes second position information of the wearable device, acquiring current third position information;
and calculating the current distance value from the wearable device according to the second position information and the third position information.
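When the second and third position information are geographic coordinates, the distance value can be obtained with the haversine great-circle formula. Treating each position as a (latitude, longitude) pair is an assumption for illustration; the embodiment does not prescribe a coordinate system:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two (lat, lon) points,
    # using a mean Earth radius of 6,371 km.
    r = 6_371_000
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))
```

For the short indoor distances this embodiment targets, a flat-earth approximation would also suffice, but the haversine form stays correct at any separation.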
In an embodiment, the position prompting apparatus 400 further includes a navigation module, which may be configured to:
generating navigation path information according to the second position information and the third position information;
and sending the navigation path information to the wearable device, and instructing the wearable device to output the navigation path information.
In an embodiment, the obtaining module 401 may be configured to:
acquiring a historical noise signal corresponding to a voice signal with noise when the voice signal with noise in the external environment is acquired;
acquiring a noise signal during the acquisition of a voice signal with noise according to the historical noise signal;
and performing reverse phase superposition on the noise signal and the voice signal with the noise, and taking the noise-reduced voice signal obtained by superposition as the voice signal.
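The reverse-phase superposition described above amounts to subtracting the estimated noise samples from the noisy samples. The sketch below assumes both signals are represented as sample sequences at the same rate and aligned in time (an assumption of the illustration, not stated in the embodiment):

```python
def denoise(noisy, noise_estimate):
    # Superimpose the noise estimate with inverted phase, i.e.
    # subtract it sample by sample, leaving the speech component.
    return [s - n for s, n in zip(noisy, noise_estimate)]
```

The quality of the result depends entirely on how well the noise estimate tracks the actual noise during collection, which is why the embodiment derives it from historical noise signals.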
In an embodiment, the obtaining module 401 may be configured to:
performing model training by taking the historical noise signal as sample data to obtain a noise prediction model;
and predicting a noise signal during the collection of the voice signal with noise according to the noise prediction model.
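The embodiment leaves the form of the noise prediction model open. One minimal form it could take is a first-order autoregressive predictor fitted to the historical noise samples by least squares; both the AR(1) choice and the fitting method here are illustrative assumptions:

```python
def fit_ar1(history):
    # Fit n[t] ~ a * n[t-1] to historical noise samples by least squares:
    # a = sum(n[t] * n[t-1]) / sum(n[t-1]^2).
    num = sum(history[t] * history[t - 1] for t in range(1, len(history)))
    den = sum(history[t - 1] ** 2 for t in range(1, len(history)))
    return num / den

def predict_next(history, a):
    # Predict the noise sample expected during the current
    # voice-signal collection window.
    return a * history[-1]
```

A production system would more plausibly use a neural network trained on spectral features, as the description's mentions of model training and neural networks suggest, but the fit/predict split shown here is the same.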
For the steps executed by each module in the position prompting device 400, reference may be made to the method steps described in the foregoing method embodiments. The position prompting device 400 may be integrated into an electronic device such as a mobile phone or a tablet computer.
In specific implementation, the modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the units may refer to the foregoing embodiments, which are not described herein again.
As can be seen from the above, in the position prompting apparatus of this embodiment, when a voice signal in the external environment is collected, the obtaining module 401 obtains the instruction to be executed contained in the voice signal. When the instruction to be executed is an instruction for triggering a position prompt, the verification module 402 acquires the voiceprint feature contained in the voice signal and determines whether it matches the preset voiceprint feature of a preset user. When the voiceprint feature does not match the preset voiceprint feature, the sending module 403 sends an identity authentication request to the pre-associated wearable device, where the request is used to instruct the wearable device to identify whether the current wearer is the preset user and to return a recognition result. When a recognition result indicating that the current wearer is the preset user is received, the prompting module 404 executes the position prompting operation in a preset manner. Compared with the prior art, in which the position prompting operation is executed only when voiceprint verification passes, this avoids a failure to respond caused by changes in the user's voiceprint features, and improves the success rate of triggering the electronic device to perform a position prompt.
In an embodiment, an electronic device is also provided. Referring to fig. 8, an electronic device 500 includes a processor 501 and a memory 502. The processor 501 is electrically connected to the memory 502.
The processor 501 is the control center of the electronic device 500; it connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device 500 and processes data by running or loading a computer program stored in the memory 502 and calling data stored in the memory 502.
The memory 502 may be used to store software programs and modules, and the processor 501 executes various functional applications and performs data processing by running the computer programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, a computer program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to use of the electronic device. Further, the memory 502 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 501 with access to the memory 502.
In this embodiment, the processor 501 in the electronic device 500 loads instructions corresponding to one or more processes of the computer program into the memory 502, and the processor 501 runs the computer program stored in the memory 502, so as to implement various functions as follows:
when a voice signal in an external environment is collected, acquiring a command to be executed included in the voice signal;
when the instruction to be executed is an instruction for triggering position prompt, acquiring voiceprint features included in the voice signal, and judging whether the acquired voiceprint features are matched with preset voiceprint features of a preset user or not;
when the voiceprint features are not matched with the preset voiceprint features, sending an identity authentication request to the pre-associated wearable device, wherein the identity authentication request is used for indicating the wearable device to identify whether the current wearer is a preset user or not, and returning an identification result;
and when receiving the recognition result that the current wearer is a preset user, executing position prompting operation according to a preset mode.
Referring to fig. 9, in some embodiments, the electronic device 500 may further include: a display 503, radio frequency circuitry 504, audio circuitry 505, and a power supply 506. The display 503, the rf circuit 504, the audio circuit 505, and the power source 506 are electrically connected to the processor 501.
The display 503 may be used to display information entered by or provided to the user, as well as various graphical user interfaces, which may be composed of graphics, text, icons, video, and any combination thereof. The display 503 may include a display panel, which in some embodiments may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The RF circuit 504 may be used to transmit and receive radio-frequency signals so as to establish wireless communication with a network device or other electronic devices and to exchange signals with them.
The audio circuit 505 may be used to provide an audio interface between the user and the electronic device through a speaker and a microphone.
The power supply 506 may be used to power various components of the electronic device 500. In some embodiments, power supply 506 may be logically coupled to processor 501 through a power management system, such that functions of managing charging, discharging, and power consumption are performed through the power management system.
Although not shown in fig. 9, the electronic device 500 may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
In some embodiments, when performing the location hint operation in a preset manner, the processor 501 may perform the following steps:
acquiring current first position information, and generating first position prompt information according to the first position information.
In some embodiments, when performing the location hint operation in a preset manner, the processor 501 may perform the following steps:
sending a ranging request to the wearable device, and receiving ranging information returned by the wearable device according to the ranging request;
acquiring the current distance value from the wearable device according to the received ranging information;
and generating second position prompt information according to the acquired distance value, and outputting the generated second position prompt information in a voice mode.
In some embodiments, when acquiring the current distance value from the wearable device according to the received ranging information, the processor 501 may perform the following steps:
acquiring a timestamp carried by the ranging information, wherein the timestamp is the sending time of the ranging information;
and calculating the current distance value from the wearable device according to the sending time and the receiving time of the ranging information.
In some embodiments, when acquiring the current distance value from the wearable device according to the received ranging information, the processor 501 may perform the following steps:
when the ranging information includes second position information of the wearable device, acquiring current third position information;
and calculating the current distance value from the wearable device according to the second position information and the third position information.
In some embodiments, processor 501 may also perform the following steps:
generating navigation path information according to the second position information and the third position information;
and sending the navigation path information to the wearable device, and instructing the wearable device to output the navigation path information.
In some embodiments, before obtaining the instruction to be executed contained in the voice signal when a voice signal in the external environment is collected, the processor 501 may further perform the following steps:
acquiring a historical noise signal corresponding to a voice signal with noise when the voice signal with noise in the external environment is acquired;
acquiring a noise signal during the acquisition of a voice signal with noise according to the historical noise signal;
and performing reverse phase superposition on the noise signal and the voice signal with the noise, and taking the noise-reduced voice signal obtained by superposition as the voice signal.
In some embodiments, when acquiring a noise signal during noisy speech signal acquisition from a historical noise signal, processor 501 may further perform the following steps:
performing model training by taking the historical noise signal as sample data to obtain a noise prediction model;
and predicting a noise signal during the collection of the voice signal with noise according to the noise prediction model.
An embodiment of the present application further provides a storage medium storing a computer program which, when run on a computer, causes the computer to execute the position prompting method of any one of the above embodiments, for example: when a voice signal in the external environment is collected, obtaining the instruction to be executed contained in the voice signal; when the instruction to be executed is an instruction for triggering a position prompt, acquiring the voiceprint feature contained in the voice signal and determining whether the acquired voiceprint feature matches a preset voiceprint feature of a preset user; when the voiceprint feature does not match the preset voiceprint feature, sending an identity authentication request to a pre-associated wearable device, where the identity authentication request is used to instruct the wearable device to identify whether the current wearer is the preset user and to return a recognition result; and when a recognition result indicating that the current wearer is the preset user is received, executing a position prompting operation in a preset manner.
In the embodiment of the present application, the storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It should be noted that, as a person skilled in the art will understand, all or part of the process of the position prompting method in the embodiments of the present application may be completed by a computer program controlling the relevant hardware. The computer program may be stored in a computer-readable storage medium, such as the memory of an electronic device, and executed by at least one processor in the electronic device, and the execution may include the process of the embodiments of the position prompting method. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
In the position prompting device according to the embodiments of the present application, the functional modules may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If implemented in the form of a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The position prompting method and device, the storage medium, and the electronic device provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present application, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.
Claims (10)
1. A position prompting method is characterized by comprising the following steps:
when a voice signal in an external environment is collected, acquiring a command to be executed included in the voice signal;
when the instruction to be executed is an instruction for triggering position prompt, acquiring voiceprint features included by the voice signal, and judging whether the voiceprint features are matched with preset voiceprint features of a preset user or not;
when the voiceprint features are not matched with the preset voiceprint features, identifying whether a current wearer of the pre-associated wearable device is a speaker of the voice signal;
if so, sending an identity authentication request to the wearable device, wherein the identity authentication request is used for indicating the wearable device to identify whether the current wearer is the preset user or not, and returning an identification result;
and when receiving the recognition result that the current wearer is the preset user, executing position prompting operation according to a preset mode.
2. The method of claim 1, wherein the step of performing the location hint operation in a predetermined manner comprises:
acquiring current first position information, and generating first position prompt information according to the first position information;
and outputting the first position prompt information in a voice mode.
3. The method of claim 1, wherein the step of performing the location hint operation in a predetermined manner comprises:
sending a ranging request to the wearable device, and receiving ranging information returned by the wearable device according to the ranging request;
acquiring a current distance value from the wearable device according to the ranging information;
and generating second position prompt information according to the distance value, and outputting the second position prompt information in a voice mode.
4. The location prompting method of claim 3, wherein the step of obtaining a current distance value from the wearable device according to the ranging information comprises:
acquiring a timestamp carried by the ranging information, wherein the timestamp is the sending time of the ranging information;
and calculating the distance value according to the sending time and the receiving time of the distance measuring information.
5. The location prompting method of claim 3, wherein the step of obtaining a current distance value from the wearable device according to the ranging information comprises:
when the ranging information comprises second position information of the wearable device, acquiring current third position information;
and calculating the distance value according to the second position information and the third position information.
6. The position prompting method according to claim 5, characterized in that the position prompting method further comprises the steps of:
generating navigation path information according to the second position information and the third position information;
and sending the navigation path information to the wearable equipment, and instructing the wearable equipment to output the navigation path information.
7. The position prompting method according to any one of claims 1-6, characterized in that, when acquiring the voice signal in the external environment, before the step of obtaining the instruction to be executed included in the voice signal, the method further comprises:
acquiring a historical noise signal corresponding to a voice signal with noise when the voice signal with noise in an external environment is acquired;
acquiring a noise signal during the collection of the voice signal with the noise according to the historical noise signal;
and performing reverse phase superposition on the noise signal and the voice signal with the noise, and taking the noise-reduced voice signal obtained by superposition as the voice signal.
8. A position prompting device, characterized in that the position prompting device comprises:
the acquisition module is used for acquiring a command to be executed included in a voice signal when the voice signal in an external environment is acquired;
the verification module is used for acquiring the voiceprint features included by the voice signals and judging whether the voiceprint features are matched with the preset voiceprint features of a preset user or not when the instruction to be executed is an instruction for triggering position prompt;
the recognition module is used for recognizing whether the current wearer of the pre-associated wearable device is a speaker of the voice signal or not when the voiceprint feature is not matched with the preset voiceprint feature;
the sending module is used for sending an identity authentication request to the wearable device when the current wearer is the speaker, wherein the identity authentication request is used for indicating the wearable device to identify whether the current wearer is the preset user or not and returning an identification result;
and the prompting module is used for executing position prompting operation according to a preset mode when receiving the identification result of the current wearer as the preset user.
9. A storage medium having stored thereon a computer program, characterized in that, when the computer program is run on a computer, it causes the computer to execute a position prompting method according to any one of claims 1 to 7.
10. An electronic device comprising a processor and a memory, the memory storing a computer program, wherein the processor is configured to execute the location hint method of any one of claims 1 to 7 by invoking the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810682454.XA CN109064720B (en) | 2018-06-27 | 2018-06-27 | Position prompting method and device, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109064720A CN109064720A (en) | 2018-12-21 |
CN109064720B true CN109064720B (en) | 2020-09-08 |
Family
ID=64818040
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810682454.XA Active CN109064720B (en) | 2018-06-27 | 2018-06-27 | Position prompting method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109064720B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110033775A (en) * | 2019-05-07 | 2019-07-19 | 百度在线网络技术(北京)有限公司 | Multitone area wakes up exchange method, device and storage medium |
CN112071311B (en) | 2019-06-10 | 2024-06-18 | Oppo广东移动通信有限公司 | Control method, control device, wearable device and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6052610B2 (en) * | 2013-03-12 | 2016-12-27 | パナソニックIpマネジメント株式会社 | Information communication terminal and interactive method thereof |
CN104468961A (en) * | 2013-09-25 | 2015-03-25 | 北京新媒传信科技有限公司 | Method and device for prompting position of terminal |
CN105093178A (en) * | 2015-07-20 | 2015-11-25 | 小米科技有限责任公司 | Terminal positioning method, apparatus and system |
CN105827810B (en) * | 2015-10-20 | 2019-09-24 | 南京步步高通信科技有限公司 | A kind of communication terminal based on Application on Voiceprint Recognition recovers method and communication terminal |
CN106878535A (en) * | 2015-12-14 | 2017-06-20 | 北京奇虎科技有限公司 | The based reminding method and device of mobile terminal locations |
CN105592224A (en) * | 2015-12-31 | 2016-05-18 | 宇龙计算机通信科技(深圳)有限公司 | Communication information processing method and mobile terminal |
- 2018-06-27: Application CN201810682454.XA filed; granted as CN109064720B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN109064720A (en) | 2018-12-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12009007B2 (en) | Voice trigger for a digital assistant | |
CN108962241B (en) | Position prompting method and device, storage medium and electronic equipment | |
CN110310623B (en) | Sample generation method, model training method, device, medium, and electronic apparatus | |
CN109243432B (en) | Voice processing method and electronic device supporting the same | |
CN108922525B (en) | Voice processing method, device, storage medium and electronic equipment | |
WO2019214361A1 (en) | Method for detecting key term in speech signal, device, terminal, and storage medium | |
US11455989B2 (en) | Electronic apparatus for processing user utterance and controlling method thereof | |
CN110853617B (en) | Model training method, language identification method, device and equipment | |
CN108806684B (en) | Position prompting method and device, storage medium and electronic equipment | |
CN110838286A (en) | Model training method, language identification method, device and equipment | |
CN113129867B (en) | Training method of voice recognition model, voice recognition method, device and equipment | |
CN111524501A (en) | Voice playing method and device, computer equipment and computer readable storage medium | |
CN108900965A (en) | Position indicating method, device, storage medium and electronic equipment | |
CN113220590A (en) | Automatic testing method, device, equipment and medium for voice interaction application | |
CN109064720B (en) | Position prompting method and device, storage medium and electronic equipment | |
CN111428079A (en) | Text content processing method and device, computer equipment and storage medium | |
CN110728993A (en) | Voice change identification method and electronic equipment | |
CN108989551B (en) | Position prompting method and device, storage medium and electronic equipment | |
CN108922523B (en) | Position prompting method and device, storage medium and electronic equipment | |
CN113225624A (en) | Time-consuming determination method and device for voice recognition | |
CN109829067B (en) | Audio data processing method and device, electronic equipment and storage medium | |
CN108711428B (en) | Instruction execution method and device, storage medium and electronic equipment | |
US11670294B2 (en) | Method of generating wakeup model and electronic device therefor | |
KR20200092763A (en) | Electronic device for processing user speech and controlling method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||