CN109257490A - Audio-frequency processing method, device, wearable device and storage medium - Google Patents
- Publication number
- CN109257490A CN109257490A CN201811001212.6A CN201811001212A CN109257490A CN 109257490 A CN109257490 A CN 109257490A CN 201811001212 A CN201811001212 A CN 201811001212A CN 109257490 A CN109257490 A CN 109257490A
- Authority
- CN
- China
- Prior art keywords
- audio
- data
- wearable device
- information
- audio clip
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/72442—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for playing music files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72436—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. SMS or e-mail
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L2015/088—Word spotting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/12—Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/74—Details of telephonic subscriber devices with voice recognition means
Abstract
An audio processing method, an apparatus, a wearable device, and a storage medium are provided in the embodiments of the present application. The method comprises: obtaining audio data collected by a wearable device; if preset event information is detected in the audio data, extracting audio clip data corresponding to the preset event information from the audio data; and storing the audio clip data. By collecting external sound through the wearable device, and collecting and recording the sound only when preset event information that can start recording is detected in it, the embodiments of the present application improve recording efficiency and reduce redundancy in the recorded audio information.
Description
Technical field
The embodiments of the present application relate to the technical field of wearable devices, and in particular to an audio processing method, an apparatus, a wearable device, and a storage medium.
Background art
With the development of wearable devices, they are being applied in more and more fields. A wearable device is usually worn by the user for long periods, so it can collect more user-related data than other mobile devices and can better assist the user's daily life and work. However, the audio collection function of current wearable devices is not yet well developed and needs to be improved.
Summary of the invention
The embodiments of the present application provide an audio processing method, an apparatus, a wearable device, and a storage medium that can optimize the audio collection function of smart glasses.
In a first aspect, an embodiment of the present application provides an audio processing method, comprising:
obtaining audio data collected by a wearable device;
if preset event information is detected in the audio data, extracting audio clip data corresponding to the preset event information from the audio data; and
storing the audio clip data.
In a second aspect, an embodiment of the present application provides an audio processing apparatus, comprising:
a sound detection module, configured to obtain audio data collected by a wearable device;
a sound acquisition module, configured to extract, from the audio data, audio clip data corresponding to preset event information if the preset event information is detected in the audio data; and
a storage module, configured to store the audio clip data.
In a third aspect, an embodiment of the present application provides a wearable device, comprising a memory and a processor, wherein the memory stores a computer program that can be run by the processor, and the processor, when executing the computer program, implements the audio processing method described in the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a storage medium containing wearable-device-executable instructions, which, when executed by a processor of a wearable device, are used to perform the audio processing method described in the embodiments of the present application.
According to the audio processing scheme provided in the embodiments of the present application, audio data collected by a wearable device is obtained; if preset event information is detected in the audio data, audio clip data corresponding to the preset event information is extracted from the audio data; and the audio clip data is stored. By collecting external sound through the wearable device, and collecting and recording the sound only when preset event information that can start recording is detected in it, the embodiments of the present application improve recording efficiency and reduce redundancy in the audio information.
Brief description of the drawings
Fig. 1 is a flow diagram of an audio processing method provided by an embodiment of the present application;
Fig. 2 is a flow diagram of another audio processing method provided by an embodiment of the present application;
Fig. 3 is a flow diagram of another audio processing method provided by an embodiment of the present application;
Fig. 4 is a flow diagram of another audio processing method provided by an embodiment of the present application;
Fig. 5 is a flow diagram of another audio processing method provided by an embodiment of the present application;
Fig. 6 is a structural block diagram of an audio processing apparatus provided by an embodiment of the present application;
Fig. 7 is a structural schematic diagram of a wearable device provided by an embodiment of the present application;
Fig. 8 is a schematic pictorial view of a wearable device provided by an embodiment of the present application.
Detailed description of the embodiments
The technical solutions of the present application are further described below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present application and do not limit it. It should also be noted that, for ease of description, the accompanying drawings show only the parts relevant to the present application rather than the entire structure.
It should be mentioned that, before the exemplary embodiments are discussed in greater detail, some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart describes the steps as a sequential process, many of the steps may be performed in parallel, concurrently, or simultaneously, and the order of the steps may be rearranged. A process may be terminated when its operations are completed, and it may also include additional steps not shown in the figures. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, or the like.
Fig. 1 is a flow diagram of an audio processing method provided by an embodiment of the present application. The method may be performed by an audio processing apparatus, which may be implemented in software and/or hardware and may generally be integrated in a wearable device, or in another device equipped with an operating system. As shown in Fig. 1, the method comprises:
S110: obtaining audio data collected by the wearable device.
The wearable device may be a wearable device with an intelligent operating system. For example, it may be a pair of smart glasses, which are typically worn around the user's eyes. Various sensors capable of collecting various kinds of information are integrated in the wearable device, including an attitude sensor for collecting the user's posture information, a camera module for capturing images, a microphone for collecting sound, and a condition sensor for detecting the user's vital signs.
External sound can be monitored continuously, and audio data corresponding to the external sound can be collected. The external sound may be the sound in the environment where the wearable device is located. The sound in the external environment can be monitored through the microphone; if external sound is detected, the corresponding audio data is collected by the wearable device. The sound can then be processed: it may be converted to output corresponding text information, or feature extraction may be performed on it to obtain the audio feature information contained in the sound.
S111: if preset event information is detected in the audio data, extracting audio clip data corresponding to the preset event information from the audio data.
The preset event information is information that may contain a key event and is therefore stored. A key event is an event that the user pays close attention to, that has a high execution priority, or that has a time requirement. For example, the preset event information may be information about a meeting, about a time, or about a place. The preset event information may be a system default or set by the user. If the user pays close attention to the word "news", the user can set "news" as preset event information; when "news" is detected in the sound, the audio clip data corresponding to "news" can be collected.
The preset event information may include preset text or a preset audio feature. If the sound is converted and corresponding text information is output, it can be judged whether the output text information contains the preset text. If the audio feature information contained in the sound is extracted, it is determined whether the audio feature information includes the preset audio feature. If the output text information contains the preset text, or the audio feature information includes the preset audio feature, it can be determined that the sound contains preset event information.
After it is determined that the sound contains preset event information, which indicates that the sound contains a key event, the sound needs to be stored, and the audio clip data corresponding to the preset event information in the sound can be collected. For example, after the preset event information is determined in the sound, the recording function of the microphone of the wearable device can be started to collect the audio clip data corresponding to the preset event information in the sound.
Optionally, the initial time at which the preset event information occurs is determined, and the audio clip data of a preset time period is extracted from the audio data using the initial time as a reference time point.
The initial time at which the preset event information occurs is the starting time point, in the audio data collected by the wearable device, that corresponds to the preset event information. For example, if the collected sound matches the preset event information at 19:17:05, then 19:17:05 is determined to be the initial time at which the preset event information occurs. Extracting the audio clip data of a preset time period from the audio data using the initial time as the reference time point may mean extracting the audio data of the preset time period from the audio data using the initial time as the start time.
The preset time period may be a duration preset by the system or by the user. For example, if the preset time period is five minutes, then after preset event information is detected in the collected sound, five minutes of audio clip data are collected and stored, using the time point of occurrence as the reference time.
If, while the audio clip data is being collected within the preset time period, preset event information is again detected in the sound collected by the microphone, the initial time at which the new preset event information occurs is taken as a new reference time point, and the audio clip data of the preset time period is extracted from the audio data again. The newly collected audio clip data and the previously collected audio clip data are stored simultaneously.
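The collection flow of S110-S112 can be sketched as follows. The sample rate, the period length, and the function names are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of S110-S112: audio is modeled as a list of samples at
# an assumed sample rate, and a detector supplies the initial time at which
# preset event information (e.g. a keyword) was found in the stream.

SAMPLE_RATE = 1_000           # samples per second (reduced for illustration)
PRESET_PERIOD_S = 5 * 60      # the "preset time period", e.g. five minutes

def extract_clip(audio, detect_time_s, period_s=PRESET_PERIOD_S, rate=SAMPLE_RATE):
    """Extract the clip of `period_s` seconds starting at the reference
    time point (the initial time the preset event information occurred)."""
    start = int(detect_time_s * rate)
    end = min(len(audio), start + int(period_s * rate))
    return audio[start:end]

# Example: a 10-minute stream with preset event information detected at t = 120 s.
stream = [0] * (10 * 60 * SAMPLE_RATE)
clip = extract_clip(stream, detect_time_s=120)
print(len(clip) / SAMPLE_RATE)   # 300.0, i.e. the five-minute preset period
```

If a second detection occurs inside the current period, the same function can simply be called again with the new reference time point and both clips stored, matching the re-triggering behavior described above.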
S112: storing the audio clip data.
The audio clip data may be stored in the memory of the wearable device, or may be transmitted through the communication module of the wearable device to a background server for storage. After preset event information is detected in the collected audio data, the collected audio clip data corresponding to the preset event information is stored, so that the user can retrieve the stored audio clip data later when needed.
According to the audio processing method disclosed in this embodiment of the present application, audio data collected by the wearable device is obtained; if preset event information is detected in the audio data, audio clip data corresponding to the preset event information is extracted from the audio data; and the audio clip data is stored. By collecting external sound through the wearable device, and collecting and recording the sound only when preset event information that can start recording is detected in it, this embodiment improves recording efficiency and reduces redundancy in the audio information.
Fig. 2 is a flow diagram of another audio processing method provided by an embodiment of the present application. On the basis of the technical solution provided in the above embodiment, the operation of extracting the audio clip data of a preset time period from the audio data using the initial time as a reference time point is optimized. Optionally, as shown in Fig. 2, the method comprises:
S120: obtaining audio data collected by the wearable device.
For a specific implementation, refer to the related description above; details are not repeated here.
S121: if preset event information is detected in the sound, determining the initial time at which the preset event information occurs and, using the initial time as a reference time point, extracting from the audio data a first audio clip of a first time period that ends at the reference time point, and a second audio clip of a second time period that starts at the reference time point.
In other words, audio data both before and after the time point associated with the preset event information is collected. Some speakers mention related background information before gradually coming to the key information. If only the audio clip data after the time point at which the preset event information occurs were collected and stored, part of the audio data might therefore be lost. By taking the initial time as the reference time point and extracting both the first audio clip of the first time period ending at the reference time point and the second audio clip of the second time period starting at it, i.e., by collecting the audio of the first time period before the reference time point and of the second time period after it, key information that appears before the initial time of the preset event information is not missed.
The combined duration of the first time period and the second time period may be equal to the duration of the preset time period.
S122: storing the first audio clip data and the second audio clip data.
The first audio clip data and the second audio clip data may be combined into one complete audio clip for storage. For a specific implementation, refer to the related description above; details are not repeated here.
In this embodiment, by determining the initial time at which the preset event information occurs, taking the initial time as the reference time point, and collecting both the audio clip of the first time period before the reference time point and the audio clip of the second time period after it, the completeness of the information in the collected audio clip data is improved and the loss of part of the audio data is avoided.
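A minimal sketch of this pre/post-window idea, under assumed parameters: extracting audio from before the reference time point implies that the device is already buffering sound, which a fixed-size ring buffer can provide. The class name and the durations here are illustrative, not prescribed by the patent:

```python
from collections import deque

RATE = 1_000                          # samples per second (reduced for the demo)
FIRST_S, SECOND_S = 60, 240           # first + second period = the five-minute preset period

class BufferedRecorder:
    def __init__(self, rate=RATE, first_s=FIRST_S):
        self.pre = deque(maxlen=first_s * rate)   # rolling pre-context buffer
        self.rate = rate

    def feed(self, samples):
        self.pre.extend(samples)                  # continuously buffer incoming audio

    def on_event(self, following_samples):
        """Called at the reference time point; returns the first audio clip
        (ending at the reference point) and the second (starting at it)."""
        first = list(self.pre)
        second = following_samples[: SECOND_S * self.rate]
        return first, second

rec = BufferedRecorder()
rec.feed([1] * (5 * 60 * RATE))                   # five minutes of history
first, second = rec.on_event([2] * (10 * 60 * RATE))
print(len(first) / RATE, len(second) / RATE)      # 60.0 240.0
```

The `maxlen` argument makes the deque discard the oldest samples automatically, so the pre-context buffer never grows beyond the first time period regardless of how long the device has been listening.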
Fig. 3 is a flow diagram of another audio processing method provided by an embodiment of the present application. On the basis of the technical solutions provided in the above embodiments, the operation of storing the audio clip data is optimized. Optionally, as shown in Fig. 3, the method comprises:
S130: obtaining audio data collected by the wearable device.
S131: if preset event information is detected in the audio data, extracting audio clip data corresponding to the preset event information from the audio data.
For specific implementations of the above operations, refer to the related descriptions above; details are not repeated here.
S132: converting the audio clip data into corresponding text information, and storing the text information.
Text conversion processing may be performed on the collected audio clip data to convert it into text information, which is then stored. Storing the converted text information improves the efficiency with which the user obtains information: instead of opening and listening to the audio data to determine its content, the user can determine the collected key information directly from the text.
The text information may also be stored together with the audio data. The user can get a general idea from the text information of whether the audio data is what the user needs and then confirm it further from the audio data itself. This avoids the situation in which a large error in the converted text prevents the user from obtaining the accurate key information.
Optionally, converting the audio clip data into corresponding text information and storing the text information may be implemented as follows:
determining, in the audio clip data, a to-be-converted sound clip corresponding to a conversion time section, and converting the to-be-converted sound clip into text information, wherein the conversion time section includes a fixed time section taking the initial time at which the preset event information occurs as its start time; and
storing the text information and the audio clip data.
The conversion time section may include a fixed time section whose start is the initial time at which the preset event information occurs, and the conversion time section contains that time point. For example, if the initial time at which the preset event information occurs is 19:17:05, the conversion time section may be the ten seconds from 19:17:00 to 19:17:10. The duration of the fixed time section may be preset by the system or set by the user, and may also be adjusted for different kinds of preset event information.
The text information and the audio clip data are stored in correspondence with each other. By performing text conversion only on the to-be-converted sound clip corresponding to the conversion time section, the sound clip related to the key information is converted to text without converting the entire audio data, which reduces the system's conversion workload. Converting the sound clip related to the preset event information to text yields the text information corresponding to the key information. This text can serve as an abstract, allowing the user to quickly understand the relevant content of the audio clip data and determine whether it is what the user needs, thereby improving the user's efficiency.
In this embodiment, by converting the audio clip data into corresponding text information and storing the text information, the efficiency with which the user obtains the key information is improved.
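A sketch of how the conversion time section might be selected and paired with the stored clip. The helper names and the window placement are assumptions, and the patent does not specify a speech-to-text engine, so a placeholder stands in for it:

```python
# Sketch of S132's optional refinement: only the sound clip inside the
# conversion time section (a fixed window containing the initial time of the
# preset event information) is sent to speech-to-text, and the resulting
# text is stored alongside the full audio clip as an "abstract".

RATE = 1_000                     # samples per second (reduced for illustration)
FIXED_SECTION_S = 10             # fixed time section, e.g. ten seconds

def conversion_section(clip, event_offset_s, section_s=FIXED_SECTION_S, rate=RATE):
    """Pick the to-be-converted clip: a window of `section_s` seconds placed
    so that it contains the event's initial time (here centered on it)."""
    start = max(0, int((event_offset_s - section_s / 2) * rate))
    return clip[start: start + section_s * rate]

def transcribe(samples):
    # Placeholder for a real speech-to-text engine (none is specified in
    # the patent); here it just reports the window length.
    return f"<{len(samples) / RATE:.0f}s of speech>"

clip = [0] * (300 * RATE)                               # the stored five-minute clip
snippet = conversion_section(clip, event_offset_s=65)   # event 65 s into the clip
record = {"audio": clip, "abstract": transcribe(snippet)}
print(record["abstract"])                               # <10s of speech>
```

Only the ten-second window is transcribed, which is the workload reduction the text describes: the full clip is kept as audio, and the short transcript acts as its searchable abstract.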
Fig. 4 is a flow diagram of another audio processing method provided by an embodiment of the present application, on the basis of the technical solutions provided in the above embodiments. Optionally, as shown in Fig. 4, the method comprises:
S140: collecting environment information of the environment where the wearable device is located, and collecting position information of the wearable device.
An environment collection module integrated in the wearable device collects the environment information of the environment where the device is located.
Optionally, the environment information includes at least one of image data, infrared data, brightness data, and sound data. Correspondingly, the environment collection module includes a camera assembly for collecting image data, an infrared sensor for collecting infrared data, a light sensor for collecting brightness data, and a sound sensor for collecting sound data.
Image recognition may be performed on the collected image data of the surrounding environment to obtain its condition, including whether it is indoors or outdoors and which objects appear in the image. When the environment is dark, an infrared image of the environment can be obtained from the infrared data, from which the condition of the environment can likewise be determined. From the brightness data, it can be determined whether the environment is indoors or outdoors. From the sound sensor, it can be determined whether the user is indoors or outdoors and how noisy the environment is.
A positioning module for collecting position information is provided in the wearable device. The positioning module may be a GPS (Global Positioning System) module; correspondingly, the position information includes the GPS data of the wearable device.
S141: determining the usage scenario of the wearable device according to the environment information and the position information, and, if the usage scenario is a preset scene, triggering the wearable device to collect audio data.
The usage scenario of the wearable device may be indoors or outdoors, and the user's need for sound collection by the wearable device differs between scenarios. For example, if the user is wearing the wearable device while walking on a road, the need to collect and record sound is relatively low; whereas if the user is indoors, for example in a meeting room or a classroom, the need to collect and record sound is relatively high.
The usage scenario of the wearable device can be determined according to the environment information and the position information, and whether the wearable device needs to be triggered to collect audio data is judged according to the usage scenario. The preset scene includes scenes suitable for recording operations and may be preset by the system or by the user; for example, the preset scene may include a meeting room, a classroom, a conference venue, and the like.
The venue type corresponding to the current position information can be determined from preset map information, which may come from a map application such as Baidu Map or Amap; the venue type corresponding to the collected position information is looked up in the preset map information. The specific condition of the environment where the wearable device is located can then be determined from the environment information, and the corresponding usage scenario can be determined from the venue type. For example, if the venue type determined from the position information is a teaching building, the environment information can be used to determine whether the user is indoors or outdoors. If the user is outdoors, the user may merely be walking beside the teaching building, and there is no need to trigger the wearable device to collect audio data; if the environment information indicates that the user is indoors, the wearable device can be triggered to collect audio data.
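The gating described above can be sketched as follows. The venue lookup and the indoor/outdoor classifier are stand-ins (the patent leaves these to map services and to image/brightness recognition); only the combining logic is taken from the text: record when the venue type is a preset scene and the environment information indicates that the user is indoors.

```python
PRESET_SCENES = {"meeting room", "classroom", "conference venue", "teaching building"}

def venue_type(gps):
    # Stand-in for a preset-map lookup keyed by GPS coordinates.
    fake_map = {(30.0, 120.0): "teaching building", (30.1, 120.1): "road"}
    return fake_map.get(gps, "unknown")

def is_indoors(brightness_lux, image_label):
    # Stand-in classifier: dim light or an indoor image label means indoors.
    return brightness_lux < 500 or image_label == "indoor"

def should_collect_audio(gps, brightness_lux, image_label):
    # Trigger audio collection only in a preset scene, and only indoors.
    return venue_type(gps) in PRESET_SCENES and is_indoors(brightness_lux, image_label)

print(should_collect_audio((30.0, 120.0), 300, "indoor"))      # True: preset scene, indoors
print(should_collect_audio((30.0, 120.0), 20_000, "outdoor"))  # False: beside the building
print(should_collect_audio((30.1, 120.1), 300, "indoor"))      # False: a road is not a preset scene
```

Keeping the microphone off unless both conditions hold is what yields the power saving described at the end of this embodiment.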
S142: obtaining the audio data collected by the wearable device and, if preset event information is detected in the audio data, extracting audio clip data corresponding to the preset event information from the audio data.
S143: storing the audio clip data.
For specific implementations of the above operations, refer to the related descriptions above; details are not repeated here.
In this embodiment, the usage scenario of the wearable device is determined according to the environment information and the position information, and the audio data collected by the wearable device is obtained only if the usage scenario is a preset scene. Triggering audio collection only when the environment information and position information indicate that the usage scenario is suitable for recording prevents the microphone of the wearable device from staying on continuously and causing excessive power consumption.
Fig. 5 is a flow diagram of another audio processing method provided by an embodiment of the present application. On the basis of the technical solutions provided in the above embodiments, the operation of extracting audio clip data corresponding to the preset event information from the audio data if the preset event information is detected is optimized. Optionally, as shown in Fig. 5, the method comprises:
S150: obtaining audio data collected by the wearable device.
For a specific implementation, refer to the related description above; details are not repeated here.
S151: performing text recognition on the audio data to obtain the text information corresponding to the audio data.
If in S152, the text information include predetermined keyword, from the audio data extract with it is described pre-
If the relevant audio clip data of keyword;Wherein, the predetermined keyword includes at least the one of time, place and default word
It is a.
Wherein, the text information can be the text information that identification conversion is carried out to the audio data.
Judge in the text information of identification conversion whether to include predetermined keyword, indicates to examine if including predetermined keyword
It include predetermined keyword in the sound of survey, it is possible to acquisition audio clip data corresponding with the predetermined keyword.Institute
Stating predetermined keyword is the event for comparing concern for users, executing grade height or having time requirement, can be systemic presupposition
Or user setting.
For the specific implementation of collecting the audio clip data corresponding to the preset event information, refer to the related description above; details are not repeated here.
S153. Store the audio clip data.
For the specific implementation, refer to the related description above; details are not repeated here.
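The keyword-driven extraction of S151 and S152 can be sketched as below, under the assumption that the recognizer yields word-level timestamps; the keyword set, window lengths, and 16 kHz sample rate are illustrative only:

```python
# Hedged sketch of S151-S152: detect a preset keyword in a recognized
# transcript, then cut a clip around the hit from the raw sample buffer.
# Keyword set, window lengths, and sample rate are assumptions.

PRESET_KEYWORDS = {"tomorrow", "meeting", "airport"}  # time / place / preset words

def find_keyword_hits(transcript):
    """transcript: list of (start_s, end_s, word) tuples from a recognizer."""
    return [(s, e, w) for (s, e, w) in transcript if w.lower() in PRESET_KEYWORDS]

def extract_clip(samples, sample_rate, center_s, before_s=5.0, after_s=10.0):
    """Return the samples in a window around the keyword's start time."""
    start = max(0, round((center_s - before_s) * sample_rate))
    end = min(len(samples), round((center_s + after_s) * sample_rate))
    return samples[start:end]

transcript = [(0.0, 0.4, "see"), (0.5, 1.0, "you"), (1.1, 1.6, "tomorrow")]
samples = list(range(16000 * 20))        # 20 s of dummy audio at 16 kHz
hits = find_keyword_hits(transcript)
clip = extract_clip(samples, 16000, hits[0][0])
print(len(hits), len(clip))              # 1 177600
```

Because the window is clamped to the buffer, a keyword near the start of the recording simply yields a shorter leading portion.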
Optionally, before the audio clip data related to the preset keyword is extracted from the audio data, the method further includes the following operation:
performing voice recognition on the audio data to obtain sound feature information contained in the audio data together with the corresponding text information.
Correspondingly, if the text information includes a preset keyword, extracting the audio clip data related to the preset keyword from the audio data may be implemented as follows:
if the sound feature information matches preset feature information and the text information includes a preset keyword, collecting the audio clip data corresponding to the preset keyword.
Here, the sound feature information embodies the vocal characteristics of a user and distinguishes the user from others; illustratively, the sound feature information includes voiceprint information. Each person's voice has a corresponding unique voiceprint, which embodies that user's vocal characteristics and distinguishes them from others.
The preset feature information is the feature information of a preset user's voice, and may be a preset voiceprint. Illustratively, a user who wants to collect his or her own voice may record it through the wearable device and determine the corresponding feature information, which then serves as the preset feature information. Later, when sound is detected by the wearable device, if the sound feature information extracted from the detected sound matches the preset feature information, the collected sound is the user's own voice. It is further judged whether the recognized and converted text information includes a preset keyword; if so, the detected sound is the user's voice and the user's speech contains the preset keyword, so the audio clip data corresponding to the preset keyword may be collected.
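The combined condition above (voiceprint match plus keyword hit) can be illustrated with toy embedding vectors; the cosine-similarity matcher, threshold, and keyword set are assumptions standing in for a real voiceprint system:

```python
# Illustrative sketch: gate clip capture on BOTH a voiceprint match and a
# keyword hit, mirroring the combined condition in the text. Cosine
# similarity over toy 3-d vectors stands in for real voiceprint matching.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

ENROLLED = [0.9, 0.1, 0.3]        # preset characteristic information (assumed)
MATCH_THRESHOLD = 0.85            # assumed similarity threshold
PRESET_KEYWORDS = {"meeting", "deadline"}

def should_extract(voice_embedding, transcript_words):
    matches_owner = cosine(voice_embedding, ENROLLED) >= MATCH_THRESHOLD
    has_keyword = any(w.lower() in PRESET_KEYWORDS for w in transcript_words)
    return matches_owner and has_keyword

print(should_extract([0.88, 0.12, 0.31], ["about", "the", "meeting"]))  # True
print(should_extract([0.1, 0.9, 0.2], ["about", "the", "meeting"]))     # False
```

The second call fails on the voiceprint check even though the keyword is present, which is exactly the "collect only the chosen speaker" behavior described above.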
By recognizing the audio data, the embodiment of the present application extracts the sound feature information and the corresponding text information of the audio data, and collects the audio clip data corresponding to the preset keyword only if the sound feature information matches the preset feature information and the text information includes the preset keyword. This further improves the accuracy of sound collection: the user can choose to collect the voice of a specific person.
Fig. 6 is a structural block diagram of an audio processing apparatus provided by an embodiment of the present application; the apparatus can execute an audio processing method. As shown in Fig. 6, the apparatus includes:
a sound detection module 220, for obtaining the audio data collected by the wearable device;
a sound collection module 221, for extracting, if preset event information is detected in the audio data, the audio clip data corresponding to the preset event information from the audio data; and
a storage module 222, for storing the audio clip data.
The audio processing apparatus provided in this embodiment of the present application obtains the audio data collected by the wearable device; if preset event information is detected in the audio data, extracts the audio clip data corresponding to the preset event information from the audio data; and stores the audio clip data. By collecting outside sound through the wearable device, and collecting and recording the sound only when preset event information that can start recording is detected in it, the embodiment of the present application improves the efficiency of recording and reduces the redundancy in the audio information.
Optionally, the sound collection module is specifically configured to:
determine the starting moment at which the preset event information occurs, and, taking the starting moment as a reference time point, extract the audio clip data of a preset time period from the audio data.
Optionally, the sound collection module is specifically configured to:
taking the starting moment as the reference time point, extract from the audio data first audio clip data of a first time period that takes the reference time point as its end time, and extract second audio clip data of a second time period that takes the reference time point as its start time.
Correspondingly, the storage module is specifically configured to: store the first audio clip data and the second audio clip data.
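A minimal sketch of splitting the buffered audio into the first clip (ending at the reference time point) and the second clip (starting at it); the window lengths and sample rate are illustrative assumptions:

```python
# Sketch of the first/second clip split around the reference time point:
# the first clip ends at the reference point, the second begins at it.
# The 5 s / 10 s window lengths and 100 Hz rate are illustrative.

def split_clips(samples, sample_rate, ref_s, first_len_s=5, second_len_s=10):
    ref = int(ref_s * sample_rate)
    first = samples[max(0, ref - first_len_s * sample_rate):ref]
    second = samples[ref:ref + second_len_s * sample_rate]
    return first, second

samples = list(range(30 * 100))          # 30 s of dummy audio at 100 Hz
first, second = split_clips(samples, 100, 12)
print(len(first), len(second))           # 500 1000
```

Storing both clips preserves the context leading up to the preset event as well as the event itself.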
Optionally, the storage module is specifically configured to:
convert the audio clip data into corresponding text information, and store the text information.
Optionally, the storage module is specifically configured to:
determine a to-be-converted sound clip corresponding to a conversion time period in the audio clip data, and convert the to-be-converted sound clip into text information; wherein the conversion time period includes a set time period that takes the starting moment at which the preset event information occurs as its start time; and
store the text information and the audio clip data.
Optionally, the apparatus further includes:
an information collection module, for collecting the environment information around the wearable device and the location information of the wearable device before the audio data collected by the wearable device is obtained; and
a scene module, for determining the usage scenario of the wearable device according to the environment information and the location information, and triggering the wearable device to collect audio data if the usage scenario is a preset scene.
Optionally, the sound collection module specifically includes:
a text acquisition module, for performing text recognition on the audio data to obtain text information corresponding to the audio data; and
a collection module, for extracting, if the text information includes a preset keyword, the audio clip data related to the preset keyword from the audio data; wherein the preset keyword includes at least one of a time, a place, and a preset word.
Optionally, the apparatus further includes:
a sound feature module, for performing voice recognition on the audio data before the audio clip data related to the preset keyword is extracted from the audio data, to obtain the sound feature information contained in the audio data.
Correspondingly, the collection module is specifically configured to:
extract the audio clip data related to the preset keyword from the audio data if the sound feature information matches the preset feature information and the text information includes the preset keyword.
On the basis of the foregoing embodiments, this embodiment provides a wearable device. Fig. 7 is a structural schematic diagram of a wearable device provided by an embodiment of the present application, and Fig. 8 is a schematic pictorial diagram of a wearable device provided by an embodiment of the present application. As shown in Fig. 7 and Fig. 8, the wearable device includes: a memory 201, a processor (Central Processing Unit, CPU) 202, a display unit 203, a touch panel 204, a heart rate detection module 205, a range sensor 206, a camera 207, a bone-conduction speaker 208, a microphone 209, and a breathing light 210; these components communicate through one or more communication buses or signal lines 211.
It should be understood that the illustrated wearable device 200 is only one example of a wearable device, and that the wearable device 200 may have more or fewer components than shown in the drawings, may combine two or more components, or may have a different component configuration. The various components shown in the drawings may be implemented in hardware, software, or a combination of hardware and software, including one or more signal-processing and/or application-specific integrated circuits.
The wearable device for audio processing provided in this embodiment is described in detail below, taking smart glasses as an example of the wearable device.
The memory 201 can be accessed by the processor 202, and may include high-speed random access memory as well as non-volatile memory, for example one or more disk storage devices, flash memory devices, or other non-volatile solid-state storage components.
The display unit 203 may be used to display image data and the operating interface of the operating system. The display unit 203 is embedded in the frame of the smart glasses; internal transmission lines 211 are provided inside the frame and connected to the display unit 203.
The touch panel 204 is arranged on the outer side of at least one temple of the smart glasses to obtain touch data, and is connected to the processor 202 through the internal transmission lines 211. The touch panel 204 can detect the user's finger-slide and tap operations and transmit the detected data to the processor 202 for processing, so as to generate corresponding control instructions, which illustratively may be a move-left instruction, a move-right instruction, a move-up instruction, a move-down instruction, and so on. Illustratively, the display unit 203 may display the virtual image data transmitted by the processor 202, and the virtual image may change correspondingly according to the user operations detected by the touch panel 204. Specifically, the change may be screen switching: when a move-left or move-right instruction is detected, the previous or next virtual image picture is switched to accordingly. When the display unit 203 shows video playback information, the move-left instruction may rewind the played content and the move-right instruction may fast-forward it. When the display unit 203 shows editable text content, the move-left, move-right, move-up, and move-down instructions may be displacement operations on the cursor, i.e. the cursor position moves according to the user's touch operation on the touch panel. When the content shown by the display unit 203 is a game animation picture, the move-left, move-right, move-up, and move-down instructions may control an object in the game; in an aircraft game, for example, they may respectively control the flying direction of the aircraft. When the display unit 203 shows video pictures of different channels, the move-left, move-right, move-up, and move-down instructions may switch among the channels, with the move-up and move-down instructions switching to preset channels (such as the channels the user commonly uses). When the display unit 203 shows static pictures, the move-left, move-right, move-up, and move-down instructions may switch among the pictures: the move-left instruction may switch to the previous picture, the move-right instruction to the next picture, the move-up instruction to the previous atlas, and the move-down instruction to the next atlas. The touch panel 204 may also be used to control the display switch of the display unit 203: illustratively, when the touch area of the touch panel 204 is long-pressed, the display unit 203 powers on and shows a graphic interface, and when the touch area of the touch panel 204 is long-pressed again, the display unit 203 powers off. After the display unit 203 is powered on, the brightness or resolution of the image shown in the display unit 203 may be adjusted through slide-up and slide-down operations on the touch panel 204.
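The gesture-to-instruction mapping described above can be sketched as a dispatch table; the gesture and context names are hypothetical labels, not identifiers from the embodiment:

```python
# Toy dispatch table mapping touch-panel gestures, in a given display
# context, to the control actions described in the text. All names here
# are illustrative assumptions.

GESTURE_ACTIONS = {
    ("swipe_left", "image_viewer"): "previous_picture",
    ("swipe_right", "image_viewer"): "next_picture",
    ("swipe_left", "video_player"): "rewind",
    ("swipe_right", "video_player"): "fast_forward",
    ("swipe_up", "channel_list"): "preset_channel",
}

def handle_gesture(gesture: str, context: str) -> str:
    """Resolve a (gesture, context) pair to an action; unknown pairs are ignored."""
    return GESTURE_ACTIONS.get((gesture, context), "ignored")

print(handle_gesture("swipe_right", "video_player"))  # fast_forward
print(handle_gesture("swipe_up", "video_player"))     # ignored
```

Keeping the context in the key is what lets the same swipe mean "fast-forward" during video playback but "next picture" in an image viewer.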
The heart rate detection module 205 is used to measure the user's heart rate data, heart rate being the number of beats per minute; the heart rate detection module 205 is arranged on the inner side of a temple. Specifically, the heart rate detection module 205 may obtain human electrocardiogram data using dry electrodes, in the manner of electrical-pulse measurement, and determine the heart rate from the peak amplitude in the ECG data; alternatively, the heart rate detection module 205 may be formed from a light emitter and a light receiver that measure heart rate photoelectrically, in which case the heart rate detection module 205 is arranged at the bottom of a temple, against the earlobe of the human auricle. After collecting heart rate data, the heart rate detection module 205 sends it to the processor 202 for data processing to obtain the wearer's current heart rate value. In one embodiment, after determining the user's heart rate value, the processor 202 may display the heart rate value in real time on the display unit 203; optionally, the processor 202 may trigger an alarm when it determines the heart rate value to be low (e.g. below 50) or high (e.g. above 100), and at the same time send the heart rate value and/or the generated alarm information to a server through a communication module.
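The alarm logic can be sketched directly from the example thresholds in the text (below 50 or above 100 beats per minute); the return labels are illustrative:

```python
# Sketch of the heart-rate alarm check using the example thresholds
# given in the text; the result labels are illustrative assumptions.

LOW_BPM, HIGH_BPM = 50, 100

def check_heart_rate(bpm: int) -> str:
    """Classify a heart-rate value against the low/high alarm thresholds."""
    if bpm < LOW_BPM:
        return "alarm_low"
    if bpm > HIGH_BPM:
        return "alarm_high"
    return "normal"

print(check_heart_rate(45), check_heart_rate(72), check_heart_rate(130))
# alarm_low normal alarm_high
```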
The range sensor 206 may be arranged on the frame and is used to sense the distance from the face to the frame; the range sensor 206 may be implemented using the infrared sensing principle. Specifically, the range sensor 206 sends the collected distance data to the processor 202, and the processor 202 controls the brightness of the display unit 203 according to that distance. Illustratively, when the processor 202 determines that the distance collected by the range sensor 206 is less than 5 centimeters, it correspondingly controls the display unit 203 to be in a lit state; when it determines that the range sensor 206 detects no object nearby, it correspondingly controls the display unit 203 to be in an off state.
In addition, other types of sensors may be arranged on the frame of the smart glasses, including at least one of the following: an acceleration sensor, a gyroscope sensor, and a pressure sensor, for detecting the user shaking, touching, or pressing the smart glasses and sending the sensing data to the processor 202, so as to determine whether to enable the camera 207 for image collection. As an example, Fig. 7 shows an acceleration sensor 212; it should be understood that this is not a limitation on the present embodiment.
The breathing light 210 may be arranged at the edge of the frame; when the display unit 203 stops displaying a picture, the breathing light 210 may be lit under the control of the processor 202 with a gradually brightening and dimming effect.
The camera 207 may be a front photographing module arranged on the upper frame of the eyeglass frame to collect image data in front of the user, a rear photographing module that collects information about the user's eyeballs, or a combination of the two. Specifically, when the camera 207 collects a forward image, it sends the collected image to the processor 202 for recognition and processing, and triggers a corresponding trigger event according to the recognition result. Illustratively, when the user wears the smart glasses at home, the collected forward image is recognized; if a furniture article is recognized, whether a corresponding control event exists is queried, and if one exists, the control interface corresponding to that control event is shown on the display unit 203 so that the user can control the corresponding furniture article through the touch panel 204, the furniture article and the smart glasses being networked through Bluetooth or a wireless ad-hoc network. When the user wears the smart glasses outside the home, a target recognition mode may be turned on accordingly. The target recognition mode can be used to identify specific people: the camera 207 sends the collected image to the processor 202 for face recognition processing, and if a preset face is recognized, a sound announcement may be made accordingly through the loudspeaker integrated on the smart glasses. The target recognition mode can also be used to identify different plants: for example, according to a touch operation on the touch panel 204, the processor 202 records the current image collected by the camera 207 and sends it through the communication module to a server for recognition; the server identifies the plant in the collected image, feeds back the relevant plant name and introduction to the smart glasses, and the fed-back data is shown on the display unit 203.
The camera 207 may also collect images of the user's eye, such as the eyeball, and generate different control instructions by recognizing the rotation of the eyeball. Illustratively, an upward eyeball rotation generates a move-up control instruction, a downward rotation generates a move-down control instruction, a leftward rotation generates a move-left control instruction, and a rightward rotation generates a move-right control instruction. Correspondingly, the display unit 203 may display the virtual image data transmitted by the processor 202, and the virtual image may change according to the control instructions generated from the eyeball-movement changes detected by the camera 207. Specifically, the change may be screen switching: when a move-left or move-right control instruction is detected, the previous or next virtual image picture is switched to accordingly. When the display unit 203 shows video playback information, the move-left control instruction may rewind the played content and the move-right control instruction may fast-forward it. When the display unit 203 shows editable text content, the move-left, move-right, move-up, and move-down control instructions may be displacement operations on the cursor, i.e. the cursor position moves according to the user's touch operation on the touch panel. When the content shown by the display unit 203 is a game animation picture, the move-left, move-right, move-up, and move-down control instructions may control an object in the game; in an aircraft game, for example, they may respectively control the flying direction of the aircraft. When the display unit 203 shows video pictures of different channels, the move-left, move-right, move-up, and move-down control instructions may switch among the channels, with the move-up and move-down control instructions switching to preset channels (such as the channels the user commonly uses). When the display unit 203 shows static pictures, the move-left, move-right, move-up, and move-down control instructions may switch among the pictures: the move-left control instruction may switch to the previous picture, the move-right control instruction to the next picture, the move-up control instruction to the previous atlas, and the move-down control instruction to the next atlas.
The bone-conduction speaker 208 is arranged on the inner wall side of at least one temple and is used to convert the received audio signal sent by the processor 202 into a vibration signal. The bone-conduction speaker 208 transmits sound to the human inner ear through the skull: the electrical audio signal is changed into a vibration signal that is transmitted through the skull to the cochlea and then perceived via the auditory nerve. Using the bone-conduction speaker 208 as the sounding device reduces the hardware thickness and weight, produces no electromagnetic radiation and is unaffected by electromagnetic radiation, and has the advantages of noise resistance, waterproofing, and leaving the ears free.
The microphone 209 may be arranged on the lower frame of the eyeglass frame and is used to collect external sound (from the user or the environment) and transmit it to the processor 202 for processing. Illustratively, the microphone 209 collects the sound made by the user and voiceprint recognition is performed by the processor 202; if the voiceprint is recognized as that of an authenticated user, subsequent voice control can be accepted accordingly. Specifically, the user may utter speech, the microphone 209 sends the collected speech to the processor 202 for recognition, and a corresponding control instruction, such as "power on", "power off", "increase display brightness", or "decrease display brightness", is generated according to the recognition result; the processor 202 subsequently executes the corresponding control processing according to the generated control instruction.
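The microphone's voice-control flow can be sketched as follows, with voiceprint authentication reduced to a boolean stub and the command phrases following the examples in the text; the action labels are assumptions:

```python
# Sketch of the microphone flow: authenticate the speaker's voiceprint
# first, then map recognized phrases to control instructions. The phrase
# set follows the examples in the text; authentication is a stub and the
# action labels are illustrative assumptions.

COMMANDS = {
    "power on": "boot",
    "power off": "shutdown",
    "increase display brightness": "brightness_up",
    "decrease display brightness": "brightness_down",
}

def process_utterance(is_enrolled_voice: bool, phrase: str) -> str:
    if not is_enrolled_voice:
        return "rejected"          # only an authenticated user may issue commands
    return COMMANDS.get(phrase.lower(), "unknown_command")

print(process_utterance(True, "power on"))       # boot
print(process_utterance(False, "power off"))     # rejected
```

Rejecting unauthenticated voices before the phrase lookup mirrors the order of operations in the text: voiceprint check first, command recognition second.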
The audio processing apparatus and the wearable device provided in the above embodiments can execute the audio processing method for a wearable device provided by any embodiment of the present invention, and possess the functional modules and beneficial effects corresponding to executing this method. For technical details not described in detail in the above embodiments, refer to the audio processing method for a wearable device provided by any embodiment of the present invention.
An embodiment of the present application further provides a storage medium containing wearable-device-executable instructions, the wearable-device-executable instructions being used, when executed by a processor of a wearable device, to execute an audio processing method, the method comprising:
obtaining audio data collected by the wearable device;
if preset event information is detected in the audio data, extracting audio clip data corresponding to the preset event information from the audio data; and
storing the audio clip data.
In a possible embodiment, extracting the audio clip data corresponding to the preset event information from the audio data includes:
determining the starting moment at which the preset event information occurs, and, taking the starting moment as a reference time point, extracting the audio clip data of a preset time period from the audio data.
In a possible embodiment, taking the starting moment as the reference time point, extracting the audio clip data of the preset time period from the audio data includes:
extracting from the audio data first audio clip data of a first time period that takes the reference time point as its end time, and extracting second audio clip data of a second time period that takes the reference time point as its start time;
correspondingly, storing the audio clip data includes:
storing the first audio clip data and the second audio clip data.
In a possible embodiment, storing the audio clip data includes:
converting the audio clip data into corresponding text information, and storing the text information.
In a possible embodiment, converting the audio clip data into corresponding text information and storing the text information comprises:
determining a to-be-converted sound clip corresponding to a conversion time period in the audio clip data, and converting the to-be-converted sound clip into text information; wherein the conversion time period includes a set time period that takes the starting moment at which the preset event information occurs as its start time; and
storing the text information and the audio clip data.
In a possible embodiment, before the audio data collected by the wearable device is obtained, the method further includes:
collecting the environment information around the wearable device, and the location information of the wearable device; and
determining the usage scenario of the wearable device according to the environment information and the location information, and triggering the wearable device to collect audio data if the usage scenario is a preset scene.
In a possible embodiment, if preset event information is detected in the audio data, extracting the audio clip data corresponding to the preset event information from the audio data comprises:
performing text recognition on the audio data to obtain text information corresponding to the audio data; and
if the text information includes a preset keyword, extracting the audio clip data related to the preset keyword from the audio data; wherein the preset keyword includes at least one of a time, a place, and a preset word.
In a possible embodiment, before the audio clip data related to the preset keyword is extracted from the audio data, the method further includes:
performing voice recognition on the audio data to obtain the sound feature information contained in the audio data;
correspondingly, if the text information includes a preset keyword, extracting the audio clip data related to the preset keyword from the audio data comprises:
extracting the audio clip data related to the preset keyword from the audio data if the sound feature information matches the preset feature information and the text information includes the preset keyword.
Storage medium --- any of various types of memory devices or storage devices. The term "storage medium" is intended to include: an installation medium, such as a CD-ROM, floppy disk, or tape device; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory, such as flash memory or magnetic media (e.g. a hard disk or optical storage); registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a different, second computer system connected to the first computer system through a network (such as the Internet). The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (e.g. in different computer systems connected through a network). The storage medium may store program instructions (e.g. embodied as computer programs) executable by one or more processors.
Of course, in the storage medium containing computer-executable instructions provided by the embodiment of the present application, the computer-executable instructions are not limited to the audio processing operations described above, and can also perform relevant operations in the audio processing method provided by any embodiment of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the present invention is not limited to the specific embodiments described here, and that various apparent changes, readjustments, and substitutions can be made by those skilled in the art without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, the present invention is not limited to the above embodiments only, and may also include more other equivalent embodiments without departing from the inventive concept; the scope of the present invention is determined by the scope of the appended claims.
Claims (11)
1. An audio processing method, characterized by comprising:
obtaining audio data collected by a wearable device;
if preset event information is detected in the audio data, extracting audio clip data corresponding to the preset event information from the audio data; and
storing the audio clip data.
2. The method according to claim 1, characterized in that extracting the audio clip data corresponding to the preset event information from the audio data comprises:
determining the starting moment at which the preset event information occurs, and, taking the starting moment as a reference time point, extracting the audio clip data of a preset time period from the audio data.
3. The method according to claim 2, characterized in that, taking the starting moment as the reference time point, extracting the audio clip data of the preset time period from the audio data comprises:
extracting from the audio data first audio clip data of a first time period that takes the reference time point as its end time, and extracting second audio clip data of a second time period that takes the reference time point as its start time;
correspondingly, storing the audio clip data comprises:
storing the first audio clip data and the second audio clip data.
4. The method according to claim 1, characterized in that storing the audio clip data comprises:
converting the audio clip data into corresponding text information, and storing the text information.
5. The method according to claim 4, characterized in that converting the audio clip data into corresponding text information and storing the text information comprises:
determining a to-be-converted sound clip corresponding to a conversion time period in the audio clip data, and converting the to-be-converted sound clip into text information; wherein the conversion time period includes a set time period that takes the starting moment at which the preset event information occurs as its start time; and
storing the text information and the audio clip data.
6. The method according to any one of claims 1 to 5, wherein before acquiring the audio data collected by the wearable device, the method further comprises:
collecting environment information of the wearable device, and acquiring location information of the wearable device; and
determining a usage scenario of the wearable device according to the environment information and the location information, and if the usage scenario is a preset scenario, triggering the wearable device to collect audio data.
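The trigger condition of claim 6 (collect audio only when environment plus location map to a preset scenario) can be illustrated with a minimal rule table. The scenario labels, mapping rules, and function names below are invented for the sketch and are not specified by the patent:

```python
# Hypothetical preset scenarios under which audio collection is triggered.
PRESET_SCENARIOS = {"meeting", "lecture"}

def usage_scenario(environment, location):
    """Combine environment information and location information
    into a single usage-scenario label (hypothetical rules)."""
    if environment == "speech" and location == "office":
        return "meeting"
    if environment == "speech" and location == "classroom":
        return "lecture"
    return "other"

def should_record(environment, location):
    """Trigger audio collection only when the determined
    usage scenario is one of the preset scenarios."""
    return usage_scenario(environment, location) in PRESET_SCENARIOS
```

In practice the mapping would come from trained classifiers over microphone and positioning data rather than a literal lookup.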
7. The method according to any one of claims 1 to 5, wherein if preset event information is detected in the audio data, extracting audio clip data corresponding to the preset event information from the audio data comprises:
performing text recognition on the audio data to obtain text information corresponding to the audio data; and
if the text information contains a preset keyword, extracting audio clip data related to the preset keyword from the audio data, wherein the preset keyword comprises at least one of a time, a place, and a preset word.
8. The method according to claim 7, wherein before extracting the audio clip data related to the preset keyword from the audio data, the method further comprises:
performing voice recognition on the audio data to obtain sound feature information contained in the audio data;
correspondingly, if the text information contains the preset keyword, extracting the audio clip data related to the preset keyword from the audio data comprises:
if the sound feature information matches preset feature information and the text information contains the preset keyword, extracting the audio clip data related to the preset keyword from the audio data.
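Claim 8 gates extraction on two conditions: the sound feature information must match the preset feature information (e.g. a voiceprint check) and the recognized text must contain a preset keyword. A minimal sketch, with an invented tolerance-based stand-in for real voiceprint matching and a hypothetical keyword set:

```python
# Hypothetical preset keywords (a time, a place, and a preset word).
PRESET_KEYWORDS = {"tomorrow", "3 pm", "meeting room"}

def feature_match(sound_features, preset_features, tol=0.1):
    """Crude stand-in for voiceprint matching: every feature dimension
    must lie within a tolerance of the preset feature information."""
    return all(abs(a - b) <= tol for a, b in zip(sound_features, preset_features))

def matched_keywords(text, sound_features, preset_features):
    """Extract only when BOTH gates pass: the sound features match the
    preset features AND the text contains a preset keyword."""
    if not feature_match(sound_features, preset_features):
        return []
    return [kw for kw in PRESET_KEYWORDS if kw in text]
```

A real system would compute embeddings (e.g. MFCC-based speaker vectors) instead of comparing raw numbers, but the two-gate structure is the same.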
9. An audio processing apparatus, comprising:
a sound detection module, configured to acquire audio data collected by a wearable device;
a sound acquisition module, configured to, if preset event information is detected in the audio data, extract audio clip data corresponding to the preset event information from the audio data; and
a storage module, configured to store the audio clip data.
10. A wearable device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the audio processing method according to any one of claims 1 to 8.
11. A storage medium containing instructions executable by a wearable device, wherein the instructions, when executed by a processor of the wearable device, are used to perform the audio processing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811001212.6A CN109257490B (en) | 2018-08-30 | 2018-08-30 | Audio processing method and device, wearable device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109257490A true CN109257490A (en) | 2019-01-22 |
CN109257490B CN109257490B (en) | 2021-07-09 |
Family
ID=65048964
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811001212.6A Active CN109257490B (en) | 2018-08-30 | 2018-08-30 | Audio processing method and device, wearable device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109257490B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105336329A (en) * | 2015-09-25 | 2016-02-17 | 联想(北京)有限公司 | Speech processing method and system |
CN105657129A (en) * | 2016-01-25 | 2016-06-08 | 百度在线网络技术(北京)有限公司 | Call information obtaining method and device |
CN106024009A (en) * | 2016-04-29 | 2016-10-12 | 北京小米移动软件有限公司 | Audio processing method and device |
CN106448702A (en) * | 2016-09-14 | 2017-02-22 | 努比亚技术有限公司 | Recording data processing device and method, and mobile terminal |
US20180090145A1 (en) * | 2016-09-29 | 2018-03-29 | Toyota Jidosha Kabushiki Kaisha | Voice Interaction Apparatus and Voice Interaction Method |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111128243A (en) * | 2019-12-25 | 2020-05-08 | 苏州科达科技股份有限公司 | Noise data acquisition method, device and storage medium |
CN111564165A (en) * | 2020-04-27 | 2020-08-21 | 北京三快在线科技有限公司 | Data storage method, device, equipment and storage medium |
CN111564165B (en) * | 2020-04-27 | 2021-09-28 | 北京三快在线科技有限公司 | Data storage method, device, equipment and storage medium |
CN111798872A (en) * | 2020-06-30 | 2020-10-20 | 联想(北京)有限公司 | Processing method and device for online interaction platform and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN109257490B (en) | 2021-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20180151036A1 (en) | Method for producing haptic signal and electronic device supporting the same | |
US20180124497A1 (en) | Augmented Reality Sharing for Wearable Devices | |
US20130177296A1 (en) | Generating metadata for user experiences | |
CN109254659A (en) | Control method, device, storage medium and the wearable device of wearable device | |
CN109145847B (en) | Identification method and device, wearable device and storage medium | |
CN109259724B (en) | Eye monitoring method and device, storage medium and wearable device | |
CN109088815A (en) | Message prompt method, device, storage medium, mobile terminal and wearable device | |
CN109116991A (en) | Control method, device, storage medium and the wearable device of wearable device | |
CN109032384A (en) | Music control method, device and storage medium and wearable device | |
CN109358744A (en) | Information sharing method, device, storage medium and wearable device | |
TW201923758A (en) | Audio activity tracking and summaries | |
WO2019105238A1 (en) | Method and terminal for speech signal reconstruction and computer storage medium | |
CN109059929A (en) | Air navigation aid, device, wearable device and storage medium | |
CN109224432B (en) | Entertainment application control method and device, storage medium and wearable device | |
CN109257490A (en) | Audio-frequency processing method, device, wearable device and storage medium | |
CN109119080A (en) | Sound identification method, device, wearable device and storage medium | |
CN109255064A (en) | Information search method, device, intelligent glasses and storage medium | |
CN109061903B (en) | Data display method and device, intelligent glasses and storage medium | |
CN109189225A (en) | Display interface method of adjustment, device, wearable device and storage medium | |
CN111683329B (en) | Microphone detection method, device, terminal and storage medium | |
CN109241900A (en) | Control method, device, storage medium and the wearable device of wearable device | |
EP4097992A1 (en) | Use of a camera for hearing device algorithm training | |
CN109067627A (en) | Appliances equipment control method, device, wearable device and storage medium | |
CN109144264A (en) | Display interface method of adjustment, device, wearable device and storage medium | |
CN109255314A (en) | Information cuing method, device, intelligent glasses and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||