CN109119080A - Sound identification method, device, wearable device and storage medium - Google Patents
- Publication number
- CN109119080A (application CN201811001599.5A)
- Authority
- CN
- China
- Prior art keywords
- related information
- wearable device
- voice data
- database
- recognition result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B1/00—Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
- H04B1/38—Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
- H04B1/3827—Portable transceivers
- H04B1/385—Transceivers carried on the body, e.g. in helmets
- H04B2001/3872—Transceivers carried on the body, e.g. in helmets with extendable microphones or earphones
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments of the present application disclose a sound identification method, an apparatus, a wearable device and a storage medium. When a voice recognition command is detected, the method obtains sound data collected by a microphone arranged on the wearable device; the sound data is recognized and, if the recognition result meets a preset condition, associated information recorded in a database that corresponds to the recognition result is obtained; the interaction mode of the wearable device is then determined and the associated information is delivered through that mode. The method can conveniently supply the user with needed information and extends the functionality of the wearable device.
Description
Technical field
Embodiments of the present application relate to the field of wearable devices, and in particular to a sound identification method, an apparatus, a wearable device and a storage medium.
Background technique
With the development of computing devices and the progress of Internet technology, interactions between users and smart devices have become increasingly frequent: users watch films and television series on smartphones, watch television programmes on smart televisions, and check text messages, physiological parameters and the like on smartwatches.
In the prior art, the recognition of sound data has shortcomings. For example, the user needs to open a smartphone and perform voice recognition through dedicated speech-recognition software; such a single-purpose function cannot satisfy the deeper needs of users.
Summary of the invention
The present invention provides a sound identification method, an apparatus, a wearable device and storage media, which can conveniently supply the user with needed information and extend the functionality of the wearable device.
In a first aspect, an embodiment of the present application provides a sound identification method, comprising:
when a voice recognition command is detected, obtaining sound data collected by a microphone, the microphone being arranged on a wearable device;
recognizing the sound data and, if the recognition result meets a preset condition, obtaining associated information recorded in a database that corresponds to the recognition result;
determining the interaction mode of the wearable device, and delivering the associated information through that interaction mode.
In a second aspect, an embodiment of the present application further provides a voice recognition apparatus, comprising:
a sound data acquisition module, configured to obtain sound data collected by a microphone when a voice recognition command is detected, the microphone being arranged on a wearable device;
an associated-information extraction module, configured to recognize the sound data and, if the recognition result meets a preset condition, obtain associated information recorded in a database that corresponds to the recognition result;
an associated-information interaction module, configured to determine the interaction mode of the wearable device and deliver the associated information through that interaction mode.
In a third aspect, an embodiment of the present application further provides a wearable device, comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor, the processor implementing the sound identification method described in the embodiments of the present application when executing the computer program.
In a fourth aspect, an embodiment of the present application further provides a storage medium containing wearable-device-executable instructions which, when executed by a processor of a wearable device, perform the sound identification method described in the embodiments of the present application.
In the present solution, when a voice recognition command is detected, sound data collected by a microphone arranged on a wearable device is obtained; the sound data is recognized and, if the recognition result meets a preset condition, associated information recorded in a database that corresponds to the recognition result is obtained; the interaction mode of the wearable device is determined and the associated information is delivered through that mode. The solution can conveniently supply the user with needed information and extends the functionality of the wearable device.
Detailed description of the invention
Other features, objects and advantages of the present invention will become more apparent from the following detailed description of non-restrictive embodiments, read in conjunction with the accompanying drawings:
Fig. 1 is a kind of flow chart of sound identification method provided by the embodiments of the present application;
Fig. 2 is the flow chart of another sound identification method provided by the embodiments of the present application;
Fig. 3 is the flow chart of another sound identification method provided by the embodiments of the present application;
Fig. 4 is the flow chart of another sound identification method provided by the embodiments of the present application;
Fig. 5 is a kind of structural block diagram of voice recognition device provided by the embodiments of the present application;
Fig. 6 is a kind of structural schematic diagram of wearable device provided by the embodiments of the present application;
Fig. 7 is a schematic illustration of a wearable device provided by an embodiment of the present application.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve to explain the present invention rather than to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Fig. 1 is a flow chart of a sound identification method provided by an embodiment of the present application. The method is applicable to identifying sounds in the surroundings of the wearer of a wearable device, and can be executed by the wearable device provided by the embodiments of the present application; the voice recognition apparatus of the wearable device can be implemented in software and/or hardware. As shown in Fig. 1, the scheme provided by this embodiment is as follows:
Step S101: when a voice recognition command is detected, sound data collected by a microphone is obtained, the microphone being arranged on the wearable device.
Here, the voice recognition command is an instruction to enable the sound identification function; after the command is detected, the microphone is switched on accordingly and the sound data it collects is obtained. In one embodiment, the voice recognition command can be generated by a manual touch from the user, for example on a touch panel integrated into the wearable device, or triggered after a specific movement is detected, for example a head movement satisfying a preset condition (such as nodding or shaking the head twice in succession) detected by an acceleration sensor and a gyroscope integrated into the wearable device. The microphone collects sound data from the wearer's environment, such as the voice of the other party during a conversation with another user, or animal sounds in a natural environment.
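The gesture trigger described above can be sketched as a simple threshold detector over an IMU pitch trace. This is an illustrative sketch only: the function name, thresholds and sample rate are assumptions, and a real wearable would read its own IMU driver and use a tuned gesture model.

```python
# Illustrative sketch: triggering a voice recognition command from a head
# movement. Thresholds and window sizes are hypothetical placeholders.

def detect_double_nod(pitch_samples, threshold=0.5, max_gap=15):
    """Return True if the pitch trace contains two nods close together.

    A 'nod' is counted each time the pitch angle crosses the threshold
    upwards; two crossings within max_gap samples count as a double nod.
    """
    crossings = []
    for i in range(1, len(pitch_samples)):
        if pitch_samples[i - 1] < threshold <= pitch_samples[i]:
            crossings.append(i)
    for a, b in zip(crossings, crossings[1:]):
        if b - a <= max_gap:
            return True
    return False

# Two pitch peaks within a short window trigger recognition.
trace = [0.0, 0.6, 0.1, 0.0, 0.7, 0.1, 0.0]
print(detect_double_nod(trace))  # True
```

In practice the same structure would run continuously on the sensor stream, and a positive detection would switch on the microphone as in step S101.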
Step S102: the sound data is recognized and, if the recognition result meets a preset condition, associated information recorded in a database that corresponds to the recognition result is obtained.
In one embodiment, the sound data may be the voices of other people around the wearer, as collected by the microphone. In this case, recognizing the sound data and obtaining the associated information comprises: performing voiceprint recognition matching on the sound data and, if the matching succeeds, obtaining the user information recorded in the database that corresponds to the matched voiceprint. Voiceprint recognition is a form of biometric identification: different people's voices correspond to different voiceprint maps, so comparing voiceprint maps can uniquely determine the source of the collected sound data. Optionally, when the collected sound data contains the voices of at least two users, the voiceprint map of each voice is determined and matched separately, and for each successful match the corresponding user information recorded in the database is obtained. The database stores in advance the voiceprint-map features of different users for matching, together with the user information corresponding to each voiceprint; illustratively, the user information may be at least one of the user's name, position, contact details and profile. Optionally, before the user information corresponding to a matched voiceprint is obtained, the method further comprises adding the identified voiceprint features and corresponding user information to the database. Specifically, the wearable device can open a pre-storage mode in which collected sound data is recognized and the corresponding voiceprint map is stored; at the same time, the associated user information can be obtained from instructions entered by the user, or automatically, for example by photographing the business card of the user corresponding to the voiceprint and recognizing the picture to obtain the contact details, company name, position and so on, which are then stored in association with the voiceprint.
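The enroll-then-match flow above can be sketched as a small in-memory database keyed by voiceprint embeddings. The fixed-size embedding vectors and the 0.8 similarity threshold are assumptions for illustration; a real system would derive embeddings from a trained speaker model and tune the threshold empirically.

```python
# Minimal sketch of voiceprint enrollment and lookup: each user's voiceprint
# is stored as a feature vector together with associated information.
import math

voiceprint_db = {}  # user_id -> (embedding, info)

def enroll(user_id, embedding, info):
    voiceprint_db[user_id] = (embedding, info)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match(embedding, threshold=0.8):
    """Return the associated info of the best-matching voiceprint, or None."""
    best_id, best_score = None, threshold
    for user_id, (ref, info) in voiceprint_db.items():
        score = cosine(embedding, ref)
        if score > best_score:
            best_id, best_score = user_id, score
    return voiceprint_db[best_id][1] if best_id else None

enroll("zhangsan", [0.9, 0.1, 0.3],
       {"name": "Zhang San", "position": "project manager"})
print(match([0.88, 0.12, 0.31]))  # Zhang San's record
```

When the sound data contains several voices, the same lookup would simply run once per separated voiceprint, as described above.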
In one embodiment, recognizing the sound data may also consist of feature extraction and feature comparison: if the database records an entry with the corresponding features, the identification information recorded in the database for the sound data is obtained; if not, the feature information corresponding to the sound data can be sent to a cloud server for identification. Illustratively, the identification information may be the name of the animal whose call was identified in the collected sound data.
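The local-then-cloud lookup just described can be sketched as follows. The feature keys, labels and `query_cloud` callback are placeholders; the text does not specify the actual feature representation or the transport to the cloud server.

```python
# Sketch: try the local feature database first; fall back to a cloud query
# (represented here by an injected callback) when no local entry matches.

LOCAL_DB = {("chirp", "high"): "sparrow", ("chirp", "low"): "crow"}

def identify(feature, query_cloud=None):
    label = LOCAL_DB.get(feature)
    if label is not None:
        return label, "local"
    if query_cloud is not None:
        return query_cloud(feature), "cloud"
    return None, "unknown"

print(identify(("chirp", "high")))            # ('sparrow', 'local')
print(identify(("hoot", "low"),
               query_cloud=lambda f: "owl"))  # ('owl', 'cloud')
```

Keeping the cloud call behind a callback mirrors the document's design: the wearable answers locally when it can, and only ships features off-device when the local database has no match.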
Step S103: the interaction mode of the wearable device is determined, and the associated information is delivered through that interaction mode.
Here, the interaction mode is the way in which the wearable device interacts with the user. In one embodiment, the wearable device may be a pair of smart glasses. Determining the interaction mode then comprises determining whether the display screen of the smart glasses is lit: if the display screen is lit, the associated information is shown on the display screen, which is integrated into the frame of the smart glasses. Specifically, the frame integrates a display capable of showing text, images and video; when the display-control interface reports that the screen is lit, the associated information is displayed accordingly, for example as text or pictures. In another embodiment, if the display screen is not lit, the associated information is played through a bone-conduction speaker integrated into the temple of the smart glasses; illustratively, the text of the associated information can be converted to voice data and played through the bone-conduction speaker to notify the user. Optionally, after the associated information has been played through the bone-conduction speaker, a network communication connection with a smartphone is established and the associated information is sent to the smartphone for display. The network connection may be a Bluetooth connection: the smart glasses send the associated information through a Bluetooth module to the connected smartphone, which parses the received data and shows the resulting text on its display screen for the user to check.
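The dispatch logic of step S103 can be sketched as a single function that routes the associated information to the lit display, or else to the bone-conduction speaker with an optional Bluetooth forward. The device-facing callbacks (`show`, `speak`, `send_to_phone`) are hypothetical placeholders for the real display, text-to-speech and Bluetooth APIs.

```python
# Sketch of the interaction dispatch: display if lit, otherwise speaker,
# optionally forwarding to a paired phone afterwards.

def deliver(info, display_lit, show, speak, send_to_phone=None):
    if display_lit:
        show(info)           # render as text/picture on the in-frame display
        return "display"
    speak(info)              # text-to-speech via the bone-conduction speaker
    if send_to_phone is not None:
        send_to_phone(info)  # e.g. over an established Bluetooth link
    return "speaker"

log = []
mode = deliver("Zhang San, project manager", display_lit=False,
               show=log.append, speak=log.append, send_to_phone=log.append)
print(mode, log)
```

Centralising the choice in one function matches the document's point that the device should pick the most reasonable channel at the moment of delivery rather than always using the same one.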
As can be seen from the above, the wearable device can automatically identify sounds in the environment and provide the corresponding associated information, while reasonably conveying that information according to the current interaction mode between the device and the user, so that the associated information reaches the user promptly and conveniently.
Fig. 2 is a flow chart of another sound identification method provided by an embodiment of the present application. Optionally, recognizing the sound data comprises performing voiceprint recognition matching on the sound data; correspondingly, if the recognition result meets the preset condition, obtaining the associated information recorded in the database comprises: if the voiceprint matching succeeds, obtaining the user information recorded in the database that corresponds to the matched voiceprint. As shown in Fig. 2, the technical solution is as follows:
Step S201: when a voice recognition command is detected, sound data collected by a microphone is obtained, the microphone being arranged on the wearable device.
Step S202: the sound data is recognized to judge whether it contains a human voice; if so, step S203 is executed, otherwise the process ends.
In one embodiment, after the sound data is obtained it is first judged whether it contains a human voice. Specifically, the voiceprint maps of the one or more sounds contained in the sound data can be obtained through voiceprint recognition, and the presence of a human voice judged from those maps; existing judgment methods from the prior art can be used. If a human voice is present, voiceprint recognition matching is performed.
Step S203: voiceprint recognition matching is performed on the sound data.
In the voiceprint matching process, a person is identified by comparing the voiceprint features obtained from the collected sound data with the voiceprint features recorded in the voiceprint database. Specifically, after the human voice has been located based on energy detection and LTSD (Long-Term Spectral Divergence), noise reduction and dereverberation are applied, and features are extracted using algorithms such as dynamic time warping, vector quantization and support vector machines. The models used in the matching process include hidden Markov models and Gaussian mixture models.
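Of the models named above, the Gaussian-mixture approach can be illustrated with a toy example: one GMM is fitted per enrolled speaker, and a new utterance is attributed to the speaker whose model gives it the highest average log-likelihood. The synthetic 4-dimensional "feature frames" stand in for real MFCC frames after noise reduction and dereverberation; everything here is an illustrative assumption, not the patent's actual pipeline.

```python
# Toy GMM-based voiceprint matching: fit one GaussianMixture per speaker,
# then score a new utterance under each model and take the best.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "feature frames" for two enrolled speakers.
frames_a = rng.normal(loc=0.0, scale=0.5, size=(200, 4))
frames_b = rng.normal(loc=3.0, scale=0.5, size=(200, 4))

models = {
    "alice": GaussianMixture(n_components=2, random_state=0).fit(frames_a),
    "bob": GaussianMixture(n_components=2, random_state=0).fit(frames_b),
}

def identify_speaker(frames):
    # score() returns the average log-likelihood of the frames under a model.
    scores = {name: m.score(frames) for name, m in models.items()}
    return max(scores, key=scores.get)

test_utterance = rng.normal(loc=3.0, scale=0.5, size=(50, 4))
print(identify_speaker(test_utterance))  # bob
```

A production system would replace the synthetic frames with MFCCs and add the HMM temporal modelling the text mentions; the scoring structure stays the same.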
Step S204: judge whether the matching succeeds; if so, execute step S205, otherwise end.
Step S205: the user information recorded in the database that corresponds to the matched voiceprint is obtained.
Here, the user information includes the user's name, position, company name and so on.
Step S206: the interaction mode of the wearable device is determined, and the associated information is delivered through that interaction mode.
Illustratively, the name, position and company name of the user determined in step S205 can be shown on the display screen integrated into the smart glasses, so that the wearer obtains that user's information.
As can be seen from the above, while wearing the wearable device the user can have the voices of people in the environment identified automatically, with the corresponding associated information conveyed to the user. This improves the user's social convenience, allows useful information to be learned in the shortest possible time, and simplifies the user's operating steps.
In one embodiment, when the user meets a friend for the first time, a target identification mode of the wearable device (for example, smart glasses) can be opened. After the target identification mode is opened, the microphone integrated into the smart glasses collects the other party's speech and extracts the voiceprint features for storage in the voiceprint database. Meanwhile, the user can enter the other party's name, position, contact details and profile by voice, for example "Zhang San", "project manager", "138xxxxxxxx" and "head of the xx business department of xx company, professor at xx institute, research direction xxxx"; the entered information is stored in the voiceprint database in association with the corresponding voiceprint features.
Fig. 3 is a flow chart of another sound identification method provided by an embodiment of the present application. Optionally, recognizing the sound data comprises performing feature extraction and feature comparison on the sound data; correspondingly, if the recognition result meets the preset condition, obtaining the associated information recorded in the database comprises: if the feature comparison succeeds, obtaining the identification information recorded in the database that corresponds to the sound data. As shown in Fig. 3, the technical solution is as follows:
Step S301: when a voice recognition command is detected, sound data collected by a microphone is obtained, the microphone being arranged on the wearable device.
Step S302: feature extraction and feature comparison are performed on the sound data; if the feature comparison succeeds, the identification information recorded in the database that corresponds to the sound data is obtained.
A successful comparison indicates that the database records identification information corresponding to the sound data. The database can be generated through machine-learning training. Illustratively, the calls of different birds can be entered in advance to obtain their rhythm, pitch, timbre, syllable length and other distinguishing features, followed by classification training, for example classification training based on an SVM (Support Vector Machine). An SVM is a supervised learning model used for pattern recognition, classification and regression analysis. In step S302, features extracted from the currently collected sound data are compared with the features recorded in the trained database; if a consistent feature is found, the identification information corresponding to that feature (such as the name or species of the bird) is obtained.
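The SVM-based classification training mentioned above can be sketched with a toy example. The (rhythm, pitch) feature pairs and species labels are made up purely for illustration; real features would come from the audio front end (rhythm, pitch, timbre, syllable length).

```python
# Toy SVM classification of bird calls from two hand-picked features.
from sklearn.svm import SVC

# (rhythm, pitch) features: here sparrows chirp fast/high, crows slow/low.
X = [[0.9, 0.8], [0.8, 0.9], [0.85, 0.75],
     [0.2, 0.1], [0.1, 0.2], [0.15, 0.25]]
y = ["sparrow", "sparrow", "sparrow", "crow", "crow", "crow"]

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[0.8, 0.85], [0.1, 0.15]]))  # ['sparrow' 'crow']
```

The predicted label plays the role of the "identification information" that step S302 returns to the interaction step.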
Step S303: the interaction mode of the wearable device is determined, and the associated information is delivered through that interaction mode.
As can be seen from the above, the wearable device can automatically identify sounds collected from the environment and convey the identification result to the user, which helps users, especially children, learn about their surroundings.
Fig. 4 is a flow chart of another sound identification method provided by an embodiment of the present application. Optionally, the wearable device comprises smart glasses, and determining the interaction mode of the wearable device and delivering the associated information through it comprises: determining whether the display screen of the smart glasses is lit; if the display screen is lit, showing the associated information on the display screen, the display screen being integrated into the frame of the smart glasses; if the display screen is not lit, playing the associated information through a bone-conduction speaker integrated into the temple of the smart glasses. After the associated information has been played through the bone-conduction speaker, the method further comprises: establishing a network communication connection with a smartphone and sending the associated information to the smartphone for display. As shown in Fig. 4, the technical solution is as follows:
Step S401: when a voice recognition command is detected, sound data collected by a microphone is obtained, the microphone being arranged on the wearable device, and the wearable device comprising smart glasses.
Step S402: the sound data is recognized and, if the recognition result meets a preset condition, the associated information recorded in the database that corresponds to the recognition result is obtained.
Step S403: judge whether the display screen of the smart glasses is lit; if so, execute step S404, otherwise execute step S405.
When the user wears the smart glasses, the display screen integrated into the frame can show text, pictures and video information. If the screen is detected to be lit, the display function is active, and step S404 can be executed directly to show the associated information on the screen.
Step S404: the associated information is shown on the display screen, the display screen being integrated into the frame of the smart glasses.
Step S405: the associated information is played through a bone-conduction speaker, the bone-conduction speaker being integrated into the temple of the smart glasses.
Here, if the display function is not switched on, the associated information is instead played through the bone-conduction speaker to convey it to the user.
Step S406: a network communication connection with a smartphone is established, and the associated information is sent to the smartphone for display.
It should be noted that step S406 may also replace step S405: when the display screen is detected to be unlit, the associated information is received by the smartphone instead. Optionally, the smartphone may not display the associated information immediately, but only after the user presses the power key or unlocks and uses the phone. When there are multiple items of associated information, they can be displayed in a list, ordered according to the sequence in which they were determined.
As can be seen from the above, after the associated information has been determined, the interaction mode of the wearable device is determined and the most reasonable way of conveying the information is selected, which avoids disturbing the user while saving the wearable device's power.
Fig. 5 is a structural block diagram of a voice recognition apparatus provided by an embodiment of the present application. The apparatus is configured to execute the sound identification method provided by the above embodiments, and has the corresponding functional modules and beneficial effects. As shown in Fig. 5, the apparatus specifically includes a sound data acquisition module 101, an associated-information extraction module 102 and an associated-information interaction module 103, wherein:
the sound data acquisition module 101 is configured to obtain sound data collected by a microphone when a voice recognition command is detected, the microphone being arranged on the wearable device;
the associated-information extraction module 102 is configured to recognize the sound data and, if the recognition result meets a preset condition, obtain the associated information recorded in the database that corresponds to the recognition result;
the associated-information interaction module 103 is configured to determine the interaction mode of the wearable device and deliver the associated information through that interaction mode.
As can be seen from the above, the wearable device can automatically identify sounds in the environment and provide the corresponding related information, while reasonably arranging the interaction of that information according to the current interaction mode between the wearable device and the user, so that the related information is conveyed to the user conveniently and in time.
In a possible embodiment, the related information extraction module 102 is specifically configured to:
perform voiceprint recognition matching on the voice data; and if the voiceprint matching succeeds, obtain the user information recorded in the database that corresponds to the successfully matched voiceprint.
In a possible embodiment, the device further includes a voice data adding module 104 configured to:
before the user information corresponding to the successfully matched voiceprint is obtained from the database, add the identified distinct voiceprint features and the corresponding user information to the database, wherein the user information includes at least one of the user's name, position, contact method and brief introduction.
In a possible embodiment, the related information extraction module 102 is specifically configured to:
perform feature extraction and feature comparison on the voice data; and if the feature comparison succeeds, obtain the identification information recorded in the database that corresponds to the voice data.
In a possible embodiment, the wearable device includes smart glasses, and the related information interaction module 103 is specifically configured to:
determine whether the display screen of the smart glasses is lit, and if the display screen is in an illuminated state, display the related information through the display screen, wherein the display screen is integrated in the frame of the smart glasses.
In a possible embodiment, if the display screen is in a non-illuminated state, the related information is played through a bone-conduction speaker, wherein the bone-conduction speaker is integrated in a temple of the smart glasses.
In a possible embodiment, the related information interaction module 103 is further configured to: after the related information is played through the bone-conduction speaker, establish a network communication connection with a smartphone and send the related information to the smartphone so that the smartphone displays the related information.
On the basis of the above embodiments, this embodiment provides a wearable device. Fig. 6 is a structural schematic diagram of a wearable device provided by an embodiment of the present application, and Fig. 7 is a schematic pictorial diagram of a wearable device provided by an embodiment of the present application. As shown in Fig. 6 and Fig. 7, the wearable device includes: a memory 201, a processor (Central Processing Unit, CPU) 202, a display unit 203, a touch panel 204, a heart rate detection module 205, a distance sensor 206, a camera 207, a bone-conduction speaker 208, a microphone 209 and a breathing light 210, these components communicating through one or more communication buses or signal lines 211.
It should be understood that the illustrated wearable device is merely an example; a wearable device may have more or fewer components than shown in the drawings, may combine two or more components, or may be configured with different components. The various components shown in the drawings may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application-specific integrated circuits.
The wearable device for voice recognition provided in this embodiment is described in detail below, taking smart glasses as an example of the wearable device.
The memory 201 can be accessed by the CPU 202 and may include high-speed random access memory, and may also include non-volatile memory, for example one or more disk storage devices, flash memory devices or other volatile solid-state storage components.
The display unit 203 may be used to display image data and the operation interface of the operating system. The display unit 203 is embedded in the frame of the smart glasses; internal transmission lines 211 are provided inside the frame and are connected to the display unit 203.
The touch panel 204 is arranged on the outside of at least one temple of the smart glasses and is used to acquire touch data; it is connected to the CPU 202 through the internal transmission lines 211. The touch panel 204 can detect the user's finger sliding and clicking operations and transmit the detected data to the processor 202 for processing to generate corresponding control instructions, which may illustratively be a move-left instruction, a move-right instruction, a move-up instruction, a move-down instruction, and so on. Illustratively, the display unit 203 can display virtual image data transmitted by the processor 202, and the virtual image can change correspondingly according to the user operations detected by the touch panel 204. Specifically, this can be screen switching: when a move-left or move-right instruction is detected, the previous or next virtual image picture is switched to accordingly. When the display unit 203 shows video playback information, the move-left instruction may rewind the played content and the move-right instruction may fast-forward it. When the display unit 203 shows editable text content, the move-left, move-right, move-up and move-down instructions may be displacement operations on the cursor, that is, the position of the cursor can be moved according to the user's touch operations on the touch panel. When the content shown by the display unit 203 is a game animation picture, the move-left, move-right, move-up and move-down instructions may control an object in the game; in a flying game, for example, they may respectively control the heading of the aircraft. When the display unit 203 shows the video pictures of different channels, the move-left, move-right, move-up and move-down instructions may switch between channels, wherein the move-up and move-down instructions may switch to preset channels (such as channels the user commonly uses). When the display unit 203 shows still pictures, the move-left, move-right, move-up and move-down instructions may switch between different pictures, wherein the move-left instruction may switch to the previous picture, the move-right instruction to the next picture, the move-up instruction to the previous atlas and the move-down instruction to the next atlas. The touch panel 204 can also be used to control the display switch of the display unit 203: illustratively, when the touch area of the touch panel 204 is long-pressed, the display unit 203 is powered on and shows a graphic interface; when the touch area is long-pressed again, the display unit 203 is powered off. After the display unit 203 is powered on, sliding up and down on the touch panel 204 can adjust the brightness or resolution of the image shown on the display unit 203.
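The gesture-to-action dispatch described for the touch panel can be modeled as a lookup keyed by the current display context and the detected instruction. The context names and action strings below are illustrative assumptions, chosen to mirror the examples in the text (pictures, video playback, text editing, channel switching); the embodiment does not prescribe an implementation.

```python
# (display context, instruction) -> action: a sketch of the dispatch table
ACTIONS = {
    ("image", "left"): "previous picture",
    ("image", "right"): "next picture",
    ("video", "left"): "rewind",
    ("video", "right"): "fast-forward",
    ("text", "left"): "cursor left",
    ("text", "right"): "cursor right",
    ("channels", "up"): "preset channel",
}

def dispatch(context: str, instruction: str) -> str:
    """Map a detected touch instruction to an action for the current screen."""
    return ACTIONS.get((context, instruction), "no-op")

print(dispatch("video", "right"))
# fast-forward
```

Keeping the mapping in a table rather than in branching code makes it easy to change which instruction does what per display context, which matches how the same four instructions take on different meanings in each mode.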
The heart rate detection module 205 is used to measure the user's heart rate data; heart rate refers to the number of heartbeats per minute. The heart rate detection module 205 is arranged on the inside of a temple. Specifically, the heart rate detection module 205 may obtain human electrocardiographic data using dry electrodes in an electric-pulse measurement manner and determine the heart rate from the peak amplitudes in the ECG data; the heart rate detection module 205 may also be formed by a light transmitter and a light receiver that measure heart rate photoelectrically, in which case the module is arranged at the bottom of a temple, at the earlobe of the human auricle. After collecting the heart rate data, the heart rate detection module 205 sends it to the processor 202 for data processing to obtain the wearer's current heart rate value. In one embodiment, after determining the user's heart rate value, the processor 202 may display it in real time on the display unit 203. Optionally, when the processor 202 determines that the heart rate value is low (for example below 50) or high (for example above 100), it may trigger an alarm accordingly and at the same time send the heart rate value and/or the generated warning message to a server through the communication module.
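The heart-rate handling described above (a peak-based rate estimate, then an alarm outside a normal range) can be sketched as follows. The 50/100 bounds come from the text; estimating beats per minute from the mean interval between ECG R-peaks is an assumption for the sketch, since the embodiment only says the rate is determined from peak amplitudes in the ECG data.

```python
def heart_rate_from_peaks(peak_times_s):
    """Estimate beats per minute from ECG R-peak timestamps (in seconds)."""
    if len(peak_times_s) < 2:
        return None
    intervals = [b - a for a, b in zip(peak_times_s, peak_times_s[1:])]
    mean_rr = sum(intervals) / len(intervals)  # mean R-R interval in seconds
    return 60.0 / mean_rr

def check_alarm(bpm, low=50, high=100):
    """Trigger an alarm when the rate falls outside [low, high] (from the text)."""
    if bpm is None:
        return "insufficient data"
    if bpm < low:
        return "alarm: low heart rate"
    if bpm > high:
        return "alarm: high heart rate"
    return "normal"

bpm = heart_rate_from_peaks([0.0, 0.8, 1.6, 2.4])  # one beat every 0.8 s
print(round(bpm), check_alarm(bpm))
# 75 normal
```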
The distance sensor 206 may be arranged on the frame and is used to sense the distance between the face and the frame; the distance sensor 206 may be implemented using the infrared sensing principle. Specifically, the distance sensor 206 sends the collected distance data to the processor 202, and the processor 202 controls the brightness of the display unit 203 according to this distance. Illustratively, when the processor 202 determines that the distance collected by the distance sensor 206 is less than 5 centimetres, it correspondingly controls the display unit 203 to be in a lit state; when it determines that the distance sensor no longer detects an object approaching, it correspondingly controls the display unit 203 to be in an off state.
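The proximity rule above (light the display when the face is within 5 cm of the frame, otherwise turn it off) reduces to a simple threshold check. The function below follows the text literally, without hysteresis; the function name and return strings are illustrative.

```python
def display_state(distance_cm: float, threshold_cm: float = 5.0) -> str:
    """Display state implied by the measured face-to-frame distance.

    Less than 5 cm (the example threshold in the text) lights the
    display; anything else leaves it off.
    """
    return "lit" if distance_cm < threshold_cm else "off"

print(display_state(3.2), display_state(12.0))
# lit off
```

A production version would likely add hysteresis (separate on/off thresholds) so the display does not flicker when the measured distance hovers around 5 cm; the text does not specify this.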
The breathing light 210 may be arranged at the edge of the frame; when the display unit 203 stops displaying a picture, the breathing light 210 can be lit with a gradually brightening and dimming effect under the control of the processor 202.
The camera 207 may be arranged at the upper frame of the glasses frame, as a front photographing module that collects image data in front of the user, as a rear photographing module that collects the user's eyeball information, or as a combination of the two. Specifically, when the camera 207 collects a forward image, the collected image is sent to the processor 202 for recognition and processing, and a corresponding trigger event is triggered according to the recognition result. Illustratively, when the user wears the wearable device at home, the collected forward image is recognized; if an article of furniture is recognized, a corresponding query is made as to whether a corresponding control event exists, and if it does, the control interface corresponding to the control event is displayed on the display unit 203, and the user can control the corresponding article of furniture through the touch panel 204, wherein the article of furniture and the smart glasses are networked through Bluetooth or wireless ad hoc networking. When the user wears the wearable device outdoors, a target recognition mode can be enabled accordingly. This mode can be used to identify a specific person: the camera 207 sends the collected image to the processor 202 for face recognition processing, and if a preset face is recognized, a sound announcement can be made through the speaker integrated in the smart glasses. This mode can also be used to identify different plants: for example, the processor 202, according to a touch operation on the touch panel 204, records the image currently collected by the camera 207 and sends it to a server through the communication module for recognition; the server identifies the plant in the collected image and feeds back the relevant plant name and introduction to the smart glasses, and the feedback data is displayed on the display unit 203. The camera 207 may also collect images of the user's eyes, such as the eyeballs, and generate different control instructions by recognizing the rotation of the eyeballs. Illustratively, upward eyeball rotation generates a move-up control instruction, downward rotation generates a move-down control instruction, leftward rotation generates a move-left control instruction, and rightward rotation generates a move-right control instruction. As above, the display unit 203 can display the virtual image data transmitted by the processor 202, and the virtual image can change according to the control instructions generated from the movements of the user's eyeballs detected by the camera 207. Specifically, this can be screen switching: when a move-left or move-right control instruction is detected, the previous or next virtual image picture is switched to accordingly. When the display unit 203 shows video playback information, the move-left control instruction may rewind the played content and the move-right control instruction may fast-forward it. When the display unit 203 shows editable text content, the move-left, move-right, move-up and move-down control instructions may be displacement operations on the cursor. When the content shown on the display unit 203 is a game animation picture, the move-left, move-right, move-up and move-down control instructions may control an object in the game; in a flying game, for example, they may respectively control the heading of the aircraft. When the display unit 203 shows the video pictures of different channels, the move-left, move-right, move-up and move-down control instructions may switch between channels, wherein the move-up and move-down control instructions may switch to pre-set channels (such as channels the user commonly uses). When the display unit 203 shows still pictures, the move-left, move-right, move-up and move-down control instructions may switch between different pictures, wherein the move-left control instruction may switch to the previous picture, the move-right control instruction to the next picture, the move-up control instruction to the previous atlas and the move-down control instruction to the next atlas.
The bone-conduction speaker 208 is arranged on the inner wall side of at least one temple and is used to convert the audio signal received from the processor 202 into a vibration signal. The bone-conduction speaker 208 transmits sound to the human inner ear through the skull: the electric audio signal is converted into a vibration signal that is transmitted through the skull to the cochlea and then perceived through the auditory nerve. Using the bone-conduction speaker 208 as the sounding device reduces the hardware thickness and weight; at the same time it produces no electromagnetic radiation, is not affected by electromagnetic radiation, and has the advantages of noise resistance, water resistance and leaving the ears free.
The microphone 209 may be arranged on the lower frame of the glasses frame and is used to collect external sound (from the user or the environment) and transmit it to the processor 202 for processing. Illustratively, the microphone 209 collects the sound made by the user, and the processor 202 performs voiceprint recognition on it; if the voiceprint is identified as that of an authenticated user, subsequent voice control can be accepted accordingly. Specifically, the user can issue a voice command; the microphone 209 sends the collected voice to the processor 202 for recognition, and a corresponding control instruction, such as "power on", "power off", "increase display brightness" or "decrease display brightness", is generated according to the recognition result; the processor 202 then executes the corresponding control processing according to the generated control instruction.
The voice recognition device and the wearable device provided in the above embodiments can execute the sound identification method for a wearable device provided by any embodiment of the present invention, and have the corresponding functional modules and beneficial effects for executing the method. For technical details not described in detail in the above embodiments, reference may be made to the sound identification method for a wearable device provided by any embodiment of the present invention.
An embodiment of the present application also provides a storage medium containing wearable-device-executable instructions which, when executed by a wearable device processor, are used to perform a sound identification method, the method comprising:
when a voice recognition command is detected, acquiring the voice data collected by a microphone, the microphone being arranged on the wearable device;
recognizing the voice data and, if the recognition result meets a preset condition, obtaining the related information recorded in a database that corresponds to the recognition result;
determining the interaction mode of the wearable device and carrying out the interaction of the related information through the interaction mode.
In a possible embodiment, recognizing the voice data includes:
performing voiceprint recognition matching on the voice data;
and correspondingly, when the recognition result meets the preset condition, obtaining the related information recorded in the database that corresponds to the recognition result includes:
if the voiceprint matching succeeds, obtaining the user information recorded in the database that corresponds to the successfully matched voiceprint.
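As a rough illustration of this voiceprint-matching step, one common approach compares a fixed-length voiceprint embedding of the incoming voice data against the enrolled embeddings in the database using cosine similarity. The embedding scheme, the 0.8 threshold and the database layout below are assumptions for the sketch, since the embodiments do not specify a matching algorithm.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_voiceprint(embedding, database, threshold=0.8):
    """Return the user info of the best-matching enrolled voiceprint,
    or None when no enrolled voiceprint exceeds the threshold."""
    best_user, best_score = None, threshold
    for user_info, enrolled in database:
        score = cosine(embedding, enrolled)
        if score > best_score:
            best_user, best_score = user_info, score
    return best_user

# Hypothetical enrolled voiceprints with their user information.
db = [({"name": "Zhang San", "position": "Engineer"}, [0.9, 0.1, 0.2]),
      ({"name": "Li Si", "position": "Manager"}, [0.1, 0.9, 0.3])]
print(match_voiceprint([0.88, 0.12, 0.21], db))
```

Returning `None` on a failed match corresponds to the preset condition not being met, in which case no user information is fetched from the database.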
In a possible embodiment, before the user information corresponding to the successfully matched voiceprint is obtained from the database, the method further includes:
adding the identified distinct voiceprint features and the corresponding user information to the database, wherein the user information includes at least one of the user's name, position, contact method and brief introduction.
In a possible embodiment, recognizing the voice data includes:
performing feature extraction and feature comparison on the voice data;
and correspondingly, when the recognition result meets the preset condition, obtaining the related information recorded in the database that corresponds to the recognition result includes:
if the feature comparison succeeds, obtaining the identification information recorded in the database that corresponds to the voice data.
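The feature extraction and comparison path can be illustrated with deliberately simple acoustic features (per-frame energy and zero-crossing counts) compared by Euclidean distance. A real implementation would use richer features such as MFCCs, and the frame size and distance threshold below are assumptions for the sketch.

```python
def extract_features(samples, frame_size=4):
    """Toy acoustic features: per-frame energy and zero-crossing count."""
    feats = []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[i:i + frame_size]
        energy = sum(s * s for s in frame) / frame_size
        zero_crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
        feats.extend([energy, zero_crossings])
    return feats

def compare(feats_a, feats_b, threshold=1.0):
    """Euclidean-distance comparison; True means the features match."""
    dist = sum((x - y) ** 2 for x, y in zip(feats_a, feats_b)) ** 0.5
    return dist < threshold

ref = extract_features([0.1, -0.1, 0.2, -0.2, 0.3, -0.3, 0.1, -0.1])
probe = extract_features([0.1, -0.1, 0.2, -0.2, 0.3, -0.3, 0.1, -0.1])
print(compare(ref, probe))  # identical signals should match
```

A successful comparison here plays the role of the preset condition being met, after which the identification information for the matched voice data would be looked up in the database.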
In a possible embodiment, the wearable device includes smart glasses, and determining the interaction mode of the wearable device and carrying out the interaction of the related information through the interaction mode includes:
determining whether the display screen of the smart glasses is lit, and if the display screen is in an illuminated state, displaying the related information through the display screen, wherein the display screen is integrated in the frame of the smart glasses.
In a possible embodiment, if the display screen is in a non-illuminated state, the related information is played through a bone-conduction speaker, wherein the bone-conduction speaker is integrated in a temple of the smart glasses.
In a possible embodiment, after the related information is played through the bone-conduction speaker, the method further includes:
establishing a network communication connection with a smartphone and sending the related information to the smartphone so that the smartphone displays the related information.
Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media, such as CD-ROMs, floppy disks or tape devices; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory, such as flash memory or magnetic media (e.g. hard disks or optical storage); registers or other similar types of memory components, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or may be located in a different, second computer system connected to the first computer system through a network (such as the Internet). The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (for example, in different computer systems connected through a network). The storage medium may store program instructions executable by one or more processors (for example, implemented as a computer program).
Of course, for the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the sound identification method operations described above, and can also perform the relevant operations in the sound identification method provided by any embodiment of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, the present invention is not limited to the above embodiments only and, without departing from the inventive concept, may also include other equivalent embodiments, the scope of the invention being determined by the scope of the appended claims.
Claims (10)
1. A sound identification method, characterized by comprising:
when a voice recognition command is detected, acquiring the voice data collected by a microphone, the microphone being arranged on a wearable device;
recognizing the voice data and, if the recognition result meets a preset condition, obtaining the related information recorded in a database that corresponds to the recognition result;
determining the interaction mode of the wearable device and carrying out the interaction of the related information through the interaction mode.
2. The method according to claim 1, characterized in that recognizing the voice data includes:
performing voiceprint recognition matching on the voice data;
and correspondingly, when the recognition result meets the preset condition, obtaining the related information recorded in the database that corresponds to the recognition result includes:
if the voiceprint matching succeeds, obtaining the user information recorded in the database that corresponds to the successfully matched voiceprint.
3. The method according to claim 2, characterized in that, before the user information corresponding to the successfully matched voiceprint is obtained from the database, the method further includes:
adding the identified distinct voiceprint features and the corresponding user information to the database, wherein the user information includes at least one of the user's name, position, contact method and brief introduction.
4. The method according to claim 1, characterized in that recognizing the voice data includes:
performing feature extraction and feature comparison on the voice data;
and correspondingly, when the recognition result meets the preset condition, obtaining the related information recorded in the database that corresponds to the recognition result includes:
if the feature comparison succeeds, obtaining the identification information recorded in the database that corresponds to the voice data.
5. The method according to any one of claims 1-4, characterized in that the wearable device includes smart glasses, and determining the interaction mode of the wearable device and carrying out the interaction of the related information through the interaction mode includes:
determining whether the display screen of the smart glasses is lit, and if the display screen is in an illuminated state, displaying the related information through the display screen, wherein the display screen is integrated in the frame of the smart glasses.
6. The method according to claim 5, characterized in that, if the display screen is in a non-illuminated state, the related information is played through a bone-conduction speaker, wherein the bone-conduction speaker is integrated in a temple of the smart glasses.
7. The method according to claim 6, characterized in that, after the related information is played through the bone-conduction speaker, the method further includes:
establishing a network communication connection with a smartphone and sending the related information to the smartphone so that the smartphone displays the related information.
8. A voice recognition device, characterized by comprising:
a voice data acquisition module, configured to acquire, when a voice recognition command is detected, the voice data collected by a microphone, the microphone being arranged on a wearable device;
a related information extraction module, configured to recognize the voice data and, if the recognition result meets a preset condition, obtain the related information recorded in a database that corresponds to the recognition result;
a related information interaction module, configured to determine the interaction mode of the wearable device and carry out the interaction of the related information through the interaction mode.
9. A wearable device, comprising: a processor, a memory, and a computer program stored on the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the sound identification method according to any one of claims 1-7.
10. A storage medium containing wearable-device-executable instructions, characterized in that the wearable-device-executable instructions, when executed by a wearable device processor, are used to perform the sound identification method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811001599.5A CN109119080A (en) | 2018-08-30 | 2018-08-30 | Sound identification method, device, wearable device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811001599.5A CN109119080A (en) | 2018-08-30 | 2018-08-30 | Sound identification method, device, wearable device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109119080A true CN109119080A (en) | 2019-01-01 |
Family
ID=64860522
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811001599.5A Pending CN109119080A (en) | 2018-08-30 | 2018-08-30 | Sound identification method, device, wearable device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109119080A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110334497A (en) * | 2019-06-28 | 2019-10-15 | Oppo广东移动通信有限公司 | Switching method and wearable electronic equipment, the storage medium of display interface |
CN112071311A (en) * | 2019-06-10 | 2020-12-11 | Oppo广东移动通信有限公司 | Control method, control device, wearable device and storage medium |
CN113687595A (en) * | 2021-08-24 | 2021-11-23 | 深圳市新科盈数码有限公司 | Wearable wrist-watch of intelligence voice broadcast |
CN115412771A (en) * | 2022-08-11 | 2022-11-29 | 深圳创维-Rgb电子有限公司 | Interaction control method between smart watch and smart television and related equipment |
CN115662436A (en) * | 2022-11-14 | 2023-01-31 | 北京探境科技有限公司 | Audio processing method and device, storage medium and intelligent glasses |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130339018A1 (en) * | 2012-06-15 | 2013-12-19 | Sri International | Multi-sample conversational voice verification |
CN104090973A (en) * | 2014-07-18 | 2014-10-08 | 百度在线网络技术(北京)有限公司 | Information presentation method and device |
CN105205457A (en) * | 2015-09-10 | 2015-12-30 | 上海卓易科技股份有限公司 | Information acquisition system and method based on face recognition |
CN106302044A (en) * | 2016-08-12 | 2017-01-04 | 美的集团股份有限公司 | A kind of household electric appliance control method, household electrical appliances and appliance control system |
CN107729737A (en) * | 2017-11-08 | 2018-02-23 | 广东小天才科技有限公司 | The acquisition methods and wearable device of a kind of identity information |
2018
- 2018-08-30: CN application CN201811001599.5A filed; published as CN109119080A (en), status Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130339018A1 (en) * | 2012-06-15 | 2013-12-19 | Sri International | Multi-sample conversational voice verification |
CN104090973A (en) * | 2014-07-18 | 2014-10-08 | 百度在线网络技术(北京)有限公司 | Information presentation method and device |
CN105205457A (en) * | 2015-09-10 | 2015-12-30 | 上海卓易科技股份有限公司 | Information acquisition system and method based on face recognition |
CN106302044A (en) * | 2016-08-12 | 2017-01-04 | 美的集团股份有限公司 | A kind of household electric appliance control method, household electrical appliances and appliance control system |
CN107729737A (en) * | 2017-11-08 | 2018-02-23 | 广东小天才科技有限公司 | The acquisition methods and wearable device of a kind of identity information |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112071311A (en) * | 2019-06-10 | 2020-12-11 | Oppo广东移动通信有限公司 | Control method, control device, wearable device and storage medium |
CN110334497A (en) * | 2019-06-28 | 2019-10-15 | Oppo广东移动通信有限公司 | Switching method and wearable electronic equipment, the storage medium of display interface |
CN110334497B (en) * | 2019-06-28 | 2021-10-26 | Oppo广东移动通信有限公司 | Display interface switching method, wearable electronic device and storage medium |
CN113687595A (en) * | 2021-08-24 | 2021-11-23 | 深圳市新科盈数码有限公司 | Wearable wrist-watch of intelligence voice broadcast |
CN115412771A (en) * | 2022-08-11 | 2022-11-29 | 深圳创维-Rgb电子有限公司 | Interaction control method between smart watch and smart television and related equipment |
CN115662436A (en) * | 2022-11-14 | 2023-01-31 | 北京探境科技有限公司 | Audio processing method and device, storage medium and intelligent glasses |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021043053A1 (en) | Animation image driving method based on artificial intelligence, and related device | |
CN109119080A (en) | Sound identification method, device, wearable device and storage medium | |
CN107464564B (en) | Voice interaction method, device and equipment | |
CN109120790B (en) | Call control method and device, storage medium and wearable device | |
CN109254659A (en) | Wearable device control method, device, storage medium and wearable device | |
CN109259724B (en) | Eye monitoring method and device, storage medium and wearable device | |
JP7143847B2 (en) | Information processing system, information processing method, and program | |
CN109743504A (en) | Auxiliary photographing method, mobile terminal and storage medium | |
CN109032384A (en) | Music control method, device, storage medium and wearable device | |
CN109358744A (en) | Information sharing method, device, storage medium and wearable device | |
CN109224432A (en) | Entertainment application control method, device, storage medium and wearable device | |
CN107864353B (en) | Video recording method and mobile terminal | |
CN109034827A (en) | Payment method, device, wearable device and storage medium | |
CN109521927A (en) | Robot interaction method and device | |
CN109819167B (en) | Image processing method and device and mobile terminal | |
CN109061903A (en) | Data display method, device, intelligent glasses and storage medium | |
CN109040641A (en) | Video data synthesis method and device | |
CN109189225A (en) | Display interface adjustment method, device, wearable device and storage medium | |
CN109241924A (en) | Internet-based multi-platform information interaction system | |
CN109144264A (en) | Display interface adjustment method, device, wearable device and storage medium | |
CN110096251A (en) | Interaction method and device | |
CN109119057A (en) | Music composition method and apparatus, storage medium and wearable device | |
CN109067627A (en) | Household appliance control method, device, wearable device and storage medium | |
CN109255314A (en) | Information prompting method, device, smart glasses and storage medium | |
CN108763475A (en) | Recording method, recording device and terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20190101 |