CN110197662A - Sound control method, wearable device and computer readable storage medium - Google Patents
- Publication number: CN110197662A (application CN201910478950.8A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/74—Details of telephonic subscriber devices with voice recognition means
Abstract
This application discloses a sound control method, a wearable device, and a computer-readable storage medium. The sound control method includes: acquiring, in real time via a preset voice acquisition module, the voice information of the wearable device's surrounding environment, and detecting, according to a preset instruction analysis set, whether the voice information is a control instruction; if so, obtaining the interface layout of the currently displayed page and extracting the control information of each control on the interface layout; detecting, according to all the control information, whether the control instruction is a valid control instruction; and if so, executing the valid control instruction via a simulated-click script based on the mapping of the instruction analysis set. The present application addresses the technical problem of how to improve the unfriendly small-screen operating experience of wearable devices such as wrist phones.
Description
Technical field
This application relates to the field of voice control, and in particular to a sound control method, a wearable device, and a computer-readable storage medium.
Background technique
With the rapid development of electronic devices and their ever-higher degree of intelligence, the wrist phone has appeared on the market as a new kind of technology product. As a wearable device, the wrist phone has advantages such as small size and easy portability, providing many users with a more convenient and engaging end product. At the same time, however, the wrist phone's screen is narrow, and the on-screen operating experience is extremely unfriendly: scenarios such as entering a password are particularly inconvenient because the buttons are small. The narrow screen easily causes mis-operation, which greatly inconveniences users and seriously degrades the user experience. How to improve the small-screen operating experience of wearable devices such as wrist phones has therefore become an urgent technical problem to be solved.
Summary of the invention
The main purpose of the present invention is to provide a sound control method, a wearable device, and a computer-readable storage medium, aiming to solve the technical problem of how to improve the small-screen operating experience of wearable devices such as wrist phones.

To achieve the above object, an embodiment of the present invention provides a sound control method applied to a wearable device. The sound control method includes:

acquiring, in real time via a preset voice acquisition module, the voice information of the wearable device's surrounding environment, and detecting, according to a preset instruction analysis set, whether the voice information is a control instruction;

if so, obtaining the interface layout of the currently displayed page, and extracting the control information of each control on the interface layout;

detecting, according to all the control information, whether the control instruction is a valid control instruction;

if so, executing the valid control instruction via a simulated-click script based on the mapping of the instruction analysis set.
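The four steps above can be sketched as a minimal pipeline. The patent describes the steps abstractly, so every class, field, and function name below (`Instruction`, `WearableUI`, `handle_voice`, etc.) is an illustrative assumption, not an identifier from the disclosure:

```python
from dataclasses import dataclass

# Minimal sketch of the claimed four-step voice-control pipeline.
# All names are hypothetical; the patent names no concrete API.

@dataclass
class Instruction:          # a mapping instruction from the analysis set
    phrase: str             # spoken form, e.g. "open music"
    page_id: str            # target page identifier
    control_id: str         # target control identifier

@dataclass
class Page:
    page_id: str
    controls: list          # control identifiers present on the layout

class WearableUI:
    def __init__(self, page):
        self.current_page = page
        self.clicked = []
    def simulate_click(self, control_id):
        # stand-in for the simulated-click script
        self.clicked.append(control_id)

def handle_voice(instructions, ui, voice_text):
    # Step 1: detect whether the voice info maps to a control instruction.
    instr = next((i for i in instructions if i.phrase == voice_text), None)
    if instr is None:
        return False
    # Steps 2-3: read the current layout's control info and check validity.
    if instr.page_id != ui.current_page.page_id:
        return False
    if instr.control_id not in ui.current_page.controls:
        return False
    # Step 4: execute via the simulated-click script.
    ui.simulate_click(instr.control_id)
    return True
```

In this sketch an instruction only fires when the page it targets is actually on screen, which mirrors the patent's validity check before simulating the click.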
Optionally, the step of detecting, according to the preset instruction analysis set, whether the voice information is a control instruction includes:

extracting all voice keywords in the voice information, and confirming the keyword attribute of each voice keyword, where the keyword attributes include verb keywords and noun keywords;

scoring the voice keywords by weight according to the preset instruction analysis set, to obtain a weight score for each voice keyword;

obtaining the optimal verb keyword and optimal noun keyword with the highest weight scores among the voice keywords;

if a mapping instruction matching both the optimal verb keyword and the optimal noun keyword is detected in the instruction analysis set, confirming that mapping instruction as the control instruction.
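A minimal sketch of the weight-scoring and verb/noun matching just described. The weight table, phrases, and mapping entries are invented for illustration; a real implementation would sit behind a speech-recognition front end:

```python
# Hypothetical sketch: score extracted keywords against a preset
# instruction analysis set, pick the highest-weight verb and noun,
# then look up a mapping instruction matching both.

WEIGHTS = {  # (keyword, attribute) -> weight score (made-up values)
    ("open", "verb"): 0.9, ("please", "verb"): 0.1,
    ("music", "noun"): 0.8, ("app", "noun"): 0.3,
}
MAPPINGS = {("open", "music"): "CLICK_MUSIC_ICON"}

def detect_control_instruction(keywords):
    """keywords: list of (word, attribute) pairs already extracted."""
    scored = [(WEIGHTS.get((w, a), 0.0), w, a) for w, a in keywords]
    verbs = [s for s in scored if s[2] == "verb"]
    nouns = [s for s in scored if s[2] == "noun"]
    if not verbs or not nouns:
        return None
    best_verb = max(verbs)[1]   # optimal (highest-weight) verb keyword
    best_noun = max(nouns)[1]   # optimal (highest-weight) noun keyword
    # confirm a mapping instruction exists for the optimal pair
    return MAPPINGS.get((best_verb, best_noun))
```

If no mapping instruction matches the optimal pair, the function returns `None`, which corresponds to the voice information not being confirmed as a control instruction.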
Optionally, the step of confirming the mapping instruction as the control instruction, when a mapping instruction matching the optimal verb keyword and optimal noun keyword is detected in the instruction analysis set, further includes:

if more than one mapping instruction is detected, confirming the mapping instruction occupying the smallest memory space as the control instruction.
Optionally, the currently displayed page includes a current page identifier, and the control information includes current control identifiers. The step of detecting, according to all the control information, whether the control instruction is a valid control instruction includes:

extracting the target page identifier and target control identifier from the control instruction;

detecting whether the currently displayed page has a current page identifier mapped to the target page identifier, and whether the control information contains a current control identifier mapped to the target control identifier;

if the currently displayed page has a current page identifier mapped to the target page identifier, and the control information contains a current control identifier mapped to the target control identifier, confirming that the control instruction is a valid control instruction.
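The validity check above amounts to two identifier lookups. A sketch under assumed data shapes (plain dicts with `page_id`/`control_id` keys are an invention for illustration, not the patent's representation):

```python
# Hypothetical validity check: a control instruction is valid only if
# its target page identifier maps to the currently displayed page AND
# its target control identifier maps to a control on that page's layout.

def is_valid_instruction(instruction, current_page_id, control_infos):
    # extract the target identifiers carried by the control instruction
    target_page = instruction["page_id"]
    target_control = instruction["control_id"]
    # both mappings must exist for the instruction to be actionable
    page_ok = (target_page == current_page_id)
    control_ok = any(info["control_id"] == target_control
                     for info in control_infos)
    return page_ok and control_ok
```

Requiring both mappings prevents a recognized phrase from clicking a control that is not actually on screen.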
Optionally, the step of executing the valid control instruction via the simulated-click script based on the mapping of the instruction analysis set includes:

if the control instruction is detected to be a specific control instruction, displaying on the currently displayed page a time-progress scroll bar lasting a preset time value, and executing the valid control instruction via the simulated-click script based on the mapping of the instruction analysis set after the preset time value elapses.
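For such "specific" (e.g. sensitive) control instructions, the text delays execution behind a visible time-progress bar. A sketch of that gating with the progress bar reduced to a callback; the function names and the cancellation hook are assumptions added for illustration:

```python
import time

# Hypothetical delayed-execution gate: a specific control instruction
# shows a progress bar for `delay` seconds, then fires the simulated
# click - unless a cancellation is observed during the countdown.

def execute_with_countdown(run_click, delay, show_progress, cancelled):
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if cancelled():
            return False                 # user aborted during countdown
        if elapsed >= delay:
            run_click()                  # fire the simulated-click script
            return True
        show_progress(min(elapsed / delay, 1.0))  # update progress bar
        time.sleep(0.01)
```

The countdown gives the user a window to abort a sensitive action that speech recognition may have triggered by mistake, which is consistent with the patent's motivation of avoiding mis-operation.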
Optionally, the step of acquiring, in real time via the preset voice acquisition module, the voice information of the wearable device's surrounding environment, and detecting, according to the preset instruction analysis set, whether the voice information is a control instruction includes:

if an exclusive wake-up instruction is detected via the preset voice acquisition module, acquiring the voice information of the wearable device's surrounding environment in real time.
Optionally, the method further includes:

if an instruction-customization command input by the user is detected, outputting all instruction items in the instruction analysis set;

obtaining a selection instruction triggered on the instruction items, and obtaining the specified order of the candidate instruction items corresponding to the selection instruction, where the candidate instruction items include multiple instruction items;

if an edit-combine command corresponding to the candidate instruction items is detected, combining the candidate instruction items into a target instruction item in the specified order based on the edit-combine command, and outputting a renaming input box for the target instruction item;

taking the name entered in the renaming input box as the instruction name of the target instruction item, and adding the target instruction item to the instruction analysis set.
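The custom-instruction flow above is essentially a combine-and-rename operation over the instruction analysis set. A minimal sketch, with the set modeled as a dict and all names and step strings invented:

```python
# Hypothetical instruction customization: combine the selected
# instruction items, in the user's specified order, into one named
# target instruction item and add it back to the analysis set.

def combine_instructions(analysis_set, selected_names, new_name):
    """analysis_set: {name: [steps]}; selected_names: in specified order."""
    combined = []
    for name in selected_names:           # preserve the specified order
        combined.extend(analysis_set[name])
    analysis_set[new_name] = combined     # the renamed target instruction
    return analysis_set[new_name]
```

After this, uttering the new name alone could trigger the whole combined sequence, which is the convenience the customization step appears to target.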
Optionally, after the step of acquiring, in real time via the preset voice acquisition module, the voice information of the wearable device's surrounding environment, and detecting, according to the preset instruction analysis set, whether the voice information is a control instruction, the method further includes:

performing sound-quality recognition on the voice information to obtain a quality level of the voice information;

if the quality level is lower than a preset quality level, outputting a prompt to re-enter the voice information.
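The quality gate can be sketched as a threshold on an audio-level metric. RMS amplitude is an assumed stand-in here; the patent does not say how the "quality level" is computed:

```python
# Hypothetical quality gate: score the captured audio and ask the user
# to repeat it when the score falls below the preset quality level.
# RMS amplitude is used as a stand-in quality metric.

def quality_level(samples):
    if not samples:
        return 0.0
    return (sum(x * x for x in samples) / len(samples)) ** 0.5

def gate_voice(samples, preset_level=0.1):
    if quality_level(samples) < preset_level:
        return "please re-enter the voice information"   # prompt output
    return None                                          # good enough
```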
In addition, to achieve the above object, the present invention also provides a wearable device. The wearable device includes a memory, a processor, and a computer program stored on the memory and runnable on the processor, where the computer program, when executed by the processor, implements the steps of the sound control method described above.

In addition, to achieve the above object, the present invention also provides a computer storage medium. A computer program is stored in the computer storage medium, and the computer program, when executed by a processor, implements the steps of the sound control method described above.
The sound control method, device, and computer-readable storage medium proposed by the embodiments of the present invention acquire, in real time via a preset voice acquisition module, the voice information of the wearable device's surrounding environment and detect, according to a preset instruction analysis set, whether the voice information is a control instruction; if so, obtain the interface layout of the currently displayed page and extract the control information of each control on the interface layout; detect, according to all the control information, whether the control instruction is a valid control instruction; and if so, execute the valid control instruction via a simulated-click script based on the mapping of the instruction analysis set. With the above scheme, wearable devices such as wrist phones can achieve an optimized manipulation experience under the user's voice control, solving the technical problem of the unfriendly small-screen operating experience of such devices, avoiding the mis-operation or inconvenience caused by entering passwords or clicking small controls on a narrow screen, and thereby improving the controllability of the wearable device and the user's manipulation experience.
Detailed description of the invention
The drawings herein are incorporated into and form part of this specification; they show embodiments consistent with the invention and, together with the specification, serve to explain the principles of the invention.

To explain the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, for those of ordinary skill in the art, other drawings can be obtained from these drawings without any creative labor.
Fig. 1 is a hardware structural diagram of an embodiment of a wearable device provided by an embodiment of the present invention;

Fig. 2 is a hardware schematic of an embodiment of the wearable device from a first viewing angle, provided by an embodiment of the present application;

Fig. 3 is a hardware schematic of an embodiment of the wearable device from a second viewing angle, provided by an embodiment of the present application;

Fig. 4 is a hardware schematic of an embodiment of the wearable device from a third viewing angle, provided by an embodiment of the present application;

Fig. 5 is a hardware schematic of an embodiment of the wearable device from a fourth viewing angle, provided by an embodiment of the present application;

Fig. 6 is a flow diagram of an embodiment of the sound control method provided by an embodiment of the present application;

Fig. 7 is a refined flow diagram of step S10 in Fig. 6;

Fig. 8 is a refined flow diagram of step S30 in Fig. 6;

Fig. 9 is a flow diagram of another embodiment of the sound control method provided by an embodiment of the present application.
Specific embodiment
It should be understood that the specific embodiments described herein merely illustrate the invention and are not intended to limit it.

In the following description, suffixes such as "module", "component", or "unit" used to denote elements are only intended to facilitate the explanation of the invention and have no specific meaning in themselves. Therefore, "module", "component", and "unit" can be used interchangeably.
The wearable device provided in the embodiments of the present invention includes mobile terminals such as smart bracelets, smartwatches, and smartphones. With the continuous development of screen technology and the appearance of screen forms such as flexible screens and folding screens, mobile terminals such as smartphones can also serve as wearable devices. The wearable device provided in the embodiments of the present invention may include components such as an RF (Radio Frequency) unit, a WiFi module, an audio output unit, an A/V (audio/video) input unit, sensors, a display unit, a user input unit, an interface unit, a memory, a processor, and a power supply.

The following description takes a wearable device as an example. Referring to Fig. 1, a hardware structural diagram of a wearable device for realizing the various embodiments of the present invention, the wearable device 100 may include components such as an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will understand that the wearable device structure shown in Fig. 1 does not limit the wearable device, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The components of the wearable device are introduced in detail below with reference to Fig. 1:
The radio frequency unit 101 may be used to send and receive signals during messaging or a call. Specifically, the radio frequency unit 101 may send uplink information to a base station, and may receive downlink information sent by the base station and forward it to the processor 110 of the wearable device for processing. The downlink information the base station sends to the radio frequency unit 101 may be generated according to the uplink information sent by the radio frequency unit 101, or may be actively pushed after an update of the wearable device's information is detected. For example, after detecting that the geographical location of the wearable device has changed, the base station may send a message notification of the location change to the radio frequency unit 101 of the wearable device; upon receiving the message notification, the radio frequency unit 101 may forward it to the processor 110 of the wearable device for processing, and the processor 110 may control the message notification to be shown on the display panel 1061 of the wearable device. In general, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like.

In addition, the radio frequency unit 101 can also communicate with the network and other devices through wireless communication, which may specifically include communicating with a server in a network system. For example, the wearable device can download file resources from a server through wireless communication, such as downloading an application; after the wearable device completes the download of a certain application, if the file resource corresponding to that application on the server is updated, the server can push a resource-update message notification to the wearable device through wireless communication to remind the user to update the application. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), and TDD-LTE (Time Division Duplexing-Long Term Evolution).
In one embodiment, the wearable device 100 can access an existing communication network by inserting a SIM card.

In another embodiment, the wearable device 100 can access an existing communication network by being provisioned with an eSIM (Embedded-SIM); using an eSIM saves internal space in the wearable device and reduces its thickness.

It is understood that although Fig. 1 shows the radio frequency unit 101, it is not an essential component of the wearable device and can be omitted entirely as needed without changing the essence of the invention. The wearable device 100 can realize communication connections with other devices or a communication network through the WiFi module 102 alone; the embodiments of the present invention are not limited in this respect.
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the wearable device can help the user send and receive e-mail, browse web pages, and access streaming video, providing the user with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it is not an essential component of the wearable device and can be omitted entirely as needed without changing the essence of the invention.
When the wearable device 100 is in a mode such as call-signal reception, call, recording, speech recognition, or broadcast reception, the audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the wearable device 100 (for example, a call-signal reception sound or a message-reception sound). The audio output unit 103 may include a loudspeaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in video acquisition mode or image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or sent via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operational modes such as telephone call mode, recording mode, and speech recognition mode, and can process such sound into audio data. In telephone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101. The microphone 1042 can implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) noise or interference generated while sending and receiving audio signals.
In one embodiment, the wearable device 100 includes one or more cameras. By turning on a camera, image capture can be realized, enabling functions such as taking photos and recording video; the position of the camera can be configured as needed.
The wearable device 100 also includes at least one sensor 105, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensors include an ambient light sensor and a proximity sensor. The ambient light sensor can adjust the brightness of the display panel 1061 according to the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the wearable device 100 is moved to the ear. As a kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), and when static can detect the magnitude and direction of gravity; it can be used for applications that identify device posture (such as horizontal/vertical screen switching, related games, and magnetometer pose calibration), vibration-identification functions (such as pedometer and tapping), and so on.
In one embodiment, the wearable device 100 also includes a proximity sensor; by using the proximity sensor, the wearable device can realize non-contact manipulation, providing more modes of operation.

In one embodiment, the wearable device 100 also includes a heart rate sensor which, when the device is worn close to the wearer, can detect the heart rate.

In one embodiment, the wearable device 100 may also include a fingerprint sensor; by reading fingerprints, functions such as security verification can be realized.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in a form such as a liquid crystal display (Liquid Crystal Display, LCD) or an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display.

In one embodiment, the display panel 1061 uses a flexible display screen; a wearable device with a flexible display screen can bend when worn, fitting the wearer better. Optionally, the flexible display screen may use an OLED screen body or a graphene screen body; in other embodiments, it may also use other display materials, and the present embodiment is not limited in this respect.

In one embodiment, the display panel 1061 of the wearable device may take a rectangular shape so that it can wrap around conveniently when worn. In other embodiments, other shapes may also be adopted.
The user input unit 107 can be used to receive input numeric or character information and to generate key-signal inputs related to user settings and function control of the wearable device. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, collects the user's touch operations on or near it (for example, operations performed with a finger, a stylus, or any other suitable object or attachment on or near the touch panel 1071) and drives the corresponding connecting apparatus according to a preset program. The touch panel 1071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 can be realized in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 can also include other input devices 1072, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons or a switch key), a trackball, a mouse, a joystick, and the like; no limitation is placed here.

In one embodiment, one or more buttons may be arranged on the side of the wearable device 100. The buttons can be implemented in various ways such as short-press, long-press, and rotation, realizing a variety of operating effects. Multiple buttons can also be used in combination to realize a variety of operating functions.
Further, the touch panel 1071 can cover the display panel 1061. After the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are shown as two independent components realizing the input and output functions of the wearable device, in certain embodiments the touch panel 1071 and the display panel 1061 can be integrated to realize those input and output functions; no limitation is placed here. For example, when the processor 110 receives a message notification of a certain application through the radio frequency unit 101, it can control the message notification to be displayed in a certain preset area of the display panel 1061; the preset area corresponds to a certain area of the touch panel 1071, and the message notification displayed in the corresponding area on the display panel 1061 can be controlled by performing a touch operation on that area of the touch panel 1071.
The interface unit 108 serves as an interface through which at least one external device can be connected to the wearable device 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The interface unit 108 may be used to receive input from an external device (for example, data information, electric power, etc.) and transmit the received input to one or more elements within the wearable device 100, or to transmit data between the wearable device 100 and an external device. In one embodiment, the interface unit 108 of the wearable device 100 adopts a contact structure and connects with corresponding other devices through the contacts to realize functions such as charging and data connection. The contacts may also be waterproof.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area. The program storage area may store an operating system and application programs required by at least one function (such as a sound playing function and an image playing function); the data storage area may store data created according to the use of the device (such as audio data and a phone book). In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage component.
The processor 110 is the control center of the wearable device. It uses various interfaces and lines to connect the various parts of the entire wearable device, and performs the various functions of the wearable device and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the wearable device as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and so on, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 110.
The wearable device 100 may also include a power supply 111 (such as a battery) that supplies power to the various components. Preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so that functions such as charge management, discharge management, and power consumption management are realized through the power management system.
Although not shown in Fig. 1, the wearable device 100 may also include a Bluetooth module and the like, which will not be described in detail here. Through Bluetooth, the wearable device 100 can connect with other terminal devices to realize communication and information interaction.
Please refer to Fig. 2 to Fig. 4, which are structural schematic diagrams of an embodiment of a wearable device provided by an embodiment of the present invention. The wearable device in the embodiment of the present invention includes a flexible screen. When the wearable device is unfolded, the flexible screen is strip-shaped; when the wearable device is in the worn state, the flexible screen is bent into a ring shape. Fig. 2 and Fig. 3 show structural schematic diagrams of the wearable device when the screen is unfolded, and Fig. 4 shows a structural schematic diagram of the wearable device when the screen is bent.
Based on the above embodiments, it can be seen that if the device is a watch, a bracelet, or another wearable device, the screen of the device may or may not cover the strap area of the device. Here, this application proposes an optional embodiment in which the device is a watch, a bracelet, or a wearable device and includes a screen and a connecting part. The screen may be a flexible screen, and the connecting part may be a strap. Optionally, the screen of the device, or the display area of the screen, may partly or completely cover the strap of the device. As shown in Fig. 5, which is a hardware schematic diagram of an embodiment of a wearable device provided by an embodiment of the present application, the screen of the device extends to both sides and partly covers the strap of the device. In other embodiments, the screen of the device may also completely cover the strap of the device, and the embodiment of the present application is not limited in this respect.
The present invention provides a sound control method, which is mainly applied to wearable devices. Specifically, in the first embodiment of the sound control method of the present invention, referring to Fig. 6, the sound control method includes:
Step S10: obtain, in real time, the voice information of the environment of the wearable device based on a preset voice acquisition module, and detect, according to a preset instruction analysis set, whether the voice information is a control instruction;
The voice acquisition module is an acquisition unit built into the wearable device. In this embodiment, the wearable device obtains the voice information of its environment through continuous reception by the voice acquisition module. Since the source of the voice information is the outside world, there may be many kinds of sound sources: for example, the chatter of children, the sound of traffic on the street, or the user's current voice instruction to the wearable device. The sounds produced by all of these sources are captured by the voice acquisition module and form the voice information. Whether this voice information has any control effect on the wearable device, however, requires further judgment.
An instruction analysis set is preset in this embodiment. The instruction analysis set refers to an instruction set that developers preset in the wearable device; it contains parsing sentence-pattern templates for a large number of voice instruction items, and can be used to perform semantic comparison and screening on received voice information, keeping the voice information that matches a sentence-pattern template in the instruction analysis set. Through the preset instruction analysis set, the wearable device is able to detect whether the voice information is a control instruction. The control instruction is the voice information, filtered out by the device, that matches a parsing sentence-pattern template in the instruction analysis set.
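As an illustrative sketch only, the sentence-pattern screening described above could be modeled as below. The template patterns, action names, and the use of regular expressions over recognized text are assumptions for demonstration; the patent does not specify the template format.

```python
import re

# Hypothetical instruction analysis set: each entry pairs a parsing
# sentence-pattern template (here a regex) with the action it maps to.
INSTRUCTION_ANALYSIS_SET = [
    (re.compile(r"^(open|launch) (?P<app>.+)$"), "open_app"),
    (re.compile(r"^call (?P<contact>.+)$"), "place_call"),
    (re.compile(r"^(raise|lower) the volume$"), "adjust_volume"),
]

def match_control_instruction(voice_text: str):
    """Return (action, captured fields) if the recognized voice text
    matches a sentence-pattern template, else None (not a control
    instruction)."""
    for pattern, action in INSTRUCTION_ANALYSIS_SET:
        m = pattern.match(voice_text.strip().lower())
        if m:
            return action, m.groupdict()
    return None

print(match_control_instruction("Call Zhang San"))
print(match_control_instruction("the weather is nice"))
```

Voice information that matches no template (the second call above) is simply rejected, which corresponds to the screening behavior described in step S10.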
Further, suppose the wearable device keeps the real-time voice acquisition state on for long periods. The voice control module will then continuously consume power, and the battery capacity will drop rapidly even between charges; this places a non-negligible strain on the battery endurance of the wearable device. Therefore, how to effectively save the power of the wearable device and improve its endurance while still obtaining voice information in real time is a direction in which the current scheme can be further optimized.
Based on the above optimization direction, in another embodiment, the wearable device uses a voice acquisition state switching technique. Specifically, the step of obtaining, in real time, the voice information of the environment of the wearable device based on the preset voice acquisition module includes:
if an exclusive wake-up instruction is detected based on the preset voice acquisition module, obtaining, in real time, the voice information of the environment of the wearable device.
In this embodiment, the wearable device presets an exclusive wake-up instruction to serve as the device start-up instruction. The voice acquisition module of the wearable device has two states: the first is the start-up instruction acquisition state; the second is the voice information acquisition state. In this embodiment, while the device is in standby it stays in the start-up instruction acquisition state only, i.e. it currently receives only voice information related to the exclusive wake-up instruction; only after receiving the exclusive wake-up instruction does it proceed to collect and screen voice information, and otherwise no voice information obtained from the device's environment triggers any manipulation. If the device receives the exclusive wake-up instruction while in the start-up instruction acquisition state, it switches directly to the voice information acquisition state and performs voice information collection, screening, and comparison. For example, suppose the exclusive wake-up instruction is "Hello, my wrist device." While the wearable device is in standby and in the start-up instruction acquisition state, the voice acquisition module runs at low power; if the voiceprint of an external voice signal does not match the voiceprint features of the exclusive wake-up instruction, no action is taken. Only when a voice signal whose voiceprint matches "Hello, my wrist device." is detected does the voice acquisition module of the wearable device raise its acquisition rate and enter the voice information acquisition state. In this way, through state switching, the device can greatly reduce power consumption and thereby improve the endurance of voice control. It should be understood that the above exclusive wake-up instruction is only an example.
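The two-state switching above can be sketched minimally as follows. This is a hedged illustration under simplifying assumptions: the wake phrase is compared as text, whereas a real device would compare acoustic voiceprint features, and the state names are invented for clarity.

```python
from enum import Enum

class AcquisitionState(Enum):
    STARTUP_INSTRUCTION = 1  # low-power: listen only for the wake phrase
    VOICE_INFORMATION = 2    # full-rate: capture and screen all speech

WAKE_PHRASE = "hello, my wrist device"  # assumed wake phrase

class VoiceAcquisitionModule:
    def __init__(self):
        self.state = AcquisitionState.STARTUP_INSTRUCTION

    def on_audio(self, recognized_text: str):
        if self.state is AcquisitionState.STARTUP_INSTRUCTION:
            if recognized_text.strip().lower() == WAKE_PHRASE:
                self.state = AcquisitionState.VOICE_INFORMATION
            return None  # everything else is ignored in the low-power state
        # In the voice information acquisition state, forward speech
        # onward for screening against the instruction analysis set.
        return recognized_text

mod = VoiceAcquisitionModule()
print(mod.on_audio("open music player"))       # ignored: still asleep
print(mod.on_audio("Hello, my wrist device"))  # wake phrase detected
print(mod.on_audio("open music player"))       # now forwarded
```

Because the module does nothing but a single comparison before waking, the design keeps the always-on path cheap, which is the point of the state switch.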
Step S20: if so, obtain the interface layout of the currently displayed page, and extract the control information of each control on the interface layout;
When the device determines that the voice information is a control instruction, it locates the currently displayed page in the lit portion of the screen and obtains the interface layout of the currently displayed page. It can be understood that the currently displayed page of a wearable device usually contains only functional controls, whose positions and regions are all preset. Therefore the interface layout of the currently displayed page can be obtained, each control on the interface layout can be identified, and the control information of each control can be extracted. The control information is the attribute information of a control, including the control identifier, the control length and width, the control color, and so on.
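A sketch of the control information extracted in step S20 is shown below. The field names and the dictionary layout format are assumptions for illustration; the patent only says that identifier, length and width, and color are among the attributes.

```python
from dataclasses import dataclass

@dataclass
class ControlInfo:
    control_id: str  # identifier of the control, e.g. "widget-1"
    width: int       # control width
    height: int      # control height
    color: str       # control color

def extract_control_info(interface_layout):
    """Walk the layout of the currently displayed page and collect each
    control's attribute information."""
    return [
        ControlInfo(c["id"], c["w"], c["h"], c["color"])
        for c in interface_layout
    ]

layout = [
    {"id": "widget-1", "w": 80, "h": 40, "color": "blue"},
    {"id": "widget-2", "w": 60, "h": 30, "color": "gray"},
]
infos = extract_control_info(layout)
print([c.control_id for c in infos])  # ['widget-1', 'widget-2']
```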
Step S30: detect, according to all the control information, whether the control instruction is an effective control instruction;
After obtaining the control information, the device can detect, according to the control information of all the controls, whether the current control instruction is an effective control instruction.
Specifically, referring to Fig. 8, the currently displayed page includes a current page identifier, the control information includes a current control identifier, and the step of detecting, according to all the control information, whether the control instruction is an effective control instruction includes:
Step S31: extract the target page identifier and the target control identifier from the control instruction;
Step S32: detect whether the currently displayed page has a current page identifier that maps to the target page identifier, and whether the control information has a current control identifier that maps to the target control identifier;
In this embodiment, the current control identifier represents the id number of a control in the currently displayed page, and the control instruction must specify the target page identifier of the target page to be controlled and the target control identifier (i.e. id number) of the target control to be controlled.
To judge whether a control instruction is an effective control instruction, two aspects need to be confirmed:
1. whether the control instruction can be executed on the currently displayed page;
2. whether the control object to be controlled by the control instruction can be found on the currently displayed page.
Since the wearable device has one and only one currently displayed page, the device first needs to detect whether the currently displayed page has a current page identifier matching the target page identifier, and at the same time detect whether the control information contains a current control identifier matching the target control identifier. For example, suppose the target page identifier is point-1, the target control identifier is widget-1, the current page identifier of the currently displayed page is point-1, and the current control identifiers are widget-1, widget-2, and widget-3. Then the target page identifier maps to the current page identifier, and the target control identifier maps to one of the current control identifiers in the control information. If, however, the current page identifier is point-2, the current page identifier and the target page identifier do not map to each other, and the detection result will be that no current page identifier maps to the target page identifier.
Step S33: if the currently displayed page has a current page identifier that maps to the target page identifier, and the control information has a current control identifier that maps to the target control identifier, confirm that the control instruction is an effective control instruction.
If a current page identifier mapping to the target page identifier and a current control identifier mapping to the target control identifier both exist, it proves that the current control instruction is operable and can be carried out by the device in its current state; it is a well-defined and achievable control instruction.
Step S40: if so, execute the effective control instruction with the simulated-click script mapped in the instruction analysis set.
When the control instruction is confirmed to be an effective control instruction, it proves that the voice information in the current environment can be normally recognized and acted upon by the wearable device. In the instruction analysis set of the device, in addition to the parsing sentence-pattern templates of a large number of voice instruction items, there are operation scripts corresponding to the parsing sentence-pattern templates; these operation scripts are simulated-click scripts. A simulated-click script is an automated program that completes the operation steps of the corresponding parsing sentence-pattern template, and the parsing sentence-pattern templates correspond to effective control instructions; therefore the device executes the effective control instruction through the simulated-click script mapped in the instruction analysis set. For example, suppose the effective control instruction is "replace 'contact a' to the left of the cursor in the input box with 'contact b'". The device then maps this effective instruction to its corresponding simulated-click execution steps (i.e. the operation script), locates "contact a" to the left of the cursor according to the simulated clicks, and replaces it with the "contact b" given in the effective control instruction.
This application obtains, in real time, the voice information of the wearable device's environment based on a preset voice acquisition module, and detects, according to a preset instruction analysis set, whether the voice information is a control instruction; if so, it obtains the interface layout of the currently displayed page and extracts the control information of each control on the interface layout; it detects, according to all the control information, whether the control instruction is an effective control instruction; and if so, it executes the effective control instruction with the simulated-click script mapped in the instruction analysis set. Through the above scheme, wearable devices such as wrist devices achieve a maximally optimized manipulation experience under the user's voice control, thereby solving the technical problem that the small screens of wearable devices such as wrist devices make for an unfriendly operating experience, avoiding the inconvenience and misoperation users encounter when entering passwords or tapping small controls on a narrow screen, and in turn improving the controllability of the wearable device and the user's manipulation experience.
Further, a second embodiment of the sound control method of the present invention is proposed based on the first embodiment. In this embodiment, referring to Fig. 7, the step of detecting, according to the preset instruction analysis set, whether the voice information is a control instruction includes:
Step S11: extract all voice keywords in the voice information, and confirm the keyword attribute of each voice keyword, the keyword attributes including verb keyword and noun keyword;
In this embodiment, when voice information is detected, the device extracts all voice keywords in it. The voice keywords are the meaningful words in the voice information. For example, if the voice information is "raise the volume in the music player", the voice keywords extracted by the device are "raise", "music player", and "volume", while function words such as "the" and "in" are recognized and omitted. After obtaining the voice keywords, the device confirms the keyword attribute of each keyword, so as to further confirm the instruction information corresponding to each keyword. In this embodiment, the keyword attributes include verb keyword and noun keyword. A verb keyword represents the device state that the device needs to change, and a noun keyword represents the control object that the device needs to change. Specifying keyword attributes helps clarify the operation the voice information intends to perform.
Step S12: score the voice keywords for weight according to the preset instruction analysis set, to obtain the weight score of each voice keyword;
There may be more than one verb keyword (or noun keyword) among the voice keywords; for example, in "click the 'input' button" the verb keywords include "click" and "input", while the only noun keyword is "button". Keywords of the same class therefore need to be screened, with the result expressed in the form of weight scores.
Weight scoring is a further refinement of the voice keywords. The large number of voice instruction items in the instruction analysis set contain many nouns and verbs, implying the application level of each verb or noun. A preset instruction analysis set therefore first of all needs generality; newly developed function commands come second. For example, the weight scores of common instructions in the instruction analysis set, such as "open the music player" or "answer the text message", are set higher, while the weight scores of uncommon instructions such as "open local device status" or "switch to power-saving mode" are set relatively lower. Of course, the above example takes only frequency of use as its reference; the weight scoring of an instruction analysis set can yield the most accurate weight scores only by taking factors such as keyword order, the breadth of application scenarios, and the number of combinations together as evaluation factors.
Step S13: obtain the optimal verb keyword and the optimal noun keyword with the highest weight scores among the voice keywords;
Step S14: if a mapped instruction matching the optimal verb keyword and the optimal noun keyword is detected in the instruction analysis set, confirm the mapped instruction as the control instruction.
By scoring the weights of the voice keywords according to the preset instruction analysis set, the device can obtain the optimal verb keyword and the optimal noun keyword with the highest weight scores among the voice keywords. The optimal verb keyword and optimal noun keyword serve as the instruction information referenced by this voice information, and instruction matching is performed against the instruction analysis set to obtain the relevant mapped instruction, thereby rejecting meaningless voice keywords. Examples include the optimal verb keyword "play" with the optimal noun keyword "song xx", or the optimal verb keyword "open" with the optimal noun keyword "software B". Through the mapping and matching of these keywords, truly meaningful voice information is located, while nonsensical voice information of the kind of "incoming traffic software", "complete the music player", or "shear the system update" is not admitted as a control instruction.
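Steps S11 to S14 can be sketched as below. The weight values, word lists, and mapped instructions are illustrative assumptions; a real instruction analysis set would derive weights from the factors named above (frequency, keyword order, scenario breadth).

```python
# Assumed weights from the instruction analysis set (higher = more common).
VERB_WEIGHTS = {"open": 0.9, "click": 0.8, "raise": 0.7, "input": 0.3}
NOUN_WEIGHTS = {"volume": 0.9, "music player": 0.8, "button": 0.6}
MAPPED_INSTRUCTIONS = {
    ("raise", "volume"): "volume_up",
    ("open", "music player"): "open_music_player",
}

def best_keyword(keywords, weights):
    """Pick the keyword of one attribute class with the highest weight."""
    scored = [(weights[k], k) for k in keywords if k in weights]
    return max(scored)[1] if scored else None

def to_control_instruction(verbs, nouns):
    verb = best_keyword(verbs, VERB_WEIGHTS)   # optimal verb keyword
    noun = best_keyword(nouns, NOUN_WEIGHTS)   # optimal noun keyword
    return MAPPED_INSTRUCTIONS.get((verb, noun))  # None if nothing maps

# "raise the volume in the music player" -> verb: raise; nouns: music player, volume
print(to_control_instruction(["raise"], ["music player", "volume"]))  # volume_up
```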
Further, if more than one mapped instruction is detected, the mapped instruction occupying the smallest memory space among the mapped instructions is confirmed as the control instruction.
For example, suppose the optimal verb keyword is "open" and the optimal noun keyword is "WeChat", and there are three mapped instructions matching the optimal verb keyword and optimal noun keyword: 1. open WeChat; 2. open WeChat Moments; 3. open WeChat Wallet. In this embodiment, the device queries the memory space occupied by each of the three mapped instructions. It is readily seen that instructions 2 and 3 both include the steps of instruction 1; therefore instruction 1 occupies the smallest memory space, and the device confirms mapped instruction 1 as the control instruction. This scheme maximizes information feedback and is scalable: the user can extend the steps already executed, thereby realizing the steps of instruction 2 or instruction 3.
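The tie-break rule above reduces to a minimum over the candidates' footprints. In this sketch the "occupied memory space" of a mapped instruction is approximated by the length of its operation script, which is an assumption for illustration.

```python
# Candidate mapped instructions and their (assumed) operation scripts.
candidates = {
    "open WeChat": ["tap icon-wechat"],
    "open WeChat Moments": ["tap icon-wechat", "tap tab-moments"],
    "open WeChat Wallet": ["tap icon-wechat", "tap tab-me", "tap wallet"],
}

def pick_smallest(mapped):
    """Confirm the mapped instruction with the smallest footprint as the
    control instruction."""
    return min(mapped, key=lambda name: len(mapped[name]))

print(pick_smallest(candidates))  # open WeChat
```

Choosing the shortest script also means every longer candidate remains reachable by a follow-up instruction, which matches the scalability point made in the text.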
Further, a third embodiment of the sound control method of the present invention is proposed based on the first embodiment. In this embodiment, the step of executing the effective control instruction with the simulated-click script mapped in the instruction analysis set includes:
if the control instruction is detected to be a specific control instruction, displaying on the currently displayed page a time-progress scroll bar lasting a preset time value, and executing the effective control instruction with the simulated-click script mapped in the instruction analysis set after the preset time value has elapsed.
It can be understood that when voice control is the main manipulation mode of the wearable device, the defects of voice control are also inherited. For example, voice control requires the user to speak, and because speaking is very easy for the user, voice control carries a certain linguistic arbitrariness. Suppose the current user intends to say "call Zhang San" but, through a slip of the tongue, says "call Li Si". Without an error-correction mechanism, the user would be unable to cancel this mistaken voice control. This embodiment therefore provides a delayed execution mechanism for specific voice instructions so that the user can correct errors in time. The method is as follows: when a specific control instruction is detected, a time-progress scroll bar is displayed on the currently displayed page of the device, the scroll bar lasting a preset time value (for example, 2 seconds), so that the user has a buffer for error correction after issuing the instruction.
Specifically, suppose the current instruction is "call Zhang San". When the device senses that "call Zhang San" matches the features of the specific control instruction "call xxx", it automatically, according to the instruction mode of the specific control instruction, displays a time-progress scroll bar lasting up to 3 seconds on the currently displayed page, and directly executes the "call Zhang San" control instruction after the 3 seconds. If the user discovers within those 3 seconds that the wrong contact is being dialed, the user can control the device with the voice instruction "cancel dialing" to cancel the call.
The above example is only illustrative; specific control instructions may take various forms, such as sending a text message to xxx or turning off password-free payment for xxx.
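A minimal sketch of the delayed-execution error-correction mechanism: the specific control instruction is armed with a countdown (standing in for the on-screen time-progress scroll bar) and can be cancelled by a "cancel" voice instruction before the timer fires. The timings and API shape are assumptions for illustration.

```python
import threading

class DelayedInstruction:
    """Arm an action with a countdown; cancel() before the timeout
    corresponds to the user's 'cancel dialing' correction."""
    def __init__(self, action, delay_s=3.0):
        self.timer = threading.Timer(delay_s, action)
        self.timer.start()  # the scroll bar starts counting down

    def cancel(self):
        self.timer.cancel()  # error correction within the buffer window

executed = []
pending = DelayedInstruction(lambda: executed.append("call Zhang San"),
                             delay_s=0.1)
pending.cancel()             # the user corrects the slip of the tongue
threading.Event().wait(0.2)  # wait past the would-be deadline
print(executed)              # [] -- the mistaken call never fired
```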
Further, a fourth embodiment of the sound control method of the present invention is proposed based on the third embodiment. In this embodiment, referring to Fig. 9, the method also includes:
Step S50: if an instruction customization instruction input by the user is detected, output all instruction items in the instruction analysis set;
Step S60: obtain the selection instruction triggered on the instruction items, and obtain the specified order of the to-be-selected instruction items corresponding to the selection instruction, the to-be-selected instruction items including multiple instruction items;
The voice instruction items in the instruction analysis set may lack the user's personalized instructions, and the user can therefore configure custom settings. If the user's instruction customization instruction is detected, the device outputs all instruction items in the analysis set for the user to browse and edit. If the user selects any two or more instruction items, the specified order is obtained.
Step S70: if an edit-combine instruction corresponding to the to-be-selected instruction items is detected, combine the to-be-selected instruction items into a target instruction item in the specified order based on the edit-combine instruction, and output a renaming input box for the target instruction item;
Suppose the user clicks the edit-combine button; then, according to the edit-combine instruction, the specified instruction items are combined and edited in the specified order to generate the target combined instruction. Meanwhile, the device prompts the user to go through the renaming flow for the target combined instruction.
Step S80: take the name input in the renaming input box as the instruction name of the target instruction item, and add the target instruction item to the instruction analysis set.
The user performs custom naming, and the named target instruction item is added to the instruction analysis set with the new name as its instruction name, thereby expanding the analysis coverage of the instruction analysis set.
Specifically, suppose the user inputs the instruction customization instruction "enter custom instruction settings"; the device then outputs all instruction items. If the user selects instruction items a and b, the specified order 1-a, 2-b is obtained. If the user clicks the edit-combine instruction, instruction items a and b are combined and edited in order to generate target instruction item c (i.e. a+b), and a naming box is displayed prompting the user to rename it. Finally, the renamed target instruction item c is added to the instruction analysis set.
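Steps S50 to S80 can be sketched as a simple ordered concatenation of existing instruction items into a new named item. The item contents and the dictionary representation of the instruction analysis set are placeholders for illustration.

```python
# Assumed instruction analysis set: name -> operation steps.
instruction_analysis_set = {
    "a": ["open music player"],
    "b": ["raise volume"],
}

def combine_instructions(selected_in_order, new_name):
    """Combine instruction items in the user's specified order into a
    target instruction item and add it back to the set."""
    steps = []
    for name in selected_in_order:  # specified order, e.g. 1-a, 2-b
        steps.extend(instruction_analysis_set[name])
    instruction_analysis_set[new_name] = steps  # add target item
    return steps

combine_instructions(["a", "b"], "c")  # target item c = a + b
print(instruction_analysis_set["c"])   # ['open music player', 'raise volume']
```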
Further, a fifth embodiment of the sound control method of the present invention is proposed based on the fourth embodiment. In this embodiment, after the step of obtaining, in real time, the voice information of the wearable device's environment based on the preset voice acquisition module, the method also includes:
Step S90: perform sound quality identification on the voice information, to obtain the quality grade of the voice information;
Sound quality identification refers to voiceprint and volume detection on the voice information. In real life, the wearable device and its user may be in a noisy environment, so that the voice information obtained by the device contains a large amount of noise, reducing the accuracy of subsequent voice instruction detection and analysis. This embodiment performs sound quality identification on the obtained voice information to confirm the quality grade of the voice information.
Specifically, suppose the user's device is in a food market, and a large amount of market noise such as hawking is mixed into the user's voice information, the volume of the noise possibly even drowning out the user's own voice. The voice information collected by the device at this time will, through voiceprint detection and volume detection, exhibit a large amount of voice noise. The quality grade is a sound quality judgment level set by the device; it is a comprehensive judgment result based on the detected voiceprint features and volume features. For example, a sound-damage score is set for the voice information: for every 0.1 seconds in which disordered voiceprint features are detected in the voiceprint of the voice information, the sound-damage score increases by 1; for every 0.1 seconds in which a volume fluctuation exceeding a preset value is detected in the volume features, the sound-damage score increases by 1. The total sound-damage score is accumulated and mapped onto the quality grade decision table of sound quality identification. Quality grades from high to low are grade one, grade two, and grade three; the higher the total sound-damage score, the lower the corresponding quality grade.
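The sound-damage scoring above can be sketched as follows. The grade thresholds of the decision table and the volume threshold are assumptions for illustration; the patent does not give concrete values.

```python
def quality_grade(frames, volume_threshold=10.0):
    """frames: list of (voiceprint_disordered: bool, volume_swing: float),
    one entry per 0.1 s frame. Returns quality grade 1 (best) to 3."""
    damage = 0
    for disordered, swing in frames:
        if disordered:                  # disordered voiceprint feature: +1
            damage += 1
        if swing > volume_threshold:    # excessive volume fluctuation: +1
            damage += 1
    # Assumed quality grade decision table.
    if damage <= 2:
        return 1
    if damage <= 5:
        return 2
    return 3

quiet = [(False, 2.0)] * 10   # clean speech
noisy = [(True, 15.0)] * 10   # food-market conditions
print(quality_grade(quiet), quality_grade(noisy))  # 1 3
```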
The above embodiment is only an example.
Step S100: if the quality grade is lower than the preset quality grade, output a prompt to re-enter the voice information.
This embodiment sets a preset quality grade. If the quality grade is lower than the preset quality grade, it proves that the current sound quality grade is substandard and cannot be effectively recognized by the device; the device then outputs a prompt asking the user to re-enter the voice information.
The present invention also provides a wearable device, the wearable device including a memory, a processor, a communication bus, and a computer program stored on the memory:
the communication bus is used to realize the connection and communication between the processor and the memory;
the processor is used to execute the computer program, so as to realize the steps of each embodiment of the above sound control method.
The present invention also provides a computer-readable storage medium, the computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to realize the steps of each embodiment of the above sound control method.
The specific embodiments of the computer-readable storage medium of the present invention are basically the same as the embodiments of the above sound control method and will not be described in detail here.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware alone, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Inspired by the present invention, those of ordinary skill in the art can devise many other forms without departing from the scope protected by the purpose of the present invention and the claims, and all of these forms fall within the protection of the present invention.
Claims (10)
1. A sound control method, applied to a wearable device, characterized in that the sound control method comprises:
acquiring, in real time based on a preset voice acquisition module, voice information of the environment in which the wearable device is located, and detecting, according to a preset instruction analysis set, whether the voice information is a control instruction;
if so, acquiring an interface layout of a currently displayed page, and extracting control information of each control on the interface layout;
detecting, according to all the control information, whether the control instruction is an effective control instruction;
if so, executing the effective control instruction via a simulated-click script based on a mapping of the instruction analysis set.
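The four steps of claim 1 can be outlined as a minimal Python sketch. Every name here (the analysis-set dictionary, the page layout structure, the returned click tuple) is an assumption for illustration, not the patent's actual data model.

```python
def handle_voice(voice_text, analysis_set, page):
    """Claim-1 pipeline sketch: detect a control instruction, read the
    current interface layout, validate against its controls, and produce
    a simulated-click target. All structures are illustrative assumptions."""
    instruction = analysis_set.get(voice_text)        # step 1: is it a control instruction?
    if instruction is None:
        return None
    controls = page['controls']                       # step 2: control info of the layout
    if instruction['control'] not in controls:        # step 3: effective only if the
        return None                                   #         target control exists
    return ('click', controls[instruction['control']])  # step 4: simulated-click target
```

A command is acted on only when it both matches the instruction analysis set and names a control that actually exists on the displayed page; otherwise it is ignored.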
2. The sound control method according to claim 1, characterized in that the step of detecting, according to the preset instruction analysis set, whether the voice information is a control instruction comprises:
extracting all voice keywords in the voice information, and confirming a keyword attribute of each voice keyword, the keyword attributes including verb keywords and noun keywords;
performing weight scoring on the voice keywords according to the preset instruction analysis set to obtain a weight score of each voice keyword;
obtaining, among the voice keywords, the optimal verb keyword and the optimal noun keyword with the highest weight scores;
if a mapping instruction matching the optimal verb keyword and the optimal noun keyword is detected in the instruction analysis set, confirming the mapping instruction as the control instruction.
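Claim 2's verb/noun weight scoring might look like the following sketch. The weight table, the verb set, and the (verb, noun) mapping keys are all assumed representations, since the claim does not fix a data format.

```python
def match_instruction(keywords, weights, verbs, mapping):
    """Pick the highest-weighted verb keyword and noun keyword, then look
    up a mapping instruction keyed by that (verb, noun) pair.
    weights, verbs, and mapping are illustrative assumptions."""
    verb_kws = [k for k in keywords if k in verbs]       # keyword attribute: verb
    noun_kws = [k for k in keywords if k not in verbs]   # keyword attribute: noun
    if not verb_kws or not noun_kws:
        return None
    best_verb = max(verb_kws, key=lambda k: weights.get(k, 0))
    best_noun = max(noun_kws, key=lambda k: weights.get(k, 0))
    return mapping.get((best_verb, best_noun))           # None if no match in the set
```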
3. The sound control method according to claim 2, characterized in that the step of, if a mapping instruction matching the optimal verb keyword and the optimal noun keyword is detected in the instruction analysis set, confirming the mapping instruction as the control instruction further comprises:
if more than one mapping instruction is detected, confirming the mapping instruction that occupies the smallest memory space as the control instruction.
4. The sound control method according to claim 1, characterized in that the currently displayed page comprises a current page identifier, the control information comprises a current control identifier, and the step of detecting, according to all the control information, whether the control instruction is an effective control instruction comprises:
extracting a target page identifier and a target control identifier from the control instruction;
detecting whether a current page identifier mapped to the target page identifier exists in the currently displayed page, and whether a current control identifier mapped to the target control identifier exists in the control information;
if a current page identifier mapped to the target page identifier exists in the currently displayed page, and a current control identifier mapped to the target control identifier exists in the control information, confirming that the control instruction is an effective control instruction.
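The identifier check of claim 4 reduces to two membership tests. The field names below are placeholders; the claim only requires that the instruction's target page and target control both map onto the current page and its controls.

```python
def is_effective(instruction, current_page_id, current_control_ids):
    """Effective iff the target page identifier maps to the current page
    and the target control identifier maps to a control on that page.
    Field names are illustrative assumptions."""
    return (instruction['target_page'] == current_page_id
            and instruction['target_control'] in current_control_ids)
```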
5. The sound control method according to claim 1, characterized in that the step of executing the effective control instruction via the simulated-click script based on the mapping of the instruction analysis set comprises:
if the control instruction is detected to be a specific control instruction, displaying on the currently displayed page a time-progress scroll bar whose duration is a preset time value, and, after the preset time value elapses, executing the effective control instruction via the simulated-click script based on the mapping of the instruction analysis set.
6. The sound control method according to claim 5, characterized in that the step of acquiring, in real time based on the preset voice acquisition module, the voice information of the environment in which the wearable device is located, and detecting, according to the preset instruction analysis set, whether the voice information is a control instruction comprises:
if an exclusive wake-up instruction is detected based on the preset voice acquisition module, acquiring in real time the voice information of the environment in which the wearable device is located.
7. The sound control method according to claim 6, characterized in that the method further comprises:
if an instruction customization instruction input by a user is detected, outputting all instruction items in the instruction analysis set;
obtaining a selection instruction triggered on the instruction items, and obtaining a specified order of to-be-selected instruction items corresponding to the selection instruction, the to-be-selected instruction items comprising multiple instruction items;
if an edit-combination instruction corresponding to the to-be-selected instruction items is detected, combining the to-be-selected instruction items into a target instruction item in the specified order based on the edit-combination instruction, and outputting a renaming input box for the target instruction item;
taking the name input in the renaming input box as the instruction name of the target instruction item, and adding the target instruction item to the instruction analysis set.
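Claim 7's editing flow (select items, choose an order, combine, rename, add back to the set) can be sketched as below; the list-of-steps representation and the argument names are assumptions made for illustration.

```python
def combine_instructions(analysis_set, selected_items, specified_order, new_name):
    """Combine the selected instruction items, in the specified order, into a
    single target instruction item and register it under its new name.
    The dict-of-lists representation is an illustrative assumption."""
    target_item = [selected_items[i] for i in specified_order]  # apply the order
    analysis_set[new_name] = target_item                        # renamed and added
    return analysis_set
```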
8. The sound control method according to any one of claims 1 to 7, characterized in that, after the step of acquiring, in real time based on the preset voice acquisition module, the voice information of the environment in which the wearable device is located, and detecting, according to the preset instruction analysis set, whether the voice information is a control instruction, the method further comprises:
performing sound-quality recognition on the voice information to obtain a quality level of the voice information;
if the quality level is lower than a preset quality level, outputting prompt information for re-entering the voice information.
9. A wearable device, characterized in that the wearable device comprises:
a memory, a processor, and a computer program stored on the memory and executable on the processor;
wherein the computer program, when executed by the processor, implements the steps of the sound control method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the sound control method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910478950.8A CN110197662A (en) | 2019-05-31 | 2019-05-31 | Sound control method, wearable device and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110197662A true CN110197662A (en) | 2019-09-03 |
Family
ID=67753878
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910478950.8A Pending CN110197662A (en) | 2019-05-31 | 2019-05-31 | Sound control method, wearable device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110197662A (en) |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110970022B (en) * | 2019-10-14 | 2022-06-10 | 珠海格力电器股份有限公司 | Terminal control method, device, equipment and readable medium |
CN110970022A (en) * | 2019-10-14 | 2020-04-07 | 珠海格力电器股份有限公司 | Terminal control method, device, equipment and readable medium |
CN110865755A (en) * | 2019-11-11 | 2020-03-06 | 珠海格力电器股份有限公司 | Voice control method and device of terminal, storage medium and terminal |
CN110968362A (en) * | 2019-11-18 | 2020-04-07 | 北京小米移动软件有限公司 | Application running method and device and storage medium |
CN110968362B (en) * | 2019-11-18 | 2023-09-26 | 北京小米移动软件有限公司 | Application running method, device and storage medium |
CN111161730A (en) * | 2019-12-27 | 2020-05-15 | 中国联合网络通信集团有限公司 | Voice instruction matching method, device, equipment and storage medium |
CN111161730B (en) * | 2019-12-27 | 2022-10-04 | 中国联合网络通信集团有限公司 | Voice instruction matching method, device, equipment and storage medium |
CN111583956A (en) * | 2020-04-30 | 2020-08-25 | 联想(北京)有限公司 | Voice processing method and device |
CN111583956B (en) * | 2020-04-30 | 2024-03-26 | 联想(北京)有限公司 | Voice processing method and device |
CN112527412A (en) * | 2020-12-23 | 2021-03-19 | 青岛海信移动通信技术股份有限公司 | Electronic device and information recording method |
CN112860013B (en) * | 2021-01-29 | 2024-02-09 | 亮风台(北京)信息科技有限公司 | Method and device for data processing through wearable display device |
CN112860013A (en) * | 2021-01-29 | 2021-05-28 | 亮风台(北京)信息科技有限公司 | Method and device for processing data through wearable display device |
US20220308828A1 (en) * | 2021-03-23 | 2022-09-29 | Microsoft Technology Licensing, Llc | Voice assistant-enabled client application with user view context |
US11789696B2 (en) * | 2021-03-23 | 2023-10-17 | Microsoft Technology Licensing, Llc | Voice assistant-enabled client application with user view context |
US11972095B2 (en) | 2021-03-23 | 2024-04-30 | Microsoft Technology Licensing, Llc | Voice assistant-enabled client application with user view context and multi-modal input support |
CN113539254A (en) * | 2021-06-02 | 2021-10-22 | 惠州市德赛西威汽车电子股份有限公司 | Voice interaction method and system based on action engine and storage medium |
CN113450778A (en) * | 2021-06-09 | 2021-09-28 | 惠州市德赛西威汽车电子股份有限公司 | Training method based on voice interaction control and storage medium |
CN113419627A (en) * | 2021-06-18 | 2021-09-21 | Oppo广东移动通信有限公司 | Equipment control method, device and storage medium |
CN113488042A (en) * | 2021-06-29 | 2021-10-08 | 荣耀终端有限公司 | Voice control method and electronic equipment |
CN113409788A (en) * | 2021-07-15 | 2021-09-17 | 深圳市同行者科技有限公司 | Voice wake-up method, system, device and storage medium |
CN113555019A (en) * | 2021-07-21 | 2021-10-26 | 维沃移动通信(杭州)有限公司 | Voice control method and device and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110197662A (en) | Sound control method, wearable device and computer readable storage medium | |
CN103578474B (en) | A kind of sound control method, device and equipment | |
CN108227726A (en) | UAV Flight Control method, apparatus, terminal and storage medium | |
CN109947327A (en) | A kind of interface inspection method, wearable device and computer readable storage medium | |
CN108898552A (en) | Picture joining method, double screen terminal and computer readable storage medium | |
CN110187771A (en) | Mid-air gesture interaction method and device, wearable device and computer storage medium | |
CN110096195A (en) | Motion icon display methods, wearable device and computer readable storage medium | |
CN105550316B (en) | The method for pushing and device of audio list | |
CN109618218A (en) | A kind of method for processing video frequency and mobile terminal | |
CN109145088A (en) | A kind of searching method and private tutor's machine based on private tutor's machine | |
CN108196781A (en) | The display methods and mobile terminal at interface | |
CN110308883A (en) | Screen splitting method, wearable device and computer readable storage medium | |
CN110175008A (en) | Method for operating a terminal, wearable device and computer readable storage medium | |
CN110013260A (en) | A kind of mood theme regulation method, equipment and computer readable storage medium | |
CN109873901A (en) | A kind of screenshot method for managing resource and terminal, computer readable storage medium | |
CN109918014A (en) | Page display method, wearable device and computer readable storage medium | |
CN110069774A (en) | Text handling method, device and terminal | |
CN110262748A (en) | Wearable device control method, wearable device and computer readable storage medium | |
CN110174935A (en) | Screen-off control method, terminal and computer readable storage medium | |
CN110175259A (en) | Image display method, wearable device and computer readable storage medium | |
CN110069200A (en) | Wearable device input control method, wearable device and storage medium | |
CN110198411A (en) | Depth of field control method, equipment and computer readable storage medium during a kind of video capture | |
CN110209268A (en) | Wearable device control method, wearable device and computer readable storage medium | |
CN110032308A (en) | A kind of page display method, terminal and computer readable storage medium | |
CN109947345A (en) | A kind of fingerprint identification method and terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||