CN104423925B - Information processing method and electronic device - Google Patents

Information processing method and electronic device

Info

Publication number
CN104423925B
CN104423925B
Authority
CN
China
Prior art keywords
input operation
input
speech recognition
recognition engine
operating conditions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310376885.0A
Other languages
Chinese (zh)
Other versions
CN104423925A (en)
Inventor
贾旭
张渊毅
彭世峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201310376885.0A priority Critical patent/CN104423925B/en
Publication of CN104423925A publication Critical patent/CN104423925A/en
Application granted granted Critical
Publication of CN104423925B publication Critical patent/CN104423925B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485Scrolling or panning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides an information processing method and an electronic device. The electronic device includes a display unit and a speech recognition engine. The method includes: displaying M objects including a first object, the first object being an identifier of the speech recognition engine; obtaining an input operation; determining whether the input operation meets a predetermined condition and, when it does, treating the input operation as a first input operation and controlling the speech recognition engine to switch from a low power consumption state to a normal working state; obtaining a voice input when the speech recognition engine is in a sound pickup state of the normal working state; recognizing the voice input based on parameter information of a second object when the speech recognition engine is in a recognition state; and outputting a recognition result when the speech recognition engine is in a result feedback state. With the present application, the user only needs to perform one simple input operation to make the electronic device start its sound pickup function, and the electronic device can quickly feed back a result for the voice input; the operation is therefore simple and the user experience is improved.

Description

Information processing method and electronic device
Technical field
The present invention relates to the field of information processing technology, and more particularly to an information processing method and an electronic device.
Background technique
On mobile phones and other smart platforms, voice has its own advantages as an interaction mode, for example for entering information, retrieving information, and executing deep-level commands. In the prior art, a typical example of voice interaction is the voice assistant Siri: with Siri, a user can have the phone read text messages aloud, search for contacts, check the weather, get restaurant recommendations, and set alarms by voice.
However, the inventors found in the course of making the invention that voice interaction with Siri takes a relatively long time and the process is cumbersome. Taking contact search as an example, the user must first launch Siri and enter Siri's dedicated interface before the voice command can be received and processed. In addition, launching Siri places a heavier burden both on the system and on the user than simply opening the contacts list, so users prefer to search for contacts through the contacts application rather than through Siri.
Summary of the invention
In view of this, the present invention provides an information processing method and an electronic device to solve the prior-art problems that voice interaction takes a long time and the process is cumbersome. The technical solution is as follows:
An information processing method is applied to an electronic device, the electronic device including a display unit and a speech recognition engine, the speech recognition engine having a low power consumption state and a normal working state. The method includes:
displaying M objects on the display unit, wherein the M objects include a first object, the first object being an identifier of the speech recognition engine;
obtaining an input operation;
determining whether the input operation meets a predetermined condition;
when the input operation meets the predetermined condition, treating the input operation as a first input operation and controlling the speech recognition engine to switch from the low power consumption state to the normal working state, wherein the first input operation can determine the first object and a second object, and the second object belongs to the M objects;
obtaining a voice input when the speech recognition engine is in a sound pickup state of the normal working state;
recognizing the voice input based on parameter information of the second object when the speech recognition engine is in a recognition state of the normal working state;
outputting a recognition result when the speech recognition engine is in a result feedback state of the normal working state.
Optionally, when the input operation does not meet the predetermined condition, the input operation is treated as a second input operation and the speech recognition engine is controlled to switch from the low power consumption state to the normal working state;
the voice input is obtained when the speech recognition engine is in the sound pickup state of the normal working state;
the voice input is recognized when the speech recognition engine is in the recognition state of the normal working state;
a recognition result is output when the speech recognition engine is in the result feedback state of the normal working state.
Determining whether the input operation meets the predetermined condition includes:
determining the first object according to the input operation;
when the first object is determined, prompting N objects among the M objects, the parameter information of each of the N objects being able to act on the recognition state of the normal working state of the speech recognition engine.
The predetermined condition may be that the objects determined by the input operation include the first object and a second object, the second object being one of the M objects determined by the end point of the input operation;
and/or
the predetermined condition may be that the objects determined by the input operation include the first object and a second object, the second object being one of the N objects determined by the end point of the input operation.
The input operation may be an operation with two control input points;
determining whether the input operation meets the predetermined condition then includes:
determining the first object when a first control input point of the two control input points meets a predefined rule, and determining the second object when a second control input point of the two control input points meets a predefined rule;
when the input operation ends, determining the first object and the second object corresponding to the input operation;
alternatively,
the input operation may be a sliding input operation;
determining whether the input operation meets the predetermined condition then includes:
determining the first object when the track points of the input operation meet a predefined rule;
determining the second object when the end point of the input operation meets a predefined rule;
when the input operation ends, determining the first object and the second object corresponding to the input operation;
alternatively,
the electronic device may have a touch sensing unit whose touch area is divided into a first area and a second area, the second area coinciding with the display unit, and the input operation is a sliding operation;
determining whether the input operation meets the predetermined condition then includes:
when the starting point of the input operation is in the first area and the input operation moves from the first area to the dividing line between the first area and the second area, determining the first object and displaying the first object, so that the input operation controls the first object to move within the second area;
after the input operation has moved from the first area into the second area, determining the second object if the end point of the input operation meets a predefined rule;
when the input operation ends, determining the first object and the second object corresponding to the input operation.
An electronic device includes a display unit and a speech recognition engine, the speech recognition engine having a low power consumption state and a normal working state. The electronic device includes:
a display module, configured to display M objects on the display unit, wherein the M objects include a first object and the first object is an identifier of the speech recognition engine;
a first obtaining module, configured to obtain an input operation;
a determining module, configured to determine whether the input operation meets a predetermined condition;
a first control module, configured to, when the input operation meets the predetermined condition and the input operation is treated as a first input operation, control the speech recognition engine to switch from the low power consumption state to the normal working state, wherein the first input operation can determine the first object and a second object, and the second object belongs to the M objects;
a second obtaining module, configured to obtain a voice input when the speech recognition engine is in a sound pickup state of the normal working state;
a first recognition module, configured to recognize the voice input based on parameter information of the second object when the speech recognition engine is in a recognition state of the normal working state;
a first output module, configured to output a recognition result when the speech recognition engine is in a result feedback state of the normal working state.
Optionally, the electronic device further includes:
a second control module, configured to, when the input operation does not meet the predetermined condition and the input operation is treated as a second input operation, control the speech recognition engine to switch from the low power consumption state to the normal working state;
a third obtaining module, configured to obtain the voice input when the speech recognition engine is in the sound pickup state of the normal working state;
a second recognition module, configured to recognize the voice input when the speech recognition engine is in the recognition state of the normal working state;
a second output module, configured to output a recognition result when the speech recognition engine is in the result feedback state of the normal working state.
The determining module may include:
a first determining submodule, configured to determine the first object according to the input operation;
a prompting submodule, configured to, when the first object is determined, prompt N objects among the M objects, the parameter information of each of the N objects being able to act on the recognition state of the normal working state of the speech recognition engine.
The predetermined condition may be that the objects determined by the input operation include the first object and a second object, the second object being one of the M objects determined by the end point of the input operation;
and/or the predetermined condition may be that the objects determined by the input operation include the first object and a second object, the second object being one of the N objects determined by the end point of the input operation.
The input operation may be an operation with two control input points;
the determining module then includes:
a second determining submodule, configured to determine the first object when a first control input point of the two control input points meets a predefined rule;
a third determining submodule, configured to determine the second object when a second control input point of the two control input points meets a predefined rule;
a fourth determining submodule, configured to, when the input operation ends, determine the first object and the second object corresponding to the input operation;
alternatively,
the input operation may be a sliding input operation;
the determining module then includes:
a fifth determining submodule, configured to determine the first object when the track points of the input operation meet a predefined rule;
a sixth determining submodule, configured to determine the second object when the end point of the input operation meets a predefined rule;
a seventh determining submodule, configured to, when the input operation ends, determine the first object and the second object corresponding to the input operation;
alternatively,
the electronic device has a touch sensing unit whose touch area is divided into a first area and a second area, the second area coinciding with the display unit, and the input operation is a sliding input operation;
the determining module then includes:
an eighth determining submodule, configured to, when the starting point of the input operation is in the first area and the input operation moves from the first area to the dividing line between the first area and the second area, determine the first object and display the first object, so that the input operation controls the first object to move within the second area;
a ninth determining submodule, configured to, after the input operation has moved from the first area into the second area, determine the second object if the end point of the input operation meets a predefined rule;
a tenth determining submodule, configured to, when the input operation ends, determine the first object and the second object corresponding to the input operation.
The above technical solution has the following beneficial effects:
The information processing method and electronic device provided by the present invention display M objects on the display unit, the M objects including a first object; obtain an input operation; determine whether the input operation meets a predetermined condition; when the input operation meets the predetermined condition, treat the input operation as a first input operation and control the speech recognition engine to switch from the low power consumption state to the normal working state; obtain a voice input when the speech recognition engine is in the sound pickup state of the normal working state; recognize the voice input based on parameter information of the second object when the speech recognition engine is in the recognition state of the normal working state; and output a recognition result when the speech recognition engine is in the result feedback state of the normal working state. With the information processing method and electronic device provided by the present invention, the user only needs to perform one simple input operation to make the electronic device start its sound pickup function, and the electronic device can quickly feed back a result for the voice input; the operation is therefore simple and the user experience is improved.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a schematic flowchart of a first information processing method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a second information processing method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a sliding input operation in the second information processing method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a sliding input operation in the second information processing method according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a sliding input operation in the second information processing method according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a sliding input operation in the second information processing method according to an embodiment of the present invention;
Fig. 7 is a schematic flowchart of a third information processing method according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of an operation with two control input points in the third information processing method according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of an operation with two control input points in the third information processing method according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of an operation with two control input points in the third information processing method according to an embodiment of the present invention;
Fig. 11 is a schematic flowchart of a fourth information processing method according to an embodiment of the present invention;
Fig. 12 is a schematic diagram of a sliding input operation in the fourth information processing method according to an embodiment of the present invention;
Fig. 13 is a schematic diagram of a sliding input operation in the fourth information processing method according to an embodiment of the present invention;
Fig. 14 is a schematic diagram of a sliding input operation in the fourth information processing method according to an embodiment of the present invention;
Fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, which is a schematic flowchart of an information processing method according to an embodiment of the present invention, the method is applied to an electronic device that includes a display unit and a speech recognition engine, the speech recognition engine having a low power consumption state and a normal working state. The method provided by this embodiment of the present invention includes:
Step S101: display M objects on the display unit, wherein the M objects include a first object and the first object is an identifier of the speech recognition engine.
Step S102: obtain an input operation.
Step S103: determine whether the input operation meets a predetermined condition.
Step S104: when the input operation meets the predetermined condition, treat the input operation as a first input operation and control the speech recognition engine to switch from the low power consumption state to the normal working state, wherein the first input operation can determine the first object and a second object, and the second object belongs to the M objects.
Step S105: when the speech recognition engine is in a sound pickup state of the normal working state, obtain a voice input.
Step S106: when the speech recognition engine is in a recognition state of the normal working state, recognize the voice input based on parameter information of the second object.
Step S107: when the speech recognition engine is in a result feedback state of the normal working state, output a recognition result.
In the information processing method provided by this embodiment of the present invention, M objects including a first object are displayed on the display unit; an input operation is obtained; whether the input operation meets a predetermined condition is determined; when the input operation meets the predetermined condition, the input operation is treated as a first input operation and the speech recognition engine is controlled to switch from the low power consumption state to the normal working state; a voice input is obtained when the speech recognition engine is in the sound pickup state of the normal working state; the voice input is recognized based on parameter information of the second object when the speech recognition engine is in the recognition state of the normal working state; and a recognition result is output when the speech recognition engine is in the result feedback state of the normal working state. With this method, the user only needs to perform one simple input operation to make the electronic device start its sound pickup function, and the electronic device can quickly feed back a result for the voice input; the operation is therefore simple and the user experience is improved.
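To make the flow of Fig. 1 concrete, the following is a minimal Python sketch of the engine's states and of the decision made in steps S103-S104. All class and function names are illustrative assumptions, not part of the claimed method.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class EngineState(Enum):
    LOW_POWER = auto()        # low power consumption state
    SOUND_PICKUP = auto()     # normal working state: listening for voice input
    RECOGNITION = auto()      # normal working state: recognizing the voice input
    RESULT_FEEDBACK = auto()  # normal working state: outputting the result

@dataclass
class InputOperation:
    first_object: Optional[str]   # identifier of the speech recognition engine, if determined
    second_object: Optional[str]  # one of the M objects, if determined

class SpeechRecognitionEngine:
    def __init__(self) -> None:
        self.state = EngineState.LOW_POWER

    def wake(self) -> None:
        # Step S104: switch from the low power consumption state to the normal working state.
        self.state = EngineState.SOUND_PICKUP

def handle_input(engine: SpeechRecognitionEngine, op: InputOperation) -> Optional[str]:
    engine.wake()  # in both cases the engine leaves the low power consumption state
    # Step S103: the predetermined condition is met when the input operation
    # determines both the first object and a second object.
    if op.first_object is not None and op.second_object is not None:
        return op.second_object   # first input operation: its parameter info scopes recognition (S106)
    return None                   # second input operation: recognition runs over all information sets

# Example: dragging the engine identifier onto the music player icon.
engine = SpeechRecognitionEngine()
scope = handle_input(engine, InputOperation("speech_engine_icon", "music_player_icon"))
assert engine.state is EngineState.SOUND_PICKUP and scope == "music_player_icon"
```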
Referring to Fig. 2, which is a schematic flowchart of another information processing method according to an embodiment of the present invention, the method is applied to an electronic device that includes a touch sensing unit, a display unit and a speech recognition engine. The touch area of the touch sensing unit is divided into a first area and a second area, the second area coincides with the display unit, and the speech recognition engine has a low power consumption state and a normal working state. The method provided by this embodiment of the present invention includes:
Step S201: display M objects on the display unit, wherein the M objects include a first object and the first object is an identifier of the speech recognition engine.
Step S202: obtain a sliding input operation.
Step S203: determine whether the sliding input operation meets a predetermined condition.
In this embodiment, the M objects displayed on the display unit (other than the first object) may all be interactable objects, the parameter information of each interactable object being able to act on the recognition state of the normal working state of the speech recognition engine. Of course, the M objects displayed on the display unit (other than the first object) may also include both interactable objects and non-interactable objects. When the M objects include non-interactable objects and the first object is determined according to the input operation, N objects among the M objects may be prompted, the parameter information of each of the N objects being able to act on the recognition state of the normal working state of the speech recognition engine. That is, when the first object is determined, the N interactable objects may be determined from the M objects and the determined interactable objects may be prompted, for example by displaying a marker on each interactable object, as illustrated by the sketch below.
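As a sketch of the prompting step just described, the snippet below assumes each displayed object carries a flag indicating whether its parameter information can act on the recognition state; the data model and the print call standing in for drawing a marker are assumptions.

```python
from dataclasses import dataclass

@dataclass
class DisplayedObject:
    name: str
    interactable: bool  # True if its parameter information can act on the recognition state

def prompt_interactable_objects(m_objects: list[DisplayedObject]) -> list[DisplayedObject]:
    # Once the first object (the engine identifier) is determined, pick out the
    # N interactable objects among the M displayed objects and prompt each one,
    # e.g. by drawing a marker on it.
    n_objects = [obj for obj in m_objects if obj.interactable]
    for obj in n_objects:
        print(f"show prompt marker on {obj.name}")  # stand-in for the on-screen marker
    return n_objects

m_objects = [DisplayedObject("music_player_icon", True),
             DisplayedObject("wallpaper_widget", False),
             DisplayedObject("contacts_icon", True)]
print([o.name for o in prompt_interactable_objects(m_objects)])
```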
Determining whether the sliding input operation meets the predetermined condition specifically means determining whether the objects determined by the input operation include the first object and a second object, wherein the second object is an interactable object among the M objects determined by the end point of the input operation, the parameter information of an interactable object being able to act on the recognition state of the normal working state of the speech recognition engine. Specifically, when the M objects displayed on the display unit (other than the first object) are all interactable objects, the second object is one of the M objects (other than the first object) determined by the end point of the input operation; when the M objects displayed on the display unit (other than the first object) include both interactable and non-interactable objects, the second object is one of the interactable objects among the M objects determined by the end point of the input operation.
Further, whether the objects determined by the input operation include the first object and the second object is judged as follows:
When the starting point of the sliding input operation is in the first area and the sliding input operation moves from the first area to the dividing line between the first area and the second area, the first object is determined and displayed, so that the sliding input operation controls the first object to move within the second area. Taking a mobile phone as the electronic device and referring to Fig. 3, the first area may be the bezel region below the phone's main screen and the second area may be the main screen region. The sliding input operation is a finger sliding from the bezel region below the main screen into the main screen region; when the slide reaches the dividing line between the bezel region and the main screen region, the first object, namely the identifier of the speech recognition engine, is displayed, and during the slide the finger drags the identifier of the speech recognition engine around the main screen region.
After the sliding input operation has moved from the first area into the second area, the second object is determined if the end point of the input operation meets a predefined rule, the predefined rule being: the object at the end point of the input operation is an interactable object, and the occlusion ratio between that object and the first object is greater than a preset ratio. Specifically, if the M objects (other than the first object) are all interactable objects, the object at the end point of the sliding input operation can be determined to be an interactable object; if the M objects include N interactable objects, the object at the end point of the sliding input operation is determined to be an interactable object when it is one of the N objects. After the object at the end point of the sliding input operation is determined to be an interactable object, the occlusion ratio between the first object and that interactable object is obtained. Since the sliding input operation controls the movement of the first object, when the sliding input operation ends the first object stops at the end point of the operation; at this point, the occlusion ratio between the first object and the interactable object at the end point can be obtained and compared with a preset ratio value. If the occlusion ratio is greater than the preset ratio value, the object at the end point of the sliding input operation is determined to be the second object.
When the input operation ends, the first object and the second object can thus be determined: the end point of the sliding input operation involves two objects, the first object, which moves with the sliding input operation, and the other object at the end point, which is the second object.
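Putting the checks above together, here is a rough Python sketch of the sliding-input condition: the gesture must start in the first area, cross the dividing line into the second area while dragging the engine identifier, and end on an interactable object whose occlusion by the identifier exceeds a preset ratio. The rectangle geometry, the 0.5 default threshold, and all names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def overlap_area(self, other: "Rect") -> float:
        dx = min(self.x + self.w, other.x + other.w) - max(self.x, other.x)
        dy = min(self.y + self.h, other.y + other.h) - max(self.y, other.y)
        return dx * dy if dx > 0 and dy > 0 else 0.0

def occlusion_ratio(first_obj: Rect, target: Rect) -> float:
    # Fraction of the target object covered by the dragged engine identifier.
    return first_obj.overlap_area(target) / (target.w * target.h)

def slide_meets_condition(start_in_first_area: bool,
                          crossed_dividing_line: bool,
                          end_target: Optional[Rect],
                          end_target_interactable: bool,
                          first_obj_rect: Rect,
                          preset_ratio: float = 0.5) -> bool:
    if not (start_in_first_area and crossed_dividing_line):
        return False   # the first object is never determined and displayed
    if end_target is None or not end_target_interactable:
        return False   # slide ended on a blank area or a non-interactable object
    return occlusion_ratio(first_obj_rect, end_target) > preset_ratio

# Example: the dragged identifier covers three quarters of the music player icon.
icon = Rect(100, 100, 80, 80)
identifier = Rect(120, 100, 80, 80)
print(slide_meets_condition(True, True, icon, True, identifier))  # True
```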
Step S204: when the input operation meets the predetermined condition, treat the input operation as a first input operation and control the speech recognition engine to switch from the low power consumption state to the normal working state, wherein the first input operation can determine the first object and a second object, and the second object belongs to the M objects.
In this embodiment, the second object may be, but is not limited to, an application icon, a retrieval progress bar, a text input box, or a contact. Again taking a mobile phone as the electronic device and the icon of a music player program as the second object, referring to Fig. 4, when the user slides a finger from the bezel region below the main screen into the main screen region, the identifier of the speech recognition engine appears at the bottom of the main screen region; the finger then drags the identifier of the speech recognition engine onto the icon of the music player program, at which point the speech recognition engine switches from the low power consumption state to the normal working state.
Step S205: when the speech recognition engine is in the sound pickup state of the normal working state, obtain a voice input.
While the speech recognition engine switches from the low power consumption state to the sound pickup state of the normal working state, a sound pickup interface may be displayed on the display unit to prompt the user to provide voice input.
Step S206: when the speech recognition engine is in the recognition state of the normal working state, recognize the voice input based on parameter information of the second object.
Since the second object is an interactable object, its parameter information can act on the recognition state of the normal working state of the speech recognition engine. When the speech recognition engine is in the recognition state, the voice input can be recognized based on the parameter information of the second object, specifically by recognizing the voice input within the information set corresponding to the second object.
Step S207: when the speech recognition engine is in the result feedback state of the normal working state, output a recognition result.
In this embodiment, outputting the recognition result may be, but is not limited to, displaying the recognized information and/or controlling an application program to perform a corresponding operation.
Again taking the icon of the music player program as the second object: after receiving the voice input, the speech recognition engine enters the recognition state of the normal working state and recognizes the voice input based on the information set corresponding to the music player icon. For example, assuming the voice input is "play the song Invisible Wings", the song "Invisible Wings" is searched for in the information set corresponding to the music player icon; once it is found, the recognition result is output, that is, the music player program is started and the song "Invisible Wings" is played. Taking the icon of the contacts list as another example: while the speech recognition engine enters the sound pickup state of the normal working state, it receives the voice input; if the voice input is "look up Li Ming's telephone number", the speech recognition engine switches from the sound pickup state of the normal working state to the recognition state, searches for Li Ming's telephone number in the information set corresponding to the contacts list icon, and, once the number is found, displays Li Ming's telephone number on the display unit.
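The scoped recognition described above can be pictured as searching only the information set associated with the second object, falling back to every information set when no second object was determined. The snippet below is a simplified sketch with made-up data; real recognition would of course work on audio rather than text.

```python
from typing import Optional

# Hypothetical information sets keyed by the object they belong to (dummy entries).
INFORMATION_SETS = {
    "music_player_icon": ["Invisible Wings", "Moonlight Sonata"],
    "contacts_icon": ["Li Ming: 138-xxxx-xxxx", "Zhang Wei: 139-xxxx-xxxx"],
}

def recognize(voice_text: str, second_object: Optional[str]) -> list[str]:
    # Recognition state: match the voice input against the information set of
    # the second object if one was determined, otherwise against all sets.
    if second_object is not None:
        candidate_sets = [INFORMATION_SETS.get(second_object, [])]
    else:
        candidate_sets = list(INFORMATION_SETS.values())
    return [entry for s in candidate_sets for entry in s
            if voice_text.lower() in entry.lower()]

print(recognize("invisible wings", "music_player_icon"))  # scoped: ['Invisible Wings']
print(recognize("li ming", None))                         # searched across all sets
```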
The above process describes information processing when the sliding input operation meets the predetermined condition; information processing when the sliding input operation does not meet the predetermined condition is given below:
Step S208: when the sliding input operation does not meet the predetermined condition, treat the input operation as a second input operation and control the speech recognition engine to switch from the low power consumption state to the normal working state.
Specifically, the sliding input operation not meeting the predetermined condition includes: the objects at the end point of the sliding input operation include only the first object, or the object at the end point of the sliding input operation is a non-interactable object among the M objects. For example, the user drags the identifier of the speech recognition engine to a blank area with a finger, or drags the identifier of the speech recognition engine onto a non-interactable object.
Step S209: when the speech recognition engine is in the sound pickup state of the normal working state, obtain a voice input.
Step S210: when the speech recognition engine is in the recognition state of the normal working state, recognize the voice input.
Since the objects at the end point of the sliding input operation do not include an interactable object, the electronic device cannot perform recognition based on the parameter information of a specific object; in this case, when the speech recognition engine is in the recognition state of the normal working state, the voice input has to be recognized within all of the information sets.
Step S211: when the speech recognition engine is in the result feedback state of the normal working state, output a recognition result.
Again taking a mobile phone as the electronic device: when the sliding input operation enters the main screen region, it controls the identifier of the speech recognition engine to move within the main screen region. When the sliding input operation ends, the object at its end point is detected. If the only object at the end point is the identifier of the speech recognition engine, the finger has finally stopped at a blank area, referring to Fig. 5; in this case, the speech recognition engine can be controlled to switch from the low power consumption state to the normal working state, a voice input is obtained when the speech recognition engine is in the sound pickup state of the normal working state, and the voice input is recognized against all information sets when the speech recognition engine is in the recognition state of the normal working state. Likewise, when the sliding input operation ends, the object at its end point is detected; if that object is a non-interactable object, the finger has finally stopped on a non-interactable object, referring to Fig. 6. Since the parameter information of this non-interactable object cannot act on the recognition state of the normal working state of the speech recognition engine, recognition cannot be based on this object, and the voice input is recognized against all information sets.
It should be noted that when the input operation meets the predetermined condition it is determined to be a first input operation, and the first input operation can determine a second object; since the second object is an interactable object, the voice input can be recognized within the information set corresponding to the second object. When the input operation does not meet the predetermined condition, it is determined to be a second input operation, which does not involve an interactable object, so the voice input has to be recognized within all information sets. The former recognizes within the information set corresponding to the second object, while the latter recognizes within all information sets; comparing the two, the former is more targeted, the recognition scope is smaller, and both recognition efficiency and recognition accuracy are higher.
In the information processing method provided by this embodiment of the present invention, M objects including a first object are displayed on the display unit; an input operation is obtained; whether the input operation meets a predetermined condition is determined; when the input operation meets the predetermined condition, the input operation is treated as a first input operation and the speech recognition engine is controlled to switch from the low power consumption state to the normal working state; a voice input is obtained when the speech recognition engine is in the sound pickup state of the normal working state; the voice input is recognized based on parameter information of the second object when the speech recognition engine is in the recognition state of the normal working state; and a recognition result is output when the speech recognition engine is in the result feedback state of the normal working state. With this method, the user only needs to perform one simple input operation to make the electronic device start its sound pickup function, and the electronic device can quickly feed back a result for the voice input; the operation is therefore simple and the user experience is improved.
Referring to Fig. 7, which is a schematic flowchart of another information processing method according to an embodiment of the present invention, the method is applied to an electronic device that includes a display unit and a speech recognition engine, the speech recognition engine having a low power consumption state and a normal working state. The method provided by this embodiment of the present invention includes:
Step S301: display M objects on the display unit, wherein the M objects include a first object and the first object is an identifier of the speech recognition engine.
Step S302: obtain an operation with two control input points.
Step S303: determine whether the operation with two control input points meets a predetermined condition.
In this embodiment, the M objects displayed on the display unit (other than the first object) may all be interactable objects, the parameter information of each interactable object being able to act on the recognition state of the normal working state of the speech recognition engine. Of course, the M objects displayed on the display unit (other than the first object) may also include both interactable objects and non-interactable objects. When the M objects include non-interactable objects and the first object is determined according to the input operation, N objects among the M objects may be prompted, the parameter information of each of the N objects being able to act on the recognition state of the normal working state of the speech recognition engine. That is, when the first object is determined, the N interactable objects may be determined from the M objects and the determined interactable objects may be prompted, for example by displaying a marker on each interactable object.
Determining whether the operation with two control input points meets the predetermined condition specifically means determining whether the objects determined by the operation include the first object and a second object, wherein the second object is the object at one of the two control input points, that object being an interactable object among the M objects whose parameter information can act on the recognition state of the normal working state of the speech recognition engine. Specifically, when the M objects displayed on the display unit (other than the first object) are all interactable objects, the second object is one of the M objects (other than the first object); when the M objects displayed on the display unit (other than the first object) include both interactable and non-interactable objects, the second object is one of the interactable objects among the M objects.
Further, whether the objects determined by the operation with two control input points include the first object and the second object is judged as follows: when a first control input point of the two control input points meets a predefined rule, the first object is determined; when a second control input point of the two control input points meets a predefined rule, the second object is determined; and when the input operation ends, the first object and the second object corresponding to the input operation are determined. The predefined rule is specifically that the operation duration of the control input point is greater than a preset duration.
Specifically, determining the first object when the first control input point meets the predefined rule includes: obtaining the operation duration of the first control input point and judging whether it is greater than a first preset duration; if so, the object at the first control input point is determined to be the first object. Likewise, determining the second object when the second control input point meets the predefined rule includes: obtaining the operation duration of the second control input point and judging whether it is greater than a second preset duration; if the operation duration of the second control input point is greater than the second preset duration and the object at the second control input point is an interactable object, the object at the second control input point is determined to be the second object. It should be noted that, when the operation with two control input points ends, only if the operation duration of the first control point is greater than the first preset duration and the operation duration of the second control point is greater than the second preset duration can the object at the first control input point be determined to be the first object and the object at the second control input point be determined to be the second object. The first preset duration may be set equal to the second preset duration, or the first preset duration may be set greater than the second preset duration.
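A brief sketch of the duration rule for the two control input points follows, assuming the touch system reports how long each point has been held; the 5-second defaults mirror the example below and all names are assumptions.

```python
def two_point_condition_met(first_point_duration_s: float,
                            second_point_duration_s: float,
                            second_point_on_interactable: bool,
                            first_preset_s: float = 5.0,
                            second_preset_s: float = 5.0) -> bool:
    # Both control input points must be held past their preset durations, and
    # the second point must rest on an interactable object, before the object
    # under the first point is taken as the first object and the object under
    # the second point as the second object.
    return (first_point_duration_s > first_preset_s
            and second_point_duration_s > second_preset_s
            and second_point_on_interactable)

# Thumb holds the engine identifier for 6 s, index finger holds the music player icon for 7 s.
print(two_point_condition_met(6.0, 7.0, True))   # True
print(two_point_condition_met(6.0, 3.0, True))   # False: second point released too early
```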
Step S304: when the input operation meets the predetermined condition, treat the input operation as a first input operation and control the speech recognition engine to switch from the low power consumption state to the normal working state, wherein the first input operation can determine the first object and a second object, and the second object belongs to the M objects.
In this embodiment, the second object is an interactable object and may be, but is not limited to, an application icon, a retrieval progress bar, a text input box, or a contact. Taking an application icon as the second object, for example the icon of a music player program: the identifier of the speech recognition engine and the icon of the music player program are displayed on the display unit of the electronic device. Referring to Fig. 8, while the user long-presses the identifier of the speech recognition engine with a thumb, the user long-presses the icon of the music player program with an index finger. The electronic device judges whether the duration of the input operation at the first control point on the identifier of the speech recognition engine is greater than a first preset duration, for example 5 seconds, and at the same time judges whether the duration of the input operation at the second control point on the icon of the music player program is greater than a second preset duration, for example 5 seconds. If the duration of the input operation at the first control point on the identifier of the speech recognition engine is greater than the first preset duration and the duration of the input operation at the second control point on the icon of the music player program is greater than the second preset duration, the speech recognition engine is controlled to switch from the low power consumption state to the normal working state.
Step S305: when the speech recognition engine is in the sound pickup state of the normal working state, obtain a voice input.
While the speech recognition engine switches from the low power consumption state to the sound pickup state of the normal working state, a sound pickup interface may be displayed on the display unit to prompt the user to provide voice input.
Step S306: when the speech recognition engine is in the recognition state of the normal working state, recognize the voice input based on parameter information of the second object.
When the speech recognition engine is in the recognition state, recognizing the voice input based on the parameter information of the second object specifically means recognizing the voice input within the information set corresponding to the second object.
Step S307: when the speech recognition engine is in the result feedback state of the normal working state, output a recognition result.
In this embodiment, outputting the recognition result may be, but is not limited to, displaying the recognized information and/or controlling an application program to perform a corresponding operation.
Again taking the icon of the music player program as the second object: after receiving the voice input, the speech recognition engine enters the recognition state of the normal working state and recognizes the voice input based on the information set corresponding to the music player icon. For example, assuming the voice input is "play the song Invisible Wings", the song "Invisible Wings" is searched for in the information set corresponding to the music player icon; once it is found, the recognition result is output, that is, the music player program is started and the song "Invisible Wings" is played. Taking the icon of the contacts list as another example: while the speech recognition engine enters the sound pickup state of the normal working state, it receives the voice input; if the voice input is "look up Li Ming's telephone number", the speech recognition engine switches from the sound pickup state of the normal working state to the recognition state, searches for Li Ming's telephone number in the information set corresponding to the contacts list icon, and, once the number is found, displays Li Ming's telephone number on the display unit.
The above process describes information processing when the operation with two control input points meets the predetermined condition; information processing when the operation with two control input points does not meet the predetermined condition is given below:
Step S308: when the operation with two control input points does not meet the predetermined condition, treat the input operation as a second input operation and control the speech recognition engine to switch from the low power consumption state to the normal working state.
Specifically, the operation with two control input points not meeting the predetermined condition includes: the objects at the two control input points include only the first object, or the objects at the two control input points are the first object and a non-interactable object among the M objects. For example, referring to Fig. 9, while long-pressing the identifier of the speech recognition engine with the thumb, the user long-presses a blank area with the index finger; or, referring to Fig. 10, while long-pressing the identifier of the speech recognition engine with the thumb, the user long-presses a non-interactable object with the index finger.
Step S309: when the speech recognition engine is in the sound pickup state of the normal working state, obtain a voice input.
Step S310: when the speech recognition engine is in the recognition state of the normal working state, recognize the voice input.
Specifically, when the speech recognition engine is in the recognition state of the normal working state, the voice input is recognized within all information sets.
Step S311: when the speech recognition engine is in the result feedback state of the normal working state, output a recognition result.
It should be noted that when the operation with two control input points meets the predetermined condition, the input operation is determined to be a first input operation, which can determine a second object; since the second object is an interactable object, the voice input can be recognized within the information set corresponding to the second object. When the input operation does not meet the predetermined condition, it is determined to be a second input operation, which does not involve an interactable object, so the voice input has to be recognized within all information sets. The former recognizes within the information set corresponding to the second object, while the latter recognizes within all information sets; comparing the two, the former is more targeted, the recognition scope is smaller, and both recognition efficiency and recognition accuracy are higher.
In the information processing method provided by this embodiment of the present invention, M objects including a first object are displayed on the display unit; an input operation is obtained; whether the input operation meets a predetermined condition is determined; when the input operation meets the predetermined condition, the input operation is treated as a first input operation and the speech recognition engine is controlled to switch from the low power consumption state to the normal working state; a voice input is obtained when the speech recognition engine is in the sound pickup state of the normal working state; the voice input is recognized based on parameter information of the second object when the speech recognition engine is in the recognition state of the normal working state; and a recognition result is output when the speech recognition engine is in the result feedback state of the normal working state. With this method, the user only needs to perform one simple input operation to make the electronic device start its sound pickup function, and the electronic device can quickly feed back a result for the voice input; the operation is therefore simple and the user experience is improved.
Figure 11 is please referred to, is the flow diagram of another information processing method provided in an embodiment of the present invention, this method Applied in electronic equipment, electronic equipment includes display unit and speech recognition engine, and speech recognition engine has low-power consumption shape State and normal operating conditions, method provided in an embodiment of the present invention include:
Step S401: M object is shown in display unit, wherein include the first object in M object, the first object is The mark of speech recognition engine.
Step S402: acquisition slidably inputs operation.
Step S403: it determines and slidably inputs whether operation meets predetermined condition.
In this embodiment, the M objects shown on the display unit (excluding the first object) are all interactive objects, and the parameter information of each interactive object can act on the recognition state of the normal working state of the speech recognition engine. Of course, the M objects displayed on the display unit (excluding the first object) may also include both interactive objects and non-interactive objects. When the M objects include non-interactive objects, then when the first object is determined from the input operation, N objects among the M objects may be prompted, where the parameter information of each of the N objects can act on the recognition state of the normal working state of the speech recognition engine. That is, when the first object is determined, N interactive objects can be determined from the M objects and the determined interactive objects are prompted, for example by displaying a mark on each interactive object as a prompt.
Determining whether the sliding input operation meets the predetermined condition specifically is: determining whether the objects determined by the input operation include the first object and a second object, where the second object is an interactive object among the M objects determined at the end point of the input operation, and the parameter information of an interactive object can act on the recognition state of the normal working state of the speech recognition engine. Specifically, when all of the M objects displayed on the display unit (excluding the first object) are interactive objects, the second object is one of the M objects (excluding the first object) determined at the end point of the input operation; when the M objects displayed on the display unit (excluding the first object) include both interactive and non-interactive objects, the second object is one of the interactive objects among the M objects determined at the end point of the input operation.
Further, determining whether the objects determined by the input operation include the first object and the second object specifically is: the first object is determined when the track points of the input operation meet a predetermined rule, where the predetermined rule may be that the number of track points of the input operation is greater than a preset number; the second object is determined when the end point of the input operation meets a predetermined rule, where the predetermined rule may be that, when the object at the end point of the input operation is an interactive object, the masking ratio between that interactive object and the first object is greater than a preset ratio; and when the input operation ends, the first object and the second object corresponding to the input operation are determined.
Specifically, the sliding input operation can control the first object to move, and the first object can be determined from the track points of the sliding input. If the object at the end point of the input operation is an interactive object, the masking ratio between the first object and that interactive object is obtained. Since the input operation controls the first object to move, the first object stops at the end point when the input operation ends; at this point the masking ratio between the first object and the interactive object at the end point of the input operation can be obtained, and it is judged whether the masking ratio is greater than a set ratio value. If it is greater than the set ratio value, the object at the end point of the input operation is determined to be the second object.
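The masking-ratio test described above amounts to a rectangle-overlap check between the dragged first object and the candidate object under the drop point. The following Kotlin sketch is purely illustrative: the Rect type, the helper names, and the 0.5 example threshold are assumptions of this sketch, not values taken from the embodiment.

// Minimal sketch of the masking-ratio check, under the assumptions stated above.
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    // Area is zero when the rectangle is degenerate or empty.
    val area: Float get() = (right - left).coerceAtLeast(0f) * (bottom - top).coerceAtLeast(0f)
}

// Fraction of the dragged first object's area covered by the candidate object.
fun maskingRatio(dragged: Rect, candidate: Rect): Float {
    val overlap = Rect(
        maxOf(dragged.left, candidate.left),
        maxOf(dragged.top, candidate.top),
        minOf(dragged.right, candidate.right),
        minOf(dragged.bottom, candidate.bottom)
    )
    return if (dragged.area == 0f) 0f else overlap.area / dragged.area
}

// The object at the end point becomes the second object only when the overlap
// exceeds a preset ratio; 0.5f here is an arbitrary example value.
fun isSecondObject(dragged: Rect, candidate: Rect, presetRatio: Float = 0.5f): Boolean =
    maskingRatio(dragged, candidate) > presetRatio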
Step S404: When the input operation meets the predetermined condition and the input operation is taken as a first input operation, control the speech recognition engine to switch from the low-power state to the normal working state, where the first input operation can determine the first object and the second object, and the second object belongs to the M objects.
In this embodiment, the second object may be, but is not limited to, an application icon, a retrieval progress bar, a text input box, or a contact. Again taking the electronic device being a mobile phone and the second object being the icon of a music playing program as an example: the display unit of the phone displays the icon of the speech recognition engine and the icon of the music playing program. Referring to Figure 12, the user can drag the icon of the speech recognition engine with a finger; when the user drags the icon of the speech recognition engine onto the icon of the music playing program, the speech recognition engine switches from the low-power state to the normal working state.
In addition, referring to Figure 13, the user may instead drag the icon of the music playing program; when the user drags the icon of the music program onto the icon of the speech recognition engine with a finger, the speech recognition engine switches from the low-power state to the normal working state. Referring to Figure 14, the user may also drag the icon of the speech recognition engine and the icon of the music playing program simultaneously, bringing the two icons together, so that the speech recognition engine switches from the low-power state to the normal working state.
Step S405: When the speech recognition engine is in the sound-receiving state of the normal working state, obtain a voice input.
While the speech recognition engine switches from the low-power state to the sound-receiving state of the normal working state, a sound-receiving interface may be displayed on the display unit to prompt the user to perform voice input.
Step S406: When the speech recognition engine is in the recognition state of the normal working state, recognize the voice input based on the parameter information of the second object.
Step S407: When the speech recognition engine is in the result-feedback state of the normal working state, output the recognition result.
In this embodiment, outputting the recognition result may be, but is not limited to, displaying the recognized information and/or controlling an application to perform a corresponding operation.
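Steps S404 to S407 treat the engine as moving through a fixed sequence of states: the low-power state, followed by the sound-receiving, recognition, and result-feedback states of the normal working state. The Kotlin sketch below shows one way such a state machine might be expressed; the enum values, the method names, and the assumption that the engine returns to the low-power state after feedback are illustrative, not specified by the embodiment.

// Illustrative state machine for the engine states implied by steps S404-S407.
enum class EngineState { LOW_POWER, SOUND_RECEIVING, RECOGNIZING, RESULT_FEEDBACK }

class SpeechEngineStates {
    var state: EngineState = EngineState.LOW_POWER
        private set

    // First or second input operation detected: leave the low-power state.
    fun wakeUp() {
        if (state == EngineState.LOW_POWER) state = EngineState.SOUND_RECEIVING
    }
    // Voice input captured (step S405): move on to recognition.
    fun onVoiceCaptured() {
        if (state == EngineState.SOUND_RECEIVING) state = EngineState.RECOGNIZING
    }
    // Recognition finished (step S406): move on to result feedback.
    fun onRecognized() {
        if (state == EngineState.RECOGNIZING) state = EngineState.RESULT_FEEDBACK
    }
    // Result output (step S407); returning to low power afterwards is an assumption.
    fun onResultDelivered() {
        if (state == EngineState.RESULT_FEEDBACK) state = EngineState.LOW_POWER
    }
}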
Again taking the second object being the icon of the music playing program as an example: after receiving the voice input, the speech recognition engine enters the recognition state of the normal working state and recognizes the voice input based on the information set corresponding to the icon of the music playing program. Specifically, suppose the voice input is a request to play the song "THE INVISIBLE WINGS"; the song "THE INVISIBLE WINGS" is then searched for in the information set corresponding to the icon of the music playing program, and after it is found, the recognition result is output, that is, the music playing program is started and the song "THE INVISIBLE WINGS" is played.
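If one assumes the acoustic decoding already yields a text transcript, the contextual step in this example reduces to a lookup restricted to the information set associated with the second object. The Kotlin sketch below only illustrates that idea; the class name, the identifier music_player, and the simple substring match are assumptions of the sketch.

// Sketch of recognition scoped to the second object's information set.
class ScopedRecognizer(private val infoSetsByObject: Map<String, List<String>>) {

    // Returns the entry of the second object's information set that matches the
    // transcript, or null when nothing in that set matches.
    fun recognize(secondObjectId: String, transcript: String): String? =
        infoSetsByObject[secondObjectId]
            ?.firstOrNull { entry -> transcript.contains(entry, ignoreCase = true) }
}

fun main() {
    val recognizer = ScopedRecognizer(
        mapOf("music_player" to listOf("THE INVISIBLE WINGS", "ANOTHER SONG"))
    )
    // A transcript such as "play THE INVISIBLE WINGS" resolves to that song.
    println(recognizer.recognize("music_player", "play THE INVISIBLE WINGS"))
}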
Taking the icon of the contact list as another example: while the speech recognition engine enters the sound-receiving state of the normal working state, a voice input is received, for example a request to look up Li Ming's telephone number. The speech recognition engine then moves from the sound-receiving state of the normal working state to the recognition state, searches the information set corresponding to the icon of the contact list for Li Ming's telephone number, and after finding it, displays Li Ming's telephone number on the display unit.
The above is the information processing when the sliding input operation meets the predetermined condition; the information processing when the sliding input operation does not meet the predetermined condition is given below:
Step S408: When the sliding input operation does not meet the predetermined condition and the input operation is taken as a second input operation, control the speech recognition engine to switch from the low-power state to the normal working state.
Specifically, the sliding input operation not meeting the predetermined condition includes: the object at the end point of the sliding input operation includes only the first object, or the object at the end point of the sliding input operation is a non-interactive object among the M objects. For example, the user drags the icon of the speech recognition engine to a blank area with a finger, or drags the icon of the speech recognition engine onto a non-interactive object.
Step S409: When the speech recognition engine is in the sound-receiving state of the normal working state, obtain the voice input.
Step S410: When the speech recognition engine is in the recognition state of the normal working state, recognize the voice input.
Since the object at the end point of the sliding input operation does not include an interactive object, the electronic device cannot perform recognition based on the parameter information of a specific object; in this case, when the speech recognition engine is in the recognition state of the normal working state, the voice input must be recognized against all information sets.
Step S411: When the speech recognition engine is in the result-feedback state of the normal working state, output the recognition result.
Again taking a mobile phone as the electronic device: the icon of the speech recognition engine is displayed on the display unit of the phone, the sliding input operation controls the icon of the speech recognition engine to move, and when the sliding input operation ends, the object at its end point is detected. If the only object at the end point is the icon of the speech recognition engine, the finger finally stopped in a blank area; in this case the speech recognition engine can be controlled to switch from the low-power state to the normal working state, the voice input is obtained when the engine is in the sound-receiving state of the normal working state, and the voice input is recognized against all information sets when the engine is in the recognition state of the normal working state. Likewise, when the sliding input operation ends and the object detected at its end point is a non-interactive object, the parameter information of that non-interactive object cannot act on the recognition state of the normal working state of the speech recognition engine, so recognition cannot be based on that object and the voice input is recognized against all information sets.
It should be noted that, when the input operation meets the predetermined condition, the input operation is determined to be the first input operation; the first input operation can determine the second object, and because the second object is an interactive object, the voice input can be recognized within the information set corresponding to the second object. When the input operation does not meet the predetermined condition, the input operation is determined to be the second input operation; the second input operation does not involve an interactive object, so the voice input has to be recognized within all information sets. The former recognizes within the information set corresponding to the second object, the latter within all information sets; comparing the two, the former is more targeted, the range of information to be searched is smaller, and the recognition efficiency and recognition accuracy are higher.
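The contrast between the two cases can be expressed as a single scope-selection rule: a determined second object narrows recognition to its own information set, while the fallback case searches every information set. The Kotlin sketch below shows this rule under the assumption that information sets are plain lists keyed by object identifier; the names are illustrative.

// Sketch of the scope-selection rule: targeted scope for a first input operation,
// all information sets for a second input operation.
fun selectRecognitionScope(
    secondObjectId: String?,                        // null when the predetermined condition is not met
    infoSetsByObject: Map<String, List<String>>
): List<String> =
    if (secondObjectId != null)
        infoSetsByObject[secondObjectId].orEmpty()  // targeted: smaller search range
    else
        infoSetsByObject.values.flatten()           // fallback: search all information sets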
In the information processing method provided by this embodiment of the present invention, M objects are displayed on the display unit, the M objects including a first object. An input operation is obtained, and it is determined whether the input operation meets a predetermined condition. When the input operation meets the predetermined condition and the input operation is taken as a first input operation, the speech recognition engine is controlled to switch from the low-power state to the normal working state; when the speech recognition engine is in the sound-receiving state of the normal working state, a voice input is obtained; when the speech recognition engine is in the recognition state of the normal working state, the voice input is recognized based on the parameter information of a second object; and when the speech recognition engine is in the result-feedback state of the normal working state, the recognition result is output. With the information processing method provided by this embodiment, the user only needs to perform one simple input operation to make the electronic device start the sound-receiving function, and the electronic device can quickly feed back a result for the voice input, so the operation is simple and the user experience is better.
Referring to Figure 15, which is a structural schematic diagram of an electronic device provided by an embodiment of the present invention. The electronic device includes a display unit and a speech recognition engine, the speech recognition engine having a low-power state and a normal working state. The electronic device includes: a display module 101, a first obtaining module 102, a determining module 103, a first control module 104, a second obtaining module 105, a first identification module 106, and a first output module 107. Wherein:
The display module 101 is configured to display M objects on the display unit, where the M objects include a first object and the first object is the identifier of the speech recognition engine.
The first obtaining module 102 is configured to obtain an input operation.
The determining module 103 is configured to determine whether the input operation meets a predetermined condition.
The first control module 104 is configured to, when the input operation meets the predetermined condition and the input operation is taken as a first input operation, control the speech recognition engine to switch from the low-power state to the normal working state, where the first input operation can determine the first object and a second object, and the second object belongs to the M objects.
The second obtaining module 105 is configured to obtain a voice input when the speech recognition engine is in the sound-receiving state of the normal working state.
The first identification module 106 is configured to recognize the voice input based on the parameter information of the second object when the speech recognition engine is in the recognition state of the normal working state.
The first output module 107 is configured to output a recognition result when the speech recognition engine is in the result-feedback state of the normal working state.
The electronic device provided by this embodiment of the present invention displays M objects on the display unit, the M objects including a first object; obtains an input operation and determines whether it meets a predetermined condition; when the input operation meets the predetermined condition and is taken as a first input operation, controls the speech recognition engine to switch from the low-power state to the normal working state; obtains a voice input when the speech recognition engine is in the sound-receiving state of the normal working state; recognizes the voice input based on the parameter information of a second object when the speech recognition engine is in the recognition state of the normal working state; and outputs the recognition result when the speech recognition engine is in the result-feedback state of the normal working state. With the electronic device provided by this embodiment, the user only needs to perform one simple input operation to make the electronic device start the sound-receiving function, and the electronic device can quickly feed back a result for the voice input, so the operation is simple and the user experience is better.
The above embodiment may further include: a second control module, a third obtaining module, a second identification module, and a second output module. Wherein:
The second control module is configured to, when the input operation does not meet the predetermined condition and the input operation is taken as a second input operation, control the speech recognition engine to switch from the low-power state to the normal working state.
The third obtaining module is configured to obtain the voice input when the speech recognition engine is in the sound-receiving state of the normal working state. The second identification module is configured to recognize the voice input when the speech recognition engine is in the recognition state of the normal working state.
The second output module is configured to output the recognition result when the speech recognition engine is in the result-feedback state of the normal working state.
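For orientation, the modules of Figure 15 and the optional second group of modules could be pictured as a set of narrow interfaces, one per responsibility. The Kotlin sketch below is only a rough mapping; the interface names, placeholder types, and signatures are assumptions, not part of the embodiment.

// Rough, assumed mapping of the described modules onto interfaces.
interface DisplayModule      { fun showObjects(objects: List<DisplayObject>) }                 // module 101
interface ObtainInputModule  { fun obtainInputOperation(): InputOperation }                    // module 102
interface DetermineModule    { fun meetsPredeterminedCondition(op: InputOperation): Boolean }  // module 103
interface ControlModule      { fun switchToNormalWorkingState() }                              // module 104 / second control module
interface ObtainVoiceModule  { fun obtainVoiceInput(): VoiceInput }                            // module 105 / third obtaining module
interface RecognitionModule  { fun recognize(voice: VoiceInput, scope: List<String>): String } // module 106 / second identification module
interface OutputModule       { fun outputResult(result: String) }                              // module 107 / second output module

// Placeholder types so the sketch is self-contained.
class DisplayObject(val id: String)
class InputOperation(val endPointObjectId: String?)
class VoiceInput(val transcript: String)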
In the above embodiments, the determining module 103 may include a first determining submodule and a prompting submodule. The first determining submodule is configured to determine the first object according to the input operation; the prompting submodule is configured to, when the first object is determined, prompt N objects among the M objects, where the parameter information of each of the N objects can act on the recognition state of the normal working state of the speech recognition engine.
In the above embodiments, the predetermined condition may be whether the objects determined by the input operation include the first object and the second object, the second object being one of the M objects determined at the end point of the input operation; or/and the predetermined condition may be whether the objects determined by the input operation include the first object and the second object, the second object being one of the N objects determined at the end point of the input operation.
In the above embodiments, the input operation may be an operation with two manipulation input points. In this case, the determining module 103 may include a second determining submodule, a third determining submodule, and a fourth determining submodule. The second determining submodule is configured to determine the first object when the first of the two manipulation input points meets a predetermined rule; the third determining submodule is configured to determine the second object when the second of the two manipulation input points meets a predetermined rule; the fourth determining submodule is configured to determine, at the end of the input operation, the first object and the second object corresponding to the input operation.
In the above embodiments, the input operation may also be a sliding input operation. In this case, the determining module 103 may include a fifth determining submodule, a sixth determining submodule, and a seventh determining submodule. The fifth determining submodule is configured to determine the first object when the track points of the input operation meet a predetermined rule; the sixth determining submodule is configured to determine the second object when the end point of the input operation meets a predetermined rule; the seventh determining submodule is configured to determine, at the end of the input operation, the first object and the second object corresponding to the input operation.
In the above embodiments, the electronic device has a touch sensing unit; the touch area of the touch sensing unit is divided into a first region and a second region, the second region overlaps the display unit, and the input operation is a sliding operation. In this case, the determining module 103 includes an eighth determining submodule, a ninth determining submodule, and a tenth determining submodule. The eighth determining submodule is configured to, when the starting point of the input operation is in the first region and the input operation moves from the first region to the dividing line between the first region and the second region, determine the first object and display the first object, so that the input operation controls the first object to move in the second region. The ninth determining submodule is configured to, when the input operation moves from the first region into the second region, determine the second object if the end point of the input operation meets a predetermined rule. The tenth determining submodule is configured to determine, at the end of the input operation, the first object and the second object corresponding to the input operation.
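The two-region variant can be pictured as tracking whether the gesture starts in the first region, crosses the dividing line, and ends on a candidate object inside the second region. The Kotlin sketch below illustrates that flow; the coordinate convention, the position of the dividing line, and the hitTest callback are assumptions of the sketch.

// Sketch of the two-region gesture handling, under the assumptions stated above.
data class Point(val x: Float, val y: Float)

class TwoRegionGestureTracker(private val dividingY: Float) {
    private var firstObjectShown = false

    fun inFirstRegion(p: Point) = p.y > dividingY   // e.g. a touch strip below the display
    fun inSecondRegion(p: Point) = p.y <= dividingY // the region that overlaps the display

    // Called for every track point of the sliding input operation: once the gesture that
    // started in the first region reaches the dividing line, the first object is shown.
    fun onTrackPoint(start: Point, current: Point) {
        if (!firstObjectShown && inFirstRegion(start) && current.y <= dividingY) {
            firstObjectShown = true
        }
    }

    // Called at the end of the input operation; returns the second object's id, if any.
    fun onEnd(endPoint: Point, hitTest: (Point) -> String?): String? =
        if (firstObjectShown && inSecondRegion(endPoint)) hitTest(endPoint) else null
}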
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts between the embodiments may be referred to each other. Since the device and system embodiments are basically similar to the method embodiments, their description is relatively simple, and the relevant parts may refer to the description of the method embodiments.
It should also be noted that, herein, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes that element.
The steps of the methods or algorithms described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be realized in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An information processing method, applied to an electronic device, characterized in that the electronic device comprises a display unit and a speech recognition engine, the speech recognition engine having a low-power state and a normal working state, the method comprising:
displaying M objects on the display unit, wherein the M objects comprise a first object, and the first object is an identifier of the speech recognition engine;
obtaining an input operation;
determining whether the input operation meets a predetermined condition;
when the input operation meets the predetermined condition and the input operation is taken as a first input operation, controlling the speech recognition engine to switch from the low-power state to the normal working state, wherein the first input operation can determine the first object and a second object, and the second object belongs to the M objects;
when the speech recognition engine is in a sound-receiving state of the normal working state, obtaining a voice input;
when the speech recognition engine is in a recognition state of the normal working state, recognizing the voice input based on parameter information of the second object;
when the speech recognition engine is in a result-feedback state of the normal working state, outputting a recognition result;
when the input operation does not meet the predetermined condition and the input operation is taken as a second input operation, controlling the speech recognition engine to switch from the low-power state to the normal working state, and performing speech recognition on the voice input based on all information sets corresponding to the M objects; wherein the second input operation can determine the first object.
2. The method according to claim 1, characterized in that performing speech recognition on the voice input based on all information sets corresponding to the M objects comprises:
when the speech recognition engine is in the sound-receiving state of the normal working state, obtaining the voice input;
when the speech recognition engine is in the recognition state of the normal working state, recognizing the voice input based on all information sets corresponding to the M objects;
when the speech recognition engine is in the result-feedback state of the normal working state, outputting a recognition result.
3. The method according to claim 1, characterized in that determining whether the input operation meets the predetermined condition comprises:
determining the first object according to the input operation;
when the first object is determined, prompting N objects among the M objects, wherein parameter information of each of the N objects can act on the recognition state of the normal working state of the speech recognition engine.
4. The method according to claim 3, characterized in that the predetermined condition is whether the objects determined by the input operation comprise the first object and the second object, the second object being one of the M objects determined at the end point of the input operation;
or/and
the predetermined condition is whether the objects determined by the input operation comprise the first object and the second object, the second object being one of the N objects determined at the end point of the input operation.
5. The method according to claim 4, characterized in that the input operation is an operation with two manipulation input points;
determining whether the input operation meets the predetermined condition comprises:
when a first manipulation input point of the two manipulation input points meets a predetermined rule, determining the first object, and when a second manipulation input point of the two manipulation input points meets a predetermined rule, determining the second object;
at the end of the input operation, determining the first object and the second object corresponding to the input operation;
alternatively,
the input operation is a sliding input operation;
determining whether the input operation meets the predetermined condition comprises:
determining the first object when track points of the input operation meet a predetermined rule;
determining the second object when the end point of the input operation meets a predetermined rule;
at the end of the input operation, determining the first object and the second object corresponding to the input operation;
alternatively,
the electronic device has a touch sensing unit, a touch area of the touch sensing unit is divided into a first region and a second region, the second region overlaps the display unit, and the input operation is a sliding operation;
determining whether the input operation meets the predetermined condition comprises:
when a starting point of the input operation is in the first region and the input operation moves from the first region to a dividing line between the first region and the second region, determining the first object and displaying the first object, so that the input operation controls the first object to move in the second region;
when the input operation moves from the first region into the second region, determining the second object if the end point of the input operation meets a predetermined rule;
at the end of the input operation, determining the first object and the second object corresponding to the input operation.
6. An electronic device, characterized in that the electronic device comprises a display unit and a speech recognition engine, the speech recognition engine having a low-power state and a normal working state, the electronic device comprising:
a display module, configured to display M objects on the display unit, wherein the M objects comprise a first object, and the first object is an identifier of the speech recognition engine;
a first obtaining module, configured to obtain an input operation;
a determining module, configured to determine whether the input operation meets a predetermined condition;
a first control module, configured to, when the input operation meets the predetermined condition and the input operation is taken as a first input operation, control the speech recognition engine to switch from the low-power state to the normal working state, wherein the first input operation can determine the first object and a second object, and the second object belongs to the M objects;
a second obtaining module, configured to obtain a voice input when the speech recognition engine is in a sound-receiving state of the normal working state;
a first identification module, configured to recognize the voice input based on parameter information of the second object when the speech recognition engine is in a recognition state of the normal working state;
a first output module, configured to output a recognition result when the speech recognition engine is in a result-feedback state of the normal working state;
a processing module, configured to, when the input operation does not meet the predetermined condition and the input operation is taken as a second input operation, control the speech recognition engine to switch from the low-power state to the normal working state, and perform speech recognition on the voice input based on all information sets corresponding to the M objects; wherein the second input operation can determine the first object.
7. The electronic device according to claim 6, characterized in that the processing module comprises:
a second control module, configured to, when the input operation does not meet the predetermined condition and the input operation is taken as a second input operation, control the speech recognition engine to switch from the low-power state to the normal working state;
a third obtaining module, configured to obtain the voice input when the speech recognition engine is in the sound-receiving state of the normal working state;
a second identification module, configured to recognize the voice input based on all information sets corresponding to the M objects when the speech recognition engine is in the recognition state of the normal working state;
a second output module, configured to output a recognition result when the speech recognition engine is in the result-feedback state of the normal working state.
8. The electronic device according to claim 6, characterized in that the determining module comprises:
a first determining submodule, configured to determine the first object according to the input operation;
a prompting submodule, configured to, when the first object is determined, prompt N objects among the M objects, wherein parameter information of each of the N objects can act on the recognition state of the normal working state of the speech recognition engine.
9. The electronic device according to claim 8, characterized in that the predetermined condition is whether the objects determined by the input operation comprise the first object and the second object, the second object being one of the M objects determined at the end point of the input operation;
or/and the predetermined condition is whether the objects determined by the input operation comprise the first object and the second object, the second object being one of the N objects determined at the end point of the input operation.
10. The electronic device according to claim 9, characterized in that the input operation is an operation with two manipulation input points;
the determining module comprises:
a second determining submodule, configured to determine the first object when a first manipulation input point of the two manipulation input points meets a predetermined rule;
a third determining submodule, configured to determine the second object when a second manipulation input point of the two manipulation input points meets a predetermined rule;
a fourth determining submodule, configured to determine, at the end of the input operation, the first object and the second object corresponding to the input operation;
alternatively,
the input operation is a sliding input operation;
the determining module comprises:
a fifth determining submodule, configured to determine the first object when track points of the input operation meet a predetermined rule;
a sixth determining submodule, configured to determine the second object when the end point of the input operation meets a predetermined rule;
a seventh determining submodule, configured to determine, at the end of the input operation, the first object and the second object corresponding to the input operation;
alternatively,
the electronic device has a touch sensing unit, a touch area of the touch sensing unit is divided into a first region and a second region, the second region overlaps the display unit, and the input operation is a sliding input operation;
the determining module comprises:
an eighth determining submodule, configured to, when a starting point of the input operation is in the first region and the input operation moves from the first region to a dividing line between the first region and the second region, determine the first object and display the first object, so that the input operation controls the first object to move in the second region;
a ninth determining submodule, configured to, when the input operation moves from the first region into the second region, determine the second object if the end point of the input operation meets a predetermined rule;
a tenth determining submodule, configured to determine, at the end of the input operation, the first object and the second object corresponding to the input operation.