CN101557432A - Mobile terminal and menu control method thereof - Google Patents

Mobile terminal and menu control method thereof

Info

Publication number
CN101557432A
CN101557432A (application CN200810127910A)
Authority
CN
China
Prior art keywords
portable terminal
menu
user
controller
speech
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008101279100A
Other languages
Chinese (zh)
Other versions
CN101557432B (en)
Inventor
尹种根
郑大成
杻在勋
金兑俊
赵在珉
郭宰到
申宗壕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020080032843A external-priority patent/KR101521908B1/en
Priority claimed from KR1020080033350A external-priority patent/KR101521909B1/en
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Publication of CN101557432A publication Critical patent/CN101557432A/en
Application granted granted Critical
Publication of CN101557432B publication Critical patent/CN101557432B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04M: Telephonic communication
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • G: Physics
    • G06: Computing; calculating or counting
    • G06F: Electric digital data processing
    • G06F 1/00: Details not covered by groups G06F 3/00-G06F 13/00 and G06F 21/00
    • G06F 1/16: Constructional details or arrangements
    • G06F 1/1613: Constructional details or arrangements for portable computers
    • G06F 1/1633: Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F 1/1615-G06F 1/1626
    • G06F 1/1684: Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F 1/1635-G06F 1/1675
    • G: Physics
    • G06: Computing; calculating or counting
    • G06F: Electric digital data processing
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus
    • G: Physics
    • G06: Computing; calculating or counting
    • G06F: Electric digital data processing
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G: Physics
    • G06: Computing; calculating or counting
    • G06F: Electric digital data processing
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; sound output

Abstract

A mobile terminal including an input unit configured to receive an input to activate a voice recognition function on the mobile terminal and a memory configured to store multiple domains related to menus and operations of the mobile terminal. It further includes a controller configured to access a specific domain among the multiple domains included in the memory based on the received input to activate the voice recognition function, to recognize user speech based on a language model and an acoustic model of the accessed domain, and to determine at least one menu and operation of the mobile terminal based on the accessed specific domain and the recognized user speech.

Description

Mobile terminal and menu control method thereof
Background of the Invention
1. Field of the Invention
The present invention relates to a mobile terminal and a corresponding method for improving the voice recognition rate by restricting the domain used for voice recognition to information related to a specific menu or service.
2. Description of the Related Art
Mobile terminals now provide many additional services besides the basic call service. For example, users can now access the Internet, play games, watch videos, listen to music, capture images and videos, record audio files, etc. Mobile terminals also now provide broadcast programs, so users can watch television shows, sports programs, videos, etc.
In addition, because the functions included with mobile terminals have increased significantly, the user interface has become more complex. For example, user interfaces now include touch screens that allow the user to touch and select a particular item or menu option. Mobile terminals also include very limited voice recognition functions that allow a user to perform rudimentary operations. However, the error rate in determining the meaning of a user's voice instruction is very high, so users generally do not use the limited voice recognition features on the terminal.
Summary of the Invention
Accordingly, one object of the present invention is to address the above-noted and other problems.
Another object of the present invention is to provide a mobile terminal and a corresponding method for controlling menus related to specific functions or services by recognizing the meaning of a voice command based on its context and content.
Yet another object of the present invention is to provide a mobile terminal and a corresponding method for significantly improving the voice recognition rate by specifying the domain used for voice recognition as a domain related to a specific menu or service.
Still another object of the present invention is to provide a mobile terminal and a corresponding method for controlling menus related to specific functions or services by detecting the user's activation of the voice recognition function through one or more of the terminal's user interfaces (UI).
Another object of the present invention is to provide a mobile terminal and a corresponding method for providing help information about the input of voice commands according to the operating state or operating mode of the mobile terminal, so that even a novice user can control menus related to specific functions or services via his or her voice commands.
To achieve these and other advantages and in accordance with the purpose of the present invention, as embodied and broadly described herein, the present invention provides in one aspect a mobile terminal including: an input unit configured to receive an input to activate a voice recognition function on the mobile terminal; a memory configured to store multiple domains related to menus and operations of the mobile terminal; and a controller configured to access a specific domain among the multiple domains included in the memory based on the received input to activate the voice recognition function, to recognize user speech based on a language model and an acoustic model of the accessed domain, and to determine at least one menu and operation of the mobile terminal based on the accessed specific domain and the recognized user speech.
In another aspect, the present invention provides a method of controlling a mobile terminal. The method includes: receiving an input to activate a voice recognition function on the mobile terminal; accessing a specific domain among multiple domains stored in a memory of the mobile terminal based on the received input to activate the voice recognition function; recognizing user speech based on a language model and an acoustic model of the accessed domain; and outputting at least one menu and operation of the mobile terminal based on the accessed specific domain and the recognized user speech.
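The claimed method lends itself to a short sketch. The following Python is an illustrative toy only, not the patent's implementation: the domain names, command keywords, and the keyword-matching stand-in for the language/acoustic models are all invented for the example.

```python
# Hypothetical sketch of the claimed control method: the activation input
# selects a specific domain, and recognition is then restricted to that
# domain's vocabulary, which narrows the search space and improves accuracy.

DOMAINS = {
    # domain name -> command keyword -> menu/operation (all names assumed)
    "multimedia": {"play": "Play file", "pause": "Pause playback"},
    "email": {"send": "Send e-mail", "inbox": "Open inbox"},
}

def activate_voice_recognition(activation_input):
    """Steps 1-2: receive the activation input and access the matching domain."""
    return DOMAINS[activation_input]

def recognize_speech(domain, user_speech):
    """Step 3: recognize user speech against the accessed domain only."""
    for keyword in domain:
        if keyword in user_speech.lower():
            return keyword
    return None  # the speech falls outside the restricted domain

def determine_menu(domain, keyword):
    """Step 4: output the menu/operation matching the recognized speech."""
    return domain.get(keyword, "No matching menu")

domain = activate_voice_recognition("multimedia")
keyword = recognize_speech(domain, "Please play my song")
print(determine_menu(domain, keyword))  # -> Play file
```

The point of the restriction is visible in `recognize_speech`: with the "multimedia" domain active, only its few keywords compete, whereas an unrestricted recognizer would have to score the speech against every command on the terminal.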
Further scope of applicability of the present invention will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only, since various changes and modifications within the spirit and scope of the invention will become apparent to those skilled in the art from this detailed description.
Brief Description of the Drawings
The present invention will become more fully understood from the detailed description given hereinafter and the accompanying drawings, which are given by way of illustration only and thus are not limitative of the present invention, and wherein:
Fig. 1 is a block diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 2 is a front perspective view of a mobile terminal according to an embodiment of the present invention;
Fig. 3 is a rear perspective view of the mobile terminal shown in Fig. 2;
Fig. 4 is an overview of a communication system operable with the mobile terminal of the present invention;
Fig. 5 is a flow chart illustrating a menu control method of a mobile terminal via voice commands according to an embodiment of the present invention;
Fig. 6A is an overview illustrating a method of activating a voice recognition function of a mobile terminal according to an embodiment of the present invention;
Figs. 6B and 6C are overviews illustrating a method of outputting help information of a mobile terminal according to an embodiment of the present invention;
Fig. 7A is a flow chart illustrating a method of recognizing a voice command of a mobile terminal according to an embodiment of the present invention;
Fig. 7B is an overview illustrating a method of recognizing a voice command of a mobile terminal according to an embodiment of the present invention;
Fig. 8 is an overview illustrating a method of displaying menus according to a voice recognition rate of a mobile terminal according to an embodiment of the present invention;
Fig. 9 is an overview illustrating a method of recognizing a voice command of a mobile terminal according to another embodiment of the present invention;
Fig. 10 is an overview of database configurations used as references for voice command recognition of a mobile terminal according to an embodiment of the present invention;
Fig. 11 is an overview illustrating a state in which a voice recognition function of a mobile terminal is being performed according to an embodiment of the present invention;
Fig. 12 is an overview illustrating a method of processing subcommands related to a specific menu of a mobile terminal via voice commands according to an embodiment of the present invention;
Fig. 13 is an overview illustrating a method of searching a subway map in a mobile terminal via voice commands according to an embodiment of the present invention;
Fig. 14 is an overview illustrating a method of reproducing multimedia files in a mobile terminal via voice commands according to an embodiment of the present invention;
Fig. 15 is an overview illustrating a method of sending e-mail in a mobile terminal via voice commands according to an embodiment of the present invention;
Fig. 16 is an overview illustrating a method of performing a phone call in a mobile terminal via voice commands according to an embodiment of the present invention;
Fig. 17 is an overview illustrating a method of using phone book information in a mobile terminal via voice commands according to an embodiment of the present invention;
Fig. 18 is an overview illustrating a method of changing a background screen in a mobile terminal via voice commands according to an embodiment of the present invention; and
Fig. 19 is an overview illustrating a method of reproducing multimedia files in a mobile terminal via voice commands according to another embodiment of the present invention.
Detailed Description of the Invention
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings.
Fig. 1 is a block diagram of a mobile terminal 100 according to an embodiment of the present invention. As shown, the mobile terminal 100 includes a wireless communication unit 110 having one or more components that permit wireless communication between the mobile terminal 100 and the wireless communication system or network within which the terminal is located.
For example, the wireless communication unit 110 includes a broadcast receiving module 111 that receives a broadcast signal and/or broadcast associated information from an external broadcast managing entity via a broadcast channel. The broadcast channel may include a satellite channel and a terrestrial channel.
In addition, the broadcast managing entity generally refers to a system that transmits a broadcast signal and/or broadcast associated information. Examples of broadcast associated information include information associated with a broadcast channel, a broadcast program, a broadcast service provider, etc. For instance, the broadcast associated information may include an electronic program guide (EPG) of digital multimedia broadcasting (DMB) and an electronic service guide (ESG) of digital video broadcast-handheld (DVB-H).
In addition, the broadcast signal may be implemented as a TV broadcast signal, a radio broadcast signal, a data broadcast signal, etc. The broadcast signal may also include a broadcast signal combined with a TV or radio broadcast signal.
The broadcast receiving module 111 is also configured to receive broadcast signals transmitted from various types of broadcast systems. For example, such broadcast systems include digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcast-handheld (DVB-H), the data broadcasting system known as media forward link only (MediaFLO) and integrated services digital broadcast-terrestrial (ISDB-T), among others. Receiving multicast signals is also possible. In addition, data received by the broadcast receiving module 111 may be stored in a suitable device, such as the memory 160.
The wireless communication unit 110 also includes a mobile communication module 112 that transmits wireless signals to, and receives wireless signals from, one or more network entities (e.g., a base station, a Node-B). Such signals may represent audio, video, multimedia, control signaling, data, etc.
Also included is a wireless Internet module 113 that supports Internet access for the mobile terminal. The module 113 may be internally or externally coupled to the terminal. The wireless communication unit 110 also includes a short-range communication module 114 that facilitates relatively short-range communications. Suitable technologies for implementing this module include radio frequency identification (RFID), infrared data association (IrDA) and ultra-wideband (UWB), as well as the networking technologies commonly referred to as Bluetooth and ZigBee, to name a few.
A position-location module 115 is also included in the wireless communication unit 110 and identifies or otherwise obtains the location of the mobile terminal 100. The position-location module 115 may be implemented with global positioning system (GPS) components that cooperate with associated satellites, network components, and combinations thereof.
In addition, as shown in Fig. 1, the mobile terminal 100 also includes an audio/video (A/V) input unit 120 that provides audio or video signals to the mobile terminal 100. As shown, the A/V input unit 120 includes a camera 121 and a microphone 122. The camera 121 receives and processes image frames of still pictures or video.
Further, the microphone 122 receives an external audio signal while the portable device is in a particular mode, such as a phone call mode, a recording mode, or a voice recognition mode. The received audio signal is then processed and converted into digital data. The portable device, and in particular the A/V input unit 120, also generally includes assorted noise removing algorithms to remove noise generated in the course of receiving the external audio signal. In addition, data generated by the A/V input unit 120 may be stored in the memory 160, utilized by the output unit 150, or transmitted via one or more modules of the communication unit 110. If desired, two or more microphones and/or cameras may be used.
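The patent does not specify which noise removing algorithms are used. As a purely hypothetical illustration of the idea, a minimal amplitude noise gate suppresses samples below an assumed noise floor before the signal is passed on for recognition; the threshold value here is invented for the example.

```python
# Illustrative noise gate: one simple member of the "assorted noise removing
# algorithms" family. Samples whose magnitude stays below a noise-floor
# threshold are zeroed; louder samples pass through unchanged.

def noise_gate(samples, threshold=0.05):
    """Return the sample list with sub-threshold samples zeroed out."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

mixed = [0.01, -0.02, 0.6, -0.55, 0.03, 0.4]
print(noise_gate(mixed))  # -> [0.0, 0.0, 0.6, -0.55, 0.0, 0.4]
```

Real implementations would typically use spectral subtraction or adaptive filtering rather than a fixed gate, but the gate shows where such preprocessing sits: between the microphone 122 and the conversion to digital data used by the recognizer.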
The mobile terminal 100 also includes a user input unit 130 that generates input data in response to user manipulation of one or more associated input devices. Examples of such devices include a keypad, a dome switch, a touchpad (e.g., static pressure/capacitance), a jog wheel and a jog switch. A specific example is one in which the user input unit 130 is configured as a touchpad in cooperation with a touch screen display, which will be described in more detail below.
A sensing unit 140 is also included in the mobile terminal 100 and provides status measurements of various aspects of the mobile terminal 100. For instance, the sensing unit 140 may detect an open/closed status of the mobile terminal 100, the relative positioning of components (e.g., a display and a keypad) of the mobile terminal 100, a change of position of the mobile terminal 100 or a component of the mobile terminal 100, whether a user is in contact with the mobile terminal 100, the orientation or acceleration of the mobile terminal 100, etc.
As an example, when the mobile terminal 100 is a slide-type mobile terminal, the sensing unit 140 may sense whether a sliding portion of the mobile terminal 100 is open or closed. Other examples include the sensing unit 140 sensing the presence or absence of power provided by the power supply 190, and the presence or absence of a coupling or other connection between the interface unit 170 and an external device.
Further, the interface unit 170 is often implemented to couple the mobile terminal with external devices. Typical external devices include wired/wireless headphones, external chargers, power supplies, storage devices configured to store data (e.g., audio, video, pictures, etc.), earphones, microphones, and the like. In addition, the interface unit 170 may be configured using a wired/wireless data port, a card socket (e.g., for coupling to a memory card, a subscriber identity module (SIM) card, a user identity module (UIM) card, a removable user identity module (RUIM) card, etc.), audio input/output ports, and video input/output ports.
The output unit 150 generally includes various components which support the output requirements of the mobile terminal 100. The mobile terminal 100 also includes a display 151 that visually displays information associated with the mobile terminal 100. For instance, if the mobile terminal 100 is operating in a phone call mode, the display 151 will typically provide a user interface or graphical user interface that includes information associated with placing, conducting, and terminating a phone call. As another example, if the mobile terminal 100 is in a video call mode or a photographing mode, the display 151 may additionally or alternatively display images associated with these modes.
Further, the display 151 also preferably includes a touch screen working in cooperation with an input device, such as a touchpad. This configuration permits the display 151 to function both as an output device and an input device. In addition, the display 151 may be implemented using display technologies including, for example, a liquid crystal display (LCD), a thin film transistor-liquid crystal display (TFT-LCD), an organic light-emitting diode display (OLED), a flexible display and a three-dimensional display.
The mobile terminal 100 may also include one or more of such displays. An example of a two-display embodiment is one in which one display is configured as an internal display (viewable when the terminal is in an opened position) and a second display is configured as an external display (viewable in both the open and closed positions).
Fig. 1 also shows the output unit 150 having an audio output module 152 which supports the audio output requirements of the mobile terminal 100. The audio output module 152 is often implemented using one or more speakers, buzzers, other audio producing devices, and combinations thereof. Further, the audio output module 152 functions in various modes, including a call-receiving mode, a call-placing mode, a recording mode, a voice recognition mode and a broadcast reception mode. During operation, the audio output module 152 outputs audio relating to a particular function (e.g., call received, message received, and errors).
In addition, the output unit 150 in the figure further includes an alarm 153, which is used to signal or otherwise identify the occurrence of a particular event associated with the mobile terminal 100. Alarm events include a call received, a message received and user input received. An example of such output includes providing tactile sensations (e.g., vibration) to a user. For instance, the alarm 153 may be configured to vibrate responsive to the mobile terminal 100 receiving a call or message.
As another example, a vibration may be provided by the alarm 153 responsive to receiving user input at the mobile terminal 100, thus providing a tactile feedback mechanism. Further, the various outputs provided by the components of the output unit 150 may be performed separately, or such output may be performed using any combination of such components.
In addition, the memory 160 is used to store various types of data to support the processing, control, and storage requirements of the mobile terminal 100. Examples of such data include program instructions for applications operating on the mobile terminal 100, call history, contact data, phonebook data, messages, pictures, video, etc.
Further, the memory 160 shown in Fig. 1 may be implemented using any type (or combination) of suitable volatile and non-volatile memory or storage devices, including random access memory (RAM), static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk, card-type memory, or other similar memory or data storage device.
The terminal 100 also includes a controller 180 that typically controls the overall operations of the mobile terminal 100. For instance, the controller performs the control and processing associated with voice calls, data communications, instant messaging, video calls, camera operations and recording operations. As shown in Fig. 1, the controller 180 may also include a multimedia module 181 that provides multimedia playback functions. The multimedia module 181 may be configured as part of the controller 180, or may be implemented as a separate component.
In addition, a power supply 190 provides power required by the various components of the portable device. The provided power may be internal power, external power, or combinations thereof.
Next, Fig. 2 is a front view of the mobile terminal 100 according to an embodiment of the present invention. As shown in Fig. 2, the mobile terminal 100 includes a first body 200 configured to slidably cooperate with a second body 205. The user input unit 130 described in Fig. 1 may include a first input unit such as function keys 210, a second input unit such as a keypad 215, and a third input unit such as side keys 245.
The function keys 210 are associated with the first body 200, and the keypad 215 is associated with the second body 205. The keypad 215 includes various keys (e.g., numbers, characters, and symbols) to enable a user to place a call, prepare a text or multimedia message, and otherwise operate the mobile terminal 100.
In addition, the first body 200 slides relative to the second body 205 between open and closed positions. In the closed position, the first body 200 is positioned over the second body 205 in such a manner that the keypad 215 is substantially or completely obscured by the first body 200. In the open position, user access to the keypad 215, as well as to the display 151 and function keys 210, is possible. The function keys 210 are convenient to a user for entering commands such as start, stop and scroll commands.
Further, the mobile terminal 100 is operable in either a standby mode (e.g., able to receive a call or message, or to receive and respond to network control signaling) or an active call mode. Typically, the mobile terminal 100 functions in the standby mode when in the closed position, and in the active mode when in the open position. However, this mode configuration may be changed as required or desired.
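The default position-to-mode policy just described can be sketched as a small lookup table, kept swappable since the text notes the configuration may be changed. All names here are invented for the illustration.

```python
# Sketch of the default slide-position policy: closed -> standby,
# open -> active. Passing a different table models a changed configuration.

DEFAULT_MODE_BY_POSITION = {"closed": "standby", "open": "active"}

def operating_mode(slide_position, mode_table=DEFAULT_MODE_BY_POSITION):
    """Map the sensed slide position to the terminal's operating mode."""
    return mode_table[slide_position]

print(operating_mode("closed"))  # -> standby
print(operating_mode("open"))    # -> active
```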
In addition, the first body 200 is formed from a first case 220 and a second case 225, and the second body 205 is formed from a first case 230 and a second case 235. The first and second cases are usually formed from a suitably rigid material, such as injection molded plastic, or formed using a metallic material, such as stainless steel (STS) and titanium (Ti).
If desired, one or more intermediate cases may be provided between the first and second cases of one or both of the first and second bodies 200 and 205. In addition, the first and second bodies 200 and 205 are sized to receive electronic components used to support the operation of the mobile terminal 100.
The first body 200 also includes the camera 121 and the audio output unit 152, which is configured as a speaker, positioned relative to the display 151. The camera 121 may also be constructed in such a manner that it can be selectively positioned (e.g., rotated, swiveled, etc.) relative to the first body 200.
Further, the function keys 210 are positioned adjacent to a lower side of the display 151. As discussed above, the display 151 can be implemented as an LCD or OLED. The display 151 may also be configured as a touch screen having an underlying touchpad which generates signals responsive to user contact (e.g., finger, stylus, etc.) with the touch screen.
The second body 205 also includes the microphone 122 positioned adjacent to the keypad 215, and side keys 245, which are one type of user input unit, positioned along the side of the second body 205. Preferably, the side keys 245 are configured as hot keys, such that the side keys 245 are associated with a particular function of the mobile terminal 100. As shown, the interface unit 170 is positioned adjacent to the side keys 245, and the power supply 190 in the form of a battery is located on a lower portion of the second body 205.
Fig. 3 is a rear side view of the mobile terminal shown in Fig. 2. As shown in Fig. 3, the second body 205 includes a camera 121 with an associated flash 250 and mirror 255. The flash 250 operates in conjunction with the camera 121 of the second body 205, and the mirror 255 is useful for assisting a user to position the camera 121 in a self-portrait mode. In addition, the camera 121 of the second body 205 faces a direction opposite to the direction faced by the camera 121 of the first body 200 shown in Fig. 2.
In addition, each of the cameras 121 of the first and second bodies may have the same or different capabilities. For example, in one embodiment, the camera 121 of the first body 200 operates with a relatively lower resolution than the camera 121 of the second body 205. Such an arrangement works well, for example, during a video conference call in which reverse link bandwidth capabilities may be limited. Further, the relatively higher resolution of the camera 121 of the second body 205 (Fig. 3) is useful for obtaining higher quality pictures for later use.
The second body 205 also includes the audio output module 152 configured as a speaker, located on an upper side of the second body 205. The audio output modules of the first and second bodies 200 and 205 may also cooperate to provide stereo output. Moreover, either or both of these audio output modules may be configured to operate as a speakerphone.
The terminal 100 also includes a broadcast signal receiving antenna 260 located at an upper end of the second body 205. The antenna 260 operates in cooperation with the broadcast receiving module 111 (Fig. 1). If desired, the antenna 260 may be fixed, or may be configured to retract into the second body 205. Further, the rear side of the first body 200 includes a slide module 265, which slidably couples with a corresponding slide module located on the front side of the second body 205.
In addition, the illustrated arrangement of the various components of the first and second bodies 200 and 205 may be modified as required or desired. In general, some or all of the components of one body may alternatively be implemented on the other body. Further, such components may be positioned at locations which differ from those shown by the representative figures.
In addition, the mobile terminal 100 of Figs. 1-3 may be configured to operate within a communication system which transmits data via frames or packets, including wireless communication systems, wired communication systems, and satellite-based communication systems. Such communication systems utilize different air interfaces and/or physical layers.
Examples of air interfaces utilized by the communication systems include, for example, frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA) and universal mobile telecommunications system (UMTS), the long term evolution (LTE) of the UMTS, and the global system for mobile communications (GSM). By way of a non-limiting example only, the further description will relate to a CDMA communication system, but such teachings apply equally to other system types.
Next, Fig. 4 illustrates a CDMA wireless communication system having a plurality of mobile terminals 100, a plurality of base stations 270, a plurality of base station controllers (BSCs) 275, and a mobile switching center (MSC) 280.
The MSC 280 is configured to interface with a public switched telephone network (PSTN) 290, and the MSC 280 is also configured to interface with the BSCs 275. Further, the BSCs 275 are coupled to the base stations 270 via backhaul lines. In addition, the backhaul lines may be configured in accordance with any of several known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. Further, the system may include more than two BSCs 275.
Each base station 270 also can comprise one or more sectors, and each sector has omnidirectional antenna or points to radially antenna away from the specific direction of base station 270.Perhaps, each sector can comprise two antennas that are used for diversity reception.In addition, each base station 270 can be configured to support a plurality of Frequency Distribution, and each Frequency Distribution has specific frequency spectrum (for example, 1.25MHz, 5MHz).
The common factor of sector and Frequency Distribution is called as CDMA Channel.Base station 270 also can be called as base station transceiver subsystem (BTS).In some cases, term " base station " can be used for the logical BSC 275 of finger and one or more base station 270.
The base station also can be expressed as " cell site (cell site) ".Perhaps, each sector of given base station 270 can be called as cell site.In addition, T-DMB (DMB) transmitter 295 is illustrated as portable terminal 100 broadcasting in being operated in this system.
In addition, the broadcast receiving module 111 (Fig. 1) of the mobile terminal 100 is typically configured to receive broadcast signals transmitted by the DMB transmitter 295. As described above, similar arrangements may be implemented for other types of broadcast and multicast signaling.
Fig. 4 also shows several global positioning system (GPS) satellites 300. Such satellites facilitate locating the position of some or all of the mobile terminals 100. Two satellites are shown in Fig. 4, but positioning information may be obtained with more or fewer satellites.
Further, the position location module 115 (Fig. 1) of the mobile terminal 100 is typically configured to cooperate with the satellites 300 to obtain the desired position information. However, other types of position detection technology may alternatively be implemented, such as location technology used in addition to or instead of GPS location technology. Some or all of the GPS satellites 300 may alternatively or additionally be configured to provide satellite DMB transmissions.
In addition, during typical operation of the wireless communication system, the base stations 270 receive sets of reverse-link signals from various mobile terminals 100. The mobile terminals 100 engage in calls, messaging, and other communications.
Each reverse-link signal received by a given base station 270 is processed within that base station 270, and the resulting data is forwarded to an associated BSC 275. The BSC provides call resource allocation and mobility management functionality, including soft handoffs between the base stations 270.
Further, the BSCs 275 also route the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN interfaces with the MSC 280, and the MSC 280 interfaces with the BSCs 275. The BSCs 275 in turn control the base stations 270 to transmit sets of forward-link signals to the mobile terminals 100.
In the following description, a control method applicable to the above-configured mobile terminal 100 is explained with reference to various embodiments. However, the following embodiments may be implemented independently or in combination. In addition, in the following description, it is assumed that the display 151 includes a touch screen. Further, the touch screen or its screen may be indicated by the reference number "400."
In an embodiment of the present invention, the terminal designates the domain (or information search scope) of a database used as a reference for voice command recognition as a domain relating to a particular menu or service. Accordingly, the recognition rate for a voice command is improved, and the overall amount of resources used by the mobile terminal is reduced.
Also, the domain of the database used as a reference for voice recognition may be specified through an environment setting menu of the mobile terminal. In addition, once the voice recognition function is activated, the specified domain is applied automatically.
Hereinafter, it is assumed that the preset domain of the database for voice command recognition includes information relating to the menus currently displayed on the display 151, or information relating to the submenus of one of the displayed menus.
Next, Fig. 5 is a flow chart illustrating a menu control method of a mobile terminal via voice command according to an embodiment of the present invention. Fig. 1 will also be referred to in the following description. As shown in Fig. 5, the controller 180 determines whether the voice recognition function has been activated (S101).
Further, the voice recognition function may be activated by the user selecting a hardware button on the mobile terminal or a soft touch button on the display module 151. The user may also activate the voice recognition function by manipulating a particular menu displayed on the display 151. The voice recognition function may also be activated by the user generating a specific sound or sound effect, via a short-range or long-range wireless signal, or via the user's body information such as a hand gesture or body movement.
In more detail, the specific sound or sound effect may include an impact sound having a level higher than a particular level. Further, the specific sound or sound effect may be detected using a sound level detection algorithm. The sound level detection algorithm is preferably simpler than a voice recognition algorithm and thus consumes fewer resources of the mobile terminal. Also, the sound level detection algorithm (or circuit) may be implemented separately from the voice recognition algorithm or circuit, or may be implemented as a partial function of the voice recognition algorithm.
In addition, the wireless signal may be received through the wireless communication unit 110, and the user's hand gesture or body movement may be received through the sensing unit 140. Accordingly, in an embodiment of the present invention, the wireless communication unit 110, the user input unit 130 and the sensing unit 140 may be referred to as a signal input unit. The voice recognition function may also be terminated in a similar manner.
Having the user physically activate the voice recognition function is particularly advantageous, because the user becomes more aware that he or she is about to use voice commands to control the terminal. That is, because the user must first perform a physical manipulation of the terminal, he or she intuitively recognizes that a voice command or instruction is about to be input into the terminal, and may therefore speak more clearly or slowly to activate a particular function. Thus, for example, because the user speaks more clearly or slowly, the probability of accurately recognizing the voice instruction increases. That is, in an embodiment of the present invention, the voice recognition function is activated by a physical manipulation of a button on the terminal, rather than by speaking to the terminal to activate the voice recognition function.
Further, the controller 180 may start or terminate the activation of the voice recognition function based on, for example, the number of times the user touches a particular button or portion of the touch screen, or the length of time the user touches the particular button or portion of the touch screen. The user may also configure how the controller 180 activates the voice recognition function using an appropriate menu option provided by the present invention. For example, the user may select a menu option on the terminal that includes 1) setting activation of voice recognition based on a number of times X the voice activation button is selected, 2) setting activation of voice recognition based on an amount of time X the voice activation button is selected, and 3) setting activation of voice recognition when buttons X and Y are selected, etc. The user may then enter values of X and Y in order to variably set how the controller 180 determines the voice activation function is activated. Thus, according to embodiments of the present invention, the user actively engages the voice recognition function of his or her mobile terminal, which increases the probability that the controller 180 determines the correct function corresponding to the user's voice instruction, and which also allows the user to tailor the voice activation function to his or her needs.
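The configurable activation policies described above (press count, press duration, button combination) can be sketched as a small policy check. The option names, the `ActivationConfig` class, and the event fields are illustrative assumptions, not taken from the patent text:

```python
# Hypothetical sketch of the user-configurable activation policies for the
# voice recognition function. Names and thresholds are assumptions.

class ActivationConfig:
    """User-chosen policy for starting the voice recognition function."""
    def __init__(self, mode, x=1, y=None):
        self.mode = mode  # "press_count", "press_duration", or "button_combo"
        self.x = x        # count, seconds, or first button id
        self.y = y        # second button id for "button_combo"

def should_activate(config, press_count=0, press_seconds=0.0, buttons=()):
    """Return True when the observed input satisfies the configured policy."""
    if config.mode == "press_count":
        return press_count >= config.x
    if config.mode == "press_duration":
        return press_seconds >= config.x
    if config.mode == "button_combo":
        return config.x in buttons and config.y in buttons
    return False
```

A controller implementing this scheme would evaluate the policy on each input event and only then hand the microphone stream to the recognizer.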
The controller 180 may also maintain the activated state of the voice recognition function while a designated button is being touched or selected, and terminate the voice recognition function when the designated button is released. Alternatively, the controller 180 may maintain the activation of the voice recognition function for a predetermined time interval after the designated button has been touched or selected, and stop or terminate the voice recognition function when the predetermined time interval ends. In yet another embodiment, the controller 180 may store received voice instructions in the memory 160 while the voice recognition function is maintained in the activated state.
In addition, as shown in Fig. 5, the domain of the database used as a reference for recognizing the meaning of the voice command is specified to information relating to a particular function or menu on the terminal (S102). For example, the specified domain of the database may be information relating to the menus currently displayed on the display 151, or information relating to the submenus of one of the displayed menus. Further, because the domain of the database is so specified, the recognition rate for an input voice command is improved. Examples of domains include an e-mail domain, a received-calls domain, a multimedia domain, and so on.
Also, the information relating to the submenus may be configured as data in a database. For example, the information may be configured in the form of keywords, and a plurality of pieces of information may correspond to one function or menu. In addition, the database may be a plurality of databases according to the features of the information, and may be stored in the memory 160.
Further, the information in the databases may advantageously be updated or refreshed through a learning process. Each domain of the respective databases may also be specified as a domain relating to the function or menu currently being output, so as to improve the recognition rate of the voice command. The domain may also change as the menu steps progress.
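The domain specification of step S102 amounts to narrowing the recognizer's vocabulary to the keywords of the menu currently on screen. A minimal sketch, in which the menu names and keyword lists are illustrative assumptions:

```python
# Minimal sketch of restricting the recognition database to a domain tied
# to the currently displayed menu (step S102). Contents are assumptions.

MENU_KEYWORDS = {
    "multimedia": ["broadcast", "camera", "photo album", "game", "picture"],
    "email": ["inbox", "compose", "send", "attachment"],
}

def specify_domain(current_menu, all_keywords=MENU_KEYWORDS):
    """Return only the keywords belonging to the menu on screen, so the
    recognizer searches a smaller, more relevant vocabulary."""
    return all_keywords.get(current_menu, [])
```

Searching the smaller keyword set is what improves both the recognition rate and the resource usage claimed in the text.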
Once the voice recognition function is activated (Yes in S101) and the domain has been specified (S102), the controller 180 determines whether the user has input a voice command (S103). When the controller 180 determines the user has input a voice command (Yes in S103), the controller 180 analyzes the context and content of the voice command or instruction input through the microphone 122 based on the specified database, thereby judging the meaning of the voice command (S104).
Further, the controller 180 may determine the meaning of the voice instruction or command based on a language model and an acoustic model of the accessed domain. In more detail, the language model relates to the words themselves, and the acoustic model corresponds to the way the words are spoken (e.g., the frequency components of the spoken words or phrases). Using the language and acoustic models together with the specified domain and the state of the mobile terminal 100, the controller 180 can efficiently determine the meaning of the input voice instruction or command.
In addition, when the controller 180 stores the input voice command in the memory 160, the controller 180 may begin the process of judging the meaning of the input voice command immediately when the user releases the activation of the voice recognition function, or may perform the voice recognition simultaneously while the voice command is being input.
In addition, if the voice command has not been fully input (No in S103), the controller 180 may perform other functions. For example, if the user performs another action such as touching a menu option or pressing a button on the terminal (Yes in S109), the controller 180 performs the corresponding selected function (S110).
Further, after the meaning of the input voice command is determined in step S104, the controller 180 outputs a result value of the meaning (S105). That is, the result value may include a control signal for executing a menu relating to a function or service corresponding to the determined meaning, for controlling a particular component of the mobile terminal, and so on. The result value may also include data for displaying information relating to the recognized voice command.
The controller may also request the user to confirm whether the output result value is correct (S106). For example, when the voice command has a low recognition rate or is determined to have a plurality of meanings, the controller 180 may output a plurality of menus relating to the respective meanings and then execute the menu selected by the user (S107). Also, the controller 180 may ask the user whether to execute a particular menu having a high recognition rate, and then execute or display the corresponding function or menu according to the user's selection or response.
In addition, the controller 180 may also output a voice message requesting the user to select a particular menu or option, for example, "Do you want to execute the photo album menu? Answer yes or no." The controller 180 then executes or does not execute the function corresponding to the particular menu or option based on the user's response. If the user does not respond within a particular period of time (e.g., five seconds), the controller 180 may also execute the particular menu or option immediately. That is, when there is no response from the user, the controller 180 may treat the absence of a response as an affirmative answer and automatically execute the function or menu.
Further, the user may answer the question from the controller 180 using his or her voice (e.g., yes or no) or via other input units such as hardware or software buttons, or a touch pad. In addition, in step S106, if there is a negative answer from the user (No in S106), that is, if the meaning of the voice command has not been judged accurately, the controller 180 may perform an additional error handling step (S108).
That is, the error handling step may be performed by receiving the input of the voice command again, or by displaying a plurality of menus having a recognition rate above a particular level, or a plurality of menus judged to have similar meanings. The user may then select one of the plurality of menus. Also, when the number of functions or menus having a recognition rate above the particular level is less than a predetermined number (e.g., two), the controller 180 may execute the corresponding function or menu automatically.
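The decision logic of steps S105-S108 can be sketched as a small dispatcher over candidate menus and their recognition rates. The 80% threshold and the auto-execute count of two follow the examples in the text, while the function name and return shape are assumptions:

```python
# Illustrative sketch of the confirmation and error-handling steps
# (S105-S108). Candidate menus carry a recognition rate in [0, 1].

def resolve_command(candidates, threshold=0.8, auto_execute_below=2):
    """candidates: list of (menu_name, recognition_rate) pairs.
    Returns ("execute", menu), ("ask_user", menus), or ("retry", None)."""
    good = [m for m, r in candidates if r >= threshold]
    if not good:
        return ("retry", None)       # S108: request the command again
    if len(good) < auto_execute_below:
        return ("execute", good[0])  # a single confident match: run it
    return ("ask_user", good)        # S106/S107: let the user choose
```

The "retry" branch corresponds to re-prompting the user, and the "ask_user" branch to displaying the plurality of similar-meaning menus.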
Next, Fig. 6A is an overview showing a method of activating the voice recognition function of a mobile terminal according to an embodiment of the present invention. As shown in the display screen 410, the user can activate the voice recognition function by touching a soft button 411. The user can also terminate the voice recognition function by releasing the soft button 411. More specifically, the user may activate the voice recognition function by touching the soft button 411, and continue touching the soft button 411 or a hard button 412 until the voice instruction is completed. That is, the user may release the soft button 411 or the hard button 412 when the voice instruction is completed. Thus, the controller 180 knows both when the voice instruction is to be input and when the voice instruction has been completed. As discussed above, because the user is directly involved in this determination, the accuracy of the interpretation of the input voice command increases.
The controller 180 may also be configured, for example, to recognize the start of the voice activation feature when the user first touches the soft button 411, and then to recognize that the voice instruction has been completed when the user touches the soft button 411 again. Other selection methods are also possible. Further, as shown in the display screen 410 in Fig. 6A, rather than using the soft button 411, the activation and deactivation of voice recognition may be performed by manipulating the hard button 412 on the terminal.
In addition, the soft button 411 shown in the display screen 410 may be a single soft button that the user presses or releases to activate/deactivate the voice recognition function, or may be a menu button that, when selected, produces a menu list such as "1. Start voice activation, 2. Stop voice activation." The soft button 411 may also be displayed during a standby state, for example.
In another example, as shown in the display screen 420, the user can also activate and deactivate the voice recognition function by touching an arbitrary position of the screen. The display screen 430 illustrates yet another example in which the user activates and deactivates the voice recognition function by producing a specific sound or sound effect that is louder than a particular level. For example, the user may clap his or her hands to produce such an impact sound.
Thus, according to an embodiment of the present invention, the voice recognition function may be implemented in two modes. For example, the voice recognition function may be implemented in a first mode for detecting whether a particular sound or sound effect is above a certain level, and a second mode for recognizing a voice command and determining the meaning of the voice command. If the sound or sound effect is above the certain level in the first mode, the second mode is then activated, thereby recognizing the voice command.
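The two-mode design can be sketched as a cheap level gate in front of the expensive recognizer. The sample format (signed amplitudes in [-1, 1]) and the threshold value are assumptions:

```python
# Sketch of the two-mode design: a simple first mode that only measures
# sound level, gating a second mode that performs full recognition.

def sound_level(samples):
    """Crude level estimate: mean absolute amplitude of the frame."""
    return sum(abs(s) for s in samples) / max(len(samples), 1)

def first_mode_gate(samples, threshold=0.5):
    """First mode: return True only when the frame is loud enough to be an
    intentional impact sound (e.g., a hand clap), so the second mode
    (the real recognizer) is worth waking up."""
    return sound_level(samples) > threshold
```

This matches the text's point that the level detector is simpler than the recognition algorithm and so consumes fewer terminal resources while idle.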
The display screen 440 illustrates still another method of the user activating and deactivating the voice recognition function. In this example, the controller 180 is configured to interpret the user's body movements to start or stop the voice activation function. For example, as shown in the display screen 440, the controller 180 may be configured to interpret the user moving his or her hand toward the display as an instruction to activate the voice recognition function, and the user moving the hand away from the display as an instruction to terminate the voice recognition function. Short-range or long-range wireless signals may also be used to start and stop the voice recognition function.
Thus, according to embodiments of the present invention, because the voice activation function is explicitly started and stopped, the voice recognition function is not performed continuously. That is, if the voice recognition function were continuously maintained in the activated state, the amount of resources consumed on the mobile terminal would increase compared to the embodiments of the present invention.
Further, as discussed above with reference to Fig. 5, when the voice recognition function is activated, the controller 180 specifies the domain of the particular database used as a reference for voice command recognition as a domain relating to the menu list on the display 151. Then, if a particular menu is selected from the menu list or executed, the domain of the database may be specified as information relating to the selected menu or to the submenus of that particular menu.
In addition, when the particular menu is selected or executed via a voice command or touch input, the controller 180 may output help information relating to the submenus of the particular menu in the form of a voice message, a pop-up window, or a balloon. For example, as shown in Fig. 6B, when the user selects the "multimedia menu" via a touch or voice operation, the controller 180 displays the information relating to the submenus of the "multimedia menu" (e.g., broadcast, camera, text viewer, game, etc.) as balloon-shaped help information 441. Alternatively, the controller 180 may output a voice signal 442 including the help information. The user can then select one of the displayed help options using a voice command or a touch operation.
Fig. 6C illustrates an embodiment in which the user selects a menu item using his or her body movements (in this example, the user's hand gesture). In more detail, as the user moves his or her finger closer to the menu item 443, the controller 180 displays the submenus 444 relating to the menu 443. The controller 180 may recognize the user's body movement information, for example, via the sensing unit 140. In addition, the displayed help information may be displayed with a transparency or brightness controlled according to the user's distance. That is, as the user's hand gets closer, the displayed items may be further highlighted.
As discussed above, the controller 180 may be configured to determine the start and stop of the voice recognition function based on a variety of methods. For example, the user may select/manipulate a soft or hard button, touch an arbitrary position on the touch screen, etc. The controller 180 may also maintain the activation of the voice recognition function for a predetermined period of time, and then automatically end the activation when that period expires. Also, the controller 180 may maintain the activation only while a particular button or touch operation is performed, and then automatically end the activation when the input is released. The controller 180 may also end the activation when the voice command has not been input for a certain amount of time.
Next, Fig. 7A is a flow chart illustrating a method of recognizing a voice command in a mobile terminal according to an embodiment of the present invention. Referring to Fig. 7A, when the voice recognition function is activated, the controller 180 specifies the domain of the database that can be used as a reference for voice command recognition as a domain relating to the menu displayed on the display 151, or to the submenus of that menu (S201). The user then inputs a voice command (S202), either using the exact menu name or using natural language (e.g., spoken English).
The controller 180 then stores the input voice command in the memory 160 (S203). Further, when the voice command is input under the specified domain, the controller 180 analyzes the context and content of the voice command based on the specified domain using a voice recognition algorithm. Also, the voice command may be converted into text-type information for the analysis (S204) and then stored in a particular database of the memory 160. However, the step of converting the voice command into text-type information may be omitted.
Then, to analyze the context and content of the voice command, the controller 180 detects particular words or keywords of the voice command (S205). Based on the detected words or keywords, the controller 180 analyzes the context and content of the voice command, and determines or judges the meaning of the voice command by referring to the information stored in the particular database.
In addition, as discussed above, the database used as the reference includes the specified domain, and the function or menu corresponding to the judged meaning of the voice command is executed based on that database (S207). Also, because the database for voice recognition is specified to the information relating to the particular menu, the recognition rate and the speed of recognizing the voice command are improved, and the amount of resources used on the terminal is reduced. Here, the recognition rate indicates the degree of matching with the preset name of the particular menu.
The recognition rate of an input voice command may also be judged according to the number of pieces of information relating to the particular function or menu that are contained in the voice command. Therefore, the recognition rate of the input voice command improves when the information precisely matches the particular function or menu (e.g., the menu name) included in the voice command.
In more detail, Fig. 7B is an overview showing a method of recognizing a voice command of a mobile terminal according to an embodiment of the present invention. As shown in Fig. 7B, the user inputs the voice command "I want to see my pictures" as natural language composed of six words. In this example, the recognition rate can be judged based on the number of meaningful words (e.g., "see," "pictures") relating to a particular menu (e.g., the photo album). In addition, the controller 180 can determine whether the words included in the voice command are meaningful words relating to a particular function or menu based on the information stored in the database. For example, meaningless words included in the natural-language voice command that are irrelevant to the particular menu may be the subject ("I"), the particle ("to"), and the possessive pronoun ("my").
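The Fig. 7B example of separating meaningful words from meaningless ones can be sketched as a stop-word filter followed by a keyword match. The stop-word list and keyword map are illustrative assumptions:

```python
# Sketch of judging a recognition rate from the meaningful words of a
# natural-language command, as in the Fig. 7B example.

STOP_WORDS = {"i", "want", "to", "my"}
MENU_KEYWORDS = {"photo album": {"see", "pictures", "photo", "album"}}

def recognition_rate(command, menu):
    """Fraction of the command's meaningful words that match the menu's
    keyword set; a higher value indicates a better match."""
    words = [w for w in command.lower().split() if w not in STOP_WORDS]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in MENU_KEYWORDS.get(menu, set()))
    return hits / len(words)
```

For "I want to see my pictures," the meaningful words "see" and "pictures" both match the photo album domain, giving the command a high recognition rate for that menu.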
Also, natural language is language commonly spoken by people, as opposed to an artificial language. Further, natural language may be processed using a natural language processing algorithm. Natural language may or may not include the exact name relating to a particular menu, which sometimes makes it difficult to recognize a voice command completely and accurately. Therefore, according to embodiments of the present invention, when a voice command has a recognition rate higher than a certain level (e.g., 80%), the controller 180 judges the recognition to be accurate.
In addition, when the controller 180 judges that a plurality of menus have similar meanings, the controller 180 displays the plurality of menus, and the user can select one of the displayed menus so that its function is executed. Further, a menu having a relatively higher recognition rate may be displayed first, or displayed distinctively compared to the other menus.
For example, Fig. 8 is an overview showing a method for displaying menus of a mobile terminal according to their voice recognition rates according to an embodiment of the present invention. As shown in Fig. 8, the menu icon having a higher recognition rate is displayed at a central portion of the display screen 510, or is displayed with a larger size or darker color as shown in the display screen 520. The menu icon having the higher recognition rate may also be displayed first, followed sequentially or in order by the menus with lower recognition rates.
Further, the controller 180 may distinctively display a plurality of menus by changing at least one of the size, position, color and brightness of the menus, or by highlighting the menu having the higher recognition rate in order. The transparency of the menus may also be appropriately changed or controlled.
In addition, as shown in the lower portion of Fig. 8, a menu having a high user selection rate may be updated or set to have a higher recognition rate. That is, the controller 180 stores a history of the user's selections (S301) and performs a learning process (S302), so as to update the particular recognition rate of the menu option that is selected by the user more times than the other menu options (S303). Thus, the number of times a frequently used menu is selected by the user may be applied to the menu's recognition rate. Accordingly, a voice command input in the same or a similar manner in pronunciation or content may have a different recognition rate according to the number of times the user selects the particular menu.
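The selection-history learning process (S301-S303) can be sketched as a small rate-update rule. The bonus size and the 1.0 cap are assumptions, not values from the text:

```python
# Sketch of the learning process of steps S301-S303: each user selection
# nudges that menu's recognition rate upward.

def update_rates(rates, selection_history, bonus_per_use=0.01, cap=1.0):
    """rates: {menu: base recognition rate}; selection_history: list of
    menus the user actually chose. Frequently chosen menus gain a bonus."""
    updated = dict(rates)
    for menu in selection_history:               # S301: stored history
        if menu in updated:                      # S302: learning step
            updated[menu] = min(cap, updated[menu] + bonus_per_use)  # S303
    return updated
```

Under this rule, two menus that start with equal rates diverge over time, so the same spoken command can resolve differently for a user who habitually picks one of them.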
In addition, the controller 180 may also store the times at which the user executes particular functions. For example, a user may check e-mails or missed messages upon waking up every Monday through Friday. This time information may also be used to improve the recognition rate. The state of the terminal (e.g., a standby mode, etc.) may also be used to improve the recognition rate. For example, the user may check e-mails or missed messages when first turning on his or her mobile terminal, when the terminal is opened from a closed position, and so on.
Next, Fig. 9 is an overview showing a method of recognizing a voice command of a mobile terminal according to another embodiment of the present invention. As shown in Fig. 9, the user activates the voice recognition function and inputs the voice command "I want to see my pictures." The controller 180 then specifies the domain of the database for voice command recognition as a domain relating to the displayed submenus. In this example, the controller 180 then interprets the voice command (S401) and displays a plurality of menus having a probability greater than a particular value (e.g., 80%) (S402). As shown in the display screen 610 in Fig. 9, the controller displays four multimedia menus.
The controller 180 also distinctively displays the menu having the highest probability (e.g., the "photo album" menu option 621 in this example). The user can then select any of the displayed menus to execute the function corresponding to the selected menu. In the example shown in Fig. 9, the user selects the photo album menu option 621, and the controller 180 displays the pictures in the selected photo album, as shown in the display screen 620.
Also, as shown in step S402 in the lower portion of Fig. 9, when only a single menu is determined to be above the predetermined probability, the controller 180 may execute the function immediately. That is, when the photo album menu option 621 is determined to be the only menu having a recognition rate or probability above the predetermined threshold, the controller 180 immediately displays the pictures in the photo album without the user having to select the photo album menu option 621, as shown in the display screen 620. In addition, even though a menu has a precise name such as "photo album," the memory 160 may store a plurality of pieces of information relating to that menu, such as "photo, picture, album."
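The last point, storing several pieces of information per menu such as "photo, picture, album," amounts to an alias table that maps spoken words to menus without requiring the exact menu name. The alias sets below are illustrative assumptions:

```python
# Sketch of alias keywords per menu: "photo", "picture", and "album" all
# map to the photo album menu. The table contents are assumptions.

ALIASES = {
    "photo album": {"photo", "picture", "pictures", "album"},
    "camera": {"camera", "shoot", "capture"},
}

def menus_matching(word):
    """Return every menu whose alias set contains the spoken word."""
    return sorted(menu for menu, words in ALIASES.items() if word in words)
```

A command containing "pictures" therefore still resolves to the photo album menu even though the phrase "photo album" was never spoken.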
In addition, as discussed above with reference to Fig. 6B, when a particular menu is selected or executed by a voice command or touch input according to an operating state or mode (e.g., a mode indicating the voice recognition function), the controller 180 may also output help information to the user. Further, the user may set the operating mode for outputting help using an appropriate menu option provided in the environment setting menu. Accordingly, a user may operate the terminal of the present invention without needing or having a high level of technical skill. That is, many elderly users may not be experienced in operating the plurality of different menus provided with a terminal. However, with a terminal of the present invention, a user who is generally unfamiliar with the intricacies of the terminal's user interfaces can easily operate the mobile terminal.
Further, when the controller 180 recognizes the voice command as having a plurality of meanings (that is, when a natural-language voice command does not include a precise menu name, such as when a menu belongs to the "multimedia" category but the command does not include the precise name of one of "camera," "photo album" and "video"), the controller 180 displays a plurality of menus having a recognition rate above a certain value (e.g., 80%).
Next, Fig. 10 is an overview of a plurality of databases used by the controller 180 for recognizing a voice command of a mobile terminal according to an embodiment of the present invention. In this embodiment, the databases store information that the controller 180 uses to judge the meaning of a voice command, and may be any number of databases according to the features of the information. Further, the respective databases configured according to the features of the information may be updated through a continuous learning process under the control of the controller 180.
For example, the learning process attempts to match the user's voice with a corresponding word. For example, when the Korean word "Saeng-il" (meaning "birthday") spoken by the user is misrecognized as "Saeng-hwal" (meaning "life"), the user corrects the word "Saeng-hwal" into "Saeng-il." Accordingly, the same pronunciation subsequently input by the user will be recognized as "Saeng-il."
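The "Saeng-il"/"Saeng-hwal" correction can be sketched as an update to a pronunciation-to-word map. Representing the mapping as a plain dictionary is an assumption for illustration:

```python
# Sketch of the correction-driven learning step: once the user corrects a
# misrecognized word, the same pronunciation maps to the corrected word on
# later inputs. The dictionary representation is an assumption.

def apply_correction(pronunciation_map, pronunciation, corrected_word):
    """Record the user's correction so the pronunciation is recognized as
    the corrected word from now on."""
    updated = dict(pronunciation_map)
    updated[pronunciation] = corrected_word
    return updated
```

In a real recognizer this update would adjust model weights rather than overwrite a single entry, but the observable behavior, the same input resolving to the corrected word, is the same.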
As shown in FIG. 10, the databases according to information characteristics include a first database 161, a second database 162, a third database 163 and a fourth database 164. In this embodiment, the first database 161 stores voice information for recognizing a voice input through the microphone in units of phonemes, syllables or morphemes. The second database 162 stores information (for example, grammar, pronunciation accuracy, sentence structure, etc.) for judging the overall meaning of a voice command based on the recognized voice information. The third database 163 stores information relating to menus for functions or services of the mobile terminal, and the fourth database 164 stores messages or voice information to be output from the mobile terminal so as to receive the user's confirmation about the judged meaning of the voice command.
In addition, the third database 163 can be specified as information relating to menus of a specific category according to the domain preset for voice command recognition. Also, each database may store sound (pronunciation) information, and phonemes, syllables, morphemes, words, keywords or sentences corresponding to the pronunciation information. Accordingly, the controller 180 can determine or judge the meaning of a voice command by using at least one of the plurality of databases 161 to 164, and execute a menu relating to a function or service corresponding to the judged meaning of the voice command.
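How the four databases chain together can be sketched as plain dictionaries, one per database in FIG. 10. All entries (the romanized pronunciation, verb, menu name and prompt) are hypothetical illustrations, not data from the patent.

```python
# The four databases of FIG. 10, reduced to dictionaries (entries hypothetical).
db1_sounds = {"kae-meo-ra": "camera"}                  # 161: pronunciation -> word
db2_meanings = {("run", "camera"): "execute camera"}   # 162: word context -> overall meaning
db3_menus = {"execute camera": "camera menu"}          # 163: meaning -> menu/function
db4_prompts = {"execute camera": "Start the camera?"}  # 164: meaning -> confirmation prompt

def judge(sound, verb):
    """Chain the databases: recognized sound -> word -> overall meaning,
    returning the menu to execute and the confirmation prompt to output."""
    word = db1_sounds.get(sound)
    meaning = db2_meanings.get((verb, word))
    return db3_menus.get(meaning), db4_prompts.get(meaning)

print(judge("kae-meo-ra", "run"))  # ('camera menu', 'Start the camera?')
```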
Next, FIG. 11 is an overview showing a state in which the voice recognition function of a mobile terminal according to an embodiment of the present invention is being executed. As shown, while the controller 180 is executing the voice recognition function, the controller 180 displays a specific indicator or icon 500 that informs the user that the voice recognition function is being executed. The controller 180 can also output a sound or message to inform the user that the voice recognition function is being executed.
In addition, the above embodiments relate to recognizing a user's voice instruction. However, the present invention is also applicable to the user performing an additional input function while the voice instruction is being recognized. For example, voice recognition and touch input, voice recognition and key input, or voice recognition and touch/key input can be performed simultaneously.
In addition, the controller 180 can prevent the voice recognition function from being executed in a particular mode or menu, or in a particular operating state. In addition, audio information (for example, a voice announcement or guide information) or video information (for example, the indicator 500 in FIG. 11) indicating that the voice recognition function is being applied can be displayed in the voice recognition mode, menu or operating state. Also, information about the use of the voice recognition function can be provided to the user by outputting help information.
FIG. 12 is an overview showing a method for processing subcommands relating to a specific menu of a mobile terminal by a voice command according to an embodiment of the present invention. In this embodiment, it is assumed that the user has activated the voice recognition function.
Then, as shown on the left side of FIG. 12, the user touches an alarm/schedule icon, and the controller 180 displays a pop-up help menu listing the available functions (for example, 1) alarm, 2) schedule, 3) to-do and 4) memo). The user then inputs the voice command "to-do", and the controller 180 judges the meaning of the voice command and displays a plurality of menus determined to correspond to the voice command, as shown in the display screen 611.
That is, as shown in the display screen 611, the controller 180 displays four events relating to the to-do function. The user then inputs the voice command "Select number 2", and the controller 180 selects the second option (Meeting 1). The user then inputs the voice command "I want to delete it". The controller 180 then displays a pop-up menu 613 requesting the user to confirm whether or not to delete the entry. The user then inputs the voice command "Yes", and the controller 180 deletes the entry, as shown in the display screen 616 of FIG. 12.
In addition, if there is no response from the user, the controller 180 can automatically execute the subcommand by judging the absence of a response as an affirmative answer (ACK). The controller 180 also outputs a voice message 615 informing the user that the item has been deleted. Also, instead of selecting the first menu (alarm/schedule) by touch, the user can issue another voice command. Also, when the user first selects the alarm/schedule icon, the controller 180 can output a voice message 617 informing the user that the corresponding task is being executed.
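The confirmation behavior described above, where silence past the waiting period counts as an affirmative answer, can be sketched as follows. The function signature and the "yes"/"y" matching are illustrative assumptions.

```python
def interpret_reply(response, wait_expired=False):
    """Interpret the user's answer to a confirmation prompt; no reply
    after the waiting period is treated as an affirmative answer (ACK)."""
    if response is None:
        return wait_expired  # silence -> affirmative once the wait runs out
    return response.strip().lower() in ("yes", "y")

print(interpret_reply("Yes"))                     # explicit confirmation -> True
print(interpret_reply(None, wait_expired=True))   # silence treated as 'yes' -> True
print(interpret_reply("no"))                      # explicit refusal -> False
```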
In addition, as discussed above, when a specific menu is executed, the controller 180 specifies the domain of the database used as a reference for voice command recognition as a domain relating to the executed menu. That is, the domain includes information relating to submenus of the specific menu, or information relating to subcommands that can be executed from the specific menu.
Next, FIG. 13 is an overview showing a method for searching a subway map by a voice command in a mobile terminal according to an embodiment of the present invention. In this example, it is again assumed that the user has activated the voice recognition function. In addition, it is also assumed that the controller 180 executes a specific menu relating to displaying a subway map based on the user's voice command or a manipulation using another input unit.
That is, the controller 180 displays a subway map as shown in the display screen 621. As discussed above, when the specific menu is executed, the controller 180 can specify the domain of the database used as a reference for voice command recognition as a domain relating to the executed menu (for example, the names of subway stations and distance (time) information between stations). In addition, the domain includes information relating to submenus of the specific menu, or a domain relating to subcommands that can be executed from the specific menu.
The controller 180 then outputs a voice message 626 requesting the user to input a departure or arrival station. The user then selects two stations on the display screen 621. That is, the controller 180 receives two stations 622 and 623 selected from the displayed subway map, between which the user wants to know the amount of travel time required. When prompted, the user can select the two stations by voice command (that is, by saying the departure and arrival stations) or by touching the two stations 622 and 623. Other methods of selecting the two stations are also possible. After the user selects the two stations, the controller 180 outputs a voice message 624 naming the two selected stations (that is, "Isu and Seoul station have been selected") via the loudspeaker. Also, instead of outputting a voice message, the controller 180 can display a pop-up window containing the requested or input information.
In addition, when the two stations are selected, the controller 180 can also output help information. For example, as shown in the display screen 621 in FIG. 13, the controller displays balloon-shaped pop-up windows listing the station names and subway line colors as help. The user then requests the time required to travel between the two selected stations. The user can request this information by inputting the voice instruction "I want to know how long it takes from Isu to Seoul station".
The controller 180 then detects meaningful words in the domain relating to processing the subway map information (for example, "how long", "takes", "Isu", "Seoul station") so as to analyze the context and content of the voice command. Based on the analyzed information, the controller 180 determines that the voice command has the meaning of requesting the travel time information between the two subway stations Isu and Seoul station.
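The meaningful-word detection step can be sketched as intersecting the command's words with the active domain's keyword set. The keyword set below is a hypothetical stand-in for the subway map domain; the patent does not prescribe this representation.

```python
def extract_meaningful(command, domain_keywords):
    """Keep only the words of a command that appear in the active
    domain's keyword set, before judging the command's overall meaning."""
    words = command.lower().replace("?", "").split()
    return [w for w in words if w in domain_keywords]

# Hypothetical keyword set for the subway map domain.
subway_domain = {"how", "long", "takes", "isu", "seoul", "station"}
cmd = "I want to know how long it takes from Isu to Seoul station"
print(extract_meaningful(cmd, subway_domain))
# ['how', 'long', 'takes', 'isu', 'seoul', 'station']
```

Restricting detection to the preset domain is what lets the controller ignore filler words such as "I want to know" while still capturing the request's intent.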
In addition, when the controller 180 judges the meaning of the voice instruction, the controller 180 can first ask the user to confirm whether the judged meaning of the voice command is accurate. The controller 180 then displays the two stations on the subway map, along with the distance (or time) between the two stations, the number of stops between them, and so on, and outputs a voice message 627 informing the user of the result, as shown in the display screen 625 in FIG. 13. In addition, as mentioned above, if the user does not respond to the confirmation request within a specific period of time, the controller 180 can interpret the absence of a response as an affirmative answer (ACK) and provide the result of the requested service.
Next, FIG. 14 is an overview showing a method for reproducing a multimedia file by a voice command in a mobile terminal according to an embodiment of the present invention. The following description assumes that the user has input an activation control signal and the controller 180 has started activating the voice recognition function. It is also assumed that the controller 180 executes a specific menu relating to a multimedia reproduction menu by receiving an input of a voice command or a user's manipulation using another input unit.
That is, as shown in the display screen 631, the controller 180 displays a list of songs that the user can select to play. Thus, in the present invention, a multimedia file desired by the user can be searched for directly by a voice command and then reproduced. More specifically, once the multimedia reproduction menu is executed, the controller 180 specifies the domain of the database used as a reference for voice command recognition as a domain relating to the executed menu.
As mentioned above, the domain includes information relating to submenus of the multimedia reproduction menu, information relating to subcommands that can be executed from the multimedia reproduction menu, or information relating to multimedia files (for example, file names, playback time, copyright owner, etc.).
In addition, the controller 180 can display the multimedia file list by receiving an input of a voice command or a user's manipulation using another input unit. In the example of FIG. 14, as shown in the display screen 631, in a state where one file has been selected from the file list, the user inputs a natural language voice command (for example, "Let's play this song").
Once the voice command is input, the controller 180 detects meaningful words (for example, "play", "this song") relating to submenus or subcommands in the domain used for processing the selected menu. In addition, the controller 180 judges the meaning of the voice command by analyzing the detected words and the overall context and content of the voice command.
Once the meaning of the voice command is judged, the controller 180 receives the user's confirmation about whether the judged meaning of the voice command is accurate. For example, as shown in FIG. 14, the controller 180 displays a pop-up window 633 requesting the user to say "Yes" or "No" about playing the selected song. The controller can also output a voice message 632 asking the user whether Song 2 is the song to be played. The user can then say "Yes", and the controller 180 plays the indicated song, as shown in the display screen 634.
Alternatively, the controller 180 can automatically play the selected song without requesting the user's confirmation of the selection. The user can also use appropriate menu options to set as a default whether the controller 180 requests or does not request confirmation of a selected task. In addition, if there is no response from the user, the controller 180 can automatically execute the judged voice command by judging the absence of a response as an affirmative answer (ACK).
Thus, in this embodiment, a file to be reproduced is selected, and a reproduction command for the selected file is input by a voice command. However, when the user knows the file name, the file name can be directly input by a voice command from an upper-level menu.
Next, FIG. 15 is an overview showing a method for sending e-mail or a text message by a voice command in a mobile terminal according to an embodiment of the present invention. This embodiment is again described assuming that an activation control signal has been input, the controller 180 has started activating the voice recognition function, and the controller 180 executes a specific menu (for example, a mail/message send/receive menu) by receiving an input of a voice command or a user's manipulation using another input unit.
More specifically, once the mail (or message) send/receive menu is executed, the controller 180 specifies the domain of the database used as a reference for voice command recognition as a domain relating to the executed menu. The domain includes information relating to submenus of the mail/message send/receive menu, information relating to subcommands that can be executed from the mail/message send/receive menu, and information relating to sent/received mail/messages (for example, sender, receiver, sending/receiving time, title, etc.).
The controller 180 also displays a mail/message send/receive list by receiving an input of a voice command or a user's manipulation using another input unit. As shown in the display screen 641, the user inputs the voice instruction "I want to reply". The controller 180 then displays the received messages to which the user can reply, as shown in the display screen 645. In this example, as shown in the display screen 645, in a state where one mail/message has been selected from the mail/message list, the user inputs a command in his or her natural language (for example, "Reply to this message").
In addition, once the voice command is input, the controller 180 detects meaningful words (for example, "reply", "this message") in the domain relating to processing a reply to the selected mail/message. The controller 180 then judges the meaning of the voice command (execute the mail/message reply menu) by analyzing the detected words and the overall context and content of the voice command.
Once the meaning of the voice command is judged, the controller 180 can receive the user's confirmation about whether the judged meaning of the voice command is accurate. For example, a voice message 642 or a text-type message 643 can be output for the user's confirmation. When the message for the user's confirmation is output, the user can reply by voice or through another input unit. If there is no response from the user, the controller 180 can automatically execute the function corresponding to the judged meaning by judging the absence of a response as an affirmative answer (ACK). Then, when the mail/message reply menu is executed, the controller 180 automatically writes the address/phone number of the selected other party in the mail/message writing window 644.
Thus, in this embodiment, the mail/message to be replied to is first selected, and a reply command for the selected mail/message is input using a voice command. However, when the user knows information about the other party, a reply to the other party's mail/message can be directly input by a voice command.
In addition, the embodiment shown in FIG. 15 can be modified to correspond to sending text messages. More specifically, the controller 180 includes software that converts the user's voice into text, so the user can speak into the terminal what he or she wants to say, and the controller 180 converts the input voice into a text message. The controller 180 can also display the converted text to the user, so the user can confirm that the conversion is acceptable. The user can then request the terminal to send the text message to a desired party.
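The voice-to-text message flow described above can be sketched as follows. The `transcribe` callable is a stand-in for the terminal's speech-to-text software, which this sketch does not implement; the recipient name and transcribed sentence are hypothetical.

```python
def compose_text_message(transcribe, recipient, audio):
    """Convert spoken input into a draft text message that the user
    can review and confirm before the terminal sends it."""
    body = transcribe(audio)
    return {"to": recipient, "body": body, "confirmed": False}

# Hypothetical transcriber output, for illustration only.
msg = compose_text_message(lambda audio: "Running ten minutes late",
                           "James", b"<audio frames>")
print(msg["body"])  # Running ten minutes late
```

Returning the draft unconfirmed mirrors the embodiment's review step: the converted text is shown to the user first, and sending happens only on request.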
This modified embodiment is particularly advantageous because manually inputting a text message is a laborious and tedious process. For many different reasons, many users want to send a text message rather than call a party, but do not want to go through the laborious process of manually pressing multiple keys to send a single text message. The modified embodiment of the present invention allows the user to input a desired text message using his or her voice, and then send the text message to the desired party.
FIG. 16 is an overview showing a method for performing a phone call by a voice command in a mobile terminal according to an embodiment of the present invention. Similar to the above embodiments, this embodiment also assumes that the user has input an activation control signal, the controller 180 has activated the voice recognition function, and the controller 180 executes a specific menu relating to phone calls (for example, a phone book menu or a list of recently received calls) by receiving an input of a voice command or a user's manipulation using another input unit.
Once the menu relating to phone calls is executed, the controller 180 specifies the domain of the database used as a reference for voice command recognition as a domain relating to phone calls. In addition, the domain includes information relating to originated calls, incoming calls, missed calls and the like, as well as information relating to each call (for example, origination time, incoming time, sender, receiver, call duration, call frequency, etc.).
In addition, the controller 180 displays a phone call list by receiving an input of a voice command or a user's manipulation using another input unit. That is, the user inputs a voice command using his or her natural language (for example, "I want to see the received calls"), as shown in the display screen 711.
Once the voice command is input, the controller 180 detects meaningful words (for example, "see", "received", "phone", "calls") relating to phone calls in the domain, and judges that the voice command has the meaning of "output the received calls" by analyzing the detected words and the overall context and content of the voice command. Once the meaning of the voice command is judged, the controller 180 outputs a list of the received calls, as shown in the display screen 712.
In addition, the user then inputs the voice command "Call this person" in a state where one item has been selected from the output list. As a result, the controller 180 judges that the voice command has the meaning of "call the other party of the selected received call". The controller 180 then receives the user's confirmation about whether the judged meaning of the voice command is accurate. That is, the controller 180 can output a voice message 713 or a text-type message 715.
The user can also reply by voice or through another input unit. As mentioned above, if there is no response from the user, the controller 180 can automatically execute the function corresponding to the judged meaning by judging the absence of a response as an affirmative answer (ACK). The controller 180 also outputs a message 714 indicating that a call connection is in progress.
Thus, in this embodiment, the other party is selected from the phone call list, and a call command to the selected party is input by a voice command. However, when the user already knows information about the other party, a call to that person can be made directly by a voice command.
Next, FIG. 17 is an overview showing a method for using phone book information by a voice command in a mobile terminal according to an embodiment of the present invention. The description here makes the same assumptions as in the other embodiments above. That is, it is assumed that the controller 180 starts activating the voice recognition function once activation control information is input, and that the controller 180 selects or executes a specific menu (for example, a phone book menu) by receiving an input of a voice command or a user's manipulation using another input unit, as shown in the display screen 720.
Once the phone book menu is executed, the controller 180 specifies the domain of the database used as a reference for voice command recognition as a domain relating to submenus of the phone book menu or subcommands that can be executed from the phone book menu. In addition, the domain is specified in order to improve the recognition rate, but need not necessarily be specified.
In addition, in a standby state or in a state where a menu relating to the phone book is selected, the user inputs a voice command in his or her natural language (for example, "Edit James", "Add James", "Find James", "Call James", "I want to send a message to James"). Once a voice command is input, the controller 180 detects meaningful words relating to phone calls in the domain, and judges the respective meanings of the voice commands by analyzing the detected words and the overall context and content of each voice command.
Once the respective meanings of the voice commands are judged, the controller 180 executes the functions or menus corresponding to the respective voice commands, as shown in the display screens 722 to 724. In addition, before execution, the controller 180 can receive the user's confirmation about whether the judged meaning of the voice command is accurate. As mentioned above, a voice message or a text-type message can be output for the user's confirmation.
In addition, when the message for the user's confirmation is output, the user can reply by voice or through another input unit. If there is no response from the user, the controller 180 can automatically execute the function corresponding to the judged meaning by judging the absence of a response as an affirmative answer (ACK).
Next, FIG. 18 is an overview showing a method for changing a background screen by a voice command in a mobile terminal according to an embodiment of the present invention. This description again assumes that the controller 180 starts activating the voice recognition function once activation control information is input, and that a specific menu (for example, a photo album menu) is executed by receiving an input of a voice command or a user's manipulation using another input unit.
The photo album menu can be executed through multi-step submenus by a voice command input or by using another input unit. Also, the photo album menu can be directly executed by a natural language voice command (for example, "I want to see my photo album"), as shown in the display screen 731. According to the judged meaning of the voice command, the controller 180 executes the photo album menu and outputs a list of photos, as shown in the display screen 732. The controller 180 then receives one photo selected from the output album list.
In this state, if a user voice command is input (for example, "Change my wallpaper using this picture"), the controller 180 detects meaningful information (for example, "change", "wallpaper") relating to submenus or subcommands of the executed menu. The controller 180 then judges the meaning of the voice command by analyzing the detected words and the overall context and content of the voice command. That is, the controller 180 judges that the voice command has the meaning of "change the background screen to the selected photo".
Once the meaning of the voice command is judged, the controller 180 displays the background screen corresponding to the selected photo, and receives the user's confirmation about whether the judged meaning of the voice command is accurate. Here, a voice message 733 or a text-type message 734 can be output for the user's confirmation. The judged voice command can also be directly executed without the user's confirmation, according to a high recognition rate or a preset environment setting menu.
When the message for the user's confirmation is output, the user can reply by voice or through another input unit. If there is no response from the user, the controller 180 can automatically execute the function corresponding to the judged voice command by judging the absence of a response as an affirmative answer (ACK).
In order to change the background screen, the photo album menu can be executed first, as shown in this embodiment of the present invention. Conversely, after the background screen menu is executed, the photo desired by the user can be searched for to be used for the change.
FIG. 19 is an overview showing a method for reproducing a multimedia file by a voice command in a mobile terminal according to an embodiment of the present invention. Similar to the above embodiments, this description assumes that once an activation control signal is input, the controller 180 starts activating the voice recognition function, and a specific menu (for example, a multimedia reproduction menu) is executed by receiving an input of a voice command or a user's manipulation using another input unit.
To reproduce a multimedia file, the user ordinarily executes a specific menu, selects one of the submenus of the specific menu to display a file list, and selects a file from the file list to reproduce it. However, in the present invention, a multimedia file desired by the user can be directly searched for by a voice command and then reproduced.
For example, if a specific voice command (for example, "Move to the Beatles album") is input after the voice recognition function is activated, the controller 180 judges the meaning of the voice command by analyzing the overall context and content of the voice command, as shown in the display screen 741. Based on the analyzed information, the controller 180 executes a specific function or menu, or displays a file list by moving to a specific folder, as shown in the display screen 742.
When a voice command (for example, "Play this song" or "Play number 3") is input after one file is selected from the file list, the controller 180 judges the meaning of the voice command by analyzing the overall context and content of the voice command. In addition, the function or menu corresponding to the meaning of the voice command can be directly executed according to a high recognition rate or a preset environment setting menu.
Once the meaning of the voice command is judged, the controller 180 receives the user's confirmation about whether the judged meaning of the voice command is accurate. Here, a text-type message or a voice message 743 can be output for the user's confirmation. When the message for the user's confirmation is output, the user can reply by voice or through another input unit. If there is no response from the user, the controller 180 can automatically execute the function of the judged voice command by judging the absence of a response as an affirmative answer (ACK). The controller 180 then plays the selected song, as shown in the display screen 744.
Thus, in this embodiment, a file to be reproduced is selected, and a reproduction command for the selected file is input by a voice command. However, when the user knows the file name, the file name can be directly input by voice from an upper-level menu to be reproduced.
Thus, according to the embodiments of the present invention, in a state where the voice recognition function is activated, an input voice command is converted into a specific form, and its context and content are compared with a database specified as a reference domain. In addition, a result value corresponding to the judged meaning of the voice command is output to a specific component of the mobile terminal.
The mobile terminal of the present invention can control menus relating to its specific functions or services by judging the meaning of an input voice command based on its context and content. In addition, the mobile terminal of the present invention can improve the voice recognition rate by specifying the domain used for voice recognition as a domain relating to a specific menu or service according to its operating state or operating mode.
Also, the mobile terminal of the present invention can, through one or more of its user interfaces (UI), simultaneously select or execute a menu relating to a specific function or service even while the voice recognition function is activated, so as to detect the user's manipulation. In addition, the mobile terminal of the present invention can control menus relating to specific functions or services via voice commands, regardless of the user's level of skill, by providing help information about the input of voice commands according to its operating state or operating mode.
In addition, the plurality of domains can include at least two of the following domains: an e-mail domain corresponding to e-mails sent and received on the mobile terminal, a scheduling task domain corresponding to schedule events assigned on the mobile terminal, a contact domain corresponding to contacts on the mobile terminal, a phone book domain corresponding to phone numbers stored on the mobile terminal, a map domain corresponding to map information provided by the mobile terminal, a photo domain corresponding to photos stored on the mobile terminal, a message domain corresponding to messages sent and received on the mobile terminal, a multimedia domain corresponding to multimedia functions executed on the mobile terminal, an external device domain corresponding to external devices that the mobile terminal can be connected to, a call history domain corresponding to calls sent and received on the mobile terminal, and a settings domain corresponding to setting functions executed on the mobile terminal.
In addition, the predetermined threshold of the recognition rate can be set by the manufacturer of the mobile terminal or by the user of the mobile terminal.
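The domain list above lends itself to a simple enumeration. This is an illustrative sketch; the identifiers are assumptions, and the patent does not prescribe any particular representation.

```python
from enum import Enum

class Domain(Enum):
    """The recognition domains enumerated in the description."""
    EMAIL = "e-mail"
    SCHEDULE = "scheduling task"
    CONTACT = "contact"
    PHONE_BOOK = "phone book"
    MAP = "map"
    PHOTO = "photo"
    MESSAGE = "message"
    MULTIMEDIA = "multimedia"
    EXTERNAL_DEVICE = "external device"
    CALL_HISTORY = "call history"
    SETTINGS = "settings"

print(len(Domain))  # 11 domains listed in this paragraph
```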
In addition, the above embodiments can be implemented in a computer-readable medium using, for example, computer software, hardware, or some combination thereof. For a hardware implementation, the above embodiments can be implemented in one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, other electronic units designed to perform the functions described herein, or a selective combination thereof.
For a software implementation, the embodiments described herein can be implemented with separate software modules, such as procedures and functions, each of which performs one or more of the functions and operations described herein. The software code can be implemented with a software application written in any suitable programming language, can be stored in a memory (for example, the memory 160), and can be executed by a controller or processor (for example, the controller 180).
In addition, the mobile terminal 100 can be implemented in a variety of different configurations. Examples of such configurations include folder-type, slide-type, bar-type, rotational-type, swing-type and combinations thereof.
Those skilled in the art will appreciate that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, the present invention is intended to cover all such modifications and variations, provided they come within the scope of the appended claims and their equivalents.

Claims (28)

1. A mobile terminal, comprising:
an input unit configured to receive an input to activate a voice recognition function on the mobile terminal;
a memory configured to store a plurality of domains related to menus and operations of the mobile terminal; and
a controller configured to access a specific domain included in the plurality of domains of the memory based on the received input for activating the voice recognition function, to recognize user speech based on a language model and an acoustic model of the accessed domain, and to determine at least one menu and operation of the mobile terminal based on the accessed specific domain and the recognized user speech.
2. The mobile terminal of claim 1, wherein, when the input to activate the voice recognition function is received while the mobile terminal is in a specific menu or operation, the accessed specific domain corresponds to the specific menu or operation.
3. The mobile terminal of claim 2, wherein the specific menu or operation comprises at least one of a multimedia menu or operation, a contacts menu or operation, a messaging menu or operation, a voice menu or operation, an organizer menu or operation, a screen menu or operation, a utilities menu or operation, a camera menu or operation, and a settings menu or operation.
4. The mobile terminal of claim 1, wherein the controller is further configured to determine a recognition rate indicating how accurately the determined menu and operation correspond to the input user speech.
5. The mobile terminal of claim 4, further comprising:
a display unit configured to display information,
wherein the controller is further configured to output, on the display unit, all menus and operations of the mobile terminal that are determined to have a recognition rate higher than a predetermined threshold, based on the accessed specific domain and the recognized user speech.
6. The mobile terminal of claim 5, wherein the input unit is further configured to receive an input voice command for selecting one of the displayed menus and operations, and the controller is configured to recognize the input voice command and to output information asking whether the recognized input voice command is accurate.
7. The mobile terminal of claim 5, wherein the controller is further configured to output, on the display unit, said all menus and operations of the mobile terminal in an order from a higher recognition rate to a lower recognition rate, based on the accessed specific domain and the recognized user speech having the recognition rate higher than the predetermined threshold.
8. The mobile terminal of claim 5, wherein the predetermined threshold is set by a manufacturer of the mobile terminal or by a user of the mobile terminal.
9. The mobile terminal of claim 5, wherein the controller is further configured to distinguishably display, on the display unit, the menu or operation having the highest recognition rate by controlling at least one of a size, a position, a color, a brightness, and a highlighting of the menu or operation.
10. The mobile terminal of claim 4, wherein the controller is further configured to determine a number of times a specific menu or operation on the terminal was previously selected, and to adjust the recognition rate of the specific menu or operation based on the determined number of times the specific menu or operation was previously selected.
11. The mobile terminal of claim 1, wherein the input unit comprises at least one of: 1) a touch soft button that is touched to activate the voice recognition function, 2) a hard button that is pressed or manipulated to activate the voice recognition function, 3) an arbitrary position of a touch screen included in the input unit that is touched to activate the voice recognition function, 4) an impact sound that is input to activate the voice recognition function, 5) a local area radio signal or a remote area radio signal, and 6) a body information signal from a user.
12. The mobile terminal of claim 1, further comprising:
a first database configured to store voice or pronunciation information used by the controller to recognize the input user speech;
a second database configured to store word, keyword, or sentence information used by the controller to recognize the input user speech;
a third database configured to store information related to functions or menus of the mobile terminal; and
a fourth database configured to store help information to be output to notify the user that the controller is attempting to determine a meaning of the input user speech.
13. The mobile terminal of claim 1, wherein the controller is further configured to output audio or video information indicating that the voice recognition function is in an activated state.
14. The mobile terminal of claim 1, wherein the plurality of domains comprise at least two of the following domains: an e-mail domain corresponding to e-mails sent and received on the mobile terminal, a scheduling task domain corresponding to scheduling events assigned on the mobile terminal, a contacts domain corresponding to contacts on the mobile terminal, a phonebook domain corresponding to telephone numbers stored on the mobile terminal, a map domain corresponding to map information provided by the mobile terminal, a photo domain corresponding to photos stored on the mobile terminal, a message domain corresponding to messages sent and received on the mobile terminal, a multimedia domain corresponding to multimedia functions performed on the mobile terminal, an external device domain corresponding to external devices connectable to the mobile terminal, a call history domain corresponding to calls sent and received on the mobile terminal, and a settings domain corresponding to setting functions performed on the mobile terminal.
15. A method of controlling a mobile terminal, the method comprising:
receiving an input to activate a voice recognition function on the mobile terminal;
accessing a specific domain among a plurality of domains stored in a memory of the mobile terminal, based on the received input to activate the voice recognition function;
recognizing input user speech based on a language model and an acoustic model of the accessed domain; and
outputting at least one menu and operation of the mobile terminal based on the accessed specific domain and the recognized user speech.
16. The method of claim 15, wherein, when the input to activate the voice recognition function is received while the mobile terminal is in a specific menu or operation, the accessed specific domain corresponds to the specific menu or operation.
17. The method of claim 16, wherein the specific menu or operation comprises at least one of a multimedia menu or operation, a contacts menu or operation, a messaging menu or operation, a voice menu or operation, an organizer menu or operation, a screen menu or operation, a utilities menu or operation, a camera menu or operation, and a settings menu or operation.
18. The method of claim 15, further comprising:
determining at least one menu and operation of the mobile terminal based on the accessed specific domain and the recognized user speech; and
determining a recognition rate indicating how accurately the determined menu and operation correspond to the input speech.
19. The method of claim 18, further comprising:
outputting, on a display unit of the mobile terminal, all menus and operations of the mobile terminal that are determined to have a recognition rate higher than a predetermined threshold, based on the accessed specific domain and the recognized user speech.
20. The method of claim 19, further comprising:
receiving an input voice command for selecting one of the displayed menus and operations;
recognizing the input voice command; and
outputting information asking whether the recognized input voice command is accurate.
21. The method of claim 19, further comprising:
outputting, on the display unit, said all menus and operations of the mobile terminal in an order from a higher recognition rate to a lower recognition rate, based on the accessed specific domain and the recognized user speech having the recognition rate higher than the predetermined threshold.
22. The method of claim 19, wherein the predetermined threshold is set by a manufacturer of the mobile terminal or by a user of the mobile terminal.
23. The method of claim 19, further comprising:
distinguishably displaying, on the display unit, the menu or operation having the highest recognition rate by controlling at least one of a size, a position, a color, a brightness, and a highlighting of the menu or operation.
24. The method of claim 18, further comprising:
determining a number of times a specific menu or operation on the terminal was previously selected, and adjusting the recognition rate of the specific menu or operation based on the determined number of times the specific menu or operation was previously selected.
25. The method of claim 15, wherein the receiving step comprises at least one of: 1) touching a touch soft button to activate the voice recognition function, 2) pressing or manipulating a hard button to activate the voice recognition function, 3) touching an arbitrary position of a touch screen included in an input unit to activate the voice recognition function, 4) inputting an impact sound to activate the voice recognition function, 5) receiving a local area radio signal or a remote area radio signal, and 6) receiving a body information signal from a user.
26. The method of claim 15, further comprising:
storing, in a first database, voice or pronunciation information used to recognize the input user speech;
storing, in a second database, word, keyword, or sentence information used to recognize the input user speech;
storing, in a third database, information related to functions or menus of the mobile terminal; and
storing, in a fourth database, help information to be output to notify the user that a meaning of the input user speech is being determined.
27. The method of claim 15, further comprising:
outputting audio or video information indicating that the voice recognition function is in an activated state.
28. The method of claim 15, wherein the plurality of domains comprise at least two of the following domains: an e-mail domain corresponding to e-mails sent and received on the mobile terminal, a scheduling task domain corresponding to scheduling events assigned on the mobile terminal, a contacts domain corresponding to contacts on the mobile terminal, a phonebook domain corresponding to telephone numbers stored on the mobile terminal, a map domain corresponding to map information provided by the mobile terminal, a photo domain corresponding to photos stored on the mobile terminal, a message domain corresponding to messages sent and received on the mobile terminal, a multimedia domain corresponding to multimedia functions performed on the mobile terminal, an external device domain corresponding to external devices connectable to the mobile terminal, a call history domain corresponding to calls sent and received on the mobile terminal, and a settings domain corresponding to setting functions performed on the mobile terminal.
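Claims 4 through 10 (and their method counterparts 18 through 24) describe determining a recognition rate for each candidate menu, adjusting it by how often that menu was previously selected, discarding candidates below a predetermined threshold, and displaying the rest from higher to lower rate. A rough sketch of that pipeline follows; the scores, the threshold value, and the linear frequency boost are invented for illustration, as the patent does not specify any formula:

```python
# Hypothetical sketch of the recognition-rate handling in claims 4-10.
# The boost formula and default threshold are assumptions, not taken
# from the patent.

def rank_candidates(scores, selection_counts, threshold=0.5, boost=0.05):
    """Adjust each candidate menu's recognition rate by how often it was
    previously selected (claim 10), drop candidates at or below the
    threshold (claim 5), and sort from higher to lower rate (claim 7)."""
    adjusted = {
        menu: round(min(1.0, rate + boost * selection_counts.get(menu, 0)), 2)
        for menu, rate in scores.items()
    }
    kept = [(menu, rate) for menu, rate in adjusted.items() if rate > threshold]
    return sorted(kept, key=lambda mr: mr[1], reverse=True)

scores = {"call John": 0.62, "play music": 0.48, "send message": 0.55}
counts = {"play music": 2}
print(rank_candidates(scores, counts))
# [('call John', 0.62), ('play music', 0.58), ('send message', 0.55)]
```

Note how the frequency boost lifts "play music" above the threshold it would otherwise miss, which is the practical effect of claim 10's selection-count adjustment.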
CN2008101279100A 2008-04-08 2008-07-02 Mobile terminal and menu control method thereof Active CN101557432B (en)

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
KR1020080032841 2008-04-08
KR1020080032843 2008-04-08
KR1020080032843A KR101521908B1 (en) 2008-04-08 2008-04-08 Mobile terminal and its menu control method
KR10-2008-0032841 2008-04-08
KR1020080032841A KR20090107364A (en) 2008-04-08 2008-04-08 Mobile terminal and its menu control method
KR10-2008-0032843 2008-04-08
KR1020080033350A KR101521909B1 (en) 2008-04-10 2008-04-10 Mobile terminal and its menu control method
KR1020080033350 2008-04-10
KR10-2008-0033350 2008-04-10

Publications (2)

Publication Number Publication Date
CN101557432A true CN101557432A (en) 2009-10-14
CN101557432B CN101557432B (en) 2013-06-19

Family

ID=41175373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101279100A Active CN101557432B (en) 2008-04-08 2008-07-02 Mobile terminal and menu control method thereof

Country Status (2)

Country Link
KR (1) KR20090107364A (en)
CN (1) CN101557432B (en)

Cited By (142)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101931701A (en) * 2010-08-25 2010-12-29 宇龙计算机通信科技(深圳)有限公司 Method, system and mobile terminal for prompting contact information in communication process
CN102056021A (en) * 2009-11-04 2011-05-11 李峰 Chinese and English command-based man-machine interactive system and method
CN102467336A (en) * 2010-11-19 2012-05-23 联想(北京)有限公司 Electronic equipment and object selection method thereof
CN102685307A (en) * 2011-03-15 2012-09-19 中兴通讯股份有限公司 Method, device and system for processing command information
CN102792764A (en) * 2010-02-10 2012-11-21 惠普发展公司,有限责任合伙企业 Mobile device having plurality of input modes
CN103064530A (en) * 2012-12-31 2013-04-24 华为技术有限公司 Input processing method and device
CN103135893A (en) * 2011-12-02 2013-06-05 波音公司 Point of use verified aircraft assembly time collection
CN103366743A (en) * 2012-03-30 2013-10-23 北京千橡网景科技发展有限公司 Voice-command operation method and device
CN103514882A (en) * 2012-06-30 2014-01-15 北京百度网讯科技有限公司 Voice identification method and system
CN103593081A (en) * 2012-08-17 2014-02-19 上海博泰悦臻电子设备制造有限公司 Control method of vehicle device and voice function
CN103593134A (en) * 2012-08-17 2014-02-19 上海博泰悦臻电子设备制造有限公司 Control method of vehicle device and voice function
CN103677261A (en) * 2012-09-20 2014-03-26 三星电子株式会社 Context aware service provision method and apparatus of user equipment
CN103699293A (en) * 2013-12-02 2014-04-02 联想(北京)有限公司 Operation method and electronic equipment
CN103885661A (en) * 2012-12-20 2014-06-25 联想(北京)有限公司 Control method and control device
CN103995657A (en) * 2013-02-19 2014-08-20 Lg电子株式会社 Mobile terminal and control method thereof
CN104021788A (en) * 2013-03-01 2014-09-03 联发科技股份有限公司 Voice control device and voice control method
CN104049722A (en) * 2013-03-11 2014-09-17 联想(北京)有限公司 Information processing method and electronic equipment
CN104077105A (en) * 2013-03-29 2014-10-01 联想(北京)有限公司 Information processing method and electronic device
CN104160372A (en) * 2012-02-24 2014-11-19 三星电子株式会社 Method and apparatus for controlling lock/unlock state of terminal through voice recognition
CN104169837A (en) * 2012-02-17 2014-11-26 Lg电子株式会社 Method and apparatus for smart voice recognition
CN104239043A (en) * 2014-09-04 2014-12-24 百度在线网络技术(北京)有限公司 Instruction execution method and device
CN104471639A (en) * 2012-07-20 2015-03-25 微软公司 Voice and gesture identification reinforcement
CN104715754A (en) * 2015-03-05 2015-06-17 北京华丰亨通科贸有限公司 Method and device for rapidly responding to voice commands
CN104796527A (en) * 2014-01-17 2015-07-22 Lg电子株式会社 Mobile terminal and controlling method thereof
CN105094331A (en) * 2015-07-27 2015-11-25 联想(北京)有限公司 Information processing method and electronic device
CN105190746A (en) * 2013-05-07 2015-12-23 高通股份有限公司 Method and apparatus for detecting a target keyword
CN105208204A (en) * 2015-08-27 2015-12-30 北京羽乐创新科技有限公司 Communication service processing method and apparatus
CN105379234A (en) * 2013-06-08 2016-03-02 苹果公司 Application gateway for providing different user interfaces for limited distraction and non-limited distraction contexts
CN105573582A (en) * 2015-12-14 2016-05-11 魅族科技(中国)有限公司 Display method and terminal
CN105679315A (en) * 2016-03-22 2016-06-15 谢奇 Voice-activated and voice-programmed control method and control system
CN105976157A (en) * 2016-04-25 2016-09-28 中兴通讯股份有限公司 Task creating method and task creating device
CN106683675A (en) * 2017-02-08 2017-05-17 张建华 Control method and voice operating system
CN103885596B (en) * 2014-03-24 2017-05-24 联想(北京)有限公司 Information processing method and electronic device
CN107544827A (en) * 2017-08-23 2018-01-05 金蝶软件(中国)有限公司 The method and relevant apparatus of a kind of funcall
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US10083690B2 (en) 2014-05-30 2018-09-25 Apple Inc. Better resolution when referencing to concepts
US10108612B2 (en) 2008-07-31 2018-10-23 Apple Inc. Mobile device having human language translation capability with positional feedback
CN109658926A (en) * 2018-11-28 2019-04-19 维沃移动通信有限公司 A kind of update method and mobile terminal of phonetic order
US10303715B2 (en) 2017-05-16 2019-05-28 Apple Inc. Intelligent automated assistant for media exploration
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US10311871B2 (en) 2015-03-08 2019-06-04 Apple Inc. Competing devices responding to voice triggers
US20190172248A1 (en) 2012-05-11 2019-06-06 Semiconductor Energy Laboratory Co., Ltd. Electronic device, storage medium, program, and displaying method
US10332518B2 (en) 2017-05-09 2019-06-25 Apple Inc. User interface for correcting recognition errors
CN109976702A (en) * 2019-03-20 2019-07-05 青岛海信电器股份有限公司 A kind of audio recognition method, device and terminal
US10354652B2 (en) 2015-12-02 2019-07-16 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10381016B2 (en) 2008-01-03 2019-08-13 Apple Inc. Methods and apparatus for altering audio output signals
US10390213B2 (en) 2014-09-30 2019-08-20 Apple Inc. Social reminders
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
US10403283B1 (en) 2018-06-01 2019-09-03 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US10417405B2 (en) 2011-03-21 2019-09-17 Apple Inc. Device access using voice authentication
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
US10417344B2 (en) 2014-05-30 2019-09-17 Apple Inc. Exemplar-based natural language processing
US10431204B2 (en) 2014-09-11 2019-10-01 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10438595B2 (en) 2014-09-30 2019-10-08 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10453443B2 (en) 2014-09-30 2019-10-22 Apple Inc. Providing an indication of the suitability of speech recognition
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US10497365B2 (en) 2014-05-30 2019-12-03 Apple Inc. Multi-command single utterance input method
US10496705B1 (en) 2018-06-03 2019-12-03 Apple Inc. Accelerated task performance
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10529332B2 (en) 2015-03-08 2020-01-07 Apple Inc. Virtual assistant activation
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US10580409B2 (en) 2016-06-11 2020-03-03 Apple Inc. Application integration with a digital assistant
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
CN111078175A (en) * 2019-12-25 2020-04-28 上海擎感智能科技有限公司 Mail processing method, mobile terminal and computer storage medium
US10643611B2 (en) 2008-10-02 2020-05-05 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US10657961B2 (en) 2013-06-08 2020-05-19 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10684703B2 (en) 2018-06-01 2020-06-16 Apple Inc. Attention aware virtual assistant dismissal
US10692504B2 (en) 2010-02-25 2020-06-23 Apple Inc. User profiling for voice input processing
US10699717B2 (en) 2014-05-30 2020-06-30 Apple Inc. Intelligent assistant for home automation
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10714117B2 (en) 2013-02-07 2020-07-14 Apple Inc. Voice trigger for a digital assistant
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10741185B2 (en) 2010-01-18 2020-08-11 Apple Inc. Intelligent automated assistant
US10748546B2 (en) 2017-05-16 2020-08-18 Apple Inc. Digital assistant services based on device capabilities
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
US10755703B2 (en) 2017-05-11 2020-08-25 Apple Inc. Offline personal assistant
US10769385B2 (en) 2013-06-09 2020-09-08 Apple Inc. System and method for inferring user intent from speech inputs
US10789945B2 (en) 2017-05-12 2020-09-29 Apple Inc. Low-latency intelligent automated assistant
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10839159B2 (en) 2018-09-28 2020-11-17 Apple Inc. Named entity normalization in a spoken dialog system
CN111968637A (en) * 2020-08-11 2020-11-20 北京小米移动软件有限公司 Operation mode control method and device of terminal equipment, terminal equipment and medium
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US10942703B2 (en) 2015-12-23 2021-03-09 Apple Inc. Proactive assistance based on dialog communication between devices
US10942702B2 (en) 2016-06-11 2021-03-09 Apple Inc. Intelligent device arbitration and control
US10956666B2 (en) 2015-11-09 2021-03-23 Apple Inc. Unconventional virtual assistant interactions
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
US11010561B2 (en) 2018-09-27 2021-05-18 Apple Inc. Sentiment prediction from textual data
US11010127B2 (en) 2015-06-29 2021-05-18 Apple Inc. Virtual assistant for media playback
US11023513B2 (en) 2007-12-20 2021-06-01 Apple Inc. Method and apparatus for searching using an active ontology
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US11048474B2 (en) 2012-09-20 2021-06-29 Samsung Electronics Co., Ltd. Context aware service provision method and apparatus of user device
US11048473B2 (en) 2013-06-09 2021-06-29 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US11069336B2 (en) 2012-03-02 2021-07-20 Apple Inc. Systems and methods for name pronunciation
US11069347B2 (en) 2016-06-08 2021-07-20 Apple Inc. Intelligent automated assistant for media exploration
US11127397B2 (en) 2015-05-27 2021-09-21 Apple Inc. Device voice control
US11133008B2 (en) 2014-05-30 2021-09-28 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US11140099B2 (en) 2019-05-21 2021-10-05 Apple Inc. Providing message response suggestions
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US11170166B2 (en) 2018-09-28 2021-11-09 Apple Inc. Neural typographical error modeling via generative adversarial networks
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
US11217251B2 (en) 2019-05-06 2022-01-04 Apple Inc. Spoken notifications
US11227589B2 (en) 2016-06-06 2022-01-18 Apple Inc. Intelligent list reading
US11231904B2 (en) 2015-03-06 2022-01-25 Apple Inc. Reducing response latency of intelligent automated assistants
US11237797B2 (en) 2019-05-31 2022-02-01 Apple Inc. User activity shortcut suggestions
US11269678B2 (en) 2012-05-15 2022-03-08 Apple Inc. Systems and methods for integrating third party services with a digital assistant
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US11289073B2 (en) 2019-05-31 2022-03-29 Apple Inc. Device text to speech
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
US11307752B2 (en) 2019-05-06 2022-04-19 Apple Inc. User configurable task triggers
US11314370B2 (en) 2013-12-06 2022-04-26 Apple Inc. Method for extracting salient dialog usage from live data
US11348573B2 (en) 2019-03-18 2022-05-31 Apple Inc. Multimodality in digital assistant systems
US11350253B2 (en) 2011-06-03 2022-05-31 Apple Inc. Active transport based notifications
US11360641B2 (en) 2019-06-01 2022-06-14 Apple Inc. Increasing the relevance of new available information
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
US11423908B2 (en) 2019-05-06 2022-08-23 Apple Inc. Interpreting spoken requests
US11462215B2 (en) 2018-09-28 2022-10-04 Apple Inc. Multi-modal inputs for voice commands
US11468282B2 (en) 2015-05-15 2022-10-11 Apple Inc. Virtual assistant in a communication session
US11475884B2 (en) 2019-05-06 2022-10-18 Apple Inc. Reducing digital assistant latency when a language is incorrectly determined
US11475898B2 (en) 2018-10-26 2022-10-18 Apple Inc. Low-latency multi-speaker speech recognition
US11488406B2 (en) 2019-09-25 2022-11-01 Apple Inc. Text detection using global geometry estimators
US11496600B2 (en) 2019-05-31 2022-11-08 Apple Inc. Remote execution of machine-learned models
US11495218B2 (en) 2018-06-01 2022-11-08 Apple Inc. Virtual assistant operation in multi-device environments
US11638059B2 (en) 2019-01-04 2023-04-25 Apple Inc. Content playback on multiple devices
US11928604B2 (en) 2005-09-08 2024-03-12 Apple Inc. Method and apparatus for building an intelligent automated assistant

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102012774B1 (en) * 2012-11-19 2019-08-21 엘지전자 주식회사 Mobil terminal and Operating Method for the Same
KR102344045B1 (en) * 2015-04-21 2021-12-28 삼성전자주식회사 Electronic apparatus for displaying screen and method for controlling thereof

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6449496B1 (en) * 1999-02-08 2002-09-10 Qualcomm Incorporated Voice recognition user interface for telephone handsets
US7280970B2 (en) * 1999-10-04 2007-10-09 Beepcard Ltd. Sonic/ultrasonic authentication device

Cited By (199)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11928604B2 (en) 2005-09-08 2024-03-12 Apple Inc. Method and apparatus for building an intelligent automated assistant
US11023513B2 (en) 2007-12-20 2021-06-01 Apple Inc. Method and apparatus for searching using an active ontology
US10381016B2 (en) 2008-01-03 2019-08-13 Apple Inc. Methods and apparatus for altering audio output signals
US10108612B2 (en) 2008-07-31 2018-10-23 Apple Inc. Mobile device having human language translation capability with positional feedback
US11348582B2 (en) 2008-10-02 2022-05-31 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US10643611B2 (en) 2008-10-02 2020-05-05 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
CN102056021A (en) * 2009-11-04 2011-05-11 李峰 Chinese and English command-based man-machine interactive system and method
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10741185B2 (en) 2010-01-18 2020-08-11 Apple Inc. Intelligent automated assistant
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
CN102792764A (en) * 2010-02-10 2012-11-21 惠普发展公司,有限责任合伙企业 Mobile device having plurality of input modes
US9413869B2 (en) 2010-02-10 2016-08-09 Qualcomm Incorporated Mobile device having plurality of input modes
US10692504B2 (en) 2010-02-25 2020-06-23 Apple Inc. User profiling for voice input processing
CN101931701A (en) * 2010-08-25 2010-12-29 宇龙计算机通信科技(深圳)有限公司 Method, system and mobile terminal for prompting contact information in communication process
CN102467336B (en) * 2010-11-19 2013-10-30 联想(北京)有限公司 Electronic equipment and object selection method thereof
CN102467336A (en) * 2010-11-19 2012-05-23 联想(北京)有限公司 Electronic equipment and object selection method thereof
CN102685307A (en) * 2011-03-15 2012-09-19 中兴通讯股份有限公司 Method, device and system for processing command information
US10417405B2 (en) 2011-03-21 2019-09-17 Apple Inc. Device access using voice authentication
US11350253B2 (en) 2011-06-03 2022-05-31 Apple Inc. Active transport based notifications
CN103135893A (en) * 2011-12-02 2013-06-05 The Boeing Co. Point of use verified aircraft assembly time collection
CN104169837A (en) * 2012-02-17 2014-11-26 LG Electronics Inc. Method and apparatus for smart voice recognition
CN104169837B (en) * 2012-02-17 2017-03-22 LG Electronics Inc. Method and apparatus for smart voice recognition
CN108270903A (en) * 2012-02-24 2018-07-10 Samsung Electronics Co., Ltd. Method and apparatus for controlling lock/unlock state of terminal through voice recognition
US9852278B2 (en) 2012-02-24 2017-12-26 Samsung Electronics Co., Ltd. Method and apparatus for controlling lock/unlock state of terminal through voice recognition
US20170153868A1 (en) 2012-02-24 2017-06-01 Samsung Electronics Co., Ltd. Method and apparatus for controlling lock/unlock state of terminal through voice recognition
CN104160372A (en) * 2012-02-24 2014-11-19 Samsung Electronics Co., Ltd. Method and apparatus for controlling lock/unlock state of terminal through voice recognition
US10216916B2 (en) 2012-02-24 2019-02-26 Samsung Electronics Co., Ltd. Method and apparatus for controlling lock/unlock state of terminal through voice recognition
US11069336B2 (en) 2012-03-02 2021-07-20 Apple Inc. Systems and methods for name pronunciation
CN103366743A (en) * 2012-03-30 2013-10-23 Beijing Qianxiang Wangjing Technology Development Co., Ltd. Voice-command operation method and device
US10467797B2 (en) 2012-05-11 2019-11-05 Semiconductor Energy Laboratory Co., Ltd. Electronic device, storage medium, program, and displaying method
US10719972B2 (en) 2012-05-11 2020-07-21 Semiconductor Energy Laboratory Co., Ltd. Electronic device, storage medium, program, and displaying method
US10380783B2 (en) 2012-05-11 2019-08-13 Semiconductor Energy Laboratory Co., Ltd. Electronic device, storage medium, program, and displaying method
TWI662465B (en) * 2012-05-11 2019-06-11 Semiconductor Energy Laboratory Co., Ltd. Electronic device, storage medium, program, and displaying method
US20190172248A1 (en) 2012-05-11 2019-06-06 Semiconductor Energy Laboratory Co., Ltd. Electronic device, storage medium, program, and displaying method
US11216041B2 (en) 2012-05-11 2022-01-04 Semiconductor Energy Laboratory Co., Ltd. Electronic device, storage medium, program, and displaying method
US11815956B2 (en) 2012-05-11 2023-11-14 Semiconductor Energy Laboratory Co., Ltd. Electronic device, storage medium, program, and displaying method
US11269678B2 (en) 2012-05-15 2022-03-08 Apple Inc. Systems and methods for integrating third party services with a digital assistant
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
CN103514882B (en) * 2012-06-30 2017-11-10 Beijing Baidu Netcom Science and Technology Co., Ltd. Speech recognition method and system
CN103514882A (en) * 2012-06-30 2014-01-15 Beijing Baidu Netcom Science and Technology Co., Ltd. Voice identification method and system
CN104471639A (en) * 2012-07-20 2015-03-25 Microsoft Corp. Speech and gesture recognition enhancement
CN103593134A (en) * 2012-08-17 2014-02-19 Shanghai Pateo Yuezhen Electronic Equipment Manufacturing Co., Ltd. Control method of vehicle device and voice function
CN103593134B (en) * 2012-08-17 2018-01-23 Shanghai Pateo Yuezhen Electronic Equipment Manufacturing Co., Ltd. Control method of vehicle device and voice function
CN103593081B (en) * 2012-08-17 2017-11-07 Shanghai Pateo Yuezhen Electronic Equipment Manufacturing Co., Ltd. Control method of vehicle device and voice function
CN103593081A (en) * 2012-08-17 2014-02-19 Shanghai Pateo Yuezhen Electronic Equipment Manufacturing Co., Ltd. Control method of vehicle device and voice function
CN103677261B (en) * 2012-09-20 2019-02-01 Samsung Electronics Co., Ltd. Context aware service provision method and apparatus of user device
US11048474B2 (en) 2012-09-20 2021-06-29 Samsung Electronics Co., Ltd. Context aware service provision method and apparatus of user device
US11907615B2 (en) 2012-09-20 2024-02-20 Samsung Electronics Co., Ltd. Context aware service provision method and apparatus of user device
CN103677261A (en) * 2012-09-20 2014-03-26 Samsung Electronics Co., Ltd. Context aware service provision method and apparatus of user equipment
US10042603B2 (en) 2012-09-20 2018-08-07 Samsung Electronics Co., Ltd. Context aware service provision method and apparatus of user device
US10684821B2 (en) 2012-09-20 2020-06-16 Samsung Electronics Co., Ltd. Context aware service provision method and apparatus of user device
CN103885661A (en) * 2012-12-20 2014-06-25 Lenovo (Beijing) Co., Ltd. Control method and control device
CN103064530A (en) * 2012-12-31 2013-04-24 Huawei Technologies Co., Ltd. Input processing method and device
US10714117B2 (en) 2013-02-07 2020-07-14 Apple Inc. Voice trigger for a digital assistant
US10978090B2 (en) 2013-02-07 2021-04-13 Apple Inc. Voice trigger for a digital assistant
CN103995657B (en) * 2013-02-19 2017-10-24 LG Electronics Inc. Mobile terminal and control method thereof
CN103995657A (en) * 2013-02-19 2014-08-20 LG Electronics Inc. Mobile terminal and control method thereof
US9928028B2 (en) 2013-02-19 2018-03-27 Lg Electronics Inc. Mobile terminal with voice recognition mode for multitasking and control method thereof
US9691382B2 (en) 2013-03-01 2017-06-27 Mediatek Inc. Voice control device and method for deciding response of voice control according to recognized speech command and detection output derived from processing sensor data
CN104021788A (en) * 2013-03-01 2014-09-03 MediaTek Inc. Voice control device and voice control method
CN104021788B (en) * 2013-03-01 2017-08-01 MediaTek Inc. Voice control device and voice control method
CN104049722B (en) * 2013-03-11 2017-07-25 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN104049722A (en) * 2013-03-11 2014-09-17 Lenovo (Beijing) Co., Ltd. Information processing method and electronic equipment
CN104077105A (en) * 2013-03-29 2014-10-01 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN104077105B (en) * 2013-03-29 2018-04-27 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN105190746B (en) * 2013-05-07 2019-03-15 Qualcomm Inc. Method and apparatus for detecting a target keyword
CN105190746A (en) * 2013-05-07 2015-12-23 Qualcomm Inc. Method and apparatus for detecting a target keyword
CN105379234A (en) * 2013-06-08 2016-03-02 Apple Inc. Application gateway for providing different user interfaces for limited distraction and non-limited distraction contexts
CN105379234B (en) * 2013-06-08 2019-04-19 Apple Inc. Application gateway for providing different user interfaces for limited distraction and non-limited distraction contexts
US10657961B2 (en) 2013-06-08 2020-05-19 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10769385B2 (en) 2013-06-09 2020-09-08 Apple Inc. System and method for inferring user intent from speech inputs
US11048473B2 (en) 2013-06-09 2021-06-29 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
CN103699293A (en) * 2013-12-02 2014-04-02 Lenovo (Beijing) Co., Ltd. Operation method and electronic equipment
US11314370B2 (en) 2013-12-06 2022-04-26 Apple Inc. Method for extracting salient dialog usage from live data
CN104796527A (en) * 2014-01-17 2015-07-22 LG Electronics Inc. Mobile terminal and controlling method thereof
US9578160B2 (en) 2014-01-17 2017-02-21 Lg Electronics Inc. Mobile terminal and controlling method thereof
CN104796527B (en) * 2014-01-17 2017-08-11 LG Electronics Inc. Mobile terminal and controlling method thereof
CN103885596B (en) * 2014-03-24 2017-05-24 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
US10497365B2 (en) 2014-05-30 2019-12-03 Apple Inc. Multi-command single utterance input method
US10699717B2 (en) 2014-05-30 2020-06-30 Apple Inc. Intelligent assistant for home automation
US10417344B2 (en) 2014-05-30 2019-09-17 Apple Inc. Exemplar-based natural language processing
US10878809B2 (en) 2014-05-30 2020-12-29 Apple Inc. Multi-command single utterance input method
US10714095B2 (en) 2014-05-30 2020-07-14 Apple Inc. Intelligent assistant for home automation
US11133008B2 (en) 2014-05-30 2021-09-28 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US10657966B2 (en) 2014-05-30 2020-05-19 Apple Inc. Better resolution when referencing to concepts
US10083690B2 (en) 2014-05-30 2018-09-25 Apple Inc. Better resolution when referencing to concepts
US11257504B2 (en) 2014-05-30 2022-02-22 Apple Inc. Intelligent assistant for home automation
CN104239043A (en) * 2014-09-04 2014-12-24 Baidu Online Network Technology (Beijing) Co., Ltd. Instruction execution method and device
CN104239043B (en) * 2014-09-04 2017-10-31 Baidu Online Network Technology (Beijing) Co., Ltd. Instruction execution method and device
US10431204B2 (en) 2014-09-11 2019-10-01 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10453443B2 (en) 2014-09-30 2019-10-22 Apple Inc. Providing an indication of the suitability of speech recognition
US10390213B2 (en) 2014-09-30 2019-08-20 Apple Inc. Social reminders
US10438595B2 (en) 2014-09-30 2019-10-08 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
CN104715754A (en) * 2015-03-05 2015-06-17 Beijing Huafeng Hengtong Technology and Trade Co., Ltd. Method and device for rapidly responding to voice commands
US11231904B2 (en) 2015-03-06 2022-01-25 Apple Inc. Reducing response latency of intelligent automated assistants
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US11087759B2 (en) 2015-03-08 2021-08-10 Apple Inc. Virtual assistant activation
US10930282B2 (en) 2015-03-08 2021-02-23 Apple Inc. Competing devices responding to voice triggers
US10311871B2 (en) 2015-03-08 2019-06-04 Apple Inc. Competing devices responding to voice triggers
US10529332B2 (en) 2015-03-08 2020-01-07 Apple Inc. Virtual assistant activation
US11468282B2 (en) 2015-05-15 2022-10-11 Apple Inc. Virtual assistant in a communication session
US11127397B2 (en) 2015-05-27 2021-09-21 Apple Inc. Device voice control
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10681212B2 (en) 2015-06-05 2020-06-09 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US11010127B2 (en) 2015-06-29 2021-05-18 Apple Inc. Virtual assistant for media playback
CN105094331B (en) * 2015-07-27 2018-08-07 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN105094331A (en) * 2015-07-27 2015-11-25 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN105208204A (en) * 2015-08-27 2015-12-30 Beijing Yule Innovation Technology Co., Ltd. Communication service processing method and apparatus
US10956666B2 (en) 2015-11-09 2021-03-23 Apple Inc. Unconventional virtual assistant interactions
US10354652B2 (en) 2015-12-02 2019-07-16 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
CN105573582A (en) * 2015-12-14 2016-05-11 Meizu Technology (China) Co., Ltd. Display method and terminal
US10942703B2 (en) 2015-12-23 2021-03-09 Apple Inc. Proactive assistance based on dialog communication between devices
CN105679315A (en) * 2016-03-22 2016-06-15 Xie Qi Voice-activated and voice-programmed control method and control system
CN105976157A (en) * 2016-04-25 2016-09-28 ZTE Corp. Task creating method and task creating device
WO2017185504A1 (en) * 2016-04-25 2017-11-02 ZTE Corp. Method and device for creating task
US11227589B2 (en) 2016-06-06 2022-01-18 Apple Inc. Intelligent list reading
US11069347B2 (en) 2016-06-08 2021-07-20 Apple Inc. Intelligent automated assistant for media exploration
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10580409B2 (en) 2016-06-11 2020-03-03 Apple Inc. Application integration with a digital assistant
US11152002B2 (en) 2016-06-11 2021-10-19 Apple Inc. Application integration with a digital assistant
US10942702B2 (en) 2016-06-11 2021-03-09 Apple Inc. Intelligent device arbitration and control
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US10553215B2 (en) 2016-09-23 2020-02-04 Apple Inc. Intelligent automated assistant
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
US11656884B2 (en) 2017-01-09 2023-05-23 Apple Inc. Application integration with a digital assistant
CN106683675A (en) * 2017-02-08 2017-05-17 Zhang Jianhua Control method and voice operating system
US10332518B2 (en) 2017-05-09 2019-06-25 Apple Inc. User interface for correcting recognition errors
US10741181B2 (en) 2017-05-09 2020-08-11 Apple Inc. User interface for correcting recognition errors
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
US10847142B2 (en) 2017-05-11 2020-11-24 Apple Inc. Maintaining privacy of personal information
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
US10755703B2 (en) 2017-05-11 2020-08-25 Apple Inc. Offline personal assistant
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US11405466B2 (en) 2017-05-12 2022-08-02 Apple Inc. Synchronization and task delegation of a digital assistant
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US10789945B2 (en) 2017-05-12 2020-09-29 Apple Inc. Low-latency intelligent automated assistant
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
US11217255B2 (en) 2017-05-16 2022-01-04 Apple Inc. Far-field extension for digital assistant services
US10909171B2 (en) 2017-05-16 2021-02-02 Apple Inc. Intelligent automated assistant for media exploration
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US10748546B2 (en) 2017-05-16 2020-08-18 Apple Inc. Digital assistant services based on device capabilities
US10303715B2 (en) 2017-05-16 2019-05-28 Apple Inc. Intelligent automated assistant for media exploration
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
CN107544827A (en) * 2017-08-23 2018-01-05 Kingdee Software (China) Co., Ltd. Function call method and related apparatus
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
US11495218B2 (en) 2018-06-01 2022-11-08 Apple Inc. Virtual assistant operation in multi-device environments
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
US10720160B2 (en) 2018-06-01 2020-07-21 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US10684703B2 (en) 2018-06-01 2020-06-16 Apple Inc. Attention aware virtual assistant dismissal
US10403283B1 (en) 2018-06-01 2019-09-03 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US11009970B2 (en) 2018-06-01 2021-05-18 Apple Inc. Attention aware virtual assistant dismissal
US10984798B2 (en) 2018-06-01 2021-04-20 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US10496705B1 (en) 2018-06-03 2019-12-03 Apple Inc. Accelerated task performance
US10944859B2 (en) 2018-06-03 2021-03-09 Apple Inc. Accelerated task performance
US10504518B1 (en) 2018-06-03 2019-12-10 Apple Inc. Accelerated task performance
US11010561B2 (en) 2018-09-27 2021-05-18 Apple Inc. Sentiment prediction from textual data
US11170166B2 (en) 2018-09-28 2021-11-09 Apple Inc. Neural typographical error modeling via generative adversarial networks
US10839159B2 (en) 2018-09-28 2020-11-17 Apple Inc. Named entity normalization in a spoken dialog system
US11462215B2 (en) 2018-09-28 2022-10-04 Apple Inc. Multi-modal inputs for voice commands
US11475898B2 (en) 2018-10-26 2022-10-18 Apple Inc. Low-latency multi-speaker speech recognition
CN109658926A (en) * 2018-11-28 2019-04-19 Vivo Mobile Communication Co., Ltd. Voice instruction update method and mobile terminal
US11638059B2 (en) 2019-01-04 2023-04-25 Apple Inc. Content playback on multiple devices
US11348573B2 (en) 2019-03-18 2022-05-31 Apple Inc. Multimodality in digital assistant systems
WO2020186712A1 (en) * 2019-03-20 2020-09-24 Hisense Visual Technology Co., Ltd. Voice recognition method and apparatus, and terminal
CN109976702A (en) * 2019-03-20 2019-07-05 Qingdao Hisense Electronics Co., Ltd. Speech recognition method, device and terminal
US11217251B2 (en) 2019-05-06 2022-01-04 Apple Inc. Spoken notifications
US11423908B2 (en) 2019-05-06 2022-08-23 Apple Inc. Interpreting spoken requests
US11307752B2 (en) 2019-05-06 2022-04-19 Apple Inc. User configurable task triggers
US11475884B2 (en) 2019-05-06 2022-10-18 Apple Inc. Reducing digital assistant latency when a language is incorrectly determined
US11140099B2 (en) 2019-05-21 2021-10-05 Apple Inc. Providing message response suggestions
US11289073B2 (en) 2019-05-31 2022-03-29 Apple Inc. Device text to speech
US11496600B2 (en) 2019-05-31 2022-11-08 Apple Inc. Remote execution of machine-learned models
US11360739B2 (en) 2019-05-31 2022-06-14 Apple Inc. User activity shortcut suggestions
US11237797B2 (en) 2019-05-31 2022-02-01 Apple Inc. User activity shortcut suggestions
US11360641B2 (en) 2019-06-01 2022-06-14 Apple Inc. Increasing the relevance of new available information
US11488406B2 (en) 2019-09-25 2022-11-01 Apple Inc. Text detection using global geometry estimators
CN111078175A (en) * 2019-12-25 2020-04-28 Shanghai Qinggan Intelligent Technology Co., Ltd. Mail processing method, mobile terminal and computer storage medium
CN111968637A (en) * 2020-08-11 2020-11-20 Beijing Xiaomi Mobile Software Co., Ltd. Operation mode control method and device of terminal equipment, terminal equipment and medium

Also Published As

Publication number Publication date
CN101557432B (en) 2013-06-19
KR20090107364A (en) 2009-10-13

Similar Documents

Publication Publication Date Title
CN101557432B (en) Mobile terminal and menu control method thereof
CN101557651B (en) Mobile terminal and menu control method thereof
US9900414B2 (en) Mobile terminal and menu control method thereof
CN101605171B (en) Mobile terminal and text correcting method in the same
RU2412463C2 (en) Mobile communication terminal and menu navigation method for said terminal
KR101466027B1 (en) Mobile terminal and call content management method thereof
KR101462930B1 (en) Mobile terminal and video communication control method thereof
US9225831B2 (en) Mobile terminal having auto answering function and auto answering method for use in the mobile terminal
CN101971250B (en) Mobile electronic device with active speech recognition
CN101729656B (en) Mobile terminal and control method thereof
CN101604521B (en) Mobile terminal and method for recognizing voice thereof
US20100009719A1 (en) Mobile terminal and method for displaying menu thereof
CN104978868A (en) Stop arrival reminding method and stop arrival reminding device
KR20120091495A (en) Method for controlling using voice action and the mobile terminal
CN105489220A (en) Method and device for recognizing speech
KR20090115599A (en) Mobile terminal and information processing method thereof
KR101521909B1 (en) Mobile terminal and menu control method thereof
CN104794074B (en) External equipment recognition methods and device
CN104660819B (en) Mobile device and the method for accessing file in mobile device
KR101451661B1 (en) Mobile terminal and menu control method
KR101521908B1 (en) Mobile terminal and menu control method thereof
CN106528886A (en) Information processing method and device, and terminal
KR101631913B1 (en) Mobile terminal and method for controlling the same
KR20100054038A (en) Terminal and method for controlling the same

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant