KR20130080380A - Electronic apparatus and method for controlling electronic apparatus thereof - Google Patents

Electronic apparatus and method for controlling electronic apparatus thereof Download PDF

Info

Publication number
KR20130080380A
Authority
KR
South Korea
Prior art keywords
voice
application
keyword
user
input
Prior art date
Application number
KR1020120001250A
Other languages
Korean (ko)
Inventor
한상진
권용환
김정근
Original Assignee
삼성전자주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성전자주식회사
Priority to KR1020120001250A
Publication of KR20130080380A

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 - Sound input; Sound output
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 - Sound input; Sound output
    • G06F 3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/005 - Language recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

PURPOSE: An electronic device and a control method thereof are provided that use voice recognition to enable a specific function or service of an application to be executed directly. CONSTITUTION: A control unit (140) extracts a keyword included in a voice, executes an application corresponding to the keyword, and provides a function corresponding to the keyword by using the application. A storage unit (130) stores word information matched to each application. The control unit divides the keywords into an object and a verb, executes the application corresponding to the verb, and provides the function corresponding to the object by using the application. [Reference numerals] (110) Voice input unit; (130) Storage unit; (140) Control unit; (193) Display unit

Description

ELECTRONIC APPARATUS AND METHOD FOR CONTROLLING THE SAME

The present invention relates to an electronic device and a control method thereof, and more particularly, to an electronic device and a control method thereof controlled according to a user voice input through a voice input unit.

Various types of electronic devices have been developed and distributed thanks to advances in electronic technology. In particular, various types of electronic devices, including TVs, are now used in ordinary households. These devices have gradually gained more diverse functions in response to user demand. TVs in particular now connect to the Internet and support Internet services, and users can watch a large number of digital broadcasting channels through them.

Accordingly, various input methods for efficiently using various functions of the electronic apparatus are required. For example, an input method using a remote controller, an input method using a mouse, and an input method using a touch pad have been applied to electronic devices.

However, such simple input methods make it difficult to use the many functions of an electronic device effectively. For example, if every function had to be controlled through the remote control alone, the number of buttons on the remote control would inevitably grow, and learning to use it would be far from easy for ordinary users. Likewise, with methods that display various menus on the screen for the user to find and select, the user must work through a complicated menu tree to locate the desired item.

To overcome this inconvenience, voice recognition has recently been used to control electronic devices more conveniently and intuitively. In particular, users can now execute and manipulate applications by voice.

However, in the related art, for a user to execute a specific function of an application by voice recognition, the application must be executed first, and a separate user voice for executing the function must be input after the application is running. For example, to use the directions function of a map application, the user must first launch the map application and then input a voice command for the directions function.

SUMMARY OF THE INVENTION The present invention has been made to solve the above problems, and an object of the present invention is to provide an electronic device and a control method thereof capable of immediately executing a specific function or service of an application using voice recognition.

According to an embodiment of the present invention, a method of controlling an electronic device includes: receiving a user voice; extracting a keyword included in the user voice; and executing an application corresponding to the extracted keyword and providing a function corresponding to the keyword by using the executed application.

The method may further include searching for an application matching the keyword by comparing the keyword with word information that is matched and stored for each application.

The method may further include separating an object and a verb from the extracted keywords, wherein the providing of the function may include executing an application corresponding to the verb and providing a function corresponding to the object by using the application.

In addition, the word information may be stored when the application is installed.

The word information may be updated together with the update of the matching application.

In addition, the method may include, if there is no word information matching the keyword, displaying a feedback message asking the user to re-enter the voice.

The method may further include receiving a voice start command and switching to a voice task mode when the voice start command is input, wherein the user voice may be input in the voice task mode.

Meanwhile, according to an embodiment of the present invention for achieving the above object, an electronic device includes: a voice input unit for receiving a user voice; and a controller configured to extract a keyword included in the user voice, execute an application corresponding to the extracted keyword, and provide a function corresponding to the keyword by using the executed application.

The electronic device may further include a storage unit that matches and stores word information for each application, and the controller may compare the word information stored in the storage unit with the keyword to search for an application matching the keyword.

The controller may be further configured to separate an object and a verb from the extracted keywords, execute an application corresponding to the verb, and provide a function corresponding to the object using the application.

In addition, the word information may be stored in the storage unit when the application is installed.

The word information may be updated together with the update of the matching application.

The electronic device may further include a display unit, and when there is no word information matching the keyword, the controller may display a feedback message asking the user to input the voice again.

When the voice start command is input, the controller switches to the voice task mode, and the user voice may be input only in the voice task mode.

According to various embodiments of the present invention as described above, the user can immediately execute a specific function and service of the application by using voice recognition. And, as if talking with the electronic device, it is possible to execute a specific function of the application, thereby increasing the entertainment factor.

FIGS. 1 to 3 are block diagrams illustrating configurations of an electronic device according to various embodiments of the present disclosure;
FIG. 4 is a diagram illustrating a display screen of a voice task mode according to an embodiment of the present invention;
FIGS. 5 and 6 are diagrams for explaining a method of executing a specific function of an application using speech recognition according to an embodiment of the present invention; and
FIG. 7 is a flowchart illustrating a control method of an electronic device according to an embodiment of the present disclosure.

FIG. 1 is a schematic block diagram illustrating an electronic device according to an embodiment of the present disclosure.

Referring to FIG. 1, the electronic device 100 includes a voice input unit 110, a storage unit 130, a controller 140, and a display unit 193. The electronic device 100 may be implemented as a smart TV, a set-top box, a PC, a digital TV, a mobile phone, or the like that can be connected to an external network, but is not limited thereto.

The voice input unit 110 receives a voice uttered by the user, converts the input voice signal into an electrical signal, and outputs the electrical signal to the control unit 140. For example, the voice input unit 110 may be implemented as a microphone. The voice input unit 110 may be integrated with the electronic device 100 or separate from it; a separate voice input unit 110 may be connected to the electronic device 100 via a wired or wireless network.

In particular, the voice input unit 110 according to an embodiment of the present invention may include a voice input unit (for example, a microphone) provided in the electronic device 100 and a voice input unit provided in an external device (for example, a remote controller) interoperating with the electronic device 100.

The storage unit 130 stores various data and programs for driving and controlling the electronic device 100. The storage unit 130 stores a voice recognition module for recognizing a voice input through the voice input unit 110 and a motion recognition module for recognizing a motion input through the motion input unit 120.

The storage unit 130 may include a voice database and a motion database. The voice database refers to a database in which a voice task matching the preset voice and the preset voice is recorded. The motion database refers to a database in which a preset motion and a motion task matching the preset motion are recorded.

The storage unit 130 stores word information corresponding to the application when installing the application.

The display unit 193 displays an image corresponding to the broadcast signal received through the broadcast receiver. The display unit 193 may display image data (eg, a video) input through the external terminal input unit. The display unit 193 may display voice guide information for performing the voice task and motion guide information for performing the motion task under the control of the controller 140.

The controller 140 controls the voice input unit 110, the storage unit 130, and the display unit 193. The controller 140 may include a central processing unit (CPU) and a read only memory (ROM) and a random access memory (RAM) for storing data and a module for controlling the electronic device 100.

When a voice is input through the voice input unit 110, the controller 140 recognizes the voice using a voice recognition module and a voice database. Speech recognition methods are divided, according to the form of the input speech, into isolated word recognition, which recognizes each uttered word separately; continuous speech recognition, which recognizes continuous words, continuous sentences, and conversational speech; and keyword spotting, an intermediate form between the two that detects and recognizes predetermined keywords.

When the user's voice is input, the controller 140 detects the beginning and end of the utterance in the input voice signal to determine the speech interval. The control unit 140 calculates the energy of the input speech signal, classifies the energy level of the signal according to the calculated energy, and detects the speech interval through dynamic programming. Within the detected interval, the control unit 140 generates phoneme data by detecting phonemes, the minimum units of speech, based on an acoustic model. It then generates text information by applying a Hidden Markov Model (HMM) probability model to the generated phoneme data. The method of recognizing the user's voice described above is only one embodiment, and the user's voice may be recognized using other methods. In this way, the control unit 140 can recognize the user's voice contained in the voice signal.
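The first stage of the pipeline above, detecting the speech interval from signal energy, can be illustrated with a minimal sketch. The frame size and energy threshold below are illustrative assumptions, not values from the patent, and the dynamic-programming and HMM stages are omitted.

```python
def detect_speech_interval(samples, frame_size=160, threshold=0.01):
    """Find the voiced region of a signal by per-frame energy.

    Returns (start_frame, end_frame) of the voiced region, or None
    if no frame exceeds the energy threshold. Frame size and
    threshold are illustrative, not values from the patent.
    """
    energies = []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[i:i + frame_size]
        energies.append(sum(s * s for s in frame) / frame_size)
    voiced = [e > threshold for e in energies]
    if not any(voiced):
        return None
    start = voiced.index(True)                        # first voiced frame
    end = len(voiced) - 1 - voiced[::-1].index(True)  # last voiced frame
    return start, end
```

A real implementation would smooth the energy contour (the text mentions dynamic programming) before committing to boundaries, and would then pass the interval to the acoustic model.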

As described above, the controller 140 performs a task of the electronic device 100 using the recognized voice. A task of the electronic device 100 may include at least one of the functions the device can perform, such as channel change, volume adjustment, playback of content (e.g., video or music), and the like.

The controller 140 determines whether a voice start command for entering the voice task mode is input.

When a voice start command is input, the controller 140 switches the mode of the electronic device 100 to the voice task mode. In this case, the voice task mode is a mode in which the electronic device 100 is controlled by a user voice input through the voice input unit 110.

In this case, the voice task mode may be divided into a first voice task mode and a second voice task mode according to the voice start command. Specifically, when the voice start command is a user voice command containing a preset word input through the voice input unit 110 of the electronic device 100, the controller 140 switches the electronic device 100 to the first voice task mode, in which the device is controlled by user voice input through its own voice input unit 110. When the voice start command is a user command that presses a preset button on an external device (for example, a remote controller) interworking with the electronic device 100, the controller 140 switches the electronic device 100 to the second voice task mode, in which the device is controlled according to user voice input to that external device.

When switched to the voice task mode, the controller 140 displays the voice guide information 400 as shown in FIG. 4. The voice guide information 400 may be displayed at the bottom of the screen on which the broadcast image is shown so as not to disturb viewing. The voice guide information 400 includes an icon 410 indicating that the current mode of the display apparatus is the voice task mode, and a plurality of voice items 421 to 427 guiding the user's voice: a power-off voice item 421, an external input voice item 422, a channel shortcut voice item 423, a channel up/down voice item 424, a volume up/down voice item 425, a mute voice item 426, and a MORE voice item 427. The MORE voice item 427 is used to view additional voice items beyond those displayed.

In addition, when a user command to enter an application function shortcut mode, in which a specific function or service of an application is executed directly, is input, the controller 140 displays an icon 510 containing a message guiding the application function shortcut mode, as illustrated in FIG. 5. For example, the icon 510 may include the phrase "This is the application function shortcut mode."

When a user voice is input through the voice input unit 110, the controller 140 recognizes the input voice, extracts a keyword included in it, executes an application corresponding to the extracted keyword, and provides a function corresponding to the keyword by using the executed application.

In detail, when a user voice is input, the controller 140 recognizes the voice and extracts keywords from it. For example, when the user voice input through the voice input unit 110 is "Samsung Station route finding", the controller 140 extracts the keywords "Samsung Station", "road", and "find" from the input voice.

The controller 140 compares the word information stored in the storage unit 130 with the extracted keywords and searches for an application matching the keywords. For example, when "road" and "find" are stored in the storage unit 130 as word information corresponding to the map application, the controller 140 finds the map application as the application corresponding to the extracted keywords. The word information is matched and stored for each application in the storage unit 130 at the time the application is installed, and may be updated together with the matching application when that application is updated.
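The lookup can be sketched as a comparison between the extracted keywords and a per-application word table. The table contents and function name below are illustrative assumptions; the patent does not specify a matching algorithm beyond comparison.

```python
# Hypothetical word table: word information matched and stored per
# application at install time (the entries are illustrative).
WORD_INFO = {
    "map": {"road", "find", "directions"},
    "music": {"song", "play", "top"},
}

def find_matching_app(keywords):
    """Return the application whose stored word information overlaps
    the extracted keywords most, or None when nothing matches."""
    best_app, best_overlap = None, 0
    for app, words in WORD_INFO.items():
        overlap = len(words & set(keywords))
        if overlap > best_overlap:
            best_app, best_overlap = app, overlap
    return best_app
```

When two applications tie, a real system would need a disambiguation policy (e.g., asking the user), which the patent leaves to the feedback-message path.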

The controller 140 executes an application corresponding to the input user voice, and provides a function corresponding to the user voice by using the executed application.

In an embodiment of the present disclosure, the controller 140 may separate an object and a verb from the extracted keywords, execute an application corresponding to the verb, and provide a function corresponding to the object using the application. For example, if the extracted keywords are "Samsung Station" and "directions", the controller 140 executes the map application corresponding to the verb "directions" and performs the directions function with "Samsung Station" as its object. Accordingly, as shown in FIG. 6, the controller 140 may display an application screen on which "Samsung Station directions" has been performed.
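The object/verb separation described above can be sketched as follows; the verb-to-application table and the return shape are illustrative assumptions.

```python
def split_object_verb(keywords, known_verbs):
    """Separate extracted keywords into verbs (which select the
    application) and objects (which parameterize its function)."""
    verbs = [k for k in keywords if k in known_verbs]
    objects = [k for k in keywords if k not in known_verbs]
    return objects, verbs

def run_shortcut(keywords, verb_to_app):
    """Pick the application from the verb and hand it the objects
    immediately, with no further user input."""
    objects, verbs = split_object_verb(keywords, verb_to_app)
    if not verbs:
        return None  # no verb recognized: cannot choose an application
    return {"app": verb_to_app[verbs[0]], "query": " ".join(objects)}
```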

As another example, when the user voice input through the voice input unit 110 is "find the latest movie", the controller 140 may extract "latest", "movie", and "find" as keywords. In addition, the controller 140 may execute a movie application, which is an application corresponding to the extracted keyword, and immediately display an application screen on which the "latest movie search" is performed.

If there is no word information matching the keyword, the controller 140 may display on the display unit 193 a feedback message asking the user to input the voice again.

Through the electronic device 100 described above, a user can directly execute a specific function or service of an application by voice recognition. Moreover, since the interaction feels like talking with the electronic device, the entertainment value also increases.

FIG. 2 is a block diagram showing a configuration of an electronic device 100 according to another embodiment of the present invention. Referring to FIG. 2, the electronic device 100 includes a voice input unit 110, a motion input unit 120, a storage unit 130, a control unit 140, a broadcast receiving unit 150, an external terminal input unit 160, a remote control signal receiving unit 170, a network interface unit 180, and a video output unit 190. The electronic device 100 shown in FIG. 2 may be implemented as a set-top box.

Meanwhile, the voice input unit 110, storage unit 130, and controller 140 illustrated in FIG. 2 are the same as those described with reference to FIG. 1, so their detailed description is omitted.

The motion input unit 120 receives a video signal (for example, continuous frames) capturing the user's motion and provides it to the control unit 140. For example, the motion input unit 120 may be implemented as a camera unit including a lens and an image sensor. The motion input unit 120 may be integrated with the electronic device 100 or separate from it; a separate motion input unit 120 may be connected to the electronic device 100 via a wired or wireless network.

The broadcast receiving unit 150 receives a broadcast signal from an external source. The broadcast signal includes video, audio, and additional data (e.g., EPG). The broadcast receiver 150 can receive broadcast signals from various sources such as terrestrial broadcast, cable broadcast, satellite broadcast, internet broadcast, and the like.

The external terminal input unit 160 receives video data (for example, moving pictures) and audio data (for example, music) from outside the electronic device 100. The external terminal input unit 160 may include at least one of a high-definition multimedia interface (HDMI) input terminal, a component input terminal, a PC input terminal, and a USB input terminal. The remote control signal receiving unit 170 receives a remote control signal from an external remote control, and can receive it even when the electronic device 100 is in the voice task mode or the motion task mode.

The network interface unit 180 may connect the electronic device 100 to an external device (for example, a server) under the control of the control unit 140. The control unit 140 can download an application or browse the web from an external device connected through the network interface unit 180. The network interface unit 180 may provide at least one of Ethernet, wireless LAN 182, and Bluetooth.

Under the control of the controller 140, the image output unit 190 outputs the broadcast signal received through the broadcast receiving unit 150, the image data input through the external terminal input unit 160, or the image data stored in the storage unit 130 to an external electronic device (for example, a monitor or a TV).

When a motion is input through the motion input unit 120, the controller 140 recognizes the motion using the motion recognition module and the motion database. Motion recognition is the process of recognizing, with the motion recognition module, the image (for example, continuous frames) corresponding to the user's motion input through the motion input unit 120, distinguishing the user's hand (for example, unfolding the fingers or clenching them into a fist) from the surrounding background, and recognizing the successive movements of the hand. When a user motion is input, the controller 140 stores the received image frame by frame and detects the object of the user motion (for example, the user's hand) from the stored frames. The controller 140 detects the object by sensing at least one of its shape, color, and motion in each frame, and can track the movement of the detected object using its position across the plurality of frames.

The controller 140 determines the motion according to the shape and movement of the tracked object. For example, the controller 140 determines the user's motion using at least one of a change in the object's shape, its speed, its position, and its orientation. The user's motions include a grab, which is a motion of clenching the hand; a pointing move, which moves a displayed cursor with the hand; a slap, which moves the hand in one direction faster than a certain speed; a shake, which swings the hand left/right or up/down; and a rotation, which rotates the hand. The technical idea of the present invention may also be applied to motions other than those in the above embodiments; for example, a spread motion that unfolds the hand may be further included.

The control unit 140 determines whether the object leaves a predetermined area (for example, a square of 40 cm x 40 cm) within a predetermined time (for example, 800 ms) to decide whether the user's motion is a pointing move or a slap. If the object does not leave the predetermined area within the time, the control unit 140 determines the motion to be a pointing move; if it does, the motion is determined to be a slap. As another example, when the speed of the object is determined to be at or below a predetermined speed (for example, 30 cm/s), the control unit 140 determines the motion to be a pointing move; when the speed exceeds it, the motion is determined to be a slap.
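The two decision rules above (displacement within a time window, or instantaneous speed) can be sketched like this. The function uses the example thresholds from the text; combining both rules behind one interface is an assumption for illustration.

```python
def classify_motion(displacement_cm, elapsed_ms, speed_cm_s=None,
                    area_cm=40.0, time_limit_ms=800, speed_limit=30.0):
    """Classify a tracked hand motion as 'pointing' or 'slap'.

    Uses the example thresholds from the text: a 40 cm x 40 cm area,
    an 800 ms window, and a 30 cm/s speed limit. If a speed is given,
    the speed rule is applied; otherwise the area/time rule is used.
    """
    if speed_cm_s is not None:
        return "slap" if speed_cm_s > speed_limit else "pointing"
    if elapsed_ms <= time_limit_ms and displacement_cm > area_cm:
        return "slap"  # left the area within the time window
    return "pointing"
```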

FIG. 3 is a block diagram of an electronic device 100 according to another embodiment of the present invention. Referring to FIG. 3, the electronic device 100 includes a voice input unit 110, a motion input unit 120, a storage unit 130, a control unit 140, a broadcast receiving unit 150, an external terminal input unit 160, a remote control signal receiving unit 170, a network interface unit 180, a display unit 193, and an audio output unit 196. In this case, the electronic device 100 may be a digital TV, but is not limited thereto.

The voice input unit 110, motion input unit 120, storage unit 130, control unit 140, broadcast receiving unit 150, external terminal input unit 160, remote control signal receiving unit 170, network interface unit 180, and display unit 193 in FIG. 3 are the same as the components with the same reference numerals in FIGS. 1 and 2, so a detailed description is omitted.

The audio output unit 196 outputs the audio corresponding to the broadcast signal under the control of the control unit 140. The audio output unit 196 may include at least one of a speaker 196a, a headphone output terminal 196b, and an S/PDIF output terminal 196c.

Meanwhile, as illustrated in FIG. 3, the storage unit 130 may include a power control module 130a, a channel control module 130b, a volume control module 130c, an external input control module 130d, a screen control module 130e, an audio control module 130f, an internet control module 130g, an application module 130h, a search control module 130i, a UI processing module 130j, a voice recognition module 130k, a motion recognition module 130l, a voice database 130m, and a motion database 130n. These modules 130a to 130n may be implemented in software to perform, respectively, a power control function, a channel control function, a volume control function, an external input control function, a screen control function, an audio control function, an internet control function, an application execution function, a search control function, and a UI processing function. The controller 140 may execute the software stored in the storage unit 130 to perform the corresponding function.

Hereinafter, a method for executing an application function shortcut using voice recognition will be described with reference to FIG. 7.

First, the electronic device 100 determines whether a voice start command is input (S710). The voice start command may be a user voice command input to the voice input unit provided in the electronic device 100 or a user command selecting the voice task mode switch button provided on the remote controller.

If it is determined that the voice start command is input (S710-Y), the electronic device 100 switches the mode of the electronic device 100 to the voice task mode (S720). In this case, the voice task mode is a mode in which the electronic device 100 is controlled by a user voice input through the voice input unit 110.

In operation S730, the electronic device 100 determines whether an execution command of the application function shortcut mode is input. The execution command may be a user voice command or a user command issued by pressing an application function shortcut button provided on the remote controller.

When the execution command of the application function shortcut mode is input (S730-Y), the electronic device 100 receives a user voice (S740). The electronic device 100 recognizes the input user voice using a voice database and a voice recognition module.

In operation S750, the electronic device 100 extracts keywords from the recognized user voice. Here, a keyword is a meaningful word in the input user voice, excluding elements that carry no meaning (such as postpositional particles).

In operation S760, the electronic device 100 searches whether the extracted keywords contain pre-stored words. The pre-stored words are word information matched and stored for each application when the application is installed, and may be updated together when the application is updated.

If a pre-stored word exists (S760-Y), the electronic device 100 executes the application corresponding to the keyword (S770). Then, without receiving any separate user input after executing the application, the electronic device 100 immediately executes the function of the application corresponding to the keyword (S780). For example, if the input user voice is "Please search the top 1 to 10 songs", the electronic device 100 executes the music application corresponding to the input voice and immediately provides a function of searching the first- through tenth-ranked music.
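Steps S740 through S780 can be combined into a minimal end-to-end sketch. The word table, stopword list, and return values below are illustrative assumptions, not from the patent.

```python
# Hypothetical per-application word table, stored at install time.
WORD_INFO = {"music": {"search", "top", "song", "songs"}}
STOPWORDS = {"please", "the", "from", "to", "a"}

def extract_keywords(utterance):
    """S750: keep only the meaningful words of the utterance."""
    words = utterance.lower().rstrip(".?!").split()
    return [w for w in words if w not in STOPWORDS]

def handle_voice(utterance):
    """S740-S780: recognize, match, launch, and run in one pass."""
    keywords = extract_keywords(utterance)
    for app, words in WORD_INFO.items():        # S760: word lookup
        if words & set(keywords):
            # S770 + S780: launch the app and run the matching
            # function immediately, without further user input.
            return {"app": app, "function": " ".join(keywords)}
    # S765: nothing matched, so ask the user to speak again.
    return {"feedback": "Please input your voice again."}
```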

However, if no matching pre-stored word exists (S760-N), the electronic device 100 displays a feedback message prompting the user to input the voice again (S765), and may then receive the user's voice again.
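Steps S760 through S780 (with the S765 fallback) can be sketched as a lookup table of per-application word information populated at install time. The table structure, function names, and the music-app example below are illustrative assumptions, not the disclosed implementation:

```python
# Hypothetical sketch of steps S760-S780: word information is registered
# per application at install time; a matching keyword launches the
# application and immediately invokes the corresponding function.
APP_WORD_TABLE = {}  # word -> (application name, function to invoke)

def install_app(name, words, function):
    """Register the app's word information at install time (S760 note);
    calling this again on update refreshes the stored words."""
    for word in words:
        APP_WORD_TABLE[word] = (name, function)

def run_shortcut(keywords):
    """Execute the app and its function for the first matching keyword."""
    for word in keywords:
        if word in APP_WORD_TABLE:             # S760-Y: pre-stored word found
            app, func = APP_WORD_TABLE[word]
            return f"{app}: {func(keywords)}"  # S770 + S780, no extra input
    return "Please input your voice again."    # S765: feedback message

# Example: a music app registering the word "search" when installed.
install_app("MusicApp", ["search"],
            lambda kw: f"searching ranks {kw[-2]}-{kw[-1]}")
```

With the keywords `["search", "top", "1", "10"]`, this sketch would launch `MusicApp` and immediately invoke its rank-search function; an unmatched keyword list falls through to the feedback message.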

According to the control method of the electronic device 100 described above, a user can immediately execute a specific function or service of an application by using voice recognition. In addition, because a specific function of an application is executed at once, the convenience and entertainment value of using the electronic device 100 are increased.

Meanwhile, in the above-described embodiment, it has been described that the electronic device 100 executes the application function shortcut when the execution command of the application function shortcut mode is input. However, this is only an example, and the application function shortcut may be executed directly in the voice task mode.

The program code for performing the control method according to the above various embodiments may be stored in various types of recording media. More specifically, it may be stored in various types of terminal-readable recording media, such as a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable and programmable ROM (EEPROM), a register, a hard disk, a removable disk, a memory card, a CD-ROM, and the like.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is clearly understood that the same is by way of illustration and example only and is not to be construed as limiting the scope of the invention as defined by the appended claims. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

110: voice input unit 120: motion input unit
130: storage unit 140:
150: broadcast receiver 160: external terminal input
170: Remote control signal receiving unit 180: Network interface unit
190: video output unit 193: display unit
196: Audio output section

Claims (14)

A method of controlling an electronic device,
Receiving a user voice;
Extracting a keyword included in the user voice;
And a function providing step of executing an application corresponding to the extracted keyword and providing a function corresponding to the keyword by using the executed application.
The method of claim 1,
And searching for an application matching the keyword by comparing the keyword with word information matched to and stored for each application.
The method of claim 2,
Separating the object and the verb from the extracted keywords; further comprising,
The function providing step,
And executing an application corresponding to the verb, and providing a function corresponding to the object using the application.
The method of claim 2,
The word information is stored when the application is installed.
The method of claim 2,
And the word information is updated together with the update of the matching application.
The method of claim 2,
If there is no word information matching the keyword, displaying a feedback message including a message for inputting a user voice again.
The method of claim 1,
Receiving a voice start command;
Switching to a voice task mode when the voice start command is input;
The user voice is input in the voice task mode.
In an electronic device,
A voice input unit for receiving a user voice; And
And a controller configured to extract a keyword included in the user's voice, to execute an application corresponding to the extracted keyword, and to provide a function corresponding to the keyword by using the executed application.
9. The method of claim 8,
Further comprising: a storage unit for matching and storing word information for each application;
The control unit,
And searching for an application matching the keyword by comparing the keyword with the word information stored in the storage unit.
10. The method of claim 9,
The control unit,
And separating an object and a verb from the extracted keywords, executing an application corresponding to the verb, and providing a function corresponding to the object using the application.
10. The method of claim 9,
The word information is stored in the storage unit when the application is installed.
10. The method of claim 9,
The word information is updated together with the update of the matching application.
10. The method of claim 9,
And a display unit,
The control unit,
And if the word information matching the keyword does not exist, displaying a feedback message including a message for inputting a user's voice again on the display unit.
9. The method of claim 8,
The control unit,
Switches to a voice task mode when a voice start command is input,
And the user voice is input only in the voice task mode.
KR1020120001250A 2012-01-04 2012-01-04 Electronic apparatus and method for controlling electronic apparatus thereof KR20130080380A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020120001250A KR20130080380A (en) 2012-01-04 2012-01-04 Electronic apparatus and method for controlling electronic apparatus thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020120001250A KR20130080380A (en) 2012-01-04 2012-01-04 Electronic apparatus and method for controlling electronic apparatus thereof

Publications (1)

Publication Number Publication Date
KR20130080380A true KR20130080380A (en) 2013-07-12

Family

ID=48992520

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120001250A KR20130080380A (en) 2012-01-04 2012-01-04 Electronic apparatus and method for controlling electronic apparatus thereof

Country Status (1)

Country Link
KR (1) KR20130080380A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101587625B1 (en) * 2014-11-18 2016-01-21 박남태 The method of voice control for display device, and voice control display device
WO2016080713A1 (en) * 2014-11-18 2016-05-26 박남태 Voice-controllable image display device and voice control method for image display device
WO2018097478A1 (en) * 2016-11-28 2018-05-31 Samsung Electronics Co., Ltd. Electronic device for processing multi-modal input, method for processing multi-modal input and sever for processing multi-modal input
US10191718B2 (en) 2016-11-28 2019-01-29 Samsung Electronics Co., Ltd. Electronic device for processing multi-modal input, method for processing multi-modal input and server for processing multi-modal input
US11023201B2 (en) 2016-11-28 2021-06-01 Samsung Electronics Co., Ltd. Electronic device for processing multi-modal input, method for processing multi-modal input and server for processing multi-modal input
US11561763B2 (en) 2016-11-28 2023-01-24 Samsung Electronics Co., Ltd. Electronic device for processing multi-modal input, method for processing multi-modal input and server for processing multi-modal input
WO2020159190A1 (en) * 2019-01-28 2020-08-06 Samsung Electronics Co., Ltd. Method and apparatus for supporting voice instructions
US20220108694A1 * 2019-01-28 2022-04-07 Samsung Electronics Co., Ltd. Method and apparatus for supporting voice instructions

Similar Documents

Publication Publication Date Title
JP5746111B2 (en) Electronic device and control method thereof
JP5819269B2 (en) Electronic device and control method thereof
JP6111030B2 (en) Electronic device and control method thereof
JP6184098B2 (en) Electronic device and control method thereof
KR20130078486A (en) Electronic apparatus and method for controlling electronic apparatus thereof
JP5535298B2 (en) Electronic device and control method thereof
US11330320B2 (en) Display device and method for controlling display device
EP2986015A1 (en) Method for controlling electronic apparatus based on voice recognition and motion recognition, and electronic apparatus applying the same
EP2555538A1 (en) Method for controlling electronic apparatus based on voice recognition and motion recognition, and electronic apparatus applying the same
JP2014532933A (en) Electronic device and control method thereof
KR101237472B1 (en) Electronic apparatus and method for controlling electronic apparatus thereof
CN103914144A (en) Electronic Apparatus And Control Method Thereof
KR20130080380A (en) Electronic apparatus and method for controlling electronic apparatus thereof
KR20140085055A (en) Electronic apparatus and Method for controlling electronic apparatus thereof
KR20130078483A (en) Electronic apparatus and method for controlling electronic apparatus thereof
KR101324232B1 (en) Electronic apparatus and Method for controlling electronic apparatus thereof
KR20130078490A (en) Electronic apparatus and method for controlling electronic apparatus thereof
KR20130078489A (en) Electronic apparatus and method for setting angle of view thereof

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination