US11393467B2 - Electronic device, control method, and storage medium - Google Patents

Electronic device, control method, and storage medium

Info

Publication number
US11393467B2
Authority
US
United States
Prior art keywords
voice
external device
voice input
received
communication unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/660,188
Other languages
English (en)
Other versions
US20200135196A1 (en)
Inventor
Shunji Fujita
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Assigned to CANON KABUSHIKI KAISHA reassignment CANON KABUSHIKI KAISHA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: FUJITA, SHUNJI
Publication of US20200135196A1 publication Critical patent/US20200135196A1/en
Application granted granted Critical
Publication of US11393467B2 publication Critical patent/US11393467B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L67/025 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP] for remote control or remote monitoring of applications
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L2015/088 Word spotting
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command

Definitions

  • the present disclosure relates to an electronic device capable of recognizing a voice input.
  • Japanese Patent Application Laid-Open No. 2015-013351 discusses a communication robot on a network which processes a voice input received through a microphone and vocally responds to that voice input.
  • an electronic device includes a voice receiving unit configured to receive a voice input, a first communication unit configured to communicate with an external device having a voice recognition function, and a control unit, wherein the control unit receives a notification indicating whether the external device is ready to recognize the voice input via the first communication unit, and wherein, in a case where the notification indicates that the external device is not ready to recognize the voice input, the control unit controls the external device to be ready to recognize the voice input via the first communication unit when a predetermined voice input including a phrase corresponding to the external device is received through the voice receiving unit.
  • FIG. 1 is a diagram illustrating a system configuration according to a first exemplary embodiment.
  • FIG. 2 is a block diagram illustrating an example of a configuration of a smart-speaker according to the first exemplary embodiment.
  • FIG. 3 is a block diagram illustrating an example of a configuration of a digital camera according to the first exemplary embodiment.
  • FIG. 4 is a table illustrating an example of a power supply state of the digital camera according to the first exemplary embodiment.
  • FIG. 5 is a sequence diagram illustrating an example of processing for setting a remote control function of the smart-speaker according to the first exemplary embodiment.
  • FIGS. 6A, 6B, and 6C are tables illustrating examples of a device management database (DB) according to the first exemplary embodiment.
  • FIG. 7 is a sequence diagram illustrating an example of processing which allows the smart-speaker according to the first exemplary embodiment to acquire information about an operation state of a voice control function of the digital camera in a power supply state PS2.
  • FIG. 8 is a sequence diagram illustrating an example of processing which allows the smart-speaker according to the first exemplary embodiment to acquire information about an operation state of the voice control function of the digital camera in a power supply state PS1.
  • FIG. 9 is a sequence diagram illustrating an example of processing of a remote control function executed when the voice control function of the digital camera according to the first exemplary embodiment is OFF.
  • FIG. 10 is a sequence diagram illustrating an example of processing of the remote control function executed when the voice control function of the digital camera according to the first exemplary embodiment is ON.
  • FIG. 11 is a flowchart illustrating an example of processing of the remote control function of the smart-speaker according to the first exemplary embodiment.
  • FIG. 12 is a flowchart illustrating an example of processing of the remote control function of the digital camera according to the first exemplary embodiment.
  • FIG. 1 is a diagram illustrating a system configuration according to the first exemplary embodiment.
  • the system described in the present exemplary embodiment is configured of a smart-speaker 100 , a digital camera 200 , a wireless local area network (LAN) router 300 , a server 400 , and a smartphone 500 .
  • the smart-speaker 100 is an electronic device having a voice control function.
  • the voice control function is a function for executing a user command by determining the command based on a voice input.
  • an electronic device having the voice control function first recognizes a predetermined word (a so-called “wake word”) included in a received voice input, and then interprets the voice input following the wake word.
  • the smart-speaker 100 transmits the received voice input to the server 400 connected thereto via a wireless LAN network, and determines a command based on the voice input by using the server 400.
  • the smart-speaker 100 and the below-described digital camera 200 of the present exemplary embodiment internally recognize the wake word, and determine the voice input following the wake word by using the server 400 .
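The two-stage flow described above, in which the device itself spots the wake word and only the utterance that follows it is handed off for server-side analysis, can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the wake word "thomas" and the function name are assumptions.

```python
# Sketch of local wake-word spotting (assumed example wake word "thomas").
# Only the text after the wake word would be sent to the server for analysis.

WAKE_WORD = "thomas"

def extract_command(utterance: str, wake_word: str = WAKE_WORD):
    """Return the text following the wake word, or None if it is absent."""
    words = utterance.lower().replace(",", "").split()
    if wake_word not in words:
        return None  # no wake word: the device ignores the input
    idx = words.index(wake_word)
    return " ".join(words[idx + 1:])
```

In this sketch the returned string is what a device such as the smart-speaker 100 would forward to the server 400 for interpretation.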
  • the smart-speaker 100 operates by receiving power from an external power supply such as a commercial power supply.
  • the smart-speaker 100 is connected to the wireless LAN router 300 and the server 400 . Further, the smart-speaker 100 can communicate with the digital camera 200 according to a communication standard compliant with the Bluetooth® Low Energy (BLE).
  • the smart-speaker 100 acquires information about an operation state of the voice control function of the digital camera 200 .
  • the digital camera 200 is a device having a voice control function.
  • the digital camera 200 includes an electric cell such as a lithium-ion cell or battery, and operates by receiving power from the electric cell. Because the amount of electric power that the electric cell can store is finite, the user typically wants to reduce the power consumption of the digital camera 200 as much as possible. Therefore, the voice control function and the wireless LAN function of the digital camera 200 are usually turned off, and the user enables them as necessary.
  • a configuration of the digital camera 200 of the present exemplary embodiment is also applicable to devices such as a smartphone and a tablet terminal.
  • the wireless LAN router 300 forms a wireless LAN network.
  • the smart-speaker 100 , the digital camera 200 , and the below-described smartphone 500 can execute wireless LAN communication via the wireless LAN network formed by the wireless LAN router 300 . Further, the smart-speaker 100 and the digital camera 200 can communicate with the server 400 via the wireless LAN router 300 .
  • the server 400 provides a service for recognizing a command based on a voice input.
  • the server 400 provides a service for converting voice data into characters or sentences and a service for analyzing characters or sentences to convert the characters or sentences into an instruction to the digital camera 200 .
  • By using the service provided by the server 400, the user can easily execute processing that is rather burdensome for a portable electronic device such as the smartphone 500 or the digital camera 200.
  • the smart-speaker 100 and the digital camera 200 analyze an input voice using a service provided by the server 400 .
  • the smartphone 500 executes various settings of the smart-speaker 100 via the wireless LAN communication. Specifically, an application for executing the various settings of the smart-speaker 100 is installed in the smartphone 500 .
  • the smart-speaker 100 of the present exemplary embodiment has a function for remotely controlling a part of the voice control function of the digital camera 200 . If a user speaks to the digital camera 200 when the voice control function is not operating, the smart-speaker 100 receives the user's voice input and enables the voice control function of the digital camera 200 . Further, the smart-speaker 100 can transmit the received voice data to the digital camera 200 and remotely control the digital camera 200 to execute a function corresponding to that received voice data. Details of remote control will be described below with reference to FIGS. 9 and 10 .
  • FIG. 2 is a block diagram illustrating an example of a configuration of the smart-speaker 100 .
  • a control unit 101 controls respective units of the smart-speaker 100 according to an input signal or a program stored in a read only memory (ROM) 102 .
  • the control unit 101 is configured of one or more processors such as central processing units (CPUs) or micro processing units (MPUs).
  • alternatively, the entire device may be controlled by a plurality of hardware components that share the processing, instead of by the control unit 101 alone.
  • the ROM 102 is an electrically erasable/recordable non-volatile memory, and the below-described program executed by the control unit 101 is stored therein.
  • a random access memory (RAM) 103 is a volatile memory used as a work memory for the control unit 101 to execute a program or a temporary storage area of various types of data.
  • a recording medium 104 is a medium used for recording.
  • the recording medium 104 is configured of a memory card, a flash memory, or a hard disk.
  • the recording medium 104 may be attachable to and detachable from the smart-speaker 100, or may be built into the smart-speaker 100. In other words, the smart-speaker 100 only needs to include a unit for accessing the recording medium 104.
  • An operation unit 105 is a processing unit that receives a user operation and notifies the control unit 101 of the received information.
  • the operation unit 105 is configured of a touch panel, a button switch, and a cross key.
  • the operation unit 105 further includes a power switch which allows the user to input an instruction for turning on or off the power of the smart-speaker 100 .
  • a display unit 106 is a processing unit for displaying image data and an operation state of the device.
  • the display unit 106 is configured of a liquid crystal panel or a light-emitting diode (LED) panel.
  • the smart-speaker 100 does not always have to include the display unit 106 .
  • the smart-speaker 100 just has to be connectable to the display unit 106 and include at least a display control function for controlling display of the display unit 106 .
  • a voice receiving unit 107 is a processing unit which converts the user's voice into digital data and stores the digital data in the RAM 103.
  • the voice receiving unit 107 includes a microphone.
  • a voice output unit 108 is a processing unit which converts the data stored in the RAM 103 into voice and outputs the voice externally through a speaker.
  • a wireless LAN communication unit 109 is a processing unit for executing wireless communication compliant with the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard.
  • the wireless LAN communication unit 109 operates as a station (STA) of the wireless LAN to execute wireless communication by connecting to the wireless LAN network formed by the wireless LAN router 300 .
  • a BT communication unit 110 is a processing unit for executing wireless communication compliant with the Bluetooth® standard.
  • a Bluetooth® Low Energy (hereinafter, referred to as “BLE”) mode specified in the Bluetooth® version 4.0 or later is employed for Bluetooth® communication (BT communication).
  • a communicable range of the BLE communication is narrower (i.e., a communicable distance thereof is shorter) than that of the wireless LAN communication, and a communication speed thereof is slower than that of the wireless LAN communication.
  • power consumption of the BLE communication is lower than that of the wireless LAN communication.
  • the BT communication unit 110 operates as “central”, and executes wireless data communication with the digital camera 200 .
  • An internal bus 120 mutually connects the respective processing units.
  • a connection mode of the BLE communication standard is a master-slave star network.
  • a communication device operating as “central” (hereinafter, referred to as “central device”) functions as a master, whereas a communication device operating as “peripheral” (hereinafter, referred to as “peripheral device”) functions as a slave.
  • the central device manages participation of the peripheral device to the network and executes setting of various parameters for wirelessly connecting to the peripheral device.
  • While the central device can concurrently connect to a plurality of peripheral devices, a peripheral device cannot establish a wireless connection with more than one central device at a time. Further, a wireless connection cannot be established between two devices both serving as central devices, so when a wireless connection is to be established, one device should serve as the central device and the other as a peripheral device.
  • The roles of the communication devices in BLE communication have been described above.
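The star-topology constraints just described can be modeled in a few lines. This is a minimal sketch with assumed class and attribute names, not anything defined in the patent: a central may hold connections to several peripherals, while a peripheral may be bound to at most one central.

```python
# Minimal model of BLE connection roles (names are illustrative).

class Peripheral:
    def __init__(self, name: str):
        self.name = name
        self.central = None  # a peripheral connects to at most one central

class Central:
    def __init__(self, name: str):
        self.name = name
        self.peripherals = []  # a central may connect to many peripherals

    def connect(self, peripheral: Peripheral):
        # Enforce the one-central-per-peripheral rule described above.
        if peripheral.central is not None:
            raise RuntimeError(f"{peripheral.name} is already connected")
        peripheral.central = self
        self.peripherals.append(peripheral)
```

In the embodiment, the smart-speaker 100 would play the `Central` role and the digital camera 200 the `Peripheral` role.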
  • a communication speed of communication realized by the wireless LAN communication unit 109 is faster than a communication speed of communication realized by the BT communication unit 110 . Further, a communicable range of communication realized by the wireless LAN communication unit 109 is wider than that of communication realized by the BT communication unit 110 .
  • the smart-speaker 100 constantly receives power from an external power supply such as a commercial power supply.
  • FIG. 3 is a block diagram illustrating an example of a configuration of the digital camera 200 .
  • a control unit 201 controls respective units of the digital camera 200 according to an input signal or a program stored in a ROM 202 .
  • the control unit 201 is configured of one or more processors such as CPUs or MPUs.
  • alternatively, the entire device may be controlled by a plurality of hardware components that share the processing, instead of by the control unit 201 alone.
  • the ROM 202 is an electrically erasable/recordable non-volatile memory, and the below-described program executed by the control unit 201 is stored therein.
  • a RAM 203 is a volatile memory, and serves as a work memory which the control unit 201 uses to execute a program, or a temporary storage area of various types of data.
  • a recording medium 204 is a medium used for recording.
  • the recording medium 204 is configured of a memory card, a flash memory, or a hard disk.
  • the recording medium 204 may be attachable to and detachable from the digital camera 200 , or may be built into the digital camera 200 . In other words, the digital camera 200 just has to include at least a unit for accessing the recording medium 204 .
  • An operation unit 205 is a processing unit that receives a user operation and notifies the control unit 201 of the received information.
  • the operation unit 205 is configured of a touch panel, a button switch, and a cross key.
  • the operation unit 205 further includes a power switch which allows the user to input an instruction for turning on or off the power of the digital camera 200 .
  • a display unit 206 is a processing unit for displaying image data and an operation state of the device.
  • the display unit 206 is configured of a liquid crystal panel or an LED panel.
  • the digital camera 200 does not always have to include the display unit 206 .
  • the digital camera 200 just has to be connectable to the display unit 206 and include at least a display control function for controlling display of the display unit 206 .
  • a voice receiving unit 207 is a processing unit which converts the user's voice into digital data and stores the digital data in the RAM 203.
  • the voice receiving unit 207 detects the user's voice via a microphone.
  • a voice output unit 208 is a processing unit which converts the data stored in the RAM 203 into voice and outputs the voice externally through a speaker.
  • a wireless LAN communication unit 209 is a processing unit for executing wireless communication compliant with the IEEE 802.11 standard.
  • the wireless LAN communication unit 209 operates as a station (STA) of the wireless LAN to execute wireless communication by connecting to the wireless LAN network formed by the wireless LAN router 300 .
  • a BT communication unit 210 is a processing unit for executing wireless communication compliant with the Bluetooth® standard. BT communication unit 210 operates as “peripheral”, and executes BLE communication with the smart-speaker 100 .
  • a power supply control unit 211 is a processing unit for controlling power to be supplied to respective processing units from a power supply unit 212 .
  • the power supply unit 212 can supply power to respective elements of the digital camera 200 .
  • the power supply unit 212 is a lithium ion cell or battery.
  • An internal bus 220 mutually connects the respective processing units.
  • the digital camera 200 is a device driven by a battery, and includes the power supply control unit 211 in order to realize low power consumption.
  • FIG. 4 is a table illustrating an example of a power supply state of the digital camera 200 .
  • In a power supply state PS0, power is not supplied to any of the processing units of the digital camera 200; this is the state in which the power switch of the digital camera 200 is turned off.
  • In a power supply state PS1, the power supply control unit 211 does not distribute power to the voice receiving unit 207, the voice output unit 208, or the wireless LAN communication unit 209.
  • The power supply state PS1 is the state in which power consumption is suppressed most, next to the power supply state PS0.
  • In a power supply state PS2, power is supplied to at least all of the units illustrated in FIG. 3.
  • the digital camera 200 is shifted to the power supply state PS 2 from the power supply state PS 0 when the power switch is turned on by a user operation.
  • the digital camera 200 is shifted to the power supply state PS 0 from the power supply state PS 2 or PS 1 when the power switch is turned off by a user operation.
  • When the power supply control unit 211 receives an instruction from the control unit 201, the digital camera 200 is shifted from the power supply state PS1 to the power supply state PS2, or from the power supply state PS2 to the power supply state PS1.
  • To shift to the power supply state PS2, the control unit 201 instructs the power supply control unit 211 accordingly.
  • The power supply control unit 211 receives the instruction and supplies power to the voice receiving unit 207, the voice output unit 208, and the wireless LAN communication unit 209.
  • When the control unit 201 instructs the power supply control unit 211 to shift from the power supply state PS2 to the power supply state PS1, the power supply control unit 211 stops supplying power to the voice receiving unit 207, the voice output unit 208, and the wireless LAN communication unit 209.
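The PS0/PS1/PS2 transitions described above can be sketched as a small state machine. This is a rough model under assumed names: PS1 keeps the voice units and wireless LAN unit unpowered, PS2 powers everything, and PS0 is the fully-off state reached via the power switch.

```python
# Rough model of the digital camera's power supply states (FIG. 4).
# Unit names are illustrative labels, not identifiers from the patent.

SUSPENDED_IN_PS1 = {"voice_receiving_207", "voice_output_208", "wlan_209"}

class PowerSupplyControl:
    def __init__(self):
        self.state = "PS0"  # power switch off: nothing is powered

    def power_switch(self, on: bool):
        # The physical power switch drives PS0 <-> PS2 transitions.
        self.state = "PS2" if on else "PS0"

    def set_voice_control(self, enabled: bool):
        # The control unit's instruction drives PS1 <-> PS2 transitions.
        if self.state == "PS0":
            raise RuntimeError("power switch is off")
        self.state = "PS2" if enabled else "PS1"

    def is_powered(self, unit: str) -> bool:
        if self.state == "PS0":
            return False
        if self.state == "PS1":
            return unit not in SUSPENDED_IN_PS1
        return True  # PS2: all units powered
```

The point of the PS1 state, as the surrounding text explains, is that battery drain is reduced while the BT communication unit stays reachable.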
  • FIG. 5 is a sequence diagram illustrating an example of processing for setting a remote control function of the smart-speaker 100 . Details of the remote control function will be described below with reference to FIGS. 9 and 10 .
  • the smart-speaker 100 can receive, in place of the digital camera 200, a user's voice input intended for the digital camera 200. Further, if the smart-speaker 100 transmits the voice data of the received voice to the digital camera 200 after the voice control function of the digital camera 200 is turned on, the digital camera 200 can execute processing based on the voice data.
  • This processing is executed when the user operates an application for setting the smart-speaker 100 on the smartphone 500 .
  • This application is installed in the smartphone 500 .
  • the smartphone 500 communicates with the smart-speaker 100 and the digital camera 200 through wireless LAN communication according to the user operation to execute the above-described processing.
  • the smartphone 500, the smart-speaker 100, and the digital camera 200 are connected to the same wireless LAN network. Further, the smartphone 500 has detected the smart-speaker 100 and the digital camera 200 in the wireless LAN network and can access the respective devices 100 and 200.
  • In step S501, the smartphone 500 transmits a message requesting acquisition of information about the digital camera 200 to the digital camera 200 via wireless LAN communication.
  • the information about the digital camera includes a device name of the digital camera 200 , a wake word, and a Bluetooth® device (BD) address.
  • the device name is a character string which the user has set as a name of the digital camera 200 .
  • the wake word is a word for executing the voice control function of the digital camera 200 .
  • the BD address is 48-bit address information used for identifying a device as a communication partner in the BLE communication. This BD address is different for each device.
  • In step S502, in response to the message received in step S501, the digital camera 200 transmits the information about the digital camera 200 to the smartphone 500 via wireless LAN communication.
  • In step S503, the smartphone 500 transmits a message requesting the smart-speaker 100 to start setting the remote control function to the smart-speaker 100 via wireless LAN communication.
  • In step S503, the smartphone 500 also transmits the information about the device name, the wake word, and the BD address of the digital camera 200 received in step S502 to the smart-speaker 100.
  • In step S504, the smart-speaker 100 registers the information about the digital camera 200 received in step S503 in a device management database (DB).
  • the device management DB is database information for managing the information about a device name, a wake word, a BD address, and an operation state of a voice control function for each device.
  • the device management DB is stored in the ROM 102 of the smart-speaker 100.
  • FIG. 6A is a table illustrating the device management DB after the processing in step S 504 is ended.
  • An identification (ID) 1 represents information about the smart-speaker 100
  • an ID 2 represents information about the digital camera 200 .
  • In step S504, the information about the digital camera 200 is registered.
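The registration in step S504 can be sketched as follows. The field names here are assumptions based on the description of FIG. 6 (device name, wake word, BD address, and operation state of the voice control function); the patent does not prescribe a data layout.

```python
# Sketch of the device management DB as a dict of entries keyed by ID,
# mirroring the columns described for FIG. 6. Field names are assumed.

def register_device(db: dict, name: str, wake_word: str, bd_address: str) -> int:
    """Register a device as in step S504 and return its new entry ID.
    The operation state is filled in later by the state notification service."""
    entry_id = max(db, default=0) + 1
    db[entry_id] = {
        "name": name,
        "wake_word": wake_word.lower(),
        "bd_address": bd_address,
        "voice_control": "UNKNOWN",  # updated by later state notifications
    }
    return entry_id
```

For example, registering the digital camera 200 with its wake word and BD address would create the ID 2 entry described for FIG. 6A (assuming the smart-speaker itself occupies ID 1).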
  • In step S505, the smart-speaker 100 establishes a connection for BLE communication with the digital camera 200.
  • the BT communication unit 110 of the smart-speaker 100 transmits a message requesting a BLE connection in order to establish the BLE communication connection.
  • In step S506, the smart-speaker 100 detects, via BLE communication using the attribute (ATT) protocol, a state notification service provided by the digital camera 200.
  • the state notification service is a service for periodically notifying a communication partner about whether an operation state of the voice control function is ON or OFF.
  • In step S507, the smart-speaker 100 requests the digital camera 200, via BLE communication, to start providing the state notification service detected in step S506.
  • In step S508, the digital camera 200 starts providing the state notification service to the smart-speaker 100.
  • In step S509, the digital camera 200 transmits a notification indicating the start of the state notification service to the smart-speaker 100 via BLE communication.
  • In step S510, the smart-speaker 100 transmits a notification indicating completion of the setting of the remote control function to the smartphone 500 via wireless LAN communication.
  • the application for executing the processing for setting the remote control function to the smart-speaker 100 is installed in the smartphone 500 .
  • the application does not always have to be installed in the smartphone 500 .
  • the user may execute the processing via a Web client function (e.g., Web browser) of the smartphone 500 by using a Web application programming interface (API).
  • the user executes the processing of the smart-speaker 100 by using the smartphone 500 .
  • the user may make the smart-speaker 100 execute the above-described processing by using the digital camera 200 .
  • the user installs the application for executing the processing in the digital camera 200 , and makes the smart-speaker 100 execute the processing by using the digital camera 200 .
  • FIGS. 7 and 8 are sequence diagrams illustrating examples of processing which allows the smart-speaker 100 to acquire information about an operation state of the voice control function of the digital camera 200 .
  • the digital camera 200 periodically transmits information indicating the operation state of the voice control function to the smart-speaker 100 via BLE communication. For example, the digital camera 200 transmits the information indicating the operation state of the voice control function to the smart-speaker 100 at an interval of 100 milliseconds. Based on the information received from the digital camera 200 , the smart-speaker 100 updates the operation state of the voice control function of the digital camera 200 registered in the device management DB.
  • FIGS. 7 and 8 are sequence diagrams illustrating the processing executed when the voice control function of the digital camera 200 is turned off and turned on, respectively, while the digital camera 200 is executing the state notification service.
  • FIG. 7 is a sequence diagram of the processing executed when the digital camera 200 is in the power supply state PS 2 .
  • In step S701, the digital camera 200 changes the voice control function from the ON state to the OFF state. At this time, the power supply state of the digital camera 200 is shifted from the power supply state PS2 to the power supply state PS1.
  • the processing of step S701 is executed, for example, when the user does not operate the digital camera 200 for a predetermined period of time, or when the user manually disables the voice control function via the operation unit 205.
  • In step S702, the digital camera 200 transmits a state notification message indicating the OFF state of the voice control function to the smart-speaker 100 via BLE communication.
  • the digital camera 200 continues to transmit the state notification message periodically even after step S702.
  • In step S703, the smart-speaker 100 changes the operation state of the voice control function of the digital camera 200 registered in the device management DB to “OFF”.
  • the device management DB is updated to a state illustrated in FIG. 6B .
  • FIG. 8 is a sequence diagram of the processing executed when the digital camera 200 is in the power supply state PS 1 .
  • In step S801, the digital camera 200 changes the voice control function from the OFF state to the ON state. At this time, the power supply state of the digital camera 200 is shifted from the power supply state PS1 to the power supply state PS2.
  • the processing of step S801 is executed, for example, when the user manually enables the voice control function via the operation unit 205.
  • In step S802, the digital camera 200 transmits a state notification message indicating the ON state of the voice control function to the smart-speaker 100 via BLE communication.
  • the digital camera 200 continues to transmit the state notification message periodically even after step S802.
  • In step S803, the smart-speaker 100 changes the operation state of the voice control function of the digital camera 200 registered in the device management DB to “ON”.
  • the device management DB is updated to a state illustrated in FIG. 6C .
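The DB updates performed in steps S703 and S803 can be sketched as a single handler applied to each periodic BLE notification. This is an assumed-name sketch; it presumes the DB entries store the notifying device's BD address, as described for FIG. 6.

```python
# Sketch of applying a periodic state notification received over BLE:
# the entry matching the sender's BD address has its state updated.

def apply_state_notification(db: dict, bd_address: str, is_on: bool) -> bool:
    """Update the stored voice control state for the notifying device.
    Returns False if no registered entry matches the BD address."""
    for entry in db.values():
        if entry.get("bd_address") == bd_address:
            entry["voice_control"] = "ON" if is_on else "OFF"
            return True
    return False
```

Because notifications arrive periodically (e.g. every 100 milliseconds in the example above), the DB converges to the camera's current state shortly after any change.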
  • FIGS. 9 and 10 are sequence diagrams illustrating examples of processing of a remote control function of the smart-speaker 100 .
  • the sequence diagram in FIG. 9 illustrates an example of processing to be executed if the user speaks to the digital camera 200 when the voice control function of the digital camera 200 is OFF.
  • the sequence diagram in FIG. 10 illustrates an example of processing to be executed if the user speaks to the digital camera 200 when the voice control function of the digital camera 200 is ON.
  • the processing for setting the remote control function of the smart-speaker 100 described with reference to FIG. 5 has been executed before the processing in FIG. 9 or 10 is executed.
  • In FIG. 9, when the processing is started, the voice control function of the digital camera 200 is OFF, and the power supply state is the power supply state PS1. Further, the device management DB of the smart-speaker 100 is in the state illustrated in FIG. 6B.
  • This processing sequence is started, for example, when the user speaks to the digital camera 200 , “Hey, Thomas, show me the photo taken last time.” At this time, it is assumed that the user exists within a range where the user's voice can sufficiently reach the smart-speaker 100 and the digital camera 200 .
  • step S 901 the smart-speaker 100 detects a wake word included in a received voice input. For example, the smart-speaker 100 detects a wake word “Thomas” from the received voice input. The smart-speaker 100 stores the voice data of the voice input in the RAM 203 .
  • step S 902 the smart-speaker 100 refers to the device management DB and determines whether an entry corresponding to the wake word detected in step S 901 exists.
  • the smart-speaker 100 determines, for example, whether an entry corresponding to the wake word “Thomas” exists. If the entry corresponding to the detected wake word does not exist, the smart-speaker 100 ends the processing. If the entry corresponding to the detected wake word exists, the smart-speaker 100 advances the processing to step S 903 .
  • step S 903 the smart-speaker 100 determines whether a device corresponding to the wake word detected in step S 901 is the own device (smart-speaker 100 ) or an external device. If the smart-speaker 100 determines that the own device corresponds thereto, the smart-speaker 100 uses the own voice control function to analyze the received voice input and executes processing corresponding to that voice input. If the smart-speaker 100 determines that the external device corresponds thereto, the processing proceeds to step S 904 .
  • step S 904 the smart-speaker 100 determines whether the operation state of the voice control function of the external device is ON or OFF. In the processing, because the device management DB is in the state illustrated in FIG. 6B , the smart-speaker 100 determines that the ID2 (digital camera) corresponds to the wake word “Thomas”, and determines that the operation state of the voice control function is OFF.
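The decision chain in steps S902 to S904 can be condensed into a single dispatch routine. The following Python sketch is illustrative only; the return labels and the assumption that the speaker's own entry carries "ID1" are invented for clarity.

```python
def dispatch_voice_input(db, detected_wake_word, own_id="ID1"):
    """Sketch of the decisions in S902-S904 after a wake word is detected."""
    entry = db.get(detected_wake_word)
    if entry is None:                    # S902: no matching entry -> end
        return "end"
    if entry["id"] == own_id:            # S903: own device handles it locally
        return "handle_locally"
    if entry["voice_control"] == "ON":   # S904: external device already ON
        return "end"
    return "enable_remote"               # otherwise proceed to S905

db = {"Thomas": {"id": "ID2", "voice_control": "OFF"}}
dispatch_voice_input(db, "Thomas")  # returns "enable_remote"
```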
  • In step S905, the smart-speaker 100 transmits a message requesting the digital camera 200 to enable the voice control function to the digital camera 200 via BLE communication.
  • Together with this message, the smart-speaker 100 transmits the information necessary for connecting to the wireless LAN network to which the smart-speaker 100 is connected, namely a service set identifier (SSID) and a cryptography key.
  • The digital camera 200 receives the message and starts processing for enabling the voice control function.
  • This wireless LAN network is the network formed by the wireless LAN router 300 illustrated in FIG. 1.
  • In step S906, the digital camera 200 shifts the power supply state from the power supply state PS1 to the power supply state PS2.
  • In step S907, the digital camera 200 connects to the wireless LAN network by using the SSID and cryptography key received in step S905. Further, the digital camera 200 detects the server 400 via the wireless LAN network and brings the voice control function into a usable state.
  • In step S908, the digital camera 200 transmits a message notifying completion of the request received in step S905 to the smart-speaker 100 via BLE communication.
  • The message also includes information indicating the ON state of the voice control function of the digital camera 200 and information, such as an internet protocol (IP) address, necessary for accessing the digital camera 200 via wireless LAN communication.
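The payloads exchanged over BLE in steps S905 and S908 might look like the following. Every field name and value here is an illustrative assumption; the patent does not specify a message format.

```python
# S905: smart-speaker 100 -> digital camera 200 (enable request plus
# the credentials for the wireless LAN formed by router 300).
enable_request = {
    "type": "ENABLE_VOICE_CONTROL",
    "ssid": "home-wlan",           # assumed example SSID
    "key": "example-passphrase",   # assumed example cryptography key
}

# S908: digital camera 200 -> smart-speaker 100 (completion notification,
# new state, and the address needed for later wireless LAN access).
completion_notice = {
    "type": "REQUEST_COMPLETED",
    "voice_control": "ON",
    "ip_address": "192.168.0.20",  # assumed example address
}
```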
  • In step S909, the smart-speaker 100 changes the operation state of the voice control function of ID2 (the digital camera 200) registered in the device management DB to "ON".
  • The device management DB is thereby brought into the state illustrated in FIG. 6C.
  • In step S910, the smart-speaker 100 transmits a message requesting the digital camera 200 to execute the voice control function corresponding to the voice input detected in step S901 to the digital camera 200 via wireless LAN communication.
  • This message includes the data of the voice input received by the smart-speaker 100 in step S901.
  • In step S911, the digital camera 200 executes the voice control function according to the message received in step S910.
  • For example, the digital camera 200 interprets the portion of the voice data received in step S910, "Show me the photo taken last time", to determine the necessary processing.
  • The digital camera 200 then displays the last still image data recorded in the recording medium 204 on the display unit 206, and outputs a voice message, "Here, please see the photo", in response to the user's request.
  • In step S912, the digital camera 200 transmits a message indicating completion of the processing requested in the message received in step S910 to the smart-speaker 100 via wireless LAN communication.
  • Steps S901 to S911 correspond to the procedure of the voice control function executed for the user's voice input, "Hey, Thomas, show me the photo taken last time."
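The forwarding exchange in steps S910 to S912 can be sketched as a request/response over the wireless LAN. The message types and the `transport` callable below are assumptions standing in for the actual wireless LAN channel.

```python
def request_remote_execution(transport, voice_data):
    """Sketch of S910/S912: send the captured voice data and report
    whether a completion message came back."""
    reply = transport({"type": "EXECUTE_VOICE_FUNCTION",  # S910
                       "voice_data": voice_data})
    return reply.get("type") == "PROCESSING_COMPLETED"    # S912

def fake_camera(message):
    # Stand-in for S911: the camera interprets the voice data, performs
    # the requested processing, and replies with a completion message.
    return {"type": "PROCESSING_COMPLETED"}

request_remote_execution(fake_camera, b"show me the photo taken last time")
```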
  • In FIG. 10, when the processing starts, the voice control function of the digital camera 200 is ON and the power supply state is the power supply state PS2. Further, the device management DB of the smart-speaker 100 is in the state illustrated in FIG. 6C.
  • This processing sequence starts, for example, when the user speaks, "Hey, Thomas, show me the last but one photo", to the digital camera 200 after the processing in FIG. 9.
  • In step S1001, the smart-speaker 100 detects a wake word from the received user's voice input. For example, the smart-speaker 100 detects the wake word "Thomas" from the received voice input.
  • In step S1002, the smart-speaker 100 refers to the device management DB and determines whether an entry corresponding to the wake word detected in step S1001 exists. For example, the smart-speaker 100 determines whether an entry corresponding to the wake word "Thomas" exists. If no entry corresponding to the detected wake word exists, the smart-speaker 100 ends the processing.
  • In step S1003, the smart-speaker 100 determines whether the operation state of the voice control function of the device corresponding to the wake word is ON or OFF.
  • In this example, because the device management DB is in the state illustrated in FIG. 6C, the smart-speaker 100 determines that ID2 (the digital camera 200) corresponds to the wake word "Thomas" and that the operation state of the voice control function is ON. In this case, the smart-speaker 100 does not have to remotely control the digital camera 200, and therefore ends the processing.
  • In step S1004, the digital camera 200 detects a voice input via the voice receiving unit 207.
  • For convenience, the step number S1004 is applied to this processing.
  • This processing is executed independently of the processing executed by the smart-speaker 100 when the user speaks to the digital camera 200.
  • In step S1005, the digital camera 200 executes the voice control function according to the voice input received in step S1004.
  • For example, the digital camera 200 interprets the portion of the voice data received in step S1004, "Show me the last but one photo", to determine the necessary processing, and displays the second-last recorded still image data on the display unit 206.
  • The digital camera 200 further outputs a voice message, "Here, please see the photo", in response to the user's voice input.
  • As a method of executing the voice control function corresponding to the voice input received in step S901, the method in which the smart-speaker 100 transmits voice data to the digital camera 200 via wireless LAN communication has been described. However, other methods can also be used.
  • For example, a method using a Web API can be employed.
  • If a Web API function that enables the smart-speaker 100 to remotely control various functions of the digital camera 200 via the network is provided, the smart-speaker 100 analyzes the voice input received in step S901.
  • The voice data can then be converted into a Web API call through the server, and the smart-speaker 100 may execute the converted Web API call with respect to the digital camera 200.
  • The function for converting the voice data into the Web API call may be provided by a server on the Internet, and the smart-speaker 100 may use that server.
  • Alternatively, the smart-speaker 100 may output the voice data received in step S901 via the voice output unit 108 in place of the user.
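The Web API variant amounts to mapping the recognized utterance to an endpoint on the camera instead of forwarding raw voice data. The following sketch is purely hypothetical; the endpoint paths and the matching rules are invented for illustration.

```python
def voice_to_web_api(recognized_text):
    """Hypothetical conversion of recognized text into a Web API request
    to be executed against the digital camera 200."""
    if "photo taken last time" in recognized_text:
        return {"method": "GET", "path": "/api/v1/images/latest"}
    if "take a photo" in recognized_text:
        return {"method": "POST", "path": "/api/v1/shutter"}
    return None  # utterance not understood; fall back to another method

voice_to_web_api("show me the photo taken last time")
```

In this variant, the speech-to-request conversion could run on the speaker itself or, as the text notes, on a server on the Internet.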
  • FIG. 11 is a flowchart illustrating an example of processing of the remote control function of the smart-speaker 100.
  • In step S1101, the control unit 101 detects a wake word included in the voice input received by the voice receiving unit 107.
  • The control unit 101 stores voice data of the received voice input in the RAM 103. This processing corresponds to the processing in step S901 of FIG. 9.
  • In step S1102, the control unit 101 determines whether the wake word detected in step S1101 is registered in the device management DB stored in the ROM 102. If the control unit 101 determines that the wake word is registered (YES in step S1102), the processing proceeds to step S1103. If not (NO in step S1102), the processing ends. The processing in step S1102 corresponds to the processing in step S902 of FIG. 9.
  • In step S1103, the control unit 101 identifies the device corresponding to the wake word determined to be registered in the device management DB in step S1102, and determines whether that device is an external device. If the control unit 101 determines that the device is an external device (i.e., the digital camera 200) (YES in step S1103), the processing proceeds to step S1104. If the control unit 101 determines that the device is the own device (NO in step S1103), the processing proceeds to step S1112. The processing in step S1103 corresponds to the processing in step S903 of FIG. 9.
  • In step S1104, the control unit 101 determines whether the operation state of the voice control function of the digital camera 200 identified in step S1103 is ON or OFF. If the operation state is OFF (OFF in step S1104), the processing proceeds to step S1105. If it is ON (ON in step S1104), the processing ends. The processing in step S1104 corresponds to the processing in step S904 of FIG. 9.
  • In step S1105, the control unit 101 transmits a message requesting the digital camera 200 to enable the voice control function through the BT communication unit 110.
  • The processing in step S1105 corresponds to the processing in step S905 of FIG. 9.
  • In step S1106, the control unit 101 determines whether a notification of completion is received from the digital camera 200 within a predetermined period in response to the message transmitted in step S1105. If the notification is received within the predetermined period (YES in step S1106), the processing proceeds to step S1108. If not (NO in step S1106), the processing proceeds to step S1107. The processing in step S1106 corresponds to the processing in step S908 of FIG. 9.
  • In step S1107, the control unit 101 notifies the user that the voice control function of the digital camera 200 cannot be enabled, and ends the processing.
  • For example, a message such as "Digital camera 200 is not communicable" or "Digital camera 200 is not found in the vicinity" is notified to the user through the voice output unit 108 or the display unit 106.
  • In step S1108, the control unit 101 updates the device management DB.
  • Specifically, the control unit 101 changes the operation state of the voice control function of the digital camera 200 to "ON".
  • The processing in step S1108 corresponds to the processing in step S909 of FIG. 9.
  • In step S1109, the control unit 101 transmits a message requesting the digital camera 200 to execute the voice control function corresponding to the voice input received in step S1101 to the digital camera 200 via the wireless LAN communication unit 109.
  • The processing in step S1109 corresponds to the processing in step S910 of FIG. 9.
  • In step S1110, the control unit 101 determines whether a notification of completion is received from the digital camera 200 within a predetermined period. If the completion notification is received within the predetermined period (YES in step S1110), the processing ends. If the completion notification is not received even after the predetermined time has passed, or an error response is received (NO in step S1110), the processing proceeds to step S1111. The processing in step S1110 corresponds to the processing in step S912 of FIG. 9.
  • In step S1111, the control unit 101 notifies the user that the instruction provided by the user's voice was not executed by the digital camera 200, and ends the processing.
  • As the error processing, for example, a message such as "Digital camera 200 is not communicable", "Digital camera 200 is not found in the vicinity", or "Please speak again" is notified to the user through the voice output unit 108 or the display unit 106.
  • In step S1112, the control unit 101 executes the voice control function corresponding to the voice input received in step S1101. Specifically, the control unit 101 determines a command based on the received voice input by using the server 400 via the wireless LAN communication unit 109, and executes the processing instructed by the voice input.
  • The processing sequence of the remote control function of the smart-speaker 100 has been described above.
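The flowchart of FIG. 11 can be condensed into a single routine. This Python sketch is illustrative; the return labels are invented, and `camera` is a stand-in object bundling the BLE and wireless LAN calls, which are stubbed here.

```python
def remote_control_flow(db, wake_word, voice_data, camera, own_id="ID1"):
    """Condensed sketch of S1101-S1112 on the smart-speaker side."""
    entry = db.get(wake_word)
    if entry is None:                      # S1102: wake word not registered
        return "end"
    if entry["id"] == own_id:              # S1103: own device
        return "executed_locally"          # S1112
    if entry["voice_control"] == "ON":     # S1104: external device already ON
        return "end"
    if not camera.enable_voice_control():  # S1105-S1106: BLE request + wait
        return "enable_error"              # S1107: notify user
    entry["voice_control"] = "ON"          # S1108: update device management DB
    if not camera.execute(voice_data):     # S1109-S1110: LAN request + wait
        return "execute_error"             # S1111: notify user
    return "completed"

class FakeCamera:
    """Stub standing in for the digital camera 200."""
    def enable_voice_control(self):
        return True
    def execute(self, voice_data):
        return True

db = {"Thomas": {"id": "ID2", "voice_control": "OFF"}}
remote_control_flow(db, "Thomas", b"...", FakeCamera())  # returns "completed"
```

Note that the happy path both updates the DB entry to "ON" and forwards the buffered voice data, mirroring the order S1108 then S1109.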
  • FIG. 12 is a flowchart illustrating an example of processing of the remote control function of the digital camera 200. This flowchart starts when the digital camera 200 is activated.
  • In step S1201, the control unit 201 determines whether the voice control function is ON or OFF. If the voice control function is ON (ON in step S1201), the processing proceeds to step S1209. If it is OFF (OFF in step S1201), the processing proceeds to step S1202.
  • In step S1202, the control unit 201 determines whether a message requesting the digital camera 200 to enable the voice control function is received from the smart-speaker 100 via the BT communication unit 210. If the message is received (YES in step S1202), the processing proceeds to step S1203. If not (NO in step S1202), the processing returns to step S1201, and the control unit 201 again determines the operation state of the voice control function.
  • In step S1203, the control unit 201 controls the power supply control unit 211 to shift the power supply state from the power supply state PS1 to the power supply state PS2.
  • In step S1204, the control unit 201 uses the SSID and the cryptography key included in the message received in step S1202 to connect to the wireless LAN network via the wireless LAN communication unit 209. Further, the control unit 201 detects the server 400 via the wireless LAN communication unit 209 and brings the voice control function into a usable state.
  • In step S1205, the control unit 201 transmits a completion notification indicating the enabled state of the voice control function to the smart-speaker 100 via the BT communication unit 210.
  • In step S1206, the control unit 201 determines whether voice data is received from the smart-speaker 100 via the wireless LAN communication unit 209. If the voice data is not received (NO in step S1206), the control unit 201 ends the processing and stands ready with the voice control function enabled. If the voice data is received (YES in step S1206), the processing proceeds to step S1207.
  • In step S1207, the control unit 201 analyzes the received voice data and executes processing based on the analysis result.
  • In step S1208, the control unit 201 transmits a message indicating that the processing based on the received voice data has been completed to the smart-speaker 100 via the wireless LAN communication unit 209.
  • In step S1209, the control unit 201 determines whether a voice input is received. If the voice input is received (YES in step S1209), the processing proceeds to step S1210. If not (NO in step S1209), the control unit 201 ends the processing and stands ready until an instruction is given by the user.
  • In step S1210, the control unit 201 executes the voice control function based on the received voice input.
  • Specifically, the control unit 201 analyzes the voice data of the received voice input and executes processing based on the analysis result.
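The camera-side behavior of FIG. 12 can be sketched as one pass through the loop, branching on the current state and on the kind of input received. The event labels below are invented names for the three inputs the camera reacts to (a BLE enable request, voice data over the wireless LAN, and a voice input at its own microphone).

```python
def camera_step(voice_control_on, event):
    """One pass through the FIG. 12 loop (S1201-S1210).
    `event` is a (kind, payload) tuple; returns (action, new_state)."""
    kind, payload = event
    if not voice_control_on:
        if kind == "ble_enable_request":     # S1202
            # S1203-S1205: shift to PS2, join the wireless LAN via the
            # received SSID/key, then notify completion over BLE.
            return "enabled", True
        return "idle", False                 # back to the S1201 check
    if kind == "lan_voice_data":             # S1206-S1208
        return "executed_remote_voice", True
    if kind == "mic_voice_input":            # S1209-S1210
        return "executed_local_voice", True
    return "idle", True                      # stand ready for the next input

camera_step(False, ("ble_enable_request", None))  # returns ("enabled", True)
```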
  • As described above, the smart-speaker 100, in place of the digital camera 200, can receive the voice input even when the voice control function of the digital camera 200 is OFF, so that the user can still use the voice control function. Further, this configuration lets the user use the voice control function while reducing the power consumption of the digital camera 200.
  • The present disclosure can be realized in such a manner that a program for realizing one or more functions according to the above-described exemplary embodiments is supplied to a system or an apparatus via a network or a storage medium, and one or more processors in the system or the apparatus read and execute the program. The present disclosure can also be realized with a circuit (e.g., an application specific integrated circuit (ASIC)) that realizes one or more functions.
  • The present disclosure is not limited to the above-described exemplary embodiments. In the implementation phase, the present disclosure can be embodied by modifying the constituent elements within a range that does not depart from the technical spirit thereof. Further, various embodiments can be achieved by appropriately combining the plurality of constituent elements described in the above exemplary embodiments. For example, some constituent elements may be deleted from all the constituent elements described in the above-described exemplary embodiments.
  • Embodiment(s) can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
  • The computer may comprise one or more processors (e.g., a central processing unit (CPU) or a micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
  • The computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
  • The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.

US16/660,188 2018-10-31 2019-10-22 Electronic device, control method, and storage medium Active 2040-07-16 US11393467B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2018205868A JP7242248B2 (ja) 2018-10-31 2018-10-31 Electronic device, control method thereof, and program therefor
JP2018-205868 2018-10-31
JPJP2018-205868 2018-10-31

Publications (2)

Publication Number Publication Date
US20200135196A1 US20200135196A1 (en) 2020-04-30
US11393467B2 true US11393467B2 (en) 2022-07-19

Family

ID=70327094

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/660,188 Active 2040-07-16 US11393467B2 (en) 2018-10-31 2019-10-22 Electronic device, control method, and storage medium

Country Status (3)

Country Link
US (1) US11393467B2 (ja)
JP (1) JP7242248B2 (ja)
CN (1) CN111128145B (ja)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11289078B2 (en) * 2019-06-28 2022-03-29 Intel Corporation Voice controlled camera with AI scene detection for precise focusing
KR20220083199A (ko) * 2020-12-11 2022-06-20 삼성전자주식회사 Electronic apparatus and control method thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015013351A (ja) 2013-07-08 2015-01-22 有限会社アイドリーマ Program for controlling a robot
US20180197533A1 (en) * 2017-01-11 2018-07-12 Google Llc Systems and Methods for Recognizing User Speech
US10271093B1 (en) * 2016-06-27 2019-04-23 Amazon Technologies, Inc. Systems and methods for routing content to an associated output device
US20200051554A1 (en) * 2017-01-17 2020-02-13 Samsung Electronics Co., Ltd. Electronic apparatus and method for operating same

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69712485T2 * 1997-10-23 2002-12-12 Sony Int Europe Gmbh Voice interface for a home network
US20030069733A1 (en) * 2001-10-02 2003-04-10 Ryan Chang Voice control method utilizing a single-key pushbutton to control voice commands and a device thereof
US8155636B2 (en) * 2006-05-05 2012-04-10 Mediatek Inc. Systems and methods for remotely controlling mobile stations
EP3261087A1 (en) * 2013-09-03 2017-12-27 Panasonic Intellectual Property Corporation of America Voice interaction control method
KR102188090B1 (ko) * 2013-12-11 2020-12-04 엘지전자 주식회사 Smart home appliance, operating method thereof, and voice recognition system using the smart home appliance
CN103729425B (zh) * 2013-12-24 2018-11-16 腾讯科技(深圳)有限公司 Operation response method, client, browser, and system
JP6501217B2 (ja) 2015-02-16 2019-04-17 アルパイン株式会社 Information terminal system
BR112017021673B1 (pt) 2015-04-10 2023-02-14 Honor Device Co., Ltd Voice control method, non-transitory computer-readable medium, and terminal
US9787887B2 (en) 2015-07-16 2017-10-10 Gopro, Inc. Camera peripheral device for supplemental audio capture and remote control of camera
US9691378B1 (en) * 2015-11-05 2017-06-27 Amazon Technologies, Inc. Methods and devices for selectively ignoring captured audio data
CN105913847B (zh) * 2016-06-01 2020-10-16 北京京东尚科信息技术有限公司 Voice control system, user terminal device, server, and central control unit
CN107770223A (zh) * 2016-08-19 2018-03-06 深圳市轻生活科技有限公司 Intelligent voice monitoring system and method
CN107872371A (zh) * 2016-09-23 2018-04-03 深圳市轻生活科技有限公司 Voice control used with an intelligent wireless router and voice control method thereof
CN108574515B (zh) * 2017-03-07 2021-07-30 中移(杭州)信息技术有限公司 Data sharing method, apparatus, and system based on a smart speaker device
CN107180632A (zh) * 2017-06-19 2017-09-19 微鲸科技有限公司 Voice control method, apparatus, and readable storage medium
CN108419108A (zh) * 2018-03-06 2018-08-17 深圳创维数字技术有限公司 Voice control method, apparatus, remote controller, and computer storage medium


Also Published As

Publication number Publication date
JP2020071721A (ja) 2020-05-07
JP7242248B2 (ja) 2023-03-20
CN111128145A (zh) 2020-05-08
CN111128145B (zh) 2024-05-21
US20200135196A1 (en) 2020-04-30


Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: CANON KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FUJITA, SHUNJI;REEL/FRAME:051503/0489

Effective date: 20191003

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE