WO2018155810A1 - Electronic device, control method therefor, and non-transitory computer-readable recording medium - Google Patents

Electronic device, control method therefor, and non-transitory computer-readable recording medium

Info

Publication number
WO2018155810A1
Authority
WO
WIPO (PCT)
Prior art keywords
determined
alternative
voice
determining
user
Prior art date
Application number
PCT/KR2018/000336
Other languages
English (en)
Korean (ko)
Inventor
황인철
Original Assignee
삼성전자 주식회사 (Samsung Electronics Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020170157902A (KR102490916B1)
Application filed by 삼성전자 주식회사 (Samsung Electronics Co., Ltd.)
Priority to US 16/485,061 (US20200043476A1)
Publication of WO2018155810A1

Links

Images

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/08 - Speech classification or search
    • G10L15/16 - Speech classification or search using artificial neural networks
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/26 - Speech to text systems

Definitions

  • The present disclosure relates to an electronic device, a control method thereof, and a non-transitory computer-readable recording medium. More particularly, it relates to an electronic device that provides a guide to an alternative operation when an operation corresponding to a user voice cannot be performed, a control method thereof, and a non-transitory computer-readable recording medium.
  • Artificial intelligence (AI) technology is composed of machine learning (deep learning) and elemental technologies that utilize machine learning.
  • Machine learning is an algorithmic technology that classifies and learns the characteristics of input data by itself.
  • Elemental technology uses machine learning algorithms such as deep learning, and consists of technical fields such as linguistic understanding, visual understanding, inference/prediction, knowledge representation, and motion control.
  • Linguistic understanding is a technology for recognizing and applying/processing human language and characters, and includes natural language processing, machine translation, dialogue systems, question answering, speech recognition/synthesis, and the like.
  • Visual understanding is a technology that recognizes and processes objects as human vision does, and includes object recognition, object tracking, image retrieval, person recognition, scene understanding, spatial understanding, and image enhancement.
  • Inference/prediction is a technology for judging information and logically inferring and predicting from it, and includes knowledge/probability-based inference, optimized prediction, preference-based planning, and recommendation.
  • Knowledge representation is a technology that automatically processes human experience information into knowledge data, and includes knowledge construction (data generation/classification) and knowledge management (data utilization).
  • Motion control is a technology for controlling the autonomous driving of a vehicle and the movement of a robot, and includes movement control (navigation, collision avoidance, driving), operation control (action control), and the like.
  • Such an electronic device provides an intelligent assistant or virtual personal assistant (VPA) function that recognizes a user voice and provides corresponding information or performs a corresponding operation.
  • However, the existing intelligent assistant function provided only an error message when the user voice could not be interpreted into a form in which an operation could be performed.
  • In that case, the user may not know which voice input to provide in order to perform the intended operation, since only an error message is given.
  • An object of the present disclosure is to provide an electronic device, a control method thereof, and a non-transitory computer-readable recording medium for guiding an alternative operation that can replace an operation corresponding to a user voice when the operation corresponding to the user voice cannot be performed.
  • According to an embodiment, a method of controlling an electronic device includes: receiving a user voice; obtaining text data from the user voice and determining a target component and a parameter component from the obtained text data; determining an operation corresponding to the user voice based on the target component and the parameter component; if it is determined that the determined operation cannot be performed, determining an alternative operation to replace the determined operation based on at least one of the target component and the parameter component; and providing a message guiding the alternative operation.
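The claimed steps can be sketched as a minimal pipeline. This is an illustrative sketch only: the class, function, and operation names (ParsedVoice, can_perform, "send_photo") are assumptions, while the five-photo limit and the captured-screen alternative are borrowed from the examples given later in this disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class ParsedVoice:
    target: str                                     # intent, e.g. "send_photo"
    parameters: dict = field(default_factory=dict)  # content, e.g. app, time, object

def determine_operation(p: ParsedVoice) -> dict:
    # Operation type comes from the target component,
    # operation content from the parameter components.
    return {"type": p.target, **p.parameters}

def can_perform(op: dict) -> bool:
    # Example constraint from the disclosure: at most five photos
    # can be attached to a single message.
    return op.get("count", 0) <= 5

# Pre-stored matching of operations to alternatives (illustrative).
ALTERNATIVES = {"send_photo": "send_captured_screen"}

def control_method(p: ParsedVoice) -> str:
    op = determine_operation(p)
    if can_perform(op):
        return f"performing {op['type']}"
    alt = ALTERNATIVES.get(p.target, "none")
    return f"cannot perform {op['type']}; suggested alternative: {alt}"
```

A ten-photo request would fall through the feasibility check and return a guide to the pre-stored alternative instead of an error.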
  • According to an embodiment, the electronic device includes: an input unit for receiving a user voice; and a processor configured to obtain text data from the user voice input through the input unit, determine a target component and a parameter component from the obtained text data, determine an operation corresponding to the user voice based on the target component and the parameter component, and, if it is determined that the determined operation cannot be performed, determine an alternative operation to replace the determined operation based on at least one of the target component and the parameter component and provide a message guiding the alternative operation.
  • FIG. 1 is a block diagram schematically illustrating a configuration of an electronic device according to an embodiment of the present disclosure
  • FIG. 2 is a block diagram illustrating a configuration of an electronic device in detail according to an embodiment of the present disclosure
  • FIG. 3 is a block diagram illustrating a configuration for performing an intelligent assistant function according to an embodiment of the present disclosure
  • FIGS. 4A through 5 are diagrams illustrating messages for guiding an alternative operation according to an embodiment of the present disclosure
  • FIG. 6 is a diagram for describing a method of controlling an electronic device, according to an embodiment of the present disclosure.
  • FIG. 7 illustrates an intelligent assistant system including a user terminal and a server for performing an intelligent assistant function according to another embodiment of the present disclosure
  • FIG. 8 is a sequence diagram illustrating a control method of an intelligent assistant system according to an embodiment of the present disclosure
  • FIG. 9 is a block diagram illustrating a configuration of a processor according to an embodiment of the present disclosure.
  • FIG. 10A is a block diagram illustrating a configuration of a data learning unit according to an exemplary embodiment.
  • FIG. 10B is a block diagram illustrating a configuration of an alternative operation determiner according to an exemplary embodiment.
  • The terms 'first' and 'second' may be used to describe various components, but the components are not limited by these terms; the terms are only used to distinguish one component from another.
  • For example, a first component may be referred to as a second component, and similarly, a second component may be referred to as a first component.
  • A module or unit performs at least one function or operation, and may be implemented by hardware, software, or a combination of hardware and software.
  • In addition, a plurality of 'modules' or 'units' may be integrated into at least one module, except for 'modules' or 'units' that need to be implemented by specific hardware, and may be implemented as at least one processor.
  • FIG. 1 is a schematic block diagram illustrating a configuration of an electronic device 100 according to an embodiment of the present disclosure.
  • the electronic device 100 may provide an intelligent assistant service alone.
  • For example, the electronic device 100 may be implemented as various devices such as a smartphone, a tablet PC, a notebook PC, a desktop PC, a wearable device such as a smart watch, an electronic photo frame, a humanoid robot, an audio device, or a smart TV.
  • Alternatively, the electronic device 100 may be implemented as a server to provide an intelligent assistant service to a user in cooperation with an external user terminal 200.
  • Here, 'intelligent assistant' refers to a software application that understands a user's language and performs the instructions desired by the user, through a combination of artificial intelligence and voice recognition technology.
  • For example, an intelligent assistant may perform artificial intelligence functions such as machine learning including deep learning, speech recognition, sentence analysis, and situational awareness.
  • The intelligent assistant can learn a user's habits or patterns to provide a personalized service. Examples of intelligent assistants include S Voice and Bixby. An intelligent assistant may also be called a virtual personal assistant, an interactive agent, and so on.
  • the electronic device 100 includes an input unit 110 and a processor 130.
  • the input unit 110 receives a user voice.
  • the input unit 110 may be implemented as a microphone, and may receive a user voice through a microphone.
  • the input unit 110 may receive text corresponding to the user voice in addition to the user voice.
  • the processor 130 may control overall operations of the electronic device 100.
  • the processor 130 may acquire text data from a user voice input through the input unit 110 and determine a target component and a parameter component from the obtained text data.
  • The processor 130 may determine an operation corresponding to the user voice based on the target component and the parameter component. If it is determined that the determined operation cannot be performed, the processor 130 may determine an alternative operation to replace the determined operation based on at least one of the target component and the parameter component, and provide a message guiding the alternative operation.
  • the processor 130 may obtain text data corresponding to the user voice by analyzing the user voice input through the input unit 110.
  • the processor 130 may determine a target component and a parameter component from the text data.
  • Here, the target component may indicate the intention of the user who uttered the user voice, and the parameter component may indicate the specific content (for example, application type, time, object, etc.) of the user's intended operation.
  • The processor 130 may determine an operation corresponding to the user voice based on the determined target component and parameter component. In this case, the processor 130 may determine the type of the operation corresponding to the user voice based on the target component, and determine the content of the operation based on the parameter component.
  • the processor 130 may determine whether the determined operation can be performed. In detail, when the type of operation is determined based on the target component, the processor 130 may determine whether the content of the operation determined based on the parameter component is feasible.
  • the processor 130 may determine an alternative operation that may replace the determined operation based on at least one of the target component and the parameter component.
  • Specifically, the processor 130 may determine, as the alternative operation, one of a plurality of alternative operations that may replace the determined operation, based on the content of the operation determined through the parameter component. In this case, the determined operation and the plurality of alternative operations may be matched with each other and stored in advance.
  • Alternatively, the processor 130 may determine the alternative operation by inputting the content of the determined operation into a learned alternative operation determination model.
  • Here, the alternative operation determination model is a model for recognizing an alternative operation that can replace a specific operation, and may be built in advance.
  • the processor 130 may process and provide a message for guiding an alternative operation in a natural language form.
  • the processor 130 may provide a message through a display.
  • the processor 130 may provide a message to an external user terminal.
  • the electronic device 100 may include an input unit 110, a display 120, a processor 130, a voice output unit 140, a communication unit 150, and a memory 160.
  • the electronic device 100 may include various components such as an image receiving unit (not shown), an image processing unit (not shown), a power supply unit (not shown), and the like.
  • the electronic device 100 is not necessarily limited to being implemented by including all the configurations shown in FIG. 2. For example, when the electronic device 100 is implemented as a server, the display 120 and the voice output unit 140 may not be provided.
  • the input unit 110 may receive a user voice.
  • the input unit 110 may include a voice input unit (eg, a microphone) that receives a user voice.
  • the voice input unit may receive a user voice spoken by the user.
  • The voice input unit may be integrally formed on the upper side, front side, or side of the electronic device 100, or may be provided as a separate unit and connected to the electronic device 100 through a wired or wireless interface.
  • A plurality of voice input units may be provided to generate voice signals by receiving voices at different locations. Using the plurality of voice signals, the electronic device 100 may generate a single enhanced voice signal in a pre-processing step before performing the voice recognition function.
  • the voice input unit includes a microphone, an analog-to-digital converter (ADC), an energy determiner, a noise remover, and a voice signal generator.
  • the microphone receives an analog audio signal including a user's voice.
  • the ADC converts the multichannel analog signal input from the microphone into a digital signal.
  • The energy determination unit calculates the energy of the converted digital signal and determines whether the energy is greater than or equal to a predetermined value. If the energy of the digital signal is greater than or equal to the predetermined value, the energy determination unit transmits the digital signal to the noise remover; if it is less than the predetermined value, the energy determination unit does not output the digital signal and waits for another input. As a result, the audio processing pipeline is not activated by sounds other than a voice signal, and unnecessary power consumption can be prevented.
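The energy-gating step described above can be sketched as follows. This is a minimal illustration; the mean-square energy measure and the threshold value are assumptions, since the disclosure does not specify the exact energy formula.

```python
def energy_gate(samples, threshold):
    """Return True if the frame should be passed on to the noise remover.

    Computes the mean-square energy of one audio frame; frames below
    the threshold are dropped so the rest of the audio pipeline stays
    inactive, saving power.
    """
    energy = sum(s * s for s in samples) / len(samples)
    return energy >= threshold

silent_frame = [0.0] * 160   # e.g. 10 ms of silence at 16 kHz
voiced_frame = [0.5] * 160   # a frame with significant amplitude
```

Only frames that pass the gate reach the noise remover and later stages, which is how unnecessary power consumption is avoided.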
  • When a digital signal is input, the noise remover removes the noise component from the digital signal containing both the noise component and the user voice component.
  • Here, the noise component is ambient noise that may occur in a home environment, and may include an air conditioner sound, a vacuum cleaner sound, a music sound, and the like.
  • the noise removing unit outputs the digital signal from which the noise component is removed to the audio signal generating unit.
  • the voice signal generator obtains direction information on the user's voice by tracking the location of the user's utterance within the 360 ° range from the voice input unit using the Localization / Speaker Tracking module.
  • the voice signal generator extracts a target sound source within a 360 ° range from the voice input unit by using the digital signal from which the noise is removed and the direction information on the user voice through the target spoken sound extraction module.
  • The voice signal generator converts the user voice into a user voice signal in a form suitable for transmission, and transmits the user voice signal to the main body of the electronic device 100 using the wireless interface.
  • the input unit 110 may receive various types of user commands in addition to the user voice.
  • the input unit 110 may receive a user command for selecting one of a plurality of candidate operations displayed on the guide UI.
  • the input unit 110 may be implemented as a button, a motion recognition device, a touch pad, or the like.
  • the touch panel and the display 120 may be coupled to each other to form a touch screen in which a mutual layer structure is formed.
  • the touch screen may detect a touch input position, an area, a pressure of the touch input, and the like.
  • the display 120 may display various guides, image contents, information, UIs, and the like provided by the electronic device 100.
  • The display 120 may be implemented as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a plasma display panel (PDP), or the like, and may display the various screens that the electronic device 100 can provide.
  • For example, the display 120 may provide an image corresponding to the speech recognition result of the processor 130.
  • The display 120 may also display the recognition result of the user voice as text.
  • the display 120 may display a message for guiding an alternative operation.
  • the voice output unit 140 may output a voice.
  • the voice output unit 140 may output not only various audio data but also a notification sound or a voice message.
  • In particular, the electronic device 100 may include the voice output unit 140 as an output unit for providing an interactive intelligent assistant function. By outputting a natural-language-processed voice message through the voice output unit 140, the electronic device 100 may provide a user experience as if the user were talking with the electronic device 100.
  • the voice output unit 140 may be embedded in the electronic device 100 or may be implemented in the form of an output port such as a jack.
  • the communicator 150 communicates with an external device.
  • the external device may be implemented as another electronic device, server, cloud storage, network, or the like.
  • For example, the communicator 150 may transmit a speech recognition result to an external device and receive corresponding information from the external device.
  • the communicator 150 may receive a language model for speech recognition and a learning model for motion determination from an external device.
  • For example, the communication unit 150 may transmit the speech recognition result to the server 200, and receive from the server 200 a control signal for performing a corresponding operation or a message guiding an alternative operation.
  • the communication unit 150 may include various communication modules such as a short range wireless communication module (not shown), a wireless communication module (not shown), and the like.
  • the short range wireless communication module is a module for communicating with an external device located in a short range according to a short range wireless communication scheme such as Bluetooth, Zigbee, or the like.
  • the wireless communication module is a module that is connected to an external network and performs communication according to a wireless communication protocol such as WiFi, WiFi direct, or IEEE.
  • The wireless communication module may further include a mobile communication module that connects to a mobile communication network and performs communication according to various mobile communication standards such as 3G (3rd Generation), 3GPP (3rd Generation Partnership Project), Long Term Evolution (LTE), and LTE Advanced (LTE-A).
  • the memory 160 may store various modules, software, and data for driving the electronic device 100.
  • the memory 160 may store an acoustic model (AM) and a language model (LM) that may be used to recognize a user's voice.
  • In addition, the memory 160 may store a learned alternative operation determination model for determining the alternative operation.
  • the memory 160 may store a model for Natural Language Generation (NLG).
  • the memory 160 may store programs and data for configuring various screens to be displayed on the display 120.
  • the memory 160 may store a program, an application, and data for performing a specific service.
  • the memory 160 may previously store various response messages corresponding to the voice of the user as voice or text data.
  • The electronic device 100 may also read at least one of voice and text data corresponding to a received user voice (in particular, a user control command) from the memory 160 and output it to the display 120 or the voice output unit 140. In this way, the electronic device 100 may provide the user with a simple or frequently used message without passing it through the natural language generation model.
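A response cache of this kind might look like the following; all keys, values, and names here are illustrative assumptions, since the disclosure does not specify the storage format.

```python
# Pre-stored responses for simple or frequent commands; serving these
# directly avoids a pass through the natural language generation model.
CANNED_RESPONSES = {
    "volume up": "audio:volume_up.wav",
    "what time is it": "text:showing the current time",
}

def respond(command, generate_nl):
    cached = CANNED_RESPONSES.get(command)
    # Fall back to the natural language generation model on a cache miss.
    return cached if cached is not None else generate_nl(command)
```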
  • the memory 160 is a storage medium that stores various programs necessary for operating the electronic device 100.
  • the memory 160 may be implemented in the form of a flash memory, a hard disk drive (HDD), a solid state drive (SSD), or the like.
  • the memory 160 may include a ROM for storing a program for performing an operation of the electronic device 100 and a RAM for temporarily storing data for performing an operation of the electronic device 100.
  • In particular, the memory 160 may store a plurality of software modules for performing an operation corresponding to a user voice. Specifically, as shown in FIG. 3, the memory 160 may include a text acquisition module 310, a text analysis module 320, an operation determination module 330, an operation performance determination module 340, an operation performing module 350, an alternative operation determination module 360, and an alternative operation guide module 370.
  • the text acquiring module 310 obtains text data from a voice signal including a user voice.
  • the text analysis module 320 analyzes the text data to determine a target component and a parameter component of the user's voice.
  • The operation determination module 330 determines an operation corresponding to the user voice based on the target component and the parameter component. In particular, the operation determination module 330 may determine the type of the operation corresponding to the user voice using the target component, and determine the content of the operation using the parameter component.
  • The operation performance determination module 340 determines whether the determined operation can be performed. In detail, the operation performance determination module 340 may determine whether the operation can be performed based on the content of the operation determined using the parameter component. For example, the operation performance determination module 340 may determine that the operation cannot be performed when the content of the operation determined using the parameter component is inoperable, or when some of the content determined by the parameter component is missing.
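The two failure conditions named above (inoperable content, or missing content) can be sketched like this. The parameter names and the five-attachment limit are illustrative assumptions; the limit echoes the example given later in this disclosure.

```python
def check_feasible(operation, required_keys):
    # Fail when a required parameter content is missing ...
    for key in required_keys:
        if key not in operation:
            return False, "missing parameter: " + key
    # ... or when a content value makes the operation inoperable,
    # e.g. more photos than a single message can carry.
    if operation.get("photo_count", 0) > operation.get("max_attach", 5):
        return False, "too many photos for one message"
    return True, "ok"
```

The returned reason string corresponds to the "cause of the error" used later to select among pre-stored alternative operations.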
  • the operation performing module 350 performs the determined operation.
  • The alternative operation determination module 360 may determine an alternative operation that can replace the determined operation using the target component and the parameter component. In this case, the alternative operation determination module 360 may determine the alternative operation using pre-stored alternative operations matched with the determined operation, or using a previously learned alternative operation determination model.
  • The alternative operation guide module 370 provides a message guiding the determined alternative operation.
  • the message for guiding the alternative operation may be in an audio form or a visual form, and may be processed and provided in a natural language form.
  • the processor 130 may control the above-described components of the electronic device 100.
  • In particular, the processor 130 may use the plurality of software modules stored in the memory 160 to determine an alternative operation that can replace the operation corresponding to the user voice, and to provide a message guiding the determined alternative operation.
  • The processor 130 may be implemented as a single CPU that performs the voice recognition operation, language understanding operation, conversation management operation, alternative operation search operation, filtering operation, response generation operation, and so on, or may be implemented as dedicated processors each performing at least one function of the software modules.
  • The processor 130 may perform speech recognition based on a traditional hidden Markov model (HMM), or may perform deep-learning-based speech recognition using, for example, a deep neural network (DNN).
  • In addition, the processor 130 may use big data and user-specific history data for speech recognition and alternative operation determination. In this way, the processor 130 may personalize the speech recognition model and the alternative operation determination model while using a speech recognition model learned from big data and an alternative operation determination model for determining alternative operations.
  • The processor 130 may control the text acquisition module 310 to obtain text data from the user voice.
  • The processor 130 may control the text analysis module 320 to determine the target component and the parameter component by analyzing the obtained text data. For example, the processor 130 may control the text analysis module 320 to analyze the text data "find the picture taken yesterday in the gallery and send it as a message to a friend" and determine a target component and parameter components.
  • The processor 130 may control the operation determination module 330 to determine an operation corresponding to the user voice based on the target component and the parameter component.
  • Specifically, the processor 130 may control the operation determination module 330 to determine, based on the target component, that the type of the operation is "send photo", and, based on the parameter components, that the content of the operation is "find the picture taken yesterday in the gallery application and send it as a message".
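One plausible breakdown of the example utterance into a target component and parameter components; the component names and values are hypothetical, chosen only to match the "send photo" example above.

```python
utterance = "find the picture taken yesterday in the gallery and send it as a message to a friend"

target_component = "send_photo"      # operation type, from the user's intent
parameter_components = {             # operation content
    "app": "gallery",
    "time": "yesterday",
    "object": "picture",
    "channel": "message",
    "recipient": "friend",
}

# The determined operation combines type (target) and content (parameters).
operation = {"type": target_component, **parameter_components}
```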
  • The processor 130 may control the operation performance determination module 340 to determine whether the determined operation can be performed. When the determined operation can be performed, the processor 130 may control the operation performing module 350 to perform the determined operation or to transmit a control signal corresponding to the determined operation to an external device. For example, if it is possible to "find the picture taken yesterday in the gallery application and send it as a message", the processor 130 may control the operation performing module 350 to retrieve the picture taken yesterday from the gallery application, attach it to a message, and transmit the message to the corresponding external device.
  • When the determined operation cannot be performed, the processor 130 may control the alternative operation determination module 360 to determine an alternative operation that can replace the determined operation based on the target component and the parameter component. For example, if the number of pictures that can be attached to a message is five, but ten pictures taken yesterday are found in the gallery application, the processor 130 may control the operation performance determination module 340 to determine that the determined operation cannot be performed.
  • In this case, the processor 130 may determine that transmitting ten pictures as a message is impossible, and control the alternative operation determination module 360 to determine whether there is an alternative operation corresponding to the user voice.
  • Here, the alternative operation may be an operation of the same type as the operation corresponding to the user voice but with different content, or an operation of a different type with different content.
  • As an example, the processor 130 may control the alternative operation determination module 360 to determine, as the alternative operation, an operation of the same type (picture transmission) but with different content: transmitting the picture using a chat application instead of a message.
  • That is, the processor 130 may control the alternative operation determination module 360 to determine an alternative operation whose type is "send photo" and whose content is "find the picture taken yesterday in the gallery application and send it with the chat application".
  • As another example, the processor 130 may control the alternative operation determination module 360 to determine, as the alternative operation, an operation of a different type, such as transmitting a captured screen instead of the picture itself. That is, the processor 130 may control the alternative operation determination module 360 to determine an alternative operation whose type is "send captured screen" and whose content is "find the picture taken yesterday in the gallery application, capture the screen, and send it as a message".
  • the plurality of alternative operations corresponding to the specific operation may be stored in advance.
  • For example, the memory 160 may pre-store "captured screen transmission", "message transmission", and the like, matched with "picture transmission" as operations that can replace it.
  • In addition, the processor 130 may control the alternative operation determination module 360 to determine, as the alternative operation for the operation corresponding to the user voice, one of the at least one pre-stored alternative operation based on the cause of the error. For example, if the operation corresponding to the user voice cannot be performed because of the number of pictures that can be transmitted, the processor 130 may control the alternative operation determination module 360 to determine "send captured screen" as the alternative operation.
  • As another example, the processor 130 may control the alternative operation determination module 360 to determine "message transmission" as the alternative operation. In this case, the error cause and the alternative operation may also be matched with each other and stored.
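The pre-stored matching of error causes to alternative operations described above can be sketched as a lookup table; the keys and values are illustrative assumptions.

```python
# (operation, error cause) -> pre-stored alternative operation
ALTERNATIVES = {
    ("send_photo", "too_many_attachments"): "send_captured_screen",
    ("send_photo", "message_app_unavailable"): "send_via_chat_app",
}

def determine_alternative(operation, error_cause):
    # Returns None when no pre-stored alternative matches; a learned
    # alternative operation determination model could be consulted then.
    return ALTERNATIVES.get((operation, error_cause))
```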
  • Alternatively, the processor 130 may control the alternative operation determination module 360 to determine the alternative operation that replaces the operation corresponding to the user voice using a previously learned alternative operation determination model. That is, the processor 130 may control the alternative operation determination module 360 to input the determined operation into an alternative operation determination model previously learned from the user or others, and determine the alternative operation corresponding to the determined operation.
  • the alternative operation determination model will be described in detail with reference to FIGS. 9 to 10B.
  • the processor 130 may control the replacement operation guide module 370 to provide a message for guiding the replacement operation.
  • the message for guiding the alternative operation may include a message for guiding at least one of a cause for failing to perform an operation corresponding to the user voice and an alternative operation.
  • the processor 130 may control the substitute operation guide module 370 to display a message for guiding the substitute operation, and output the message in an audio form.
  • the processor 130 may control the substitute operation guide module 370 to process and provide a message for guiding the substitute operation in a natural language form.
  • for example, as shown in the drawing, the processor 130 may control the alternative operation guide module 370 so that the display 120 displays a natural-language message such as "The message cannot be sent; shall I send it via xxx chat instead?".
  • as another example, the processor 130 may control the alternative operation guide module 370 so that the display 120 displays a natural-language message such as "All of the pictures cannot be sent; shall I combine the 10 pictures into one and send it as a message?".
  • the processor 130 may control the alternative operation guide module 370 to provide a pre-stored natural-language message; however, this is only an example, and the natural-language message may instead be generated and provided using a language model for natural language processing.
  • the processor 130 may control the text acquisition module 310 to obtain text data from the user's voice.
  • the processor 130 may control the text analysis module 320 to determine the target component and the parameter component by analyzing the obtained text data. For example, the processor 130 may control the text analysis module 320 to analyze the text data “schedule tomorrow” to determine a target component and a parameter component as follows.
  • the processor 130 may control the operation determination module 330 to determine an operation corresponding to the user's voice based on the target component and the parameter component. Specifically, the processor 130 may control the operation determination module 330 to determine, based on the target component, that the type of operation is "scheduling", and to determine, based on the parameter component, that the content of the operation is "registering a meeting for tomorrow in the schedule application".
  • the processor 130 may determine whether the determined operation can be performed by controlling the operation performance determination module 340. When the determined operation cannot be performed, the processor 130 may control the alternative operation determination module 360 to determine an alternative operation that can replace the determined operation based on the target component and the parameter component. For example, since there is no parameter component indicating whom the meeting is with, the processor 130 may control the operation performance determination module 340 to determine that the determined operation cannot be performed.
  • in this case, the processor 130 may control the alternative operation determination module 360 to determine whether there is an alternative operation corresponding to the user's voice. For example, since there is no information about whom the meeting is with, the processor 130 may control the alternative operation determination module 360 to determine an alternative operation of a different type, such as "leave a note" rather than "scheduling". That is, the processor 130 may control the alternative operation determination module 360 to determine an alternative operation whose type is "leave a note" and whose content is "write a memo about tomorrow's meeting schedule".
  • the processor 130 may control the alternative operation guide module 370 to provide a message for guiding the alternative operation. For example, for an alternative operation whose type is "leave a note" and whose content is "write a memo about tomorrow's meeting schedule", the processor 130 may control the alternative operation guide module 370 to display a natural-language message on the display 120, as shown in FIG. 5.
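  • the "schedule tomorrow" walk-through above can be sketched end to end: split the text into a target component and parameter components, then fall back to an operation of a different type when a required parameter (whom the meeting is with) is missing. The keyword table and rules below are invented stand-ins for the text analysis module 320 and the alternative operation determination module 360, not the actual implementation.

```python
# Hypothetical keyword table mapping target words to operation types.
TARGET_WORDS = {"schedule": "scheduling", "send": "photo transfer"}

def analyze(text):
    """Split text into a target component and parameter components."""
    words = text.lower().split()
    target = next((TARGET_WORDS[w] for w in words if w in TARGET_WORDS), None)
    params = [w for w in words if w not in TARGET_WORDS]
    return target, params

def determine_operation(text):
    target, params = analyze(text)
    if target == "scheduling" and "with" not in params:
        # The required "with whom" parameter is absent, so an operation of a
        # different type is chosen instead of scheduling.
        return {"type": "leave a note",
                "content": "write a memo: meeting " + " ".join(params)}
    return {"type": target, "content": " ".join(params)}
```

  • with this sketch, `determine_operation("Schedule tomorrow")` falls back to the `"leave a note"` type, while an utterance that names an attendee keeps the `"scheduling"` type.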
  • FIG. 6 is a flowchart illustrating a control method of the electronic device 100 according to an embodiment of the present disclosure.
  • the electronic device 100 receives a user voice in operation S610.
  • the electronic device 100 obtains text data from the user's voice.
  • the electronic device 100 determines a target component and a parameter component from the obtained text data.
  • the electronic device 100 determines an operation corresponding to the user's voice based on the target component and the parameter component (S640). In this case, the electronic device 100 may determine the type of the operation corresponding to the user's voice using the target component, and may determine the content of the operation corresponding to the user's voice using the parameter component.
  • the electronic device 100 may determine whether the determined operation may be performed.
  • the electronic device 100 performs the determined operation (S660).
  • the electronic device 100 determines an alternative operation for replacing the determined operation (S670).
  • the electronic device 100 may determine, as the alternative operation, one of a plurality of pre-stored alternative operations matched to the determined operation, or may determine the alternative operation by inputting the determined operation into the alternative operation determination model.
  • the electronic device 100 provides a message for guiding the replacement operation.
  • the electronic device 100 may process and provide a message for guiding an alternative operation in a natural language form.
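  • the S610 to S680 branch structure above can be condensed into a small control skeleton. The callables passed in are placeholders for the recognition and decision modules described in the text; only the branching mirrors the flowchart of FIG. 6.

```python
def handle_voice(voice, to_text, analyze, decide, can_perform, perform,
                 decide_alternative, guide):
    """Skeleton of the FIG. 6 flow; all step logic is injected as callables."""
    text = to_text(voice)            # S620: obtain text data
    target, params = analyze(text)   # S630: target and parameter components
    op = decide(target, params)      # S640: determine the operation
    if can_perform(op):              # S650: can the operation be performed?
        return perform(op)           # S660: perform the determined operation
    alt = decide_alternative(op)     # S670: determine an alternative operation
    return guide(alt)                # S680: provide a guide message

# Demo with trivial stand-ins for each module.
result = handle_voice(
    "raw voice",
    to_text=lambda v: "schedule tomorrow",
    analyze=lambda t: ("scheduling", ["tomorrow"]),
    decide=lambda tgt, p: {"type": tgt, "params": p},
    can_perform=lambda op: False,  # e.g. the "with whom" parameter is missing
    perform=lambda op: "performed",
    decide_alternative=lambda op: {"type": "leave a note"},
    guide=lambda alt: "Shall I " + alt["type"] + " instead?",
)
```

  • when `can_perform` reports the operation as performable, the skeleton returns the result of `perform` instead and steps S670 to S680 are skipped.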
  • the intelligent secretary system 1000 may include a user terminal 200 and a server 100. Meanwhile, the electronic device 100 described in the above-described embodiment may be implemented as a server in FIG. 7.
  • the user terminal 200 may obtain the user's voice spoken by the user and transmit the user's voice to the external server 100.
  • the server 100 may determine an operation or an alternative operation corresponding to the received user voice, and transmit a control signal or a message for guiding the alternative operation to the user terminal 200.
  • the user terminal 200 and the server 100 may interwork to provide an intelligent secretary service.
  • the user terminal 200 may be implemented simply as an input/output device that receives the user's voice and provides messages, with the server 100 processing most of the intelligent secretary service.
  • for example, when the user terminal 200 is implemented as a small wearable device with limited resources, such as the smart watch shown in FIG. 7, processes such as determining an alternative operation and generating natural language may be performed by the resource-rich server 100.
  • FIG. 8 is a sequence diagram illustrating a control method of an intelligent secretary system according to an embodiment of the present disclosure.
  • the user terminal 200 obtains a user voice (S810).
  • the user terminal 200 may obtain a user voice from a microphone provided in the user terminal 200 or connected to the user terminal 200.
  • the user terminal 200 transmits the user's voice to the external server 100 (S820).
  • the user terminal 200 may transmit a voice signal corresponding to the user voice to the external server 100.
  • the server 100 obtains text data from the received user voice.
  • the server 100 analyzes the text data (S840) and determines an operation corresponding to the user's voice (S850). Specifically, the server 100 may determine the target component and the parameter component from the text data, determine the type of the operation for the user voice from the target component, and determine the content of the operation for the user voice from the parameter component.
  • the server 100 determines an alternative operation that can replace the operation corresponding to the user's voice (S860). In this case, the server 100 may determine one of the pre-stored alternative operations as the alternative operation, or may determine the alternative operation using the trained alternative operation determination model.
  • the server 100 generates a message for guiding the replacement operation (S870).
  • the server 100 may generate a message in a natural language form.
  • the server 100 transmits a message to the user terminal 200 (S880), and the user terminal 200 outputs the received message (S890).
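  • the request/response shape of this sequence can be rendered as a toy in-process exchange. The `Server` and `Terminal` classes, their methods, and the one-utterance recognizer are assumptions made purely to illustrate steps S820 and S880, not the patent's implementation.

```python
class Server:
    """Stands in for server 100: turns a voice signal into either a
    control signal or a natural-language guide message."""
    def handle(self, voice_signal):
        op = self._determine_operation(voice_signal)  # S830-S850
        if op is None:
            alt = "leave a note"                      # S860: alternative operation
            return {"kind": "message",                # S870: guide message
                    "text": "I can't do that; shall I " + alt + " instead?"}
        return {"kind": "control", "operation": op}

    def _determine_operation(self, voice_signal):
        # Placeholder recognizer: exactly one utterance is "understood".
        return "photo transfer" if voice_signal == "send photos" else None

class Terminal:
    """Stands in for user terminal 200: forwards voice, outputs the reply."""
    def __init__(self, server):
        self.server = server

    def utter(self, voice_signal):
        reply = self.server.handle(voice_signal)      # S820 send, S880 receive
        return reply["text"] if reply["kind"] == "message" else reply["operation"]
```

  • in the real system the `handle` call would cross the network; keeping it as a method call here shows only the division of labor between the thin terminal and the resource-rich server.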
  • FIG. 9 is a block diagram of a processor 130 according to some embodiments of the present disclosure.
  • the processor 130 may include a data learner 131 and an alternative operation determiner 132.
  • the data learner 131 may learn a criterion for determining an alternative operation.
  • the processor 130 may analyze the input motion according to the learned criterion to determine an alternative motion that may replace the motion corresponding to the user's voice.
  • the data learner 131 may determine which data (or parameter component) to use to determine the replacement operation.
  • the data learner 131 may acquire the data to be used for learning, and apply the acquired data to an alternative operation determination model to be described later to learn the criteria for the alternative operation.
  • the alternative operation determiner 132 may determine, from predetermined data, an alternative operation that can replace the operation corresponding to the user's voice, using the previously trained alternative operation determination model.
  • the alternative operation determiner 132 may obtain predetermined data (e.g., at least one of the target component and the parameter component of the determined operation) according to a criterion predetermined by learning, and may use the alternative operation determination model with the obtained data as an input value.
  • the substitute operation determination unit 132 may apply the inputted data to the substitute operation determination model to obtain a result value for the substitute operation.
  • the substitute operation determination unit 132 may update the substitute operation determination model based on user feedback on the input value and the output value.
  • At least one of the data learner 131 and the substitute operation determiner 132 may be manufactured in the form of one or a plurality of hardware chips and mounted on the electronic device 100.
  • at least one of the data learning unit 131 and the alternative operation determining unit 132 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or may be a conventional general purpose processor (for example, , A CPU or an application processor) or a part of an IP for a specific function, and may be mounted on the aforementioned various electronic devices 100.
  • the data learner 131 and the replacement operation determiner 132 are both mounted on the electronic device 100, but they may be mounted on separate devices.
  • one of the data learner 131 and the substitute operation determiner 132 may be included in the electronic device 100, and the other may be included in the user terminal 200.
  • the data learning unit 131 and the replacement operation determination unit 132 are connected to each other by wire or wirelessly, and the information on the replacement operation determination model built by the data learning unit 131 is provided to the replacement operation determination unit 132.
  • the data input to the substitute operation determiner 132 may be provided to the data learner 131 as additional learning data.
  • At least one of the data learner 131 and the substitute operation determiner 132 may be implemented as a software module.
  • the software module may be stored in a non-transitory computer readable recording medium.
  • At least one software module may be provided by an operating system (OS) or by a predetermined application.
  • some of the at least one software module may be provided by the OS, and some of the at least one software module may be provided by a predetermined application.
  • the data learner 131 may include a data acquirer 131-1, a preprocessor 131-2, a training data selector 131-3, a model learner 131-4, and a model evaluator 131-5.
  • the data acquirer 131-1 may acquire data necessary for determining the replacement operation.
  • the data acquirer 131-1 may acquire data for determining an operation corresponding to the user's voice as the training data. For example, at least one of a signal corresponding to a user voice input through the input unit 110, a text data corresponding to the user voice, a target component determined from the text data, and a parameter component may be input.
  • the preprocessor 131-2 may preprocess the acquired data so that the obtained data may be used for learning to determine an alternative operation.
  • the preprocessor 131-2 may process the acquired data in a predetermined format so that the model learner 131-4 to be described later uses the acquired data for learning to determine an alternative operation.
  • the preprocessor 131-2 may extract a section that is a recognition target for the input user voice.
  • the preprocessor 131-2 may perform noise reduction, feature extraction, and the like, on the signal corresponding to the user's voice, and convert the signal into text data.
  • the preprocessor 131-2 may generate voice data to be suitable for speech recognition by analyzing a frequency component of the input user voice to reinforce some frequency components and suppress the remaining frequency components.
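  • as one generic example of reinforcing some frequency components, a classic speech front end applies a pre-emphasis filter (which boosts high frequencies) and then splits the signal into overlapping analysis frames for feature extraction. This is standard practice in speech processing, not necessarily the exact processing the preprocessor 131-2 performs.

```python
def pre_emphasize(signal, alpha=0.97):
    """Boost high-frequency components: y[n] = x[n] - alpha * x[n-1]."""
    return [signal[0]] + [signal[n] - alpha * signal[n - 1]
                          for n in range(1, len(signal))]

def frame(signal, size=4, step=2):
    """Split the signal into overlapping fixed-size analysis frames."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]
```

  • on a constant signal, pre-emphasis leaves almost nothing, which is the intended suppression of low-frequency content; a real front end would typically use a coefficient near 0.97 and frame lengths of 20 to 40 ms.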
  • the training data selector 131-3 may select data necessary for learning from the preprocessed data.
  • the selected data may be provided to the model learner 131-4.
  • the training data selector 131-3 may select data necessary for learning from preprocessed data according to a predetermined criterion for determining an alternative operation.
  • the training data selector 131-3 may select data according to a predetermined criterion by learning by the model learner 131-4 to be described later.
  • the training data selector 131-3 may select only the target component and the parameter component from the input text data.
  • the model learner 131-4 may learn a criterion on how to determine an alternative operation based on the training data. In addition, the model learner 131-4 may learn a criterion about what training data should be used to determine an alternative operation.
  • the model learner 131-4 may train the alternative motion determination model used for the alternative motion determination using the training data.
  • the alternative motion determination model may be a previously built model.
  • the alternative motion determination model may be a model built in advance by receiving basic training data.
  • the alternative motion determination model may be a model built in advance using big data.
  • the alternative motion determination model may be constructed in consideration of the application field of the recognition model, the purpose of learning, or the computer performance of the device.
  • the alternative motion determination model may be, for example, a model based on a neural network.
  • a model such as a deep neural network (DNN), a recurrent neural network (RNN), and a bidirectional recurrent deep neural network (BRDNN) may be used as an alternative operation determination model, but is not limited thereto.
  • when there are a plurality of pre-built alternative operation determination models, the model learner 131-4 may determine, as the model to be trained, the alternative operation determination model having a high correlation between the input training data and its basic training data.
  • the basic training data may be pre-classified for each type of data, and the alternative operation determination model may be pre-built for each type of data. For example, the basic training data may be pre-classified based on various criteria, such as the region where the training data was generated, the time at which the training data was generated, the size of the training data, the genre of the training data, the creator of the training data, and the types of objects in the training data.
  • the model learner 131-4 may train the alternative operation determination model using a learning algorithm including, for example, error back-propagation or gradient descent.
  • the model learner 131-4 may train the alternative motion determination model through supervised learning using the training data as an input value.
  • the model learner 131-4 may also train the alternative operation determination model through unsupervised learning, which discovers a criterion for determining the alternative operation by learning, without separate supervision, the types of data necessary for that determination.
  • the model learner 131-4 may train the alternative gesture determination model through reinforcement learning using feedback on whether the result of the alternative gesture determination according to the learning is correct.
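  • the gradient-descent training mentioned above can be illustrated with a toy supervised loop: a one-feature logistic model learns whether an operation with missing parameters should be replaced. The task, feature, and labels below are invented for illustration and are far simpler than the DNN/RNN-based alternative operation determination models named earlier.

```python
import math

def train(samples, epochs=200, lr=0.5):
    """Logistic regression fitted by stochastic gradient descent on (x, label) pairs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid prediction
            grad = p - y                              # gradient of log-loss w.r.t. the logit
            w -= lr * grad * x                        # gradient descent step
            b -= lr * grad
    return w, b

def needs_alternative(x, w, b):
    """Predict: should an operation with x missing parameters be replaced?"""
    return 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5

# Invented labels: 0 missing parameters -> performable, 1 or 2 -> replace.
w, b = train([(0, 0), (1, 1), (2, 1), (0, 0)])
```

  • supervised learning corresponds to the labeled pairs above; the reinforcement-learning variant described in the text would instead adjust the model from feedback on whether each determined alternative was correct.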
  • the model learner 131-4 may store the learned alternative motion determination model.
  • the model learner 131-4 may store the learned substitute motion determination model in the memory 160 of the electronic device 100.
  • the memory 160 in which the learned substitute operation determination model is stored may also store commands or data related to at least one other element of the electronic device 100.
  • the memory 160 may also store software and / or programs.
  • the program may include a kernel, middleware, an application programming interface (API) and / or an application program (or “application”), and the like.
  • the model evaluator 131-5 may input evaluation data into the alternative operation determination model and, if the determination result output for the evaluation data does not satisfy a predetermined criterion, cause the model learner 131-4 to learn again.
  • the evaluation data may be preset data for evaluating the alternative operation determination model.
  • the model evaluator 131-5 may evaluate that the predetermined criterion is not satisfied when, among the determination results of the trained alternative operation determination model for the evaluation data, the number or ratio of evaluation data with inaccurate results exceeds a preset threshold. For example, when the predetermined criterion is defined as a ratio of 2%, if the trained alternative operation determination model outputs incorrect determination results for more than 20 out of a total of 1000 evaluation data, the model evaluator 131-5 may evaluate the trained model as unsuitable.
  • meanwhile, when there are a plurality of trained alternative operation determination models, the model evaluator 131-5 may evaluate whether each trained model satisfies the predetermined criterion, and may determine a model satisfying the criterion as the final alternative operation determination model. In this case, when there are a plurality of models satisfying the criterion, the model evaluator 131-5 may determine, as the final alternative operation determination model, any one model or a preset number of models in descending order of evaluation score.
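  • the 2% criterion in the example above reduces to a simple error-ratio check over the evaluation results; the function name and boolean return below are assumptions made for illustration.

```python
def should_retrain(wrong_flags, threshold=0.02):
    """wrong_flags: one boolean per evaluation datum, True where the model's
    determination was incorrect. Returns True when the error ratio exceeds
    the threshold, i.e. the model fails the predetermined criterion."""
    if not wrong_flags:
        return False
    return sum(wrong_flags) / len(wrong_flags) > threshold
```

  • matching the example: 25 wrong out of 1000 (2.5%) exceeds the criterion and triggers re-learning, while exactly 20 out of 1000 (2.0%) does not.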
  • at least one of the data acquirer 131-1, the preprocessor 131-2, the training data selector 131-3, the model learner 131-4, and the model evaluator 131-5 may be manufactured in the form of at least one hardware chip and mounted on the electronic device.
  • at least one of the data acquirer 131-1, the preprocessor 131-2, the training data selector 131-3, the model learner 131-4, and the model evaluator 131-5 One may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or may be manufactured as an existing general purpose processor (eg, a CPU or an application processor) or part of an IP for a specific function. It may be mounted on the electronic device 100.
  • the data acquirer 131-1, the preprocessor 131-2, the training data selector 131-3, the model learner 131-4, and the model evaluator 131-5 may be mounted on one electronic device, or may each be mounted on separate electronic devices.
  • for example, some of the data acquirer 131-1, the preprocessor 131-2, the training data selector 131-3, the model learner 131-4, and the model evaluator 131-5 may be included in the electronic device 100, and the rest may be included in the server 200.
  • at least one of the data acquirer 131-1, the preprocessor 131-2, the training data selector 131-3, the model learner 131-4, and the model evaluator 131-5 may be implemented as a software module. When at least one of them is implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer-readable recording medium. In addition, at least one software module may be provided by an operating system (OS) or by a predetermined application.
  • Alternatively, some of the at least one software module may be provided by the OS, and some of the at least one software module may be provided by a predetermined application.
  • the alternative operation determiner 132 may include a data acquirer 132-1, a preprocessor 132-2, a data selector 132-3, a determination result provider 132-4, and a model updater 132-5.
  • the data acquirer 132-1 may acquire data necessary for determining the alternative operation, and the preprocessor 132-2 may preprocess the acquired data so that it can be used for determining the alternative operation.
  • the preprocessing unit 132-2 may process the acquired data in a predetermined format so that the determination result providing unit 132-4, which will be described later, may use the acquired data for determining the replacement operation.
  • the data selector 132-3 may select data necessary for determining the replacement operation from the preprocessed data.
  • the selected data may be provided to the determination result providing unit 132-4.
  • the data selector 132-3 may select some or all of the preprocessed data according to a predetermined criterion for determining the replacement operation.
  • in addition, the data selector 132-3 may select data according to a criterion predetermined by learning by the model learner 131-4.
  • the determination result provider 132-4 may apply the selected data to the alternative operation determination model to determine an alternative operation that can replace the operation corresponding to the user's voice.
  • the determination result provider 132-4 may apply the data selected by the data selector 132-3 to the alternative operation determination model as an input value.
  • in addition, the determination result may be determined by the alternative operation determination model.
  • for example, the determination result provider 132-4 may determine an operation that can substitute for the operation corresponding to the user's voice by inputting, into the alternative operation determination model, data from which the operation corresponding to the user's voice can be determined.
  • the model updater 132-5 may update the alternative operation determination model based on an evaluation of the determination result provided by the determination result provider 132-4. For example, the model updater 132-5 may provide the determination result from the determination result provider 132-4 to the model learner 131-4 so that the model learner 131-4 can update the alternative operation determination model.
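  • one way to picture the updater's role is a feedback buffer that turns rejected determinations into additional training samples for the model learner. The class and method names below are illustrative assumptions, not from the patent.

```python
class ModelUpdater:
    """Collects user feedback and forwards it to a model learner (131-4)."""
    def __init__(self, model_learner):
        self.model_learner = model_learner
        self.pending = []

    def record(self, input_data, determined_alternative, accepted):
        # Only rejected determinations become new training data in this sketch.
        if not accepted:
            self.pending.append((input_data, determined_alternative))

    def flush(self):
        # Hand the buffered feedback to the learner so the alternative
        # operation determination model can be updated.
        if self.pending:
            self.model_learner.retrain(self.pending)
            self.pending = []
```

  • the learner object is assumed to expose a `retrain` method; accepted determinations could equally be kept as positive samples, which is a design choice this sketch leaves out.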
  • at least one of the data acquirer 132-1, the preprocessor 132-2, the data selector 132-3, the determination result provider 132-4, and the model updater 132-5 may be manufactured in the form of at least one hardware chip and mounted on the electronic device.
  • at least one of the data obtaining unit 132-1, the preprocessor 132-2, the data selecting unit 132-3, the determination result providing unit 132-4, and the model updating unit 132-5 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or may be manufactured as an existing general purpose processor (eg, a CPU or an application processor) or part of an IP for a specific function. It may be mounted on the electronic device 100.
  • the data acquirer 132-1, the preprocessor 132-2, the data selector 132-3, the determination result provider 132-4, and the model updater 132-5 may be mounted on one electronic device, or may each be mounted on separate electronic devices.
  • for example, some of the data acquirer 132-1, the preprocessor 132-2, the data selector 132-3, the determination result provider 132-4, and the model updater 132-5 may be included in the electronic device 100, and the rest may be included in a server interworking with the electronic device 100.
  • at least one of the data acquirer 132-1, the preprocessor 132-2, the data selector 132-3, the determination result provider 132-4, and the model updater 132-5 may be implemented as a software module.
  • when at least one of the data acquirer 132-1, the preprocessor 132-2, the data selector 132-3, the determination result provider 132-4, and the model updater 132-5 is implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer-readable recording medium.
  • At least one software module may be provided by an operating system (OS) or by a predetermined application. Alternatively, some of the at least one software module may be provided by the OS, and some of the at least one software module may be provided by a predetermined application.
  • the methods described above may be embodied in the form of program instructions that may be executed by various computer means and may be recorded in a computer readable medium.
  • the computer readable medium may include program instructions, data files, data structures, etc. alone or in combination.
  • Program instructions recorded on the media may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well-known and available to those having skill in the computer software arts.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; and magneto-optical media such as floptical disks.
  • Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
  • the hardware device may be configured to operate as one or more software modules to perform the operations of the present invention, and vice versa.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to an artificial intelligence (AI) system using a machine learning algorithm, such as deep learning, and applications thereof. In particular, a control method of an electronic device of the present disclosure comprises the steps of: receiving a user's voice; obtaining text data from the user's voice; determining a target component and a parameter component from the obtained text data; determining, on the basis of the target component and the parameter component, an action corresponding to the user's voice; if it is determined that the determined action cannot be performed, determining an alternative action to replace the determined action on the basis of the target component and/or the parameter component; and providing a message for guiding the alternative action.
PCT/KR2018/000336 2017-02-21 2018-01-08 Electronic device, control method therefor, and non-transitory computer readable recording medium WO2018155810A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/485,061 US20200043476A1 (en) 2017-02-21 2018-01-08 Electronic device, control method therefor, and non-transitory computer readable recording medium

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2017-0023121 2017-02-21
KR20170023121 2017-02-21
KR10-2017-0157902 2017-11-24
KR1020170157902A KR102490916B1 (ko) Electronic device, control method thereof, and non-transitory computer-readable recording medium

Publications (1)

Publication Number Publication Date
WO2018155810A1 true WO2018155810A1 (fr) 2018-08-30

Family

ID=63253946

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/000336 WO2018155810A1 (fr) Electronic device, control method therefor, and non-transitory computer readable recording medium

Country Status (1)

Country Link
WO (1) WO2018155810A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021045447A1 (fr) 2019-09-02 2021-03-11 Samsung Electronics Co., Ltd. Apparatus and method for providing voice assistant service

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000338993A (ja) * 1999-05-26 2000-12-08 Denso Corp Speech recognition device and navigation system using the device
US20040172256A1 (en) * 2002-07-25 2004-09-02 Kunio Yokoi Voice control system
KR20140098525A (ko) * 2013-01-31 2014-08-08 삼성전자주식회사 Speech recognition apparatus and method for providing response information
WO2016063564A1 (fr) * 2014-10-24 2016-04-28 株式会社ソニー・コンピュータエンタテインメント Control device, control method, program, and information storage medium
KR101614746B1 (ko) * 2015-02-10 2016-05-02 미디어젠(주) Embedded speech recognition processing method and system applying a user-pattern-based error DB module


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021045447A1 (fr) 2019-09-02 2021-03-11 Samsung Electronics Co., Ltd. Apparatus and method for providing voice assistant service
EP3997695A4 (fr) * 2019-09-02 2022-08-10 Samsung Electronics Co., Ltd. Apparatus and method for providing voice assistant service
US11501755B2 (en) 2019-09-02 2022-11-15 Samsung Electronics Co., Ltd. Apparatus and method for providing voice assistant service

Similar Documents

Publication Publication Date Title
WO2020213762A1 (fr) Electronic device, operating method thereof, and system comprising a plurality of artificial intelligence devices
WO2020189850A1 (fr) Electronic device and method for controlling speech recognition by the electronic device
WO2019194451A1 (fr) Method and apparatus for analyzing voice conversation using artificial intelligence
WO2019182346A1 (fr) Electronic device for modulating a user's voice using an artificial intelligence model, and control method therefor
KR102490916B1 (ko) Electronic device, control method therefor, and non-transitory computer-readable recording medium
EP3915039A1 (fr) Système et procédé pour un réseau de mémoire attentive enrichi par contexte avec codage global et local pour la détection d'une rupture de dialogue
WO2020080635A1 (fr) Electronic device for performing speech recognition using microphones selected on the basis of operating state, and operating method therefor
WO2020130549A1 (fr) Electronic device and method for controlling the electronic device
WO2021071110A1 (fr) Electronic apparatus and method for controlling the electronic apparatus
US11450326B2 (en) Device for recognizing voice content, server connected thereto, and method for recognizing voice content
WO2020251074A1 (fr) Artificial intelligence robot for providing a speech recognition function, and operating method therefor
KR20190068021A (ko) User-adaptive dialogue apparatus based on monitoring of emotional and ethical states, and method therefor
WO2020180001A1 (fr) Electronic device and control method therefor
WO2021071271A1 (fr) Electronic apparatus and control method therefor
WO2018155810A1 (fr) Electronic device, control method therefor, and non-transitory computer-readable recording medium
WO2020080771A1 (fr) Electronic device providing modified utterance text, and operating method therefor
WO2019190243A1 (fr) System and method for generating information for interaction with a user
US20200090663A1 (en) Information processing apparatus and electronic device
WO2022177103A1 (fr) Electronic device supporting a service for an artificial intelligence (AI) agent conversing with a user
WO2018117608A1 (fr) Electronic device, method for determining the utterance intention of a user thereof, and non-transitory computer-readable recording medium
WO2020116766A1 (fr) Method for generating a user prediction model for identifying a user through training data, electronic device to which the model is applied, and method for applying the model
WO2018155807A1 (fr) Electronic device, document display method therefor, and non-transitory computer-readable recording medium
WO2022108190A1 (fr) Electronic device and control method therefor
WO2022114482A1 (fr) Electronic device and control method therefor
WO2022097970A1 (fr) Electronic device and control method therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18757472

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18757472

Country of ref document: EP

Kind code of ref document: A1