KR101369810B1 - Empirical Context Aware Computing Method For Robot - Google Patents

Empirical Context Aware Computing Method For Robot

Info

Publication number
KR101369810B1
Authority
KR
South Korea
Prior art keywords
situation
empirical
model
interaction
empirical model
Prior art date
Application number
KR1020100032792A
Other languages
Korean (ko)
Other versions
KR20110113414A (en)
Inventor
이초강
Original Assignee
이초강
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 이초강 filed Critical 이초강
Priority to KR1020100032792A priority Critical patent/KR101369810B1/en
Publication of KR20110113414A publication Critical patent/KR20110113414A/en
Application granted granted Critical
Publication of KR101369810B1 publication Critical patent/KR101369810B1/en

Abstract

The present invention relates to an empirical situation recognition method for a robot. It considers the empirical facts of a situation, that is, the factors that cause and change the robot's interaction with a subject such as a human or another robot, together with the interactions occurring in that situation. Empirical and probabilistic relations over empirical models of interactions on situations ("situation : interaction") are used to recognize a given situation and to learn and adapt, through experience, to unknown environments and information. The purpose is to provide a technical method for implementing such an intelligent system on a robot.
By applying the present invention, the technical difficulties of performing improved artificial-intelligence processing in robots, applications, and various systems can be minimized, and the ability to adapt to and learn about unknown environments and knowledge information can be realized, meeting new needs and environments. The method is also easy to use and highly scalable, reducing development effort and making the technical problems of complex situational awareness easy to solve.

Description

A computer-readable recording medium recording a program for executing an empirical situational awareness method for a robot {Empirical Context Aware Computing Method For Robot}

The present invention relates to an empirical situation recognition method for a robot. More specifically, when the robot recognizes a given situation and performs an interaction with a subject such as a human or another robot, it takes the empirical facts of situations and interactions into account. Based on their empirical and probabilistic relations, it implements on a robot an intelligent system analogous to human intelligent interaction: the ability to recognize the situation and to adapt to and learn about an unknown environment or information through experience. The present invention provides a computer-readable recording medium on which a program for executing such a method is recorded.

A robot's ideal intelligence is the ability to judge and operate on its own. A smarter robot adapts itself to an unfamiliar environment, recognizes the given situation, and interacts, and even feels, much as a human does.

For such robots to reach the human level, many technical difficulties must be overcome. For example, HRI technologies such as voice recognition, face recognition, motion recognition, and emotion recognition are fused with wireless network technology and ubiquitous network systems to push past the robot's intelligent limitations.

Until now, however, most results amount to the execution of simple commands or operation patterns, and the development process is often complicated and one-off.

These conventional techniques are likely to perform interactions that do not match the actual situation, because they select an interaction from pre-stored data (perception factors) for each object (piece of information or data) the robot recognizes. That is, since interactions are determined in correspondence with each recognized object individually, the correlations among the many factors that jointly determine one interaction are lost, which can result in interactions inappropriate to reality. As many cases as possible must therefore be set up in advance, applications remain limited, and new programs must be developed for new demands and environments. There are self-learning technologies intended to compensate for these shortcomings, but they cannot solve the fundamental problem because they operate in the same per-object way.

The present invention holds that this problem stems from insufficient consideration of the empirical facts relating situations to interactions, and aims to determine and predict the interaction appropriate to a situation based on an understanding of empirical knowledge information. To do so, we must understand the principle of human intelligent interaction and present a model of situational awareness that can be implemented on that basis. Human intelligent interaction is possible not simply through animal sense and perception, but because it rests on an understanding of empirical knowledge information accumulated through long experience and learning.

In other words, more than 99% of human knowledge and information activity arises from experience and learning. It is therefore not unreasonable, in the interaction of subjects such as humans and robots through information, to treat all knowledge information as acquired by experience and learning.

This leads to the proposition of interactive intelligence rationality: "the intelligent range of a subject's interaction depends only on the empirical knowledge information system the subject owns" (see FIG. 1A). For example, suppose A and B are on a phone call. A asks B what he is doing, and B answers that he is in a meeting. A acknowledges that B is in a meeting and, in a quiet voice, hangs up to talk later. Before asking, A did not know what B was doing; intelligence beyond what experience supplies, such as knowing without asking, is not required. That A asks what B is doing and responds appropriately to the answer shows that experience supplied the necessary intelligence: the experience of asking questions and listening to answers provided it, and the way A copes with the situation is likewise based on past experience. This real principle of human intelligent interaction (FIG. 1B) can be expressed as a pseudo-model as shown in FIG. 1C.

The present invention proposes an empirical situational awareness model and an empiricalization model for applying this real human intelligent interaction principle (FIG. 1B) to an artificial intelligence such as a robot. That is, the principle of FIG. 1B is cast as the pseudo-model of FIG. 1C, and on that basis an empirical model (FIG. 1D) and an empirical situational awareness model (FIG. 1E) are proposed.

Basically, the individual factors that generate and change interactions are not themselves independent situations; a situation is a collection of factors that stands in a one-to-one relationship to an interaction ("situation : interaction"). Accordingly, the empirical knowledge information system manages the empirical models of such situations and interactions and provides an interface for controlling their input and output, and it is continuously changed and updated through empiricalization. This suggests that empirical situation recognition can be achieved on the basis of an understanding of empirical knowledge information.
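For concreteness, the empirical model managed by this system can be pictured as a simple aggregate pairing a situation with an interaction. The following is a minimal C++ sketch; the type and field names are illustrative assumptions, not structures defined in the patent.

```cpp
#include <string>
#include <vector>

// Sketch only: all names are assumed for illustration.
// A situation factor is a normalized input datum carrying the scalar
// displacement used later for magnitude comparisons (situation law 3).
struct SituationFactor {
    std::string id;              // unique ID assigned during normalization
    double scalarDisplacement;   // basis for master-factor selection
};

// A situation is a collection of factors standing in a one-to-one
// relationship to an interaction ("situation : interaction").
struct Situation {
    std::vector<SituationFactor> factors;
};

struct Interaction {
    std::string act;             // run-time unit: speech, movement, command
};

// One empirical model held by the empirical knowledge information system.
struct EmpiricalModel {
    Situation   situation;
    Interaction interaction;
    double      experienceValue; // updated as the model is re-experienced
};
```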

As described above, an object of the present invention is to overcome the intelligent limitations of a robot by providing a technical method for implementing the models of empirical situational recognition and empiricalization as an empirical intelligent system.

To solve the problem of the intelligent limit on a robot's intelligent interaction, empirical models of situation factors, situations, and interactions ("situation : interaction") must be created from the factors that generate and change the interaction between the robot and any subject (a human, another robot, and so on); situation laws identifying the empirical and probabilistic relationships among these models must be presented; an empirical context-aware algorithm that can determine the appropriate interaction for a given situation must be presented; a concrete interface (the PAL interface), implementable with a programming language such as C, that manages the empirical models and controls their input and output must be provided; and an implementation model of empiricalization, by which the empirical knowledge information system is modified as empirical knowledge information changes, must be presented.

The present invention is largely composed of four steps: an empirical model generation step that generates the empirical model of the situation from input raw data; a step of performing a first algorithm; a step of performing a second algorithm; and an empiricalization step in which, during the first and second algorithm steps, the empirical model of the interaction with an unknown or newly added situation is empiricalized.

The empirical model generation step may include: an eleventh step of performing normalization to generate empirical models from the input raw data; a twelfth step of generating the empirical models of the normalized data through the hierarchical empirical knowledge information system on the basis of situation laws 1, 2, and 3; a thirteenth step of acquiring the master situation factor through the magnitude relation, under situation law 3, of the scalar displacements in the empirical models of the situation factors newly generated in the twelfth step; a fourteenth step of performing normalization, based on situation laws 1 and 2, of the empirical models of the situation factors of the twelfth step in order to generate the empirical model of the situation; a fifteenth step of generating the primitive model of the situation, an aggregate consisting of the scalar displacements of the empirical models of the situation factors selected in the fourteenth step; and a sixteenth step of generating the empirical model of the situation from that primitive model, the collection of the empirical models of the situation factors of the fifteenth step.

Here, the raw data is divided into original data and meta information before transmission. In the twelfth step, hierarchization is performed on the data classified by type and by horizontal and vertical relationships, the PAL interface is built, and the primitive, classification, and derived models are generated.
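The six generation steps might be orchestrated as below. This is a hedged C++ sketch: the structures are assumed, the coexistence filtering of laws 1 and 2 (step 14) is elided, and taking the factor with the largest scalar displacement is only one plausible reading of the "magnitude relation" of step 13.

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct Factor { std::string id; double scalarDisplacement; };
struct Situation {
    std::vector<Factor> factors;     // primitive model: the factor aggregate
    const Factor* master = nullptr;  // master situation factor, may be absent
};

// Steps 11-16 in one pass over already-normalized factors (step 11 itself,
// validation and ID assignment, is assumed to have happened upstream).
Situation buildSituation(std::vector<Factor> normalizedFactors) {
    Situation s;
    // Steps 14-15: keep the factors that may coexist (laws 1 and 2; the
    // check is elided here) and collect them as the primitive model.
    s.factors = std::move(normalizedFactors);
    // Step 13: master factor via the scalar-displacement magnitude relation
    // (law 3); it does not exist for an empty factor set.
    if (!s.factors.empty()) {
        s.master = &*std::max_element(
            s.factors.begin(), s.factors.end(),
            [](const Factor& a, const Factor& b) {
                return a.scalarDisplacement < b.scalarDisplacement;
            });
    }
    return s;  // step 16: the empirical model of the situation
}
```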

The first algorithm may include: a seventeenth step of receiving the empirical model of the situation to be recognized; an eighteenth step of checking whether, in the intersection of the item sets of the PAL describers for each situation factor of the empirical model of the seventeenth step, a model exists whose element count is 1 and which satisfies situation law 1, and of determining such a model to be the experienced model for the empirical model of the situation of the seventeenth step; a nineteenth step of, when situation law 1 is not satisfied in the eighteenth step and the intersection has more than one element, judging by situation law 5 that the candidate with the smallest number of elements (situation factors) is the experienced model for the seventeenth step; a twentieth step of obtaining the experienced model of the seventeenth step through probabilistic inference over the empirical models of the empirical knowledge information system when the condition of the nineteenth step is not satisfied; a twenty-first step of, when no experienced model is obtained in the twentieth step, selecting the empirical model of the situation of the seventeenth step itself as the experienced model and requesting its empiricalization; a twenty-second step of requesting empiricalization when no experienced model for the empirical model of the situation of the seventeenth step is obtained or an exception occurs anywhere in the algorithm; and a twenty-third step of requesting the empiricalization operation of the perceptual circuit so as to empiricalize the unknown empirical model.

In the eleventh step, the data is validated, a string-based information specification is attached, data redefinition such as assigning a unique ID for uniqueness is performed, and the data is classified for later hierarchization. In addition, in the nineteenth step, if multiple experienced models are selected, either the model belonging to the item set of the PAL describer for the master situation factor of the empirical model of the situation of the seventeenth step (when a master situation factor exists) is chosen, or situation law 3 is applied so that the magnitude relation of the scalar displacements of the situation factors takes priority. Among candidates, since a situation with fewer elements is contained in a situation with more elements, the candidate with the smaller number of elements is chosen, in accordance with situation law 5. An empirical situation recognition method for a robot so characterized is provided.
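The core of steps 18 and 19, intersecting the PAL describer item sets of all situation factors and then applying situation law 5 to break ties, might look as follows in C++. The representation of candidates as sets of situation IDs and the factorCount lookup are assumptions for illustration.

```cpp
#include <cstddef>
#include <set>
#include <string>
#include <vector>

using SituationId = std::string;

// Step 18 (sketch): intersect, across all situation factors, the sets of
// situation IDs listed in each factor's PAL describer items. A unique
// surviving ID is the experienced model (situation law 1).
std::set<SituationId> intersectCandidates(
        const std::vector<std::set<SituationId>>& perFactorCandidates) {
    if (perFactorCandidates.empty()) return {};
    std::set<SituationId> acc = perFactorCandidates.front();
    for (std::size_t i = 1; i < perFactorCandidates.size(); ++i) {
        std::set<SituationId> next;
        for (const SituationId& id : perFactorCandidates[i])
            if (acc.count(id)) next.insert(id);
        acc = std::move(next);
    }
    return acc;
}

// Step 19 (sketch): with several survivors, situation law 5 prefers the
// candidate with the fewest situation factors, since a situation with fewer
// elements is contained in one with more. 'factorCount' is an assumed lookup
// into the empirical knowledge information system.
SituationId pickByLaw5(const std::set<SituationId>& candidates,
                       std::size_t (*factorCount)(const SituationId&)) {
    SituationId best;
    bool found = false;
    std::size_t bestCount = 0;
    for (const SituationId& id : candidates) {
        std::size_t n = factorCount(id);
        if (!found || n < bestCount) { found = true; bestCount = n; best = id; }
    }
    return best;  // empty string if none: fall through to steps 20-21
}
```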

The second algorithm may include: a twenty-fourth step of receiving the empirical model of the situation to be recognized; a twenty-fifth step of obtaining the empirical model of the interaction with the situation of the twenty-fourth step by means of the PAL describers of the empirical knowledge information system and empirical probabilistic inference; a twenty-sixth step of selecting the highest-priority model among the empirical models of interaction obtained in the twenty-fifth step, in consideration of situation law 3 and the master situation factor; a twenty-seventh step of obtaining the experienced model of the interaction for the candidate models of the empirical model of the situation of the twenty-fourth step when no empirical model of the interaction is acquired in the twenty-fifth step; a twenty-eighth step of newly generating an empirical model of interaction with the empirical model of the situation of the twenty-fourth step, in consideration of situation laws 3 and 5 and the master situation factor; a twenty-ninth step of creating the run-time model ACT of the finally selected empirical model of interaction; and a thirtieth step of requesting that the newly created empirical model of the interaction for the situation of the twenty-fourth step be empiricalized as an empirical model of interaction with the situation ("situation : interaction"), and of regarding the situation of the twenty-fourth step as unknown and requesting its empiricalization when no empirical model of the interaction is obtained.
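Steps 25, 26, and 29 amount to picking the most probable interaction model for the recognized situation and emitting its run-time ACT. A hedged C++ sketch, with assumed types and with the candidate list standing in for what the PAL describer's probability distribution would supply:

```cpp
#include <string>
#include <vector>

struct InteractionModel {
    std::string act;     // e.g. an utterance or a movement command
    double probability;  // empirical frequency taken from the describer
};

struct Act { std::string payload; };

// Steps 25-26: select the highest-priority (here: most probable) model;
// step 29: build its run-time ACT. Returns false when no model is known,
// in which case the caller treats the situation as unknown and requests
// empiricalization (step 30).
bool chooseAct(const std::vector<InteractionModel>& candidates, Act& out) {
    const InteractionModel* best = nullptr;
    for (const InteractionModel& m : candidates)
        if (best == nullptr || m.probability > best->probability) best = &m;
    if (best == nullptr) return false;
    out.payload = best->act;  // handed to the dialogue system for rendering
    return true;
}
```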

In addition, the empiricalization of the empirical model of the interaction with an unknown or newly added situation during the first and second algorithm steps may include: a thirty-first step of receiving the empirical model of the interaction ("situation : interaction") for the unknown or newly added situation; a thirty-second step of hierarchizing the empirical models of the thirty-first step in the empirical knowledge information system according to situation laws 1, 2, and 3; a thirty-third step of adjusting the scalar displacements between the empirical models according to situation laws 1, 2, and 3 (preferably, the experience values of the empirical models, the subject's attributes, and the present situation may be taken into account); a thirty-fourth step of adding the empirical models of the thirty-third step to the PAL element groups of the hierarchy of the empirical knowledge information system; a thirty-fifth step of examining, under situation laws 1, 2, and 3, whether new relationships need to be defined among the empirical models of the thirty-fourth step; a thirty-sixth step of adding the PAL elements of the thirty-fourth step to the PAL describers; and a thirty-seventh step of preparing, among the empirical models of the thirty-fourth step, PAL describers for the models requiring new relationships according to situation laws 1, 2, and 3.

Preferably, in the twenty-eighth step, the priority of the action is determined by the scalar displacements of the situation factors of the situation of the twenty-fourth step according to situation law 3, the empirical model of the situation of the twenty-fourth step is modified by removing situation factors of low priority on the basis of situation law 5, and finally the empirical model of interactions within the allowable range is generated in consideration of the hierarchical relationships of the modified empirical model of the situation of the twenty-fourth step.

Preferably, in the thirty-second step, hierarchical empirical situation recognition is performed by applying the experience value to the empirical models stratified into the primitive, classification, and derived models. The experience value is accumulated through empirical situation recognition and is composed of the action frequency of the situation factors, the exposure frequency of the situation, the intimacy of the interaction with the situation, and the result of the interaction.
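The experience value named above has four stated components. The following C++ sketch records them in a struct; the fields follow the text, while the equal-weight scalar reduction is purely an assumed concretization for re-layering.

```cpp
// The four components are taken from the text; types and weights are assumed.
struct ExperienceValue {
    unsigned factorActionFrequency;   // how often the situation factors acted
    unsigned situationExposureCount;  // how often the situation was met
    double   interactionIntimacy;     // closeness of interaction to situation
    double   interactionOutcome;      // result (e.g. success score) of the act
};

// One plausible scalar score used when re-forming the hierarchy.
double score(const ExperienceValue& e) {
    return 0.25 * e.factorActionFrequency
         + 0.25 * e.situationExposureCount
         + 0.25 * e.interactionIntimacy
         + 0.25 * e.interactionOutcome;
}
```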

Preferably, in the thirty-fifth step, the interrelationships among the models refer to, for example, a situation relative to a situation factor, or an interaction relative to a situation. For instance, if a new kind of situation factor is created, a new PAL describer must be written for the situation.

Situation law 1, as defined and interpreted in the present invention, is the Equivalent Context Incompatible Law (ECIL): the same situation cannot exist for different entities at the same time. If two identical situations appear to exist, there must in fact be two sets of situation factors, and one entity cannot exist in different situations simultaneously, because an individual is unique.

Situation law 2, as defined and interpreted in the present invention, is the Condition Confrontation Incompatible Law (CCIL): situation factors that oppose each other cannot coexist in one situation. For example, different places or different times for the same interaction cannot exist in one situation; making a phone call in a quiet library is inappropriate, so the factor "library" and the interaction of calling cannot coexist in the same situation.

Situation law 3, as defined in the present invention, is the Condition Operation Precedence Law (COPL): recently occurring situation factors act on the interaction first (the latest-situation-factor priority rule), and situation factors related to the attentional topic act first (the situational-relation priority rule). For example, when the teacher enters a noisy classroom, the interactions of the classroom change: the students become quiet and concentrate on the teacher.

Situation law 4, as defined and interpreted in the present invention, is the Interaction Indiscrimination Law (IL): the interactions that can occur in an idle context are not limited. In other words, any interaction may occur in the idle context, so the probability of occurrence is the same for all interactions. For example, no one knows what will happen to a person sitting alone, not conversing with anyone; he might just as well receive a phone call. This matches the fact that the situation factors that can arise in an idle context are likewise indiscriminate. This law gives the subject a very flexible range of intelligence in initiating intelligent interactions.

Situation law 5, as defined and interpreted in the present invention, is the Interaction Visibility Scope Restriction Law, Interaction in Proportion to Number of Conditions (IPNC): the smaller the number of situation factors, the smaller the number of interactions that can occur for the situation, and hence the narrower the range of interaction choices.
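Three of the five laws lend themselves directly to predicates over a factor-set representation of a situation. A C++ sketch under that assumed representation (the conflict table for law 2 is illustrative):

```cpp
#include <cstddef>
#include <set>
#include <string>
#include <utility>

using Situation = std::set<std::string>;  // assumed: a set of factor IDs

// Law 1 (ECIL): one entity cannot occupy two different situations at once,
// so two situation records for the same entity must coincide.
bool satisfiesLaw1(const Situation& a, const Situation& b) {
    return a == b;
}

// Law 2 (CCIL): mutually opposed factors cannot coexist. 'conflicts' is an
// assumed table of incompatible pairs, e.g. {"library", "phone call"}.
bool satisfiesLaw2(const Situation& s,
                   const std::set<std::pair<std::string, std::string>>& conflicts) {
    for (const auto& [x, y] : conflicts)
        if (s.count(x) && s.count(y)) return false;
    return true;
}

// Law 5 (IPNC): the number of possible interactions grows with the number
// of situation factors, so the factor count ranks the interaction scope.
std::size_t interactionScopeRank(const Situation& s) { return s.size(); }
```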

The empirical situation recognition method according to the present invention improves the intelligence of the conventional robot's interaction and, applied not only to robots but also to application programs and various systems, resolves the technical difficulties of performing improved artificial-intelligence processing. In addition, the empirical intelligence system of the present invention realizes the ability to adapt to and learn about unknown environments and knowledge information by itself; it is easy to apply to new needs and environments, has excellent scalability, reduces development effort, and efficiently solves the technical problem of complex situational awareness.

FIG. 1A is a conceptual diagram explaining the intelligent rationality of human experience; FIG. 1B is a conceptual diagram explaining the actual intelligent interaction model of a human; FIG. 1C is a conceptual diagram explaining the ECA intelligent interaction model introduced in the present invention; FIG. 1D is a conceptual diagram explaining the ECA empirical model; and FIG. 1E is a conceptual diagram explaining the ECA empirical situational awareness model.
FIG. 2 is a diagram illustrating an external view of the entire system in which the ECA perception circuit embodying the empirical situation recognition method for a robot is implemented, showing the connectivity between components along the data input/output flows.
FIG. 3 is a flow chart showing the upper flow of the empirical situation recognition process included in the empirical situation recognition method for a robot according to the present invention.
FIG. 4 is a sub-flow diagram of the hierarchical empirical model generation and the generation of the empirical model of the situation in FIG. 3, included in the empirical situation recognition method for a robot.
FIG. 5 is a diagram illustrating an algorithm implementing the sub-flow of the process of obtaining the experienced model of the situation in FIG. 3, included in the empirical situation recognition method for a robot according to the present invention.
FIG. 6 is a diagram illustrating an algorithm implementing the sub-flow of the process of obtaining the empirical model of the interaction for the recognized situation in FIG. 3, included in the empirical situation recognition method for a robot according to the present invention.
FIG. 7 is a view showing the sub-flow of the empiricalization process of the ECA perceptual circuit in which the empirical situation recognition method for a robot according to the present invention is implemented.
FIGS. 8, 9, and 10 are diagrams illustrating execution states of the describers of the PAL interface that implements the empirical knowledge information system included in the empirical situation recognition method for a robot according to the present invention.

Hereinafter, the present invention will be described in detail with reference to the drawings.

The Empirical Context Aware Computing Method (ECA) for a robot according to the present invention focuses on the ecological ground that humans perform knowledge and information activity by experience, and models the principle of human interaction accordingly. It thereby implements intelligent interactions efficiently and easily solves the problem of complex situational awareness. Moreover, because the model mimics the metabolism of human knowledge and information activity, it responds flexibly to intelligent growth through self-experience, requiring almost no additional development even when the purpose of interaction or the environment changes.

In other words, robots, systems, and applications to which the ECA is applied can adapt themselves to unknown information and environments by experiencing them, like humans, and can grow their intelligent systems.

In this way, the ECA raises the intelligence of interactions beyond short-answer statements and simple queries that merely express the opinions, interests, or intentions of subjects (robots and the like), enabling conversation at a higher level of expression.

More specifically, the ECA abstracts the complex information-analysis system required for intelligent interaction into an empirical image called "interaction on situation", and models the situation factors, situations, and interactions so as to determine the interaction from the context information.

Therefore, the Empirical Context Aware Computing Method (ECA) for robots can be an improved solution for the robot's intelligent interaction and AI implementation.

The technical value of the empirical contextual awareness method (ECA) for the robot according to the present invention will be described.

Whereas the HRI technology of existing robots supports only a simple query style, the ECA for a robot according to the present invention is capable of advanced conversation.

Example 1) Dialogue that adapts its expression to the other party and recognizes intent through experience

- Simple type: Hello! (always the same greeting pattern)

- With the method of the invention applied: Hello! The last thing you asked for went well. Or: Hello, how is it going? Or: Hello, you seem better than last time.

Example 2) Dialogue in which opinions are reflected through analysis and prediction based on experience

- Simple type:

  User: Can you find a Korean restaurant near here?

  Robot: There is a Korean restaurant 200 m away at 3 o'clock.

- With the method of the invention applied:

  User: Would you like to have lunch if there is a Korean restaurant near here? (higher-level conversation)

  Robot: There is a Korean restaurant 200 m away at 3 o'clock, but I would not recommend it because the food is not good. Or: There is a Korean restaurant 200 m away at 3 o'clock, but there is a better one a little farther on. Should we go there?

  User: Should I sell shares of Company A when the exchange rate rises?

- Simple type: Don't sell. (more complex expressions require procedural analysis)

- With the method of the invention applied: The exchange rate will rise, but Company A's share price will rise again, so I think it is better to wait.

Example 3) Higher-level conversations adapted to the user's lifestyle, personality, and mood

- Simple type:

  User: Customer response to the new product is poor.

  Robot: There is a way to lower the price. Or a way to improve the advertising effectiveness.

- With the method of the invention applied:

  User: Customer response to this launch is poor. (higher-level conversation)

  Robot: I'm sorry to hear that. Is the impact of company A's competing product that large? What should we do? The price range seems lower than before. It would be better to keep the existing customer base from drifting away with the older products.

  User: It's cold today. (higher-level conversation)

  Robot: Be careful not to catch a cold. It's minus 10 degrees, but it will be a little warmer tomorrow.

  User: I have an appointment today with a friend I haven't seen in five years. (higher-level conversation)

  Robot: That's nice. What do you want to do first? Oh, Hyun-kook. You will be seeing him after a long time. Would you spend some money on dinner? Wouldn't it be nice to bring a present?

  User: I broke up with my girlfriend of one year. (higher-level conversation)

  Robot: That's sad. Cheer up. Why did you break up? You seemed so good together. Didn't you enjoy your time together a few days ago?

Meanwhile, the ECA for a robot according to the present invention adapts by itself while experiencing an unknown environment, and comes to know unknown information through learning.

For example, a home service robot can be changed into a customer-guide service robot for a public building simply by being told the new work procedure through conversation; likewise, a fire-protection robot can be converted into a flood-protection robot. For example:

- It comes to know how people behave in certain situations by experiencing how they talk and live: if you are tired you go to bed early, and if you say nothing for a while you are probably in a bad mood.

- It talks with a person it sees for the first time, creates a profile about them, and comes to know their characteristics.

- It learns unknown sentences, objects, and so on simply through conversation, as a human would.

By contrast, a simple intelligent robot requires further development whenever it must be taught an unknown environment or re-taught unknown information or knowledge.

Hereinafter, terms for describing an Empirical Context Aware Computing Method (ECA) for a robot according to the present invention will be defined.

1) Context

The collection of all real or virtual elements at the time an entity exists. The point at which the entity exists need not be the present; it may be the past or the future. A virtual element is an image in imagination or notion that does not actually exist. Real elements are all the objects humans feel and perceive, such as time, space, states of matter, figures, animals, objects, and events, and everything else visual, auditory, olfactory, tactile, or emotional.

2) Condition: An element that forms a situation (a situation factor).

3) Interaction

The act by which two or more entities effect changes on each other through some information (such as a change in the state of an entity, or an actor's ACT). An act that causes a change originating from a single entity alone is not an interaction but merely a change in the situation.

In this case, the entity need not be an actor performing an ACT. If an entity responds to changes in its environment, that too is an interaction, between the environment and the reacting entity.

4) Subject: The entity that performs the interaction.

5) Actor: A subject that performs an ACT.

6) Idle Context: A situation where no interaction occurs.

7) Nothing Context: A situation in which no context factor exists.

8) Master Condition: The situation factor that dominates the situation, or the situation factor that most strongly affects the interaction.

9) Experience

A subject's coming to know information or knowledge by undergoing certain facts, including the dictionary meaning of experience.

10) Learning

It is a way of understanding and acquiring information or knowledge, including dictionary meanings. In other words, learning is a subset of experience.

11) Self Empirical Learning (SEL): The process by which a subject empirically understands and acquires some information or knowledge.

12) ACT

An action that a subject can perform at one time; it is the basic unit of an actor's interaction. In the actual implementation, it is the run-time model of the interaction. Examples: speech, text, movement, command operation.

13) Perceptual Algebra League (PAL)

An interface created to implement the empirical knowledge information system in the ECA. It defines situation factors, situations, interactions, the subject's emotional and rational characteristics, and the hierarchical relationships among entities through algebraic modeling, for easy mathematical interpretation and access. It is implemented in a program development language (e.g., C/C++, Java).

A PAL element has a code value defined as a numerical value so that algebraic magnitude comparisons can be made.
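In C++ terms, such a PAL element might carry its code value as an ordinary integer so that the algebraic magnitude relation is just numeric comparison; a minimal sketch with assumed names:

```cpp
#include <string>

// Sketch: a PAL element whose numeric code value makes magnitude
// comparisons between elements well defined.
struct PalElement {
    int         code;   // code value; its ordering is the algebraic relation
    std::string label;  // human-readable name of the modeled entity
};

bool operator<(const PalElement& a, const PalElement& b) {
    return a.code < b.code;  // the algebraic magnitude case
}
```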

14) PAL Describer

An interface that represents the relationship between two PAL element groups, divided into a domain and a range, within the PAL. The relationship between domain and range is usually represented as a probability distribution, but in some cases it can be implemented as a functional relationship.
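A describer that stores the domain-to-range relation as an empirical probability distribution could be sketched in C++ as a nested map keyed by element code values; the layout and lookup method are assumptions, not the patent's data format.

```cpp
#include <map>

// Sketch: a PAL describer relating a domain element group to a range
// element group as an empirical probability distribution. A functional
// relation would instead map each domain code to a single range code.
struct PalDescriber {
    // domain element code -> (range element code -> empirical probability)
    std::map<int, std::map<int, double>> distribution;

    // Probability of range element r given domain element d; zero when the
    // pair has never been experienced.
    double probability(int d, int r) const {
        auto row = distribution.find(d);
        if (row == distribution.end()) return 0.0;
        auto cell = row->second.find(r);
        return cell == row->second.end() ? 0.0 : cell->second;
    }
};
```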

15) Empirical intelligence system and empirical knowledge information system

The empirical intelligence system is the intelligent system implemented through empirical situation recognition and empiricalization. The empirical knowledge information system is the interface that stores and manages the empirical models handled by the empirical intelligence system and controls their input/output.

The main characteristics of the Empirical Context Aware Computing Method (ECA) for the robot according to the present invention will be described.

The first feature is to implement the ESGI model by viewing not each individual factor affecting the interaction but the whole empirical model as the situation, abstracting it as the image "interaction on situation".

The second feature is empirical probabilistic inference for determining or predicting the relationships among situations and situation factors and the interaction appropriate to the situation.

The third feature is the flexibility of managing the subject's empirical knowledge information system through the PAL interface.

The fourth feature is to make the subject understand and acquire information or knowledge that it does not yet know.

The fifth feature is the self-growth of the subject's interactive intelligence through the ESGI model.

As described above, the empirical situation awareness method (ECA) for the robot according to the present invention is based on the five situation laws defined in the present invention.

Hereinafter, the technical implementation of the Empirical Context Aware Computing Method (ECA) for a robot according to the present invention will be described first.

1. Empirical situational awareness of perceptual circuits (ECA)

Step 1. The perceptual circuit generates empirical models of situation factors, situations, and interactions from raw HRI results such as voice recognition, face recognition, emotion recognition, motion recognition, and sensor data, or from various information sources such as GPS or the web. Using the PAL describers of the empirical knowledge information system and empirical probabilistic methods, it determines the interaction for the situation and generates the run-time model ACT of that interaction.

Step 2. Request execution of the created ACT from the external dialogue system.

Step 3. Request empirical learning from the empiricalization computing unit for an unknown, unexperienced situation.
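Tying the three steps together yields a simple recognize-render-or-learn cycle. In this C++ sketch every type and function is an illustrative stub standing for the operations named in Steps 1 through 3, not an API defined by the patent:

```cpp
#include <string>

struct RawInput    { std::string data; };     // HRI results, sensors, GPS...
struct Act         { std::string payload; };  // run-time model of interaction
struct Recognition { bool known = false; Act act; std::string situationId; };

Recognition recognize(const RawInput&);            // Step 1 (sketched above)
void render(const Act&);                           // Step 2: dialogue system
void requestEmpiricalization(const std::string&);  // Step 3: hand to SEL

void ecaCycle(const RawInput& input) {
    Recognition r = recognize(input);
    if (r.known) render(r.act);                    // execute the generated ACT
    else requestEmpiricalization(r.situationId);   // learn the unknown situation
}
```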

2. Empiricalization of the Perceptual Circuit (SEL)

Empirical models of unexperienced, unknown situations are incorporated into the empirical knowledge information system.

Step 1. Layering of the unknown empirical model. This involves classifying and creating the primitive, classification, and derived models of the empirical model.

The primitive model is a model in which the original properties of the model are not lost.

The classification model is a model classified by the horizontal and vertical hierarchical relationships of the model.

The derived model is an extensible model obtained by prediction from a classification model.

Step2. Create a PAL interface from an unknown empirical model.

This involves creating PAL elements and rewriting the PAL describers to generate PAL training samples. In this process, the scalar displacements between PAL elements are changed, and PAL describers defining new relationships between PAL elements may additionally be created.
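The two SEL steps can be pictured as tagging each new model with its layer and appending training samples from which the describer distributions are re-estimated. A C++ sketch with assumed names:

```cpp
#include <string>
#include <utility>
#include <vector>

// Step 1 (sketch): the three layers named above.
enum class Layer { Primitive, Classification, Derived };

struct LayeredModel {
    std::string id;
    Layer layer;  // primitive: original properties kept; classification:
                  // placed in the hierarchy; derived: predicted extension
};

// Step 2 (sketch): a PAL training sample records one experienced
// (situation, interaction) pair for re-estimating a describer.
struct PalTrainingSample {
    std::string situationId;
    std::string interactionId;
};

void addSample(std::vector<PalTrainingSample>& samples,
               std::string situationId, std::string interactionId) {
    samples.push_back({std::move(situationId), std::move(interactionId)});
    // Re-estimating the describer's distribution from 'samples' follows here.
}
```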

Hereinafter, an empirical situation recognition method for a robot according to the present invention will be described in detail with reference to the accompanying drawings.

FIG. 2 is a block diagram showing the external view of the entire system in which the ECA perception circuit implementing the empirical situation recognition method for a robot according to the present invention is realized. The figure shows the linkage of computational operations along the input/output (I/O) flows between the components of the system.

Reference numeral ① is the raw data output layer that sends the raw data of the situation factors. This data is returned from existing HRI solutions (voice and video input solutions), devices, and network systems, and can be sent by various devices or software, such as input sensor data, HRI data, GPS, and Internet information. The raw data format differs depending on the layer sending it, but it should be divided into original media data and meta information.

For example, in the case of a speech recognition (SR) HRI solution, a word or sentence in string form is input; when spoken language understanding (SLU) processing is applied, the input may be in a structured data format (e.g., XML). In the case of image recognition, the input is the recognized image media data and meta information about it.

Data such as images and sounds likewise become original media data plus the meta information describing them; the original media data is kept for reference. The input raw data is redefined into the data format processed by the ECA perceptual circuit, and a string-based information specification is added.

Reference numeral ② is the situation factor data (voice data, image data, etc.) classified by the type of raw data sent from ①, and ②′ is the result returned from the spoken language understanding layer (SLU), which interprets the words returned by the speech recognition (SR) layer of ①; this step is optional. The resulting data is input to the empirical situation recognition computing circuit unit (14) of the ECA.

Reference numeral ③ is the model data of the empirical knowledge information system used in the ECA situational awareness processing stage. These data are managed by the PAL interface and are changed and updated in the empiricalization stage (SEL). The data are the empirical models of the information handled in the empirical knowledge information system, such as situations and interactions, and situations and situation factors; access is through the PAL interface.

Reference numeral ④ is the computing circuit unit that computes the empirical situational awareness of the ECA. It normalizes the situation factor data to create the empirical knowledge model "interaction on situation". For the generated empirical model, it determines which experienced model it corresponds to, based on the analysis of empirical and probabilistic relations according to the ECA situation laws, and generates the ACT (⑥) for the subject's interaction. The generated empirical model is also reflected in the empirical knowledge information system (PAL), and empiricalization is requested from the empiricalization computing circuit layer.

Reference numeral ⑤ is an HRI based dialogue system computing layer, which renders the ACT (⑥) requested by the ECA.

Reference numeral ⑥ denotes the ACT rendered in the dialogue system computing layer (16). It can take various formats, such as speech, words, movement, images, and commands.

Reference numeral ⑦ is the empiricalization computing unit that computes the empiricalization of the ECA perceptual circuit.

FIG. 3 is a flow chart showing the high-level flow of the empirical situation recognition operation included in the empirical situation recognition method for a robot according to the present invention, that is, the upper flow of the empirical situation recognition computing circuit unit (④) of FIG. 2. Reference numeral 22 is the process by which the empirical models of the situation factors are generated from the input raw context information through empirical probabilistic inference over the hierarchical empirical knowledge information system.

Reference numeral 24 is the process of generating the empirical model of the situation, based on situation laws 1 and 2, from the empirical models of the situation factors newly generated in the previous step (22).

Reference numeral 26 is the process of acquiring the experienced model for the empirical model of the situation newly generated in the previous step (24), through the algorithm of FIG. 5 described below.

Reference numeral 28 is the process of acquiring the empirical model of the interaction with the empirical model of the situation, through the algorithm of FIG. 6, based on the "situation : interaction" empirical models of the empirical knowledge information system.

As described above, reference numeral 12 is the learning data of the empirical models of the empirical knowledge information system and is input data of the previous step (28).

FIG. 4 is a flowchart illustrating the processes of hierarchical empirical model generation and situation empirical model generation included in the empirical situation recognition method for a robot according to the present invention.

Referring to the drawing, which details the empirical model generation processes (22, 24) of FIG. 3: reference numeral 30 is the process of performing normalization to generate the empirical models of the input raw data. The data is validated, a string-based information specification is attached, data redefinition such as assigning a unique ID is performed, and the data is classified for the later hierarchization step (e.g., 32).

Reference numeral 32 is the process of generating the empirical models of the normalized data through the hierarchical empirical knowledge information system; the hierarchization is performed on the data classified by type and by horizontal and vertical relations. (At this stage the PAL interface is built, creating the primitive, classification, and derived models.)

Reference numeral 34 is the process of finding the master situation factor through the magnitude relation, under situation law 3, of the scalar displacements in the empirical models of the situation factors newly generated in the previous process (32). The master situation factor may be a candidate model for the interaction, and it may not exist.

Reference numeral 36 is the process of normalizing the empirical models of the situation factors of the previous process (32), based on situation laws 1 and 2, to generate the empirical model of the situation.

Reference numeral 38 is the process of generating the primitive model of the situation, a collection of the scalar displacements of the empirical models of the situation factors selected in the previous process (36).

Reference numeral 40 is the process of generating the empirical model of the situation from the primitive model of the situation, the collection of the empirical models of the situation factors of the previous process (38).

FIG. 5 is a diagram illustrating the algorithm for acquiring the experienced model of the situation, a sub-flow of the process (26) of FIG. 3 included in the empirical situation recognition method for a robot according to the present invention.

Referring to the drawing, reference numeral 42 is the process of receiving the empirical model of the situation to be recognized.

Reference numeral 44 is the computing process that checks whether, in the intersection of the item sets of the PAL describers for each situation factor of the empirical model of the situation of the previous process (42), a model exists whose element count is 1 and which satisfies situation law 1, and that determines such a model to be the experienced model of the empirical model of the situation of the previous process (42).

Reference numeral 46 is the computing process that, when the condition of the previous process (44) does not satisfy situation law 1 and the intersection of the previous process (44) has more than one element, judges by situation law 5 that the candidate with the smallest number of elements (situation factors) is the experienced model of the previous process (42). At this time, if multiple experienced models are selected, either the model belonging to the item set of the PAL describer for the master situation factor of the empirical model of the situation of the previous process (42) is chosen (when the master situation factor exists), or situation law 3 is applied to the magnitude relation of the scalar displacements of the situation factors. Among the candidates, it is appropriate under situation law 5 to select the one with the smaller number of elements, since a situation with fewer elements is contained in a situation with more elements.

Reference numeral 48 is the computing process of obtaining the experienced model of the previous process (42) through stochastic inference over the empirical models of the empirical knowledge information system when the conditions of the previous process (46) are not satisfied. A model satisfying situation law 2 is obtained; if several models are found, the procedure of the previous process (46) is applied.

Reference numeral 50 is the computing process of selecting the empirical model of the situation of the previous process (42) itself as the experienced model and requesting its empiricalization when no experienced model is found in the previous process (48).

Reference numeral 52 is the computing process of requesting empiricalization when no experienced model for the empirical model of the situation of the previous process (42) is found or an exception occurs anywhere in the algorithm.

Reference numeral 54 is the computing process of requesting the empiricalization operation of the perceptual circuit so as to empiricalize the unknown empirical model.

FIG. 6 is a diagram illustrating the algorithm for obtaining the empirical model of the interaction, a sub-flow of the process (28) of FIG. 3 included in the empirical situation recognition method for a robot according to the present invention.

Referring to the drawing, reference numeral 60 is the process of receiving the empirical model of the situation to be recognized, and reference numeral 62 is the computing process of obtaining the empirical model of the interaction with the situation of the previous process (60) by means of the PAL describers of the empirical knowledge information system and empirical probabilistic inference.

Reference numeral 64 is the computing process that selects the highest-priority model among the empirical models of interaction obtained in the previous process (62), in consideration of situation law 3 and the master situation factor.

Reference numeral 66 is the process of acquiring the empirical model of the interaction for the candidate models of the empirical model of the situation of the previous process (60) when no empirical model of the interaction is obtained in the previous process (62). This process is optional and can be omitted as needed.

Reference numeral 68 is the process of newly creating an empirical model of interaction with the empirical model of the situation, in consideration of situation laws 3 and 5 and the master situation factor. It involves modifying the empirical model of the previous process (60) by judging the priority of the action from the scalar displacements of the situation factors of the situation and removing low-priority situation factors according to situation law 5. Finally, considering the hierarchical relationships of the empirical model of the previous process (60), the empirical model of the interaction within the allowable range is generated.

Reference numeral 70 is the process of generating the run-time model ACT of the finally selected empirical model of interaction; execution of the generated ACT is requested from the dialogue system.

Reference numeral 72 is the computing process that requests empiricalization of the newly created interaction model for the situation of the previous process (60) as an empirical model of interaction with the situation ("situation : interaction"); if no empirical model of the interaction for the situation is obtained, the situation of the previous process (60) is regarded as unknown and empiricalization is requested from the empiricalization computing unit.

FIG. 7 is a diagram illustrating the sub-flow of the empiricalization process of the ECA perceptual circuit in which the empirical situation recognition method for a robot according to the present invention is implemented.

Referring to the drawing, which details the empiricalization process (⑦) of FIG. 2: reference numeral 80 is the process of receiving the empirical model of the interaction ("situation : interaction") with an unknown situation or a newly added situation.

Reference numeral 82 is the process of hierarchizing the empirical models of the previous process (80) in the empirical knowledge information system. In this process (82), the models are stratified into the primitive, classification, and derived models through hierarchical empirical situational recognition, and the hierarchy is re-formed by reflecting the experience values of the empirical models. The experience value is data obtained through empirical situational awareness and consists of the action frequency of the situation factors, the exposure frequency of the situation, the intimacy of the interaction with the situation, and the result of the interaction.

Reference numeral 84 is the process of adjusting the scalar displacements between the empirical models, in which the experience values of the empirical models, the subject's attributes, and the current situation can be considered.

Reference numeral 86 is the computing process of adding the empirical models of the previous process (84) to the PAL element groups of the hierarchy of the empirical knowledge information system.

Reference numeral 88 is the computing process that examines whether new relationships need to be defined among the empirical models of the previous process (86); here the interrelationships among models mean, for example, a situation relative to its situation factors or an interaction relative to its situation. For instance, if a new kind of situation factor is created, a new PAL describer must be written for the situation.

Reference numeral 90 is the process of adding the PAL elements of the previous process (86) to the PAL describers. Reference numeral 92 is the computing process of creating PAL describers for the models, among the empirical models of the previous process (86), that will define new relationships.

FIGS. 8, 9, and 10 are examples of execution states of the describers of the PAL interface implementing the empirical knowledge information system of the empirical situation recognition method for a robot according to the present invention, including a describer of the interaction with a situation.

Referring to these figures, they show fragmentary examples of the PAL describers of the ECA's empirical knowledge information system; an actual implementation is based on rich, hierarchical data sufficiently reflecting the user's experience.

The PAL describer defines the relationship between two sets of empirical models (groups of PAL elements) as a stochastic distribution or a functional relationship; for a domain (X-axis) model it records, for example, the occurrence frequency of each range (Y-axis) model. Various PAL describers may be defined according to the relationships between empirical models.

For example, a describer might express the lifestyle of a subject in a situation irrespective of any interaction.

In addition, a point on a PAL describer is called a PAL item. An item may hold a handle to another PAL describer (analogous to a pointer in a program written in a language such as C). We call this the PAL sub-describer: a PAL describer for the detail elements attached to the two model elements X and Y of the item. It is mainly used to implement describers for the detailed models of items.
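The handle analogy can be made concrete in C++ with a smart pointer from an item to its sub-describer; the types below are illustrative assumptions.

```cpp
#include <memory>
#include <string>
#include <vector>

struct PalDescriber;  // forward declaration for the sub-describer handle

// Sketch: one point (item) of a PAL describer. The optional handle plays
// the role of the program pointer the text compares it to.
struct PalItem {
    std::string domainElement;  // X-axis model element
    std::string rangeElement;   // Y-axis model element
    double      frequency;      // empirical occurrence frequency
    std::shared_ptr<PalDescriber> subDescriber;  // detail models of (X, Y)
};

struct PalDescriber {
    std::vector<PalItem> items;  // the points of the describer
};
```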

FIG. 8 shows a PAL describer of the situation for the situation factor "time".

FIG. 9 shows a PAL describer of the situation for the situation factor "event".

FIG. 10 shows a PAL describer of the ACT for the situation. In this example, the item ("ask your friend", *) has a PAL sub-describer for the detail models of both model elements.

Meanwhile, the empirical situation recognition method for a robot according to the embodiments of the present invention is not limited to the above embodiments; various modifications can be made without departing from the technical gist of the present invention.

Claims (4)

An empirical model generation step of generating an empirical model of a situation from input raw data; Performing a first algorithm; Performing a second algorithm; And empirical knowledge information understanding situation-based situation recognition method for a robot, which is performed through the empirical experiment on an empirical model of interaction between an unknown situation or a newly added situation in the first and second algorithm execution steps.
The empirical model generation step,
On the basis of the situation rules 1,2,3, the empirical model of situation factors, situations, and interactions is generated from the input raw data, and the master situation factors are obtained.
The first algorithm is
A seventeenth step of receiving an empirical model of the situation for recognition; Whether the empirical model of the situation of the 17th stage exists and the number of elements is 1 and satisfies the situation rule 1 in the intersection of the set of items of the PAL disk driver for each situational element of the 17th stage of the empirical model of the situation. An eighteenth step of determining the model satisfying the condition as an experienced model for the empirical model of the situation of the seventeenth step; If the situation law 1 is not satisfied in the condition of step 18, and the element of the intersection is greater than 1 in step 18, the situation rule 5 shows that the element having the smallest number of elements (situation factor) in each situation is found in step 17. A nineteenth step of performing a judgment regarded as a model; A twenty-first step of obtaining an experienced model of a seventeenth step through probabilistic inference from empirical models of empirical knowledge information if the condition of the nineteenth step is not satisfied; If a model experienced in step 20 cannot be obtained, step 21 of selecting an empirical model of the situation of step 17 as an experienced model and requesting empiricalization of the model; A twenty-second step of requesting empirical case when an experienced model for an empirical model of a situation of a seventeenth step is not obtained or an exception occurs throughout the algorithm; It consists of the twenty-third step of requesting the empirical computation of the perceptual circuit to empirically the unknown empirical model,
The second algorithm is
A twenty-fourth step of receiving an empirical model of the situation to be recognized; A twenty-fifth step of obtaining an empirical model of interaction with the situation of the twenty-fourth step by a PAL disc driver of empirical knowledge information and empirical probability inference; A twenty sixth step of selecting one of the empirical models having the highest priority among the empirical models of interaction obtained in the twenty-five step in consideration of the situation rule 3 and the master situation factor; Obtaining a heuristic model of the interaction with respect to candidate models of the empirical model of the situation corresponding to the twenty-fourth step if the empirical model of the interaction is not acquired in the twenty-fifth step; A 28th step of newly generating an empirical model of interaction with the empirical model of the situation in step 24 in consideration of the situation rules 3 and 5 and the master situation factor; Creating a run time model ACT of the empirical model of the finally selected interaction; The empirical model of the interaction with respect to the situation of the newly created stage 24 is requested to be empirical with the empirical model of the interaction with the situation ("situation: interaction"), and the interaction of the situation with the stage 24 If the empirical model of the action is not obtained, the thirty-stage step is regarded as an unknown situation and the thirty-stage request is made.
and wherein the empiricalization comprises: a thirty-first step of receiving the empirical model of an unknown situation, or of the interaction on a newly added situation ("situation:interaction"), produced while the first and second algorithms are performed; a thirty-second step of determining, according to situation rules 1, 2, and 3, the hierarchy of the empirical knowledge information to which the empirical models of the thirty-first step belong; a thirty-third step of adjusting the scalar displacement between the empirical models according to situation rules 1, 2, and 3; a thirty-fourth step of adding the empirical models of the thirty-third step to the PAL element group of the hierarchy of the empirical knowledge information to which they belong; a thirty-fifth step of examining, according to situation rules 1, 2, and 3, whether new relationships need to be defined among the empirical models of the thirty-fourth step; a thirty-sixth step of adding the PAL elements of the thirty-fourth step to the PAL disc driver; and a thirty-seventh step of preparing a PAL disc driver for those models, among the empirical models of the thirty-fourth step, between which a new relationship is to be defined according to situation rules 1, 2, and 3.
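The bookkeeping of the thirty-fourth and thirty-sixth steps (adding a newly empiricalized model to its hierarchy's PAL element group and indexing it in the PAL disc driver) might be sketched as below; the flat dict-of-sets layout and the empiricalize name are assumptions.

```python
# Hypothetical sketch of steps 34 and 36; the dict-of-sets layout is assumed.
from collections import defaultdict
from typing import Set

pal_element_groups = defaultdict(set)  # hierarchy level -> model ids (step 34)
pal_disc_driver = defaultdict(set)     # factor name -> model ids (step 36)

def empiricalize(model_id: str, factors: Set[str], level: int) -> None:
    pal_element_groups[level].add(model_id)    # add to the hierarchy's PAL element group
    for factor in factors:
        pal_disc_driver[factor].add(model_id)  # index the model under each factor

empiricalize("greet", {"location", "sound"}, level=1)
print(dict(pal_disc_driver))
```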
The method of claim 1, wherein, when multiple empirical models are selected in the nineteenth step, the model belonging to the PAL disc driver item set of the master situation factor (when the master situation factor exists) of the empirical model of the situation of the seventeenth step is selected first, or situation rule 3 is applied so that the magnitude of the scalar displacement of the situation factors is compared first, and wherein, when among the candidates a situation with a smaller number of situation factors is contained in a situation with a larger number of situation factors, the candidate with the smaller number of situation factors is selected: a computer-readable recording medium having recorded thereon a program for executing the empirical situation recognition method for a robot so characterized.
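A sketch of this tie-break follows, under the assumption that candidates indexed under the master situation factor are preferred first, then larger rule-3 scalar displacement, then the smaller factor count; the ordering of criteria follows the claim, while the tie_break name and data layout do not come from the patent.

```python
# Hypothetical sketch of the claim-2 tie-break; data layout assumed.
from typing import Dict, Set

def tie_break(candidates: Set[str],
              master_items: Set[str],
              scalar: Dict[str, float],
              size: Dict[str, int]) -> str:
    # prefer candidates indexed under the master situation factor, if any
    pool = (candidates & master_items) or candidates
    # then larger rule-3 scalar displacement, then fewer situation factors
    return min(pool, key=lambda m: (-scalar[m], size[m]))

print(tie_break({"a", "b"}, master_items={"a", "b"},
                scalar={"a": 0.9, "b": 0.9}, size={"a": 2, "b": 3}))  # -> a
```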
The method of claim 1, wherein the twenty-eighth step includes determining the priority of the interaction by the scalar displacement of the situation factors of the situation of the twenty-fourth step according to situation rule 3, and modifying the empirical model of the situation of the twenty-fourth step by removing the situation factors of low priority on the basis of situation rule 5, and wherein an empirical model of the interaction within the allowance is finally generated by considering the hierarchical relationships of the modified empirical model of the situation of the twenty-fourth step: a computer-readable recording medium having recorded thereon a program for executing the empirical situation recognition method for a robot so characterized.
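A sketch of the modification described in this claim: rank the situation factors of the twenty-fourth-step situation by scalar displacement and drop the lowest-priority one per situation rule 5. How many factors to drop, and the keep_at_least guard, are assumptions for illustration.

```python
# Hypothetical sketch of the claim-3 modification; keep_at_least is assumed.
from typing import Dict, Set

def prune_situation(factor_scalars: Dict[str, float], keep_at_least: int = 1) -> Set[str]:
    # rank factors by rule-3 scalar displacement, highest first
    ranked = sorted(factor_scalars, key=factor_scalars.get, reverse=True)
    # rule 5: drop the lowest-priority factor (never below keep_at_least)
    return set(ranked[:max(keep_at_least, len(ranked) - 1)])

print(prune_situation({"location": 0.9, "sound": 0.7, "noise": 0.1}))
# -> {'location', 'sound'}
```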
KR1020100032792A 2010-04-09 2010-04-09 Empirical Context Aware Computing Method For Robot KR101369810B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020100032792A KR101369810B1 (en) 2010-04-09 2010-04-09 Empirical Context Aware Computing Method For Robot

Publications (2)

Publication Number Publication Date
KR20110113414A KR20110113414A (en) 2011-10-17
KR101369810B1 true KR101369810B1 (en) 2014-03-05

Family

ID=45028773

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020100032792A KR101369810B1 (en) 2010-04-09 2010-04-09 Empirical Context Aware Computing Method For Robot

Country Status (1)

Country Link
KR (1) KR101369810B1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009509673A (en) 2005-09-30 2009-03-12 iRobot Corporation Companion robot for personal interaction

Also Published As

Publication number Publication date
KR20110113414A (en) 2011-10-17

Similar Documents

Publication Publication Date Title
KR101369810B1 (en) Empirical Context Aware Computing Method For Robot
CN110785763B (en) Automated assistant-implemented method and related storage medium
CN111033492B (en) Providing command bundle suggestions for automated assistants
CN113557566B (en) Dynamically adapting assistant responses
Dybkjaer et al. Evaluation and usability of multimodal spoken language dialogue systems
Wahlster Smartkom: Symmetric multimodality in an adaptive and reusable dialogue shell
WO2021093821A1 (en) Intelligent assistant evaluation and recommendation methods, system, terminal, and readable storage medium
Trung Multimodal dialogue management-state of the art
López-Cózar et al. Multimodal dialogue for ambient intelligence and smart environments
Pruvost et al. User interaction adaptation within ambient environments
JP2001249949A (en) Feeling generation method, feeling generator and recording medium
Yan Paired speech and gesture generation in embodied conversational agents
Zouhaier et al. Generating accessible multimodal user interfaces using MDA-based adaptation approach
Wanner et al. Towards a multimedia knowledge-based agent with social competence and human interaction capabilities
Griol et al. A proposal for the development of adaptive spoken interfaces to access the Web
Griol et al. Combining heterogeneous inputs for the development of adaptive and multimodal interaction systems
DeMara et al. Towards interactive training with an avatar-based human-computer interface
Tian Application and analysis of artificial intelligence graphic element algorithm in digital media art design
Karagiannidis et al. Supporting adaptivity in intelligent user interfaces: The case of media and modalities allocation
van Mulken Reasoning about the user's decoding of presentations in an intelligent multimedia presentation system
LU101660B1 (en) Multi-user complex problems resolution system
Cuayáhuitl et al. Hierarchical dialogue policy learning using flexible state transitions and linear function approximation
Lemon et al. Statistical approaches to adaptive natural language generation
Lemon et al. Reinforcement learning approaches to natural language generation in interactive systems.
Nickles et al. Towards a Unified Model of Sociality in Multiagent Systems.

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
AMND Amendment
E902 Notification of reason for refusal
AMND Amendment
E601 Decision to refuse application
AMND Amendment
J201 Request for trial against refusal decision
S901 Examination by remand of revocation
GRNO Decision to grant (after opposition)
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20170228

Year of fee payment: 4

LAPS Lapse due to unpaid annual fee