US20200097067A1 - Artificial Intelligence System and Interactive Responding Method - Google Patents
Info
- Publication number
- US20200097067A1 (application Ser. No. 16/140,552)
- Authority
- US
- United States
- Prior art keywords
- server
- input data
- user
- interactions
- states
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0002—Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
- A61B5/0015—Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
- A61B5/002—Monitoring the patient using a local or closed circuit, e.g. in a room or building
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0002—Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
- A61B5/0015—Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
- A61B5/0022—Monitoring a patient using a global network, e.g. telephone networks, internet
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/42—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
-
- G06F17/30943—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/448—Execution paradigms, e.g. implementations of programming paradigms
- G06F9/4498—Finite state machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/67—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
Abstract
Description
- The present invention relates to an artificial intelligence (AI) system and interactive responding method, and more particularly, to an AI system and interactive responding method capable of fusing a game AI with an interactive AI.
- With the advancement and development of technology, the demand for interaction between a computer system and a user has increased. Human-computer interaction technology, e.g. somatosensory games and virtual reality (VR), augmented reality (AR) and extended reality (XR) environments, has become popular because of its physiological and entertainment functions. In the above-stated virtual environments, a non-player character (NPC), such as an NPC avatar, is created to help the user. The NPC is embedded with a game artificial intelligence (AI) which enables the NPC to interact with the user, e.g. by replying or reacting to simple messages or questions from the user; however, the current NPC can only give simple replies or canned machine responses, restricted to texts or words in the virtual environment.
- However, the user may express himself or herself in many ways in the virtual environment, e.g. through speech, facial expressions, body movements and gestures. Under this situation, an NPC that cannot interact with the user in a human-like manner degrades the user experience.
- Therefore, the present invention provides an artificial intelligence system and interactive responding method to make the NPC in the virtual environment more interactive to the user and provide a better user experience.
- The present invention discloses an artificial intelligence system, comprising a first server configured to receive first input data from a user and to determine whether the first input data conform to any one of a plurality of variation conditions or not; and a second server coupled to the first server and configured to receive second input data from the first server when the first input data conform to any one of the plurality of variation conditions and to determine a plurality of interactions in response to the first input data from the user; wherein the first input data from the user are related to an action, a facial expression, a gaze, a text, a speech, a gesture, an emotion or a movement generated by the user.
- The present invention further discloses an interactive responding method for an artificial intelligence (AI) system, comprising a first server receiving first input data from a user and determining whether the first input data conform to any one of a plurality of variation conditions or not; and a second server receiving second input data from the first server when the first input data conform to any one of the plurality of variation conditions and determining a plurality of interactions in response to the first input data from the user; wherein the first input data from the user are related to an action, a facial expression, a gaze, a text, a speech, a gesture, an emotion or a movement generated by the user.
- These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
- FIG. 1 is a schematic diagram of an artificial intelligence system according to an embodiment of the present invention.
- FIG. 2 is a schematic diagram of an operation process of the artificial intelligence system according to an embodiment of the present invention.
- FIG. 3 is a schematic diagram of an interactive responding process according to an embodiment of the present invention.
- Please refer to FIG. 1, which is a schematic diagram of an artificial intelligence (AI) system 10 according to an embodiment of the present invention. The AI system 10 includes a first server 102, a second server 104 and a third server 106. The first server 102 is configured to receive first input data from a user and to determine whether the first input data conform to any one of a plurality of variation conditions or not. In an embodiment, the first server 102 may be a game AI server, which receives information from the user in a virtual environment, such as actions, facial expressions, gazes, texts, speeches, messages, body gestures or body movements generated by the user; the game AI server then determines whether the information generated by the user conforms to the variation conditions or not. The second server 104 is configured to receive second input data from the first server 102 when the first input data conform to the variation conditions, and to determine a plurality of interactions in response to the first input data from the user. Notably, the first input data and the second input data might be the same. In an embodiment, the second server 104 may be a Chatbot server, which determines the interactions in response to the actions, facial expressions, gazes, texts, speeches, messages, body gestures or body movements generated by the user. After the interactions are determined, the first server 102 displays a plurality of actions corresponding to the interactions via a non-player character (NPC). The third server 106 may include a data application programming interface (API) server to retrieve and store a plurality of states of the user and the NPC for the interactions in response to the first input data. In detail, the third server 106 stores the states of the user corresponding to the interactions displayed via the NPC, such as a friendship value between the user and the NPC, a sport preference value of the user or a happiness value of the NPC. In an embodiment, the third server 106 may store into a database that the user prefers soccer to basketball. Therefore, the AI system 10 of the present invention coordinates the game AI server (i.e. the first server 102) with the Chatbot server (i.e. the second server 104) to interact with the user via the NPC with speeches, facial expressions, body movements and gestures.
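- The patent text does not include source code; the following minimal Python sketch illustrates one way the three-server division of labor described above could be wired together. All class, method and field names (GameAIServer, ChatbotServer, DataAPIServer, handle_input, and so on) are hypothetical and not taken from the disclosure.

```python
from dataclasses import dataclass


@dataclass
class UserInput:
    """Hypothetical first input data generated by the user."""
    kind: str               # e.g. "speech", "gesture", "movement"
    payload: str            # e.g. the utterance or a gesture name
    distance_to_npc: float  # distance between the user and the NPC, in meters


class DataAPIServer:
    """Stand-in for the third server 106: stores states of the user and the NPC."""
    def __init__(self) -> None:
        self.states: dict = {"friendship": 0.0}

    def update(self, key: str, value) -> None:
        self.states[key] = value


class ChatbotServer:
    """Stand-in for the second server 104: decides interactions for forwarded input."""
    def decide_interactions(self, data: UserInput) -> list:
        # A real Chatbot server would run language understanding here;
        # this stub just returns a plausible interaction for a speech input.
        if data.kind == "speech":
            return [f"reply_to:{data.payload}", "smile"]
        return ["wave"]


class GameAIServer:
    """Stand-in for the first server 102: applies the variation conditions."""
    def __init__(self, chatbot: ChatbotServer, store: DataAPIServer) -> None:
        self.chatbot = chatbot
        self.store = store
        # Variation conditions modeled as predicates over the input (assumed
        # form); the 1-meter distance check is the example in the description.
        self.variation_conditions: list = [
            lambda d: d.distance_to_npc < 1.0,
        ]

    def handle_input(self, data: UserInput) -> list:
        if any(cond(data) for cond in self.variation_conditions):
            # The second input data are here simply the same object, since the
            # description notes the first and second input data might be the same.
            interactions = self.chatbot.decide_interactions(data)
            self.store.update("last_interactions", interactions)
            return interactions
        return ["default_idle"]  # no condition met; handled locally


if __name__ == "__main__":
    server = GameAIServer(ChatbotServer(), DataAPIServer())
    print(server.handle_input(UserInput("speech", "hello", 0.8)))
```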
- The examples mentioned above briefly explain that the AI system of the present invention provides a more intelligent and interactive way to communicate with the user in the virtual environment. Notably, those skilled in the art may make proper modifications. For example, the first server 102 and the second server 104 are not limited to being implemented as a game AI server and a Chatbot server; any other kind of server that can analyze or understand the information generated by the user may be adopted to determine the interactions and actions in response to the user.
- In detail, the first server 102 includes a pre-processor 108 and a finite state machine (FSM). The pre-processor 108 generates third input data according to the first input data and the interactions generated by the second server 104. The FSM changes a plurality of states according to the third input data, wherein the states correspond to the interactions. More specifically, please refer to FIG. 2, which is a schematic diagram of an operation process of the AI system 10 according to an embodiment of the present invention. The first server 102 receives the first input data made by the user, such as speeches, movements or gestures, and then determines whether the first input data conform to any one of the variation conditions, for example, whether a distance between the user and the NPC in the virtual environment is less than 1 meter. When the variation condition is satisfied, i.e. the distance between the user and the NPC is less than 1 meter, the first server 102 transfers the second input data to the second server 104, and the second server 104 determines the interactions in response to the first input data according to the speeches, movements or gestures made by the user. In an embodiment, the third server 106 memorizes the interactions corresponding to the first input data determined by the second server 104, and updates the interactions with the first server 102 and the second server 104. After the pre-processor 108 receives the interactions corresponding to the first input data from the second server 104, the pre-processor 108 generates the third input data. The FSM of the first server 102 further generates the corresponding actions and controls the NPC to display the corresponding actions to the user when the state of the FSM is changed according to the third input data, e.g. when an emotion state of the FSM is changed from happy to angry. Moreover, the FSM updates its states to the first server 102. In this way, the user may interact with the NPC in a more natural way in the virtual environment. Notably, the first input data from the user are related to a text, a speech, a movement, a gesture or an emotion generated by the user, and not limited thereto.
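- As a rough illustration of the pre-processor/FSM interplay described above, the sketch below merges a user event with the interactions returned by the second server into third input data and feeds the result to a small emotion FSM. The state names, the transition table and the merge rule are assumptions for illustration; the disclosure does not specify them.

```python
from dataclasses import dataclass


@dataclass
class ThirdInput:
    """Hypothetical third input data: the user event merged with the interactions."""
    user_event: str
    interactions: tuple


class EmotionFSM:
    """A minimal finite state machine whose states correspond to interactions."""
    # Assumed transition table: (current state, trigger) -> next state.
    TRANSITIONS = {
        ("happy", "insult"): "angry",
        ("angry", "apology"): "happy",
    }

    def __init__(self, state: str = "happy") -> None:
        self.state = state

    def step(self, data: ThirdInput) -> list:
        next_state = self.TRANSITIONS.get((self.state, data.user_event), self.state)
        actions = []
        if next_state != self.state:  # state changed, e.g. happy -> angry
            actions.append(f"display_emotion:{next_state}")
            self.state = next_state
        actions.extend(f"perform:{i}" for i in data.interactions)
        return actions  # the actions the NPC is controlled to display


def pre_process(user_event: str, interactions: list) -> ThirdInput:
    """Stand-in for the pre-processor 108 (the merge rule is an assumption)."""
    return ThirdInput(user_event, tuple(interactions))


fsm = EmotionFSM()
print(fsm.step(pre_process("insult", ["step_back"])))
# -> ['display_emotion:angry', 'perform:step_back']
```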
- In another embodiment, when the first input data made by the user do not conform to any one of the variation conditions, the FSM of the first server 102 may directly evaluate the first input data and generate corresponding actions. More specifically, when the user asks the NPC essential questions, such as commands, self-introductions or road directions, the FSM of the first server 102 may directly look up the third server 106 for reference and determine the corresponding actions to the questions. In an embodiment, when the user asks the AI system 10 to play music or a video, the FSM of the first server 102 may directly generate the corresponding actions and play the music or video. In this example, the first server 102 does not transfer the first input data to the second server 104 for further determination of interactions, since none of the variation conditions is satisfied.
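- Taking the two embodiments together, the routing decision could look like the hypothetical function below: forward to the Chatbot server when a variation condition fires, otherwise let the first server answer essential requests directly, optionally consulting the third server. It reuses the stand-in classes from the first sketch above, and the dispatch keys are illustrative only.

```python
def respond(data, game_ai):
    """Route first input data along one of the two paths described above."""
    # Path 1: a variation condition fires -> the second server decides.
    if any(cond(data) for cond in game_ai.variation_conditions):
        return game_ai.chatbot.decide_interactions(data)
    # Path 2: no condition fires -> the first server answers directly,
    # consulting the third server for essential questions.
    if data.payload in ("self_introduction", "road_direction"):
        reference = game_ai.store.states.get(data.payload, "unknown")
        return [f"answer:{data.payload}:{reference}"]
    if data.payload in ("play_music", "play_video"):
        return [f"start:{data.payload}"]  # play the requested media directly
    return ["default_idle"]
```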
- Since the first input data generated by the user are related to the text, speech, movement, gesture or emotion of the user, the second server 104 of the AI system 10 may control the actions or the emotions displayed via the NPC accordingly. In a usage scenario of the virtual environment, when the user actively walks to the NPC and chats, or when the user stays still and stares at the NPC for over 6 seconds, the pre-processor 108 determines, according to the states of the FSM, that the NPC actively walks to the user and chats. Alternatively, when the user walks by the NPC more than 3 times, the pre-processor 108 determines that the NPC actively walks to the user and asks the user for a reason.
- In addition to the actions stated above, the FSM may determine actions toward the user based on gestures, movements or emotions. In an example, when the user is too far from the NPC, the FSM may determine an action of asking the user to come closer to the NPC with a hand-waving movement. In another example, when the user attempts to touch the NPC in the virtual environment, the FSM may determine actions of stopping the user with a speech or shunning the user with anger. In still another example, the FSM may determine an action of covering the NPC's mouth when agreeing with the user.
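- The triggers in these scenarios (a stare longer than 6 seconds, more than 3 walk-bys, a too-large distance) can be expressed as simple threshold rules. A hypothetical sketch, with thresholds and action names loosely following the examples above; the 5-meter "too far" value is an assumption:

```python
from dataclasses import dataclass


@dataclass
class SceneSnapshot:
    """Assumed per-frame observations of the user relative to the NPC."""
    stare_seconds: float  # how long the user has stared at the NPC
    walk_by_count: int    # how many times the user has walked past the NPC
    distance_m: float     # user-NPC distance in meters
    touching_npc: bool    # whether the user attempts to touch the NPC


def proactive_actions(s: SceneSnapshot) -> list:
    """Map the usage scenarios above to NPC actions (illustrative rules only)."""
    actions = []
    if s.stare_seconds > 6.0:   # stares for over 6 seconds
        actions.append("walk_to_user_and_chat")
    if s.walk_by_count > 3:     # walks by more than 3 times
        actions.append("walk_to_user_and_ask_reason")
    if s.distance_m > 5.0:      # "too far" threshold is assumed
        actions.append("wave_hands_and_ask_to_come_closer")
    if s.touching_npc:
        actions.append("stop_with_speech_or_shun_with_anger")
    return actions


print(proactive_actions(SceneSnapshot(7.2, 0, 0.9, False)))
# -> ['walk_to_user_and_chat']
```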
- Based on different applications and design concepts, the AI system 10 of the present invention may be implemented in various ways. Furthermore, the operating process of the AI system 10 may be summarized as an interactive responding method 30, as shown in FIG. 3, which includes the following steps (a minimal sketch of the flow follows the list):
- Step 302: Start.
- Step 304: The first server 102 receives the first input data from the user and determines whether the first input data conform to any one of the variation conditions or not. If yes, execute step 306; if no, execute step 308.
- Step 306: The second server 104 receives the second input data from the first server 102 when the first input data conform to any one of the variation conditions and determines the interactions in response to the first input data from the user.
- Step 308: The pre-processor 108 generates the third input data according to the first input data and the interactions generated by the second server 104.
- Step 310: The FSM changes the states according to the third input data.
- Step 312: End.
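- The steps above map almost one-to-one onto code. Below is a compact sketch of one pass through the flow, reusing the hypothetical GameAIServer, pre_process and EmotionFSM stand-ins from the earlier sketches; the wiring is an assumption consistent with FIG. 3 rather than a definitive implementation.

```python
def interactive_responding_method(data, game_ai, fsm):
    """One pass through steps 304-310 (start/end omitted)."""
    # Step 304: the first server checks the variation conditions.
    if any(cond(data) for cond in game_ai.variation_conditions):
        # Step 306: the second server determines the interactions.
        interactions = game_ai.chatbot.decide_interactions(data)
    else:
        interactions = []  # no Chatbot involvement on this path
    # Step 308: the pre-processor builds the third input data.
    third_input = pre_process(data.payload, interactions)
    # Step 310: the FSM changes state and yields the NPC's actions.
    return fsm.step(third_input)
```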
- Step 312: End.
- The details of the interactive responding
process 30 may be referred to the above mentioned embodiments of the AI system 10 and are not narrated herein for brevity. - Notably, the embodiments stated above illustrate the concept of the present invention, those skilled in the art may make proper modifications accordingly, and not limited thereto. For example, the variation conditions may be varied or be adjusted according to indications of a user or a manufacturer, or settings of a computer system, the interactions or the actions stored in a database DB of the third server, and not limited thereto, which all belongs to the scope of the present invention.
- In summary, the present invention provides an artificial intelligence system and interactive responding method that make the NPC in the virtual environment more interactive to the user, such that the NPC may interact with the user through speeches, body gestures and emotions, providing a better user experience.
- Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
Claims (8)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/140,552 US10606345B1 (en) | 2018-09-25 | 2018-09-25 | Reality interactive responding system and reality interactive responding method |
JP2018227981A JP2020052993A (en) | 2018-09-25 | 2018-12-05 | Artificial intelligence system and interactive response method |
TW107144751A TWI698771B (en) | 2018-09-25 | 2018-12-12 | Reality interactive responding system and reality interactive responding method |
EP18212823.1A EP3629130A1 (en) | 2018-09-25 | 2018-12-15 | Artificial intelligence system and interactive responding method |
CN201811539859.4A CN110941329A (en) | 2018-09-25 | 2018-12-17 | Artificial intelligence system and interactive response method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/140,552 US10606345B1 (en) | 2018-09-25 | 2018-09-25 | Reality interactive responding system and reality interactive responding method |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200097067A1 (en) | 2020-03-26 |
US10606345B1 US10606345B1 (en) | 2020-03-31 |
Family
ID=65003083
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/140,552 Active US10606345B1 (en) | 2018-09-25 | 2018-09-25 | Reality interactive responding system and reality interactive responding method |
Country Status (5)
Country | Link |
---|---|
US (1) | US10606345B1 (en) |
EP (1) | EP3629130A1 (en) |
JP (1) | JP2020052993A (en) |
CN (1) | CN110941329A (en) |
TW (1) | TWI698771B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11521642B2 (en) | 2020-09-11 | 2022-12-06 | Fidelity Information Services, Llc | Systems and methods for classification and rating of calls based on voice and text analysis |
WO2023073991A1 (en) * | 2021-11-01 | 2023-05-04 | 日本電信電話株式会社 | Information processing system, server device, information processing method, and program |
JPWO2023162044A1 (en) * | 2022-02-22 | 2023-08-31 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080300055A1 (en) * | 2007-05-29 | 2008-12-04 | Lutnick Howard W | Game with hand motion control |
US20110093820A1 (en) * | 2009-10-19 | 2011-04-21 | Microsoft Corporation | Gesture personalization and profile roaming |
US20130278501A1 (en) * | 2012-04-18 | 2013-10-24 | Arb Labs Inc. | Systems and methods of identifying a gesture using gesture data compressed by principal joint variable analysis |
US20140047316A1 (en) * | 2012-08-10 | 2014-02-13 | Vimbli, Inc. | Method and system to create a personal priority graph |
US20170228034A1 (en) * | 2014-09-26 | 2017-08-10 | Thomson Licensing | Method and apparatus for providing interactive content |
US20180068173A1 (en) * | 2016-09-02 | 2018-03-08 | VeriHelp, Inc. | Identity verification via validated facial recognition and graph database |
US20180068174A1 (en) * | 2016-09-07 | 2018-03-08 | Steven M. Gottlieb | Image and identity validation in video chat events |
US20190236416A1 (en) * | 2018-01-31 | 2019-08-01 | Microsoft Technology Licensing, Llc | Artificial intelligence system utilizing microphone array and fisheye camera |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2000031656A2 (en) | 1998-11-19 | 2000-06-02 | Andersen Consulting, Llp | A system, method and article of manufacture for effectively interacting with a network user |
US10445800B2 (en) | 2011-08-01 | 2019-10-15 | Intel Corporation | Witnessed ad-hoc uservices |
US9349118B2 (en) | 2011-08-29 | 2016-05-24 | Avaya Inc. | Input, display and monitoring of contact center operation in a virtual reality environment |
CN103593546B (en) | 2012-08-17 | 2015-03-18 | 腾讯科技(深圳)有限公司 | Non-dynamic-blocking network game system and processing method thereof |
US9386152B2 (en) | 2013-03-15 | 2016-07-05 | Genesys Telecommunications Laboratories, Inc. | Intelligent automated agent and interactive voice response for a contact center |
CN104225918A (en) * | 2013-06-06 | 2014-12-24 | 苏州蜗牛数字科技股份有限公司 | NPC autonomous feedback interaction method based on online games |
JP6598369B2 (en) * | 2014-12-04 | 2019-10-30 | 株式会社トランスボイス・オンライン | Voice management server device |
CN105975622B (en) * | 2016-05-28 | 2020-12-29 | 福州云之智网络科技有限公司 | Multi-role intelligent chatting method and system |
CN106471444A (en) * | 2016-07-07 | 2017-03-01 | 深圳狗尾草智能科技有限公司 | A kind of exchange method of virtual 3D robot, system and robot |
KR101889279B1 (en) | 2017-01-16 | 2018-08-21 | 주식회사 케이티 | System and method for provining sercive in response to voice command |
-
2018
- 2018-09-25 US US16/140,552 patent/US10606345B1/en active Active
- 2018-12-05 JP JP2018227981A patent/JP2020052993A/en active Pending
- 2018-12-12 TW TW107144751A patent/TWI698771B/en active
- 2018-12-15 EP EP18212823.1A patent/EP3629130A1/en not_active Ceased
- 2018-12-17 CN CN201811539859.4A patent/CN110941329A/en active Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080300055A1 (en) * | 2007-05-29 | 2008-12-04 | Lutnick Howard W | Game with hand motion control |
US20110093820A1 (en) * | 2009-10-19 | 2011-04-21 | Microsoft Corporation | Gesture personalization and profile roaming |
US20130278501A1 (en) * | 2012-04-18 | 2013-10-24 | Arb Labs Inc. | Systems and methods of identifying a gesture using gesture data compressed by principal joint variable analysis |
US20140047316A1 (en) * | 2012-08-10 | 2014-02-13 | Vimbli, Inc. | Method and system to create a personal priority graph |
US20170228034A1 (en) * | 2014-09-26 | 2017-08-10 | Thomson Licensing | Method and apparatus for providing interactive content |
US20180068173A1 (en) * | 2016-09-02 | 2018-03-08 | VeriHelp, Inc. | Identity verification via validated facial recognition and graph database |
US20180068174A1 (en) * | 2016-09-07 | 2018-03-08 | Steven M. Gottlieb | Image and identity validation in video chat events |
US20190236416A1 (en) * | 2018-01-31 | 2019-08-01 | Microsoft Technology Licensing, Llc | Artificial intelligence system utilizing microphone array and fisheye camera |
Also Published As
Publication number | Publication date |
---|---|
CN110941329A (en) | 2020-03-31 |
US10606345B1 (en) | 2020-03-31 |
EP3629130A1 (en) | 2020-04-01 |
TWI698771B (en) | 2020-07-11 |
JP2020052993A (en) | 2020-04-02 |
TW202013146A (en) | 2020-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11161044B2 (en) | Adaptive gaming tutorial system | |
US10606345B1 (en) | Reality interactive responding system and reality interactive responding method | |
US20200380259A1 (en) | Response to a Real World Gesture in an Augmented Reality Session | |
US20240082734A1 (en) | In-game resource surfacing platform | |
Dasgupta et al. | Voice user interface design | |
JP2018008316A (en) | Learning type robot, learning type robot system, and program for learning type robot | |
US11579752B1 (en) | Augmented reality placement for user feedback | |
JP2018522342A (en) | Determining the appearance of objects in the virtual world based on sponsorship of object appearance | |
US20220241688A1 (en) | Method, Apparatus, GUIs and APIs For A User Experience Design Related To Hands-Free Gaming Accessibility | |
Brock et al. | Developing a lightweight rock-paper-scissors framework for human-robot collaborative gaming | |
US20230123535A1 (en) | Online machine learning-based dialogue authoring environment | |
US20230009454A1 (en) | Digital character with dynamic interactive behavior | |
CN111643903A (en) | Control method and device of cloud game, electronic equipment and storage medium | |
CN117618890A (en) | Interaction method, interaction device, electronic equipment and computer readable storage medium | |
JP2001249949A (en) | Feeling generation method, feeling generator and recording medium | |
JP2022071467A (en) | Communication support program, communication support method and communication support system | |
US20240091650A1 (en) | Systems and methods for modifying user sentiment for playing a game | |
US20230330526A1 (en) | Controlling agents in a video game using semantic machine learning and a natural language action grammar | |
Torok et al. | Smart controller: Introducing a dynamic interface adapted to the gameplay | |
CN114995927A (en) | Information display processing method, device, terminal and storage medium | |
US20240335740A1 (en) | Translation of sign language in a virtual environment | |
US20240066413A1 (en) | Ai streamer with feedback to ai streamer based on spectators | |
US20230122202A1 (en) | Grounded multimodal agent interactions | |
JP7479016B2 (en) | Computer program, method and server device | |
US20240335737A1 (en) | Gesture translation with modification based on game context |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: XRSPACE CO., LTD., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHOU, PETER;CHU, FENG-SENG;LEE, CHENG-WEI;AND OTHERS;REEL/FRAME:046955/0970 Effective date: 20180918 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 4 |