WO2018121624A1 - Robot, server, and human-machine interaction method - Google Patents
Robot, server, and human-machine interaction method
- Publication number
- WO2018121624A1 (PCT/CN2017/119107)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- companion
- information
- robot
- emotion
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
- B25J11/001—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means with emotions simulating means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/161—Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1671—Programme controls characterised by programming, planning systems for manipulators characterised by simulation, either to verify existing program or to create and verify new program, CAD/CAM oriented, graphic oriented programming systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/176—Dynamic expression
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
- G09B5/065—Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/08—Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
- G09B5/12—Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations different stations being capable of presenting different information simultaneously
- G09B5/125—Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations different stations being capable of presenting different information simultaneously the stations being mobile
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
Definitions
- the invention relates to an artificial intelligence device, and in particular to a teachable companion robot with learning ability.
- the existing intelligent robots in the industry have the ability to recognize and express social emotions. Through cloud computing, robot learning technology, and sound and facial recognition technologies, they can understand people's characteristics and feelings, so that they can interact with people, express emotions, and communicate feelings. With the development of artificial intelligence and the needs of society, children's educational robots and companion robots have emerged, but most current children's robots offer only simple voice or behavioral interactions, such as simple movement, dialogue, or storytelling; they cannot take what interests the child as the object of machine learning, so it is difficult for them to have emotional interaction with children or to help children grow.
- the embodiment of the present invention provides a companion robot, in particular, a child-oriented robot.
- the robot provided by the embodiment of the present invention can understand children's interests and habits through long-term learning, adapts continuously as the child grows, and adaptively chooses content the child likes for interacting with the child. Further, it can be controlled by the parent or guardian so that only content the parent also approves of is selected for interaction with the child.
- the robot understands the needs of children and parents, helps children grow, and shares their interests with children.
- the object accompanied by the robot, also referred to as the companion target or the target object, may be a child.
- a caretaker or guardian of the target object (the child) in real life is called a companion object of the target object.
- the companion robot extracts from images the surrounding events and the companion objects to which the child reacts, filters the appropriate data, and compiles simulated object data, which may be referred to as a digital human or a digital human resource. The simulated object data is used to simulate or describe the companion object.
- the robot simulates the companion object with the simulated object data, and can thus reproduce the interaction between the child and the child's guardian or parent in reality.
- the companion robot in the embodiment of the invention can have emotional interaction and growth education with children.
- the companion robot of the embodiment of the present invention first detects and collects sensing information of the companion object of the target object, and the emotion information of the target object when interacting with the companion object.
- a sensor module is disposed on the robot, and the sensor module may include various suitable sensors such as a camera, an acceleration sensor, a gyroscope, and the like.
- the sensor module can acquire the sensing information of the companion object by capturing the image, video or sound of the companion object through the camera or other sensors. It is also possible to further collect images or videos of the environment to enrich the sensory information.
- the emotion information of the target object is recorded by collecting images or video of the target object through a camera or other sensor.
- the robot extracts an emotional feature quantity according to the emotional information, determines an emotional mode of the target object when interacting with the companion object according to the emotional feature quantity, and determines an interest of the target object to the companion object according to the emotional mode.
- the behavior data of the companion object is extracted from the sensing information, and the behavior data is filtered to obtain simulated object data.
- the simulated object data is used to describe the companion object.
- when the target object (the child) interacts with the companion object (a parent), the robot can capture the emotions of the child during the process, the behavior or voice of the parent, and reactions such as the child smiling or getting angry.
- the robot can determine the emotion mode of the target object from the emotion information, such as happy, excited, scared, or annoyed, and can also analyze how the mood changes over time. For example, the robot can capture the behaviors that make the child happy, the behaviors the child dislikes, and the behaviors to which the child responds only briefly, and obtain behavior data for each. By integrating the change of the child's interest over time, or the overall attitude of the reactions, the robot determines the child's degree of interest in a person, thing, or behavior.
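- purely for illustration, the following minimal Python sketch aggregates per-frame emotion labels into such a degree of interest; the label set, weights, and example behaviors are assumptions of the sketch, not values specified by this disclosure.

```python
# A minimal sketch: aggregate recognized per-frame emotion labels into an
# interest score for each observed behavior of the companion object.
# The labels and weights below are illustrative assumptions.
EMOTION_WEIGHTS = {"happy": 1.0, "excited": 0.8, "calm": 0.2,
                   "scared": -0.8, "annoyed": -1.0}

def interest_degree(emotion_sequence):
    """Mean emotion weight over the interaction, in [-1, 1]."""
    if not emotion_sequence:
        return 0.0
    total = sum(EMOTION_WEIGHTS.get(e, 0.0) for e in emotion_sequence)
    return total / len(emotion_sequence)

observed = {  # behavior of the companion object -> child's emotion sequence
    "mother_reads_story": ["happy", "happy", "calm"],
    "loud_scolding": ["scared", "annoyed"],
}
for behavior, sequence in observed.items():
    print(behavior, round(interest_degree(sequence), 2))
```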
- the robot selects behavior data from the behavior data of the companion object according to the degree of interest; the behavior data may include expressions, body movements, or tone. For example, the behaviors that interest the target object can be filtered out, and the behavior data describing those behaviors is used to generate the simulated object data. The robot can then simulate the companion object based on the virtual simulated object.
- the interest-determination step may be omitted, and the robot may extract the behavior data of the companion object from the sensing information directly according to the emotion mode.
- this can form simulated object data that brings the child into certain emotions. In this way, instead of evaluating the child's interest in a person or thing as a whole, the robot directly builds simulated object data that can bring the child into a certain mood, directly soothe the child, or educate the child.
- the robot simulates real people or things according to the simulated object data.
- for example, the robot can directly imitate the interaction of the child's mother with the child, which is especially useful when the mother is temporarily absent. Or, if the child is particularly interested in a cartoon character, simulated object data corresponding to the cartoon character can be created so that the robot simulates that character's interaction with the child. The robot can also simulate specific roles and educate the child during the interaction, helping the child grow or gain knowledge.
- the companion robot of the embodiment of the present invention can also offload the data processing to a server. The robot is responsible for collecting the sensing information of the companion object of the target object and the emotion information of the target object when interacting with the companion object, and then sends the sensing information and the emotion information to the service server; the service server analyzes the information to form the simulated object data. The simulated object data is then sent to the robot, and after the robot obtains it, the robot simulates the companion object to interact with the companion target according to the simulated object data.
- the robot of the embodiment of the invention can thus adaptively select content the child likes for interacting with the child, and can simulate the appropriate companion object according to the child's emotions during the interaction.
- a possible implementation of screening the behavior data to obtain the simulated object data may include: extracting key behavior features from the behavior data, and generating the simulated object data using the key features;
- the behavior data includes limb motions, and the key behavior features include limb key points or limb motion units, the key features being generated by statistical learning or machine learning; or the behavior data includes expressions, and the key behavior features include facial local key points or facial action units, the key features being generated by prior specification or machine learning; or the behavior data includes tone, and the key behavior features include acoustic signal features in the companion object's voice input, the key features being generated by prior specification or machine learning.
- in another possible implementation, imitation constraints may be determined in advance by the service provider, the guardian, or the system; the behavior data is matched against the imitation constraints, and the simulated object data is generated from the behavior data that meets the imitation constraints.
- things the child is interested in, or interactions, audio, and video materials the child finds enjoyable, are not necessarily beneficial to the child's growth; screening can remove data that should be excluded even though the child is interested in it. Conversely, some content may not interest the child much but is conducive to the child's growth or corrects the child's misunderstandings.
- through the constraint conditions, behavior data in which the child shows low interest can still be put into the source data of the simulated object data.
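- as a hedged illustration of this constraint matching, the following Python sketch screens behavior records against guardian-defined constraints; the record fields, tag names, and threshold are assumptions of the example, not part of this disclosure.

```python
# A minimal sketch of imitation-constraint screening: data that violates a
# constraint is dropped even if the child likes it, while low-interest but
# beneficial data is kept. All names and values here are illustrative.
from dataclasses import dataclass, field

@dataclass
class BehaviorRecord:
    name: str
    interest: float              # degree of interest estimated from the child's emotions
    tags: set = field(default_factory=set)

BANNED_TAGS = {"violent"}        # guardian-defined imitation constraint

def screen(records, interest_threshold=0.3):
    kept = []
    for r in records:
        if r.tags & BANNED_TAGS:
            continue             # excluded even if the child is interested
        if r.interest >= interest_threshold or "educational" in r.tags:
            kept.append(r)       # low-interest but beneficial data is kept too
    return kept

records = [BehaviorRecord("clapping_game", 0.8),
           BehaviorRecord("violent_cartoon", 0.9, {"violent"}),
           BehaviorRecord("math_rhyme", 0.1, {"educational"})]
print([r.name for r in screen(records)])   # ['clapping_game', 'math_rhyme']
```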
- in another implementation, the behavior data is sent to a data control terminal; a selection instruction from the data control terminal is received, and the simulated object data is generated according to the selection instruction.
- the data control terminal may be a smartphone or an application on it; the parent or guardian operates directly on the data terminal, and the data terminal issues the selection instruction used to generate the simulated object data.
- the data terminal can communicate with the robot and can directly instruct the robot to simulate a specific object, or to interact with the child in a specific manner, putting the robot into a command-driven working mode.
- this allows data terminal holders (parents or other guardians) to have the robot interact with the child according to more specific needs.
- the robot may store the simulated object data and generate a simulated object database, or send the data to a service server and build the simulated object database on the service server. New simulated object data can be continuously added to the database.
- when the robot needs to simulate a companion object, it can directly select the appropriate or corresponding simulated object data from the simulated object database for imitation.
- the robot can adaptively select which person or thing to simulate, or which audio and video data to play, according to the current situation or needs. That is, the emotion information of the target object is collected again, or the current emotions or environment of the child are continuously collected, the current interaction scenario is determined, the simulated object data to be used for the current interaction is selected from the simulated object database according to the scenario, and the corresponding companion object is simulated to interact with the target object according to that data.
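- a minimal sketch of such scenario-driven selection is given below; the scenario names, emotion labels, and lookup policy are illustrative assumptions, not part of this disclosure.

```python
# A minimal sketch: pick simulated object data from the database according to
# the current interaction scenario and the child's current emotion.
SIMULATED_OBJECT_DB = {                    # (scenario, emotion) -> digital human
    ("parent_absent", "sad"): "digital_mother",
    ("curious_about_topic", "happy"): "cartoon_teacher",
}

def select_simulated_object(scenario, emotion, default="digital_mother"):
    return SIMULATED_OBJECT_DB.get((scenario, emotion), default)

print(select_simulated_object("parent_absent", "sad"))   # -> digital_mother
```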
- the embodiment of the invention also provides a service server, which has a processor with processing capability and can complete the method steps or functions of interacting with the robot in the above solution.
- embodiments of the present invention thus provide a companion robot, a server, and a human-machine interaction method according to the above.
- the robot extracts from images the surrounding events to which the target object reacts, filters the appropriate data, and displays or plays it when interacting with the child.
- emotion awareness filters the content used to interact with the target object, realizing a smarter companion function.
- FIG. 1 is a schematic diagram of a system architecture of a companion robot and a use environment according to an embodiment of the present invention
- FIG. 2 is a schematic view showing the form of a companion robot product according to an embodiment of the present invention.
- FIG. 3 is a schematic view of a companion robot assembly according to an embodiment of the present invention.
- FIG. 4 is a flowchart of a human-computer interaction method according to an embodiment of the present invention.
- FIG. 5 is a schematic structural diagram of a service server according to an embodiment of the present invention.
- the system architecture of the companion robot and its use environment according to the present invention is shown in FIG. 1.
- the usage environment of FIG. 1 is applicable to any scenario (such as a community, street, administrative district, province, country, transnational region, or even globally), and includes the following units: a family or child care institution 301, including at least one child 303 and a child interactive robot 302.
- the system architecture of the usage environment also includes the Internet 320, through which the robot communicates with remote servers.
- the product form of an embodiment of the present invention is the robot 400 shown in FIG. 2, which includes: a touch display screen 401 configured to display graphic image information to the target object and to receive touch control signals from the user; a speaker module 407 configured to provide a sound output signal to the target object; a microphone array and sensor group 402 used to detect the sound, expression, behavior, and the like of the target object; and a start/pause/emergency button 403 that provides simple operation instructions for the target object and responds to the user's interrupt instruction in an emergency situation.
- the processing and operation module 404 calculates and outputs the control instructions of the child care robot based on the user status signals input by the microphone array and sensor group 402, the user operation instructions from the button 403, guardian request information from the network, service instructions of the child care service from the network, and third-party network cloud service data; the child care robot then outputs sounds, images, limb movements, and motion.
- the child care robot also includes a track/wheel moving mechanism 405 and a robot arm 406.
- One possible product form of the present invention is a robot.
- a possible implementation of the core component, the processing and operation module 404, is shown in FIG. 3, including the main board 510 and other peripheral functional components.
- the sensor module 501 and the button 502 are respectively connected to the I/O module of the main board 510; the microphone array 503 is connected to the audio and video codec module of the main board 510; the touch display controller of the main board 510 receives touch input from the touch display 504 and provides its display drive signal; the motor servo controller drives the motor and encoder 507 according to program commands to move the track/wheel moving mechanism 405 and the robot arm 406, forming the movement and body language of the robot; and sound is output by the audio codec module through the power amplifier and speaker.
- the hardware system further includes a processor and a memory on the main board 510.
- in addition to recording the algorithms and execution programs of the robot and their configuration files, the memory also stores the audio, video, and image files required for the robot's care work, as well as temporary files generated while programs run.
- the communication module of the main board 510 provides communication between the robot and the external network, preferably including short-range communication modules such as Bluetooth and Wi-Fi.
- the main board 510 also includes a power management module that implements battery charging, discharging, and energy management of the device through the connected power system 505.
- the processor is the core device: it provides the computing and processing capability and manages and coordinates the work of the other devices.
- the robot sensor module 501 detects and collects sensor information of the companion object of the target object and emotion information of the target object when interacting with the companion object.
- the sensing information includes at least one of view information and voice information.
- the emotion information includes at least one of view information and voice information; it can be captured by the camera, with other sensors assisting or supplementing the capture.
- the processor extracts an emotional feature quantity according to the emotion information, determines an emotional mode of the target object when interacting with the companion object according to the emotional feature quantity, and determines, according to the emotional mode, the target object to the companion The degree of interest of the object; extracting behavior data of the companion object from the sensing information according to the degree of interest, and filtering the behavior data to obtain simulated object data; and generating an action instruction according to the simulated object data.
- a behavior execution module configured to receive an action instruction of the processor to interact with the target object.
- the behavior execution module can include components that move externally, including the track/wheel moving mechanism 405, the robot arm 406, and the touch display screen 401.
- in another implementation, the processor of the robot has only simple processing functions, and the processing of the simulated object data is completed by the service server; a communication module is further disposed on the robot and communicates with the service server, intelligent terminals, and the like through an antenna.
- the communication module sends to the service server the sensing information of the companion object of the target object and the emotion information of the target object when interacting with the companion object, and receives the simulated object data sent by the service server; the processor then obtains the simulated object data and generates action instructions based on it.
- FIG. 4 shows a flow chart of a method for the robot to interact with the target object in the embodiment of the present invention, which is exemplified below.
- in this example the target object is a child.
- S101: Detect and collect the sensing information of the companion object of the target object, and the emotion information of the target object when interacting with the companion object.
- the sensing information includes at least one of view information and voice information
- the emotion information includes at least one of view information and voice information.
- the robot can turn on the camera to observe the child's daily life, detect the child's expression, heartbeat, gaze, and so on, judge the child's emotions, and capture images at the moments corresponding to those emotions to obtain the child's emotion information.
- the robot can capture images or video of the current moment according to the child's behavior (expression, movement, and the like).
- the captured material can be a single image, or several images or a video over a period of time; the content of the images can include the child's behavior, the surrounding environment, events of interest to the child, and so on. Captured images can be saved locally on the robot or uploaded to a cloud server.
- the simulated object data is used by the robot to simulate the companion object, and the simulated object data is used to describe the companion object.
- the simulated object data can be considered digital person data or a digital person resource; from this data, a digital human image can be obtained.
- the screening of the behavior data to obtain the simulated object data may consist of screening the behavior data, extracting key behavior features from it, and generating the simulated object data using the key features.
- the behavior data includes limb motions; the key behavior features include limb key points or limb motion units; the key features are generated by statistical learning or machine learning.
- or, the behavior data includes expressions; the key behavior features include facial local key points or facial action units; the key features are generated by prior specification or machine learning.
- or, the behavior data includes tone; the key behavior features include acoustic signal features in the companion object's voice input; the key features are generated by prior specification or machine learning.
- An example of visual feature extraction from the sensing information is as follows. First, a constrained Bayesian shape model method is used to track 83 key feature points of the face; the three-dimensional rigid motion of the head and the three-dimensional flexible facial deformation are then estimated by minimizing an energy function. For the resulting three-dimensional mesh, seven action unit vectors (AUVs) are used, including AUV6 (eye closure), AUV3 (eyebrow drooping), AUV5 (outer eyebrow raising), AUV0 (upper lip raising), AUV2 (lip stretching), and AUV14 (lip corner drooping); each AUV is a column vector containing the coordinate displacements of all the mesh vertices of its unit. While the input video sequence is fitted with the Candide-3 facial model, the animation parameters of these AUVs are also obtained, so that for each image in the video a 7-dimensional facial animation parameter vector is obtained as the visual emotion feature.
- AUV action unit vectors
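- under stated assumptions, the following sketch shows how such 7-dimensional animation parameters could be recovered once the non-rigid mesh deformation is available from the Candide-3 fitting: the parameters are the least-squares projection of the deformation onto the AUV basis. The random arrays stand in for real tracking output.

```python
# A minimal sketch of recovering per-frame facial animation parameters from
# the flexible mesh deformation; inputs are random stand-ins, not real data.
import numpy as np

n_vertices = 113                             # the Candide-3 model has 113 vertices
AUVS = np.random.randn(3 * n_vertices, 7)    # 7 action unit vectors as columns

def animation_parameters(deformation, auvs=AUVS):
    """Least-squares projection of the deformation onto the AUV basis."""
    params, *_ = np.linalg.lstsq(auvs, deformation, rcond=None)
    return params                            # 7-dim visual emotion feature

frame_deformation = np.random.randn(3 * n_vertices)
print(animation_parameters(frame_deformation).shape)   # (7,)
```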
- Emotional feature dimensionality reduction includes linear dimensionality reduction methods such as principal component analysis (PCA) and linear discriminant analysis (LDA), as well as nonlinear manifold dimensionality reduction methods such as Isomap and locally linear embedding (LLE).
- PCA principal component analysis
- LDA linear discriminant analysis
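- a minimal scikit-learn sketch of the two families of methods named above (PCA as the linear method, LLE as the manifold method) follows; the feature matrix is random stand-in data.

```python
# A minimal sketch: reduce per-frame 7-dim visual emotion features to 3 dims
# with a linear method (PCA) and a manifold method (LLE).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding

features = np.random.randn(200, 7)     # 200 frames of visual emotion features

linear = PCA(n_components=3).fit_transform(features)
manifold = LocallyLinearEmbedding(n_neighbors=10,
                                  n_components=3).fit_transform(features)
print(linear.shape, manifold.shape)    # (200, 3) (200, 3)
```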
- continuous emotion description considers that different emotions transition gradually and smoothly into one another, and that emotional states correspond one-to-one with points in a coordinate space of a certain dimensionality.
- the more commonly used continuous emotion description models include the emotion wheel and the three-dimensional arousal-pleasure-control description.
- emotion wheel theory holds that emotions are distributed on a circular structure. The center of the structure is the natural origin, a state containing all emotional factors, which at that point are too weak to be reflected.
- from the natural origin, emotions extend outward in different directions, each direction expressing a different emotion, and emotions of the same kind are graded according to changes in emotional intensity.
- the strength of emotions of the same kind is described as a third dimension, extending the concept of the emotion wheel into three-dimensional space.
- by matching the emotion-related features in the video into these spaces, effective emotion description or classification can be performed.
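- as one illustrative way of matching features into such a space, the following sketch classifies a point in a two-dimensional arousal-pleasure plane by its nearest emotion prototype; the prototype coordinates are assumptions of the example.

```python
# A minimal sketch: nearest-prototype emotion classification in a
# two-dimensional (arousal, pleasure) space. Coordinates are illustrative.
import numpy as np

PROTOTYPES = {
    "happy": np.array([0.6, 0.8]),
    "angry": np.array([0.8, -0.7]),
    "sad":   np.array([-0.5, -0.6]),
    "calm":  np.array([-0.3, 0.4]),
}

def classify(point):
    return min(PROTOTYPES, key=lambda e: np.linalg.norm(PROTOTYPES[e] - point))

print(classify(np.array([0.5, 0.7])))   # -> happy
```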
- the extraction can use existing image/speech recognition algorithms; the computation can run locally on the robot, or the image or video can be uploaded to a server and processed by the server.
- the extracted content may be the content that the child is watching, or the person who interacts with the child.
- the robot learns to obtain appropriate data to interact with the child.
- for a person the child is interested in (companion object B), the robot acquires the conversation content, limb movements, expressions, and tone of companion object B; the robot performs machine learning training on the limb movements, expressions, and tone of companion object B to generate a model for interacting with the child.
- the method may include: collecting the expressions of companion object B when child A is interested or not interested; extracting the facial actions of the expressions of interest or disinterest; using classification algorithms such as SVM, RF, or deep learning to classify them as of interest or not of interest to the child; selecting the facial actions of interest to the child for robot expression synthesis; and having the robot interact with the child using the learned expressions.
- facial expression data can be extracted and learned, for example 14 facial actions in total, including: inner eyebrow raising, outer eyebrow raising, eyebrow lowering, upper eyelid raising, cheek raising, eyelid contraction, eyelid tightening, nose raising, upper lip raising, mouth corner pulling, mouth corner contraction, lower mouth corner raising, mouth stretching, mouth opening, and chin lowering.
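- a minimal sketch of the interest/disinterest classification just described, using an SVM over facial action intensities, follows; the training data is random stand-in data rather than real annotations.

```python
# A minimal sketch: classify interest vs. disinterest from the intensities of
# the 14 facial actions. Random data stands in for annotated samples.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((100, 14))            # 14 facial action intensities per sample
y = rng.integers(0, 2, 100)          # 1 = interested, 0 = not interested

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))            # predicted interest labels
```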
- the method may include: collecting the voice signal of companion object B when child A is interested; extracting each acoustic feature of the voice signals of interest; computing statistics of the acoustic features of the expressions of interest; synthesizing robot speech using the acoustic features of interest; and having the robot interact with the child using the learned speech.
- the acoustic data that can be extracted and learned includes information such as the fundamental frequency, the speech rate, and the voiced-to-unvoiced ratio.
- the mean fundamental frequency is obtained by summing the fundamental frequencies of all voiced frames and dividing by the number of voiced frames.
- the voiced-to-unvoiced ratio is the ratio of the duration of voiced segments to the duration of unvoiced segments; for happiness, anger, and surprise it is slightly higher than for calm, and for calm it is higher than for fear and sadness.
- the speech rate is the ratio of the number of words in a statement to the duration of the corresponding speech signal; anger and surprise are fast, happiness and calm come second, and fear and sadness are slowest. The effect of recognizing different emotions can therefore be achieved through the above acoustic signals.
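- the three acoustic statistics above can be computed as in the following sketch, assuming a pitch tracker has already produced per-frame fundamental frequency values (0 denoting unvoiced frames); the input values are illustrative.

```python
# A minimal sketch of the three acoustic statistics: mean F0 over voiced
# frames, voiced-to-unvoiced duration ratio, and speech rate.
import numpy as np

f0 = np.array([0, 0, 210, 220, 215, 0, 205, 0])   # Hz per frame; 0 = unvoiced
frame_dur = 0.01                                  # seconds per frame
n_words = 4                                       # words in the utterance

voiced = f0 > 0
mean_f0 = f0[voiced].sum() / voiced.sum()         # sum of voiced F0 / #voiced frames
voiced_unvoiced_ratio = voiced.sum() / max((~voiced).sum(), 1)
speech_rate = n_words / (len(f0) * frame_dur)     # words per second

print(mean_f0, voiced_unvoiced_ratio, speech_rate)
```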
- the method may include: collecting the limb movements of companion object B when child A is interested or not interested; extracting each limb motion unit of the movements of interest or disinterest; using classification algorithms such as SVM, RF, or deep learning to classify the limb motion units as of interest or not of interest to the child; selecting the limb motion units of interest to the child for robot limb motion synthesis; and having the robot interact with the child using the learned limb movements.
- limb motion data can be extracted and learned.
- there are 20 groups of action units in total, including: leaning the body forward, swinging the head, nodding, shaking the head, raising the hands, clapping, grabbing, walking, squatting, and so on.
- there are 35 key points in total, including the head (4), chest and abdomen (7), arms (6 per side, 12 in total), and legs (6 per side, 12 in total).
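- for illustration, the keypoint layout above can be represented as in the following sketch; the group names are paraphrases, and in practice the coordinates would come from a pose estimator.

```python
# A minimal sketch of the 35-keypoint layout, grouped as counted above.
KEYPOINT_GROUPS = {
    "head": 4,
    "chest_abdomen": 7,
    "left_arm": 6, "right_arm": 6,
    "left_leg": 6, "right_leg": 6,
}
assert sum(KEYPOINT_GROUPS.values()) == 35

# A pose sample is then a 35 x 3 array of (x, y, z) joint positions per frame,
# and a limb action unit (nod, clap, wave, ...) is a short sequence of frames.
```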
- the robot takes pictures/videos of things the child is interested in, and learns to obtain the right data for interacting with the child. Further, in daily life, the robot detects and collects the behavior information of the child; the manner of collection can be the same as the collection of the child's emotion information described above, that is, the detection and collection process, or the collection source, can be the same. In addition to judging the child's emotions and learning from the companion objects accompanying the child, the robot can also analyze the collected information to determine the child's current state and the current interaction scenario, for example, whether the child is playing alone or with parents. According to the current interaction scenario, the robot can select the simulated object data to be used for the current interaction from the simulated object database, and simulate the corresponding companion object to interact with the child according to that data.
- if the child currently misses the mother just when the mother is absent, the robot can imitate the interaction between the mother and the child according to the simulated object data corresponding to the mother generated by previous learning. Or, if during parental interaction the child appears interested in certain knowledge or a phenomenon, the robot can select the relevant simulated object data and simulate the corresponding companion object to interact with the child.
- the server or robot analyzes, from the received images/videos, the name of the movie the child is watching, and analyzes the child's likes and dislikes of the characters in the movie from the child's action pictures/videos/speech, deriving the name of the movie the child is watching and the names of the child's favorite idols, even the particular fragments featuring those idols the child loves; for example, the analysis may find that the child likes watching "Frozen" and likes Princess Elsa.
- the server queries the idol information on the web using the movie title and idol name, and completes the idol modeling based on the idol information so that the robot can simulate the idol of interest to the child.
- the robot's processing of imitation object data: objects that are of interest to the child can be stored in the robot's local database; objects not known to be of interest to the child are screened for positive-energy events that match the child's age, and then played or imitated for the child to watch.
- for example, the robot retrieves the data related to "Little Prince" in the local database. If the content can be retrieved, it is a historical interest, and the robot can directly play or imitate the data in the local database (illustrations of the Little Prince, animation videos of the Little Prince, voices of the Little Prince, and so on) for the child to watch.
- otherwise, the robot needs to judge the influence of the thing on the child and screen for positive-energy information.
- the specific method can be: searching for the information or its introduction through a web server to confirm the characteristics of the thing. For example, if it is detected from the images that the child is watching the cartoon "Conan", and the robot finds on the web server that the video contains some violent content and is not suitable for children under 6 years old, the robot will ignore this content.
- when it is detected from the images that the child is watching the cartoon "Xi Yang Yang", and the robot finds on the web server that the video is suitable for children under 5 years old, it downloads the data related to "Xi Yang Yang" locally so that it can interact with the child at any time.
- alternatively, the robot directly confirms with the parent whether the thing can be used to interact with the child; after the parent agrees, the robot can directly download the relevant data from the web server to interact with the child.
- the robot can also play or imitate things directly (expressions/audio/actions, etc.) while using the camera to detect the child's reaction to them.
- for things the child likes, the robot stores the relevant data in the local database. For data the child does not like (showing disgust in expression, etc.): if the data is already stored in the local database, the robot can delete it directly, or confirm with the parent whether to delete it; if the data has not been stored in the local database, the robot can simply not store it, or confirm with the parent whether to store it.
- a service server is also provided, which may be a third-party cloud server, a child growth server, or a social public cloud server.
- the processor has processing and computing power and functionality to perform various method steps or functions of interacting with the robot in the above described scheme.
- server 70 includes a processor 705, a signal transceiver 702 that communicates with other devices, and a memory 706 that stores data, programs, and the like. It may also include a display 704, input and output devices (not shown), and other devices as needed.
- the various devices are connected via bus 707 and receive the control and management of the processor.
- the server cooperates with the robot, organizing the simulated object data for the robot and saving the simulated object database.
- the signal transceiver 702 receives the sensing information of the companion object of the target object transmitted by the robot device, and the emotion information of the target object when interacting with the companion object, where, as in the foregoing examples, the sensing information includes at least one of view information and voice information; the signal transceiver 702 also transmits the simulated object data generated by the processor to the robot device.
- the processor 705 extracts an emotion feature quantity from the emotion information, determines an emotion pattern of the target object when interacting with the companion object according to the emotion feature quantity, and determines, according to the emotion mode, the target object to the The degree of interest of the companion object; extracting behavior data of the companion object from the sensing information according to the degree of interest, and screening the behavior data to obtain the simulated object data.
- the simulated object data is used by the robot to simulate the companion object, and the simulated object data is used to describe the companion object.
- the memory on the server is used to store a simulated object database to record the simulated object data.
- the parent holds the data terminal and can create imitation constraints directly on it. After the robot or the server obtains the behavior data, it matches the data against the imitation constraints and generates the simulated object data from the behavior data that meets the imitation constraints. Alternatively, the parent directly instructs the robot, through the data terminal or through the server, to dictate the robot's behavior.
- the data terminal can be a remote control device paired with the robot, or an intelligent terminal loaded with an associated application.
- the selection instruction sent by the data control terminal can be received by the transceiver of the robot or by the signal transceiver of the server.
- the functions may be stored in a computer readable storage medium if implemented in the form of a software functional unit and sold or used as a standalone product.
- the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform all or part of the steps of the methods described in the various embodiments of the present invention.
- the foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
Claims (25)
- A human-machine interaction method, characterized in that the method comprises: detecting and collecting sensing information of a companion object of a target object, and emotion information of the target object when interacting with the companion object, wherein the sensing information includes at least one of view information and voice information, and the emotion information includes at least one of view information and voice information; extracting an emotion feature quantity according to the emotion information, determining an emotion mode of the target object when interacting with the companion object according to the emotion feature quantity, and determining the target object's degree of interest in the companion object according to the emotion mode; and extracting behavior data of the companion object from the sensing information according to the degree of interest, and screening the behavior data to obtain simulated object data; wherein the simulated object data is used by a robot to simulate the companion object, and the simulated object data is used to describe the companion object.
- The human-machine interaction method according to claim 1, characterized in that screening the behavior data to obtain the simulated object data comprises: screening the behavior data to extract key behavior features, and generating the simulated object data using the key features; wherein the behavior data includes limb motions, the key behavior features include limb key points or limb motion units, and the key features are generated by statistical learning or machine learning; or the behavior data includes expressions, the key behavior features include facial local key points or facial action units, and the key features are generated by prior specification or machine learning; or the behavior data includes tone, the key behavior features include acoustic signal features in the companion object's voice input, and the key features are generated by prior specification or machine learning.
- The human-machine interaction method according to claim 1, characterized in that the method further comprises: sending the sensing information and the emotion information to a service server; and in that extracting the emotion feature quantity according to the emotion information, determining the emotion mode of the target object when interacting with the companion object according to the emotion feature quantity, determining the target object's degree of interest in the companion object according to the emotion mode, extracting the behavior data of the companion object from the sensing information according to the degree of interest, and screening the behavior data to obtain the simulated object data comprises: obtaining the simulated object data from the service server, wherein the simulated object data is obtained by the server extracting the emotion feature quantity according to the emotion information, determining the emotion mode of the target object when interacting with the companion object according to the emotion feature quantity, determining the target object's degree of interest in the companion object according to the emotion mode, extracting the behavior data of the companion object from the sensing information according to the degree of interest, and screening the behavior data.
- The method according to claim 1, 2 or 3, characterized in that screening the behavior data to obtain the simulated object data comprises: matching against imitation constraints, and generating the simulated object data from the behavior data that meets the imitation constraints.
- The method according to claim 1, 2 or 3, characterized in that screening the behavior data to obtain the simulated object data comprises: sending the behavior data to a data control terminal, receiving a selection instruction from the data terminal, and generating the data of the simulated object according to the selection instruction.
- The method according to any one of claims 1 to 5, characterized in that the method further comprises the robot storing the simulated object data and generating a simulated object database.
- The method according to claims 1 to 5, characterized in that the method further comprises: detecting and collecting environment information, or collecting the emotion information of the target object again, and determining a current interaction scenario; selecting, from the simulated object database according to the current interaction scenario, the simulated object data to be used for the current interaction; and simulating the corresponding companion object to interact with the target object according to the simulated object data used for the current interaction.
- The human-machine interaction method according to claim 1, characterized in that the companion object further includes audio-visual material, and the sensing information is a view of the audio-visual material; and determining the target object's degree of interest in the companion object according to the emotion mode comprises: determining, according to the emotion mode, the degree of interest in a film or television character or a film or television sound in the audio-visual material.
- The human-machine interaction method according to claim 8, characterized in that the simulated object data further includes material related to the audio-visual material, used to play the related material to the target object.
- The human-machine interaction method according to any one of claims 1 to 9, characterized in that detecting and collecting the emotion information of the target object when interacting with the companion object comprises: detecting and collecting facial images or video of the target object; and extracting the emotion feature quantity according to the emotion information and determining the emotion mode of the target object when interacting with the companion object according to the emotion feature quantity comprises: performing visual feature extraction on the facial images or video to obtain facial animation parameters as visual emotion features, and matching the extracted visual emotion features against a visual emotion feature library to recognize the emotion features of the target object and determine the emotion mode.
- A companion robot, characterized in that the robot comprises: a sensor module, configured to detect and collect sensing information of a companion object of a target object, and emotion information of the target object when interacting with the companion object, wherein the sensing information includes at least one of view information and voice information, and the emotion information includes at least one of view information and voice information; a processor, configured to extract an emotion feature quantity according to the emotion information, determine an emotion mode of the target object when interacting with the companion object according to the emotion feature quantity, determine the target object's degree of interest in the companion object according to the emotion mode, extract behavior data of the companion object from the sensing information according to the degree of interest, screen the behavior data to obtain simulated object data, and generate action instructions according to the simulated object data; and a behavior execution module, configured to receive the action instructions of the processor to interact with the target object.
- The companion robot according to claim 11, characterized in that the processor being specifically configured to screen the behavior data to obtain the simulated object data comprises: screening the behavior data to extract key behavior features, and generating the simulated object data using the key features; wherein the behavior data includes limb motions, the key behavior features include limb key points or limb motion units, and the key features are generated by statistical learning or machine learning; or the behavior data includes expressions, the key behavior features include facial local key points or facial action units, and the key features are generated by prior specification or machine learning; or the behavior data includes tone, the key behavior features include acoustic signal features in the companion object's voice input, and the key features are generated by prior specification or machine learning.
- The companion robot according to claim 11 or 12, characterized in that the robot further comprises: a memory, configured to save a simulated object database to record the simulated object data; the processor being further configured to select, from the simulated object database according to the current interaction scenario, the simulated object data to be used for the current interaction, and to control the behavior execution module according to the simulated object data.
- The companion robot according to any one of claims 11 to 13, characterized in that the robot further comprises: a communication module, configured to send to a service server the sensing information of the companion object of the target object and the emotion information of the target object when interacting with the companion object, and to receive the simulated object data sent by the service server.
- The companion robot according to claim 10, characterized in that the processor is specifically configured to screen the behavior data by matching against imitation constraints, and to generate the simulated object data from the behavior data that meets the imitation constraints.
- The robot according to claim 11 or 12, characterized in that the communication module is further configured to receive a selection instruction sent by a data control terminal; and the processor is further configured to obtain the selection instruction and to screen the behavior data according to the selection instruction to generate the simulated object data.
- The companion robot according to claim 11 or 12, characterized in that the sensor module is further configured to detect and collect environment information; and the processor is further configured to determine the current interaction scenario according to the environment information and the emotion information.
- A companion robot, characterized in that the robot comprises: a sensor module, configured to detect and collect sensing information of a companion object of a target object, and emotion information of the target object when interacting with the companion object, wherein the sensing information includes at least one of view information and voice information; a communication module, configured to send to a service server the sensing information of the companion object of the target object and the emotion information of the target object when interacting with the companion object, and to receive simulated object data sent by the service server, wherein the simulated object data is generated by the server according to the sensing information and the emotion information and is used to describe the companion object; a processor, configured to obtain the simulated object data and generate action instructions according to the simulated object data; and a behavior execution module, configured to receive the action instructions of the processor to interact with the target object.
- The companion robot according to claim 18, characterized in that the robot further comprises: a memory, configured to save a simulated object database to record the simulated object data; the processor being further configured to select, from the simulated object database according to the current interaction scenario, the simulated object data to be used for the current interaction, and to control the behavior execution module according to the simulated object data.
- The robot according to claim 18 or 19, characterized in that the sensor module is further configured to detect and collect environment information; and the processor is further configured to determine the current interaction scenario according to the environment information and the emotion information.
- A server, characterized in that the server comprises: a signal transceiver, configured to receive sensing information of a companion object of a target object sent by a robot device, and emotion information of the target object when interacting with the companion object, wherein the sensing information includes at least one of view information and voice information, and to send simulated object data to the robot device, wherein the simulated object data is used by the robot to simulate the companion object, and the virtual simulated object is used to describe the companion object; and a processor, configured to extract an emotion feature quantity from the emotion information, determine an emotion mode of the target object when interacting with the companion object according to the emotion feature quantity, determine the target object's degree of interest in the companion object according to the emotion mode, extract behavior data of the companion object from the sensing information according to the degree of interest, and screen the behavior data to obtain the simulated object data.
- The server according to claim 21, characterized in that the processor being specifically configured to screen the behavior data to obtain the simulated object data comprises: screening the behavior data to extract key behavior features, and generating the simulated object data using the key features; wherein the behavior data includes limb motions, the key behavior features include limb key points or limb motion units, and the key features are generated by statistical learning or machine learning; or the behavior data includes expressions, the key behavior features include facial local key points or facial action units, and the key features are generated by prior specification or machine learning; or the behavior data includes tone, the key behavior features include acoustic signal features in the companion object's voice input, and the key features are generated by prior specification or machine learning.
- The server according to claim 21 or 22, characterized in that the server further comprises: a memory, configured to save a simulated object database to record the simulated object data; the processor being further configured to obtain the currently used simulated object data from the simulated object database, or to generate action instructions according to the currently used simulated object data; and the signal transceiver being further configured to send the currently used simulated object data or the action instructions to the robot device.
- The server according to claim 21 or 22, characterized in that the processor is specifically configured to screen the behavior data by matching against imitation constraints, and to generate the simulated object data from the behavior data that meets the imitation constraints.
- The server according to claim 21 or 22, characterized in that the signal transceiver is further configured to receive a selection instruction sent by a data control terminal; and the processor is specifically configured to obtain the selection instruction and to screen the behavior data according to the selection instruction to generate the simulated object data.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019536067A JP6888096B2 (ja) | 2016-12-31 | 2017-12-27 | ロボット、サーバおよびヒューマン・マシン・インタラクション方法 |
KR1020197022134A KR102328959B1 (ko) | 2016-12-31 | 2017-12-27 | 로봇, 서버 및 인간-기계 상호 작용 방법 |
EP17887623.1A EP3563986B1 (en) | 2016-12-31 | 2017-12-27 | Robot, server and man-machine interaction method |
US16/457,676 US11858118B2 (en) | 2016-12-31 | 2019-06-28 | Robot, server, and human-machine interaction method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611267452.1A CN107053191B (zh) | 2016-12-31 | 2016-12-31 | 一种机器人,服务器及人机互动方法 |
CN201611267452.1 | 2016-12-31 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/457,676 Continuation US11858118B2 (en) | 2016-12-31 | 2019-06-28 | Robot, server, and human-machine interaction method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018121624A1 true WO2018121624A1 (zh) | 2018-07-05 |
Family
ID=59623644
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/119107 WO2018121624A1 (zh) | 2016-12-31 | 2017-12-27 | 一种机器人,服务器及人机互动方法 |
Country Status (6)
Country | Link |
---|---|
US (1) | US11858118B2 (zh) |
EP (1) | EP3563986B1 (zh) |
JP (1) | JP6888096B2 (zh) |
KR (1) | KR102328959B1 (zh) |
CN (1) | CN107053191B (zh) |
WO (1) | WO2018121624A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110405794A (zh) * | 2019-08-28 | 2019-11-05 | 重庆科技学院 | 一种用于儿童的拥抱机器人及其控制方法 |
CN111540358A (zh) * | 2020-04-26 | 2020-08-14 | 云知声智能科技股份有限公司 | 人机交互方法、装置、设备和存储介质 |
US10748644B2 (en) | 2018-06-19 | 2020-08-18 | Ellipsis Health, Inc. | Systems and methods for mental health assessment |
US11120895B2 (en) | 2018-06-19 | 2021-09-14 | Ellipsis Health, Inc. | Systems and methods for mental health assessment |
Families Citing this family (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107053191B (zh) * | 2016-12-31 | 2020-05-08 | 华为技术有限公司 | 一种机器人,服务器及人机互动方法 |
CN107030691B (zh) * | 2017-03-24 | 2020-04-14 | 华为技术有限公司 | 一种看护机器人的数据处理方法及装置 |
US20180301053A1 (en) * | 2017-04-18 | 2018-10-18 | Vän Robotics, Inc. | Interactive robot-augmented education system |
JP7073640B2 (ja) * | 2017-06-23 | 2022-05-24 | カシオ計算機株式会社 | 電子機器、感情情報取得システム、プログラム及び感情情報取得方法 |
US11568265B2 (en) * | 2017-08-23 | 2023-01-31 | Sony Interactive Entertainment Inc. | Continual selection of scenarios based on identified tags describing contextual environment of a user for execution by an artificial intelligence model of the user by an autonomous personal companion |
CN109521927B (zh) * | 2017-09-20 | 2022-07-01 | 阿里巴巴集团控股有限公司 | 机器人互动方法和设备 |
CN109635616B (zh) * | 2017-10-09 | 2022-12-27 | 阿里巴巴集团控股有限公司 | 互动方法和设备 |
CN107657852B (zh) * | 2017-11-14 | 2023-09-22 | 翟奕雲 | 基于人脸识别的幼儿教学机器人、教学系统、存储介质 |
JP6724889B2 (ja) * | 2017-12-07 | 2020-07-15 | カシオ計算機株式会社 | 見守りシステム及び見守り方法 |
TWI658377B (zh) * | 2018-02-08 | 2019-05-01 | 佳綸生技股份有限公司 | 機器人輔助互動系統及其方法 |
US11267121B2 (en) * | 2018-02-13 | 2022-03-08 | Casio Computer Co., Ltd. | Conversation output system, conversation output method, and non-transitory recording medium |
CN108161953A (zh) * | 2018-02-24 | 2018-06-15 | 上海理工大学 | 一种智能机器人头部系统 |
CN108297109A (zh) * | 2018-02-24 | 2018-07-20 | 上海理工大学 | 一种智能机器人系统 |
CN108393898A (zh) * | 2018-02-28 | 2018-08-14 | 上海乐愚智能科技有限公司 | 一种智能陪伴方法、装置、机器人及存储介质 |
CN108537178A (zh) * | 2018-04-12 | 2018-09-14 | 佘堃 | 一种智能识别及互动的方法及系统 |
CN110576433B (zh) * | 2018-06-08 | 2021-05-18 | 香港商女娲创造股份有限公司 | 机器人动作生成方法 |
CN108960191B (zh) * | 2018-07-23 | 2021-12-14 | 厦门大学 | 一种面向机器人的多模态融合情感计算方法及系统 |
CN108942941A (zh) * | 2018-08-02 | 2018-12-07 | 安徽硕威智能科技有限公司 | 一种教育机器人语音交互系统 |
CN111435268A (zh) * | 2019-01-11 | 2020-07-21 | 合肥虹慧达科技有限公司 | 基于图像的识别与重建的人机交互方法和使用该方法的系统及装置 |
CN109976513B (zh) * | 2019-02-20 | 2020-03-03 | 方科峰 | 一种系统界面设计方法 |
CN109920422A (zh) * | 2019-03-15 | 2019-06-21 | 百度国际科技(深圳)有限公司 | 语音交互方法及装置、车载语音交互设备及存储介质 |
CN109841122A (zh) * | 2019-03-19 | 2019-06-04 | 深圳市播闪科技有限公司 | 一种智能机器人教学系统及学生学习方法 |
JP7439826B2 (ja) * | 2019-04-16 | 2024-02-28 | ソニーグループ株式会社 | 情報処理装置、情報処理方法、及びプログラム |
JP6993382B2 (ja) * | 2019-04-26 | 2022-02-04 | ファナック株式会社 | ロボット教示装置 |
CN110119715B (zh) * | 2019-05-14 | 2023-04-18 | 东北师范大学 | 一种陪伴机器人及情绪识别方法 |
CN111949773A (zh) * | 2019-05-17 | 2020-11-17 | 华为技术有限公司 | 一种阅读设备、服务器以及数据处理的方法 |
CN110287792B (zh) * | 2019-05-23 | 2021-05-04 | 华中师范大学 | 一种处于自然教学环境的课堂中学生学习状态实时分析方法 |
KR20210020312A (ko) * | 2019-08-14 | 2021-02-24 | 엘지전자 주식회사 | 로봇 및 그의 제어 방법 |
CN110480651B (zh) * | 2019-08-20 | 2022-03-25 | 深圳市一恒科电子科技有限公司 | 一种基于交互式陪伴机器人搜索系统的建云方法和系统 |
CN110576440B (zh) * | 2019-09-04 | 2021-10-15 | 华南理工大学广州学院 | 一种儿童陪护机器人及其陪护控制方法 |
CN112667068A (zh) * | 2019-09-30 | 2021-04-16 | 北京百度网讯科技有限公司 | 虚拟人物的驱动方法、装置、设备及存储介质 |
CN110751951B (zh) * | 2019-10-25 | 2022-11-11 | 智亮君 | 基于智能镜子的握手交互方法及系统、存储介质 |
CN111078005B (zh) * | 2019-11-29 | 2024-02-20 | 恒信东方文化股份有限公司 | 一种虚拟伙伴创建方法及虚拟伙伴系统 |
CN111402640A (zh) * | 2020-03-04 | 2020-07-10 | 香港生产力促进局 | 一种儿童教育机器人及其学习资料推送方法 |
CN111832691B (zh) * | 2020-07-01 | 2024-01-09 | 娄兆文 | 一种角色替代的可升级多对象智能陪伴机器人 |
CN113760142A (zh) * | 2020-09-30 | 2021-12-07 | 完美鲲鹏(北京)动漫科技有限公司 | 基于虚拟角色的交互方法及装置、存储介质、计算机设备 |
KR102452991B1 (ko) * | 2020-11-11 | 2022-10-12 | (주)이이알에스소프트 | 모듈형 피지컬 블록 기반의 epl 증강현실 시뮬레이터 시스템 |
KR102295836B1 (ko) * | 2020-11-20 | 2021-08-31 | 오로라월드 주식회사 | 성장형 스마트 토이 장치 및 스마트 토이 시스템 |
CN115101048B (zh) * | 2022-08-24 | 2022-11-11 | 深圳市人马互动科技有限公司 | 科普信息交互方法、装置、系统、交互设备和存储介质 |
CN116627261A (zh) * | 2023-07-25 | 2023-08-22 | 安徽淘云科技股份有限公司 | 交互方法、装置、存储介质和电子设备 |
CN117331460A (zh) * | 2023-09-26 | 2024-01-02 | 武汉北极光数字科技有限公司 | 基于多维交互数据分析的数字化展厅内容优化方法及装置 |
CN117371338B (zh) * | 2023-12-07 | 2024-03-22 | 浙江宇宙奇点科技有限公司 | 一种基于用户画像的ai数字人建模方法及系统 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110131165A1 (en) * | 2009-12-02 | 2011-06-02 | Phison Electronics Corp. | Emotion engine, emotion engine system and electronic device control method |
WO2012141130A1 (ja) * | 2011-04-11 | 2012-10-18 | 株式会社東郷製作所 | 被介護者用ロボット |
CN103996155A (zh) * | 2014-04-16 | 2014-08-20 | 深圳市易特科信息技术有限公司 | 智能交互及心理慰藉机器人服务系统 |
CN105082150A (zh) * | 2015-08-25 | 2015-11-25 | 国家康复辅具研究中心 | 一种基于用户情绪及意图识别的机器人人机交互方法 |
CN105345818A (zh) * | 2015-11-04 | 2016-02-24 | 深圳好未来智能科技有限公司 | 带有情绪及表情模块的3d视频互动机器人 |
CN105868827A (zh) * | 2016-03-25 | 2016-08-17 | 北京光年无限科技有限公司 | 一种智能机器人多模态交互方法和智能机器人 |
CN107053191A (zh) * | 2016-12-31 | 2017-08-18 | 华为技术有限公司 | 一种机器人,服务器及人机互动方法 |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10289006A (ja) * | 1997-04-11 | 1998-10-27 | Yamaha Motor Co Ltd | 疑似感情を用いた制御対象の制御方法 |
US6230111B1 (en) * | 1998-08-06 | 2001-05-08 | Yamaha Hatsudoki Kabushiki Kaisha | Control system for controlling object using pseudo-emotions and pseudo-personality generated in the object |
CN100436082C (zh) * | 2003-08-12 | 2008-11-26 | 株式会社国际电气通信基础技术研究所 | 用于通信机器人的控制系统 |
JP2006123136A (ja) * | 2004-11-01 | 2006-05-18 | Advanced Telecommunication Research Institute International | コミュニケーションロボット |
JP2007041988A (ja) | 2005-08-05 | 2007-02-15 | Sony Corp | 情報処理装置および方法、並びにプログラム |
US7949529B2 (en) * | 2005-08-29 | 2011-05-24 | Voicebox Technologies, Inc. | Mobile systems and methods of supporting natural language human-machine interactions |
US8909370B2 (en) * | 2007-05-08 | 2014-12-09 | Massachusetts Institute Of Technology | Interactive systems employing robotic companions |
JP5157595B2 (ja) * | 2008-04-01 | 2013-03-06 | トヨタ自動車株式会社 | 接客システム及び接客方法 |
KR20100001928A (ko) * | 2008-06-27 | 2010-01-06 | 중앙대학교 산학협력단 | 감정인식에 기반한 서비스 장치 및 방법 |
JP5391144B2 (ja) * | 2010-05-10 | 2014-01-15 | 日本放送協会 | 顔表情変化度測定装置およびそのプログラム並びに番組興味度測定装置 |
JP2011253389A (ja) * | 2010-06-02 | 2011-12-15 | Fujitsu Ltd | 端末および擬似会話用返答情報作成プログラム |
CN102375918B (zh) | 2010-08-17 | 2016-04-27 | 上海科言知识产权服务有限公司 | 设备间互动虚拟角色系统 |
US20150314454A1 (en) | 2013-03-15 | 2015-11-05 | JIBO, Inc. | Apparatus and methods for providing a persistent companion device |
TWI484452B (zh) * | 2013-07-25 | 2015-05-11 | Univ Nat Taiwan Normal | 擴增實境學習系統及其方法 |
US9216508B2 (en) | 2014-01-14 | 2015-12-22 | Qualcomm Incorporated | Connectivity maintenance using a quality of service-based robot path planning algorithm |
JP6328580B2 (ja) * | 2014-06-05 | 2018-05-23 | Cocoro Sb株式会社 | 行動制御システム及びプログラム |
WO2016103881A1 (ja) * | 2014-12-25 | 2016-06-30 | エイディシーテクノロジー株式会社 | ロボット |
CN104767980B (zh) | 2015-04-30 | 2018-05-04 | 深圳市东方拓宇科技有限公司 | 一种实时情绪演示方法、系统、装置和智能终端 |
CN105843118B (zh) | 2016-03-25 | 2018-07-27 | 北京光年无限科技有限公司 | 一种机器人交互方法及机器人系统 |
-
2016
- 2016-12-31 CN CN201611267452.1A patent/CN107053191B/zh active Active
-
2017
- 2017-12-27 WO PCT/CN2017/119107 patent/WO2018121624A1/zh unknown
- 2017-12-27 KR KR1020197022134A patent/KR102328959B1/ko active IP Right Grant
- 2017-12-27 JP JP2019536067A patent/JP6888096B2/ja active Active
- 2017-12-27 EP EP17887623.1A patent/EP3563986B1/en active Active
-
2019
- 2019-06-28 US US16/457,676 patent/US11858118B2/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110131165A1 (en) * | 2009-12-02 | 2011-06-02 | Phison Electronics Corp. | Emotion engine, emotion engine system and electronic device control method |
WO2012141130A1 (ja) * | 2011-04-11 | 2012-10-18 | 株式会社東郷製作所 | 被介護者用ロボット |
CN103996155A (zh) * | 2014-04-16 | 2014-08-20 | 深圳市易特科信息技术有限公司 | 智能交互及心理慰藉机器人服务系统 |
CN105082150A (zh) * | 2015-08-25 | 2015-11-25 | 国家康复辅具研究中心 | 一种基于用户情绪及意图识别的机器人人机交互方法 |
CN105345818A (zh) * | 2015-11-04 | 2016-02-24 | 深圳好未来智能科技有限公司 | 带有情绪及表情模块的3d视频互动机器人 |
CN105868827A (zh) * | 2016-03-25 | 2016-08-17 | 北京光年无限科技有限公司 | 一种智能机器人多模态交互方法和智能机器人 |
CN107053191A (zh) * | 2016-12-31 | 2017-08-18 | 华为技术有限公司 | 一种机器人,服务器及人机互动方法 |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10748644B2 (en) | 2018-06-19 | 2020-08-18 | Ellipsis Health, Inc. | Systems and methods for mental health assessment |
US11120895B2 (en) | 2018-06-19 | 2021-09-14 | Ellipsis Health, Inc. | Systems and methods for mental health assessment |
US11942194B2 (en) | 2018-06-19 | 2024-03-26 | Ellipsis Health, Inc. | Systems and methods for mental health assessment |
CN110405794A (zh) * | 2019-08-28 | 2019-11-05 | 重庆科技学院 | 一种用于儿童的拥抱机器人及其控制方法 |
CN111540358A (zh) * | 2020-04-26 | 2020-08-14 | 云知声智能科技股份有限公司 | 人机交互方法、装置、设备和存储介质 |
CN111540358B (zh) * | 2020-04-26 | 2023-05-26 | 云知声智能科技股份有限公司 | 人机交互方法、装置、设备和存储介质 |
Also Published As
Publication number | Publication date |
---|---|
JP6888096B2 (ja) | 2021-06-16 |
JP2020507835A (ja) | 2020-03-12 |
KR102328959B1 (ko) | 2021-11-18 |
KR20190100348A (ko) | 2019-08-28 |
US11858118B2 (en) | 2024-01-02 |
US20190337157A1 (en) | 2019-11-07 |
EP3563986A4 (en) | 2020-01-01 |
EP3563986A1 (en) | 2019-11-06 |
CN107053191B (zh) | 2020-05-08 |
CN107053191A (zh) | 2017-08-18 |
EP3563986B1 (en) | 2023-11-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018121624A1 (zh) | 一种机器人,服务器及人机互动方法 | |
CN109789550B (zh) | 基于小说或表演中的先前角色描绘的社交机器人的控制 | |
KR102334942B1 (ko) | 돌봄 로봇을 위한 데이터 처리 방법 및 장치 | |
US11291919B2 (en) | Development of virtual character in a learning game | |
Dewan et al. | A deep learning approach to detecting engagement of online learners | |
KR101604593B1 (ko) | 이용자 명령에 기초하여 리프리젠테이션을 수정하기 위한 방법 | |
CN105126355A (zh) | 儿童陪伴机器人与儿童陪伴系统 | |
CN110488975B (zh) | 一种基于人工智能的数据处理方法及相关装置 | |
JP2019521449A (ja) | 永続的コンパニオンデバイス構成及び配備プラットフォーム | |
US20230173683A1 (en) | Behavior control device, behavior control method, and program | |
US20200324072A1 (en) | Robotic control using profiles | |
Alshammari et al. | Robotics Utilization in Automatic Vision-Based Assessment Systems From Artificial Intelligence Perspective: A Systematic Review | |
Celiktutan et al. | Computational analysis of affect, personality, and engagement in human–robot interactions | |
CN111949773A (zh) | 一种阅读设备、服务器以及数据处理的方法 | |
Bryer et al. | Re‐animation: multimodal discourse around text | |
CN114067033A (zh) | 三维立体记录及还原人生历程的系统及方法 | |
US20220284649A1 (en) | Virtual Representation with Dynamic and Realistic Behavioral and Emotional Responses | |
Farinelli | Design and implementation of a multi-modal framework for scenic actions classification in autonomous actor-robot theatre improvisations | |
WO2023017732A1 (ja) | 読み聞かせ情報作成装置、読み聞かせロボット、読み聞かせ情報作成方法、プログラム | |
Naeem et al. | An AI based Voice Controlled Humanoid Robot | |
Saadatian et al. | Design and development of playful robotic interfaces for affective telepresence | |
Aurobind et al. | An AI Integrated Emotional Responsive System: Kanmani-The Bot for India | |
Pasquier | Declaration of Committee | |
OGGIONNI | Be pleasurable, be innovative. The emotional side of design thinking | |
Shodhan | Facial Expression Synthesis for Entertainment Robots |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17887623 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2019536067 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20197022134 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2017887623 Country of ref document: EP Effective date: 20190731 |