WO2019072104A1 - Interaction Method and Device - Google Patents

Interaction Method and Device

Info

Publication number
WO2019072104A1
WO2019072104A1 (PCT/CN2018/108308)
Authority
WO
WIPO (PCT)
Prior art keywords
user
robot
information
interactive
interaction
Prior art date
Application number
PCT/CN2018/108308
Other languages
English (en)
French (fr)
Inventor
贾梓筠
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司
Priority to US16/646,665, published as US20200413135A1
Priority to EP18865693.8A, published as EP3696648A4
Priority to JP2020510613A, published as JP7254772B2
Publication of WO2019072104A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/0005 Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2668 Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/414 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41407 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223 Cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 Monitoring of end-user related data
    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/04 Speaking
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition

Definitions

  • the present invention relates to the field of artificial intelligence technologies, and in particular, to an interactive method and device.
  • a child user uses a child robot to learn English words.
  • at present, the child user can issue an instruction to the child robot to obtain a content resource, for example by saying "learn English words" to the child robot, which triggers the child robot to fetch the corresponding pre-generated audio and video content resources from the server and play them; the form of content acquisition is therefore single.
  • moreover, during playback the child user can generally only perform simple playback control operations such as "start", "pause", "fast forward", "rewind", "previous" and "next". On the whole, the child user passively receives the content, and the lack of richer interactive functions results in a poor experience for the child user.
  • an embodiment of the present invention provides an interaction method and device for implementing personalized robot interaction for a new user.
  • in a first aspect, an embodiment of the present invention provides an interaction method, which is applied to a robot, and includes: playing live content selected by a user; acquiring emotion information of the user while viewing the live content; sending the emotion information to the anchor end corresponding to the live content; and playing interactive content that is sent by the anchor end and corresponds to the emotion information.
  • an embodiment of the present invention provides an interactive apparatus, including:
  • a play module, configured to play live content selected by the user;
  • an obtaining module, configured to acquire emotion information of the user while viewing the live content;
  • a sending module, configured to send the emotion information to the anchor end corresponding to the live content;
  • the play module being further configured to play interactive content that is sent by the anchor end and corresponds to the emotion information.
  • an embodiment of the present invention provides an electronic device, which includes a processor and a memory, where the memory is configured to store a program supporting execution of the interaction method in the first aspect, and the processor is configured to execute the program stored in the memory.
  • the electronic device can also include a communication interface for communicating with other devices or communication networks.
  • an embodiment of the present invention provides a computer storage medium for storing computer software instructions for the electronic device, which includes a program for performing the interaction method in the first aspect.
  • an embodiment of the present invention provides an interaction method, which is applied to a client, and includes:
  • receiving emotion information sent by a user's robot, where the emotion information reflects the user's emotion while viewing the live content corresponding to the anchor end; and sending the interactive content triggered by the anchor according to the emotion information to the robot.
  • an embodiment of the present invention provides an interaction apparatus, which is applied to a client, and includes:
  • a receiving module configured to receive emotion information sent by a user's robot, where the emotion information reflects an emotion of the user when viewing the live content corresponding to the anchor end;
  • a sending module configured to send, to the robot, the interactive content triggered by the anchor according to the emotion information.
  • an embodiment of the present invention provides an electronic device, which may be implemented as a user terminal device, such as a smart phone, and includes a processor and a memory, where the memory is used to store a program supporting the electronic device in performing the interaction method in the fourth aspect, and the processor is configured to execute the program stored in the memory.
  • the electronic device can also include a communication interface for communicating with other devices or communication networks.
  • an embodiment of the present invention provides a computer storage medium for storing computer software instructions for use in the electronic device, which includes a program for performing the interaction method in the fourth aspect.
  • the interaction method and device provided by the embodiments of the present invention provide the user with the required content in a live broadcast manner.
  • the user can select the live content to be viewed in the viewing client interface of the robot, thereby triggering the robot to obtain the live content and play it.
  • in addition, while the user is watching the live content, the user's emotion information during viewing is captured and sent to the corresponding anchor end, so that the anchor triggers corresponding interactive content according to the user's emotion information. For example, when the user appears bored, the live content is adjusted to singing a song, dancing, or playing a short game.
  • with this solution, live content is provided to the user by way of live broadcast, and the user's emotion while watching the live content is sensed so as to interact with the user; live broadcast technology is combined with sensing technology, and the content watched by the user is adjusted in time according to the user's viewing mood, achieving effective interaction between the content provider and the content viewer.
  • FIG. 1a is a flowchart of an interaction method according to an embodiment of the present invention.
  • Figure 1b is a schematic diagram of an interaction process corresponding to the embodiment shown in Figure 1a;
  • FIG. 2a is a flowchart of another interaction method according to an embodiment of the present invention.
  • FIG. 2b is a schematic diagram of an interaction process corresponding to the embodiment shown in FIG. 2a;
  • FIG. 3a is a flowchart of still another interaction method according to an embodiment of the present invention.
  • Figure 3b is a schematic diagram of the interaction process corresponding to the embodiment shown in Figure 3a;
  • Figure 3c is a schematic diagram of the interaction process corresponding to the embodiment shown in Figure 3a;
  • Figure 3d is a schematic diagram of an interaction process corresponding to the embodiment shown in Figure 3a;
  • Figure 3e is a schematic diagram of an interaction process corresponding to the embodiment shown in Figure 3a;
  • FIG. 4 is a schematic structural diagram of an interaction apparatus according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an electronic device corresponding to the interaction device shown in FIG. 4;
  • FIG. 6 is a flowchart of still another interaction method according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural view of an interactive device corresponding to the embodiment shown in FIG. 6;
  • FIG. 8 is a schematic structural diagram of an electronic device corresponding to the interactive device shown in FIG. 7;
  • FIG. 9 is an interaction flowchart of an interaction method according to an embodiment of the present invention.
  • the terms first, second, third, and so on may be used to describe XXX in embodiments of the invention, but the XXX should not be limited to these terms. These terms are only used to distinguish one XXX from another.
  • the first XXX may also be referred to as a second XXX without departing from the scope of the embodiments of the present invention.
  • the second XXX may also be referred to as a first XXX.
  • depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting".
  • similarly, depending on the context, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
  • FIG. 1a is a flowchart of an interaction method according to an embodiment of the present invention.
  • the interaction method provided by this embodiment may be implemented by an interaction device, and the interaction device may be implemented as software or implemented as a combination of software and hardware.
  • the interactive device can be provided in a robot. As shown in FIG. 1a, the method includes the following steps: 101, playing live content selected by a user; 102, acquiring emotion information of the user while viewing the live content; 103, sending the emotion information to the anchor end corresponding to the live content; and 104, playing interactive content that is sent by the anchor end and corresponds to the emotion information.
  • the user may be a user who uses the robot, such as a child user who uses a child robot.
  • the user can obtain the content that is desired to be viewed by means of live broadcast.
  • a viewing client is installed on the user's robot. After the user opens the viewing client, a list of live content available for selection can be displayed in the viewing client interface, and the user selects from it the live content to be played.
  • it can be understood that the live content selected by the user is provided by the corresponding anchor, and the anchor client uploads the live content to a live broadcast service platform on the network side, so that the viewing client can pull the live content selected by the user from the live broadcast service platform and play it.
  • the live content may be an educational and entertainment resource that can be viewed by the child user.
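For illustration only, the sketch below shows one way a viewing client might ask a live broadcast service platform for the list of available live channels and for the stream address of the channel the user picks. The endpoint paths, the JSON layout and the use of the `requests` library are assumptions made for this sketch, not details taken from the patent.

```python
import requests

LIVE_PLATFORM_API = "https://live.example.com/api"   # hypothetical platform endpoint

def list_live_content(user_type: str) -> list:
    """Fetch the list of live channels the viewing client can offer this user type."""
    resp = requests.get(f"{LIVE_PLATFORM_API}/channels", params={"audience": user_type})
    resp.raise_for_status()
    return resp.json()            # e.g. [{"id": "ch42", "title": "English words"}, ...]

def pull_stream_url(channel_id: str) -> str:
    """Ask the platform for the playable stream address of the selected live content."""
    resp = requests.get(f"{LIVE_PLATFORM_API}/channels/{channel_id}/stream")
    resp.raise_for_status()
    return resp.json()["url"]     # handed to whatever media player the robot embeds

if __name__ == "__main__":
    channels = list_live_content(user_type="child")
    chosen = channels[0]          # in practice the user picks from the displayed list
    print("playing:", pull_stream_url(chosen["id"]))
```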
  • multiple types of collection devices, such as a camera and a microphone array, are generally installed on the user's robot to collect the user's behavior while viewing the live content; the collected behavior is then analyzed to obtain the user's emotion information while watching the live content, so that the anchor can adjust the live content in time according to the emotion information of the viewing user, trigger appropriate interactive content to interact with the user, and improve the user's enthusiasm for watching.
  • generally, the behavior triggered by the user when watching the live content often includes showing an expression, saying certain words, performing certain actions and so on; therefore, the user's current emotional state can be recognized by analyzing the user's facial expression and recognizing the words spoken by the user.
  • optionally, the emotion information of the user while watching the live content can be obtained in the following manner: collecting an image of the user and performing expression recognition on the collected image to obtain an expression reflecting the user's emotion; and/or collecting the user's voice and performing speech recognition on the collected voice to obtain a sentence reflecting the user's emotion, as shown in FIG. 1b.
  • the process of the expression recognition can be implemented by using existing related technologies, and details are not described herein.
  • the results of facial expression recognition may include expressions such as happiness, anger, disgust, and sadness.
  • a sentence library reflecting different emotions may be pre-built, that is, sets of common sentences corresponding to a plurality of emotions may be stored in the sentence library. The collected user voice is recognized to obtain the sentence spoken by the user, and each set of common sentences is then searched for a common sentence matching the recognized sentence; if a match is found, the recognized sentence is determined to be a sentence reflecting the user's emotion.
  • therefore, optionally, the recognized sentence may be fed back directly to the anchor end as an expression of the user's emotion; alternatively, the emotion corresponding to the recognized sentence, that is, the emotion corresponding to the matched common sentence, may be fed back to the anchor end. The match between the recognized sentence and a common sentence does not have to be exact; semantic similarity between the two is sufficient.
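As one possible reading of the sentence-library matching described above, the minimal sketch below compares a recognized sentence against pre-stored common sentences, using a simple string-similarity ratio as a stand-in for the semantic-similarity check; the emotion labels, example sentences and threshold are invented for the example and are not values from the patent.

```python
from difflib import SequenceMatcher

# Hypothetical pre-built sentence library: emotion label -> common sentences.
SENTENCE_LIBRARY = {
    "bored": ["this is boring", "i don't want to watch", "can we do something else"],
    "happy": ["this is fun", "i like this song", "great"],
    "sad":   ["i feel sad", "i miss mom"],
}

def match_emotion(recognized: str, threshold: float = 0.6):
    """Return the emotion whose common sentence is most similar to the recognized one.

    SequenceMatcher.ratio() stands in for the semantic-similarity comparison the text
    mentions; an exact match is not required.
    """
    best_emotion, best_score = None, 0.0
    for emotion, sentences in SENTENCE_LIBRARY.items():
        for sentence in sentences:
            score = SequenceMatcher(None, recognized.lower(), sentence).ratio()
            if score > best_score:
                best_emotion, best_score = emotion, score
    return best_emotion if best_score >= threshold else None

# The recognized speech "this is so boring" maps to "bored"; that label (or the raw
# sentence itself) is what would then be sent to the anchor end.
print(match_emotion("this is so boring"))
```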
  • after receiving the user's emotion information, the anchor end can prompt the anchor to trigger corresponding interactive content according to the user's emotion, so as to attract the user's enthusiasm for watching and interacting. For example, when the user appears bored, the live content is adjusted to the following interactive content: singing a song, dancing, or playing a short game.
  • in summary, compared with the way a traditional robot obtains the content required by the user by downloading, the robot in this embodiment can provide the user with the required content by means of live broadcast.
  • in the live broadcast mode, by capturing the user's emotion information while watching the live content and feeding it back to the anchor end, the anchor can promptly trigger the corresponding interactive content according to the user's emotion information, thereby realizing effective interaction between the content providing end and the content viewing end and improving the viewing user's experience.
  • FIG. 2a is a flowchart of another interaction method according to an embodiment of the present invention. As shown in FIG. 2a, the method includes the following steps: 201, if the current user is identified as a child user, displaying a live content selection page corresponding to the child user for the user to select the required live content; 202, playing the live content selected by the user; 203, acquiring emotion information of the user while viewing the live content; 204, sending the user's emotion information to the anchor end corresponding to the live content; 205, playing the interactive content sent by the anchor end corresponding to the emotion information; and 206, controlling a feedback component of the robot according to the user's emotion information to perform a corresponding interaction operation.
  • the interaction method provided by the embodiment of the present invention is particularly applicable to scenarios in which child users learn and are entertained through a robot. Compared with adult users, it is harder for child users to keep their attention focused. In order to attract child users to make better use of the robot for learning, sensing technology is combined with live broadcast technology so that interactive playback of educational, entertainment and other content can be realized on child robots.
  • therefore, optionally, different content resource acquisition manners may be provided for different types of users using the robot. For example, if the user currently using the robot is an adult, the traditional content acquisition manner may be adopted, that is, in response to the adult user's selection of or search for the required content resources, pre-existing content resources are downloaded from the corresponding server. If the user currently using the robot is a child, the live viewing client can be opened to display a live content list from which the child user selects live content, and when the child user selects the live content to watch, it is pulled from the live broadcast service platform and played.
  • the robot first needs to identify whether the current user is a child user, and if it is a child user, provide live content for viewing in a live broadcast manner.
  • one way of identifying whether the current user is a child user is to determine this from the user's registration information.
  • specifically, in a practical scenario the robot may be used only by the members of one family, that is, only a certain number of users are entitled to use the robot. Therefore, when the robot is first used, identity registration of the users allowed to use it can be performed in a relevant configuration interface; during registration, a user type can be set for each user and a user image can be added, where the user type may be either adult or child. When identifying whether the current user is a child user, a user image is collected and matched against the images of the registered users; if a matching user image exists, the user type of the current user is determined from the user type corresponding to the matched image.
  • in addition, optionally, feature extraction may be performed on the collected user image of the current user, and the user type may be determined according to the extracted user features.
  • the extracted user features include, for example, height, facial wrinkles, and the like.
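Purely as an illustration of the two identification paths just described (registered-image matching with a feature-based fallback), the sketch below uses a placeholder face-distance function and made-up thresholds; none of these values or APIs come from the patent.

```python
from dataclasses import dataclass

@dataclass
class RegisteredUser:
    name: str
    user_type: str            # "adult" or "child", set at registration time
    face_embedding: list      # face features stored when the user image was added

def face_distance(a: list, b: list) -> float:
    """Placeholder for whatever face-matching backend the robot actually uses."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def identify_user_type(embedding: list, registered: list,
                       estimated_height_cm: float = None) -> str:
    # 1) Try to match the collected image against the registered users.
    best = min(registered, key=lambda u: face_distance(embedding, u.face_embedding),
               default=None)
    if best is not None and face_distance(embedding, best.face_embedding) < 0.6:  # assumed threshold
        return best.user_type
    # 2) Fall back to extracted features; height is used here as one example feature.
    if estimated_height_cm is not None and estimated_height_cm < 140:             # assumed cut-off
        return "child"
    return "adult"
```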
  • in this embodiment, in order to further enhance the interaction effect of the robot and enrich its interaction forms, in addition to feeding the user's emotion information back to the anchor so that the anchor can trigger corresponding interactive content to attract the user's enthusiasm, the relevant feedback components on the robot can also be controlled according to the user's emotion information to perform corresponding interaction operations.
  • the feedback components on the robot may include, for example, a touch sensor, arm servos, wheel motors, LED lights, and the like.
  • for example, when the robot finds that the child user is dozing off or looks bored, as shown in FIG. 2b, the wheel motors can be automatically controlled to vibrate slightly back and forth while the arm servos start to move and the LED lights start to blink, so as to draw the child's attention back to the anchor's live content.
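A minimal sketch of the feedback-component control just described, mapping a detected emotion to actuator actions; the `FeedbackComponents` class is an invented stand-in for whatever wheel-motor, arm-servo and LED drivers the robot actually exposes.

```python
import time

class FeedbackComponents:
    """Thin stand-in for the robot's actuator drivers (not a real API)."""
    def jiggle_wheels(self, times: int = 3) -> None:
        for _ in range(times):
            print("wheel motors: small forward/backward vibration")
            time.sleep(0.2)
    def wave_arms(self) -> None:
        print("arm servos: wave")
    def blink_leds(self, times: int = 5) -> None:
        for _ in range(times):
            print("LEDs: blink")
            time.sleep(0.1)

def react_to_emotion(emotion: str, fb: FeedbackComponents) -> None:
    """Drive the feedback components according to the viewer's detected emotion."""
    if emotion in ("bored", "dozing"):
        fb.jiggle_wheels()
        fb.wave_arms()
        fb.blink_leds()
    elif emotion == "happy":
        fb.blink_leds(times=2)      # a lighter acknowledgement

react_to_emotion("bored", FeedbackComponents())
```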
  • FIG. 3a is a flowchart of still another interaction method according to an embodiment of the present invention. As shown in FIG. 3a, the method includes the following steps: 301, playing live content selected by a user; 302, in response to the user's selection of a friend from a viewer list, collecting sensing data reflecting the user's interaction behavior; 303, determining interaction control information according to the sensing data; and 304, sending the interaction control information to the robot corresponding to the friend, to control that robot to perform a corresponding interaction operation.
  • as shown in FIG. 3b, the same live content broadcast by the same anchor can be watched by different users. Thus, besides interacting with the anchor, different users watching the same live content can also interact with one another through their respective robots.
  • optionally, as shown in FIG. 3c, a user who owns a robot can create a virtual interest group and add friends who are watching the same live content from the viewer list to the virtual interest group, so that the users in the group can interact with one another.
  • the interaction may take the form of a chat window created at the same time as the virtual interest group, through which the users in the group can exchange text, voice, images, video and so on.
  • in addition, optionally, besides communicating through the above chat window, interaction can also be realized through the robots.
  • for example, when a user selects a friend from the viewer list and thereby triggers the establishment of a communication link with the friend, the user performs interaction behavior in front of his or her own robot, such as making certain movements or saying certain words, and that behavior is reproduced on the friend's robot. As shown in FIG. 3d, when a user says "hello" in front of his robot and raises both arms, the friend's robot outputs the speech "hello" and raises its own arms.
  • to realize this interaction function, specifically, after the user selects a friend to communicate with and a communication link with the friend is established, sensing data reflecting the user's interaction behavior can be collected and analyzed to determine interaction control information that enables the friend's robot to imitate the user's interaction behavior, and the obtained interaction control information is then sent to the friend's robot to control it to perform the corresponding interaction operation.
  • the sensing data reflecting the interactive behavior of the user may include at least one of the following: a depth image, a color image, an interactive voice, touch sensing information, and the like.
  • the depth image can reflect the user's limb movements, such as the process of raising the arm;
  • the color image can reflect the user's facial expression features, such as a smile;
  • the interactive voice can reflect the voice spoken by the user, such as hello;
  • the touch sensing information can reflect a touch operation performed by the user on the robot, such as holding the robot's palm.
  • optionally, when the sensing data includes interactive voice, the interactive voice may be used directly as part of the interaction control information, which is equivalent to passing the user's voice through to the friend's robot for playback; as shown in FIG. 3e, the sentence "Hello, Xiaohong" spoken by the user Xiaoming is played by the robot of his friend Xiaohong.
  • optionally, when the sensing data includes touch sensing information, light control information corresponding to the touch sensing information may be determined and used as part of the interaction control information to control the display effect of the LED lights on the friend's robot.
  • in practical applications, a correspondence between different touch positions and the display effects of different LED lights can be preset, so that when the user is detected touching a certain position on the robot body, the LED light serving as the controlled object and its display manner are determined from that correspondence; the light control information then contains the LED light serving as the controlled object and the display manner of that LED light.
  • for example, as shown in FIG. 3e, when the user Xiaoming touches the right hand of his own robot, the LED light on the left hand of his friend Xiaohong's robot can be controlled to light up, thereby narrowing the distance of the remote interaction.
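To make the preset correspondence concrete, here is a small sketch mapping touch positions on the local robot to light-control entries for the friend's robot; the position names and display modes are example values only, not ones specified in the patent.

```python
# Hypothetical preset correspondence: touch position on the local robot
# -> (LED on the friend's robot, display manner).
TOUCH_TO_LED = {
    "right_hand": {"led": "left_hand_led",  "mode": "steady_on"},
    "left_hand":  {"led": "right_hand_led", "mode": "steady_on"},
    "head":       {"led": "chest_led",      "mode": "blink"},
}

def build_light_control(touch_position: str):
    """Turn a detected touch into the light-control part of the interaction control info."""
    entry = TOUCH_TO_LED.get(touch_position)
    if entry is None:
        return None
    return {"type": "light_control", "target_led": entry["led"], "mode": entry["mode"]}

# Xiaoming touches his robot's right hand -> Xiaohong's robot lights its left-hand LED.
print(build_light_control("right_hand"))
```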
  • optionally, when the sensing data includes a color image, facial expression recognition may be performed on the color image, an expression object corresponding to the recognized facial expression is then determined from a preset expression library, and the expression object is used as part of the interaction control information to control the friend's robot to display it.
  • the process of facial expression recognition can be implemented by using existing related technologies, and details are not described herein.
  • the result of the expression recognition may include expressions such as happiness, anger, surprise, fear, disgust, and sadness.
  • an expression library including an expression object corresponding to each expression recognition result may be pre-built, and the expression object may be an expression animation or an expression image.
  • optionally, when the sensing data includes a depth image, skeleton recognition may be performed on the depth image to obtain the user's joint posture information, robot joint posture information corresponding to the user's joint posture information is then determined, and the robot joint posture information is used as part of the interaction control information to control the friend's robot to perform the corresponding action, as shown in FIG. 3d.
  • the method of skeleton recognition can be implemented with existing related technologies and is not described here; this embodiment only emphasizes that the result of skeleton recognition, the human joint posture information, consists of the motion sequences of multiple joints of the human body and reflects the motion trajectories of those joints.
  • in addition, because the joints of the robot may not correspond one-to-one to the joints of the human body (this depends on how humanoid the robot is), mapping the human joint posture information to robot joint posture information may involve both a mapping between joints and a mapping between joint postures. Therefore, a mapping relationship between human joints and robot joints is established in advance so that the mapping between human joint postures and robot joint postures can be determined.
  • as a simple example, suppose that, for the action of raising the right hand, the human joint posture information is expressed relative to a reference line in the human body coordinate system, and the angles of the following joints relative to that reference line at successive moments are:
  • Joint 1: 30 degrees, 40 degrees, 50 degrees ...;
  • Joint 2: 20 degrees, 30 degrees, 40 degrees ...;
  • Joint 3: 40 degrees, 50 degrees, 60 degrees ...;
  • Joint 4: 40 degrees, 50 degrees, 60 degrees ....
  • the robot joint posture information is expressed relative to a reference line in the robot coordinate system, and the angles of the corresponding robot joints relative to that reference line at the same moments are:
  • Joint a: 10 degrees, 40 degrees, 54 degrees ...;
  • Joint b: 10 degrees, 23 degrees, 52 degrees ....
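As an illustration of the joint-posture mapping that the text leaves unspecified, the sketch below converts human joint angle sequences into robot joint angles by interpolating over pre-recorded calibration pairs seeded with the example values above; both the joint correspondence table and the interpolation approach are assumptions for this sketch, not the patent's method.

```python
import numpy as np

# Pre-established joint correspondence and per-joint calibration pairs
# (human angle -> robot angle), seeded with the example values above.
JOINT_MAP = {"joint_3": "joint_a", "joint_4": "joint_b"}
CALIBRATION = {
    "joint_a": ([40, 50, 60], [10, 40, 54]),   # (human angles, robot angles)
    "joint_b": ([40, 50, 60], [10, 23, 52]),
}

def map_posture(human_posture: dict) -> dict:
    """Convert human joint angle sequences into robot joint angle sequences
    by interpolating over the pre-recorded calibration pairs."""
    robot_posture = {}
    for human_joint, angles in human_posture.items():
        robot_joint = JOINT_MAP.get(human_joint)
        if robot_joint is None:
            continue                              # the robot has no counterpart joint
        h_ref, r_ref = CALIBRATION[robot_joint]
        robot_posture[robot_joint] = np.interp(angles, h_ref, r_ref).tolist()
    return robot_posture

# "Raise the right hand": the angle sequences from the example above.
human = {"joint_3": [40, 50, 60], "joint_4": [40, 50, 60]}
print(map_posture(human))   # {'joint_a': [10.0, 40.0, 54.0], 'joint_b': [10.0, 23.0, 52.0]}
```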
  • in this embodiment, in addition to interacting with the anchor, a user watching the live content can also interact with friends through his or her own robot and the friends' robots, which improves the user's interactive experience and enriches the robot's interaction forms.
  • FIG. 4 is a schematic structural diagram of an interaction apparatus according to an embodiment of the present invention. As shown in FIG. 4, the apparatus includes: a play module 11, an obtaining module 12, and a sending module 13.
  • the playing module 11 is configured to play the live content selected by the user.
  • the obtaining module 12 is configured to acquire emotion information of the user when viewing the live content.
  • the sending module 13 is configured to send the sentiment information to the anchor end corresponding to the live content.
  • the playing module 11 is further configured to play the interactive content that is sent by the anchor end and corresponding to the emotion information.
  • the obtaining module 12 is specifically configured to: perform facial expression recognition on the collected image of the user to obtain an expression reflecting the user's emotion; and/or perform speech recognition on the collected voice of the user to obtain a sentence reflecting the user's emotion.
  • the device further includes: a control module 14.
  • the control module 14 is configured to control a feedback component of the robot to perform a corresponding interaction operation according to the emotion information.
  • the device further includes: an identification module 15 and a display module 16.
  • the identification module 15 is configured to identify whether the user is a child user.
  • the display module 16 is configured to display, when the identification module 15 identifies that the user is a child user, a live content selection page corresponding to the child user, for the child user to select the live content.
  • the device further includes: an acquisition module 17 and a determination module 18.
  • the collecting module 17 is configured to collect sensing data reflecting the interaction behavior of the user in response to the user selecting a friend from the viewer list.
  • the determining module 18 is configured to determine the interaction control information according to the sensing data.
  • the sending module 13 is further configured to send the interaction control information to a robot corresponding to the friend, to control a robot corresponding to the friend to perform a corresponding interaction operation.
  • optionally, the sensing data includes a color image, and the determining module 18 is specifically configured to: perform facial expression recognition on the color image, and determine, from a preset expression library, an expression object corresponding to the recognized facial expression, the interaction control information including the expression object.
  • optionally, the sensing data includes a depth image, and the determining module 18 is specifically configured to: perform skeleton recognition on the depth image to obtain the user's joint posture information, and determine robot joint posture information corresponding to the user's joint posture information, the interaction control information including the robot joint posture information.
  • optionally, the sensing data includes touch sensing information, and the determining module 18 is specifically configured to: determine light control information corresponding to the touch sensing information, the interaction control information including the light control information.
  • optionally, the sensing data includes interactive voice, and the interaction control information includes the interactive voice.
  • the apparatus shown in FIG. 4 can perform the methods of the embodiments shown in FIG. 1a, FIG. 2a and FIG. 3a; for parts not described in detail in this embodiment, reference may be made to the related descriptions of the embodiments shown in FIG. 1a, FIG. 2a and FIG. 3a, and details are not repeated here.
  • in a possible design, the structure of the interaction apparatus described above can be implemented as an electronic device, and the electronic device can be a robot.
  • as shown in FIG. 5, the robot can include: a processor 21 and a memory 22.
  • the memory 22 is configured to store a program supporting the robot in performing the interaction methods provided in the embodiments shown in FIG. 1a, FIG. 2a and FIG. 3a, and the processor 21 is configured to execute the program stored in the memory 22.
  • the program includes one or more computer instructions, and when the one or more computer instructions are executed by the processor 21, the following steps can be implemented: playing live content selected by a user; acquiring emotion information of the user while viewing the live content; sending the emotion information to the anchor end corresponding to the live content; and playing interactive content that is sent by the anchor end and corresponds to the emotion information.
  • the processor 21 is further configured to perform all or part of the steps of the foregoing steps shown in FIG. 1a, FIG. 2a and FIG. 3a.
  • the structure of the robot may further include a communication interface 23 for the robot to communicate with other devices or communication networks, such as communication between the robot and the server.
  • the robot may further include: an audio component 24 and a sensor component 25.
  • the audio component 24 is configured to output and/or input an audio signal.
  • audio component 24 includes a microphone (MIC) that is configured to receive an external audio signal when the robot is in an operational mode, such as a voice recognition mode.
  • the received audio signal may be further stored in the memory 22 or transmitted via the communication interface 23.
  • audio component 24 also includes a speaker for outputting an audio signal.
  • sensor assembly 25 includes one or more sensors.
  • the sensor assembly 25 includes a display of the robot, and the sensor assembly 25 can also detect the presence or absence of contact of the user with the robot, and the like.
  • Sensor assembly 25 can include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor assembly 25 can also include an accelerometer sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor, and the like.
  • in addition, the robot provided by the embodiment of the present invention has the flexibility of movement in multiple degrees of freedom.
  • in addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions used by the robot, which includes a program for performing the interaction methods in the embodiments shown in FIG. 1a, FIG. 2a and FIG. 3a.
  • FIG. 6 is a flowchart of still another interaction method according to an embodiment of the present invention.
  • the interaction method provided by this embodiment may be implemented by an interaction device, which may be implemented as software or implemented as a combination of software and hardware.
  • the interaction device can be installed in an anchor client, and the anchor client can be installed in a user terminal device such as a smart phone, or can be installed in an intelligent robot. As shown in FIG. 6, the method may include the following steps: 401, receiving emotion information sent by a user's robot, the emotion information reflecting the user's emotion while viewing the live content corresponding to the anchor end; and 402, sending the interactive content triggered by the anchor according to the emotion information to the user's robot.
  • as described in the foregoing method embodiments, a child user can watch the anchor's live content through a child robot, and the child robot can collect the emotion information of the child user while watching the live content and feed the obtained emotion information back to the anchor, so that the anchor can trigger corresponding interactive content to interact with the child user.
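For illustration, here is a sketch of the anchor-client side of this exchange: one emotion report arrives from the viewer's robot and interactive content chosen for that emotion is returned. The JSON message layout and the automatic content choice (standing in for the anchor's manual decision) are assumptions made for this sketch.

```python
import json

def choose_interactive_content(emotion: str) -> dict:
    """Stand-in for the anchor's manual choice, keyed off the reported emotion."""
    if emotion in ("bored", "dozing"):
        return {"type": "song", "title": "ABC song"}
    return {"type": "message", "text": "glad you like it!"}

def handle_emotion_message(raw: bytes) -> bytes:
    """Anchor-client handling of one emotion report from a viewer's robot:
    receive the emotion information and return the interactive content to send back."""
    msg = json.loads(raw)                  # assumed layout: {"robot_id": ..., "emotion": ...}
    content = choose_interactive_content(msg["emotion"])
    reply = {"robot_id": msg["robot_id"], "interactive_content": content}
    return json.dumps(reply).encode()

print(handle_emotion_message(b'{"robot_id": "r1", "emotion": "bored"}'))
```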
  • FIG. 7 is a schematic structural diagram of an interaction device corresponding to the embodiment shown in FIG. 6. As shown in FIG. 7, the device includes: a receiving module 31 and a sending module 32.
  • the receiving module 31 is configured to receive emotion information sent by the user's robot, where the emotion information reflects an emotion of the user when viewing the live content corresponding to the anchor end.
  • the sending module 32 is configured to send the interactive content triggered by the anchor according to the emotion information to the robot.
  • the apparatus shown in FIG. 7 can perform the method of the embodiment shown in FIG. 6; for parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in FIG. 6, and details are not repeated here.
  • the structure of the interactive device shown in FIG. 7 can be implemented as an electronic device, which is a user terminal device, such as a smart phone.
  • the user terminal device can include: a processor 41 and a memory 42.
  • the memory 42 is configured to store a program that supports the user terminal device to perform the interactive method provided in the embodiment shown in FIG. 6, and the processor 41 is configured to execute the program stored in the memory 42.
  • the program includes one or more computer instructions, and when the one or more computer instructions are executed by the processor 41, the following steps can be implemented: receiving emotion information sent by a user's robot, the emotion information reflecting the user's emotion while viewing the live content corresponding to the anchor end; and sending the interactive content triggered by the anchor according to the emotion information to the robot.
  • the processor 41 is further configured to perform all or part of the steps of the foregoing method shown in FIG. 6.
  • the structure of the user terminal device may further include a communication interface 43 for the user terminal device to communicate with other devices or a communication network.
  • an embodiment of the present invention provides a computer storage medium for storing computer software instructions used by a user terminal device, which includes a program for performing the interaction method in the method embodiment shown in FIG. 6 above.
  • FIG. 9 is a flowchart of interaction of an interaction method according to an embodiment of the present invention. As shown in FIG. 9, the method may include the following steps:
  • the robot A identifies that the current user is a child user, and displays a live content selection page corresponding to the child user, so that the user selects the live content.
  • the robot A plays the live content selected by the user through the viewing client.
  • the robot A acquires emotional information when the user views the live content.
  • the robot A controls its own feedback component to perform a corresponding interaction operation according to the user emotion information.
  • the robot A sends the user emotion information to the anchor client corresponding to the live content.
  • the anchor client sends the interactive content triggered by the anchor according to the user's emotional information to the robot A.
  • the robot A plays the interactive content through the viewing client.
  • the robot A collects the sensing data reflecting the user interaction behavior, and determines the interaction control information according to the sensing data.
  • the robot A sends the interaction control information to the robot B corresponding to the friend.
  • the robot B performs a corresponding interaction operation according to the interaction control information.
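To tie the FIG. 9 sequence together, the toy walk-through below prints what each party (robot A, the anchor client, robot B) would do at each stage; all message contents are invented placeholders rather than formats defined by the patent.

```python
def fig9_walkthrough() -> None:
    """Toy trace of the FIG. 9 interaction sequence; every value is a placeholder."""
    print("robot A: identifies a child user, shows the child live-content selection page")
    print("robot A: plays the live content selected by the user via the viewing client")
    emotion = "bored"                                     # assumed recognition result
    print(f"robot A: detects emotion {emotion!r}, drives its own feedback components")
    print(f"robot A -> anchor client: emotion info = {emotion!r}")
    interactive = {"type": "song", "title": "ABC song"}   # the anchor's choice (assumed)
    print(f"anchor client -> robot A: interactive content = {interactive}")
    print("robot A: plays the interactive content via the viewing client")
    control = {"speech": "hello", "arm_pose": "raise_both_arms"}
    print(f"robot A -> robot B: interaction control info = {control}")
    print("robot B: performs the corresponding interaction operation")

fig9_walkthrough()
```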
  • the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments, and a person of ordinary skill in the art can understand and implement them without creative effort.
  • these computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer readable medium, such as read only memory (ROM) or flash memory.
  • Memory is an example of a computer readable medium.
  • Computer readable media includes both permanent and non-persistent, removable and non-removable media.
  • Information storage can be implemented by any method or technology.
  • the information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
  • as defined herein, computer-readable media does not include transitory computer-readable media such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Social Psychology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Acoustics & Sound (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Toys (AREA)
  • User Interface Of Digital Computer (AREA)
  • Manipulator (AREA)
  • Information Transfer Between Computers (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention provide an interaction method and device. The method includes: playing live content selected by a user; acquiring emotion information of the user while viewing the live content; sending the emotion information to the anchor end corresponding to the live content; and playing interactive content sent by the anchor end corresponding to the emotion information. For example, when the user is found to appear bored, the live content is adjusted to singing a song, dancing, or playing a short game. With this solution, live content is provided to the user by way of live broadcast, and the user's emotion while watching the live content is sensed so as to interact with the user; live broadcast technology is combined with sensing technology, and the content watched by the user is adjusted in time according to the user's viewing mood, achieving effective interaction between the content provider and the content viewer.

Description

Interaction Method and Device
This application claims priority to Chinese Patent Application No. 201710929662.0, filed on October 9, 2017 and entitled "Interaction Method and Device", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of artificial intelligence technologies, and in particular, to an interaction method and device.
Background
In recent years, with the development of robot technology and the continuous deepening of artificial intelligence research, intelligent mobile robots play an increasingly important role in human life and are widely used in many fields; for example, child robots customized for children can be used for children's education and entertainment.
Taking a child robot as an example, suppose a child user uses the child robot to learn English words. At present, the child user can issue an instruction to the child robot to obtain a content resource, for example by saying "learn English words" to the child robot, which triggers the child robot to obtain the corresponding pre-generated audio and video content resources from the server and play them; the form of content acquisition is single. Moreover, during the entire playback the child user can generally only perform simple playback control operations such as "start", "pause", "fast forward", "rewind", "previous" and "next". On the whole, the child user is still in a state of passively receiving the content, and the lack of richer interactive functions results in a poor experience for the child user.
Summary of the Invention
In view of this, embodiments of the present invention provide an interaction method and device, which are used to implement personalized robot interaction for a new user.
In a first aspect, an embodiment of the present invention provides an interaction method, applied to a robot, including:
playing live content selected by a user;
acquiring emotion information of the user while viewing the live content;
sending the emotion information to the anchor end corresponding to the live content;
playing interactive content that is sent by the anchor end and corresponds to the emotion information.
In a second aspect, an embodiment of the present invention provides an interaction apparatus, including:
a play module, configured to play live content selected by a user;
an obtaining module, configured to acquire emotion information of the user while viewing the live content;
a sending module, configured to send the emotion information to the anchor end corresponding to the live content;
the play module being further configured to play interactive content that is sent by the anchor end and corresponds to the emotion information.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor and a memory, where the memory is configured to store a program supporting execution of the interaction method in the first aspect, and the processor is configured to execute the program stored in the memory. The electronic device may further include a communication interface for communicating with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions used by the electronic device, which includes a program for performing the interaction method in the first aspect.
In a fourth aspect, an embodiment of the present invention provides an interaction method, applied to a client, including:
receiving emotion information sent by a user's robot, where the emotion information reflects the user's emotion while viewing the live content corresponding to the anchor end;
sending interactive content triggered by the anchor according to the emotion information to the robot.
In a fifth aspect, an embodiment of the present invention provides an interaction apparatus, applied to a client, including:
a receiving module, configured to receive emotion information sent by a user's robot, where the emotion information reflects the user's emotion while viewing the live content corresponding to the anchor end;
a sending module, configured to send interactive content triggered by the anchor according to the emotion information to the robot.
In a sixth aspect, an embodiment of the present invention provides an electronic device, which may be implemented as a user terminal device such as a smart phone, and includes a processor and a memory, where the memory is configured to store a program supporting the electronic device in performing the interaction method in the fourth aspect, and the processor is configured to execute the program stored in the memory. The electronic device may further include a communication interface for communicating with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions used by the electronic device, which includes a program for performing the interaction method in the fourth aspect.
The interaction method and device provided by the embodiments of the present invention provide the user with the required content by way of live broadcast. Specifically, the user can select the live content to be watched in the viewing client interface of the robot, thereby triggering the robot to obtain the live content and play it. In addition, while the user is watching the live content, the user's emotion information during viewing is captured and sent to the corresponding anchor end, so that the anchor triggers corresponding interactive content according to the user's emotion information. For example, when the user is found to appear bored, the live content is adjusted to singing a song, dancing, or playing a short game. With this solution, live content is provided to the user by way of live broadcast, and the user's emotion while watching the live content is sensed so as to interact with the user; live broadcast technology is combined with sensing technology, and the content watched by the user is adjusted in time according to the user's viewing mood, achieving effective interaction between the content providing end and the content viewing end.
Brief Description of the Drawings
In order to describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are some embodiments of the present invention, and a person of ordinary skill in the art may further obtain other drawings from these drawings without creative effort.
FIG. 1a is a flowchart of an interaction method according to an embodiment of the present invention;
FIG. 1b is a schematic diagram of an interaction process corresponding to the embodiment shown in FIG. 1a;
FIG. 2a is a flowchart of another interaction method according to an embodiment of the present invention;
FIG. 2b is a schematic diagram of an interaction process corresponding to the embodiment shown in FIG. 2a;
FIG. 3a is a flowchart of yet another interaction method according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of an interaction process corresponding to the embodiment shown in FIG. 3a;
FIG. 3c is a schematic diagram of an interaction process corresponding to the embodiment shown in FIG. 3a;
FIG. 3d is a schematic diagram of an interaction process corresponding to the embodiment shown in FIG. 3a;
FIG. 3e is a schematic diagram of an interaction process corresponding to the embodiment shown in FIG. 3a;
FIG. 4 is a schematic structural diagram of an interaction apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an electronic device corresponding to the interaction apparatus shown in FIG. 4;
FIG. 6 is a flowchart of still another interaction method according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an interaction apparatus corresponding to the embodiment shown in FIG. 6;
FIG. 8 is a schematic structural diagram of an electronic device corresponding to the interaction apparatus shown in FIG. 7;
FIG. 9 is an interaction flowchart of an interaction method according to an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms used in the embodiments of the present invention are only for the purpose of describing particular embodiments and are not intended to limit the present invention. The singular forms "a", "said" and "the" used in the embodiments of the present invention and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise; "a plurality of" generally includes at least two, but does not exclude the case of including at least one.
It should be understood that the term "and/or" used herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate the three cases where A exists alone, A and B exist at the same time, and B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
It should be understood that although the terms first, second, third, etc. may be used to describe XXX in the embodiments of the present invention, the XXX should not be limited to these terms. These terms are only used to distinguish XXX from one another. For example, without departing from the scope of the embodiments of the present invention, a first XXX may also be referred to as a second XXX, and similarly, a second XXX may also be referred to as a first XXX.
Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
It should also be noted that the terms "include", "comprise" or any other variants thereof are intended to cover non-exclusive inclusion, so that a commodity or system including a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a commodity or system. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the commodity or system that includes the element.
In addition, the sequence of steps in the following method embodiments is only an example and is not a strict limitation.
FIG. 1a is a flowchart of an interaction method according to an embodiment of the present invention. The interaction method provided in this embodiment may be performed by an interaction apparatus, which may be implemented as software or as a combination of software and hardware, and the interaction apparatus may be provided in a robot. As shown in FIG. 1a, the method includes the following steps:
101. Play live content selected by a user.
The user may be a user who uses the robot, for example a child user who uses a child robot.
In the embodiment of the present invention, the user can obtain the content to be watched by way of live broadcast. Specifically, a viewing client is installed on the user's robot; after the user opens the viewing client, a list of live content available for selection can be displayed in the viewing client interface, and the user selects from it the live content to be played.
It can be understood that the live content selected by the user is provided by the corresponding anchor, and the anchor client uploads the live content to a live broadcast service platform on the network side, so that the viewing client can pull the live content selected by the user from the live broadcast service platform and play it.
In practical applications, corresponding to child users, the live content may be educational and entertainment resources that child users can watch.
102. Acquire emotion information of the user while viewing the live content.
103. Send the user's emotion information to the anchor end corresponding to the live content.
104. Play the interactive content sent by the anchor end corresponding to the emotion information.
It can be understood that multiple types of collection devices, such as a camera and a microphone array, are generally installed on the user's robot to collect the user's behavior while watching the live content; the collected behavior is then analyzed to obtain the user's emotion information while watching the live content, so that the anchor can adjust the live content in time according to the emotion information of the viewing user, trigger appropriate interactive content to interact with the user, and improve the user's enthusiasm for watching.
Generally, the behavior triggered by the user while watching the live content often includes showing a certain expression, saying certain words, performing certain actions and so on. Therefore, the user's current emotional state can be recognized by analyzing the user's facial expression and recognizing the words spoken by the user.
Therefore, optionally, the user's emotion information while watching the live content can be acquired in the following manner:
collecting an image of the user and performing expression recognition on the collected user image to obtain an expression reflecting the user's emotion; and/or collecting the user's voice and performing speech recognition on the collected user voice to obtain a sentence reflecting the user's emotion, as shown in FIG. 1b.
The process of expression recognition may be implemented using existing related technologies and is not described in detail here. The results of expression recognition may include expressions such as happiness, anger, disgust and sadness.
A sentence library reflecting different emotions may be pre-built, that is, sets of common sentences corresponding to a plurality of emotions may be stored in the sentence library. The collected user voice is recognized to obtain the sentence spoken by the user, and each set of common sentences is then searched for a common sentence corresponding to the recognized sentence; if a corresponding common sentence is matched, the recognized sentence is determined to be a sentence reflecting the user's emotion. Therefore, optionally, the recognized sentence may be fed back directly to the anchor end as an expression of the user's emotion; alternatively, the emotion corresponding to the recognized sentence, that is, the emotion corresponding to the matched common sentence, may be fed back to the anchor end.
The match between the recognized sentence and a common sentence does not necessarily require them to be identical; semantic similarity between the two is sufficient.
After receiving the user's emotion information, the anchor end can prompt the anchor to trigger corresponding interactive content according to the user's emotion, so as to attract the user's enthusiasm for watching and interacting. For example, when the user is found to appear bored, the live content is adjusted to the following interactive content: singing a song, dancing, or playing a short game.
In summary, compared with the way a traditional robot obtains the content required by the user by downloading, in the embodiment of the present invention the robot can provide the user with the required content by way of live broadcast. In the live broadcast mode, by capturing the user's emotion information while watching the live content and feeding it back to the anchor end, the anchor can trigger corresponding interactive content in time according to the user's emotion information, realizing effective interaction between the content providing end and the content viewing end and improving the viewing experience of the viewing user.
FIG. 2a is a flowchart of another interaction method according to an embodiment of the present invention. As shown in FIG. 2a, the method includes the following steps:
201. If the current user is identified as a child user, display a live content selection page corresponding to the child user for the user to select the required live content.
The interaction method provided by the embodiment of the present invention is particularly applicable to scenarios in which child users learn and are entertained through a robot. Compared with adult users, it is harder for child users to keep their attention focused. In order to attract child users to make better use of the robot for learning, sensing technology is combined with live broadcast technology so that interactive playback of educational, entertainment and other content is realized on the child robot.
Therefore, optionally, different content resource acquisition manners may be provided for different types of users using the robot. For example, if the user currently using the robot is an adult, the traditional content acquisition manner may be adopted, that is, in response to the adult user's selection of or search for the required content resources, pre-existing content resources are downloaded from the corresponding server. If the user currently using the robot is a child, the live viewing client can be opened to display a live content list from which the child user selects live content; when the child user selects the live content to watch, it is pulled from the live broadcast service platform and played.
Therefore, the robot first needs to identify whether the current user is a child user, and if so, provide live content for viewing by way of live broadcast.
Optionally, one way of identifying whether the current user is a child user is to determine this from the user's registration information. Specifically, in a practical application scenario the robot may only be used by the members of a certain family, that is, only a certain number of users are entitled to use the robot. Therefore, when the robot is first used, identity registration of the users allowed to use the robot can be performed in a relevant configuration interface; during identity registration, the user type of different users can be set and user images can be added, where the user type may be divided into two types: adult and child. When identifying whether the current user is a child user, a user image is collected and matched against the images of the registered users; if a matching user image exists, the user type of the current user is determined according to the user type corresponding to the matched user image.
In addition, optionally, feature extraction may also be performed on the collected user image of the current user, so that the user type is determined according to the extracted user features. The extracted user features include, for example, height, facial wrinkles and the like.
202. Play the live content selected by the user.
203. Acquire emotion information of the user while viewing the live content.
204. Send the user's emotion information to the anchor end corresponding to the live content.
205. Play the interactive content sent by the anchor end corresponding to the emotion information.
For the specific implementation of the above steps, reference may be made to the descriptions in the foregoing embodiments, which are not repeated here.
206. Control a feedback component of the robot according to the user's emotion information to perform a corresponding interaction operation.
In this embodiment, in order to further enhance the interaction effect of the robot and enrich its interaction forms, in addition to feeding the user's emotion information back to the anchor so that the anchor triggers corresponding interactive content to attract the user's enthusiasm, the relevant feedback components on the robot can also be controlled according to the user's emotion information to perform corresponding interaction operations.
The feedback components on the robot may include, for example, a touch sensor, arm servos, wheel motors, LED lights, and so on.
For example, when the robot finds that the child user is dozing off or looks bored, as shown in FIG. 2b, the wheel motors can be automatically controlled to vibrate slightly back and forth while the arm servos start to move and the LED lights start to blink, so as to draw the child's attention back to the anchor's live content.
图3a为本发明实施例提供的又一种互动方法的流程图,如图3a所示,该方法包括如下步骤:
301、播放用户选择的直播内容。
302、响应于用户从观看者列表中对好友的选择操作,采集反映用户互动行为的感知数据。
303、根据感知数据确定互动控制信息。
304、将互动控制信息发送至好友对应的机器人,以控制好友对应的机器人执行对应的互动操作。
如图3b所示,同一主播直播的同一直播内容可以被不同的用户观看。从而,而观看同一直播内容的不同用户除了可以与主播进行互动之外,不同用户之间也可以通过各自对应的机器人进行互动交流。
可选地,如图3c所示,某个拥有机器人的用户可以创建虚拟兴趣小组,从观看者列表中将观看同一直播内容的好友添加到该虚拟兴趣小组中,从而可以进行组内用户间的彼此互动交流。该互动交流的形式可以是:在创建虚拟兴趣小组的同时,创建一个聊天窗口,从而,该组内的用户可以通过该聊天窗口进行文字、语音、图像、视频等交流。
另外,可选地,除了可以通过上述聊天窗口进行互动交流外,还可以通过机器人来实现互动。举例来说,当某用户从观看者列表中选择出某个好友而触发与该好友建立通信链接后,该用户对自己的机器人进行互动行为,比如在自己的机器人前做出某些动作或说某些话语,该互动行为会在好友的机器人上表现出来,如图3d所示,某用户在其机器人前说出“你好”,并抬起双臂,则好友的机器人会输出“你好”的语音,并抬起机器人的双臂。
为实现上述互动功能,具体地,当某用户选择出需要交流的好友,建立与该好友的通信链接后,可以采集反映该用户的互动行为的感知数据,进而分析该感知数据,以确定能够控制好友的机器人模仿该用户的互动行为的互动控制信息,从而将获得的互动控制信息发送至好友的机器人,以控制好友的机器人执行对应的互动操作。
其中,反映该用户的互动行为的感知数据可以包括如下至少一种:深度图像、彩色图像、互动语音、触摸传感信息等。其中,深度图像可以反映用户的肢体动作,比如抬起手臂的过程;彩色图像可以反映用户的人脸表情特征,比如微笑;互动语音可以反映用户说出的语音,比如你好;触摸传感信息可以反映用户的对机器人触发的触摸操作,比如握住机器人的手掌。
可选地,当感知数据中包括互动语音时,可以直接将该互动语音作为互动控制信息中的一部分,相当于将用户的互动语音透传至好友机器人中进行播放,如图3e中,用户小明说出的“你好,小红”会通过好友小红的机器人播放出来。
可选地,当感知数据中包括触摸传感信息时,可以确定与该触摸传感信息对应的灯控信息,该灯控信息作为互动控制信息中的一部分,用于控制好友的机器人中LED灯的展示效果。实际应用中,可以预先设定不同触摸位置与不同LED灯的展示效果之间的对应关系,从而,当检测到用户触摸了机器人机身上的某个位置后,基于该对应关系确定出作为被控对象的LED灯以及该LED灯的展示方式,灯控信息即包含作为被控对象的LED灯以及该LED灯的展示方式。比如,如图3e所示,当用户小明触摸了自己机器人的右手时,可以控制好友小红的机器人的左手上的LED灯亮,从而拉近远程互动的距离。
可选地,当感知数据中包括彩色图像时,可以对该彩色图像进行人脸表情识别,进 而从预设表情库中确定与识别出的人脸表情对应的表情对象,该表情对象作为互动控制信息中的一部分,以用于控制好友机器人显示该表情对象。其中,人脸表情识别的过程可以采用现有相关技术实现,在此不赘述。表情识别的结果可以包括高兴、生气、吃惊、恐惧、厌恶和悲伤等表情,相应地,可以预先构建包含各表情识别结果对应的表情对象的表情库,该表情对象可以是表情动画或表情图像。
Optionally, when the perception data includes a depth image, skeleton recognition can be performed on the depth image to obtain the user's joint posture information, and the robot joint posture information corresponding to it is then determined; the resulting robot joint posture information is used as part of the interaction control information to control the friend's robot to perform the corresponding action, as shown in Fig. 3d.
The skeleton recognition method may be implemented with existing techniques and is not detailed in this embodiment. What is emphasized here is that the result of skeleton recognition, the human joint posture information, consists of the motion sequences of multiple human joints and reflects their motion trajectories. Moreover, since in practice the robot's joints may not correspond one-to-one with human joints, depending on how humanoid the robot is, mapping human joint posture information to robot joint posture information may involve mapping both the joints themselves and their postures. A mapping between human joints and robot joints is therefore established in advance so that the mapping between human joint postures and robot joint postures can be determined (a code sketch of this mapping is given after the example below).
As a simple example, suppose that for the action of raising the right hand, the human joint posture information, expressed as the angles of the following joints relative to a reference line in the human coordinate system at successive moments, is:
joint 1: 30°, 40°, 50° ...;
joint 2: 20°, 30°, 40° ...;
joint 3: 40°, 50°, 60° ...;
joint 4: 40°, 50°, 60° ....
The robot joint posture information, expressed as the angles of the corresponding robot joints relative to a reference line in the robot coordinate system at successive moments, is then:
joint a: 10°, 40°, 54° ...;
joint b: 10°, 23°, 52° ....
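The sketch below illustrates the joint and posture mapping in Python. The joint table and the linear angle transform are placeholders: a real mapping depends on the robot's kinematics and, as noted above, need not be one-to-one or linear, so the sketch is not expected to reproduce the exact figures of the example.

```python
# Hypothetical mapping from human joints to robot joints (not one-to-one).
JOINT_MAP = {"joint_1": "joint_a", "joint_2": "joint_b"}

# Hypothetical per-joint angle transform: robot_angle = scale * human_angle + offset.
JOINT_TRANSFORM = {"joint_a": (0.8, -14.0), "joint_b": (0.9, -8.0)}

def map_posture(human_posture: dict) -> dict:
    """Convert human joint-angle sequences (degrees over time) into robot
    joint-angle sequences using the mapping tables above."""
    robot_posture = {}
    for human_joint, angles in human_posture.items():
        robot_joint = JOINT_MAP.get(human_joint)
        if robot_joint is None:          # joints with no robot counterpart are dropped
            continue
        scale, offset = JOINT_TRANSFORM[robot_joint]
        robot_posture[robot_joint] = [scale * a + offset for a in angles]
    return robot_posture

print(map_posture({"joint_1": [30, 40, 50], "joint_3": [40, 50, 60]}))
```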
In this embodiment, besides interacting with the anchor, a user watching live content can also interact with friends through his or her own robot and the friends' robots, which improves the user's interactive experience and enriches the robot's forms of interaction.
The interaction apparatus of one or more embodiments of the present invention is described in detail below. Those skilled in the art will understand that these apparatuses can all be constructed from commercially available hardware components configured through the steps taught in this solution.
Fig. 4 is a schematic structural diagram of an interaction apparatus provided by an embodiment of the present invention. As shown in Fig. 4, the apparatus includes a playback module 11, an acquisition module 12, and a sending module 13.
The playback module 11 is configured to play the live content selected by the user.
The acquisition module 12 is configured to obtain the user's emotion information while the user watches the live content.
The sending module 13 is configured to send the emotion information to the anchor side corresponding to the live content.
The playback module 11 is further configured to play the interactive content, corresponding to the emotion information, issued by the anchor side.
Optionally, the acquisition module 12 is specifically configured to: perform expression recognition on the captured image of the user to obtain an expression reflecting the user's emotion; and/or perform speech recognition on the captured speech of the user to obtain a sentence reflecting the user's emotion.
Optionally, the apparatus further includes a control module 14, configured to control the robot's feedback components to perform corresponding interactive operations according to the emotion information.
Optionally, the apparatus further includes an identification module 15 and a display module 16.
The identification module 15 is configured to identify whether the user is a child user.
The display module 16 is configured to display, if the identification module 15 identifies the user as a child user, the live-content selection page corresponding to child users so that the child user can select the live content.
Optionally, the apparatus further includes a collection module 17 and a determination module 18.
The collection module 17 is configured to collect, in response to the user's selection of a friend from the viewer list, perception data reflecting the user's interactive behavior.
The determination module 18 is configured to determine interaction control information according to the perception data.
The sending module 13 is further configured to send the interaction control information to the friend's robot so as to control that robot to perform the corresponding interactive operation.
Optionally, the perception data includes a color image, and the determination module 18 is specifically configured to:
perform facial expression recognition on the color image; and determine, from a preset expression library, an expression object corresponding to the recognized facial expression, the interaction control information including the expression object.
Optionally, the perception data includes a depth image, and the determination module 18 is specifically configured to:
perform skeleton recognition on the depth image to obtain the user's joint posture information; and determine the robot joint posture information corresponding to the user's joint posture information, the interaction control information including the robot joint posture information.
Optionally, the perception data includes touch sensing information, and the determination module 18 is specifically configured to:
determine light control information corresponding to the touch sensing information, the interaction control information including the light control information.
Optionally, the perception data includes interactive speech, and the interaction control information includes the interactive speech.
The apparatus shown in Fig. 4 can execute the methods of the embodiments shown in Figs. 1a, 2a, and 3a. For parts not described in detail in this embodiment, as well as for the execution process and technical effects of this technical solution, reference may be made to the descriptions of the embodiments shown in Figs. 1a, 2a, and 3a, which are not repeated here.
The internal functions and structure of the robot interaction apparatus have been described above. In one possible design, the structure of the interaction apparatus can be implemented as an electronic device, which may be a robot. As shown in Fig. 5, the robot may include a processor 21 and a memory 22, where the memory 22 is used to store a program supporting the robot in executing the interaction methods provided in the embodiments shown in Figs. 1a, 2a, and 3a, and the processor 21 is configured to execute the program stored in the memory 22.
The program includes one or more computer instructions which, when executed by the processor 21, implement the following steps:
playing the live content selected by the user;
obtaining the user's emotion information while the user watches the live content;
sending the emotion information to the anchor side corresponding to the live content;
playing the interactive content, corresponding to the emotion information, issued by the anchor side.
Optionally, the processor 21 is further configured to execute all or some of the steps of the methods shown in Figs. 1a, 2a, and 3a.
The structure of the robot may further include a communication interface 23 for communication between the robot and other devices or a communication network, such as communication between the robot and a server.
The robot may further include an audio component 24 and a sensor component 25.
The audio component 24 is configured to output and/or input audio signals. For example, the audio component 24 includes a microphone (MIC) configured to receive external audio signals when the robot is in an operating mode such as a speech-recognition mode. The received audio signals may be further stored in the memory 22 or sent via the communication interface 23. In some embodiments, the audio component 24 also includes a speaker for outputting audio signals.
The sensor component 25 includes one or more sensors. For example, the sensor component 25 includes the robot's display, and it can also detect the presence or absence of contact between the user and the robot. The sensor component 25 may include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 25 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, a temperature sensor, and so on.
In addition, the robot provided by the embodiments of the present invention has multi-degree-of-freedom mobility.
An embodiment of the present invention further provides a computer storage medium for storing the computer software instructions used by the robot, including the program involved in executing the interaction methods of the embodiments shown in Figs. 1a, 2a, and 3a.
Fig. 6 is a flowchart of a further interaction method provided by an embodiment of the present invention. The method of this embodiment can be executed by an interaction apparatus, which can be implemented as software or as a combination of software and hardware and can be provided in an anchor client. The anchor client may be installed in a user terminal device such as a smartphone, or in an intelligent robot. As shown in Fig. 6, the method may include the following steps:
401. Receive emotion information sent by the user's robot, the emotion information reflecting the user's emotion while watching the live content corresponding to the anchor side.
402. Send the interactive content triggered by the anchor according to the emotion information to the user's robot.
As described in the preceding method embodiments, a child user can watch the anchor's live content through a children's robot, and that robot can collect the child's emotion information while the child watches the live content and feed it back to the anchor, so that the anchor triggers corresponding interactive content and thereby interacts with the child.
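On the anchor side the handling of such a message can be pictured as below; `anchor_ui` and `send_to_robot` stand in for the anchor client's user interface and its channel back to the viewer's robot, neither of which is specified by the embodiment.

```python
def handle_emotion_message(emotion_info: dict, anchor_ui, send_to_robot) -> None:
    """Show the viewer's emotion to the anchor and forward the interactive
    content the anchor triggers in response."""
    anchor_ui.show_emotion(emotion_info)              # e.g. "the child looks bored"
    content = anchor_ui.wait_for_triggered_content()  # e.g. a song or a small game
    if content is not None:
        send_to_robot(content)
```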
Fig. 7 is a schematic structural diagram of an interaction apparatus corresponding to the embodiment shown in Fig. 6. As shown in Fig. 7, the apparatus includes a receiving module 31 and a sending module 32.
The receiving module 31 is configured to receive the emotion information sent by the user's robot, the emotion information reflecting the user's emotion while watching the live content corresponding to the anchor side.
The sending module 32 is configured to send the interactive content triggered by the anchor according to the emotion information to the robot.
The apparatus shown in Fig. 7 can execute the method of the embodiment shown in Fig. 6. For parts not described in detail in this embodiment, as well as for the execution process and technical effects, reference may be made to the description of the embodiment shown in Fig. 6, which is not repeated here.
In one possible design, the structure of the interaction apparatus shown in Fig. 7 can be implemented as an electronic device, namely a user terminal device such as a smartphone. As shown in Fig. 8, the user terminal device may include a processor 41 and a memory 42, where the memory 42 is used to store a program supporting the user terminal device in executing the interaction method provided in the embodiment shown in Fig. 6, and the processor 41 is configured to execute the program stored in the memory 42.
The program includes one or more computer instructions which, when executed by the processor 41, implement the following steps:
receiving the emotion information sent by the user's robot, the emotion information reflecting the user's emotion while watching the live content corresponding to the anchor side;
sending the interactive content triggered by the anchor according to the emotion information to the robot.
Optionally, the processor 41 is further configured to execute all or some of the steps of the method shown in Fig. 6.
The structure of the user terminal device may further include a communication interface 43 for communication between the user terminal device and other devices or a communication network.
An embodiment of the present invention further provides a computer storage medium for storing the computer software instructions used by the user terminal device, including the program involved in executing the interaction method of the method embodiment shown in Fig. 6.
Fig. 9 is an interaction flowchart of an interaction method provided by an embodiment of the present invention. As shown in Fig. 9, it may include the following steps:
501. Robot A identifies the current user as a child user and displays the live-content selection page corresponding to child users for the user to select live content.
502. Robot A plays the live content selected by the user through the viewer client.
503. Robot A obtains the user's emotion information while the user watches the live content.
504. Robot A controls its own feedback components to perform corresponding interactive operations according to the user's emotion information.
505. Robot A sends the user's emotion information to the anchor client corresponding to the live content.
506. The anchor client sends the interactive content triggered by the anchor according to the user's emotion information to robot A.
507. Robot A plays the interactive content through the viewer client.
508. In response to the user's selection of a friend from the viewer list, robot A collects perception data reflecting the user's interactive behavior and determines interaction control information according to it.
509. Robot A sends the interaction control information to robot B corresponding to the friend.
510. Robot B performs the corresponding interactive operation according to the interaction control information.
The apparatus embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without creative effort.
From the description of the above implementations, those skilled in the art can clearly understand that each implementation can be realized by adding the necessary general-purpose hardware platform, or by a combination of hardware and software. Based on this understanding, the essence of the above technical solutions, or the part contributing over the prior art, can be embodied in the form of a computer product, and the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the methods, devices (systems), and computer program products according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data-processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data-processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data-processing device to work in a particular manner, such that the instructions stored in that computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data-processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface, and memory.
The memory may include non-persistent storage in a computer-readable medium, random access memory (RAM), and/or non-volatile memory such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements of some of their technical features, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (12)

  1. An interaction method applied to a robot, comprising:
    playing live content selected by a user;
    obtaining emotion information of the user while the user watches the live content;
    sending the emotion information to an anchor side corresponding to the live content;
    playing interactive content, corresponding to the emotion information, issued by the anchor side.
  2. The method according to claim 1, wherein obtaining the emotion information of the user while the user watches the live content comprises:
    performing expression recognition on a captured image of the user to obtain an expression reflecting the user's emotion; and/or
    performing speech recognition on captured speech of the user to obtain a sentence reflecting the user's emotion.
  3. The method according to claim 1, further comprising:
    controlling a feedback component of the robot to perform a corresponding interactive operation according to the emotion information.
  4. The method according to any one of claims 1 to 3, further comprising, before playing the live content selected by the user:
    identifying whether the user is a child user;
    if so, displaying a live-content selection page corresponding to the child user so that the child user can select the live content.
  5. The method according to any one of claims 1 to 3, further comprising:
    collecting, in response to the user's selection of a friend from a viewer list, perception data reflecting the user's interactive behavior;
    determining interaction control information according to the perception data;
    sending the interaction control information to a robot corresponding to the friend, so as to control the robot corresponding to the friend to perform a corresponding interactive operation.
  6. The method according to claim 5, wherein the perception data comprises a color image, and determining the interaction control information according to the perception data comprises:
    performing facial expression recognition on the color image;
    determining, from a preset expression library, an expression object corresponding to the recognized facial expression, the interaction control information comprising the expression object.
  7. The method according to claim 5, wherein the perception data comprises a depth image, and determining the interaction control information according to the perception data comprises:
    performing skeleton recognition on the depth image to obtain joint posture information of the user;
    determining robot joint posture information corresponding to the joint posture information of the user, the interaction control information comprising the robot joint posture information.
  8. The method according to claim 5, wherein the perception data comprises touch sensing information, and determining the interaction control information according to the perception data comprises:
    determining light control information corresponding to the touch sensing information, the interaction control information comprising the light control information.
  9. The method according to claim 5, wherein the perception data comprises interactive speech, and the interaction control information comprises the interactive speech.
  10. An electronic device, comprising a memory and a processor, wherein
    the memory is configured to store one or more computer instructions which, when executed by the processor, implement the interaction method according to any one of claims 1 to 9.
  11. An interaction method applied to an anchor side, comprising:
    receiving emotion information sent by a robot of a user, the emotion information reflecting the emotion of the user while watching live content corresponding to the anchor side;
    sending interactive content, triggered by an anchor according to the emotion information, to the robot.
  12. An electronic device, comprising a memory and a processor, wherein
    the memory is configured to store one or more computer instructions which, when executed by the processor, implement the interaction method according to claim 11.
PCT/CN2018/108308 2017-10-09 2018-09-28 互动方法和设备 WO2019072104A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US16/646,665 US20200413135A1 (en) 2017-10-09 2018-09-28 Methods and devices for robotic interactions
EP18865693.8A EP3696648A4 (en) 2017-10-09 2018-09-28 INTERACTION PROCEDURE AND DEVICE
JP2020510613A JP7254772B2 (ja) 2017-10-09 2018-09-28 ロボットインタラクションのための方法及びデバイス

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710929662.0 2017-10-09
CN201710929662.0A CN109635616B (zh) 2017-10-09 2017-10-09 互动方法和设备

Publications (1)

Publication Number Publication Date
WO2019072104A1 true WO2019072104A1 (zh) 2019-04-18

Family

ID=66051089

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/108308 WO2019072104A1 (zh) 2017-10-09 2018-09-28 互动方法和设备

Country Status (6)

Country Link
US (1) US20200413135A1 (zh)
EP (1) EP3696648A4 (zh)
JP (1) JP7254772B2 (zh)
CN (1) CN109635616B (zh)
TW (1) TW201916005A (zh)
WO (1) WO2019072104A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112887746A (zh) * 2021-01-22 2021-06-01 维沃移动通信(深圳)有限公司 直播互动方法及装置
CN113645473A (zh) * 2021-07-21 2021-11-12 广州心娱网络科技有限公司 一种气氛机器人的控制方法及系统

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11675360B2 (en) * 2017-10-30 2023-06-13 Sony Corporation Information processing apparatus, information processing method, and program
CN110677685B (zh) * 2019-09-06 2021-08-31 腾讯科技(深圳)有限公司 网络直播显示方法及装置
CN112733763B (zh) * 2021-01-15 2023-12-05 北京华捷艾米科技有限公司 人机语音交互的实现方法及装置、电子设备、存储介质
CN113093914B (zh) * 2021-04-21 2022-10-28 广东电网有限责任公司电力科学研究院 一种基于vr的高临场视觉感知方法及装置
CN113438491B (zh) * 2021-05-28 2022-05-17 广州方硅信息技术有限公司 直播互动方法、装置、服务器及存储介质
CN113784155B (zh) * 2021-08-12 2024-08-20 杭州阿里云飞天信息技术有限公司 基于直播间的数据处理方法及装置
CN113656638B (zh) * 2021-08-16 2024-05-07 咪咕数字传媒有限公司 一种观看直播的用户信息处理方法、装置及设备
CN114170356B (zh) * 2021-12-09 2022-09-30 米奥兰特(浙江)网络科技有限公司 线上路演方法、装置、电子设备及存储介质
CN114393582B (zh) * 2022-01-20 2024-06-25 深圳市注能科技有限公司 一种机器人及其控制方法、系统及存储设备
CN115278286B (zh) * 2022-08-02 2024-06-28 抖音视界有限公司 一种信息处理方法及装置
CN116027907A (zh) * 2023-02-01 2023-04-28 浙江极氪智能科技有限公司 程序控制方法、装置、设备及存储介质
CN116271786B (zh) * 2023-02-08 2023-10-13 广州市邦杰软件科技有限公司 一种动漫游戏机的界面交互控制方法及装置

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020035405A1 (en) * 1997-08-22 2002-03-21 Naohiro Yokoo Storage medium, robot, information processing device and electronic pet system
CN103209201A (zh) * 2012-01-16 2013-07-17 上海那里信息科技有限公司 基于社交关系的虚拟化身互动系统和方法
CN103531216A (zh) * 2012-07-04 2014-01-22 瀚宇彩晶股份有限公司 影音播放装置以及方法
US20150045007A1 (en) * 2014-01-30 2015-02-12 Duane Matthew Cash Mind-Controlled virtual assistant on a smartphone device
CN105045115A (zh) * 2015-05-29 2015-11-11 四川长虹电器股份有限公司 一种控制方法及智能家居设备
CN105511260A (zh) * 2015-10-16 2016-04-20 深圳市天博智科技有限公司 一种幼教陪伴型机器人及其交互方法和系统
CN105898509A (zh) * 2015-11-26 2016-08-24 乐视网信息技术(北京)股份有限公司 一种实现播放视频时的交互方法及系统
CN106791893A (zh) * 2016-11-14 2017-05-31 北京小米移动软件有限公司 视频直播方法及装置
CN107053191A (zh) * 2016-12-31 2017-08-18 华为技术有限公司 一种机器人,服务器及人机互动方法

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4556088B2 (ja) 2001-05-02 2010-10-06 ソニー株式会社 画像処理システム、画像処理装置及びその制御方法
JP4014044B2 (ja) 2003-01-28 2007-11-28 株式会社国際電気通信基礎技術研究所 コミュニケーションロボットおよびそれを用いたコミュニケーションシステム
JP2008134992A (ja) 2006-11-01 2008-06-12 Hitachi Ltd コンテンツダウンロード方法及び端末装置
JP2012155616A (ja) 2011-01-27 2012-08-16 Panasonic Corp コンテンツ提供システム、コンテンツ提供方法、及びコンテンツ提供プログラム
US9035743B2 (en) * 2011-12-01 2015-05-19 New York University Song selection based upon axial pen pressure
US20140095504A1 (en) * 2012-09-28 2014-04-03 United Video Properties, Inc. Systems and methods for cataloging user-generated content
US20150326922A1 (en) * 2012-12-21 2015-11-12 Viewerslogic Ltd. Methods Circuits Apparatuses Systems and Associated Computer Executable Code for Providing Viewer Analytics Relating to Broadcast and Otherwise Distributed Content
JP6129119B2 (ja) 2014-06-04 2017-05-17 株式会社ソニー・インタラクティブエンタテインメント 画像処理装置、画像処理システム、撮像装置、および画像処理方法
JP6351528B2 (ja) 2014-06-05 2018-07-04 Cocoro Sb株式会社 行動制御システム及びプログラム
WO2016011159A1 (en) * 2014-07-15 2016-01-21 JIBO, Inc. Apparatus and methods for providing a persistent companion device
CN106874265B (zh) * 2015-12-10 2021-11-26 深圳新创客电子科技有限公司 一种与用户情绪匹配的内容输出方法、电子设备及服务器
CN106412710A (zh) * 2016-09-13 2017-02-15 北京小米移动软件有限公司 直播中通过图形标签进行信息交互的方法及装置
CN106878820B (zh) * 2016-12-09 2020-10-16 北京小米移动软件有限公司 直播互动方法及装置
CN106625678B (zh) * 2016-12-30 2017-12-08 首都师范大学 机器人表情控制方法和装置
CN107071584B (zh) * 2017-03-14 2019-12-24 北京潘达互娱科技有限公司 直播连麦方法及装置
CN107197384B (zh) * 2017-05-27 2019-08-02 北京光年无限科技有限公司 应用于视频直播平台的虚拟机器人多模态交互方法和系统

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020035405A1 (en) * 1997-08-22 2002-03-21 Naohiro Yokoo Storage medium, robot, information processing device and electronic pet system
CN103209201A (zh) * 2012-01-16 2013-07-17 上海那里信息科技有限公司 基于社交关系的虚拟化身互动系统和方法
CN103531216A (zh) * 2012-07-04 2014-01-22 瀚宇彩晶股份有限公司 影音播放装置以及方法
US20150045007A1 (en) * 2014-01-30 2015-02-12 Duane Matthew Cash Mind-Controlled virtual assistant on a smartphone device
CN105045115A (zh) * 2015-05-29 2015-11-11 四川长虹电器股份有限公司 一种控制方法及智能家居设备
CN105511260A (zh) * 2015-10-16 2016-04-20 深圳市天博智科技有限公司 一种幼教陪伴型机器人及其交互方法和系统
CN105898509A (zh) * 2015-11-26 2016-08-24 乐视网信息技术(北京)股份有限公司 一种实现播放视频时的交互方法及系统
CN106791893A (zh) * 2016-11-14 2017-05-31 北京小米移动软件有限公司 视频直播方法及装置
CN107053191A (zh) * 2016-12-31 2017-08-18 华为技术有限公司 一种机器人,服务器及人机互动方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3696648A4 *


Also Published As

Publication number Publication date
US20200413135A1 (en) 2020-12-31
JP2020537206A (ja) 2020-12-17
EP3696648A4 (en) 2021-07-07
CN109635616B (zh) 2022-12-27
CN109635616A (zh) 2019-04-16
EP3696648A1 (en) 2020-08-19
TW201916005A (zh) 2019-04-16
JP7254772B2 (ja) 2023-04-10


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18865693

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020510613

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018865693

Country of ref document: EP

Effective date: 20200511