CN113239172A - Conversation interaction method, device, equipment and storage medium in robot group - Google Patents
Conversation interaction method, device, equipment and storage medium in robot group
- Publication number
- CN113239172A (application CN202110641462.1A)
- Authority
- CN
- China
- Prior art keywords
- instruction
- target
- robot
- session
- interaction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/338—Presentation of query results
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
Abstract
The application provides a conversation interaction method, apparatus, device, and computer-readable storage medium for a robot group. The method includes: presenting a conversation interface corresponding to a robot group comprising a plurality of robots; in the conversation interface, receiving an input conversation instruction for indicating conversation interaction with a target robot, together with input content associated with robot names in the robot group; in response to the conversation instruction, presenting, while the content is being input, at least one candidate robot recommended based on that content; receiving a selection operation for a candidate robot, taking the selected candidate robot as the target robot, and presenting a plurality of recommended conversation interaction instructions associated with the target robot; and, in response to a selection operation for a target interaction instruction among the plurality of conversation interaction instructions, outputting the result of the target robot executing the target interaction instruction. Through the application, the conversation efficiency between the user and the robots can be improved.
Description
Technical Field
The present application relates to computer technologies, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for session interaction in a robot group.
Background
In the process of session interaction with a robot group including multiple robots, different robots respond to different session interaction instructions. When a user is not fully familiar with the instructions a given robot supports, achieving a particular interaction goal often requires multiple trial session inputs, so the efficiency of the session with the robots is low.
Disclosure of Invention
The embodiments of the application provide a conversation interaction method, apparatus, device, and computer-readable storage medium for a robot group, which can improve the conversation efficiency between a user and the robots in the robot group.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a conversation interaction method in a robot group, which comprises the following steps:
presenting a conversation interface corresponding to a robot group comprising a plurality of robots;
in the conversation interface, receiving input conversation instructions for indicating conversation interaction with a target robot and input contents associated with robot names in the robot group;
presenting, in response to the conversation instruction, at least one candidate robot recommended based on the content in the course of inputting the content;
receiving a selection operation aiming at the candidate robot, taking the selected candidate robot as the target robot, and presenting a plurality of recommended conversation interaction instructions associated with the target robot;
and outputting the result of the target robot executing the target interactive instruction in response to the selection operation of the target interactive instruction in the plurality of session interactive instructions.
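The steps above can be sketched in code. The following is a hypothetical illustration only; the `Robot` class, the function names, and the substring-match recommendation rule are assumptions, not details given in the patent.

```python
# Hypothetical sketch of the claimed flow: recommend candidate robots while
# the user types a name, then recommend that robot's instructions.
from dataclasses import dataclass
from typing import List

@dataclass
class Robot:
    name: str
    instructions: List[str]  # session interaction instructions this robot supports

def recommend_robots(group: List[Robot], partial_name: str) -> List[Robot]:
    """Recommend candidate robots whose names contain the partial input."""
    return [r for r in group if partial_name in r.name]

def recommend_instructions(robot: Robot) -> List[str]:
    """Recommend the session interaction instructions associated with a robot."""
    return list(robot.instructions)

# The user types part of a robot name, selects a candidate as the target
# robot, and is then shown that robot's recommended instructions.
group = [Robot("weather-bot", ["/forecast", "/today"]),
         Robot("build-bot", ["/build", "/status"])]
candidates = recommend_robots(group, "weather")
target = candidates[0]
options = recommend_instructions(target)
```

Selecting one of `options` would then trigger execution by the target robot, as the final step describes.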
The embodiment of the present application provides a session interaction device in a robot group, including:
the first presentation module is used for presenting a conversation interface corresponding to a robot group comprising a plurality of robots;
the input receiving module is used for receiving an input conversation instruction used for indicating conversation interaction with a target robot and input content associated with robot names in the robot group in the conversation interface;
a second presenting module, which is used for responding to the conversation instruction and presenting at least one candidate robot recommended based on the content in the process of inputting the content;
a third presenting module, configured to receive a selection operation for the candidate robot, take the selected candidate robot as the target robot, and present a plurality of recommended session interaction instructions associated with the target robot;
and the result output module is used for responding to the selection operation of a target interactive instruction in the plurality of session interactive instructions and outputting the result of the target robot executing the target interactive instruction.
In the above solution, the result output module is further configured to, when the instruction content of the target interactive instruction includes an instruction object, respond to a selection operation for the target interactive instruction, present the instruction content and guidance information corresponding to the instruction content in an input box corresponding to the session instruction;
the guidance information is used for guiding input of an object name corresponding to the instruction object;
and when the object name input in the input box is received, outputting the result of the target robot executing the target interaction instruction aiming at the target instruction object corresponding to the object name.
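As a minimal illustration of pre-filling the input box with instruction content plus guidance information, consider the sketch below; the angle-bracket placeholder format and the example instruction are assumptions, not the patent's actual presentation.

```python
# Sketch only: "<...>" as the guidance-placeholder format is an assumption.
def fill_input_box(instruction_content: str, guidance: str) -> str:
    """Pre-fill the session input box with instruction content and guidance."""
    return f"{instruction_content} <{guidance}>"

prompt = fill_input_box("/assign", "enter the object name")
# The user replaces the guidance placeholder with a concrete object name.
message = prompt.replace("<enter the object name>", "task-42")
```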
In the above scheme, the apparatus further comprises:
the object determination module is used for receiving the content which is input by the current account in the input box and is associated with the object name;
presenting at least one candidate instruction object recommended based on the input content in the process of inputting the content associated with the object name;
and in response to the selection operation for the candidate instruction object, taking the selected candidate instruction object as the target instruction object.
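The candidate-instruction-object recommendation described above can be sketched as a simple autocomplete; the prefix-match rule and the example object names are assumptions.

```python
# Minimal autocomplete sketch for recommending instruction objects as the
# user types; the prefix-match rule is an assumption.
from typing import List

def recommend_objects(known_objects: List[str], typed: str) -> List[str]:
    """Recommend instruction objects whose names start with the typed text."""
    return [name for name in known_objects if name.startswith(typed)]

candidates = recommend_objects(["task-41", "task-42", "report-7"], "task")
```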
In the foregoing solution, the result output module is further configured to, when the instruction content of the target interaction instruction does not include an instruction object, respond to a selection operation for the target interaction instruction, present, in the session interface, a session message of the target robot corresponding to the current account, where the session message includes the instruction content, and
presenting a reply message to the session message by the target robot, the reply message including a result of the target robot executing the target interaction instruction.
In the above solution, the result includes a plurality of sub-session interactive instructions, and the instruction content of the sub-session interactive instructions includes an instruction object, and the apparatus further includes:
the sub-conversation interactive result output module is used for presenting the instruction content of the target sub-conversation interactive instruction and the guiding information corresponding to the instruction content in the input frame corresponding to the target sub-conversation interactive instruction when receiving a selection instruction aiming at the target sub-conversation interactive instruction in the plurality of sub-conversation interactive instructions;
the guidance information is used for guiding input of an object name corresponding to the instruction object included in the instruction content of the target sub-session interaction instruction;
and when the object name input in the input box is received, outputting the result of the target robot executing the target sub-interactive instruction aiming at the target instruction object corresponding to the object name.
In the foregoing solution, before presenting the instruction content of the target sub-session interactive instruction and the guidance information corresponding to the instruction content in the input box corresponding to the target sub-session interactive instruction, the apparatus further includes:
a selection instruction receiving module used for presenting the use function item aiming at each sub-session interactive instruction;
and receiving a selection instruction aiming at a target sub-session interactive instruction in the plurality of sub-session interactive instructions in response to the trigger operation aiming at the use function item corresponding to the target sub-session interactive instruction.
In the above solution, after presenting the recommended plurality of conversational interaction instructions associated with the target robot, the apparatus further comprises:
the instruction updating module is used for receiving input instruction content associated with the conversation instruction corresponding to the target robot;
and updating the recommended plurality of conversation interaction instructions along with the input of the instruction content, so that the content of the updated conversation interaction instructions is matched with the instruction content.
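Updating the recommended instruction list as instruction content is typed can be sketched as live filtering; the substring interpretation of "matched" is an assumption.

```python
# Sketch of live filtering of recommended instructions while the user types;
# treating "matched" as a substring test is an assumption.
from typing import List

def update_recommendations(instructions: List[str], typed: str) -> List[str]:
    """Keep only the instructions whose content matches the typed content."""
    return [i for i in instructions if typed in i]

recs = update_recommendations(["/build", "/status", "/stop"], "st")
```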
In the above scheme, the third presenting module is further configured to present a text editing box for editing session content of session interaction;
and receiving a conversation instruction for indicating conversation interaction with the target robot in response to the directional character, which is input in the text editing box by the current account, for the target robot.
In the above solution, after presenting the recommended plurality of conversational interaction instructions associated with the target robot, the apparatus further comprises:
the instruction cancellation module is used for receiving deletion operation aiming at the directional character;
canceling the presented plurality of conversational interaction instructions associated with the target robot in response to the deletion operation.
In the above scheme, the third presenting module is further configured to obtain the use frequency of each session interaction instruction associated with the target robot;
and presenting the session interaction instructions in descending order of use frequency, so that more frequently used instructions appear first.
In the above scheme, the third presenting module is further configured to obtain an interval between last usage time and current time of each session interaction instruction associated with the target robot, respectively;
and presenting the session interaction instructions in ascending order of the interval, so that more recently used instructions appear first.
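The two orderings just described (by use frequency and by interval since last use) can be sketched as follows; the data shapes, with frequency counts and last-use timestamps keyed by instruction, are assumptions.

```python
# Sketches of the two presentation orders: frequency descending, and
# interval-to-last-use ascending. Missing entries default to 0 (never used).
from typing import Dict, List

def by_frequency(instructions: List[str], freq: Dict[str, int]) -> List[str]:
    """Most frequently used instruction first."""
    return sorted(instructions, key=lambda i: freq.get(i, 0), reverse=True)

def by_recency(instructions: List[str], last_used: Dict[str, int], now: int) -> List[str]:
    """Smallest interval between last use and the current time first."""
    return sorted(instructions, key=lambda i: now - last_used.get(i, 0))

freq = {"/build": 5, "/status": 9, "/stop": 1}
frequency_order = by_frequency(list(freq), freq)
recency_order = by_recency(list(freq), {"/build": 90, "/status": 50}, now=100)
```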
In the above solution, the third presenting module is further configured to present, in the session interface, a plurality of recommended session interaction instructions associated with the target robot in a floating layer or pop-up window manner; or,
presenting the recommended plurality of conversational interaction instructions associated with the target robot through a sub-interface that is independent of the conversational interface.
In the foregoing solution, before the presenting the recommended plurality of conversational interaction instructions associated with the target robot, the apparatus further includes:
the command determining module is used for acquiring an incidence relation table which stores incidence relations between the robots and the session interaction commands;
and finding a plurality of session interaction instructions associated with the target robot in the association relation table.
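The association relation table described above can be sketched as a plain mapping from robot name to its associated session interaction instructions; the dictionary form and the example entries are assumptions.

```python
# Sketch of the association relation table: robot name -> associated
# session interaction instructions. Entries are illustrative only.
from typing import Dict, List

association_table: Dict[str, List[str]] = {
    "weather-bot": ["/forecast", "/today"],
    "build-bot": ["/build", "/status"],
}

def find_instructions(table: Dict[str, List[str]], robot_name: str) -> List[str]:
    """Find the session interaction instructions associated with the target robot."""
    return table.get(robot_name, [])

instructions = find_instructions(association_table, "build-bot")
```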
In the foregoing solution, before outputting a result of the target robot executing the target interactive instruction, the apparatus further includes:
the session sending module is used for presenting sending function items corresponding to the target interaction instruction;
responding to the trigger operation for the sending function item, sending a session message carrying the target interaction instruction, and presenting the session message of the current account corresponding to the target robot in the session interface.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the conversation interaction method in the robot group provided by the embodiment of the application when the executable instructions stored in the memory are executed.
The embodiment of the application provides a computer-readable storage medium, which stores executable instructions for causing a processor to execute the method for implementing session interaction in a robot group provided by the embodiment of the application.
The embodiment of the application has the following beneficial effects:
in the process of inputting content associated with a robot name, at least one recommended candidate robot is presented for the user to select, so the target robot can be determined without the user typing its complete name word by word. When the target robot is determined, a plurality of session interaction instructions associated with it are presented for the user to select; the user does not need to manually type the instruction content of the target interaction instruction, but can simply select it from the presented instructions to obtain the result of the robot executing it. In this way, during session interaction with multiple robots in the robot group, the user only needs to input partial content related to a robot and perform selection operations on the step-by-step recommended options to obtain the corresponding execution result. This greatly reduces the number of session inputs needed to achieve a given interaction purpose and improves the conversation efficiency between the user and the robots.
Drawings
Fig. 1 is a schematic diagram of an alternative architecture of a conversation interaction system 100 in a robot group according to an embodiment of the present application;
fig. 2 is an alternative schematic structural diagram of an electronic device 500 provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of a session interaction method in a robot group according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a recommended robot display provided in an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a display of a recommended instruction according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a display of an execution result of an interactive instruction according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a display of an execution result of an interactive instruction according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating a display of an execution result of an interactive instruction according to an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating a display of an execution result of an interactive instruction according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a display of an execution result of an interactive instruction according to an embodiment of the present application;
fig. 11 is a flowchart illustrating a method for session interaction in a robot group according to an embodiment of the present disclosure;
fig. 12 is a flowchart illustrating a method for session interaction in a robot group according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of a conversation interaction apparatus in a robot group according to an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the description that follows, the terms "first" and "second" are used merely to distinguish between similar objects and do not represent a particular ordering. It is understood that "first" and "second" may be interchanged where permitted, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before the embodiments of the present application are described in further detail, the terms used in the embodiments are explained; the following explanations apply to these terms.
1) Client: an application program running in the terminal for providing various services, such as a video playing client, an instant messaging client, a game client, and the like.
2) In response to: indicates the condition or state on which a performed operation depends. When the condition or state is satisfied, the operation or operations may be performed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
Referring to fig. 1, fig. 1 is an alternative architecture diagram of a conversation interaction system 100 in a robot group provided in this embodiment of the present application. To support an exemplary application, terminals (terminal 400-1 and terminal 400-2 are shown) are connected to a server 200 through a network 300; the network 300 may be a wide area network, a local area network, or a combination of the two, and implements data transmission using a wireless link.
The terminal can be any of various user terminals such as a smart phone, a tablet computer, or a notebook computer, as well as a desktop computer, a television, or a combination of any two or more of these data processing devices. The server 200 is a background server corresponding to a client on the terminal, and may be an independently configured server supporting various services, a server cluster, a cloud server, or the like.
In practical application, the terminal is provided with a client, such as a video playing client, an instant messaging client, or a game client. When a user opens the client on a terminal to converse with the robots in the robot group, the terminal presents a conversation interface corresponding to the robot group comprising a plurality of robots. In the conversation interface, the terminal receives an input conversation instruction for indicating conversation interaction with a target robot and input content associated with robot names in the robot group. In response to the conversation instruction, while the content is being input, the terminal sends a robot recommendation request to the server 200; the server 200 determines at least one recommended candidate robot based on the request and the input content and returns it to the terminal, which presents the candidates. On receiving a selection operation for a candidate robot, the terminal takes the selected candidate as the target robot and sends the server 200 a request for the session interaction instructions of the target robot; the server obtains a plurality of recommended session interaction instructions associated with the target robot and returns them to the terminal for presentation. Finally, in response to a selection operation for a target interaction instruction among the plurality of session interaction instructions, the terminal sends a session message carrying the target interaction instruction to the target robot and outputs the result of the target robot executing it.
Referring to fig. 2, fig. 2 is an optional schematic structural diagram of an electronic device 500 provided in the embodiment of the present application. In practical applications, the electronic device 500 may be the terminal or the server 200 in fig. 1; taking the electronic device as the terminal shown in fig. 1 as an example, an electronic device implementing the session interaction method in the robot group is described below. The electronic device 500 shown in fig. 2 includes: at least one processor 510, memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540, which is used to enable communications among these components. In addition to a data bus, the bus system 540 includes a power bus, a control bus, and a status signal bus; for clarity of illustration, however, the various buses are all labeled as bus system 540 in fig. 2.
The processor 510 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components; the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for reaching other computing devices via one or more (wired or wireless) network interfaces 520; exemplary network interfaces 520 include Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the conversation interaction device in the robot group provided by the embodiment of the present application may be implemented in software, and fig. 2 illustrates a conversation interaction device 555 in the robot group stored in a memory 550, which may be software in the form of programs and plug-ins, and includes the following software modules: the first presenting module 5551, the input receiving module 5552, the second presenting module 5553, the third presenting module 5554 and the result output module 5555 are logical and thus can be arbitrarily combined or further split according to the implemented functions, which will be explained below.
In other embodiments, the session interaction device in the robot group provided in this embodiment may be implemented in hardware. As an example, it may be a processor in the form of a hardware decoding processor programmed to perform the session interaction method in the robot group provided by this embodiment; for example, the hardware decoding processor may employ one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic components.
Based on the above description of the session interaction system in the robot group, the session interaction method in the robot group provided in the embodiment of the present application is described below. In actual implementation, the method may be implemented by the terminal or the server 200 shown in fig. 1 alone, or by the terminal and the server 200 in cooperation. With reference to fig. 1 and fig. 3, fig. 3 is a schematic flow chart of the session interaction method in the robot group; the description below takes the terminal shown in fig. 1 implementing the method alone as an example.
Step 101: the terminal presents a conversation interface corresponding to a robot group including a plurality of robots.
In practical application, a client is installed on the terminal, such as a video playing client, an instant messaging client, or a game client. The user can converse with the robots in the robot group through the client; when the user opens the client on the terminal to do so, the terminal presents the corresponding conversation interface. In practice, the robot group may also include members other than robots; for example, for a game community, the members of the robot group may include both the users who have joined the game community and a plurality of robots providing services for games in that community.
A robot is an object capable of interactive application with a user in a session and can receive and send messages. Different robots have different functions; that is, the session interaction instructions to which different robots can respond may differ. In practical application, a user can add the corresponding robot to the robot group according to actual requirements and conduct the corresponding session interaction with it. For example, suppose a user adds a notification robot and a chat robot to a robot group: the user can receive system messages pushed by the notification robot and can also conduct session interaction with the chat robot. There may be many types of chat robot. When the user sends a session message carrying a session interaction instruction to a chat robot, and the chat robot is determined to be able to respond to that instruction, the chat robot feeds back a reply message corresponding to the session message, thereby responding to the user's session interaction instruction.
Step 102: in the session interface, receive an input session instruction for instructing session interaction with a target robot, together with input content associated with the names of the robots in the robot group.
Here, in the group chat scenario, if a user wants to invoke a specific function of a certain robot, the user needs to use the session interaction instruction corresponding to that function; that is, the user needs to input the relevant session content capable of triggering the session interaction instruction.
In some embodiments, the terminal may receive the input session instruction for instructing session interaction with the target robot by: presenting a text editing box for editing the session content of the session interaction; and, in response to a directional character for the target robot input in the text editing box by the current account, receiving the session instruction for instructing session interaction with the target robot.
Here, when the terminal receives a directional character (e.g. an @ symbol) input by the user, a conversation instruction may be triggered.
Step 103: in response to the conversation instruction, at least one candidate robot recommended based on the content is presented in the process of inputting the content.
Here, while the user inputs a directional character (such as the @ symbol) and content associated with the name of the target robot (such as its initial character or a partial character) in the text editing box, the terminal, in response to the user's input operation, sends an identification request carrying the input content to the server in real time. Based on the identification request, the server matches the input content against the member names stored in advance for the robot group (including robot names and the names of members other than robots), determines the recommended members whose names contain the input content, and returns the recommended members to the terminal for presentation. After receiving the recommended members returned by the server, the terminal pops up a recommendation page and presents the recommended members in real time; the recommended members include at least one candidate robot for the user to select. As the content input by the user changes, the recommended members presented in the recommendation page change accordingly, so that the members presented in real time always match the input content.
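The member-matching step above can be sketched as a simple case-insensitive containment filter. This is a minimal illustration under assumed names, not the patent's implementation; the function name and member list are hypothetical.

```python
def recommend_members(typed, member_names):
    """Return group members (robots and other members) whose names
    contain the content typed after the directional character."""
    typed = typed.lower()
    return [name for name in member_names if typed in name.lower()]

# As the typed content grows, the recommendation list narrows accordingly.
members = ["Mudac", "Moderator Bot", "Alice", "Bob"]
print(recommend_members("m", members))   # names containing "m"
print(recommend_members("mu", members))  # narrows to names containing "mu"
```

Re-running the filter on every keystroke is what keeps the recommendation page in step with the input content.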
Referring to fig. 4, fig. 4 is a schematic display diagram of a recommended robot provided in the embodiment of the present application. In the process of inputting content, for example when the user inputs the @ symbol and the character M, the terminal responds to the trigger operation, pops up the recommendation page, and presents a plurality of recommended members whose names contain the character M for the user to select, such as the candidate robot "Mudac" and the group members among the other members of the robot group whose names contain the character M.
Step 104: a selection operation for a candidate robot is received, the selected candidate robot is taken as a target robot, and a recommended plurality of conversational interaction instructions associated with the target robot are presented.
In some embodiments, before presenting the recommended plurality of session interaction instructions associated with the target robot, the terminal may further obtain an association table storing the associations between robots and session interaction instructions, and find in the association table the plurality of session interaction instructions associated with the target robot.
Here, when the user selects a target robot from the at least one candidate robot, the terminal, in response to the selection operation, looks up the session interaction instructions associated with the selected target robot in a locally stored association table between robots and session interaction instructions. In practical application, the association table may instead be stored in the server: the terminal, in response to the selection operation, sends to the server an instruction acquisition request for the target robot carrying the identifier of the target robot; based on that identifier, the server obtains from the association table the plurality of session interaction instructions corresponding to the identifier, which serve as the plurality of session interaction instructions associated with the target robot, and returns them to the terminal for recommendation presentation.
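The association table lookup described here can be sketched as a mapping from robot identifier to instruction list; the identifiers and instruction names below are illustrative assumptions, not part of the original disclosure.

```python
# Hypothetical association table between robot identifiers and the
# session interaction instructions each robot can respond to.
INSTRUCTION_TABLE = {
    "robot_mudac": ["/help", "/query score", "/play music"],
    "robot_notify": ["/subscribe", "/unsubscribe"],
}

def instructions_for(robot_id):
    """Look up the session interaction instructions associated with a
    robot; an unknown robot yields an empty list."""
    return INSTRUCTION_TABLE.get(robot_id, [])
```

Whether the table lives on the terminal or on the server, the lookup itself is the same; only the transport (local read versus instruction acquisition request) differs.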
In some embodiments, the terminal may present the recommended plurality of conversational interaction instructions associated with the target robot by: presenting a plurality of recommended conversation interaction instructions associated with the target robot in a conversation interface in a floating layer or popup window mode; alternatively, the recommended plurality of conversational interaction instructions associated with the target robot are presented through a sub-interface that is separate from the conversational interface.
In practical application, the floating layer or pop-up window may have a certain transparency and can be moved within the session interface. Its size may be set according to the actual application: when there are many session interaction instructions, the floating layer or pop-up window may be enlarged to display all of them, or kept smaller to display only some of them; in the latter case it is provided with a progress drag bar, and the remaining session interaction instructions can be displayed by dragging the bar. The sub-interface may also have a certain transparency and is positioned above the session interface, so that video content played in the session interface can be viewed through it; the sub-interface may occupy only a part of the session interface or the whole of it. Presenting the session interaction instructions through a sub-interface with a certain transparency therefore lets the user see more information, meeting the user's need to acquire information quickly. Meanwhile, as the user performs a sliding operation, the presentation position of the sub-interface on the session interface moves synchronously.
Referring to fig. 5, fig. 5 is a schematic display diagram of a recommendation instruction provided in the embodiment of the present application. When the user selects the candidate robot "Mudac" as the target robot, the terminal presents a recommendation pop-up window in response to the selection operation and presents in it a plurality of recommended session interaction instructions associated with the target robot "Mudac" for the user to select from.
In some embodiments, the terminal may present the recommended plurality of session interaction instructions associated with the target robot by: respectively acquiring the frequency of use of each session interaction instruction associated with the target robot; and presenting the session interaction instructions such that the higher an instruction's frequency of use, the earlier it is displayed.
Here, when there are a plurality of session interaction instructions, their display order may be determined based on the frequency of use of each instruction; for example, the instructions may be displayed so that those used more frequently appear earlier. Popular session interaction instructions frequently used by the user are thus displayed in front, which makes them convenient to select from and is beneficial to improving selection efficiency.
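Frequency-first ordering can be sketched as a stable descending sort on per-instruction use counts; the counts and instruction names are assumptions for illustration.

```python
def rank_by_frequency(instructions, use_counts):
    """Display order: the more frequently used an instruction, the
    earlier it appears; instructions never used (count 0) keep their
    original relative order because Python's sort is stable."""
    return sorted(instructions, key=lambda i: use_counts.get(i, 0), reverse=True)

print(rank_by_frequency(["/a", "/b", "/c"], {"/b": 5, "/a": 2}))
```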
In some embodiments, the terminal may present the recommended plurality of session interaction instructions associated with the target robot by: respectively acquiring, for each session interaction instruction associated with the target robot, the interval between its last use time and the current time; and presenting the session interaction instructions such that the smaller the interval, the earlier the instruction is displayed.
Here, when there are a plurality of session interaction instructions, their display order may be determined based on how recently the user used each instruction; the smaller the interval between the last use time and the current time, the earlier the corresponding instruction is displayed. The session interaction instructions the user used most recently are thus displayed in front, which makes them convenient to select from and is beneficial to improving selection efficiency.
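Recency-first ordering can be sketched the same way, sorting on the interval between last use and the current time; timestamps and names are again assumptions.

```python
import time

def rank_by_recency(instructions, last_used, now=None):
    """Display order: the smaller the interval between last use and
    now, the earlier the instruction appears; instructions that were
    never used get an infinite interval and go last."""
    now = time.time() if now is None else now
    return sorted(instructions, key=lambda i: now - last_used.get(i, float("-inf")))
```

Passing `now` explicitly keeps the ordering deterministic for testing; in the terminal it would default to the current clock.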
In some embodiments, when there are a plurality of session interaction instructions, portrait information of the user corresponding to the current account can be acquired, the degree of matching between the portrait information and each session interaction instruction can be acquired respectively, and the session interaction instructions can be presented such that the higher the degree of matching, the earlier the instruction is displayed. The recommended session interaction instructions are thus presented in combination with the user's portrait information, with the instructions that best match it in front, which improves the recommendation accuracy of the session interaction instructions, makes them convenient to select from, and improves selection efficiency.
In some embodiments, after presenting the recommended plurality of session interaction instructions associated with the target robot, the terminal may further receive input instruction content associated with the session instruction corresponding to the target robot, and update the recommended plurality of session interaction instructions as the instruction content is input, so that the content of the updated session interaction instructions matches the instruction content.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating the display of an execution result of an interaction instruction provided in the embodiment of the present application. After the terminal presents the recommended plurality of session interaction instructions associated with the target robot, if the user does not select a target interaction instruction from them but instead manually inputs instruction content in the input box corresponding to the session instruction, the terminal, in response to the user's input operation, performs instruction matching on the input content in real time, obtains a plurality of session interaction instructions matching the currently input content, and uses them to update the previously recommended instructions; for example, it presents in sequence the recommended instructions containing the input content, until no instruction matches the input content.
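The real-time re-matching as the user types can be sketched as a filter over the robot's instruction list; an empty result corresponds to the "no corresponding instruction matches" case. Names are hypothetical.

```python
def update_recommendations(typed, instructions):
    """Re-filter the recommended instruction list each time the input
    changes, keeping only instructions that contain the typed content."""
    return [i for i in instructions if typed.lower() in i.lower()]

# Each keystroke replaces the previously recommended list.
print(update_recommendations("qu", ["/help", "/query score", "/play music"]))
```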
In some embodiments, after presenting the recommended plurality of session interaction instructions associated with the target robot, the terminal may also receive a deletion operation for the directional character and, in response to the deletion operation, cancel the presentation of the plurality of session interaction instructions associated with the target robot.
Here, the session instruction is triggered by a directional character input by the current account in the text editing box for editing the session content of the session interaction. If the user deletes the directional character, the terminal hides the recommended plurality of session interaction instructions. Only after the content in the text editing box has been cleared and the user again inputs the directional character and content associated with the names of the robots in the robot group, that is, when steps 101 to 104 are performed again, can the recommended session interaction instructions be presented.
Step 105: in response to a selection operation for a target interaction instruction among the plurality of session interaction instructions, output the result of the target robot executing the target interaction instruction.
Here, when the user selects a target interaction instruction from the plurality of session interaction instructions, the terminal, in response to the selection operation, automatically issues a session message carrying the target interaction instruction; that is, it presents in the session interface the session message sent by the current account to the target robot, the session message containing the target interaction instruction. At the same time, the target robot executes the target interaction instruction and the execution result is displayed in the session interface in the form of a session message; that is, the terminal presents the target robot's reply to the session message sent by the current account, the reply message including the result of the target robot executing the target interaction instruction.
Referring to fig. 7, fig. 7 is a schematic diagram illustrating the display of an execution result of an interaction instruction provided in the embodiment of the present application. When the user selects a target interaction instruction 701, the terminal automatically issues a session message 702 carrying the target interaction instruction in response to the selection operation and, at the same time, presents the reply message 703 sent by the target robot carrying the result of executing the target interaction instruction.
In some embodiments, the terminal may further present a sending function item corresponding to the target interaction instruction before outputting a result of the target robot executing the target interaction instruction; and responding to the triggering operation aiming at the sending function item, sending a conversation message carrying a target interaction instruction, and presenting the conversation message of the target robot corresponding to the current account in a conversation interface.
Here, when the user selects a target interaction instruction from the plurality of session interaction instructions, the terminal may, in response to the selection operation, present prompt information for confirming the target interaction instruction and a sending function item for the instruction. When the user triggers the sending function item based on the prompt information, the terminal issues the session message carrying the target interaction instruction in response to the triggering operation; that is, the session message sent by the current account to the target robot, containing the target interaction instruction, is presented in the session interface.
In some embodiments, the terminal may output the result of the target robot executing the target interaction instruction, in response to the selection operation for the target interaction instruction among the plurality of session interaction instructions, by: when the instruction content of the target interaction instruction includes an instruction object, presenting, in response to the selection operation, the instruction content and guidance information for the instruction content in the input box corresponding to the session instruction, the guidance information being used to guide input of the object name corresponding to the instruction object; and, when the object name input in the input box is received, outputting the result of the target robot executing the target interaction instruction for the target instruction object corresponding to the object name.
Here, when the instruction content of the target interaction instruction includes an instruction object, the user must further fill in the object name of the instruction object after selecting the target interaction instruction. When the terminal receives the object name filled in by the user, it automatically issues the session message carrying the instruction content. At the same time, the target robot executes the target interaction instruction, and the execution result is displayed in the session interface in the form of a session message; that is, the target robot's reply to the session message is presented, the reply including the result of the target robot executing the target interaction instruction for the target instruction object corresponding to the object name.
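Filling the object name into instruction content that includes an instruction object can be sketched with a hypothetical placeholder convention; the `{object}` marker and the instruction text are assumptions, not the patent's actual format.

```python
def fill_instruction(template, object_name):
    """Substitute the user-supplied object name into instruction
    content that contains an instruction-object placeholder."""
    return template.replace("{object}", object_name)

# The guidance information prompts for the name; once supplied, the
# completed instruction is sent as a session message.
print(fill_instruction("/query score {object}", "Bob"))
```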
Referring to fig. 8, fig. 8 is a schematic diagram illustrating the display of an execution result of an interaction instruction provided in the embodiment of the present application. When the user selects a target interaction instruction 801, the terminal, in response to the selection operation, presents the instruction content 802 and the guidance information 803 corresponding to the instruction content 802 in the input box corresponding to the interaction instruction. When the user inputs an object name 804 in the input box based on the guidance information 803, the terminal issues the session message carrying the target interaction instruction and outputs the result 805 of the target robot executing the target interaction instruction for the target instruction object corresponding to the object name.
In some embodiments, the terminal may obtain the target instruction object by: receiving content which is input by the current account in an input box and is associated with an object name; presenting at least one candidate instruction object recommended based on the input content in the process of inputting the content associated with the object name; and in response to the selection operation for the candidate instruction object, taking the selected candidate instruction object as the target instruction object.
During the process in which the user inputs content associated with the object name of the instruction object based on the guidance information, the terminal sends an identification request carrying the input content to the server in real time. Based on the identification request, the server matches the input content against the object names of instruction objects stored in advance, determines the candidate instruction objects whose names contain the input content, and returns them to the terminal for presentation. After receiving the candidate instruction objects returned by the server, the terminal pops up a recommendation page and presents the candidate instruction objects recommended in real time for the user to select; the candidate instruction object selected by the user is taken as the target instruction object.
Referring to fig. 9, fig. 9 is a schematic display diagram of an execution result of an interaction instruction provided in the embodiment of the present application. While the user inputs the object name corresponding to the instruction, for example when the user inputs a character B associated with the object name, a plurality of candidate instruction objects containing the character B are presented in the recommendation page; when the user selects a target instruction object from the candidate instruction objects, the result of the target robot executing the target interaction instruction for the target instruction object is output.
In some embodiments, the terminal may output the result of the target robot executing the target interaction instruction, in response to the selection operation for the target interaction instruction among the plurality of session interaction instructions, by: when the instruction content of the target interaction instruction does not contain an instruction object, presenting, in response to the selection operation, the session message sent by the current account to the target robot in the session interface, the session message containing the instruction content, and presenting the target robot's reply message for the session message, the reply message including the result of the target robot executing the target interaction instruction.
Here, when the instruction content of the target interaction instruction does not include an instruction object, the terminal automatically issues the session message carrying the target interaction instruction in response to the selection operation for the target interaction instruction; that is, it presents in the session interface the session message sent by the current account to the target robot, the session message containing the target interaction instruction. At the same time, the target robot executes the target interaction instruction and the execution result is displayed in the session interface in the form of a session message; that is, the target robot's reply to the session message sent by the current account is presented, the reply message including the result of the target robot executing the target interaction instruction.
Referring to fig. 10, fig. 10 is a schematic diagram illustrating the display of an execution result of an interaction instruction provided in the embodiment of the present application. When the user selects a target interaction instruction 1001 whose instruction content does not contain an instruction object, the terminal automatically issues the session message 1002 carrying the target interaction instruction in response to the selection operation and, at the same time, presents the reply message 1003 sent by the target robot carrying the result of executing the target interaction instruction.
In some embodiments, when the instruction content of the target interactive instruction does not include the instruction object, the result of the target robot executing the target interactive instruction includes a plurality of sub-session interactive instructions, and the instruction content of the sub-session interactive instruction includes the instruction object, the terminal may further obtain the execution result of the target robot executing the sub-session interactive instruction by:
when a selection instruction aiming at a target sub-session interactive instruction in a plurality of sub-session interactive instructions is received, presenting instruction content of the target sub-session interactive instruction and guide information of the corresponding instruction content in an input frame corresponding to the target sub-session interactive instruction; the guiding information is used for guiding the instruction content of the input target sub-session interactive instruction to comprise an object name corresponding to the instruction object; and when the object name input in the input box is received, outputting the result of the target robot executing the target sub-interactive instruction aiming at the target instruction object corresponding to the object name.
Here, when the instruction content of a sub-session interaction instruction includes an instruction object, the terminal may present a use function item for each sub-session interaction instruction. In response to the triggering operation for the use function item corresponding to the target sub-session interaction instruction among the plurality of sub-session interaction instructions, the terminal receives a selection instruction for the target sub-session interaction instruction. In response to the selection instruction, it presents an input box for the target sub-session interaction instruction and presents in it the instruction content of the target sub-session interaction instruction together with guidance information for the instruction content. The user may then either manually fill in the object name of the instruction object based on the guidance information, or select the target instruction object from at least one candidate instruction object recommended based on the input content associated with the object name. When the terminal receives the object name filled in by the user, it automatically issues the session message carrying the instruction content. At the same time, the target robot executes the target sub-session interaction instruction, and the execution result is displayed in the session interface in the form of a session message; that is, the target robot's reply to the session message is presented, the reply including the result of the target robot executing the target sub-session interaction instruction for the target instruction object corresponding to the object name.
For example, in fig. 10, when the result of the target robot executing the target interaction instruction includes a plurality of sub-session interaction instructions whose instruction content includes an instruction object, a user who wants to invoke the target robot to execute a target sub-session interaction instruction may input the object name of the instruction object by triggering the corresponding use function item. When the terminal receives the input object name, it automatically issues to the target robot a session message requesting the result of the target robot executing the target sub-session interaction instruction for the target instruction object corresponding to the object name, and thereby obtains the corresponding result fed back by the target robot.
The method for session interaction in a robot group provided by the embodiment of the present application can also be applied to a game scenario. For example, while playing game A, a user or player can conduct session interaction with other players in the session area of game A. To obtain content related to game A, such as the acquisition of virtual props, the usage conditions of skills, or the game progress, a robot executing the corresponding session interaction instructions can be added to the robot group, and the user can send the corresponding session message to the target robot to obtain the target robot's feedback information.
Next, an exemplary application of the embodiment of the present application in a practical application scenario is described. The application is applied to the scenario of quickly invoking a specific function of a designated robot in a mobile-terminal group chat. In the group chat scenario, the user needs to use a designated instruction to invoke the specific function of the robot; the user can input an instruction meeting the specification, or natural language can be parsed into the corresponding instruction through artificial intelligence.
referring to fig. 11, fig. 11 is a schematic flowchart of a method for session interaction in a robot group according to an embodiment of the present application, where the method includes:
step 201: the terminal presents at least one candidate robot recommended based on the content in the process of inputting the content associated with the name of the target robot in response to a conversation instruction triggered based on the input directional character.
Here, if the user wants to conduct session interaction with the target robot, the user needs to input a directional character (e.g., the @ character) for the target robot to trigger the session instruction for session interaction with the target robot, and then input content associated with the name of the target robot. In response to the session instruction, while the content associated with the target robot's name is being input, the terminal presents at least one candidate robot recommended based on the content for the user to select.
Step 202: determine whether the user quickly selects the target robot.
Here, the user may select a target robot from the at least one candidate robot, or may determine the target robot by manually inputting the complete robot name. When the user does not quickly select a target robot from the at least one candidate robot, for example when no target robot among the candidate robots is clicked, step 203 is executed; otherwise, when the user quickly selects a target robot from the at least one candidate robot, for example by clicking a target robot among the candidate robots, step 208 is executed.
Step 203: content manually input by a user is received.
Step 204: determine whether a corresponding instruction exists for the content manually input by the user.
Here, when the user manually inputs content, the terminal performs real-time instruction matching on the content as it is input and recommends, in sequence, the instructions containing the input content, until no corresponding instruction matches the input content. When a corresponding instruction exists for the input content, step 205 is executed; otherwise, when no corresponding instruction exists for the input content, step 207 is executed.
In practical application, when the content input by the user is content associated with the name of the target robot, the terminal, in response to the input operation, sends an identification request carrying the input content to the server in real time. Based on the identification request, the server matches the input content against the names of the robots stored in advance for the robot group and determines from the matching result whether a target robot whose name contains the input content exists. When a corresponding target robot exists for the input content, step 205 is executed; when no corresponding target robot exists, step 207 is executed.
When the content input by the user is content associated with a target interactive instruction corresponding to the target robot, the terminal responds to the user's input operation by sending, in real time, an identification request carrying the input content to the server. Based on the identification request, the server matches the input content against the pre-stored interactive instructions of the robot, and determines from the matching result whether an interactive instruction containing the input content exists. When a corresponding interactive instruction exists, step 205 is executed; when it does not, step 207 is executed.
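The real-time matching described in steps 203-204 can be sketched as follows. This is a minimal Python sketch under assumed data structures; the function name, the candidate limit, the prefix-before-substring ranking, and the example robot names are illustrative and not taken from the embodiment.

```python
def recommend_candidates(user_input, robot_names, limit=5):
    """Return robot names matching the partially typed input.

    Illustrative sketch of the server-side real-time matching;
    ranking prefix matches before substring matches is an assumption.
    """
    text = user_input.strip().lstrip("@").lower()
    if not text:
        return []
    # Prefix matches first, then other substring matches, original order kept.
    prefix = [n for n in robot_names if n.lower().startswith(text)]
    inner = [n for n in robot_names if text in n.lower() and n not in prefix]
    return (prefix + inner)[:limit]

robots = ["DeployBot", "BuildBot", "ReviewBot"]
print(recommend_candidates("@De", robots))  # ['DeployBot']
```

When no name matches, the empty list corresponds to the "no corresponding instruction" branch that leads to step 207.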
Step 205: presenting the recommended plurality of conversational interaction instructions associated with the target robot.
Step 206: Judging whether the user quickly selects the target interactive instruction.
Here, when the user quickly selects the target interactive instruction from the plurality of session interactive instructions, for example, clicks the target interactive instruction among them, step 209 is executed; otherwise, when the user does not quickly select the target interactive instruction, for example, does not click any of the plurality of session interactive instructions, step 203 is executed.
Step 207: the associated plurality of conversational interaction instructions are not presented.
Step 208: presenting a recommended plurality of conversational interaction instructions associated with the target robot in response to the selection operation for the target robot.
Here, after the user completes the robot name input (by selecting from the candidate robots or typing a name), the terminal recommends a corresponding instruction set according to the received robot name, where the instruction set includes a plurality of commonly used session interactive instructions (high-frequency instructions the user has used before) as well as all other instructions.
Step 209: In response to a selection operation for a target interactive instruction among the plurality of session interactive instructions, judging whether the target interactive instruction is an object instruction.
Here, the interactive instructions include object instructions and no-object instructions. An object instruction is an interactive instruction whose instruction content includes an instruction object; for example, the format of an object instruction is: @ + robot name + instruction behavior + instruction object. A no-object instruction is an interactive instruction whose instruction content does not contain an instruction object; for example, the format of a no-object instruction is: @ + robot name + instruction behavior.
When the target interactive instruction selected by the user is an object instruction, step 210 is executed; when it is a no-object instruction, step 212 is executed.
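The object / no-object distinction above can be sketched with a small parser. This is an illustrative Python sketch; the regular expression and the field names are assumptions derived from the stated formats (@ + robot name + instruction behavior, optionally followed by an instruction object), not part of the embodiment.

```python
import re

# Assumed textual formats:
#   object instruction:    @<robot name> <instruction behavior> <instruction object>
#   no-object instruction: @<robot name> <instruction behavior>
PATTERN = re.compile(r"^@(?P<robot>\S+)\s+(?P<behavior>\S+)(?:\s+(?P<obj>.+))?$")

def parse_instruction(text):
    """Classify an interactive instruction as object / no-object."""
    m = PATTERN.match(text.strip())
    if not m:
        return None
    return {
        "robot": m.group("robot"),
        "behavior": m.group("behavior"),
        "object": m.group("obj"),
        "kind": "object" if m.group("obj") else "no-object",
    }

print(parse_instruction("@DeployBot restart web-service")["kind"])  # object
print(parse_instruction("@DeployBot status")["kind"])               # no-object
```

The `kind` field decides the branch: "object" leads to step 210 (prompt for the object name), "no-object" leads to step 212 (execute directly).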
Step 210: Presenting the instruction content and the guidance information corresponding to the instruction content in the input box corresponding to the conversation instruction.
Here, when the user selects an object instruction, the user is required to continue to fill in the object name of the instruction object included in the instruction content. The terminal assists by completing the fixed content part of the instruction, [@ target robot + instruction behavior], and packaging that fixed part into a single unit. When the user has not yet input the object name, an input prompt is presented, that is, guidance information for inputting the object name corresponding to the instruction object, guiding the user to input content that meets the requirement.
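The packaging of the fixed content part and the guidance prompt in step 210 might look like the following minimal sketch; the function name, field names, and placeholder text are hypothetical.

```python
def build_input_box(robot_name, behavior, guide="please input the object name"):
    """Package the fixed content part and attach guidance information.

    Hypothetical sketch of step 210; the dictionary fields are assumptions.
    """
    return {
        "fixed": f"@{robot_name} {behavior}",  # packaged, treated as one unit
        "placeholder": guide,                  # shown until the user types
    }

box = build_input_box("DeployBot", "restart")
print(box["fixed"])        # @DeployBot restart
print(box["placeholder"])  # please input the object name
```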
Step 211: When the object name input in the input box is received, sending a session message carrying the target interaction instruction to the target robot.
The target interactive instruction carries instruction content and an object name.
Step 212: Presenting the result of the target robot executing the target interaction instruction.
Here, when the target interaction instruction is an object instruction, the target robot executes the target interaction instruction for the target instruction object corresponding to the object name to obtain a corresponding result, and feeds the execution result back to the current account in the form of a session message; that is, the reply information of the target robot is presented in the session interface, the reply information including the result of the target robot executing the target interaction instruction.
When the target interactive instruction is a no-object instruction, after the user selects it, the user does not need to input an object name for an instruction object. The terminal automatically issues the instruction for the user and triggers the robot's feedback: the session message carrying the target interactive instruction is issued automatically, the target robot executes the target interactive instruction, and the execution result is displayed in the session interface in the form of a session message; that is, the target robot's reply message to the session message is presented, the reply message including the result of the target robot executing the target interactive instruction.
Referring to fig. 12, fig. 12 is a schematic flowchart of a method for session interaction in a robot group according to an embodiment of the present application, where the method includes:
Step 301: The terminal responds to the input operation of the user and sends, in real time, an identification request carrying the content input by the user to the server.
Here, the input content corresponding to the input operation is a directional character (such as @) aimed at the target robot together with content associated with the name of the target robot; the terminal sends the identification request to the server in real time while the content is being input.
Step 302: The server matches the content input by the user against the pre-stored robot names based on the identification request.
Step 303: The server recommends the successfully matched candidate robots to the terminal.
Here, when the matching result indicates that a candidate robot whose name contains the content input by the user exists (matching is successful), the server recommends the candidate robot to the terminal to invoke the terminal's quick object selection; that is, the user can quickly select the target robot from a plurality of candidate robots.
Step 304: the terminal transmits an instruction acquisition request for the target robot to the server in response to an input operation for the target robot.
In practical applications, the input operation for the target robot may be a selection operation that quickly selects the target robot from a plurality of candidate robots, or a manual input operation in which the user types "@" followed by the target robot's name and ends the input with a space. The instruction acquisition request carries an identifier of the target robot.
Step 305: The server acquires, based on the instruction acquisition request, a plurality of session interaction instructions corresponding to the identifier from the association relation table.
Here, the association relation table stores the association relations between robots and session interaction instructions. The server matches the identifier of the target robot carried by the instruction acquisition request against the robot identifiers stored in the table, and takes the session interaction instructions of the robot whose identifier is successfully matched as the session interaction instructions associated with the target robot.
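The association relation table lookup described here can be sketched as a simple mapping from robot identifier to instruction list; the identifiers and instruction names below are made-up examples, not from the embodiment.

```python
# Made-up association relation table: robot identifier -> its session
# interaction instructions.
ASSOCIATION_TABLE = {
    "robot-001": ["/deploy", "/rollback", "/status"],
    "robot-002": ["/translate", "/summarize"],
}

def get_instructions(robot_id):
    """Return the session interaction instructions associated with a robot."""
    # An identifier with no entry yields an empty recommendation list.
    return ASSOCIATION_TABLE.get(robot_id, [])

print(get_instructions("robot-001"))  # ['/deploy', '/rollback', '/status']
```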
Step 306: The server returns the plurality of session interaction instructions associated with the target robot to the terminal for recommendation presentation.
The server returns the session interaction instructions associated with the target robot to the terminal for recommendation presentation, prompting the terminal's quick instruction selection; that is, the user can quickly select the target interactive instruction from the plurality of session interactive instructions.
Step 307: The terminal, in response to a selection operation for a target interactive instruction among the plurality of session interactive instructions, sends a session message carrying the target interactive instruction to the server.
Step 308: The server sends the session message carrying the target interaction instruction to the target robot.
Step 309: The target robot executes the target interaction instruction to obtain an execution result.
Step 310: The target robot feeds back a reply message carrying the execution result to the terminal.
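Steps 301-310 can be condensed into a small end-to-end sketch. All class, function, and robot names below are illustrative assumptions; the real system communicates over a network between terminal, server, and robot rather than via direct calls.

```python
class Server:
    """Toy stand-in for the server side of steps 301-310."""

    def __init__(self, robots, instruction_table):
        self.robots = robots                    # name -> callable robot
        self.instruction_table = instruction_table

    def match_candidates(self, partial):        # steps 301-303
        return [name for name in self.robots
                if name.lower().startswith(partial.lower())]

    def instructions_for(self, name):           # steps 304-306
        return self.instruction_table.get(name, [])

    def deliver(self, name, instruction):       # steps 307-310
        # The robot executes the instruction; its reply is fed back.
        return self.robots[name](instruction)

echo_bot = lambda instruction: f"executed {instruction}"
server = Server({"EchoBot": echo_bot}, {"EchoBot": ["/ping"]})
print(server.match_candidates("ec"))       # ['EchoBot']
print(server.instructions_for("EchoBot"))  # ['/ping']
print(server.deliver("EchoBot", "/ping"))  # executed /ping
```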
In this way, the method provided by the embodiment of the present application is combined with the terminal's message-issuing interaction logic and provides hierarchical, association-based recommendation driven by user input, realizing hierarchical robot instruction prompting and quick use. A target robot can be selected quickly, and real-time associative prompting assists the input of complex semantics. The method applies to both object instructions and no-object instructions, effectively simplifies the user's input, reduces the cost of memorizing instructions, and solves the problem that a user without a learning basis finds it difficult to interact with the robot.
Continuing with the exemplary structure of the session interaction device 555 in the robot group provided by the embodiment of the present application implemented as a software module, in some embodiments, referring to fig. 13, fig. 13 is a schematic structural diagram of the session interaction device 555 in the robot group provided by the embodiment of the present application, and the software module stored in the session interaction device 555 in the robot group of the memory 550 in fig. 2 may include:
a first presenting module 5551, configured to present a session interface corresponding to a robot group including a plurality of robots;
an input receiving module 5552, configured to receive, in the conversation interface, an input conversation instruction indicating conversation interaction with a target robot and an input content associated with robot names in the robot group;
a second presenting module 5553, configured to present, in response to the session instruction, at least one candidate robot recommended based on the content in the process of inputting the content;
a third presenting module 5554, configured to receive a selection operation for the candidate robot, treat the selected candidate robot as the target robot, and present a recommended plurality of session interaction instructions associated with the target robot;
a result output module 5555, configured to output a result of the target robot executing the target interactive instruction in response to a selection operation for the target interactive instruction in the plurality of conversational interactive instructions.
In some embodiments, the result output module is further configured to, when an instruction object is included in the instruction content of the target interactive instruction, present, in response to a selection operation for the target interactive instruction, the instruction content and guidance information corresponding to the instruction content in an input box corresponding to the conversation instruction;
the guiding information is used for guiding and inputting an object name corresponding to the instruction object;
and when the object name input in the input box is received, outputting the result of the target robot executing the target interaction instruction aiming at the target instruction object corresponding to the object name.
In some embodiments, the apparatus further comprises:
the object determination module is used for receiving the content which is input by the current account in the input box and is associated with the object name;
presenting at least one candidate instruction object recommended based on the input content in the process of inputting the content associated with the object name;
and in response to the selection operation for the candidate instruction object, taking the selected candidate instruction object as the target instruction object.
In some embodiments, the result output module is further configured to, when the instruction content of the target interaction instruction does not include an instruction object, present, in the session interface, a session message of the current account corresponding to the target robot in response to the selection operation for the target interaction instruction, where the session message includes the instruction content, and
presenting a reply message to the session message by the target robot, the reply message including a result of the target robot executing the target interaction instruction.
In some embodiments, the result comprises a plurality of sub-session interaction instructions, the instruction content of the sub-session interaction instructions comprising instruction objects, the apparatus further comprising:
the sub-session interaction result output module is used for presenting, when a selection instruction for a target sub-session interactive instruction among the plurality of sub-session interactive instructions is received, the instruction content of the target sub-session interactive instruction and the guidance information corresponding to the instruction content in the input box corresponding to the target sub-session interactive instruction;
the guiding information is used for guiding the instruction content of the target sub-session interaction instruction to be input to comprise an object name corresponding to an instruction object;
and when the object name input in the input box is received, outputting the result of the target robot executing the target sub-interactive instruction aiming at the target instruction object corresponding to the object name.
In some embodiments, before presenting the instruction content of the target sub-session interactive instruction and the guidance information corresponding to the instruction content in the input box corresponding to the target sub-session interactive instruction, the apparatus further includes:
a selection instruction receiving module used for presenting the use function item aiming at each sub-session interactive instruction;
and receiving a selection instruction aiming at a target sub-session interactive instruction in the plurality of sub-session interactive instructions in response to the trigger operation aiming at the use function item corresponding to the target sub-session interactive instruction.
In some embodiments, after presenting the recommended plurality of conversational interaction instructions associated with the target robot, the apparatus further comprises:
the instruction updating module is used for receiving input instruction content associated with the conversation instruction corresponding to the target robot;
and updating the recommended plurality of conversation interaction instructions along with the input of the instruction content, so that the content of the updated conversation interaction instructions is matched with the instruction content.
In some embodiments, the third presenting module is further configured to present a text editing box for editing the session content of the session interaction;
and receiving a conversation instruction for indicating conversation interaction with the target robot in response to the directional character, which is input in the text editing box by the current account, for the target robot.
In some embodiments, after presenting the recommended plurality of conversational interaction instructions associated with the target robot, the apparatus further comprises:
the instruction cancellation module is used for receiving deletion operation aiming at the directional character;
canceling the presented plurality of conversational interaction instructions associated with the target robot in response to the deletion operation.
In some embodiments, the third presenting module is further configured to obtain a frequency of use of each conversational interaction instruction associated with the target robot, respectively;
and presenting the session interactive instructions in descending order of use frequency, a session interactive instruction with a higher use frequency being presented earlier.
In some embodiments, the third presenting module is further configured to obtain an interval between a last usage time and a current time of each session interaction instruction associated with the target robot, respectively;
and presenting the session interactive instructions in ascending order of the interval, a session interactive instruction with a smaller interval being presented earlier.
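The two presentation orders above (by use frequency, and by interval since last use) can be sketched as follows; the record fields (name, uses, last_used) and the sample data are assumptions.

```python
# Example instruction records with assumed fields.
instructions = [
    {"name": "/status",   "uses": 12, "last_used": 100.0},
    {"name": "/deploy",   "uses": 30, "last_used": 40.0},
    {"name": "/rollback", "uses": 5,  "last_used": 99.0},
]
now = 120.0  # fixed "current time" so the example is deterministic

# Higher use frequency first.
by_frequency = sorted(instructions, key=lambda i: i["uses"], reverse=True)
# Smaller interval between last use time and current time first.
by_recency = sorted(instructions, key=lambda i: now - i["last_used"])

print([i["name"] for i in by_frequency])  # ['/deploy', '/status', '/rollback']
print([i["name"] for i in by_recency])    # ['/status', '/rollback', '/deploy']
```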
In some embodiments, the third presenting module is further configured to present, in the conversational interface, a recommended plurality of conversational interaction instructions associated with the target robot by way of a floating layer or a pop-up window; or,
presenting the recommended plurality of conversational interaction instructions associated with the target robot through a sub-interface that is independent of the conversational interface.
In some embodiments, prior to the presenting the recommended plurality of conversational interaction instructions associated with the target robot, the apparatus further comprises:
the instruction determining module is used for acquiring an association relation table which stores association relations between the robots and the session interaction instructions;
and finding a plurality of session interaction instructions associated with the target robot in the association relation table.
In some embodiments, before the outputting the result of the target robot executing the target instructions, the apparatus further comprises:
the session sending module is used for presenting sending function items corresponding to the target interaction instruction;
responding to the trigger operation for the sending function item, sending a session message carrying the target interaction instruction, and presenting, in the session interface, the session message of the current account corresponding to the target robot.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the session interaction method in the robot group according to the embodiment of the present application.
The embodiment of the application provides a computer-readable storage medium storing executable instructions, wherein the executable instructions are stored, and when being executed by a processor, the executable instructions cause the processor to execute the conversation interaction method in the robot group provided by the embodiment of the application.
In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be any of various devices including one of or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.
Claims (17)
1. A method of session interaction in a group of robots, the method comprising:
presenting a conversation interface corresponding to a robot group comprising a plurality of robots;
in the conversation interface, receiving input conversation instructions for indicating conversation interaction with a target robot and input contents associated with robot names in the robot group;
presenting, in response to the conversation instruction, at least one candidate robot recommended based on the content in the course of inputting the content;
receiving a selection operation aiming at the candidate robot, taking the selected candidate robot as the target robot, and presenting a plurality of recommended conversation interaction instructions associated with the target robot;
and outputting the result of the target robot executing the target interactive instruction in response to the selection operation of the target interactive instruction in the plurality of session interactive instructions.
2. The method of claim 1, wherein outputting a result of the target robot executing a target interactive instruction of the plurality of conversational interactive instructions in response to a selection operation for the target interactive instruction comprises:
when the instruction content of the target interactive instruction comprises an instruction object, responding to the selection operation aiming at the target interactive instruction, and presenting the instruction content and the guide information corresponding to the instruction content in an input box corresponding to the conversation instruction;
the guiding information is used for guiding and inputting an object name corresponding to the instruction object;
and when the object name input in the input box is received, outputting the result of the target robot executing the target interaction instruction aiming at the target instruction object corresponding to the object name.
3. The method of claim 2, wherein the method further comprises:
receiving content which is input by the current account in the input box and is associated with the object name;
presenting at least one candidate instruction object recommended based on the input content in the process of inputting the content associated with the object name;
and in response to the selection operation for the candidate instruction object, taking the selected candidate instruction object as the target instruction object.
4. The method of claim 1, wherein outputting a result of the target robot executing a target interactive instruction of the plurality of conversational interactive instructions in response to a selection operation for the target interactive instruction comprises:
when the instruction content of the target interaction instruction does not contain an instruction object, responding to the selection operation aiming at the target interaction instruction, presenting a session message of the current account corresponding to the target robot in the session interface, wherein the session message contains the instruction content, and
presenting a reply message to the session message by the target robot, the reply message including a result of the target robot executing the target interaction instruction.
5. The method of claim 4, wherein the results include a plurality of sub-session interactivity instructions having instruction objects contained within their instruction content, the method further comprising:
when a selection instruction aiming at a target sub-session interactive instruction in the plurality of sub-session interactive instructions is received, presenting instruction content of the target sub-session interactive instruction and guiding information corresponding to the instruction content in an input box corresponding to the target sub-session interactive instruction;
the guiding information is used for guiding the instruction content of the target sub-session interaction instruction to be input to comprise an object name corresponding to an instruction object;
and when the object name input in the input box is received, outputting the result of the target robot executing the target sub-interactive instruction aiming at the target instruction object corresponding to the object name.
6. The method of claim 5, wherein prior to presenting the instructional content of the target sub-session interactivity instructions and the guidance information corresponding to the instructional content in the input box corresponding to the target sub-session interactivity instructions, the method further comprises:
presenting a use function item aiming at each sub-session interaction instruction;
and receiving a selection instruction aiming at a target sub-session interactive instruction in the plurality of sub-session interactive instructions in response to the trigger operation aiming at the use function item corresponding to the target sub-session interactive instruction.
7. The method of claim 1, wherein after presenting the recommended plurality of conversational interaction instructions associated with the target robot, the method further comprises:
receiving input instruction content associated with a conversation instruction corresponding to the target robot;
and updating the recommended plurality of conversation interaction instructions along with the input of the instruction content, so that the content of the updated conversation interaction instructions is matched with the instruction content.
8. The method of claim 1, wherein receiving input of a conversational command indicating a conversational interaction with a target robot comprises:
presenting a text edit box for editing session content of the session interaction;
and receiving a conversation instruction for indicating conversation interaction with the target robot in response to the directional character, which is input in the text editing box by the current account, for the target robot.
9. The method of claim 8, wherein after presenting the recommended plurality of conversational interaction instructions associated with the target robot, the method further comprises:
receiving a deletion operation for the directional character;
canceling the presented plurality of conversational interaction instructions associated with the target robot in response to the deletion operation.
10. The method of claim 1, wherein the presenting the recommended plurality of conversational interaction instructions associated with the target robot comprises:
respectively acquiring the use frequency of each session interaction instruction associated with the target robot;
and presenting the session interaction instructions in descending order of use frequency, a session interaction instruction with a higher use frequency being presented earlier.
11. The method of claim 1, wherein the presenting the recommended plurality of conversational interaction instructions associated with the target robot comprises:
respectively acquiring the interval between the last use time and the current time of each session interaction instruction associated with the target robot;
and presenting the session interaction instructions in ascending order of the interval, a session interaction instruction with a smaller interval being presented earlier.
12. The method of claim 1, wherein the presenting the recommended plurality of conversational interaction instructions associated with the target robot comprises:
presenting a recommended plurality of conversational interaction instructions associated with the target robot in the conversational interface by means of a floating layer or a popup window; or,
presenting the recommended plurality of conversational interaction instructions associated with the target robot through a sub-interface that is independent of the conversational interface.
13. The method of claim 1, wherein prior to the presenting the recommended plurality of conversational interaction instructions associated with the target robot, the method further comprises:
acquiring an association relation table storing association relations between the robot and the session interaction instructions;
and finding a plurality of session interaction instructions associated with the target robot in the association relation table.
14. The method of claim 1, wherein prior to said outputting the result of said target robot executing said target interaction instructions, the method further comprises:
presenting a sending function item corresponding to the target interactive instruction;
responding to the trigger operation for the sending function item, sending a session message carrying the target interaction instruction, and presenting, in the session interface, the session message of the current account corresponding to the target robot.
15. An apparatus for conversational interaction in a group of robots, the apparatus comprising:
the first presentation module is used for presenting a conversation interface corresponding to a robot group comprising a plurality of robots;
the input receiving module is used for receiving an input conversation instruction used for indicating conversation interaction with a target robot and input content associated with robot names in the robot group in the conversation interface;
a second presenting module, which is used for responding to the conversation instruction and presenting at least one candidate robot recommended based on the content in the process of inputting the content;
a third presenting module, configured to receive a selection operation for the candidate robot, take the selected candidate robot as the target robot, and present a plurality of recommended session interaction instructions associated with the target robot;
and the result output module is used for responding to the selection operation of a target interactive instruction in the plurality of session interactive instructions and outputting the result of the target robot executing the target interactive instruction.
16. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the method of conversational interaction in a group of robots of any one of claims 1 to 14 when executing executable instructions stored in the memory.
17. A computer-readable storage medium storing executable instructions for implementing the method of conversational interaction in a group of robots of any one of claims 1 to 14 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110641462.1A CN113239172B (en) | 2021-06-09 | 2021-06-09 | Conversation interaction method, device, equipment and storage medium in robot group |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113239172A true CN113239172A (en) | 2021-08-10 |
CN113239172B CN113239172B (en) | 2024-08-27 |
Family
ID=77137314
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110641462.1A Active CN113239172B (en) | 2021-06-09 | 2021-06-09 | Conversation interaction method, device, equipment and storage medium in robot group |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113239172B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023131117A1 (en) * | 2022-01-07 | 2023-07-13 | 北京字跳网络技术有限公司 | Information exchange method and apparatus, and electronic device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2724208A1 (en) * | 2011-06-24 | 2014-04-30 | Google, Inc. | Group conversation between a plurality of participants |
CN106528692A (en) * | 2016-10-31 | 2017-03-22 | 北京百度网讯科技有限公司 | Dialogue control method and device based on artificial intelligence |
CN107837529A (en) * | 2017-11-15 | 2018-03-27 | 腾讯科技(上海)有限公司 | A kind of object selection method, device, terminal and storage medium |
CN108628454A (en) * | 2018-05-10 | 2018-10-09 | 北京光年无限科技有限公司 | Visual interactive method and system based on visual human |
CN111159380A (en) * | 2019-12-31 | 2020-05-15 | 腾讯科技(深圳)有限公司 | Interaction method and device, computer equipment and storage medium |
CN112748974A (en) * | 2020-08-05 | 2021-05-04 | 腾讯科技(深圳)有限公司 | Information display method, device, equipment and storage medium based on session |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2724208A1 (en) * | 2011-06-24 | 2014-04-30 | Google, Inc. | Group conversation between a plurality of participants |
CN103797438A (en) * | 2011-06-24 | 2014-05-14 | 谷歌公司 | Group conversation between a plurality of participants |
CN106528692A (en) * | 2016-10-31 | 2017-03-22 | 北京百度网讯科技有限公司 | Dialogue control method and device based on artificial intelligence |
CN107837529A (en) * | 2017-11-15 | 2018-03-27 | 腾讯科技(上海)有限公司 | A kind of object selection method, device, terminal and storage medium |
CN108628454A (en) * | 2018-05-10 | 2018-10-09 | 北京光年无限科技有限公司 | Visual interactive method and system based on visual human |
CN111159380A (en) * | 2019-12-31 | 2020-05-15 | 腾讯科技(深圳)有限公司 | Interaction method and device, computer equipment and storage medium |
CN112748974A (en) * | 2020-08-05 | 2021-05-04 | 腾讯科技(深圳)有限公司 | Information display method, device, equipment and storage medium based on session |
Also Published As
Publication number | Publication date |
---|---|
CN113239172B (en) | 2024-08-27 |
Similar Documents
Publication | Title |
---|---|
CN107608652B (en) | Method and device for controlling graphical interface through voice |
CN108924626B (en) | Picture generation method, device, equipment and storage medium |
CN103092612B (en) | Method and electronic device for implementing 3D desktop textures in the Android operating system |
CN113663325B (en) | Team creation method, joining method, device and storage medium in virtual scene |
CN107408010A (en) | Dynamically inferring voice commands for software operation from user manipulation of an electronic device |
CN112422405B (en) | Message interaction method and device and electronic equipment |
CN113253880B (en) | Method and device for processing pages of interaction scene and storage medium |
CN106572002B (en) | Intelligent session method, intelligent session customization method, and relevant devices |
CN113282424B (en) | Information reference method and device and electronic equipment |
CN111565320A (en) | Barrage-based interaction method and device, storage medium and electronic equipment |
CN115408622A (en) | Metaverse-based online interaction method, device and storage medium |
CN111736799A (en) | Voice interaction method, device, equipment and medium based on man-machine interaction |
CN113239172B (en) | Conversation interaction method, device, equipment and storage medium in robot group |
CN113282268B (en) | Sound effect configuration method and device, storage medium and electronic equipment |
CN117544795A (en) | Live broadcast information display method, management method, device, equipment and medium |
CN113485779A (en) | Operation guiding method and device for application program |
CN111934985A (en) | Media content sharing method, device and equipment and computer readable storage medium |
KR102184162B1 (en) | System and method for producing reactive webtoons |
CN107562476B (en) | Method and device for generating application program |
US20180089877A1 (en) | Method and apparatus for producing virtual reality content |
CN114416248A (en) | Conversation method and device thereof |
CN114845131A (en) | Interactive information configuration method, device, electronic equipment, medium and program product |
CN114422843A (en) | Video Easter egg playing method and device, electronic equipment and medium |
CN112632444A (en) | Visual website theme configuration method and device |
US8468178B2 (en) | Providing location based information in a virtual environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40052184; Country of ref document: HK |
| GR01 | Patent grant | |