CN113190307A - Control adding method, device, equipment and storage medium - Google Patents

Control adding method, device, equipment and storage medium

Info

Publication number
CN113190307A
CN113190307A
Authority
CN
China
Prior art keywords
robot
session
control
interface
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110399768.0A
Other languages
Chinese (zh)
Inventor
陈加新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110399768.0A priority Critical patent/CN113190307A/en
Publication of CN113190307A publication Critical patent/CN113190307A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04812 Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects

Abstract

The disclosure relates to a control adding method, device, equipment, and storage medium, and belongs to the field of computer technology. The method includes the following steps: displaying a robot addition control in a session interface of any session; displaying a robot display interface in response to a trigger operation on the robot addition control; in response to a selection operation on any virtual robot in the robot display interface, adding a virtual account corresponding to the virtual robot to the session, the virtual robot having a corresponding target control and target operation; and adding the target control to the session interface, the target control being used to trigger execution of the target operation. With this method, a virtual robot can be used to add a target control to the session interface of an application, which improves the flexibility of extending target controls in the application. Moreover, developers only need to develop the virtual robot rather than redevelop the application with the target control built in, which reduces the extension cost.

Description

Control adding method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for adding a control.
Background
With the development of computer technology, applications have become more varied and the operations they support more abundant. For example, many applications can establish sessions and use them for instant messaging, which greatly facilitates communication among users. An application typically includes multiple types of controls, based on which different operations are performed, such as sending text messages or voice messages and placing video calls. However, the controls in an application are all set in advance and fixed; new controls can only be extended by updating the application, which offers little flexibility and incurs a high extension cost.
Disclosure of Invention
The present disclosure provides a control adding method, device, apparatus, and storage medium, which can improve the flexibility of extending controls in an application and reduce the extension cost. The technical solution of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, a control adding method is provided, where the method includes:
displaying a robot addition control in a session interface of any session;
displaying a robot display interface in response to a trigger operation on the robot addition control, the robot display interface including at least one virtual robot provided by a third-party server;
in response to a selection operation on any virtual robot in the robot display interface, adding a virtual account corresponding to the virtual robot to the session, the virtual robot having a corresponding target control and a corresponding target operation;
and adding the target control to the session interface, the target control being used to trigger execution of the target operation.
In the embodiments of the present disclosure, when a target control needs to be extended in an application, the process is no longer limited to updating the application: selecting the virtual robot that has the target control from the robot display interface and adding the virtual account corresponding to the virtual robot to the session is enough for the virtual robot to add the target control to the session interface of the application. This improves the flexibility of extending target controls in the application. Moreover, developers only need to develop the virtual robot rather than redevelop the application with the target control built in, which reduces the extension cost.
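As a purely illustrative sketch (not part of the disclosure), the flow above can be modeled as follows; every class, attribute, and method name here is hypothetical:

```python
class Session:
    """Minimal model of a session: member accounts plus interface controls."""
    def __init__(self):
        self.accounts = []
        self.controls = []   # (control name, target operation) pairs

class VirtualRobot:
    """A virtual robot bundles a virtual account with a target control
    and the target operation that control triggers."""
    def __init__(self, account, control_name, target_operation):
        self.account = account
        self.control_name = control_name
        self.target_operation = target_operation   # callable run on trigger

def add_robot_to_session(session, robot):
    """Selecting the robot adds its virtual account to the session and
    its target control to the session interface."""
    session.accounts.append(robot.account)
    session.controls.append((robot.control_name, robot.target_operation))

# Usage: one selection operation adds both the account and the control.
session = Session()
robot = VirtualRobot("bot-001", "upload", lambda message: "uploaded:" + message)
add_robot_to_session(session, robot)
```

The point of the sketch is that the control travels with the robot, so the application itself needs no update to gain it.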
In some embodiments, adding the target control to the session interface includes:
adding the target control to a menu corresponding to session messages in the session interface;
the control adding method further includes: in response to a menu invoking operation on any session message, displaying the menu corresponding to that session message in the session interface, the menu including the target control.
In the embodiments of the present disclosure, the target control is placed in the menu corresponding to each session message, and the menu containing the target control is displayed in the session interface in response to a menu invoking operation on a session message. The user can therefore find the target control quickly through the menu invoking operation, which improves the efficiency of executing the target operation.
In some embodiments, the target operation includes uploading the session message corresponding to the target control to the third-party server, and after the menu corresponding to the session message is displayed in the session interface, the control adding method further includes:
in response to a trigger operation on the target control, invoking the virtual robot to read the session message corresponding to the target control;
and sending the session message to the third-party server.
The embodiments of the present disclosure thus provide a way to use the virtual robot to extend an application with an operation that uploads session messages to a third-party server. The user only needs to trigger the target control corresponding to a session message to upload that message to the third-party server, which is simple and efficient.
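The upload step can be sketched as follows; the server URL, the message store, and the JSON payload shape are assumptions made for illustration only:

```python
import json
import urllib.request

def build_upload_request(server_url, messages, message_id):
    """The virtual robot reads the session message bound to the triggered
    control and prepares a POST to the third-party server; a real client
    would then pass the result to urllib.request.urlopen."""
    session_message = messages[message_id]            # robot reads the message
    payload = json.dumps({"message": session_message}).encode("utf-8")
    return urllib.request.Request(
        server_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```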
In some embodiments, the target operation includes sending a task creation request to the third-party server, and after the menu corresponding to the session message is displayed in the session interface, the control adding method further includes:
in response to a trigger operation on the target control, invoking the virtual robot to read the session message corresponding to the target control and the current login account;
and sending a task creation request to the third-party server, the task creation request carrying the session message and the current login account, the third-party server being configured to create, for the current login account, a target task whose task content is the session message.
The embodiments of the present disclosure thus provide a way to extend a target application with an operation that sends a task creation request to a third-party server. The user only needs to trigger the target control corresponding to a session message to send a task creation request carrying the session message and the current login account to the third-party server, which then creates, for the current login account, a target task whose task content is the session message. This is simple and efficient.
In some embodiments, after the virtual robot is invoked to read the session message corresponding to the target control and the current login account, the control adding method further includes:
displaying a task creation interface, the task creation interface including the session message corresponding to the target control;
acquiring input task information based on the task creation interface;
the task creation request sent to the third-party server further carries the task information, and the third-party server is configured to create, for the current login account, a target task whose task content is the session message and which contains the task information.
In the embodiments of the present disclosure, displaying the task creation interface lets the user fill in the task information, so that the third-party server can create a target task with richer task information for the current login account.
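A minimal sketch of the task creation request described in the two preceding embodiments; the field names are assumptions, not part of the disclosure:

```python
def build_task_creation_request(session_message, login_account, task_info=None):
    """The request carries the session message (the task content) and the
    current login account; optional task information collected from the
    task creation interface is merged in."""
    request = {
        "type": "create_task",
        "content": session_message,   # task content = the session message
        "account": login_account,     # the task is created for this account
    }
    if task_info:                     # extra fields from the creation interface
        request.update(task_info)
    return request
```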
In some embodiments, after the virtual account corresponding to the virtual robot is added to the session, the control adding method further includes:
in response to a session message issued in the session that marks the virtual account as the receiving account, invoking the virtual robot to determine a reply message corresponding to the session message;
and issuing the reply message in the session with the virtual account as the issuing account.
In the embodiments of the present disclosure, when a session message addressed to the virtual account is issued in the session, a reply message corresponding to that session message is issued in the session with the virtual account as the issuing account. The virtual robot thus replies to session messages issued by other users in the way real users converse, which makes the virtual robot more anthropomorphic and improves the interaction between users and the virtual robot in the session.
In some embodiments, invoking the virtual robot to determine the reply message corresponding to the session message includes:
invoking the virtual robot to identify an instruction keyword in the session message;
invoking the virtual robot to determine corpus information matching the instruction keyword;
and generating the reply message based on the corpus information.
In the embodiments of the present disclosure, a session message may contain many words, of which only some determine the content of the reply message. Invoking the virtual robot to identify the instruction keyword in the session message and then to determine the corpus information matching that keyword ensures the accuracy of the reply message on the one hand, and reduces the data volume of the corpus information on the other, keeping the reply message concise.
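The three steps (keyword, corpus information, reply) can be sketched as follows; the corpus and its entries are invented for illustration:

```python
# Hypothetical corpus: instruction keywords mapped to corpus information.
CORPUS = {
    "weather": "It is sunny today.",
    "help": "Available commands: weather, help.",
}

def identify_instruction_keyword(session_message):
    """Step 1: scan the message for the first known instruction keyword."""
    for word in session_message.lower().split():
        if word in CORPUS:
            return word
    return None

def determine_reply(session_message):
    """Steps 2-3: look up matching corpus information and build the reply."""
    keyword = identify_instruction_keyword(session_message)
    if keyword is None:
        return "Sorry, I did not understand that."
    return CORPUS[keyword]
```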
In some embodiments, the instruction keyword includes a keyword indicating account selection, and the control adding method further includes:
invoking the virtual robot to select a target account from the plurality of accounts included in the session, the target account being an account matching the instruction keyword;
generating the reply message based on the corpus information includes: combining the target account and the corpus information to obtain the reply message.
In the embodiments of the present disclosure, when the reply message is generated and the instruction keyword is a keyword for selecting an account, the virtual robot can use the account information in the session to select a target account from the accounts the session includes, and combine the target account with the corpus information to obtain the reply message, which enriches the content of the reply message.
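One way the account selection and combination steps could look, assuming a random pick among the session's accounts (the selection policy and "@" format are assumptions):

```python
import random

def select_target_account(session_accounts, robot_account):
    """For an account-selection keyword, choose a target account from the
    session, excluding the robot's own virtual account."""
    candidates = [a for a in session_accounts if a != robot_account]
    return random.choice(candidates)

def combine_reply(target_account, corpus_information):
    """Combine the target account and the corpus information into the reply."""
    return "@" + target_account + " " + corpus_information
```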
In some embodiments, invoking the virtual robot to determine the corpus information matching the instruction keyword includes:
invoking the virtual robot to obtain instruction configuration information, the instruction configuration information including at least one reference instruction keyword and an information query interface corresponding to each reference instruction keyword;
and calling the information query interface corresponding to the instruction keyword to query the corpus information matching the instruction keyword.
In the embodiments of the present disclosure, the reference instruction keywords and the information query interface corresponding to each of them are stored in the instruction configuration information, so the virtual robot can quickly query the information matching an instruction keyword through the corresponding information query interface. This ensures the efficiency of determining the corpus information and improves the efficiency of generating the reply message.
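The instruction configuration information behaves like a dispatch table; in this hypothetical sketch each query interface is simply a callable:

```python
# Hypothetical instruction configuration information: each reference
# instruction keyword maps to an information query interface (a callable).
def query_weather():
    return "sunny"

def query_time():
    return "12:00"

INSTRUCTION_CONFIG = {
    "weather": query_weather,
    "time": query_time,
}

def query_corpus_information(instruction_keyword):
    """Call the query interface configured for the instruction keyword."""
    return INSTRUCTION_CONFIG[instruction_keyword]()
```

Keeping the keyword-to-interface mapping in configuration means new keywords can be added without changing the lookup code.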
In some embodiments, after the virtual account corresponding to the virtual robot is added to the session, the control adding method further includes:
in response to a reference instruction character entered in the message input field of the session interface, displaying the stored robot call instructions in the session interface, a robot call instruction being used to call the virtual robot and the reference instruction character being used to trigger display of the robot call instructions;
in response to a trigger operation on any displayed robot call instruction, issuing the robot call instruction in the session and sending it to a service interface corresponding to the virtual robot, the third-party server being configured to respond to the robot call instruction through the service interface.
In the embodiments of the present disclosure, by setting a reference instruction character, the user can quickly call up the stored robot call instructions by entering that character in the message input field and then trigger the required instruction, instead of typing the robot call instruction manually, which improves the operational efficiency of calling the virtual robot.
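A sketch of the trigger behavior, assuming "/" as the reference instruction character (the disclosure fixes no particular character) and an invented instruction list:

```python
STORED_INSTRUCTIONS = ["/weather", "/remind", "/translate"]
REFERENCE_CHARACTER = "/"   # an assumption for illustration

def instructions_to_display(input_field_text):
    """Display the stored robot call instructions once the reference
    instruction character is entered in the message input field."""
    if input_field_text.startswith(REFERENCE_CHARACTER):
        return list(STORED_INSTRUCTIONS)
    return []
```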
In some embodiments, after the stored robot call instructions are displayed in the session interface in response to a reference instruction character entered in the message input field of the session interface, the control adding method further includes:
in response to characters continuing to be entered in the message input field, filtering out, from the session interface, the robot call instructions that do not include those characters.
In the embodiments of the present disclosure, filtering out the robot call instructions that do not include the characters entered in the message input field lets the user quickly find the required robot call instruction among the remaining instructions that do include those characters, which improves the operational efficiency of calling the virtual robot.
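The filtering step reduces to a substring check over the displayed instructions; a minimal sketch:

```python
def filter_instructions(displayed_instructions, typed_characters):
    """Filter out the robot call instructions that do not include the
    characters typed so far; the rest remain displayed."""
    return [i for i in displayed_instructions if typed_characters in i]
```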
In some embodiments, after the virtual account corresponding to the virtual robot is added to the session, the control adding method further includes:
in response to a trigger operation on a robot identifier in the session interface, displaying a detail interface of the virtual robot corresponding to the robot identifier, the detail interface including a robot sharing control;
in response to a trigger operation on the robot sharing control, generating a sharing link for the virtual robot and displaying a session identifier list, the session identifier list including at least one session identifier;
and in response to a selection operation on any session identifier in the session identifier list, issuing the sharing link in the session corresponding to the selected session identifier.
In the embodiments of the present disclosure, the detail interface of the virtual robot is displayed in response to a trigger operation on the robot identifier in the session interface, and the robot sharing control is displayed in that detail interface, so the user can share the virtual robot with other sessions through the robot sharing control, which helps the virtual robot spread.
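The sharing flow can be sketched as link generation plus issuing into the selected session; the URL scheme here is purely an assumption:

```python
def generate_sharing_link(robot_id, base_url="https://example.com/robot/"):
    """Hypothetical sharing-link format for a virtual robot."""
    return base_url + robot_id

def share_robot(robot_id, selected_session_id, sessions):
    """Issue the sharing link in the session selected from the
    session identifier list (sessions maps identifiers to message lists)."""
    link = generate_sharing_link(robot_id)
    sessions[selected_session_id].append(link)
    return link
```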
In some embodiments, after the virtual account corresponding to the virtual robot is added to the session, the control adding method further includes:
displaying a permission setting interface of the virtual robot, the permission setting interface including at least one operation type;
in response to a selection operation on an operation type in the permission setting interface, determining the permission scope of the virtual robot, the permission scope including the operation types selected in the permission setting interface;
the permission scope indicating that the virtual robot is allowed to perform the operations corresponding to those operation types.
In the embodiments of the present disclosure, the permission setting interface of the virtual robot is displayed after the virtual account corresponding to the virtual robot is added to the session, and the user can then set the permission scope of the virtual robot through that interface, so that the virtual robot performs only operations whose types fall within the permission scope. This ensures that every operation the virtual robot performs has been allowed by the user, which improves user stickiness.
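The permission scope amounts to a set-membership check before the robot acts; the operation type names below are invented for illustration:

```python
def determine_permission_scope(selected_operation_types):
    """The permission scope is the set of operation types the user selected
    in the permission setting interface."""
    return set(selected_operation_types)

def is_operation_allowed(permission_scope, operation_type):
    """The virtual robot may perform an operation only if its type is in scope."""
    return operation_type in permission_scope
```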
In some embodiments, after the virtual account corresponding to the virtual robot is added to the session, the control adding method further includes:
issuing a session message in the session with the virtual account as the issuing account, the session message including function description information that describes the functions the virtual robot can perform.
In some embodiments, the robot display interface includes a robot search control, and the control adding method further includes:
acquiring a search term entered in the robot search control;
acquiring a target robot, provided by the third-party server, that matches the search term;
and displaying the target robot in the robot display interface.
In the embodiments of the present disclosure, the robot search control is displayed in the robot display interface, and the user only needs to enter a search term in the search control to quickly find the target robot matching it, which greatly improves the efficiency of finding the target robot.
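One plausible matching rule (a substring match over names and descriptions — the disclosure does not specify one) can be sketched as:

```python
# Hypothetical robots provided by the third-party server.
ROBOTS = [
    {"name": "weather bot", "description": "daily forecasts"},
    {"name": "task bot", "description": "creates tasks from messages"},
]

def search_robots(search_term, robots=ROBOTS):
    """Return the robots whose name or description matches the search term."""
    term = search_term.lower()
    return [r for r in robots
            if term in r["name"].lower() or term in r["description"].lower()]
```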
In some embodiments, the robot display interface includes a robot creation control, and the control adding method further includes:
in response to a trigger operation on the robot creation control, displaying a robot creation interface;
acquiring input robot information based on the robot creation interface;
and creating a virtual robot that conforms to the robot information, and displaying the created virtual robot in the robot display interface.
In the embodiments of the present disclosure, because the robot creation control is displayed in the robot display interface, the user is not limited to adding existing virtual robots to a session but can create a new virtual robot through the robot creation control, which satisfies the user's personalized requirements.
According to a second aspect of the embodiments of the present disclosure, there is provided a control adding apparatus, including:
a session interface display unit configured to display a robot addition control in a session interface of any session;
a robot display unit configured to display a robot display interface in response to a trigger operation on the robot addition control, the robot display interface including at least one virtual robot provided by a third-party server;
a robot adding unit configured to add, in response to a selection operation on any virtual robot in the robot display interface, a virtual account corresponding to the virtual robot to the session, the virtual robot having a corresponding target control and a corresponding target operation;
and a target control adding unit configured to add the target control to the session interface, the target control being used to trigger execution of the target operation.
In some embodiments, the target control adding unit is configured to add the target control to a menu corresponding to session messages in the session interface; the control adding apparatus further includes:
a target control display unit configured to display, in response to a menu invoking operation on any session message, the menu corresponding to that session message in the session interface, the menu including the target control.
In some embodiments, the target operation includes uploading the session message corresponding to the target control to the third-party server, and the control adding apparatus further includes:
a first operation execution unit configured to invoke, in response to a trigger operation on the target control, the virtual robot to read the session message corresponding to the target control, and to send the session message to the third-party server.
In some embodiments, the target operation includes sending a task creation request to the third-party server, and the control adding apparatus further includes:
a second operation execution unit configured to invoke, in response to a trigger operation on the target control, the virtual robot to read the session message corresponding to the target control and the current login account, and to send a task creation request to the third-party server, the task creation request carrying the session message and the current login account, the third-party server being configured to create, for the current login account, a target task whose task content is the session message.
In some embodiments, the second operation execution unit is further configured to display a task creation interface, the task creation interface including the session message corresponding to the target control, and to acquire input task information based on the task creation interface; the task creation request sent to the third-party server further carries the task information, and the third-party server is configured to create, for the current login account, a target task whose task content is the session message and which contains the task information.
In some embodiments, the control adding apparatus further includes:
a reply message determining unit configured to invoke, in response to a session message issued in the session that marks the virtual account as the receiving account, the virtual robot to determine a reply message corresponding to the session message;
and a reply message issuing unit configured to issue the reply message in the session with the virtual account as the issuing account.
In some embodiments, the reply message determining unit includes:
a keyword identification subunit configured to invoke the virtual robot to identify an instruction keyword in the session message;
a corpus information determining subunit configured to invoke the virtual robot to determine corpus information matching the instruction keyword;
and a reply message generating subunit configured to generate the reply message based on the corpus information.
In some embodiments, the instruction keyword includes a keyword indicating account selection, and the control adding apparatus further includes:
an account selecting unit configured to invoke the virtual robot to select a target account from the plurality of accounts included in the session, the target account being an account matching the instruction keyword;
and the reply message generating subunit is configured to combine the target account and the corpus information to obtain the reply message.
In some embodiments, the corpus information determining subunit is configured to invoke the virtual robot to obtain instruction configuration information, the instruction configuration information including at least one reference instruction keyword and an information query interface corresponding to each reference instruction keyword, and to call the information query interface corresponding to the instruction keyword to query the corpus information matching the instruction keyword.
In some embodiments, the control adding apparatus further includes:
an instruction display unit configured to display, in response to a reference instruction character entered in the message input field of the session interface, the stored robot call instructions in the session interface, a robot call instruction being used to call the virtual robot and the reference instruction character being used to trigger display of the robot call instructions;
and an instruction issuing unit configured to issue, in response to a trigger operation on any displayed robot call instruction, the robot call instruction in the session and send it to a service interface corresponding to the virtual robot, the third-party server being configured to respond to the robot call instruction through the service interface.
In some embodiments, the control adding apparatus further includes:
an instruction filtering unit configured to filter out from the session interface, in response to characters continuing to be entered in the message input field, the robot call instructions that do not include those characters.
In some embodiments, the control adding apparatus further includes:
a robot sharing unit configured to display, in response to a trigger operation on a robot identifier in the session interface, a detail interface of the virtual robot corresponding to the robot identifier, the detail interface including a robot sharing control; generate, in response to a trigger operation on the robot sharing control, a sharing link for the virtual robot and display a session identifier list, the session identifier list including at least one session identifier; and issue, in response to a selection operation on any session identifier in the session identifier list, the sharing link in the session corresponding to the selected session identifier.
In some embodiments, the control adding apparatus further includes:
a permission determining unit configured to display a permission setting interface of the virtual robot, the permission setting interface including at least one operation type, and to determine, in response to a selection operation on an operation type in the permission setting interface, the permission scope of the virtual robot, the permission scope including the operation types selected in the permission setting interface and indicating that the virtual robot is allowed to perform the operations corresponding to those operation types.
In some embodiments, the control adding apparatus further includes:
a function information display unit configured to issue a session message in the session with the virtual account as the issuing account, the session message including function description information that describes the functions the virtual robot can perform.
In some embodiments, the robot display interface includes a robot search control, and the control adding apparatus further includes:
a robot search unit configured to acquire a search term entered in the robot search control, acquire a target robot, provided by the third-party server, that matches the search term, and display the target robot in the robot display interface.
In some embodiments, the robot display interface includes a robot creation control, and the control adding apparatus further includes:
a robot creation unit configured to display a robot creation interface in response to a trigger operation on the robot creation control, acquire input robot information based on the robot creation interface, and create a virtual robot that conforms to the robot information and display the created virtual robot in the robot display interface.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
one or more processors;
a volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the control addition method as described in the above aspect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, and when instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the control adding method described in the above aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the control addition method of the above aspect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a schematic diagram illustrating an implementation environment according to an exemplary embodiment.
FIG. 2 is a flowchart illustrating a control adding method according to an exemplary embodiment.
FIG. 3 is a flowchart illustrating a control adding method according to an exemplary embodiment.
FIG. 4 is a schematic diagram illustrating a session interface according to an exemplary embodiment.
FIG. 5 is a schematic diagram illustrating a robot display interface according to an exemplary embodiment.
FIG. 6 is a schematic diagram illustrating a robot display interface according to an exemplary embodiment.
FIG. 7 is a schematic diagram illustrating a robot display interface according to an exemplary embodiment.
FIG. 8 is a schematic diagram illustrating a robot creation interface according to an exemplary embodiment.
FIG. 9 is a schematic diagram illustrating a robot creation interface according to an exemplary embodiment.
FIG. 10 is a schematic diagram illustrating a session entry display interface according to an exemplary embodiment.
FIG. 11 is a schematic diagram illustrating a session interface according to an exemplary embodiment.
FIG. 12 is a schematic diagram illustrating a details interface of a virtual robot according to an exemplary embodiment.
FIG. 13 is a flowchart illustrating a control adding method according to an exemplary embodiment.
FIG. 14 is a schematic diagram illustrating display of a target control in a menu corresponding to a session message according to an exemplary embodiment.
FIG. 15 is a schematic diagram illustrating display of a robot calling instruction in a session interface according to an exemplary embodiment.
FIG. 16 is a block diagram illustrating a control adding apparatus according to an exemplary embodiment.
FIG. 17 is a block diagram illustrating a terminal according to an exemplary embodiment.
FIG. 18 is a schematic diagram illustrating a structure of a server according to an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described figures are used to distinguish between similar elements and do not necessarily describe a particular sequential or chronological order. It is to be understood that data so used are interchangeable under appropriate circumstances, such that the embodiments of the disclosure described herein can be practiced in sequences other than those illustrated or described herein. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
As used herein, "at least one" means one, two, or more; "a plurality" means two or more; "each" refers to every one of a corresponding plurality; and "any" refers to any one of a plurality. For example, if a plurality of accounts comprises 3 accounts, "each" refers to every one of the 3 accounts, and "any" refers to any one of the 3 accounts, which may be the first, the second, or the third.
FIG. 1 is a schematic diagram of an implementation environment provided by embodiments of the present disclosure. Referring to fig. 1, the implementation environment includes at least one terminal 101 (2 are taken as an example in fig. 1) and a server 102. The terminal 101 and the server 102 are connected via a wireless or wired network. In some embodiments, the terminal 101 is a computer, a cell phone, a tablet, or other terminal. In some embodiments, the server 102 is a background server of the target application or a cloud server providing services such as cloud computing and cloud storage.
In some embodiments, the terminal 101 has installed thereon a target application served by the server 102, through which the terminal 101 can implement functions such as data transmission and message interaction. In some embodiments, the target application is a target application in the operating system of the terminal 101 or a target application provided by a third party. The target application has a communication function, and the terminal 101 can receive and send session messages through the target application. For example, a target account logged in to the target application can join multiple sessions and issue a session message in any joined session, and the other accounts in that session can receive, through the target application, the session message issued by the target account. Of course, the target application can also have other functions, such as a shopping function, a live broadcast function, or a game function, which the present disclosure does not limit. In some embodiments, the target application is an instant messaging application, a short video application, a music application, a gaming application, a shopping application, or another application, to which the present disclosure is not limited.
In the embodiment of the present disclosure, the target application includes a virtual robot used to implement a target operation. Correspondingly, the implementation environment further includes a third-party server 103 that provides the virtual robot. The third-party server 103 is a background server of the virtual robot or a cloud server providing services such as cloud computing and cloud storage, and is connected to the terminal 101 and the server 102 through a wireless or wired network.
In this embodiment of the present disclosure, the third-party server 103 is configured to provide a virtual robot to the terminal 101. The terminal 101 is configured to add a virtual account corresponding to the virtual robot to any session of the target application and to add a target control corresponding to the virtual robot to the session interface of that session, after which the virtual robot can be called to execute the target operation corresponding to it.
The control adding method provided by the disclosure can be applied to a scenario of extending an application with a new operation. For example, if a user wants the target application to support uploading a session message to the third-party server 103, the user only needs to add, by the method provided by the present disclosure, the virtual account corresponding to a virtual robot that implements this operation to any session of the target application; a control for implementing the operation is then added to the session interface corresponding to the session, and triggering the control uploads the session message corresponding to the control to the third-party server 103.
For another example, if a user wants the target application to support sending a task creation request to the third-party server 103, the user only needs to add, by the method provided by the present disclosure, the virtual account corresponding to a virtual robot that implements this operation to any session of the target application; a control for implementing the operation is then added to the session interface corresponding to the session. Triggering the control calls the virtual robot to send a task creation request to the third-party server 103, the task creation request carrying the current login account and the session message corresponding to the control, so that the third-party server 103 creates, for the current login account, a target task with the session message as the task content.
Fig. 2 is a flowchart illustrating a control adding method according to an exemplary embodiment. As shown in fig. 2, the method is used in an electronic device and includes the following steps.
201. Display a robot addition control based on the session interface of any session.
202. In response to a trigger operation on the robot addition control, display a robot display interface, where the robot display interface includes at least one virtual robot provided by a third-party server.
203. In response to a selection operation on any virtual robot in the robot display interface, add a virtual account corresponding to the virtual robot to the session, where the virtual robot has a corresponding target control and a corresponding target operation.
204. Add the target control to the session interface, where the target control is used to trigger execution of the target operation.
In the embodiment of the present disclosure, when a target control needs to be added to an application, the application itself does not have to be updated: a virtual robot having the target control is simply selected from the robot display interface, and the virtual account corresponding to that virtual robot is added to the session, so that the target control can be added to the session interface of the application by means of the virtual robot. This improves the flexibility of extending target controls in an application. Moreover, developers only need to develop the virtual robot and do not need to re-develop an application that carries the target control, which reduces the cost of the extension.
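The flow of steps 201-204 can be illustrated with a minimal, self-contained sketch. All class and function names below are hypothetical stand-ins, not part of the disclosure: selecting a virtual robot adds its virtual account to the session and its target control to the interface, after which triggering the control executes the target operation.

```python
class Session:
    """Hypothetical session model: the accounts in it and the controls shown."""
    def __init__(self):
        self.accounts = []   # real and virtual accounts in the session
        self.controls = []   # controls shown in the session interface

class VirtualRobot:
    """Hypothetical virtual robot with its target control and operation."""
    def __init__(self, name, target_control, target_operation):
        self.name = name
        self.virtual_account = f"bot:{name}"
        self.target_control = target_control
        self.target_operation = target_operation  # callable run on trigger

def on_robot_selected(session, robot):
    """Steps 203/204: add the robot's virtual account and target control."""
    session.accounts.append(robot.virtual_account)
    session.controls.append(robot.target_control)

def on_control_triggered(session, robot, *args):
    """Triggering the target control executes the target operation."""
    if robot.target_control not in session.controls:
        raise ValueError("control not added to this session")
    return robot.target_operation(*args)
```

The point of the design is visible here: the application code never changes; only a robot object is registered against the session.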
In some embodiments, adding the target control to the session interface includes:
adding the target control to a menu corresponding to a session message in the session interface;
and the control adding method further includes: in response to an operation of invoking the menu of any session message, displaying the menu corresponding to the session message in the session interface, where the menu includes the target control.
In some embodiments, the target operation includes uploading the session message corresponding to the target control to the third-party server, and after the menu corresponding to the session message is displayed in the session interface, the control adding method further includes:
in response to a trigger operation on the target control, calling the virtual robot to read the session message corresponding to the target control;
and sending the session message to the third-party server.
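The upload operation above can be sketched as follows; names are hypothetical, and the "third-party server" is simulated as an in-memory stub rather than a network endpoint.

```python
class ThirdPartyServer:
    """In-memory stand-in for the third-party server."""
    def __init__(self):
        self.received = []
    def receive(self, message):
        self.received.append(message)

def upload_session_message(messages, message_id, server):
    # Step 1: the virtual robot reads the session message bound to the
    # triggered target control.
    message = messages[message_id]
    # Step 2: the message is sent to the third-party server.
    server.receive(message)
    return message
```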
In some embodiments, the target operation includes sending a task creation request to the third-party server, and after the menu corresponding to the session message is displayed in the session interface, the control adding method further includes:
in response to a trigger operation on the target control, calling the virtual robot to read the session message corresponding to the target control and the current login account;
and sending a task creation request to the third-party server, where the task creation request carries the session message and the current login account, and the third-party server is configured to create, for the current login account, a target task with the session message as the task content.
In some embodiments, after the virtual robot is called to read the session message corresponding to the target control and the current login account, the control adding method further includes:
displaying a task creation interface, where the task creation interface includes the session message corresponding to the target control;
acquiring input task information based on the task creation interface;
where the third-party server is configured to create, for the current login account, a target task that takes the session message as the task content and contains the task information.
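The task-creation flow can be sketched with the hypothetical request shape below (field names are assumptions): the request carries the current login account, the session message as the task content, and any task information entered in the task creation interface; a stub stands in for the third-party server.

```python
def build_task_creation_request(login_account, session_message, task_info=None):
    """Assemble a hypothetical task creation request."""
    request = {
        "account": login_account,
        "content": session_message,  # the session message becomes the task content
    }
    if task_info:
        request.update(task_info)    # e.g. a deadline entered in the interface
    return request

class TaskServer:
    """Stand-in for the third-party server that creates the target task."""
    def __init__(self):
        self.tasks = {}
    def create_task(self, request):
        # create the task for the account carried in the request
        self.tasks.setdefault(request["account"], []).append(request)
        return len(self.tasks[request["account"]])
```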
In some embodiments, after the virtual account corresponding to the virtual robot is added to the session, the control adding method further includes:
in response to a session message that is issued in the session and takes the virtual account as a receiving account, calling the virtual robot to determine a reply message corresponding to the session message;
and issuing the reply message in the session with the virtual account as the issuing account.
In some embodiments, calling the virtual robot to determine the reply message corresponding to the session message includes:
calling the virtual robot to identify an instruction keyword from the session message;
calling the virtual robot to determine corpus information matching the instruction keyword;
and generating the reply message based on the corpus information.
In some embodiments, the instruction keyword includes a keyword indicating that an account is to be selected, and the control adding method further includes:
calling the virtual robot to select a target account from a plurality of accounts included in the session, the target account being an account that matches the instruction keyword;
and generating the reply message based on the corpus information includes: combining the target account and the corpus information to obtain the reply message.
In some embodiments, calling the virtual robot to determine the corpus information matching the instruction keyword includes:
calling the virtual robot to acquire instruction configuration information, where the instruction configuration information includes at least one reference instruction keyword and an information query interface corresponding to each reference instruction keyword; and calling the information query interface corresponding to the instruction keyword to query the corpus information matching the instruction keyword.
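The reply pipeline above can be sketched end to end. The keywords, query interfaces, and reply strings below are all invented for illustration: the robot matches a reference instruction keyword in the session message, calls the corresponding information query interface, and composes the reply from the returned corpus information, optionally combining in a selected target account.

```python
INSTRUCTION_CONFIG = {
    # reference instruction keyword -> information query interface (hypothetical)
    "weather": lambda: "sunny, 25 degrees",
    "draw":    lambda: "congratulations, you won",
}

def identify_keyword(session_message):
    """Identify the first configured instruction keyword in the message."""
    for keyword in INSTRUCTION_CONFIG:
        if keyword in session_message:
            return keyword
    return None

def build_reply(session_message, accounts=None):
    keyword = identify_keyword(session_message)
    if keyword is None:
        return None
    corpus = INSTRUCTION_CONFIG[keyword]()  # query matching corpus information
    if keyword == "draw" and accounts:      # keyword indicating a selected account
        return f"@{accounts[0]} {corpus}"   # combine target account and corpus
    return corpus
```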
In some embodiments, after the virtual account corresponding to the virtual robot is added to the session, the control adding method further includes:
in response to a reference instruction character being input in the message input field of the session interface, displaying a stored robot calling instruction in the session interface, where the robot calling instruction is used to call the virtual robot, and the reference instruction character is used to trigger display of the robot calling instruction;
and in response to a trigger operation on any displayed robot calling instruction, issuing the robot calling instruction in the session and sending the robot calling instruction to a service interface corresponding to the virtual robot, the third-party server responding to the robot calling instruction through the service interface.
In some embodiments, after the stored robot calling instruction is displayed in the session interface in response to the reference instruction character being input in the message input field, the control adding method further includes:
in response to characters continuing to be input in the message input field, filtering out of the session interface any robot calling instruction that does not include the input characters.
In some embodiments, after the virtual account corresponding to the virtual robot is added to the session, the control adding method further includes:
in response to a trigger operation on a robot identifier in the session interface, displaying a details interface of the virtual robot corresponding to the robot identifier, where the details interface includes a robot sharing control;
in response to a trigger operation on the robot sharing control, generating a sharing link of the virtual robot and displaying a session identifier list, where the session identifier list includes at least one session identifier;
and in response to a selection operation on any session identifier in the session identifier list, issuing the sharing link in the session corresponding to the selected session identifier.
In some embodiments, after the virtual account corresponding to the virtual robot is added to the session, the control adding method further includes:
displaying a permission setting interface of the virtual robot, where the permission setting interface includes at least one operation type;
in response to a selection operation on an operation type in the permission setting interface, determining a permission scope of the virtual robot, where the permission scope includes the operation type selected from the permission setting interface;
and the permission scope represents that the virtual robot is allowed to execute the operations corresponding to the selected operation types.
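The scope check described above reduces to simple set membership. A sketch under invented names and operation types: the scope is the set of operation types selected in the setting interface, and the robot may only execute operations whose type falls inside it.

```python
def set_permission_scope(selected_operation_types):
    """The scope is the set of operation types selected in the interface."""
    return set(selected_operation_types)

def can_execute(scope, operation_type):
    return operation_type in scope

def execute(scope, operation_type, operation):
    """Run an operation only if its type is within the robot's scope."""
    if not can_execute(scope, operation_type):
        raise PermissionError(f"operation '{operation_type}' is outside the scope")
    return operation()
```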
In some embodiments, after the virtual account corresponding to the virtual robot is added to the session, the control adding method further includes:
issuing a session message in the session with the virtual account as the issuing account, where the session message includes function description information describing the functions that the virtual robot can implement.
In some embodiments, the robot display interface includes a robot search control, and the control adding method further includes:
acquiring a search word input in the robot search control;
acquiring a target robot that is provided by the third-party server and matches the search word;
and displaying the target robot in the robot display interface.
In some embodiments, the robot display interface includes a robot creation control, and the control adding method further includes:
in response to a trigger operation on the robot creation control, displaying a robot creation interface;
acquiring input robot information based on the robot creation interface;
and creating a virtual robot that conforms to the robot information, and displaying the created virtual robot in the robot display interface.
FIG. 3 is a flowchart illustrating a control adding method according to an exemplary embodiment. As shown in fig. 3, the method includes the following steps.
301. The terminal displays a robot addition control based on the session interface of any session.
In the embodiment of the present disclosure, a session refers to a message group composed of accounts. A session includes at least two accounts, and a session message issued by any account in the session can be received by the other accounts in the session. One of the at least two accounts is the current login account of the terminal. For example, if a first session is composed of account A, account B, and account C, then when account A issues a session message in the session, account B and account C can both receive it. Each session has a session identifier that characterizes the corresponding session, so that different sessions can be distinguished by their session identifiers. For example, the session identifier is an identifier such as a session name or a session number.
In some embodiments, a session is a session in a target application of the terminal. When multiple accounts in the session are logged in based on the target application and one of the accounts issues a session message in the session, the terminal logged in to that account sends the session message, through the target application, to the target applications of the other terminals, where the other terminals are the terminals logged in to the other accounts in the session. The other terminals can then display the session message in the target application.
In some embodiments, the target application has a corresponding server for storing each created session and the accounts in each session. In addition, the server provides a message forwarding service for the target application according to the stored sessions and the accounts in those sessions. For example, when an account in a session issues a session message, the session message is first sent to the server through the target application; after determining the other accounts in the session corresponding to the session message, the server forwards the session message to the terminals logged in to those accounts.
Each session has a corresponding session interface for displaying a robot addition control, which instructs the addition of a virtual robot to the session, and the session messages published in the session. In addition, other information can be displayed in the session interface, for example, a session addition control used to create a new session, or a video initiation control used to initiate a video call in the session. Of course, the session interface can also include other information, which is not limited in this disclosure.
A virtual robot is an artificial-intelligence robot that can be online at any time and communicate with people through natural language. In the embodiment of the present disclosure, the virtual robot is capable of performing various operations, for example, a weather query operation, a map query operation, a translation operation, a data statistics operation, a game operation, or an operation of uploading a session message to a third-party server or sending a task creation request, which is not limited in the embodiment of the present disclosure.
Fig. 4 is a schematic diagram of a session interface. Referring to fig. 4, a session entry display interface is arranged on the left side and includes a plurality of session entries; after the second session entry is triggered, the session interface corresponding to that entry is displayed on the right side. A robot addition control is displayed at the upper right of the session interface and is used to add a virtual robot to the session corresponding to the session interface.
302. In response to a trigger operation on the robot addition control in the session interface, the terminal displays a robot display interface, where the robot display interface includes at least one virtual robot provided by a third-party server.
In some embodiments, the robot display interface includes at least one virtual robot and function description information for each virtual robot. The function description information describes the operations that the robot can execute and the way in which a user triggers the virtual robot to execute them. For example, the robot display interface includes three virtual robots. Virtual robot A can execute an operation of sending a task creation request to the third-party server, and the user triggers virtual robot A to execute this operation by triggering a task creation control corresponding to a session message in the session interface. Virtual robot B can execute a weather query operation, and the user triggers virtual robot B to execute this operation by issuing, in the session interface, a session message that takes virtual robot B as the receiving object and carries an instruction keyword related to weather query. Virtual robot C can execute a translation operation, and the user triggers virtual robot C to execute this operation by triggering a translation control corresponding to a session message in the session interface. The virtual robots included in the robot display interface here are merely exemplary; the interface can also include other types of virtual robots, which is not limited in this disclosure.
In some embodiments, the function description information of each robot in the robot display interface includes at least one of text information, picture information, and video information. The picture information is a schematic illustration of the process by which the virtual robot executes the corresponding operation. For example, a trigger operation corresponding to the operation is executed in a session that the virtual robot has joined, screenshots of the trigger operation and of the virtual robot executing the corresponding operation are captured, and the captured screenshots are used as the picture information. The video information is a video of the virtual robot executing the corresponding operation. For example, a trigger operation corresponding to the operation is executed in a session that the virtual robot has joined, a video of the trigger operation and of the virtual robot executing the corresponding operation is recorded, and the recorded video is used as the video information. In the embodiment of the disclosure, describing the functions of the virtual robot in the form of videos or pictures lets a user understand the functions and usage of the virtual robot more intuitively.
In some embodiments, the robot display interface includes a plurality of virtual robots provided by third-party servers, wherein each third-party server is configured to provide an operation service for a corresponding virtual robot.
In the embodiment of the disclosure, the terminal responds to the robot addition control by displaying at least one virtual robot in the robot display interface, so that a user can conveniently select a required virtual robot.
Fig. 5 is a schematic diagram of a robot display interface. Referring to fig. 4 and 5, when a trigger operation is executed on the robot addition control in the session interface of fig. 4, the robot display interface shown in fig. 5 is displayed. The robot display interface includes three virtual robots, the function description information corresponding to each virtual robot, and a selection control corresponding to each virtual robot. The selection control contains addition prompt information; executing a trigger operation on the selection control of any virtual robot adds the corresponding virtual robot to the session corresponding to the session interface.
In some embodiments, the robot display interface further includes a robot search control through which the user can search for a virtual robot. Accordingly, the terminal acquires a search word input in the robot search control, acquires a target robot that is provided by a third-party server and matches the search word, and displays the target robot in the robot display interface.
In some embodiments, the search word includes at least one of a robot name, a robot number, and the name of the robot's producer. Of course, the search word can be something else, which is not limited in the embodiment of the present disclosure.
In some embodiments, after finding a target robot matching the search word, the terminal displays the target robot and its robot information in the robot display interface and highlights the content in the robot information that matches the search word. For example, if the search word is "robot", target robots whose robot information contains "robot" are found, and the word "robot" is highlighted in their robot information.
In the embodiment of the disclosure, the robot search control is displayed in the robot display interface, and the user can quickly find a target robot matching a search word simply by entering the search word in the search control, which greatly improves the efficiency of finding the target robot.
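The search-and-highlight behavior described above can be sketched as below. The robot records and field names are invented: a robot matches when any field of its robot information contains the search word, and brackets stand in for visual highlighting of the matching content.

```python
ROBOTS = [  # hypothetical robot information records
    {"name": "robot L", "number": "1001", "producer": "studio A"},
    {"name": "helper",  "number": "2002", "producer": "robot works"},
]

def search_robots(word, robots=ROBOTS):
    # a robot matches when any field of its robot information contains the word
    return [r for r in robots if any(word in value for value in r.values())]

def highlight(text, word):
    # brackets stand in for visual highlighting of the matching content
    return text.replace(word, f"[{word}]")
```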
Fig. 6 is a schematic diagram of a robot display interface. Referring to fig. 6, the robot display interface includes two virtual robots and, for each virtual robot, information such as the robot name, the producer, the robot avatar, and the number of sessions using the virtual robot. The robot display interface also includes a search control; when no search word has been input, the word "search" is displayed in the control. Referring to fig. 7, after the search word "robot" is input in the search control, the found robot L is displayed in the robot display interface.
In some embodiments, the robot display interface includes a robot creation control based on which a user can create a virtual robot. Accordingly, in response to a trigger operation on the robot creation control, the terminal displays a robot creation interface; acquires input robot information based on the robot creation interface; and creates a virtual robot that conforms to the robot information and displays the created virtual robot in the robot display interface.
The robot creation interface is used to create a virtual robot. In some embodiments, the robot creation interface includes a robot information input field in which the user can input the information of the virtual robot, and the terminal can create the virtual robot based on the information input there. In some embodiments, the robot information includes a robot name, a robot avatar, a robot number, a robot profile, and the like. Of course, the robot information can also include other information, for example, a robot calling instruction used to call the virtual robot, which is not limited in the embodiment of the present disclosure.
With continued reference to fig. 7, the robot display interface further includes a robot creation control in which "create robot" is displayed, prompting the user to trigger the control to create a virtual robot. Referring to fig. 8, after the robot creation control in fig. 7 is triggered, the robot creation interface shown in fig. 8 is displayed; it includes a name input field, a robot number input field, a profile input field, and an avatar setting control. After the user inputs the various items of robot information in the robot creation interface and triggers the create option, a virtual robot that conforms to the input robot information is created. Fig. 9 is a schematic diagram of the robot creation interface after the virtual robot has been created successfully. Referring to fig. 9, the robot creation interface further includes a network hook (webhook) address automatically generated for the created virtual robot and an entry to the function configuration interface corresponding to the virtual robot. The webhook address uniquely identifies the session the virtual robot has joined; after the virtual robot joins a session, the server can trigger the virtual robot to issue content in that session by posting the content to the webhook address. Prompt information about configuring more functions is displayed on the entry to the function configuration interface, prompting the user to enter the function configuration interface of the virtual robot through that entry.
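The network hook mechanism just described can be sketched as a small registry. The address format and class names are assumptions: an address is generated when the robot is created and bound to the session the robot joins, and posting content to that address makes the robot issue the content in the bound session.

```python
import uuid

class WebhookRegistry:
    """Hypothetical server-side registry of network hook addresses."""
    def __init__(self):
        self.hooks = {}        # hook address -> (session id, robot account)
        self.published = []    # (session id, issuing account, content)

    def register(self, session_id, robot_account):
        # an address is generated automatically when the robot is created
        address = f"/hooks/{uuid.uuid4().hex}"
        self.hooks[address] = (session_id, robot_account)
        return address

    def post(self, address, content):
        # posting content to the address makes the robot issue it in the
        # session bound to that address
        session_id, account = self.hooks[address]
        self.published.append((session_id, account, content))
```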
In the embodiment of the disclosure, by displaying the robot creation control in the robot display interface, the user is not limited to adding an existing virtual robot to a session but can also create a new virtual robot based on the robot creation control, thereby satisfying the user's personalized requirements.
303. And the terminal responds to the selection operation of any virtual robot in the robot display interface and adds the virtual account corresponding to the selected virtual robot into the conversation.
Accounts are used to distinguish different users: after a user registers with the server corresponding to the target application, the user obtains an account, and the server can then distinguish users by account. In the embodiment of the present disclosure, accounts include virtual accounts and real accounts. A virtual account is an account corresponding to a virtual robot, and a real account is an account corresponding to a real user.
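The distinction between virtual and real accounts can be sketched as a minimal data model; the class and field names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Account:
    """An account registered with the target application's server.
    `is_virtual` distinguishes a virtual robot's account from a real user's."""
    account_id: str
    is_virtual: bool

robot = Account("bot-001", is_virtual=True)   # virtual account of a virtual robot
user = Account("user-123", is_virtual=False)  # real account of a real user
```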
After the terminal adds the virtual account corresponding to the virtual robot to the session, the terminal can execute the operation corresponding to the virtual robot based on the virtual robot. It should be noted that the above steps 301-303 are only one way to add a virtual robot to the session, and the virtual robot can also be added to the session through the following steps 304-307.
304. The terminal displays a session entry display interface, where the session entry display interface includes a robot add control.
Each session has a corresponding session entry, and triggering a session entry displays the session interface corresponding to it. The session entry display interface is used to display these session entries. Other information can also be displayed in the session entry display interface, such as a calendar control for viewing a calendar. Of course, the session entry display interface can also include other information, which is not limited in this disclosure.
305. And the terminal responds to the triggering operation of adding the control to the robot in the conversation entrance display interface and displays the robot display interface, wherein the robot display interface comprises at least one virtual robot provided by a third-party server.
The step is implemented in the same way as step 302, and is not described herein again.
Referring to fig. 10, which is a schematic diagram of the session entry display interface, the interface includes a plurality of session entries, and a robot add control is displayed above them. If a trigger operation is performed on the robot add control, a robot display interface is displayed on the right side of the session entry display interface. The robot display interface includes three virtual robots, function description information for each virtual robot, and a selection control for each virtual robot. Each selection control displays the prompt "add". If a trigger operation is performed on the selection control of any virtual robot, a session identifier list as shown in fig. 10 is displayed in the robot display interface. The list includes four session identifiers (each comprising a session icon and a session name) and prompt information prompting the user to perform a selection operation on a session identifier, so as to add the virtual robot corresponding to the selection control to the session corresponding to the selected session identifier.
306. And the terminal responds to the selection operation of any virtual robot in the robot display interface and displays a conversation identification list, wherein the conversation identification list comprises at least one conversation identification.
Because the user adds the virtual robot through the robot add control in the session entry display interface, the session the virtual robot should join has not yet been determined. In this case, the session identifier list is displayed so that the user can conveniently select the session the virtual robot should join.
307. And the terminal responds to the selection operation of any session identifier in the session identifier list and adds the virtual account corresponding to the selected virtual robot to the session corresponding to the selected session identifier.
In some embodiments, this step is implemented as: and the terminal responds to the selection operation of any session identifier in the session identifier list and sends a robot addition request to the server corresponding to the target application, wherein the robot addition request carries the session identifier and the virtual account corresponding to the selected virtual robot. After receiving the robot addition request, the server determines a corresponding session based on the session identifier in the robot addition request, and then adds the virtual account in the robot addition request to the session.
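The server-side handling of the robot addition request described above can be sketched as follows; the dictionary-based session store and field names are illustrative assumptions:

```python
def handle_robot_add_request(sessions: dict, request: dict) -> None:
    """Server-side sketch: look up the session by the session identifier
    carried in the robot addition request, then add the carried virtual
    account to that session's member list (skipping duplicates)."""
    session = sessions[request["session_id"]]
    if request["virtual_account"] not in session["members"]:
        session["members"].append(request["virtual_account"])

sessions = {"s1": {"members": ["user-123"]}}
handle_robot_add_request(sessions, {"session_id": "s1", "virtual_account": "bot-001"})
```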
It should be noted that the virtual robot has a corresponding target control and a target operation, and the target control is used for triggering execution of the target operation. Correspondingly, after the virtual account corresponding to the virtual robot is added to the session, the control adding method further includes the following steps 308 and 309.
It should be noted that, the target control and the target operation corresponding to the virtual robot are determined by a third-party server providing the virtual robot, and the target operation can be any operation, which is not limited in this disclosure.
308. And adding a target control corresponding to the virtual robot in a session interface corresponding to the session by the terminal.
In some embodiments, this step is implemented as: and adding a target control in a menu corresponding to each conversation message in a conversation interface by the terminal. The menu corresponding to the session message is used to display at least one control corresponding to the session message, for example, the menu corresponding to the session message includes a withdrawal control, which is used to withdraw the session message. As another example, a forwarding control is included in a menu corresponding to the session message, and is used for forwarding the session message to other sessions. Of course, other controls can be included in the menu corresponding to the session message, which is not limited in this disclosure.
In some embodiments, the virtual robot to which the target control belongs, the name of the target control, and the like are displayed in the target control, and certainly, other information can also be displayed in the target control, which is not limited in this disclosure.
After the target control is added to the menu corresponding to each session message in the session interface, the terminal, in response to a menu invoking operation on any session message, displays the menu corresponding to that session message in the session interface, where the menu includes the target control.
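A minimal sketch of assembling the menu for a session message, assuming built-in recall and forward controls plus one target control per added virtual robot; the control names are illustrative:

```python
# Built-in controls every session message menu carries (illustrative).
DEFAULT_MENU = ["Recall", "Forward"]

def build_message_menu(robot_controls: list) -> list:
    """Assemble the menu shown when a session message's menu is invoked:
    the built-in controls followed by the target controls contributed
    by the virtual robots added to the session."""
    return DEFAULT_MENU + robot_controls

menu = build_message_menu(["Translate"])
```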
309. And the terminal responds to the trigger operation of the target control and calls the virtual robot to execute the target operation corresponding to the virtual robot.
In some embodiments, the target control is a translation control and the target operation is a translation operation. Correspondingly, in response to a trigger operation on the translation control, the terminal translates the session message corresponding to the translation control into a session message in another language and displays the translated message, for example translating a Chinese session message into an English one. Thus, if the user needs to translate a session message into another language, the user does not need to jump from the current target application to other translation software; the translation function can be realized directly within the target application, which is simple and efficient.
In the embodiment of the disclosure, when the translation operation needs to be extended in the target application, only the virtual robot for implementing the translation operation needs to be added to the session, and the translation control can be added in the session interface based on the virtual robot, so that the translation operation on the session message is implemented in a manner of triggering the translation control, the target application does not need to be updated, the flexibility of the extension operation is improved, and the extension cost is reduced.
In some embodiments, the terminal responds to the triggering operation of the translation control, displays a translation interface, where the translation interface includes a language selection control, determines a target language of translation based on the language selection control, then translates a session message corresponding to the translation control into a session message belonging to the target language, and displays the translated session message in the translation interface. In this way, the user can freely select the target language to be translated through the language selection control, and the user stickiness is improved. In some embodiments, the language selection control is a language input field based on which the user can input the target language to be translated. Or, the language selection control is in the form of a sliding window, and the language displayed in the sliding window can be changed by the user through sliding operation, so that the language finally displayed in the sliding window is taken as the target language to be translated by the terminal.
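The translation flow can be sketched as a function that receives the message and the target language chosen through the language selection control; the translation back end here is a toy stand-in, since the disclosure leaves the actual translation service to the third party:

```python
def translate_message(message: str, target_language: str, translator) -> str:
    """Triggered by the translation control: `translator` stands in for the
    virtual robot's translation back end (a hypothetical callable), and
    `target_language` comes from the language selection control."""
    return translator(message, target_language)

# Toy back end for illustration only; unknown inputs pass through unchanged.
toy_backend = lambda text, lang: {("你好", "en"): "Hello"}.get((text, lang), text)
result = translate_message("你好", "en", toy_backend)
```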
In some embodiments, the target control is a language style conversion control and the target operation is a language style conversion operation. Correspondingly, in response to a trigger operation on the language style conversion control, the terminal converts the session message corresponding to the control into a session message in another language style and displays the converted message, for example converting the session message into a cute style, an ultra-cool style, a classical style, or an artistic style. In this way, the user can convert session messages into various styles, making them more vivid and interesting and improving the interaction effect among users.
In some embodiments, after the terminal adds the virtual account corresponding to the virtual robot to any session, the control adding method further includes: in response to a session message published in the session that marks the virtual account as the receiving account, the terminal invokes the virtual robot to determine a reply message corresponding to the session message, and publishes the reply message in the session with the virtual account as the publishing account. Having the virtual robot reply to session messages published by other users, in the manner of a conversation between real users, improves the realism of the virtual robot, makes it behave more like a real user, and improves the interaction effect between users and the virtual robot in the session.
In some embodiments, when a user publishes a session message in a session, a receiving account of the session message can be marked in the session message, for example, the user marks an "@ virtual account" in the session message, and then the terminal invokes a virtual robot corresponding to the virtual account to publish a reply message in the session.
In some embodiments, the terminal invoking the virtual robot to determine a reply message corresponding to the conversation message includes: the terminal calls a virtual robot to identify instruction keywords from the session message; calling a virtual robot to determine corpus information matched with the instruction keywords; and generating a reply message based on the corpus information.
The instruction keywords can be of any type. For example, if the instruction keywords relate to weather, the virtual robot determines corpus information matching those keywords: if the instruction keywords include "Beijing" and "weather", the virtual robot determines the weather information of Beijing and uses it as the corpus information. If the instruction keywords relate to pictures, the virtual robot determines corpus information matching those keywords: if the instruction keywords include "picture" and "cat", the virtual robot determines a picture of a cat and uses it as the corpus information.
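The keyword recognition step can be sketched as a naive vocabulary match against the session message; a real robot would likely use proper segmentation or NLP, and the vocabulary here is illustrative:

```python
def extract_keywords(message: str, known_keywords: set) -> list:
    """Identify instruction keywords by matching a known vocabulary
    against the session message (a deliberately naive sketch).
    Sorted so the result is deterministic."""
    return sorted(kw for kw in known_keywords if kw in message)

KNOWN = {"Beijing", "weather", "picture", "cat"}
keywords = extract_keywords("What is the weather in Beijing today?", KNOWN)
```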
In the embodiment of the present disclosure, since the conversation message may include a plurality of vocabularies, and some of the vocabularies are important vocabularies for determining the reply message, that is, contents of the reply message can be determined, the instruction keyword is identified from the conversation message by invoking the virtual robot, and the corpus information matched with the instruction keyword is determined by invoking the virtual robot, so that on one hand, accuracy of the reply message can be ensured, on the other hand, data volume of the corpus information is reduced, and simplicity of the reply message is ensured.
In some embodiments, the terminal invokes the virtual robot to determine corpus information matching the instruction keywords, including: the terminal calls the virtual robot to obtain instruction configuration information, wherein the instruction configuration information comprises at least one reference instruction keyword and corpus information matched with each reference instruction keyword. For example, the instruction configuration information includes reference instruction keywords "picture" and a cat picture corresponding to the "cat". Therefore, the reference instruction keywords and the corpus information matched with each reference instruction keyword are directly stored in the instruction configuration information, and the efficiency of determining the corpus information is improved.
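A minimal sketch of this direct-storage variant, where the instruction configuration information maps reference instruction keywords straight to corpus information; the entries are illustrative:

```python
# Instruction configuration: sorted keyword tuples -> corpus information.
INSTRUCTION_CONFIG = {
    ("cat", "picture"): "cat_photo.png",
    ("Beijing", "weather"): "Beijing: sunny, 25\u00b0C",
}

def lookup_corpus(keywords: list) -> str:
    """Match the recognized instruction keywords directly against the
    corpus information stored in the instruction configuration."""
    return INSTRUCTION_CONFIG.get(tuple(sorted(keywords)), "Sorry, I did not understand.")
```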
In some embodiments, the terminal invokes the virtual robot to determine corpus information matching the instruction keywords, including: the method comprises the steps that a terminal calls a virtual robot to obtain instruction configuration information, wherein the instruction configuration information comprises at least one reference instruction keyword and an information query interface corresponding to each reference instruction keyword; and calling an information query interface corresponding to the instruction key words by the terminal to query the corpus information matched with the instruction key words.
In the embodiment of the disclosure, the reference instruction keywords and the information query interface corresponding to each reference instruction keyword are stored in the instruction configuration information, so that the virtual robot can quickly query the information matched with the instruction keywords based on the information query interface corresponding to the instruction keywords, the efficiency of determining the corpus information is ensured, and the efficiency of generating the reply message is improved.
In some embodiments, each information query interface is associated with a corpus database of the third-party server, and the corpus databases associated with different information query interfaces are different. Correspondingly, the terminal calls an information query interface corresponding to the instruction keyword to query the corpus information matched with the instruction keyword, and the information query interface comprises the following steps: the method comprises the steps that a terminal calls an information query interface corresponding to an instruction keyword to send a corpus acquisition request to a third-party server, the corpus acquisition request carries the instruction keyword, the third-party server queries corpus information matched with the instruction keyword in a corpus database associated with the information query interface after receiving the corpus acquisition request, and the corpus information is returned to the terminal through the information query interface.
In some embodiments, the instruction keywords include keywords indicating a selected account, and accordingly, the control adding method further includes: and the terminal calls the virtual robot to select a target account from a plurality of accounts included in the conversation, wherein the target account is an account matched with the instruction keywords. Correspondingly, the terminal generates a reply message based on the corpus information, and the reply message comprises: and the terminal combines the target account and the corpus information to obtain a reply message.
For example, the session message is "Magic mirror, who is the most beautiful?". The session message includes the instruction keywords "magic mirror", "who", and "most beautiful", where "who" and "most beautiful" are keywords indicating that an account should be selected. The corpus information matching "magic mirror" and "most beautiful" is "The magic mirror has determined that <account selected by the virtual robot> is the most beautiful person in the session". The terminal invokes the virtual robot to select an account matching "most beautiful" from the accounts included in the session, and combines the selected account with the corpus information to obtain the reply message. For example, if the account selected by the virtual robot is "account M", the combined reply message is "The magic mirror has determined that account M is the most beautiful person in the session".
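The account selection and combination step can be sketched as follows; the template string and the seeded random choice are illustrative assumptions:

```python
import random

CORPUS_TEMPLATE = ("The magic mirror has determined that {account} "
                   "is the most beautiful person in the session.")

def build_reply(session_accounts: list, seed: int = 0) -> str:
    """Select a target account from the accounts in the session and combine
    it with the corpus information to form the reply message."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    target = rng.choice(session_accounts)
    return CORPUS_TEMPLATE.format(account=target)

reply = build_reply(["account M"])
```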
In the embodiment of the disclosure, under the condition that the instruction keyword is a keyword for selecting an account, the virtual robot is called to select a target account from a plurality of accounts included in the session, and the target account and the corpus information are combined to obtain the reply message, so that when the virtual robot generates the reply message, the content of the reply message can be enriched by combining the account information in the session, and the application scene of the virtual robot to reply the message is expanded.
In some embodiments, after the terminal adds the virtual account corresponding to the virtual robot to any session, the control adding method further includes: the method comprises the steps that a terminal displays an authority setting interface of the virtual robot, wherein the authority setting interface comprises at least one operation type; the terminal responds to the selection operation of the operation type in the authority setting interface, and determines the authority range of the virtual robot, wherein the authority range comprises the operation type selected from the authority setting interface, and the authority range representation allows the virtual robot to execute the operation corresponding to the operation type.
The virtual robot can perform various types of operations: for example, uploading a session message to a third-party server, sending a task creation request to a third-party server, querying weather information, reading account information in a session, publishing a session message in a session, and so on. A user may want the virtual robot to perform only some types of operations in the session and not others. Therefore, in the embodiment of the present disclosure, after the virtual account corresponding to the virtual robot is added to the session, the authority setting interface of the virtual robot is displayed, and the user can set the authority range of the virtual robot in this interface so that the virtual robot only performs operations whose types fall within the authority range. In this way, every operation the virtual robot performs has been allowed by the user, which can improve user stickiness.
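The authority range check can be sketched as a simple set membership test; the operation type names are illustrative:

```python
def is_allowed(authority_range: set, operation_type: str) -> bool:
    """The authority range holds the operation types selected in the
    authority setting interface; the virtual robot may only perform
    operations whose type is in that set."""
    return operation_type in authority_range

# Operation types the user selected in the authority setting interface.
authority_range = {"publish_message", "query_weather"}
```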
In some embodiments, after the terminal adds the virtual account corresponding to the virtual robot to any session, the control adding method further includes: and the terminal takes the virtual account as an issuing account, issues a session message in the session, wherein the session message comprises function description information which is used for describing the function which can be realized by the virtual robot.
In some embodiments, the functional descriptive information includes an operation that the virtual robot implementation is capable of performing, and a method of the user triggering the virtual robot to perform the operation. Therefore, users in conversation can know the functions and the usage of the virtual robot conveniently, and further can execute operation based on the virtual robot.
Fig. 11 is a schematic diagram of a session interface. Referring to fig. 11, after the virtual account corresponding to the virtual robot is added to the session corresponding to the second session identifier, a session message published by virtual robot A is displayed in the session interface, and the session message includes the function description information of virtual robot A. Then, user M publishes a session message in the session that takes virtual robot A as the receiving object and carries the instruction keyword "cat". Virtual robot A then publishes a reply message in the session that marks user M as the receiving object and carries the information corresponding to the instruction keyword "cat", namely pictures of three cats.
In some embodiments, after the terminal adds the virtual account corresponding to the virtual robot to the session, the control adding method further includes: and the terminal responds to the triggering operation of the robot identification in the session interface and displays a detail interface of the virtual robot corresponding to the robot identification, wherein the detail interface comprises a robot sharing control. The terminal responds to triggering operation of a robot sharing control, generates a sharing link of the virtual robot, and displays a conversation identification list, wherein the conversation identification list comprises at least one conversation identification; and the terminal responds to the selection operation of any session identifier in the session identifier list and issues the sharing link to the session corresponding to the selected session identifier.
After the terminal issues the sharing link to the session corresponding to the selected session identifier, the user in the session can click the sharing link, and correspondingly, the terminal corresponding to the user responds to the received triggering operation of the sharing link and displays the detail interface of the virtual robot, so that the user can add the virtual account corresponding to the virtual robot to the session based on the robot addition control in the detail interface.
The robot identification in the session interface includes a robot head portrait of the virtual robot, a corresponding virtual account number, and the like. The details interface of the virtual robot is used to display robot information of the virtual robot, such as a robot name, a producer, a function, and the like of the virtual robot.
In the embodiment of the disclosure, the detail interface of the virtual robot is displayed by responding to the triggering operation of the robot identifier in the session interface, and the robot sharing control is displayed in the detail interface, so that a user can share the virtual robot to other sessions based on the robot sharing control, thereby being beneficial to the propagation of the virtual robot.
In some embodiments, the detail interface further includes a message sending control. After the message sending control is triggered, if the virtual robot is a contact in the address book of the currently logged-in account, the terminal jumps to a separate chat interface with the virtual robot, so that the user can conveniently enter that interface from the session interface and talk with the virtual robot there without disturbing other users in the session. If the virtual robot is not a contact in the address book of the currently logged-in account, the terminal displays an addition prompt to prompt the user to add the virtual account corresponding to the virtual robot to the address book. In some embodiments, the detail interface further includes a robot adding control for adding the virtual robot to the address book. In some embodiments, when the currently logged-in account has already added the virtual robot to the address book, performing a trigger operation on the robot adding control deletes the virtual account from the address book.
In some embodiments, if the current login account is an account for creating the virtual robot, the detail interface further includes an entry of a function configuration interface, and the user can enter the function configuration interface of the virtual robot based on the entry, and configure a function of the virtual robot in the function configuration interface, for example, configure a robot call instruction in the function configuration interface.
Fig. 12 is a schematic diagram of the detail interface of the virtual robot. Referring to fig. 12, after the avatar of robot L in the session interface is triggered, the detail interface of robot L is displayed in a floating window over the session interface. The detail interface includes information such as the robot avatar, the creator, the profile, and the number of sessions using robot L. In addition, the detail interface includes a robot sharing control displaying the word "share". A robot adding control is displayed to the right of the robot sharing control and is used to add the virtual account corresponding to robot L to the address book. An entry to the function configuration interface is displayed to the right of the robot adding control, and the user can enter the function configuration interface of robot L by triggering this entry. A message sending control is displayed below the profile; if it is triggered, a separate chat interface with robot L is entered.
It should be noted that the embodiments of the present disclosure include operation controls of different types, and all of these operation controls are operation objects of the user, requiring a user operation to trigger them. The trigger operations for different types of operation controls may be the same or different, and accordingly, the operations the user needs to perform to trigger them may be the same or different. For example, the trigger operation on an operation control of the count type and on an operation control of the reply type may both be a click operation, while the trigger operation on an operation control of the link type may be a long-press operation.
It should be noted that, in the embodiment of the present disclosure, the control is used as an entrance to control the virtual robot to interact with the third-party server, so that the session information can be uploaded to the third-party server, and the third-party server performs an operation. The specific operation executed by the third-party server is determined by the processing logic of the third-party server, which is equivalent to that an interactive interface is provided for a third-party developer based on the virtual robot, and any third-party developer can interact with the target application only by developing a robot with a corresponding function, so that a new function is expanded in the target application. Thus, the robot in the disclosed embodiment effectively assumes the role of a "bridge," tightly coupling the target application with the third party service.
It should be noted that the embodiment of the present disclosure provides various low-cost forms of interaction, so that users can freely use robot capabilities to meet the requirements of information dissemination and human-computer interaction in various scenarios. Moreover, low-cost robot creation, retrieval, and management experiences are provided, so that professional users can exercise robot capabilities and thereby reach ordinary users, while ordinary users can interact with robots more easily.
In the embodiment of the disclosure, when the target control needs to be extended in the application, the method is not limited to updating the application, and only the virtual robot with the target control needs to be selected from the robot display interface, and the virtual account corresponding to the virtual robot is added to the session, so that the target control can be added in the session interface of the application by using the virtual robot, and thus, the flexibility of extending the target control in the application is improved. Moreover, developers only need to develop the virtual robot, and do not need to develop the application with the target control again, so that the expansion cost is reduced.
It should be noted that the embodiment corresponding to fig. 3 only describes some operations that can be performed by the virtual robot, and in the following embodiments, other operations that can be performed by the virtual robot are described.
FIG. 13 is a flowchart illustrating a control addition method, as shown in FIG. 13, including the following steps in accordance with an exemplary embodiment.
1301. The terminal adds a virtual account corresponding to the virtual robot to any session, the virtual robot has a corresponding target control and a target operation, and the target control is used for triggering execution of the target operation.
1302. And adding a target control in a menu corresponding to each conversation message in a conversation interface by the terminal.
1303. And the terminal responds to the menu operation of calling any conversation message and displays a menu corresponding to the conversation message on a conversation interface, wherein the menu comprises a target control.
The steps 1301 through 1303 are already described in the steps 301 through 308, and will not be described herein again.
In some embodiments, the target control includes a message upload control and the target operation is an operation of uploading the session message to the third-party server; accordingly, steps 1304 and 1305 are performed after step 1303. In some embodiments, the target control includes a task creation control and the target operation is an operation of sending a task creation request to the third-party server; accordingly, steps 1306 and 1307 are performed after step 1303.
1304. In response to a trigger operation on the message upload control, the terminal calls the virtual robot to read the session message corresponding to the message upload control.
After calling the virtual robot to read the session message corresponding to the message upload control, the terminal can upload the session message to the third-party server.
1305. The terminal sends the session message to the third-party server.
The third-party server corresponds to the virtual robot and provides services for the virtual robot.
In some embodiments, sending the session message to the third-party server includes: the terminal sends the session message to a service interface corresponding to the virtual robot, and the third-party server receives the session message through the service interface. In some embodiments, after receiving the session message, the third-party server performs an operation based on the session message. This operation can be any operation, which is not limited in this disclosure.
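As a concrete illustration, steps 1304 and 1305 amount to packaging the session message and delivering it to the service interface registered by the virtual robot. The sketch below is a minimal assumption-laden illustration; the field names, request shape, and endpoint URL are hypothetical and are not specified by the disclosure.

```python
# Hypothetical sketch of steps 1304-1305: the terminal forwards the session
# message corresponding to the message upload control to the service
# interface of the virtual robot's third-party server. Field names and the
# endpoint are illustrative assumptions only.

def build_upload_request(service_url, session_id, message):
    """Package a session message for delivery to the robot's service interface."""
    return {
        "url": service_url,            # service interface registered by the robot
        "method": "POST",
        "body": {
            "session_id": session_id,  # session the message was read from
            "message_id": message["id"],
            "sender": message["sender"],
            "content": message["content"],
        },
    }

request = build_upload_request(
    "https://example.com/robot/service",  # hypothetical service interface
    "session-001",
    {"id": "msg-1", "sender": "user-a", "content": "quarterly figures"},
)
```

A real implementation would then POST this payload over HTTP; the third-party server decides what to do with the message, as noted above.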
In the embodiment of the present disclosure, the operations extended in the target application by using the virtual robot further include uploading a session message to a third-party server; that is, a method for flexibly extending operations in the target application is provided. Because the message upload control is displayed in the menu of the session message, the user can upload the session message to the third-party server simply by triggering the message upload control, which is simple and efficient.
1306. In response to a trigger operation on the task creation control, the terminal calls the virtual robot to read the session message corresponding to the task creation control and the current login account.
After calling the virtual robot to read the session message and the current login account, the terminal can send a task creation request based on them.
1307. The terminal sends a task creation request to the third-party server, where the task creation request carries the session message and the current login account, and the third-party server is used for creating, for the current login account, a target task with the session message as the task content.
In some embodiments, after receiving the task creation request, the third-party server creates, in a third-party application corresponding to the third-party server, a target task for the current login account with the session message as the task content.
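Step 1307 can be pictured as assembling a small request payload; a minimal sketch follows, in which all field names are hypothetical illustrations rather than part of the disclosure.

```python
# Hypothetical sketch of step 1307: the task creation request carries the
# session message and the current login account, so the third-party server
# can create a target task whose content is the session message. Optional
# task information (see the task creation interface below) can be merged in.

def build_task_creation_request(session_message, login_account, task_info=None):
    request = {
        "account": login_account,                   # task is created for this account
        "task_content": session_message["content"]  # session message becomes the task content
    }
    if task_info:                                   # optional fields entered by the user
        request.update(task_info)
    return request

req = build_task_creation_request(
    {"content": "review the draft"}, "user-a", {"task_type": "todo"}
)
```

The optional `task_info` argument corresponds to the task creation interface described below, where the user can supplement the task with extra information.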
In the embodiment of the present disclosure, the operations extended in the target application by using the virtual robot further include sending a task creation request to the third-party server; that is, a method for flexibly extending operations in the target application is provided. Because the task creation control is displayed in the menu of the session message, the user can, simply by triggering the task creation control, send a task creation request carrying the session message and the current login account to the third-party server, so that the third-party server creates, for the current login account, a target task with the session message as the task content.
In some embodiments, after the terminal calls the virtual robot to read the session message corresponding to the task creation control and the current login account, the control adding method further includes: the terminal displays a task creation interface, where the task creation interface includes the session message corresponding to the task creation control; the terminal obtains input task information based on the task creation interface; and the task creation request further carries the task information, so that the third-party server creates, for the current login account, a target task that takes the session message as the task content and contains the task information.
In some embodiments, the task information includes a task duration, a task identifier, a task type, an account for issuing the task, and the like, which is not limited in this disclosure. In some embodiments, the task creation interface includes an information input field for each of at least one type of task information, and the terminal obtains the task information entered in each information input field accordingly.
In the embodiment of the present disclosure, by displaying the task creation interface, the user can supplement the task information through the task creation interface, so that the third-party server can create, for the current login account, a target task with richer task information.
Fig. 14 is a diagram illustrating display of target controls in the menu corresponding to a session message. Referring to fig. 14, when the session message pointed to by the arrow is triggered, the menu corresponding to the session message is displayed. The menu includes a copy control, a collection control, a remark control, and two target controls, namely a message upload control and a task creation control.
In some embodiments, after the terminal adds the virtual account corresponding to the virtual robot to the session, the control adding method further includes: in response to a reference instruction character being entered in the message input field of the session interface, the terminal displays the stored robot call instructions in the session interface; in response to a trigger operation on any displayed robot call instruction, the terminal issues the robot call instruction into the session and sends it to the service interface corresponding to the virtual robot; and the third-party server responds to the robot call instruction through the service interface.
A robot call instruction is used for calling the virtual robot. In some embodiments, the virtual robot has a corresponding robot call instruction for calling the virtual robot to perform the operation corresponding to that instruction. For example, if the robot call instruction is a weather query instruction, it is used for calling the virtual robot to issue weather information in the session. In some embodiments, one virtual robot can correspond to a plurality of different robot call instructions, each of which calls the virtual robot to perform a different operation.
The reference instruction character is used for triggering display of the robot call instructions. It can be set to any character, for example "/", which is not limited in the embodiment of the present disclosure. In some embodiments, entering the reference instruction character triggers display of the robot call instructions only when no other characters have been entered in the message input field; if the reference instruction character is entered after other characters, it is treated as an ordinary character. In addition, if the robot call instructions have been triggered by entering the reference instruction character and other characters are then inserted before it, the robot call instructions are no longer displayed and the reference instruction character is treated as an ordinary character.
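The trigger condition just described reduces to checking that the reference instruction character is the first character in the message input field. The following is a minimal sketch of that check, assuming "/" as the reference character; the function name is an illustrative assumption.

```python
# Sketch of the reference-instruction-character rule: the stored robot call
# instructions are shown only while the reference character is the first
# character in the message input field; anywhere else, it is treated as an
# ordinary character.

def call_menu_visible(field_text, ref_char="/"):
    return field_text.startswith(ref_char)
```

Characters typed after the reference character do not hide the menu; instead they filter it, as described further below.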
In some embodiments, the display elements of each robot call instruction displayed in the session interface include at least one of: the name of the robot call instruction, the function description information of the robot call instruction, the virtual robot to which the robot call instruction belongs, or the call prompt information of the robot call instruction. The call prompt information prompts the service parameters that need to be input when the robot call instruction is called, where a service parameter is a parameter needed by the third-party server to respond to the robot call instruction. For example, if the function of a robot call instruction is to query the weather, its corresponding service parameter is a location parameter.
In some embodiments, displaying the stored robot call instructions in the session interface includes: the terminal determines, from the stored robot call instructions, the robot call instructions corresponding to all virtual robots added to the current session, and displays the determined robot call instructions in the session interface. In some embodiments, when a plurality of robot call instructions are determined, they are first sorted and then displayed in the session interface in the sorted order. The sorting can be done in any manner; for example, all robot call instructions corresponding to one virtual robot are arranged before those of the next virtual robot. In that case, the virtual robots are ordered by the initials of their names, and the robot call instructions of each virtual robot are ordered by the initials of the instruction names.
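The example ordering above (group by virtual robot, robots by name initial, instructions by name initial) corresponds to a two-level sort key. A brief sketch with hypothetical robot and instruction names:

```python
# Sketch of the example sorting: instructions are grouped by virtual robot,
# robots ordered by the initials of their names, and each robot's
# instructions ordered by the initials of the instruction names.

instructions = [
    {"robot": "Planner", "name": "standup"},
    {"robot": "Forecast", "name": "weather"},
    {"robot": "Forecast", "name": "air-quality"},
]

# A tuple key gives the two-level ordering in one stable sort.
ordered = sorted(instructions, key=lambda i: (i["robot"], i["name"]))
# -> Forecast/air-quality, Forecast/weather, Planner/standup
```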
In some embodiments, issuing the robot call instruction into the session in response to a trigger operation on any displayed robot call instruction includes: in response to the trigger operation, the terminal displays the robot call instruction in the message input field and, in response to a confirm-send operation, issues the robot call instruction into the session. In some embodiments, after displaying the robot call instruction in the message input field, the terminal receives information input after the instruction and, in response to the confirm-send operation, issues the robot call instruction together with the information into the session. Correspondingly, while being issued into the session, the robot call instruction and the information are simultaneously sent to the service interface corresponding to the virtual robot. The third-party server receives the robot call instruction and the information through the service interface, identifies the service parameters from the information, and responds to the robot call instruction based on the service parameters.
It should be noted that, in the embodiment of the present disclosure, the manner in which the third-party server responds to the robot call instruction is determined by the third-party server. For example, the third-party server responds by returning a session message, which the virtual robot issues in the session. For another example, the third-party server responds by returning a page address, and the terminal displays the page corresponding to that address. For another example, the third-party server performs the operation corresponding to the function of the robot call instruction without returning any information; for instance, it creates a timed task in an associated application. Of course, the third-party server can respond to the robot call instruction in other manners, which is not limited in this disclosure.
In the embodiment of the present disclosure, by setting the reference instruction character, the user can quickly call up the stored robot call instructions by entering the reference instruction character in the message input field, and can then call an instruction simply by triggering the desired one. This saves the user from typing the robot call instruction manually and improves the operation efficiency of calling the virtual robot.
In some embodiments, after the terminal displays the stored robot call instructions in the session interface in response to the reference instruction character being entered in the message input field, the control adding method further includes: in response to a character continuing to be entered in the message input field, the terminal filters out, from the session interface, the robot call instructions that do not include the character.
A robot call instruction not including the character means that the character does not appear in the name of the robot call instruction.
In the embodiment of the present disclosure, by filtering out, in response to a character entered in the message input field, the robot call instructions that do not include the character, the user can quickly find the desired instruction among the remaining robot call instructions that do include the character, which improves the operation efficiency of calling the virtual robot.
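The filtering step amounts to matching the characters typed after the reference character against the instruction names. A minimal sketch (instruction names are hypothetical):

```python
# Sketch of the filtering rule: an instruction stays in the menu only if
# the text typed after the reference instruction character appears in the
# instruction's name.

def filter_call_instructions(instructions, typed):
    return [inst for inst in instructions if typed in inst["name"]]

menu = [{"name": "weather"}, {"name": "standup"}, {"name": "wiki"}]
```

For example, with `menu` above, typing "w" keeps "weather" and "wiki" while "standup" is filtered out; an empty filter keeps everything.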
Fig. 15 is a schematic diagram showing robot call instructions in the session interface. Referring to fig. 15, after the reference instruction character "/" is entered in the message input field, three robot call instructions are displayed in the session interface: robot call instruction H, robot call instruction F, and robot call instruction G. Robot call instruction H belongs to robot L, robot call instruction F belongs to robot O, and robot call instruction G belongs to robot P. By triggering robot call instruction H, "@robot L /robot call instruction H" can be issued in the session, and robot call instruction H is sent to the service interface corresponding to virtual robot L.
In the embodiment of the present disclosure, when a target control needs to be extended in an application, it is not necessary to update the application. A virtual robot having the target control is simply selected from the robot display interface, and the virtual account corresponding to the virtual robot is added to the session, so that the target control can be added to the session interface of the application by means of the virtual robot. This improves the flexibility of extending target controls in the application. Moreover, developers only need to develop the virtual robot and do not need to re-develop the application with the target control, which reduces the extension cost.
Fig. 16 is a block diagram illustrating a control addition apparatus in accordance with an exemplary embodiment. Referring to fig. 16, the apparatus includes:
a session interface display unit 1601 configured to display a robot adding control in a session interface of any session;
a robot display unit 1602 configured to display, in response to a trigger operation on the robot adding control, a robot display interface, where the robot display interface includes at least one virtual robot provided by a third-party server;
a robot adding unit 1603 configured to add, in response to a selection operation on any virtual robot in the robot display interface, a virtual account corresponding to the virtual robot to the session, where the virtual robot has a corresponding target control and a corresponding target operation;
and a target control adding unit 1604 configured to add the target control in the session interface, where the target control is used for triggering execution of the target operation.
In some embodiments, the target control adding unit 1604 is configured to add the target control to the menu corresponding to each session message in the session interface; the control adding apparatus further includes:
a target control display unit configured to display, in response to an operation of calling up the menu of any session message, the menu corresponding to the session message in the session interface, where the menu includes the target control.
In some embodiments, the target operation includes an operation of uploading the session message corresponding to the target control to a third-party server, and the control adding apparatus further includes:
a first operation execution unit configured to call, in response to a trigger operation on the target control, the virtual robot to read the session message corresponding to the target control, and to send the session message to the third-party server.
In some embodiments, the target operation includes an operation of sending a task creation request to a third-party server, and the control adding apparatus further includes:
a second operation execution unit configured to call, in response to a trigger operation on the target control, the virtual robot to read the session message corresponding to the target control and the current login account, and to send a task creation request to the third-party server, where the task creation request carries the session message and the current login account, and the third-party server is used for creating, for the current login account, a target task with the session message as the task content.
In some embodiments, the second operation execution unit is further configured to display a task creation interface, where the task creation interface includes the session message corresponding to the target control; obtain input task information based on the task creation interface; and make the task creation request further carry the task information, so that the third-party server creates, for the current login account, a target task that takes the session message as the task content and contains the task information.
In some embodiments, the control adding apparatus further includes:
a reply message determining unit configured to call, in response to a session message that is issued in the session and marks the virtual account as a receiving account, the virtual robot to determine a reply message corresponding to the session message;
and a reply message issuing unit configured to issue the reply message in the session with the virtual account as the issuing account.
In some embodiments, the reply message determining unit includes:
a keyword identification subunit configured to call the virtual robot to identify an instruction keyword from the session message;
a corpus information determining subunit configured to call the virtual robot to determine corpus information matching the instruction keyword;
and a reply message generation subunit configured to generate the reply message based on the corpus information.
In some embodiments, the instruction keyword includes a keyword indicating selection of an account, and the control adding apparatus further includes:
an account selecting unit configured to call the virtual robot to select a target account from a plurality of accounts included in the session, where the target account is an account matching the instruction keyword;
and the reply message generation subunit is configured to combine the target account and the corpus information to obtain the reply message.
In some embodiments, the corpus information determining subunit is configured to call the virtual robot to obtain instruction configuration information, where the instruction configuration information includes at least one reference instruction keyword and an information query interface corresponding to each reference instruction keyword, and to call the information query interface corresponding to the instruction keyword to query the corpus information matching the instruction keyword.
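The instruction configuration information described here can be pictured as a table mapping each reference instruction keyword to its information query interface. The sketch below uses stand-in query functions; all names are hypothetical, and a real implementation would call the actual interfaces of the third-party server.

```python
# Sketch of the corpus lookup: the instruction configuration maps each
# reference instruction keyword to an information query interface; the
# interface registered for the identified keyword is called to fetch the
# matching corpus information. The query functions are stand-ins.

def query_weather(keyword):
    return "corpus for " + keyword   # would call the real weather query interface

def query_schedule(keyword):
    return "corpus for " + keyword   # would call the real schedule query interface

INSTRUCTION_CONFIG = {
    "weather": query_weather,
    "schedule": query_schedule,
}

def fetch_corpus(keyword):
    # Look up the query interface for this reference keyword; unknown
    # keywords yield no corpus information.
    query = INSTRUCTION_CONFIG.get(keyword)
    return query(keyword) if query is not None else None
```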
In some embodiments, the control adding apparatus further includes:
an instruction display unit configured to display, in response to a reference instruction character being entered in the message input field of the session interface, the stored robot call instructions in the session interface, where a robot call instruction is used for calling the virtual robot and the reference instruction character is used for triggering display of the robot call instructions;
and an instruction issuing unit configured to issue, in response to a trigger operation on any displayed robot call instruction, the robot call instruction into the session and send it to the service interface corresponding to the virtual robot, where the third-party server is used for responding to the robot call instruction through the service interface.
In some embodiments, the control adding apparatus further includes:
an instruction filtering unit configured to filter out from the session interface, in response to a character continuing to be entered in the message input field, the robot call instructions that do not include the character.
In some embodiments, the control adding apparatus further includes:
a robot sharing unit configured to display, in response to a trigger operation on a robot identifier in the session interface, a detail interface of the virtual robot corresponding to the robot identifier, where the detail interface includes a robot sharing control; generate, in response to a trigger operation on the robot sharing control, a sharing link of the virtual robot and display a session identifier list, where the session identifier list includes at least one session identifier; and issue, in response to a selection operation on any session identifier in the session identifier list, the sharing link into the session corresponding to the selected session identifier.
In some embodiments, the control adding apparatus further includes:
an authority determining unit configured to display an authority setting interface of the virtual robot, where the authority setting interface includes at least one operation type, and to determine, in response to a selection operation on an operation type in the authority setting interface, the authority range of the virtual robot, where the authority range includes the operation types selected from the authority setting interface and indicates that the virtual robot is allowed to perform the operations corresponding to those operation types.
In some embodiments, the control adding apparatus further includes:
a function information display unit configured to issue a session message in the session with the virtual account as the issuing account, where the session message includes function description information describing the functions that the virtual robot can realize.
In some embodiments, the robot display interface includes a robot search control, and the control adding apparatus further includes:
a robot search unit configured to obtain a search term entered in the robot search control, obtain a target robot provided by the third-party server and matching the search term, and display the target robot in the robot display interface.
In some embodiments, the robot display interface includes a robot creation control, and the control adding apparatus further includes:
a robot creating unit configured to display a robot creation interface in response to a trigger operation on the robot creation control, obtain input robot information based on the robot creation interface, create a virtual robot conforming to the robot information, and display the created virtual robot in the robot display interface.
In the embodiment of the present disclosure, when a target control needs to be extended in an application, it is not necessary to update the application. A virtual robot having the target control is simply selected from the robot display interface, and the virtual account corresponding to the virtual robot is added to the session, so that the target control can be added to the session interface of the application by means of the virtual robot. This improves the flexibility of extending target controls in the application. Moreover, developers only need to develop the virtual robot and do not need to re-develop the application with the target control, which reduces the extension cost.
It should be noted that, when the control adding apparatus provided in the above embodiments adds a control, the division into the functional modules described above is merely illustrative. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the electronic device may be divided into different functional modules to complete all or part of the functions described above. In addition, the control adding apparatus and the control adding method provided in the above embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which is not repeated here.
In an exemplary embodiment, an electronic device is also provided, including one or more processors and a volatile or non-volatile memory for storing instructions executable by the one or more processors, where the one or more processors are configured to execute the instructions to implement the control adding method in the above embodiments.
Optionally, the electronic device is provided as a terminal. Fig. 17 shows a block diagram of a terminal 1700 according to an exemplary embodiment of the present application. The terminal 1700 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
The terminal 1700 includes: a processor 1701 and a memory 1702.
The processor 1701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1701 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1702 may include one or more computer-readable storage media, which may be non-transitory. The memory 1702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1702 is used to store at least one program code for execution by the processor 1701 to implement the control addition method provided by the method embodiments of the present application.
In some embodiments, terminal 1700 may also optionally include: a peripheral interface 1703 and at least one peripheral. The processor 1701, memory 1702 and peripheral interface 1703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1703 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuit 1704, display screen 1705, camera assembly 1706, audio circuit 1707, positioning assembly 1708, and power supply 1709.
The peripheral interface 1703 may be used to connect at least one I/O (Input/Output) related peripheral to the processor 1701 and the memory 1702. In some embodiments, the processor 1701, the memory 1702, and the peripheral interface 1703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1701, the memory 1702, and the peripheral interface 1703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1704 communicates with communication networks and other communication devices via electromagnetic signals. It converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1704 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1704 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1705 is a touch display screen, it also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 1701 as a control signal for processing. In this case, the display screen 1705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1705, disposed on the front panel of the terminal 1700; in other embodiments, there may be at least two display screens 1705, each disposed on a different surface of the terminal 1700 or in a folded design; in still other embodiments, the display screen 1705 may be a flexible display screen disposed on a curved or folded surface of the terminal 1700. Furthermore, the display screen 1705 may be arranged as a non-rectangular irregular figure, that is, an irregularly-shaped screen. The display screen 1705 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1706 is used to capture images or video. Optionally, camera assembly 1706 includes a front camera and a rear camera. The front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1707 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electric signals, and input the electric signals to the processor 1701 for processing, or to the radio frequency circuit 1704 for voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1700. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electric signals from the processor 1701 or the radio frequency circuit 1704 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert an electric signal into sound waves audible to humans, but also convert an electric signal into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1707 may also include a headphone jack.
The positioning component 1708 is used to locate the current geographic location of terminal 1700 to implement navigation or LBS (Location Based Service). The positioning component 1708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1709 is used to supply power to the various components in terminal 1700. The power supply 1709 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 1709 includes a rechargeable battery, the battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, terminal 1700 also includes one or more sensors 1710. The one or more sensors 1710 include, but are not limited to: acceleration sensor 1711, gyro sensor 1712, pressure sensor 1713, fingerprint sensor 1714, optical sensor 1715, and proximity sensor 1716.
The acceleration sensor 1711 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with terminal 1700. For example, the acceleration sensor 1711 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1701 may control the display screen 1705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1711. The acceleration sensor 1711 may also be used to collect motion data for games or users.
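The landscape/portrait decision described above can be illustrated with a minimal sketch. This is a hypothetical example, not taken from the patent: the function name and the simple axis comparison are illustrative assumptions about how gravity components might drive the choice of view.

```python
def choose_orientation(gx: float, gy: float) -> str:
    """Pick a UI orientation from gravity components (m/s^2) on the
    device's x (short edge) and y (long edge) axes, as an acceleration
    sensor like 1711 might report them."""
    # When gravity acts mostly along the long edge, the device is upright.
    if abs(gy) >= abs(gx):
        return "portrait"
    return "landscape"

# Device held upright: gravity lies almost entirely on the y axis.
print(choose_orientation(0.5, 9.7))   # portrait
# Device turned on its side: gravity shifts onto the x axis.
print(choose_orientation(9.7, 0.5))   # landscape
```

A real implementation would also debounce the signal so the UI does not flip when the device is held near 45 degrees.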
The gyro sensor 1712 may detect the body orientation and rotation angle of terminal 1700, and may cooperate with the acceleration sensor 1711 to collect the user's 3D motion on terminal 1700. Based on the data collected by the gyro sensor 1712, the processor 1701 may implement the following functions: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1713 may be disposed on the side frame of terminal 1700 and/or beneath the display screen 1705. When the pressure sensor 1713 is disposed on the side frame of terminal 1700, it can detect the user's grip signal on the terminal, and the processor 1701 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1713. When the pressure sensor 1713 is disposed beneath the display screen 1705, the processor 1701 controls operability controls on the UI according to the user's pressure operation on the display screen 1705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
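The left/right-hand recognition mentioned above can be sketched as follows. This is purely illustrative: the patent does not specify the heuristic, so the assumption here (a right-handed grip presses the palm harder against the left frame) and all names are hypothetical.

```python
def recognize_grip(left_pressure: float, right_pressure: float) -> str:
    """Guess the holding hand from side-frame pressure readings, as the
    processor might do with data from a sensor like 1713.
    Assumption (illustrative only): a right-handed grip presses the
    palm against the left frame harder than the fingertips press the
    right frame."""
    if left_pressure > right_pressure:
        return "right-hand"
    if right_pressure > left_pressure:
        return "left-hand"
    return "unknown"

print(recognize_grip(3.2, 1.1))  # right-hand
print(recognize_grip(1.1, 3.2))  # left-hand
```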
The fingerprint sensor 1714 is used to collect the user's fingerprint, and the processor 1701 identifies the user's identity based on the fingerprint collected by the fingerprint sensor 1714 (alternatively, the fingerprint sensor 1714 identifies the user's identity based on the collected fingerprint). Upon identifying the user's identity as a trusted identity, the processor 1701 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and so on. The fingerprint sensor 1714 may be disposed on the front, back, or side of terminal 1700. When a physical button or vendor logo is provided on terminal 1700, the fingerprint sensor 1714 may be integrated with the physical button or vendor logo.
The optical sensor 1715 is used to collect the ambient light intensity. In one embodiment, the processor 1701 may control the display brightness of the display screen 1705 based on the ambient light intensity collected by the optical sensor 1715. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1705 is increased; when the ambient light intensity is low, the display brightness of the display screen 1705 is reduced. In another embodiment, the processor 1701 may also dynamically adjust the shooting parameters of the camera assembly 1706 according to the ambient light intensity collected by the optical sensor 1715.
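The brightness control described for the optical sensor can be sketched with a simple mapping. This is a hypothetical model, not the patent's implementation: the logarithmic scaling, the saturation point, and the brightness range are illustrative assumptions.

```python
import math

def display_brightness(lux: float, min_b: int = 10, max_b: int = 255) -> int:
    """Map ambient light intensity (lux) to a display brightness level,
    following the behavior described for optical sensor 1715: brighter
    surroundings yield a brighter screen. Log scaling is assumed here
    because perceived brightness is roughly logarithmic in intensity."""
    if lux <= 1:
        return min_b
    # Saturate around 10,000 lux (roughly bright daylight).
    scale = min(math.log10(lux) / 4.0, 1.0)
    return int(min_b + scale * (max_b - min_b))

print(display_brightness(0.5))      # dim room -> minimum brightness
print(display_brightness(100000))   # direct sunlight -> maximum brightness
```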
The proximity sensor 1716, also known as a distance sensor, is typically disposed on the front panel of terminal 1700. The proximity sensor 1716 is used to collect the distance between the user and the front face of terminal 1700. In one embodiment, when the proximity sensor 1716 detects that the distance between the user and the front face of terminal 1700 is gradually decreasing, the processor 1701 controls the display screen 1705 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1716 detects that the distance between the user and the front face of terminal 1700 is gradually increasing, the processor 1701 controls the display screen 1705 to switch from the off-screen state to the bright-screen state.
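The proximity behavior above amounts to a small state machine over successive distance readings. The following sketch is illustrative only; the function name, the sample-window approach, and the state labels are assumptions, not the patent's implementation.

```python
def screen_state(samples: list, current: str) -> str:
    """Decide the screen state from successive proximity readings (cm),
    per the behavior described for proximity sensor 1716: a steadily
    shrinking distance turns the screen off, a steadily growing one
    turns it back on; otherwise the state is unchanged."""
    if len(samples) < 2:
        return current
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    if all(d < 0 for d in deltas):   # user approaching the front face
        return "off"
    if all(d > 0 for d in deltas):   # user moving away
        return "bright"
    return current

print(screen_state([8.0, 5.0, 2.0], "bright"))  # off
print(screen_state([2.0, 5.0, 9.0], "off"))     # bright
```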
Those skilled in the art will appreciate that the architecture shown in fig. 17 is not intended to be limiting with respect to terminal 1700, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
Optionally, the electronic device is provided as a server. Fig. 18 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1800 may vary considerably depending on configuration or performance, and may include one or more processors (CPUs) 1801 and one or more memories 1802, where the memory 1802 stores at least one program code, and the at least one program code is loaded and executed by the processor 1801 to implement the control adding method provided by each of the method embodiments described above. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may further include other components for implementing device functions, which are not described in detail here.
In an exemplary embodiment, there is also provided a computer-readable storage medium, such as a memory including program code executable by a processor in an electronic device to perform the control adding method in the above embodiments. For example, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements the control adding method in the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A control adding method is characterized by comprising the following steps:
displaying a robot adding control based on a session interface of any session;
responding to a trigger operation on the robot adding control, and displaying a robot display interface, wherein the robot display interface comprises at least one virtual robot provided by a third-party server;
responding to selection operation of any virtual robot in the robot display interface, and adding a virtual account corresponding to the virtual robot into the conversation, wherein the virtual robot has a corresponding target control and a corresponding target operation;
and adding the target control in the session interface, wherein the target control is used for triggering and executing the target operation.
2. The control adding method according to claim 1, wherein the adding of the target control in the session interface comprises:
adding the target control in a menu corresponding to the session message in the session interface;
the control adding method further comprises the following steps: responding to the menu operation of calling any conversation message, and displaying a menu corresponding to the conversation message in the conversation interface, wherein the menu comprises the target control.
3. The control adding method according to claim 2, wherein the target operation includes an operation of uploading a session message corresponding to the target control to the third-party server, and after the menu corresponding to the session message is displayed in the session interface, the control adding method further includes:
responding to the trigger operation of the target control, and calling the virtual robot to read the session message corresponding to the target control;
and sending the session message to the third-party server.
4. The control adding method according to claim 2, wherein the target operation includes an operation of sending a task creation request to the third-party server, and after the menu corresponding to the session message is displayed in the session interface, the control adding method further includes:
responding to the triggering operation of the target control, and calling the virtual robot to read the session message and the current login account corresponding to the target control;
and sending a task creating request to the third-party server, wherein the task creating request carries the session message and the current login account, and the third-party server is used for creating a target task with the session message as task content for the current login account.
5. The control adding method according to claim 4, wherein after the virtual robot is invoked to read the session message and the current login account corresponding to the target control, the control adding method further comprises:
displaying a task creation interface, wherein the task creation interface comprises a session message corresponding to the target control;
acquiring input task information based on the task creation interface;
and the task creating request sent to the third-party server also carries the task information, and the third-party server is used for creating a target task which takes the session message as task content and contains the task information for the current login account.
6. The control adding method according to claim 1, wherein after the virtual account corresponding to the virtual robot is added to the session, the control adding method further comprises:
responding to a session message which is issued in the session and marks the virtual account as a receiving account, and calling the virtual robot to determine a reply message corresponding to the session message;
and issuing the reply message in the session by taking the virtual account as an issuing account.
7. The control addition method of claim 6, wherein said invoking the virtual robot to determine a reply message corresponding to the conversation message comprises:
calling the virtual robot to identify instruction keywords from the session message;
calling the virtual robot to determine corpus information matched with the instruction keywords;
and generating the reply message based on the corpus information.
8. A control adding device is characterized by comprising:
a session interface display unit configured to display a robot adding control based on a session interface of any session;
a robot display unit configured to display a robot display interface in response to a trigger operation on the robot adding control, wherein the robot display interface comprises at least one virtual robot provided by a third-party server;
a robot adding unit configured to, in response to a selection operation on any virtual robot in the robot display interface, add a virtual account corresponding to the virtual robot to the session, wherein the virtual robot has a corresponding target control and a corresponding target operation;
and the target control adding unit is configured to add the target control in the session interface, and the target control is used for triggering and executing the target operation.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
volatile or non-volatile memory for storing the one or more processor-executable instructions;
wherein the one or more processors are configured to perform the control addition method of any of claims 1-7.
10. A computer-readable storage medium having instructions stored thereon which, when executed by a processor of an electronic device, enable the electronic device to perform the control adding method of any one of claims 1-7.
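The flow of claims 1 and 7 can be sketched as a minimal Python model. This is an illustration only: every class, function, and data name is hypothetical, and the corpus matching of claim 7 is reduced to a dictionary lookup standing in for the third-party server.

```python
class Session:
    """Minimal model of the session in claim 1: member accounts plus
    the controls shown in the session interface."""
    def __init__(self):
        self.members = []
        self.controls = []

# Hypothetical corpus, standing in for the third-party server's data.
CORPUS = {"weather": "It is sunny today.", "task": "Task created."}

def add_robot(session: Session, robot: dict) -> None:
    """Claim 1: add the selected virtual robot's account to the session
    and add its target control to the session interface."""
    session.members.append(robot["account"])
    session.controls.append(robot["target_control"])

def reply(message: str):
    """Claim 7: identify an instruction keyword in the session message,
    match corpus information, and generate the reply message."""
    for keyword, corpus_info in CORPUS.items():
        if keyword in message:
            return corpus_info
    return None

session = Session()
add_robot(session, {"account": "bot-1", "target_control": "create-task"})
print(session.controls)             # ['create-task']
print(reply("how is the weather"))  # It is sunny today.
```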
CN202110399768.0A 2021-04-14 2021-04-14 Control adding method, device, equipment and storage medium Pending CN113190307A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110399768.0A CN113190307A (en) 2021-04-14 2021-04-14 Control adding method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113190307A true CN113190307A (en) 2021-07-30

Family

ID=76975788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110399768.0A Pending CN113190307A (en) 2021-04-14 2021-04-14 Control adding method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113190307A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109639828A (en) * 2019-01-15 2019-04-16 腾讯科技(深圳)有限公司 Conversation message treating method and apparatus
CN109691034A (en) * 2016-09-20 2019-04-26 谷歌有限责任公司 Robot interactive
CN112231463A (en) * 2020-11-10 2021-01-15 腾讯科技(深圳)有限公司 Session display method and device, computer equipment and storage medium
CN112287262A (en) * 2020-10-29 2021-01-29 腾讯科技(深圳)有限公司 Session display method and device, computer equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114495226A (en) * 2022-01-25 2022-05-13 成都鼎桥通信技术有限公司 Identity identification method, device and equipment based on wireless law enforcement recorder
CN114495226B (en) * 2022-01-25 2024-03-22 成都鼎桥通信技术有限公司 Identity recognition method, device and equipment based on wireless law enforcement recorder
CN115334027A (en) * 2022-08-10 2022-11-11 北京字跳网络技术有限公司 Information processing method, device, electronic equipment and storage medium
CN115334027B (en) * 2022-08-10 2024-04-16 北京字跳网络技术有限公司 Information processing method, apparatus, electronic device and storage medium

Similar Documents

Publication Publication Date Title
CN113411680B (en) Multimedia resource playing method, device, terminal and storage medium
CN110377195B (en) Method and device for displaying interaction function
CN110932963B (en) Multimedia resource sharing method, system, device, terminal, server and medium
CN110377200B (en) Shared data generation method and device and storage medium
CN111597455A (en) Social relationship establishing method and device, electronic equipment and storage medium
CN112181573A (en) Media resource display method, device, terminal, server and storage medium
CN110572716A (en) Multimedia data playing method, device and storage medium
CN112764607A (en) Timing message processing method, device, terminal and storage medium
CN112052354A (en) Video recommendation method, video display method and device and computer equipment
CN111031391A (en) Video dubbing method, device, server, terminal and storage medium
CN111459466B (en) Code generation method, device, equipment and storage medium
CN111628925A (en) Song interaction method and device, terminal and storage medium
CN109547847B (en) Method and device for adding video information and computer readable storage medium
CN113206781B (en) Client control method, device, equipment and storage medium
CN112764600B (en) Resource processing method, device, storage medium and computer equipment
CN114168369A (en) Log display method, device, equipment and storage medium
CN113190307A (en) Control adding method, device, equipment and storage medium
CN113467663B (en) Interface configuration method, device, computer equipment and medium
CN113485596B (en) Virtual model processing method and device, electronic equipment and storage medium
CN112311661B (en) Message processing method, device, equipment and storage medium
CN113204724A (en) Method and device for creating interactive information, electronic equipment and storage medium
JP2022540736A (en) CHARACTER RECOMMENDATION METHOD, CHARACTER RECOMMENDATION DEVICE, COMPUTER AND PROGRAM
CN111245629A (en) Conference control method, device, equipment and storage medium
CN113204302B (en) Virtual robot-based operation method, device, equipment and storage medium
CN112311652A (en) Message sending method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination