CN114338577A - Information processing method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN114338577A (application No. CN202011087496.2A)
- Authority
- CN
- China
- Prior art keywords
- session
- conversation
- target user
- information
- displaying
- Prior art date
- Legal status: Granted (the status is an assumption and not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Information Transfer Between Computers (AREA)
Abstract
The embodiments of the present application disclose an information processing method and apparatus. The method includes: displaying a session interface corresponding to a multi-party session, where the multi-party session is formed by a plurality of users joining a session group; displaying, on the session interface, a session image corresponding to a target user who is inputting session information in the multi-party session, where the session image is a virtual object identifying the target user; and displaying, beside the session image corresponding to the target user, a type identifier corresponding to the information type to which the session information input by the target user belongs. With the technical scheme of the embodiments, each user can learn in real time, from the session images and type identifiers displayed on the session interface, which users are inputting session information, solving the prior-art problem that the real-time communication state of each user in a group-chat scenario cannot be obtained.
Description
Technical Field
The present application relates to the field of data processing technologies, and in particular, to an information processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Network communication differs from face-to-face communication in that the communication state of each party cannot be known in real time, which makes the network communication experience poor.
Currently, in a network communication scenario based on C2C (Customer to Customer, i.e., communication between users via the Internet), when either party is inputting chat content, the user interface of the receiving party displays the words "the other party is typing…", so that the receiving party knows in real time that the other party is inputting chat content. In a multi-party group chat scenario, however, because many people participate in the communication, the communication state of each party cannot be obtained in real time.
Disclosure of Invention
The embodiment of the application provides an information processing method and device, electronic equipment and a computer readable storage medium.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of an embodiment of the present application, there is provided an information processing method including: displaying a session interface corresponding to a multi-party session, wherein the multi-party session is formed based on joining of a plurality of users into a session group; displaying a session image corresponding to a target user who is inputting session information in the multi-party session on the session interface, wherein the session image is a virtual object for identifying the target user; and displaying a type identifier corresponding to the information type on the side of the session image corresponding to the target user according to the information type to which the session information input by the target user belongs.
According to an aspect of an embodiment of the present application, there is provided another information processing method including: monitoring the session state corresponding to each user terminal in a multi-party session to obtain multi-party session information, wherein the multi-party session information is used for representing that a target user terminal in the multi-party session is in a session state of inputting session information; based on the multi-party conversation information, sending a conversation control instruction to each user terminal, so that each user terminal responds to the conversation control instruction, displaying a conversation image corresponding to the target user terminal in a conversation interface corresponding to the multi-party conversation, and displaying a type identifier corresponding to the information type in the conversation interface according to the information type of the conversation information input by the target user terminal, wherein the conversation image is a virtual object for identifying a target user logged in the target user terminal.
According to an aspect of an embodiment of the present application, there is provided an information processing apparatus including: the conversation interface display module is configured to display a conversation interface corresponding to a multi-party conversation, and the multi-party conversation is formed by adding a plurality of users into a conversation group; a conversation image display module configured to display a conversation image corresponding to a target user who is inputting conversation information in the multi-party conversation on the conversation interface, wherein the conversation image is a virtual object for identifying the target user; and the type identifier display module is configured to display the type identifier corresponding to the information type on the side of the session image corresponding to the target user according to the information type to which the session information input by the target user belongs.
According to an aspect of an embodiment of the present application, there is provided another information processing apparatus including: a session state monitoring module configured to monitor a session state corresponding to each user terminal in a multi-party session to obtain multi-party session information, where the multi-party session information is used to represent that a target user terminal in the multi-party session is in a session state in which session information is being input; a session display control module configured to send a session control instruction to each user terminal based on the multi-party session information, so that each user terminal responds to the session control instruction, displays a session image corresponding to the target user terminal in a session interface corresponding to the multi-party session, and displays a type identifier corresponding to the information type in the session interface according to the information type to which the session information currently input by the target user terminal belongs, where the session image is a virtual object for identifying a target user logged in the target user terminal.
According to an aspect of the embodiments of the present application, there is provided an electronic device, including a processor and a memory, where the memory stores computer-readable instructions, and the computer-readable instructions, when executed by the processor, implement the information processing method as described above.
According to an aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to execute an information processing method as described above.
According to an aspect of embodiments herein, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the information processing method provided in the above-mentioned various optional embodiments.
In the technical scheme provided by the embodiments of the present application, after the session interface corresponding to the multi-party session is displayed, the session image corresponding to the target user who is inputting session information in the multi-party session is displayed in the session interface, and the type identifier corresponding to the information type of the session information being input is displayed beside that session image. For each user who has joined the multi-party session, the users who are inputting session information can thus be known in real time through the session images and type identifiers displayed on the session interface. This makes acquiring the communication state of all parties in real time engaging, and can greatly improve the user experience in the multi-party session.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 is a schematic illustration of an implementation environment to which the present application relates;
FIG. 2 is a flow diagram illustrating an information processing method according to an exemplary embodiment;
FIG. 3 is a diagram of an exemplary conversation interface;
FIG. 4 is a schematic diagram of another exemplary conversation interface;
FIG. 5 is a schematic diagram of another exemplary conversation interface;
FIG. 6 is a flow diagram for one embodiment of step S130 in the embodiment shown in FIG. 2;
FIG. 7 is a schematic diagram of another exemplary conversation interface;
FIG. 8 is a flowchart of step S133 in the embodiment shown in FIG. 6 for one embodiment;
FIG. 9 is a flowchart of step S133 in the embodiment of FIG. 6 in another embodiment;
FIG. 10 is a schematic illustration of another exemplary conversation interface;
fig. 11 is a flowchart illustrating an information processing method according to another exemplary embodiment;
FIG. 12 is a timing diagram illustrating a multi-party conversation in accordance with an illustrative embodiment;
fig. 13 is a block diagram of an information processing apparatus shown in an exemplary embodiment;
fig. 14 is a block diagram of an information processing apparatus shown in another exemplary embodiment;
fig. 15 is a schematic diagram illustrating a hardware structure of an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It should also be noted that "a plurality" in this application means two or more. "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.
Referring to fig. 1, fig. 1 is a schematic diagram of an implementation environment related to the present application.
The implementation environment is embodied as a multi-party conversation system, which includes a plurality of user terminals 100 and a conversation server 200, wherein the plurality of user terminals 100 and the conversation server 200 communicate with each other through a wired or wireless network.
A plurality of user terminals 100 may join the same session group to form a multi-party session in which the users participate jointly. Each user terminal 100 displays a session interface, in which a user can trigger a user operation to perform a session operation in the multi-party session, for example, inputting session information into the session group.
When any user terminal 100 is inputting session information into the session group, the session interface of every user terminal 100 associated with that session group displays the session image of the user corresponding to the inputting terminal, together with a type identifier corresponding to the information type of the session information being input. Based on this process, each user participating in the multi-party session can learn the real-time communication state of every party from the session images and type identifiers displayed on his or her own user terminal 100.
The session server 200 is used for providing data services for a plurality of user terminals 100 to support the multi-party session process between the plurality of user terminals 100.
It should be noted that the user terminal 100 may be a terminal device such as a smartphone, tablet computer, desktop computer, or notebook computer, which is not limited herein. The session server 200 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a CDN (Content Delivery Network), and big data and artificial intelligence platforms, which is not limited herein either.
Fig. 2 is a flow chart illustrating an information processing method according to an example embodiment. The method can be applied to the multi-party conversation system shown in fig. 1, for example, each user terminal 100 in the multi-party conversation system shown in fig. 1 is specifically executed, so that each user participating in the multi-party conversation can know the communication state of other users in real time, and further the user experience of the user in the multi-party conversation is improved.
As shown in fig. 2, the information processing method at least includes steps S110 to S150, which are described in detail as follows:
step S110, displaying a session interface corresponding to a multi-party session, wherein the multi-party session is formed by joining a plurality of users into a session group.
It should first be noted that, in the present embodiment, "a plurality" refers to at least two, so the multi-party session referred to in the present embodiment is formed based on two or more users joining the session group.
Since the multi-party session is formed based on a plurality of users joining a session group, the multi-party session may be understood as a session flow performed by the plurality of users in the session group, or as a group chat session performed between the plurality of users. If any user in the multi-party conversation inputs the conversation information into the conversation group, all users in the conversation group are taken as the receiving party of the conversation information, so that the conversation carried out by any user is shared among all users participating in the multi-party conversation.
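Purely as an illustration (the patent contains no code, and all names here are invented for the sketch), the shared-delivery behavior of a session group described above, in which every user in the group acts as a receiver of any message sent into it, can be modeled in a few lines of Python:

```python
from dataclasses import dataclass, field

@dataclass
class SessionGroup:
    """A session group in which every message sent by any member is
    delivered to all members, so the session carried out by any user
    is shared among all users participating in the multi-party session."""
    members: set = field(default_factory=set)
    inboxes: dict = field(default_factory=dict)

    def join(self, user: str) -> None:
        self.members.add(user)
        self.inboxes.setdefault(user, [])

    def send(self, sender: str, message: str) -> None:
        # Every user in the group, including the sender, receives the message.
        for user in self.members:
            self.inboxes[user].append((sender, message))
```

The sender also receiving its own message mirrors the description above, where the input session information is correspondingly displayed in the sender's own session interface.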
In the process of carrying out multi-party conversation among a plurality of users, a user terminal where each user is located displays a multi-party conversation interface, the conversation interface is provided with an entrance for inputting the conversation information into the conversation group, and the user can trigger operations such as input and sending of the conversation information in the entrance so as to send the conversation information into the conversation group.
For example, FIG. 3 is a diagram of an exemplary session interface, where FIG. 3A illustrates the interface state before a user triggers the input information box in the session interface, and FIG. 3B illustrates the interface state after the user triggers it. As shown in FIGS. 3A and 3B, the session input box is the entry through which the user inputs session information into the session group. When the user needs to input session information, tapping the session input box brings up the keyboard input box in the session interface; the session information to be sent can then be typed into the session input box via the keyboard input box; and when the user has finished inputting, tapping the "send" button sends the session information to the session group.
The session interface corresponding to the multi-party session also comprises a session display area, and the session display area is used for displaying the session information input by each user in the session group. For the user who sends the session information, after the user sends the session information to the session group, the input session information is correspondingly displayed in the session interface where the user is located.
Step S130, displaying a conversation image corresponding to a target user who is inputting conversation information in the multi-party conversation on a conversation interface, wherein the conversation image is a virtual object for identifying the target user.
Since the user terminals where the users participating in the multi-party conversation are located communicate with the session server, the session server can acquire the state of each user terminal, for example, the state of the user terminal may include whether the user logged in the user terminal is online, whether the user logged in the user terminal is inputting the conversation information in the conversation group, and the like.
Therefore, each user terminal associated with the multi-party session can detect the target user inputting the session information in the multi-party session by respectively communicating with the session server. For example, the user terminal may actively send a query request to the session server to request the session server to return information of a target user, which is currently inputting session information, to the user terminal, thereby allowing the user terminal to detect the target user. Wherein the sending of the query request by the user terminal to the session server may be periodic.
Or in other embodiments, the session server may periodically push, to each user terminal in communication with the session server, information of a target user currently inputting session information into the session group, so that the user terminal may detect the target user, which is not limited in this embodiment.
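A minimal sketch, assuming an in-memory server and invented method names, of the typing-state bookkeeping that either the query variant or the push variant above relies on:

```python
class SessionServer:
    """Tracks, for a multi-party session, which target users are
    currently inputting session information and the information type
    of what they are inputting."""

    def __init__(self):
        self._typing = {}  # user -> information type being input

    def start_input(self, user: str, info_type: str) -> None:
        self._typing[user] = info_type

    def stop_input(self, user: str) -> None:
        self._typing.pop(user, None)

    def query_typing(self) -> dict:
        # Answers a user terminal's periodic query (or serves as the
        # payload of a periodic server push) listing the target users
        # currently inputting session information.
        return dict(self._typing)
```

Whether terminals poll `query_typing` or the server pushes its result is exactly the choice between the two embodiments described above; the state kept on the server is the same either way.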
If a target user inputting the session information in the multi-party session is detected, the session image of the target user is displayed in the session interface, so that the user currently inputting the session information into the session group is prompted based on the session image displayed in the session interface.
Compared with prompting, through text alone, that a user is inputting session information into the session group, displaying the session image of the target user who is currently inputting session information lets every user know the communication state of each user in the multi-party session in real time, conveys the active atmosphere in the session group more intuitively, and solves the problem that the real-time communication state of users cannot be known in a group-chat scenario.
It should be noted that all users participating in the multi-party session in the present embodiment should have corresponding session images. A session image may be a user image that the user has set in the application where the multi-party session takes place, for example a user avatar, or a user image that the user has set through a specific application associated with the application where the multi-party session takes place. If a user participating in the multi-party session has not set a corresponding session image, a default session image may be configured for that user, which is not limited in this embodiment.
Step S150, according to the information type of the conversation information input by the target user, displaying the type identification corresponding to the information type on the side of the conversation image corresponding to the target user.
First, in this embodiment, the session group may support session information input of any information type, for example, a user may input session information such as text, voice, picture, and expression into the session group, and the information types to which the session information belongs are not listed one by one here.
The session server can also know the information type of the session information input by the target user, and send the information type of the session information input by the target user to the user terminal while sending the information of the target user currently inputting the session information to the user terminal, so that the user terminal displays the type identifier corresponding to the information type in the session interface according to the information type of the session information input by the target user.
For example, as shown in fig. 4, if there are 3 target users currently inputting session information in the session group, and the session information input by 2 of them is of the text type, a session image corresponding to each target user is displayed in the session interface, and a type identifier corresponding to text is displayed on the upper right side of each of those session images. Through the session images and type identifiers displayed in the user interface, the users in the multi-party session can accurately know each target user currently inputting session information into the session group and the specific information type being input, so that the real-time communication state of the group chat is presented more truthfully and the user experience is further improved. It should be noted that the display position of the type identifier corresponding to a target user may be any position beside the session image corresponding to that target user, for example on the upper left side, the left side, or the right side of the session image, which is not limited by this embodiment.
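The mapping from information type to type identifier might be sketched as follows; the type names and labels are placeholders invented for illustration, not taken from the patent:

```python
# Hypothetical mapping from information type to the type identifier shown
# beside a target user's session image.
TYPE_IDENTIFIERS = {
    "text": "keyboard",
    "voice": "microphone",
    "picture": "camera",
    "expression": "smiley",
}

def render_indicator(user: str, info_type: str) -> str:
    """Builds the on-screen hint: the session image (represented here by
    the user's name) plus the type identifier displayed beside it."""
    badge = TYPE_IDENTIFIERS.get(info_type, "typing")
    return f"[{user}] ({badge})"
```

An unknown information type falls back to a generic "typing" identifier, so a client can still show that the target user is inputting something even if it cannot classify what.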
When the target user who is inputting session information into the session group is the end user logged in on the user terminal, the session image corresponding to that end user and the corresponding type identifier are, based on the method provided by this embodiment, also displayed in the session interface. For example, as shown in fig. 5, the user terminal may display the end user's session image and type identifier ahead of those of the other target users, which enhances the end user's sense of participation and thereby improves the end user's group chat experience. At the same time, from a psychological perspective, the method provided by this embodiment easily stimulates the enthusiasm of every member of the session group during the group chat and establishes a psychological expectation, so that each member waits more patiently for the session information being input by other users, which stimulates the group chat activity of every member in the session group.
In another exemplary embodiment, as shown in fig. 6, displaying a conversation image corresponding to a target user who is inputting conversation information in a multi-party conversation in a conversation interface may include the following steps:
in step S131, the number of detected target users is determined.
It should first be explained that, in this embodiment, the session image display positions configured in the user interface for displaying the session images corresponding to target users are limited in number, while a session group usually contains a large number of users, so the number of target users simultaneously inputting session information into the session group may well be large. How to display the session images corresponding to the target users in a limited number of display positions is therefore the problem to be solved when displaying these session images in the session interface.
In order to solve this problem, this embodiment first determines the number of target users and compares it with the number of session image display positions configured in the session interface. If the number of target users is smaller than the number of display positions, there are enough display positions in the user interface for the session images of the target users, so the session images corresponding to the detected target users are displayed at the display positions in sequence.
If the number of the target users is larger than the number of the conversation image display positions configured in the conversation interface, the content described in step S133 is specifically executed.
It should be noted that the session image display positions provided in the user interface may be located above the information input box and move together with it in the session interface. For example, as shown in fig. 7, when the information input box is triggered by a user, it moves above the keyboard input box; the session image display positions move with it and remain above the information input box after the move.
It should be further noted that, as shown in fig. 7B, when the end user of a user terminal is inputting session information, the user terminal may detect that this end user is a target user, and accordingly display the session image corresponding to the end user at a session image display position.
Step S133, if the number of the target users is greater than the number of the session image display positions configured in the session interface, displaying the session images corresponding to the target users at the session image display positions based on the sequence of the time points when the target users trigger the input of the session information.
The time point at which a target user triggers input may be recorded by the session server and transmitted to each user terminal in communication with it. The user terminal sorts the received time points at which the target users triggered input of session information, and then displays the session images corresponding to the target users at the limited number of display positions based on the resulting order.
For example, the priority of each target user may be determined according to the sequence of the time point when each target user triggers to input the session information, and then the session image corresponding to each target user is updated and displayed at the session image display position based on the priority of each target user and the pre-configured session image display policy.
It should be understood that the earlier the time point at which a target user triggered input of session information, the higher that user's priority, and the earlier the corresponding session image should be displayed at a session image display position. The preconfigured session image display policy is a rule constraining how session images are displayed at the display positions; for example, the policy may limit the time a session image is displayed at a display position to a specified duration, or limit the display period of a session image at a display position, such that if the target user stops inputting session information into the session group, the corresponding session image is withdrawn from display. This is not limited here.
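Under the assumption that the trigger time points are available as plain timestamps, the priority ordering and the limited number of display positions described above reduce to a short selection routine (illustrative only; the names are not from the patent):

```python
def select_displayed(trigger_times: dict, num_positions: int) -> list:
    """trigger_times maps each target user to the time point at which
    they triggered input of session information. An earlier trigger
    time means a higher priority, and only num_positions session image
    display positions are available."""
    # Sort users by ascending trigger time, i.e. descending priority.
    by_priority = sorted(trigger_times, key=trigger_times.get)
    return by_priority[:num_positions]
```

Users beyond the first `num_positions` are not dropped in the full scheme; they wait until the display policy frees a position, as described below.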
When an idle session image display position exists, the session image corresponding to another target user is displayed at the idle position based on the priority of each target user and the preconfigured session image display policy. In this way, the session images corresponding to the target users are continuously updated and displayed at the display positions, the session images of the target users participating in the multi-party session are organically presented in the user interface, and the sense of participation and the group chat experience of every user can be improved.
Specifically, when the session image display policy specifies that a session image is displayed at a display position for a specified duration, the time for which the session image corresponding to each target user stays at a display position does not exceed that duration. That is, when the session image of a higher-priority target user has been displayed for the specified duration, its display is terminated and the session image of a lower-priority target user is displayed in its place. This process may include steps S210 to S250 shown in fig. 8, described in detail as follows:
step S210, determining the session image display position in the idle state in the session interface.
First, a session image display position in the idle state is a display position at which no session image is currently shown.
Generally, when a multi-party session is started, all session image display positions arranged on the session interface are idle. During the multi-party session, an idle display position is one whose current session image has exited the display. In either case, an idle session image display position is one at which the session image corresponding to a target user can be displayed.
Step S230, sequentially determining candidate target users according to the priorities of the target users, and displaying the session image corresponding to the candidate target users at the session image display position in the idle state.
A candidate target user is a target user whose session image is waiting to be displayed at a display position, so candidate target users can be determined sequentially according to the priority of each target user. Displaying the session image of a candidate target user at an idle display position ensures that the session images of higher-priority target users are displayed first.
Step S250, if the duration for which the current session image has been displayed at its display position reaches the specified duration, the current session image exits the display, and that display position switches to the idle state.
Because the session image display policy specifies that a session image is displayed at a display position for no longer than the specified duration, the current session image exits the display once it has been shown for that duration, and its display position switches to the idle state. The session image of the next candidate target user that has not yet been displayed is then shown at the idle position according to priority, so that the session images at the display positions are updated continuously.
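The timed-expiry update of steps S210 to S250 can be sketched as a small slot manager. This is a minimal illustration only; the class and method names are assumptions, not from the patent, and the queue passed in is assumed to already be ordered by priority (earliest trigger first).

```python
class SlotManager:
    """Sketch of steps S210-S250: fill idle display positions by priority
    and expire each session image after a specified duration."""

    def __init__(self, num_slots, max_seconds):
        self.max_seconds = max_seconds
        # each slot holds (user, shown_at) or None when idle
        self.slots = [None] * num_slots

    def tick(self, queue, now):
        """queue: target users awaiting display, ordered by priority."""
        # S250: exit session images shown for the specified duration;
        # the freed slot switches to the idle state
        for i, entry in enumerate(self.slots):
            if entry and now - entry[1] >= self.max_seconds:
                self.slots[i] = None
        shown = {e[0] for e in self.slots if e}
        # S210 + S230: fill idle slots with the highest-priority users
        for i, entry in enumerate(self.slots):
            if entry is None:
                for user in queue:
                    if user not in shown:
                        self.slots[i] = (user, now)
                        shown.add(user)
                        break
```

With two display positions and a 5-second limit, users "a" and "b" occupy the slots first; once expired, the next candidates take their places.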
Or, in another embodiment, after step S250, the next session image display position after the one whose session image has just exited is determined based on the arrangement of the display positions configured in the session interface, and the session image displayed at that next position is moved forward to the preceding position.
That is, after a session image exits a display position, the session images displayed at the positions behind it shift forward to fill the gap, so that the idle display position is always the last one. When the session image of another candidate target user needs to be displayed, it is simply shown at that last position, which is convenient.
In addition, when session images are updated in this way, they are displayed in order along the arrangement of the display positions: the session image at the first position always exits the display first, and the most recently displayed session image always occupies the last position. This further improves the visual effect of updating session images at the display positions and the user experience in the group chat scene.
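The position-complementing display described above amounts to compacting the slot list so the idle position is always last. A minimal sketch, with an illustrative function name:

```python
def compact(slots):
    """After a session image exits, shift the images behind it forward
    so that any idle (None) positions end up at the tail."""
    shown = [s for s in slots if s is not None]
    return shown + [None] * (len(slots) - len(shown))
```

For example, if the images at positions 2 and 4 have exited, the remaining images move forward and the last two positions become idle.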
In the embodiment where the session image display policy specifies that the corresponding session image exits the display after the target user terminates inputting session information, the process of updating the display to the session image of a lower-priority target user may include steps S310 to S350 shown in fig. 9, which are described in detail as follows:
step S310, determining a session image display position in an idle state in a session interface;
step S330, sequentially determining candidate target users according to the priority of each target user, and displaying the conversation image corresponding to the candidate target users on the conversation image display position in an idle state;
step S350, if it is detected that the target user corresponding to the current session image displayed at a display position has terminated inputting session information, the current session image exits the display, and that display position switches to the idle state.
The processes described in steps S310 and S330 are the same as the processes described in steps S210 and S230 in the embodiment shown in fig. 8, and therefore the processes described in steps S310 and S330 are not described again here.
In step S350, since the session image display policy specifies that a session image exits the display after its target user terminates inputting session information, the user terminal, upon detecting that the target user corresponding to the current session image has terminated input, exits the display of that session image, and the display position switches to the idle state. Terminating input may mean that the session information input by the target user has been sent to the session group, or that the target user cancelled the information currently being input; this is not limited here.
In this embodiment, the session server maintains, in real time, a session queue corresponding to the multi-party session according to the time points at which target users trigger the input of session information, and periodically sends the maintained queue to each user terminal. After receiving the session queue, a user terminal can determine the priority of each target user from the order of the target users in the queue.
By comparing the currently received session queue with the previously received one, the user terminal can determine which target users have newly started inputting session information and which have terminated input. The session queue maintained in real time by the session server therefore ensures that the user terminal learns the communication state of each user in the multi-party session in time and can display those states on the session interface, so that users obtain the real-time communication state of every participant in the group chat scene.
Or, in another embodiment, the session queue may be maintained in real time by the user terminal itself, based on the time points, sent by the session server, at which target users trigger the input of session information; this is not limited in this embodiment.
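The comparison between consecutive session queues can be sketched as a simple diff. The function name is illustrative; the queues are assumed to be lists of user identifiers in queue order:

```python
def diff_queues(prev, curr):
    """Compare the previously received session queue with the current one:
    users only in curr have newly started inputting session information;
    users only in prev have terminated input."""
    prev_set, curr_set = set(prev), set(curr)
    added = [u for u in curr if u not in prev_set]       # new inputters, in queue order
    terminated = [u for u in prev if u not in curr_set]  # stopped inputting
    return added, terminated
```

For example, if the previous queue was `["A", "B"]` and the current one is `["B", "C"]`, user C has newly started inputting and user A has terminated input.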
In addition, the two session image display policies above can be combined: a session image shown in the display area then both may not be displayed longer than the specified duration and must exit the display when its target user terminates inputting session information. The policies can be selected and configured according to actual requirements.
In another embodiment, the session image display positions provided in the session interface may include a first display position and a second display position. The first display position displays the session image of the terminal user, so that the terminal user's session image is always resident in the session interface, improving the terminal user's sense of participation; the second display position displays the session images of the target users in the session group other than the terminal user.
Therefore, while displaying session images at the display positions in the order of the time points at which target users triggered input, the user terminal displays the session image at the second display position if the target user is determined to be a user other than the terminal user. For the specific process of displaying these users' session images at the second display position, refer to the foregoing embodiments; it is not repeated here.
If the target user is determined to be the terminal user, whose session image is fixedly displayed at the first display position, the session image of the next target user is displayed at the second display position instead.
It should be noted that if the terminal user is detected to be a target user inputting session information, the type identifier corresponding to the information type of that session information still needs to be displayed, and if the terminal user is detected to have terminated input, the display of that type identifier is cancelled. Therefore, even though the terminal user's session image is fixedly displayed at the first display position, this embodiment can still effectively indicate the communication state of each user in the multi-party session through the display of the type identifiers.
In another embodiment, each user in the multi-party session may correspond to multiple session images. In that case, when a target user inputting session information is detected, displaying the target user's session image in the session interface includes: determining the session image matching the information type of the session information being input, and then displaying the determined session image in the session interface.
That is, in this embodiment, the session image displayed for a target user is selected according to the information type of the session information being input. For example, when the information type is text, the mouth of the target user's session image is closed, and when the information type is voice, the mouth is open. The session image used for each information type may also be designed according to actual user requirements; this is not limited in this embodiment.
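The type-based selection above reduces to a lookup from information type to session image variant. A minimal sketch; the table contents, file names, and type keys are all hypothetical examples, not values from the patent:

```python
def pick_session_image(variants, info_type):
    """Return the session image variant matching the information type of
    the session information being input, falling back to a default."""
    return variants.get(info_type, variants["default"])

# Hypothetical variant table: closed mouth for text input,
# open mouth for voice input, as in the example in the text.
variants = {
    "text": "mouth_closed.png",
    "voice": "mouth_open.png",
    "default": "neutral.png",
}
```

An information type with no dedicated variant simply falls back to the default image.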
In another embodiment, if the multi-party session contains only a small number of users, that is, the session group was formed by a small number of users joining it, then after the session interface corresponding to the multi-party session is displayed, the session images of all users are displayed in it. When a target user inputting session information is detected, that user's session image is already on the session interface, so only the type identifier corresponding to the information type of the session information being input needs to be displayed at the side of that session image. This avoids an inactive communication atmosphere caused by the small number of users in the multi-party session.
FIG. 10 is an exemplary conversation interface diagram for the case where there are only 2 users in a multi-party conversation. As shown in fig. 10, session images corresponding to all users are fixedly displayed in the session interface, and a session image corresponding to an end user may be displayed on the left side or the right side of a session image corresponding to another user, or displayed according to other relative positions, which is not limited in this embodiment.
When the terminal user is inputting session information, the type identifier corresponding to the information type of the input is displayed at the side of the terminal user's session image; when the other user is inputting session information, the corresponding type identifier is displayed at the side of that user's session image. Although the multi-party session contains few users, the combined display of session images and type identifiers presents the real-time communication state of the group chat more vividly, activating the atmosphere of the multi-party session and improving the terminal user's group chat experience.
Fig. 11 is a flowchart illustrating an information processing method according to another exemplary embodiment.
The information processing method may be specifically executed by the session server 200 in the multi-party conversation system shown in fig. 1, and the information processing method may include steps S410 to S430, which are described in detail as follows:
step S410, monitoring a session state corresponding to each user terminal in the multi-party session to obtain multi-party session information, where the multi-party session information is used to represent that a target user terminal in the multi-party session is in a session state in which session information is being input.
As described above, since the user terminal of each user in the multi-party session communicates with the session server, the session server can monitor the session state corresponding to each user terminal in the multi-party session.
For example, the session server may receive first input state information sent by each user terminal in the multi-party session, where the first input state information includes the time point at which a target user terminal triggered the input of session information. Based on the received first input state information, the target user corresponding to that terminal is inserted into a session queue, and the order of target users in the queue corresponds to the order of the time points at which their terminals triggered input.
The session server also receives second input state information sent by a target user terminal in the multi-party session, which indicates that the terminal has terminated inputting session information. Based on the received second input state information, the session server deletes the target user information corresponding to that terminal from the session queue.
Through this process, the session server maintains the session queue in real time, and the maintained queue serves as the multi-party session information, which includes the information that a target user terminal in the multi-party session is in the state of inputting session information.
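The server-side queue maintenance can be sketched as follows. This is an illustration under assumed names (the patent does not prescribe an implementation): `on_first_state` handles first input state information, `on_second_state` handles second input state information.

```python
class SessionQueue:
    """Sketch of the session queue the session server maintains: users
    are inserted ordered by the time point at which their terminal
    triggered input, and removed when the terminal terminates input."""

    def __init__(self):
        self._entries = []  # (trigger_time, user), kept sorted by time

    def on_first_state(self, user, trigger_time):
        # first input state information: insert, keeping trigger order
        if all(u != user for _, u in self._entries):
            self._entries.append((trigger_time, user))
            self._entries.sort(key=lambda e: e[0])

    def on_second_state(self, user):
        # second input state information: user terminated input
        self._entries = [e for e in self._entries if e[1] != user]

    def users(self):
        return [u for _, u in self._entries]
```

Even if state reports arrive out of order, sorting by trigger time keeps the queue order consistent with the order in which input was triggered.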
Step S430, based on the multi-party conversation information, sending a conversation control instruction to each user terminal, so that each user terminal responds to the conversation control instruction, displaying a conversation image corresponding to the target user terminal in a conversation interface corresponding to the multi-party conversation, and displaying a type identifier corresponding to the information type in the conversation interface according to the information type of the conversation information which is input by the target user terminal, wherein the conversation image is a virtual object for identifying the target user logged in the target user terminal.
Based on the multi-party session information, the application server can obtain the communication state of each user in the multi-party session. By sending session control instructions to each user terminal in the multi-party session, the server can thus display and update the communication state of each user in real time on the session interface of every terminal, so that users obtain the real-time communication state of every participant in the group chat scene from the information displayed on the session interface.
Fig. 12 is a timing diagram of a multi-party session according to an exemplary embodiment, in which user A, user B, user C, and an application server exist, and user A, user B, and user C form a session group. As shown in fig. 12, when user A and user B start inputting session information into the session group, their user terminals report their input states to the application server. The application server takes user A and user B as target users, records the communication state of each target user in the session queue it maintains, and issues input events to each user terminal according to certain rules (fig. 12 only shows the application server sending the input event to user C's terminal; the events sent to the terminals of user A and user B are omitted). For example, if the number of target users in the session queue is less than or equal to the number of session image display positions configured in the session interface, the application server directly notifies the user terminals, which then display the session image of each target user in the queue at the configured display positions.
After user A finishes sending the session information or stops inputting it, the application server deletes the information corresponding to user A from the maintained session queue and notifies the user terminals to stop displaying user A's session image.
If the number of target users in the session queue is greater than the number of session image display positions configured in the session interface (for example, 4), the application server waits until the session image of the target user at the head of the queue has been displayed for the specified duration, or until one of the first 4 target users in the queue stops inputting or finishes sending session information, and then notifies the user terminals to display the session images of the remaining target users in queue order, so that the real-time communication state of each user in the multi-party session is shown on the session interface.
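The rule in fig. 12 for a queue longer than the configured display positions amounts to splitting the queue at the slot count. A trivial sketch with an illustrative function name:

```python
def users_to_display(queue, num_slots):
    """Split the session queue into the users whose session images are
    shown now (the first num_slots in the queue) and those that wait
    until a display position becomes idle."""
    return queue[:num_slots], queue[num_slots:]
```

With 4 configured display positions and 5 queued target users, the first 4 are displayed and the fifth waits for an idle position.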
Therefore, based on the method provided by the embodiment, the communication state corresponding to each user in the multi-party session can be displayed and updated in real time in the session interface of the user terminal, so that the user can acquire the real-time communication state of each user in the group chat scene based on the information displayed on the session interface, and the problem that the real-time communication state of each user in the group chat scene cannot be acquired in the prior art is solved.
Fig. 13 is a block diagram of an information processing apparatus shown in an exemplary embodiment. As shown in fig. 13, the information processing apparatus includes:
a session interface display module 510, configured to display a session interface corresponding to a multi-party session, the multi-party session being formed based on a plurality of users joining a session group;
a session image display module 530, configured to display, on the session interface, a session image corresponding to a target user who is inputting session information in the multi-party session, the session image being a virtual object for identifying the target user; and
a type identifier display module 550, configured to display, at the side of the session image corresponding to the target user, a type identifier corresponding to the information type to which the session information currently being input by the target user belongs.
Based on the information processing device provided by the embodiment, the users in the multi-party conversation can acquire the real-time communication states of the users in the group chat scene based on the information displayed on the conversation interface, and the problem that the real-time communication states of the users in the group chat scene cannot be acquired in the prior art is solved.
In another exemplary embodiment, the conversation character display module 530 includes:
a number determination unit configured to determine the number of target users who are inputting session information in the multi-party session; and the sequencing display unit is configured to display the conversation image corresponding to the target user on the conversation image display position based on the sequencing of the time point when the target user triggers the input of the conversation information under the condition that the number of the target users is larger than that of the conversation image display positions arranged in the conversation interface.
In another exemplary embodiment, the sorting display unit includes:
the priority determining subunit is configured to determine the priority of each detected target user, wherein the priority corresponds to the sequence of the time points when the target user triggers the input of the session information; and the conversation image updating and displaying subunit is configured to update and display the conversation image corresponding to each target user at the conversation image display position based on the priority of each target user and a preset conversation image display strategy.
In another exemplary embodiment, the conversation character update display subunit includes:
the first display position determining subunit is configured to determine a session image display position in an idle state in a session interface; the first candidate user display subunit is configured to sequentially determine candidate target users according to the priority of each target user, and display the session image corresponding to the candidate target users at the session image display position in the idle state; and the first session image quitting control subunit is configured to quit displaying the current session image under the condition that the time length for displaying the current session image on the session image display position reaches the specified time length, so that the session image display position quitting displaying the current session image is switched into an idle state.
In another exemplary embodiment, the conversation character update display subunit further includes:
a next presentation position determining subunit, configured to determine, based on the arrangement of the session image display positions provided in the session interface, the next display position after the one whose current session image has exited the display; and a display position padding subunit, configured to move the session image displayed at that next display position forward to the preceding display position.
In another exemplary embodiment, the conversation character update display subunit includes:
the second display position determining subunit is configured to determine a session image display position in an idle state in the session interface; the second candidate user display subunit is configured to sequentially determine candidate target users according to the priority of each target user, and display the session image corresponding to the candidate target users at the session image display position in the idle state; and the second session image quitting control subunit is configured to quit displaying the current session image under the condition that the target user corresponding to the current session image displayed at the session image display position is detected to terminate inputting the session information, so that the session image display position quitting displaying the current session image is switched to an idle state.
In another exemplary embodiment, the priority determining subunit includes:
the conversation queue acquiring subunit is configured to acquire a conversation queue corresponding to the multi-party conversation, and the conversation queue contains the sequence of time points for triggering input of conversation information by a target user in the multi-party conversation; and the priority acquiring subunit is configured to determine the priority of each target user according to the sequence of the time point triggering the input of the session information in the session queue.
In another exemplary embodiment, the session character presentation position includes a first session character presentation position and a second session character presentation position, and a session character corresponding to the end user is displayed in the first session character presentation position; the session character display module 530 further includes:
the target user identity determining unit is configured to display the conversation image corresponding to the target user at the second conversation image display position if the target user is determined to be other users except the terminal user in the process of displaying the conversation image corresponding to the target user at the conversation image display position based on the sequencing of the time point of the target user triggering and inputting the conversation information; and the target user identity display unit is configured to display the conversation image corresponding to the next target user at the second conversation image display position under the condition that the target user is determined to be the terminal user.
In another exemplary embodiment, the conversation character display module 530 includes:
a conversation image determining unit configured to determine a conversation image matching the information type based on the information type to which the conversation information being input by the target user belongs; and the conversation image confirmation display unit is configured to display the determined conversation image in the conversation interface.
Fig. 14 is a block diagram of an information processing apparatus shown in another exemplary embodiment. As shown in fig. 14, the information processing apparatus includes:
a session state monitoring module, configured to monitor the session state corresponding to each user terminal in the multi-party session to obtain multi-party session information, the multi-party session information representing that a target user terminal in the multi-party session is in the state of inputting session information; and
a session display control module, configured to send, based on the multi-party session information, a session control instruction to each user terminal, so that each user terminal, in response to the instruction, displays a session image corresponding to the target user terminal in the session interface of the multi-party session and displays, in the session interface, a type identifier corresponding to the information type of the session information being input by the target user terminal, the session image being a virtual object for identifying the target user logged in at the target user terminal.
Based on the information processing device provided by the embodiment, the users in the multi-party conversation can acquire the real-time communication states of the users in the group chat scene based on the information displayed on the conversation interface, and the problem that the real-time communication states of the users in the group chat scene cannot be acquired in the prior art is solved.
In another exemplary embodiment, the session state listening module includes:
the first state information receiving unit is configured to receive first input state information sent by a user terminal in the multi-party conversation, and the first input state information contains a time point when a target user terminal triggers to input conversation information; the conversation queue inserting unit is configured to insert a target user corresponding to the target user terminal into a conversation queue based on the received first input state information, and the sequencing of the target user contained in the conversation queue corresponds to the sequencing of the time point when the target user terminal triggers to input the conversation information; and the multi-party conversation information acquisition unit is configured to take the obtained conversation queue as multi-party conversation information.
In another exemplary embodiment, the session state listening module further comprises:
the second state information receiving unit is configured to receive second input state information sent by a target user terminal in the multi-party session, the second input state information indicating that the target user terminal has terminated input of session information; and the session queue deleting unit is configured to delete the target user information corresponding to the target user terminal from the session queue based on the received second input state information.
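The session queue maintained by these units — ordered insertion at the triggering time point, deletion when the terminate message arrives — might look like the following sketch. The class and method names are illustrative assumptions, not part of the embodiment:

```python
# Hypothetical sketch of the session queue: users are inserted in order of
# the time point at which they triggered input (first input state
# information) and removed when the terminate message (second input state
# information) is received. Names are assumptions for illustration.
import bisect

class TypingQueue:
    def __init__(self):
        self._entries = []  # (trigger_time, user_id), kept sorted by time

    def on_start_input(self, user_id, trigger_time):
        """First input state information: insert, preserving time order."""
        bisect.insort(self._entries, (trigger_time, user_id))

    def on_stop_input(self, user_id):
        """Second input state information: remove the user from the queue."""
        self._entries = [e for e in self._entries if e[1] != user_id]

    def snapshot(self):
        """The queue itself serves as the multi-party session information."""
        return [user for _, user in self._entries]
```

A queue snapshot directly yields the priority ordering used later when assigning session image display positions.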
It should be noted that the apparatus provided in the foregoing embodiment and the method provided in the foregoing embodiments belong to the same concept; the specific manner in which each module and unit performs its operations has been described in detail in the method embodiments and is not repeated here.
Embodiments of the present application also provide an electronic device, including a processor and a memory, where the memory has stored thereon computer readable instructions, and the computer readable instructions, when executed by the processor, implement the information processing method as described above.
FIG. 15 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
It should be noted that the computer system 1600 of the electronic device shown in FIG. 15 is only an example and imposes no limitation on the functions or scope of use of the embodiments of the present application.
As shown in FIG. 15, the computer system 1600 includes a Central Processing Unit (CPU) 1601, which can perform various appropriate actions and processes, such as executing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1602 or a program loaded from a storage portion 1608 into a Random Access Memory (RAM) 1603. The RAM 1603 also stores various programs and data necessary for system operation. The CPU 1601, the ROM 1602, and the RAM 1603 are connected to one another via a bus 1604. An Input/Output (I/O) interface 1605 is also connected to the bus 1604.
The following components are connected to the I/O interface 1605: an input portion 1606 including a keyboard, a mouse, and the like; an output portion 1607 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), a speaker, and the like; a storage portion 1608 including a hard disk and the like; and a communication portion 1609 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication portion 1609 performs communication processing via a network such as the Internet. A drive 1610 is also connected to the I/O interface 1605 as needed. A removable medium 1611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1610 as needed, so that a computer program read therefrom is installed into the storage portion 1608 as necessary.
In particular, according to embodiments of the application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such embodiments, the computer program may be downloaded and installed from a network via the communication portion 1609, and/or installed from the removable medium 1611. When the computer program is executed by the Central Processing Unit (CPU) 1601, the various functions defined in the system of the present application are performed.
It should be noted that the computer-readable medium in the embodiments of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. A computer-readable signal medium, by contrast, may include a propagated data signal carrying a computer program, for example in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium, other than a computer-readable storage medium, that can transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer-readable medium may be transmitted over any appropriate medium, including but not limited to wireless or wired links, or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of these units do not, in any circumstance, limit the units themselves.
Another aspect of the present application also provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the information processing method as described above. The computer-readable storage medium may be included in the electronic device described in the above embodiment, or may exist separately without being incorporated in the electronic device.
Another aspect of the application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the information processing method provided in the above-described embodiments.
The above description is only a preferred exemplary embodiment of the present application, and is not intended to limit the embodiments of the present application, and those skilled in the art can easily make various changes and modifications according to the main concept and spirit of the present application, so that the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (15)
1. An information processing method characterized by comprising:
displaying a session interface corresponding to a multi-party session, the multi-party session being formed by a plurality of users joining a session group;
displaying, on the session interface, a session image corresponding to a target user who is inputting session information in the multi-party session, the session image being a virtual object identifying the target user; and
displaying, according to the information type to which the session information input by the target user belongs, a type identifier corresponding to the information type beside the session image corresponding to the target user.
2. The method of claim 1, wherein displaying, on the session interface, a session image corresponding to a target user who is inputting session information in the multi-party session comprises:
determining the number of target users in the multi-party session who are inputting session information; and
if the number of target users is greater than the number of session image display positions configured in the session interface, displaying the session images corresponding to the target users at the session image display positions based on the ordering of the time points at which the target users triggered input of session information.
3. The method of claim 2, wherein displaying the session images corresponding to the target users at the session image display positions based on the ordering of the time points at which the target users triggered input of session information comprises:
determining a priority of each detected target user, the priority corresponding to the ordering of the time points at which the target users triggered input of session information; and
updating the session images displayed at the session image display positions based on the priority of each target user and a preconfigured session image display policy.
4. The method of claim 3, wherein the session image display policy indicates that a session image corresponding to a target user is displayed at a session image display position for a specified duration; and
updating the session images displayed at the session image display positions based on the priority of each target user and the preconfigured session image display policy comprises:
determining a session image display position in an idle state in the session interface;
sequentially determining candidate target users according to the priority of each target user, and displaying the session images corresponding to the candidate target users at the session image display positions in the idle state; and
if the duration for which a current session image has been displayed at a session image display position reaches the specified duration, stopping display of the current session image so as to switch that session image display position to the idle state.
5. The method of claim 4, wherein after stopping display of the current session image, the method comprises:
determining, based on the arrangement of the session image display positions configured in the session interface, the session image display position next after the one at which display of the current session image was stopped; and
moving the session image displayed at that next session image display position to the preceding session image display position for display.
6. The method of claim 3, wherein the session image display policy indicates that a session image is no longer displayed after the corresponding target user terminates input of session information; and
updating the session images displayed at the session image display positions based on the priority of each target user and the preconfigured session image display policy comprises:
determining a session image display position in an idle state in the session interface;
sequentially determining candidate target users according to the priority of each target user, and displaying the session images corresponding to the candidate target users at the session image display positions in the idle state; and
if it is detected that the target user corresponding to a current session image displayed at a session image display position has terminated input of session information, stopping display of the current session image so as to switch that session image display position to the idle state.
7. The method of claim 3, wherein determining the priority of each detected target user comprises:
acquiring a session queue corresponding to the multi-party session, the session queue containing the ordering of the time points at which the target users in the multi-party session triggered input of session information; and
determining the priority of a target user according to the position, in the session queue, of the time point at which the target user triggered input of session information.
8. The method of claim 2, wherein the session image display positions include a first session image display position and a second session image display position, the session image corresponding to the terminal user being displayed at the first session image display position; and the method further comprises:
in the process of displaying the session images corresponding to the target users at the session image display positions based on the ordering of the time points at which the target users triggered input of session information, if a target user is determined to be a user other than the terminal user, displaying the session image corresponding to that target user at the second session image display position; and
if the target user is determined to be the terminal user, displaying the session image corresponding to the next target user at the second session image display position.
9. The method of claim 1, wherein each user in the multi-party session corresponds to a plurality of session images, and displaying a session image corresponding to the target user on the session interface comprises:
determining, based on the information type to which the session information input by the target user belongs, a session image matching the information type; and
displaying the determined session image on the session interface.
10. An information processing method characterized by comprising:
monitoring the session state corresponding to each user terminal in a multi-party session to obtain multi-party session information, the multi-party session information indicating that a target user terminal in the multi-party session is in a state of inputting session information; and
sending a session control instruction to each user terminal based on the multi-party session information, so that each user terminal, in response to the session control instruction, displays a session image corresponding to the target user terminal in a session interface corresponding to the multi-party session, and displays a type identifier corresponding to the information type in the session interface according to the information type of the session information being input by the target user terminal, the session image being a virtual object identifying the target user logged in at the target user terminal.
11. The method of claim 10, wherein monitoring the session state corresponding to each user terminal in the multi-party session to obtain the multi-party session information comprises:
receiving first input state information sent by a user terminal in the multi-party session, the first input state information containing the time point at which the target user terminal triggered input of session information;
inserting the target user corresponding to the target user terminal into a session queue based on the received first input state information, the ordering of target users in the session queue corresponding to the ordering of the time points at which their terminals triggered input of session information; and
using the resulting session queue as the multi-party session information.
12. The method of claim 11, wherein before using the resulting session queue as the multi-party session information, the method further comprises:
receiving second input state information sent by a target user terminal in the multi-party session, the second input state information indicating that the target user terminal has terminated input of session information; and
deleting the target user information corresponding to the target user terminal from the session queue based on the received second input state information.
13. An information processing apparatus characterized by comprising:
a session interface display module configured to display a session interface corresponding to a multi-party session, the multi-party session being formed by a plurality of users joining a session group;
a session image display module configured to display, on the session interface, a session image corresponding to a target user who is inputting session information in the multi-party session, the session image being a virtual object identifying the target user; and
a type identifier display module configured to display, according to the information type to which the session information input by the target user belongs, a type identifier corresponding to the information type beside the session image corresponding to the target user.
14. An electronic device, comprising:
a memory storing computer-readable instructions; and
a processor that reads the computer-readable instructions stored in the memory to perform the method of any one of claims 1 to 9 or claims 10 to 12.
15. A computer-readable storage medium having computer-readable instructions stored thereon which, when executed by a processor of a computer, cause the computer to perform the method of any one of claims 1 to 9 or claims 10 to 12.
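The display-position policy recited in claims 4 and 5 (fill idle positions in priority order, stop displaying a session image after a specified duration, then move later images toward the front) can be illustrated by a minimal sketch. All names and the tick-based timing model are assumptions for illustration only, not part of the claims:

```python
# Hypothetical sketch of the policy of claims 4-5: idle positions are
# filled in priority (queue) order, a position expires after `duration`
# ticks, and remaining session images shift toward the front.
class DisplaySlots:
    def __init__(self, num_slots, duration):
        self.duration = duration
        self.slots = [None] * num_slots  # (user_id, age) or None (idle)

    def fill(self, queue):
        """Assign queued users, in priority order, to idle positions."""
        shown = [s[0] for s in self.slots if s]
        waiting = [u for u in queue if u not in shown]
        for i, s in enumerate(self.slots):
            if s is None and waiting:
                self.slots[i] = (waiting.pop(0), 0)

    def tick(self):
        """Age every occupied position; expire those shown for the
        specified duration, then shift later images forward (claim 5)."""
        aged = [(u, a + 1) for (u, a) in (s for s in self.slots if s)]
        kept = [(u, a) for (u, a) in aged if a < self.duration]
        self.slots = kept + [None] * (len(self.slots) - len(kept))
```

The variant of claim 6 would replace the age test in `tick` with a check of whether the user has terminated input.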
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011087496.2A CN114338577B (en) | 2020-10-12 | 2020-10-12 | Information processing method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011087496.2A CN114338577B (en) | 2020-10-12 | 2020-10-12 | Information processing method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114338577A true CN114338577A (en) | 2022-04-12 |
CN114338577B CN114338577B (en) | 2023-05-23 |
Family
ID=81032919
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011087496.2A Active CN114338577B (en) | 2020-10-12 | 2020-10-12 | Information processing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114338577B (en) |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010033298A1 (en) * | 2000-03-01 | 2001-10-25 | Benjamin Slotznick | Adjunct use of instant messenger software to enable communications to or between chatterbots or other software agents |
CN104639425A (en) * | 2015-01-06 | 2015-05-20 | 广州华多网络科技有限公司 | Network expression playing method and system and service equipment |
CN106355629A (en) * | 2016-08-19 | 2017-01-25 | 腾讯科技(深圳)有限公司 | Virtual image configuration method and device |
CN106445325A (en) * | 2016-08-30 | 2017-02-22 | 华为技术有限公司 | Method and device for creating group |
DE202017105929U1 (en) * | 2016-10-14 | 2018-01-09 | Google LLC (n.d.Ges.d. Staates Delaware) | Privacy settings for virtual reality |
WO2018141224A1 (en) * | 2017-02-06 | 2018-08-09 | 阿里巴巴集团控股有限公司 | Group message read-status display method, instant messaging client, and server |
CN108880975A (en) * | 2017-05-16 | 2018-11-23 | 腾讯科技(深圳)有限公司 | Information display method, apparatus and system |
CN109086860A (en) * | 2018-05-28 | 2018-12-25 | 北京光年无限科技有限公司 | A kind of exchange method and system based on visual human |
CN109873757A (en) * | 2019-03-29 | 2019-06-11 | 上海连尚网络科技有限公司 | Message display method, electronic equipment and readable medium for multi-conference |
CN110147188A (en) * | 2019-05-27 | 2019-08-20 | 腾讯科技(深圳)有限公司 | A kind of information cuing method, device, equipment and storage medium |
CN110717974A (en) * | 2019-09-27 | 2020-01-21 | 腾讯数码(天津)有限公司 | Control method and device for displaying state information, electronic equipment and storage medium |
WO2020042816A1 (en) * | 2018-08-28 | 2020-03-05 | Oppo广东移动通信有限公司 | Message display method and device, terminal, and storage medium |
CN110913077A (en) * | 2019-12-03 | 2020-03-24 | 深圳集智数字科技有限公司 | Session message display method and device |
CN111600730A (en) * | 2020-05-18 | 2020-08-28 | 腾讯科技(深圳)有限公司 | Session interface display method, group chat creating method, device and equipment |
CN111589130A (en) * | 2020-04-24 | 2020-08-28 | 腾讯科技(深圳)有限公司 | Virtual object control method, device, equipment and storage medium in virtual scene |
US20200279240A1 (en) * | 2019-03-01 | 2020-09-03 | Grabango Co. | Cashier interface for linking customers to virtual data |
Non-Patent Citations (1)
Ba Zhichao et al., "Network structure, behavior, and evolution of information exchange within WeChat groups: a conversation-analysis perspective" (微信群内部信息交流的网络结构、行为及其演化分析——基于会话分析视角), Journal of the China Society for Scientific and Technical Information (情报学报)
Also Published As
Publication number | Publication date |
---|---|
CN114338577B (en) | 2023-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10198238B2 (en) | Data transmission method, and relevant device and system | |
US11792273B2 (en) | Invitation link for launching multi-user applications | |
US20220043556A1 (en) | Image distribution method, image distribution server device and chat system | |
WO2009012117A1 (en) | Method, system and apparatus for sorting topics within a group | |
CN106161219A (en) | Message treatment method and device | |
CN105283233B (en) | Information processing system | |
CN110609970B (en) | User identity identification method and device, storage medium and electronic equipment | |
CN112057846B (en) | Interaction method, device, equipment and storage medium for cloud game service scheduling | |
CN113115114A (en) | Interaction method, device, equipment and storage medium | |
US20240314214A1 (en) | System and Method for Providing Recommendations Based on Synchronous Activity | |
CN113315869A (en) | Content display method and device, electronic equipment and storage medium | |
CN106209396A (en) | Matching process and relevant apparatus | |
US20220021715A1 (en) | Live streaming method and apparatus, device and computer readable storage medium | |
CN107800612B (en) | User matching method and device, storage medium and computer equipment | |
CN113869954A (en) | Information processing method and device | |
CN114844869A (en) | Multimedia conference participation statistical method and device, computer equipment and storage medium | |
CN113613060A (en) | Drawing live broadcast method, device, equipment and storage medium | |
CN114338577B (en) | Information processing method and device, electronic equipment and storage medium | |
CN112169312A (en) | Queuing scheduling method, device, equipment and storage medium for cloud game service | |
CN114518918A (en) | Data processing method, device, equipment and storage medium | |
CN114205320B (en) | Message display method and device, electronic equipment and storage medium | |
CN113497715A (en) | Chat service providing method and device | |
CN109544115A (en) | Conferencing information based reminding method, system and server, computer readable storage medium | |
CN114895830A (en) | Task information display method and device, electronic equipment and storage medium | |
CN111130983B (en) | Method and equipment for sending information and generating result information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40071435; Country of ref document: HK |
GR01 | Patent grant | ||
GR01 | Patent grant |