CN112152901A - Virtual image control method and device and electronic equipment


Info

Publication number
CN112152901A
CN112152901A (application CN201910562449.XA)
Authority
CN
China
Prior art keywords
action
client
user
avatar
keyword
Legal status
Pending
Application number
CN201910562449.XA
Other languages
Chinese (zh)
Inventor
陈家盛
庞晟立
洪荣富
林毅雄
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910562449.XA
Publication of CN112152901A

Classifications

    • H  ELECTRICITY
    • H04  ELECTRIC COMMUNICATION TECHNIQUE
    • H04L  TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00  User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04  Real-time or near real-time messaging, e.g. instant messaging [IM]
    • G  PHYSICS
    • G06  COMPUTING; CALCULATING OR COUNTING
    • G06F  ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00  Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01  Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048  Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484  Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • H  ELECTRICITY
    • H04  ELECTRIC COMMUNICATION TECHNIQUE
    • H04L  TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00  User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/07  User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, characterised by the inclusion of specific contents
    • H04L 51/18  Commands or executable codes

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to the field of communications technology, and in particular to an avatar control method, an avatar control apparatus, and an electronic device. The method acquires content sent from a chat interface of a first client, where the chat interface is associated with at least the first client and a second client; when the content is determined to contain a preset keyword, an action instruction corresponding to the keyword is matched; and the action instruction is returned to the first client and the second client, so that each client controls the user avatar of the first client displayed on its chat interface to play the action corresponding to the action instruction. In this way, sent content is automatically converted into actions and expressions of the avatar, which increases the avatar's sense of presence and enhances the user's emotional expression and the interest of the chat.

Description

Virtual image control method and device and electronic equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for controlling an avatar, and an electronic device.
Background
At present, when users chat, they express information not only through text or voice; they also frequently use emoticons, which makes chatting and content input more interesting and conveys information more intuitively.
In the prior art, a set of fixed emoticons is usually provided for the user to select from, and the user sends the chosen emoticon into the currently active application as a chat message. However, this only gives the user a convenient way to select and send emoticons: the emoticons are simply sent and received as chat messages and have no interaction or connection with the avatar in the chat scene. The avatar plays only a small role during chat and cannot interact with the user, so how to provide a more interesting chat interaction mode for the avatar is a problem that needs to be considered.
Disclosure of Invention
The embodiments of the application provide an avatar control method, an avatar control apparatus, and an electronic device, so as to improve the interest and emotional expression of a user chatting on a social platform.
The embodiment of the application provides the following specific technical scheme:
an embodiment of the present application provides an avatar control method, including:
acquiring content sent from a chat interface of a first client, wherein the chat interface is associated with at least the first client and a second client;
when the content is determined to contain preset keywords, matching action instructions corresponding to the keywords;
and returning the action instruction to the first client and the second client, so that the first client and the second client respectively control the user avatar of the first client displayed on the chat interface to play the action corresponding to the action instruction.
Another embodiment of the present application provides an avatar control method, including:
receiving content input by a user on a chat interface, and sending the content to a server;
receiving an action instruction returned by the server, wherein the action instruction is obtained by matching the action instruction corresponding to the keyword when the server determines that the content contains the preset keyword;
and playing the action corresponding to the action instruction on the user virtual image of the user displayed on the chat interface.
Another embodiment of the present application provides an avatar control apparatus, including:
an acquisition module, used for acquiring content sent by a chat interface of a first client, wherein the chat interface is associated with at least the first client and a second client;
the matching module is used for matching action instructions corresponding to the keywords when the content is determined to contain the preset keywords;
and the sending module is used for returning the action instruction to the first client and the second client so as to enable the first client and the second client to respectively control the user avatar of the first client displayed on the chat interface to play the action corresponding to the action instruction.
Another embodiment of the present application provides an avatar control apparatus, including:
the first receiving module is used for receiving the content input by the user on the chat interface;
the sending module is used for sending the content to a server;
the second receiving module is used for receiving an action instruction returned by the server, wherein the action instruction is obtained by matching the action instruction corresponding to the keyword when the server determines that the content contains the preset keyword;
and the playing module is used for playing the action corresponding to the action instruction on the user virtual image of the user displayed on the chat interface.
Another embodiment of the present application provides an electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of any of the avatar control methods described above when executing the program.
Another embodiment of the present application provides a computer-readable storage medium having a computer program stored thereon, which when executed by a processor, performs the steps of any of the avatar control methods described above.
In the embodiments of the application, the first client and the second client establish a chat, and the first client sends message content to the second client. The content sent from the chat interface of the first client is acquired; when the content is determined to contain a preset keyword, the action instruction corresponding to the keyword is matched; and the action instruction is returned to the first client and the second client, so that each can control the user avatar of the first client displayed on the chat interface and play the action corresponding to the instruction. In this way, the corresponding action is played automatically on the avatar according to the sent content, which increases the avatar's exposure and sense of presence. With the avatar as the carrier, the information the user expresses is conveyed more vividly and intuitively; the avatar changes dynamically, better meets the user's actual needs, strengthens the user's emotional expression and sense of identification, and makes chatting more interesting.
Drawings
FIG. 1 is a diagram illustrating the emoticon association effect in the prior art;
FIG. 2 is a schematic diagram of an application architecture of an avatar control method according to an embodiment of the present application;
FIG. 3 is a flowchart of an avatar control method in an embodiment of the present application;
FIG. 4 is a flowchart of another avatar control method in the embodiment of the present application;
FIG. 5 is a skeletal diagram of an avatar in an embodiment of the present application;
FIG. 6 is a detailed flowchart of an avatar control method in an embodiment of the present application;
FIG. 7 is an interface effect diagram of an avatar display of a chat scene in an embodiment of the present application;
FIG. 8 is a diagram illustrating interface effects of a user entering content in a chat interface in an embodiment of the present application;
FIG. 9 is a diagram illustrating interface effects of avatar rendering actions in an embodiment of the present application;
FIG. 10 is a diagram illustrating interface effects exhibited by action templates of an avatar in an embodiment of the present application;
FIG. 11 is a diagram illustrating an interface effect of selecting an action template according to an embodiment of the present application;
FIG. 12 is a diagram illustrating an enlarged interface effect of an avatar in an embodiment of the present application;
FIG. 13 is a schematic interface diagram illustrating an avatar playing action in an embodiment of the present application;
FIG. 14 is a schematic structural diagram of an avatar control apparatus according to an embodiment of the present application;
FIG. 15 is a schematic view of another avatar control apparatus according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
For the purpose of facilitating an understanding of the embodiments of the present application, a brief introduction of several concepts is provided below:
A social platform: this may be understood as a social network of individuals, and may take the form of a social website or a social application (APP), for example an instant messaging application, which allows two or more people to transfer information in real time over a network. A user can register a social account on the social platform and establish social relationships with other social accounts. On the social platform, a user can open a chat window between their own social account and other people's accounts and send and receive information, including but not limited to text, voice, video, and emoticons. Users can also own avatars, and in the embodiments of the application the avatars of all chat parties can be displayed on the chat window interface.
An avatar: the virtual personal image of a user on the social platform in the embodiments of the application. It can be a skeletal figure, such as a complete little person, and it is not a static picture: it can dynamically perform actions and expressions. An example is the "centimeter show" in mobile QQ; after the centimeter show is enabled, the virtual figure, i.e. the avatar, is displayed on the chat interface.
After analyzing the prior art, the inventors found that when users chat on a social platform, they do not express information only through text or voice; they also hope to convey it through emoticons or other more vivid means, so that information is transmitted more intuitively and chatting becomes more interesting. In the prior art, fixed emoticons are mainly displayed directly, or the system associates related emoticons with the text the user inputs and screens out some candidates for the user to select. For example, fig. 1 is a schematic diagram of the emoticon association effect in the prior art: the user inputs "666", the system associates and displays emoticons related to "666", and the user can send the chosen emoticon to the currently active application as a chat message. However, this only provides a convenient way to select and send emoticons. The user cannot perform any other control operation on them, and the emoticons have no interaction or connection with the avatar in the chat scene; the avatar usually just stands idly at the side during the chat, plays little role, and lacks interactivity with the user.
To solve this problem, a more interesting interaction mode for the avatar can be provided, improving the interest and emotional expression of users chatting on the social platform. In view of this, the embodiments of the application provide an avatar control method. Specifically, the avatars of all chat parties can be displayed on the chat interface during a chat; the content the user inputs is recognized, the corresponding action instruction is matched, and the action corresponding to that instruction is played automatically on the corresponding avatar. The content the user inputs is thereby converted automatically into the avatar's actions and expressions. This increases the avatar's exposure and sense of presence, strengthens the user's sense of identification with the avatar, lets the avatar automatically play associated actions, expresses the information the user conveys more vividly, enhances the user's emotional expression during chat, and makes chatting more interesting.
Some brief descriptions are given below of application scenarios to which the technical solution of the embodiments of the application can apply; it should be noted that these scenarios are merely illustrative and not limiting. In a specific implementation process, the technical solution provided by the embodiments of the application can be applied flexibly according to actual needs.
Referring to fig. 2, an application architecture diagram of the avatar control method in the embodiment of the present application is shown, including a first client 100, a second client 200, and a server 300.
The first client 100 and the second client 200 may run on any terminal, such as a smart phone, tablet computer, or portable personal computer, on which at least one type of application can run, including but not limited to instant messaging applications. The client may be a client of an instant messaging application, which the user operates through the terminal; for example, the user can establish a chat window with other users in the instant messaging application and exchange information with them.
In this embodiment, the first client 100 and the second client 200 may be clients of the same instant messaging application running on different terminals, with chat communication established between them; the user avatars of both the first client 100 and the second client 200 may be displayed on the chat interface the two sides establish. Either client may be the sender or the receiver of information: if the first client 100 sends information to the second client 200 on the chat interface, the first client 100 is the sender and the second client 200 is the receiver, and if the first client 100 receives information sent by the second client 200, the roles are reversed.
The server 300 can provide various network services for the terminals; for the different applications on a terminal, the server 300 may be regarded as the background server providing the corresponding network service. For example, if the first client 100 and the second client 200 are clients of an instant messaging application such as mobile QQ, the server 300 may be understood as the QQ background server that provides QQ-related services, handling requirements of the instant messaging application in implementing functions such as user registration, avatar configuration, and information interaction. In this embodiment, the server 300 may also receive the content, input by the user on the chat interface, that the first client 100 or the second client 200 sends, perform matching to determine the corresponding action instruction, and return the instruction to the client so that it controls the corresponding avatar to play the matching action. For example, if the first client 100 sends information to the second client 200, the server 300 can obtain the content sent by the first client 100, match the corresponding action instruction, and return it to both clients, so that each can control the user avatar of the first client 100 to play the corresponding action.
The server 300 may be a server, a server cluster composed of a plurality of servers, or a cloud computing center.
The first client 100 and the second client 200 may be connected to the server 300 via the Internet to communicate with each other. Optionally, the network uses standard communication techniques and/or protocols. The network is typically the Internet, but can be any network, including but not limited to a local area network (LAN), metropolitan area network (MAN), wide area network (WAN), mobile, wired, or wireless network, a private network, or any combination of virtual private networks. In some embodiments, data exchanged over the network is represented using techniques and/or formats including the hypertext markup language (HTML), the extensible markup language (XML), and the like. All or some of the links may also be encrypted using conventional encryption techniques such as secure socket layer (SSL), transport layer security (TLS), virtual private networks (VPN), and Internet protocol security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may be used in place of, or in addition to, those described above.
It should be noted that the application architecture diagram in the embodiments of the application serves to illustrate the technical solution more clearly and does not limit it; the solution is not restricted to instant-messaging chat scenarios and is equally applicable to similar problems under other application architectures and business applications. In the following embodiments, the application architecture of the avatar control method shown in fig. 2 is used as the illustrative example.
Based on the foregoing embodiment, the following describes an avatar control method in the embodiment of the present application, and refer to fig. 3, which is a flowchart of an avatar control method in the embodiment of the present application, and is mainly applied to a server side, where the method includes:
step 300: the method comprises the steps of obtaining content sent by a chat interface of a first client, wherein the chat interface at least is associated with the first client and a second client.
In the embodiment of the application, the method is mainly applied to a chat scene, the first client and the second client establish chat communication, chat interfaces are displayed on terminals of the two sides, and the user virtual image of the first client and the user virtual image of the second client are displayed on the chat interfaces.
In addition, the embodiments of the application can be used in a chat scene with a single friend and can also be applied, without limitation, to group chat scenes with multiple friends. Of course, a user can also choose to hide a friend's avatar, which is then displayed automatically when that friend speaks.
The content sent by the user on the chat interface may be text content or voice content, and is not limited in the embodiment of the application.
Step 310: and when the content is determined to contain the preset keywords, matching action instructions corresponding to the keywords.
When step 310 is executed, the method specifically includes:
s1, determining whether the content contains the keywords in the keyword list, and if so, acquiring the corresponding keyword identifications in the keyword list, wherein the keyword list at least comprises the keywords and the associated keyword identifications.
When the input content is identified to contain the preset keywords, if the input content is the voice content, the input content can be converted into corresponding text content and then identified, the text content is matched with the keywords in the keyword list, and whether the input content contains the keywords in the keyword list is searched.
S2: according to the keyword identifier, match the corresponding action tag identifier in a mapping relation table, where the mapping relation table includes at least the mapping relations between keyword identifiers and action tag identifiers.
S3: according to the action tag identifier, match in an action tag table to obtain the corresponding action instruction and action tag name, where the action tag table includes at least each action tag identifier and its associated action instruction and action tag name.
That is to say, in the embodiments of the application, each avatar action instruction is marked with an action tag, and keywords are associated with the tag; for example, the call action instruction is marked with a "call" action tag, and commonly used keywords such as "call", "hello", and "hi" are added to that tag.
In a specific implementation, three tables are established in advance in the background server, namely a keyword table, a mapping relation table, and an action tag table, which record the above data, so that the action instruction corresponding to the content input by the user can be obtained by matching across the three tables.
Several tables built in the server are explained below:
in the embodiment of the application, a semantic database is established in a background server, and three tables, namely a table A, a table B and a table C, are established in the semantic database.
Table A is the action tag table. It includes at least the action tag identifier and the associated action instruction and action tag name, and is mainly used to record the avatar's action tags; the action tag identifier may be an automatically incrementing field. For example, a record is inserted into table A with action tag identifier 1, action instruction D01, and action tag name "call", indicating that the action corresponding to D01 means making a call.
Table B is the keyword table. It includes at least each keyword and its associated keyword identifier, and is mainly used to record the avatar keywords; the keyword identifier is likewise an automatically incrementing field. For example, inserting a record with keyword identifier 2 and keyword "hello" represents inserting one keyword record.
Table C is the mapping relation table. It includes at least the mapping relations between keyword identifiers and action tag identifiers, and is mainly used to record the mapping between action tags and keywords. In addition, to manage the mappings, each mapping relation may also have a self-incrementing mapping identifier, i.e. an automatically incrementing field. For example, inserting a record with mapping identifier 1, action tag identifier 1, and keyword identifier 2 indicates that a mapping is established: the action tag with identifier 1 contains the keyword with identifier 2, that is, the call action tag contains the keyword "hello".
Further, in the embodiment of the present application, the action tag table, the keyword table, and the mapping relationship table may be continuously updated, and operations such as addition, deletion, or modification may be performed.
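By way of illustration only (this sketch is not part of the claimed embodiments), the semantic database with tables A, B, and C might be created as follows; the SQLite backend and all table and column names are assumptions made for the example.

```python
# Illustrative sketch of the semantic database (tables A, B, C); the SQLite
# backend and the table/column names are assumptions, not the patent's schema.
import sqlite3

conn = sqlite3.connect("semantic.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS action_tag (              -- table A: action tag table
    tag_id      INTEGER PRIMARY KEY AUTOINCREMENT,   -- action tag identifier
    instruction TEXT NOT NULL,                       -- action instruction, e.g. 'D01'
    tag_name    TEXT NOT NULL                        -- action tag name, e.g. 'call'
);
CREATE TABLE IF NOT EXISTS keyword (                 -- table B: keyword table
    keyword_id  INTEGER PRIMARY KEY AUTOINCREMENT,   -- keyword identifier
    keyword     TEXT NOT NULL UNIQUE
);
CREATE TABLE IF NOT EXISTS tag_keyword_map (         -- table C: mapping relation table
    map_id      INTEGER PRIMARY KEY AUTOINCREMENT,   -- self-incrementing mapping identifier
    tag_id      INTEGER NOT NULL REFERENCES action_tag(tag_id),
    keyword_id  INTEGER NOT NULL REFERENCES keyword(keyword_id)
);
""")

# Example records corresponding to tables 1-3 below.
conn.execute("INSERT INTO action_tag (instruction, tag_name) VALUES ('D01', 'call')")
conn.execute("INSERT INTO keyword (keyword) VALUES ('hi'), ('hello')")
conn.execute("INSERT INTO tag_keyword_map (tag_id, keyword_id) VALUES (1, 1), (1, 2)")
conn.commit()
```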
For example, see table 1 for an example of an action tag table in the embodiment of the present application.
Table 1.
Action tag identifier | Action instruction | Action tag name
1                     | D01                | Call
2                     | D02                | Laugh
3                     | D03                | What is more
Referring to table 2, an example of a keyword table in the embodiment of the present application is shown.
Table 2.
Keyword identifier | Keyword
1                  | hi
2                  | hello
3                  | happy
Table 3 shows an example of a mapping relationship table in the embodiment of the present application.
Table 3.
Mapping identifier | Action tag identifier | Keyword identifier
1                  | 1                     | 1
2                  | 1                     | 2
3                  | 2                     | 3
Tables 1, 2, and 3 give only a few example records. As can be seen from them, the action instruction D01 with action tag identifier 1 has the tag name "call" and corresponds to keyword identifiers 1 and 2, so D01 contains the two keywords "hi" and "hello"; D02 has the tag name "laugh" and contains the one keyword "happy".
In this way, in the embodiments of the application, after obtaining the content sent by the first client, the background server performs lookup and matching to determine whether the content contains a keyword from the keyword table. For example, if the content is "hello", the server determines that it contains the keyword "hello" from the keyword table and acquires the corresponding keyword identifier 2. It then looks up the action tag identifier corresponding to keyword identifier 2 in the mapping relation table, finding action tag identifier 1. Finally, it looks up the corresponding action instruction in the action tag table: the instruction matched with action tag identifier 1 is D01, with the action tag name "call". The server can then return the acquired action instruction and action tag name to the first client.
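As an illustration of this three-table lookup, a minimal sketch over the assumed schema above follows; the plain substring match is a simplification standing in for whatever keyword extraction the real system performs.

```python
# Minimal sketch of the server-side three-table lookup, assuming the
# illustrative SQLite schema above.
def match_action_instructions(conn, content: str) -> list[tuple[str, str]]:
    """Return one (instruction, tag_name) pair per keyword found in the content."""
    matches = []
    for keyword_id, keyword in conn.execute("SELECT keyword_id, keyword FROM keyword"):
        if keyword in content:                        # keyword hit (table B)
            row = conn.execute(
                "SELECT a.instruction, a.tag_name "
                "FROM tag_keyword_map m JOIN action_tag a ON a.tag_id = m.tag_id "
                "WHERE m.keyword_id = ?", (keyword_id,)).fetchone()  # tables C, A
            if row:
                matches.append(row)
    return matches  # empty list: no keyword matched, so the avatar performs no action

# e.g. match_action_instructions(conn, "hello") -> [('D01', 'call')]
```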
Further, in the embodiments of the application, if the server does not match any corresponding action instruction, no operation is performed, and the avatars on the first client and the second client do not perform any action.
Step 320: and returning the action instruction to the first client and the second client so that the first client and the second client respectively control the user avatar of the first client displayed on the chat interface to play the action corresponding to the action instruction.
Further, the server can also send the user identifier of the first client to the second client, so that the second client can determine the user avatar of the first client from the identifier, display that avatar, and control it to play the corresponding action.
In the embodiments of the application, the server returns the action instruction to the first client and the second client simultaneously. After the first client sends the content, the corresponding action is played on the first client's user avatar on its own chat interface, so the sending user can see which action the avatar performs; meanwhile, when the second client receives the content, the same action is played on the first client's avatar on the second client's chat interface, so the receiver gets the sent content and the avatar's action together. This strengthens the avatar's sense of presence, enhances the sender's emotional expression, and makes chatting more interesting.
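A hypothetical sketch of the payload the server might push to both clients follows; the field names are illustrative assumptions, not a documented protocol.

```python
# Hypothetical push payload; all field names are assumptions for illustration only.
import json

def build_action_push(sender_user_id: str, instruction: str, tag_name: str) -> str:
    return json.dumps({
        "user_id": sender_user_id,   # lets the second client resolve the sender's avatar
        "instruction": instruction,  # e.g. "D01"
        "tag_name": tag_name,        # e.g. "call"
    })
```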
Further, if the action instruction represents an action in which the two avatars interact, corresponding actions can be played on both avatars at the same time. For example, if the keyword is "hug", the corresponding action instruction can control the left and right arms of the first client's user avatar to extend forward and embrace the second client's user avatar.
Further, if the number of action instructions matched by the server is greater than 1, when the action instructions are returned to the first client and the second client, a possible implementation manner is further provided in the embodiment of the present application, which specifically includes:
and S1, if the matched action instructions are more than 1, selecting a preset number of action instructions from the action instructions according to a preset rule.
For example, it is determined that the content includes a plurality of keywords, each keyword corresponds to a different action instruction, the plurality of action instructions are matched, and several of the keywords may be selected and returned to the first client, for example, the preset number is 5, which is not limited.
The method comprises the steps of selecting a preset number of action instructions according to a preset rule, determining the number of keywords contained in the content corresponding to the matched action instructions, sequencing the matched action instructions according to the number of the keywords from large to small, and selecting the previous preset number of action instructions.
S2: return the selected preset number of action instructions to the first client, so that the first client sorts them according to the locally recorded association between action instructions and use counts, and displays the sorted instructions in a selection area associated with the first client's user avatar. In the embodiments of the application, a database is also established locally at the client, containing a table D, the use-count table, which includes at least the association between each action instruction and its number of uses, i.e. it records how many times each action instruction has been used.
In this way, after receiving the preset number of action instructions, the first client queries table D, sorts the instructions by their use counts, and displays them in order, placing the most frequently used ones first for the user to choose. For example, the instructions can be displayed in a bubble, a floating window above the avatar; after the user selects one, the corresponding action is played on the first client's avatar on the chat interface.
S3: receive, from the first client, the action instruction selected by the user from the sorted action instructions.
S4: send the selected action instruction to the second client.
That is, the server also sends the action instruction selected at the first client to the second client, so that the second client plays the corresponding action on the first client's user avatar on its chat interface.
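The preset rule of S1 might be sketched as follows; this is illustrative only, and the preset number of 5 and the ranking order are assumptions.

```python
# Sketch of the server-side preset rule: rank matched instructions by how many
# keywords in the content map to each, and keep the top preset_count (assumed 5).
from collections import Counter

def select_instructions(matched: list[str], preset_count: int = 5) -> list[str]:
    """matched holds one entry per keyword hit, so an instruction matched by
    several keywords appears several times and therefore ranks higher."""
    by_keyword_count = Counter(matched)  # instruction -> number of matching keywords
    return [inst for inst, _ in by_keyword_count.most_common(preset_count)]
```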
In the embodiments of the application, the content sent from the chat interface of the first client is acquired; when the content is determined to contain a preset keyword, the action instruction corresponding to the keyword is matched and returned to the first client and the second client, so that both clients play the corresponding action on the first client's user avatar displayed on the chat interface. In this way, by identifying the content the user inputs, the associated action can be played automatically on the avatar: the input content is linked with the avatar, no secondary selection by the user is needed, and the avatar automatically makes the corresponding actions and expressions according to the input. This increases the avatar's exposure and sense of presence, amplifies its actions, strengthens the user's emotional expression and sense of identification through the avatar's dynamic changes, and makes chatting more interesting.
Based on the foregoing embodiments, the present application further provides a client-side-based avatar control method, as shown in fig. 4, which is a flowchart of another avatar control method in the present application, and is mainly applied to a first client side, where the method includes:
step 400: and receiving the content input by the user on the chat interface and sending the content to the server.
In the embodiments of the application, the associated avatar of each chat party can be displayed on the chat interface of the client; an avatar may also be displayed when a given user sends a message. The avatar can be shown at a set position on the chat interface, for example on the input field of the chat interface.
Step 410: and receiving an action instruction returned by the server, wherein the action instruction is obtained by matching the action instruction corresponding to the keyword when the server determines that the content contains the preset keyword.
The implementation manner of the server matching and determining the corresponding action command is the same as that in the above embodiment, and is not described in detail here.
Further, if the server returns a plurality of action commands, the method further includes:
1) If a preset number of action instructions returned by the server is received, where the preset number is greater than 1, sort the instructions according to the locally recorded association between action instructions and use counts.
2) And displaying a selection area associated with the user avatar, and displaying the sequenced action instructions on the selection area so that the user can select from the sequenced action instructions.
The selection area can be a floating window of a preset shape, displayed at a position associated with the avatar, from which the user can select the desired action instruction.
Further, if no selection operation on the sorted action instructions is received within the duration threshold, the selection area can be hidden; in that case the avatar performs no action.
The duration threshold may also be set empirically, for example, set to 5 seconds, and is not limited in the embodiment of the present application.
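The timed hiding of the selection area might look like the following sketch; the panel object and its selection flag are hypothetical names introduced for the example.

```python
# Sketch of hiding the selection area after the duration threshold (assumed 5 s)
# when no selection is made; `panel` and `selection_made` are hypothetical names.
import threading

def show_selection_area(panel, timeout_s: float = 5.0) -> None:
    panel.show()
    def hide_if_unselected():
        if not panel.selection_made:  # flag assumed to be set by the click handler
            panel.hide()              # the avatar then performs no action
    threading.Timer(timeout_s, hide_if_unselected).start()
```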
Step 420: and playing the action corresponding to the action instruction on the user virtual image of the user displayed on the chat interface.
When a preset number of action instructions returned by the server has been received, executing step 420 specifically includes: in response to a selection operation on the displayed sorted action instructions, playing the action corresponding to the selected instruction on the user's avatar displayed on the chat interface.
That is, if the server matches exactly one action instruction and returns it to the client, the first client plays the corresponding action on its user avatar upon receiving the instruction; if the server matches more than one, the user of the first client needs to choose, and after one instruction is selected, the corresponding action is played on the avatar.
Further, the use counts recorded locally by the first client are continuously updated according to the actions played by its user avatar. Specifically, the embodiments of the application provide one possible implementation: according to the action instruction played on the first client's user avatar, update the locally recorded use count of that instruction, i.e. add 1 to its use count.
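A minimal sketch of this client-local table D bookkeeping follows; an in-memory dictionary stands in for the client's local database, which is an assumption.

```python
# Sketch of table D: sort candidate instructions by recorded use count and
# increment the count once an action is actually played. An in-memory dict
# stands in for the client's local database.
usage_counts: dict[str, int] = {}  # table D: action instruction -> times used

def sort_by_usage(instructions: list[str]) -> list[str]:
    return sorted(instructions, key=lambda i: usage_counts.get(i, 0), reverse=True)

def record_use(instruction: str) -> None:
    usage_counts[instruction] = usage_counts.get(instruction, 0) + 1
```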
Specifically, when playing the action corresponding to the action instruction on the user avatar of the user displayed on the chat interface, the method includes:
s1, obtaining animation data corresponding to the action command, wherein the virtual image is a skeleton image, the skeleton image comprises each pre-divided skeleton node, and the animation data at least comprises position data of the skeleton node corresponding to each action in each time frame.
In the embodiment of the present application, the virtual image is preferably a 3D skeleton image, and may be a skeleton image of a person, or may be a skeleton image of various animals or living beings, which is not limited specifically.
In order to define animation data of each action of the avatar, a skeleton image is divided into a plurality of skeleton nodes, and the number of each skeleton node is determined, taking the skeleton image of the avatar as the skeleton image of the character as an example, as shown in fig. 5, which is a skeleton schematic diagram of the avatar in the embodiment of the present application, as shown in fig. 5, the skeleton node is divided into 18 skeleton nodes, wherein the skeleton node 0 is a fixed skeleton node, the other skeleton nodes are movable skeleton nodes, for each action, each skeleton node is defined, and position data of each skeleton node in each time frame can be predefined, the position data of each other skeleton node is the relative position relative to the skeleton node 0, and if the position of a certain skeleton needs to be changed in a certain time frame, the position data of the skeleton node is adjusted, so that the animation data can be modified, the realization is simple.
And S2, playing corresponding actions on the user virtual image displayed on the chat interface according to the animation data.
Therefore, when the action is played, the client can obtain the animation data corresponding to the action instruction from the server, and the positions of all the bone nodes can be adjusted in different time frames according to the animation data, so that the action playing effect is finally formed.
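A sketch of how such animation data might be laid out and played back follows; the Frame layout and the skeleton object's methods are assumptions consistent with the description above, not the patent's actual data format.

```python
# Illustrative playback of animation data: in each time frame, move every
# movable bone node to its predefined position relative to fixed node 0.
# The Frame layout and the skeleton API are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class Frame:
    time_ms: int
    node_positions: dict[int, tuple[float, float, float]]  # node id -> offset from node 0

def play_animation(skeleton, frames: list[Frame]) -> None:
    rx, ry, rz = skeleton.node(0).position  # fixed bone node 0
    for frame in frames:
        for node_id, (dx, dy, dz) in frame.node_positions.items():
            skeleton.node(node_id).set_position((rx + dx, ry + dy, rz + dz))
        skeleton.render_at(frame.time_ms)   # hypothetical render call
```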
Further, when the action corresponding to the action instruction is played on the user avatar of the user shown on the chat interface, the method may further include: magnifying a particular location of the user avatar.
For example, when the action is a call and the call action is played on the avatar, the head or upper body part can be enlarged to enhance the effect of showing the action.
For example, when the motion is blinking and the avatar plays the blinking motion, the face can be enlarged and the user can see the motion change of the face more clearly.
Therefore, when an action is played, giving certain expressions or limb movements a magnified close-up enhances the visual detail effect and further strengthens the user's sense of identification with the avatar.
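The close-up could be sketched as scaling the figure around a focus node; the scene-graph calls and the scale factor below are hypothetical.

```python
# Sketch of the close-up effect: scale the avatar around a focus node, e.g. the
# head for a call action or the face for a blink. The scene-graph API is assumed.
def zoom_to(skeleton, focus_node_id: int, scale: float = 2.0) -> None:
    skeleton.set_pivot(skeleton.node(focus_node_id).position)  # scale around the close-up point
    skeleton.set_scale(scale)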
In the embodiments of the application, the avatar can be displayed on the client's chat interface, and after the content input by the user is received and sent, the corresponding action is played automatically on the avatar. This provides a more interesting interaction mode for the avatar, expresses the information the user wants to convey more vividly, and makes chatting more interesting.
Based on the foregoing embodiments, the avatar control method is described below in a specific application scenario. The client acts as the sender, i.e. the first client in the above embodiments, and the action instruction is taken to indicate only the first client's user avatar to perform an action. The interaction logic between client and server is introduced with reference to fig. 6, a schematic flow diagram of an avatar control method in the embodiments of the application.
Step 600: the client receives the content input by the user on the chat interface.
Step 601: the client sends the content to the server.
Step 602: the server determines whether the content includes a predetermined keyword, if yes, step 603 is performed, otherwise, step 611 is performed.
Specifically, the server compares the content against the keyword table in the semantic database to judge whether the content contains any keyword from the table. If so, action-instruction matching can proceed; if not, the process ends directly, there is no corresponding action instruction, and the avatar need not perform any action.
Step 603: and the server acquires an action instruction corresponding to the keyword.
Step 604: the server determines whether the number of the action commands is greater than 1, if yes, step 605 is executed, otherwise, step 609 is executed.
Step 605: and the server returns the selected preset number of action instructions to the client.
For example, with a preset number of 5 (not limited), the server may select the first 5 action instructions according to the number of keywords corresponding to each instruction, and return the selected instructions to the client.
Step 606: and the client sorts and displays the action instructions according to the incidence relation between the locally recorded action instructions and the use times.
Specifically, the client may sort the received preset number of action instructions according to the use times from large to small, and display the action instructions on a selection area associated with the avatar of the client, so that the user can select the action instructions.
Step 607: the client determines one action instruction selected by the user.
Step 608: the client updates the number of uses of the action instruction and proceeds to execute step 610.
Step 609: the server returns the action instruction to the client.
Step 610: and the client plays the corresponding action on the user virtual image of the client displayed on the chat interface according to the action instruction.
Specifically, the client acquires animation data corresponding to the action instruction from the server according to the action instruction, and plays a corresponding action on the user avatar of the client displayed on the chat interface according to the animation data.
Step 611: and (6) ending.
It can thus be seen that, in the embodiments of the application, in a chat scene the corresponding action can be played automatically on the corresponding avatar according to the content the user inputs on the chat interface. The content is converted into the avatar's actions and expressions, and the avatar serves as the carrier of the information the user wants to convey, expressing the user's emotion and sense of identification during chat more vividly and making chatting more interesting.
Based on the above embodiment, the following explains the avatar control method in the embodiment of the present application from the product implementation side, taking a chat scene with a single friend as an example.
1) In a chat scene, the chat interface is associated with a first client and a second client that correspond to different users, and the avatars of the first client and the second client are displayed on it. For example, fig. 7 is an interface effect diagram of avatar display in a chat scene in the embodiments of the application. As shown in fig. 7, if the user of the first client is the user themself and the user of the second client is a chat friend, both avatars are displayed in real time on the text input field of the chat interface, like the two figures shown in fig. 7: the boy represents the user and the girl represents the chat friend. When neither party is sending messages, the two avatars may be displayed in a set state, for example motionless.
2) Fig. 8 is an interface effect diagram of content input by the user on the chat interface in the embodiments of the application. As shown in fig. 8, when chatting, the user of the first client can input the content to be sent in the input field, for example the text "hi". When the user inputs and sends the text, the background server automatically identifies and extracts the keywords in it, judges whether any of them appears in the keyword table, and matches the corresponding action instruction.
3) Further, the server returns the matched action instruction to the client, and the client plays the corresponding action on the avatar. For example, fig. 9 is an interface effect diagram of an avatar playing an action in the embodiments of the application, showing the chat interface of the first client. After the user of the first client sends the text "hi", the user's avatar plays the corresponding action according to the action instruction returned by the server: as shown in fig. 9, the first client's avatar, i.e. the boy, makes the action and expression corresponding to "hi".
Furthermore, the specific position of the currently acting avatar can be displayed enlarged to enhance the performance. Since the avatars shown on the chat interface are usually small so as not to disturb the chat, enlarging one avatar may occlude the other. For example, fig. 9 shows only the boy's avatar while his action is played enlarged; since that avatar is currently acting, briefly occluding the girl's avatar has no great impact.
Further, based on the foregoing embodiments, the embodiments of the application also provide another product-side implementation. Some action templates for the avatar may be provided for the user to select from, and certain expressions or actions may be enlarged in close-up to enhance the visual detail effect. A chat scene with a single friend is again taken as the example, specifically:
1) Fig. 10 is an interface effect diagram of an avatar's action templates in the embodiments of the application. As shown in fig. 10, the avatars of both chat parties are displayed on the chat interface, and the avatar's action templates can be displayed on the input panel, further grouped into different categories such as "raise people" and "do".
2) Referring to FIG. 11, a schematic diagram of an interface for selecting an action template in the embodiment of the present application is shown, as shown in FIG. 11, from which the user selects a "blink" action, and the user can select and send the action by clicking with a finger.
3) Further, after the user selects an action and sends it, the terminal automatically zooms in on the close-up position of the corresponding avatar. For example, fig. 12 is an interface effect diagram of avatar magnification in the embodiments of the application: if the avatar corresponding to the user is the boy, the close-up better shows the full view of the action and expression. After zooming in, the avatar starts playing the corresponding action; fig. 13 is an interface schematic diagram of the avatar playing an action, in which, as shown, the boy's avatar performs the action and expression the user selected.
Therefore, setting action templates for the avatar and magnifying the played actions is simple to implement, facilitates the user's avatar-based chat interaction, and strengthens the user's sense of identification with the avatar.
Of course, the implementation effect for a group chat scene is similar. For example, when a user sends a message in the group chat, that user's avatar plays the corresponding action; or the user selects one friend in the group chat, and when the user sends the message, the action instruction may instruct the avatars of the user and the selected friend to perform corresponding actions, in which case both avatars are displayed on the chat interface at the same time and play the corresponding actions.
Based on the same inventive concept, the embodiment of the present application further provides an avatar control device, which may be, for example, the server in the foregoing embodiment, and the avatar control device may be a hardware structure, a software module, or a hardware structure plus a software module. Based on the above embodiments, referring to fig. 14, an avatar control apparatus in the embodiment of the present application specifically includes:
an obtaining module 1400, configured to obtain content sent by a chat interface of a first client, where the chat interface is associated with at least a first client and a second client;
the matching module 1410 is configured to match an action instruction corresponding to a preset keyword when the content includes the keyword;
the sending module 1420 is configured to return the action instruction to the first client and the second client, so that the first client and the second client respectively control the user avatar of the first client displayed on the chat interface to play an action corresponding to the action instruction.
Optionally, when it is determined that the content includes a preset keyword and an action instruction corresponding to the keyword is matched, the matching module 1410 is specifically configured to:
determining whether the content contains a keyword in a keyword table, and if the content contains the keyword in the keyword table, acquiring a corresponding keyword identifier in the keyword table, wherein the keyword table at least comprises each keyword and an associated keyword identifier;
matching action label identifications corresponding to the keyword identifications in a mapping relation table according to the keyword identifications, wherein the mapping relation table at least comprises the mapping relation between the keyword identifications and the action label identifications;
and matching in an action tag table according to the action tag identification to obtain a corresponding action instruction and an action tag name, wherein the action tag table at least comprises the action tag identification and the associated action instruction and action tag name.
Optionally, when the action instruction is returned to the first client and the second client, the sending module 1420 is specifically configured to:
if the matched action instructions are more than 1, selecting a preset number of action instructions from the action instructions according to a preset rule;
returning the selected action instructions with the preset number to the first client, so that the first client sorts the action instructions with the preset number according to the incidence relation between the action instructions and the use times recorded locally, and displaying the sorted action instructions on a selection area associated with the user avatar of the first client;
receiving an action instruction which is sent by the first client and selected by a user from the sequenced action instructions;
and sending the selected action instruction to the second client.
Based on the same inventive concept, an embodiment of the present application further provides another avatar control device, where the avatar control device may be, for example, a terminal or a first client or a second client in the foregoing embodiments, and the avatar control device may be a hardware structure, a software module, or a hardware structure plus a software module, as shown in fig. 15, the another avatar control device in the embodiment of the present application specifically includes:
a first receiving module 1500, configured to receive content input by a user on a chat interface;
a sending module 1510, configured to send the content to a server;
a second receiving module 1520, configured to receive an action instruction returned by the server, where the action instruction is matched by the server as the instruction corresponding to a preset keyword when the server determines that the content contains the keyword;
a playing module 1530, configured to play the action corresponding to the action instruction on the user's avatar displayed on the chat interface.
Optionally, if the second receiving module 1520 receives a preset number of action instructions returned by the server, a processing module 1540 is further configured to:
sort the preset number of action instructions according to the association between each locally recorded action instruction and its usage count, where the preset number is greater than 1;
and display a selection area associated with the user avatar, showing the sorted action instructions in the selection area so that the user can select one of them (a sorting sketch follows).
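A short sketch of this ordering step, assuming the usage counts are kept in a local map keyed by action instruction; the map contents and names are illustrative only.

```typescript
// Instructions the user picks most often are shown first in the selection
// area. `usageCounts` stands in for the locally recorded association
// between action instructions and their use counts.
const usageCounts: Map<string, number> = new Map([
  ["ACT_LAUGH", 12],
  ["ACT_WAVE", 3],
]);

function sortByUsage(instructions: string[]): string[] {
  return [...instructions].sort(
    (a, b) => (usageCounts.get(b) ?? 0) - (usageCounts.get(a) ?? 0),
  );
}

function onUserSelects(instruction: string): void {
  // bump the local count so future sorts reflect the new preference
  usageCounts.set(instruction, (usageCounts.get(instruction) ?? 0) + 1);
  // ...then play the action locally and report the choice to the server
}
```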
Optionally, when playing the action corresponding to the action instruction on the user avatar displayed on the chat interface, the playing module 1530 is specifically configured to:
acquire animation data corresponding to the action instruction, where the avatar is a skeletal model comprising pre-divided bone nodes, and the animation data at least comprises, for each action, the position data of each bone node in each time frame;
and play the corresponding action on the user avatar displayed on the chat interface according to the animation data (see the playback sketch below).
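The frame-by-frame playback described here could look like the following sketch, where applyBonePosition stands in for whatever call the client's renderer actually exposes; the data layout is an assumption consistent with the per-frame bone positions described above.

```typescript
// Minimal sketch of frame-by-frame skeletal playback.
interface Vec3 { x: number; y: number; z: number; }

interface AnimationData {
  // frames[t][boneNode] = position of that bone node in time frame t
  frames: Array<Record<string, Vec3>>;
  frameDurationMs: number;
}

// Placeholder for the client renderer's real update call.
function applyBonePosition(boneNode: string, pos: Vec3): void {
  console.log(`${boneNode} -> (${pos.x}, ${pos.y}, ${pos.z})`);
}

function playAnimation(anim: AnimationData): void {
  anim.frames.forEach((frame, t) => {
    setTimeout(() => {
      for (const [boneNode, pos] of Object.entries(frame)) {
        applyBonePosition(boneNode, pos);
      }
    }, t * anim.frameDurationMs); // schedule each time frame in sequence
  });
}
```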
Optionally, when the action corresponding to the action instruction is played on the user's avatar displayed on the chat interface, the apparatus further includes:
a magnifying module 1550, configured to magnify a specific part of the user avatar (a brief sketch follows).
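A minimal sketch of such magnification, assuming the avatar exposes named parts with a scale factor; the part names and the 1.5 factor are arbitrary assumptions.

```typescript
// Enlarging one named part of the avatar, e.g. the head, while an action
// plays so the expression reads clearly at chat-bubble size.
interface AvatarPart { name: string; scale: number; }

function magnifyPart(parts: AvatarPart[], target: string, factor = 1.5): void {
  const part = parts.find((p) => p.name === target);
  if (part) part.scale *= factor; // the renderer reads the enlarged scale
}

// e.g. enlarge the head while a "laugh" action plays
magnifyPart([{ name: "head", scale: 1 }], "head");
```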
The division of modules in the embodiments of the present application is schematic and represents only one division of logical functions; other divisions are possible in actual implementation. In addition, the functional modules in the embodiments of the present application may be integrated into one processor, may each exist physically alone, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Based on the above embodiments, fig. 16 is a schematic structural diagram of an electronic device in an embodiment of the present application.
An embodiment of the present application provides an electronic device, which may include a processor 1610 (CPU), a memory 1620, an input device 1630, an output device 1640, and the like. The input device 1630 may include a keyboard, a mouse, a touch screen, and the like, and the output device 1640 may include a display device such as a liquid crystal display (LCD) or a cathode ray tube (CRT).
The memory 1620 may include read-only memory (ROM) and random access memory (RAM), and provides the processor 1610 with the program instructions and data stored therein. In the embodiments of the present application, the memory 1620 may be used to store the program of any one of the avatar control methods in the embodiments of the present application.
The processor 1610 is configured to call the program instructions stored in the memory 1620 and execute any one of the avatar control methods in the embodiments of the present application according to the obtained program instructions.
The electronic device may be, for example, the server 300 in fig. 2, or the terminal corresponding to the first client 100 or the second client 200 in fig. 2.
Based on the above embodiments, in the embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the avatar control method in any of the above method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass such modifications and variations.

Claims (10)

1. An avatar control method, comprising:
acquiring content sent through a chat interface of a first client, wherein the chat interface is associated with at least the first client and a second client;
when the content is determined to contain a preset keyword, matching the action instruction corresponding to the keyword;
and returning the action instruction to the first client and the second client, so that the first client and the second client each control the user avatar of the first client displayed on the chat interface to play the action corresponding to the action instruction.
2. The method according to claim 1, wherein when it is determined that the content includes a preset keyword, matching an action instruction corresponding to the keyword specifically includes:
determining whether the content contains a keyword in a keyword table, and if so, acquiring the corresponding keyword identifier from the keyword table, wherein the keyword table at least comprises each keyword and its associated keyword identifier;
matching, according to the keyword identifier, the action tag identifier corresponding to the keyword identifier in a mapping relation table, wherein the mapping relation table at least comprises the mapping relation between keyword identifiers and action tag identifiers;
and matching in an action tag table according to the action tag identifier to obtain the corresponding action instruction and action tag name, wherein the action tag table at least comprises each action tag identifier and its associated action instruction and action tag name.
3. The method of claim 1, wherein returning the action instruction to the first client and the second client specifically comprises:
if more than one action instruction is matched, selecting a preset number of action instructions from the matched action instructions according to a preset rule;
returning the selected preset number of action instructions to the first client, so that the first client sorts them according to the association between each action instruction and its locally recorded usage count, and displays the sorted action instructions in a selection area associated with the user avatar of the first client;
receiving the action instruction that the user selected from the sorted action instructions, as sent by the first client;
and sending the selected action instruction to the second client.
4. An avatar control method, comprising:
receiving content input by a user on a chat interface, and sending the content to a server;
receiving an action instruction returned by the server, wherein the action instruction is matched by the server as the instruction corresponding to a preset keyword when the server determines that the content contains the keyword;
and playing the action corresponding to the action instruction on the user's avatar displayed on the chat interface.
5. The method of claim 4, further comprising:
if a preset number of action instructions returned by the server are received, sorting the preset number of action instructions according to the association between each locally recorded action instruction and its usage count, wherein the preset number is greater than 1;
and displaying a selection area associated with the user avatar, and showing the sorted action instructions in the selection area so that the user can select from them.
6. The method according to claim 4 or 5, wherein playing the action corresponding to the action instruction on the user's avatar shown on the chat interface specifically comprises:
acquiring animation data corresponding to the action instruction, wherein the avatar is a skeletal model comprising pre-divided bone nodes, and the animation data at least comprises, for each action, the position data of each bone node in each time frame;
and playing the corresponding action on the user avatar displayed on the chat interface according to the animation data.
7. The method of claim 4, wherein when the action corresponding to the action instruction is played on the user's avatar presented on the chat interface, the method further comprises:
magnifying a specific part of the user avatar.
8. An avatar control apparatus, comprising:
an obtaining module, configured to obtain content sent through a chat interface of a first client, wherein the chat interface is associated with at least the first client and a second client;
a matching module, configured to match the action instruction corresponding to a preset keyword when the content is determined to contain the keyword;
and a sending module, configured to return the action instruction to the first client and the second client, so that the first client and the second client each control the user avatar of the first client displayed on the chat interface to play the action corresponding to the action instruction.
9. An avatar control apparatus, comprising:
a first receiving module, configured to receive content input by a user on a chat interface;
a sending module, configured to send the content to a server;
a second receiving module, configured to receive an action instruction returned by the server, wherein the action instruction is matched by the server as the instruction corresponding to a preset keyword when the server determines that the content contains the keyword;
and a playing module, configured to play the action corresponding to the action instruction on the user's avatar displayed on the chat interface.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of any of claims 1-3 or 4-7 are implemented when the program is executed by the processor.
CN201910562449.XA 2019-06-26 2019-06-26 Virtual image control method and device and electronic equipment Pending CN112152901A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910562449.XA 2019-06-26 2019-06-26 Virtual image control method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112152901A 2020-12-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20201229