CN113569031A - Information interaction method and device, electronic equipment and storage medium - Google Patents

Information interaction method and device, electronic equipment and storage medium

Info

Publication number
CN113569031A
Authority
CN
China
Prior art keywords
information
target
user
emotion
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110874385.4A
Other languages
Chinese (zh)
Inventor
郭宏磊
虞启骋
刘孝章
王文举
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110874385.4A priority Critical patent/CN113569031A/en
Publication of CN113569031A publication Critical patent/CN113569031A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G06F40/35 Discourse or dialogue representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0281 Customer communication at a business location, e.g. providing product or service information, consulting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Game Theory and Decision Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to an information interaction method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring an operation instruction of a user, and acquiring target information based on the operation instruction; determining emotion information and reply information corresponding to the target information, and determining target expression information corresponding to the emotion information; and displaying a preset object and the reply information, where the preset object has the capability of updating expression information, and the expression information of the preset object is updated to the target expression information. The method can improve the efficiency of information interaction and the user experience.

Description

Information interaction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of interaction technologies, and in particular, to an information interaction method and apparatus, an electronic device, and a storage medium.
Background
Currently, when a user performs information interaction using an electronic device, the interaction is usually handled manually by human agents.
For example, after purchasing a commodity on an e-commerce platform, a user who needs to inquire about matters related to the commodity can use an electronic device such as a mobile phone to communicate with the platform's human customer service. Specifically, the user opens a chat page with the commodity's customer service and sends text messages to make inquiries, and the customer service replies with text messages as well.
However, the efficiency of interacting with users manually is low, and when many users need interaction at once, it is often difficult to respond in time, resulting in a poor user experience.
Disclosure of Invention
The disclosure provides an information interaction method and apparatus, an electronic device, and a storage medium, so as to at least solve the problem of poor user experience in the related art. The technical solution of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided an information interaction method, including:
acquiring an operation instruction of a user, and acquiring target information based on the operation instruction;
determining emotion information and reply information corresponding to the target information, and determining target expression information corresponding to the emotion information;
displaying a preset object and the reply information; the preset object has the capability of updating expression information, and the expression information of the preset object is updated to the target expression information.
Optionally, the determining of the emotion information corresponding to the target information includes:
performing emotion recognition on the target information, and determining the recognized emotion as the emotion information corresponding to the target information.
Optionally, the determining reply information corresponding to the target information includes:
determining a reply information set corresponding to the target information, where the reply information set includes at least one piece of reply information for replying to the target information, and different pieces of reply information correspond to different emotion information;
determining, in the determined reply information set, the reply information corresponding to the determined emotion information as the reply information corresponding to the target information.
Optionally, before acquiring the operation instruction of the user, the method further includes: monitoring an operation instruction of the user.
Optionally, the monitoring of the operation instruction of the user includes:
monitoring an operation instruction of the user while a preset interactive interface is displayed.
Optionally, the preset object is generated according to a preset image, and the preset image includes expression information; or the preset object is determined from a preset object set, where any object in the preset object set includes expression information.
According to a second aspect of the embodiments of the present disclosure, there is provided an information interaction apparatus, including:
an acquisition unit configured to perform: acquiring an operation instruction of a user, and acquiring target information based on the operation instruction;
a determination unit configured to perform: determining emotion information and reply information corresponding to the target information, and determining target expression information corresponding to the emotion information;
a presentation unit configured to perform: displaying a preset object and the reply information; the preset object has the capability of updating expression information, and the expression information of the preset object is updated to the target expression information.
Optionally, the determining unit includes:
an emotion recognition subunit configured to perform: performing emotion recognition on the target information, and determining the recognized emotion as the emotion information corresponding to the target information.
Optionally, the determining unit includes:
a set determination subunit configured to perform: determining a reply information set corresponding to the target information, where the reply information set includes at least one piece of reply information for replying to the target information, and different pieces of reply information correspond to different emotion information;
an information determination subunit configured to perform: determining, in the determined reply information set, the reply information corresponding to the determined emotion information as the reply information corresponding to the target information.
Optionally, the apparatus further comprises: a monitoring unit configured to perform: monitoring an operation instruction of the user.
Optionally, the monitoring unit includes:
a condition monitoring subunit configured to perform: monitoring an operation instruction of the user while a preset interactive interface is displayed.
Optionally, the preset object is generated according to a preset image, and the preset image includes expression information; or the preset object is determined from a preset object set, where any object in the preset object set includes expression information.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the above information interaction method.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions of the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the above-mentioned information interaction method.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer programs/instructions which, when executed by a processor, implement the above-mentioned information interaction method.
The technical solution provided by the embodiments of the disclosure brings at least the following beneficial effects. By using the preset correspondence between reply information and target information, the user can be answered quickly and automatically, so the efficiency of information interaction is improved, the user is spared from waiting, and the user experience is improved. In addition, the emotion information corresponding to the target information can be determined according to the preset correspondence between emotion information and target information, and the expression information corresponding to that emotion information can be determined according to the preset correspondence between emotion information and expression information, so that the expression information of the preset object can be updated and displayed. This makes the information interaction more engaging, helps ease the user's emotion, and improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flow chart illustrating a method of information interaction in accordance with an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating an information interaction interface in accordance with an illustrative embodiment;
FIG. 3 is a block diagram illustrating an information interaction device, according to an example embodiment;
FIG. 4 is a block diagram illustrating an information interaction system in accordance with an exemplary embodiment;
FIG. 5 is a schematic block diagram illustrating an electronic device in accordance with an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Currently, when a user performs information interaction using an electronic device, the interaction is usually handled manually by human agents.
For example, after purchasing a commodity on an e-commerce platform, a user who needs to inquire about matters related to the commodity can use an electronic device such as a mobile phone to communicate with the platform's human customer service. Specifically, the user opens a chat page with the commodity's customer service and sends text messages to make inquiries, and the customer service replies with text messages as well.
However, the efficiency of interacting with users manually is low, and when many users need interaction at once, it is often difficult to respond in time, resulting in a poor user experience.
For example, during a flash sale on an e-commerce platform or at the peak of a live broadcast, many users inquire about commodity information at the same time, and human customer service often cannot cope, so many users may have to wait for a manual response, leading to a poor experience.
In order to solve the above problem, the present disclosure provides an information interaction method.
By using the preset correspondence between reply information and target information, the user can be answered quickly and automatically, so the efficiency of information interaction is improved, the user is spared from waiting, and the user experience is improved. In addition, the emotion information corresponding to the target information can be determined according to the preset correspondence between emotion information and target information, and the expression information corresponding to that emotion information can be determined according to the preset correspondence between emotion information and expression information, so that the expression information of the preset object can be updated and displayed. This makes the information interaction more engaging, helps ease the user's emotion, and improves the user experience.
In addition, in the information interaction method provided by the disclosure, the device performing information interaction with the user may itself be the electronic device the user is using.
For example, a user may interact with a chat robot on a mobile phone; the chat robot may interact with the user based on local information, or it may exchange information with other devices (e.g., a server) in order to interact with the user.
Specifically, when performing information interaction with the user, a keyword matching the user's information may optionally be determined first, and the reply information corresponding to the matched keyword may then be used for the reply.
For example, after the keyword "good morning" is matched in the user's message "good morning", the reply information "good morning, respected user" corresponding to that keyword can be used for the reply.
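As a minimal sketch of this keyword lookup, assuming a plain substring test and an illustrative keyword table (neither is specified by the disclosure):

```python
# Minimal sketch of keyword-based reply matching. The keyword table,
# replies, and substring test are illustrative assumptions.
from typing import Optional

REPLY_BY_KEYWORD = {
    "good morning": "Good morning, respected user.",
    "commodity position": "Hello, the commodity is located at xxx.",
}

def match_reply(user_message: str) -> Optional[str]:
    """Return the reply whose keyword appears in the user's message."""
    lowered = user_message.lower()
    for keyword, reply in REPLY_BY_KEYWORD.items():
        if keyword in lowered:
            return reply
    return None  # no keyword matched; other handling would take over
```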
Furthermore, to improve the user's experience during information interaction and to make the interaction more engaging and more vividly presented, the user's current emotion can be recognized from the information the user sends, and a corresponding expression can then be displayed to the user to help ease that emotion.
For example, when the user's current emotion is recognized as "sad" based on the user's message "a bit unhappy", the electronic device may display "smile" expression information, specifically an avatar, image, video, 2D model, or 3D model with a "smile" expression, so as to ease the user's "sad" emotion.
Meanwhile, displaying expressions that correspond to the user's emotion provides a more vivid service during information interaction and improves the user experience.
In addition, the device may display the interaction information in other ways. For example, a text reply may be presented as speech: a speech synthesis module in the device can play voice information to the user, or a voice message recorded in advance for the text reply can be played.
Fig. 1 is a flowchart illustrating an information interaction method according to an exemplary embodiment, and as shown in fig. 1, the information interaction method may be applied to an electronic device. The method may include the following steps.
S101: acquiring an operation instruction of a user, and acquiring target information based on the acquired operation instruction.
The target information may be generated based on a user operation and may be the information the user sends during information interaction. This method does not limit the form of the target information, which may specifically be text, an image, speech, and so on. The user's operation instruction may specifically be an instruction to input text, send an image, record speech, or the like, and the target information can be obtained according to that instruction.
For example, for an instruction of the user to input text, the text entered by the user may be acquired as the target information according to the instruction.
S102: emotion information and reply information corresponding to the target information are determined, and target expression information corresponding to the emotion information is determined.
S103: displaying a preset object and the determined reply information; the preset object has the capability of updating the expression information, and the expression information of the current preset object is updated to the determined target expression information.
For the target information in S101, alternatively, the target information may be any information generated based on a user operation. For convenience of description, this information is referred to as target information.
This method does not limit the form of the target information, which may specifically be text, an image, audio, or video.
In an optional embodiment, before performing S101, the method may further include the step of monitoring an operation instruction of the user.
In the information interaction process, it is usually difficult to predict when the user will send information for interaction, so the user's operation instruction may optionally be monitored in order to obtain the target information in real time and improve the efficiency and speed of the interaction.
For example, once the generation of a user operation instruction is detected, the instruction can be acquired and the target information obtained from it; likewise, once the receipt of a user operation instruction is detected, the target information can be obtained from the acquired instruction.
Further, the monitoring may be continuous, or, to save computing resources, it may be performed only under specific conditions.
In an alternative embodiment, the operation instruction of the user can be monitored under the condition that the preset interactive interface is displayed.
The preset interactive interface may be an interface for information interaction, such as a customer service chat interface or a live broadcast interaction interface. The user usually operates on such an interface to generate the operation instruction that produces the target information for interaction. Specifically, the target information may be sent to another device for the interaction, or the device may handle the interaction for the target information locally.
Therefore, the user's operation instruction can be monitored while the preset interactive interface is displayed, which saves computing resources.
In other alternative embodiments, the user's operation instruction may be monitored when a process for information interaction is invoked, or when the user clicks on the information interaction function.
As for the user operation instruction obtained in S101, it may optionally be obtained from another device, specifically a user operation instruction sent by that device; it may also be obtained locally, in which case, after the instruction is generated based on the user's operation, the target information can be obtained directly from the local instruction.
In an alternative embodiment, the first device receives a user operation instruction sent by the second device; and the user operation instruction may be generated based on an operation of the second device by the user.
As for the emotion information in S102, in an alternative embodiment, emotion recognition may be performed on the target information, and the recognized emotion may be determined as the emotion information corresponding to the target information. This improves the accuracy of the recognized emotion information, which facilitates the subsequent execution of S103.
Specifically, when the target information is in text form, emotion recognition may be performed using a language model or based on a correspondence set between keywords and emotion information.
Of course, the language model may be trained in advance, and the corresponding relation set of the keywords and the emotion information may also be deployed in advance.
For example, a correspondence set of keywords and emotion information is deployed in advance; the keywords corresponding to "happy" emotion information may include "happy", and so on. After the keyword "happy" is recognized in the information sent by the user, the emotion information corresponding to that keyword can be determined to be "happy".
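A sketch of such a correspondence set; the keyword lists per emotion and the "calm" default are illustrative assumptions, and later sketches in this description reuse this helper:

```python
# Sketch of a keyword-to-emotion correspondence set. The keyword
# lists and the "calm" default are illustrative assumptions.
EMOTION_KEYWORDS = {
    "happy": ["happy", "great", "good"],
    "angry": ["why", "ridiculous", "refund"],
    "sad":   ["unhappy", "disappointed"],
}

def recognize_emotion(text: str, default: str = "calm") -> str:
    """Return the first emotion whose keyword occurs in the text."""
    lowered = text.lower()
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(k in lowered for k in keywords):
            return emotion
    return default  # no emotional keyword recognized
```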
When the target information is in another form, the text information contained in it may be extracted first, and emotion recognition performed afterwards.
For example, character recognition is performed on target information in image form to recognize the text in the image, and speech-to-text conversion is performed on target information in audio form to determine the text in the audio.
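As a sketch of this extraction step, one possible realization uses the pytesseract OCR package and the SpeechRecognition package; these library choices are assumptions, as the disclosure names no specific tools:

```python
# Sketch: extract text from image or audio target information before
# emotion recognition. The library choices are assumptions.
import pytesseract                    # OCR engine wrapper
import speech_recognition as sr       # speech-to-text wrapper
from PIL import Image

def text_from_image(path: str) -> str:
    """Character recognition on target information in image form."""
    return pytesseract.image_to_string(Image.open(path))

def text_from_audio(path: str) -> str:
    """Speech-to-text conversion on target information in audio form."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(path) as source:
        audio = recognizer.record(source)
    return recognizer.recognize_google(audio)  # uses an online service
```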
It should be noted that, in an alternative embodiment, emotion recognition may fail to recognize any emotion information in the target information, for example because a text message is too short or an image contains no text. In this case, a preset emotion such as "calm" may optionally be determined automatically; the emotion information may be determined from the surrounding context of the interaction, for example by reusing the most recently recognized emotion information; or the method flow may simply end without recognizing emotion information.
In another alternative embodiment, the emotion information may be determined jointly from the target information and the preceding part of the interaction.
Since emotion usually changes continuously, referring to the earlier part of the interaction when determining the emotion information can improve its accuracy. Specifically, the target information and its preceding context may be input into the language model together; alternatively, when the emotion determined from the target information differs greatly from the emotion determined from the preceding context, it may be adjusted to an emotion that differs less from the latter.
For example, when the emotion information determined from the preceding context is "sad" but the emotion determined from the target information itself is "happy", the emotion information may instead be determined as "calm", which lies between "sad" and "happy".
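One way to realize this adjustment is to order the emotions on a scale and pull a large jump one step back toward the preceding emotion; the scale and the one-step rule below are assumptions for illustration:

```python
# Sketch of smoothing the newly recognized emotion against the
# preceding one. The ordered scale and one-step rule are assumptions.
EMOTION_SCALE = ["sad", "calm", "happy"]

def smooth_emotion(current: str, preceding: str) -> str:
    """Move a far-jumping emotion one step toward the preceding one."""
    cur = EMOTION_SCALE.index(current)
    prev = EMOTION_SCALE.index(preceding)
    if abs(cur - prev) > 1:
        return EMOTION_SCALE[cur - 1 if cur > prev else cur + 1]
    return current

# e.g. smooth_emotion("happy", "sad") -> "calm", as in the example above.
```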
As for the reply information in S102, in an alternative embodiment, the corresponding reply information may be determined directly from the keywords contained in the target information, according to a correspondence set of keywords and reply information.
For example, a correspondence set of keywords and reply information is deployed in advance and includes a correspondence between the keyword "commodity position" and the reply information "Hello, the commodity is located at xxx". When the keyword "commodity position" is determined to be contained in the target information, the reply information corresponding to that keyword can be determined.
In another alternative embodiment, in order to better fit the emotion information, a reply information set corresponding to the target information may be determined first; the set may include at least one piece of reply information for replying to the target information.
Optionally, one or more pieces of reply information may correspond to the same target information. Different pieces of reply information may correspond to different emotion information; specifically, different tones and wordings may be used in response to different emotions.
For example, for target information asking about the position of a commodity, the reply information set may include: "Hello, the commodity is located at xxx" for dealing with "calm" emotion information; "Sorry about that. The commodity is located at xxx, and I will ask a staff member to help you solve the problem right away" for dealing with "angry" emotion information; and "Happy to serve you! The commodity is located at xxx. Is there anything else I can help with?" for dealing with "happy" emotion information.
Therefore, in the determined reply information set, the reply information corresponding to the emotion information determined in S102 may optionally be determined as the reply information corresponding to the target information.
The different pieces of reply information in the set, and their correspondences with emotion information, may be deployed in advance.
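A sketch of such an emotion-keyed reply set, mirroring the commodity-position example above; the topic key, reply texts, and "calm" fallback are illustrative assumptions:

```python
# Sketch of a reply information set keyed by emotion for one query.
# The topic key, reply texts, and fallback rule are assumptions.
REPLY_SETS = {
    "commodity position": {
        "calm":  "Hello, the commodity is located at xxx.",
        "angry": "Sorry about that. The commodity is located at xxx, "
                 "and I will ask a staff member to help you right away.",
        "happy": "Happy to serve you! The commodity is located at xxx.",
    },
}

def select_reply(topic: str, emotion: str) -> str:
    """Pick the reply matching the emotion, falling back to 'calm'."""
    reply_set = REPLY_SETS[topic]
    return reply_set.get(emotion, reply_set["calm"])
```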
By taking the user's emotion information into account during information interaction, this embodiment can respond differently to different emotions, fit the user's emotion better, ease it more effectively, provide a better interaction service, and improve the user experience.
As for the target expression information in S102, in an optional embodiment, a correspondence set of emotion information and expression information may be deployed in advance, and the target expression information corresponding to the emotion information may be determined according to this set.
The expression information corresponding to the emotion information can be used for helping to relieve the emotion of the user and improving the interestingness of information interaction.
For example, the corresponding relationship set of emotion information and expression information may include: the correspondence between the "happy" emotion and the "smile" expression, the correspondence between the "sad" emotion and the "heartache" expression, and the correspondence between the "angry" emotion and the "apology" expression.
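A sketch of this correspondence set, using the three pairs listed above; the "smile" default is an assumption:

```python
# Sketch of the emotion-to-expression correspondence set above.
EXPRESSION_BY_EMOTION = {
    "happy": "smile",
    "sad":   "heartache",
    "angry": "apology",
}

def target_expression(emotion: str, default: str = "smile") -> str:
    """Map recognized emotion information to target expression information."""
    return EXPRESSION_BY_EMOTION.get(emotion, default)
```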
According to this embodiment, the target expression information can be determined quickly from the correspondence set, improving the efficiency of information interaction.
In another alternative embodiment, the target expression information may be expression information generated and stored locally. For example, when a user takes a selfie, the expression information in the selfie image may be determined and stored locally, and the emotion information corresponding to that expression information may further be determined. Alternatively, the service party providing the information interaction service may collect portraits in advance, determine expression information from them, and store it locally on the service party's device.
This method does not limit how the target expression information is specifically determined, so it can adapt to expression information from various sources and update the expression information of the current preset object to the target expression information.
As for the preset object in S103, the preset object may include expression information and has the capability of updating it, so that the preset object can be displayed with different expression information. It should be noted that the preset object may show only a single piece of expression information at a time.
For ease of understanding, some examples of preset objects are given below. The preset object may specifically be a still image, a moving image, a video, a 2D model, or a 3D model.
As a specific example, the preset object may be an avatar in the interface that contains expression information; the avatar may be a static or dynamic image. The preset object may also be a 2D model containing expression information.
In a specific example, a user can interact with a 3D model displayed by the device: the user records speech or sends text, and the 3D model responds by updating its expression information accordingly, which makes the interaction more vivid and improves the user experience.
In an alternative embodiment, the preset object may be generated in real time, specifically, the preset object may be generated according to a preset image, and the preset image may include expression information.
Optionally, the preset image may be a portrait containing expression information, for example a selfie of the user or another portrait image. Specifically, the preset object may be generated from the preset image using a filter or an image algorithm; for example, a cartoon-style portrait or a 3D character model may be generated from a person's image as the preset object.
Of course, the preset image may also be an animal image containing expression information. The preset image may be an image the user likes, is familiar with, or has selected, and that contains facial expression information, so as to improve the user experience.
In another alternative embodiment, the preset object may be acquired directly rather than generated in real time. Specifically, it may be determined from a preset object set, where any object in the set includes expression information.
The preset object set may include preset objects of various forms, for example 2D models, 3D models, and the like. The set can be prepared in advance, called up when a preset object needs to be displayed, and the preset object selected from it by the user.
In this embodiment, displaying the preset object makes the information interaction more engaging, and a suitable preset object can be determined conveniently from the preset image or the preset object set, improving the user experience.
In a specific embodiment, the user may optionally customize the preset object, specifically by selecting from a preset object set deployed in advance. The set may include multiple preset objects, of different forms and with different appearances. The user may also generate a preset object of a preset style by capturing an image containing expression information, such as an image of the user or of another person or animal.
Since the user can customize the preset object, this embodiment can further meet user needs and improve the user experience.
In another specific embodiment, when a user watching a streamer's live broadcast room needs information interaction, the preset object may be generated from the streamer's current portrait: a stylized portrait may be generated from it as the preset object, or the streamer's portrait may be used directly as the preset object, making the interaction more interesting. Using a portrait familiar to the user as the preset object for display and interaction can improve the user experience and the user's satisfaction with the interaction.
In addition, the step of generating the preset object may be performed before the preset object is presented, or may be performed before the target information is acquired.
In an optional embodiment, before the preset object is displayed, its expression information may be updated according to the target expression information determined in S102, so that the expression information of the preset object becomes the target expression information.
Optionally, the update may directly replace the current expression information with the target expression information, or it may adjust the current expression information according to the target expression information.
For example, given the target expression information "laughing", the facial features in the current portrait may be directly replaced with laughing facial features, or the current facial features may be adjusted, specifically by stretching, compressing, and the like, into a "laughing" expression.
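The two update strategies can be sketched as follows; the PresetObject class and the intensity field standing in for the stretch/compress adjustment are assumptions for illustration:

```python
# Sketch of updating a preset object's expression information: either
# replace it outright, or adjust gradually toward the target. The
# PresetObject class and intensity field are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PresetObject:
    expression: str          # the single expression shown at a time
    intensity: float = 1.0   # 0.0..1.0 stand-in for stretch/compress

def update_expression(obj: PresetObject, target: str,
                      replace: bool = True, step: float = 0.5) -> None:
    if replace:
        obj.expression, obj.intensity = target, 1.0  # direct replacement
    else:
        obj.expression = target
        obj.intensity = min(1.0, obj.intensity + step)  # gradual adjustment
```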
As for the step of displaying the reply information in S103, this method does not limit the specific display manner.
For example, text reply information may be displayed directly as text, or played by the device using a preset sound program. Reply information in speech form may be played directly.
This embodiment enriches the display forms of the reply information, makes the information interaction more engaging, and improves the user experience.
In an alternative embodiment, the preset object and the reply information may be presented in a preset interactive interface.
In an optional embodiment, besides updating the expression information of the preset object, the action information of the preset object may also be updated when the preset object includes action information, which makes the interaction more engaging and further improves the user experience.
For example, a 3D character model used as the preset object may include a limb model, so its current action information can be updated along with the expression information; specifically, a bowing or crying action may be applied at the same time as the "apology" expression information.
As for S102, in an alternative embodiment, a single device may determine the emotion information, reply information, and target expression information locally.
In another alternative embodiment, multiple devices may cooperate. Specifically, after obtaining the target information, one device sends it to other devices, and those devices determine the emotion information, reply information, and target expression information.
For convenience of description, a single device that acquires target information is referred to as a target device. The other device may return the determined emotion information, reply information, and target expression information to the target device.
In this embodiment, the target device does not need to determine the emotion information, reply information, and target expression information itself, which saves computing and storage resources.
As for S103, it should be noted that, in an alternative embodiment, the preset object may be set locally by the target device: the target device may, based on a user operation, select and determine the preset object from the preset object set, or generate it from a preset image selected by the user.
When displaying the preset object, the target device may update its expression information to the target expression information. The target expression information may be received by the target device from other devices or generated locally by the target device.
In a specific embodiment, when the user communicates with the customer service in the live broadcast room, the user can use a customer service image (i.e. a preset object) customized by the user to communicate. The facial expression information of the customer service image can be received by the user side from the live broadcast side.
In another specific embodiment, when the user interacts with the chat robot, the user can use the user-defined character image (i.e. the preset object) for interaction. The facial expression information of the character image may be locally generated at the user terminal.
In another alternative embodiment, the preset object may be set by another device. Specifically, the other device may select and determine the preset object from the preset object set based on a service person's operation, or generate it from a preset image based on a service person's operation.
Before the target device displays the preset object, the other device may send the preset object to the target device so as to be displayed. The target device may update the expression information of the preset object to the target expression information. The target expression information may be received by the target device from other devices or may be generated locally by the target device.
In a specific embodiment, when the user communicates with the customer service in the live broadcast room, the user can use the customer service image (i.e. the preset object) customized by the service personnel in the live broadcast room to communicate. The facial expression information of the customer service image can be received by the user side from the live broadcast side.
In another specific embodiment, when interacting with a chat robot, the user may use a fixed character image (i.e., a preset object) carried by the chat robot. The facial expression information of the character image may be generated locally at the user terminal.
This method can answer the user quickly and automatically by using the preset correspondence between reply information and target information, so the efficiency of information interaction is improved, the user is spared from waiting, and the user experience is improved. The emotion information corresponding to the target information can be determined according to the preset correspondence between emotion information and target information, and the expression information corresponding to that emotion information according to the preset correspondence between emotion information and expression information, so that the expression information of the preset object can be updated and displayed, making the interaction more engaging, easing the user's emotion, and improving the user experience.
Further, the user experience may be improved in other ways as well; for example, based on the emotion information, a piece of reply information that can ease that emotion may be chosen from among several pieces of reply information for the same target information.
With respect to the above method flows, in an alternative embodiment, the method flows may be performed by a single electronic device and interact with other devices during the execution of the steps.
Specifically: the target device acquires an operation instruction of the user and obtains target information based on it. After sending the target information to the other device, the target device may receive the emotion information and reply information corresponding to the target information determined by the other device, together with the determined target expression information corresponding to the emotion information. The target device then displays a preset object and the reply information, where the preset object has the capability of updating expression information and the expression information of the current preset object is updated to the target expression information.
In another alternative embodiment, it may be performed by a single electronic device and not interact with other devices during the execution of the steps.
Specifically: the target device acquires an operation instruction of the user and obtains target information based on it. The target device may locally determine the emotion information and reply information corresponding to the target information, and the target expression information corresponding to the emotion information. The target device then displays a preset object and the reply information, where the preset object has the capability of updating expression information and the expression information of the current preset object is updated to the target expression information.
Of course, the method may also be implemented by two electronic devices, a first device and a second device. Specifically: the first device receives target information sent by the second device, the target information being generated based on the user's operation on the second device; the first device determines emotion information and reply information corresponding to the target information, and determines target expression information corresponding to the emotion information; the first device sends the reply information and the target expression information to the second device; and the second device displays the preset object and the reply information, where the preset object has the capability of updating expression information and the expression information of the current preset object is updated to the target expression information. A minimal sketch of this split is given below.
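The sketch reuses the helpers from the earlier code sketches; the payload format, the hard-coded topic key, and the print stand-in for display are assumptions:

```python
# Sketch of the two-device flow: the first device analyzes, the second
# device displays. Reuses recognize_emotion, select_reply,
# target_expression, update_expression and PresetObject from the
# sketches above; the payload format is an assumption.
def first_device_handle(target_info: str) -> dict:
    emotion = recognize_emotion(target_info)
    reply = select_reply("commodity position", emotion)  # topic matching elided
    return {"reply": reply, "expression": target_expression(emotion)}

def second_device_render(preset: PresetObject, payload: dict) -> None:
    update_expression(preset, payload["expression"])
    print(payload["reply"])  # stand-in for showing it on the interface
```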
For the convenience of understanding, the present disclosure also provides an application example.
As shown in FIG. 2, FIG. 2 is a schematic diagram illustrating an information interaction interface according to an exemplary embodiment. It shows the interaction at three moments (t1, t2, and t3).
The scene of this embodiment is customer service interaction in a live broadcast room: the user interacts with the live room's customer service through a mobile phone to ask questions about the live room, for example about the commodities currently on sale.
At time t1, the user asks "How do I purchase the product?". After recognizing the emotion information as "calm", the device may display an avatar containing a "smile" expression and display the reply information "Please click link xxx on the interface".
At time t2, the user continues with "Why is this commodity so expensive?". After recognizing the emotion information as "angry" from the keyword "why", the device may display an avatar containing a "perplexed" expression and display the reply information "It's all right. Would you like to look at other similar commodities?".
At time t3, the user sends the message "Purchased; good quality". After recognizing the emotion information as "happy" from the keyword "good", the device may display an avatar containing a "laugh" expression and display the reply information "Very happy to serve you".
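Tying the earlier sketches together, the t1 to t3 dialogue above can be replayed as follows; the messages and mappings follow the illustrative tables defined earlier, so the expressions differ slightly from the figure:

```python
# Illustrative replay of the t1-t3 dialogue using the earlier sketches.
avatar = PresetObject(expression="smile")
for message in ("How do I purchase the product?",      # t1 -> calm
                "Why is this commodity so expensive?",  # t2 -> angry
                "Purchased; good quality!"):            # t3 -> happy
    emotion = recognize_emotion(message)
    update_expression(avatar, target_expression(emotion))
    print(f"{message!r}: emotion={emotion}, expression={avatar.expression}")
```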
Obviously, displaying different expressions according to the emotion information can make the information interaction more engaging, improve the user experience, ease the user's bad mood, and avoid conflict.
FIG. 3 is a block diagram illustrating an information interaction device, according to an example embodiment. Referring to fig. 3, the apparatus includes an acquisition unit 201, a determination unit 202, and a presentation unit 203.
An acquisition unit 201 configured to perform: acquiring an operation instruction of a user, and acquiring target information based on the acquired operation instruction.
A determining unit 202 configured to perform: emotion information and reply information corresponding to the target information are determined, and target expression information corresponding to the emotion information is determined.
A presentation unit 203 configured to perform: displaying a preset object and reply information; the preset object has the capability of updating the expression information, and the expression information of the current preset object is updated to be the target expression information.
Optionally, the determining unit 202 includes: an emotion recognition subunit 202a configured to perform: performing emotion recognition on the target information, and determining the recognized emotion as the emotion information corresponding to the target information.
Optionally, the determining unit 202 includes: a set determination subunit 202b configured to perform: determining a reply information set corresponding to the target information, where the reply information set includes at least one piece of reply information for replying to the target information, and different pieces of reply information correspond to different emotion information.
An information determination subunit 202c configured to perform: determining, in the determined reply information set, the reply information corresponding to the determined emotion information as the reply information corresponding to the target information.
Optionally, the apparatus further comprises: a monitoring unit 204 configured to perform: monitoring an operation instruction of the user.
Optionally, the monitoring unit 204 includes: a condition monitoring subunit 204a configured to perform: monitoring an operation instruction of the user while a preset interactive interface is displayed.
Alternatively, the preset object may be generated from a preset image, and the preset image may include expression information; or the preset object may be determined from a preset object set, and any object in the preset object set may include expression information.
Fig. 4 is a schematic structural diagram illustrating an information interaction system according to an exemplary embodiment. Referring to fig. 4, the system includes a first device 301 and a second device 302.
The first device 301 is configured to receive an operation instruction of a user sent by the second device 302 and obtain target information according to the received instruction, where the received operation instruction may be generated based on the user's operation of the second device 302; and to determine emotion information and reply information corresponding to the target information, and target expression information corresponding to the emotion information.
A second device 302 for displaying the preset object and the reply information; the preset object has the capability of updating the expression information, and the expression information of the current preset object is updated to be the target expression information.
Optionally, the first device 301 is specifically configured to perform emotion recognition on the target information, and determine the recognized emotion as emotion information corresponding to the target information.
Optionally, the first device 301 is specifically configured to determine a reply information set corresponding to the target information, where the reply information set may include at least one piece of reply information for replying to the target information, and different pieces of reply information correspond to different emotion information; and to determine, in the determined reply information set, the reply information corresponding to the determined emotion information as the reply information corresponding to the target information.
Optionally, the first device 301 is further configured to monitor an operation instruction of a user.
Optionally, the first device 301 is specifically configured to monitor an operation instruction of a user when a preset interactive interface is displayed.
Optionally, the preset object is generated according to a preset image, and the preset image includes expression information; or the preset object is determined from a preset object set, and any object in the preset object set comprises expression information.
An embodiment of the present disclosure also provides an electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the information interaction method according to any one of the above embodiments.
Embodiments of the present disclosure also provide a computer-readable storage medium, where instructions of the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the information interaction method according to any one of the above embodiments.
Embodiments of the present disclosure further provide a computer program product, which includes a computer program, and when the computer program is executed by a processor, the information interaction method according to any of the above embodiments is implemented.
FIG. 5 is a schematic block diagram illustrating an electronic device in accordance with an exemplary embodiment. Referring to fig. 5, electronic device 500 may include one or more of the following components: processing component 502, memory 504, power component 506, multimedia component 508, audio component 510, input/output (I/O) interface 512, sensor component 514, and communication component 518. The electronic device/server described above may employ a similar hardware architecture.
The processing component 502 generally controls overall operation of the electronic device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to perform all or a portion of the steps of the information interaction method described above. Further, the processing component 502 can include one or more modules that facilitate interaction between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operations at the electronic device 500. Examples of such data include instructions for any application or method operating on the electronic device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 506 provides power to the various components of the electronic device 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 500.
The multimedia component 508 includes a screen that provides an output interface between the electronic device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 500 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 504 or transmitted via the communication component 518. In some embodiments, audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 514 includes one or more sensors for providing status assessments of various aspects of the electronic device 500. For example, the sensor assembly 514 may detect an open/closed state of the electronic device 500 and the relative positioning of components, such as the display and keypad of the electronic device 500; the sensor assembly 514 may also detect a change in the position of the electronic device 500 or of a component of the electronic device 500, the presence or absence of user contact with the electronic device 500, the orientation or acceleration/deceleration of the electronic device 500, and a change in the temperature of the electronic device 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 518 is configured to facilitate wired or wireless communication between the electronic device 500 and other devices. The electronic device 500 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 518 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 518 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an embodiment of the present disclosure, the electronic device 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the information interaction method described above.
In an embodiment of the present disclosure, a computer-readable storage medium comprising instructions, such as the memory 504 comprising instructions, is also provided, the instructions being executable by the processor 520 of the electronic device 500 to perform the information interaction method described above. For example, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
It is noted that, in the present disclosure, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The method and apparatus provided by the embodiments of the present disclosure are described in detail above, and specific examples are applied herein to explain the principles and implementations of the present disclosure; the above description of the embodiments is only intended to help in understanding the method and core ideas of the present disclosure. Meanwhile, for a person skilled in the art, there may be variations in the specific implementations and the application scope based on the ideas of the present disclosure. In summary, the content of this specification should not be construed as limiting the present disclosure.

Claims (10)

1. An information interaction method, comprising:
acquiring an operation instruction of a user, and acquiring target information based on the operation instruction;
determining emotion information and reply information corresponding to the target information, and determining target expression information corresponding to the emotion information;
displaying a preset object and the reply information, wherein the preset object has the capability of updating expression information, and the expression information of the preset object is updated to the target expression information.
2. The method of claim 1, wherein the determining emotion information corresponding to the target information comprises:
performing emotion recognition on the target information, and determining the recognized emotion as the emotion information corresponding to the target information.
3. The method of claim 1, wherein the determining reply information corresponding to the target information comprises:
determining a reply information set corresponding to the target information, wherein the reply information set includes at least one piece of reply information for replying to the target information, and different pieces of reply information correspond to different emotion information; and
determining, in the reply information set, the piece of reply information corresponding to the determined emotion information as the reply information corresponding to the target information.
4. The method according to claim 1, wherein before the acquiring an operation instruction of a user, the method further comprises: monitoring an operation instruction of the user.
5. The method of claim 4, wherein the monitoring an operation instruction of the user comprises:
monitoring an operation instruction of the user while a preset interactive interface is displayed.
6. The method according to claim 1, wherein the preset object is generated from a preset image, the preset image including expression information; or
the preset object is determined from a preset object set, and any object in the preset object set includes expression information.
7. An information interaction apparatus, comprising:
an acquisition unit configured to perform: acquiring an operation instruction of a user, and acquiring target information based on the operation instruction;
a determination unit configured to perform: determining emotion information and reply information corresponding to the target information, and determining target expression information corresponding to the emotion information;
a presentation unit configured to perform: displaying a preset object and the reply information, wherein the preset object has the capability of updating expression information, and the expression information of the preset object is updated to the target expression information.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the information interaction method of any one of claims 1 to 6.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the information interaction method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the information interaction method of any one of claims 1 to 6 when executed by a processor.
CN202110874385.4A 2021-07-30 2021-07-30 Information interaction method and device, electronic equipment and storage medium Pending CN113569031A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110874385.4A CN113569031A (en) 2021-07-30 2021-07-30 Information interaction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113569031A (en) 2021-10-29

Family

ID=78169615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110874385.4A Pending CN113569031A (en) 2021-07-30 2021-07-30 Information interaction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113569031A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109933782A (en) * 2018-12-03 2019-06-25 阿里巴巴集团控股有限公司 User emotion prediction technique and device
CN110647636A (en) * 2019-09-05 2020-01-03 深圳追一科技有限公司 Interaction method, interaction device, terminal equipment and storage medium
CN110807388A (en) * 2019-10-25 2020-02-18 深圳追一科技有限公司 Interaction method, interaction device, terminal equipment and storage medium
CN110874137A (en) * 2018-08-31 2020-03-10 阿里巴巴集团控股有限公司 Interaction method and device
CN111028827A (en) * 2019-12-10 2020-04-17 深圳追一科技有限公司 Interaction processing method, device, equipment and storage medium based on emotion recognition
CN111694938A (en) * 2020-04-27 2020-09-22 平安科技(深圳)有限公司 Emotion recognition-based answering method and device, computer equipment and storage medium
CN112233698A (en) * 2020-10-09 2021-01-15 中国平安人寿保险股份有限公司 Character emotion recognition method and device, terminal device and storage medium
CN112364971A (en) * 2020-11-06 2021-02-12 联想(北京)有限公司 Session control method and device and electronic equipment
CN112383667A (en) * 2020-11-03 2021-02-19 深圳前海微众银行股份有限公司 Call data processing method, device, equipment and storage medium
CN112434139A (en) * 2020-10-23 2021-03-02 北京百度网讯科技有限公司 Information interaction method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110662083B (en) Data processing method and device, electronic equipment and storage medium
CN108363706B (en) Method and device for man-machine dialogue interaction
CN106791893B (en) Video live broadcasting method and device
CN110517185B (en) Image processing method, device, electronic equipment and storage medium
EP3264290A1 (en) Method and apparatus for recommendation of an interface theme
CN107644646B (en) Voice processing method and device for voice processing
CN111954063B (en) Content display control method and device for video live broadcast room
CN111836114A (en) Video interaction method and device, electronic equipment and storage medium
CN106649712B (en) Method and device for inputting expression information
CN112738544A (en) Live broadcast room interaction method and device, electronic equipment and storage medium
CN110019897B (en) Method and device for displaying picture
CN108270661B (en) Information reply method, device and equipment
CN114245154B (en) Method and device for displaying virtual articles in game live broadcast room and electronic equipment
CN113689530A (en) Method and device for driving digital person and electronic equipment
CN110728981A (en) Interactive function execution method and device, electronic equipment and storage medium
CN107247794B (en) Topic guiding method in live broadcast, live broadcast device and terminal equipment
CN113901353A (en) Information display method, device and system, electronic equipment and server
CN113079493A (en) Information matching display method and device and electronic equipment
CN112000266A (en) Page display method and device, electronic equipment and storage medium
CN111835617B (en) User head portrait adjusting method and device and electronic equipment
CN109787890B (en) Instant messaging method, device and storage medium
CN113569031A (en) Information interaction method and device, electronic equipment and storage medium
CN115373718A (en) Updating method and device of online model and electronic equipment
CN111292743B (en) Voice interaction method and device and electronic equipment
CN114422854A (en) Data processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211029