CN108322832B - Comment method and device and electronic equipment - Google Patents


Info

Publication number
CN108322832B
CN108322832B (application CN201810060919.8A)
Authority
CN
China
Prior art keywords
user
comment
content
image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810060919.8A
Other languages
Chinese (zh)
Other versions
CN108322832A (en)
Inventor
王晓振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN201810060919.8A
Publication of CN108322832A
Application granted
Publication of CN108322832B

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 - End-user applications
    • H04N21/475 - End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/4756 - End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for rating content, e.g. scoring a recommended movie
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 - End-user applications
    • H04N21/478 - Supplemental services, e.g. displaying phone caller identification, shopping application

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present invention provide a comment method, a comment apparatus, and an electronic device. The comment method includes: acquiring a corresponding user avatar according to the user information of a user making a comment; determining interaction information for the user avatar according to the comment content; and presenting the comment content and the interaction information through the user avatar. The embodiments of the invention enrich the forms in which comments or posts can be made, meet users' personalized needs, and improve the user's commenting or posting experience.

Description

Comment method and device and electronic equipment
Technical Field
Embodiments of the present invention relate to the field of computer technology, and in particular to a comment method, a comment apparatus, and an electronic device.
Background
AR (Augmented Reality) is a technology that seamlessly integrates real-world information and virtual-world information: information that would otherwise be difficult to experience within a certain time and space of the real world (such as visual and audio information) is simulated and superimposed onto real information, so that the real environment and virtual objects coexist in the same picture or space in real time.
With the development of AR technology, applying it to a wide range of application scenarios is becoming a trend. At present, most applications provide comment functions, such as book reviews, shopping reviews, or posts and comments in forum communities. Existing comments or posts are mostly plain text; this single form cannot meet users' personalized needs.
Disclosure of Invention
In view of this, embodiments of the present invention provide a comment method, a comment apparatus, and an electronic device, so as to solve the problem that in existing applications with a comment or posting function the comment or posting form is limited to a single mode and cannot meet users' personalized needs.
According to a first aspect of the embodiments of the present invention, a comment method is provided, including: acquiring a corresponding user avatar according to the user information of a user making a comment; determining interaction information for the user avatar according to the comment content; and presenting the comment content and the interaction information through the user avatar.
According to a second aspect of the embodiments of the present invention, a comment apparatus is provided, including: an acquisition module configured to acquire a corresponding user avatar according to the user information of a user making a comment; a determination module configured to determine interaction information for the user avatar according to the comment content; and a display module configured to present the comment content and the interaction information through the user avatar.
According to a third aspect of the embodiments of the present invention, an electronic device is provided, including a processor, a memory, a communication interface, and a communication bus, where the processor, the memory, and the communication interface communicate with one another through the communication bus; the memory stores at least one executable instruction that causes the processor to perform the operations of the comment method according to the first aspect.
According to the comment scheme provided by the embodiments of the present invention, the comment content and the corresponding interaction information are presented through the user avatar of the user who published the comment; for example, the user avatar can play the comment content as speech while making corresponding actions and/or expressions. This achieves an effect similar to a person commenting in a real physical scene, enriches the forms of comments or posts, meets users' personalized needs, and improves the user's commenting or posting experience.
Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some of the embodiments of the present invention, and a person skilled in the art may derive other drawings from them.
FIG. 1 is a flowchart of the steps of a comment method according to a first embodiment of the present invention;
FIG. 2 is a flowchart of the steps of a comment method according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram of a comment interface in the embodiment shown in FIG. 2;
FIG. 4 is a flowchart of the steps of a comment method according to a third embodiment of the present invention;
FIG. 5 is a schematic diagram of a comment interface in the embodiment shown in FIG. 4;
FIG. 6 is a structural block diagram of a comment apparatus according to a fourth embodiment of the present invention;
FIG. 7 is a structural block diagram of a comment apparatus according to a fifth embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present invention.
Detailed Description
To enable those skilled in the art to better understand the technical solutions in the embodiments of the present invention, the technical solutions are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention shall fall within the scope of protection of the embodiments of the present invention.
Specific implementations of the embodiments of the present invention are further described below with reference to the drawings.
It should be noted that in the embodiments of the present invention, "comment" includes not only comments in a comment area provided by an application and posts and comments in forum communities, but also other similar comment or posting scenarios.
Example one
Referring to FIG. 1, a flowchart of the steps of a comment method according to a first embodiment of the present invention is shown.
The comment method of this embodiment includes the following steps:
step S102: and acquiring a corresponding user virtual image according to the user information of the user making the comment.
A person skilled in the art may obtain the user avatar corresponding to the user information in any appropriate manner according to actual needs. For example, if the user has preset a corresponding avatar in the application, that avatar can be obtained directly from the user's information, such as the user account or user identifier. Alternatively, an image or video uploaded by the user and containing the user's figure (such as a head portrait) can be obtained, and the user avatar can be derived from it. If the user has neither preset an avatar nor uploaded an image or video, or the uploaded image or video does not contain the user's figure, an avatar can be determined according to default settings, or generated according to the avatar type selected by the user (e.g., a cartoon, animal, or celebrity avatar).
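A minimal Python sketch of the fallback order described above. All helper functions and field names here are illustrative assumptions, not interfaces defined by this disclosure.

```python
from typing import Optional

def build_avatar_from_media(media: dict) -> Optional[dict]:
    # Placeholder: a real system would detect the user's figure in the
    # uploaded image/video and cut it out or reconstruct an avatar from it.
    if media.get("contains_user_figure"):
        return {"kind": "from_media", "source": media["id"]}
    return None

def generate_avatar_by_type(avatar_type: str) -> dict:
    # Placeholder: generate a stock avatar (cartoon, animal, celebrity, ...).
    return {"kind": "generated", "type": avatar_type}

def resolve_user_avatar(user_info: dict) -> dict:
    """Fallback order: preset avatar -> uploaded image or video containing
    the user's figure -> default / user-selected avatar type."""
    preset = user_info.get("preset_avatar")
    if preset is not None:
        return preset
    media = user_info.get("uploaded_image") or user_info.get("uploaded_video")
    if media is not None:
        avatar = build_avatar_from_media(media)
        if avatar is not None:
            return avatar
    return generate_avatar_by_type(user_info.get("avatar_type", "default"))

print(resolve_user_avatar({"avatar_type": "cartoon"}))  # {'kind': 'generated', 'type': 'cartoon'}
```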
The commenting users may include the users who have already published comments in the current comment area, or the one or more users for whom avatar display has been triggered. Optionally, they may also include the currently logged-in user of the comment area, who may or may not have published a comment yet. For example, suppose the current user is user X and the current comment area contains three comments published, in order, by user 1, user 2, and user 3. When user X enters the comment area, the avatars of user 1, user 2, and user 3 may be displayed in sequence: after one avatar finishes presenting its comment content and interaction information, the next avatar is displayed (for example, after the avatar of user 1 finishes its presentation, the avatar of user 2 is displayed and presents its comment). Alternatively, several user avatars may be displayed at the same time with only one of them active; after the active avatar finishes presenting its comment content and interaction information, the next avatar becomes active. For example, the avatars of users 1, 2, and 3 are displayed simultaneously with the avatar of user 1 active; after it finishes, the avatar of user 2 becomes active and presents its comment, and so on. Of course, in practice a person skilled in the art may adopt other suitable display modes according to actual needs.
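A minimal sketch, under the assumptions above, of the "one active avatar at a time" scheduling; the class and method names are hypothetical.

```python
from collections import deque

class AvatarCommentQueue:
    """Show comment avatars in posting order; only one avatar is active
    (presenting) at a time, and the next one starts when it finishes."""

    def __init__(self, comments):
        # comments: list of (user_id, avatar, comment_content) in posting order
        self.pending = deque(comments)
        self.active = None

    def start_next(self):
        if self.pending:
            self.active = self.pending.popleft()
            user_id, avatar, content = self.active
            print(f"avatar of {user_id} starts presenting: {content!r}")
        else:
            self.active = None

    def on_presentation_finished(self):
        # Called when the active avatar has finished showing its comment
        # content and interaction information; switch to the next avatar.
        self.start_next()

queue = AvatarCommentQueue([
    ("user 1", {"kind": "3d"}, "This piece of clothing is very good!"),
    ("user 2", {"kind": "3d"}, "The size runs a bit small."),
    ("user 3", {"kind": "3d"}, "Fast delivery."),
])
queue.start_next()
queue.on_presentation_finished()  # the avatar of user 2 becomes active
```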
User X may post an independent comment, or comment on the comment content of users 1, 2, and 3. User X's comment can be published in a conventional way (such as conventional text or voice), presented through user X's avatar, or both at the same time.
Step S104: determine the interaction information of the user avatar according to the comment content.
The interaction information may be, but is not limited to, a simple change of the user avatar while the comment content is played, such as a mouth-shape change; it may further include an expression and/or an action of the user avatar. For example, the comment content can be analyzed to determine a corresponding emotion and/or action, such as happiness, anger, or throwing a cup, so that the user avatar shows the corresponding expression and/or action while the comment content is presented, improving interactivity and interest.
Step S106: present the comment content and the interaction information through the user avatar.
The comment content may be voice content or text content. If it is voice content, it can be played through the user avatar; if it is text content, it can be displayed as text based on the user avatar. Optionally, text can also be displayed as part of the interaction information.
According to the comment method of this embodiment, the comment content and the corresponding interaction information are presented through the avatar of the user who published the comment; for example, the avatar can play the comment content as speech while making corresponding actions and/or expressions. This achieves an effect similar to a person commenting in a real physical scene, enriches the forms of comments or posts, meets users' personalized needs, and improves the commenting or posting experience.
The comment method of this embodiment may be performed by any suitable terminal device with data processing capability, including but not limited to mobile terminals such as tablet computers and mobile phones, as well as desktop computers.
Example two
Referring to FIG. 2, a flowchart of the steps of a comment method according to a second embodiment of the present invention is shown.
In this embodiment, the comment method of the embodiments of the present invention is described using an example in which user X from the first embodiment views the comment content of users 1, 2, and 3, who are not currently online, and replies to the comment content of user 1.
The comment method of this embodiment includes the following steps:
Step S202: acquire a corresponding user avatar according to the user information of the user making the comment.
As described in the first embodiment, there are various ways to obtain the corresponding user avatar from the user information. In this embodiment, the user avatar is obtained either from user image information corresponding to the user information, or from video image information corresponding to the user information. The user image or video image may contain the user's own figure, the figure of another person the user likes, or an animal or cartoon figure the user likes, and the corresponding user avatar can be obtained or generated from the information it contains. In this way, an avatar that better matches the user's appearance or preferences can be obtained or generated. If the user has already preset a corresponding avatar, the preset avatar is used in preference to an avatar obtained in the above way.
Preferably, the user image information includes information of a user image containing the user's figure, and the video image information includes information of a video image containing the user's figure. Preferably, the user image or video image includes multiple images covering several of the user's expressions and/or actions, which facilitates applying different expressions and actions to the user avatar. When the user image or video image contains the user's figure, such as a head portrait, an upper-body image, or a full-body image, the resulting avatar fits the user better and the commenting experience improves.
The way the user avatar is obtained or generated from this information may be implemented by a person skilled in the art in any appropriate manner according to actual needs; for example, the user's figure may be cut out or redrawn after image detection, or a three-dimensional user avatar may be reconstructed from the user figure detected in the image. This is not limited in the embodiments of the present invention.
In this embodiment, the obtained avatars of the commenting users include the avatars of user 1, user 2, and user 3.
Step S204: determine the interaction information of the user avatar according to the comment content.
To enable the user avatar to better match and present the comment content, the interaction information of the user avatar can be determined from the comment content. In this embodiment, the interaction information includes an expression and/or an action of the user avatar.
For example, if the comment content is text, emotion and/or action analysis may be performed on the text, and the expression and/or action of the corresponding user avatar is determined from the result. How the emotion and/or action analysis is implemented may be decided by a person skilled in the art according to actual needs and is not limited by the embodiments of the present invention. For example, user 1 posts the text comment "This piece of clothing is very good!". Emotion analysis of this text yields an emotion score, and based on the score the user avatar shows a smiling or laughing expression. Of course, the avatar may also be given an action matching the emotion, such as spinning or dancing. Similarly, some text comments contain action words, and analyzing these action words can determine the action of the corresponding avatar, such as throwing a cup or making a fist.
As another example, if the comment content is voice, the voice content may first be converted into text, emotion and/or action analysis is then performed on the converted text, and the expression and/or action of the corresponding user avatar is determined from the result. The speech-to-text conversion and the subsequent emotion and/or action analysis can be implemented by a person skilled in the art in any appropriate manner according to actual needs; this is not limited by the embodiments of the present invention.
However, the interaction information is not limited to the above; the user avatar's interaction information may also consist of nothing more than a simple mouth-shape change.
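A minimal sketch of the analysis described above. The keyword lists and the speech-to-text stub are stand-in assumptions; a real system would use a proper emotion-analysis model and speech recognizer.

```python
import re

POSITIVE = {"good", "great", "like", "love", "compact", "nice"}
NEGATIVE = {"bad", "angry", "hate", "poor", "terrible"}
ACTION_WORDS = {"throw": "throw_cup", "fist": "make_fist", "dance": "dance"}

def speech_to_text(voice_content: bytes) -> str:
    # Placeholder for a real speech-recognition call.
    return voice_content.decode("utf-8", errors="ignore")

def analyze_comment(content, is_voice=False):
    """Map comment content to an (expression, action) pair for the avatar."""
    text = speech_to_text(content) if is_voice else content
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        expression = "smile" if score == 1 else "laugh"
    elif score < 0:
        expression = "angry"
    else:
        expression = "neutral"
    action = next((a for w, a in ACTION_WORDS.items() if w in words), None)
    return expression, action

print(analyze_comment("This piece of clothing is very good!"))  # ('smile', None)
```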
Step S206: present the comment content and the interaction information through the user avatar.
When the interaction information includes an expression and/or an action of the user avatar, the comment content and the interaction information can be presented as follows: if the comment content is text, the text is converted into speech, the speech is played through the user avatar, and the determined expression and/or action is shown; if the comment content is voice, the voice is played directly through the user avatar, and the determined expression and/or action is shown. Presenting the comment content and interaction information through the user avatar implements AR-style processing of the comment and improves its interest and interactivity.
Converting text content into speech is not the only option; in practice the text content can also be displayed through the user avatar, for example in a speech bubble beside the avatar, or in other graphic forms.
Likewise, the presentation is not limited to expressions and/or actions of the user avatar; the avatar may simply open and close its mouth while the corresponding comment content is played.
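A minimal sketch of the presentation step described above; every helper here (text-to-speech, avatar playback, bubble display) is a named stub standing in for whatever rendering and TTS components an implementation would actually use.

```python
def text_to_speech(text: str) -> bytes:
    # Placeholder for a real TTS engine.
    return text.encode("utf-8")

def avatar_play_audio(avatar, audio: bytes):
    print(f"{avatar['user']}: playing speech with mouth-shape changes")

def avatar_show_text_bubble(avatar, text: str):
    print(f"{avatar['user']}: showing bubble {text!r}")

def avatar_set_expression(avatar, expression: str):
    print(f"{avatar['user']}: expression -> {expression}")

def avatar_play_action(avatar, action: str):
    print(f"{avatar['user']}: action -> {action}")

def present_comment(avatar, content, is_voice, expression=None, action=None, use_bubble=False):
    """Present one comment through the avatar: play it as speech (converting
    text to speech if needed) or show it as a text bubble, then apply the
    determined expression and/or action."""
    if is_voice:
        avatar_play_audio(avatar, content)
    elif use_bubble:
        avatar_show_text_bubble(avatar, content)
    else:
        avatar_play_audio(avatar, text_to_speech(content))
    if expression:
        avatar_set_expression(avatar, expression)
    if action:
        avatar_play_action(avatar, action)

present_comment({"user": "user 1"}, "This piece of clothing is very good!",
                is_voice=False, expression="smile")
```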
Step S208: receive an operation on the user avatar, and update the user avatar according to the operation.
The application can offer, through its interface, operations directed at the user avatar, such as tipping, liking, or giving virtual items. The user selects the corresponding operation through the corresponding option, and after receiving the operation the application updates the user avatar accordingly. For example, user X approves of user 1's comment and taps the virtual item "wreath"; after the application processes the operation, the avatar of user 1 is shown wearing a wreath on its head.
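A minimal sketch of such an avatar update, assuming a simple dictionary-based avatar state; the operation types and field names are illustrative.

```python
def apply_avatar_operation(avatar_state: dict, operation: dict) -> dict:
    """Update an avatar according to a viewer's operation
    (like, tip, or give a virtual item such as a wreath)."""
    op = operation["type"]
    if op == "like":
        avatar_state["likes"] = avatar_state.get("likes", 0) + 1
    elif op == "give_item":
        # e.g. a wreath is subsequently rendered on the avatar's head
        avatar_state.setdefault("worn_items", []).append(operation["item"])
    elif op == "tip":
        avatar_state["tips"] = avatar_state.get("tips", 0) + operation["amount"]
    return avatar_state

state = {"user": "user 1"}
apply_avatar_operation(state, {"type": "give_item", "item": "wreath"})
print(state)  # {'user': 'user 1', 'worn_items': ['wreath']}
```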
Note that this step is optional; in actual use, no operation or comment reply needs to be performed on the user avatar.
Through the above process, interaction between the user and the user avatar is realized, further improving the commenting experience.
In addition, after the avatar of user 1 has presented its comment content and interaction information, user X may want to reply to user 1's comment. User X can input and publish a reply through the reply function of the comment area. Once the application confirms that user X has published the reply, for example by clicking the "publish" button, the avatar of user X is displayed: on the one hand, the content of user X's reply is shown in the corresponding position as conventional text; on the other hand, the reply is also presented through user X's avatar, so that other users can view user X's comment content through that presentation.
In addition, if the comment content includes image content, the image can be processed. For example, a new image input by the user is received, AR processing is performed on the image content of the comment according to the new image to generate an AR image, and the AR image is then used to update the image content of the comment. For instance, when posting a comment a user uploads an image, produced by AR processing, of a real person wearing a virtual hat; if the user later buys the physical hat corresponding to the virtual one and uploads an image of the real person wearing the real hat, the real-hat image can be superimposed onto the earlier virtual-hat image through AR processing (or vice versa), enabling an effect comparison.
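A minimal sketch of superimposing the newly uploaded image onto the image already attached to the comment, assuming the Pillow imaging library; the file names are hypothetical.

```python
from PIL import Image

def update_comment_image_with_ar(original_path: str, new_path: str, out_path: str,
                                 position=(0, 0)):
    """Alpha-composite the newly uploaded image onto the comment's existing
    image and store the result as the comment's updated AR image."""
    base = Image.open(original_path).convert("RGBA")
    overlay = Image.open(new_path).convert("RGBA")
    base.alpha_composite(overlay, dest=position)   # overlay must fit inside base
    base.convert("RGB").save(out_path)

# Example usage (paths are illustrative):
# update_comment_image_with_ar("virtual_hat.png", "real_hat.png", "comparison.png")
```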
FIG. 3 shows an example comment interface based on the above process; it consists of a left-hand interface and a right-hand interface. User X enters the comment interface of an application and, by clicking the corresponding option or button, chooses to display user 1's comment content in AR mode. The application then obtains the three-dimensional avatar of user 1, plays the comment posted by user 1 as speech through that avatar (shown as the human figure in the left-hand interface), and makes the avatar change its mouth shape and smile, as shown in the left-hand interface of FIG. 3. When user X performs a corresponding operation, for example clicking the "reply" button for user 1's comment, an input box for the reply content is displayed (shown as the square frame in the right-hand interface) together with the three-dimensional avatar of user X. User X types the reply into the input box; when user X clicks the "publish" button, the reply is posted as text under user 1's comment, and at the same time the three-dimensional avatar of user X (shown as the lower human figure in the right-hand interface) nods and smiles while the reply is played, as shown in the right-hand interface of FIG. 3. When needed, the avatar display can be cancelled through a corresponding trigger operation, returning to conventional text or voice comments. The cancellation may apply to a single user avatar or to all user avatars at once.
Thus, according to the comment method of this embodiment, the comment content and the corresponding interaction information are presented through the avatar of the user who published the comment; for example, the avatar can make corresponding actions and/or expressions while playing the comment content as speech. This achieves an effect similar to a person commenting in a real physical scene, enriches the forms of comments or posts, meets users' personalized needs, and improves the commenting or posting experience.
The comment method of this embodiment may be performed by any suitable terminal device with data processing capability, including but not limited to mobile terminals such as tablet computers and mobile phones, as well as desktop computers.
EXAMPLE III
Referring to FIG. 4, a flowchart of the steps of a comment method according to a third embodiment of the present invention is shown.
In this embodiment, the comment method of the embodiments of the present invention is described using an example in which user X from the first embodiment views the comment content of users 1, 2, and 3 and exchanges interactive comments with user 1, who is currently online.
The comment method of this embodiment includes the following steps:
step S302: and acquiring a corresponding user virtual image according to the user information of the user making the comment.
If the number of the current online users is multiple, the corresponding multiple virtual images can be obtained according to the user information of the current online multiple users who make comments.
For example, if user X and users 1, 2, and 3 are all online, the avatars of all four users can be obtained. Further, a virtual scene, such as a virtual meeting room, can be set up for the four avatars to create the effect of the four users discussing in the same scene. The virtual scene can be displayed in any suitable way, for example in a floating window or a half-screen window.
This is not a limitation, however: for multiple users currently online, only the avatars of some of them may be obtained. For instance, if user X replies to user 1's comment, only the avatars of user X and user 1 need to be obtained. On the one hand this makes the avatar display more targeted; on the other hand it reduces the application's data processing burden and improves its processing efficiency.
In one possible approach, the multiple currently online commenting users include a first commenting user, such as user 1, and a second commenting user, such as user X, who are currently exchanging interactive comments. Interactive comments here means that the first and second commenting users each comment on or reply to the content published by the other.
In this embodiment, it is assumed that user 1 and user X have each uploaded a video segment containing their own figure. After processing the video uploaded by user 1, the user avatar corresponding to user 1 and that avatar's various expressions and actions are generated; after processing the video uploaded by user X, the user avatar corresponding to user X and its various expressions and actions are generated.
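A minimal sketch of the shared virtual scene mentioned above (e.g. a virtual meeting room) holding the avatars of the currently online users; the class, slot layout, and data shapes are illustrative assumptions.

```python
class VirtualScene:
    """A shared virtual scene that places the avatars of the currently
    online commenting users at predefined positions."""

    def __init__(self, name: str, slots):
        self.name = name
        self.slots = list(slots)          # predefined positions in the scene
        self.placed = {}                  # user_id -> (avatar, position)

    def add_avatar(self, user_id: str, avatar: dict):
        if not self.slots:
            raise RuntimeError("no free position in the scene")
        self.placed[user_id] = (avatar, self.slots.pop(0))

room = VirtualScene("meeting room", slots=[(0, 0), (1, 0), (0, 1), (1, 1)])
for uid in ("user X", "user 1", "user 2", "user 3"):
    room.add_avatar(uid, {"kind": "3d", "user": uid})
print(room.placed["user 1"])  # ({'kind': '3d', 'user': 'user 1'}, (1, 0))
```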
Step S304: determine the interaction information of the user avatars according to the comment content.
In this embodiment, the comment content is assumed to be text. After a user enters the corresponding text, emotion and/or action analysis is performed on it, and the expression and/or action of the corresponding avatar is determined from the result. Specifically, the text entered by user 1 and by user X is analyzed separately, and the expression and/or action of each corresponding avatar is determined.
Step S306: present the comment content and the interaction information through the user avatar, and superimpose the user avatar onto the image captured in real time by the image capture device for display.
This embodiment mainly addresses the interactive comment scenario between a first commenting user, such as user 1, and a second commenting user, such as user X. Optionally, the comment content and interaction information of the first commenting user may be presented through the first user's avatar, which is superimposed in real time onto the image captured by the second user's image capture device and displayed there; or the comment content and interaction information of the second commenting user may be presented through the second user's avatar, which is superimposed in real time onto the image captured by the first user's image capture device and displayed there. For example, user 1 uses their mobile phone to project user X's avatar next to the television at home, enabling interactive comments about TV drama A playing on the television; meanwhile, user X uses their mobile phone to project user 1's avatar in front of the sofa at home, likewise enabling interactive comments about drama A. For user 1, user 1's phone superimposes user X's avatar onto the image captured by user 1's phone; for user X, user X's phone superimposes user 1's avatar onto the image captured by user X's phone.
In practical applications, only one of the two parties in the interactive comment may use the avatar mode, while the other uses conventional text or voice display.
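A minimal sketch of the superimposition itself: alpha-blending a rendered avatar (RGBA) onto one camera frame (RGB). It assumes NumPy; the zero-filled frame and the flat-colored avatar are stand-ins for the real camera image and the rendered remote avatar.

```python
import numpy as np

def overlay_avatar_on_frame(frame: np.ndarray, avatar_rgba: np.ndarray,
                            top_left=(0, 0)) -> np.ndarray:
    """Superimpose the remote user's rendered avatar onto the image
    captured in real time by the local device's camera."""
    y, x = top_left
    h, w = avatar_rgba.shape[:2]
    out = frame.copy()
    region = out[y:y + h, x:x + w].astype(np.float32)
    rgb = avatar_rgba[..., :3].astype(np.float32)
    alpha = avatar_rgba[..., 3:4].astype(np.float32) / 255.0
    out[y:y + h, x:x + w] = (alpha * rgb + (1 - alpha) * region).astype(np.uint8)
    return out

camera_frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in camera image
avatar = np.full((120, 80, 4), 255, dtype=np.uint8)      # stand-in rendered avatar
composited = overlay_avatar_on_frame(camera_frame, avatar, top_left=(300, 100))
print(composited.shape)  # (480, 640, 3)
```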
FIG. 5 shows a comment interface display based on the above process: the left-hand interface in FIG. 5 is displayed on user 1's mobile phone, and the right-hand interface is displayed on user X's mobile phone. In this example, the comment interaction is as follows:
User 1: "The plot of drama A is very compact";
User X: "I agree with user 1's opinion";
User 1: "I like the male lead in drama A";
User X: "I don't think his acting is any good!"
Based on these comments, in the left-hand interface user 1 first posts the comment "The plot of drama A is very compact". After user X's reply "I agree with user 1's opinion" is received, the avatar of user X next to the television in user 1's home smiles and nods while that reply is played as speech. User 1 then replies to user X with "I like the male lead in drama A". After user X's reply "I don't think his acting is any good!" is received, the avatar of user X next to user 1's television shakes its head while that reply is played as speech. Moreover, user 1 may capture images or video at any time during the interactive comment.
Based on the same comments, in the right-hand interface the avatar of user 1 in front of user X's sofa smiles while the speech "The plot of drama A is very compact" is played; user X replies "I agree with user 1's opinion". After user 1's reply "I like the male lead in drama A" is received, the avatar of user 1 in front of the sofa makes a shy expression and tilts its head while that reply is played as speech; user X then replies "I don't think his acting is any good!". Throughout the interactive comment, user X may likewise capture images or video at any time.
It should be noted that when the interactive comment scenario is displayed, a person skilled in the art may adopt any appropriate display mode according to actual needs, such as full-screen display, half-screen display alongside the comment content (the mode shown in FIG. 5), or display in a small floating window.
In addition, a person skilled in the art may also add live-broadcast-style AR processing to the application's comment function: online users engaged in interactive comments can interact in real time through a video interaction option, and once the option is triggered, the figure in the image captured in real time by one party can be processed and projected into a scene or position set by the other party or parties, realizing real-time interaction between the interactive commenters.
Thus, according to the comment method of this embodiment, the comment content and the corresponding interaction information are presented through the avatar of the user who published the comment; for example, the avatar can make corresponding actions and/or expressions while playing the comment content as speech. This achieves an effect similar to a person commenting in a real physical scene, enriches the forms of comments or posts, meets users' personalized needs, and improves the commenting or posting experience.
The comment method of this embodiment may be performed by any suitable terminal device with data processing capability, including but not limited to mobile terminals such as tablet computers and mobile phones, as well as desktop computers.
Example four
Referring to FIG. 6, a structural block diagram of a comment apparatus according to a fourth embodiment of the present invention is shown.
The comment apparatus of this embodiment includes: an obtaining module 402, configured to obtain a corresponding user avatar according to the user information of a user making a comment; a determining module 404, configured to determine interaction information for the user avatar according to the comment content; and a display module 406, configured to present the comment content and the interaction information through the user avatar.
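A minimal sketch of how the three modules fit together; the class name, callables, and the stub module bodies are illustrative assumptions rather than the apparatus itself.

```python
class CommentApparatus:
    """Wiring of the three modules described above: obtain the avatar,
    determine its interaction information, then present both."""

    def __init__(self, obtain, determine, display):
        self.obtain = obtain        # user_info -> avatar
        self.determine = determine  # comment_content -> interaction info
        self.display = display      # (avatar, content, interaction) -> None

    def handle_comment(self, user_info, comment_content):
        avatar = self.obtain(user_info)
        interaction = self.determine(comment_content)
        self.display(avatar, comment_content, interaction)

apparatus = CommentApparatus(
    obtain=lambda info: {"user": info["name"], "kind": "3d"},
    determine=lambda content: {"expression": "smile", "action": None},
    display=lambda a, c, i: print(a["user"], "presents", repr(c), "with", i),
)
apparatus.handle_comment({"name": "user 1"}, "This piece of clothing is very good!")
```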
According to the comment apparatus of this embodiment, the comment content and the corresponding interaction information are presented through the avatar of the user who published the comment; for example, the avatar can play the comment content as speech while making corresponding actions and/or expressions. This achieves an effect similar to a person commenting in a real physical scene, enriches the forms of comments or posts, meets users' personalized needs, and improves the commenting or posting experience.
EXAMPLE five
Referring to FIG. 7, a structural block diagram of a comment apparatus according to a fifth embodiment of the present invention is shown.
The comment apparatus of this embodiment includes: an obtaining module 502, configured to obtain a corresponding user avatar according to the user information of a user making a comment; a determining module 504, configured to determine interaction information for the user avatar according to the comment content; and a display module 506, configured to present the comment content and the interaction information through the user avatar.
Optionally, the interaction information of the user avatar includes an expression and/or an action of the user avatar.
Optionally, the determining module 504 is configured to, if the comment content is a text content, perform emotion and/or motion analysis on the text content, and determine, according to a result of the emotion and/or motion analysis, an expression and/or a motion of the corresponding user avatar; if the comment content is voice content, after the voice content is converted into text content, emotion and/or action analysis is carried out on the converted text content, and the corresponding expression and/or action of the user virtual image are determined according to the result of the emotion and/or action analysis.
Optionally, the displaying module 506 is configured to, if the comment content is a text content, convert the text content into a voice content, play the voice content through the user avatar, and display the determined expression and/or action; and if the comment content is voice content, directly playing the voice content through the user virtual image, and displaying the determined expression and/or action.
Optionally, the obtaining module 502 is configured to obtain a plurality of corresponding user avatars according to the user information of a plurality of currently online users making comments.
Optionally, the plurality of currently online commenting users include a first commenting user and a second commenting user who are currently performing interactive comment.
Optionally, the displaying module 506 is configured to display the comment content and the interaction information of the first comment user through the user avatar of the first comment user, and superimpose the user avatar of the first comment user on the image acquired by the image acquisition device of the second comment user in real time and display the superimposed image; or displaying the comment content and the interactive information of the second comment user through the user avatar of the second comment user, and overlaying the user avatar of the second comment user to the image collected by the image collection device of the first comment user in real time and displaying the image.
Optionally, the comment apparatus of this embodiment further includes: a first updating module 508, configured to receive an operation on the user avatar, and update the user avatar according to the operation.
Optionally, when the comment content includes image content, the comment apparatus of the present embodiment further includes: a second updating module 510, configured to receive a new image input by the user, perform Augmented Reality (AR) processing on image content included in the comment content according to the new image, and generate an AR image; updating image content in the comment content using the AR image.
Optionally, the obtaining module 502 is configured to obtain a corresponding user avatar according to user image information corresponding to the user information; or acquiring a corresponding user virtual image according to the video image information corresponding to the user information.
Optionally, the user image information includes: information of a user image containing a user image; the video image information includes: information of the video image containing the user character.
According to the comment apparatus of this embodiment, the comment content and the corresponding interaction information are presented through the avatar of the user who published the comment; for example, the avatar can play the comment content as speech while making corresponding actions and/or expressions. This achieves an effect similar to a person commenting in a real physical scene, enriches the forms of comments or posts, meets users' personalized needs, and improves the commenting or posting experience.
EXAMPLE six
Referring to FIG. 8, a schematic structural diagram of an electronic device according to a sixth embodiment of the present invention is shown. The specific implementation of the electronic device is not limited by this embodiment of the present invention.
As shown in FIG. 8, the electronic device may include: a processor 602, a communication interface 604, a memory 606, and a communication bus 608.
Wherein:
the processor 602, communication interface 604, and memory 606 communicate with one another via a communication bus 608.
A communication interface 604 for communicating with other electronic devices.
The processor 602 is configured to execute the program 610, and may specifically perform relevant steps in the above comment method embodiment.
In particular, program 610 may include program code comprising computer operating instructions.
The processor 602 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The electronic device includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 606 is configured to store the program 610. The memory 606 may include high-speed RAM and may also include non-volatile memory, such as at least one disk storage.
The program 610 may specifically be configured to cause the processor 602 to perform the following operations: acquiring a corresponding user virtual image according to the user information of the user making a comment; according to the comment content, determining the interaction information of the user virtual image; and displaying the comment content and the interactive information through the user virtual image.
In an alternative embodiment, the interaction information of the user avatar includes expressions and/or actions of the user avatar.
In an optional implementation manner, the program 610 is further configured to, when determining the interaction information of the user avatar according to the comment content, if the comment content is a text content, perform emotion and/or motion analysis on the text content, and determine an expression and/or a motion of the corresponding user avatar according to a result of the emotion and/or motion analysis; if the comment content is voice content, after the voice content is converted into text content, emotion and/or action analysis is carried out on the converted text content, and the corresponding expression and/or action of the user virtual image are determined according to the result of the emotion and/or action analysis.
In an optional implementation manner, the program 610 is further configured to, when the comment content and the interaction information are displayed through the user avatar, if the comment content is a text content, convert the text content into a voice content, play the voice content through the user avatar, and display the determined expression and/or action; and if the comment content is voice content, directly playing the voice content through the user virtual image, and displaying the determined expression and/or action.
In an alternative embodiment, the program 610 is further configured to enable the processor 602 to obtain the corresponding plurality of user avatars according to the user information of the plurality of commenting users currently online when obtaining the corresponding user avatar according to the user information of the commenting user.
In an optional implementation manner, the plurality of currently online commenting users comprise a first commenting user and a second commenting user who currently perform interactive comment.
In an optional implementation, the program 610 is further configured to cause the processor 602 to, when the comment content and the interaction information are displayed through the user avatar, display the comment content and the interaction information of the first comment user through the user avatar of the first comment user, and superimpose and display the user avatar of the first comment user on an image captured by an image capturing device of the second comment user in real time; or displaying the comment content and the interactive information of the second comment user through the user avatar of the second comment user, and overlaying the user avatar of the second comment user to the image collected by the image collection device of the first comment user in real time and displaying the image.
In an alternative embodiment, program 610 is further operative to cause processor 602 to receive an operation on the user avatar, update the user avatar based on the operation.
In an optional embodiment, when the comment content includes image content, program 610 is further configured to cause processor 602 to receive a new image input by the user, perform augmented reality AR processing on the image content included in the comment content according to the new image, and generate an AR image; updating image content in the comment content using the AR image.
In an alternative embodiment, the program 610 is further configured to enable the processor 602, when acquiring the corresponding user avatar according to the user information of the commenting user, acquire the corresponding user avatar according to the user image information corresponding to the user information; or acquiring a corresponding user virtual image according to the video image information corresponding to the user information.
In an alternative embodiment, the user image information comprises: information of a user image containing a user avatar; the video image information includes: information of the video image containing the user character.
For specific implementation of each step in the program 610, reference may be made to corresponding steps and corresponding descriptions in units in the foregoing comment method embodiments, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
Through the electronic device of this embodiment, the comment content and the corresponding interaction information can be presented through the avatar of the user who published the comment; for example, the avatar can play the comment content as speech while making corresponding actions and/or expressions. This achieves an effect similar to a person commenting in a real physical scene, enriches the forms of comments or posts, meets users' personalized needs, and improves the commenting or posting experience.
It should be noted that, according to the implementation requirement, each component/step described in the embodiment of the present invention may be divided into more components/steps, and two or more components/steps or partial operations of the components/steps may also be combined into a new component/step to achieve the purpose of the embodiment of the present invention.
The method according to the embodiments of the present invention described above may be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium, downloaded over a network, and stored in a local recording medium, so that the method described herein can be processed as software stored on a recording medium by a general-purpose computer, a dedicated processor, or programmable or dedicated hardware such as an ASIC or FPGA. It will be appreciated that the computer, processor, microprocessor controller, or programmable hardware includes storage components (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code which, when accessed and executed by the computer, processor, or hardware, implements the comment method described herein. Further, when a general-purpose computer accesses code for implementing the comment method shown herein, execution of that code transforms the general-purpose computer into a special-purpose computer for performing the comment method.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The above embodiments are only for illustrating the embodiments of the present invention and not for limiting the embodiments of the present invention, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the embodiments of the present invention, so that all equivalent technical solutions also belong to the scope of the embodiments of the present invention, and the scope of patent protection of the embodiments of the present invention should be defined by the claims.

Claims (17)

1. A comment method, comprising:
the method comprises the steps of obtaining a plurality of corresponding user avatars according to user information of a plurality of currently online comment-making users, and setting a virtual scene for the plurality of user avatars, wherein the user avatars are three-dimensional avatars different from head portraits of the users, and the plurality of currently online comment-making users comprise a first comment user and a second comment user who currently perform interactive comments;
according to the comment content, determining interaction information of a plurality of user avatars;
displaying the comment content and the interaction information of the first comment user through the user avatar of the first comment user, and overlaying and displaying the user avatar of the first comment user to the image collected by the image collection equipment of the second comment user in real time; or the comment content and the interactive information of the second comment user are displayed through the user virtual image of the second comment user, the user virtual image of the second comment user is superposed to the image collected by the image collecting device of the first comment user in real time and is displayed, so that the user virtual image of one of the user virtual images is projected to a virtual scene set by the other user or multiple users, and real-time interaction of the users who make comments is carried out.
2. The method of claim 1, wherein the interaction information of the user avatar includes an expression and/or an action of the user avatar.
3. The method of claim 2, wherein said determining interaction information for a plurality of said user avatars from review content comprises:
if the comment content is the text content, performing emotion and/or action analysis on the text content, and determining the expression and/or action of the corresponding user virtual image according to the emotion and/or action analysis result;
if the comment content is voice content, after the voice content is converted into text content, emotion and/or action analysis is carried out on the converted text content, and the corresponding expression and/or action of the user virtual image are determined according to the result of the emotion and/or action analysis.
4. The method of claim 3, wherein said presenting said commentary content and said interaction information via a plurality of said user avatars comprises:
if the comment content is text content, converting the text content into voice content, playing the voice content through a plurality of user virtual images, and displaying the determined expression and/or action;
and if the comment content is voice content, directly playing the voice content through a plurality of user avatars, and displaying the determined expression and/or action.
5. The method of any of claims 1-4, wherein the method further comprises: and receiving the operation on the user virtual image, and updating the user virtual image according to the operation.
6. The method of any of claims 1-4, wherein when the commentary content includes image content, the method further comprises:
receiving a new image input by the user, and performing Augmented Reality (AR) processing on image content included in the comment content according to the new image to generate an AR image;
updating image content in the comment content using the AR image.
7. The method according to any one of claims 1-4, wherein said obtaining a corresponding plurality of user avatars comprises:
acquiring a plurality of corresponding user avatars according to user image information corresponding to the user information;
or,
and acquiring a plurality of corresponding user avatars according to the video image information corresponding to the user information.
8. The method of claim 7, wherein,
the user image information includes: information of a user image containing a user image;
the video image information includes: information of the video image containing the user character.
9. A commenting apparatus comprising:
an acquisition module, configured to acquire a plurality of corresponding user avatars according to the user information of a plurality of currently online users making comments, and to set a virtual scene for the plurality of user avatars, wherein each user avatar is a three-dimensional avatar distinct from the user's own figure, and the plurality of currently online users making comments include a first comment user and a second comment user who are currently making interactive comments;
a determining module, configured to determine the interaction information of the user avatars according to the comment content;
and a display module, configured to display the comment content and the interaction information of the first comment user through the user avatar of the first comment user, and to overlay the user avatar of the first comment user in real time onto the image collected by the image collection device of the second comment user for display; or to display the comment content and the interaction information of the second comment user through the user avatar of the second comment user, and to overlay the user avatar of the second comment user in real time onto the image collected by the image collection device of the first comment user for display, so that one user's avatar is projected into the virtual scene set for the other user or users, enabling real-time interaction between the commenting users.
10. The apparatus of claim 9, wherein the interaction information of the user avatar includes an expression and/or an action of the user avatar.
11. The apparatus of claim 10, wherein the determining module is configured to: if the comment content is text content, perform emotion and/or action analysis on the text content, and determine the expression and/or action of the corresponding user avatar according to the result of the emotion and/or action analysis; and if the comment content is voice content, convert the voice content into text content, perform emotion and/or action analysis on the converted text content, and determine the expression and/or action of the corresponding user avatar according to the result of the emotion and/or action analysis.
12. The apparatus of claim 11, wherein the display module is configured to: if the comment content is text content, convert the text content into voice content, play the voice content through a plurality of the user avatars, and display the determined expression and/or action; and if the comment content is voice content, directly play the voice content through a plurality of the user avatars, and display the determined expression and/or action.
13. The apparatus of any of claims 9-12, wherein the apparatus further comprises:
a first updating module, configured to receive an operation on the user avatar and to update the user avatar according to the operation.
14. The apparatus according to any one of claims 9-12, wherein when the comment content includes image content, the apparatus further comprises:
a second updating module, configured to receive a new image input by the user, perform Augmented Reality (AR) processing on the image content included in the comment content according to the new image to generate an AR image, and update the image content in the comment content using the AR image.
15. The apparatus according to any one of claims 9-12, wherein the acquisition module is configured to acquire a plurality of corresponding user avatars according to user image information corresponding to the user information, or to acquire a plurality of corresponding user avatars according to video image information corresponding to the user information.
16. The apparatus of claim 15, wherein,
the user image information includes: information of a user image containing the user's figure;
the video image information includes: information of a video image containing the user's figure.
17. An electronic device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the comment method according to any one of claims 1 to 8.
CN201810060919.8A 2018-01-22 2018-01-22 Comment method and device and electronic equipment Active CN108322832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810060919.8A CN108322832B (en) 2018-01-22 2018-01-22 Comment method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810060919.8A CN108322832B (en) 2018-01-22 2018-01-22 Comment method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN108322832A (en) 2018-07-24
CN108322832B (en) 2022-05-17

Family

ID=62887612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810060919.8A Active CN108322832B (en) 2018-01-22 2018-01-22 Comment method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN108322832B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109195022B (en) * 2018-09-14 2021-03-02 王春晖 Voice bullet screen system
CN111128204A (en) * 2018-11-01 2020-05-08 阿里巴巴集团控股有限公司 Comment method and device, terminal device and computer storage medium
CN109559399A (en) * 2018-12-18 2019-04-02 深圳市致善科技有限公司 User registers method, system and storage medium and terminal device
CN111625740A (en) * 2019-02-28 2020-09-04 阿里巴巴集团控股有限公司 Image display method, image display device and electronic equipment
CN110035325A (en) * 2019-04-19 2019-07-19 广州虎牙信息科技有限公司 Barrage answering method, barrage return mechanism and live streaming equipment
CN110348957A (en) * 2019-06-27 2019-10-18 无线生活(杭州)信息科技有限公司 Information displaying method and device
CN110460903A (en) * 2019-07-18 2019-11-15 平安科技(深圳)有限公司 Based on speech analysis to the method, apparatus and computer equipment of program review
CN112347395B (en) * 2019-08-07 2024-06-14 阿里巴巴集团控股有限公司 Special effect display method and device, electronic equipment and computer storage medium
CN110971930B (en) * 2019-12-19 2023-03-10 广州酷狗计算机科技有限公司 Live virtual image broadcasting method, device, terminal and storage medium
CN112135160A (en) * 2020-09-24 2020-12-25 广州博冠信息科技有限公司 Virtual object control method and device in live broadcast, storage medium and electronic equipment
CN112601100A (en) * 2020-12-11 2021-04-02 北京字跳网络技术有限公司 Live broadcast interaction method, device, equipment and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104965843A (en) * 2014-12-29 2015-10-07 腾讯科技(北京)有限公司 Method and apparatus for acquiring comment information
CN105357587A (en) * 2015-10-28 2016-02-24 广州华多网络科技有限公司 Method and system for realizing music barrage
CN106485956A (en) * 2016-09-29 2017-03-08 珠海格力电器股份有限公司 Method and device for demonstrating functions of electronic equipment and intelligent terminal
CN107085495A (en) * 2017-05-23 2017-08-22 厦门幻世网络科技有限公司 A kind of information displaying method, electronic equipment and storage medium
CN107329990A (en) * 2017-06-06 2017-11-07 北京光年无限科技有限公司 A kind of mood output intent and dialogue interactive system for virtual robot
CN107612815A (en) * 2017-09-19 2018-01-19 北京金山安全软件有限公司 Information sending method, device and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105051820B (en) * 2013-03-29 2018-08-10 索尼公司 Information processing equipment and information processing method

Also Published As

Publication number Publication date
CN108322832A (en) 2018-07-24

Similar Documents

Publication Publication Date Title
CN108322832B (en) Comment method and device and electronic equipment
CN113240782B (en) Streaming media generation method and device based on virtual roles
CN111080759B (en) Method and device for realizing split mirror effect and related product
CN111541950B (en) Expression generating method and device, electronic equipment and storage medium
CN112199016B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
WO2022105846A1 (en) Virtual object display method and apparatus, electronic device, and medium
JP4973622B2 (en) Image composition apparatus and image composition processing program
CN109035415B (en) Virtual model processing method, device, equipment and computer readable storage medium
JP5012373B2 (en) Composite image output apparatus and composite image output processing program
US20210166461A1 (en) Avatar animation
CN114245099B (en) Video generation method and device, electronic equipment and storage medium
US12020389B2 (en) Systems and methods for providing real-time composite video from multiple source devices featuring augmented reality elements
US20240276058A1 (en) Video-based interaction method and apparatus, computer device, and storage medium
JP5169111B2 (en) Composite image output apparatus and composite image output processing program
CN113660528A (en) Video synthesis method and device, electronic equipment and storage medium
US20240155074A1 (en) Movement Tracking for Video Communications in a Virtual Environment
CN111652986B (en) Stage effect presentation method and device, electronic equipment and storage medium
CN111510769A (en) Video image processing method and device and electronic equipment
JP4962219B2 (en) Composite image output apparatus and composite image output processing program
JP6559375B1 (en) Content distribution system, content distribution method, and content distribution program
CN114173173A (en) Barrage information display method and device, storage medium and electronic equipment
JP6609078B1 (en) Content distribution system, content distribution method, and content distribution program
WO2023130715A1 (en) Data processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
US20230138434A1 (en) Extraction of user representation from video stream to a virtual environment
CN114245193A (en) Display control method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200528

Address after: 310051 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 510627 Guangdong city of Guangzhou province Whampoa Tianhe District Road No. 163 Xiping Yun Lu Yun Ping B radio square 14 storey tower

Applicant before: GUANGZHOU UCWEB COMPUTER TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information

Address after: Room 554, 5 / F, building 3, 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: Room 508, 5 / F, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: Alibaba (China) Co.,Ltd.

GR01 Patent grant