CN114374882A - Barrage information processing method and device, terminal and computer-readable storage medium - Google Patents


Info

Publication number
CN114374882A
Authority
CN
China
Prior art keywords
bullet screen, target, user, determining, hot spot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111588176.XA
Other languages
Chinese (zh)
Other versions
CN114374882B (en)
Inventor
郭佩佩
邢刚
冯亚楠
赵璐
樊刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Culture Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202111588176.XA (granted as CN114374882B)
Priority claimed from CN202111588176.XA (external priority)
Publication of CN114374882A
Application granted
Publication of CN114374882B
Legal status: Active

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 — Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/47 — End-user applications
    • H04N 21/478 — Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 — Supplemental services communicating with other users, e.g. chatting
    • H04N 21/43 — Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
    • H04N 21/431 — Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N 21/4312 — Visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4316 — Visual interfaces for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N 21/488 — Data services, e.g. news ticker
    • H04N 21/4884 — Data services for displaying subtitles

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a bullet screen (barrage) information processing method and device, a terminal, and a computer-readable storage medium, relates to the technical field of virtual reality, and aims to solve the problem that existing bullet screen display forms are monotonous. The method is applied to a virtual reality (VR) terminal and comprises the following steps: when the VR terminal plays a video, determining an initial display position of a target bullet screen within the field angle of a first user; and controlling the target bullet screen to move and display along a target motion trajectory, where the trajectory starts at the initial display position and is related to the label of the target bullet screen and to the content features of the video image within the first user's field angle. The label of the target bullet screen is obtained from the content features of the target bullet screen and the content features of the video image within the field angle at which a second user sent the target bullet screen. Embodiments of the invention can display bullet screens dynamically by combining bullet screen content with video content, thereby enriching the display forms of bullet screens.

Description

Barrage information processing method and device, terminal and computer-readable storage medium
Technical Field
The present invention relates to the field of Virtual Reality (VR) technology, and in particular, to a method, an apparatus, a terminal, and a computer-readable storage medium for processing barrage information.
Background
A barrage (bullet screen) is the phenomenon of user comments flying across the screen while a video plays, and is mostly used with 2D video. VR video is also known as 360-degree panoramic video. With the development of VR technology, users can likewise communicate through bullet screens while watching VR video. In the prior art, however, the bullet screen display form in VR video is monotonous.
Disclosure of Invention
The embodiments of the present invention provide a bullet screen information processing method, apparatus, terminal, and computer-readable storage medium, aiming to solve the problem that existing bullet screen display forms are monotonous.
In a first aspect, an embodiment of the present invention provides a method for processing bullet screen information, which is applied to a virtual reality VR terminal, and includes:
when the VR terminal plays a video, determining an initial display position of a target bullet screen under a field angle of a first user;
controlling the target bullet screen to move and display along a target motion track, wherein the target motion track takes the initial display position as a starting point, and the target motion track is related to a label of the target bullet screen and content characteristics of a video image under the field angle of the first user; and the label of the target bullet screen is obtained according to the content characteristics of the target bullet screen and the content characteristics of the video image under the field angle when the second user sends the target bullet screen.
Optionally, the determining the initial display position of the target barrage at the field angle of the first user includes:
determining a hot picture area of the video image under the field angle of the first user; wherein the hotspot picture area is related to a content feature of the video image;
determining the heat value of the target bullet screen according to the label of the target bullet screen;
and determining the initial display position according to the hot spot picture area and the heat value.
Optionally, the determining the initial display position according to the hot spot picture area and the heat value includes:
determining the Z-axis coordinate of the initial display position according to the heat value and the sphere radius value rendered by the VR terminal;
determining the Y-axis coordinate corresponding to the central point of the hot spot picture area as the Y-axis coordinate of the initial display position;
determining the X-axis coordinate of the target boundary point of the field angle of the first user as the X-axis coordinate of the starting display position; wherein the target boundary points include: the top right vertex or the top left vertex.
Optionally, the determining the Z-axis coordinate of the starting display position according to the heat value and the sphere radius value rendered by the VR terminal includes:
determining the Z-axis coordinate of the starting display position according to the formula z = r − k·h²;
wherein z represents the Z-axis coordinate value; k is a coefficient that adjusts the size of the threshold range; h represents the heat value of the target bullet screen; and r represents the sphere radius value rendered by the VR terminal.
Optionally, in the process of controlling the target barrage to move and display along the target motion trajectory, the method further includes at least one of:
under the condition that the label of the target bullet screen is matched with the hot spot picture area, the target motion track passes through the hot spot picture area;
when the target bullet screen and the hot spot picture area have an overlapping area, controlling the display color of the target bullet screen in the overlapping area to be a target color, wherein the target color and the color of the hot spot picture area are complementary colors;
acquiring the distance between the target bullet screen and the hot spot picture area; adjusting the font size of the target bullet screen according to the distance between the target bullet screen and the hot spot picture area;
when the target bullet screen passes through the hot spot picture area, controlling the target bullet screen to move towards a target point; the target point is a position matched with the label of the target bullet screen in the hot spot picture;
obtaining the emotion types of the target bullet screens, wherein the emotion types comprise: positive and negative emotions; when the emotion type of the target barrage is negative emotion, controlling the VR equipment to send prompt information when the target barrage enters a hot spot picture area;
performing emotion analysis on the target bullet screen to determine an emotion value of the target bullet screen; controlling the display effect of the target bullet screen according to the emotion value;
and under the condition that the sight of the user is detected to be aligned with the target barrage and the sight lasts for a preset duration, controlling the motion track of the target barrage to point to the eyeball of the first user, and increasing the font size of the target barrage to a target value at a preset rate.
Optionally, before determining the starting display position of the target barrage at the field angle of the first user, the method further includes:
acquiring a bullet screen set at the VR video playing time, wherein each bullet screen in the bullet screen set is matched with a label; the label is obtained according to the content characteristics of the bullet screen and the content characteristics of the video image under the user view field angle when the bullet screen is sent;
and determining a target bullet screen to be displayed from the bullet screen set according to the video picture content corresponding to the field angle of the first user at the current VR video playing moment and the label of each bullet screen.
Optionally, after the target barrage is controlled to move and display along the target motion trajectory, the method further includes:
and under the condition that the first user turns the head and the turning speed is greater than a preset speed value, controlling the target bullet screen to move out of the visual field range of the first user at a preset acceleration speed.
In a second aspect, an embodiment of the present invention further provides a device for processing bullet screen information, which is applied to a virtual reality VR terminal, where the device includes:
the first determining module is used for determining the initial display position of the target bullet screen under the user field angle of the first user when the VR terminal plays the video;
the display module is used for controlling the target bullet screen to move and display along a target motion track, wherein the target motion track takes the starting display position as a starting point, and the target motion track is related to a label of the target bullet screen and content characteristics of a video image under the field angle of the first user; and the label of the target bullet screen is obtained according to the content characteristics of the target bullet screen and the content characteristics of the video image under the field angle when the second user sends the target bullet screen.
In a third aspect, an embodiment of the present invention further provides a VR terminal, including: a transceiver, a memory, a processor, and a computer program stored on the memory and executable on the processor; characterised in that the processor is adapted to read a program in the memory to implement the steps in the method according to the first aspect.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the method according to the first aspect.
In the embodiment of the invention, when the VR terminal plays a video, the starting display position of the target bullet screen within the field angle of the first user is determined; the target bullet screen is controlled to move and display along a target motion trajectory that starts at that position and is related to the label of the target bullet screen and to the content features of the video image within the first user's field angle; and the label of the target bullet screen is obtained from the content features of the target bullet screen and of the video image within the field angle at which the second user sent it. Because the label reflects both the bullet screen's content and the video content at sending time, and the trajectory depends on the label and on the video content within the first user's field angle, the scheme of the embodiments can display bullet screens dynamically by fully combining bullet screen content with video content, enriching the display forms of bullet screens and helping the user obtain an immersive viewing experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a flowchart of a method for processing bullet screen information according to an embodiment of the present invention;
fig. 2 is a second flowchart of a method for processing bullet screen information according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an initial display position of a target bullet screen according to an embodiment of the present invention;
FIG. 4 is a schematic illustration of a bullet screen display provided by an embodiment of the present invention;
fig. 5 is a block diagram of a bullet screen information processing device according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of a VR terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without any inventive step, are within the scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a method for processing bullet screen information according to an embodiment of the present invention, and as shown in fig. 1, the method for processing bullet screen information, applied to a VR terminal, includes the following steps:
step 101, determining an initial display position of a target bullet screen under a field angle of a first user when the VR terminal plays a video;
in this step, the first user is a user currently watching the VR video, and the target barrage at the field angle of the first user is sent by the second user.
Step 102, controlling the target bullet screen to move and display along a target motion track, wherein the target motion track takes the starting display position as a starting point, and the target motion track is related to a label of the target bullet screen and content characteristics of a video image under the field angle of the first user; and the label of the target bullet screen is obtained according to the content characteristics of the target bullet screen and the content characteristics of the video image under the view field angle when the second user sends the target bullet screen.
In this step, the target bullet screen is controlled, in a 360-degree panoramic rendering mode, to move and display along the target motion trajectory in spherical coordinates.
It should be noted that, when the second user sends the target bullet screen, the system automatically generates a label for the target bullet screen, specifically, the label of the target bullet screen is generated according to the content feature of the target bullet screen and the content feature of the video image at the field angle when the second user sends the target bullet screen. And the target motion track is determined according to the content characteristics of the video image under the field angle of the first user.
In the above embodiment, when the VR terminal plays a video, the starting display position of the target bullet screen within the first user's field angle is determined; the target bullet screen is then controlled to move and display along a target motion trajectory starting from that position, where the trajectory is related to the label of the target bullet screen and to the content features of the video image within the first user's field angle; and the label of the target bullet screen reflects the content features of the target bullet screen and of the video image within the field angle at which the second user sent it. This embodiment can thus display bullet screens dynamically by combining bullet screen content with video content, enriching the display forms of bullet screens and giving the user an immersive viewing experience.
In one embodiment, as shown in fig. 2, the step 101 includes:
step 1011, determining a hot spot picture area of the video image under the field angle of the first user; wherein the hot picture area is related to content characteristics of the video image.
In this step, the hotspot picture area is the picture area where the content features of the video image are located, and it is a subset of the first user's field of view. The hotspot picture area may be obtained by analyzing several frames before and after the video image within the first user's field of view, extracting content features, and taking the area where the extracted features are located as the hotspot picture area. A hotspot picture area can be represented by the contour feature data of an object. Moreover, since each frame of the picture can contain several content features, each frame can likewise contain several hotspot picture areas.
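As a minimal sketch (not part of the patent text), assuming feature extraction happens upstream and yields pixel coordinates of a detected object, a hotspot picture area could be represented by the feature points' bounding box and center; function and field names here are hypothetical:

```python
def hotspot_area(feature_points):
    """feature_points: list of (x, y) pixel coordinates of one detected object.

    Represents the hotspot picture area by the bounding box of the points
    and its center point (used later as the Y-coordinate reference).
    """
    xs = [p[0] for p in feature_points]
    ys = [p[1] for p in feature_points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    center = ((x_min + x_max) / 2, (y_min + y_max) / 2)
    return {"bbox": (x_min, y_min, x_max, y_max), "center": center}
```

A frame with several detected objects would simply yield several such areas, one per object.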
Step 1012, determining the heat value of the target bullet screen according to the label of the target bullet screen;
in this step, the heat value is calculated according to the time when the second user sends the bullet screen and the labels of all the bullet screens.
Illustratively, the heat value corresponds to the number of times the tag occurs, the more the tag occurs, the greater the heat value. The hot value of the target bullet screen is equal to the sum of the hot values of all the labels of the item bullet screen, namely h.
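Under that illustrative reading (a label's heat is its occurrence count, and a bullet screen's heat is the sum over its labels), the computation can be sketched as follows; names are hypothetical:

```python
from collections import Counter

def barrage_heat(barrage_tags, all_tags):
    """Heat value h of one bullet screen.

    barrage_tags: labels attached to this bullet screen.
    all_tags: labels of all bullet screens sent around this playback moment.
    A label's heat is its occurrence count; the bullet screen's heat is the
    sum of the heat of its labels.
    """
    counts = Counter(all_tags)
    return sum(counts[t] for t in barrage_tags)
```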
And 1013, determining the initial display position according to the hot spot picture area and the heat value.
Specifically, the coordinates (X, Y, Z) of the start display position are determined as follows:
(1) z-axis coordinates:
determining the Z-axis coordinate of the initial display position according to the heat value and the sphere radius value rendered by the VR terminal;
illustratively, to avoid the bullet screen Z-axis coordinate being too close to the center of the sphere, a threshold range is selected into which the heat value of the bullet screen is mapped. The Z-axis coordinate is smaller when the heat value is high; conversely, the lower the heat value, the larger the Z-axis coordinate. The calculation formula is as follows:
according to z ═ r-kh2Determining the Z-axis coordinate of the initial display position;
wherein Z represents a Z-axis coordinate value, k represents the size of the coefficient adjustment threshold range, e.g., k may be located at 0.5; h represents the heat value of the target bullet screen; r represents a sphere radius value rendered by the VR terminal.
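The formula z = r − k·h² can be sketched directly; the `z_min` clamp is an assumption added here to express "avoid coming too close to the sphere center", which the patent handles by choosing a threshold range:

```python
def z_coordinate(heat, radius, k=0.5, z_min=None):
    """Starting Z-axis coordinate: z = r - k*h^2.

    heat: heat value h of the target bullet screen.
    radius: sphere radius r rendered by the VR terminal.
    k: coefficient adjusting the threshold range (0.5 per the example above).
    z_min: optional floor keeping hot bullet screens off the sphere center.
    """
    z = radius - k * heat ** 2
    if z_min is not None:
        z = max(z, z_min)
    return z
```

Hotter bullet screens thus start closer to the viewer, which is the intended depth cue.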
(2) Y-axis coordinate
Determining the Y-axis coordinate corresponding to the central point of the hot spot picture area as the Y-axis coordinate of the initial display position;
and the Y-axis coordinate is the Y-axis coordinate corresponding to the central point of the picture hot spot area matched with the item bullet screen. It should be noted that, in the case of consecutive adjacent bullet screens, to avoid overlapping the bullet screens, the Y-axis coordinate is increased or decreased by a certain step length so as to increase the interval. The step size can be set arbitrarily.
(3) X axis coordinate
Determining the X-axis coordinate of the target boundary point of the field angle of the first user as the X-axis coordinate of the starting display position; wherein the target boundary points include: the top right vertex or the top left vertex.
For example, as shown in fig. 3, the coordinates of the center point of the hotspot area of the video image are (j, k), and the coordinates of the upper-right vertex of the user's field of view (field angle) are (x1, y1). With the video image rendered on a sphere of radius r, the movement starting point of the bullet screen is (x1, m, r), where m is equal to k. When the Z-axis coordinate is less than r, the bullet screen is at close range; for instance, the movement starting point at depth r/2 is (x2, n, r/2), where x2 and n can be calculated by geometric principles.
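Putting the three coordinate rules together (X from the field-of-view boundary vertex, Y from the hotspot center, Z from the heat value), a starting position could be assembled as below. This is a sketch: the `enter_right` mirroring for a left-side entry is an assumption, not stated in the patent.

```python
def start_position(hotspot_center, fov_top_right, heat, radius,
                   k=0.5, enter_right=True):
    """(X, Y, Z) movement starting point of a bullet screen.

    hotspot_center: (j, k) center of the matched hotspot picture area.
    fov_top_right: (x1, y1) upper-right vertex of the user's field of view.
    """
    _, kc = hotspot_center
    x1, _ = fov_top_right
    z = radius - k * heat ** 2          # depth from heat, z = r - k*h^2
    x = x1 if enter_right else -x1      # hypothetical mirror for left entry
    return (x, kc, z)
```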
In an embodiment, when the tag of the target bullet screen matches the hot spot screen area, the target motion trajectory passes through the hot spot screen area.
Optionally, under the condition that the tag of the target bullet screen is not matched with the hot spot screen area, the target motion trajectory does not pass through the hot spot screen area.
Matching between the label of the target bullet screen and the hotspot picture area can be understood as meaning that the feature represented by the label and the features in the hotspot picture area have a common part.
For example, as shown in fig. 4, when the label of the target bullet screen includes "sea lion" and the hotspot picture area is the picture area where the "sea lion" feature is located, the label of the target bullet screen can be understood to match the hotspot picture area, and the target motion trajectory passes through the hotspot picture area where the sea lion is located. If instead the label of the target bullet screen is "sea lion" but the hotspot picture area is a picture area with the "penguin" feature, the label does not match the hotspot picture area, and the target motion trajectory does not pass through the hotspot picture area where the penguin is located.
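Reading "have a common part" as set overlap between label features and area features, the match test is a one-liner; names are hypothetical:

```python
def label_matches_hotspot(barrage_labels, hotspot_features):
    """True when the bullet screen's labels share at least one feature
    with the hotspot picture area (e.g. both contain "sea lion")."""
    return bool(set(barrage_labels) & set(hotspot_features))
```

On a match, the renderer would route the target motion trajectory through that area; otherwise around it.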
In this embodiment, by controlling the target motion trajectory to pass through the hotspot picture area when the label of the target bullet screen matches it, viewing becomes more engaging: the user sees bullet screens about the hotspot picture while watching that picture, which helps the user obtain an immersive viewing experience.
Further, in one embodiment, step 102 includes:
when the target bullet screen passes through the hot spot picture area, controlling the target bullet screen to move towards a target point; and the target point is the position matched with the label of the target bullet screen in the hot spot picture.
Here, the target point is the picture feature point corresponding to other keywords in the bullet screen's label. For example, when the label of the target bullet screen includes "tail of sea lion", if the hotspot picture is the picture area where the sea lion feature is located, the target bullet screen is controlled to move toward the target point (the sea lion's tail) in the hotspot picture. As shown in fig. 4, the dotted arrow shows the movement track of a bullet screen reading "I love the sea lion's tail", and it points at the sea lion's tail.
Through this embodiment, the interest of bullet screen display can be increased, adding to the user's enjoyment of watching VR video.
In an embodiment, in the process of controlling the target barrage to move and display along the target motion track, the method further includes:
when the target bullet screen and the hot spot picture area have an overlapping area, controlling the display color of the target bullet screen in the overlapping area to be a target color, wherein the target color and the color of the hot spot picture area are complementary colors.
In this embodiment, to make the bullet screen content easier to read, when a moving bullet screen encounters a hotspot area of the picture, the bullet screen's font color changes to the complementary color of that area for easier recognition. Only the part that has entered the hotspot picture area changes color; the part that has not entered keeps its color, and the transition is displayed as a gradual color change, enhancing visual comfort and aesthetics.
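For 8-bit RGB, the complementary color can be taken as each channel reflected about 255, a common convention (the patent does not fix the color model):

```python
def complementary(rgb):
    """Complement of an 8-bit RGB color: each channel mapped c -> 255 - c.
    Applied to the bullet-screen glyphs overlapping the hotspot area."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)
```

The gradual transition described above would then interpolate per glyph between the original color and this complement as it crosses the area boundary.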
In an embodiment, in the process of controlling the target barrage to move and display along the target motion track, the method further includes:
acquiring the distance between the target bullet screen and the hot spot picture area;
and adjusting the font size of the target bullet screen according to the distance between the target bullet screen and the hot spot picture area.
In this embodiment, as the bullet screen floats, the font size of the target bullet screen can be controlled to change gradually with its distance from the hotspot picture area: the closer to the hotspot picture area, the larger the font, up to a maximum size beyond which it no longer grows. For example, the font size increases from 10 to 30 and is then fixed. When the bullet screen leaves the hotspot area, the font gradually shrinks, and stops changing once it returns to its initial size. This increases the interest of the bullet screen display.
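A linear mapping from distance to font size, clamped to the 10–30 range of the example, is one way to realize this; the linear shape and `d_max` cutoff are assumptions:

```python
def font_size(distance, d_max, size_min=10, size_max=30):
    """Font size grows as the bullet screen nears the hotspot area.

    distance: current distance to the hotspot picture area.
    d_max: distance at or beyond which the font stays at size_min.
    """
    if distance >= d_max:
        return size_min
    if distance <= 0:
        return size_max
    t = 1 - distance / d_max            # 0 far away .. 1 at the area
    return round(size_min + t * (size_max - size_min))
```

Running the same function as the bullet screen departs gives the symmetric shrink back to the initial size.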
In an embodiment, in the process of controlling the target barrage to move and display along the target motion track, the method further includes:
and under the condition that the sight of the user is detected to be aligned with the target barrage and the sight lasts for a preset duration, controlling the motion track of the target barrage to point to the eyeball of the first user, and increasing the font size of the target barrage to a target value at a preset rate.
Illustratively, when a user gazes at a bullet screen for more than 2 seconds, the movement track of the target bullet screen is changed to head straight for the user's eyeball, its font gradually enlarging until it finally flies out of view, producing an effect as if it were being sucked in and increasing the interactive fun of viewing.
Gazing at a particular bullet screen can be detected as follows: an eyeball-tracking sensor of the VR device collects the coordinates of the focal point of the user's eyes; when the focal point rests on a bullet screen continuously for 2 seconds, the attraction processing of that bullet screen is triggered, i.e., the movement track of the target bullet screen is controlled to point at the first user's eyeball and the font size of the target bullet screen is increased to the target value at a preset rate.
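The dwell detection above can be sketched as a small state machine over timestamped gaze samples; the sample format (timestamp, gazed bullet-screen id or None) is an assumption about what the eye tracker provides:

```python
def gaze_dwell(samples, dwell_s=2.0):
    """Return the id of the first bullet screen gazed at continuously
    for dwell_s seconds (2 s per the example above), else None.

    samples: chronological list of (timestamp_s, barrage_id_or_None).
    """
    start_t, current = None, None
    for t, bid in samples:
        if bid is not None and bid == current:
            if t - start_t >= dwell_s:
                return bid              # dwell threshold reached
        else:
            current, start_t = bid, t   # gaze moved: restart the clock
    return None
```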
In an embodiment, in the process of controlling the target barrage to move and display along the target motion track, the method further includes:
obtaining the emotion types of the target bullet screens, wherein the emotion types comprise: positive and negative emotions;
and when the emotion type of the target bullet screen is negative emotion, controlling the VR equipment to send prompt information when the target bullet screen enters a hot spot picture area.
It should be noted that emotions can be divided into positive and negative emotions according to the direction of value change. Positive emotions are those produced by an increase of positive value or a decrease of negative value, such as pleasure, trust, excitement, and celebration; negative emotions are those produced by a decrease of positive value or an increase of negative value, such as suffering, sadness, hatred, and jealousy.
In this embodiment, the prompt may include vibration, a beep, and the like. Illustratively, the strength of the vibration or beep is determined by the magnitude of the negative emotion value: the larger the magnitude, the stronger the vibration or beep, and the prompt stops once the bullet screen leaves the hot spot area.
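The strength rule just described can be sketched numerically. The function name, the 0-100 strength scale, and the clamping bound are illustrative assumptions; the patent only states that strength grows with the magnitude of the negative emotion value and that the prompt stops outside the hot spot area.

```python
def prompt_strength(emotion_value, in_hotspot, max_abs_value=2.0):
    """Map a barrage emotion value to a vibration/beep strength in [0, 100].

    Only negative-emotion barrages inside the hot spot area prompt;
    strength scales linearly with |emotion_value|, clamped at max_abs_value
    (the weight range in this document is [-2, 2]).
    """
    if not in_hotspot or emotion_value >= 0:
        return 0.0  # no prompt outside the hot spot or for non-negative emotion
    magnitude = min(abs(emotion_value), max_abs_value)
    return 100.0 * magnitude / max_abs_value
```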
In an embodiment, after the controlling the target barrage to move and display along the target motion trajectory, the method further includes:
and when the first user turns his or her head and the turning speed is greater than a preset speed value, controlling the target bullet screen to accelerate out of the first user's field of view at a preset acceleration.
In this embodiment, for the barrages already displayed in the field of view, when the first user turns his or her head quickly, these barrages accelerate out of the field of view; the turning motion can be measured by a sensor of the VR terminal and the acceleration calculated from it.
Further, for the barrages displayed in the field of view, when the first user turns slowly and at a uniform speed, the barrages rotate with the view at the same uniform speed; after turning to the new position, if the associated feature has not disappeared and its bullet screen to be displayed has not been updated, the bullet screen continues to be displayed rather than disappearing.
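The fast-turn rule above can be sketched as a simple decision function. The threshold and acceleration values, the degree-based units, and the function name are all illustrative; the patent only specifies "greater than a preset speed value" and "a preset acceleration".

```python
TURN_SPEED_THRESHOLD = 120.0   # deg/s, assumed preset speed value
FLY_OUT_ACCELERATION = 500.0   # deg/s^2, assumed preset acceleration

def barrage_motion(turn_speed_deg_s):
    """Decide how displayed barrages react to the user's head turn.

    Returns ('fly_out', accel) when the turn exceeds the preset speed,
    else ('follow_view', 0.0): the barrages rotate with the view.
    """
    if turn_speed_deg_s > TURN_SPEED_THRESHOLD:
        return ("fly_out", FLY_OUT_ACCELERATION)
    return ("follow_view", 0.0)
```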
In an embodiment, before step 101, the method further comprises:
acquiring a bullet screen set at the VR video playing time, wherein each bullet screen in the bullet screen set is matched with a label; the label is obtained according to the content characteristics of the bullet screen and the content characteristics of the video image under the user view field angle when the bullet screen is sent;
and determining a target bullet screen to be displayed from the bullet screen set according to the video picture content corresponding to the field angle of the first user at the current VR video playing moment and the label of each bullet screen.
For example, when the first user watches the VR video, determining the target barrage to display based on what the first user is viewing may include the following steps:
Step 1: when the first user watches the VR video, detect the first user's field of view through a motion sensor of the VR terminal.
Step 2: analyze several frames before and after the video image within the first user's field of view and perform content feature extraction; use the extracted features as tags of the viewed picture, and use the regions where those features are located as hot spot picture areas of the viewed picture.
Step 3: then obtain the barrages at the current VR video playing moment and fuzzily match each barrage's tags against the viewed picture's tags. The matching scheme is as follows:
(3-1) Tag set A of the bullet screen text, A = {a1, a2, …, am}, where each element is a keyword extracted from the bullet screen text;
(3-2) Tag set B of the picture, B = {b1, b2, …, bn}, where each element is a keyword of a feature extracted from the picture;
(3-3) For each keyword in A − B, compute its semantic similarity to the keywords in set B via Word2Vec, and collect the keywords whose similarity exceeds a threshold into a set C; the threshold is, for example, 0.7 and can be adjusted according to actual experiments.
(3-4) The keywords in (A ∩ B) ∪ C are the final, accuracy-improved tags of the bullet screen.
Step 4: if any keyword in a bullet screen's tags matches the picture tags, classify the bullet screen into the to-be-displayed set D1.
Step 5: for a bullet screen none of whose tags match the picture tags, further check whether the bullet screen's position information matches the user's field of view. The specific steps are as follows:
(5-1) Obtain the bullet screen's hot spot area and judge whether the user's field of view overlaps it.
(5-2) If the bullet screen's hot spot area overlaps the user's field of view, the bullet screen is considered matched and is classified into the to-be-displayed set D2.
Step 6: if neither the tags nor the position of a bullet screen matches the user's field of view, the bullet screen is not displayed.
Step 7: the bullet screens in D1 ∪ D2 are the ones to be displayed within the user's field of view while watching the VR video.
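The selection procedure above can be sketched compactly. To keep the sketch self-contained, Word2Vec is replaced by a pluggable `similarity` callback (in practice this could be, e.g., gensim's `KeyedVectors.similarity`); the dictionary field names and the `hotspot_in_view` flag are illustrative assumptions.

```python
def select_barrages(barrages, picture_tags, similarity, threshold=0.7):
    """Return the barrages to display: D1 + D2.

    barrages: list of dicts with 'tags' (tag set A of the text) and
    'hotspot_in_view' (does the barrage's hot spot area overlap the
    user's field of view?).  picture_tags: tag set B of the picture.
    """
    d1, d2 = [], []
    b = set(picture_tags)
    for bar in barrages:
        a = set(bar["tags"])
        # (3-3)/(3-4): final tags are (A ∩ B) ∪ C, where C holds the
        # A − B keywords semantically close to some keyword of B
        c = {w for w in a - b
             if any(similarity(w, pw) > threshold for pw in b)}
        if (a & b) | c:                   # step 4: some tag matches
            d1.append(bar)
        elif bar.get("hotspot_in_view"):  # step 5: position matches
            d2.append(bar)
    return d1 + d2                        # step 7: D1 ∪ D2
```

Usage with a toy similarity function:

```python
sim = lambda w1, w2: 0.9 if {w1, w2} == {"goal", "shot"} else 0.1
bars = [
    {"tags": {"goal"}, "hotspot_in_view": False},     # matched via C
    {"tags": {"stadium"}, "hotspot_in_view": True},   # matched via position
    {"tags": {"lunch"}, "hotspot_in_view": False},    # step 6: not displayed
]
shown = select_barrages(bars, {"shot", "player"}, sim)
```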
In an embodiment, in the process of controlling the target barrage to move and display along the target motion track, the method further includes:
performing emotion analysis on the target bullet screen to determine an emotion value of the target bullet screen;
and controlling the display effect of the target bullet screen according to the emotion value.
The display effect may include the color and the font of the text.
Specifically, performing emotion analysis on the target bullet screen to determine an emotion value of the target bullet screen may specifically include the following steps:
Step 1: segment the bullet screen text in the to-be-displayed barrage set into words and screen out the emotional words; assume there are m emotional words.
Step 2: obtain the emotion vector and emotion category of each barrage emotion word using a machine-learning algorithm. The emotion vector is denoted P = {p_i}, i = 1, 2, …, n, where n is the total number of emotions, and the components satisfy

p_1 + p_2 + … + p_n = 1.

The emotion corresponding to the largest p_i in the vector is taken as the emotion category of the word. The emotion-word vector corresponding to the k-th emotion word is then denoted

P_k = {p_{k,i}}, i = 1, 2, …, n.
Step 3: designate one color for each emotion category; the RGB vector of the color corresponding to the j-th emotion is (R_j, G_j, B_j), giving n emotion colors in total.
Step 4: first compute the color of each emotion word in a given bullet screen. From the classification vector P_k of the k-th emotion word, its color (R_k, G_k, B_k) is computed as:

R_k = Σ_{j=1}^{n} p_{k,j} · R_j
G_k = Σ_{j=1}^{n} p_{k,j} · G_j
B_k = Σ_{j=1}^{n} p_{k,j} · B_j

Step 5: determine each emotion word's color weight from the distribution of its emotion-word vector: the larger the variance of the distribution, the clearer the word's emotional bias, and therefore the heavier the word's color weight. For the k-th emotion word, the variance u_k and the weight w_k are:

u_k = (1/n) · Σ_{i=1}^{n} (p_{k,i} − 1/n)²
w_k = u_k / Σ_{k'=1}^{m} u_{k'}

Step 6: the final RGB color value of the bullet screen sentence is then obtained as:

R = Σ_{k=1}^{m} w_k · R_k
G = Σ_{k=1}^{m} w_k · G_k
B = Σ_{k=1}^{m} w_k · B_k
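The per-word colour mixing and variance-based weighting described in these steps can be sketched numerically. Note that the exact formulas appear only as image placeholders in this text, so the distribution-weighted mix and the variance definition used below are reconstructions; all numeric values are illustrative.

```python
def sentence_color(word_vectors, emotion_colors):
    """Mix a barrage sentence colour from its emotion words.

    word_vectors: list of P_k, each a length-n tuple summing to 1.
    emotion_colors: list of (R_j, G_j, B_j), one colour per emotion.
    Assumes at least one word has a non-uniform distribution.
    """
    n = len(emotion_colors)
    word_colors, variances = [], []
    for p in word_vectors:
        # (R_k, G_k, B_k) = sum_j p_{k,j} * (R_j, G_j, B_j)
        word_colors.append(tuple(
            sum(p[j] * emotion_colors[j][c] for j in range(n))
            for c in range(3)))
        mean = 1.0 / n  # components sum to 1, so the mean is 1/n
        variances.append(sum((pj - mean) ** 2 for pj in p) / n)
    total = sum(variances)
    weights = [u / total for u in variances]  # w_k = u_k / sum of u
    return tuple(sum(w * col[c] for w, col in zip(weights, word_colors))
                 for c in range(3))
```

A word with a sharply peaked distribution (clear emotional bias) dominates the mix, while a uniform-distribution word contributes nothing, matching the variance rule in step 5.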
Step 7: assign a weight to each emotion category, with the value range [−2, 2]; positive emotions take positive values and negative emotions take negative values.
Illustratively, as shown in the following table:

| Emotion category | Emotional words | Weight value | Color | Font |
| --- | --- | --- | --- | --- |
| Happiness | happy, liked, praise, good, handsome | 2 | Orange | Happy font |
| Anger | enough, shut up | -1 | Red | Angry font |
| Fear | fear, frightening, scared, save me, terror, strangeness | -2 | Black | Horror font |
Step 8: consider a bullet screen sentence with m emotion words, let the weight of the k-th emotion word be o_k, and let negation words appear f times. Combining the negation words and the emotion words in the bullet screen text yields the final emotion value of the sentence: the emotion value s equals the sum of the emotion-word weights obtained above, with the sign flipped once per negation word:

s = (−1)^f · Σ_{k=1}^{m} o_k
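The sentence emotion value can be sketched as follows. The sign-flip-per-negation rule is a reconstruction of how the negation count f "combines" with the emotion words (the original expression is an image placeholder), and the word lists are illustrative.

```python
# Illustrative weight table following the example table above
WEIGHTS = {"happy": 2, "praise": 2, "shut up": -1, "fear": -2}
NEGATIONS = {"not", "no", "never"}

def emotion_value(words):
    """s = (-1)^f * sum of o_k over the sentence's emotion words."""
    s = sum(WEIGHTS.get(w, 0) for w in words)    # sum of emotion-word weights
    f = sum(1 for w in words if w in NEGATIONS)  # negation-word count
    return s * (-1) ** f
```

For example, "not happy" flips the positive weight of "happy" into a negative sentence value.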
Specifically, the controlling of the display effect of the target bullet screen according to the emotion value may include the following effects:
Effect 1: the larger the emotion value s of a bullet screen sentence, the higher the color saturation of the bullet screen, making its color deeper. The saturation V is calculated as:

V = (max(R, G, B) − min(R, G, B)) / max(R, G, B)

where R, G, and B denote the red, green, and blue components, respectively.
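The saturation computation for Effect 1, in code. The max/min RGB definition is an assumption (the original formula is an image placeholder in this text); it matches the standard HSV-style saturation.

```python
def saturation(r, g, b):
    """V = (max(R,G,B) - min(R,G,B)) / max(R,G,B); 0 for pure black."""
    mx, mn = max(r, g, b), min(r, g, b)
    return 0.0 if mx == 0 else (mx - mn) / mx
```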
Effect 2: determine the font used when rendering the target barrage according to the barrage's emotion category. Specifically, build a font library in which each font is labeled with an emotion category, and look up the font corresponding to the barrage's emotion category.
Effect 3: automatically display an emotion icon at the beginning of the bullet screen content according to the barrage's emotion category.
Specifically, the emotion category of each icon is obtained through feature recognition of the icon; when the barrage's emotion category matches an icon's emotion category, that icon is automatically displayed in front of the barrage text.
For example, the emotion category "happiness" shows a red heart in front of the bullet screen text, "anger" shows a flame, and "fear" shows a skull.
The embodiment of the invention also provides a device for processing bullet screen information. Referring to fig. 5, fig. 5 is a structural diagram of a bullet screen information processing device according to an embodiment of the present invention. Because the principle by which this device solves the problem is similar to the bullet screen information processing method in the embodiment of the present invention, the implementation of the device can refer to the implementation of the method, and repeated details are not repeated.
As shown in fig. 5, the apparatus 500 for processing bullet screen information includes:
a first determining module 501, configured to determine, when the VR terminal plays a video, an initial display position of a target bullet screen at a user field angle of a first user;
a display module 502, configured to control the target barrage to move and display along a target motion trajectory, where the target motion trajectory takes the initial display position as a starting point, and the target motion trajectory is related to a label of the target barrage and a content feature of a video image at a field angle of the first user; and the label of the target bullet screen is obtained according to the content characteristics of the target bullet screen and the content characteristics of the video image under the field angle when the second user sends the target bullet screen.
Optionally, the first determining module 501 includes:
the first determining submodule is used for determining a hot spot picture area of the video image under the field angle of the first user; wherein the hot picture area is related to content features of the video image;
the second determining submodule is used for determining the heat value of the target bullet screen according to the label of the target bullet screen;
and the third determining submodule is used for determining the starting display position according to the hot spot picture area and the heat value.
Optionally, the third determining sub-module includes:
a first determining unit, configured to determine a Z-axis coordinate of the initial display position according to the heat value and a radius value of a sphere rendered by the VR terminal;
the second determining unit is used for determining the Y-axis coordinate corresponding to the central point of the hot spot picture area as the Y-axis coordinate of the initial display position;
a third determining unit configured to determine an X-axis coordinate of a target boundary point of a field angle of the first user as an X-axis coordinate of the start display position; wherein the target boundary points include: the top right vertex or the top left vertex.
Optionally, the first determining unit is specifically configured to determine the Z-axis coordinate of the initial display position according to the formula Z = r − k·h², where Z represents the Z-axis coordinate value; k is a coefficient that adjusts the threshold range; h represents the heat value of the target bullet screen; and r represents the radius of the sphere rendered by the VR terminal.
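The starting-depth formula Z = r − k·h² can be written directly in code: a hotter barrage gets a smaller Z, placing it nearer to the user along the depth axis within the rendered sphere. The function name and default coefficient are illustrative.

```python
def start_z(heat, radius, k=0.01):
    """Z = r - k*h^2: Z-axis coordinate of a barrage's start position.

    heat: heat value h of the target bullet screen.
    radius: radius r of the sphere rendered by the VR terminal.
    k: coefficient that adjusts the threshold range (assumed value).
    """
    return radius - k * heat ** 2
```

With this form, a zero-heat barrage starts on the sphere surface (Z = r), and Z decreases quadratically as heat grows.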
Optionally, under the condition that the tag of the target bullet screen is matched with the hot spot picture area, the target motion track passes through the hot spot picture area;
and under the condition that the label of the target bullet screen is not matched with the hot spot picture area, the target motion track does not pass through the hot spot picture area.
Optionally, the apparatus 500 further includes:
the first control module is used for controlling the display color of the target bullet screen in the overlapping area to be a target color when the target bullet screen and the hot spot picture area have the overlapping area, wherein the target color and the color of the hot spot picture area are complementary colors.
Optionally, the apparatus 500 further includes:
the first acquisition module is used for acquiring the distance between the target bullet screen and the hot spot picture area;
and the second control module is used for adjusting the font size of the target bullet screen according to the distance between the target bullet screen and the hotspot picture area.
Optionally, the display module 502 includes:
the first display sub-module is used for controlling the target bullet screen to move towards a target point when the target bullet screen passes through the hot spot picture area; and the target point is the position matched with the label of the target bullet screen in the hot spot picture.
Optionally, the apparatus 500 further includes:
a second obtaining module, configured to obtain an emotion category of the target barrage, where the emotion category includes: positive and negative emotions;
and the third control module is used for controlling the VR equipment to send prompt information when the target bullet screen enters a hot spot picture area when the emotion type of the target bullet screen is negative emotion.
Optionally, the apparatus 500 further includes:
the second determining module is used for carrying out emotion analysis on the target bullet screen and determining an emotion value of the target bullet screen;
and the fourth control module is used for controlling the display effect of the target bullet screen according to the emotion value.
Optionally, the apparatus 500 further includes:
and the fifth control module is used for controlling the motion trail of the target barrage to point to the eyeball of the first user under the condition that the sight line of the user is detected to be aligned with the target barrage and the sight line lasts for the preset time length, and the font size of the target barrage is increased to the target value at the preset rate.
Optionally, the apparatus 500 further includes:
the third acquisition module is used for acquiring a bullet screen set at the VR video playing time, wherein each bullet screen in the bullet screen set is matched with a label; the label is obtained according to the content characteristics of the bullet screen and the content characteristics of the video image under the user view angle when the bullet screen is sent;
and the third determining module is used for determining a target bullet screen to be displayed from the bullet screen set according to the video picture content corresponding to the view field angle of the first user at the current VR video playing moment and the label of each bullet screen.
Optionally, the apparatus 500 further includes:
and the sixth control module is used for controlling the target bullet screen to move out of the visual field range of the first user in an accelerated manner at a preset acceleration under the condition that the first user turns the head and the turning speed is greater than a preset speed value.
The apparatus provided in the embodiment of the present invention may implement the method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
As shown in fig. 6, a VR terminal according to an embodiment of the present invention includes: a processor 600; and a memory 620 connected to the processor 600 through a bus interface, wherein the memory 620 is used for storing programs and data used by the processor 600 in executing operations, and the processor 600 calls and executes the programs and data stored in the memory 620.
The processor 600 is used to read the program in the memory 620 and execute the following processes:
when the VR terminal plays a video, determining an initial display position of a target bullet screen under a field angle of a first user; controlling the target bullet screen to move and display along a target motion track, wherein the target motion track takes the starting display position as a starting point, and the target motion track is related to a label of the target bullet screen and content characteristics of a video image under the field angle of the first user; and the label of the target bullet screen is obtained according to the content characteristics of the target bullet screen and the content characteristics of the video image under the field angle when the second user sends the target bullet screen.
A transceiver 610 for receiving and transmitting data under the control of the processor 600.
Where in fig. 6, the bus architecture may include any number of interconnected buses and bridges, with various circuits being linked together, particularly one or more processors represented by processor 600 and memory represented by memory 620. The bus architecture may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface. The transceiver 610 may be a number of elements including a transmitter and a transceiver providing a means for communicating with various other apparatus over a transmission medium. The processor 600 is responsible for managing the bus architecture and general processing, and the memory 620 may store data used by the processor 600 in performing operations.
Optionally, the processor 600 is further configured to read the computer program and execute the following steps:
determining a hot picture area of the video image under the field angle of the first user; wherein the hotspot picture area is related to a content feature of the video image;
determining the heat value of the target bullet screen according to the label of the target bullet screen;
and determining the initial display position according to the hot spot picture area and the heat value.
Optionally, the processor 600 is further configured to read the computer program and execute the following steps:
determining the Z-axis coordinate of the initial display position according to the heat value and the sphere radius value rendered by the VR terminal;
determining the Y-axis coordinate corresponding to the central point of the hot spot picture area as the Y-axis coordinate of the initial display position;
determining the X-axis coordinate of the target boundary point of the field angle of the first user as the X-axis coordinate of the starting display position; wherein the target boundary points include: the top right vertex or the top left vertex.
Optionally, the processor 600 is further configured to read the computer program and execute the following steps:
according to the formula Z = r − k·h², determining the Z-axis coordinate of the initial display position;
wherein Z represents the Z-axis coordinate value; k is a coefficient that adjusts the threshold range; h represents the heat value of the target bullet screen; and r represents the radius of the sphere rendered by the VR terminal.
Optionally, under the condition that the tag of the target bullet screen is matched with the hot spot picture area, the target motion track passes through the hot spot picture area;
and under the condition that the label of the target bullet screen is not matched with the hot spot picture area, the target motion track does not pass through the hot spot picture area.
Optionally, the processor 600 is further configured to read the computer program and execute the following steps:
when the target bullet screen and the hot spot picture area have an overlapping area, controlling the display color of the target bullet screen in the overlapping area to be a target color, wherein the target color and the color of the hot spot picture area are complementary colors.
Optionally, the processor 600 is further configured to read the computer program and execute the following steps:
acquiring the distance between the target bullet screen and the hot spot picture area;
and adjusting the font size of the target bullet screen according to the distance between the target bullet screen and the hot spot picture area.
Optionally, the processor 600 is further configured to read the computer program and execute the following steps:
when the target bullet screen passes through the hot spot picture area, controlling the target bullet screen to move towards a target point; and the target point is the position matched with the label of the target bullet screen in the hot spot picture.
Optionally, the processor 600 is further configured to read the computer program and execute the following steps:
obtaining the emotion types of the target bullet screens, wherein the emotion types comprise: positive and negative emotions;
and when the emotion type of the target bullet screen is negative emotion, controlling the VR equipment to send prompt information when the target bullet screen enters a hot spot picture area.
Optionally, the processor 600 is further configured to read the computer program and execute the following steps:
performing emotion analysis on the target bullet screen to determine an emotion value of the target bullet screen;
and controlling the display effect of the target bullet screen according to the emotion value.
Optionally, the processor 600 is further configured to read the computer program and execute the following steps:
and under the condition that the sight of the user is detected to be aligned with the target barrage and the sight lasts for a preset duration, controlling the motion track of the target barrage to point to the eyeball of the first user, and increasing the font size of the target barrage to a target value at a preset rate.
Optionally, the processor 600 is further configured to read the computer program and execute the following steps:
acquiring a bullet screen set at the VR video playing time, wherein each bullet screen in the bullet screen set is matched with a label; the label is obtained according to the content characteristics of the bullet screen and the content characteristics of the video image under the user view field angle when the bullet screen is sent;
and determining a target bullet screen to be displayed from the bullet screen set according to the video picture content corresponding to the field angle of the first user at the current VR video playing moment and the label of each bullet screen.
Optionally, the processor 600 is further configured to read the computer program and execute the following steps:
and under the condition that the first user turns the head and the turning speed is greater than a preset speed value, controlling the target bullet screen to move out of the visual field range of the first user at a preset acceleration speed.
The device provided by the embodiment of the present invention may implement the above method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
In addition, the specific embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the method for processing bullet screen information, and can achieve the same technical effects, and is not described herein again to avoid repetition.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) to execute some steps of the transceiving method according to various embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The foregoing is a preferred embodiment of the present invention, and it should be noted that it is obvious to those skilled in the art that various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should be construed as the protection scope of the present invention.

Claims (10)

1. A bullet screen information processing method is applied to a Virtual Reality (VR) terminal and comprises the following steps:
when the VR terminal plays a video, determining an initial display position of a target bullet screen under a field angle of a first user;
controlling the target bullet screen to move and display along a target motion track, wherein the target motion track takes the starting display position as a starting point, and the target motion track is related to a label of the target bullet screen and content characteristics of a video image under the field angle of the first user;
and the label of the target bullet screen is obtained according to the content characteristics of the target bullet screen and the content characteristics of the video image under the field angle when the second user sends the target bullet screen.
2. The method for processing bullet screen information according to claim 1, wherein said determining the starting display position of the target bullet screen at the field angle of the first user comprises:
determining a hot picture area of the video image under the field angle of the first user; wherein the hot picture area is related to content features of the video image;
determining the heat value of the target bullet screen according to the label of the target bullet screen;
and determining the initial display position according to the hot spot picture area and the heat value.
3. The method for processing barrage information according to claim 2, wherein the determining the initial display position according to the hot spot picture area and the heat value includes:
determining the Z-axis coordinate of the initial display position according to the heat value and the sphere radius value rendered by the VR terminal;
determining the Y-axis coordinate corresponding to the central point of the hot spot picture area as the Y-axis coordinate of the initial display position;
determining the X-axis coordinate of the target boundary point of the field angle of the first user as the X-axis coordinate of the initial display position; wherein the target boundary points include: the top right vertex or the top left vertex.
4. The method of claim 3, wherein determining the Z-axis coordinate of the starting display position according to the heat value and the sphere radius value rendered by the VR device comprises:
according to the formula Z = r − k·h², determining the Z-axis coordinate of the initial display position;
wherein Z represents the Z-axis coordinate value; k is a coefficient that adjusts the threshold range; h represents the heat value of the target bullet screen; and r represents the radius of the sphere rendered by the VR terminal.
5. The method for processing bullet screen information according to claim 2, wherein in the process of controlling the target bullet screen to move and display along the target motion track, the method further comprises at least one of the following steps:
under the condition that the label of the target bullet screen is matched with the hot spot picture area, the target motion track passes through the hot spot picture area;
when the target bullet screen and the hot spot picture area have an overlapping area, controlling the display color of the target bullet screen in the overlapping area to be a target color, wherein the target color and the color of the hot spot picture area are complementary colors;
acquiring the distance between the target bullet screen and the hot spot picture area; adjusting the font size of the target bullet screen according to the distance between the target bullet screen and the hot spot picture area;
when the target bullet screen passes through the hot spot picture area, controlling the target bullet screen to move towards a target point; the target point is a position matched with the label of the target bullet screen in the hot spot picture;
obtaining the emotion types of the target bullet screens, wherein the emotion types comprise: positive and negative emotions; when the emotion type of the target barrage is negative emotion, controlling the VR equipment to send prompt information when the target barrage enters the hot spot picture area;
performing emotion analysis on the target bullet screen to determine an emotion value of the target bullet screen; controlling the display effect of the target bullet screen according to the emotion value;
and when it is detected that the first user's line of sight is aligned with the target bullet screen and stays on it for a preset duration, controlling the motion track of the target bullet screen to point towards the first user's eyes, and increasing the font size of the target bullet screen to a target value at a preset rate.
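Two of the display controls in claim 5, the complementary coloring in the overlap area and the distance-based font sizing, can be sketched as follows. The RGB complement and the linear size mapping are assumed interpretations; the claim fixes neither the color model nor the mapping:

```python
def complementary_color(rgb):
    """RGB complement, used where the bullet screen overlaps the
    hot spot picture area so the text contrasts with the background."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

def font_size_by_distance(distance, near=0.0, far=1000.0,
                          min_pt=12, max_pt=36):
    """Scale font size with distance to the hot spot picture area:
    closer bullet screens render larger. The linear interpolation and
    the point-size bounds are placeholder assumptions."""
    t = min(max((distance - near) / (far - near), 0.0), 1.0)
    return round(max_pt - t * (max_pt - min_pt))

print(complementary_color((200, 30, 90)))  # (55, 225, 165)
print(font_size_by_distance(0.0))          # 36
print(font_size_by_distance(1000.0))       # 12
```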
6. The method for processing bullet screen information according to claim 1, wherein before determining the starting display position of the target bullet screen at the field angle of the first user, the method further comprises:
acquiring a bullet screen set at the VR video playing time, wherein each bullet screen in the bullet screen set is matched with a label; the label is obtained according to the content characteristics of the bullet screen and the content characteristics of the video image under the user view angle when the bullet screen is sent;
and determining a target bullet screen to be displayed from the bullet screen set according to the video picture content corresponding to the field angle of the first user at the current VR video playing moment and the label of each bullet screen.
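The selection step in claim 6 amounts to filtering the bullet screen set by label against the content of the currently visible picture. A minimal sketch, where the tuple layout and label names are illustrative assumptions:

```python
def select_target_barrages(barrage_set, frame_labels):
    """Pick bullet screens whose labels match the content labels of
    the video picture inside the first user's current field of view.

    barrage_set: iterable of (text, label) pairs (assumed shape);
    frame_labels: set of content-feature labels for the visible frame.
    """
    return [text for text, label in barrage_set if label in frame_labels]

barrages = [("nice goal!", "football"),
            ("cute cat", "cat"),
            ("what a save", "football")]
print(select_target_barrages(barrages, {"football"}))
# ['nice goal!', 'what a save']
```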
7. The method for processing bullet screen information according to claim 1, wherein after controlling the target bullet screen to move and display along the target motion track, the method further comprises:
and when the first user turns the head at a turning speed greater than a preset speed value, controlling the target bullet screen to move out of the first user's field of view at a preset acceleration.
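The head-turn behavior in claim 7 reduces to a threshold check on the turning speed. The threshold and acceleration values below are placeholders, not values from the claim:

```python
def barrage_exit_acceleration(turn_speed_deg_s: float,
                              threshold_deg_s: float = 90.0,
                              exit_accel: float = 8.0) -> float:
    """If the first user turns their head faster than the preset speed
    value, return the preset acceleration used to move the bullet
    screen out of the field of view; otherwise return 0.0 and let the
    bullet screen keep its current trajectory."""
    if turn_speed_deg_s > threshold_deg_s:
        return exit_accel
    return 0.0

print(barrage_exit_acceleration(120.0))  # 8.0
print(barrage_exit_acceleration(30.0))   # 0.0
```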
8. An apparatus for processing bullet screen information, applied to a virtual reality (VR) terminal, wherein the apparatus comprises:
the first determining module is used for determining the initial display position of the target bullet screen under the user field angle of the first user when the VR terminal plays the video;
the display module is used for controlling the target bullet screen to move and display along a target motion track, wherein the target motion track takes the starting display position as a starting point, and the target motion track is related to a label of the target bullet screen and content characteristics of a video image under the field angle of the first user;
and the label of the target bullet screen is obtained according to the content characteristics of the target bullet screen and the content characteristics of the video image under the field angle when the second user sends the target bullet screen.
9. A VR terminal, comprising: a transceiver, a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor is configured to read a program in the memory to implement the steps in the processing method of bullet screen information according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps in the method for processing bullet screen information according to any one of claims 1 to 7.
CN202111588176.XA 2021-12-23 Barrage information processing method, barrage information processing device, barrage information processing terminal and computer readable storage medium Active CN114374882B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111588176.XA CN114374882B (en) 2021-12-23 Barrage information processing method, barrage information processing device, barrage information processing terminal and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN114374882A true CN114374882A (en) 2022-04-19
CN114374882B CN114374882B (en) 2024-07-02



Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104811816A (en) * 2015-04-29 2015-07-29 北京奇艺世纪科技有限公司 Video image object bullet screen marking method, device and system
CN105430471A (en) * 2015-11-26 2016-03-23 无锡天脉聚源传媒科技有限公司 Method and device for displaying live commenting in video
CN108347657A (en) * 2018-03-07 2018-07-31 北京奇艺世纪科技有限公司 A kind of method and apparatus of display barrage information
CN109429087A (en) * 2017-06-26 2019-03-05 上海优土视真文化传媒有限公司 Display methods, medium and the system of virtual reality video barrage
CN110460899A (en) * 2019-06-28 2019-11-15 咪咕视讯科技有限公司 Methods of exhibiting, terminal device and the computer readable storage medium of barrage content
CN112804582A (en) * 2020-03-02 2021-05-14 腾讯科技(深圳)有限公司 Bullet screen processing method and device, electronic equipment and storage medium
CN113542898A (en) * 2021-07-09 2021-10-22 北京爱奇艺科技有限公司 Bullet screen track generation and bullet screen display method and device


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114915832A (en) * 2022-05-13 2022-08-16 咪咕文化科技有限公司 Bullet screen display method and device and computer readable storage medium
CN114915832B (en) * 2022-05-13 2024-02-23 咪咕文化科技有限公司 Barrage display method and device and computer readable storage medium
CN115297355A (en) * 2022-08-02 2022-11-04 北京奇艺世纪科技有限公司 Bullet screen display method, bullet screen generation device, electronic equipment and storage medium
CN115297355B (en) * 2022-08-02 2024-01-23 北京奇艺世纪科技有限公司 Barrage display method, barrage generation method, barrage display device, electronic equipment and storage medium
WO2024088375A1 (en) * 2022-10-28 2024-05-02 北京字跳网络技术有限公司 Bullet comment presentation method and apparatus, device, and storage medium
CN116628265A (en) * 2023-07-25 2023-08-22 北京天平地成信息技术服务有限公司 VR content management method, management platform, management device, and computer storage medium

Similar Documents

Publication Publication Date Title
US11182591B2 (en) Methods and apparatuses for detecting face, and electronic devices
CN107197384B (en) The multi-modal exchange method of virtual robot and system applied to net cast platform
US10223838B2 (en) Method and system of mobile-device control with a plurality of fixed-gradient focused digital cameras
WO2018033155A1 (en) Video image processing method, apparatus and electronic device
KR101907136B1 (en) System and method for avatar service through cable and wireless web
CN107911736B (en) Live broadcast interaction method and system
KR101951761B1 (en) System and method for providing avatar in service provided in mobile environment
EP3885965A1 (en) Image recognition method based on micro facial expressions, apparatus and related device
WO2018033154A1 (en) Gesture control method, device, and electronic apparatus
JP2019535059A (en) Sensory eyewear
CN106060572A (en) Video playing method and device
US10192258B2 (en) Method and system of augmented-reality simulations
US10977510B2 (en) Information-processing device and information-processing method
US20180373328A1 (en) Program executed by a computer operable to communicate with head mount display, information processing apparatus for executing the program, and method executed by the computer operable to communicate with the head mount display
CN108205684B (en) Image disambiguation method, device, storage medium and electronic equipment
US10705720B2 (en) Data entry system with drawing recognition
KR20170002100A (en) Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same
US20190302880A1 (en) Device for influencing virtual objects of augmented reality
CN111405360A (en) Video processing method and device, electronic equipment and storage medium
CN112272295A (en) Method for generating video with three-dimensional effect, method for playing video, device and equipment
CN112149599B (en) Expression tracking method and device, storage medium and electronic equipment
CN112637692B (en) Interaction method, device and equipment
US11087520B2 (en) Avatar facial expression generating system and method of avatar facial expression generation for facial model
CN113497946A (en) Video processing method and device, electronic equipment and storage medium
CN114374882B (en) Barrage information processing method, barrage information processing device, barrage information processing terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant