CN108200463B - Bullet screen expression package generation method, server and bullet screen expression package generation system - Google Patents

Bullet screen expression package generation method, server and bullet screen expression package generation system

Info

Publication number
CN108200463B
CN108200463B (application CN201810057027.2A)
Authority
CN
China
Prior art keywords
bullet screen
server
text
playing image
screen text
Prior art date
Legal status
Active
Application number
CN201810057027.2A
Other languages
Chinese (zh)
Other versions
CN108200463A (en)
Inventor
余露露
Current Assignee
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd
Priority to CN201810057027.2A
Publication of CN108200463A
Application granted
Publication of CN108200463B
Legal status: Active

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/4788 — Supplemental services communicating with other users, e.g. chatting
    • H04N 21/4314 — Generation of visual interfaces / content rendering involving specific graphical features, for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
    • H04N 21/4316 — Generation of visual interfaces / content rendering involving specific graphical features, for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N 21/485 — End-user interface for client configuration
    • H04N 21/4884 — Data services, e.g. news ticker, for displaying subtitles
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/10 — User-to-user messaging characterised by the inclusion of specific contents: multimedia information
    • H04L 51/52 — User-to-user messaging for supporting social networking services

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Computing Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method for generating a bullet screen emoticon, which comprises the following steps: S100: receiving a bullet screen emoticon generation instruction for a current video; S200: capturing a playing image of the current video containing at least one bullet screen text; S300: editing the video picture in the playing image and/or the bullet screen text in the playing image; S400: saving the edited playing image to generate the bullet screen emoticon. With this generation method, an image-and-text emoticon can be produced by combining video content with bullet screen content; the operation is quick and convenient, the user's emotion is easy to express, and the method also helps spread and popularize bullet screen culture, further stimulating the user's interest in watching bullet screen videos.

Description

Bullet screen expression package generation method, server and bullet screen expression package generation system
Technical Field
The invention relates to the field of internet technology, and in particular to a method for generating a bullet screen emoticon, a server, and a system for generating the bullet screen emoticon.
Background
The emoticon (expression package) is essentially a form of popular culture. With the continuous development of social networking, the way people communicate has changed accordingly: from the earliest plain-text exchanges to simple symbols, emoji and emoticon packages, expression culture has gradually become more and more diverse, with self-made pictures built from popular elements used for communication. These pictures are humorous and deliberately exaggerated in composition; people derive enjoyment from collecting and sharing them, and by showing off their collections they gain recognition from others and a sense of psychological satisfaction.
At present, the emoticons used by users mostly come from third-party producers, and most of them are designed and drawn by those producers with drawing tools, animation tools and the like, so the production process is complex.
Therefore, the invention provides a new method for generating a bullet screen emoticon, which can produce an image-and-text emoticon by combining the video with the bullet screen content. It is convenient to operate, makes it easy for the user to express emotion, and also helps spread and popularize bullet screen culture, further stimulating the user's interest in watching bullet screen videos.
Disclosure of Invention
In order to overcome the above technical defects, the object of the invention is to provide a method for generating a bullet screen emoticon, a server, and a system for generating the bullet screen emoticon, with which a user can combine video and bullet screen content into an image-and-text emoticon. The operation is simple and the user's emotion is easy to express; at the same time, the method helps spread and popularize bullet screen culture and further arouses the user's interest in watching bullet screen videos.
The invention provides a method for generating a bullet screen emoticon, which comprises the following steps:
s100: receiving a barrage emoticon generation instruction of a current video;
s200: capturing a playing image of the current video containing at least one bullet screen text;
s300: editing the video picture in the playing image and/or the bullet screen text in the playing image;
s400: and storing the edited playing image to generate the bullet screen facial expression package.
Preferably, the step S200 of capturing the playing image of the current video containing at least one bullet screen text comprises:
s210: acquiring a playing image of the current video according to the barrage expression package generation instruction;
s220: identifying and extracting the bullet screen text in the playing image;
s230: and generating a container containing the bullet screen text above the video picture.
Preferably, the step S200 of capturing the playing image of the current video containing at least one bullet screen text comprises:
s240: and sending a prompt for editing the bullet screen text and the video picture.
Preferably, the step S220 of identifying and extracting the bullet screen text in the playing image includes:
s221: extracting the characteristics of the played image to obtain characters contained in the played image;
s222: identifying the bullet screen text in the characters according to the layer where the characters are located;
s223: and separating the bullet screen text from the playing image.
Preferably, the step S300 of editing the video frame in the playing image and/or the bullet screen text in the playing image includes:
s310: reading an editing instruction;
s320: judging whether the editing object of the editing instruction is the video picture or the bullet screen text;
s330: when the editing object is the video picture, processing the playing image according to the editing instruction;
s330': and when the editing object is the bullet screen text, adjusting the text attribute of the bullet screen text and/or deleting the bullet screen text according to the editing instruction.
Preferably, the generating method further comprises:
s500: selecting the attribute of the facial expression package of the bullet screen facial expression package;
s600: and saving and/or sharing the bullet screen facial expression package.
The invention provides a generation method of a barrage emoticon, which is applied between a server and a current user side and comprises the following steps:
the server receives a barrage emoticon generation instruction sent by the current user end to the current video;
the server captures a playing image of the current video containing at least one bullet screen text;
the current user side edits the video picture in the playing image and/or the bullet screen text in the playing image through the server;
and the server stores the edited playing image to generate the bullet screen expression package.
Preferably, the step of capturing, by the server, a play image of the current video containing at least one bullet screen text comprises:
the server acquires the playing image according to the barrage emoticon generation instruction;
the server identifies and extracts the bullet screen text in the playing image;
and the server generates a container containing the bullet screen text above the video picture.
Preferably, the step of capturing, by the server, a play image of the current video containing at least one bullet screen text comprises:
the server further sends a prompt for editing the barrage text and the video picture.
Preferably, the step of the server identifying and extracting the bullet screen text in the playing image comprises:
the server extracts the characteristics of the played image to acquire characters contained in the played image;
the server identifies the bullet screen text in the characters according to the layer where the characters are located;
and the server separates the bullet screen text from the playing image.
Preferably, the step of the current user side editing the video picture in the playing image and/or the barrage text in the playing image through the server includes:
the server reads an editing instruction sent by the current user side;
the server judges whether the editing object of the editing instruction is the video picture or the bullet screen text;
when the editing object is the video picture, the server processes the playing image according to the editing instruction;
and when the editing object is the bullet screen text, the server adjusts the text attribute of the bullet screen text and/or deletes the bullet screen text according to the editing instruction.
Preferably, the generating method further comprises:
the current user side selects the expression package attribute of the bullet screen expression package through the server;
and the current user side stores and/or shares the bullet screen facial expression package through the server.
The invention provides a server, which comprises a processor and a storage device, wherein the storage device stores a computer program, and the processor calls and executes the computer program to realize the generation method of the barrage emoticon as set forth in any one of claims 1-6.
The invention provides a generation system of a barrage emoticon, which comprises a server and a current user side;
the server receives a barrage emoticon generation instruction sent by the current user end to the current video;
the server captures a playing image of the current video containing at least one bullet screen text;
the current user side edits the video picture in the playing image and/or the bullet screen text in the playing image through the server;
and the server stores the edited playing image to generate the bullet screen expression package.
Preferably, the step of capturing, by the server, a play image of the current video containing at least one bullet screen text comprises:
the server acquires the playing image according to the barrage emoticon generation instruction;
the server identifies and extracts the bullet screen text in the playing image;
and the server generates a container containing the bullet screen text above the video picture.
Preferably, the step of capturing, by the server, a play image of the current video containing at least one bullet screen text comprises:
the server further sends a prompt for editing the barrage text and the video picture.
Preferably, the step of the server identifying and extracting the bullet screen text in the playing image comprises:
the server extracts the characteristics of the played image to acquire characters contained in the played image;
the server identifies the bullet screen text in the characters according to the layer where the characters are located;
and the server separates the bullet screen text from the playing image.
Preferably, the step of the current user side editing the video picture in the playing image and/or the barrage text in the playing image through the server includes:
the server reads an editing instruction sent by the current user side;
the server judges whether the editing object of the editing instruction is the video picture or the bullet screen text;
when the editing object is the video picture, the server processes the playing image according to the editing instruction;
and when the editing object is the bullet screen text, the server adjusts the text attribute of the bullet screen text and/or deletes the bullet screen text according to the editing instruction.
Preferably, the generating system further comprises: the current user side selects the expression package attribute of the bullet screen expression package through the server;
and the current user side stores and/or shares the bullet screen facial expression package through the server.
After the above technical scheme is adopted, compared with the prior art, the invention has the following beneficial effects:
1. the bullet screen text is combined with the video content to generate a corresponding bullet screen emoticon;
2. the bullet screen text and the playing image in the emoticon can be further optimized and adjusted;
3. the generation process of the bullet screen emoticon is simple, and the operation is convenient and fast;
4. the user's enthusiasm for watching bullet screens can be effectively increased;
5. the spread and popularization of bullet screen culture are facilitated;
6. a better use experience is provided for the user.
Drawings
Fig. 1 is a schematic flow chart illustrating a method for generating a bullet screen emoticon according to a preferred embodiment of the present invention;
FIG. 2 is a flowchart of step S200 according to a preferred embodiment of the present invention;
FIG. 3 is a flowchart of step S300 according to a preferred embodiment of the present invention;
FIG. 4 is a diagram illustrating a playing image of a current video according to a preferred embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating operation of a bullet screen emoticon generated in accordance with a preferred embodiment of the present invention;
fig. 6 is a simplified diagram of a bullet screen emoticon generated according to a preferred embodiment of the present invention.
Detailed Description
The advantages of the invention are further illustrated in the following description of specific embodiments in conjunction with the accompanying drawings.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon" or "in response to a determination".
In the description of the present invention, it should be understood that the numerical references before the steps do not identify the order of performing the steps, but merely serve to facilitate the description of the present invention and to distinguish each step, and thus should not be construed as limiting the present invention.
Referring to fig. 1, a schematic flow chart of a method for generating a bullet screen emoticon according to a preferred embodiment of the present invention is shown, in which the bullet screen emoticon generation method includes the following steps:
s100: receiving a barrage emoticon generation instruction of a current video;
when a user watches a current video in bullet screen mode on a video website through the website's server, using a current user side such as a mobile phone, notebook computer or tablet computer, and the user is interested in a certain playing image containing bullet screen text and wants to make a bullet screen emoticon from it, the user can send a bullet screen emoticon generation instruction through the current user side to the interactive interface of the video website, so as to control the server of the video website to generate a corresponding bullet screen emoticon based on the playing image, containing bullet screen text, currently displayed by the video. In this embodiment, the bullet screen emoticon generation instruction may take various forms, including but not limited to: clicking a bullet screen emoticon function button, shaking the current user terminal, long-pressing the playing interface, tapping the screen of the current user terminal, sliding the playing window, and the like. The specific form of the generation instruction can be the server's default setting or can be modified according to the user's habits.
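For illustration only, the sketch below shows one way a client could map the trigger gestures listed above to a single generation instruction carrying the video identifier and current playback position; the event names and the REST endpoint are hypothetical assumptions, not part of this disclosure.

```python
# Hypothetical client-side dispatcher: several trigger gestures all issue one
# "generate bullet-screen emoticon" instruction. Event names and the endpoint
# are illustrative assumptions only.
import time
import requests

TRIGGER_EVENTS = {"emoticon_button_click", "device_shake", "player_long_press",
                  "screen_tap", "player_window_slide"}

def on_player_event(event_name: str, video_id: str, position_ms: int) -> None:
    """Send a generation instruction carrying the video id and current playback time."""
    if event_name not in TRIGGER_EVENTS:
        return
    payload = {
        "video_id": video_id,
        "position_ms": position_ms,      # the time point whose frame will be captured
        "client_ts": int(time.time() * 1000),
    }
    # Assumed REST endpoint on the video site's server.
    requests.post("https://example.com/api/danmaku-emoticon/generate",
                  json=payload, timeout=5)
```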
S200: capturing a playing image of the current video containing at least one bullet screen text;
when the server of the video website receives the bullet screen emoticon generation instruction sent by the user, it enters the bullet screen emoticon making interface and at the same time displays the playing image that the current video was showing at the time point when the instruction was received. That is, the bullet screen emoticon generation instruction sent by the user triggers the server of the video website to capture the playing image displayed at that time point, and simultaneously controls the server to switch the current playing page to the bullet screen emoticon making interface. The captured playing image also contains the bullet screen text displayed by the current video at that time point.
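As a minimal server-side sketch, assuming ffmpeg is available and assuming an illustrative bullet screen record layout (start time, duration, text) that the patent does not prescribe, the frame shown at the instruction's time point and the comments active at that moment could be gathered as follows:

```python
# Server-side sketch: grab the frame shown at the instruction's time point and the
# bullet-screen comments active at that moment. The record layout is an assumption.
import subprocess
from dataclasses import dataclass

@dataclass
class Danmaku:
    start_ms: int       # when the comment enters the screen
    duration_ms: int    # how long it stays visible
    text: str

def capture_frame(video_path: str, position_ms: int, out_path: str) -> str:
    """Extract the single frame displayed at position_ms using ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-ss", f"{position_ms / 1000:.3f}", "-i", video_path,
         "-frames:v", "1", out_path],
        check=True,
    )
    return out_path

def active_danmaku(comments: list[Danmaku], position_ms: int) -> list[Danmaku]:
    """Return the comments that are on screen at the captured time point."""
    return [d for d in comments
            if d.start_ms <= position_ms < d.start_ms + d.duration_ms]
```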
S300: editing the playing image and/or the bullet screen text in the playing image;
after the current user enters the bullet screen emoticon making interface through the server of the video website, the current user can begin to edit the playing image, containing bullet screen text, that the server has captured. It should be understood that the editing operation includes editing of the video picture in the playing image and editing of the bullet screen text in the playing image. The current user's specific editing operations can be performed through corresponding editing icons in the making interface. For example, an editing tool may be provided in the bullet screen emoticon making interface, with icons of different functions, including but not limited to: color selection, font size selection, movement, deletion, cropping, beautification, magnification, reduction, filter addition, text addition and the like.
S400: and storing the edited playing image to generate the bullet screen facial expression package.
After the current user has finished editing the video picture and the bullet screen text of the playing image in the bullet screen emoticon making interface through the current user side, and the edited content has been saved, the server of the video website generates a bullet screen emoticon from the edited playing image. This bullet screen emoticon combines the video content and bullet screen content that interest the current user, which makes it convenient for the current user to express emotion and increases the fun of watching the video.
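Purely as an illustrative sketch of step S400, assuming the Pillow imaging library and a simple (text, x, y, color) layout for the edited bullet screen texts (none of which is prescribed by the patent), the edited texts could be drawn back onto the edited picture and saved as the emoticon image like this:

```python
# Sketch of step S400: draw the edited bullet-screen texts back onto the edited
# video picture and save the result as the emoticon image. Pillow and the default
# font are assumed implementation choices.
from PIL import Image, ImageDraw, ImageFont

def render_emoticon(picture_path: str,
                    texts: list[tuple[str, int, int, str]],
                    out_path: str) -> None:
    img = Image.open(picture_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for text, x, y, color in texts:
        font = ImageFont.load_default()      # real use would pick a sized CJK font
        draw.text((x, y), text, fill=color, font=font)
    img.save(out_path)   # the saved image is the generated bullet-screen emoticon
```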
In a preferred embodiment, the method for generating a barrage emoticon further includes:
s500: selecting the attribute of the facial expression package of the bullet screen facial expression package;
in a preferred embodiment, the user can select emoticon attributes for the generated bullet screen emoticon, such as its size, by taking into account how the emoticon will be used and the definition it requires. This avoids wasting storage space on the current user side and also avoids excessive traffic consumption when the user later shares the bullet screen emoticon with other users.
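A minimal sketch of this size selection, assuming Pillow and a set of illustrative size presets (the preset names and pixel values are assumptions, not taken from the patent):

```python
# Sketch of step S500: export the generated emoticon at a user-chosen size so it
# stays sharp without wasting storage or upload traffic.
from PIL import Image

SIZE_PRESETS = {"small": 240, "medium": 480, "large": 720}   # max edge in pixels

def export_emoticon(src_path: str, dst_path: str, preset: str = "medium") -> None:
    max_edge = SIZE_PRESETS[preset]
    with Image.open(src_path) as img:
        img.thumbnail((max_edge, max_edge))   # preserves aspect ratio
        img.save(dst_path, optimize=True)
```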
S600: and saving and/or sharing the bullet screen facial expression package.
After the current user has finished selecting the size of the bullet screen emoticon, the generated emoticon can be further saved on the current user side or shared with others. Specifically, the current user may share the bullet screen emoticon with a followed user on the video website or upload it to the emoticon area of the video website, and may also share and use it on a third-party platform through the server of the video website, including but not limited to social platforms such as WeChat, Weibo (microblog) and QQ.
Referring to fig. 2, in an embodiment, the step S200 of capturing the playing image of the current video including at least one bullet screen text includes:
s210: acquiring the playing image according to the barrage expression package generation instruction;
s220: identifying and extracting the bullet screen text in the playing image;
s230: and generating a container containing the bullet screen text above the video picture.
After the server of the video website receives the bullet screen emoticon generation instruction sent by the user through the current user side, the server automatically acquires the playing image of the current video at the time point when the instruction was received. The playing image contains the video picture of the current video and the bullet screen text displayed in that video picture at that time point. To make it convenient for the user to edit the video picture and the bullet screen text separately, in this embodiment the server of the video website further recognizes and extracts the bullet screen text based on text-image recognition, so as to separate the bullet screen text from the video picture. The bullet screen text separated from the video picture can then be floated above the video picture in an editable form, for example displayed in a container shown above the video picture. In this way the user can easily select and edit the bullet screen text in the playing image. In a preferred embodiment, to further facilitate subsequent editing, a background color different from that of the video picture may be set for the container, and some shortcut editing buttons may be placed on the container, including but not limited to a delete button, zoom-in/zoom-out buttons, a drag button and the like. These shortcut editing buttons further simplify the user's editing of the bullet screen text in the bullet screen emoticon.
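As an illustrative data-structure sketch only, the editable container described above could be represented as follows; the field names, the grey semi-transparent background value and the shortcut button list are assumptions for illustration, not part of the disclosure.

```python
# Sketch of the editable container generated above the video picture (step S230).
# Each extracted bullet-screen text becomes a floating element with its own
# background colour and shortcut buttons. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class DanmakuContainer:
    text: str
    x: int                      # position over the video picture, in pixels
    y: int
    font_size: int = 24
    font_color: str = "#FFFFFF"
    background_color: str = "#80808080"   # grey, semi-transparent, distinct from the picture
    shortcut_buttons: list[str] = field(default_factory=lambda: ["delete", "zoom", "drag"])

def build_containers(texts: list[tuple[str, int, int]]) -> list[DanmakuContainer]:
    """Wrap each extracted bullet-screen text (text, x, y) in an editable container."""
    return [DanmakuContainer(text=t, x=x, y=y) for t, x, y in texts]
```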
In a preferred embodiment, the step S200 of capturing the playing image of the current video containing at least one bullet-screen text further comprises: s240: and sending a prompt for editing the bullet screen text and the video picture.
After the server of the video website has recognized and extracted the video picture and the bullet screen text from the playing image, in order to let the user immediately edit the video picture and the bullet screen text floating above it, in a preferred embodiment the server further sends the current user side a prompt for editing the bullet screen text and the video picture. The prompt may take various forms: for example, the server can pop up a prompt window on the current user side, play a prompt tone, or directly display shortcut editing buttons on the container of the bullet screen text and on the video picture.
In a preferred embodiment, the step S220 of identifying and extracting the bullet screen text in the playing image includes:
s221: extracting the characteristics of the played image to obtain characters contained in the played image;
s222: identifying the bullet screen text in the characters according to the layer where the characters are located;
s223: and separating the bullet screen text from the playing image.
In a preferred embodiment, the operation of recognizing and extracting the bullet screen text from the playing image by the server of the video website is mainly realized based on character-image recognition technology, and includes the following specific steps. First, the server preprocesses the captured playing image, for example by binarization, noise removal and tilt correction, to prepare for accurate extraction of the character content. Then the server performs feature extraction on the processed playing image and compares the extracted features with a character database to obtain the character content in the playing image. However, since the playing picture may also contain subtitles that could be confused with the recognized character content, this embodiment further distinguishes the position areas where the recognized characters are displayed. The server can distinguish the bullet screen text from the subtitles among the acquired characters based on the different layers on which they are rendered, and thereby separate the bullet screen text from the playing image and float it above the video picture in editable form.
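A rough sketch of this recognition pipeline, assuming OpenCV and pytesseract as the toolchain and assuming the server can match recognized strings against the texts it rendered on the danmaku layer; the patent itself does not prescribe these tools, so this is illustrative only.

```python
# Sketch of steps S221-S223: preprocess the captured play image, extract the text
# it contains, then keep only the strings known to come from the danmaku layer
# (subtitles, rendered on a different layer, are discarded).
import cv2
import pytesseract

def preprocess(image_path: str):
    """Binarization and noise removal before feature extraction (tilt correction omitted)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.medianBlur(binary, 3)

def extract_text_strings(image_path: str) -> list[str]:
    """Feature-extraction step: obtain the character strings present in the play image."""
    processed = preprocess(image_path)
    data = pytesseract.image_to_data(processed, lang="chi_sim",
                                     output_type=pytesseract.Output.DICT)
    return [t for t in data["text"] if t.strip()]

def separate_danmaku(strings: list[str], danmaku_layer_texts: set[str]) -> list[str]:
    """Keep only strings known to come from the danmaku layer; subtitles are dropped."""
    return [s for s in strings if s in danmaku_layer_texts]
```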
Referring to fig. 3, in an embodiment, the step S300 of editing the video frame in the playing image and/or the bullet screen text in the playing image includes:
s310: reading an editing instruction;
s320: judging whether the editing object of the editing instruction is the video picture or the bullet screen text;
s330: when the editing object is the video picture, processing the playing image according to the editing instruction;
s330': when the editing object is the bullet screen text, adjusting a text attribute of the bullet screen text according to the editing instruction, such as font type, font size, font color, background color, display position or display size, and/or deleting the bullet screen text.
In this embodiment, after the server of the video website has captured the playing image containing the bullet screen text according to the bullet screen emoticon generation instruction sent by the user, the user can edit the playing image through the current user terminal. First, the user can issue an editing instruction to the server; the editing instruction may include operations such as tapping, clicking, double-clicking, long-pressing, dragging and sliding on the video picture or on the bullet screen text in the playing image. After receiving the editing instruction sent by the user, the server judges whether the object the user wants to edit is the video picture or the bullet screen text according to the object and the area on which the editing instruction acts. When the server judges that the object to be edited is the video picture, the bullet screen emoticon making interface displays an editing toolbar for the video picture, and the user can edit the video picture by selecting a button in the editing toolbar, including but not limited to: cropping, beautification, magnification, reduction and the like. When the server judges that the object to be edited is the bullet screen text, the bullet screen emoticon making interface displays an editing toolbar for the bullet screen text, and the user can edit the bullet screen text by selecting a button in the editing toolbar or by directly clicking the shortcut buttons on the bullet screen text container, including but not limited to: adjusting text attributes of the bullet screen text such as font type, font size, font color, background color, display position and display size, deleting the bullet screen text, and the like. It can be understood that, by controlling the server through the current user side, the user can edit the layout, display content, display style and display effect of the bullet screen text on the video picture, and can also further edit the video picture itself, so that a personalized bullet screen emoticon can be made with simple and quick operations.
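For illustration, a minimal dispatch sketch of steps S310-S330'; the instruction fields ("target", "action", "index", "attribute", "value") are hypothetical and only show how the picture/danmaku branching could be organized:

```python
# Sketch of steps S310-S330': decide whether an edit instruction targets the video
# picture or a bullet-screen text container, then apply the matching operation.
# The instruction format and container attribute names are illustrative assumptions.
def handle_edit(instruction: dict, picture_ops: list, containers: list) -> None:
    """Dispatch one edit instruction to the picture or to a danmaku container."""
    target = instruction["target"]          # "picture" or "danmaku"
    action = instruction["action"]          # e.g. "crop", "set_attribute", "delete"
    if target == "picture":
        # S330: record a processing step for the play image (crop, beautify, scale, ...)
        picture_ops.append((action, instruction.get("params", {})))
    elif target == "danmaku":
        container = containers[instruction["index"]]
        if action == "delete":
            containers.remove(container)     # S330': delete the bullet-screen text
        else:
            # S330': adjust a text attribute (font, size, colour, background, position)
            setattr(container, instruction["attribute"], instruction["value"])
```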
In the server disclosed in the present invention, the computer program stored therein implements the steps described in the above embodiments when executed by the processor, and further description is omitted here.
In addition, when the bullet screen emoticon generation system built from the server and the current user terminal is applied, the following steps can be executed according to the respective roles of the server and the current user terminal:
the server receives a barrage emoticon generation instruction sent by the current user end to the current video;
the server captures a playing image of the current video containing at least one bullet screen text;
the current user side edits the video picture in the playing image and/or the bullet screen text in the playing image through the server;
and the server stores the edited playing image to generate the bullet screen expression package.
Preferably, the step of capturing, by the server, a play image of the current video containing at least one bullet screen text comprises:
the server acquires the playing image according to the barrage emoticon generation instruction;
the server identifies and extracts the bullet screen text in the playing image;
and the server generates a container containing the bullet screen text above the video picture.
Preferably, the step of capturing, by the server, a play image of the current video containing at least one bullet screen text comprises: the server further sends a prompt for editing the barrage text and the video picture.
Preferably, the step of the server identifying and extracting the bullet screen text in the playing image comprises:
the server extracts the characteristics of the played image to acquire characters contained in the played image;
the server identifies the bullet screen text in the characters according to the layer where the characters are located;
and the server separates the bullet screen text from the playing image.
Preferably, the step of the current user side editing the video picture in the playing image and/or the barrage text in the playing image through the server includes:
the server reads an editing instruction sent by the current user side;
the server judges whether the editing object of the editing instruction is the video picture or the bullet screen text;
when the editing object is the video picture, the server processes the playing image according to the editing instruction;
and when the editing object is the bullet screen text, the server adjusts the font type, font size, font color, background color, display position, display size and/or deletes the bullet screen text according to the editing instruction.
Preferably, the server and the current user terminal further perform the following steps:
the current user side selects the expression package attribute of the bullet screen expression package through the server;
and the current user side stores and/or shares the bullet screen facial expression package through the server.
Example one
A user uses a smartphone to watch a certain video on a video website in bullet screen mode through the website's server. When the video has played to 1 minute 30 seconds, the user becomes interested in a playing image displayed in the video and wants to make a bullet screen emoticon based on it.
When the server of the video website receives the bullet screen emoticon generation instruction sent by the user, it immediately switches the current playing interface and enters the bullet screen emoticon making interface, in which it displays the playing image shown at 1 minute 30 seconds of the video, automatically captured by the server when the user clicked the bullet screen emoticon function button. That is, the generation instruction issued by clicking the bullet screen emoticon function button triggers the server of the video website to capture the playing image displayed by the video at that time point: for example, a screen capture operation is performed on the playing image displayed at the 1 minute 30 second mark, and at the same time the server switches the currently playing page to the bullet screen emoticon making interface. The playing image captured by the server contains the video picture of the video at 1 minute 30 seconds and the bullet screen text displayed in that picture at that moment. To make it convenient for the user to edit the video picture and the bullet screen text separately, the server of the video website further recognizes and extracts the bullet screen text in the playing image based on character-image recognition technology, separates the bullet screen text from the video picture, and floats the separated bullet screen text above the video picture in an editable form, i.e. in the form of a container such as a floating window or floating layer. To further help the user distinguish the bullet screen text from the video picture and edit them separately, in this embodiment the container of the bullet screen text is given a grey background color that distinguishes it from, and makes it stand out against, the video picture. In addition, the container is provided with a quick delete button, namely an X-shaped button at its upper right corner; the user can delete the bullet screen text directly by clicking this X-shaped button, which is convenient and fast.
Then the user can edit the bullet screen text and the video picture on the server through the smartphone. The specific editing operations are performed by clicking the corresponding editing buttons in the making interface. An editing menu is provided on the right side of the bullet screen emoticon making interface, with different editing function buttons, including but not limited to: color selection, font size selection, movement, deletion, cropping, beautification, magnification, reduction, text addition and the like. The specific steps are as follows. First, the user clicks on the playing image in the bullet screen emoticon making interface through the smartphone; then the server of the video website identifies whether the area clicked by the user is the video picture or the bullet screen text, and confirms the object the user wants to edit based on this identification. When the area clicked by the user is the video picture, the video picture is to be edited; the user can then further click the editing function buttons on the right, such as the buttons for cropping, beautification, magnification, reduction and text addition, to apply the corresponding editing effects to the video picture. When the area clicked by the user is the bullet screen text, the bullet screen text is to be edited; the user can then further click the editing function buttons on the right, such as color selection, font size selection, movement, deletion, magnification, reduction and text addition, to modify the color, font style, font size, display layout, background color, display size and content of the bullet screen text, delete incompletely displayed bullet screen text, delete unwanted bullet screen text, and so on.
After the user has finished editing the video picture and the bullet screen text of the playing image in the bullet screen emoticon making interface on the server of the video website through the smartphone, the user clicks the save button in the right-hand menu, and once the modifications are saved, the server of the video website generates a unique bullet screen emoticon from the playing image edited by the user.
This bullet screen emoticon combines the video content and bullet screen text that interest the user and is edited and adjusted according to the user's preferences, so using it makes it convenient for the user to express emotion, increases the user's interest and participation in watching the video, and provides the user with a greatly improved experience.
Example two
A user uses a smartphone to watch a certain video on a video website in bullet screen mode through the website's server. When the video has played to 1 minute 30 seconds, the user becomes interested in a playing image displayed in the video and wants to make a bullet screen emoticon based on it.
When the server of the video website receives the bullet screen emoticon generation instruction sent by the user, it immediately switches the current playing interface and enters the bullet screen emoticon making interface, in which it displays the playing image shown at 1 minute 30 seconds of the video, automatically captured by the server when the user clicked the bullet screen emoticon function button. That is, the generation instruction issued by clicking the bullet screen emoticon function button triggers the server of the video website to capture the playing image displayed by the video at that time point: for example, a screen capture operation is performed on the playing image displayed at the 1 minute 30 second mark, and at the same time the server switches the currently playing page to the bullet screen emoticon making interface. The playing image captured by the server contains the video picture of the video at 1 minute 30 seconds and the bullet screen text displayed in that picture at that moment. To make it convenient for the user to edit the video picture and the bullet screen text separately, the server of the video website further recognizes and extracts the bullet screen text in the playing image based on character-image recognition technology, separates the bullet screen text from the video picture, and floats the separated bullet screen text above the video picture in an editable form, namely in the form of a container above the video picture. To further help the user distinguish the bullet screen text from the video picture and edit them separately, in this embodiment the container of the bullet screen text is given a grey background color that distinguishes it from, and makes it stand out against, the video picture. In addition, the container is provided with a quick delete button, namely an X-shaped button at its upper right corner; the user can delete the bullet screen text directly by clicking this X-shaped button, which is convenient and fast.
Then the user can edit the bullet screen text and the video picture on the server through the smartphone. The specific editing operations are performed by clicking the corresponding editing buttons in the making interface. An editing menu is provided on the right side of the bullet screen emoticon making interface, with different editing function buttons, including but not limited to: color selection, font size selection, movement, deletion, cropping, beautification, magnification, reduction, text addition and the like. The specific steps are as follows. First, the user clicks on the playing image in the bullet screen emoticon making interface through the smartphone; then the server of the video website identifies whether the area clicked by the user is the video picture or the bullet screen text, and confirms the object the user wants to edit based on this identification. When the area clicked by the user is the video picture, the video picture is to be edited; the user can then further click the editing function buttons on the right, such as the buttons for cropping, beautification, magnification, reduction and text addition, to apply the corresponding editing effects to the video picture. When the area clicked by the user is the bullet screen text, the bullet screen text is to be edited; the user can then further click the editing function buttons on the right, such as color selection, font size selection, movement, deletion, magnification, reduction and text addition, to modify the color, font style, font size, display layout, background color, display size and content of the bullet screen text, delete incompletely displayed bullet screen text, delete unwanted bullet screen text, and so on.
After the user has finished editing the video picture and the bullet screen text of the playing image in the bullet screen emoticon making interface on the server of the video website through the smartphone, the user clicks the save button in the right-hand menu, and once the modifications are saved, the server of the video website generates a unique bullet screen emoticon from the playing image edited by the user.
Because this bullet screen emoticon is generated from a captured playing image of the video, in this embodiment, to prevent the generated emoticon from being too large, to make it easy for the current user to save it, and to reduce the memory it occupies on the current user side, the server of the video website offers a choice of saved sizes when the user saves the edited result. The user can choose a suitable output size for the bullet screen emoticon according to the definition needed for its intended use, so that the emoticon remains clear without occupying too much of the smartphone's storage space, and so that sharing it with others later does not consume too much traffic or sending time.
After the current user has finished selecting the size of the bullet screen emoticon, the user can save it to the smartphone or choose to share it with others through the server via the smartphone. Specifically, sharing can be divided into internal sharing and external sharing: the user may share the bullet screen emoticon with a followed user on the video website or upload it to the emoticon area of the video website, and may also share and use it on a third-party platform through the server of the video website, including but not limited to social platforms such as WeChat, Weibo and QQ. In this way the frequency of use and the reach of the bullet screen emoticon are increased, and an emoticon containing bullet screen text helps spread bullet screen culture and attracts more users to watch bullet screen videos.
Referring to fig. 4-6, these figures illustrate a user generating a bullet screen emoticon while watching a video. Fig. 4 shows the display picture received while the user watches the video, which contains the main video content, the bullet screens, and the control panel of the video; for example, the smiley-face mark in the control panel is the function button that issues the bullet screen emoticon generation instruction. Fig. 5 shows the further interface change after the operation of fig. 4. After the generation method of the above embodiments has been executed, as shown in fig. 6, the control panel of the video is removed, leaving the main video content (or the expressive part of it, corresponding to the triangle in fig. 6) and the matching bullet screen (corresponding to the bullet screen container drawn as a circle, rounded rectangle or other shape in fig. 6); after the full set of corresponding operations and adjustments, a bullet screen emoticon is formed.
It should be noted that the embodiments of the invention are described above by way of preferred examples and not by way of limitation; those skilled in the art can make modifications and variations to the embodiments described above without departing from the spirit of the invention.

Claims (13)

1. A method for generating a barrage emoticon is characterized in that,
the generation method comprises the following steps:
s100: receiving a barrage emoticon generation instruction of a current video;
s200: capturing a playing image of the current video containing at least one bullet screen text;
s300: editing the video picture in the playing image and the bullet screen text in the playing image;
s400: saving the edited playing image to generate the bullet screen facial expression package;
wherein the step S200 of capturing the playing image of the current video containing at least one bullet screen text comprises:
s210: acquiring a playing image of the current video according to the barrage expression package generation instruction;
s220: identifying and extracting the bullet screen text in the playing image;
s230: generating a container containing the barrage text above the video picture;
wherein the step S220 of identifying and extracting the bullet screen text in the playing image comprises:
s221: extracting the characteristics of the played image to obtain characters contained in the played image;
s222: identifying the bullet screen text in the characters according to the layer where the characters are located;
s223: separating the bullet screen text from the playing image;
the playing image further comprises a subtitle, the subtitle and the bullet screen text are displayed on different layers, and the subtitle and the bullet screen text are distinguished according to the different layers where the subtitle and the bullet screen text are located, so that the bullet screen text is separated from the playing image.
2. The generation method of claim 1,
the step S200 of capturing the playing image of the current video containing at least one bullet screen text includes:
s240: and sending a prompt for editing the bullet screen text and the video picture.
3. The generation method of claim 1,
the step S300 of editing the video frame in the playing image and/or the bullet screen text in the playing image includes:
s310: reading an editing instruction;
s320: judging whether the editing object of the editing instruction is the video picture or the bullet screen text;
s330: when the editing object is the video picture, processing the playing image according to the editing instruction;
s330': and when the editing object is the bullet screen text, adjusting the text attribute of the bullet screen text and/or deleting the bullet screen text according to the editing instruction.
4. The generation method of claim 1,
the generation method further comprises the following steps:
s500: selecting the attribute of the facial expression package of the bullet screen facial expression package;
s600: and saving and/or sharing the bullet screen facial expression package.
5. A generation method of a barrage emoticon is characterized in that the generation method is applied between a server and a current user side, and comprises the following steps:
the server receives a barrage emoticon generation instruction sent by the current user end to the current video;
the server captures a playing image of the current video containing at least one bullet screen text;
the current user side edits the video picture in the playing image and the bullet screen text in the playing image through the server;
the server stores the edited playing image to generate the bullet screen expression package;
wherein the step of capturing, by the server, a play image of the current video containing at least one bullet screen text comprises:
the server acquires the playing image according to the barrage emoticon generation instruction;
the server identifies and extracts the bullet screen text in the playing image;
the server generates a container containing the bullet screen text above the video picture;
the step of identifying and extracting the bullet screen text in the playing image by the server comprises the following steps:
the server extracts the characteristics of the played image to acquire characters contained in the played image;
the server identifies the bullet screen text in the characters according to the layer where the characters are located;
the server separates the bullet screen text from the playing image;
the playing image further comprises a subtitle, the subtitle and the bullet screen text are displayed on different layers, and the subtitle and the bullet screen text are distinguished according to the different layers where the subtitle and the bullet screen text are located, so that the bullet screen text is separated from the playing image.
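The server-side sequence of claim 5 can be summarized as a plain-function sketch. The `EmoticonServer` class and its stub methods are hypothetical and stand in for whatever capture, recognition, and storage components a real server would use; only the ordering of the steps reflects the claim.

```python
class EmoticonServer:
    """Hypothetical server object; the method names simply mirror the claim steps."""

    def capture_playing_image(self, instruction: dict) -> dict:
        # acquire the playing image of the current video named in the generation instruction
        return {"frame": b"", "bullet_screen_texts": [], "video_id": instruction["video_id"]}

    def extract_bullet_screen_text(self, image: dict) -> dict:
        # identify and separate the bullet screen text by layer (see the sketch after claim 1)
        return image

    def build_text_container(self, image: dict) -> dict:
        # place a container holding the bullet screen text above the video picture
        image["container"] = list(image["bullet_screen_texts"])
        return image

    def apply_edit(self, image: dict, edit: dict) -> dict:
        # forward one client edit to a dispatch routine like the one after claim 3
        return image

    def store_emoticon(self, image: dict) -> dict:
        # persist the edited playing image; the stored file is the expression package
        return {"package_id": "pkg-001", "video_id": image["video_id"]}


def handle_generation_request(server: EmoticonServer, client_request: dict) -> dict:
    """One round trip of claim 5: generation instruction in, stored expression package out."""
    image = server.capture_playing_image(client_request["generate_instruction"])
    image = server.extract_bullet_screen_text(image)
    image = server.build_text_container(image)
    for edit in client_request.get("edits", []):   # edits sent by the current user side
        image = server.apply_edit(image, edit)
    return server.store_emoticon(image)
```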
6. The generation method of claim 5,
wherein the step of capturing, by the server, the playing image of the current video containing at least one bullet screen text further comprises:
the server further sends a prompt for editing the bullet screen text and the video picture.
7. The generation method of claim 5,
wherein the step in which the current user side edits the video picture in the playing image and/or the bullet screen text in the playing image through the server comprises the following steps:
the server reads an editing instruction sent by the current user side;
the server determines whether the editing object of the editing instruction is the video picture or the bullet screen text;
when the editing object is the video picture, the server processes the playing image according to the editing instruction;
when the editing object is the bullet screen text, the server adjusts the text attribute of the bullet screen text and/or deletes the bullet screen text according to the editing instruction.
8. The generation method of claim 5,
the generation method further comprises the following steps:
the current user side selects the expression package attribute of the bullet screen expression package through the server;
the current user side saves and/or shares the bullet screen expression package through the server.
9. A server comprising a processor and a storage device, wherein the storage device stores a computer program, and the processor is configured to implement the method for generating a bullet screen expression package according to any one of claims 1 to 4 when the computer program is called and executed.
10. A bullet screen expression package generation system, characterized in that:
the generation system comprises a server and a current user side;
the server receives a bullet screen expression package generation instruction sent by the current user side for the current video;
the server captures a playing image of the current video containing at least one bullet screen text;
the current user side edits the video picture in the playing image and the bullet screen text in the playing image through the server;
the server stores the edited playing image to generate the bullet screen expression package;
wherein the step of capturing, by the server, the playing image of the current video containing at least one bullet screen text comprises:
the server acquires the playing image according to the bullet screen expression package generation instruction;
the server identifies and extracts the bullet screen text in the playing image;
the server generates a container containing the bullet screen text above the video picture;
wherein the step of identifying and extracting, by the server, the bullet screen text in the playing image comprises:
the server extracts features of the playing image to acquire the characters contained in the playing image;
the server identifies the bullet screen text among the characters according to the layer where the characters are located;
the server separates the bullet screen text from the playing image;
the playing image further comprises a subtitle, the subtitle and the bullet screen text are displayed on different layers, and the subtitle and the bullet screen text are distinguished according to the different layers on which they are located, so that the bullet screen text is separated from the playing image.
11. The generation system of claim 10,
wherein the step of capturing, by the server, the playing image of the current video containing at least one bullet screen text further comprises:
the server further sends a prompt for editing the bullet screen text and the video picture.
12. The generation system of claim 10,
wherein the step in which the current user side edits the video picture in the playing image and/or the bullet screen text in the playing image through the server comprises the following steps:
the server reads an editing instruction sent by the current user side;
the server determines whether the editing object of the editing instruction is the video picture or the bullet screen text;
when the editing object is the video picture, the server processes the playing image according to the editing instruction;
when the editing object is the bullet screen text, the server adjusts the text attribute of the bullet screen text and/or deletes the bullet screen text according to the editing instruction.
13. The generation system of claim 10,
the generation system is further configured such that:
the current user side selects the expression package attribute of the bullet screen expression package through the server;
the current user side saves and/or shares the bullet screen expression package through the server.
CN201810057027.2A 2018-01-19 2018-01-19 Bullet screen expression package generation method, server and bullet screen expression package generation system Active CN108200463B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810057027.2A CN108200463B (en) 2018-01-19 2018-01-19 Bullet screen expression package generation method, server and bullet screen expression package generation system

Publications (2)

Publication Number Publication Date
CN108200463A (en) 2018-06-22
CN108200463B (en) 2020-11-03

Family

ID=62590025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810057027.2A Active CN108200463B (en) 2018-01-19 2018-01-19 Bullet screen expression package generation method, server and bullet screen expression package generation system

Country Status (1)

Country Link
CN (1) CN108200463B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108810644B (en) * 2018-06-28 2020-10-16 武汉斗鱼网络科技有限公司 Bullet screen message distribution method, device, equipment and storage medium
CN112866785B (en) * 2018-08-17 2021-10-29 腾讯科技(深圳)有限公司 Picture generation method, device, equipment and storage medium
CN110049377B (en) * 2019-03-12 2021-06-22 北京奇艺世纪科技有限公司 Expression package generation method and device, electronic equipment and computer readable storage medium
CN111698532B (en) * 2019-03-15 2022-12-16 阿里巴巴集团控股有限公司 Bullet screen information processing method and device
CN110321845B (en) * 2019-07-04 2021-06-18 北京奇艺世纪科技有限公司 Method and device for extracting emotion packets from video and electronic equipment
CN110602565A (en) * 2019-08-30 2019-12-20 维沃移动通信有限公司 Image processing method and electronic equipment
CN111405344B (en) * 2020-03-18 2022-01-07 腾讯科技(深圳)有限公司 Bullet screen processing method and device
CN111753131A (en) * 2020-06-28 2020-10-09 北京百度网讯科技有限公司 Expression package generation method and device, electronic device and medium
CN111984173B (en) * 2020-07-17 2022-03-25 维沃移动通信有限公司 Expression package generation method and device
CN111966804A (en) * 2020-08-11 2020-11-20 深圳传音控股股份有限公司 Expression processing method, terminal and storage medium
CN112800365A (en) * 2020-09-01 2021-05-14 腾讯科技(深圳)有限公司 Expression package processing method and device and intelligent device
CN112004156A (en) * 2020-09-02 2020-11-27 腾讯科技(深圳)有限公司 Video playing method, related device and storage medium
CN113015009B (en) * 2020-11-18 2022-09-09 北京字跳网络技术有限公司 Video interaction method, device, equipment and medium
CN113038185B (en) * 2021-04-02 2022-09-09 上海哔哩哔哩科技有限公司 Bullet screen processing method and device
CN113747250B (en) * 2021-08-18 2024-02-02 咪咕数字传媒有限公司 Method and device for realizing new form message and computing equipment
CN113761204B (en) * 2021-09-06 2023-07-28 南京大学 Emoji text emotion analysis method and system based on deep learning
CN114491152B (en) * 2021-12-02 2023-10-31 南京硅基智能科技有限公司 Method for generating abstract video, storage medium and electronic device
CN114827648B (en) * 2022-04-19 2024-03-22 咪咕文化科技有限公司 Method, device, equipment and medium for generating dynamic expression package

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100177116A1 (en) * 2009-01-09 2010-07-15 Sony Ericsson Mobile Communications Ab Method and arrangement for handling non-textual information

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104010222A (en) * 2014-04-28 2014-08-27 小米科技有限责任公司 Method, device and system for displaying comment information
CN105828167A (en) * 2016-03-04 2016-08-03 乐视网信息技术(北京)股份有限公司 Screen-shot sharing method and device
CN107436687A (en) * 2016-05-25 2017-12-05 天津三星通信技术研究有限公司 The method and apparatus for inputting expression
KR101817342B1 (en) * 2016-07-25 2018-01-11 (주)바른교육 Method for making and selling a photo imoticon
CN106341723A (en) * 2016-09-30 2017-01-18 广州华多网络科技有限公司 Bullet screen display method and apparatus
CN106658079A (en) * 2017-01-05 2017-05-10 腾讯科技(深圳)有限公司 Customized expression image generation method and device
CN107370887A (en) * 2017-08-30 2017-11-21 维沃移动通信有限公司 Expression generation method and mobile terminal

Also Published As

Publication number Publication date
CN108200463A (en) 2018-06-22

Similar Documents

Publication Publication Date Title
CN108200463B (en) Bullet screen expression package generation method, server and bullet screen expression package generation system
CN108965982B (en) Video recording method and device, electronic equipment and readable storage medium
US11397523B2 (en) Facilitating the sending of multimedia as a message
EP3758364B1 (en) Dynamic emoticon-generating method, computer-readable storage medium and computer device
CN109120981B (en) Information list display method and device and storage medium
US10896478B2 (en) Image grid with selectively prominent images
DE112016000085B4 (en) Device, method, and graphical user interface for navigating media content
CN107370887B (en) Expression generation method and mobile terminal
CN108924622B (en) Video processing method and device, storage medium and electronic device
CN113114841B (en) Dynamic wallpaper acquisition method and device
DE202016003234U1 (en) Device for capturing and interacting with enhanced digital images
WO2021238943A1 (en) Gif picture generation method and apparatus, and electronic device
CN113268622A (en) Picture browsing method and device, electronic equipment and storage medium
DE202007018413U1 (en) Touch screen device and graphical user interface for specifying commands by applying heuristics
WO2018072149A1 (en) Picture processing method, device, electronic device and graphic user interface
CN114697721B (en) Bullet screen display method and electronic equipment
WO2023030306A1 (en) Method and apparatus for video editing, and electronic device
CN112817676A (en) Information processing method and electronic device
CN113918522A (en) File generation method and device and electronic equipment
CN112954046A (en) Information sending method, information sending device and electronic equipment
CN110868632B (en) Video processing method and device, storage medium and electronic equipment
CN108200479B (en) Bullet screen playing method, server and bullet screen playing system based on streaming document
CN113157972A (en) Recommendation method and device for video cover documents, electronic equipment and storage medium
WO2017193343A1 (en) Media file sharing method, media file sharing device and terminal
CN113495664A (en) Information display method, device, equipment and storage medium based on media information stream

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant