CN112492324A - Data processing method and system - Google Patents

Data processing method and system

Info

Publication number
CN112492324A
CN112492324A (application CN201910863108.6A)
Authority
CN
China
Prior art keywords
data
information
client
mask frame
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910863108.6A
Other languages
Chinese (zh)
Inventor
陈志伟
唐奇
丁建强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN201910863108.6A priority Critical patent/CN112492324A/en
Publication of CN112492324A publication Critical patent/CN112492324A/en
Pending legal-status Critical Current

Classifications

    • H04N21/2187 Live feed
    • H04N21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N21/2343 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/2355 Processing of additional data involving reformatting operations of additional data, e.g. HTML pages
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/4316 Generation of visual interfaces for content selection or interaction, for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N21/4355 Processing of additional data involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N21/4884 Data services, e.g. news ticker, for displaying subtitles
    • H04N21/8547 Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a data processing method and a data processing system, belonging to the technical field of the internet. In the method, a server side identifies the main body region of each frame image in a first information stream and generates mask frame data, merges the frame image with the mask frame data to generate a second information stream, and sends the second information stream to a client side. The client side acquires and parses the second data units in the second information stream to obtain at least one frame image and its corresponding mask frame data, and then draws the region corresponding to the mask frame data, the barrage information and the frame image on a screen for synchronous display. Displaying the mask frame data, the frame image and the corresponding barrage information together preserves live-broadcast interactivity, while displaying the barrage information only outside the region corresponding to the mask frame data prevents the barrage from occluding the main content of the frame image and improves the user's viewing experience.

Description

Data processing method and system
Technical Field
The invention relates to the technical field of the internet, and in particular to a data processing method and a data processing system.
Background
When watching a live broadcast, a user can interact with other users and with the anchor by sending barrage (bullet-screen) comments or gifts. After a user sends a barrage comment or a gift, the comment text and the gift banner scroll across the live picture so that other users can view them. For popular anchors on a live platform, the number of barrage comments and gifts in the live room is large, so the comment text and gift banners occupy much of the live picture and block it. To keep watching unobstructed, most users shield the barrage text and gift banners through a selector switch; however, when users choose to shield the barrage and gifts, the barrage and gifts used for interaction can no longer be presented on the live picture, which reduces the interactivity of the live room.
Disclosure of Invention
Aiming at the problem that too many barrages in a live broadcast scene impair the viewing effect, a data processing method and a data processing system are provided that do not affect the viewing of the live picture while ensuring live-broadcast interactivity.
The invention provides a server-side data processing method, which comprises the following steps:
the server side identifies a main body region of at least one frame image in a first information stream and generates mask frame data;
and the server side merges the frame image with the mask frame data to generate a second information stream, and sends the second information stream to a client side.
Preferably, the first information stream comprises at least one first data unit, the first data unit comprising meta information and first data information.
Preferably, the step of identifying, by the server side, a main body region of at least one frame of image in the first information stream and generating mask frame data includes:
the server side acquires at least one first data unit in the first information stream;
the server side decodes the first data information to obtain a frame image;
the server side identifies a main body area in the frame image;
and the server side generates mask frame data corresponding to the frame image according to the main body area in the frame image.
Preferably, the step of merging the frame image and the mask frame data by the server to generate a second information stream, and sending the second information stream to the client includes:
the server side encodes the frame image and corresponding mask frame data to acquire second data information;
the server combines the second data information with the meta information to generate a second data unit;
and the server side combines the plurality of second data units to generate the second information stream and sends the second information stream to a client side.
Preferably, the first information stream uses the Real-Time Messaging Protocol (RTMP).
Preferably, the second information stream uses the Real-Time Messaging Protocol (RTMP).
The invention also provides a client data processing method, which comprises the following steps:
the client acquires a second information stream and barrage information sent by the server;
the client analyzes a second data unit in the second information stream to obtain at least one frame image and corresponding mask frame data;
and the client draws the area corresponding to the mask frame data, the bullet screen information and the frame image on a screen for synchronous display.
Preferably, the step of analyzing, by the client, the second data unit in the second information stream to obtain at least one frame image and corresponding mask frame data includes:
the client acquires the second data unit in the second information stream, wherein the second data unit comprises meta information and second data information;
and the client decodes the second data information to obtain a frame image and the mask frame data corresponding to the frame image.
Preferably, the step of drawing the region corresponding to the mask frame data, the barrage information and the frame image on the screen by the client for synchronous display includes:
and the client adjusts the display time of the mask frame data according to the meta information corresponding to the mask frame data, so that the frame image, the mask frame data corresponding to the frame image and the corresponding barrage information are displayed synchronously.
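A minimal client-side sketch of this synchronization (all names and structures below are hypothetical, not from the patent): the mask display time is derived from the frame timestamp plus the offset carried in the meta information, and the barrage entries due at that instant are selected for the same rendering pass:

```python
def schedule(frame_pts_ms: int, mask_offset_ms: int, barrage):
    """Adjust the mask display time by the offset from the meta
    information, and pick the barrage entries that are due when the
    frame image is shown, so all three layers render synchronously."""
    mask_pts = frame_pts_ms + mask_offset_ms
    due = [b["text"] for b in barrage if b["pts_ms"] <= frame_pts_ms]
    return mask_pts, due

mask_pts, due = schedule(1000, 40, [{"pts_ms": 990, "text": "hi"},
                                    {"pts_ms": 2000, "text": "later"}])
print(mask_pts, due)  # 1040 ['hi']
```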
The invention also provides a data processing system, which comprises a server side and a client side; wherein:
the server side is used for identifying a main body region of at least one frame image in a first information stream, generating mask frame data, merging the frame image with the mask frame data to generate a second information stream, and sending the second information stream to the client side;
the client is used for acquiring a second information stream and barrage information sent by the server, analyzing a second data unit in the second information stream, acquiring at least one frame image and corresponding mask frame data, and drawing an area corresponding to the mask frame data, the barrage information and the frame image on a screen for synchronous display.
The invention also provides a computer device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor implements the steps of the server-side data processing method when executing the computer program.
The invention also provides a computer device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the computer program to realize the steps of the client data processing method.
The present invention also provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the server-side data processing method described above.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the client data processing method described above.
The beneficial effects of the above technical scheme are that:
In the technical scheme, the server side identifies the main body region of a frame image in a first information stream and generates mask frame data, merges the frame image with the mask frame data to generate a second information stream, and sends the second information stream to the client side; the client side acquires and parses the second data units in the second information stream to obtain at least one frame image and its corresponding mask frame data, and draws the region corresponding to the mask frame data, the barrage information and the frame image on a screen for synchronous display. Displaying the mask frame data, the frame image and the corresponding barrage information together ensures live-broadcast interactivity, while displaying the barrage information only outside the region corresponding to the mask frame data prevents the barrage from occluding the main content of the frame image and improves the user's viewing experience.
Drawings
FIG. 1 is a block diagram of an application scenario of the data processing system of the present invention;
FIG. 2 is a flowchart of a server-side data processing method according to an embodiment of the present invention;
FIG. 3 is a flow diagram of a method of one embodiment of generating mask frame data;
FIG. 4 is a flow diagram of a method of one embodiment of generating a second information stream;
FIG. 5 is a flowchart of a method of one embodiment of a client data processing method of the present invention;
FIG. 6 is a flow diagram of a method of one embodiment for parsing a second data unit;
FIG. 7 is a flow diagram of one embodiment of a data processing system in accordance with the present invention;
FIG. 8 is a block diagram of one embodiment of a data processing system in accordance with the present invention;
FIG. 9 is a block diagram of one embodiment of a generation unit of the present invention;
FIG. 10 is a diagram of the hardware architecture of one embodiment of the computer apparatus of the present invention;
FIG. 11 is a diagram of the hardware architecture of another embodiment of the computer apparatus of the present invention;
FIG. 12 is a diagram illustrating an embodiment of a data processing system according to the present invention.
Detailed Description
The advantages of the invention are further illustrated in the following description of specific embodiments in conjunction with the accompanying drawings.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
In the description of the present invention, it should be understood that the numerical references before the steps do not identify the order of performing the steps, but merely serve to facilitate the description of the present invention and to distinguish each step, and thus should not be construed as limiting the present invention.
The video of the embodiment of the application may be presented on clients such as large-screen video playing devices, game consoles, desktop computers, smart phones, tablet computers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, e-book readers, and other display terminals.
The data processing system of the embodiment of the application can be applied to live scenes, for example e-commerce livestreams, training livestreams, live events, live news releases, interactive video between audiences and anchors/video players, and the playing of interactive works (for example, "Black Mirror" or "Invisible Guardian") and other online services. The embodiment of the present application takes the application of the data processing system to live video as an example, but is not limited thereto.
In the embodiment of the present application, referring to fig. 1, the server side in the data processing system is composed of a video cloud source station, a mask cluster, a mask scheduling end, a mask control end, a configuration background, and the like. A stream pushing end (the live broadcast end or anchor end) sends a live video stream to the video cloud source station; the video cloud source station sends a transcoding request to the mask control end, and the mask control end forwards the transcoding request to the mask scheduling end. After receiving the transcoding request, the mask scheduling end sends a task allocation request to the mask cluster and queries whether an idle AI machine exists in the cluster; each AI machine is a mask-recognition instance serving one live room. If no idle AI machine exists, an abnormal-state callback is fed back to the mask control end. If an idle AI machine exists, it pulls the RTMP (Real-Time Messaging Protocol) video stream from the video cloud source station, identifies each frame image in the video stream to generate mask frame data, and pushes the mask frame data back to the video cloud source station. The video cloud source station combines the mask frame data with the frame images in the source video stream to generate a video stream carrying mask frame data, and pushes that stream to a CDN (Content Delivery Network) node.
When a user watches live video, the client (the playing end or stream pulling end) requests a playing link from the configuration background. After receiving the playing-link request, the configuration background queries the opening state from the mask control end; the mask control end queries the database (DB) to determine whether the live room is allowed to open the mask service and feeds back the result. If the live room the user accesses is allowed to open the mask service, the user's client pulls the video stream carrying mask frame data through the CDN, parses the video stream, plays the video information through the player, and renders the mask barrage, so that the video image, the mask frame and the barrage information are displayed on the client's screen with the barrage shown only in the area outside the mask frame, improving the user's viewing effect. Only two configuration backgrounds, one client and one stream pushing end are shown here; the application scenario may further include multiple configuration backgrounds, multiple clients, and multiple stream pushing ends. The video cloud source station can be a cloud server or a local server. The client and the stream pushing end can be mobile devices or other intelligent terminals capable of uploading videos.
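As an illustrative sketch of the final rendering rule (the helper below is hypothetical, assuming the decoded mask frame is a binary array), a barrage glyph is drawn only where the mask marks background, so comments never cover the main content:

```python
import numpy as np

def visible(mask: np.ndarray, x: int, y: int) -> bool:
    """A barrage glyph at (x, y) is drawn only where the mask is 0,
    i.e. outside the main body region of the frame image."""
    return bool(mask[y, x] == 0)

mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1          # the main body occupies the centre
print(visible(mask, 0, 0))  # True: corner is background
print(visible(mask, 1, 1))  # False: inside the main body region
```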
The invention provides a server-side data processing method which remedies the defect that too many barrages in a live broadcast scene impair the viewing effect, while ensuring that live-broadcast interaction is not affected. Referring to fig. 2, a schematic flow chart of a server-side data processing method according to a preferred embodiment of the present invention, the method mainly includes the following steps:
A1. the server side identifies a main body area of at least one frame of image in the first information flow and generates mask frame data;
in this step, the first information stream may include at least one first data unit including meta information and first data information. The meta information may include: determining a second time stamp of the frame image display time in the first data information, wherein the second time stamp is a time stamp of the frame image played at the client; the meta information may also include indexes, data duration, and other information.
It should be noted that: the first data unit employs a real-time message transmission protocol. The first data unit may employ an AVPacket data structure. The first data information may be video image information to be decoded.
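As a rough sketch only (the structure and field names below are illustrative, not defined by the patent), such a first data unit can be modeled as meta information paired with encoded payload bytes, similar in spirit to FFmpeg's AVPacket:

```python
from dataclasses import dataclass

@dataclass
class MetaInfo:
    index: int         # position of the unit in the stream
    pts_ms: int        # second timestamp: display time at the client (ms)
    duration_ms: int   # data duration covered by this unit

@dataclass
class DataUnit:
    meta: MetaInfo
    payload: bytes     # encoded video image information to be decoded

unit = DataUnit(MetaInfo(index=0, pts_ms=40, duration_ms=40),
                payload=b"\x00\x01")
print(unit.meta.pts_ms)  # 40
```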
In this embodiment, the server-side data processing method is applied to the server side, and the server side processes each first data unit in the first information stream uploaded by the stream pushing end. The first information stream is uploaded from the stream pushing end to the server side through the Real-Time Messaging Protocol. The stream pushing end can run Android, iOS, Windows, Mac OS X, or another system.
Further, the specific process of step a1 is (refer to fig. 3):
A11. the server side acquires at least one first data unit in the first information stream;
A12. the server side decodes the first data information to obtain a frame image;
in this step, the first data information may be decoded by a decoder to obtain a frame image and a first timestamp corresponding to the frame image;
it should be noted that: the first time stamp in this step is different from the second time stamp, and the first time stamp is a display time stamp of the mask frame data.
A13. The server side identifies a main body area in the frame image;
in some embodiments, the body region may be selected from at least one of:
a person area, an animal area, a landscape area, a building area, an artwork area, a text area, and a background area distinct from persons, animals, buildings, and artwork.
By way of example and not limitation, a semantic segmentation model (e.g., FCN, DilatedNet, DeepLab, etc.) may be employed to identify the main body region within the frame image;
In a preferred embodiment, the semantic segmentation model may be a DeepLab model, which offers good accuracy at high speed. The DeepLab model mainly comprises a network backbone for extracting a feature map, a feature enhancement layer for enhancing features and reducing the influence of the feature map's size, and a classification layer for predicting the class of each pixel (class 0 is usually background; a common label set is the 91 classes of the COCO dataset, covering people, some animals, common objects, and the like).
A14. And the server side generates mask frame data corresponding to the frame image according to the main body area in the frame image.
Specifically, the specific process at step a14 includes:
and generating image data corresponding to the frame image according to the main body area in the frame image, and coding the image data to generate mask frame data.
In practical applications, the mask frame data may adopt a scalable vector graphics (SVG) file. Each piece of target data in the file, together with its type, is encoded to generate a binary file that identifies the target data type, which improves the data compression rate, saves storage space, and facilitates data transmission.
It should be noted that the scalable vector graphics file is in extensible markup language (XML) format. The target data are SVG graphics, which are scalable and maintain their quality when resized. SVG can describe graphics using predefined shape elements such as rectangles, lines, and paths.
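The patent does not specify the binary layout, so the following is only a plausible sketch of a type-tagged encoding: each record (here a hypothetical rectangle or path extracted from the SVG file) is prefixed with a 1-byte type id and a 4-byte payload length, and the same table is used to decode the binary file back into typed records.

```python
import struct

# Hypothetical type-tagged binary encoding of SVG-derived target data.
# All type names and the record layout (1-byte id + 4-byte length) are
# assumptions for illustration, not the patent's actual format.

TYPE_IDS = {"rect": 1, "line": 2, "path": 3}

def encode_records(records):
    """records: list of (type_name, payload_bytes) -> one binary blob."""
    out = bytearray()
    for type_name, payload in records:
        out += struct.pack(">BI", TYPE_IDS[type_name], len(payload))
        out += payload
    return bytes(out)

def decode_records(blob):
    """Inverse of encode_records, recovering (type_name, payload) pairs."""
    names = {v: k for k, v in TYPE_IDS.items()}
    records, pos = [], 0
    while pos < len(blob):
        type_id, length = struct.unpack_from(">BI", blob, pos)
        pos += 5  # header size: 1 byte id + 4 bytes length
        records.append((names[type_id], blob[pos:pos + length]))
        pos += length
    return records

encoded = encode_records([("rect", b"0,0,10,10"), ("path", b"M0 0L5 5")])
decoded = decode_records(encoded)
```

Because each record is self-describing (type id plus length), the decoder can distinguish the meaning of every binary segment, which matches the integrity property the text attributes to key-value decoding.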
Specifically, step A14 may further include: acquiring the timestamp offset of the mask frame data according to the first timestamp and the second timestamp, and storing the timestamp offset in the mask frame data so as to locate the time difference between the mask frame data and the corresponding frame image.
In this step, the second timestamp is subtracted from the first timestamp to obtain the timestamp offset, which represents the offset between the timestamp of the mask frame data and the second timestamp.
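The offset arithmetic described here can be sketched as follows (the function names are assumptions, and milliseconds are used for illustration). Storing the offset rather than an absolute time lets the client realign the mask even if an intermediate node later rewrites the second timestamp:

```python
# Sketch of the timestamp bookkeeping described above (names assumed).
# t1: first timestamp, the display timestamp obtained when decoding the frame;
# t2: second timestamp, the playback timestamp of the frame at the client.

def mask_timestamp_offset(t1_ms, t2_ms):
    """Offset stored inside the mask frame data, in milliseconds."""
    return t1_ms - t2_ms

def mask_display_time(t2_ms, offset_ms):
    """Client side: align the mask's display time with the frame's timestamp.
    Works even if a CDN node rewrote t2, since the offset travels with the mask."""
    return t2_ms + offset_ms

offset = mask_timestamp_offset(1_000_040, 1_000_000)
```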
A2. The server side combines the frame image and the mask frame data to generate a second information stream, and sends the second information stream to a client side.
It should be noted that: the second information flow adopts a real-time message transmission protocol.
In this step, the decoded frame image, the acquired mask frame data, and the meta information are synthesized to generate a second data unit carrying the mask frame data; the second data units are arranged in order according to the meta information to generate a second information stream, which the server side transmits to the client side so that the client can view the video image configured with the mask bullet screen.
The specific process of step A2 is as follows (refer to fig. 4):
A21. The server side encodes the frame image and the corresponding mask frame data to acquire second data information;
By way of example and not limitation, the mask frame data may be a scalable vector graphics file. Each piece of target data and its type in the file are encoded to generate a binary file that identifies the target data type, which improves the data compression rate, saves storage space, and facilitates data transmission.
The frame image is compression-encoded while the mask frame data is encoded, reducing the amount of data to be transmitted.
A22. The server side combines the second data information with the meta information to generate a second data unit;
A23. The server side combines the plurality of second data units to generate the second information stream, and sends the second information stream to a client side.
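Steps A21 to A23 can be sketched as follows, with each data unit modeled as a plain dictionary (an assumption for illustration; the actual implementation uses an AVPacket-style structure, and real encoders would replace the byte-string placeholders):

```python
# Minimal sketch of packaging step A2: each second data unit pairs the
# encoded frame + mask with its meta information, and the units are ordered
# by the second timestamp carried in the meta information.

def make_second_unit(encoded_frame, encoded_mask, meta):
    """A21-A22: bundle encoded data with meta information into one unit."""
    return {"data": (encoded_frame, encoded_mask), "meta": meta}

def make_second_stream(units):
    """A23: arrange units in playback order using the meta timestamp."""
    return sorted(units, key=lambda u: u["meta"]["second_timestamp"])

units = [
    make_second_unit(b"frame-b", b"mask-b", {"second_timestamp": 66}),
    make_second_unit(b"frame-a", b"mask-a", {"second_timestamp": 33}),
]
stream = make_second_stream(units)
```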
In this embodiment, the server identifies the main region of each frame image in the first information stream, generates mask frame data, combines the frame image with the mask frame data to generate a second information stream, and sends the second information stream to the client. When the client displays the frame image and the mask frame data in a second data unit, they are displayed simultaneously, improving the viewing experience while ensuring smooth playback for the user.
The server-side data processing method above mainly describes the flow of data processing at the server side. When the client side processes data, the client-side data processing method includes the following steps (refer to fig. 5):
B1. The client acquires a second information stream and bullet screen information sent by the server;
it should be noted that: the second information flow adopts a real-time message transmission protocol.
B2. The client analyzes a second data unit in the second information flow to obtain at least one frame image and corresponding mask frame data;
In this embodiment, the client-side data processing method is applied to the client: the client parses each second data unit in the second information stream sent by the server to obtain the corresponding frame image and mask frame data.
The specific process of step B2 is as follows (see fig. 6):
B21. The client acquires the second data unit in the second information stream, wherein the second data unit comprises meta information and second data information;
It should be noted that the second data unit adopts an AVPacket data structure, and the second data information is the video image information to be decoded.
The meta information may include a second timestamp determining the display time of the frame image in the second data information, i.e., the timestamp at which the frame image is played at the client; the meta information may also include indexes, data duration, and other information. The client may run an Android, iOS, Windows, or Mac OS X system.
B22. The client decodes the second data information to obtain a frame image and the mask frame data corresponding to the frame image.
In this embodiment, the mask frame data in the second data information is a binary file; during decoding, each key-value pair in the binary file is decoded. Using key-value pairs makes the meaning of the corresponding binary data easy to distinguish, which helps ensure data integrity during decoding.
B3. The client draws the area corresponding to the mask frame data, the bullet screen information, and the frame image on a screen for synchronous display.
The specific process of step B3 is as follows:
The client adjusts the display time of the mask frame data according to the meta information corresponding to the mask frame data, so that the frame image, the mask frame data corresponding to the frame image, and the corresponding bullet screen information are displayed synchronously.
In this step, the client may adjust the display time of the mask frame data according to the timestamp offset of the mask frame data and the second timestamp in the meta information, so that the display time of the mask frame data is synchronized with the second timestamp; the frame image, the bullet screen information, and the mask frame data corresponding to the frame image are then displayed at the time corresponding to the second timestamp.
It should be noted that before displaying the frame image and its corresponding mask frame data at the time corresponding to the second timestamp, edge feathering can be performed on the mask frame data to smooth the edge of the mask and improve the visual effect.
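The patent does not specify the feathering algorithm; a minimal sketch, assuming a simple 3x3 box blur over the binary mask, shows the intent — hard 0/1 edges become gradual alpha values so the mask boundary looks smooth:

```python
# Illustrative edge feathering: a 3x3 box blur averaging each mask pixel
# with its in-bounds neighbours, turning a hard binary edge into a soft one.

def feather(mask):
    h, w = len(mask), len(mask[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [
                mask[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
            ]
            out[y][x] = sum(vals) / len(vals)
    return out

soft = feather([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
```

A production system would more likely feather with a Gaussian kernel on the GPU, but the effect on the mask edge is the same in kind.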
In practical applications, when the client displays the received video and bullet screen: if the area corresponding to the mask frame data is a person area range, the bullet screen information is not displayed within that range but only in the area outside it; if the area corresponding to the mask frame data is a text area range, the bullet screen information is not displayed within that range but only in the area outside it; and if the area corresponding to the mask frame data is a background area range distinguished from people, animals, buildings, and art, the bullet screen information is not displayed within that range but only in the area outside it.
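The display rule above can be sketched as a compositing step (all names and the character-grid representation are illustrative assumptions): bullet-screen pixels are drawn only where the mask does not cover the subject, so the subject always shows through.

```python
# Sketch of the masked-danmaku display rule: overlay bullet-screen characters
# on the frame, skipping every cell the mask marks as subject.

def composite(frame, mask, danmaku):
    """frame/mask/danmaku are equally sized 2D grids; danmaku uses None for
    empty cells, and mask uses 1 inside the subject region."""
    out = [row[:] for row in frame]
    for y, row in enumerate(danmaku):
        for x, ch in enumerate(row):
            if ch is not None and mask[y][x] == 0:
                out[y][x] = ch  # draw danmaku only outside the subject
    return out

frame = [["."] * 4 for _ in range(2)]
mask = [[0, 1, 1, 0], [0, 0, 0, 0]]           # subject occupies two cells
danmaku = [["h", "i", None, "!"], [None] * 4]  # a comment scrolling across
shown = composite(frame, mask, danmaku)
```

The "i" lands inside the subject region and is suppressed, while "h" and "!" are drawn, which is exactly the behavior described for a comment passing behind a person.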
In this embodiment, the client parses the second information stream to obtain the frame image and the corresponding mask frame data carrying the timestamp offset, so that when displaying the frame image and the mask frame data in a second data unit, the display time of the mask frame data can be adjusted by the timestamp offset and the mask frame data can be displayed simultaneously with the frame image, ensuring smooth playback while improving the viewing experience.
As shown in fig. 7 to fig. 9, the present invention further provides a data processing system, which includes a server 1 and a client 2, wherein:
the server 1 is configured to identify a main body area of at least one frame of image in a first information stream, generate mask frame data, combine the frame of image with the mask frame data, generate a second information stream, and send the second information stream to the client 2;
the client 2 is configured to acquire a second information stream and barrage information sent by the server 1, analyze a second data unit in the second information stream, acquire at least one frame image and corresponding mask frame data, and draw an area corresponding to the mask frame data, the barrage information, and the frame image on a screen for synchronous display.
The data processing system has the following specific flow (refer to fig. 7):
S1. A server side identifies a main body area of at least one frame of image in a first information stream to generate mask frame data;
S2. The server side combines the frame image and the mask frame data to generate a second information stream;
S3. The server side sends the second information stream to the client side;
S4. The server side sends the bullet screen information to the client side;
S5. The client parses a second data unit in the second information stream sent by the server to obtain at least one frame image and corresponding mask frame data;
S6. The client draws the area corresponding to the mask frame data, the bullet screen information, and the frame image on a screen for synchronous display.
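Tying S1 through S6 together, a compressed end-to-end sketch (all data structures are assumptions; real frames, masks, and packets are binary) shows how the offset computed at the server is consumed at the client:

```python
# End-to-end sketch of the S1-S6 flow: the server builds mask-carrying
# units; the client recovers them and schedules each mask against the
# frame's second timestamp plus the stored offset.

def server_process(frames):
    """S1-S2: one unit per frame, carrying the mask and its timestamp offset."""
    stream = []
    for frame in frames:
        mask = {"subject": frame["subject"],          # S1: identified region
                "offset": frame["t1"] - frame["t2"]}  # stored timestamp offset
        stream.append({"frame": frame["pixels"],      # S2: merged second unit
                       "mask": mask,
                       "meta": {"second_timestamp": frame["t2"]}})
    return stream

def client_process(stream):
    """S5-S6: display time for each mask = second timestamp + offset."""
    return [(u["meta"]["second_timestamp"] + u["mask"]["offset"],
             u["frame"], u["mask"]["subject"]) for u in stream]

frames = [{"pixels": "img0", "subject": "person", "t1": 140, "t2": 100}]
schedule = client_process(server_process(frames))
```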
As shown in fig. 8, the server 1 may include a generating unit 11 and a merging unit 12; wherein:
a generating unit 11, configured to identify a main area of at least one frame of image in the first information stream, and generate mask frame data;
the first information stream may comprise at least one first data unit comprising meta information and first data information. The meta information may include: determining a second timestamp of the frame image display time in the first data information, wherein the second timestamp is a timestamp of the frame image played at the client 2; the meta information may also include indexes, data duration, and other information.
It should be noted that: the first data unit employs a real-time message transmission protocol. The first data unit may employ an AVPacket data structure. The first data information may be video image information to be decoded.
Referring to fig. 9, the generating unit 11 may include: an acquisition module 111, a decoding module 112, an identification module 113 and a generation module 114; wherein:
an obtaining module 111, configured to obtain at least one first data unit in the first information stream;
and a decoding module 112, configured to decode the first data information to obtain a frame image. The decoding module 112 may decode the first data information through a decoder, and obtain a frame image and a first timestamp corresponding to the frame image;
an identifying module 113 for identifying a subject region within the frame image;
in some embodiments, the body region may be selected from at least one of:
a person area range, an animal area range, a landscape area range, a building area range, an artwork area range, a text area range, and a background area range distinguished from a person, an animal, a building, and an art.
By way of example and not limitation, semantic segmentation models (e.g., FCN, DilatedNet, DeepLab, etc.) may be employed to identify subject regions within the frame image;
In a preferred embodiment, the semantic segmentation model may adopt a DeepLab model, which offers both good accuracy and high speed. The DeepLab model mainly comprises a network backbone for extracting a feature map, a feature enhancement layer for enhancing features and reducing the influence of the feature map's size, and a classification layer for predicting the class of each pixel.
A generating module 114, configured to generate mask frame data corresponding to the frame image according to the main body region in the frame image. The generation module 114 generates image data corresponding to the frame image from the body region in the frame image, and encodes the image data to generate mask frame data.
In practical applications, the mask frame data may adopt a scalable vector graphics (SVG) file. Each piece of target data in the file, together with its type, is encoded to generate a binary file that identifies the target data type, which improves the data compression rate, saves storage space, and facilitates data transmission.
It should be noted that the scalable vector graphics file is in extensible markup language (XML) format. The target data are SVG graphics, which are scalable and maintain their quality when resized. SVG can describe graphics using predefined shape elements such as rectangles, lines, and paths.
Specifically, the generating module 114 may obtain a timestamp offset of the mask frame data according to the first timestamp and the second timestamp, and store the timestamp offset in the mask frame data so as to locate the time difference between the mask frame data and the corresponding frame image. The second timestamp is subtracted from the first timestamp to obtain the timestamp offset, which represents the offset between the timestamp of the mask frame data and the second timestamp. The merging unit 12 is configured to merge the frame image and the mask frame data to generate a second information stream, and send the second information stream to the client 2.
It should be noted that: the second information flow adopts a real-time message transmission protocol.
The merging unit 12 synthesizes the decoded frame image, the acquired mask frame data, and the meta information to generate a second data unit carrying the mask frame data, and arranges the plurality of second data units in order according to the meta information to generate a second information stream; the server 1 can send the second information stream to the client 2 for the client 2 to view the video image configured with the mask bullet screen.
The merging unit 12 encodes the frame image and the corresponding mask frame data to acquire second data information; combining the second data information with the meta information to generate the second data unit; and combining a plurality of second data units to generate the second information flow, and sending the second information flow to the client 2.
By way of example and not limitation, the mask frame data may be a scalable vector graphics file. Each piece of target data and its type in the file are encoded to generate a binary file that identifies the target data type, which improves the data compression rate, saves storage space, and facilitates data transmission.
The frame image is compression-encoded while the mask frame data is encoded, reducing the amount of data to be transmitted.
Referring to fig. 8, the client 2 may include: a receiving unit 21, a parsing unit 22 and a rendering unit 23, wherein:
the receiving unit 21 is configured to obtain a second information stream and bullet screen information sent by the server 1;
it should be noted that: the second information flow adopts a real-time message transmission protocol.
The analyzing unit 22 is configured to analyze a second data unit in the second information stream to obtain at least one frame image and corresponding mask frame data;
The parsing unit 22 obtains a second data unit in the second information stream and decodes it to obtain a frame image and the mask frame data corresponding to the frame image; the area corresponding to the mask frame data, the bullet screen information, and the frame image are then drawn on a screen for synchronous display.
Wherein the second data unit includes meta information and second data information.
It should be noted that the second data unit adopts an AVPacket data structure, and the second data information is the video image information to be decoded. The meta information includes a second timestamp determining the display time of the frame image in the second data information, i.e., the timestamp at which the frame image is played at the client 2; the meta information may also include indexes, data duration, and other information. The client 2 may run an Android, iOS, Windows, or Mac OS X system.
The mask frame data in the second data information is a binary file, and when decoding is performed, each key value pair in the binary file is decoded. The meaning of corresponding binary data representation can be effectively distinguished by adopting the key value pair, so that the integrity of data can be ensured in the decoding process.
And the drawing unit 23 is configured to draw the area corresponding to the mask frame data, the bullet screen information, and the frame image on a screen for synchronous display.
In this embodiment, the drawing unit 23 adjusts the display time of the mask frame data according to the meta information corresponding to the mask frame data, so that the frame image, the mask frame data corresponding to the frame image, and the corresponding barrage information are displayed synchronously. The drawing unit 23 adjusts the display time of the mask frame data according to the time stamp offset of the mask frame data and the second time stamp in the meta information, so that the display time of the mask frame data is synchronized with the second time stamp; and displaying the frame image, the bullet screen information and the mask frame data corresponding to the frame image at the time corresponding to the second timestamp.
It should be noted that: before the step of displaying the frame image and the mask frame data corresponding to the frame image at the time corresponding to the second timestamp, edge feathering can be performed on the mask frame data, so that smoothness of the edge of the mask frame is improved, and visual effect is improved.
In this embodiment, the server 1 identifies the main body region of each frame image in the first information stream, generates mask frame data, combines the frame image and the mask frame data to generate a second information stream, and sends it to the client 2. The client 2 parses each second data unit in the second information stream sent by the server 1 to obtain at least one frame image and the corresponding mask frame data, and draws the area corresponding to the mask frame data, the bullet screen information, and the frame image on the screen for synchronous display. The mask frame data, the frame image, and the corresponding bullet screen information are thus displayed simultaneously, which preserves live-broadcast interactivity; the bullet screen information is displayed only outside the area corresponding to the mask frame data, preventing the bullet screen from blocking the main content of the frame image and improving the user's viewing experience.
As shown in fig. 10 to fig. 11, the present application further provides a computer device 3, the computer device 3 comprising:
a memory 31 for storing executable program code; and
a processor 32 for calling the executable program code in the memory 31, the executed steps comprising the server-side data processing method or the client-side data processing method described above.
One processor 32 is illustrated in fig. 10.
The memory 31 is a non-volatile computer-readable storage medium, and can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the server-side data processing method or the client-side data processing method in the embodiments of the present application. The processor 32 executes various functional applications and data processing of the computer device 3 by executing the nonvolatile software programs, instructions and modules stored in the memory 31, namely, implements the server-side data processing method or the client-side data processing method of the above-described method embodiments.
The memory 31 may include a program storage area and a data storage area, wherein the program storage area may store the operating system and the application programs required for at least one function, and the data storage area may store the user's playback information on the computer device 3. Further, the memory 31 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 31 may optionally include memories remotely located from the processor 32, and these remote memories may be connected to a server or a client via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 31 and, when executed by the one or more processors 32, perform the server-side data processing method in any of the above-described method embodiments, e.g., perform the method steps described above in fig. 2-4, to implement the server-side functionality in the data processing system shown in fig. 8.
The one or more modules are stored in the memory 31 and, when executed by the one or more processors 32, perform the client-side data processing method in any of the above-described method embodiments, e.g., perform the method steps described above in fig. 5-6, to implement the client-side functionality in the data processing system shown in fig. 8.
The product can execute the method provided by the embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the methods provided in the embodiments of the present application.
The computer device 3 of the embodiment of the present application exists in various forms, including but not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication capabilities and are primarily targeted at providing voice and data communications. Such terminals include smart phones (e.g., iPhones), multimedia phones, functional phones, and low-end phones.
(2) Ultra-mobile personal computer devices: such devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile internet access. Such terminals include PDA, MID, and UMPC devices, such as iPads.
(3) Portable entertainment devices: such devices can display and play multimedia content. This type of device includes audio and video players (e.g., iPods), handheld game consoles, electronic books, smart toys, and portable car navigation devices.
(4) A server: the device for providing the computing service comprises a processor, a hard disk, a memory, a system bus and the like, and the server is similar to a general computer architecture, but has higher requirements on processing capacity, stability, reliability, safety, expandability, manageability and the like because of the need of providing high-reliability service.
(5) And other electronic devices with data interaction functions.
Embodiments of the present application provide a non-transitory computer-readable storage medium, which stores computer-executable instructions, which are executed by one or more processors, such as one processor 32 in fig. 10, so that the one or more processors 32 may execute the server-side data processing method in any of the method embodiments described above, for example, execute the method steps in fig. 2 to fig. 4 described above, and implement the functions of the server side shown in fig. 8.
Embodiments of the present application provide a non-transitory computer-readable storage medium storing computer-executable instructions, which are executed by one or more processors, such as one of the processors 32 in fig. 11, so that the one or more processors 32 may execute the client data processing method in any of the method embodiments, for example, execute the method steps in fig. 5 to 6 described above, and implement the functions of the client shown in fig. 8.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on at least two network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a computer readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-only memory (ROM), a Random Access Memory (RAM), or the like.
First embodiment:
Referring to fig. 12, take the application of the data processing system in a live-broadcast scene as an example. A user broadcasts live from 11:00 to 12:00 through a stream pushing end C, which sends a first information stream to a server end D starting at 11:00. The server end D parses the received first information stream and identifies the type of each first data unit, which may be a video unit or an audio unit. For a video unit, the server decodes it to obtain the frame image, a first timestamp t1, and the meta information; extracts a second timestamp t2 from the meta information; identifies the frame image to generate mask frame data; obtains the timestamp offset of the mask frame data relative to the second timestamp from t1 and t2; adds the offset to the mask frame data; and encodes the mask frame data with the corresponding frame image to generate a second data unit. The second data units and the audio units are then ordered and combined according to the order of the first information stream to generate a second information stream, and the server end D pushes the second information stream and the bullet screen information to a CDN node, which modifies the second timestamp t2 in the second information stream (for example, to t3). When the client F requests the live information stream from the CDN node at 11:30, the CDN node sends the modified second information stream and the bullet screen information to the client F. The client F parses the second information stream, extracts the audio information, frame images, mask frame data, and bullet screen information, and adjusts the display time of the mask frame data according to the timestamp offset and the modified second timestamp t3, so that each frame image, its corresponding mask frame data, and the corresponding bullet screen information are displayed on the screen at the time corresponding to t3. When bullet screen information passes through the area corresponding to the mask frame data, it is not drawn within that area, preventing the bullet screen from blocking the main content of the frame image and improving the user's visual experience.
Second embodiment:
Referring to fig. 12, take the application of the data processing system in a live-broadcast scene as an example. A user broadcasts live from 11:00 to 12:00 through a stream pushing end C, which sends a first information stream to a server end D starting at 11:00. The server end D parses the received first information stream and identifies the type of each first data unit, which may be a video unit or an audio unit. For a video unit, the server decodes it to obtain the frame image, a first timestamp t1, and the meta information; extracts a second timestamp t2 from the meta information; identifies the frame image to generate mask frame data; obtains the timestamp offset of the mask frame data relative to the second timestamp from t1 and t2; adds the offset to the mask frame data; and encodes the mask frame data with the corresponding frame image to generate a second data unit. The second data units and the audio units are then ordered and combined according to the order of the original first information stream to generate a second information stream, and the server end D pushes the second information stream and the bullet screen information to the CDN node. When the client F requests the live information stream from the CDN node at 11:00, the CDN node sends the second information stream and the bullet screen information to the client F. The client F parses the second information stream, extracts the audio information, frame images, mask frame data, and bullet screen information, and adjusts the display time of the mask frame data according to the timestamp offset and the second timestamp t2, so that each frame image, its corresponding mask frame data, and the corresponding bullet screen information are displayed on the screen at the time corresponding to t2. When bullet screen information passes through the area corresponding to the mask frame data, it is not drawn within that area, preventing the bullet screen from blocking the main content of the frame image and improving the user's visual experience.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (14)

1. A server-side data processing method is characterized by comprising the following steps:
the server side identifies a main body area of at least one frame of image in a first information stream and generates mask frame data;
and the server side combines the frame image and the mask frame data to generate a second information stream, and sends the second information stream to a client side.
2. The server-side data processing method according to claim 1, wherein the first information stream includes at least one first data unit, and the first data unit includes meta information and first data information.
3. The server-side data processing method according to claim 2, wherein the step of the server-side identifying a main body region of at least one frame of image in the first information stream and generating mask frame data comprises:
the server side acquires at least one first data unit in the first information stream;
the server side decodes the first data information to obtain a frame image;
the server side identifies a main body area in the frame image;
and the server side generates mask frame data corresponding to the frame image according to the main body area in the frame image.
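The steps of claim 3 can be sketched as follows. The "subject detector" below is a stand-in (a simple brightness threshold) chosen only for illustration; a real system would use a segmentation model, and all names here are hypothetical:

```python
def identify_subject(frame):
    """Return a binary mask: 1 where a pixel belongs to the main body area."""
    return [[1 if px > 128 else 0 for px in row] for row in frame]


def generate_mask_frame(frame):
    """Generate mask frame data corresponding to one decoded frame image."""
    mask = identify_subject(frame)
    # Pack the binary mask row-major into bytes as the mask frame data.
    return bytes(bit for row in mask for bit in row)


frame = [[200, 40], [190, 10]]  # a toy 2x2 grayscale "frame image"
assert generate_mask_frame(frame) == bytes([1, 0, 1, 0])
```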
4. The server-side data processing method according to claim 2, wherein the step of the server-side combining the frame image and the mask frame data to generate a second information stream, and sending the second information stream to the client-side includes:
the server side encodes the frame image and corresponding mask frame data to acquire second data information;
the server combines the second data information with the meta information to generate a second data unit;
and the server side combines a plurality of second data units to generate the second information stream and sends the second information stream to a client side.
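A minimal sketch of claim 4, representing data units as dicts. The concatenation stands in for real video encoding, and the field names (`meta`, `data`, `t2`) are hypothetical:

```python
def build_second_unit(frame_image: bytes, mask_data: bytes, meta: dict) -> dict:
    """Encode frame image + mask into second data info, attach meta info."""
    second_data = frame_image + mask_data  # stand-in for real encoding
    return {"meta": meta, "data": second_data}


def build_second_stream(units: list) -> list:
    """Combine second data units, preserving the original stream order."""
    return sorted(units, key=lambda u: u["meta"]["t2"])


u1 = build_second_unit(b"F1", b"M1", {"t2": 2000})
u2 = build_second_unit(b"F0", b"M0", {"t2": 1000})
stream = build_second_stream([u1, u2])
assert [u["meta"]["t2"] for u in stream] == [1000, 2000]
```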
5. The server-side data processing method according to claim 1, wherein the first information stream employs a real-time messaging protocol.
6. The server-side data processing method according to claim 1, wherein the second information stream employs a real-time messaging protocol.
7. A client data processing method, comprising the steps of:
the client acquires a second information stream and barrage information sent by the server;
the client parses a second data unit in the second information stream to obtain at least one frame image and corresponding mask frame data;
and the client draws the area corresponding to the mask frame data, the bullet screen information and the frame image on a screen for synchronous display.
8. The client data processing method of claim 7, wherein the step of the client parsing the second data unit in the second information stream to obtain at least one frame image and corresponding mask frame data comprises:
the client acquires the second data unit in the second information stream, wherein the second data unit comprises meta information and second data information;
and the client decodes the second data information to obtain a frame image and the mask frame data corresponding to the frame image.
9. The client data processing method according to claim 8, wherein the step of drawing the region corresponding to the mask frame data, the barrage information, and the frame image on the screen for synchronous display by the client comprises:
and the client adjusts the display time of the mask frame data according to the meta-information corresponding to the mask frame data, so that the frame image, the mask frame data corresponding to the frame image and the corresponding barrage information are synchronously displayed.
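The synchronization in claim 9 can be sketched as grouping frames, masks, and comments onto one display timeline; the mask's display time is adjusted by the timestamp carried in its meta information. All names below are illustrative assumptions:

```python
def schedule(frames, masks, danmaku):
    """Group frame images, masks and comments by display time (ms)."""
    timeline = {}
    for t, frame in frames:
        timeline.setdefault(t, {})["frame"] = frame
    for mask in masks:
        # Adjusted display time: t2 from the meta information plus the offset.
        t = mask["meta_t2"] + mask["offset"]
        timeline.setdefault(t, {})["mask"] = mask["data"]
    for t, text in danmaku:
        timeline.setdefault(t, {}).setdefault("comments", []).append(text)
    return timeline


tl = schedule(
    frames=[(1000, "img0")],
    masks=[{"meta_t2": 1000, "offset": 0, "data": "mask0"}],
    danmaku=[(1000, "nice!")],
)
assert tl[1000] == {"frame": "img0", "mask": "mask0", "comments": ["nice!"]}
```

Everything that lands in the same timeline slot is then drawn together, so a comment never renders over the masked main body area of its frame.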
10. A data processing system, characterized by comprising a server side and a client side, wherein:
the server side is used for identifying a main body area of at least one frame of image in a first information stream, generating mask frame data, combining the frame image with the mask frame data to generate a second information stream, and sending the second information stream to the client side;
the client side is used for acquiring the second information stream and barrage information sent by the server side, parsing a second data unit in the second information stream to acquire at least one frame image and corresponding mask frame data, and drawing an area corresponding to the mask frame data, the barrage information, and the frame image on a screen for synchronous display.
11. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method of any one of claims 1 to 6 when executing the computer program.
12. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method of any one of claims 7 to 9 when executing the computer program.
13. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program when executed by a processor implements the steps of the method of any one of claims 1 to 6.
14. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program when executed by a processor implements the steps of the method of any one of claims 7 to 9.
CN201910863108.6A 2019-09-12 2019-09-12 Data processing method and system Pending CN112492324A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910863108.6A CN112492324A (en) 2019-09-12 2019-09-12 Data processing method and system

Publications (1)

Publication Number Publication Date
CN112492324A (en) 2021-03-12

Family

ID=74920525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910863108.6A Pending CN112492324A (en) 2019-09-12 2019-09-12 Data processing method and system

Country Status (1)

Country Link
CN (1) CN112492324A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108401177A (en) * 2018-02-27 2018-08-14 上海哔哩哔哩科技有限公司 Video broadcasting method, server and audio/video player system
CN109151489A (en) * 2018-08-14 2019-01-04 广州虎牙信息科技有限公司 live video image processing method, device, storage medium and computer equipment
CN109302619A (en) * 2018-09-18 2019-02-01 北京奇艺世纪科技有限公司 A kind of information processing method and device
US20190068664A1 (en) * 2017-08-24 2019-02-28 Knowledgevision Systems Incorporated Method To Re-Synchronize Live Media Streams, Commands, And On-Screen Events Transmitted Through Different Internet Pathways
CN109618213A (en) * 2018-12-17 2019-04-12 华中科技大学 A method of preventing barrage shelter target object
CN109862414A (en) * 2019-03-22 2019-06-07 武汉斗鱼鱼乐网络科技有限公司 A kind of masking-out barrage display methods, device and server
CN110225365A (en) * 2019-04-23 2019-09-10 北京奇艺世纪科技有限公司 A kind of method, server and the client of the interaction of masking-out barrage

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220046291A1 (en) * 2020-08-04 2022-02-10 Shanghai Bilibili Technology Co., Ltd. Method and device for generating live streaming video data and method and device for playing live streaming video
US11863801B2 (en) * 2020-08-04 2024-01-02 Shanghai Bilibili Technology Co., Ltd. Method and device for generating live streaming video data and method and device for playing live streaming video
WO2022237281A1 (en) * 2021-05-14 2022-11-17 广东欧谱曼迪科技有限公司 Image mark data processing and restoring system, image mark data processing method and apparatus, and image mark data restoring method and apparatus
CN113766339A (en) * 2021-09-07 2021-12-07 网易(杭州)网络有限公司 Bullet screen display method and device
CN113766339B (en) * 2021-09-07 2023-03-14 网易(杭州)网络有限公司 Bullet screen display method and device

Similar Documents

Publication Publication Date Title
CN108184144B (en) Live broadcast method and device, storage medium and electronic equipment
US11962858B2 (en) Video playback method, video playback terminal, and non-volatile computer-readable storage medium
CN106454407B (en) Video live broadcasting method and device
US11451858B2 (en) Method and system of processing information flow and method of displaying comment information
CN106331880B (en) Information processing method and system
CN111246232A (en) Live broadcast interaction method and device, electronic equipment and storage medium
CN109299326B (en) Video recommendation method, device and system, electronic equipment and storage medium
CN109309842B (en) Live broadcast data processing method and device, computer equipment and storage medium
CN112492324A (en) Data processing method and system
CN108171160B (en) Task result identification method and device, storage medium and electronic equipment
CN112272327B (en) Data processing method, device, storage medium and equipment
CN112019905A (en) Live broadcast playback method, computer equipment and readable storage medium
CN105898395A (en) Network video playing method, device and system
CN114139491A (en) Data processing method, device and storage medium
CN114461423A (en) Multimedia stream processing method, device, storage medium and program product
US11696001B2 (en) Enhanced immersive digital media
CN112312145B (en) Access server, burst traffic caching method, system, computer device and readable storage medium
US9807453B2 (en) Mobile search-ready smart display technology utilizing optimized content fingerprint coding and delivery
CN113923530B (en) Interactive information display method and device, electronic equipment and storage medium
CN111954041A (en) Video loading method, computer equipment and readable storage medium
US20080256169A1 (en) Graphics for limited resolution display devices
CN114071170B (en) Network live broadcast interaction method and device
CN110602534B (en) Information processing method and device and computer readable storage medium
CN111147930A (en) Data output method and system based on virtual reality
KR102615377B1 (en) Method of providing a service to experience broadcasting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination