WO2020094012A1 - Streaming media video data processing method, apparatus, computer device and storage medium - Google Patents

Streaming media video data processing method, apparatus, computer device and storage medium

Info

Publication number
WO2020094012A1
WO2020094012A1 · PCT/CN2019/115725 · CN2019115725W
Authority
WO
WIPO (PCT)
Prior art keywords: video, data, information, fragmented, video data
Prior art date
Application number
PCT/CN2019/115725
Other languages
English (en)
French (fr)
Inventor
Sun Jun (孙俊)
Wang Yong (王雍)
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Priority to EP19881958.3A (published as EP3879842A4)
Publication of WO2020094012A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23113 Content storage operation involving housekeeping operations for stored content, e.g. prioritizing content for deletion because of storage space restrictions
    • H04N21/236 Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream; Assembling of a packetised elementary stream
    • H04N21/23614 Multiplexing of additional data and video streams
    • H04N21/238 Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2387 Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • H04N21/239 Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393 Interfacing the upstream path of the transmission network involving handling client requests
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client
    • H04N21/65 Transmission of management data between client and server
    • H04N21/658 Transmission by the client directed to the server
    • H04N21/6581 Reference data, e.g. a movie identifier for ordering a movie or a product identifier in a home shopping application
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455 Structuring of content involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • H04N21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments

Definitions

  • the present disclosure relates to the field of communication technologies, and in particular, to a streaming media video data processing method, device, computer equipment, and storage medium.
  • FLV (Flash Video) is a streaming video format.
  • The rich and diverse video resources on the Internet are increasingly dominated by the FLV format.
  • Cache acceleration servers generally cache the entire FLV resource file.
  • On a cache hit, the local FLV-format video resource is parsed, and the fragment content requested by the user is read to provide the user with a video acceleration service.
  • Because the video cache acceleration server must cache the entire video resource, disk space is wasted.
  • the present disclosure provides a streaming media video data processing method, apparatus, computer equipment, and storage medium.
  • A method for processing streaming media video data comprises: receiving a video playback request sent by at least one terminal, the video playback request carrying a video content identifier and corresponding video memory information; generating a preset drag parameter corresponding to the video content identifier; obtaining, from the source station, video information corresponding to the preset drag parameter, the video information including metadata; determining fragmentation information according to the video memory information, obtaining the fragmented video data corresponding to the fragmentation information from the source station, and caching the fragmented video data; and splicing the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data.
  • A streaming media video data processing device comprises: a data receiving module for receiving a video playback request sent by at least one terminal, the video playback request carrying a video content identifier and corresponding video memory information; a parameter generating module for generating preset drag parameters corresponding to the video content identifier; a data acquisition module for acquiring, from the source station, video information corresponding to the preset drag parameters, the video information including metadata; a data cache module for determining fragmentation information based on the video memory information, obtaining the fragmented video data corresponding to the fragmentation information from the source station, and caching the fragmented video data; and a data splicing module for splicing the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data.
  • A computer device includes a memory, a processor, and a computer program stored on the memory and executable on the processor.
  • When the processor executes the computer program, the following steps are implemented: receiving a video playback request sent by at least one terminal, the video playback request carrying a video content identifier and corresponding video memory information; generating preset drag parameters corresponding to the video content identifier; obtaining video information corresponding to the preset drag parameters from the source station, the video information including metadata; determining fragmentation information according to the video memory information, obtaining the fragmented video data corresponding to the fragmentation information from the source station, and caching the fragmented video data; and splicing the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data.
  • A computer-readable storage medium stores a computer program which, when executed by a processor, implements the following steps: generating preset drag parameters corresponding to the video content identifier; obtaining video information corresponding to the preset drag parameters from the source station, the video information including metadata; determining fragmentation information based on the video memory information, obtaining the fragmented video data corresponding to the fragmentation information from the source station, and caching the fragmented video data; and splicing the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data.
  • FIG. 1 is an application scenario diagram of a streaming media video data processing method in an embodiment
  • FIG. 2 is a schematic flowchart of a streaming media video data processing method in an embodiment
  • FIG. 3 is a schematic flowchart of a streaming media video data processing method in another embodiment
  • FIG. 4 is a schematic flowchart of a step of determining fragmentation information in an embodiment
  • FIG. 5 is a schematic flowchart of a streaming media video data processing method in a specific embodiment
  • FIG. 6 is a structural block diagram of a streaming media video data processing device in an embodiment
  • FIG. 7 is a structural block diagram of a streaming media video data processing device in another embodiment
  • FIG. 8 is a structural block diagram of a data cache module in an embodiment
  • FIG. 9 is a block diagram of the internal structure of a computer device in an embodiment.
  • FIG. 1 is an application environment diagram of a streaming media video data processing method in an embodiment.
  • the streaming media video data processing method is applied to a streaming media video data processing system.
  • the streaming media video data processing system includes a terminal group 110, a cache acceleration server 120, and an origin station 130.
  • the terminal group 110 includes multiple terminals, such as the terminal 112, the terminal 114, and the terminal 116.
  • The terminal group 110 and the cache acceleration server 120 are connected through a network, and the cache acceleration server 120 and the source station 130 are connected through a network.
  • The cache acceleration server 120 receives a video playback request sent by at least one terminal; the video playback request carries a video content identifier and corresponding video memory information. The server generates a preset drag parameter corresponding to the video content identifier, obtains from the source station 130 the video information corresponding to the drag parameter (the video information contains metadata), determines fragmentation information according to the video memory information, obtains the fragmented video data corresponding to the fragmentation information from the source station 130, caches the fragmented video data, splices the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data, and sends the playback data to the corresponding terminal.
  • Each terminal in the terminal group 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like.
  • the cache acceleration server 120 or the source station 130 can be implemented by an independent server or a server cluster composed of multiple servers.
  • a streaming media video data processing method is provided. This embodiment is mainly exemplified by the method applied to the cache acceleration server 120 in FIG. 1 described above. Referring to FIG. 2, the streaming media video data processing method specifically includes the following steps:
  • Step S202 Receive a video playback request sent by at least one terminal.
  • the video playback request carries the video content identifier and corresponding video memory information.
  • a video playback request refers to a request sent by a user to obtain playback data.
  • The video playback request carries a video content identifier and video memory information, where the video content identifier is tag data used to identify video content and includes at least one of a video theme, a video drag parameter, a video category, and the like.
  • The video drag parameter includes at least one of a start position parameter and an end position parameter, where the start position parameter can be represented by START and the end position parameter by END.
  • Video memory information includes video memory size information and video playback position information.
  • the cache acceleration server is a server for caching video data, and the cache acceleration server obtains a video playback request sent by at least one terminal.
  • Step S204 Generate preset drag parameters corresponding to the video content identifier.
  • Step S206 Obtain video information corresponding to the preset drag parameters from the source station; the video information includes metadata.
  • the preset drag parameter is a preset parameter including a preset start position parameter and a preset end position parameter.
  • the preset drag parameter may be used to obtain video information corresponding to the video content identifier, where the video information includes metadata and video data determined according to the drag start position and end position of the preset drag parameter.
  • the source station obtains the corresponding video data and metadata according to the preset drag parameters, and returns the acquired video data and metadata to the server.
  • the source station refers to a server for storing streaming media video data
  • metadata (METADATA) is data for describing streaming media video data, mainly describing data attribute information of the streaming media video data.
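As context for how such metadata supports dragging: the onMetaData tag of an FLV file often carries a keyframe index (parallel `times` and `filepositions` arrays). A minimal sketch, assuming the metadata has already been parsed into a dictionary of that shape; the `meta` sample and function name below are illustrative, not from the patent:

```python
from bisect import bisect_right

def seek_offset(metadata: dict, start_seconds: float) -> int:
    """Map a drag START time to the byte offset of the nearest
    preceding keyframe, using the keyframe index that FLV
    onMetaData commonly carries (times/filepositions arrays)."""
    times = metadata["keyframes"]["times"]               # seconds, ascending
    positions = metadata["keyframes"]["filepositions"]   # byte offsets
    # Index of the last keyframe at or before the requested time.
    i = max(bisect_right(times, start_seconds) - 1, 0)
    return int(positions[i])

# Illustrative parsed metadata with keyframes every 2 seconds.
meta = {"keyframes": {"times": [0.0, 2.0, 4.0, 6.0],
                      "filepositions": [13, 40960, 81920, 122880]}}
```

A drag request for second 5, say, would be served starting from the keyframe at second 4.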
  • Step S208 Determine the fragmentation information according to the video memory information, obtain fragmented video data corresponding to the fragmentation information from the source station, and cache the fragmented video data.
  • the fragmentation information refers to attribute information of the fragmented video data after fragmentation, including memory size information and playback position information of the fragmented video data.
  • The fragmented video data refers to video data obtained by dividing larger video data according to a certain memory size. For example, if a video occupies 20M of memory and is divided according to 2M, 10 fragments of video data are obtained.
  • The memory size of each fragment of video data, as stored in the cache acceleration server, is 2M.
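The 20M/2M example above amounts to a simple byte-range computation. A hypothetical helper, not part of the disclosed method, producing the inclusive ranges later used in RANGE headers:

```python
def fragment_ranges(total_bytes: int, fragment_bytes: int):
    """Split a resource of total_bytes into fragments of at most
    fragment_bytes, returning inclusive (start, end) byte ranges
    as used in HTTP Range headers."""
    ranges = []
    start = 0
    while start < total_bytes:
        end = min(start + fragment_bytes, total_bytes) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# 20M video, 2M fragments -> 10 fragments, matching the example.
MB = 1024 * 1024
ranges = fragment_ranges(20 * MB, 2 * MB)
```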
  • Step S210 Splice metadata and segmented video data according to the video playback request to obtain corresponding playback data.
  • the playback data is video data sent to the playback terminal, and the video data is obtained by splicing the cached fragmented video data and metadata according to the video playback request sent by the user.
  • Depending on the video playback request sent by the user, the playback data obtained when splicing the video data and metadata differs.
  • When the video playback request sent by the user carries the drag parameter START or END, the metadata and the video data corresponding to the drag parameter are directly spliced to obtain the playback data.
  • When the video playback request sent by the user does not include the drag parameter START or END, the fragmented video data returned by the source station is directly spliced to obtain the playback data.
  • When the video playback request sent by the user carries a drag parameter, the fragmented video data corresponding to the drag parameter are spliced. That is, when the video memory size corresponding to the drag parameter equals the sum of the memory sizes of multiple fragments of video data, those fragments and the metadata are directly spliced into playback data; otherwise, the fragments whose video content fully matches, together with the portion of a fragment whose content partially overlaps the drag parameters, are spliced with the metadata to obtain the playback data.
  • For example, when the video content that the user wants to watch spans 10M-20M, the fragmented video data of 10M-12M, 12M-14M, 14M-16M, 16M-18M, and 18M-20M and the metadata are spliced to obtain the playback data.
  • When the video content that the user wants to watch spans 10M-19M, the fragmented video data of 10M-12M, 12M-14M, 14M-16M, and 16M-18M, the 18M-19M portion, and the metadata are spliced to obtain the playback data.
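The 10M-19M case can be illustrated with a small splicing sketch. The in-memory `frags` mapping is a stand-in for the server's fragment cache (the patent does not specify a storage layout), and one byte represents one "M" for brevity:

```python
def splice_playback(metadata: bytes, fragments: dict,
                    req_start: int, req_end: int) -> bytes:
    """Splice metadata with the cached fragments covering
    [req_start, req_end); a partially overlapping fragment
    contributes only its overlapping slice (the 10M-19M case)."""
    out = [metadata]
    for (f_start, f_end), data in sorted(fragments.items()):
        lo, hi = max(f_start, req_start), min(f_end, req_end)
        if lo < hi:  # fragment overlaps the requested span
            out.append(data[lo - f_start:hi - f_start])
    return b"".join(out)

# Toy cache: keys are (start, end) in "M", one byte per "M".
frags = {(10, 12): b"AB", (12, 14): b"CD", (14, 16): b"EF",
         (16, 18): b"GH", (18, 20): b"IJ"}
```

Requesting 10-20 yields every fragment whole; requesting 10-19 takes only the first half of the 18-20 fragment.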
  • Before generating the preset drag parameter corresponding to the video content identifier, the method further includes: determining whether metadata corresponding to the video playback request exists. When it exists, the step of determining the fragmentation information according to the video memory information is entered; when it does not exist, the step of obtaining video information corresponding to the drag parameter from the source station is entered.
  • After generating the preset drag parameters corresponding to the video content identifier, the cache acceleration server first searches locally for metadata corresponding to the video playback request.
  • the metadata contains the video content identification.
  • the video content identifier in the video playback request is matched with the video content identifier of each metadata in the cache acceleration server.
  • When the match is successful, it means that some or all of the fragmented video data corresponding to the video content identifier has been cached in the cache acceleration server, and the step of determining the fragmentation information according to the video memory information is entered.
  • Before determining the fragmentation information according to the video memory information, the method further includes: determining whether the full video data corresponding to the video playback request is cached. When the full video data is cached, the step of splicing the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data is entered; when the full video data is not cached, the judgment step for the full video data continues until the full video is cached.
  • When the video content identifier of metadata in the cache acceleration server matches the video content identifier in the video playback request and the cache acceleration server contains part of the video data corresponding to the request, the remaining part is obtained from the source station. For example, when the user wants to watch the video data for minutes 10-40 and the cache acceleration server contains the video data for minutes 10-20, the remaining video data for minutes 20-40 is obtained from the source station. If the cache acceleration server does not contain any video data corresponding to the video playback request, all of it is obtained from the source station. Determining whether corresponding video data exists in the cache acceleration server avoids fetching duplicate video data and makes it more convenient to serve the video data directly from the cache acceleration server.
  • After step S210, the method further includes:
  • Step S212 Extract the terminal identification in the video playback request, and send the playback data to the terminal corresponding to the terminal identification.
  • the terminal identification is label data used to identify the terminal, and the terminal identification may be composed of letters, characters, special symbols, and numbers.
  • Each video playback request carries the corresponding terminal identification. After the cache acceleration server obtains the playback data by splicing, the terminal identification in the video playback request is extracted, and the playback data is sent to the terminal corresponding to that identification. For example, if terminal A sends a video playback request to play program X, then after the playback data of program X is acquired, the playback data is sent to terminal A.
  • By receiving a video playback request sent by at least one terminal (the video playback request carries the video content identifier and the corresponding video memory information), the method generates a preset drag parameter corresponding to the video content identifier and obtains from the source station the video information corresponding to the preset drag parameters.
  • The video information contains metadata. The fragmentation information is determined based on the video memory information, the fragmented video corresponding to the fragmentation information is obtained from the source station, the fragmented video data is cached, and the metadata and fragmented video data are spliced according to the video playback request to obtain the corresponding playback data.
  • The cache acceleration server automatically generates preset drag parameters according to the video playback request, obtains the video information from the source station according to the preset drag parameters, obtains the fragmented video data from the source station according to the information carried in the video playback request, and splices the fragmented video data according to the playback request to obtain the playback data.
  • The preset drag parameters are used to obtain the metadata; different video providers return different content in response to video requests.
  • The metadata is separated out by means of the preset drag parameters, and the corresponding video data is obtained according to the video playback request. The video data and the metadata are then spliced together, so that the cache acceleration server can better obtain videos from different video providers to meet user needs.
  • By caching fragments according to user needs, only the required amount of video is cached, which reduces the pressure on the source station and makes more effective use of the cache acceleration server's disk space.
  • Before step S210, the method further includes:
  • Step S402 Determine whether the video playback request includes video drag parameters.
  • Step S404 When the video playback request includes video drag parameters, splice the metadata and the fragmented video data corresponding to the video drag parameters to obtain the playback data.
  • Step S406 When the video playback request does not include video drag parameters, splice the fragmented video data corresponding to the video playback request to obtain the playback data.
  • The video drag parameter carried in a user's video playback request is an identifier used to describe the video data that the user desires to cache; the identifier may represent at least one of a start position and an end position, where the start position is described by START and the end position by END.
  • When the video playback request includes video drag parameters, the metadata and the fragmented video data corresponding to the video playback request are spliced to obtain the playback data.
  • When the video playback request does not include video drag parameters, the fragmented video data corresponding to the video playback request is directly spliced to obtain the playback data.
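The branch just described (steps S404/S406: metadata is prepended only when a drag parameter is present) can be sketched as follows; the dictionary request representation is an assumption for illustration:

```python
def build_playback(request: dict, metadata: bytes, fragments: list) -> bytes:
    """When the request carries a START or END drag parameter,
    prepend the metadata before the spliced fragments; otherwise
    return the spliced fragments alone (steps S404/S406)."""
    body = b"".join(fragments)
    if "START" in request or "END" in request:
        return metadata + body
    return body
```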
  • step S208 includes:
  • Step S2082 Acquire preset sharding rules.
  • Step S2084 Determine the number of video fragments according to the video memory size and the preset fragmentation rules.
  • Step S2086 Determine the playback position information of each segmented video data according to the video playback position information and the number of segments.
  • The preset fragmentation rule refers to a preset rule for fragmenting a video.
  • The rule can be customized; for example, it can be a rule made by a technician after analysis based on network bandwidth, server performance, and so on.
  • The preset fragmentation rules include memory fragmentation rules, time fragmentation rules, etc. Taking a memory fragmentation rule as an example, if the fragment memory is defined as 2M and the video memory size is 10M, then the number of fragments is 5. After determining the number of fragments, the playback position information of each fragment of video data is determined according to the video playback position information and the number of fragments.
  • For example, the start position of the first fragment of video data is 10 minutes and its end position is 12 minutes; the start position of the second fragment is 12 minutes and its end position is 14 minutes.
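The memory fragmentation rule above (2M fragments, a 10M video, playback spanning minutes 10-20) can be sketched as follows. The even time split assumes a constant bitrate, which the patent does not state; names are illustrative:

```python
import math

def fragment_plan(video_bytes: int, frag_bytes: int,
                  play_start_min: float, play_end_min: float):
    """Apply a memory fragmentation rule: derive the fragment
    count from the video size, then spread the playback interval
    evenly over the fragments (constant-bitrate assumption)."""
    count = math.ceil(video_bytes / frag_bytes)
    step = (play_end_min - play_start_min) / count
    spans = [(play_start_min + i * step, play_start_min + (i + 1) * step)
             for i in range(count)]
    return count, spans
```

For a 10M video with 2M fragments playing from minute 10 to 20, this yields 5 fragments spanning 10-12, 12-14, and so on, matching the example above.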
  • the above streaming media video data processing method includes:
  • Step S602 The terminal sends a FLV video playback request to the cache acceleration server.
  • Step S604 The cache acceleration server determines, according to the FLV video playback request, whether corresponding metadata exists. When it exists, proceed to step S606A; when it does not exist, proceed to step S606B.
  • Step S606A Determine whether video data corresponding to the FLV video playback request exists. If it exists, go to step S616B; if not, go to step S612B.
  • Step S608B The source station obtains corresponding video information according to preset drag parameters, and sends the video information to the cache acceleration server.
  • Step S610B The cache acceleration server uses the file name + METADATA as an identifier to cache the video information returned by the source station; the video information includes metadata and video data.
  • Step S612B According to the range in the video content identifier of the video playback request, the cache acceleration server goes back to the source station for fragments of a certain size using a RANGE header.
  • Step S614B The source station responds with the corresponding fragmented video data according to the RANGE requested by the cache acceleration server.
  • Step S616B The cache acceleration server uses the file name + RANGE as an identifier to cache the fragmented video data returned by the source station.
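Steps S612B-S616B correspond to an ordinary HTTP Range fetch plus a file-name + RANGE cache key. A sketch using Python's standard library; the URL, key format, and function names are illustrative assumptions:

```python
import urllib.request

def fetch_fragment(url: str, start: int, end: int) -> bytes:
    """Back-to-source fetch of one fragment with an HTTP Range
    header (inclusive byte range), as in step S612B."""
    req = urllib.request.Request(
        url, headers={"Range": f"bytes={start}-{end}"})
    # A Range-capable origin answers 206 Partial Content.
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def cache_key(filename: str, start: int, end: int) -> str:
    """Cache identifier built from file name + RANGE (step S616B)."""
    return f"{filename}:bytes={start}-{end}"
```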
  • Step S618B The cache acceleration server determines whether the video playback request carries a START or END drag parameter. If so, proceed to step S620B; if not, proceed to step S622B.
  • Step S620B The metadata and the fragmented video data are spliced into playback data. That is, the video data in the video information is discarded, leaving the metadata, and the metadata is spliced with the fragmented video data to obtain the playback data.
  • Step S622B splicing the segmented video data to obtain playback data.
  • Step S624B The cache acceleration server sends the playback data to the terminal.
  • FIGS. 2-5 are schematic flowcharts of a streaming media video data processing method in an embodiment. It should be understood that although the steps in the flowcharts of FIGS. 2-5 are displayed in order according to the arrows, the steps are not necessarily executed in the order indicated by the arrows. Unless clearly stated herein, the execution order of these steps is not strictly limited, and these steps can be executed in other orders. Moreover, at least some of the steps in FIGS. 2-5 may include multiple sub-steps or multiple stages; these sub-steps or stages are not necessarily executed and completed at the same time but may be executed at different times, and their execution order is not necessarily sequential; they may be executed in turn or alternately with at least a part of other steps or of the sub-steps or stages of other steps.
  • a streaming media video data processing apparatus 200 including:
  • the data receiving module 202 is configured to receive a video playback request sent by at least one terminal, and the video playback request carries a video content identifier and corresponding video memory information.
  • the parameter generation module 204 is used to generate preset drag parameters corresponding to the video content identifier.
  • the data acquisition module 206 is used to acquire video information corresponding to preset drag parameters from the source station, and the video information includes metadata.
  • the data cache module 208 is used to determine the fragment information according to the video memory information, obtain the fragment video corresponding to the fragment information from the source station, and cache the fragment video data.
  • the data splicing module 210 is used for splicing metadata and segmented video data according to a video playback request to obtain corresponding playback data.
  • In one embodiment, the above streaming media video data processing apparatus further includes:
  • a metadata judgment module 212, configured to determine whether metadata corresponding to the video playback request exists; when it exists, control returns to the data cache module 208, and when it does not exist, control returns to the data acquisition module 206.
  • In one embodiment, the above streaming media video data processing apparatus further includes:
  • a video data judgment module 214, configured to determine whether all fragmented video data corresponding to the video playback request has been cached; when all fragmented video data has been cached, control returns to the data splicing module 210, and when it has not, control returns to the data acquisition module 206.
  • In one embodiment, the above streaming media video data processing apparatus further includes:
  • a drag parameter judgment module 216, configured to determine whether the video playback request contains video drag parameters.
  • The data splicing module 210 is further configured to splice the metadata with the fragmented video data corresponding to the video drag parameters to obtain the playback data when the video playback request contains video drag parameters, and to splice the fragmented video data corresponding to the video playback request into the playback data when the video playback request does not contain video drag parameters.
  • In one embodiment, as shown in FIG. 8, the data cache module 208 includes:
  • an information acquisition unit 2082, configured to obtain a preset slicing rule;
  • a slice number calculation unit 2084, configured to determine the number of video slices according to the video memory size and the preset slicing rule;
  • a slice information determination unit 2086, configured to determine the playback position information of each piece of fragmented video data according to the video playback position information and the number of slices.
  • In one embodiment, the above streaming media video data processing apparatus further includes:
  • a data sending module 218, configured to extract the terminal identifier carried in the video playback request and send the playback data to the terminal corresponding to the terminal identifier.
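For illustration only, the cooperation of modules 202-210 described above can be sketched as follows. The dict-based origin and cache stores, the helper names, and the request layout are all hypothetical stand-ins, not structures defined by the disclosure:

```python
class StreamingVideoProcessor:
    """Hypothetical sketch of how modules 202-210 might chain together."""

    def __init__(self, origin_metadata, origin_video, slice_size=4):
        self.origin_metadata = origin_metadata  # content id -> metadata bytes
        self.origin_video = origin_video        # content id -> full video bytes
        self.slice_size = slice_size            # preset slicing rule (bytes)
        self.cache = {}                         # (content id, range) -> fragment

    def handle(self, request):
        # Module 202: the request carries a content identifier and memory info.
        cid = request["content_id"]
        start, end = request["memory_info"]     # byte range the user wants
        # Module 204: preset drag parameters, which would accompany the
        # metadata fetch to the source station (unused in this offline stub).
        preset = {"START": 0, "END": 1}
        # Module 206: video information (metadata) from the source station.
        metadata = self.origin_metadata[cid]
        # Module 208: determine fragment info and cache each fragment.
        fragments = []
        for lo in range(start, end, self.slice_size):
            rng = (lo, min(lo + self.slice_size, end))
            frag = self.cache.setdefault((cid, rng),
                                         self.origin_video[cid][rng[0]:rng[1]])
            fragments.append(frag)
        # Module 210: splice metadata and fragments into playback data.
        return metadata + b"".join(fragments)
```

The cache key pairs the content identifier with the fragment range, so a repeated request for the same span is served without touching the origin store again.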
  • FIG. 9 shows an internal structure diagram of a computer device in an embodiment. The computer device may specifically be the terminal 110 (or the cache acceleration server 120) in FIG. 1. As shown in FIG. 9, the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the streaming media video data processing method. A computer program may also be stored in the internal memory; when executed by the processor, it causes the processor to execute the streaming media video data processing method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the casing of the computer device, or an external keyboard, touchpad, or mouse.
  • Those skilled in the art can understand that the structure shown in FIG. 9 is only a block diagram of part of the structure related to the disclosed solution and does not constitute a limitation on the computer device to which the disclosed solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • In one embodiment, the streaming media video data processing apparatus provided by the present disclosure may be implemented in the form of a computer program, and the computer program may run on the computer device shown in FIG. 9. The memory of the computer device may store the program modules constituting the streaming media video data processing apparatus, for example, the data receiving module 202, the parameter generation module 204, the data acquisition module 206, the data cache module 208, and the data splicing module 210 shown in FIG. 6. The computer program constituted by these program modules causes the processor to execute the steps of the streaming media video data processing methods of the various embodiments of the present disclosure described in this specification.
  • For example, the computer device shown in FIG. 9 may receive, through the data receiving module 202 in the streaming media video data processing apparatus shown in FIG. 6, a video playback request sent by at least one terminal, the video playback request carrying a video content identifier and corresponding video memory information. The computer device may generate, through the parameter generation module 204, preset drag parameters corresponding to the video content identifier; obtain, through the data acquisition module 206, video information corresponding to the preset drag parameters from the source station, the video information containing metadata; determine, through the data cache module 208, fragment information according to the video memory information, obtain the fragmented video corresponding to the fragment information from the source station, and cache the fragmented video data; and splice, through the data splicing module 210, the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data.
  • In one embodiment, a computer device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the following steps are implemented: receiving a video playback request sent by at least one terminal, the video playback request carrying a video content identifier and corresponding video memory information; generating preset drag parameters corresponding to the video content identifier; obtaining video information corresponding to the drag parameters from the source station, the video information containing metadata; determining fragment information according to the video memory information; obtaining the fragmented video data corresponding to the fragment information from the source station; caching the fragmented video data; and splicing the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data.
  • In one embodiment, before the preset drag parameters corresponding to the video content identifier are generated, the processor further implements the following steps when executing the computer program: determining whether metadata corresponding to the video playback request exists; when it does not exist, entering the step of obtaining the video information corresponding to the drag parameters from the source station.
  • In one embodiment, before the fragment information is determined according to the video memory information, the processor further implements the following steps when executing the computer program: determining whether all fragmented video data corresponding to the video playback request has been cached; when all fragmented video data has been cached, entering the step of splicing the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data; when not all fragmented video data has been cached, continuing the judgment step on all fragmented video data until all fragmented video data has been cached.
  • In one embodiment, before the metadata and the fragmented video data are spliced according to the video playback request to obtain the corresponding playback data, the processor further implements the following steps when executing the computer program: determining whether the video playback request contains video drag parameters; when it does, splicing the metadata with the fragmented video data corresponding to the video drag parameters to obtain the playback data; when it does not, splicing the fragmented video data corresponding to the video playback request into the playback data.
  • In one embodiment, the video memory information includes video playback position information and a video memory size, and the fragment information includes the playback position information of each piece of fragmented video data. Determining the fragment information according to the video memory information includes: obtaining a preset slicing rule; determining the number of video slices according to the video memory size and the preset slicing rule; and determining the playback position information of each piece of fragmented video data according to the video playback position information and the number of slices.
  • In one embodiment, the video playback request carries a terminal identifier. After the metadata and the fragmented video data are spliced according to the video playback request to obtain the corresponding playback data, the processor further implements the following steps when executing the computer program: extracting the terminal identifier carried in the video playback request, and sending the playback data to the terminal corresponding to the terminal identifier.
  • In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When the computer program is executed by a processor, the following steps are implemented: receiving a video playback request sent by at least one terminal, the video playback request carrying a video content identifier and corresponding video memory information; generating preset drag parameters corresponding to the video content identifier; obtaining video information corresponding to the drag parameters from the source station, the video information containing metadata; determining fragment information according to the video memory information; obtaining the fragmented video data corresponding to the fragment information from the source station; caching the fragmented video data; and splicing the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data.
  • In one embodiment, before the preset drag parameters corresponding to the video content identifier are generated, the following steps are further implemented when the computer program is executed by the processor: determining whether metadata corresponding to the video playback request exists; when it does not exist, entering the step of obtaining the video information corresponding to the drag parameters from the source station.
  • In one embodiment, before the fragment information is determined according to the video memory information, the following steps are further implemented when the computer program is executed by the processor: determining whether all fragmented video data corresponding to the video playback request has been cached; when all fragmented video data has been cached, entering the step of splicing the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data; when not all fragmented video data has been cached, continuing the judgment step on all fragmented video data until all fragmented video data has been cached.
  • In one embodiment, before the metadata and the fragmented video data are spliced according to the video playback request to obtain the corresponding playback data, the following steps are further implemented when the computer program is executed by the processor: determining whether the video playback request contains video drag parameters; when it does, splicing the metadata with the fragmented video data corresponding to the video drag parameters to obtain the playback data; when it does not, splicing the fragmented video data corresponding to the video playback request into the playback data.
  • In one embodiment, the video memory information includes video playback position information and a video memory size, and the fragment information includes the playback position information of each piece of fragmented video data. Determining the fragment information according to the video memory information includes: obtaining a preset slicing rule; determining the number of video slices according to the video memory size and the preset slicing rule; and determining the playback position information of each piece of fragmented video data according to the video playback position information and the number of slices.
  • In one embodiment, the video playback request carries a terminal identifier. After the metadata and the fragmented video data are spliced according to the video playback request to obtain the corresponding playback data, the following steps are further implemented when the computer program is executed by the processor: extracting the terminal identifier carried in the video playback request, and sending the playback data to the terminal corresponding to the terminal identifier.
  • Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. Furthermore, it is well known to those of ordinary skill in the art that communication media typically contain computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery media.
  • In the streaming media video data processing method, apparatus, computer device, and storage medium described above, the method receives a video playback request sent by at least one terminal, the video playback request carrying a video content identifier and corresponding video memory information; generates preset drag parameters corresponding to the video content identifier; obtains video information corresponding to the preset drag parameters from the source station, the video information containing metadata; determines fragment information according to the video memory information; obtains the fragmented video corresponding to the fragment information from the source station; caches the fragmented video data; and splices the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data. By obtaining the metadata through the generated preset drag parameters, the metadata and the video data are separated, and data is cached in this separated manner, saving disk space.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present disclosure relates to a streaming media video data processing method, apparatus, computer device, and storage medium. The method includes: receiving a video playback request sent by at least one terminal, the video playback request carrying a video content identifier and corresponding video memory information; generating preset drag parameters corresponding to the video content identifier; obtaining video information corresponding to the preset drag parameters from a source station, the video information containing metadata; determining fragment information according to the video memory information; obtaining the fragmented video corresponding to the fragment information from the source station; caching the fragmented video data; and splicing the metadata and the fragmented video data according to the video playback request to obtain corresponding playback data.

Description

Streaming media video data processing method, apparatus, computer device, and storage medium
The present disclosure claims priority to Chinese patent application CN201811314448.5, entitled "Streaming media video data processing method, apparatus, computer device, and storage medium" and filed on November 6, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of communication technology, and in particular to a streaming media video data processing method, apparatus, computer device, and storage medium.
Background
With the rapid development of the Internet, video and audio traffic keeps growing, and video resources in the FLV (Flash Video) format have become mainstream content on domestic video websites. Because FLV files are extremely small, load very quickly, and are easy to transmit, FLV has in practice become the prevailing standard for online video playback, making it feasible to watch high-definition video files over the network. The rich and varied video resources on the network are increasingly dominated by the FLV format, and research shows that different video websites and playback platforms differ in their requirements for the FLV video format.
A cache acceleration server generally caches the entire FLV resource file. When a user requests the video resource again, the server can, on a cache hit, parse the locally cached FLV video resource and read the segment the user requests, providing the user with a video acceleration service. However, when a user only wants to watch one segment of a video resource, the video cache acceleration server still has to cache the entire video resource, which wastes disk space.
Summary
To solve the above technical problem, the present disclosure provides a streaming media video data processing method, apparatus, computer device, and storage medium.
A streaming media video data processing method includes: receiving a video playback request sent by at least one terminal, the video playback request carrying a video content identifier and corresponding video memory information; generating preset drag parameters corresponding to the video content identifier; obtaining video information corresponding to the preset drag parameters from a source station, the video information containing metadata; determining fragment information according to the video memory information, obtaining the fragmented video corresponding to the fragment information from the source station, and caching the fragmented video data; and splicing the metadata and the fragmented video data according to the video playback request to obtain corresponding playback data.
A streaming media video data processing apparatus includes: a data receiving module, configured to receive a video playback request sent by at least one terminal, the video playback request carrying a video content identifier and corresponding video memory information; a parameter generation module, configured to generate preset drag parameters corresponding to the video content identifier; a data acquisition module, configured to obtain video information corresponding to the preset drag parameters from a source station, the video information containing metadata; a data cache module, configured to determine fragment information according to the video memory information, obtain the fragmented video corresponding to the fragment information from the source station, and cache the fragmented video data; and a data splicing module, configured to splice the metadata and the fragmented video data according to the video playback request to obtain corresponding playback data.
A computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the following steps are implemented: receiving a video playback request sent by at least one terminal, the video playback request carrying a video content identifier and corresponding video memory information; generating preset drag parameters corresponding to the video content identifier; obtaining video information corresponding to the preset drag parameters from a source station, the video information containing metadata; determining fragment information according to the video memory information, obtaining the fragmented video corresponding to the fragment information from the source station, and caching the fragmented video data; and splicing the metadata and the fragmented video data according to the video playback request to obtain corresponding playback data.
A computer-readable storage medium has a computer program stored thereon. When the computer program is executed by a processor, the following steps are implemented: receiving a video playback request sent by at least one terminal, the video playback request carrying a video content identifier and corresponding video memory information; generating preset drag parameters corresponding to the video content identifier; obtaining video information corresponding to the preset drag parameters from a source station, the video information containing metadata; determining fragment information according to the video memory information, obtaining the fragmented video corresponding to the fragment information from the source station, and caching the fragmented video data; and splicing the metadata and the fragmented video data according to the video playback request to obtain corresponding playback data.
Brief Description of the Drawings
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and serve, together with the specification, to explain the principles of the present disclosure.
To describe the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a diagram of an application scenario of a streaming media video data processing method in an embodiment;
FIG. 2 is a schematic flowchart of a streaming media video data processing method in an embodiment;
FIG. 3 is a schematic flowchart of a streaming media video data processing method in another embodiment;
FIG. 4 is a schematic flowchart of the fragment information determination step in an embodiment;
FIG. 5 is a schematic flowchart of a streaming media video data processing method in a specific embodiment;
FIG. 6 is a structural block diagram of a streaming media video data processing apparatus in an embodiment;
FIG. 7 is a structural block diagram of a streaming media video data processing apparatus in another embodiment;
FIG. 8 is a structural block diagram of the data cache module in an embodiment;
FIG. 9 is a block diagram of the internal structure of a computer device in an embodiment.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
FIG. 1 is a diagram of the application environment of a streaming media video data processing method in an embodiment. Referring to FIG. 1, the streaming media video data processing method is applied to a streaming media video data processing system. The streaming media video data processing system includes a terminal group 110, a cache acceleration server 120, and a source station 130. The terminal group 110 includes multiple terminals, such as a terminal 112, a terminal 114, and a terminal 116. The terminal group 110 and the cache acceleration server 120 are connected via a network, and the cache acceleration server 120 and the source station 130 are connected via a network. The cache acceleration server 120 receives a video playback request sent by at least one terminal, the video playback request carrying a video content identifier and corresponding video memory information; generates preset drag parameters corresponding to the video content identifier; obtains video information corresponding to the drag parameters from the source station 130, the video information containing metadata; determines fragment information according to the video memory information; obtains the fragmented video data corresponding to the fragment information from the source station 130; caches the fragmented video data; splices the metadata and the fragmented video data according to the video playback request to obtain corresponding playback data; and sends the playback data to the corresponding terminal.
The terminals in the terminal group 110 may specifically be desktop terminals or mobile terminals, and a mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The cache acceleration server 120 and the source station 130 may each be implemented as an independent server or as a server cluster composed of multiple servers.
As shown in FIG. 2, in one embodiment, a streaming media video data processing method is provided. This embodiment is mainly described by taking the application of the method to the cache acceleration server 120 in FIG. 1 above as an example. Referring to FIG. 2, the streaming media video data processing method specifically includes the following steps:
Step S202: receive a video playback request sent by at least one terminal.
In one embodiment, the video playback request carries a video content identifier and corresponding video memory information. A video playback request is a request sent by a user for obtaining playback data; it carries the video content identifier and the video memory information. The video content identifier is label data used to identify the video content and includes at least one of a video topic, video drag parameters, a video category, and the like. The video drag parameters include at least one of a start position parameter and an end position parameter, where the start position parameter may be denoted START and the end position parameter may be denoted END. The video memory information includes the memory size of the video, the playback position information of the video, and the like. The cache acceleration server is a server used to cache video data, and it obtains the video playback request sent by at least one terminal.
Step S204: generate preset drag parameters corresponding to the video content identifier.
Step S206: obtain video information corresponding to the preset drag parameters from the source station, the video information containing metadata.
In one embodiment, the preset drag parameters are pre-configured parameters that include a preset start position parameter and a preset end position parameter. The preset drag parameters can be used to obtain the video information corresponding to the video content identifier, where the video information includes the metadata and the video data determined by the drag start and end positions of the preset drag parameters. The preset drag parameters include the preset start position parameter START and the preset end position parameter END, where START and END may be set in advance, for example START=0 and END=1. After the preset drag parameters are generated, they are sent to the source station; the source station obtains the corresponding video data and metadata according to the preset drag parameters and returns the obtained video data and metadata to the server. The source station is the server that stores the streaming media video data, and the metadata (METADATA) is data that describes the streaming media video data, mainly its data attribute information.
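As an illustration only, the back-to-origin request carrying the preset drag parameters START=0 and END=1 might be assembled as below. The URL path and the `id` query parameter are assumptions for the sketch; only the START/END parameter names come from the description above:

```python
from urllib.parse import urlencode

def metadata_request_url(origin_base, content_id, start=0, end=1):
    """Build a hypothetical back-to-origin URL carrying the preset drag
    parameters START and END used to obtain the metadata-bearing video info."""
    query = urlencode({"id": content_id, "START": start, "END": end})
    return f"{origin_base}/video.flv?{query}"
```

With the defaults, the generated query string carries START=0 and END=1, matching the preset values given in the embodiment.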
Step S208: determine fragment information according to the video memory information, obtain the fragmented video data corresponding to the fragment information from the source station, and cache the fragmented video data.
In one embodiment, the fragment information is the attribute information of the fragmented video data after slicing, including the memory size information and playback position information of each piece of fragmented video data. The fragment information is sent to the source station, and the fragmented video data corresponding to the fragment information is obtained from the source station. Fragmented video data is video data obtained by dividing video data occupying a larger memory according to a certain memory size. For example, a video occupying 20 MB, divided in units of 2 MB, yields 10 pieces of fragmented video data, each occupying 2 MB; each piece of fragmented video data held by the cache acceleration server then occupies 2 MB.
Step S210: splice the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data.
In one embodiment, the playback data is the video data to be sent to the playback terminal, obtained by splicing the cached fragmented video data and the metadata according to the video playback request sent by the user. Different video playback requests yield different playback data when the video data and the metadata are spliced. For example, when the video playback request sent by the user carries the drag parameter START or END, the metadata and the video data corresponding to the drag parameters are spliced directly to obtain the playback data. When the video playback request sent by the user does not contain the drag parameter START or END, the fragmented video data returned by the source station is spliced directly to obtain the playback data.
In a specific embodiment, when the video playback request sent by the user carries drag parameters, the fragmented video data corresponding to the drag parameters is spliced. That is, when the video memory size corresponding to the drag parameters equals the sum of the memory sizes of several pieces of fragmented video data, those fragments and the metadata are spliced directly into the playback data; otherwise, the fragments whose video content is entirely covered by the request, the fragment whose video content partially matches the drag parameters, and the metadata are spliced together to obtain the playback data. For example, when the content the user wants to watch spans 10 MB-20 MB, the fragments 10-12 MB, 12-14 MB, 14-16 MB, 16-18 MB, and 18-20 MB are spliced with the metadata to obtain the playback data; when the content the user wants to watch spans 10 MB-19 MB, the fragments 10-12 MB, 12-14 MB, 14-16 MB, and 16-18 MB, the partial fragment 18-19 MB, and the metadata are spliced to obtain the playback data.
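The fragment-selection rule in the example above (whole fragments plus a trimmed trailing fragment) can be sketched as follows; the function and its units (MB here) are illustrative only, not an interface defined by the disclosure:

```python
def select_fragments(req_start, req_end, frag_size):
    """Return the (start, end) spans to splice for a requested range:
    whole fragments inside the range are used as-is, and a boundary
    fragment is clipped when the request starts or ends inside it
    (e.g. 18-19 out of the 18-20 fragment)."""
    pieces = []
    lo = (req_start // frag_size) * frag_size   # align to fragment boundary
    while lo < req_end:
        hi = lo + frag_size
        # clip the fragment span to the requested range
        pieces.append((max(lo, req_start), min(hi, req_end)))
        lo = hi
    return pieces
```

With 2 MB fragments, a 10-20 request maps onto five whole fragments, while a 10-19 request keeps the first four whole and trims the last one to 18-19, matching the embodiment's example.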
In one embodiment, before the preset drag parameters corresponding to the video content identifier are generated, the method further includes: determining whether metadata corresponding to the video playback request exists; when it exists, entering the step of determining the fragment information according to the video memory information, and when it does not exist, entering the step of obtaining the video information corresponding to the drag parameters from the source station.
In one embodiment, after generating the preset drag parameters corresponding to the video content identifier, the cache acceleration server first searches locally for metadata corresponding to the video playback request, where the metadata contains the video content identifier.
In one embodiment, the video content identifier in the video playback request is matched against the video content identifier of each piece of metadata in the cache acceleration server.
When the match succeeds, it indicates that part or all of the fragmented video data corresponding to the video content identifier has already been cached in the cache acceleration server, and the step of determining the fragment information according to the video memory information is entered.
When the match fails, it indicates that no fragmented video data corresponding to the video content identifier exists in the cache acceleration server, and the step of obtaining the video information corresponding to the drag parameters from the source station is entered. Judging directly from the metadata avoids fetching the same data repeatedly.
In one embodiment, before the fragment information is determined according to the video memory information, the method further includes: determining whether all fragmented video data corresponding to the video playback request has been cached; when all fragmented video data has been cached, entering the step of splicing the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data; when not all fragmented video data has been cached, continuing the judgment step on all fragmented video data until all fragmented video data has been cached.
In one embodiment, when the video content identifier of the metadata in the cache acceleration server matches the video content identifier in the video playback request, it is determined whether the fragmented video data cached in the cache acceleration server matches the video data of the video playback request. If it matches, the metadata and the fragmented video data are spliced according to the video playback request to obtain the playback data. For example, if the user wants to watch minutes 10-20 of the video and the cache acceleration server contains the video data for minutes 10-20, the fragmented video data and the metadata in the cache acceleration server are spliced directly to obtain the playback data. If it does not match, the corresponding video data is obtained from the source station according to the video memory information in the video playback request and cached in the cache acceleration server in fragments. If the cache acceleration server contains part of the video data corresponding to the video playback request, the remaining part is obtained from the source station; for example, when the user wants to watch minutes 10-40 and the cache acceleration server contains minutes 10-20, the remaining minutes 20-40 are obtained from the source station. If the cache acceleration server contains no video data corresponding to the video playback request, the video data corresponding to the request is obtained from the source station. Checking whether the corresponding video data exists in the cache acceleration server avoids fetching duplicate video data, and obtaining video data directly from the cache acceleration server is more convenient.
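The cache check described above — fetch only the uncovered remainder, e.g. minutes 20-40 when minutes 10-20 are already cached — can be sketched as a gap computation. The span representation is a hypothetical simplification, not a data structure defined by the disclosure:

```python
def missing_ranges(requested, cached):
    """Given the requested (start, end) span and a list of already-cached
    (start, end) spans, return the spans that still have to be fetched
    from the source station."""
    gaps, cursor = [], requested[0]
    for lo, hi in sorted(cached):
        if hi <= cursor:            # cached span entirely before the cursor
            continue
        if lo > requested[1]:       # cached span entirely after the request
            break
        if lo > cursor:             # uncovered gap before this cached span
            gaps.append((cursor, min(lo, requested[1])))
        cursor = max(cursor, hi)
    if cursor < requested[1]:       # tail not covered by any cached span
        gaps.append((cursor, requested[1]))
    return gaps
```

An empty result means every fragment is already cached and the splicing step can proceed without going back to the origin.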
In one embodiment, after step S210, the method further includes:
Step S212: extract the terminal identifier in the video playback request, and send the playback data to the terminal corresponding to the terminal identifier.
In one embodiment, the terminal identifier is label data used to identify a terminal and may consist of letters, characters, special symbols, digits, and the like. Every video playback request carries a corresponding terminal identifier. After the cache acceleration server obtains the playback data by splicing, it extracts the terminal identifier in the video playback request and sends the playback data to the terminal corresponding to that identifier. For example, if terminal A sends a video playback request for program X, the playback data is sent to terminal A after the playback data of program X is obtained.
In the streaming media video data processing method described above, a video playback request sent by at least one terminal is received, the video playback request carrying a video content identifier and corresponding video memory information; preset drag parameters corresponding to the video content identifier are generated; video information corresponding to the preset drag parameters is obtained from the source station, the video information containing metadata; fragment information is determined according to the video memory information; the fragmented video corresponding to the fragment information is obtained from the source station; the fragmented video data is cached; and the metadata and the fragmented video data are spliced according to the video playback request to obtain the corresponding playback data. The cache acceleration server automatically generates the preset drag parameters according to the video playback request, obtains the video information from the source station according to the preset drag parameters, obtains the fragmented video data from the source station according to the information carried in the video playback request, and splices the fragmented video data according to the video playback request to obtain the playback data. The preset drag parameters make it possible to obtain the metadata. Different video providers respond to video requests with differing content; by separating out the metadata through the preset drag parameters, then obtaining the corresponding video data according to the video playback request, and then splicing the video data and the metadata, the cache acceleration server can better obtain videos from different video providers and satisfy user needs. Through fragment caching, the corresponding video data is cached according to user demand, which reduces the load on the source station and makes more effective use of the disk space of the cache acceleration server.
In one embodiment, as shown in FIG. 3, before step S210, the method further includes:
Step S402: determine whether the video playback request contains video drag parameters.
Step S404: when the video playback request contains video drag parameters, splice the metadata with the fragmented video data corresponding to the video drag parameters to obtain the playback data.
Step S406: when the video playback request does not contain video drag parameters, splice the fragmented video data corresponding to the video playback request into the playback data.
In one embodiment, it is determined whether the video playback request sent by the user contains video drag parameters, where a video drag parameter is an identifier describing the video data the user expects to be cached; it may be expressed by at least one of a start position and an end position, with the start position denoted START and the end position denoted END. When the video playback request contains video drag parameters, the metadata and the fragmented video data corresponding to the video playback request are spliced to obtain the playback data. For example, when the video playback request contains the video drag parameters START=10 and END=30, the metadata and the fragmented video data corresponding to START=10 and END=30 are spliced to obtain the playback data. When the video playback request does not contain video drag parameters, the fragmented video data corresponding to the video playback request is spliced directly to obtain the playback data.
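Steps S402-S406 above amount to a single branch, sketched below. The request representation is a hypothetical dict and the byte-string metadata/fragments are simplifications, not formats defined by the disclosure:

```python
def splice_playback_data(request, metadata, fragments):
    """If the playback request carries the drag parameters START/END, the
    metadata is spliced in front of the selected fragments (S404);
    otherwise the fragments matching the request are concatenated
    directly (S406)."""
    has_drag = "START" in request or "END" in request
    body = b"".join(fragments)
    return metadata + body if has_drag else body
```

The branch only decides whether the separated metadata is prepended; fragment selection itself is handled by the caching step described earlier.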
In one embodiment, as shown in FIG. 4, step S208 includes:
Step S2082: obtain a preset slicing rule.
Step S2084: determine the number of video slices according to the video memory size and the preset slicing rule.
Step S2086: determine the playback position information of each piece of fragmented video data according to the video playback position information and the number of slices.
In one embodiment, the preset slicing rule is a pre-configured rule for slicing a video. The rule can be customized; for example, it may be formulated by technicians after analyzing the network bandwidth, server performance, and so on. In one embodiment, the preset slicing rules include memory-based slicing rules, time-based slicing rules, and the like. Taking a memory-based slicing rule as an example, if the slice memory is defined as 2 MB and the video memory size is 10 MB, the number of slices is 5. After the number of slices is determined, the playback position information of each piece of fragmented video data is determined according to the video playback position information and the number of slices. For example, if the video playback position information spans minutes 10-20, then minute 10 is the start position of the first piece of fragmented video data and minute 12 is its end position, the second piece starts at minute 12 and ends at minute 14, and the playback position information of the remaining three fragments follows in the same way.
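The memory-based slicing example above (a 10 MB video, 2 MB slices, playback span of minutes 10-20) can be sketched as follows; the function name and units are illustrative only:

```python
import math

def fragment_playback_positions(video_mb, start_min, end_min, slice_mb=2):
    """Step S2084: the number of fragments follows from the video memory
    size and the preset slice size. Step S2086: the playback span is then
    divided evenly, so fragment 1 plays minutes 10-12, fragment 2 plays
    minutes 12-14, and so on."""
    count = math.ceil(video_mb / slice_mb)      # number of video slices
    step = (end_min - start_min) / count        # playback span per fragment
    return [(start_min + i * step, start_min + (i + 1) * step)
            for i in range(count)]
```

A 10 MB video with 2 MB slices yields five fragments, and the 10-20 minute span is split into five equal two-minute playback windows, matching the embodiment's walkthrough.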
In one embodiment, the streaming media video data processing method includes:
Step S602: a terminal sends an FLV video playback request to the cache acceleration server.
Step S604: the cache acceleration server determines, according to the FLV video playback request, whether the corresponding metadata exists; when it exists, proceed to step S606A, and when it does not exist, proceed to step S606B.
Step S606A: determine whether video data corresponding to the FLV video playback request exists; if it exists, proceed to step S616B, and if it does not exist, proceed to step S612B.
Step S606B: the cache acceleration server re-assembles the preset drag parameters START=0 and END=1 according to the FLV video playback request and sends the preset drag parameters to the source station.
Step S608B: the source station obtains the corresponding video information according to the preset drag parameters and sends the video information to the cache acceleration server.
Step S610B: the cache acceleration server caches the video information sent by the source station under the identifier "file name + METADATA"; the video information includes the metadata and the video data.
Step S612B: the cache acceleration server goes back to the source station slice by slice, at a certain slice size, using a RANGE header according to the range in the video content identifier of the video playback request.
Step S614B: the source station responds with the corresponding fragmented video data according to the RANGE requested by the cache acceleration server.
Step S616B: the cache acceleration server caches the fragmented video data returned by the source station under the identifier "file name + RANGE".
Step S618B: the cache acceleration server determines whether the video playback request carries the START or END drag parameter; if so, proceed to step S620B, and if not, proceed to step S622B.
Step S620B: splice the metadata and the fragmented video data into the playback data. That is, the video data in the video information is deleted, leaving the metadata, and the metadata is spliced with the fragmented video to obtain the playback data.
Step S622B: splice the fragmented video data to obtain the playback data.
Step S624B: the cache acceleration server sends the playback data to the terminal.
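Steps S612B-S616B above can be sketched as follows. The `bytes=lo-hi` Range header layout follows the HTTP range-request convention (RFC 7233); rendering the "file name + RANGE" cache identifier as a plain string is an assumption about its concrete form:

```python
def range_requests(filename, total_size, chunk):
    """Plan the slice-by-slice back-to-origin fetch: one Range header per
    slice (S612B), and a 'file name + RANGE' key for caching each
    returned fragment (S616B)."""
    plan = []
    for lo in range(0, total_size, chunk):
        hi = min(lo + chunk, total_size) - 1    # Range end offsets are inclusive
        plan.append({
            "headers": {"Range": f"bytes={lo}-{hi}"},
            "cache_key": f"{filename}+bytes={lo}-{hi}",
        })
    return plan
```

Because each cache key embeds the byte range, a later request for the same slice of the same file can be answered from the cache without another round trip to the source station.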
Those of ordinary skill in the art can understand that all or some of the steps in the methods disclosed above, and the functional modules/units in the systems and apparatuses, may be implemented as software, firmware, hardware, and appropriate combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor, or a microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which, as set out above, may include computer storage media (or non-transitory media) and communication media (or transitory media).
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The above are only specific implementations of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be obvious to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure will not be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features claimed herein.

Claims (10)

  1. A streaming media video data processing method, the method comprising:
    receiving a video playback request sent by at least one terminal, the video playback request carrying a video content identifier and corresponding video memory information;
    generating preset drag parameters corresponding to the video content identifier;
    obtaining video information corresponding to the drag parameters from a source station, the video information containing metadata;
    determining fragment information according to the video memory information, obtaining fragmented video data corresponding to the fragment information from the source station, and caching the fragmented video data;
    splicing the metadata and the fragmented video data according to the video playback request to obtain corresponding playback data.
  2. The method according to claim 1, wherein before the generating of the preset drag parameters corresponding to the video content identifier, the method further comprises:
    determining whether metadata corresponding to the video playback request exists;
    when it does not exist, entering the step of obtaining the video information corresponding to the drag parameters from the source station.
  3. The method according to claim 1, wherein before the determining of the fragment information according to the video memory information, the method further comprises:
    determining whether all fragmented video data corresponding to the video playback request has been cached;
    when all the fragmented video data has been cached, entering the step of splicing the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data;
    when not all the fragmented video data has been cached, continuing the judgment step on all the fragmented video data until all fragmented video data has been cached.
  4. The method according to claim 1, wherein before the splicing of the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data, the method further comprises:
    determining whether the video playback request contains video drag parameters;
    when the video playback request contains the video drag parameters, splicing the metadata with the fragmented video data corresponding to the video drag parameters to obtain the playback data;
    when the video playback request does not contain the video drag parameters, splicing the fragmented video data corresponding to the video playback request into the playback data.
  5. The method according to claim 1, wherein the video memory information includes video playback position information and a video memory size, the fragment information includes the playback position information of each piece of the fragmented video data, and the determining of the fragment information according to the video memory information comprises:
    obtaining a preset slicing rule;
    determining the number of video slices according to the video memory size and the preset slicing rule;
    determining the playback position information of each piece of the fragmented video data according to the video playback position information and the number of slices.
  6. The method according to any one of claims 1 to 5, wherein the video playback request carries a terminal identifier, and after the splicing of the metadata and the fragmented video data according to the video playback request to obtain the corresponding playback data, the method further comprises:
    extracting the terminal identifier carried in the video playback request;
    sending the playback data to the terminal corresponding to the terminal identifier.
  7. A streaming media video data processing apparatus, wherein the apparatus comprises:
    a data receiving module, configured to receive a video playback request sent by at least one terminal, the video playback request carrying a video content identifier and corresponding video memory information;
    a parameter generation module, configured to generate preset drag parameters corresponding to the video content identifier;
    a data acquisition module, configured to obtain video information corresponding to the drag parameters from a source station, the video information containing metadata;
    a data cache module, configured to determine fragment information according to the video memory information, obtain the fragmented video corresponding to the fragment information from the source station, and cache the fragmented video data;
    a data splicing module, configured to splice the metadata and the fragmented video data according to the video playback request to obtain corresponding playback data.
  8. The apparatus according to claim 7, wherein the apparatus further comprises:
    a metadata judgment module, configured to determine whether metadata corresponding to the drag parameters exists, and to return to the data acquisition module when it does not exist.
  9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
  10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
PCT/CN2019/115725 2018-11-06 2019-11-05 Streaming media video data processing method and apparatus, computer device and storage medium WO2020094012A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP19881958.3A EP3879842A4 (en) 2018-11-06 2019-11-05 METHOD AND EQUIPMENT FOR VIDEO DATA PROCESSING FROM STREAMING MEDIA AND COMPUTER DEVICE AND STORAGE MEDIUM

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811314448.5A 2018-11-06 Streaming media video data processing method and apparatus, computer device and storage medium
CN201811314448.5 2018-11-06

Publications (1)

Publication Number Publication Date
WO2020094012A1 true WO2020094012A1 (zh) 2020-05-14

Family

ID=70516523

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/115725 2018-11-06 2019-11-05 Streaming media video data processing method and apparatus, computer device and storage medium

Country Status (3)

Country Link
EP (1) EP3879842A4 (zh)
CN (1) CN111147888B (zh)
WO (1) WO2020094012A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101415069A (zh) * 2008-10-22 2009-04-22 清华大学 Server and method for sending online played video thereof
CN102611945A (zh) * 2011-12-19 2012-07-25 北京蓝汛通信技术有限责任公司 Streaming media slicing method, slicing server, and streaming media on-demand system
CN102857794A (zh) * 2011-06-28 2013-01-02 上海聚力传媒技术有限公司 Method and device for merging video segments
US20130308699A1 (en) * 2012-05-18 2013-11-21 Home Box Office, Inc. Audio-visual content delivery
CN105763960A (zh) * 2016-03-01 2016-07-13 青岛海信传媒网络技术有限公司 Method and apparatus for network video playback
CN105872807A (zh) * 2016-05-16 2016-08-17 乐视控股(北京)有限公司 Video playback method and system
CN108235151A (zh) * 2017-12-29 2018-06-29 北京奇虎科技有限公司 Method and apparatus for live video streaming

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102883187B (zh) * 2012-09-17 2015-07-08 华为技术有限公司 Time-shifted program service method, device, and system
CN106550284B (zh) * 2015-09-21 2020-04-17 北京国双科技有限公司 Method and apparatus for playing fragmented video
CN106612456A (zh) * 2015-10-26 2017-05-03 中兴通讯股份有限公司 Network video playback method and system, user terminal, and home streaming service node
CN105430425B (zh) * 2015-11-18 2018-11-16 深圳Tcl新技术有限公司 Single-fragment video playback acceleration method and apparatus
CN106534946A (zh) * 2016-10-26 2017-03-22 腾讯科技(深圳)有限公司 Video playback control method and apparatus
CN109104617B (zh) * 2018-09-05 2021-04-27 杭州领智云画科技有限公司 Video request response method and system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101415069A (zh) * 2008-10-22 2009-04-22 清华大学 Server and method for sending online played video thereof
CN102857794A (zh) * 2011-06-28 2013-01-02 上海聚力传媒技术有限公司 Method and device for merging video segments
CN102611945A (zh) * 2011-12-19 2012-07-25 北京蓝汛通信技术有限责任公司 Streaming media slicing method, slicing server, and streaming media on-demand system
US20130308699A1 (en) * 2012-05-18 2013-11-21 Home Box Office, Inc. Audio-visual content delivery
CN105763960A (zh) * 2016-03-01 2016-07-13 青岛海信传媒网络技术有限公司 Method and apparatus for network video playback
CN105872807A (zh) * 2016-05-16 2016-08-17 乐视控股(北京)有限公司 Video playback method and system
CN108235151A (zh) * 2017-12-29 2018-06-29 北京奇虎科技有限公司 Method and apparatus for live video streaming

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3879842A4

Also Published As

Publication number Publication date
CN111147888A (zh) 2020-05-12
EP3879842A4 (en) 2022-08-31
EP3879842A1 (en) 2021-09-15
CN111147888B (zh) 2022-06-03

Similar Documents

Publication Publication Date Title
US10587920B2 (en) Cognitive digital video filtering based on user preferences
US20180253503A1 (en) Method, Apparatus and System for Preloading of APP Launch Advertising
WO2017088415A1 (zh) Method, apparatus, and electronic device for retrieving video content
WO2017185616A1 (zh) File storage method and electronic device
CN103945259B (zh) Online video playback method and apparatus
US10313468B2 (en) Caching of metadata objects
WO2019154014A1 (zh) Video playback method and apparatus, storage medium, and electronic device
CN107197359B (zh) Video file caching method and apparatus
WO2020133608A1 (zh) Method for processing video data during drag (seek) operations, and proxy server
WO2014154096A1 (zh) Information recommendation method and apparatus, and information resource recommendation system
CN104506493A (zh) Method for implementing HLS content origin retrieval and caching
US20210209057A1 (en) File system quota versioning
KR20100136541A (ko) Method and apparatus for transmitting auxiliary data to a device
JP2019512144A (ja) Real-time content editing with limited interaction features
WO2023216491A1 (zh) Information processing method and apparatus for animation resources, device, medium, and product
US20150150044A1 (en) Audio/video-on-demand method, server, terminal and system
EP3125541A1 (en) Data acquisition and interaction method, set top box, server and multimedia system
WO2017000929A1 (zh) Client-based media information delivery method and apparatus
CN103646039A (zh) Web page search method and apparatus
AU2020288833B2 (en) Techniques for text rendering using font patching
US9043441B1 (en) Methods and systems for providing network content for devices with displays having limited viewing area
WO2020094012A1 (zh) Streaming media video data processing method and apparatus, computer device, and storage medium
WO2016184288A1 (zh) Advertisement delivery method, apparatus, and system
CN107480269B (zh) Object display method and system, medium, and computing device
CN110020290B (zh) Web page resource caching method and apparatus, storage medium, and electronic apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19881958; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2019881958; Country of ref document: EP; Effective date: 20210607)