CN110475121B - Video data processing method and device and related equipment - Google Patents

Video data processing method and device and related equipment

Info

Publication number
CN110475121B
CN110475121B (application CN201810443699.7A)
Authority
CN
China
Prior art keywords
target
video data
scene
video
label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810443699.7A
Other languages
Chinese (zh)
Other versions
CN110475121A (en)
Inventor
庄钟鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810443699.7A priority Critical patent/CN110475121B/en
Publication of CN110475121A publication Critical patent/CN110475121A/en
Application granted granted Critical
Publication of CN110475121B publication Critical patent/CN110475121B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/218: Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187: Live feed
    • H04N 21/23412: Processing of video elementary streams for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • H04N 21/25891: Management of end-user data being end-user preferences
    • H04N 21/8547: Content authoring involving timestamps for synchronizing content

Abstract

Embodiments of the invention disclose a video data processing method, apparatus, and related device. In the method, a video server acquires target scene information corresponding to currently played target video data; the video server generates a target scene label corresponding to the target scene information and adds the target scene label to a target playing file; the video server then sends the target video data and the target playing file to a mobile terminal, so that the mobile terminal plays the target video data according to the target playing file and, when the target scene label is read in the target playing file, executes the business operation associated with the target scene label. By adopting the invention, the accuracy of pushing service data can be improved.

Description

Video data processing method and device and related equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a video data processing method, an apparatus, and a related device.
Background
Statistics show that, as of June 2017, the number of Chinese Internet users had reached 710 million and Internet penetration had reached 51.7 percent; as penetration rises, the network has gradually become a main venue for leisure and entertainment. This has directly driven unprecedented growth of the online video industry: the number of videos has increased dramatically, and enterprises cooperate with major video websites to place advertisements in videos, bringing large economic benefits to the video websites while giving the advertisements good effect and wide reach.
In the existing process of inserting advertisements during a live video broadcast, when an operator in the background sees a certain event in the video (for example, half time in a football match), the operator pushes an advertisement to the video decoder in the client, and after the decoder decodes it, the mobile terminal plays the advertisement. Because the video stream is buffered by the video streaming server, the video frame seen by the viewer lags behind the frame seen by the operator in the background, so the moment at which the operator pushes the advertisement to the client is inaccurate. For example, if the viewer is still watching match footage before half time while the operator has already seen the half-time picture and pushed an advertisement, then on the client side the advertisement occupies the video picture during the match; that is, the time at which the advertisement plays in the live broadcast does not match the scene of the video content the user is watching, and the advertisement cannot be pushed to the viewer accurately.
Disclosure of Invention
The embodiment of the invention provides a video data processing method, a video data processing device and related equipment, which can improve the time accuracy of pushing service data.
One aspect of the present invention provides a video data processing method, including:
the video server acquires target scene information corresponding to currently played target video data;
the video server generates a target scene label corresponding to the target scene information and adds the target scene label to a target playing file; the target playing file is an index list file of the target video data;
and the video server sends the target video data and the target playing file to a mobile terminal so that the mobile terminal plays the target video data according to the target playing file and executes business operation associated with the target scene label when the target scene label is read in the target playing file.
The method further includes:
the video server receives tag configuration information; the label configuration information comprises a scene number and service behavior data corresponding to the scene number;
the video server constructs a label behavior mapping table according to the mapping relation between each scene number and the service behavior data corresponding to each scene number, and sends the label behavior mapping table to the mobile terminal, so that the mobile terminal queries the mapping relation between each scene number and the service behavior data corresponding to each scene number in the label behavior mapping table.
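The label behavior mapping table described above can be sketched as a simple scene-number-to-behavior dictionary. This is a non-authoritative sketch: the scene numbers, action names, and URLs below are hypothetical placeholders, not values specified by the patent.

```python
# Hypothetical tag-behavior mapping table: scene number -> service
# behavior data. All numbers, actions, and URLs are illustrative only.
def build_tag_behavior_table(config_entries):
    """Build a scene-number -> behavior-data mapping from the
    received tag configuration information."""
    table = {}
    for scene_number, behavior in config_entries:
        table[scene_number] = behavior
    return table

config = [
    (1, {"action": "play_video", "url": "http://example.com/ad1.mp4"}),
    (2, {"action": "show_image", "url": "http://example.com/banner.png"}),
]
tag_behavior_table = build_tag_behavior_table(config)
```

In this sketch the server would serialize `tag_behavior_table` and send it to the mobile terminal in advance, so the terminal can later resolve scene numbers locally.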
The video server generates a target scene tag corresponding to the target scene information, and adds the target scene tag to a target playing file, including:
the video server acquires a target scene number corresponding to the target scene information;
the video server generates the target scene label according to the target scene number;
and the video server adds the target scene label to the target playing file.
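The three steps above can be illustrated as follows. The patent does not name the tag format; `#EXT-X-SCENE` is a hypothetical custom tag chosen here only because the description requires scene tags to be m3u8 tags, i.e., lines beginning with "#EXT".

```python
def make_scene_tag(scene_number):
    """Build a scene tag from the target scene number. The tag name
    #EXT-X-SCENE is hypothetical; the description only requires that
    m3u8 tags begin with the string "#EXT"."""
    return f"#EXT-X-SCENE:{scene_number}"

def add_scene_tag(playlist_lines, scene_tag):
    """Return a new playlist with the scene tag appended; a real server
    would place it next to the segment at which the scene occurs."""
    return playlist_lines + [scene_tag]

playlist = ["#EXTM3U", "#EXTINF:3,", "1-4.ts"]
playlist = add_scene_tag(playlist, make_scene_tag(7))
```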
The step in which the video server sends the target video data and the target playing file to a mobile terminal, so that the mobile terminal plays the target video data according to the target playing file and executes a business operation associated with the target scene label when the target scene label is read in the target playing file, includes:
the video server sends the target playing file to the mobile terminal, so that the mobile terminal sends a video data request to the video server according to the target playing file;
and the video server sends the target video data requested by the video data request to the mobile terminal so that the mobile terminal plays the target video data according to the target playing file and executes business operation associated with the target scene label when the target scene label is read in the target playing file.
The method for acquiring the target scene information corresponding to the currently played target video data by the video server includes:
the video server receives a trigger display instruction and displays a plurality of pieces of scene information according to the trigger display instruction;
and the video server receives a click determination instruction, and takes scene information corresponding to the click determination instruction as target scene information corresponding to the target video data.
The method for acquiring the target scene information corresponding to the currently played target video data by the video server includes:
the video server detects a timestamp of the currently played target video data;
and if the timestamp is a target timestamp, the video server generates target scene information corresponding to the target timestamp for the target video data.
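A minimal sketch of this timestamp-based variant, assuming the preset target timestamps are kept in a simple schedule mapping; the times and scene names are illustrative, not taken from the patent.

```python
def detect_scene_by_timestamp(current_ts, schedule):
    """Return the scene information associated with current_ts if it
    matches a preset target timestamp, else None. The schedule maps
    "HH:MM" timestamps to scene information."""
    return schedule.get(current_ts)

# Hypothetical preset schedule: a conference break is planned at 10:30.
schedule = {"10:30": "conference pause"}
scene = detect_scene_by_timestamp("10:30", schedule)
```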
The method for acquiring the target scene information corresponding to the currently played target video data by the video server includes:
the video server extracts a video frame image of the currently played target video data;
and the video server identifies the service type of the video frame image and generates target scene information corresponding to the service type for the target video data.
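The frame-recognition variant can be sketched as below. A real implementation would run an image classifier over the extracted frame; the `identify_service_type` stub here is a hypothetical stand-in keyed on frame metadata, used only to show the control flow.

```python
# Preset service types; a real system would configure these in advance.
PRESET_SERVICE_TYPES = {"half_time", "match_end", "celebrity_entrance"}

def identify_service_type(frame_image):
    """Hypothetical stand-in for an image classifier: the frame is
    modeled as a dict carrying a precomputed label."""
    label = frame_image.get("label")
    return label if label in PRESET_SERVICE_TYPES else None

def scene_info_for_frame(frame_image):
    """Generate target scene information when the frame's service type
    matches a preset service type; otherwise return None."""
    service_type = identify_service_type(frame_image)
    if service_type is None:
        return None
    return {"service_type": service_type}

scene = scene_info_for_frame({"label": "celebrity_entrance"})
```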
Another aspect of the present invention provides a video data processing method, including:
the mobile terminal acquires a playing request and acquires a target playing file and target video data according to the playing request; the target playing file comprises a target scene label; the target scene label is a label which is generated by the video server and matched with the target scene information; the target scene information is scene information corresponding to currently played target video data in the video server; the target playing file is an index list file of the target video data;
the mobile terminal plays the target video data according to the target playing file;
and when the mobile terminal reads the target scene label in the target playing file, executing business operation associated with the target scene label.
Wherein the performing the business operation associated with the target scenario tag comprises:
the mobile terminal analyzes the target scene label to obtain a target scene number;
in a label behavior mapping table, the mobile terminal extracts target service behavior data corresponding to the target scene number and executes service operation according to the target service behavior data; the label behavior mapping table comprises at least one scene number and service behavior data corresponding to each scene number.
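The tag parsing and table lookup performed by the mobile terminal can be sketched as follows; the `#EXT-X-SCENE` tag name, the scene number, and the behavior entries are hypothetical illustrations, not formats defined by the patent.

```python
def parse_scene_tag(tag_line):
    """Extract the target scene number from a hypothetical
    #EXT-X-SCENE tag line."""
    assert tag_line.startswith("#EXT-X-SCENE:")
    return int(tag_line.split(":", 1)[1])

def execute_business_operation(tag_line, tag_behavior_table):
    """Look up the behavior data for the tag's scene number. A real
    client would then play auxiliary video or display a static
    multimedia material; here we just return the behavior entry."""
    scene_number = parse_scene_tag(tag_line)
    return tag_behavior_table.get(scene_number)

table = {7: {"action": "play_video", "url": "http://example.com/comment.mp4"}}
result = execute_business_operation("#EXT-X-SCENE:7", table)
```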
Wherein, the executing the service operation according to the target service behavior data comprises:
and the mobile terminal plays auxiliary video data or displays static multimedia materials according to the target service behavior data.
Wherein, the acquiring the target playing file and the target video data according to the playing request includes:
the mobile terminal acquires the target playing file according to the playing request;
and the mobile terminal sends a video data request to the video server according to the identification information of the target video data in the target playing file, so that the video server sends the target video data requested by the video data request to the mobile terminal.
Another aspect of the present invention provides a video data processing method, including:
the video server acquires target scene information corresponding to currently played target video data;
the video server generates a target scene label corresponding to the target scene information and adds the target scene label to a target playing file; the target playing file is an index list file of the target video data;
the video server sends the target video data and the target playing file to a mobile terminal;
the mobile terminal plays the target video data according to the target playing file;
and when the mobile terminal reads the target scene label in the target playing file, executing business operation associated with the target scene label.
Another aspect of the present invention provides a video data processing apparatus, including:
the receiving module is used for acquiring target scene information corresponding to currently played target video data;
the adding module is used for generating a target scene label corresponding to the target scene information and adding the target scene label to a target playing file; the target playing file is an index list file of the target video data;
and the sending module is used for sending the target video data and the target playing file to a mobile terminal so that the mobile terminal plays the target video data according to the target playing file and executes business operation associated with the target scene label when the target scene label is read in the target playing file.
The apparatus further includes:
a configuration module for receiving tag configuration information; the label configuration information comprises a scene number and service behavior data corresponding to the scene number;
the construction module is used for constructing a label behavior mapping table according to the mapping relation between each scene number and the service behavior data corresponding to each scene number, and sending the label behavior mapping table to the mobile terminal, so that the mobile terminal can inquire the mapping relation between each scene number and the service behavior data corresponding to each scene number in the label behavior mapping table.
Wherein, the adding module comprises:
a number acquisition unit configured to acquire a target scene number corresponding to the target scene information;
the number obtaining unit is further configured to generate the target scene tag according to the target scene number;
and the determining unit is used for adding the target scene label to the target playing file.
Wherein, the sending module includes:
the file sending unit is used for sending the target playing file to the mobile terminal so that the mobile terminal sends a video data request to the video server according to the target playing file;
and the data sending unit is used for sending the target video data requested by the video data request to the mobile terminal so that the mobile terminal plays the target video data according to the target playing file and executes business operation associated with the target scene label when the target scene label is read in the target playing file.
Wherein, the receiving module comprises:
the display unit is used for receiving a trigger display instruction and displaying a plurality of pieces of scene information according to the trigger display instruction;
and the generating unit is used for receiving a click determining instruction and taking the scene information corresponding to the click determining instruction as the target scene information corresponding to the target video data.
Wherein, the receiving module further comprises:
the detection unit is used for detecting the timestamp of the currently played target video data;
the generating unit is further configured to generate target scene information corresponding to the target timestamp for the target video data if the timestamp is the target timestamp.
Wherein, the receiving module further comprises:
the extraction unit is used for extracting the video frame image of the currently played target video data;
the generating unit is further configured to identify a service type of the video frame image, and generate target scene information corresponding to the service type for the target video data.
Another aspect of the present invention provides a video data processing apparatus, including:
the request acquisition module is used for acquiring a playing request;
the video acquisition module is used for acquiring a target playing file and target video data according to the playing request; the target playing file comprises a target scene label; the target scene label is a label which is generated by the video server and matched with the target scene information; the target scene information is scene information corresponding to currently played target video data in the video server; the target playing file is an index list file of the target video data;
the playing module is used for playing the target video data according to the target playing file;
and the business operation module is used for executing the business operation associated with the target scene label when the target scene label is read in the target playing file.
Wherein, the service operation module includes:
the analysis unit is used for analyzing the target scene label to obtain a target scene number;
the searching unit is used for extracting target service behavior data corresponding to the target scene number from a label behavior mapping table;
the business operation unit is used for executing business operation according to the target business behavior data; the label behavior mapping table comprises at least one scene number and service behavior data corresponding to each scene number.
Wherein, the service operation unit is specifically configured to: and playing auxiliary video data or displaying static multimedia materials according to the target service behavior data.
Wherein, the video acquisition module includes:
a file obtaining unit, configured to obtain the target play file according to the play request;
and the request sending unit is further used for sending a video data request to the video server according to the identification information of the target video data in the target playing file, so that the video server sends the target video data requested by the video data request to the mobile terminal.
Another aspect of the present invention provides a video data processing system, including: a mobile terminal and a video server;
the video server is used for acquiring target scene information corresponding to currently played target video data;
the video server is further used for generating a target scene label corresponding to the target scene information and adding the target scene label to a target playing file; the target playing file is an index list file of the target video data;
the video server is also used for sending the target video data and the target playing file to the mobile terminal;
the mobile terminal is used for playing the target video data according to the target playing file;
the mobile terminal is further configured to execute a service operation associated with the target scene tag when the target scene tag is read from the target playing file.
Another aspect of the present invention provides an electronic device, including: a processor and a memory;
the processor is connected to a memory, wherein the memory is used for storing program codes, and the processor is used for calling the program codes to execute the method in one aspect of the embodiment of the invention.
Another aspect of the present invention provides an electronic device, including: a processor and a memory;
the processor is coupled to a memory, wherein the memory is configured to store program code and the processor is configured to invoke the program code to perform a method as in another aspect of an embodiment of the invention.
Another aspect of the embodiments of the present invention provides a computer storage medium storing a computer program, the computer program comprising program instructions that, when executed by a processor, perform a method as in one aspect of the embodiments of the present invention.
Another aspect of embodiments of the present invention provides a computer storage medium storing a computer program comprising program instructions that, when executed by a processor, perform a method as in another aspect of embodiments of the present invention.
In the embodiment of the invention, the video server acquires target scene information corresponding to currently played target video data, generates a target scene label corresponding to the target scene information, adds the target scene label to a target playing file, and sends the target video data and the target playing file to the mobile terminal, so that the mobile terminal plays the target video data according to the target playing file and executes the business operation associated with the target scene label when the target scene label is read in the target playing file. In other words, the server generates the scene label according to the scene of the currently played video data, and when the mobile terminal plays the video and reads the scene label, it executes the business operation related to that label. Because the scene label is embedded in the video data, the service data can be delivered exactly when the video played on the user side is in the corresponding scene. This avoids the situation in which manually pushed service data prevents the user from normally watching the video content, improves the match between the delivery time of the service data and the scene information of the video content the user is watching, and thereby improves the timing accuracy of pushing service data.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 a-1 b are schematic views of scenes of a video data processing method according to an embodiment of the present invention;
fig. 2a is a schematic flow chart of a video data processing method according to an embodiment of the present invention;
fig. 2b is a scene schematic diagram for acquiring target scene information according to an embodiment of the present invention;
fig. 2c is a schematic view of another video data processing method according to an embodiment of the present invention;
fig. 3 is a flow chart illustrating another video data processing method according to an embodiment of the present invention;
FIG. 4a is a field interaction diagram of a video data processing method according to an embodiment of the present invention;
fig. 4b is a hardware interaction diagram of a video data processing method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of another video data processing apparatus according to an embodiment of the present invention;
FIG. 7 is a block diagram of a video data processing system according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of another electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Please refer to fig. 1a-1b, which are scene diagrams of a video data processing method according to an embodiment of the present invention. As shown in fig. 1a, a video server 10e provides a video data service for a user terminal cluster through a switch 10d; the user terminal cluster may include user terminal 10a and user terminal 10b. Because the video data passes through the buffer of the video server 10e and the buffer of the decoder in each terminal device before reaching the user terminal cluster, the video pictures viewed on each user terminal in the cluster lag behind the video pictures viewed at the video server 10e. To enable the video server 10e to push additional content such as advertisements accurately, the following takes the video server 10e and the user terminal 10a as an example to describe how to push, with accurate timing, additional content associated with the scene information of the video data played on the user side. As shown in fig. 1b, the video server 10e is currently broadcasting video data of a badminton game. The video server 10e generates a corresponding m3u8 file 10g from the video data being broadcast. An m3u8 file is a playlist file in UTF-8 (Unicode Transformation Format) text format that specifies the attributes and addresses of the video data segments to be played. For example, the m3u8 tag "#EXTINF:3" in the m3u8 file 10g indicates that the duration of the video data segment "1-4.ts" is 3 seconds; parsing the entry for "1-4.ts" yields the URL (Uniform Resource Locator) of that segment, i.e., the address at which the segment is stored in the video server 10e. An m3u8 tag is an attribute line that distinguishes metadata from the video data segments; it begins with the character string "#EXT" in the m3u8 file.
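The structure of such an m3u8 file, and the way a client separates "#EXT" tag lines from segment URLs, can be sketched as below. Only `#EXTM3U` and `#EXTINF` are standard m3u8 tags; the `#EXT-X-SCENE` tag is a hypothetical custom scene tag used for illustration.

```python
def parse_m3u8(text):
    """Split an m3u8 playlist into media segments (paired with their
    #EXTINF durations) and scene numbers carried by the hypothetical
    #EXT-X-SCENE tag. Lines beginning with "#EXT" are tags; other
    non-empty lines are segment URIs."""
    segments, scene_tags = [], []
    duration = None
    for line in text.strip().splitlines():
        line = line.strip()
        if line.startswith("#EXTINF:"):
            duration = float(line[len("#EXTINF:"):].rstrip(","))
        elif line.startswith("#EXT-X-SCENE:"):
            scene_tags.append(int(line.split(":", 1)[1]))
        elif line and not line.startswith("#"):
            segments.append((line, duration))
            duration = None
    return segments, scene_tags

playlist = """#EXTM3U
#EXTINF:3,
1-4.ts
#EXT-X-SCENE:7
#EXTINF:3,
1-5.ts
"""
segments, scene_tags = parse_m3u8(playlist)
```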
If the operator at the video server 10e sees that the current video data shows the referee announcing a break in the game, the operator sends the scene information of the current scene, "half time", to the video server 10e. The server 10e converts the received scene information "half time" into a scene tag 10m, which is an m3u8 tag, and adds the generated scene tag 10m to the m3u8 file 10g. The server 10e then transmits the m3u8 file 10g containing the scene tag 10m to the user terminal, which requests the video data segments from the video server 10e in turn according to the URLs of the segments in the m3u8 file 10g and plays the acquired badminton-game video on its screen 10h. When the user terminal reads the scene tag 10m in the m3u8 file 10g, it queries a preset tag behavior mapping table and finds that the behavior data corresponding to the scene tag 10m is: acquire guest commentary video data about the badminton game from the server 10f. The user terminal sends a request for the guest commentary video data to the server 10f; after the server 10f responds, the screen 10k of the user terminal starts to play the guest commentary video. In other words, as soon as the user terminal plays the picture of the referee blowing the whistle to announce half time, it switches to the guest commentary picture, which prevents the terminal from switching to the commentary before the half-time picture has actually been played. The tag behavior mapping table is sent to the user terminal in advance by the video server 10e and records the correspondence between scene tags and behavior data.
For each user terminal in the user terminal cluster, the scene tag can be added to the m3u8 file in the above manner, so that additional content associated with the scene information of the video data is pushed to the user terminal with accurate timing; this avoids the situation in which manually determined push timing causes the pushed additional content to mismatch the scene information of the video data being played.
The specific processes of receiving the scene information and generating the scene tag may refer to the following embodiments corresponding to fig. 2a to 4 b.
Further, please refer to fig. 2a, which is a flowchart illustrating a video data processing method according to an embodiment of the present invention. As shown in fig. 2a, the video data processing method may include:
step S101, a video server acquires target scene information corresponding to currently played target video data.
Specifically, the video data currently played by the video server (e.g., the video server 10e in the embodiment corresponding to fig. 1a) is called target video data. Fig. 2b is a scene schematic diagram of acquiring target scene information according to an embodiment of the present invention. When the operator on the video server side sees, in the playing interface 20a of the video server, that a video frame in the currently played target video data matches one of several pieces of preset scene information (for example, half time during a match, or the appearance of a celebrity), the operator clicks the "set" button in the setting interface 20b of the video server. After the click, the video server generates a trigger display instruction and displays multiple pieces of scene information in the setting interface according to that instruction; the scene information in fig. 2b includes: start of a game period, end of a game period, game pause, and end of the game. The operator clicks the button corresponding to one of the pieces of scene information according to the target video data being viewed, and the video server generates a click determination instruction; the scene information indicated by the click determination instruction is the target scene information. As shown in fig. 2b, the operator clicks the button corresponding to the "end of a game period" scene information, so the video server generates a click determination instruction whose indicated scene information, "end of a game period", is the target scene information. The target scene information is thus the scene information, selected by the operator from the multiple pieces of scene information, that best matches the currently played target video.
That is, this process manually determines the scene information of the currently played target video data as the target scene information. The various pieces of scene information are configured in advance in the configuration page of the video server.
Optionally, the video server detects a timestamp of the currently played target video data; if the detected timestamp is the same as a preset target timestamp, the video server generates the scene information corresponding to the target timestamp for the target video data as the target scene information. For example, suppose the target video data is the video data of a large live-streamed conference, and a conference pause for a rest at 10:30 am is recorded in a preset schedule for the conference. The timestamp of the target video data is detected while the conference video is played on the video server side, and if the current timestamp is detected to be 10:30, the video server generates the scene information about the conference pause as the target scene information.
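The timestamp-based variant above can be sketched as follows. The 10:30 conference-pause example comes from the text; the function name, the string timestamp format, and the dictionary layout of the preset schedule are assumptions made purely for illustration:

```python
# Hypothetical sketch of the timestamp-triggered variant: the video server
# keeps a preset schedule mapping timestamps to scene information, and checks
# the timestamp of the currently played target video data against it.
PRESET_SCHEDULE = {
    "10:30": "conference pause",   # rest break recorded in the preset schedule
    "12:00": "conference end",
}

def scene_info_for_timestamp(current_timestamp: str):
    """Return target scene information if the timestamp matches a preset one,
    otherwise None (no scene label is generated for this moment)."""
    return PRESET_SCHEDULE.get(current_timestamp)

assert scene_info_for_timestamp("10:30") == "conference pause"
assert scene_info_for_timestamp("10:31") is None
```

In a real server the comparison would run against the live playback clock; the dictionary lookup here only stands in for that matching step.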
Optionally, the video server extracts a video frame of the currently played target video data as a video frame image and detects the service type of the video frame image. According to the detected service type, the video server generates scene information corresponding to that service type for the target video data as the target scene information. The service type may be a half-time rest type, a match ending type, a celebrity appearing type, and the like; that is, when the service type of the video frame image meets a preset service type (which is set in advance), the video server generates the scene information corresponding to that service type. For example, if the extracted video frame image is an image of star A entering the scene, the service type corresponding to the image is the celebrity appearing type, which meets the preset service type, so the video server generates scene information about star A entering the scene as the target scene information.
Step S102, the video server generates a target scene label corresponding to the target scene information and adds the target scene label to a target playing file.
Specifically, after the target scene information is acquired, the scene number corresponding to the target scene information is queried in the information number mapping table and used as the target scene number. The information number mapping table includes a plurality of pieces of scene information and the scene number corresponding to each piece; for example, the scene information "basketball quarter ends" corresponds to scene number 001, "basketball quarter starts" corresponds to scene number 002, and "basketball game is completed" corresponds to scene number 003. The video server then generates a scene label as the target scene label according to the target scene number, and adds the target scene label to the target playing file. A playing file is an index list file of video data, which records the attributes and storage addresses of the video data to be played. Since video data is divided into a plurality of video slices for storage, the attribute and storage address of each video slice are recorded in the playing file; the index list file of the target video data is called the target playing file. The attribute of a slice file may be represented by a tag, and the storage address of a slice file may be represented by a URL. If the target playing file is a target m3u8 file, the scene label corresponds to an m3u8 tag; that is, the video server generates an m3u8 tag according to the target scene number, and the m3u8 tag generated from the target scene number is called the target m3u8 tag. An m3u8 file is a playlist file in UTF-8 text format. For example, the target m3u8 tag may be: #EXT-X-SCENE-CODE:X, which represents the target m3u8 tag corresponding to target scene number X; the target m3u8 tag is added to the target m3u8 file in the form of a tag.
The m3u8 file is a playing file based on the HLS (HTTP Live Streaming) protocol; an m3u8 tag is a tag in the m3u8 file that is distinguished from the rest of the data, and begins with the string "#EXT" in the m3u8 file. Of course, the RTMP (Real Time Messaging Protocol) protocol may also be adopted, in which case the corresponding playing file is an h264 file and the corresponding tag is an h264 tag. The MPEG-DASH (Dynamic Adaptive Streaming over HTTP, an adaptive bitrate streaming technique) protocol may also be used, in which case the corresponding playing file is an mpd file and the corresponding tag is an mpd tag.
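As an illustration of how the target m3u8 tag described above might be injected into a target m3u8 file: the #EXT-X-SCENE-CODE tag name comes from the text, while the zero-padded number format (matching the example numbers 001-003) and the placement before #EXT-X-ENDLIST are assumptions of this sketch, not details fixed by the embodiment:

```python
def add_scene_tag(m3u8_text: str, scene_number: int) -> str:
    """Append a custom scene tag to an m3u8 playlist (hedged sketch)."""
    # Build the custom tag named in the text from the target scene number.
    tag = f"#EXT-X-SCENE-CODE:{scene_number:03d}"
    lines = m3u8_text.rstrip("\n").split("\n")
    # Keep #EXT-X-ENDLIST last if present (assumed placement), else append.
    if lines and lines[-1] == "#EXT-X-ENDLIST":
        lines.insert(len(lines) - 1, tag)
    else:
        lines.append(tag)
    return "\n".join(lines) + "\n"

playlist = "#EXTM3U\n#EXT-X-VERSION:3\n#EXTINF:10.0,\nseg0.ts\n#EXT-X-ENDLIST\n"
tagged = add_scene_tag(playlist, 3)
assert "#EXT-X-SCENE-CODE:003" in tagged
assert tagged.index("#EXT-X-SCENE-CODE:003") < tagged.index("#EXT-X-ENDLIST")
```

Because every m3u8 tag begins with "#EXT", a player that does not understand the custom tag can simply skip it, which is what lets this mechanism coexist with standard HLS playback.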
Step S103, the video server sends the target video data and the target playing file to a mobile terminal, so that the mobile terminal plays the target video data according to the target playing file, and executes business operation associated with the target scene label when the target scene label is read in the target playing file.
Specifically, after the target playing file corresponding to the target video data is generated, the target playing file includes the target scene label. The video server sends the target playing file to the mobile terminal (for example, the user terminal 10a, the user terminal 10b, or the user terminal 10c in the embodiment corresponding to fig. 1 a), so that the mobile terminal sends a video data request to the video server according to the identification information in the target playing file (for example, the identification information may be a URL), where the data requested by the video data request is the target video data. After receiving the video data request, the video server sends the corresponding target video data to the mobile terminal, so that the mobile terminal plays the target video data after receiving it. Because the target video data is divided into a plurality of video slices for storage, when the mobile terminal acquires the video data corresponding to the plurality of video slices from the video server, the video data is played according to the sequence of the video slices in the target playing file. When the mobile terminal reads the target scene label in the target playing file, it queries and acquires the service behavior data corresponding to the target scene label, called the target service behavior data, and executes the service operation according to the acquired target service behavior data, that is, the service operation associated with the target scene label. The service behavior data may be: playing auxiliary video data, displaying pictures associated with the scene label, and the like.
To make the pushed service behavior data fit the target scene label more closely, the content of the auxiliary video data may have an association relationship with the target scene label, and the content of the picture may likewise have an association relationship with the target scene label. For example, the behavior data corresponding to the target scene label "half-time break in a soccer match" is: playing comment video data of a soccer commentator; the behavior data corresponding to the target scene label "star B coming out" is: pushing advertisements endorsed by star B, and the like. When the service operation of the target behavior data is executed, a service layer and a User Interface (UI) layer in the mobile terminal may be called to cooperate to complete the operation.
The mobile terminal may include a mobile phone, a tablet computer, a notebook computer, a handheld computer, a Mobile Internet Device (MID), a Point Of Sale (POS) machine, a wearable device (e.g., a smart watch, a smart bracelet, etc.), or other mobile terminals having a video playing function.
Optionally, the video server receives tag configuration information input by the operator, where the tag configuration information includes a scene number and service behavior data corresponding to the scene number. The video server constructs a label behavior mapping table according to the mapping relation between the scene number input by the operator and the corresponding service behavior data, namely the unique corresponding service behavior data exists in the scene number. Of course, the tag behavior mapping table may be created in the video server, and may also be modified and queried. And sending the constructed label behavior mapping table to the mobile terminal, so that the mobile terminal can inquire the scene numbers and the service behavior data corresponding to the scene numbers in the label behavior mapping table. The information number mapping table comprises a plurality of pieces of scene information and scene numbers corresponding to the scene information, so that the information number mapping table and the label behavior mapping table can be fused to obtain the scene information, the scene labels and the service behavior data, and the fused table is sent to the mobile terminal so that the mobile terminal can inquire the corresponding relations among the scene information, the scene labels and the service behavior data in the table.
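The two mapping tables described above, and their fusion into a single table sent to the mobile terminal, can be sketched as follows. The basketball scene entries and numbers 001-003 come from the text; the behavior-data strings and the fused field layout are illustrative assumptions:

```python
# Information number mapping table: scene information -> scene number.
info_number_table = {
    "basketball quarter ends": "001",
    "basketball quarter starts": "002",
    "basketball game is completed": "003",
}

# Label behavior mapping table: scene number -> service behavior data
# (behavior strings are made up for illustration).
label_behavior_table = {
    "001": "play commentator video",
    "003": "push closing advertisement",
}

# Fused table: scene information -> (scene number, service behavior data),
# so the mobile terminal can query all three correspondences in one place.
fused = {
    info: (num, label_behavior_table.get(num))
    for info, num in info_number_table.items()
}

assert fused["basketball quarter ends"] == ("001", "play commentator video")
assert fused["basketball quarter starts"] == ("002", None)
```

A scene number without configured behavior data simply yields no service operation on the terminal, which is why the lookup above tolerates missing entries.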
Please refer to fig. 2c, which is a scene diagram illustrating another video data processing method according to an embodiment of the present invention. An operator watches the target video data on the video server side; if the scene information of the currently played target video data meets preset scene information, the operator sends the scene information to the video server through an operation controller. The video server generates the corresponding scene label according to the received scene information and adds the scene label to the target playing file, which is equivalent to adding the scene label to the target video data. The video server sends the target playing file carrying the scene label, together with the target video data, to a decoder in the mobile terminal; the decoder is used to decode the target video data. Since the target video data is divided into a plurality of video slices for storage, and the target video data acquired by the mobile terminal from the video server is likewise the video data corresponding to the plurality of video slices, the decoded video slices of the target video data are played in the display screen of the mobile terminal according to the sequence of the video slices in the target playing file. When the decoder reads a scene label in the target playing file, it searches for and acquires the service behavior data corresponding to the scene label in the label behavior mapping table, and executes the service operation corresponding to the acquired service behavior data.
The video server acquires target scene information corresponding to currently played target video data; the video server generates a target scene label corresponding to the target scene information and adds the target scene label to a target playing file; the video server sends the target video data and the target playing file to the mobile terminal, so that the mobile terminal plays the target video data according to the target playing file, and executes business operation associated with the target scene label when the target scene label is read in the target playing file. In this way, when the mobile terminal plays the video, if the scene tag is read, the service operation related to the scene tag is executed. The scene label is added into the video data, so that the service data can be released when the video played by the user side is in a specific scene, the situation that the user cannot normally watch the video content due to manual pushing of the service data is avoided, the matching degree between the time for releasing the service data and the scene information of the video content watched by the user is improved, and the time accuracy for pushing the service data is further enhanced.
Fig. 3 is a schematic view of a scene of a video data processing method according to an embodiment of the present invention. The video data processing method may include the steps of:
step S201, the mobile terminal obtains a play request, and obtains a target play file and target video data according to the play request.
Specifically, the mobile terminal monitors for and receives a play request, where the play request is used by the mobile terminal to obtain the target video data and the target playing file corresponding to the target video data from the video server. The specific process is as follows: the mobile terminal first acquires the target playing file from the video server, and sends a video data request to the video server according to the identification information of the target video data in the target playing file, so that the video server sends the requested target video data to the mobile terminal according to the video data request. The identification information of the target video data is information that uniquely identifies the target video data (for example, it may be the URL of the target video data in the video server). The target playing file obtained from the video server includes the target scene label, which is a label generated by the video server and matched with the target scene information; the target scene information is the scene information corresponding to the target video data currently played in the video server. The target playing file is an index list file of the target video data that records the attributes and storage addresses of the target video data: since the target video data is divided into a plurality of video slices for storage, the attributes of each video slice (e.g. slice size or slice duration) and the storage address of each slice are recorded in the target playing file. Requesting the target video data according to the target playing file therefore means obtaining the video data corresponding to each video slice according to the address of each slice in the target playing file.
The storage address in the target play file may be represented by a URL, and the attribute may be represented by a tag.
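Reading the target playing file on the terminal side therefore amounts to walking its lines, collecting segment URLs in order and noting any scene tags. A minimal parsing sketch follows; the #EXT-X-SCENE-CODE tag name comes from the text, everything else follows ordinary m3u8 conventions:

```python
def parse_playlist(m3u8_text: str):
    """Return (scene_numbers, segment_urls) from a playlist text.
    Hedged sketch: real playlists carry more tags than handled here."""
    scene_numbers, segment_urls = [], []
    for line in m3u8_text.splitlines():
        line = line.strip()
        if line.startswith("#EXT-X-SCENE-CODE:"):
            scene_numbers.append(line.split(":", 1)[1])   # scene number after ':'
        elif line and not line.startswith("#"):
            segment_urls.append(line)   # non-tag lines are slice storage addresses
    return scene_numbers, segment_urls

text = ("#EXTM3U\n#EXTINF:10.0,\nhttp://example.com/seg0.ts\n"
        "#EXT-X-SCENE-CODE:001\n#EXTINF:10.0,\nhttp://example.com/seg1.ts\n")
scenes, urls = parse_playlist(text)
assert scenes == ["001"]
assert urls == ["http://example.com/seg0.ts", "http://example.com/seg1.ts"]
```

The segment URLs in `urls` (hypothetical addresses) are then fetched in list order, which is exactly the "play according to the sequence of the video slices in the target playing file" behavior of step S202.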
Step S202, the mobile terminal plays the target video data according to the target playing file.
Specifically, after receiving target video data, that is, video data corresponding to a plurality of video slices of the target video data, the mobile terminal loads and plays the video data corresponding to each video slice according to the sequence of each video slice in the target playing file, that is, plays the target video data.
Step S203, when the mobile terminal reads the target scene tag in the target playing file, executing a service operation associated with the target scene tag.
Specifically, the target video is played according to the sequence of the plurality of video slices in the target playing file. When the mobile terminal reads the target scene label in the target playing file, it extracts and parses the target scene label to obtain a scene number; the scene number obtained from the target scene label is called the target scene number. In the label behavior mapping table, the mobile terminal then searches for and acquires the service behavior data corresponding to the target scene label, called the target service behavior data, and executes the service operation according to the target service behavior data. The service behavior data may be: playing auxiliary video data, or displaying static multimedia material. To make the pushed service behavior data fit the target scene label more closely, the content of the auxiliary video data may have an association relationship with the target scene label, and likewise the content of a displayed picture. For example, the behavior data corresponding to the target scene label "end of soccer match" is: playing comment video data of a soccer commentator, this video data being the auxiliary video data; the behavior data corresponding to the target scene label "star B coming out" is: pushing advertisements endorsed by star B, and the like. The label behavior mapping table is obtained by the mobile terminal from the video server; it includes at least one scene number and the service behavior data corresponding to each scene number, and the mobile terminal can query the scene numbers and corresponding service behavior data in the label behavior mapping table at any time.
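The terminal-side handling of step S203 can be sketched as follows. The #EXT-X-SCENE-CODE tag name is taken from the earlier description; the behavior strings and scene numbers here are illustrative assumptions, not values fixed by the embodiment:

```python
# Label behavior mapping table as received from the video server
# (entries are made up for illustration).
label_behavior_table = {
    "004": "play soccer commentator video",
    "005": "push advertisement for star B",
}

def handle_scene_tag(tag_line: str, table: dict):
    """Parse a line like '#EXT-X-SCENE-CODE:004' read from the target playing
    file and return the target service behavior data to execute, or None if
    the line is not a scene tag or has no configured behavior."""
    if not tag_line.startswith("#EXT-X-SCENE-CODE:"):
        return None
    scene_number = tag_line.split(":", 1)[1]   # target scene number
    return table.get(scene_number)             # target service behavior data

assert handle_scene_tag("#EXT-X-SCENE-CODE:004", label_behavior_table) == \
    "play soccer commentator video"
assert handle_scene_tag("#EXTINF:10.0,", label_behavior_table) is None
```

The returned behavior string would then drive the actual service operation (playing auxiliary video, showing a picture), which in the embodiment is carried out by the service layer together with the UI layer.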
Please refer to fig. 4a, which is an interaction diagram of a video data processing method according to an embodiment of the present invention. The video data processing method may include:
in step S301, the video server receives target scene information corresponding to the currently played target video data.
Step S302, the video server generates a target scene tag corresponding to the target scene information, and adds the target scene tag to a target playing file.
Step S303, the video server sends the target video data and the target playback file to the mobile terminal.
For specific implementation of steps S301 to S303, reference may be made to the description of steps S101 to S103 in the embodiment corresponding to fig. 2a, and details are not repeated here.
Step S304, the mobile terminal sends a video data request to the video server according to the target playing file.
Specifically, the mobile terminal sends a video data request to the video server according to the identification information of the target video data in the target playing file, where the data requested by the video data request is the target video data. Since the target video data is composed of a plurality of video slices, sending a video data request to the video server means sending requests for the video slices to the video server according to the URLs of the video slices in the target playing file.
Step S305, the video server sends the target video data to the mobile terminal.
Specifically, after receiving the video data request, the video server sends the target video data to the mobile terminal, that is, sends the video data corresponding to each video slice.
Step S306, the mobile terminal plays the target video data according to the target playing file.
Specifically, after the mobile terminal acquires video data of a plurality of video slices, the video data of each video slice is loaded and played according to the sequence of each video slice in the target playing file, that is, the target video data is played.
Step S307, when the mobile terminal reads the target scene tag in the target playing file, executing a service operation associated with the target scene tag.
The specific implementation of step S307 may refer to the description of step S203 in the embodiment corresponding to fig. 3, and is not described herein again.
Please refer to fig. 4b, which is a hardware interaction diagram of a video data processing method according to an embodiment of the present invention. An operator of the server configures the mapping relationships among scene information, scene numbers, and service behavior data through a video scene configuration page in the video scene configurator. The mapping relationship between scene information and scene numbers can be built into an information number mapping table, and the mapping relationship between scene numbers and service behavior data into a label behavior mapping table; alternatively, the correspondence among scene information, scene numbers, and service behavior data can be built into a single label behavior mapping table without constructing a separate information number mapping table. The constructed mapping tables are all stored in the video information server, and operators can add entries to the mapping tables, modify entries in them, and so on through the video scene configurator. The video information server may send the constructed label behavior mapping table to a play service manager in the mobile terminal, which provides query and storage functions for the label behavior mapping table. The operator sends the scene information of the target video currently played in the video streaming media server to the video streaming media server through the video scene configurator; the scene number corresponding to the scene information is queried in the information number mapping table, and the video streaming media server packages the scene number into the corresponding scene label and adds the scene label to the target playing file, which is equivalent to inserting the scene label into the target video data.
The video streaming media server sends the target video data and the target playing file to a client in the mobile terminal; the decoder decodes the plurality of video slices of the target video data, and the video data of each slice is played according to the sequence of the video slices indicated in the target playing file. When the decoder loads a scene label in the target playing file, it extracts the scene number in the scene label and sends it to the playing service manager. The playing service manager searches for and acquires the service behavior data corresponding to the scene number in the label behavior mapping table, and calls the service operation execution controller to execute the service operation corresponding to the service behavior data.
Further, please refer to fig. 5, which is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention. As shown in fig. 5, the video data processing apparatus 1 may be applied to a video server, and the video data processing apparatus 1 may include: a receiving module 11, an adding module 12 and a sending module 13;
a receiving module 11, configured to obtain target scene information corresponding to currently played target video data;
an adding module 12, configured to generate a target scene tag corresponding to the target scene information, and add the target scene tag to a target playing file; the target playing file is an index list file of the target video data;
a sending module 13, configured to send the target video data and the target playing file to a mobile terminal, so that the mobile terminal plays the target video data according to the target playing file, and executes a service operation associated with the target scene tag when the target scene tag is read in the target playing file.
For specific functional implementation manners of the receiving module 11, the adding module 12, and the sending module 13, reference may be made to steps S101 to S103 in the embodiment corresponding to fig. 2a, which is not described herein again.
Referring to fig. 5 together, the video data processing apparatus 1 may include: the receiving module 11, the adding module 12, and the sending module 13 may further include: a configuration module 14 and a construction module 15.
A configuration module 14 for receiving tag configuration information; the label configuration information comprises a scene number and service behavior data corresponding to the scene number;
the building module 15 is configured to build a tag behavior mapping table according to the mapping relationship between each scene number and the service behavior data corresponding to each scene number, and send the tag behavior mapping table to the mobile terminal, so that the mobile terminal queries, in the tag behavior mapping table, the mapping relationship between each scene number and the service behavior data corresponding to each scene number.
The specific functional implementation manner of the configuration module 14 and the construction module 15 may refer to step S103 in the embodiment corresponding to fig. 2a, which is not described herein again.
Referring to fig. 5, the adding module 12 may include: a number acquisition unit 121, a determination unit 122.
A number acquiring unit 121 configured to acquire a target scene number corresponding to the target scene information;
the number obtaining unit 121 is further configured to generate the target scene tag according to the target scene number;
a determining unit 122, configured to add the target scene tag to the target playback file.
For specific functional implementation manners of the number obtaining unit 121 and the determining unit 122, reference may be made to step S102 in the embodiment corresponding to fig. 2a, which is not described herein again.
Referring to fig. 5, the sending module 13 may include: a file transmission unit 131, a data transmission unit 132;
a file sending unit 131, configured to send the target play file to the mobile terminal, so that the mobile terminal sends a video data request to the video server according to the target play file;
a data sending unit 132, configured to send the target video data requested by the video data request to the mobile terminal, so that the mobile terminal plays the target video data according to the target play file, and executes a service operation associated with the target scene tag when the target scene tag is read in the target play file.
For specific functional implementation manners of the file sending unit 131 and the data sending unit 132, reference may be made to step S103 in the embodiment corresponding to fig. 2a, which is not described herein again.
Referring to fig. 5, the receiving module 11 may include: a display unit 111, a generation unit 112, a detection unit 113, and an extraction unit 114;
the display unit 111 is configured to receive a trigger display instruction, and display a plurality of pieces of scene information according to the trigger display instruction;
a generating unit 112, configured to receive a click determination instruction, and use scene information corresponding to the click determination instruction as target scene information corresponding to the target video data;
a detecting unit 113, configured to detect a timestamp of the currently played target video data;
the generating unit 112 is further configured to generate target scene information corresponding to the target timestamp for the target video data if the timestamp is the target timestamp;
an extracting unit 114, configured to extract a video frame image of the currently played target video data;
the generating unit 112 is further configured to identify a service type of the video frame image, and generate target scene information corresponding to the service type for the target video data.
For specific functional implementation manners of the display unit 111, the generation unit 112, the detection unit 113, and the extraction unit 114, reference may be made to step S101 in the embodiment corresponding to fig. 2a, which is not described herein again.
Further, please refer to fig. 6, which is a schematic structural diagram of another video data processing apparatus according to an embodiment of the present invention. As shown in fig. 6, the video data processing apparatus 2 may be applied to a mobile terminal, and the video data processing apparatus 2 may include: a request acquisition module 21, a video acquisition module 22, a playing module 23 and a service operation module 24;
a request obtaining module 21, configured to obtain a play request;
the video obtaining module 22 is configured to obtain a target playing file and target video data according to the playing request; the target playing file comprises a target scene label; the target scene label is a label which is generated by the video server and matched with the target scene information; the target scene information is scene information corresponding to currently played target video data in the video server; the target playing file is an index list file of the target video data;
the playing module 23 is configured to play the target video data according to the target playing file;
and a service operation module 24, configured to execute a service operation associated with the target scene tag when the target scene tag is read in the target playing file.
For specific functional implementation manners of the request obtaining module 21, the video obtaining module 22, the playing module 23, and the service operation module 24, reference may be made to steps S201 to S203 in the embodiment corresponding to fig. 3, which is not described herein again.
Referring to fig. 6, the service operation module 24 may include: a parsing unit 241, a searching unit 242, and a service operation unit 243;
a parsing unit 241, configured to parse the target scene tag to obtain a target scene number;
a searching unit 242, configured to extract, from a label behavior mapping table, target service behavior data corresponding to the target scene number;
a service operation unit 243, configured to execute a service operation according to the target service behavior data; the label behavior mapping table comprises at least one scene number and service behavior data corresponding to each scene number.
The service operation unit 243 is specifically configured to: play auxiliary video data or display static multimedia materials according to the target service behavior data.
The specific functional implementation manners of the parsing unit 241, the searching unit 242, and the service operation unit 243 may refer to step S203 in the embodiment corresponding to fig. 3, and details are not repeated here.
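As a concrete illustration of the parsing and searching steps above, the following Python sketch shows one possible shape of the scene-label parsing and the label behavior mapping table lookup. The `#EXT-SCENE:<number>` label syntax, the table contents, and the behavior fields are assumptions made for illustration only; the patent does not prescribe a concrete format.

```python
# Hypothetical scene-label syntax, modeled on HLS-style index tags.
SCENE_LABEL_PREFIX = "#EXT-SCENE:"

# Label behavior mapping table: scene number -> service behavior data.
# The entries below are invented examples.
LABEL_BEHAVIOR_TABLE = {
    1: {"action": "play_auxiliary_video", "uri": "intermission_clip.mp4"},
    2: {"action": "show_static_material", "uri": "banner.png"},
}

def parse_scene_label(label: str) -> int:
    """Parsing unit: turn a scene label like '#EXT-SCENE:1' into a scene number."""
    if not label.startswith(SCENE_LABEL_PREFIX):
        raise ValueError(f"not a scene label: {label!r}")
    return int(label[len(SCENE_LABEL_PREFIX):])

def execute_business_operation(label: str) -> dict:
    """Searching + service operation units: look up the target service
    behavior data for the parsed scene number."""
    scene_number = parse_scene_label(label)
    behavior = LABEL_BEHAVIOR_TABLE[scene_number]
    # A real client would now play the auxiliary video data or render the
    # static multimedia material named by `behavior`.
    return behavior
```

Any label whose prefix does not match is rejected rather than silently ignored, which keeps ordinary index-list tags from being misread as scene labels.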
Referring to fig. 6, the video obtaining module 22 may include: a file obtaining unit 221 and a request sending unit 222;
a file obtaining unit 221, configured to obtain the target playing file according to the playing request;
the request sending unit 222 is configured to send a video data request to the video server according to the identification information of the target video data in the target playing file, so that the video server sends the target video data requested by the video data request to the mobile terminal.
For specific functional implementation manners of the file obtaining unit 221 and the request sending unit 222, reference may be made to step S201 in the embodiment corresponding to fig. 3, which is not described herein again.
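The division of labor between the file obtaining unit and the request sending unit can be sketched as follows. The HLS-style index format and the server base URL are assumptions for illustration, since the patent fixes no concrete playing-file syntax; it states only that the target playing file is an index list file of the target video data.

```python
VIDEO_SERVER_BASE = "https://video.example.com/"  # assumed server address

def extract_video_requests(playing_file: str) -> list:
    """Build one video data request URL per segment identified in the
    target playing file; lines starting with '#' are index tags, not data."""
    requests = []
    for line in playing_file.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            requests.append(VIDEO_SERVER_BASE + line)
    return requests

# An invented example index list file.
index_list_file = """#EXTM3U
#EXTINF:10,
segment0.ts
#EXTINF:10,
segment1.ts"""
```

Each resulting URL would then be sent to the video server as a video data request, and the server returns the corresponding target video data.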
The video server acquires target scene information corresponding to currently played target video data; the video server generates a target scene label corresponding to the target scene information and adds the target scene label to a target playing file; the video server then sends the target video data and the target playing file to the mobile terminal, so that the mobile terminal plays the target video data according to the target playing file and executes a business operation associated with the target scene label when the target scene label is read from the target playing file. In this way, when the mobile terminal plays the video and reads a scene label, it executes the business operation associated with that scene label. Because the scene label is embedded in the video data, service data can be delivered exactly when the video played on the user side reaches a specific scene, avoiding the interruption of normal viewing caused by manually pushed service data; this improves the match between the moment at which the service data is delivered and the scene information of the video content the user is watching, and thus improves the timing accuracy of service data pushing.
Fig. 7 is a schematic structural diagram of a video data processing system according to an embodiment of the present invention. The video data processing system 3 includes: a mobile terminal 100c and a video server 100a, wherein the mobile terminal 100c and the video server 100a establish a connection through a network 100 b.
The video server 100a is configured to obtain target scene information corresponding to currently played target video data;
the video server 100a is further configured to generate a target scene tag corresponding to the target scene information, and add the target scene tag to a target playing file; the target playing file is an index list file of the target video data;
the video server 100a is further configured to send the target video data and the target playing file to a mobile terminal;
the mobile terminal 100c is configured to play the target video data according to the target play file;
the mobile terminal 100c is further configured to execute a service operation associated with the target scene tag when the target scene tag is read in the target playing file.
For specific functional implementation manners of the mobile terminal 100c and the video server 100a, reference may be made to steps S301 to S307 in the embodiment corresponding to fig. 4a, which is not described herein again.
It should be understood that the video server 100a described in the embodiment of the present invention may perform the description of the video data processing method in the embodiment corresponding to fig. 2a to fig. 4b, and may also perform the description of the video data processing apparatus 1 in the embodiment corresponding to fig. 5, which is not described herein again. In addition, the beneficial effects of the same method are not described in detail. Moreover, the mobile terminal 100c described in the embodiment of the present invention may perform the description of the video data processing method in the embodiment corresponding to fig. 3 and fig. 4b, and may also perform the description of the video data processing apparatus 2 in the embodiment corresponding to fig. 6, which is not described herein again. In addition, the beneficial effects of the same method are not described in detail.
Further, please refer to fig. 8, which is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 8, the video data processing apparatus 1 in fig. 5 may be applied to the electronic device 1000, and the electronic device 1000 may include: a processor 1001, a network interface 1004, and a memory 1005; the electronic device 1000 may further include: a user interface 1003 and at least one communication bus 1002. The communication bus 1002 is used to implement connection communication among these components. The user interface 1003 may include a display (Display) and a keyboard (Keyboard); optionally, the user interface 1003 may further include a standard wired interface and a standard wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory, or may be a non-volatile memory, such as at least one disk memory. The memory 1005 may optionally also be at least one storage device located remotely from the processor 1001. As shown in fig. 8, the memory 1005, which is a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
The electronic device 1000 may be the video server in the embodiment corresponding to fig. 2a, and in the electronic device 1000 shown in fig. 8, the network interface 1004 may provide a network communication function; the user interface 1003 is an interface for providing a user with input; and the processor 1001 may be used to invoke a device control application stored in the memory 1005 to implement:
acquiring target scene information corresponding to currently played target video data;
generating a target scene label corresponding to the target scene information, and adding the target scene label to a target playing file; the target playing file is an index list file of the target video data;
and sending the target video data and the target playing file to a mobile terminal so that the mobile terminal plays the target video data according to the target playing file, and executing business operation associated with the target scene label when the target scene label is read in the target playing file.
In one embodiment, the processor 1001 further performs the steps of:
receiving tag configuration information; the label configuration information comprises a scene number and service behavior data corresponding to the scene number;
and constructing a label behavior mapping table according to the mapping relation between each scene number and the service behavior data corresponding to each scene number, and sending the label behavior mapping table to the mobile terminal, so that the mobile terminal queries the mapping relation between each scene number and the service behavior data corresponding to each scene number in the label behavior mapping table.
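One way the server-side construction and distribution of the label behavior mapping table could look is sketched below; it assumes the label configuration information arrives as (scene number, service behavior data) pairs and that the table is serialized as JSON before being sent to the mobile terminal. Both assumptions are illustrative, not stated in the patent.

```python
import json

def build_label_behavior_table(config_entries):
    """Construct the label behavior mapping table from label configuration
    information: each entry maps a scene number to its service behavior data."""
    return {scene_number: behavior for scene_number, behavior in config_entries}

def serialize_for_terminal(table) -> str:
    """Serialize the table for transmission to the mobile terminal (assumed JSON)."""
    return json.dumps(table)

table = build_label_behavior_table([
    (1, {"action": "play_auxiliary_video"}),
    (2, {"action": "show_static_material"}),
])
```

Note that JSON serialization turns the integer scene numbers into string keys, so a receiving terminal that round-trips the table through JSON must account for that coercion when querying it.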
In an embodiment, when the processor 1001 generates a target scene tag corresponding to the target scene information and adds the target scene tag to a target playing file, the following steps are specifically performed:
acquiring a target scene number corresponding to the target scene information;
generating the target scene label according to the target scene number;
and adding the target scene label to the target playing file.
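A minimal sketch of these three steps, again using the hypothetical `#EXT-SCENE:<number>` label syntax (the patent does not prescribe a concrete tag format) and an HLS-style index list; the `before_segment` parameter is an invented way of saying where in the file the scene begins:

```python
def generate_scene_label(scene_number: int) -> str:
    """Generate the target scene label from the target scene number."""
    return f"#EXT-SCENE:{scene_number}"

def add_label_to_playing_file(playing_file: str, scene_number: int,
                              before_segment: str) -> str:
    """Insert the scene label into the index list immediately before the
    segment at which the scene starts."""
    label = generate_scene_label(scene_number)
    out = []
    for line in playing_file.splitlines():
        if line.strip() == before_segment:
            out.append(label)
        out.append(line)
    return "\n".join(out)

playing_file = "#EXTM3U\nsegment0.ts\nsegment1.ts"
tagged = add_label_to_playing_file(playing_file, 1, "segment1.ts")
```

Because the label is just another index-list line, a client that does not understand it can skip it, while a client that does will trigger the associated service operation when playback reaches that point.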
In an embodiment, when the processor 1001 sends the target video data and the target playing file to the mobile terminal, so that the mobile terminal plays the target video data according to the target playing file and executes a service operation associated with the target scene tag when the target scene tag is read in the target playing file, the following steps are specifically performed:
sending the target playing file to the mobile terminal so that the mobile terminal sends a video data request to the video server according to the target playing file;
and sending the target video data requested by the video data request to the mobile terminal so that the mobile terminal plays the target video data according to the target playing file, and executing business operation associated with the target scene label when the target scene label is read in the target playing file.
In an embodiment, when the processor 1001 acquires target scene information corresponding to currently played target video data, the following steps are specifically performed:
receiving a trigger display instruction, and displaying a plurality of pieces of scene information according to the trigger display instruction;
and receiving a click determination instruction, and taking scene information corresponding to the click determination instruction as target scene information corresponding to the target video data.
In an embodiment, when the processor 1001 acquires target scene information corresponding to currently played target video data, the following steps are specifically performed:
detecting a timestamp of the currently played target video data;
and if the timestamp is a target timestamp, generating target scene information corresponding to the target timestamp for the target video data.
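These two steps can be sketched as a lookup against pre-marked target timestamps; the second-based units and the example scenes are assumptions for illustration:

```python
# Target timestamps (seconds into the video, an assumed unit) that mark
# scenes of interest; the values below are invented examples.
TARGET_TIMESTAMPS = {
    2700: "intermission",  # e.g. halftime of a match
    5400: "match end",
}

def scene_info_for_timestamp(timestamp: int):
    """If the detected timestamp is a target timestamp, return the target
    scene information generated for it; otherwise return None."""
    return TARGET_TIMESTAMPS.get(timestamp)
```

In practice the detected playback timestamp would be compared against the target timestamps with some tolerance rather than exact equality; exact lookup keeps the sketch minimal.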
In an embodiment, when the processor 1001 acquires target scene information corresponding to currently played target video data, the following steps are specifically performed:
extracting a video frame image of the currently played target video data;
and identifying the service type of the video frame image, and generating target scene information corresponding to the service type for the target video data.
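The frame-recognition path could be organized as below. The classifier here is only a stub standing in for a real image-recognition model, which the patent leaves unspecified, and the service types and their mapping to scene information are illustrative:

```python
def classify_service_type(frame: dict) -> str:
    """Stub for the server's image recognition of a video frame image; a
    deployed system would run a trained classifier over the frame pixels."""
    return frame.get("annotated_type", "unknown")

# Assumed mapping from identified service type to target scene information.
SERVICE_TYPE_TO_SCENE_INFO = {
    "intermission": "intermission scene",
    "match_end": "match-end scene",
    "celebrity_departure": "celebrity-departure scene",
}

def scene_info_for_frame(frame: dict):
    """Generate target scene information for the frame's identified service type."""
    return SERVICE_TYPE_TO_SCENE_INFO.get(classify_service_type(frame))
```

Separating classification from the service-type-to-scene mapping lets the recognition model be swapped out without touching how scene information is generated.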
It should be understood that the electronic device 1000 described in the embodiment of the present invention may perform the description of the video data processing method in the embodiment corresponding to fig. 2a to fig. 4b, and may also perform the description of the video data processing apparatus 1 in the embodiment corresponding to fig. 5, which is not described herein again. In addition, the beneficial effects of the same method are not described in detail.
Further, it should be noted that an embodiment of the present invention also provides a computer storage medium, where the computer storage medium stores the aforementioned computer program executed by the video data processing apparatus 1, and the computer program includes program instructions; when the processor executes the program instructions, the description of the video data processing method in the embodiments corresponding to fig. 2a to fig. 4b can be performed, and details are not repeated here. In addition, the beneficial effects of the same method are not described in detail. For technical details not disclosed in the embodiments of the computer storage medium to which the present invention relates, reference is made to the description of the method embodiments of the present invention.
Further, please refer to fig. 9, which is a schematic structural diagram of another electronic device according to an embodiment of the present invention. As shown in fig. 9, the video data processing apparatus 2 in fig. 6 may be applied to the electronic device 2000, and the electronic device 2000 may include: a processor 2001, a network interface 2004, and a memory 2005; the electronic device 2000 may further include: a user interface 2003 and at least one communication bus 2002. The communication bus 2002 is used to implement connection communication among these components. The user interface 2003 may include a display (Display) and a keyboard (Keyboard); optionally, the user interface 2003 may further include a standard wired interface and a standard wireless interface. The network interface 2004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 2005 may be a high-speed RAM memory, or may be a non-volatile memory, such as at least one disk memory. The memory 2005 may optionally also be at least one storage device located remotely from the processor 2001. As shown in fig. 9, the memory 2005, which is a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
The electronic device 2000 may be the mobile terminal in the embodiment corresponding to fig. 3, and in the electronic device 2000 shown in fig. 9, the network interface 2004 may provide a network communication function; and the user interface 2003 is primarily used to provide an interface for user input; and processor 2001 may be used to invoke the device control application stored in memory 2005 to implement:
acquiring a playing request, and acquiring a target playing file and target video data according to the playing request; the target playing file comprises a target scene label; the target scene label is a label which is generated by the video server and matched with the target scene information; the target scene information is scene information corresponding to currently played target video data in the video server; the target playing file is an index list file of the target video data;
playing the target video data according to the target playing file;
and when the target scene label is read in the target playing file, executing the business operation associated with the target scene label.
In one embodiment, when executing the business operation associated with the target scene tag, the processor 2001 specifically performs the following steps:
analyzing the target scene label to obtain a target scene number;
extracting target service behavior data corresponding to the target scene number from a tag behavior mapping table, and executing service operation according to the target service behavior data; the label behavior mapping table comprises at least one scene number and service behavior data corresponding to each scene number.
In an embodiment, when the processor 2001 executes the service operation according to the target service behavior data, the following steps are specifically executed:
and playing auxiliary video data or displaying static multimedia materials according to the target service behavior data.
In an embodiment, when the processor 2001 executes the acquiring of the target play file and the target video data according to the play request, the following steps are specifically executed:
acquiring the target playing file according to the playing request;
and sending a video data request to the video server according to the identification information of the target video data in the target playing file, so that the video server sends the target video data requested by the video data request to the mobile terminal.
The electronic device 2000 described in the embodiment of the present invention may perform the description of the video data processing method in the embodiment corresponding to fig. 3 to fig. 4b, and may also perform the description of the video data processing apparatus 2 in the embodiment corresponding to fig. 6, which is not described herein again. In addition, the beneficial effects of the same method are not described in detail.
Further, it should be noted that an embodiment of the present invention also provides a computer storage medium, where the computer storage medium stores the aforementioned computer program executed by the video data processing apparatus 2, and the computer program includes program instructions; when the processor executes the program instructions, the description of the video data processing method in the foregoing embodiments of fig. 3 to fig. 4b can be performed, and details are not repeated here. In addition, the beneficial effects of the same method are not described in detail. For technical details not disclosed in the embodiments of the computer storage medium to which the present invention relates, reference is made to the description of the method embodiments of the present invention.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure describes only preferred embodiments of the present invention, which of course cannot be used to limit the scope of rights of the present invention; therefore, equivalent variations made according to the claims of the present invention still fall within the scope covered by the present invention.

Claims (13)

1. A method of processing video data, comprising:
the video server acquires target scene information corresponding to currently played target video data; the acquiring, by the video server, of the target scene information corresponding to the currently played target video data comprises the following steps: the video server extracts a video frame image of the currently played target video data; the video server identifies the service type of the video frame image and generates target scene information corresponding to the service type for the target video data; the service types comprise an intermission type, a match-end type, and a celebrity-departure type;
the video server generates a target scene label corresponding to the target scene information and adds the target scene label to a target playing file; the target playing file is an index list file of the target video data;
the video server sends the target video data and the target playing file to a mobile terminal, so that the mobile terminal plays the target video data according to the target playing file and executes a business operation associated with the target scene label when the target scene label is read in the target playing file; the executing of the business operation associated with the target scene label comprises: the mobile terminal analyzes the target scene label to obtain a target scene number; in a label behavior mapping table, the mobile terminal extracts target service behavior data corresponding to the target scene number and executes a service operation according to the target service behavior data; the label behavior mapping table comprises at least one scene number and service behavior data corresponding to each scene number; the target service behavior data comprises playing auxiliary video data and displaying static multimedia materials, and both the auxiliary video data and the static multimedia materials are associated with the target scene information.
2. The method of claim 1, further comprising:
the video server receives tag configuration information; the label configuration information comprises a scene number and service behavior data corresponding to the scene number;
the video server constructs a label behavior mapping table according to the mapping relation between each scene number and the service behavior data corresponding to each scene number, and sends the label behavior mapping table to the mobile terminal, so that the mobile terminal queries the mapping relation between each scene number and the service behavior data corresponding to each scene number in the label behavior mapping table.
3. The method according to claim 1 or 2, wherein the generating, by the video server, of a target scene label corresponding to the target scene information and the adding of the target scene label to a target playing file comprise:
the video server acquires a target scene number corresponding to the target scene information;
the video server generates the target scene label according to the target scene number;
and the video server adds the target scene label to the target playing file.
4. The method according to claim 1 or 2, wherein the video server sends the target video data and the target playing file to a mobile terminal, so that the mobile terminal plays the target video data according to the target playing file, and executes a business operation associated with the target scene tag when the target scene tag is read in the target playing file, including:
the video server sends the target playing file to the mobile terminal, so that the mobile terminal sends a video data request to the video server according to the target playing file;
and the video server sends the target video data requested by the video data request to the mobile terminal so that the mobile terminal plays the target video data according to the target playing file and executes business operation associated with the target scene label when the target scene label is read in the target playing file.
5. The method according to claim 1 or 2, wherein the video server obtains target scene information corresponding to currently played target video data, and comprises:
the video server receives a trigger display instruction and displays a plurality of pieces of scene information according to the trigger display instruction;
and the video server receives a click determination instruction, and takes scene information corresponding to the click determination instruction as target scene information corresponding to the target video data.
6. The method according to claim 1 or 2, wherein the video server obtains target scene information corresponding to currently played target video data, and comprises:
the video server detects a timestamp of the currently played target video data;
and if the timestamp is a target timestamp, the video server generates target scene information corresponding to the target timestamp for the target video data.
7. A method of processing video data, comprising:
the mobile terminal acquires a playing request and acquires a target playing file and target video data according to the playing request; the target playing file comprises a target scene label; the target scene label is a label which is generated by the video server and matched with the target scene information; the target scene information is scene information corresponding to currently played target video data in the video server; the target playing file is an index list file of the target video data; the target scene information corresponds to the service type; the service type is determined by the video server extracting the video frame image of the currently played target video data and identifying the video frame image;
the mobile terminal plays the target video data according to the target playing file;
when the mobile terminal reads the target scene label in the target playing file, the mobile terminal executes a business operation associated with the target scene label; the executing of the business operation associated with the target scene label comprises: the mobile terminal analyzes the target scene label to obtain a target scene number; in a label behavior mapping table, the mobile terminal extracts target service behavior data corresponding to the target scene number and executes a service operation according to the target service behavior data; the label behavior mapping table comprises at least one scene number and service behavior data corresponding to each scene number; the target service behavior data comprises playing auxiliary video data and displaying static multimedia materials, and both the auxiliary video data and the static multimedia materials are associated with the target scene information.
8. A method of processing video data, comprising:
the video server acquires target scene information corresponding to currently played target video data; the acquiring, by the video server, of the target scene information corresponding to the currently played target video data comprises the following steps: the video server extracts a video frame image of the currently played target video data; the video server identifies the service type of the video frame image and generates target scene information corresponding to the service type for the target video data; the service types comprise an intermission type, a match-end type, and a celebrity-departure type;
the video server generates a target scene label corresponding to the target scene information and adds the target scene label to a target playing file; the target playing file is an index list file of the target video data;
the video server sends the target video data and the target playing file to a mobile terminal;
the mobile terminal plays the target video data according to the target playing file;
when the mobile terminal reads the target scene label in the target playing file, the mobile terminal executes a business operation associated with the target scene label; the executing of the business operation associated with the target scene label comprises: the mobile terminal analyzes the target scene label to obtain a target scene number; in a label behavior mapping table, the mobile terminal extracts target service behavior data corresponding to the target scene number and executes a service operation according to the target service behavior data; the label behavior mapping table comprises at least one scene number and service behavior data corresponding to each scene number; the target service behavior data comprises playing auxiliary video data and displaying static multimedia materials, and both the auxiliary video data and the static multimedia materials are associated with the target scene information.
9. A video data processing apparatus applied to a video server, comprising:
the receiving module is used for acquiring target scene information corresponding to currently played target video data; the acquiring, by the video server, of the target scene information corresponding to the currently played target video data comprises the following steps: the video server extracts a video frame image of the currently played target video data; the video server identifies the service type of the video frame image and generates target scene information corresponding to the service type for the target video data; the service types comprise an intermission type, a match-end type, and a celebrity-departure type;
the adding module is used for generating a target scene label corresponding to the target scene information and adding the target scene label to a target playing file; the target playing file is an index list file of the target video data;
a sending module, configured to send the target video data and the target playing file to a mobile terminal, so that the mobile terminal plays the target video data according to the target playing file and executes a service operation associated with the target scene tag when the target scene tag is read in the target playing file; the executing of the business operation associated with the target scene tag comprises: the mobile terminal analyzes the target scene tag to obtain a target scene number; in a label behavior mapping table, the mobile terminal extracts target service behavior data corresponding to the target scene number and executes a service operation according to the target service behavior data; the label behavior mapping table comprises at least one scene number and service behavior data corresponding to each scene number; the target service behavior data comprises playing auxiliary video data and displaying static multimedia materials, and both the auxiliary video data and the static multimedia materials are associated with the target scene information.
10. A video data processing device applied to a mobile terminal is characterized by comprising:
the request acquisition module is used for acquiring a playing request;
the video acquisition module is used for acquiring a target playing file and target video data according to the playing request; the target playing file comprises a target scene label; the target scene label is a label which is generated by the video server and matched with the target scene information; the target scene information is scene information corresponding to currently played target video data in the video server; the target playing file is an index list file of the target video data; the target scene information corresponds to the service type; the service type is determined by the video server extracting the video frame image of the currently played target video data and identifying the video frame image;
the playing module is used for playing the target video data according to the target playing file;
the business operation module is used for executing a business operation associated with the target scene label when the target scene label is read in the target playing file; the executing of the business operation associated with the target scene label comprises: the mobile terminal analyzes the target scene label to obtain a target scene number; in a label behavior mapping table, the mobile terminal extracts target service behavior data corresponding to the target scene number and executes a service operation according to the target service behavior data; the label behavior mapping table comprises at least one scene number and service behavior data corresponding to each scene number; the target service behavior data comprises playing auxiliary video data and displaying static multimedia materials, and both the auxiliary video data and the static multimedia materials are associated with the target scene information.
11. A video data processing system, characterized in that the video data processing system comprises: a mobile terminal and a video server;
the video server is used for acquiring target scene information corresponding to currently played target video data; the acquiring, by the video server, of the target scene information corresponding to the currently played target video data comprises the following steps: the video server extracts a video frame image of the currently played target video data; the video server identifies the service type of the video frame image and generates target scene information corresponding to the service type for the target video data; the service types comprise an intermission type, a match-end type, and a celebrity-departure type;
the video server is further used for generating a target scene label corresponding to the target scene information and adding the target scene label to a target playing file; the target playing file is an index list file of the target video data;
the video server is also used for sending the target video data and the target playing file to the mobile terminal;
the mobile terminal is used for playing the target video data according to the target playing file;
the mobile terminal is further configured to execute a service operation associated with the target scene label when the target scene label is read from the target playing file; wherein executing the service operation associated with the target scene label comprises: the mobile terminal parsing the target scene label to obtain a target scene number; the mobile terminal extracting, from a label behavior mapping table, target service behavior data corresponding to the target scene number, and executing the service operation according to the target service behavior data; the label behavior mapping table comprises at least one scene number and the service behavior data corresponding to each scene number; the target service behavior data comprises playing auxiliary video data and displaying static multimedia material, and both the auxiliary video data and the static multimedia material are associated with the target scene information.
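The server-side step of adding a scene label to the target playing file, and the terminal-side step of reading it back, can be sketched as follows. The claim states only that the playing file is an index list file of the target video data; modeling it as an HLS-style (M3U8) list of lines and using a hypothetical `#EXT-X-SCENE:<number>` label syntax are assumptions for illustration.

```python
def add_scene_label(play_file_lines, segment_uri, scene_number):
    """Server side: insert a scene label immediately before the named
    segment entry of an index-list playing file."""
    out = []
    for line in play_file_lines:
        if line == segment_uri:
            out.append(f"#EXT-X-SCENE:{scene_number}")
        out.append(line)
    return out

def read_scene_labels(play_file_lines):
    """Terminal side: collect the scene numbers embedded in the playing file."""
    return [int(line.split(":", 1)[1])
            for line in play_file_lines
            if line.startswith("#EXT-X-SCENE:")]
```

During playback, the terminal would act on each collected scene number via its label behavior mapping table, as recited in the claim.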
12. An electronic device, comprising: a processor and a memory;
the processor is coupled to the memory, wherein the memory is configured to store program code, and the processor is configured to invoke the program code to perform the method according to any one of claims 1-8.
13. A computer storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method according to any one of claims 1-8.
CN201810443699.7A 2018-05-10 2018-05-10 Video data processing method and device and related equipment Active CN110475121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810443699.7A CN110475121B (en) 2018-05-10 2018-05-10 Video data processing method and device and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810443699.7A CN110475121B (en) 2018-05-10 2018-05-10 Video data processing method and device and related equipment

Publications (2)

Publication Number Publication Date
CN110475121A CN110475121A (en) 2019-11-19
CN110475121B true CN110475121B (en) 2022-02-11

Family

ID=68503898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810443699.7A Active CN110475121B (en) 2018-05-10 2018-05-10 Video data processing method and device and related equipment

Country Status (1)

Country Link
CN (1) CN110475121B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113992942A * 2019-12-05 2022-01-28 Tencent Technology (Shenzhen) Co Ltd Video splicing method and device and computer storage medium
CN114125509B * 2021-11-30 2024-01-19 Shenzhen TCL New Technology Co Ltd Video playing method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103596044A * 2013-11-22 2014-02-19 Shenzhen Skyworth Digital Technology Co Ltd Method, device and system for processing and displaying video file
CN103905846A * 2012-12-25 2014-07-02 China Telecom Corp Ltd Content pushing method based on IPTV and server
CN104065979A * 2013-03-22 2014-09-24 Beijing Zhongchuan Shuguang Technology Co Ltd Method for dynamically displaying information related to video content and system thereof
CN104486680A * 2014-12-19 2015-04-01 Allwinner Technology Co Ltd Video-based advertisement pushing method and system
CN105956170A * 2016-05-20 2016-09-21 Whaley Technology Co Ltd Real-time scene information embedding method, and scene realization system and method
CN106649855A * 2016-12-30 2017-05-10 Zhongguang Hotspot Cloud Technology Co Ltd Video label adding method and adding system
CN106792188A * 2016-12-06 2017-05-31 Tencent Digital (Tianjin) Co Ltd Data processing method, device and system for a live page
CN107392666A * 2017-07-24 2017-11-24 Beijing QIYI Century Science and Technology Co Ltd Advertisement data processing method, device and advertisement placement method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8180891B1 (en) * 2008-11-26 2012-05-15 Free Stream Media Corp. Discovery, access control, and communication with networked services from within a security sandbox
CN105744355A * 2016-01-28 2016-07-06 Yulong Computer Telecommunication Scientific (Shenzhen) Co Ltd Video pre-reminding processing method, video pre-reminding processing device, and terminal



Similar Documents

Publication Publication Date Title
CN108391179B (en) Live broadcast data processing method and device, server, terminal and storage medium
CN105635764B (en) Method and device for playing push information in live video
US20180152767A1 (en) Providing related objects during playback of video data
KR101999389B1 (en) Identification and presentation of internet-accessible content associated with currently playing television programs
CN108419138B (en) Live broadcast interaction device and method and computer readable storage medium
US8737813B2 (en) Automatic content recognition system and method for providing supplementary content
CN105230035B (en) The processing of the social media of time shift multimedia content for selection
US20160316233A1 (en) System and method for inserting, delivering and tracking advertisements in a media program
WO2015090095A1 (en) Information pushing method, device, and system
WO2021114845A1 (en) Interactive service processing method, system and device, and storage medium
US20140297745A1 (en) Processing of social media for selected time-shifted multimedia content
KR20140037144A (en) A mechanism for embedding metadata in video and broadcast television
CN106791988B (en) Multimedia data carousel method and terminal
CN103997691A (en) Method and system for video interaction
CN104361075A (en) Image website system and realizing method
US20130070152A1 (en) Sampled digital content based syncronization of supplementary digital content
CN103026681A (en) Video-based method, server and system for realizing value-added service
US11778286B2 (en) Systems and methods for summarizing missed portions of storylines
CN112771881B (en) Bullet screen processing method and device, electronic equipment and computer readable storage medium
US9489421B2 (en) Transmission apparatus, information processing method, program, reception apparatus, and application-coordinated system
CN110475121B (en) Video data processing method and device and related equipment
CN104918071A (en) Video playing method, device and terminal equipment
CN103023923B (en) A kind of method transmitting information and device
WO2018000743A1 (en) Cross-device group chatting method and electronic device
US20150026744A1 (en) Display system, display apparatus, display method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant