CN111093101A - Media file delivery method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN111093101A
Authority
CN
China
Prior art keywords
target
video
scene type
media file
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811236644.5A
Other languages
Chinese (zh)
Other versions
CN111093101B (en)
Inventor
李星
李智
周彬
徐澜
徐伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201811236644.5A priority Critical patent/CN111093101B/en
Publication of CN111093101A publication Critical patent/CN111093101A/en
Application granted granted Critical
Publication of CN111093101B publication Critical patent/CN111093101B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
            • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
              • H04N 21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
                • H04N 21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
                  • H04N 21/2668 Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
            • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
              • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                • H04N 21/439 Processing of audio elementary streams
                • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
                  • H04N 21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
              • H04N 21/47 End-user applications
                • H04N 21/485 End-user interface for client configuration
                • H04N 21/488 Data services, e.g. news ticker
                  • H04N 21/4884 Data services for displaying subtitles
            • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
              • H04N 21/81 Monomedia components thereof
                • H04N 21/812 Monomedia components involving advertisement data
              • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
                • H04N 21/835 Generation of protective data, e.g. certificates
                  • H04N 21/8352 Generation of protective data involving content or source identification data, e.g. Unique Material Identifier [UMID]

Abstract

The invention discloses a media file delivery method and apparatus, a storage medium, and an electronic device. The method includes: acquiring target index information corresponding to a target media file to be delivered; acquiring a target scene type matched with the target index information and determining a target video corresponding to the target scene type, where the target scene type indicates the scene of target content played by the target video at a corresponding target moment; and delivering the target media file to a target playing position corresponding to the target moment in the target video, where the starting playing moment of the target media file at the target playing position is the target moment. The invention solves the technical problem of low media file delivery efficiency in the related art.

Description

Media file delivery method and device, storage medium and electronic device
Technical Field
The invention relates to the field of video, and in particular to a media file delivery method and apparatus, a storage medium, and an electronic device.
Background
At present, delivering a media file requires a dedicated person to watch the video according to the client's requirements and manually select specific time points in the video, i.e., manual "dotting"; the media file is then delivered into the video based on those time points, so that it is played when the video reaches the target time point.
Although this approach can deliver media files, it depends on manpower and can dot only part of the head content of a video website's library. It is not only error-prone, but manpower limits also mean that many video moments cannot be selected for delivery, so media files cannot cover more video content. Delivery efficiency is therefore low, and users' delivery requirements are not well met.
In view of the above problem of low media file delivery efficiency, no effective solution has yet been proposed.
Disclosure of Invention
Embodiments of the invention provide a media file delivery method and apparatus, a storage medium, and an electronic device, to at least solve the technical problem of low media file delivery efficiency in the related art.
According to one aspect of the embodiments of the invention, a media file delivery method is provided. The method includes: acquiring target index information corresponding to a target media file to be delivered; acquiring a target scene type matched with the target index information and determining a target video corresponding to the target scene type, where the target scene type indicates the scene of target content played by the target video at a corresponding target moment; and delivering the target media file to a target playing position corresponding to the target moment in the target video, where the starting playing moment of the target media file at the target playing position is the target moment.
According to another aspect of the embodiments of the invention, a media file delivery apparatus is also provided. The apparatus includes: an acquisition unit configured to acquire target index information corresponding to a target media file to be delivered; an execution unit configured to acquire a target scene type matched with the target index information and determine a target video corresponding to the target scene type, where the target scene type indicates the scene of target content played by the target video at a corresponding target moment; and a delivery unit configured to deliver the target media file to a target playing position corresponding to the target moment in the target video, where the starting playing moment of the target media file at the target playing position is the target moment.
According to another aspect of the embodiments of the invention, a storage medium is also provided. The storage medium stores a computer program, where the computer program is configured to execute the media file delivery method of the embodiments of the invention when run.
According to another aspect of the embodiments of the invention, an electronic device is also provided. The electronic device includes a memory and a processor, where the memory stores a computer program and the processor is configured to execute the media file delivery method of the embodiments of the invention by means of the computer program.
In the embodiments of the invention, target index information corresponding to a target media file to be delivered is acquired; a target scene type matched with the target index information is acquired, and a target video corresponding to the target scene type is determined, where the target scene type indicates the scene of target content played by the target video at a corresponding target moment; and the target media file is delivered to a target playing position corresponding to the target moment in the target video, where the starting playing moment of the target media file at the target playing position is the target moment. The target scene type is determined from the target index information of the target media file to be delivered, and the target media file is automatically delivered to the target playing position in the target video corresponding to that scene type, so that the target media file is played when the target video reaches the target moment. The target media file can thus be delivered into more videos, achieving the goal of delivery while avoiding manual selection of delivery positions in the target video. This achieves the technical effect of improving media file delivery efficiency and solves the technical problem of low media file delivery efficiency in the related art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a hardware environment for a media file delivery method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a media file delivery method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a media file delivery system according to an embodiment of the present invention;
FIG. 4 is a diagram of selecting a scene tag according to an embodiment of the present invention;
FIG. 5 is a diagram of viewing a scene type package according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a video information display interface according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a media file delivery apparatus according to an embodiment of the present invention; and
FIG. 8 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of the embodiments of the present invention, a media file delivery method is provided; optionally, as one implementation, the method may be applied, but is not limited, to the environment shown in FIG. 1. FIG. 1 is a schematic diagram of a hardware environment for a media file delivery method according to an embodiment of the present invention. As shown in FIG. 1, a user 102 may interact with a user device 104, which may include, but is not limited to, a memory 106 and a processor 108. The user device 104 acquires target index information corresponding to a target media file to be delivered; the memory 106 may store this target index information, and in step S102 the user device 104 may, through the processor 108, send the target index information to the server 112 over the network 110. The server 112 includes a database 114 and a delivery engine 116. After obtaining the target index information, the server 112 acquires from the database 114 a target scene type matched with the target index information and determines a target video corresponding to the target scene type, where the target scene type indicates the scene of target content played by the target video at a corresponding target moment. The server then inputs the target media file, the target scene type, the target moment, and the target video into the delivery engine 116, which delivers the target media file to the target playing position corresponding to the target moment in the target video. In step S104, the server 112 returns the delivery result to the user device 104 over the network 110, and the user device 104 may display it.
It should be noted that in the related art, delivering a media file requires a dedicated person to watch the video according to the client's requirements and to deliver the media file by manually selecting specific time points in the video. This approach is not only error-prone but, because of limited manpower, also cannot select many video moments for delivery, so delivery efficiency is low. In this embodiment, the target scene type is determined from the target index information of the target media file to be delivered, and the target media file is automatically delivered to the target playing position in the target video corresponding to that scene type, so the target media file can be delivered into more videos, achieving the technical effect of improved delivery efficiency.
Alternatively, the media file delivery method may be applied, but is not limited, to terminals capable of data computation, such as mobile phones, tablet computers, notebook computers, and PCs. The network may include, but is not limited to, a wireless network or a wired network, where the wireless network includes WIFI and other networks enabling wireless communication, and the wired network may include, but is not limited to, wide area networks, metropolitan area networks, and local area networks. The server may include, but is not limited to, any hardware device capable of performing computation.
FIG. 2 is a flowchart of a media file delivery method according to an embodiment of the present invention. As shown in FIG. 2, the method may include the following steps:
Step S202: target index information corresponding to a target media file to be delivered is acquired.
In the technical solution provided in step S202, the target media file is the content that a user desires to deliver into a video, and may be a media file whose delivery is strongly related to a video time point. For example, the target media file is a dotting-type media file, i.e., a video media file whose playback depends on a specific video time point or time range, such as an inserted (mid-roll) media file or a corner-mark media file. An inserted media file is a tile-type media file inserted at a specific moment during video playback; a corner-mark media file appears over the video during a specific time period and is displayed together with the video. A time point in this embodiment is a particular moment in the video.
Target index information corresponding to the target media file to be delivered is acquired. The target index information is information related to the target media file that is used to search for it, and should summarize the content the user wants to find as far as possible; for example, the target index information may be a target keyword of the target media file.
Optionally, in this embodiment, the target index information corresponding to the target media file is input by the user on a target interface of a Tag Management Platform (TMP), so as to achieve the purpose of obtaining the target index information.
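The keyword lookup on the tag management platform described above can be pictured as a simple dictionary query. The following is an illustrative sketch only, not the platform's actual API; `SCENE_TAG_INDEX` and `query_scene_tags` are invented names, and the tag lists are example data:

```python
# Hypothetical sketch of the TMP keyword lookup. The platform is modeled as
# an in-memory index from delivery keywords to candidate scene tags.
SCENE_TAG_INDEX = {
    "automobile": ["truck", "motorcycle", "automobile", "racing car",
                   "sports car", "car"],
    "food": ["cooking", "restaurant", "street food"],
}

def query_scene_tags(keyword):
    """Return the candidate (selectable) scene tags for a delivery keyword."""
    return SCENE_TAG_INDEX.get(keyword, [])

candidates = query_scene_tags("automobile")
```

The user would then pick a subset of `candidates` (e.g. "automobile" and "car") as the target scene types.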
Step S204, a target scene type matched with the target index information is obtained, and a target video corresponding to the target scene type is determined.
In the technical solution provided in step S204, after the target index information corresponding to the target media file to be delivered is obtained, a target scene type matched with the target index information is acquired and a target video corresponding to the target scene type is determined, where the target scene type indicates the scene of target content played by the target video at a corresponding target moment.
In this embodiment, the target video is the video into which the target media file is to be delivered, i.e., the delivery target of the target media file. It may be a variety show, a movie, a TV drama, and so on; target videos may come in large numbers from the video library of a video website, and there may be multiple target videos. The target scene type of this embodiment may be a scene tag: tag information associated with a target moment in the target video that identifies the scene of the target content played at that moment. It may be abstract information, such as "vehicle" or "person", or specific information, such as "star A" or "vehicle model B", and may be derived from the video content or the subtitle content of the target video. The target moment may be a scene point of the target video, and different target videos may have different target moments. A scene point is a playing moment of target content that has media file delivery value, i.e., an abstract moment point. For example, suppose the target moment is the 10th minute of playback, at which the target video shows a running automobile; that content has delivery value for automobile-related target media files, so an automobile-related target media file can be delivered at the 10th-minute playing position of the target video, which benefits the promotion of automobile products.
After the target index information corresponding to the target media file to be delivered is acquired, a target scene type can be matched to it; the target scene type can serve as the user's description of the target video. Optionally, the target index information matches multiple scene types, and all of them are determined as target scene types. For example, if the target index information corresponding to the target media file is "automobile" and it matches the two scene types "automobile" and "car", both scene types are determined as target scene types.
Optionally, in this embodiment, the target index information matches multiple scene types and the target scene type is selected from among them; that is, the multiple scene types are selectable scene tags. For example, if the target index information corresponding to the target media file is "automobile", it may match scene types such as "truck", "motorcycle", "automobile", "racing car", "sports car", and "car". These are the selectable scene tags, and the user may select, say, "automobile" and "car" as the target scene types.
In this embodiment, the playing time of the target video's content may be decomposed into multiple target moments associated with different scene types, the different scene types including the target scene type; for example, the timeline may be decomposed into multiple scene points associated with different scene tags. Thus, once the target scene type matching the target index information is obtained, the target video corresponding to that scene type can be determined, with the target scene type indicating the scene of the target content played by the target video at the target moment.
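The decomposition just described can be modeled as a per-video list of scene points with a reverse lookup by scene tag. This is an assumed data model for illustration, not taken verbatim from the patent; video names, moments, and tags are example data:

```python
# Illustrative data model: each video's timeline is decomposed into
# (moment_in_seconds, scene_tag) pairs, i.e. its scene points.
from collections import defaultdict

SCENE_POINTS = {
    "video_A": [(600, "automobile"), (1320, "racing car")],
    "video_B": [(95, "car"), (600, "automobile")],
}

def videos_for_scene_type(scene_tag):
    """Reverse lookup: map each video to the moments tagged with scene_tag."""
    hits = defaultdict(list)
    for video, points in SCENE_POINTS.items():
        for moment, tag in points:
            if tag == scene_tag:
                hits[video].append(moment)
    return dict(hits)
```

Determining "the target video corresponding to the target scene type" is then just `videos_for_scene_type(tag)`, which also yields the target moments at which delivery is possible.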
Step S206: the target media file is delivered to the target playing position corresponding to the target moment in the target video.
In the technical solution provided in step S206, the target scene type indicates the scene of the target content played by the target video at the corresponding target moment. After the target video corresponding to the target scene type is determined, the target media file can be delivered, for example through a media file delivery engine, to the target playing position corresponding to the target moment in the target video, so that the target media file is played when the target video reaches the target moment. The target video may consist of multiple video segments; the target playing position may be the video segment whose time range includes the target moment, and may also correspond to the playing progress of the target video. Optionally, the target video may include multiple target playing positions corresponding to multiple target moments, which may be represented by a recognition result of the form "target moment/target scene type". For example, "13:29/car" indicates that when the target video has played for 13 minutes and 29 seconds, the scene of the target content that begins playing is a car scene.
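The "target moment/target scene type" string format is taken from the example in the text ("13:29/car"); how such a recognition result would be parsed is an assumption, sketched below:

```python
def parse_recognition_result(result):
    """Parse a 'mm:ss/scene_tag' recognition result into (seconds, tag).

    The string format follows the '13:29/car' example in the text; the
    parsing logic itself is illustrative, not specified by the patent.
    """
    timestamp, tag = result.split("/", 1)
    minutes, seconds = timestamp.strip().split(":")
    return int(minutes) * 60 + int(seconds), tag.strip()

moment, tag = parse_recognition_result("13:29/car")
```

Here `moment` is the playback offset in seconds (13 * 60 + 29 = 809) at which the car scene begins, which is the candidate target playing position for a car-related media file.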
In this embodiment, the obtained target scene type matched with the target index information and the target video may be packaged into a scene type package, which corresponds to the target video and to the target moments in the target video associated with the target scene type. Optionally, information such as the content package ID, the content package name, the scale of target-video episodes covered, the keyword of the target media file, the operation type, the creation time, and the creator of the scene type package is displayed on a target interface of the tag management platform; for example, the content package name may be "test package-car", the keyword "car", and the operation type video image recognition, without limitation here. The embodiment transmits the scene type package to the order system, where order-based delivery of the target media file is performed.
For example, in this embodiment the tag management system generates a scene content package from the customer's requirements. The customer enters keywords for the target media file, and the tag management system queries and returns all similar scene types for the customer to choose from. For example, when the customer inputs the keyword "automobile", the selectable scene types displayed by the tag management system include "car", "truck", and so on. After the customer selects one or more scene types as required, the tag management system packages the selected scene types and transmits the package to the delivery-side order system.
The delivery order system reads the generated scene type package, associates it with the corresponding customer's order, and delivers the order through targeting and similar means. The delivery engine reads the order's scene type package, matches it against the current scene point-to-scene type associations returned by the online tag service, and chooses to deliver at the corresponding scene points, thereby achieving the goal of delivering the target media file by scene type.
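The flow above (scene type package attached to an order, then matched by the delivery engine against scene point associations) can be sketched end to end. All function and variable names here are invented for illustration; the patent does not specify this interface:

```python
# Hedged sketch of the described flow: the order carries a scene type
# package, and the delivery engine matches it against the scene points
# returned by the online tag service.

def build_scene_package(package_id, selected_tags):
    """Package the scene types the customer selected (e.g. via the TMP)."""
    return {"id": package_id, "tags": set(selected_tags)}

def deliver(order_package, scene_points):
    """Return (video, moment) placements whose scene tag is in the package.

    scene_points: mapping video -> list of (moment_seconds, scene_tag),
    i.e. the current scene point / scene type associations.
    """
    placements = []
    for video, points in scene_points.items():
        for moment, tag in points:
            if tag in order_package["tags"]:
                placements.append((video, moment))
    return sorted(placements)

package = build_scene_package("test-package-car", ["automobile", "car"])
points = {"video_A": [(600, "automobile"), (1320, "racing car")],
          "video_B": [(95, "car")]}
placements = deliver(package, points)
```

Each returned placement is a target playing position: the media file would start playing at that moment in that video.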
Optionally, the target media file of this embodiment may be delivered to the target playing positions corresponding to the target moments in multiple target videos; that is, the target media file covers all target playing positions whose target moments are associated with the target scene type. This avoids the problems of the manual dotting scheme, which depends on manpower, is error-prone, and cannot select many video time points; the user's delivery requirements are thus well met and the delivery efficiency of the target media file is improved.
It should be noted that the target media file in the embodiments of the invention may be a video file, an audio file, a picture file, or a text file, or any combination of these, for example a combination of a text file and a picture file, or of a video file and a text file. The specific product form may be, for example, a video advertisement, a native advertisement, or a search advertisement.
Through steps S202 to S206, target index information corresponding to a target media file to be delivered is acquired; a target scene type matched with the target index information is acquired and a target video corresponding to the target scene type is determined, where the target scene type indicates the scene of target content played by the target video at a corresponding target moment; and the target media file is delivered to a target playing position corresponding to the target moment in the target video, where the starting playing moment of the target media file at the target playing position is the target moment. Because the target scene type is determined from the target index information and the target media file is automatically delivered to the corresponding target playing position, the target media file is played when the target video reaches the target moment and can be delivered into more videos, achieving the goal of delivery. Manual selection of delivery positions in the target video is avoided, which achieves the technical effect of improving media file delivery efficiency and solves the technical problem of low media file delivery efficiency in the related art.
As an alternative implementation, determining the target video corresponding to the target scene type in step S204 includes: determining at least a first target scene type and a second target scene type from the target scene types, and determining a first target video corresponding to the first target scene type and a second target video corresponding to the second target scene type, where the first target scene type indicates the scene of first target content played by the first target video at a first target moment and the second target scene type indicates the scene of second target content played by the second target video at a second target moment. Delivering the target media file to the target playing position corresponding to the target moment in the target video in step S206 then includes: delivering the target media file to a first target playing position corresponding to the first target moment in the first target video and to a second target playing position corresponding to the second target moment in the second target video.
In this embodiment, the target scene type matched with the target index information may include a plurality of scene types, that is, the target scene type may include a plurality of selectable scene tags, and the user may select, from the plurality of scene types, the scene types satisfying the media file delivery requirement.
In this embodiment, when a target video corresponding to a target scene type is determined, at least a first target scene type and a second target scene type are determined from the target scene types, where the first target scene type is used to indicate a scene of first target content played by the first target video at a first target moment, and the second target scene type is used to indicate a scene of second target content played by the second target video at a second target moment. For example, the first target scene type is "car", indicating that the first target content played by the first target video at the first target moment is a car scene, and the second target scene type is "truck", indicating that the second target content played by the second target video at the second target moment is a truck scene.
After at least a first target scene type and a second target scene type are determined, a first target video corresponding to the first target scene type and a second target video corresponding to the second target scene type are determined, and the first target video and the second target video may be the same or different.
In the embodiment, after a first target video corresponding to a first target scene type and a second target video corresponding to a second target scene type are determined, a target media file is delivered to a first target playing position corresponding to a first target moment in the first target video and a second target playing position corresponding to a second target moment in the second target video.
In this embodiment, when the first target video and the second target video are the same video, the first target time and the second target time may be the same or different. Under the condition that the first target time is the same as the second target time, the first target playing position is the same as the second target playing position, and the first target content is the same as the second target content; under the condition that the first target time is different from the second target time, the first target content is different from the second target content, and the first target playing position is different from the second target playing position; when the first target video and the second target video are different, the first target time may be the same or different, but the first target content and the second target content are different, and the first target playing position and the second target playing position are different.
For example, if the a target moment of the first target video corresponds to the "car" scene type and the b target moment corresponds to the "truck" scene type, then a target media file delivered according to both scene types is played when the first target video is played to the a target moment, and is also played when the first target video is played to the b target moment. Optionally, the a target moment of the first target video corresponds to the "car" scene type and the b target moment of the second target video corresponds to the "truck" scene type; the delivered target media file is played when the first target video is played to the a target moment, and is also played when the second target video is played to the b target moment. Optionally, when delivery requires a moment to carry both tags at once, and the c target moment of the first target video corresponds to both the "car" tag and the "truck" tag while the a target moment corresponds only to the "car" tag and the b target moment only to the "truck" tag, the target media file is played only when the first target video is played to the c target moment, and is not played at the a target moment or the b target moment.
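The two delivery modes in the example above can be sketched as follows; the moment-to-tag mapping is hypothetical:

```python
# OR mode plays the media file at every moment tagged with any selected
# scene type; AND mode only at moments carrying all selected types at once.
moments = {  # target moment (seconds) -> set of scene tags at that moment
    100: {"car"},
    200: {"truck"},
    300: {"car", "truck"},
}

def match_moments(selected: set, require_all: bool) -> list:
    if require_all:
        return sorted(t for t, tags in moments.items() if selected <= tags)
    return sorted(t for t, tags in moments.items() if selected & tags)

or_hits = match_moments({"car", "truck"}, require_all=False)   # [100, 200, 300]
and_hits = match_moments({"car", "truck"}, require_all=True)   # [300]
```

Under OR semantics the media file covers all three moments; under AND semantics only the moment carrying both tags qualifies.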
According to this embodiment, at least a first target scene type and a second target scene type are determined from the target scene types, at least a first target video corresponding to the first target scene type and a second target video corresponding to the second target scene type are determined, and the target media file is delivered at least to a first target playing position corresponding to a first target moment in the first target video and a second target playing position corresponding to a second target moment in the second target video. In this way, the target media file is automatically delivered to the target playing positions corresponding to the target moments associated with the target scene type in more videos, even achieving full-site coverage, and the delivery efficiency of the target media file is improved.
It should be noted that the above-mentioned first and second target scene types, first and second target videos, first and second target moments, first and second target playing positions, and first and second target contents are only examples of the embodiment of the present invention; the embodiment may further include more target scene types, target videos, target moments, target playing positions, and target contents, which are not enumerated here one by one.
As an optional implementation manner, after determining the target video corresponding to the target scene type in step S204, the method further includes: acquiring a first target operation instruction on a target interface, wherein the first target operation instruction is used for indicating and displaying information of a target video, and the information of the target video comprises playing information of the target video at least one target playing position; and responding to the first target operation instruction, and displaying the information of the target video.
In this embodiment, after the target video corresponding to the target scene type is determined, the target scene type and the target video are packed to obtain a packing result, that is, a scene type package. The embodiment may check the packing result, for example, check the specific target videos to which the target media file is delivered and the time points in those target videos associated with the target scene type, that is, check at which time points of which videos the target media file is specifically delivered.
This embodiment may acquire the first target operation instruction on the target interface, where the target interface may be an operation interface in the tag management platform, for example, an operation interface in a drama list. The first target operation instruction is used to instruct display of information of the target video, and may be an operation instruction generated by a click operation, by a double-click operation, by a drag operation, by staying at a target position for a preset time, and the like, which is not limited herein. The information of the target video in this embodiment includes playing information of the target video at the at least one target playing position, and the playing information may be the target scene type and target time corresponding to the target video at the at least one target playing position. Optionally, the information of the target video further includes the name of the drama/album in which the target video is located, the copyright holder of the target video, the year in which the target video was published, the broadcasting date of the target video, and the like, which is not limited herein.
After the first target operation instruction is acquired on the target interface, the information of the target video is displayed in response to the first target operation instruction. For example, when the user clicks a drama list on the tag management platform, a viewing interface of the drama list is displayed, showing the information of the target videos determined by entering the keyword "car" and selecting the matched scene types "car" and "truck".
As an optional implementation, when displaying the information of the target video, the method further includes: determining a first target playing position from at least one target playing position; acquiring a second target operation instruction based on the first target playing position, wherein the second target operation instruction is used for indicating and displaying the content played by the target video at the first target playing position, and the played content comprises the target content; and responding to the second target operation instruction, and displaying the content played by the target video at the first target playing position in the first target window.
In this embodiment, the target video may include a plurality of target playing positions, each corresponding to a "target time/target scene type" entry, for example "13:29/car", "19:56/car", "26:37/car", etc. A first target playing position is determined from the at least one target playing position, for example, the position corresponding to "13:29/car" is determined as the first target playing position, and a second target operation instruction is acquired based on the first target playing position, where the second target operation instruction is used to instruct display of the content played by the target video at the first target playing position. The instruction may be generated by a click operation or a double-click operation on the "target time/target scene type" information corresponding to the first target playing position, by staying at the target position for a preset time, and the like, which is not limited herein. The content played by the target video at the first target playing position includes the target content played by the target video at the corresponding target moment.
After the second target operation instruction is obtained based on the first target playing position, the content played by the target video at the first target playing position can be displayed on the first target window of the tag management platform in response to the second target operation instruction, so that the user can quickly know the target video to be launched by the target media file. Optionally, the first target window may be an interface for video playing, and may be moved, enlarged, reduced, closed, and the like, so as to facilitate user operation.
As an optional implementation, when displaying the information of the target video, the method further includes: acquiring a third target operation instruction based on a second target window, wherein the second target window is used for displaying the playing information of the target video at least one target playing position, and the third target operation instruction is used for indicating to update the at least one target playing position; and responding to the third target operation instruction, updating the at least one target playing position, and displaying the updated playing information on the at least one target playing position in the second target window.
In this embodiment, the playing information at the at least one target playing position may be displayed through a second target window in the tag management platform, where the second target window is used to display the playing information of the target video at the at least one target playing position, and the playing information includes an identification result of identifying the target video by a target scene type, for example, the identification result is "target time/target scene type". At least one target playing position can be determined by the "target time/target scene type" displayed in the second target window, for example, by the "target time/target scene type", the scene type of the video clip that starts playing at the target time at the target playing position in the target video can be determined as the target scene type.
The second target window of this embodiment may display the playing information at a target number of target playing positions. When the user wants to view the playing information at more target playing positions, a third target operation instruction may be input based on the second target window; the third target operation instruction may be a sliding operation instruction generated by sliding an operation bar on the second target window, and the target number of target playing positions corresponding to the second target window may be updated through the sliding operation instruction, where the target number is the number of target playing positions whose playing information is allowed to be displayed in the second target window. Optionally, the target number of target playing positions is updated as a whole, that is, even if only one target playing position differs from the previously displayed positions, the whole set of target playing positions is refreshed, so that the user can view the playing information at target playing positions not previously displayed in the second target window.
For example, the target number is 3, and the second target window displays the playing information at 3 target playing positions, namely "13:29/car", "19:56/car" and "26:37/car". A sliding operation instruction is acquired based on the second target window, and the 3 target playing positions are updated in response to the sliding operation instruction; the updated playing information at the 3 target playing positions is then displayed in the second target window, for example "19:56/car", "26:37/car" and "37:57/car". Here, "37:57/car" is the playing information for a target playing position that was not previously displayed in the second target window, so that the playing information displayed in the second target window has changed as a whole relative to the playing information displayed before the update. Optionally, the size of the second target window in this embodiment may be adjusted, so that the playing information at the at least one target playing position displayed in the second target window is more comprehensive.
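The whole-window update in the example above can be sketched as a sliding slice over the full list of playing-information entries; the entry list is taken from the example:

```python
# The second target window shows `target_count` entries at once; a slide
# replaces the displayed slice as a whole, revealing entries not shown before.
entries = ["13:29/car", "19:56/car", "26:37/car", "37:57/car"]

def window_slice(offset: int, target_count: int = 3) -> list:
    """Return the playing information displayed at the given slide offset."""
    return entries[offset:offset + target_count]

before = window_slice(0)  # ["13:29/car", "19:56/car", "26:37/car"]
after = window_slice(1)   # ["19:56/car", "26:37/car", "37:57/car"]
```

Sliding by one position refreshes all displayed entries as a unit, matching the whole-window update described above.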
According to the embodiment, the at least one target playing position is updated through the third target operation instruction, and the updated playing information on the at least one target playing position is displayed in the second target window, so that the information display efficiency is improved, a user can quickly check the range of the target video to which the target media file is put, and the user can conveniently click a specific target moment to check the corresponding video content.
As an optional implementation manner, the playing information of the target video at the target playing position includes: and the target scene type and the target time of the target video correspond to at least one target playing position.
In this embodiment, the playing information of the target video at the target playing position includes the specific target video and a target scene type and target time point in that target video, and the target playing position of the target video can be determined by the target scene type and the target time. The playing information of the target video at the target playing position may be, for example, "13:29/car", "19:56/car", "26:37/car", etc.; by operating on a specific target time, the target content played at that time can be viewed.
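A "target time/target scene type" entry can be resolved back into a playing position as sketched below; the "MM:SS/type" string format is an assumption based on the examples above:

```python
# Parse a "target time/target scene type" playing-info entry into
# (target moment in seconds, scene type).
def parse_playing_info(info: str) -> tuple:
    time_part, scene_type = info.split("/")
    minutes, seconds = time_part.split(":")
    return int(minutes) * 60 + int(seconds), scene_type

moment, scene_type = parse_playing_info("13:29/car")  # (809, "car")
```

The recovered moment is the starting playing moment of the media file at that target playing position.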
As an optional implementation manner, in step S206, after the target media file is delivered to the target playing position corresponding to the target moment in the target video, the method further includes: when the target video is played to the target moment, playing the corner mark media file or the mid-roll media file, wherein the target media file comprises the corner mark media file and the mid-roll media file.
In this embodiment, the target media file is a dotting type media file, which is strongly related to a time point and depends on a specific video time point or time range during playing. It may include a corner mark media file and a mid-roll media file, where the corner mark media file appears during a specific time period of the target video and is displayed together with the content of the target video, and the mid-roll media file is a patch type media file inserted at a specific moment of the target video during playing.
After the corner mark media file is delivered to the target playing position corresponding to the target moment in the target video, the corner mark media file begins to be played when the target video is played to the target moment and is displayed together with the content of the target video at the target moment; after the corner mark media file finishes playing, only the video content of the target video is displayed. Optionally, after the mid-roll media file is delivered to a target playing position corresponding to the target moment in the target video, only the mid-roll media file is played when the target video is played to the target moment, and the target video is paused; after the mid-roll media file finishes playing, the video content of the target video after the target moment resumes playing.
As an optional implementation manner, before obtaining the target scene type matching the target index information, the method further includes at least one of: determining a target scene type from video content of a target video; determining a target scene type from subtitle content of a target video; from the audio content of the target video, a target scene type is determined.
In this embodiment, the target scene type is a scene tag, namely tag information associated with a target moment in the target video. The playing time of the video content of the target video may be decomposed in advance into a plurality of target moments associated with different scene types. The different scene types may be determined from the video content of the target video, for example, if the video content shows a running car, the scene type is determined to be "car"; the different scene types may also be determined from the subtitle content of the target video, for example, if the subtitle content of the target video is "a truck loaded with heavy goods drives slowly from a distance", "truck" is a keyword, and the scene type is determined to be "truck"; the different scene types may also be determined from the audio content of the target video, for example, if the audio content of the target video is "a beautiful car appears on a winding mountain road", the keyword "car" is recognized by speech recognition, and the scene type is determined to be "car". In this way, the purpose of determining the target scene type from the video content, the subtitle content, and the audio content of the target video is achieved.
It should be noted that the target scene type in this embodiment may be abstract information, such as a vehicle, a human being, or the like, or may also be concrete information, such as a star a, a vehicle type B, or the like, as long as the target scene type of the scene capable of indicating the target content played by the target video at the corresponding target time is within the scope of the embodiment of the present invention, which is not illustrated herein.
As an optional implementation, determining the target scene type from the video content of the target video includes: identifying the picture of the video content of the target video to obtain a target scene type, wherein the target scene type is associated with the playing time of the picture of the video content; determining the target scene type from the subtitle content of the target video comprises: determining keywords in subtitle content of a target video as a target scene type, wherein the target scene type is associated with the playing time of the subtitle content; from the audio content of the target video, determining the target scene type includes: converting the audio content of the target video into a target text; and determining the keywords of the target text as a target scene type, wherein the target scene type is associated with the playing time of the audio content.
In this embodiment, when determining a target scene type from the video content of the target video, the picture of the video content of the target video may be identified to obtain the target scene type, and a portrait picture in the video content of the target video may be identified by using a face recognition technology, for example, star A is identified, where the target scene type is associated with the playing time of the picture of the video content. When determining a target scene type from the subtitle content of the target video, this embodiment determines the keywords in the subtitle content of the target video as the target scene type, for example, by a subtitle matching technique or a text mining correlation algorithm, where the target scene type is associated with the playing time of the subtitle content. When determining the target scene type from the audio content of the target video, this embodiment may convert the audio content of the target video into a target text and then determine the keywords of the target text as the target scene type, where the target scene type is associated with the playing time of the audio content.
The embodiment can determine the association relationship between the target scene type and the target time by the method, and can decompose the time for playing the video content of the target video into a plurality of target times associated with different scene types in advance, so that the target media file is released to the target time associated with the scene type to be released for playing.
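The subtitle route above can be sketched as follows, with a simple keyword dictionary standing in for the subtitle matching or text mining algorithm (which the embodiment does not specify in detail):

```python
# Associate scene types with playing times from timed subtitle lines.
# KEYWORDS is a hypothetical stand-in for the text mining algorithm.
KEYWORDS = {"car": "car", "truck": "truck"}

def tag_subtitles(subtitles: list) -> list:
    """subtitles: list of (playing time in seconds, subtitle text).
    Returns (target moment, target scene type) associations."""
    associations = []
    for moment, text in subtitles:
        for keyword, scene_type in KEYWORDS.items():
            if keyword in text.lower():
                associations.append((moment, scene_type))
    return associations

tags = tag_subtitles([(809, "A beautiful car appears on a mountain road")])
```

Each association ties a scene type to the playing time of the subtitle it was mined from, which is exactly the "target moment-target scene type" relationship used for delivery.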
As an optional implementation manner, when determining a target video corresponding to a target scene type, the method further includes: in the target database, the target time of the target video is identified through the target scene type and the target association relationship, wherein the target scene type and the target association relationship are stored in the target database in advance, and the target association relationship is used for indicating that the target scene type is associated with the target time of the target video.
In this embodiment, the target database may be a data center, and a target scene type and a target association relationship are pre-stored, where the association relationship is a "target time-target scene type" association relationship, that is, a "scene point-scene type" association relationship, and in the target database, a target time of the target video is identified by the target scene type and the "target time-target scene type", and then the target media file is delivered to a target playing position in the target video corresponding to the target time.
Optionally, in this embodiment, the data stored in the target database includes data needed by all tag management systems, such as the scene type dictionary, the "target time-scene type" association relationship, and time point (scene point) information, and the data may be stored in various storage media, such as Redis, SQL, and ES (Elasticsearch), according to the characteristics of the data, so as to be called downstream.
Optionally, the scene point information and the scene point-scene type association relationship in this embodiment are simultaneously written, through a message queue (queue storage), into the Redis storage and the video content point association ES. The online tag service can read the scene point-scene type association relationship from the Redis storage in real time and transmit it to the delivery engine of the media file for use.
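This data path can be sketched as follows; a dictionary stands in for the Redis storage and a plain deque for the message queue, since the embodiment names the components but not their interfaces:

```python
from collections import deque

# Message queue (queue storage) carrying scene point-scene type associations.
queue = deque()
# Key-value store standing in for the Redis storage read by the tag service.
kv_store = {}

def publish(video_id: str, moment: int, scene_type: str) -> None:
    """Mining module side: push an association onto the message queue."""
    queue.append({"video": video_id, "moment": moment, "type": scene_type})

def consume() -> None:
    """Queue processing side: drain the queue into the key-value store."""
    while queue:
        msg = queue.popleft()
        key = "scene:{}:{}".format(msg["video"], msg["moment"])
        kv_store[key] = msg["type"]

publish("video_A", 809, "car")
consume()
```

In the real system the consumer would also write the same associations into the ES index for content point queries; the single store here keeps the sketch minimal.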
The target database of this embodiment also maintains a query interface for the downstream tag management platform to perform content package packing operations and inventory query operations. A content package packing operation involves a series of complex data queries, such as the media asset tag association relationship, tag distance, and predicted playing volume of media assets. The query interface uniformly receives query requests and returns the integrated data to the downstream system according to the query purpose, thereby reducing the complexity of downstream systems. Similarly, the inventory query system acquires data such as media asset tag associations from the query interface and sends back requests for writing information such as the predicted playing volume of media assets to the storage medium. The inventory query in this embodiment refers to querying the total amount of exposure that an order for a media file may book under specified conditions.
As an optional implementation, the method further comprises: and updating the target scene type and the target association relation stored in the target database regularly.
In this embodiment, the target scene type and the target association relationship stored in the target database are periodically updated; for example, the scene point information in the Redis storage is periodically refreshed and stored in the time point SQL. This ensures the accuracy and timeliness of acquiring the target scene type matched with the target index information and of determining the target video corresponding to the target scene type, thereby facilitating downstream calls.
Traditional delivery of dotting type media files depends on manpower: dedicated staff must watch the videos and select time points meeting the user's requirements for delivering the media files. The media file delivery method of this embodiment not only avoids the complexity of manual operation, but also greatly extends the delivery range of the media files; regardless of the scene type, full-site coverage of a video website can be achieved with simple operations, which is of great help to media file operators and video websites.
The technical solution of the present invention will be described below with reference to preferred embodiments. Specifically, the media file is taken as an example for advertisement.
In this embodiment, dotting type advertisements in video advertising, such as mid-roll advertisements and corner mark advertisements, are strongly correlated with time points of the video. Through technologies such as scene mining, this embodiment automatically associates the advertiser's requirements with the scene points of relevant video content, so that advertisement delivery no longer requires manually selecting delivery scenes, and the advertiser can simply choose to cover its advertisement creatives on the target video scene points.
In this embodiment, the Tag Management Platform (TMP) decomposes video content into scene points with tags, which are the time of content with advertisement delivery value in the video, through scene mining, face recognition, subtitle matching, and the like. When an advertiser wants to launch mass advertisements, the advertiser only needs to clearly describe the target launching objects of the advertiser, such as a star A, a vehicle type B and the like, and launches the advertisements according to the corresponding scene labels, so that the advertisements needing to be launched can be covered on all scene spots containing the scene labels.
Fig. 3 is a schematic diagram of a media file delivery system according to an embodiment of the present invention. As shown in fig. 3, the media file delivery system includes: the system comprises a mining module, a data center and a TMP management and delivery system.
The mining module of this embodiment is described below.
The mining module consists of a media asset information service interface and data mining algorithms. The media asset information service is used for providing the media asset information needed by the mining algorithms, including content such as video files and video subtitles. The media asset information service receives the input of full and incremental media assets, integrates, updates, and processes them, and provides a calling interface (API) for the mining algorithms, returning the received and stored media asset information to the mining algorithms in a fixed format. The mining algorithms perform data mining, including subtitle mining and scene mining of video media assets. The media asset information service acts as a middle layer that isolates the mining algorithms from the online media asset environment, so that the mining algorithms can quickly obtain the required media asset information.
It should be noted that, depending on the media asset input, the mining algorithm may be a set of different algorithms and is not limited to a single one. For example, for the input of a video file, a scene mining algorithm can be used, such as recognizing the video picture by face recognition to obtain the scene type corresponding to a time point; for the input of subtitles, a subtitle mining algorithm can be used to obtain the scene type corresponding to a time point. For all mining algorithms, the output is consistent: associations between time points and scene types, which are output to a message queue in the data pipeline.
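The unified output contract described above can be sketched as follows; the face-recognition result shape on the input side is an assumption:

```python
# All mining algorithms share one output shape: a list of
# {time point, scene type} associations for the message queue.
def to_associations(faces: list) -> list:
    """faces: hypothetical recognizer output, (time point, person name).
    Adapts it to the common association format."""
    return [{"time": t, "scene_type": name} for t, name in faces]

messages = to_associations([(120, "star A"), (305, "star A")])
```

A subtitle-mining adapter would produce the same record shape from (time, keyword) pairs, which is what lets downstream queue processing stay algorithm-agnostic.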
The data center of this embodiment is described below.
In this embodiment, the data stored in the data center includes data needed by all TMP systems, such as the scene type dictionary, the scene point-scene type association relationship, and time point (scene point) information, and is stored in various storage media, such as Redis, SQL, and ES files, according to the characteristics of the data, so as to be called downstream.
The scene point information and the scene point-scene type association relationship generated by the mining module are written into video content point association (ES) in a redis storage and query interface through queue processing through a message queue (queue storage) in a data pipeline. Scene point information in the redis storage is periodically refreshed and stored in a time point SQL in the query interface, so that downstream calling is facilitated; meanwhile, the online tag service is used for reading the scene point-scene type incidence relation from the redis storage in real time and transmitting the scene point-scene type incidence relation to an advertisement putting engine of the video advertisement for use.
The embodiment maintains a query interface in the data center module, and the query interface is used for the downstream TMP management platform to perform the content package packaging operation and for the query system to perform the volume operation call. When the content package packing operation is carried out, a series of data queries such as media asset label association relation, label distance, expected playing amount of media assets and the like are involved. The query interface is used for uniformly receiving the query request, performing label and inventory query, and returning the integrated data to the downstream system according to the query purpose, thereby reducing the complexity of the downstream system. Similarly, the query system acquires scene point-scene type association and other data from the query interface and sends back a request for writing back information such as the expected playing amount of the media assets and the like to the storage medium.
The TMP management and delivery system of this embodiment is described below.
The TMP management platform of this embodiment is used to generate a scene content package from customer demand. The customer enters a scene type, and the TMP platform queries and returns all similar scene types for the customer to select from. For example, when the keyword "car" is entered for an advertisement to be delivered, the selectable scene types displayed by the TMP system include cars, trucks, and the like. After the customer selects one or more scene types according to requirements, the TMP platform packages them into a scene type package, which is transmitted to the order delivery system.
The order placing system of this embodiment reads the generated scene type package, associates it with the corresponding customer's order, and places the order by means of targeting and the like. The delivery engine reads the scene type package of the order, matches it against the scene type associations of all current scene points returned by the online tag service, and selects the corresponding scene points at which to deliver the order, so that the advertiser's goal of delivering advertisements by scene type is finally achieved.
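The matching step of the delivery engine can be sketched as follows, assuming the associations are keyed by (video, time point); all names and data are illustrative, not the patented implementation.

```python
def match_order(scene_type_package, scene_point_associations):
    """Return the (video_id, time_point) scene points whose associated
    scene types intersect the order's scene type package."""
    wanted = set(scene_type_package)
    return [point for point, types in scene_point_associations.items()
            if wanted & set(types)]

# Illustrative data; in the described system these associations are
# returned in real time by the online tag service.
associations = {
    ("video_A", "13:29"): ["car"],
    ("video_B", "05:10"): ["truck"],
    ("video_C", "21:07"): ["kitchen"],
}
placements = match_order(["car", "truck"], associations)
```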
It should be noted that the tag mining algorithm of the mining module in this embodiment may be replaced; any algorithm may be used as long as it achieves the purpose of mining the relationship between scene points and scene types. The storage media of the data center may also be replaced. For example, the redis storage structure may be replaced by other non-relational storage structures such as mongoDB, and the ES file may be replaced by an HBase file, which is not limited herein. Both the order system and the delivery engine in the delivery system can be replaced, and any advertisement delivery system and delivery engine can be compatible with this scheme, which is not illustrated one by one herein.
The application environment of this embodiment of the present invention may refer to the application environment in the above embodiments, which is not described herein again. This embodiment of the invention provides an optional specific application for implementing the media file delivery method.
Fig. 4 is a schematic diagram of selecting a scene tag according to an embodiment of the present invention. As shown in fig. 4, in the video advertisement content tag management platform, under video image recognition, the target keyword "car" to be advertised is first input. A series of selectable scene tags, such as "tire", "truck", "motorcycle", "car", "racing car", and "sports car", are matched for "car" in the image recognition tag library. The user selects the scene tags to be targeted, such as "car", which are displayed on the selected interface; after selection, pressing the save key packages them into a scene type package. The video advertisement content tag management platform can be logged into through an account by a user who delivers advertisements, and further includes a content package generation option, a my content package option, and a browser advertisement search option: the content package generation option generates a content package according to scene types, the my content package option is used to view the generated content packages, and the browser advertisement search option is used to search advertisements through a browser.
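The tag lookup described above can be approximated as in this sketch; substring containment is a deliberately naive stand-in for the platform's actual matching, which also returns semantically similar tags such as "truck" for "car".

```python
def similar_tags(keyword, tag_library):
    """Return tags in the image recognition tag library related to the
    keyword. Substring containment is a naive illustrative rule only."""
    return [tag for tag in tag_library if keyword in tag or tag in keyword]

# Tag library taken from the example in the text.
tag_library = ["tire", "truck", "motorcycle", "car", "racing car", "sports car"]
options = similar_tags("car", tag_library)
```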
FIG. 5 is a diagram of viewing a scene type package according to an embodiment of the invention. As shown in fig. 5, in the video advertisement content tag management platform, after packaging the selected scene types, the user may view the scene type package through "my content package". Selecting a generated scene type package displays its content package ID (371), content package name (test package-car), drama videos (8,177), keyword (car), type (video image recognition), creation time (2018-6-29 12:06:03), creator, and the like.
Optionally, in this embodiment, after selecting the generated scene type package, the drama list can be operated; that is, a viewing interface of the drama list can be entered to view information on the specific videos to which the advertisement is delivered.
Fig. 6 is a schematic diagram of a video information display interface according to an embodiment of the invention. As shown in fig. 6, the information of a specific video corresponding to the scene type package can be checked, such as the name of the drama/album, the copyright holder, the year, the broadcast date, and the content introduction, together with the identified time points in the video at which the advertisement can be delivered and their scene types, i.e., the identification results, indicating at which time points the order corresponding to the scene type package will be delivered and the corresponding scene type at each time point. For example, for video name A: "13:29/car", "19:56/car", "26:37/car", "37:57/car", "37:24/car", "43:41/car", and so on.
The videos to which the advertisement of this embodiment can be delivered include multiple videos, for example, video name A, video name B, video name C, video name D, video name E, and the like, and each video has identification results of the form "time point/scene type", so that the advertisement can be delivered to the playing position corresponding to a time point associated with a scene type in the video. When a video has many "time point/scene type" identification results, more of them can be displayed by operating the slider. Specific time points can also be operated: for example, selecting "27:21/car" in the identification results for video name B shows the video content of video name B at the moment it plays to 27:21; the content is played through a target play window, which can be moved, enlarged, reduced, closed, and so on.
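The "time point/scene type" identification results can be parsed into playing positions as in this sketch, assuming the format is "mm:ss/type" as in the examples above.

```python
def parse_recognition(results):
    """Parse 'time point/scene type' identification results such as
    '27:21/car' into (seconds, scene_type) pairs, which locate the
    playing position for delivery or preview."""
    parsed = []
    for item in results:
        time_point, scene_type = item.split("/")
        minutes, seconds = time_point.split(":")
        parsed.append((int(minutes) * 60 + int(seconds), scene_type))
    return parsed
```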
In this embodiment, after the scene type package is generated, it is transmitted to the order system, and the order system delivers the advertisement to the playing position of the time point corresponding to the scene type in the video according to the scene type package.
In this embodiment, through technologies such as scene mining, the advertiser's requirement is automatically associated with the scene points of related video content, so that advertisement delivery is realized without manually selecting the scene types to deliver to; the advertiser can simply select the advertisement creative to cover the target video scene points, which greatly expands the delivery range of this type of advertisement. Therefore, no matter what scene type the advertisement targets, advertisement delivery with site-wide coverage of the video website can be realized through simple operations, which is of great help to both advertisers and video websites.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
According to another aspect of the embodiments of the present invention, a media file delivery apparatus is also provided for implementing the above method for delivering a media file. Fig. 7 is a schematic diagram of a media file delivery apparatus according to an embodiment of the present invention. As shown in fig. 7, the media file delivery apparatus 700 may include: an acquisition unit 10, an execution unit 20, and a delivering unit 30.
The obtaining unit 10 is configured to obtain target index information corresponding to a target media file to be launched.
And the execution unit 20 is configured to acquire a target scene type matching the target index information, and determine a target video corresponding to the target scene type, where the target scene type is used to indicate a scene of target content played by the target video at a corresponding target time.
The delivering unit 30 is configured to deliver the target media file to a target playing position in the target video corresponding to the target time, where an initial playing time of the target media file at the target playing position is the target time.
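The cooperation of the three units might be sketched as follows; the class, the scene index layout, and the matching callback are assumptions made for illustration, not the patented implementation.

```python
class MediaFileDeliverer:
    """Illustrative sketch of units 10/20/30 working together."""

    def __init__(self, scene_index):
        # scene_index: target scene type -> list of (video_id, target_time)
        self.scene_index = scene_index

    def deliver(self, target_media_file, target_index_info, match_scene_type):
        # Execution unit 20: match the index info to a target scene type
        # and find the videos/moments for that type.
        scene_type = match_scene_type(target_index_info)
        # Delivering unit 30: place the file at each target playing position,
        # with the target time as its initial playing time.
        return [(video_id, target_time, target_media_file)
                for video_id, target_time in self.scene_index.get(scene_type, [])]

deliverer = MediaFileDeliverer({"car": [("video_A", 809), ("video_B", 1641)]})
plan = deliverer.deliver("ad.mp4", "automobile ad", lambda info: "car")
```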
Optionally, the execution unit 20 of this embodiment includes: a determining module, configured to determine at least a first target scene type and a second target scene type from the target scene types, and determine a first target video corresponding to the first target scene type and a second target video corresponding to the second target scene type, where the first target scene type is used to indicate a scene of first target content played by the first target video at a first target moment, and the second target scene type is used to indicate a scene of second target content played by the second target video at a second target moment. The delivering unit 30 includes: a delivering module, configured to deliver the target media file to a first target playing position corresponding to the first target moment in the first target video and a second target playing position corresponding to the second target moment in the second target video.
Optionally, the apparatus further comprises: the device comprises a first acquisition unit and a first display unit. The first obtaining unit is used for obtaining a first target operation instruction on a target interface after determining a target video corresponding to a target scene type, wherein the first target operation instruction is used for indicating and displaying information of the target video, and the information of the target video comprises playing information of the target video at least one target playing position; and the first display unit is used for responding to the first target operation instruction and displaying the information of the target video.
Optionally, the apparatus further comprises: the device comprises a first determining unit, a second acquiring unit and a second displaying unit. The first determining unit is used for determining a first target playing position from at least one target playing position when the information of the target video is displayed; a second obtaining unit, configured to obtain a second target operation instruction based on the first target playing position, where the second target operation instruction is used to instruct to display content played by the target video at the first target playing position, where the played content includes the target content; and the second display unit is used for responding to the second target operation instruction and displaying the content played by the target video at the first target playing position in the first target window.
Optionally, the apparatus further comprises: the device comprises a third acquisition unit and a first execution unit. The third obtaining unit is configured to obtain a third target operation instruction based on a second target window when information of the target video is displayed, where the second target window is used to display playing information of the target video at least one target playing position, and the third target operation instruction is used to instruct to update the at least one target playing position; and the first execution unit is used for responding to the third target operation instruction, updating at least one target playing position and displaying the updated playing information on the at least one target playing position in the second target window.
Optionally, in this embodiment, the playing information of the target video at the target playing position includes: and the target scene type and the target time of the target video correspond to at least one target playing position.
Optionally, the apparatus further comprises: and the playing unit is used for playing the corner mark media file or the inserting media file when the target video is played to the target moment after the target media file is launched to the target playing position corresponding to the target moment in the target video, wherein the target media file comprises the corner mark media file and the inserting media file.
Optionally, the apparatus further comprises at least one of: a second determining unit configured to determine a target scene type from video content of the target video before acquiring the target scene type matching the target index information; a third determining unit, configured to determine a target scene type from subtitle content of the target video; and the fourth determining unit is used for determining the target scene type from the audio content of the target video.
Optionally, the second determination unit includes: the first determining module is used for identifying the picture of the video content of the target video to obtain a target scene type, wherein the target scene type is associated with the playing time of the picture of the video content; the third determination unit includes: the second determining module is used for determining keywords in the subtitle content of the target video as a target scene type, wherein the target scene type is associated with the playing time of the subtitle content; the fourth determination unit includes: the third determining module is used for converting the audio content of the target video into a target text; and determining the keywords of the target text as a target scene type, wherein the target scene type is associated with the playing time of the audio content.
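As a sketch of the subtitle branch, assuming subtitles arrive as (playing time, text) pairs and the keyword set is known; the function name and data shapes are illustrative assumptions.

```python
def scene_types_from_subtitles(subtitles, keywords):
    """Determine target scene types from subtitle content: any keyword
    found in a subtitle line becomes a scene type associated with that
    line's playing time, mirroring the second determining module."""
    found = []
    for play_time, text in subtitles:
        for keyword in sorted(keywords):  # sort for a deterministic order
            if keyword in text:
                found.append((keyword, play_time))
    return found

subtitles = [(809, "he bought a new car"), (1200, "walking home at dusk")]
scene_points = scene_types_from_subtitles(subtitles, {"car", "kitchen"})
```

The video picture branch and the audio branch follow the same pattern, with frame recognition or speech-to-text producing the text/labels to be associated with playing times.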
Optionally, the apparatus further comprises: when determining a target video corresponding to the target scene type, the identifying unit is configured to identify a target time of the target video in a target database through the target scene type and a target association relationship, where the target scene type and the target association relationship are stored in the target database in advance, and the target association relationship is used to indicate that the target scene type is associated with the target time of the target video.
Optionally, the apparatus further comprises: and the updating unit is used for regularly updating the target scene type and the target association relation stored in the target database.
It should be noted that the obtaining unit 10 in this embodiment may be configured to execute step S202 in this embodiment, the executing unit 20 in this embodiment may be configured to execute step S204 in this embodiment, and the delivering unit 30 in this embodiment may be configured to execute step S206 in this embodiment.
In this embodiment, the obtaining unit 10 obtains target index information corresponding to a target media file to be delivered; the execution unit 20 obtains a target scene type matching the target index information and determines a target video corresponding to the target scene type, where the target scene type is used to indicate a scene of target content played by the target video at a corresponding target moment; and the delivering unit 30 delivers the target media file to a target playing position corresponding to the target moment in the target video, where the initial playing moment of the target media file at the target playing position is the target moment. The target scene type is determined from the target index information of the target media file to be delivered, and the target media file is automatically delivered to the target playing position in the target video corresponding to the target scene type, so that the target media file is played when the target video plays to the target moment. The target media file can thus be delivered to more videos, avoiding the need to manually select the delivery position of the media file in the target video, thereby achieving the technical effect of improving the efficiency of delivering media files and solving the technical problem of low media file delivery efficiency in the related art.
It should be noted here that the modules described above implement the same examples and application scenarios as the corresponding steps, but are not limited to the disclosure of the above embodiments. It should also be noted that the modules described above, as part of the apparatus, may be operated in a hardware environment as shown in fig. 1, and may be implemented by software or by hardware, where the hardware environment includes a network environment.
According to another aspect of the embodiment of the present invention, an electronic device for implementing the method for delivering a media file is also provided. Fig. 8 is a block diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 8, the electronic device comprises a memory 802 in which a computer program is stored and a processor 804 arranged to perform the steps of any of the above-described method embodiments by means of the computer program.
Optionally, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring target index information corresponding to a target media file to be launched;
s2, acquiring a target scene type matched with the target index information, and determining a target video corresponding to the target scene type, wherein the target scene type is used for indicating a scene of target content played by the target video at a corresponding target moment;
and S3, the target media file is launched to a target playing position corresponding to the target time in the target video, wherein the starting playing time of the target media file at the target playing position is the target time.
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 8 is only an illustration, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 8 does not limit the structure of the electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in fig. 8, or have a different configuration from that shown in fig. 8.
The memory 802 may be used to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for delivering a media file in the embodiments of the present invention, and the processor 804 executes various functional applications and data processing by running the software programs and modules stored in the memory 802, so as to implement the above-mentioned method for delivering a media file. The memory 802 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 802 can further include memory located remotely from the processor 804, which can be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 802 may be specifically, but not limited to, used for storing information such as extracted facial features and pose features. As an example, as shown in fig. 8, the memory 802 may include, but is not limited to, the obtaining unit 10, the executing unit 20, and the delivering unit 30 of the delivering device 700 of the media file. In addition, the media file delivery device may further include, but is not limited to, other module units in the media file delivery device, which is not described in this example again.
The transmission device 806 is used for receiving or transmitting data via a network. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 806 includes a network adapter (NIC) that can be connected to a router via a network cable and other network devices to communicate with the internet or a local area network. In one example, the transmission device 806 is a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In addition, the electronic device further includes: a display 808, configured to display an execution state of the object code in the first objective function; and a connection bus 810 for connecting the respective module components in the electronic device.
According to a further aspect of embodiments of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring target index information corresponding to a target media file to be launched;
s2, acquiring a target scene type matched with the target index information, and determining a target video corresponding to the target scene type, wherein the target scene type is used for indicating a scene of target content played by the target video at a corresponding target moment;
and S3, the target media file is launched to a target playing position corresponding to the target time in the target video, wherein the starting playing time of the target media file at the target playing position is the target time.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, at least determining a first target scene type and a second target scene type from the target scene types, and determining a first target video corresponding to the first target scene type and a second target video corresponding to the second target scene type, wherein the first target scene type is used for indicating a scene of first target content played by the first target video at a first target moment, and the second target scene type is used for indicating a scene of second target content played by the second target video at a second target moment;
and S2, the target media file is delivered to a first target playing position corresponding to the first target moment in the first target video and a second target playing position corresponding to the second target moment in the second target video.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, after determining a target video corresponding to the target scene type, acquiring a first target operation instruction on a target interface, wherein the first target operation instruction is used for indicating information of the target video to be displayed, and the information of the target video comprises playing information of the target video at least one target playing position;
s2, in response to the first target operation instruction, displaying information of the target video.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, when the information of the target video is displayed, a first target playing position is determined from at least one target playing position;
s2, acquiring a second target operation instruction based on the first target playing position, wherein the second target operation instruction is used for indicating and displaying the content played by the target video at the first target playing position, and the played content comprises the target content;
s3, in response to the second target operation instruction, displaying the content played by the target video at the first target playing position in the first target window.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, when information of the target video is displayed, a third target operation instruction is obtained based on a second target window, wherein the second target window is used for displaying the playing information of the target video at least one target playing position, and the third target operation instruction is used for indicating that at least one target playing position is updated;
s2, responding to the third target operation command, updating the at least one target playing position, and displaying the updated playing information on the at least one target playing position in the second target window.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
and the target scene type and the target time of the target video correspond to at least one target playing position.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
after the target media file is launched to a target playing position corresponding to the target moment in the target video, when the target video is played to the target moment, the corner mark media file or the middle inserting media file is played, wherein the target media file comprises the corner mark media file and the middle inserting media file.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for executing at least one of the following steps before acquiring the target scene type matching the target index information:
s1, determining the type of the target scene from the video content of the target video;
s2, determining the type of the target scene from the subtitle content of the target video;
s3, determining the target scene type from the audio content of the target video.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing at least one of the following steps:
s1, identifying the picture of the video content of the target video to obtain a target scene type, wherein the target scene type is associated with the playing time of the picture of the video content;
s2, determining keywords in the subtitle content of the target video as a target scene type, wherein the target scene type is associated with the playing time of the subtitle content;
s3, converting the audio content of the target video into a target text; and determining the keywords of the target text as a target scene type, wherein the target scene type is associated with the playing time of the audio content.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
when a target video corresponding to a target scene type is determined, identifying a target moment of the target video through the target scene type and a target association relation in a target database, wherein the target scene type and the target association relation are stored in the target database in advance, and the target association relation is used for indicating that the target scene type is associated with the target moment of the target video.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
and updating the target scene type and the target association relation stored in the target database regularly.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and these modifications and refinements should also fall within the protection scope of the present invention.

Claims (15)

1. A method for delivering a media file, comprising:
acquiring target index information corresponding to a target media file to be delivered;
acquiring a target scene type matched with the target index information, and determining a target video corresponding to the target scene type, wherein the target scene type is used for indicating a scene of target content played by the target video at a corresponding target moment;
and delivering the target media file to a target playing position corresponding to the target moment in the target video, wherein the starting playing moment of the target media file at the target playing position is the target moment.
2. The method of claim 1,
determining the target video corresponding to the target scene type comprises: at least a first target scene type and a second target scene type are determined from the target scene types, and a first target video corresponding to the first target scene type and a second target video corresponding to the second target scene type are determined, wherein the first target scene type is used for indicating a scene of first target content played by the first target video at a first target moment, and the second target scene type is used for indicating a scene of second target content played by the second target video at a second target moment;
the delivering the target media file to the target playing position corresponding to the target moment in the target video comprises: delivering the target media file to a first target playing position corresponding to the first target moment in the first target video and a second target playing position corresponding to the second target moment in the second target video.
3. The method of claim 1, wherein after determining the target video corresponding to the target scene type, the method further comprises:
acquiring a first target operation instruction on a target interface, wherein the first target operation instruction is used for indicating to display information of the target video, and the information of the target video comprises playing information of the target video at at least one target playing position;
and in response to the first target operation instruction, displaying the information of the target video.
4. The method of claim 3, wherein when displaying the information of the target video, the method further comprises:
determining a first target playing position from at least one target playing position;
acquiring a second target operation instruction based on the first target playing position, wherein the second target operation instruction is used for indicating to display the content played by the target video at the first target playing position, and the played content comprises the target content;
and in response to the second target operation instruction, displaying, in a first target window, the content played by the target video at the first target playing position.
5. The method of claim 3, wherein when displaying the information of the target video, the method further comprises:
acquiring a third target operation instruction based on a second target window, wherein the second target window is used for displaying the playing information of the target video at at least one target playing position, and the third target operation instruction is used for indicating to update the at least one target playing position;
and in response to the third target operation instruction, updating the at least one target playing position, and displaying the updated playing information at the at least one target playing position in the second target window.
6. The method of claim 3, wherein the playing information of the target video at the target playing position comprises: the target scene type and the target moment of the target video corresponding to the at least one target playing position.
7. The method according to any one of claims 1 to 6, wherein after the target media file is delivered to the target playing position corresponding to the target moment in the target video, the method further comprises:
and when the target video is played to the target moment, playing a corner-mark media file or a mid-roll media file, wherein the target media file comprises the corner-mark media file and the mid-roll media file.
8. The method according to any one of claims 1 to 6, wherein before obtaining the target scene type of the target video matching the target index information, the method further comprises at least one of:
determining the target scene type from the video content of the target video;
determining the target scene type from the subtitle content of the target video;
determining the target scene type from the audio content of the target video.
9. The method of claim 8,
determining the target scene type from the video content of the target video comprises: performing picture recognition on the video content of the target video to obtain the target scene type, wherein the target scene type is associated with the playing time of the picture of the video content;
determining the target scene type from the subtitle content of the target video comprises: determining a keyword in the subtitle content of the target video as the target scene type, wherein the target scene type is associated with the playing time of the subtitle content;
determining the target scene type from the audio content of the target video comprises: converting the audio content of the target video into a target text; and determining a keyword of the target text as the target scene type, wherein the target scene type is associated with the playing time of the audio content.
10. The method of any of claims 1-6, wherein in determining the target video corresponding to the target scene type, the method further comprises:
identifying, in a target database, the target moment of the target video through the target scene type and a target association relationship, wherein the target scene type and the target association relationship are stored in the target database in advance, and the target association relationship is used for indicating that the target scene type is associated with the target moment of the target video.
11. The method of claim 10, further comprising:
periodically updating the target scene type and the target association relationship stored in the target database.
12. A device for delivering a media file, comprising:
an acquisition unit, configured to acquire target index information corresponding to a target media file to be delivered;
an execution unit, configured to acquire a target scene type matched with the target index information, and determine a target video corresponding to the target scene type, wherein the target scene type is used for indicating a scene of target content played by the target video at a corresponding target moment;
and a delivery unit, configured to deliver the target media file to a target playing position corresponding to the target moment in the target video, wherein the starting playing moment of the target media file at the target playing position is the target moment.
13. The apparatus of claim 12,
the execution unit includes: a determining module, configured to determine at least a first target scene type and a second target scene type from the target scene types, and determine a first target video corresponding to the first target scene type and a second target video corresponding to the second target scene type, where the first target scene type is used to indicate a scene of first target content played by the first target video at a first target time, and the second target scene type is used to indicate a scene of second target content played by the second target video at a second target time;
the delivery unit includes: a delivery module, configured to deliver the target media file to a first target playing position corresponding to the first target moment in the first target video and a second target playing position corresponding to the second target moment in the second target video.
14. A storage medium having stored thereon a computer program, wherein the computer program is arranged to execute a method of delivering a media file as claimed in any one of claims 1 to 11 when executed.
15. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to run the computer program to perform the method for delivering a media file according to any one of claims 1 to 11.
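As a concrete (non-limiting) illustration of the method of claim 1, the three steps — acquiring index information for a media file to be delivered, matching it to a scene type, and delivering the file to the playing position associated with the target moment — can be sketched as below. All names here (`SCENE_INDEX`, `Placement`, `deliver_media_file`) are hypothetical illustrations, not part of the claimed implementation; the claim does not prescribe any particular data structure.

```python
from dataclasses import dataclass

# Hypothetical pre-built association table (cf. claim 10): maps a scene
# type to (video_id, target_moment_in_seconds) pairs stored in advance.
SCENE_INDEX = {
    "cooking": [("video_42", 95.0), ("video_77", 310.5)],
    "driving": [("video_42", 610.0)],
}

@dataclass
class Placement:
    video_id: str
    start_time: float  # the media file starts playing at the target moment
    media_file: str

def deliver_media_file(media_file: str, index_info: str) -> list:
    """Claim-1 sketch: match the index information to a scene type, look
    up the videos/moments associated with that scene type, and schedule
    the media file at each matching playing position."""
    scene_type = index_info.lower().strip()  # trivial matching stand-in
    placements = []
    for video_id, target_moment in SCENE_INDEX.get(scene_type, []):
        placements.append(Placement(video_id, target_moment, media_file))
    return placements

for p in deliver_media_file("cola_ad.mp4", "Cooking"):
    print(p.video_id, p.start_time)  # prints video_42 95.0, then video_77 310.5
```

In this sketch the "matching" of index information to a scene type is reduced to normalization; a real system would presumably use the recognition techniques of claims 8 and 9 to populate the table.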
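The subtitle branch of claim 9 — taking a keyword found in the subtitle content as the target scene type, associated with the subtitle's playing time — might look like the following sketch. The keyword dictionary and function name are invented for illustration only.

```python
import re

# Hypothetical keyword dictionary: subtitle keywords that identify a
# scene type (cf. claim 9, where a keyword in the subtitle content is
# determined as the target scene type).
KEYWORDS = {
    "cooking": {"recipe", "stir-fry", "oven"},
    "driving": {"highway", "overtake", "engine"},
}

def scene_type_from_subtitle(subtitle_line: str, play_time: float):
    """Return (scene_type, play_time) if a known keyword appears in the
    subtitle line, associating the scene type with the playing time of
    the subtitle content; return None when no keyword matches."""
    words = set(re.findall(r"[a-z\-]+", subtitle_line.lower()))
    for scene, keys in KEYWORDS.items():
        if words & keys:
            return scene, play_time
    return None

print(scene_type_from_subtitle("Preheat the oven to 200 degrees", 95.0))
```

The audio branch of claim 9 would differ only in that the text comes from speech recognition rather than from subtitles.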
CN201811236644.5A 2018-10-23 2018-10-23 Media file delivery method and device, storage medium and electronic device Active CN111093101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811236644.5A CN111093101B (en) 2018-10-23 2018-10-23 Media file delivery method and device, storage medium and electronic device


Publications (2)

Publication Number Publication Date
CN111093101A true CN111093101A (en) 2020-05-01
CN111093101B CN111093101B (en) 2023-03-24

Family

ID=70391329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811236644.5A Active CN111093101B (en) 2018-10-23 2018-10-23 Media file delivery method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN111093101B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815363A (en) * 2020-07-13 2020-10-23 湖南快乐阳光互动娱乐传媒有限公司 Advertisement dotting method and device, storage medium and electronic equipment
CN111967915A (en) * 2020-08-27 2020-11-20 北京明略昭辉科技有限公司 Media file delivery method and device, storage medium and electronic device
CN113553485A (en) * 2021-07-29 2021-10-26 北京达佳互联信息技术有限公司 Multimedia resource display method, device, equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070061204A1 (en) * 2000-11-29 2007-03-15 Ellis Richard D Method and system for dynamically incorporating advertising content into multimedia environments
CN101072340A (en) * 2007-06-25 2007-11-14 孟智平 Method and system for adding advertising information in flow media
CN101207807A (en) * 2007-12-18 2008-06-25 孟智平 Method for processing video and system thereof
CN103607647A (en) * 2013-11-05 2014-02-26 Tcl集团股份有限公司 Multimedia video advertisement recommendation method, system and advertisement playing equipment
CN104992347A (en) * 2015-06-17 2015-10-21 北京奇艺世纪科技有限公司 Video matching advertisement method and device
CN106169140A (en) * 2016-02-02 2016-11-30 华扬联众数字技术股份有限公司 Advertisement placement method and system



Also Published As

Publication number Publication date
CN111093101B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
CN110378732B (en) Information display method, information association method, device, equipment and storage medium
US20150256858A1 (en) Method and device for providing information
CN110139162B (en) Media content sharing method and device, storage medium and electronic device
US8375405B2 (en) Contextual television advertisement delivery
CN111093101B (en) Media file delivery method and device, storage medium and electronic device
US8315423B1 (en) Providing information in an image-based information retrieval system
CN108352914A (en) Media content matches and index
CN104602128A (en) Video processing method and device
US8156001B1 (en) Facilitating bidding on images
JP2010509661A (en) Content management system
WO2009073552A2 (en) Video object tag creation and processing
TW200845639A (en) Tagging media assets, locations, and advertisements
CN105898446A (en) Advertisement push method and device, video server and terminal equipment
US9881581B2 (en) System and method for the distribution of audio and projected visual content
US20140133832A1 (en) Creating customized digital advertisement from video and/or an image array
CN110880139B (en) Commodity display method, commodity display device, terminal, server and storage medium
US20120246676A1 (en) Targeting ads in conjunction with set-top box widgets
CN110958470A (en) Multimedia content processing method, device, medium and electronic equipment
CN107690080B (en) media information playing method and device
CN104009965A (en) Method, apparatus and system for displaying mobile media information
CN112927024B (en) Advertisement putting method, system, device, electronic equipment and readable storage medium
US8595760B1 (en) System, method and computer program product for presenting an advertisement within content
CN107437196B (en) System for providing instruction image content and advertisement of smart phone
US20140133833A1 (en) Creating customized digital advertisement from video and/or an image array
JP6082716B2 (en) Broadcast verification system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant