CN105828103A - Video processing method and player - Google Patents


Info

Publication number
CN105828103A
CN105828103A (application CN201610201738.3A)
Authority
CN
China
Prior art keywords
current video frame
video object
description information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610201738.3A
Other languages
Chinese (zh)
Inventor
蔡炜 (Cai Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Leshi Zhixin Electronic Technology Tianjin Co Ltd
LeTV Holding Beijing Co Ltd
Original Assignee
Leshi Zhixin Electronic Technology Tianjin Co Ltd
LeTV Holding Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Leshi Zhixin Electronic Technology Tianjin Co Ltd, LeTV Holding Beijing Co Ltd filed Critical Leshi Zhixin Electronic Technology Tianjin Co Ltd
Priority to CN201610201738.3A priority Critical patent/CN105828103A/en
Publication of CN105828103A publication Critical patent/CN105828103A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23424: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement


Abstract

The embodiments of the invention provide a video processing method and a player. The method comprises the following steps: determining the time information of a current video frame; obtaining, according to the time information of the current video frame, the description information of each video object in the current video frame; associating and registering the obtained description information of each video object with the corresponding video object in the current video frame; and synthesizing the video objects that have completed association registration with the video stream corresponding to the current video frame and playing the result. According to the embodiments of the invention, interaction between a user and a video, and in particular between the user and the video objects in the video, is realized, the operation process is simplified, and user experience is improved.

Description

Video processing method and player
Technical field
The present invention relates to the field of video technology, and in particular to a video processing method and a player.
Background art
Nowadays, with the application and popularization of Internet technology, users have become accustomed to obtaining the information they need from the Internet through various terminal devices. In particular, many users like to watch Internet video programs in their spare time; watching video programs has become an important pastime for terminal users.
However, current video playback schemes can only provide a simple playback function for a video program; the user and the video program remain relatively independent of each other, and interactivity is lacking. For example, while watching a video program, a user is often interested in a prop or commodity that appears in it; however, because current video playback schemes cannot provide interaction between the video program and the user, the user has no way to obtain specific information about the prop or commodity of interest.
Summary of the invention
The embodiments of the present invention provide a video processing method and a player, so as to realize interaction between a user and a video, and in particular between the user and the video objects in the video, simplifying the operation process and improving user experience.
An embodiment of the present invention provides a video processing method, including:
determining the time information of a current video frame;
obtaining, according to the time information of the current video frame, the description information of each video object in the current video frame;
associating and registering the obtained description information of each video object in the current video frame with the corresponding video object in the current video frame;
and synthesizing the video objects that have completed association registration with the video stream corresponding to the current video frame, and playing the result.
An embodiment of the present invention further provides a player, including:
a determining module, configured to determine the time information of a current video frame;
an obtaining module, configured to obtain, according to the time information of the current video frame, the description information of each video object in the current video frame;
a registering module, configured to associate and register the obtained description information of each video object in the current video frame with the corresponding video object in the current video frame;
and a synthesizing and playing module, configured to synthesize the video objects that have completed association registration with the video stream corresponding to the current video frame and play the result.
With the video processing scheme of the embodiments of the present invention, the time information of a current video frame is determined; the description information of each video object in the current video frame is obtained according to that time information; the obtained description information is then associated and registered with the corresponding video objects in the current video frame; finally, the registered video objects are synthesized with the video stream corresponding to the current video frame and played. As a result, while watching the synthesized, enhanced video, the user can interact in real time with the video objects in the video frame. For example, the user can view and obtain the specific information of a commodity appearing in the frame (such as its specification information, price information, or purchase link information), or view and obtain the profile information of a person appearing in the frame. This achieves real-time interaction between the user and the video, and in particular between the user and the video objects in the video.
Compared with the prior art, the scheme of the embodiments of the present invention can present all kinds of description information of a video object directly to the user; the user does not need to open a separate third-party application (e.g., a search browser) to search for information about the video object. This simplifies the operation process, achieves real-time interaction between the user and the video, allows video object information to be obtained quickly and in real time, and improves user experience.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of a video processing method in Embodiment 1 of the present invention;
Fig. 2 is a flow chart of the steps of a video processing method in Embodiment 2 of the present invention;
Fig. 3 is a schematic structural diagram of a player in Embodiment 3 of the present invention;
Fig. 4 is a schematic structural diagram of a preferred player in an embodiment of the present invention.
Detailed description of the invention
To make the above objects, features, and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Embodiment one
Referring to Fig. 1, a flow chart of the steps of a video processing method in Embodiment 1 of the present invention is shown. In this embodiment, the video processing method includes:
Step 102: determine the time information of a current video frame.
The frame rate of a video refers to the number of frames displayed per second; a single still image is one frame, and a sequence of still images makes up a piece of video. That is, a piece of video consists of multiple consecutive video frames. In this embodiment, the time information of the current video frame can be determined according to, for example but not only, the playback time of the video.
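As a minimal sketch of this step (the patent does not fix a particular frame rate; the default of 25 fps below is an assumption for illustration), the time information of the current frame can be derived from the playback position and the frame rate:

```python
def frame_index_for_time(playback_seconds: float, fps: float = 25.0) -> int:
    """Map a playback position to the index of the current video frame."""
    return int(playback_seconds * fps)

def time_for_frame(frame_index: int, fps: float = 25.0) -> float:
    """Inverse mapping: the time information (timestamp) of a given frame."""
    return frame_index / fps
```

Either direction of the mapping can serve as the "time information" that keys the later metadata lookup.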
Step 104: obtain, according to the time information of the current video frame, the description information of each video object in the current video frame.
Usually a piece of video contains multiple video objects. For example, the people appearing in the video, the clothing they wear, the electronic devices they use, and scene environments such as restaurants, bus stops, and cafes can all be understood as video objects in the video. The times at which different video objects appear are not all the same: person A may wear clothing A at the 1st second of the video and clothing B at the 10th second.
In this embodiment, the description information of each video object in the current video frame can be obtained according to the time information of the current video frame. For example, the description information may include the shopping guide information, place-of-origin information, and purchase link information of clothing A or clothing B above. The description information of each video object in the current video frame may have been extracted in advance and saved in a database, so that the description information of the corresponding video objects can be obtained directly from the database according to the time information of the current video frame.
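A minimal sketch of the lookup just described, assuming the description information was extracted in advance into a store keyed by frame timestamp (the keys, field names, and URL are illustrative placeholders, not part of the patent):

```python
# Hypothetical pre-extracted metadata store, keyed by frame timestamp in seconds.
METADATA_DB = {
    1.0: [{"object": "clothing A", "origin": "factory X", "link": "http://example.com/a"}],
    10.0: [{"object": "clothing B", "origin": "factory Y", "link": "http://example.com/b"}],
}

def get_description_info(timestamp: float) -> list:
    """Return the description information of every video object in the frame
    at `timestamp`; an empty list if no metadata was extracted for that frame."""
    return METADATA_DB.get(timestamp, [])
```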
Step 106: associate and register the obtained description information of each video object in the current video frame with the corresponding video object in the current video frame.
In this embodiment, by associating and registering the description information with the corresponding video objects, the description information is fused with the video objects in the current video frame, which enables the user to interact with those objects.
Step 108: synthesize the video objects that have completed association registration with the video stream corresponding to the current video frame, and play the result.
In this embodiment, any suitable means may be used to synthesize the registered video objects with the video stream corresponding to the current video frame. For example, on the Windows platform, DirectShow can be used to perform the synthesis and then play the synthesized, enhanced video frames. While watching the enhanced frames, the user can interact in real time with the video objects (people, props, or clothing) in the current frame and view their specific information.
In summary, the video processing method of this embodiment can present all kinds of description information of a video object directly to the user; the user does not need to open a separate third-party application (e.g., a search browser) to search for information about the video object. This simplifies the operation process, achieves real-time interaction between the user and the video, allows video object information to be obtained quickly and in real time, and improves user experience.
Embodiment two
Referring to Fig. 2, a flow chart of the steps of a video processing method in Embodiment 2 of the present invention is shown. In this embodiment, the video processing method includes:
Step 202: perform feature extraction on the current video to obtain metadata.
In this embodiment, the metadata includes, but is not limited to, the description information of the video objects in the current video. The description information may specifically include the contour information, color information, and semantic information of a video object. The description information can indicate the relative position of a video object within a given video frame, as well as the object's concrete semantics (e.g., which character a person is, or what an article is, and its meaning in the video).
In this embodiment, the metadata can be obtained in any suitable way. For example, feature extraction may be performed on the current video automatically or manually (but not only in these ways), yielding the various kinds of description information of the video objects in the current video, such as their contour information, color information, and semantic information. These kinds of description information constitute the metadata.
Step 204: label the metadata.
In this embodiment, the metadata is labelled in order to distinguish the metadata corresponding to different video frames. Preferably, the metadata can be labelled with time marks (though not only in this way): the matching video frame of each piece of metadata can then be determined from its label. Of course, the metadata may also be labelled in any other suitable way that distinguishes different pieces of metadata; this embodiment imposes no restriction on this.
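The time-mark labelling above can be sketched as attaching a time mark to each extracted metadata record (the field names are assumptions for illustration only):

```python
def label_metadata(records: list, time_mark: float) -> list:
    """Attach a time mark to each metadata record so that records belonging
    to different video frames can be told apart later."""
    return [dict(rec, time_mark=time_mark) for rec in records]
```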
Step 206: save the labelled metadata in the form of a metadata script.
In this embodiment, the labelled metadata can be recorded and saved in script form, with the metadata script carrying the labels. For example, the labelled metadata may be recorded and saved as an MPEG-7 script (but not only in this format).
MPEG-7 is an ISO/IEC standard developed by MPEG (the Moving Picture Experts Group), aimed primarily at information retrieval in the multimedia era. The media currently considered include still images, image sequences and moving pictures, computer graphics, 3D models, animation, speech, and audio (including synthesized speech). Also known as the "Multimedia Content Description Interface", MPEG-7 provides a general, flexible, extensible, and rich framework for describing audio-visual features, with which descriptions of multimedia content can be created. The framework comprises standardized Descriptors (D), Description Schemes (DS), and a Description Definition Language (DDL), along with methods and tools for encoding descriptions, which greatly improve its generality and reusability; it is a good solution for video description.
In this embodiment, feature extraction can be performed on each video in advance to obtain the metadata corresponding to each video, and the metadata can be recorded as MPEG-7 scripts and saved in a database or on a server for direct lookup later, saving video processing time and improving efficiency.
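A toy serializer in the spirit of this step; the element names below are illustrative placeholders in an MPEG-7-style layout, not the actual MPEG-7 schema:

```python
import xml.etree.ElementTree as ET

def metadata_to_script(records: list) -> str:
    """Serialize labelled metadata records into a simplified MPEG-7-style XML
    script. Each Description element carries its time mark as an attribute."""
    root = ET.Element("Mpeg7")
    for rec in records:
        desc = ET.SubElement(root, "Description", timeMark=str(rec["time_mark"]))
        for key in ("contour", "color", "semantic"):
            if key in rec:
                ET.SubElement(desc, key.capitalize()).text = rec[key]
    return ET.tostring(root, encoding="unicode")
```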
Step 208: determine the time information of a current video frame.
In general, each video frame corresponds to a different timestamp. In this embodiment, the time information of the current video frame can be determined according to the timestamp of the current video frame.
Step 210: obtain, according to the time information of the current video frame, the description information of each video object in the current video frame.
As mentioned above, the description information (i.e., the metadata) of the video objects has been extracted in advance and saved in script form in a database or on a server. Therefore, in this embodiment, a first script matching the current video frame can be obtained from the metadata script according to the time information of the current video frame; the first script is then parsed to obtain the description information of each video object in the current video frame.
The metadata script carries the labels used to distinguish different pieces of metadata. Therefore, obtaining the first script matching the current video frame from the metadata script according to the frame's time information may specifically include: determining a first label that matches the time information of the current video frame; and obtaining, according to the determined first label, the first script matching the current video frame from the metadata script. That is, in this embodiment, the script content corresponding to the current video frame is retrieved from the metadata script and then parsed, yielding the description information of the video objects corresponding to the current video frame.
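Under the same illustrative script format (not the real MPEG-7 schema), retrieving and parsing the first matching script might look like:

```python
import xml.etree.ElementTree as ET

def first_matching_script(script_xml: str, time_mark: float):
    """Find the Description element whose time-mark label matches the current
    frame's time information, and parse out its child fields."""
    root = ET.fromstring(script_xml)
    for desc in root.iter("Description"):
        if float(desc.get("timeMark")) == time_mark:
            return {child.tag: child.text for child in desc}
    return None
```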
Step 212: associate and register the obtained description information of each video object in the current video frame with the corresponding video object in the current video frame.
In this embodiment, the description information includes, but is not limited to, contour information, color information, and semantic information. The metadata corresponding to a video (that is, the description information) mainly indicates the following: the time information (timestamp) of a video object, its spatial position information (the relative position of the video object within the current video frame), and its semantic information (which role or article the video object is, and its concrete meaning in the video). After the description information of each video object in the current video frame has been obtained, it needs to be put into one-to-one correspondence with the people and/or props in the current frame; that is, the description information must be associated and registered with the video objects in the current video frame. Once association registration has been completed successfully, the user can obtain the description information of the person and/or prop at an interaction position by interacting with the current video frame.
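The association registration and the subsequent interaction can be sketched as pairing each description with its object's screen region and hit-testing the user's interaction position (the bounding-box representation and field names are assumptions for illustration):

```python
def register_associations(descriptions: list) -> list:
    """Associate each description with the screen region of its video object,
    a stand-in for the patent's association registration step."""
    return [(tuple(d["bbox"]), d) for d in descriptions]

def description_at(registered: list, x: int, y: int):
    """Return the description of the video object, if any, at interaction
    position (x, y) within the frame."""
    for (x0, y0, x1, y1), desc in registered:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return desc
    return None
```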
Step 214: synthesize the video objects that have completed association registration with the video stream corresponding to the current video frame, and play the result.
In this embodiment, the synthesis of the video can be carried out at the level of the enhanced video frame. For example, on the Windows platform this can be implemented with DirectShow. DirectShow adopts a modular COM (Component Object Model) architecture with the filter as its basic unit; a video application is realized by chaining a video stream through a series of filters for capture, compression, decompression, transmission, playback, and so on.
In this embodiment, the synthesis and enhancement of the video can be performed by a player built on the DirectShow framework. For example, the player may use DirectDraw's windowed mode (DDSCL_NORMAL), with YUV as the picture format. To synthesize the video: first obtain the device context handle (HDC) of the primary surface; then use the Windows GDI graphics drawing API together with the contour information, color information, and semantic information of the video objects obtained by parsing the MPEG-7 script; finally, use the contour and/or color information to complete the association registration of the semantic information with the video objects, and synthesize the semantic information into the original video frame.
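A platform-neutral sketch of the synthesis idea, standing in for the DirectShow/GDI drawing described above rather than reproducing the patent's actual implementation: the region of a registered video object is marked in a copy of the frame, mimicking the drawing of semantic information into the original picture.

```python
def synthesize_label(frame: list, bbox: tuple, marker: str = "#") -> list:
    """Return a copy of `frame` (a 2D grid of characters standing in for
    pixels) with the bounding-box region of a registered video object
    marked; the original frame is left untouched."""
    x0, y0, x1, y1 = bbox
    out = [row[:] for row in frame]
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            out[y][x] = marker
    return out
```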
Of course, those skilled in the art will understand that the synthesis and enhancement of the registered video objects with the current video frame may also be realized in any other suitable way; for example, on the Android platform it can be realized based on OpenGL (Open Graphics Library).
Step 216: when video playback reaches the position of the current video frame, display the description information of each video object in the current video frame in the video playback interface; and/or, when a trigger action on a video object in the current video frame is received, display the description information of the video object that was triggered.
In this embodiment, when the synthesized video frames are played, or when playback reaches a video frame carrying description information, the description information of the video objects in that frame can be displayed in real time. For example, when playback reaches the position of the current video frame, the description information of each video object in the frame can be displayed directly in the video playback interface; and/or, when a trigger action on a video object in the current video frame is received, the description information of the triggered video object is displayed.
In summary, with the video processing method of this embodiment, the time information of a current video frame is determined; the description information of each video object in the current video frame is obtained according to that time information; the obtained description information is then associated and registered with the corresponding video objects in the current video frame; finally, the registered video objects are synthesized with the video stream corresponding to the current video frame and played. As a result, while watching the synthesized, enhanced video, the user can interact in real time with the video objects in the video frame. For example, the user can view and obtain the specific information of a commodity appearing in the frame (such as its specification information, price information, or purchase link information), or view and obtain the profile information of a person appearing in the frame. This achieves real-time interaction between the user and the video, and in particular between the user and the video objects in the video.
Compared with the prior art, the video processing method of this embodiment can present all kinds of description information of a video object directly to the user; the user does not need to open a separate third-party application (e.g., a search browser) to search for information about the video object. This simplifies the operation process, achieves real-time interaction between the user and the video, allows video object information to be obtained quickly and in real time, and improves user experience.
It should be noted that, for brevity, the method embodiments are described as a series of combined actions. However, those skilled in the art should know that the embodiments of the present invention are not limited by the described order of actions, because according to the embodiments some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in this specification are preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Embodiment three
Referring to Fig. 3, a schematic structural diagram of a player in Embodiment 3 of the present invention is shown. In this embodiment, the player includes:
a determining module 302, configured to determine the time information of a current video frame;
an obtaining module 304, configured to obtain, according to the time information of the current video frame, the description information of each video object in the current video frame;
a registering module 306, configured to associate and register the obtained description information of each video object in the current video frame with the corresponding video object in the current video frame;
a synthesizing and playing module 308, configured to synthesize the video objects that have completed association registration with the video stream corresponding to the current video frame and play the result.
It can be seen that, compared with the prior art, the player of this embodiment can present all kinds of description information of a video object directly to the user; the user does not need to open a separate third-party application (e.g., a search browser) to search for information about the video object. This simplifies the operation process, achieves real-time interaction between the user and the video, allows video object information to be obtained quickly and in real time, and improves user experience.
In a preferred version of this embodiment, referring to Fig. 4, a schematic structural diagram of a preferred player in an embodiment of the present invention is shown.
Preferably, the player may also include:
an extracting module 310, configured to perform feature extraction on the current video to obtain metadata.
In this embodiment, the metadata includes the description information of the video objects in the current video.
a labelling module 312, configured to label the metadata;
a saving module 314, configured to save the labelled metadata in the form of a metadata script, the metadata script carrying the labels.
Correspondingly, the obtaining module 304 may specifically include: a script obtaining submodule 3042, configured to obtain, according to the time information of the current video frame, a first script matching the current video frame from the metadata script; and a parsing submodule 3044, configured to parse the first script to obtain the description information of each video object in the current video frame.
Further preferably, the script obtaining submodule 3042 may specifically include: a determining subunit 30442, configured to determine a first label matching the time information of the current video frame; and an obtaining subunit 30444, configured to obtain, according to the determined first label, the first script matching the current video frame from the metadata script.
In this embodiment, preferably, the player may also include:
a display module 316, configured to, when video playback reaches the position of the current video frame, display the description information of each video object in the current video frame in the video playback interface; and/or, when a trigger action on a video object in the current video frame is received, display the description information of the triggered video object.
It should be noted that in this embodiment the metadata script may specifically be an MPEG-7 script, and the description information includes, but is not limited to, contour information, color information, and semantic information.
In summary, with the player of this embodiment, the time information of a current video frame is determined; the description information of each video object in the current video frame is obtained according to that time information; the obtained description information is then associated and registered with the corresponding video objects in the current video frame; finally, the registered video objects are synthesized with the video stream corresponding to the current video frame and played. As a result, while watching the synthesized, enhanced video, the user can interact in real time with the video objects in the video frame. For example, the user can view and obtain the specific information of a commodity appearing in the frame (such as its specification information, price information, or purchase link information), or view and obtain the profile information of a person appearing in the frame. This achieves real-time interaction between the user and the video, and in particular between the user and the video objects in the video.
Compared with the prior art, the player of this embodiment can present all kinds of description information of a video object directly to the user; the user does not need to open a separate third-party application (e.g., a search browser) to search for information about the video object. This simplifies the operation process, achieves real-time interaction between the user and the video, allows video object information to be obtained quickly and in real time, and improves user experience.
As for the device embodiment, since it is basically similar to the method embodiments, its description is relatively simple; for relevant details, refer to the description of the method embodiments.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that the embodiments have in common, reference may be made between them.
Those skilled in the art should appreciate that embodiments of the present invention may be provided as a method, a device, or a computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Furthermore, embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
Embodiments of the present invention are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal device, causing a series of operational steps to be performed on the computer or other programmable terminal device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, may make further changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the embodiments of the present invention.
Finally, it should also be noted that, in this document, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, such that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. In the absence of further limitation, an element defined by the statement "including a ..." does not exclude the existence of other identical elements in the process, method, article, or terminal device that includes the element.
A video processing method and a player provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principles and embodiments of the present invention, and the description of the above embodiments is intended only to help understand the method and core ideas of the present invention. Meanwhile, for those of ordinary skill in the art, changes may be made to the specific embodiments and the scope of application in accordance with the ideas of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (14)

1. A video processing method, characterized by comprising:
determining temporal information of a current video frame;
obtaining, according to the temporal information of the current video frame, description information of each video object in the current video frame;
associating and registering the obtained description information of each video object in the current video frame with the corresponding video object in the current video frame; and
synthesizing and playing the video stream in which the registered video objects corresponding to the current video frame have been associated.
2. The method according to claim 1, characterized in that obtaining, according to the temporal information of the current video frame, the description information of each video object in the current video frame comprises:
obtaining, according to the temporal information of the current video frame, a first script matching the current video frame from a metadata script; and
parsing the first script to obtain the description information of each video object in the current video frame.
3. The method according to claim 2, characterized by further comprising:
performing feature extraction on a current video to obtain metadata, wherein the metadata comprises the description information of the video objects in the current video;
annotating the metadata; and
saving the annotated metadata in a metadata script format, wherein the metadata script carries the annotation.
4. The method according to claim 3, characterized in that obtaining, according to the temporal information of the current video frame, the first script matching the current video frame from the metadata script comprises:
determining a first identifier matching the temporal information of the current video frame; and
obtaining, according to the determined first identifier, the first script matching the current video frame from the metadata script.
5. The method according to claim 2, characterized in that the metadata script is an MPEG-7 script.
6. The method according to claim 1, characterized by further comprising:
when the video plays to the position of the current video frame, displaying the description information of each video object in the current video frame on a video playing interface;
and/or,
when a trigger operation on a video object in the current video frame is received, displaying the description information of the video object triggered by the trigger operation.
7. The method according to any one of claims 1-6, characterized in that the description information comprises: profile information, color information, and semantic information.
8. A player, characterized by comprising:
a determining module, configured to determine temporal information of a current video frame;
an obtaining module, configured to obtain, according to the temporal information of the current video frame, description information of each video object in the current video frame;
a registering module, configured to associate and register the obtained description information of each video object in the current video frame with the corresponding video object in the current video frame; and
a synthesizing and playing module, configured to synthesize and play the video stream in which the registered video objects corresponding to the current video frame have been associated.
9. The player according to claim 8, characterized in that the obtaining module comprises:
a script obtaining submodule, configured to obtain, according to the temporal information of the current video frame, a first script matching the current video frame from a metadata script; and
a parsing submodule, configured to parse the first script to obtain the description information of each video object in the current video frame.
10. The player according to claim 9, characterized by further comprising:
an extraction module, configured to perform feature extraction on a current video to obtain metadata, wherein the metadata comprises the description information of the video objects in the current video;
an annotation module, configured to annotate the metadata; and
a saving module, configured to save the annotated metadata in a metadata script format, wherein the metadata script carries the annotation.
11. The player according to claim 10, characterized in that the script obtaining submodule comprises:
a determining subunit, configured to determine a first identifier matching the temporal information of the current video frame; and
an obtaining subunit, configured to obtain, according to the determined first identifier, the first script matching the current video frame from the metadata script.
12. The player according to claim 9, characterized in that the metadata script is an MPEG-7 script.
13. The player according to claim 8, characterized by further comprising:
a display module, configured to: when the video plays to the position of the current video frame, display the description information of each video object in the current video frame on a video playing interface; and/or, when a trigger operation on a video object in the current video frame is received, display the description information of the video object triggered by the trigger operation.
14. The player according to any one of claims 8-13, characterized in that the description information comprises: profile information, color information, and semantic information.
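The lookup described in claims 2-5 — select the first script whose identifier matches the current frame's temporal information, then parse it for per-object description information — can be sketched as follows. The XML layout is a simplified, MPEG-7-inspired stand-in (the real MPEG-7 schema is far richer), and the timestamp-to-identifier scheme is an assumption for the sketch.

```python
# Illustrative sketch of claims 2-5: derive an identifier from the frame's
# temporal information (claim 4), fetch the matching "first script" from the
# metadata script, and parse it into description information covering the
# profile/color/semantic categories named in claim 7.
import xml.etree.ElementTree as ET

METADATA_SCRIPT = """
<metadata>
  <script id="t0040">
    <object id="person1" profile="actor bio" color="blue" semantic="person"/>
  </script>
  <script id="t0080">
    <object id="bag1" profile="brand page" color="red" semantic="handbag"/>
  </script>
</metadata>
"""

def first_script_for(timestamp_ms, xml_text):
    """Claim 4: determine the first identifier matching the frame time,
    then obtain the matching script element from the metadata script."""
    ident = f"t{timestamp_ms:04d}"              # hypothetical identifier scheme
    root = ET.fromstring(xml_text)
    return root.find(f"./script[@id='{ident}']")

def parse_descriptions(script):
    """Claim 2: parse the first script into per-object description info."""
    return {
        obj.get("id"): {k: obj.get(k) for k in ("profile", "color", "semantic")}
        for obj in script.findall("object")
    }

script = first_script_for(40, METADATA_SCRIPT)
info = parse_descriptions(script)
```

Keying scripts to temporal identifiers keeps per-frame lookup O(1) with respect to video length, which is what lets the player fetch and register descriptions during playback rather than pre-processing the whole stream.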
CN201610201738.3A 2016-03-31 2016-03-31 Video processing method and player Pending CN105828103A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610201738.3A CN105828103A (en) 2016-03-31 2016-03-31 Video processing method and player


Publications (1)

Publication Number Publication Date
CN105828103A true CN105828103A (en) 2016-08-03

Family

ID=56525655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610201738.3A Pending CN105828103A (en) 2016-03-31 2016-03-31 Video processing method and player

Country Status (1)

Country Link
CN (1) CN105828103A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101072340A (en) * 2007-06-25 2007-11-14 孟智平 Method and system for adding advertising information in flow media
CN102708215A (en) * 2007-12-18 2012-10-03 孟智平 Method and system for processing video
US20130019261A1 (en) * 2003-12-23 2013-01-17 Opentv, Inc. System and method for providing interactive advertisement
CN103297840A (en) * 2012-03-01 2013-09-11 阿里巴巴集团控股有限公司 Additional information display method and system based on video moving focus


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107888974A (en) * 2016-09-30 2018-04-06 北京视连通科技有限公司 A kind of instant video synthetic method and system based on scene or special object
WO2018171750A1 (en) * 2017-03-23 2018-09-27 Mediatek Inc. Method and apparatus for track composition
CN109756727A (en) * 2017-08-25 2019-05-14 华为技术有限公司 Information display method and relevant device
CN109756727B (en) * 2017-08-25 2021-07-20 华为技术有限公司 Information display method and related equipment
CN108460616A (en) * 2017-12-05 2018-08-28 北京陌上花科技有限公司 The data processing method and device that video ads are launched
CN113794907A (en) * 2021-09-16 2021-12-14 广州虎牙科技有限公司 Video processing method, video processing device and electronic equipment
WO2023045867A1 (en) * 2021-09-27 2023-03-30 北京有竹居网络技术有限公司 Video-based information display method and apparatus, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN105828103A (en) Video processing method and player
US9224246B2 (en) Method and apparatus for processing media file for augmented reality service
WO2018036456A1 (en) Method and device for tracking and recognizing commodity in video image and displaying commodity information
US5708845A (en) System for mapping hot spots in media content for interactive digital media program
US20160198097A1 (en) System and method for inserting objects into an image or sequence of images
US20130083036A1 (en) Method of rendering a set of correlated events and computerized system thereof
CN104902345A (en) Method and system for realizing interactive advertising and marketing of products
US20080285940A1 (en) Video player user interface
US20170048597A1 (en) Modular content generation, modification, and delivery system
US9788084B2 (en) Content-object synchronization and authoring of dynamic metadata
CN105635712A (en) Augmented-reality-based real-time video recording method and recording equipment
CN105872717A (en) Video processing method and system, video player and cloud server
CN104982039A (en) Method for providing targeted content in image frames of video and corresponding device
US20130282715A1 (en) Method and apparatus of providing media file for augmented reality service
CN106060578A (en) Producing video data
EP2071578A1 (en) Video interaction apparatus and method
CN103929669A (en) Interactive video generator, player, generating method and playing method
JP2001157192A (en) Method and system for providing object information
CN113660528A (en) Video synthesis method and device, electronic equipment and storage medium
CN112287771A (en) Method, apparatus, server and medium for detecting video event
EP3735778B1 (en) Coordinates as ancillary data
TWM506428U (en) Display system for video stream on augmented reality
US8768945B2 (en) System and method of enabling identification of a right event sound corresponding to an impact related event
CN102073668B (en) Searching and extracting digital images from digital video files
CN113542907B (en) Multimedia data transceiving method, system, processor and player

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160803