CN100471255C - Method for making and playing interactive video frequency with heat spot zone - Google Patents

Method for making and playing interactive video frequency with heat spot zone

Info

Publication number
CN100471255C
CN100471255C (application CN200610053953A)
Authority
CN
China
Prior art keywords
video
frame
motion vector
hot spot
spot region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200610053953
Other languages
Chinese (zh)
Other versions
CN1946163A (en)
Inventor
潘云鹤
庄越挺
吴飞
翁建广
陈铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN 200610053953 priority Critical patent/CN100471255C/en
Publication of CN1946163A publication Critical patent/CN1946163A/en
Application granted granted Critical
Publication of CN100471255C publication Critical patent/CN100471255C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Transfer Between Computers (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

This invention discloses a method for making and playing interactive video with hot spot regions, comprising: 1) adding hot spot interaction regions to the video; 2) saving the interactive video; 3) playing the video. The invention adds interactive elements to conventional video: hot spot interaction information can be attached to any closed region in any frame of the video, the change in a region's position and size is computed automatically from the camera motion parameters, and a region can receive user interaction in real time and respond accordingly. The interaction information is saved as a separate data file, and a standard SMIL file can be generated by combining the original video file with the interaction data file, which plays normally in any player that supports the SMIL standard.

Description

Method for making and playing interactive video with hot spot regions
Technical field
The present invention relates to computer video editing, and in particular to a method for making and playing interactive video with hot spot regions.
Background art
Since "Movie Map" appeared in 1980 [1], HyperText has developed toward HyperMedia, which establishes links between different types of media, and further toward HyperVideo, which supports multimedia hyperlinks both in two-dimensional space and along the video timeline. Because of the temporal dimension of video, HyperVideo is closer to reality and more flexible than other, static media, and has become a research trend within HyperMedia. With its good interactivity and vividness, HyperVideo has found fairly successful practical applications in fields such as education, scientific research and training support [2][3], and it is also well suited to expressing video structure and generating summaries [4][5]. HyperVideo research can be divided into three aspects: content description, presentation, and authoring. For content description and presentation, MPEG-7 [6] and SMIL [7] have become the corresponding common standards, and methods for automatic conversion from MPEG-7 descriptions to SMIL have appeared [8]. Because video itself lacks structure and textual semantics, and unlike static media cannot display all of its content spatially on a two-dimensional screen, authoring HyperVideo is comparatively difficult. Although some assisted authoring systems have appeared (such as HyperCafe [9] and Hyper-Hitchcock [10]), the inefficiency of manual authoring remains the bottleneck limiting its further adoption. Automatically generating a multi-layer, segmented video index from linear video, based on video analysis and structuring algorithms, is the goal researchers are working toward.
Appendix: list of references
[1] A. Lippman. Movie-Maps: An Application of the Optical Videodisc to Computer Graphics. Proc. of ACM SIGGRAPH, pp. 32-42, 1980.
[2] T. Chambel, C. Zahn, and M. Finke. Hypervideo Design and Support for Contextualized Learning. IEEE International Conference on Advanced Learning Technologies, pp. 345-349, 2004.
[3] O. Aubert and Y. Prie. Advene: Active Reading through Hypervideo. Proc. of the sixteenth ACM conference on Hypertext and hypermedia, pp. 235-244, 2005.
[4] A. Girgensohn, F. Shipman, and L. Wilcox. Hypervideo Summaries. SPIE Information Technologies and Communications, 2003.
[5] F. Shipman, A. Girgensohn, and L. Wilcox. Generation of Interactive Multi-Level Video Summaries. Proc. of ACM Multimedia, pp. 392-401, 2003.
[6] MPEG-7 Overview (version 10). ISO/IEC JTC1/SC29/WG11 N6828, Palma de Mallorca, 2004.
[7] D. Bulterman et al. Synchronized Multimedia Integration Language (SMIL 2.1). W3C, 2005.
[8] T. Zhou, T. Gedeon, and J. Jin. Automatic Generating Detail-on-demand Hypervideo Using MPEG-7 and SMIL. Proc. of the 13th annual ACM international conference on Multimedia, pp. 379-382, 2005.
[9] N. Sawhney, D. Balcom, and I. Smith. HyperCafe: Narrative and Aesthetic Properties of Hypervideo. Proc. of the Seventh ACM Conference on Hypertext, pp. 1-10, 1996.
[10] F. Shipman III, A. Girgensohn, and L. Wilcox. Hypervideo Expression: Experiences with Hyper-Hitchcock. Proc. of ACM Hypertext and Hypermedia, pp. 217-226, 2005.
Summary of the invention
The purpose of this invention is to provide a method for making and playing interactive video with hot spot regions.
The method for making and playing interactive video with hot spot regions comprises:
1) addition of hot spot interaction regions to the video
The user specifies the position and size of hot spot interaction regions in selected frames, determines each region's activation time span, and attaches additional content and response-target information to each region; from the camera motion parameters computed for every frame, the position and size of each region in the other frames of its activation time span are extrapolated automatically;
2) saving of the interactive video
The annotation information of the hot spot interaction regions in the video is saved as a separate data file, or the video and the interaction data are combined and exported as a standard SMIL file;
3) playing of the interactive video
During playback, the hot spot region information is read from the corresponding annotation file; whenever the current time falls within a region's activation time span, the region's additional annotation content is displayed in the video, and if the user clicks inside the region, playback jumps directly to the region's response target.
Hot spot interaction regions in a frame: the shape of a region can be any closed geometric figure within the bounds of a single video frame; the regions are specified freely by the user, or generated automatically with system assistance.
Additional content and response-target information of a region: the additional content displayed for an interaction region is text or an image; when the user clicks a hot spot interaction region, the response target file can be audio, video, or a web page.
Activation time span of a region: only within the specified time span is a hot spot region active and able to display its additional content and response-target information.
The method for automatically extrapolating each region's position and size in the other frames of its activation time span comprises the following steps:
(1) compute the camera motion parameters in each frame;
(2) from the region's position and size in the previous frame and the corresponding camera motion parameters, compute the region's changed position and size in the next frame.
The method for computing the camera motion parameters in each frame comprises the following steps:
(1) Read the motion vectors of the frame from the compressed video file.
(2) Normalize the motion vectors, unifying those of I-frames, B-frames and P-frames and of inter-coded, intra-coded and hybrid-coded macroblocks.
(3) Remove part of the noise in the motion vectors, based mainly on each motion vector's neighborhood consistency and smoothness of variation.
(4) Further optimize the motion vectors: first cluster the motion vectors in the frame, judge from the distribution of each cluster whether its vectors belong to camera motion or object motion, and keep only the clusters belonging to camera motion.
(5) Build a camera parameter model and solve for the motion parameters: set up a hypothesized system of camera parameter equations, substitute the optimized motion vectors into the system, and solve for the unknown parameters with standard numerical linear algebra methods.
The annotation information comprises: 1) the start and end time of the region's activation; 2) the region's position and size in each frame while active; 3) the region's additional content: for text, its color, size and hyperlink; for an image, all pixel information, or, if read directly from a file, the file's path; 4) the region's response-target information, which is an image, audio, video or web page, recorded as a file path.
The beneficial effects of the invention are: interactive elements are added to conventional video; hot spot interaction information can be attached to any closed region in any frame of the video, and the change in a hot spot region's position and size during playback is computed automatically from the camera motion parameters. During playback, a hot spot interaction region receives user interaction in real time and responds accordingly. The interaction information is saved as a separate data file, independent of the source video's particular encoding and requiring no re-encoding. The source video file and the interaction data file can also be combined and exported as a standard SMIL file that plays normally in any player supporting the SMIL standard.
Description of drawings
Fig. 1(a) is a schematic diagram of the smoothness-of-variation test used when removing noise from the motion vectors;
Fig. 1(b) is a schematic diagram of the neighborhood-consistency test used when removing noise from the motion vectors;
Fig. 2(a) is a schematic diagram of a cluster classified as camera motion after clustering the motion vectors;
Fig. 2(b) is a schematic diagram of a cluster classified as object motion after clustering the motion vectors;
Fig. 2(c) is a schematic diagram of a cluster classified as abnormal motion after clustering the motion vectors;
Fig. 3 is the workflow diagram for making a video with hot spot interaction regions;
Fig. 4 is the system flow chart for playing a video with hot spot interaction regions;
Fig. 5 is an example of the authoring interface for a scenic-spot interactive video;
Fig. 6 is an example application of interactive video in a real-scene digital tourism project;
Fig. 7 is an example of the response after a hot spot region is clicked in the real-scene digital tourism project;
Fig. 8 is an example of the authoring interface for a person-type interactive video;
Fig. 9 is an example of the response after the user clicks a hot spot person in the video.
Detailed description of the embodiments
The steps for making an interactive video with hot spot regions according to the invention are as follows:
1. The user adds hot spot interaction region information
The user first positions the video at a chosen frame, outlines the position and shape of the hot spot region with the mouse, and specifies the start and end time of the region's activation. The user then attaches additional content to the region, comprising text and images, which can be read directly from a file or entered in a text box and drawn with the mouse. Finally the user adds the region's response-target information, which can be an image, audio, video, web page or other file, normally recorded as a file path.
2. Automatically generate the position and size of the hot spot region in the other frames of the activation time span
Requiring the user to mark the region's position and size by hand in every frame of the activation time span would be very tedious, so the system assists by computing the camera motion parameters of each frame. The concrete steps are as follows:
(1) Compute the camera motion parameters in each frame.
A. Read the motion vectors of the frame from the compressed video file.
B. Normalize the motion vectors, unifying those of I-frames, B-frames and P-frames and of inter-coded, intra-coded and hybrid-coded macroblocks.
C. Remove part of the noise in the motion vectors, based mainly on each motion vector's neighborhood consistency and smoothness of variation. As shown in Fig. 1, the left part illustrates the smoothness test: the motion vectors of the four diagonally adjacent macroblocks around the central macroblock are averaged, and if the number of these averages that differ from the central macroblock's motion vector by less than a threshold is below a certain value, the macroblock is judged not to vary smoothly, is considered noisy, and is removed. The right part illustrates the neighborhood test: if the number of the eight surrounding macroblocks whose motion vectors differ from the central macroblock's by less than a threshold is below a certain value, the macroblock is judged not to preserve neighborhood consistency, is considered noisy, and is removed.
D. Further optimize the motion vectors: first cluster the motion vectors in the frame, judge from the distribution of each cluster whether its vectors belong to camera motion or object motion, and keep only the clusters belonging to camera motion. As shown in Fig. 2, the left part shows the cluster distribution of macroblocks belonging to camera motion, the middle part that of macroblocks belonging to object motion, and the right part an abnormal case.
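The patent does not specify a particular clustering algorithm. As a loose illustration only (not the patent's method), motion vectors can be grouped by quantising (u, v), keeping the dominant cluster on the assumption that global camera motion contributes the most macroblocks:

```python
from collections import defaultdict

def keep_camera_motion(vectors, cell=2.0):
    """Crude stand-in for step D: group motion vectors by quantising
    (u, v) to a grid of size `cell`, and keep only the dominant cluster,
    assuming global camera motion contributes the most macroblocks.
    vectors: list of (x, y, u, v) per macroblock. The patent judges
    clusters by their distribution; majority size is used here for
    simplicity."""
    clusters = defaultdict(list)
    for x, y, u, v in vectors:
        key = (round(u / cell), round(v / cell))
        clusters[key].append((x, y, u, v))
    return max(clusters.values(), key=len)

# Eight blocks panning right, two blocks of an object moving up.
field = [(i, 0, 4.0, 0.0) for i in range(8)] + [(0, 1, 0.0, -6.0), (1, 1, 0.0, -6.0)]
kept = keep_camera_motion(field)
print(len(kept))  # 8
```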
E. Build a camera parameter model and solve for the motion parameters: set up a hypothesized system of camera parameter equations, substitute the optimized motion vectors into the system, and solve for the unknown parameters with standard numerical linear algebra methods. The more parameters the model has, the more accurate it is, but the slower it is to solve. The six-parameter affine model is usually adopted:
u = a0·x + a1·y + a2
v = a3·x + a4·y + a5
where a0, ..., a5 are the unknown parameters, (x, y) is the coordinate of the macroblock center, and (u, v) are the two components of the macroblock's motion vector.
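For illustration, the least-squares fit of this affine model can be sketched as follows (a minimal example, not the patent's implementation): since the u- and v-equations are decoupled, each triple of parameters is an independent 3-parameter linear least-squares problem, solved here via the normal equations with a small Gaussian elimination so no external libraries are needed.

```python
def solve_3x3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):  # back substitution
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_affine(samples):
    """Fit u = a0*x + a1*y + a2 and v = a3*x + a4*y + a5 by least squares.
    samples: list of (x, y, u, v) taken from the retained motion vectors."""
    def fit_one(target_idx):
        A = [[0.0] * 3 for _ in range(3)]
        b = [0.0] * 3
        for x, y, u, v in samples:
            t = (u, v)[target_idx]
            phi = (x, y, 1.0)                    # basis functions
            for i in range(3):
                b[i] += phi[i] * t               # right-hand side  Phi^T t
                for j in range(3):
                    A[i][j] += phi[i] * phi[j]   # normal equations Phi^T Phi
        return solve_3x3(A, b)
    return fit_one(0) + fit_one(1)               # [a0, a1, a2, a3, a4, a5]
```

With noise-free vectors generated from a known parameter set, `fit_affine` recovers the parameters exactly; with real, noisy vectors it returns the least-squares estimate.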
(2) From the region's position and size in the previous frame and the corresponding camera motion parameters, compute the region's changed position and size in the next frame. Substituting the coordinates of each point on the region boundary into the camera model equations yields that point's motion velocity, and hence its position in the next frame.
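A minimal sketch of this propagation step (the rectangle and pan parameters are hypothetical): each boundary point is displaced by the motion (u, v) that the fitted affine model predicts at that point.

```python
def advance_region(boundary, a):
    """Move each boundary point (x, y) by the motion (u, v) predicted
    at that point by the six-parameter affine camera model.
    a = [a0, a1, a2, a3, a4, a5] for one frame transition."""
    moved = []
    for x, y in boundary:
        u = a[0] * x + a[1] * y + a[2]
        v = a[3] * x + a[4] * y + a[5]
        moved.append((x + u, y + v))
    return moved

# Pure-translation example: a camera pan of (+2, -1) pixels per frame.
rect = [(10, 10), (50, 10), (50, 30), (10, 30)]
pan = [0.0, 0.0, 2.0, 0.0, 0.0, -1.0]
print(advance_region(rect, pan))  # [(12.0, 9.0), (52.0, 9.0), (52.0, 29.0), (12.0, 29.0)]
```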
3. Save the interaction information of the video
The information of each hot spot interaction region is saved in a separate data file and comprises:
1) the start and end time of the region's activation;
2) the region's position and size in each frame while active;
3) the region's additional content: for text, its color, size, hyperlink, etc.; for an image, all pixel information, or, if read directly from a file, the file's path;
4) the region's response-target information, which can be an image, audio, video, web page or other file, recorded as a file path.
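The patent leaves the concrete layout of the data file open. As one possible encoding (an assumption, not the patent's format), the four items above map naturally onto a small JSON record:

```python
import json

# Hypothetical encoding of one hot spot region's annotation record;
# field names, times and the polygon values are illustrative only.
region = {
    "activation": {"start": 12.0, "end": 27.5},   # seconds
    "frames": {                                    # per-frame boundary polygon
        "300": [[120, 80], [220, 80], [220, 160], [120, 160]],
        "301": [[122, 79], [222, 79], [222, 159], [122, 159]],
    },
    "content": {"type": "text", "text": "Yue Wangmiao",
                "color": "#FFFFFF", "size": 14, "href": None},
    "target": {"type": "video", "path": "clips/yuewangmiao.mpg"},
}

with open("video.hotspots.json", "w", encoding="utf-8") as f:
    json.dump([region], f, ensure_ascii=False, indent=2)
```

Keeping this record outside the video stream is what makes the scheme independent of the source video's encoding.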
4. Export a standard SMIL file
The SMIL language can arrange audio, video and other content in time and space and supports adding a layer of link regions, so a multimedia file with the same interactive effect can be written in SMIL and played in any player that supports the SMIL standard. In SMIL, the <region> tag specifies the display position and size of a hot spot region's additional content, the <text> tag displays text content, and the <img> tag displays image content. The <anchor> tag defines a hot spot region: its begin attribute gives the start time of the region's activation, its end attribute gives the end time, and its href attribute gives the path of the response target file. With these constructs the system can generate the corresponding SMIL file from its own interaction data file.
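A sketch of what one exported hot spot might look like (the region geometry, times and paths are hypothetical; attribute usage follows SMIL 2.1, where anchors carry begin/end timing and an href):

```python
def smil_fragment(region_id, left, top, width, height, text, begin, end, href):
    """Render one hot spot as a SMIL fragment: a <region> for layout,
    a <text> element for the overlay caption, and an <anchor> linking
    the active time span to the response target. Values illustrative."""
    return f"""\
<layout>
  <region id="{region_id}" left="{left}" top="{top}"
          width="{width}" height="{height}"/>
</layout>
<text src="caption.txt" region="{region_id}" alt="{text}">
  <anchor begin="{begin}s" end="{end}s" href="{href}"/>
</text>"""

print(smil_fragment("hotspot1", 120, 80, 100, 80,
                    "Yue Wangmiao", 12, 27.5, "clips/yuewangmiao.mpg"))
```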
The steps for playing an interactive video with hot spot regions according to the invention are as follows:
1. Read the interaction data file corresponding to the video
From the interaction data file, read each hot spot interaction region's activation start and end time, its position and size in each frame while active, its additional content, and the path of its response target file.
2. Display the hot spot interaction regions
When displaying each frame, find all hot spot regions currently active and display each region's additional content in its designated form. To make the search for active regions efficient, a linked list indexed by frame number can be built while reading the interaction data file, recording after each frame number the hot spot regions active at that moment.
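The frame-number index can be sketched as follows (a dictionary of lists stands in for the linked list; the region records and frame rate are illustrative):

```python
from collections import defaultdict

def build_frame_index(regions, fps=25):
    """Map each frame number to the list of regions active at that frame.
    Each region is (start_s, end_s, payload). The patent describes a
    linked list per frame number; a dict of lists is the Python analogue,
    giving O(1) lookup of the active regions while a frame is displayed."""
    index = defaultdict(list)
    for start_s, end_s, payload in regions:
        for frame in range(int(start_s * fps), int(end_s * fps) + 1):
            index[frame].append(payload)
    return index

regions = [(1.0, 2.0, "Yue Wangmiao"), (1.5, 3.0, "Li Jizhu")]
idx = build_frame_index(regions, fps=10)
print(idx[12])  # frame 12 = t=1.2s -> ['Yue Wangmiao']
print(idx[18])  # frame 18 = t=1.8s -> ['Yue Wangmiao', 'Li Jizhu']
```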
3. Handle the user's interaction
When the user clicks the mouse on the playback picture, all currently active hot spot regions are traversed to determine which region contains the click location, and that region's response target is triggered. Whether a point lies inside a region is decided with the crossing-number method, a basic algorithm in computer graphics: a horizontal ray is cast from the query point P, the number of intersections between the ray and the region boundary is counted, and that count decides whether the point is inside the region.
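The crossing-number test can be sketched as a standard even-odd ray-casting routine (the polygon below is illustrative):

```python
def point_in_polygon(p, polygon):
    """Even-odd (crossing number) test: cast a horizontal ray from p to
    the right and count crossings with the polygon's edges; an odd count
    means the point is inside."""
    px, py = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does edge (x1,y1)-(x2,y2) straddle the ray's y-coordinate?
        if (y1 > py) != (y2 > py):
            # x-coordinate where the edge crosses the line y = py
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:          # crossing lies to the right of p
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon((5, 5), square))   # True
print(point_in_polygon((15, 5), square))  # False
```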
Embodiment 1
As shown in Fig. 6, this method and system are applied in a real-scene virtual tourism system. If the user is interested in a scenic spot seen during the virtual tour and wants a closer look, this can be achieved by interacting with the scenic-spot hot spot region on the video. The concrete steps of this example are described in detail below:
(1) As shown in Fig. 5, hot spot interaction region information is added to the tour-guide video. Playback is paused at the position where the scenic spot "Yue Wangmiao" first appears, a rectangular hot spot interaction region is outlined with the mouse, and the additional text content "Yue Wangmiao" is added, giving a brief introduction of the "Yue Wangmiao" scenic spot. The region's response target file is then added, i.e. the path of the video for viewing "Yue Wangmiao" in detail. Finally the start and end time of the region's activation are specified.
(2) The position and size of the "Yue Wangmiao" hot spot region in the other frames of the activation time span are generated automatically.
(3) The information of the hot spot interaction region marked in step (1) is saved in a separate data file, comprising the region's activation start and end time, its position and size in each frame while active, the additional text content "Yue Wangmiao", and the path of the corresponding response target file.
(4) As shown in Fig. 6, when the user browses to the road section at the "Yue Wangmiao" scenic spot, the system reads the interaction data file corresponding to that section.
(5) During playback, the currently active hot spot regions are detected in real time. As the user approaches the "Yue Wangmiao" scenic spot, the hot spot region is activated, and additional content such as the text "Yue Wangmiao" and a brief introduction picture of the spot is displayed at the designated position on the picture.
(6) While the hot spot region is active, if the user clicks it, the playback window at the lower right responds and begins playing the detailed introduction video of the "Yue Wangmiao" scenic spot.
Embodiment 2
As shown in Fig. 9, if the user does not recognize a person in the picture during playback but wants to learn more, this can be achieved by interacting with the person's hot spot region on the video. The concrete steps of this example are described in detail below:
(1) As shown in Fig. 8, hot spot interaction region information is added to the video. Playback is paused at the position where the person "Li Jizhu" first appears, a rectangular hot spot interaction region is outlined with the mouse, and the additional text content "Taiwan celebrity Li Jizhu" is added as a prompt shown to the viewer. The region's response target file is then added, i.e. the path of the pictorial profile of "Li Jizhu". Finally the start and end time of the region's activation are specified.
(2) The position and size of the "Li Jizhu" hot spot region in the other frames of the activation time span are generated automatically.
(3) The information of the hot spot interaction region marked in step (1) is saved in a separate data file, comprising the region's activation start and end time, its position and size in each frame while active, the additional text content "Li Jizhu", and the path of the corresponding response target file.
(4) As shown in Fig. 9, during playback the currently active hot spot regions are detected in real time. When the person "Li Jizhu" appears, the hot spot region is activated and additional content such as the text "Taiwan celebrity Li Jizhu" is displayed at the designated position on the picture to identify the person.
(5) While the hot spot region is active, if the user wants to learn more about "Li Jizhu" and clicks the region, the profile of "Li Jizhu" is displayed in the playback picture.
The foregoing description serves only to illustrate and describe the method and system for making and playing interactive video with hot spot regions. It is not exhaustive, nor does it limit the invention to the forms shown and described; obviously, many modifications and variations are possible. Modifications and variations obvious to those skilled in the art are also included within the scope of the invention as defined by the appended claims.

Claims (5)

1. A method for making and playing interactive video with hot spot regions, characterized by:
1) addition of hot spot interaction regions to the video
The user specifies the position and size of hot spot interaction regions in selected frames, determines each region's activation time span, and attaches additional content and response-target information to each region; from the camera motion parameters computed for every frame of the video, the position and size of each region in the other frames of its activation time span are extrapolated automatically;
2) saving of the interactive video
The annotation information of the hot spot interaction regions in the video is saved as a separate data file, or the video and the interaction data are combined and the annotation information and interaction data are exported as a standard SMIL file;
3) playing of the interactive video
During playback, the hot spot region information is read from the corresponding annotation file; whenever the current time falls within a region's activation time span, the region's additional annotation content is displayed in the video, and if the user clicks inside the region, playback jumps directly to the region's response target;
The step of automatically extrapolating each region's position and size in the other frames of the activation time span comprises:
(1) computing the camera motion parameters in each frame of the video;
(2) from the region's position and size in the previous frame and the corresponding camera motion parameters, computing the region's changed position and size in the next frame;
The step of computing the camera motion parameters in each frame of the video comprises:
(1) reading the motion vectors of all frames of the video from the compressed video file;
(2) unifying the motion vectors of I-frames, B-frames and P-frames and of inter-coded, intra-coded and hybrid-coded macroblocks, and normalizing the motion vectors;
(3) removing part of the noise in the motion vectors according to each motion vector's neighborhood consistency and smoothness of variation;
(4) first clustering the motion vectors of all frames of the video, judging from the distribution of each cluster whether its vectors belong to camera motion or object motion, and keeping only the clusters belonging to camera motion, thereby further optimizing the motion vectors;
(5) setting up a hypothesized system of camera parameter equations, substituting the optimized motion vectors into the system, and solving for the unknown parameters with standard numerical linear algebra methods.
2. The method for making and playing interactive video with hot spot regions according to claim 1, characterized in that, for the hot spot interaction regions specified by the user in selected frames: the shape of a region is any closed geometric figure within the bounds of a single video frame; the regions are specified freely by the user, or generated automatically with system assistance.
3. The method for making and playing interactive video with hot spot regions according to claim 1, characterized in that, for the additional content and response-target information of a region: the additional content of an interaction region in the video is text or an image; when the user clicks a hot spot interaction region, the response target is audio, video or a web page.
4. The method for making and playing interactive video with hot spot regions according to claim 1, characterized in that, for the activation time span of a region: only within the specified time span is a hot spot region active and able to display its additional content and response-target information.
5. The method for making and playing interactive video with hot spot regions according to claim 1, characterized in that the annotation information comprises: 1) the start and end time of the region's activation; 2) the region's position and size in each frame while active; 3) the region's additional content: for text, its color, size and hyperlink; for an image, all pixel information, or, if the additional content is read directly from a file, the file's path; 4) the region's response-target information, which is an image, audio, video or web page, recorded as a file path.
CN 200610053953 2006-10-25 2006-10-25 Method for making and playing interactive video frequency with heat spot zone Expired - Fee Related CN100471255C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200610053953 CN100471255C (en) 2006-10-25 2006-10-25 Method for making and playing interactive video frequency with heat spot zone

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200610053953 CN100471255C (en) 2006-10-25 2006-10-25 Method for making and playing interactive video frequency with heat spot zone

Publications (2)

Publication Number Publication Date
CN1946163A CN1946163A (en) 2007-04-11
CN100471255C true CN100471255C (en) 2009-03-18

Family

ID=38045352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200610053953 Expired - Fee Related CN100471255C (en) 2006-10-25 2006-10-25 Method for making and playing interactive video frequency with heat spot zone

Country Status (1)

Country Link
CN (1) CN100471255C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10204417B2 (en) 2016-05-10 2019-02-12 International Business Machines Corporation Interactive video generation

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101571812B (en) * 2008-04-30 2012-08-29 国际商业机器公司 Visualization method and device for dynamic transition of objects
CN101753913B (en) * 2008-12-17 2012-04-25 华为技术有限公司 Method and device for inserting hyperlinks in video, and processor
JP5343676B2 (en) * 2009-04-08 2013-11-13 ソニー株式会社 Image processing apparatus, image processing method, and computer program
CN102572601B (en) * 2010-09-21 2014-07-16 北京奇艺世纪科技有限公司 Display method and device for video information
CN101950578B (en) * 2010-09-21 2012-11-07 北京奇艺世纪科技有限公司 Method and device for adding video information
CN102523512A (en) * 2011-11-30 2012-06-27 江苏奇异点网络有限公司 Video output method with operable implicit content
CN103428539B (en) * 2012-05-15 2017-08-22 腾讯科技(深圳)有限公司 The dissemination method and device of a kind of pushed information
JP5659307B2 (en) * 2012-07-17 2015-01-28 パナソニックIpマネジメント株式会社 Comment information generating apparatus and comment information generating method
CN102868919B (en) * 2012-09-19 2016-03-30 上海基美文化传媒股份有限公司 Interactive play equipment and player method
CN103780973B (en) * 2012-10-17 2017-08-04 三星电子(中国)研发中心 Video tab adding method and device
CN107995533B (en) * 2012-12-08 2020-09-18 周成 Method for popping out video of tracking object in video
CN103702222A (en) * 2013-12-20 2014-04-02 惠州Tcl移动通信有限公司 Interactive information generation method and video file playing method for mobile terminal
CN103986980B (en) * 2014-05-30 2017-06-13 中国传媒大学 A kind of hypermedia editing method and system
CN104967908B (en) * 2014-09-05 2018-07-24 腾讯科技(深圳)有限公司 Video hotspot labeling method and device
CN105657564B (en) * 2015-12-30 2018-04-20 广东欧珀移动通信有限公司 The method for processing video frequency and processing system for video of browser
CN106792157A (en) * 2016-12-13 2017-05-31 广东中星电子有限公司 A kind of information labeling based on video and display methods and system
CN110909037B (en) * 2019-10-09 2024-02-13 中国人民解放军战略支援部队信息工程大学 Frequent track mode mining method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10204417B2 (en) 2016-05-10 2019-02-12 International Business Machines Corporation Interactive video generation
US10546379B2 (en) 2016-05-10 2020-01-28 International Business Machines Corporation Interactive video generation

Also Published As

Publication number Publication date
CN1946163A (en) 2007-04-11

Similar Documents

Publication Publication Date Title
CN100471255C (en) Method for making and playing interactive video frequency with heat spot zone
US10755745B2 (en) Automatic generation of video from structured content
KR101046749B1 (en) Encoding method and apparatus and decoding method and apparatus
CN105745938B (en) Multi-angle of view audio and video interactive playback
Hamakawa et al. Object composition and playback models for handling multimedia data
US20120198412A1 (en) Software cinema
US20080184120A1 (en) Concurrent presentation of video segments enabling rapid video file comprehension
CN103986980B (en) A kind of hypermedia editing method and system
CN1454430A (en) Embedding re-usable object-based product information in audiovisual programs for non-intrusive viewer driven usage
CN101401130B (en) Apparatus and method for providing a sequence of video frames, apparatus and method for providing a scene model, scene model, apparatus and method for creating a menu structure and computer program
CN115661420A (en) Design and implementation method of POLY VR editor system
Daras et al. MPEG-4 authoring tool for the composition of 3D audiovisual scenes
Van Rijsselbergen et al. Semantic Mastering: content adaptation in the creative drama production workflow
Daras et al. An MPEG-4 tool for composing 3D scenes
Jamil et al. Overview of JPEG Snack: A Novel International Standard for the Snack Culture
Hardman et al. Document Model Issues for Hypermedia.
Rutledge et al. Evaluating SMIL: three user case studies
Hu Impact of VR virtual reality technology on traditional video advertising production
Sadallah et al. Hypervideo and Annotations on the Web
KR102335096B1 (en) System for providing video production service compositing figure video and ground video
KR101482099B1 (en) Method and apparatus for encoding/decoding Multi-media data
A. Grahn. The media9 Package, v1.14
Messina et al. Making second screen sustainable in media production: the bridget approach
Tran-Thuong et al. Structured Media for Authoring Multimedia Documents
Uribe et al. New usability evaluation model for a personalized adaptive media search engine based on interface complexity metrics

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090318

Termination date: 20121025