WO2008028334A1 - Method and device for adaptive video presentation - Google Patents
Method and device for adaptive video presentation
- Publication number
- WO2008028334A1 (PCT/CN2006/002261)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- display
- scene
- video
- size
- extracted window
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4092—Image resolution transcoding, e.g. by using client-server architectures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/957—Browsing optimisation, e.g. caching or content distillation
- G06F16/9577—Optimising the visualization of content, e.g. distillation of HTML documents
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
- G11B27/105—Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
Definitions
- the present invention relates to a method and a device for video presentation, and more particularly to a method and a device for adaptive video presentation on a display with limited screen size.
- ACM MM '03, 2003, which introduces three browsing modes: a manual browsing method, a full-automatic browsing method and a semi-automatic browsing method.
- the present invention provides an adaptive video presentation solution for fully automatically presenting videos on display devices with small screen sizes, according to metadata information based on content analysis, in order to provide an optimal video viewing experience for users.
- the method comprises steps of determining a salient object group containing at least one salient object based on perceptual interest value of macroblocks for each frame of the video, extracting a window having a minimum size containing the salient object group for a scene of the video, characterized in that it further comprises steps of comparing size of the extracted window with the display size; and presenting at least a selected area of the extracted window containing at least a part of the salient object group for the scene on the display based on the result of the comparison step.
- if size of the extracted window is smaller than a predefined percentage of the display size, the extracted window is exposited on the display with an appropriate zoom-in operation; if size of the extracted window is larger than the predefined percentage of the display size and equal to or smaller than the display size, the extracted window is exposited directly on the display; and if size of the extracted window is larger than the display size, the extracted window is exposited on the display with an appropriate zoom-out operation.
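The three-way size comparison described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the threshold fraction `alpha` (the "predefined percentage") is an assumed value.

```python
def choose_display_operation(window_w, window_h, disp_w, disp_h, alpha=0.5):
    """Pick a presentation operation by comparing the extracted window
    size with the display size.

    alpha is the 'predefined percentage' of the display size; the value
    used here (50%) is an illustrative assumption, not from the patent.
    """
    if window_w <= alpha * disp_w and window_h <= alpha * disp_h:
        return "zoom-in"    # window much smaller than display: enlarge it
    if window_w <= disp_w and window_h <= disp_h:
        return "direct"     # window fits: exposit it as-is
    return "zoom-out"       # window exceeds display: shrink it to fit
```

For a 320x240 display, an 80x60 extracted window would be zoomed in, a 300x200 window shown directly, and a 640x480 window zoomed out.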
- the method further comprises steps of calculating a weighted average motion vector length MV_act of macroblocks inside frames for the scene of the video, comparing the weighted average motion vector length MV_act of the scene with a predefined threshold T_motion, determining whether the weighted average motion vector length MV_act of the scene is less than the predefined threshold T_motion, and presenting the extracted window for the scene of the video on the display in a normal exhibition mode corresponding to a low motion status in case the weighted average motion vector length MV_act of the scene is determined to be less than the predefined threshold T_motion; or else presenting the extracted window for the scene of the video on the display in a true motion exhibition mode corresponding to a fast motion status.
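The motion-status classification from MV_act against T_motion can be sketched like this. The weighting scheme and the threshold value are assumptions for illustration; the patent only specifies a weighted average motion vector length compared with a predefined threshold.

```python
def classify_motion(motion_vectors, weights, t_motion=4.0):
    """Classify a scene as low or fast motion from the weighted average
    motion-vector length MV_act of its macroblocks, compared with the
    predefined threshold T_motion.

    motion_vectors: list of (dx, dy) per macroblock
    weights: per-macroblock weights (e.g. saliency-based; assumed here)
    t_motion: illustrative threshold value, not from the patent
    """
    total_w = sum(weights)
    mv_act = sum(w * (dx * dx + dy * dy) ** 0.5
                 for (dx, dy), w in zip(motion_vectors, weights)) / total_w
    return ("low", mv_act) if mv_act < t_motion else ("fast", mv_act)
```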
- the extracted window containing all the salient object groups for the whole scene of the video is presented on the display with a weighted average gravity point of the salient object groups for the whole scene defined as a still focus centre of the extracted window, in such a way that the true motion exhibition mode can be achieved in the fast motion status.
- in the normal exhibition mode corresponding to the low motion status, the method further comprises steps of comparing the length of the scene of the video with a predefined minimum perceptual time, wherein if the length of the scene is less than the predefined minimum perceptual time, it further comprises steps of determining the number of salient objects existing in the scene, presenting the extracted window on the display with a gravity flowing show operation in case only one salient object exists, or else presenting the extracted window directly on the display in case multiple salient objects exist; and wherein if the length of the scene is no less than the predefined minimum perceptual time, it further comprises a step of presenting the extracted window on the display with the gravity flowing show operation.
- FIG. 1 is a schematic view of a first embodiment of the system framework using the method in accordance with the present invention
- Fig. 2 is a schematic view of a second embodiment of the system framework using the method in accordance with the present invention.
- Fig. 3 is a schematic view of a third embodiment of the system framework using the method in accordance with the present invention.
- Fig. 4 is a schematic view of salient objects inside one frame
- Fig. 5 is a schematic view of salient object group inside one frame
- Fig. 6 is a flowchart of a sample adaptive video presentation solution.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
- the present invention is directed to a method and a device for adaptive video presentation (AVP) for a better viewing experience, using stream-embedded metadata based on content analysis information.
- AVP adaptive video presentation
- decoder end solution
- joint encoder-decoder end solution
- encoder end solution
- the first type of the AVP framework solution leaves all the region processing and display work in the decoder end, where only a preanalysis module 11a is provided at an encoder end 10a, while other functional blocks are provided at a decoder end 20a.
- the pre-analysis module 11a includes operations of scene change detection, attention area extraction and content/motion analysis.
- the other four functional blocks include an object group classification (OGC) module 12a, which classifies objects/object groups based on the scene and attention mask information from the pre-analysis module 11a; a property calculation (PC) module 13a, which calculates statistics (e.g. gravity points);
- a still focus decision (SFD) module 14a, which decides the candidate focus area in a special image based on the statistics information derived from the PC module 13a (e.g. gravity points) and other metadata information from the pre-analysis module 11a; and a spatial-temporal processing module 15a, which performs spatial-temporal processing to guarantee that the video is smooth and acceptable and to eliminate artifacts.
- SFD still focus decision
- a pre-analysis module 11b, an object group classification module 12b, a property calculation module 13b and a still focus decision module 14b are included in the encoder end 10b to generate the candidate focus area, and a spatial/temporal processing module 15b is included in a decoder end 20b to perform optimal display based on the candidate focus area, with consideration of the temporal and spatial quality tradeoff.
- AVP Salient Object
- a Salient Object is a set of attention-area MacroBlocks (MBs) connected to each other, as shown by the grey MB areas in Fig. 4.
- the salient objects are separated by the non-attention MBs, which are denoted by white MBs.
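The grouping of attention MBs into salient objects described above can be sketched as a connected-component labeling pass over a binary attention mask. The choice of 4-connectivity is an assumption for illustration; the patent only states that attention MBs connected to each other form one salient object.

```python
from collections import deque

def extract_salient_objects(attention_mask):
    """Group attention macroblocks into salient objects via 4-connected
    component labeling over a binary attention mask (one entry per MB).
    Returns a list of objects, each a list of (row, col) MB coordinates.
    """
    rows, cols = len(attention_mask), len(attention_mask[0])
    seen = [[False] * cols for _ in range(rows)]
    objects = []
    for r in range(rows):
        for c in range(cols):
            if attention_mask[r][c] and not seen[r][c]:
                queue, blob = deque([(r, c)]), []
                seen[r][c] = True
                while queue:  # breadth-first flood fill of one object
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and attention_mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                objects.append(blob)
    return objects
```

Applied to a small mask with two separated grey regions, this yields two salient objects, matching the separation by white (non-attention) MBs.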
- a Salient Object Group contains all salient objects in the current frame. It can be described by the following parameters:
- a video consists of a collection of video frames, and the segmentation results can be frame, shot, scene, and video, with granularity from small to large. A shot is a sequence of frames recorded in a single camera operation.
- a scene is a collection of consecutive shots that have semantic similarity in object, person, space, and time. It is also defined to mark the switch of salient objects between two frames.
- the display scheme inside a scene should be definite and usually stays consistent.
- configuration parameters are parameters that help in making decisions of adaptive display mode selection, such as display or not, scaling down or not, summarizing or not, etc. Four conditions are defined to assist the video viewing path programming.
- MPT (minimum perceptual time) is used as a threshold for the fixation duration when viewing a salient object. It is also used as the threshold of the weight of information value when watching a scene. If a salient object does not stay on the screen longer than the threshold MPT_SO, it may not be perceptible enough for users to catch the information. If a scene does not last longer than the threshold MPT_SC, only the most significant portion of it may be perceptible enough.
- MPT_SO and MPT_SC can be selected according to different application scenarios and human visual properties; they are usually set to 1/3 second and 2 seconds respectively in our real application.
- the MPS is used as a threshold of the minimum spatial area of a salient object. Normally, if the size of a salient object SO_i is less than the MPS threshold, the salient object SO_i should be marked as a non-attention object or be merged into its neighbouring attention object. But the MPS threshold is not always correct, since a salient object with a smaller spatial area may carry the most important information, in which case it cannot be merged or unmarked. So an additional configuration parameter, Weight of Information (introduced below), can be used. Usually the MPS can be set to 5 MacroBlocks, or to 5%-10% of the largest salient object size.
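The MPS filtering just described, with the information-weight override, can be sketched as below. The weight cutoff `w_keep` is an assumed parameter; the patent only says that a highly weighted small object should not be merged or unmarked.

```python
def filter_by_mps(salient_objects, weights, mps=5, w_keep=0.8):
    """Drop (mark as non-attention) salient objects smaller than the MPS
    threshold, unless their information weight marks them as too
    important to discard.

    salient_objects: list of MB-coordinate lists (size = MB count)
    weights: information weight per object (WSO_i)
    mps: minimum perceptual size in MBs (5 MBs, as suggested above)
    w_keep: weight above which a small object is kept anyway
            (illustrative assumption)
    """
    kept = []
    for obj, w in zip(salient_objects, weights):
        if len(obj) >= mps or w >= w_keep:
            kept.append(obj)
    return kept
```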
- WSO_i can be defined by the semantic importance of each salient object, which depends on the content mode, a third party's appointed semantic information, the specific user's experience, etc. Furthermore, the gravity of the salient object group is then re-calculated.
- the Operation Set includes all the possible operations needed for the requirements of Adaptive Video Presentation. Currently six operations are defined, as Table I shows:
- the adaptive video presentation operations can be classified into two categories, low motion exhibition and true motion exhibition, respectively corresponding to the low motion status and the high motion status, which can be distinguished by the weighted average motion vector length MV_act of all MacroBlocks inside one frame.
- a threshold T_motion can be selected to do the classification: if MV_act is less than T_motion, the low motion status is determined; or else the fast motion status is determined.
- the first one is to directly exposit the salient objects or salient object groups on the display
- the second one is called the gravity flowing show, which controls the movement of the display area by following the movement of the gravity point of the salient object group; usually, tolerance of gravity change (TGC) parameters are used to keep the display strategy smooth.
- TGC tolerance of gravity change
- the third one is basically a pan operation that takes the saliency distribution into consideration to display the salient area in the limited display window, especially in case a large saliency object or multiple saliency objects exist.
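The gravity flowing show with its TGC dead-band can be sketched as follows: the window centre tracks the group's gravity point but ignores jitter below the tolerance. The TGC value and units here are illustrative assumptions.

```python
def gravity_flow(gravity_points, tgc=2.0):
    """Smooth the display-window centre so it follows the salient object
    group's gravity point, ignoring drift below the tolerance of gravity
    change (TGC). Returns the per-frame window centres.

    tgc: tolerance before the window is moved (illustrative value)
    """
    centres = [gravity_points[0]]
    for gx, gy in gravity_points[1:]:
        cx, cy = centres[-1]
        # move only when the gravity point drifts beyond the tolerance
        if abs(gx - cx) > tgc or abs(gy - cy) > tgc:
            centres.append((gx, gy))
        else:
            centres.append((cx, cy))
    return centres
```

A one-unit wobble leaves the window still, while a five-unit jump moves it, which is the smoothing behaviour the TGC parameters are meant to provide.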
- if MV_act is not less than the predefined threshold T_motion, the true motion exhibition is introduced to display the salient objects or salient object group.
- the gravity point of the OG (Object Group)
- if the gravity point moves forwards and backwards, a weighted average gravity point for the scene of the video will be used as a still focus centre of the display: the middle of the forward and backward gravity points is shown approximately in the centre of the display, and therefore the presentation of the video can be viewed in the true motion exhibition mode on the display, i.e.
- the viewer can see the OG moving forwards and backwards on the display window.
- if the gravity point moves rapidly along a certain direction, then the weighted average gravity point for the scene of the video will again be determined as the still focus centre of the display, and the viewer can see the OG moving from one side of the display window to the other.
- the video can be treated as an information gravity point flowing plane, in which different salient objects have different weights of importance of information, and the MBs inside each salient object have the same characteristics. Therefore, it is the gravity point, not the centre point, of the salient object or group that should be the centre of the display.
- the small display should focus on the area centred on the gravity point of the group or of a salient object, or progressively display the area by using the panning operation, depending on the density distribution of the information.
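The weighted gravity point of a salient object group, as described above, can be computed as a weight-averaged centroid. The use of per-object MB centroids is an assumption; the patent treats all MBs inside a salient object as having the same characteristics, which this sketch relies on.

```python
def gravity_point(salient_objects, weights):
    """Weighted gravity point of a salient object group: each object
    contributes its MB centroid, weighted by its information weight
    WSO_i, so the display centres on the information gravity rather
    than the geometric centre.

    salient_objects: list of MB-coordinate lists [(row, col), ...]
    weights: information weight per object
    """
    total_w = sum(weights)
    gx = gy = 0.0
    for obj, w in zip(salient_objects, weights):
        cy = sum(r for r, _ in obj) / len(obj)  # object centroid row
        cx = sum(c for _, c in obj) / len(obj)  # object centroid column
        gx += w * cx
        gy += w * cy
    return (gx / total_w, gy / total_w)
```

Two single-MB objects at columns 0 and 10 with weights 1 and 3 give a gravity point pulled three-quarters of the way toward the heavier object, not the geometric midpoint.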
- the STP (spatial-temporal processing) module is the most important module in the AVP framework. Optimal spatial-temporal operations are taken in this module to guarantee a smooth and acceptable video viewing experience.
- Table II demonstrates a sample of AVP operation decisions; of course, other types of combinations can be considered according to the detailed requirements of the real application.
- DS means display size of the corresponding display device.
- Fig. 6 demonstrates the flowchart of one exemplary scheme for decisions of the adaptive video presentation solution in accordance with the present invention.
- in step 100, the motion status of the scene of the video is determined by comparing the weighted average motion vector length for frames, MV_act, with the predefined threshold T_motion. In case MV_act is less than the predefined threshold T_motion, the next step is step 200; or else step 400.
- in step 220, it is determined whether the RZG is equal to or larger than the DS. If the RZG is less than the DS but larger than DS/n, then in step 230 the extracted window with the RZG is directly exposited on the display. If the RZG is larger than the DS, then in step 240 it is determined whether the length of scene (LOS) is less than the minimum perceptual time (MPT). Then in step 250, it is determined whether the salient object group contains only one salient object.
- in a condition where only one salient object exists and the LOS is less than the MPT, the video is presented on the display with a gravity flowing show operation along with an appropriate zoom-out operation, in step 260.
- in case multiple salient objects exist and the LOS is less than the MPT, the video is, in step 270, directly exposited on the display, since in this condition the pan operation is forbidden, to avoid frequent changing of the presentation operation and so smooth the viewing experience.
- otherwise, the video will be presented in the gravity flowing show operation along with the saliency-driven pan operation, without zoom-out.
- the decision scheme of true motion exhibition mode is made through steps 400 to 440.
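The low-motion branch of the Fig. 6 decision flow can be sketched as one function. The divisor `n` (the "predefined percentage" bound DS/n), the returned labels, and the flattening of steps 200-330 into a single routine are illustrative assumptions; sizes are treated as scalars for simplicity.

```python
def avp_decision(mv_act, t_motion, rzg, ds, los, mpt, n_objects, n=2):
    """Sketch of the Fig. 6 decision flow.

    mv_act/t_motion: weighted average MV length vs. motion threshold
    rzg: size of the extracted window; ds: display size
    los: length of scene; mpt: minimum perceptual time
    n_objects: number of salient objects in the group
    """
    if mv_act >= t_motion:
        return "true motion exhibition"        # fast motion: steps 400-440
    if rzg <= ds / n:
        return "zoom-in"                       # window much smaller than display
    if rzg <= ds:
        return "direct exposition"             # step 230: window fits
    if los < mpt:                              # step 240: short scene
        if n_objects == 1:                     # step 250
            return "gravity flowing show + zoom-out"   # step 260
        return "direct exposition"             # step 270: pan forbidden
    return "gravity flowing show + saliency-driven pan"
```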
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Controls And Circuits For Display Device (AREA)
- Studio Circuits (AREA)
- Transforming Electric Information Into Light Information (AREA)
Priority Applications (8)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2006/002261 WO2008028334A1 (en) | 2006-09-01 | 2006-09-01 | Method and device for adaptive video presentation |
| US12/310,461 US8605113B2 (en) | 2006-09-01 | 2007-09-03 | Method and device for adaptive video presentation |
| KR1020097004095A KR101414669B1 (ko) | 2006-09-01 | 2007-09-03 | 적응형 비디오 표현을 위한 방법 및 디바이스 |
| EP07800849.7A EP2057531A4 (en) | 2006-09-01 | 2007-09-03 | Method and device for adaptive video presentation |
| JP2009525901A JP2010503006A (ja) | 2006-09-01 | 2007-09-03 | 適応的なビデオ呈示のための方法および装置 |
| PCT/CN2007/002632 WO2008040150A1 (en) | 2006-09-01 | 2007-09-03 | Method and device for adaptive video presentation |
| CN2007800318436A CN101535941B (zh) | 2006-09-01 | 2007-09-03 | 自适应视频呈现的方法和装置 |
| JP2014041700A JP2014139681A (ja) | 2006-09-01 | 2014-03-04 | 適応的なビデオ呈示のための方法および装置 |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2006/002261 WO2008028334A1 (en) | 2006-09-01 | 2006-09-01 | Method and device for adaptive video presentation |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2008028334A1 (en) | 2008-03-13 |
Family
ID=39156807
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2006/002261 Ceased WO2008028334A1 (en) | 2006-09-01 | 2006-09-01 | Method and device for adaptive video presentation |
| PCT/CN2007/002632 Ceased WO2008040150A1 (en) | 2006-09-01 | 2007-09-03 | Method and device for adaptive video presentation |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2007/002632 Ceased WO2008040150A1 (en) | 2006-09-01 | 2007-09-03 | Method and device for adaptive video presentation |
Country Status (6)
| Country | Link |
|---|---|
| US (1) | US8605113B2 (en) |
| EP (1) | EP2057531A4 (en) |
| JP (2) | JP2010503006A (en) |
| KR (1) | KR101414669B1 (en) |
| CN (1) | CN101535941B (en) |
| WO (2) | WO2008028334A1 (en) |
Families Citing this family (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2009066783A1 (en) * | 2007-11-22 | 2009-05-28 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method, image display system, and computer program |
| JP5182202B2 (ja) * | 2009-04-14 | 2013-04-17 | ソニー株式会社 | 情報処理装置、情報処理方法及び情報処理プログラム |
| JP5489557B2 (ja) * | 2009-07-01 | 2014-05-14 | パナソニック株式会社 | 画像符号化装置及び画像符号化方法 |
| JP5421727B2 (ja) * | 2009-10-20 | 2014-02-19 | キヤノン株式会社 | 画像処理装置およびその制御方法 |
| US8358691B1 (en) * | 2009-10-30 | 2013-01-22 | Adobe Systems Incorporated | Methods and apparatus for chatter reduction in video object segmentation using a variable bandwidth search region |
| EP2530642A1 (en) * | 2011-05-31 | 2012-12-05 | Thomson Licensing | Method of cropping a 3D content |
| US9300933B2 (en) * | 2013-06-07 | 2016-03-29 | Nvidia Corporation | Predictive enhancement of a portion of video data rendered on a display unit associated with a data processing device |
| KR101724555B1 (ko) | 2014-12-22 | 2017-04-18 | 삼성전자주식회사 | 부호화 방법 및 장치와 복호화 방법 및 장치 |
| US11115666B2 (en) | 2017-08-03 | 2021-09-07 | At&T Intellectual Property I, L.P. | Semantic video encoding |
| JP2019149785A (ja) * | 2018-02-28 | 2019-09-05 | 日本放送協会 | 映像変換装置及びプログラム |
| US11244450B2 (en) * | 2019-08-19 | 2022-02-08 | The Penn State Research Foundation | Systems and methods utilizing artificial intelligence for placental assessment and examination |
| CN110602527B (zh) * | 2019-09-12 | 2022-04-08 | 北京小米移动软件有限公司 | 视频处理方法、装置及存储介质 |
| US11640714B2 (en) * | 2020-04-20 | 2023-05-02 | Adobe Inc. | Video panoptic segmentation |
| CN113535105B (zh) * | 2021-06-30 | 2023-03-21 | 北京字跳网络技术有限公司 | 媒体文件处理方法、装置、设备、可读存储介质及产品 |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1529499A (zh) * | 2003-09-29 | 2004-09-15 | 上海交通大学 | 用于视频图像格式转换的运动自适应模块实现方法 |
| WO2004090812A1 (en) * | 2003-04-10 | 2004-10-21 | Koninklijke Philips Electronics N.V. | Spatial image conversion |
| US20050226538A1 (en) * | 2002-06-03 | 2005-10-13 | Riccardo Di Federico | Video scaling |
Family Cites Families (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5960126A (en) * | 1996-05-22 | 1999-09-28 | Sun Microsystems, Inc. | Method and system for providing relevance-enhanced image reduction in computer systems |
| US6108041A (en) * | 1997-10-10 | 2000-08-22 | Faroudja Laboratories, Inc. | High-definition television signal processing for transmitting and receiving a television signal in a manner compatible with the present system |
| GB2371459B (en) * | 2001-01-19 | 2005-05-04 | Pixelfusion Ltd | Image scaling |
| GB2382940A (en) | 2001-11-27 | 2003-06-11 | Nokia Corp | Encoding objects and background blocks |
| JP4153202B2 (ja) * | 2001-12-25 | 2008-09-24 | 松下電器産業株式会社 | 映像符号化装置 |
| US7263660B2 (en) * | 2002-03-29 | 2007-08-28 | Microsoft Corporation | System and method for producing a video skim |
| US7035435B2 (en) * | 2002-05-07 | 2006-04-25 | Hewlett-Packard Development Company, L.P. | Scalable video summarization and navigation system and method |
| US6928186B2 (en) * | 2002-06-21 | 2005-08-09 | Seiko Epson Corporation | Semantic downscaling and cropping (SEDOC) of digital images |
| JP2004140670A (ja) * | 2002-10-18 | 2004-05-13 | Sony Corp | 画像処理装置および方法、画像表示装置および方法、画像配信装置および方法、並びにプログラム |
| JP2005269016A (ja) * | 2004-03-17 | 2005-09-29 | Tama Tlo Kk | 選択的解像度変換装置 |
| JP2005292691A (ja) * | 2004-04-05 | 2005-10-20 | Matsushita Electric Ind Co Ltd | 動画像表示装置および動画像表示方法 |
| US7696988B2 (en) * | 2004-04-09 | 2010-04-13 | Genesis Microchip Inc. | Selective use of LCD overdrive for reducing motion artifacts in an LCD device |
| US7542613B2 (en) * | 2004-09-21 | 2009-06-02 | Sanyo Electric Co., Ltd. | Image processing apparatus |
| US7505051B2 (en) * | 2004-12-16 | 2009-03-17 | Corel Tw Corp. | Method for generating a slide show of an image |
| JP2006196960A (ja) * | 2005-01-11 | 2006-07-27 | Canon Inc | 動画データ受信装置 |
| US20060227153A1 (en) * | 2005-04-08 | 2006-10-12 | Picsel Research Limited | System and method for dynamically zooming and rearranging display items |
-
2006
- 2006-09-01 WO PCT/CN2006/002261 patent/WO2008028334A1/en not_active Ceased
-
2007
- 2007-09-03 JP JP2009525901A patent/JP2010503006A/ja active Pending
- 2007-09-03 US US12/310,461 patent/US8605113B2/en not_active Expired - Fee Related
- 2007-09-03 EP EP07800849.7A patent/EP2057531A4/en not_active Withdrawn
- 2007-09-03 CN CN2007800318436A patent/CN101535941B/zh not_active Expired - Fee Related
- 2007-09-03 KR KR1020097004095A patent/KR101414669B1/ko not_active Expired - Fee Related
- 2007-09-03 WO PCT/CN2007/002632 patent/WO2008040150A1/en not_active Ceased
-
2014
- 2014-03-04 JP JP2014041700A patent/JP2014139681A/ja active Pending
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050226538A1 (en) * | 2002-06-03 | 2005-10-13 | Riccardo Di Federico | Video scaling |
| WO2004090812A1 (en) * | 2003-04-10 | 2004-10-21 | Koninklijke Philips Electronics N.V. | Spatial image conversion |
| CN1529499A (zh) * | 2003-09-29 | 2004-09-15 | 上海交通大学 | 用于视频图像格式转换的运动自适应模块实现方法 |
Also Published As
| Publication number | Publication date |
|---|---|
| EP2057531A4 (en) | 2017-10-25 |
| JP2010503006A (ja) | 2010-01-28 |
| KR101414669B1 (ko) | 2014-07-03 |
| KR20090045288A (ko) | 2009-05-07 |
| US8605113B2 (en) | 2013-12-10 |
| JP2014139681A (ja) | 2014-07-31 |
| CN101535941A (zh) | 2009-09-16 |
| CN101535941B (zh) | 2013-07-03 |
| WO2008040150A1 (en) | 2008-04-10 |
| EP2057531A1 (en) | 2009-05-13 |
| US20090244093A1 (en) | 2009-10-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US8605113B2 (en) | Method and device for adaptive video presentation | |
| JP2010503006A5 (en) | | |
| EP2413597B1 (en) | Thumbnail generation device and method of generating thumbnail | |
| US6930687B2 (en) | Method of displaying a digital image | |
| Fan et al. | Looking into video frames on small displays | |
| KR101456652B1 (ko) | 비디오 인덱싱 및 비디오 시놉시스 방법 및 시스템 | |
| Luo et al. | Towards extracting semantically meaningful key frames from personal video clips: from humans to computers | |
| US20020051010A1 (en) | Method and apparatus for skimming video data | |
| CN102541494A (zh) | 一种面向显示终端的视频尺寸转换系统与方法 | |
| JP2006525755A (ja) | ビデオコンテンツを閲覧する方法及びシステム | |
| US20040264939A1 (en) | Content-based dynamic photo-to-video methods and apparatuses | |
| KR101318459B1 (ko) | 수신기 상에서 오디오비주얼 문서를 시청하는 방법 및이러한 문서를 시청하기 위한 수신기 | |
| JP4645356B2 (ja) | 映像表示方法、映像表示方法のプログラム、映像表示方法のプログラムを記録した記録媒体及び映像表示装置 | |
| JP2003061038A (ja) | 映像コンテンツ編集支援装置および映像コンテンツ編集支援方法 | |
| US20240214443A1 (en) | Methods, systems, and media for selecting video formats for adaptive video streaming | |
| EP1006464A2 (en) | Image retrieving apparatus performing retrieval based on coding information utilized for featured frame extraction or feature values of frames | |
| CN112949449A (zh) | 交错判断模型训练方法及装置和交错图像确定方法及装置 | |
| EP2071511A1 (en) | Method and device for generating a sequence of images of reduced size | |
| CN113762156B (zh) | 观影数据处理方法、装置及存储介质 | |
| JP4949307B2 (ja) | 動画像シーン分割装置および動画像シーン分割方法 | |
| US20090307725A1 (en) | Method for providing contents information in vod service and vod system implemented with the same | |
| CN114390222A (zh) | 适用于180度全景视频的切换方法、装置及存储介质 | |
| JP2003298983A (ja) | 代表画像生成装置 | |
| Barreiro-Megino et al. | Visual tools for ROI montage in an Image2Video application |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 06775578 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 06775578 Country of ref document: EP Kind code of ref document: A1 |