WO2003049450A2 - Methods for multimedia content repurposing - Google Patents
Methods for multimedia content repurposing
- Publication number
- WO2003049450A2 (PCT/IB2002/005091)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- content
- constructs
- video
- video content
- images
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440236—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by media transcoding, e.g. video is transformed into a slideshow of still pictures, audio is converted into text
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/4104—Peripherals receiving signals from specially adapted client devices
- H04N21/4126—The peripheral being portable, e.g. PDAs or mobile phones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/20—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
- H04N19/21—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding with binary alpha-plane coding for video objects, e.g. context-based arithmetic encoding [CAE]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/40—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/436—Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
- H04N21/4363—Adapting the video stream to a specific local network, e.g. a Bluetooth® network
- H04N21/43637—Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44227—Monitoring of local network, e.g. connection or bandwidth variations; Detecting new devices in the local network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/462—Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
- H04N21/4621—Controlling the complexity of the content stream or additional data, e.g. lowering the resolution or bit-rate of the video stream for a mobile client with a small screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/8146—Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
Definitions
- The present invention is directed, in general, to multimedia content transcoding and, more specifically, to intra- and inter-modality multimedia content transcoding for use under the resource constraints of mobile devices.
- Multimedia content may take the form of one of the three distinct modalities of audio, visual, and textual, or any combination thereof.
- Content "re-purposing" refers generally to re-formatting, re-scaling, and/or transcoding content by changing the content representation within a given domain, such as: from video to video, video to still graphic images, or natural pictures to cartoons in the visual domain; from natural to synthetic sound in the audio domain; and from full text to summaries in the textual domain.
- Content may also be re-purposed by changing from one domain to another, such as from video to text or from audio to text.
- A primary use of content re-purposing is to enable the processing, storage, transmission, and display of multimedia information on mobile (e.g., wireless) devices. Such devices typically have very stringent limitations on processing, storage, transmission/reception, and display capabilities.
- With content re-purposing, a mobile device user may have constant access to multimedia information, with quality varying according to the circumstances, using the best available multimedia modality.
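The selection of a "best available modality" under device constraints can be sketched as a simple decision function. This is purely illustrative; the thresholds and the `choose_modality` name are assumptions, not taken from the patent.

```python
def choose_modality(bandwidth_kbps, screen_pixels):
    """Pick the richest modality a device can handle (illustrative thresholds)."""
    if bandwidth_kbps >= 384 and screen_pixels >= 320 * 240:
        return "video"
    if bandwidth_kbps >= 64 and screen_pixels >= 160 * 120:
        return "still-image"
    if bandwidth_kbps >= 8:
        return "audio"
    return "text"
```

A server could call this per request, falling back to progressively cheaper modalities as bandwidth or display area shrinks.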
- Current content re-purposing implementations are primarily speech-to-text, in which spoken sounds are analyzed into vowels and consonants for translation into text, employed, for example, in answering or response (dial-in) systems. Summarization, which deals almost exclusively with textual information, is also employed.
- The constructs are content operators that represent 2D image regions and/or 3D volumetric regions for objects within the sequence, characterized by various visual attributes, and are extracted from the video sequence by segmentation utilizing video processing techniques.
- The constructs are employed for intra- and inter-modality transformation to accommodate the resource constraints of the mobile device.
- Fig. 1 depicts a data processing system network employing content re-purposing according to one embodiment of the present invention;
- Figs. 2A through 2C illustrate intra-modality visual content re-purposing according to one embodiment of the present invention; and
- Fig. 3 illustrates inter-modality content re-purposing utilizing compact information according to one embodiment of the present invention.
- Fig. 1 depicts a data processing system network employing content re-purposing according to one embodiment of the present invention.
- The data processing system network 100 includes a server system 101 and a client system 102.
- The server 101 and client 102 are wirelessly coupled and interoperable.
- The server 101 may be any system, such as a desktop personal computer (PC), a laptop, a "super-computer," or any other system including a central processing unit (CPU), a local memory system, and a set of dedicated chips that perform specific signal processing operations such as convolutions.
- Data processing system network 100 may include any type of wireless communications network carrying video, data, voice/audio, or some combination thereof.
- Mobile (or fixed wirelessly connected) device 102 may be, for example, a telephone, a personal digital assistant (PDA), a computer, a satellite or terrestrial television and/or radio reception system, or a set-top box.
- Figs. 2A through 2C illustrate intra-modality visual content re-purposing according to one embodiment of the present invention.
- Server 101 is capable of re-purposing video sequences and/or static images for content delivered to client 102.
- A video sequence 201 is transformed into constructs by construct generator 202.
- The constructs describe elements of a compact video sequence representation, allowing (a) access to video sequence content information 203, (b) synthesis of the original input video sequence 204 (or creation of a new video sequence), and (c) compression of the video sequence 205.
- The constructs are each a compact representation of video content information, with a small number of constructs capable of representing long video sequences.
- A video sequence is represented by frames or fields in its uncompressed form or by video streams in its compressed form.
- The atomic units are pixels or fields (frames) in the uncompressed form and packets in the compressed form, with the representation being unstructured with respect to video content information.
- Video content information is mid-level visual content information given by "objects” such as two dimensional (2D) image regions or three dimensional (3D) volumetric regions characterized by various visual attributes (e.g., color, motion, shape).
- The information must be segmented from the video sequence, which requires the use of various image processing and/or computer vision techniques; for example, edge/shape segmentation, motion analysis (2D or 3D), or color segmentation may be employed for the segmentation process.
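Color segmentation, one of the techniques named above, can be sketched with a tiny k-means clustering over pixel colors. This is an illustrative stand-in, not the patent's specific algorithm; the deterministic initialization is an assumption made to keep the sketch reproducible.

```python
import numpy as np

def color_segment(image, k=2, iters=10):
    """Toy color segmentation: cluster pixel colors with a small k-means
    and return a per-pixel region label map."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(float)
    # deterministic, spread-out initialization instead of random seeding
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    for _ in range(iters):
        dist = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels.reshape(h, w)
```

Each connected run of identical labels corresponds to a candidate image region for construct extraction.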
- The compact representation of the segmented video content information is also important.
- Fig. 2B illustrates segmentation and compaction, in which the input video sequence 201 is processed by segmentation and compaction units 206 and 207 to generate compact video content operators 208.
- The content operators 208 form part of the video content construct set.
- Another type of video content construct is the layered mosaic 209, generated by: (i) determining the relative depth information between different mosaics; and (ii) incrementally combining the relative depth information with individual frames from the input source, partial mosaics, and content operators, as illustrated in Fig. 2C.
- The elements of Fig. 2C constitute video constructs which, together with the video content segmentation and compaction units 206 and 207, represent the construct generator 202 of Fig. 2A.
- The underlying scene model assumes that: the 3D world is composed of rigid objects; those objects are distributed at different depth levels, forming a scene background that is static (or at least slowly varying) while the foreground comprises a collection of independently moving (rigid) objects; each object has a local surface that may be approximated as a plane; and the overall scene illumination is uniform.
- Other suitable models include the 8-parameter perspective model. In any case, the result of registering image I_{k-1} to image I_k is the registered image Ĩ_{k-1}.
- Image velocity is estimated for the registered images Ĩ_{k-1} and I_k, utilizing one of many techniques, including energy-based and gradient-based methods.
- The resulting image velocity determines the pixel velocity of regions associated with 3D rigid objects moving in a uniform manner; these regions correspond to the foreground 3D objects and associated 2D image regions.
- Image regions are then segmented to determine the parts associated with the foreground objects, resulting in image regions that may be appropriately post-processed to fill in gaps, with associated Alpha maps.
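A gradient-based velocity estimate of the kind referenced above can be sketched with a whole-image Lucas–Kanade least-squares solve. This assumes a single dominant translation between the two registered frames; the function name and the global (rather than windowed) formulation are simplifying assumptions.

```python
import numpy as np

def lucas_kanade_velocity(frame0, frame1):
    """Estimate one translational velocity (vx, vy) between two registered
    frames via the gradient-based constraint Ix*vx + Iy*vy + It = 0,
    solved in the least-squares sense over all pixels."""
    f0 = frame0.astype(float)
    Ix = np.gradient(f0, axis=1)          # horizontal spatial gradient
    Iy = np.gradient(f0, axis=0)          # vertical spatial gradient
    It = frame1.astype(float) - f0        # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (vx, vy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return vx, vy
```

Regions whose local velocity departs from this dominant estimate are candidates for the independently moving foreground objects.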
- A compact set of shape templates may be generated via computational geometry techniques.
- A simple representation uses rectangular shape approximations.
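The rectangular shape approximation above reduces, in its simplest form, to the tightest bounding box around the non-zero region of an Alpha map; a minimal sketch:

```python
import numpy as np

def bounding_box(alpha):
    """Rectangular shape template: the tightest box (x0, y0, x1, y1)
    enclosing the non-zero region of an Alpha map."""
    ys, xs = np.nonzero(alpha)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```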
- Mosaics are extended planar images encoding non-redundant information about the video sequence; they come in layers according to the associated relative depth of world regions and are generated incrementally through recursive algorithms. At each step of such an algorithm, comparison of the last previously generated mosaic with the current video sequence image generates the new instance of the mosaic.
- The generation of layered mosaics begins with a video sequence {I_1, ..., I_N} made up of N successive frames, each having an associated compact Alpha map a_n within {a_1, ..., a_N}.
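One recursive mosaic-update step can be sketched as folding the background pixels of each new (already registered) frame into a running per-pixel average. The incremental-mean update rule is an illustrative choice, not the patent's stated formula, and the Alpha convention (non-zero marks foreground) is an assumption.

```python
import numpy as np

def update_mosaic(mosaic, count, frame, alpha):
    """One recursive step: fold the background pixels of a new, registered
    frame into the running mosaic as an incremental mean.
    `alpha` is assumed to mark foreground objects (non-zero) to exclude."""
    bg = (alpha == 0)
    count = count + bg                     # background observations per pixel
    safe = np.maximum(count, 1)            # avoid division by zero
    mosaic = np.where(bg, mosaic + (frame - mosaic) / safe, mosaic)
    return mosaic, count
```

Iterating this over {I_1, ..., I_N} with their Alpha maps yields one mosaic layer; repeating per depth level yields the layered set.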
- The result of video construct generation is a set of compact video content operators, a set of layered mosaics, and ancillary information.
- Image re-purposing is directed to reducing the complexity of images. For example, an image may be transformed into regions of smoothly varying color, brightness, texture, motion, etc.
- In equation (3), the first term measures the "error" between the actual image and the smooth image, the second term measures the "smoothness" of the result, and the third term is proportional to the boundary length.
- Equation (3) should be appropriately discretized, i.e., approximated by a sum of terms.
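The three terms described (data error, smoothness, boundary length) match the form of a Mumford–Shah-type functional; a hedged reconstruction of equation (3), where the domain Omega, boundary set Gamma, and weights lambda and nu are assumed symbols not taken from the original text:

```latex
E(I_M, \Gamma) =
  \underbrace{\int_{\Omega} \bigl(I - I_M\bigr)^2 \, dx}_{\text{error term}}
  + \lambda \underbrace{\int_{\Omega \setminus \Gamma} \lVert \nabla I_M \rVert^2 \, dx}_{\text{smoothness term}}
  + \nu \underbrace{\lvert \Gamma \rvert}_{\text{boundary length}}
```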
- I(·,·) and I_M(·,·) denote the visual attribute being smoothed; the attribute may be, for example, the image velocity V(·,·).
- The cartoonification of I(·,·) creates regions with a constant value for a given attribute. A full cartoonification is accomplished when the region boundaries are marked in black.
- The cartoon image I_C is a much-simplified version of the original image that keeps the main characteristics of the original image I.
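Cartoonification as described, piecewise-constant regions with black boundaries, can be sketched by quantizing intensities to a few levels and painting pixels where the quantized value changes. The level count and the neighbor-difference boundary test are illustrative assumptions.

```python
import numpy as np

def cartoonify(image, levels=4):
    """Piecewise-constant 'cartoon': quantize intensities (0-255 assumed)
    to a few levels, then mark region boundaries in black (value 0)."""
    img = image.astype(float)
    step = 256.0 / levels
    quant = (np.floor(img / step) + 0.5) * step   # mid-level of each bin
    # boundary = quantized value differs from left or upper neighbor
    edges = np.zeros_like(quant, dtype=bool)
    edges[:, 1:] |= quant[:, 1:] != quant[:, :-1]
    edges[1:, :] |= quant[1:, :] != quant[:-1, :]
    quant[edges] = 0.0
    return quant
```

The output keeps the main characteristics of the input while needing only a handful of distinct values, the property the cartoon I_C exploits for compactness.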
- Visual information transformation from natural to synthetic is one important application of content re-purposing.
- 3D meshes may be employed for transforming natural 3D objects to synthetic 3D objects; a combination of perspective and projective transformations with 2D meshes may be employed for transforming natural 3D objects to synthetic 2D objects; and 2D meshes and computational geometry tools may be employed for transforming natural 2D objects to synthetic 2D objects.
- Audio re-purposing includes speech-to-text transformation according to known techniques, with phonemes being generated by speech recognition and then transformed into text.
- The phonemes should be regarded as a compact set of basic elements from which text information is generated utilizing a dictionary, as described in further detail below.
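The dictionary-based phoneme-to-text step can be sketched as a greedy longest-match of phoneme runs against a pronunciation lexicon. The lexicon structure and matching policy here are toy assumptions standing in for a real speech-recognition back end.

```python
def phonemes_to_text(phonemes, lexicon):
    """Greedy longest-match of phoneme runs against a pronunciation lexicon
    mapping phoneme tuples to words (illustrative)."""
    words, i = [], 0
    while i < len(phonemes):
        for j in range(len(phonemes), i, -1):   # try longest runs first
            key = tuple(phonemes[i:j])
            if key in lexicon:
                words.append(lexicon[key])
                i = j
                break
        else:
            i += 1  # skip an unrecognized phoneme
    return " ".join(words)
```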
- Inter-modality content re-purposing corresponds to re-purposing multimedia information between different modalities.
- The framework for inter-modality content re-purposing includes (i) multimedia content segmentation; (ii) template/pattern matching; and (iii) use of cross-modality translation dictionaries.
- Transformations across these different modalities should follow the flow defined in equation (7). While not necessarily dictated as a content hierarchy, this pattern is necessitated by the number of bits required to represent the content within the various modalities.
- One common technique for re-purposing content according to the flow defined by equation (7) is to transform all visual and audio information into a textual description. Video-to-still-image transformation is commonly performed by sub-sampling frames of a video sequence; transformation of content information with respect to point of view (or perspective) is less common.
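Frame sub-sampling, the common video-to-still transformation named above, reduces to picking evenly spaced frames; a minimal sketch:

```python
def subsample_frames(frames, target_count):
    """Pick target_count evenly spaced frames from a sequence (video -> stills)."""
    n = len(frames)
    if target_count >= n:
        return list(frames)
    step = n / target_count
    return [frames[int(i * step)] for i in range(target_count)]
```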
- A description of the compact video content (video constructs) is given in the textual domain.
- Compact image content is likewise transformed into a textual description.
- Fig. 3 illustrates inter-modality content re-purposing utilizing compact information according to one embodiment of the present invention.
- Content re-purposing across multimedia modalities is performed in the present invention using compact information (e.g., video constructs, image cartoons). Transformation between compact elements representing a given modality utilizes a compact information format, which is important in transformation from video frames/fields to static frames or text.
- Compact constructs 305-308 are generated as described above, with inter-modality content re-purposing employing a set of dictionaries (not separately depicted), which translate information between sets of compact content elements in different modalities.
- Across-modality dictionaries define how the compact content information is described in a given modality, and may be textual and/or based on metadata in either a proprietary form or an agreed standard (e.g., MPEG-7, TV-Anytime, and/or SMPTE).
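A cross-modality dictionary of this kind can be sketched as a lookup from construct attributes to text fragments. The dictionary keys, attribute names, and phrasing below are all invented for illustration; a real system would use a standardized metadata schema such as MPEG-7.

```python
def describe_construct(construct, dictionary):
    """Translate a compact visual construct (attribute -> value mapping)
    into a textual description via a cross-modality dictionary."""
    words = [dictionary.get((attr, val)) for attr, val in construct.items()]
    return " ".join(w for w in words if w)   # skip attributes with no entry
```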
- The present invention may be implemented on a continuous-access content server containing content within a database, to re-purpose such content for mobile access.
- The content may be re-purposed prior to any request for such content by a mobile device (e.g., when the content is loaded for access from the server) or in response to a specific request from a particular device, customizing the content to the resources available within that mobile device.
- The present invention may be advantageously employed within wireless communications utilizing the Transmission Control Protocol (TCP) or the Real-time Transport Protocol (RTP) to provide Internet access to customized PDAs, mini-laptops, etc.
- Machine-usable mediums include: nonvolatile, hard-coded mediums such as read-only memories (ROMs) or electrically erasable programmable read-only memories (EEPROMs); recordable mediums such as floppy disks, hard disk drives, compact disc read-only memories (CD-ROMs), and digital versatile discs (DVDs); and transmission mediums such as digital and analog communication links.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computer Graphics (AREA)
- Processing Or Creating Images (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP02785800A EP1459552A2 (en) | 2001-12-04 | 2002-12-02 | Methods for multimedia content repurposing |
JP2003550509A JP2005512215A (en) | 2001-12-04 | 2002-12-02 | Multimedia content re-purpose processing method |
KR10-2004-7008696A KR20040071176A (en) | 2001-12-04 | 2002-12-02 | Methods for multimedia content repurposing |
AU2002351088A AU2002351088A1 (en) | 2001-12-04 | 2002-12-02 | Methods for multimedia content repurposing |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/011,883 US20030105880A1 (en) | 2001-12-04 | 2001-12-04 | Distributed processing, storage, and transmission of multimedia information |
US10/011,883 | 2001-12-04 | ||
US10/265,582 | 2002-10-07 | ||
US10/265,582 US7305618B2 (en) | 2001-12-04 | 2002-10-07 | Methods for multimedia content repurposing |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2003049450A2 true WO2003049450A2 (en) | 2003-06-12 |
WO2003049450A3 WO2003049450A3 (en) | 2003-11-06 |
Family
ID=26682897
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2002/005091 WO2003049450A2 (en) | 2001-12-04 | 2002-12-02 | Methods for multimedia content repurposing |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP1459552A2 (en) |
JP (1) | JP2005512215A (en) |
CN (1) | CN1600032A (en) |
AU (1) | AU2002351088A1 (en) |
WO (1) | WO2003049450A2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2048887A1 (en) * | 2007-10-12 | 2009-04-15 | Thomson Licensing | Encoding method and device for cartoonizing natural video, corresponding video signal comprising cartoonized natural video and decoding method and device therefore |
US11218530B2 (en) | 2016-10-12 | 2022-01-04 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Spatially unequal streaming |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112887733A (en) * | 2021-01-25 | 2021-06-01 | 中兴通讯股份有限公司 | Volume media processing method and device, storage medium and electronic device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2205704A (en) * | 1987-04-01 | 1988-12-14 | Univ Essex | Reduced bandwidth video transmission |
US6061462A (en) * | 1997-03-07 | 2000-05-09 | Phoenix Licensing, Inc. | Digital cartoon and animation process |
-
2002
- 2002-12-02 CN CNA028240332A patent/CN1600032A/en active Pending
- 2002-12-02 EP EP02785800A patent/EP1459552A2/en not_active Withdrawn
- 2002-12-02 JP JP2003550509A patent/JP2005512215A/en not_active Withdrawn
- 2002-12-02 WO PCT/IB2002/005091 patent/WO2003049450A2/en active Application Filing
- 2002-12-02 AU AU2002351088A patent/AU2002351088A1/en not_active Abandoned
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2205704A (en) * | 1987-04-01 | 1988-12-14 | Univ Essex | Reduced bandwidth video transmission |
US6061462A (en) * | 1997-03-07 | 2000-05-09 | Phoenix Licensing, Inc. | Digital cartoon and animation process |
Non-Patent Citations (2)
Title |
---|
MOHAN R ET AL: "Adapting multimedia Internet content for universal access" IEEE TRANSACTIONS ON MULTIMEDIA, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 1, no. 1, March 1999 (1999-03), pages 104-114, XP002159629 ISSN: 1520-9210 * |
SZU SHENG CHEN ET AL: "New view generation from a video sequence" CIRCUITS AND SYSTEMS, 1998. ISCAS '98. PROCEEDINGS OF THE 1998 IEEE INTERNATIONAL SYMPOSIUM ON MONTEREY, CA, USA 31 MAY-3 JUNE 1998, NEW YORK, NY, USA,IEEE, US, 31 May 1998 (1998-05-31), pages 81-84, XP010289421 ISBN: 0-7803-4455-3 * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2048887A1 (en) * | 2007-10-12 | 2009-04-15 | Thomson Licensing | Encoding method and device for cartoonizing natural video, corresponding video signal comprising cartoonized natural video and decoding method and device therefore |
WO2009047349A1 (en) * | 2007-10-12 | 2009-04-16 | Thomson Licensing | Encoding method and device for cartoonizing natural video, corresponding video signal comprising cartoonized natural video and decoding method and device therefore |
US11218530B2 (en) | 2016-10-12 | 2022-01-04 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Spatially unequal streaming |
US11283850B2 (en) | 2016-10-12 | 2022-03-22 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Spatially unequal streaming |
US11489900B2 (en) | 2016-10-12 | 2022-11-01 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Spatially unequal streaming |
US11496538B2 (en) | 2016-10-12 | 2022-11-08 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E. V. | Spatially unequal streaming |
US11496541B2 (en) | 2016-10-12 | 2022-11-08 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Spatially unequal streaming |
US11496540B2 (en) | 2016-10-12 | 2022-11-08 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Spatially unequal streaming |
US11496539B2 (en) | 2016-10-12 | 2022-11-08 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Spatially unequal streaming |
US11516273B2 (en) | 2016-10-12 | 2022-11-29 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Spatially unequal streaming |
US11539778B2 (en) | 2016-10-12 | 2022-12-27 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Spatially unequal streaming |
US11546404B2 (en) | 2016-10-12 | 2023-01-03 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Spatially unequal streaming |
Also Published As
Publication number | Publication date |
---|---|
CN1600032A (en) | 2005-03-23 |
JP2005512215A (en) | 2005-04-28 |
EP1459552A2 (en) | 2004-09-22 |
WO2003049450A3 (en) | 2003-11-06 |
AU2002351088A1 (en) | 2003-06-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7305618B2 (en) | Methods for multimedia content repurposing | |
US11436780B2 (en) | Matching mouth shape and movement in digital video to alternative audio | |
JP4138459B2 (en) | Low resolution image production method and apparatus | |
US7224731B2 (en) | Motion estimation/compensation for screen capture video | |
US7123774B2 (en) | System and method for coding data | |
US7889949B2 (en) | Joint bilateral upsampling | |
US6285794B1 (en) | Compression and editing of movies by multi-image morphing | |
US8553782B2 (en) | Object archival systems and methods | |
EP1641275B1 (en) | Interactive design process for creating stand-alone visual representations for media objects | |
EP1641282B1 (en) | Techniques for encoding media objects to a static visual representation | |
Kaufmann et al. | Finite element image warping | |
EP1641281A1 (en) | Techniques for decoding and reconstructing media objects from a still visual representation | |
CN113869138A (en) | Multi-scale target detection method and device and computer readable storage medium | |
JP2001197507A (en) | Method and system for processing image in image compression/expansion system employing hierarchical coding | |
JP2007141107A (en) | Image processor and its method | |
WO2003049450A2 (en) | Methods for multimedia content repurposing | |
US20220301523A1 (en) | Method and apparatus for efficient application screen compression | |
CN116403142A (en) | Video processing method, device, electronic equipment and medium | |
Masmoudi et al. | Adaptive block-wise alphabet reduction scheme for lossless compression of images with sparse and locally sparse histograms | |
JP2005184062A (en) | Image data conversion apparatus and image data conversion program | |
CN113365072B (en) | Feature map compression method and device, computing equipment and storage medium | |
US20040101205A1 (en) | Position coding system and method | |
Sri Geetha et al. | Enhanced video articulation (eva)—a lip-reading tool | |
US20230360376A1 (en) | Semantic Image Fill at High Resolutions | |
JPH0837664A (en) | Moving picture encoding/decoding device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2002785800 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2003550509 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 20028240332 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020047008696 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 2002785800 Country of ref document: EP |