WO2006089254A2 - Mobile imaging application, device architecture, service platform architecture and services - Google Patents


Info

Publication number
WO2006089254A2
Authority
WO
WIPO (PCT)
Prior art keywords: video, mobile, present, application, handset
Application number: PCT/US2006/005891
Other languages: English (en), French (fr)
Other versions: WO2006089254A3 (en)
Inventor
John D. Ralston
Steven E. Saunders
Krasimir D. Kolarov
Original Assignee
Droplet Technology, Inc.
Application filed by Droplet Technology, Inc. filed Critical Droplet Technology, Inc.
Priority to JP2007556380A (published as JP2008537854A)
Priority to CA002611683A (published as CA2611683A1)
Priority to EP06735520A (published as EP1856805A4)
Priority to AU2006214055A (published as AU2006214055A1)
Publication of WO2006089254A2
Publication of WO2006089254A3


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Using adaptive coding
    • H04N19/115 Selection of the code volume for a coding unit prior to coding
    • H04N19/147 Data rate or code amount at the encoder output according to rate-distortion criteria
    • H04N19/164 Feedback from the receiver or from the transmission channel
    • H04N19/166 Feedback concerning the amount of transmission errors, e.g. bit error rate [BER]
    • H04N19/30 Using hierarchical techniques, e.g. scalability
    • H04N19/34 Scalability techniques involving progressive bit-plane based encoding of the enhancement layer, e.g. fine granular scalability [FGS]
    • H04N19/40 Using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H04N19/60 Using transform coding
    • H04N19/61 Transform coding in combination with predictive coding
    • H04N19/63 Transform coding using sub-band based transform, e.g. wavelets

Definitions

  • Directly digitized images and video take many bits; it is common to compress images and video for storage, transmission, and other uses.
  • Several basic methods of compression are known, along with very many specific variants of these.
  • A general method can be characterized by a three-stage process: transform, quantize, and entropy-code.
  • Most image and video compressors share this basic architecture, with variations.
  • The intent of the transform stage in a video compressor is to gather the energy or information of the source picture into as compact a form as possible by taking advantage of local similarities and patterns in the picture or sequence.
  • No compressor can possibly compress all possible inputs; we design compressors to work well on "typical" inputs and ignore their failure to compress "random" or "pathological" inputs.
  • Many image compression and video compression methods, such as MPEG-2 and MPEG-4, use the discrete cosine transform (DCT) as the transform stage.
  • Quantization discards information after the transform stage; the reconstructed decompressed image cannot then be an exact reproduction of the original.
  • Entropy coding is generally a lossless step: this step takes the information remaining after quantization and codes it so that it can be reproduced exactly in the decoder. Thus the design decisions about what information to discard are not affected by the following entropy-coding stage.
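The three-stage pipeline described above can be sketched in a few lines. This is a toy illustration only: the Haar averaging/differencing transform and the run-length "entropy" stage below are illustrative stand-ins, not the codec actually claimed here.

```python
# Toy sketch of the transform -> quantize -> entropy-code pipeline.

def haar_1d(samples):
    """One level of a Haar transform: pairwise averages, then differences."""
    avgs = [(a + b) / 2 for a, b in zip(samples[::2], samples[1::2])]
    diffs = [(a - b) / 2 for a, b in zip(samples[::2], samples[1::2])]
    return avgs + diffs

def quantize(coeffs, step):
    """Lossy stage: discard precision by mapping coefficients to integer bins."""
    return [round(c / step) for c in coeffs]

def run_length_encode(symbols):
    """Lossless stage: collapse runs of repeated symbols (e.g. zeros)."""
    out = []
    for s in symbols:
        if out and out[-1][0] == s:
            out[-1][1] += 1
        else:
            out.append([s, 1])
    return out

# A smooth 8-sample signal: the transform compacts its energy into the
# averages; the differences quantize to zero and run-length-code compactly.
signal = [10, 10, 12, 12, 14, 14, 16, 16]
coeffs = haar_1d(signal)       # [10.0, 12.0, 14.0, 16.0, 0.0, 0.0, 0.0, 0.0]
q = quantize(coeffs, step=2)   # [5, 6, 7, 8, 0, 0, 0, 0]
code = run_length_encode(q)    # [[5, 1], [6, 1], [7, 1], [8, 1], [0, 4]]
```

Note how the quantizer, not the entropy coder, decides what information is discarded, matching the observation above.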
  • A limitation of DCT-based video compression/decompression (codec) techniques is that, having been developed originally for video broadcast and streaming applications, they rely on the encoding of video content in a studio environment, where high-complexity encoders can be run on computer workstations. Such computationally complex encoders allow computationally simple and relatively inexpensive decoders (players) to be installed in consumer playback devices.
  • Such asymmetric encode/decode technologies are a poor match for mobile multimedia devices, in which video messages must be captured in real time in the handset itself, as well as played back.
  • As a result, video in mobile devices is typically limited to much smaller sizes and much lower frame rates than in other consumer products.
  • This invention pertains to methods, devices, systems, and architectures relating to still image and video image recording in devices, including mobile devices, corresponding mobile device architectures, service platform architectures and methods and services for transmitting, storing, editing, sharing, marketing, and transcoding still images and video images over wireless and wired networks and systems and viewing them on display-enabled devices, as well as network and other system services in relation to the foregoing.
  • The present invention also pertains to improvements in the image recording technique, and corresponding improvements in the architectures of mobile devices and service platforms.
  • Aspects of the present invention comprise all-software video codec/camcorder applications for compressing and/or decompressing video or still images.
  • Aspects of the present invention also comprise infrastructure products, methods and processes, including mobile multimedia service (MMS) infrastructure applications, for deploying video messaging and sharing services in conjunction with software video codec/camcorder applications for mobile handsets, as well as editing and transcoding applications to support complete interoperability with other commonly deployed standards-based and proprietary video formats.
  • Aspects of the invention also comprise methods, processes and business processes for establishing, enabling, distributing and operating innovative MMS services, including an innovative mobile video blog and marketing service for video content created by mobile users on mobile devices.
  • Figure 1 depicts video image size limitations in mobile image messaging.
  • Figure 2 depicts a system diagram for joint source-channel coding: (a) encoder; (b) decoder.
  • Figure 3 depicts a mobile imaging handset architecture.
  • Figure 4 depicts a mobile imaging service platform architecture.
  • Figure 5 depicts a video codec technology comparison.
  • Figure 6 depicts a system diagram for improved joint source-channel coding: (a) encoder; (b) decoder.
  • Figure 7 depicts an improved mobile imaging handset platform architecture.
  • Figure 8 depicts a video codec performance comparison.
  • Figure 9 depicts an improved mobile imaging handset platform architecture.
  • Figure 10 depicts an improved mobile imaging handset platform architecture.
  • Figure 11 depicts an improved mobile imaging service platform architecture.
  • Figure 12 depicts an OTN upgrade of a deployed MMSC video gateway.
  • Figure 13 depicts how a self-playing video MMS eliminates the need for transcoding.
  • Figure 14 depicts a reduction in the complexity, cost, and number of video editing servers required to deploy media producer services.
  • Figure 15 depicts a mobile video service platform.
  • Figure 16 depicts faster, lower-cost development and deployment of higher-quality multimedia services.
  • Figure 17 depicts aspects of mobile video services according to aspects of the present invention.
  • Figure 18 depicts applications to broadband multimedia devices and services according to aspects of the present invention.
  • Figure 19 depicts implementation options for a SW imaging application according to aspects of the present invention.
  • Figure 20 depicts implementation options for a HW-accelerated imaging application according to aspects of the present invention.
  • Figure 21 depicts implementation options for a hybrid HW-accelerated SW imaging application according to aspects of the present invention.
  • Figure 22 depicts an application: a simplified multimedia handset platform architecture.
  • Figure 23 depicts elements of a mobile video messaging demo over a GSM/GPRS network.
  • Figure 24 depicts certain MMS functionality according to aspects of the present invention.
  • A wavelet transform may comprise the repeated application of wavelet filter pairs to a set of data, either in one dimension or in more than one.
  • A 2-D wavelet transform applies the filter pairs in two dimensions (horizontal and vertical).
  • Video codecs according to the present invention can use a 3-D wavelet transform (horizontal, vertical, and temporal).
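The "repeated application of wavelet filter pairs" can be sketched as a multi-level 1-D decomposition that recursively re-filters the low-pass band. A separable 2-D transform would apply the same filter pair along rows and then columns, and the 3-D codec described above adds a third pass along the temporal (frame) axis; the Haar filter pair here is an illustrative choice, not the patent's actual filters.

```python
# Multi-level 1-D Haar decomposition: repeatedly split the low band.

def haar_step(samples):
    """One filter-pair pass: low-pass (averages) and high-pass (differences)."""
    low = [(a + b) / 2 for a, b in zip(samples[::2], samples[1::2])]
    high = [(a - b) / 2 for a, b in zip(samples[::2], samples[1::2])]
    return low, high

def wavelet_decompose(samples, levels):
    """Return [coarsest_low, high_level_n, ..., high_level_1]."""
    bands = []
    low = list(samples)
    for _ in range(levels):
        low, high = haar_step(low)
        bands.insert(0, high)   # finer detail bands go toward the end
    bands.insert(0, low)        # coarsest approximation goes first
    return bands

# 8 samples, 2 levels: 2 coarse averages, 2 mid-scale details, 4 fine details.
bands = wavelet_decompose([4, 2, 6, 8, 10, 12, 8, 6], levels=2)
```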
  • An improved, symmetrical 3-D wavelet-based video compression/decompression (codec) device is desirable to reduce the computational complexity and power consumption in mobile devices well below those required for DCT-based codecs, as well as to enable simultaneous support for processing still images and video images in a single codec.
  • Such simultaneous support for still images and video images in a single codec may eliminate or reduce the need for separate MPEG (video) and JPEG (still image) codecs, or greatly improve compression performance and hence storage efficiency with respect to Motion JPEG codecs.
  • An improved, symmetrical 3-D wavelet-based video processing device is also desirable to reduce the computational complexity and power consumption in MMS infrastructure equipment utilized to support automated or manual editing of user-created video, as well as database storage, search, and retrieval of user-created video.
  • Aspects of the present invention comprise new methods, services and systems relating to the innovative capture, compression, transmission, editing, storage and sharing of video content associated with mobile devices.
  • Aspects of the present invention may apply to telecom (both wireless and wireline providers) and internet, cable and other data and multimedia operators including fixed and mobile wireless service providers.
  • Aspects of the present invention may provide for richer content, higher bandwidth usage and higher average revenue per user (ARPU).
  • Mobile multimedia service (MMS) is the multimedia evolution of the text-based short message service (SMS). According to aspects of the present invention, a promising new MMS application presented here is innovative video messaging and sharing, addressing a target audience's need to communicate personal information.
  • Mobile image messaging and sharing may require the addition of digital camera functionality (still images) and/or camcorder functionality (video images) to mobile handsets, so that subscribers can both capture (encode) video messages that they wish to send, and play back (decode) video messages that they receive.
  • Aspects of the present invention may also enable these functionalities in ways unavailable, if available at all, in the prior art.
  • Mobile devices may be enabled to evolve into integrated consumer multimedia entertainment platforms.
  • A substantial investment in industry has been directed toward technologies and platforms that enable re-packaged broadcast television programming (such as news clips, sports highlights, and special "mobisodes" of popular TV programs) and other studio-generated video content (such as film previews and music videos) to be transmitted to and viewed on mobile devices.
  • Aspects of the present invention additionally enable significant reductions in the development cost and retail price of both camcorder phones and video messaging/sharing infrastructure equipment, which may be key to large-scale commercial adoption of such devices and related mobile multimedia/data services, in both mature and emerging markets.
  • Prior mobile image messaging/sharing services and applications are severely limited: they capture and transmit much smaller-size and lower-frame-rate video images than those typically captured and displayed on other multimedia devices (see Figure 1), such as TVs, personal computers, digital video camcorders, and personal media players.
  • Video transmission over mobile networks is challenging in nature because of the higher data rates typically required, in comparison to the transmission of other data/media types such as text, audio, and still images.
  • The limited and varying channel bandwidth, along with the fluctuating noise and error characteristics of mobile networks, imposes further constraints and difficulties on video transport.
  • Various joint source-channel coding techniques can be applied to adapt the video bit stream to different channel conditions (see Figure 2).
  • Such a joint source-channel coding approach according to aspects of the present invention may be scalable, in order to adapt to varying channel bandwidths and error characteristics.
  • Also supported is scalability for multicast scenarios, in which different devices at the receiving end of the video stream may have different limitations on decoding computational power and display capabilities.
  • The source video sequence 30 may first be source coded 32 (i.e. compressed), followed by error correction code (ECC) channel coding 34.
  • Source coding typically uses DCT-based compression techniques such as H.263, MPEG-4, or Motion JPEG.
  • Example channel coding methods are Reed-Solomon codes, BCH codes, FEC codes, and turbo codes.
  • The joint source and channel coded video bit stream then passes through the rate controller 36 to match the channel bandwidth requirement while achieving the best reconstructed video quality.
  • The rate controller performs discrete rate-distortion computations on the compressed video bit stream before it sends the video bit stream for transmission over the channel 38. Due to limitations in computational power in mobile devices, typical rate controllers only consider the available channel bandwidth, and do not explicitly consider the error characteristics of the transmission channel.
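The rate-control loop described above can be sketched as a search for a quantizer step whose coded size fits the channel budget, after reserving a fraction of that budget for ECC redundancy scaled by the channel's error rate (the "joint" part that typical bandwidth-only controllers omit). The size model, the `ecc_overhead` factor, and all numbers below are illustrative assumptions, not the patent's actual rate controller.

```python
# Hedged sketch of a joint source-channel rate controller.

def coded_bits(coeffs, step):
    """Crude size model: ~(1 + bit-length) bits per nonzero quantized level."""
    levels = [round(c / step) for c in coeffs]
    return sum(1 + abs(v).bit_length() for v in levels if v != 0)

def choose_step(coeffs, channel_bps, error_rate, ecc_overhead=4.0):
    """Smallest power-of-two quantizer step whose payload + ECC fits the channel."""
    source_budget = channel_bps / (1 + ecc_overhead * error_rate)
    step = 1
    while coded_bits(coeffs, step) > source_budget:
        step *= 2   # coarser quantization -> fewer bits, more distortion
    return step

coeffs = [88.0, -41.0, 23.0, -7.0, 3.0, -2.0, 1.0, 0.0]
clean = choose_step(coeffs, channel_bps=40, error_rate=0.0)
noisy = choose_step(coeffs, channel_bps=40, error_rate=0.5)
# A noisier channel leaves less room for source bits, forcing a coarser step.
assert noisy >= clean
```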
  • A further benefit of such an improved adaptive joint source-channel coding technique is the corresponding ability of wireless carriers and MMS service providers to offer a greater range of quality-of-service (QoS) performance and pricing levels to their consumer and enterprise customers, thus maximizing the revenues generated using their wireless network infrastructure.
  • Multicast scenarios require a single adaptive video bit stream that can be decoded by many users. This is especially important in modern, large-scale, heterogeneous networks, in which network bandwidth limitations make it impractical to transmit multiple simulcast video signals specifically tuned for each user. Multicasting of a single adaptive video bit stream greatly reduces the bandwidth requirements, but requires generating a video bit stream that is decodable by multiple users, including high-end users with broadband wireless or wireline connections, and wireless phone users with limited bandwidth and error-prone connections. Due to limitations in computational power in mobile devices, the granularity of adaptive rate controllers is typically very coarse, for example producing only a 2-layer bit stream including a base layer and one enhancement layer.
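The coarse 2-layer scalable stream mentioned above can be illustrated with bit-plane splitting: quantized coefficients are divided into a base layer (high-order bits) and one enhancement layer (low-order bits), so a low-bandwidth receiver decodes the base alone while a broadband receiver adds the enhancement. The layout below is an invented illustration restricted to non-negative quantizer indices, not the patent's bit-stream format.

```python
# Base layer + one enhancement layer via bit-plane splitting.

def split_layers(levels, enh_bits):
    """Split each non-negative integer level into base and enhancement parts."""
    base = [v >> enh_bits for v in levels]            # coarse, high-order bits
    enh = [v & ((1 << enh_bits) - 1) for v in levels] # fine, low-order bits
    return base, enh

def merge_layers(base, enh, enh_bits):
    """Recombine the two layers into the original levels."""
    return [(b << enh_bits) | e for b, e in zip(base, enh)]

levels = [13, 7, 2, 0]                  # non-negative quantizer indices
base, enh = split_layers(levels, enh_bits=2)
assert base == [3, 1, 0, 0]             # base-only decode: coarse approximation
assert merge_layers(base, enh, 2) == levels   # base + enhancement: exact
```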
  • The imaging subsystem of a mobile handset typically comprises: an imager array (typically an array of CMOS or CCD pixels); preamplifiers and analog-to-digital (A/D) signal conversion circuitry; and image processing functions such as pre-processing, encoding/decoding (codec), and post-processing.
  • Imaging-enabled mobile handsets are limited to capturing smaller-size and lower-frame-rate video images than those typically captured and displayed on other multimedia devices, such as TVs, personal computers, digital video camcorders, and personal media players. These latter devices typically capture/display video images in VGA format (640x480 pixels) or larger, at a display rate of 30 frames-per-second (fps) or higher, whereas commercially available imaging-enabled mobile handsets are limited to capturing video images in QVGA format (320x240 pixels), QCIF format (176x144 pixels) or smaller, at a display rate of 15 fps or lower (See, e.g., Figure 1).
  • This reduced video capture capability is due to the excessive computational requirements, processor power consumption, and buffer memory required to complete the number, type, and sequence of computational steps associated with video compression/decompression using DCT transforms.
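A back-of-the-envelope check of the figures above shows why the gap matters: the raw pixel throughput of VGA at 30 fps is roughly 24 times that of QCIF at 15 fps (luma/chroma subsampling ignored for simplicity), which is why DCT-codec complexity becomes prohibitive on handset processors.

```python
# Raw pixel-rate comparison implied by the formats cited above.

vga_rate = 640 * 480 * 30    # pixels/s for VGA at 30 fps
qcif_rate = 176 * 144 * 15   # pixels/s for QCIF at 15 fps

ratio = vga_rate / qcif_rate
# ratio is roughly 24: a VGA/30fps camcorder phone must process about
# 24x the pixel throughput of a QCIF/15fps handset.
```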
  • Codec functions may be implemented using RISC processors, DSPs, ASICs, multimedia processors, and RPDs as separate integrated circuits (ICs), or may combine one or more of the RISC processors, DSPs, ASICs, multimedia processors, and RPDs integrated together in a system-in-a-package (SIP) or system-on-a-chip (SoC).
  • Codec functions running on RISC processors or DSPs are typically software routines, with the advantage that they can be modified in order to correct programming errors or upgrade functionality.
  • The disadvantage of implementing certain complex, repetitive codec functions as software is that the resulting overall processor resource and power consumption requirements typically exceed those available in mobile communications devices.
  • Codec functions running on ASICs and multimedia processors are typically fixed hardware implementations of complex, repetitive computational steps, with, typically, the advantage that specially tailored hardware acceleration can substantially reduce the overall power consumption of the codec.
  • The disadvantages of implementing certain codec functions in fixed hardware include longer and more expensive design cycles, the risk of expensive product recalls in the case where errors are found in the fixed silicon implementation, and the inability to upgrade fixed silicon functions in deployed products in the case where newly developed features are to be added to the imaging application.
  • Codec functions running on RPDs are typically routines that require both hardware acceleration and the ability to add or modify functionality in final mobile imaging handset products.
  • The disadvantage of implementing certain codec functions on RPDs is the larger number of silicon gates and higher power consumption required to support hardware reconfigurability in comparison to fixed ASIC implementations.
  • An imaging application that reduces or eliminates complex, repetitive codec functions so as to enable mobile imaging handsets capable of capturing VGA (or larger) video at a frame rate of 30 fps with an all-software architecture would be preferable, in order to simplify the above architecture and enable handset costs compatible with high-volume commercial deployment.
  • The present invention is the first technology to successfully accomplish and enable these objectives.
  • Multimedia handsets are required to support not only picture and video messaging capabilities but also a variety of additional multimedia capabilities (voice, music, graphics) and a variety of fixed and mobile wireless access modes, including but not limited to 2.5G and 3G cellular access, WiBro, HSDPA, WiFi, wireless LAN, and Bluetooth.
  • An all-SW imaging application would be preferable to enable over-the-air (OTA) distribution and management of the imaging application by handset manufacturers, mobile operators, and other MMS service providers. Again, the present invention is the first technology to successfully enable these objectives.
  • Java technology brings a wide range of devices, from servers to desktops to mobile devices, together under one language and one technology. While the applications for this range of devices differ, Java technology works to bridge those differences where it counts, allowing developers who are functional in one area to leverage their skills across a wide spectrum of devices and applications.
  • Sun redefined the architecture of the Java technology, grouping it into three editions.
  • Standard Edition (J2SE) offered a practical solution for desktop development and low-end business applications.
  • Enterprise Edition (J2EE) was for developers specializing in applications for the enterprise environment.
  • Micro Edition (J2ME) was introduced for developers working on devices with limited hardware resources, such as PDAs, cell phones, pagers, television set top boxes, remote telemetry units, and many other consumer electronic and embedded devices.
  • J2ME is aimed at machines with as little as 128 KB of RAM and with processors far less powerful than those used on typical desktop and server computers.
  • J2ME actually consists of a set of profiles. Each profile is defined for a particular type of device - cell phones, PDAs, etc. - and consists of a minimum set of class libraries required for the particular type of device and a specification of a Java virtual machine required to support the device.
  • The virtual machine specified in any J2ME profile is not necessarily the same as the virtual machine used in Java 2 Standard Edition (J2SE) and Java 2 Enterprise Edition (J2EE).
  • Sun identified within each of these two categories classes of devices with similar roles — so, for example, all cell phones fell within one class, regardless of manufacturer. With the help of its partners in the Java Community Process (JCP), Sun then defined additional functionality specific to each class of devices.
  • A configuration may be a Java virtual machine (JVM) and a minimal set of class libraries and APIs providing a run-time environment for a select group of devices.
  • A configuration may specify a least common denominator subset of the Java language, one that fits within the resource constraints imposed by the family of devices for which it was developed. Because there is such great variability across user interface, function, and usage, a typical configuration does not define such important pieces as the user interface toolkit and persistent storage APIs. The definition of that functionality belongs, instead, to what is called a profile.
  • A J2ME profile may be a set of Java APIs specified by an industry-led group that is meant to address a specific class of device, such as pagers and cell phones. Each profile is built on top of the least common denominator subset of the Java language provided by its configuration, and is meant to supplement that configuration.
  • Two profiles important to mobile handheld devices are: the Foundation profile, which supplements the CDC, and the Mobile Information Device Profile (MIDP), which supplements the CLDC. More profiles are in the works, and specifications and reference implementations continue to be developed and released.
  • Java Technology for the Wireless Industry (JTWI, JSR 185) defines the industry-standard platform for the next generation of Java technology-enabled mobile phones.
  • JTWI is defined through the Java Community Process (JCP) by an expert group of leading mobile device manufacturers, wireless carriers, and software vendors.
  • JTWI specifies the technologies that must be included in all JTWI-compliant devices: CLDC 1.0 (JSR 30), MIDP 2.0 (JSR 118), and WMA 1.1 (JSR 120), as well as CLDC 1.1 (JSR 139) and MMAPI (JSR 135) where applicable.
  • Two additional JTWI specifications that define the technologies and interfaces for mobile multimedia devices are JSR-135 ("Mobile Media API") and JSR-234 ("Advanced Multimedia Supplements").
  • A key feature of the JTWI specification is the road map, an outline of common functionality that software developers can expect in JTWI-compliant devices. January 2003 saw the first in a series of road maps expected to appear at six- to nine-month intervals, which will describe additional functionality consistent with the evolution of mobile phones. The road map enables all parties to plan for the future with more confidence: carriers can better plan their application deployment strategy, device manufacturers can better determine their product plans, and content developers can see a clearer path for their application development efforts. Carriers in particular will, in the future, rely on a Java VM to abstract/protect underlying radio/network functions from security breaches such as viruses, worms, and other "attacks" that currently plague the public Internet.
  • A Java-based imaging application would be preferable for "write-once, run-anywhere" portability across all Java-enabled handsets, for Java VM security and handset/network robustness against viruses, worms, and other mobile network security "attacks", and for simplified OTA codec and application download procedures.
  • Such a Java-based imaging application should conform to JTWI specifications JSR-135 ("Mobile Media API") and JSR-234 ("Advanced Multimedia Supplements"). Aspects of the present invention provide these advantages.
  • Key components of a mobile imaging service platform architecture may include:
  • Mobile base stations (BTS)
  • Base station controllers/radio network controllers (BSC/RNC)
  • Mobile switching centers (MSC)
  • Gateway service nodes (GSN)
  • Mobile multimedia service controllers (MMSC)
  • Typical functions included in the MMSC according to aspects of the present invention include:
  • The video gateway in an MMSC may serve to transcode between the different video formats that are supported by the imaging service platform. Transcoding is also utilized by wireless operators to support the different voice codecs used in mobile telephone networks, and the corresponding voice transcoders are integrated into the RNC. In prior architectures, upgrading a mobile imaging service platform having the architecture shown in Figure 4 would typically require deploying new handsets and manually adding new hardware to the MMSC video gateway. In some mobile video messaging and sharing applications, it may be desirable to eliminate the cost and complexity associated with transcoding.
  • One aspect of the current invention is the ability to embed a software decoder with each transmitted video stream, enabling "self-playing" functionality on common handset and PV video players.
  • The MMS application servers in an MMSC may support applications such as automated or manual editing of user-created video, as well as database storage, search, and retrieval of user-created video.
  • The computational complexity required to implement such functions requires specialized servers to be installed by mobile operators, with the corresponding video processing functions typically requiring expensive and high-power application-specific integrated circuits (ASICs) and digital signal processors (DSPs), rather than simpler SW applications running on the less-expensive and lower-power CPU chips used in standard personal computers (PCs) and servers.
  • An all-software mobile imaging applications service platform would be preferable, in order to support automated OTA upgrades of deployed handsets, automated OTN upgrades of deployed MMSCs, and mobile video applications running on standard PCs and servers.
  • A Java implementation of the mobile handset imaging application may be preferable in terms of improved handset/network robustness against viruses, worms, and other "attacks", allowing mobile network operators to provide the quality and reliability of service required by national regulators.
  • Upgrading MMSC infrastructure is also costly if new or specialized hardware is required.
  • An all-SW applications and service platform would be preferable in order to enable automated OTA upgrade of handsets, OTN upgrade of MMSC video gateways, and support for mobile video applications using standard PCs and servers.
  • the need for transcoding between different video formats also adds additional cost and complexity.
  • an all-SW video codec solution substantially reduces or eliminates baseband processor and video accelerator costs and requirements in multimedia handsets. Combined with the ability to install the codec post-production via OTA download, this all-SW solution substantially reduces the complexity, risk, and cost of both handset development and video messaging service architecture and deployment.
  • SW video transcoders and editing, storing, searching, retrieval applications enable automated over-the-network (OTN) upgrade of deployed MMS control (MMSC) infrastructure, as well as the use of standard PCs and servers to run such applications.
  • the present invention wavelet transcoders provide carriers with complete interoperability between the wavelet video format and other standards-based and proprietary video formats.
• the present invention also allows a software decoder to be embedded with each transmitted video stream, enabling "self-playing" functionality on common handset and PC video players, and eliminating the cost and complexity of transcoding altogether.
  • the present invention's all-SW video platform allows rapid deployment of new MMS services, also parts of embodiments of the present invention, that leverage processing speed and video production accuracy not available with other existing technologies.
  • the present invention's wavelet codecs are also unique in their ability to efficiently process both still images and video, and can thus replace separate MPEG and JPEG codecs with a single lower-cost and lower-power solution that can simultaneously support both mobile picture- mail and video-messaging services.
  • aspects of the present invention utilize 3-D wavelet transforms in video compression/decompression (codec) devices with much lower computational complexity than DCT-based codecs ( Figure 5 provides a comparison of the relative computational requirements of a traditional DCT encoder technology and exemplary technologies of the present invention).
  • the application of a wavelet transform stage also enables design of quantization and entropy-coding stages with greatly reduced computational complexity.
  • Further advantages of the 3-D wavelet codecs of the present invention for mobile imaging applications, devices, and services include:
• Compact SW decoder: for example, less than 40 kB in size
  • Compact SW decoder can be integrated with each transmitted video stream to enable "self playing" video messages compatible with common handset and PC video players.
• Lifting Scheme computation: These wavelet filters can be computed using the Lifting Scheme, which allows in-place computation. This minimizes use of registers and temporary RAM locations, and keeps references local for highly efficient use of caches.
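As an illustration of the in-place property, the following is a minimal Python sketch of an integer 5/3 (LeGall-style) lifting transform; the specific filters and edge handling are illustrative assumptions, not necessarily the invention's own, and an even-length 1-D signal is assumed:

```python
def fwd_lift53(x):
    """Forward 5/3 integer wavelet via lifting, computed in place:
    odd samples become high-pass details, even samples low-pass averages."""
    n = len(x)
    for i in range(1, n - 1, 2):            # predict step (odd samples)
        x[i] -= (x[i - 1] + x[i + 1]) >> 1
    if n % 2 == 0:
        x[n - 1] -= x[n - 2]                # edge handled by replication
    x[0] += (x[1] + 1) >> 1                 # update step (even samples)
    for i in range(2, n - 1, 2):
        x[i] += (x[i - 1] + x[i + 1] + 2) >> 2
    return x

def inv_lift53(x):
    """Inverse transform: undo the lifting steps in reverse order."""
    n = len(x)
    for i in range(2, n - 1, 2):
        x[i] -= (x[i - 1] + x[i + 1] + 2) >> 2
    x[0] -= (x[1] + 1) >> 1
    if n % 2 == 0:
        x[n - 1] += x[n - 2]
    for i in range(1, n - 1, 2):
        x[i] += (x[i - 1] + x[i + 1]) >> 1
    return x
```

Because each lifting step overwrites a sample using neighbors that the inverse restores identically, no temporary buffer is needed, which is the register and cache advantage described above.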
• Wavelet transforms in pyramid form with customized pyramid structure: Certain embodiments of the present invention compute each level of the wavelet transform sequence on half of the data resulting from the previous wavelet level, so that the total computation is almost independent of the number of levels. Aspects of the present invention customize the pyramid to leverage the advantages of the Lifting Scheme above and further economize on register usage and cache memory bandwidth.
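The near independence of total cost from pyramid depth can be seen with a toy cost model (the sample counts are hypothetical, not the invention's actual pyramid):

```python
def pyramid_cost(n, levels):
    """Total samples visited when each wavelet level processes half the
    previous level's output: n + n/2 + n/4 + ... < 2n for any depth."""
    total, size = 0, n
    for _ in range(levels):
        total += size
        size //= 2
    return total
```

For example, `pyramid_cost(1024, 10)` is 2046: ten levels cost barely more than two passes over the original data.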
• Block structure: In contrast to most wavelet compression implementations, aspects of the present invention may divide the picture into rectangular blocks and process each block separately from the others. This allows memory references to be kept local and an entire transform pyramid to be computed with data that remains in the processor cache, saving a significant amount of data movement within most processors.
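A sketch of the tiling idea (the block size is a hypothetical choice for illustration):

```python
def blocks(width, height, bs):
    """Yield (x, y, w, h) rectangles covering a width x height picture.
    Each block can be transformed independently, so its working set
    stays small enough to remain in the processor cache."""
    for y in range(0, height, bs):
        for x in range(0, width, bs):
            yield x, y, min(bs, width - x), min(bs, height - y)
```

For a 640x480 picture with 64-pixel blocks this yields 80 tiles that exactly cover the frame, each small enough to transform in cache.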
  • the present block structure is particularly helpful in HW embodiments as it avoids the requirement for large intermediate storage capacity in the signal flow.
• Block boundary filters: the present invention may also use modified filter computations at the boundaries of each block that avoid sharp artifacts, as set out in US Patent Application Serial No. 10/418,363, incorporated herein by reference.
• Chroma temporal removal: aspects of the present invention may also avoid processing the chroma-difference signals for every field, instead using a single field of chroma for a GOP, as set out in US Patent Application Serial No. 10/447,514, incorporated herein by reference.
• Temporal compression using 3D wavelets: Certain embodiments of the present invention may not use the very expensive motion-search and motion-compensation operations of conventional video compression methods such as MPEG. Instead, those embodiments compute a field-to-field temporal wavelet transform, which is much less expensive to compute. Short integer filters with the Lifting Scheme are also sometimes used in this aspect.
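A minimal sketch of such a field-to-field transform, using an integer Haar lifting pair in place of motion search (fields are represented here as flat pixel lists; the exact filters are an assumption for illustration):

```python
def temporal_haar(field_a, field_b):
    """Per-pixel temporal transform of two fields: a difference (high band)
    and an average (low band) replace motion search/compensation."""
    high = [b - a for a, b in zip(field_a, field_b)]
    low = [a + (h >> 1) for a, h in zip(field_a, high)]
    return low, high

def inv_temporal_haar(low, high):
    """Invert the lifting steps to recover both fields exactly."""
    field_a = [l - (h >> 1) for l, h in zip(low, high)]
    field_b = [h + a for h, a in zip(high, field_a)]
    return field_a, field_b
```

Each output pixel costs one add/subtract and one shift, versus the block-matching searches of MPEG-style motion estimation.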
• Shift-based quantization: the quantization step of the compression process may be accomplished using a binary shift operation uniformly over a range of coefficient locations. This avoids the per-sample multiplication or division required by conventional quantization.
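In Python terms the idea reduces to the following sketch (in practice the shift would be chosen per subband; this is an illustration, not the patented coder):

```python
def quantize(coeffs, shift):
    """Uniform quantization by arithmetic right shift -- no per-sample
    multiply or divide. (Note: Python's >> floors toward -infinity.)"""
    return [c >> shift for c in coeffs]

def dequantize(qcoeffs, shift):
    """Approximate reconstruction by left shift."""
    return [q << shift for q in qcoeffs]
```
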
  • Cycle-efficient entropy coding In certain embodiments of the present invention, the entropy coding step of the compression process is accomplished using techniques that combine the traditional table lookup with direct computation on the input symbol. Because the symbol distribution has been characterized, such simple entropy coders as Rice-Golomb or exp-Golomb or Dyadic Monotonic can be used. The choice of entropy coder details will often vary depending on the processor platform capabilities.
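For example, the exp-Golomb code mentioned above can be computed directly from the symbol value with no code table at all (a sketch for non-negative symbols; the production coder's details would vary by platform, as the text notes):

```python
def exp_golomb_encode(n):
    """Exp-Golomb: the binary form of n+1, prefixed by (bit-length - 1)
    zeros, so small (frequent) symbols get short codes."""
    b = bin(n + 1)[2:]
    return "0" * (len(b) - 1) + b

def exp_golomb_decode(bits):
    """Count leading zeros, then read that many bits after the first 1."""
    zeros = bits.index("1")
    return int(bits[zeros:2 * zeros + 1], 2) - 1
```

So 0 encodes as "1", 1 as "010", and 3 as "00100": the code is computed arithmetically from the symbol rather than looked up.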
  • the fine grain scalability of the wavelet-based codec enables improved adaptive rate control, multicasting, and joint source- channel coding.
  • the reduced computational complexity and higher computational efficiency of the present wavelet algorithms allows information on both instantaneous and predicted channel bandwidth and error conditions to be utilized in all three of the source coder, the channel coder, and the rate controller to maximize control of both the instantaneous and average quality (video rate vs. distortion) of the reconstructed video signal (see Figure 6).
  • the improved adaptive joint-source channel coding technique of the present invention allows wireless carriers and MMS service providers to offer a greater range of quality-of-service (QoS) performance and pricing levels to their consumer and enterprise customers. Utilizing improved adaptive joint-source channel coding based on algorithms with higher computational efficiency enables support for a much higher level of network heterogeneity, in terms of channel types (wireless and wire line), channel bandwidths, channel noise/error characteristics, user devices, and user services.
  • FIG. 7 illustrates an improved mobile imaging handset platform architecture according to aspects and embodiments of the present invention.
• the imaging application according to aspects of the present invention is implemented as an all-software application running as native code or as a Java application on a RISC processor. Acceleration of the Java code operation may be implemented within the RISC processor itself, or using a separate Java accelerator IC. Such a Java accelerator may be implemented as a stand-alone IC, or this IC may be integrated with other functions in either a SIP or SoC.
• the improved mobile imaging handset platform architecture illustrated in Figure 7 eliminates the need for separate DSP, ASIC, multimedia processor, or RPD processing blocks for the mobile imaging application (as would be required in prior devices or systems), and also greatly reduces the buffer memory requirements for image processing in the mobile handset.
• Figure 8 shows the reduction in computational requirements for full VGA 30 fps video encoding provided by aspects of the current invention, in comparison to state-of-the-art industry solutions based upon MPEG-4 and H.264 video codecs (as reached after the filing date of the present application's priority filing).
  • Figure 9 shows one implementation of aspects of the current invention on a commercial mobile GSM camcorder phone platform.
• the existing GSM baseband/multimedia SoC (Texas Instruments OMAP 850 shown in Figure 9) requires a HW accelerator, a DSP, and a RISC processor for QCIF/15 fps camcorder functionality.
  • the present invention provides VGA/30 fps camcorder functionality on this platform using only SW running on the RISC processor without the need of a HW accelerator or a DSP.
  • Figure 10 shows one implementation of aspects of the current invention on a commercial mobile CDMA camcorder phone platform.
• the existing CDMA baseband/multimedia SoC (Qualcomm MSM6500 shown in Figure 10) requires a HW accelerator, a DSP, and a RISC processor for QCIF/15 fps camcorder functionality.
  • the present invention provides VGA/30 fps camcorder functionality on this platform using only SW running on the RISC processor without the HW accelerator or DSP.
  • Components of an improved mobile imaging service platform architecture may include:
• BTS: Mobile Base Stations
• BSC/RNC: Base Station Controller/Radio Network Controller
• GSN: Gateway Service Node
• MMSC: Mobile Multimedia Service Controller
  • Typical functions included in the MMSC may include:
  • certain steps involved in deploying the improved imaging service platform may include:
  • Video Gateway Transcoder application and/or video messaging/sharing applications are available for updating deployed MMSCs.
  • the update can be installed via automated OTN deployment or via manual procedures.
  • Step 2. Install and configure Video Gateway Transcoder SW application and/or video messaging/sharing SW applications via automated OTN deployment or via manual procedures (see Figure 12).
• Figure 13 shows "self-playing" video MMS functionality achieved by integrating the SW decoder with the transmitted video stream.
  • Figure 14 shows the reduction in complexity, cost, and number of video application servers required to deploy media producer services such as automated or manual editing of user-created video, as well as database storage, search, and retrieval of user-created video.
  • Figure 15 shows the functional elements of a video messaging/sharing/calling platform incorporating the improved wavelet-based codec/camcorder application, improved joint source channel coding, and improved video editing and database storage, search, and retrieval.
  • Figure 16 shows the benefits in terms of faster, lower cost development and deployment of higher quality multimedia handsets & services, including the ability to deploy an innovative personal multi-media market place platform in which users can preview, share, buy, and sell "soft” copies (download) or “hard” copies (DVD) of user-created audio/video content.
  • the present invention also allows for more efficient video "tagging" for database indexing and network (RSS) feeds, and supports interfaces to existing web-based market places such as E-bay, Google, Yahoo, Microsoft, and other portals.
  • Figure 17 shows several innovative new mobile video services based on the improved wavelet-based codec/camcorder application, improved joint source channel coding, and improved video editing and database storage, search, and retrieval.
  • Figure 18 shows applications of the above video messaging/sharing/calling platform incorporating the improved wavelet-based codec/camcorder application, improved joint source channel coding, and improved video editing and database storage, search, and retrieval, to deploy new video services on fixed wireless, mobile wireless, and wireline networks, as well as "converged" networks combining elements of fixed wireless, mobile wireless, and wireline architectures.
  • aspects of the present invention with their improved wavelet-based mobile video imaging application, joint source-channel coding, handset architecture, and service platform architecture achieve goals of higher mobile video image quality, lower handset cost and complexity, and reduced service deployment costs.
  • Various embodiments of aspects of the present invention provide enhancements to the mobile imaging handset architecture.
  • the imaging application can be installed via OTA download (400a, 400b, 400c) to the baseband multimedia processing section of the handset 402a, to a removable storage device 402b, or to the imaging module 402c.
  • the imaging application can also be installed during manufacturing or at point-of-sale to the baseband multimedia processing section of the handset, to a removable storage device, or to the imaging module. Additional implementation options are also possible as mobile device architectures evolve.
  • performance of the mobile imaging handset may be further improved, and costs and power consumption may be further reduced, by accelerating some computational elements via HW-based processing resources in order to take advantage of ongoing advances in mobile device computational HW (ASIC, DSP, multimedia processor, RPD) and integration technologies (SoC, SIP).
  • hybrid architectures offered by aspects of the present invention for the imaging application may offer enhancements by implementing some computationally intensive, repetitive, fixed functions in HW, and implementing in SW those functions for which post-manufacturing modification may be desirable or required.
  • Figure 22 shows potential simplifications in mobile camcorder device architecture, deployment, and maintenance.
  • the all-SW imaging solution of the present invention substantially reduces baseband processor and video accelerator costs and requirements in multimedia handsets. Combined with the ability to install and maintain the codec post-production via OTA download, this all-SW solution can substantially reduce the complexity, risk, and cost of both handset development and video messaging service deployment.
  • the present invention provides mobile operators with the first mobile video messaging and sharing platform that delivers the video quality, mobile handset price- point, and service deployment costs required for mass-market adoption by consumer and enterprise customers.
• the present invention provides the first all-SW camcorder phone application capable of real-time capture of full (VGA)-size images (640 x 480 pixels) at 30 frames per second (fps), using, according to certain aspects and embodiments of the present invention, only standard RISC processors already incorporated in the vast majority of multimedia handsets.
  • the present invention's low-complexity video processing and distribution technologies can be integrated into a powerful new all- software platform that enables turnkey deployment using existing mobile handsets and mobile Multimedia Messaging Service (MMS) infrastructure.
  • aspects of the present invention's content management platform provide carriers with modules for integrating compressed images and videos, according to the present technology, together with sounds and text into complete mobile multimedia messages and "ring-tones", along with on-the-fly editing, thumbnail previews, multimedia mailboxes, on-line repository, sharing, and marketing services, and subscription management.
  • Example 1 describes the components, setup, and operation of an introductory demonstration of the functionality and benefits provided by an embodiment of aspects of the present invention's software-only mobile video messaging platform.
  • the demo utilizes commercially available GSM/GPRS multimedia handsets, and was designed to operate over any commercial GSM/GPRS network. The demonstration operated very successfully.
  • the demo can also be readily adapted to utilize CDMA handsets, and to operate over any commercial CDMA network.
• the demo in Example 1 uses a set of files code-named "Droplet", and so labeled in Example 1.
  • the demo includes the following five elements:
• I/O: one 1394 (FireWire) port, two USB 2.0 ports.
• OS: Windows XP
  • the compressed DV video files captured by the camcorder are first converted into decompressed UYVY video format in the PC, and then input to the MDA-II handset for encoding/compression by the present invention's DTV codec.
  • UYVY is a typical video format that would be input to the video codec in a multimedia handset.
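UYVY packs two pixels into four bytes (U, Y0, V, Y1), with the two luma samples sharing one chroma pair; a small sketch of the layout:

```python
def uyvy_to_pixels(buf):
    """Unpack a UYVY byte sequence into (Y, U, V) pixel tuples.
    Each 4-byte macropixel (U, Y0, V, Y1) carries two pixels that
    share the same chroma pair."""
    pixels = []
    for i in range(0, len(buf) - 3, 4):
        u, y0, v, y1 = buf[i:i + 4]
        pixels.append((y0, u, v))
        pixels.append((y1, u, v))
    return pixels
```

At 2 bytes per pixel, one VGA frame is 640 * 480 * 2 = 614,400 bytes, which is why the uncompressed demo clips below run to tens of megabytes.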
• PC_player: files for playing DTV files on the PC
• Virtual_Dub 1.6.3: PC software app for converting between different video formats
• MMS_server: sample monitoring script for the server
• PHMRegEditor: Registry editor for installation on the MDA-II and Xphone
  • Virtual Dub is used to convert the compressed DV video files, as captured by the camcorder, into decompressed UYVY video format in the PC. These decompressed video files are then input to the MDA-II handset for encoding/compression by the present invention's DTV codec.
• In the Virtual_Dub directory is a file called VirtualDub.exe. Select it and verify that the application runs.
• Camcorder: if another type of camcorder is used, the appropriate driver must be installed on the PC.
  • the camcorder must be capable of DV video capture while connected to the PC.
  • the remote MMS server functions both as an FTP server (to enable download of video codec files to the handsets, and network storage of video files from the recording handset), and as a mail server (to enable email/SMS notification and download of video messages by networked computers and other handsets). Functionally, the server must be able to send SMS messages, in order to enable SMS notification to other handsets of pending video messages.
• This monitoring script is referenced in crontab on Unix-based servers. The script checks every minute for the presence of new files (ending in .lnk) in the ftp:/public_html directory.
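The cron job's behavior can be sketched as follows (a hypothetical Python stand-in for the actual server script; the function and parameter names are illustrative):

```python
import glob
import time

def poll_for_links(directory, handler, interval=60, run_once=False):
    """Every `interval` seconds, pass any not-yet-seen .lnk files in
    `directory` to `handler` (which would trigger the SMS/email
    notification described below)."""
    seen = set()
    while True:
        for path in sorted(glob.glob(directory + "/*.lnk")):
            if path not in seen:
                seen.add(path)
                handler(path)
        if run_once:
            return seen
        time.sleep(interval)
```
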
  • the timeout period is increased to greater than the default setting of 60 sec.
  • the handset manufacturer HTC has provided the recommended registry changes. If there is no Registry Editor installed on the device, first install the registry editor included in Droplet's Demo package under the "PHMRegEditor" directory.
  • the resulting program PHMEditor is installed on the MDA-II in the directory
  • source code for the MDA-II Handset Ul Application is available under the directory MDA_DTV ⁇ mms_client_src as a reference.
  • This step is optional, since the default video player on the MDA-II can view the decoded Droplet video file.
  • the timeout period needs to be increased to greater than the default setting of 60 sec.
• the handset manufacturer HTC has provided the recommended registry changes. If there is no Registry Editor installed on the device, first install the registry editor included in Droplet's Demo package under the "PHMRegEditor" directory.
• HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ConnMgr\Planner\Settings: change the setting for CacheTime to 300 (a 5 min. timeout period), and change the setting for SuspendResume to -GPRS! (should allow for no timeout)
• In the Ewesoft directory will be a file called Ewe143-CAB-SmartPhone.zip. Unzip this file on the PC, and use the file called Ewe-SmartPhone2003.arm.CAB.
• The CAB file should be displayed as a menu item. Select the CAB file and the VM will then be installed.
• the MDA-II handset is used to encode/compress high quality uncompressed live video input from an external video camcorder. While the MDA-II has a VGA capture camera, it can only capture still images at that resolution. The video capture of the MDA-II is limited to QCIF, 10 fps video that is automatically compressed in 3GPP format on the device.
  • the camcorder has a 4-pin connector receptacle.
• the next screen will ask for video settings; select "Digital Device Format (DV-AVI)". That will capture the video in DV format, which is a high quality (almost lossless) video capture format.
• The next screen will show a preview window of the captured video, as well as an interface to "Start Capture" and "Stop Capture". Since many wireless handsets do not have a large amount of onboard memory, it is suggested that 2-3 seconds of full motion video (60-90 frames) be recorded for faster computation.
  • the recorded video in the previous step is in compressed DV format (720x480 pixels, 30 fps at about 28 Mbps) with integrated audio. To simulate full motion capture, the video should be uncompressed in common UYVY format, scaled down to VGA size (640x480 pixels), and the audio information removed.
• Video menu: choose the "Select range" option.
  • 60-90 frames (2-3 seconds of video) is a reasonably large size of video to manipulate.
  • the start can be from the beginning ("Start offset” of 0) or somewhere in the middle of the sequence, depending on the video that is captured.
• a 2-second (60-frame) uncompressed UYVY VGA (640x480) video sequence is 36 MB of data on the computer.
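The 36 MB figure follows directly from the UYVY layout of 2 bytes per pixel, which can be checked with a one-line helper (the function name is illustrative):

```python
def uyvy_clip_bytes(width, height, frames):
    """UYVY stores 2 bytes per pixel, so an uncompressed clip occupies
    width * height * 2 * frames bytes."""
    return width * height * 2 * frames

# 60 VGA frames: 640 * 480 * 2 * 60 = 36,864,000 bytes, roughly 36 MB
```
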
  • Select "File” and “Save as AVI" to save the resulting uncompressed file.
• In step c, to create a file for QCIF (recommended for sending a video clip to another handset via a GPRS connection): enter 160 for the "New width" and 144 for the "New height".
• In step g, name the file "testQcif_UYVY.avi".
  • the above large source files (uncompressed video input) can also be placed on the "Storage Card" of the device, and then copied to the Documents directory for compression.
• Source file: the name of the file containing the uncompressed video in UYVY format. (This was the file that was generated in the previous step.)
• Destination file: the name of the file where the compressed video sequence will be stored. (The compressed file will have an extension ending in .dtv.) For this demo, leave the file name at bitstream.dtv.
• Horizontal and vertical frame size:
  • the parameters will be 640 and 480 respectively.
  • the parameters will be 160 and 144 respectively.
• Type of input file: YUV 4:2:2 by default
• Compression rate: Level 12 by default
• Range of frames to be compressed: typically set to "All". (Note: if the user chooses to change this, the total number of frames specified must be an even number because DTV processes 2 frames as a group of pictures.)
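The even-frame constraint in the note above can be captured as a simple validation (a hypothetical helper, not part of the demo software):

```python
def validate_frame_count(total_frames):
    """DTV processes 2 frames per group of pictures, so any explicitly
    chosen frame range must contain an even number of frames."""
    if total_frames % 2 != 0:
        raise ValueError("frame count must be even (2-frame GOP)")
    return total_frames
```
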
• This section will demonstrate the ability to send the compressed QCIF/15 fps video from the MDA-II handset to the MMS server via GPRS. From there, an SMS notification will be sent to the targeted handset (in this case the Xphone), indicating that a video MMS is ready for download and playback. Alternatively, an email notification will be sent if the targeted receiving device is a networked computer.
• A new window to select files will open. Select the file that you want to send (in this case the compressed QCIF/15 fps video file). A new window will open to enter the target phone number/email address. If an email address is entered (determined by the presence of the @ symbol), then the selected file will be sent with email notification. If a string of digits without "@" is entered, then the file/SMS notification will be sent to "string"@tmomail.net, corresponding in this case to a T-Mobile subscriber with the phone number entered as "string".
  • the user will be prompted to establish the GPRS connection. This can be accomplished, for example, by launching Internet Explorer and going to any well- known URL.
• a script running on the server polls the ftp location public_html and determines that new files are present.
  • the server script will parse the ⁇ new file>.lnk file and extract the name of the video file to be sent and the destination handset # or email address.
  • the script will then send the SMS notification message to the target destination either via email or mobile SMS.
  • This section will demonstrate the ability to receive the SMS notification on the Xphone, and to connect to the MMS server and download the QCIF/15fps video file together with the DTV decoder. Upon receipt of the video file and decoder, the file will be decoded and played on the Xphone.
• a UI window will pop up allowing the user to enter information (the application will default to processing the bitstream.dtv file):
• Horizontal and vertical frame size:
  • the parameters will be 160 and 144 respectively.
• Range of frames to be compressed: typically set to "All". (Note: if the user chooses to change this, the total number of frames specified must be an even number because DTV processes 2 frames as a group of pictures.) All other fields will be ignored.
  • aspects of the present invention comprise, in part, an all-software camcorder phone application capable of real-time capture of full (VGA)-size images (640 x 480) at 30 frames per second (fps), which may use only a single standard RISC processor already incorporated in the vast majority of multimedia handsets.
  • current MPEG-based camcorder phones support real-time capture of images that are limited to QCIF or CIF size (1/16th or 1/4 the size of VGA) at 4-15 fps.
  • these small, choppy video clips require complex and expensive handset platform designs, in which the video functions are implemented as a combination of hardware and software, and partitioned between multiple processing devices: RISC processors, ASIC, and DSPs.
  • aspects of the present invention's low-complexity video processing and distribution technologies are integrated into a powerful new and inventive all-software video messaging platform that enables turnkey deployment using existing mobile handsets and mobile Multimedia Messaging Service controller (MMSC) infrastructure.
  • embodiments of the present invention's content management platform provide modules for integrating the invention's compressed images and videos together with sounds and text into complete mobile multimedia messages and "ring-tones", along with on-the-fly editing, thumbnail previews, multimedia mailboxes, on-line repository services, and subscription management.
  • aspects of the present invention's video codecs offer customers a 30-40X reduction in power consumption (both SW and HW implementations - see Table 1) when compared to optimized MPEG-2/MPEG-4 codecs.
• HW product implementation costs are significantly reduced via a 10X reduction in the number of CMOS gates required, from approximately 1 million to ~100,000, and hence in the corresponding silicon real estate requirements.
  • the present invention's innovative video codec designs also reduce internal memory requirements from several megabytes to 128 kilobytes, freeing up on-board memory resources in mobile handsets for other revenue-generating features and applications.
  • the present invention's codecs are also able to efficiently process both still images and video, and can thus replace separate MPEG and JPEG codecs with a single lower-cost and lower-power solution.
  • the present invention's unique mobile video platform technologies also offer significant benefits across a broad range of other mobile video services, via a combination of: scalable image size: QCIF (176x144) - D1 (720x480), simplified video editing (cuts, inserts, text overlays, etc.), simplified synchronization with voice codecs, and low latency for enhanced video streaming performance.
• the present invention also comprises MMS infrastructure products enabling deployment of premium video messaging services in conjunction with the inventive SW video codec/camcorder applications for mobile handsets. Additional aspects of the invention comprise advanced transcoding applications that support complete interoperability with other commonly-deployed standards-based and proprietary video formats. Additionally included is a content management platform that provides modules for integrating the invention's compressed images and videos together with sounds and text into complete mobile multimedia messages and "ring-tones", along with a suite of corresponding MMS message management capabilities. This content management platform can be used by wireless operators and MMS service providers both as a set of SW modules, for rapid and cost-effective upgrades to existing MMS infrastructure, and as a stand-alone server for new MMS controller installations.
  • the inventive MMS infrastructure products may include:
• SW Transcoder application: for upgrading existing MMS Video Gateways to support conversion of video content between Droplet DTV format and other video formats such as MPEG-2, MPEG-4, Motion-JPEG, Microsoft Media, and RealVideo.
• DTV-CMP SW Content Management Platform: suite of content management SW modules for upgrading existing MMS Message Application Servers: creation of MMS messages and "ring-tones" that integrate the present invention's compressed images and videos together with sounds and text, on-the-fly editing, thumbnail previews, multimedia mailboxes, on-line repository services, and subscription management.
• DTV-CMS Content Management Server: server-based integrated SW content management platform, for new MMSC deployments.
• the present invention also comprises a Content Management Service Platform that, with SW Modules or a Stand-Alone Server, may include:
• Mobile Multimedia Composer: integrates the present invention's improved wavelet-compressed images and videos with sounds and text in one message.
• Mobile Multimedia Editor: enables on-the-fly editing of the present invention's wavelet-compressed images, videos, and integrated MMS messages with tools and filters.
• Multimedia Ring-Tone Creator: allows users to create personal multimedia "ring-tones", by combining polyphonic ring-tones and other sounds with wavelet-compressed images and videos.
  • embodiments of the present invention may provide:
  • JSR-135 MOBILE MEDIA API SPECIFICATION
  • the present invention's DTV-JVC Java Video Codec generates decompressed video images that support all Player Functionality defined in Java Community Process JSR-135 including the following:
• java.lang.Object initDisplayMode(int mode, java.lang.Object arg): initializes the mode in which the video is displayed.
  • JSR-234 ADVANCED MULTIMEDIA SUPPLEMENTS
• the present invention's DTV-JVC Java Video Codec generates decompressed video images that support all Player Effect Controls defined in Java Community Process JSR-234, including the following:
• ImageFilterControl is an image effect that can be used to set various image filters such as monochrome and negative.
• ImageTonalityControl is an effect that can be used to set various image settings such as brightness, contrast, and gamma.
  • ImageTransformControl is used to crop, zoom, mirror, flip, stretch, and rotate images.
  • OverlayControl controls the setting of overlay images on top of video or still images.
  • WhiteBalanceControl is an image/video effect for altering the white balance.
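The per-pixel arithmetic behind an effect control such as ImageTonalityControl can be illustrated in plain Java. This is a minimal sketch, not the JSR-234 API or the invention's implementation: the class name, parameter ranges (brightness as an additive offset, contrast as a percentage where 100 is identity), and the pivot-at-mid-gray formula are all assumptions for illustration.

```java
// Hypothetical stand-in for the kind of brightness/contrast adjustment
// a JSR-234 ImageTonalityControl implementation might apply per pixel.
public class TonalityControlSketch {
    // Adjusts one 8-bit channel value. Brightness is an additive offset;
    // contrast scales the distance from mid-gray (128), with 100 = identity.
    // The result is clamped to the valid 0..255 range.
    public static int adjust(int value, int brightness, int contrast) {
        double scaled = (value - 128) * (contrast / 100.0) + 128 + brightness;
        return Math.max(0, Math.min(255, (int) Math.round(scaled)));
    }

    public static void main(String[] args) {
        System.out.println(adjust(128, 0, 100));   // identity settings -> 128
        System.out.println(adjust(64, 0, 200));    // doubled contrast darkens below mid-gray
        System.out.println(adjust(200, 100, 100)); // brightness boost clamps at 255
    }
}
```

A real control would apply this kind of transfer function to every channel of every pixel, typically via a precomputed 256-entry lookup table.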
  • the present invention also comprises products, methods and processes for establishing, providing and operating a mobile video blog service.
  • This service provides every user having a video phone with the ability to shoot, edit, save, share, and "publish" their personal videos and movies online.
  • a simple version of the Mobedia SW Cinema application may be distributed to users free of charge, while a more powerful "Cinema-Pro" version may be purchased by users.
  • aspects of the present invention may provide:
  • Using the Mobedia handset client SW, users can send video takes to Mobedia servers via mobile, fixed wireless, or wireline connections.
  • Mobedia subscription service allows users to archive video takes and movies on the server (paid storage), carry out further editing online, download and save video in a variety of popular formats, or order copies on DVD (paid).
  • Mobedia subscription service allows users to create movie albums on the Mobedia site, and invite friends, family, colleagues, etc. to view their movies, or order their own copies on DVDs (paid) as gifts, etc.
  • Mobedia type aspects of the present invention provide:
  • Droplets Mobedia subscription service allows users to "publish" their movies for the General Public to view on the Mobedia Cinema site.
  • the General Audience can search the archive of published movies by ranking, subject, category, etc., and view free previews on the Mobedia Cinema site.
  • the General Audience can pay to view, download, or order a copy on DVD.
  • the following methods and processes comprise aspects of the present invention and are exclusively enabled by the present technology.
  • Service components of aspects of the present invention comprise:
  • the present invention's Mobedia SW Video Camcorder application enables video shooting on any java/video phone
  • Mobedia SW Video Cinema client applications include basic video production, editing, and viewing technology (simple and Pro versions)
  • Mobedia SW Video Cinema web-based Content Management applications support Mobedia Cinema movie albums, personal movie sharing, and Mobedia Cinema "publishing".
  • A further benefit of the improved adaptive joint-source channel coding technique is the corresponding ability of wireless carriers and MMS service providers to offer a greater range of quality-of-service (QoS) performance and pricing levels to their consumer and enterprise customers, thus maximizing the revenues generated using their wireless network infrastructure.
  • Improved adaptive joint-source channel coding, based on algorithms with higher computational efficiency, enables support for a much higher level of network heterogeneity, in terms of channel types (wireless and wireline), channel bandwidths, channel noise/error characteristics, user devices, and user services.
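The fine-grain scalability that makes such heterogeneous channels tractable can be sketched in a few lines: an embedded (significance-ordered) wavelet bitstream may be cut at essentially any byte boundary to fit the channel of the moment, with quality degrading gracefully rather than the frame being lost. This is an illustrative sketch only; the method names, the header-floor policy, and the byte-granularity assumption are not taken from the patent.

```java
import java.util.Arrays;

// Sketch of fine-grain scalability for an embedded wavelet bitstream:
// the most significant bytes come first, so truncating the stream to a
// channel budget yields the best reconstruction that budget allows.
public class ScalableTruncationSketch {
    // Keeps at most budgetBytes of the embedded stream, but never cuts
    // below the minimum decodable header size (headerBytes).
    public static byte[] truncate(byte[] embeddedStream, int budgetBytes, int headerBytes) {
        int keep = Math.max(headerBytes, Math.min(budgetBytes, embeddedStream.length));
        return Arrays.copyOf(embeddedStream, keep);
    }

    public static void main(String[] args) {
        byte[] frame = new byte[1000];                        // stand-in for one coded frame
        System.out.println(truncate(frame, 400, 16).length);  // narrow channel: keep 400 bytes
        System.out.println(truncate(frame, 8, 16).length);    // below header floor: keep 16
        System.out.println(truncate(frame, 5000, 16).length); // ample channel: keep all 1000
    }
}
```

The same coded frame thus serves every channel type and bandwidth in a heterogeneous network, with no re-encoding per device class.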
  • methods, devices, processes and business methods providing innovative and enhanced services in the field of still and moving video in the mobile telephone fields.
  • Also provided are systems and methods comprising improved joint source-channel coding using fine grain scalability of the improved wavelet-based codec described above, utilizing information on both instantaneous and predicted channel bandwidth and error conditions in all three of the source coder, the channel coder, and the adaptive rate controller to maximize control of both the instantaneous and average quality (video rate vs. distortion) of the reconstructed video signal.
  • QoS quality-of-service
  • a mobile camcorder application - combining aspects of the two preceding paragraphs with related image pre-processing and post-processing functions, and voice recording, for full camcorder capability in mobile devices, either as an all-SW implementation, an all-HW implementation, or as a hybrid SW + HW implementation.
  • a mobile camcorder application combining the application of the preceding paragraph above with related image pre-processing and post-processing functions, and voice recording, for full camcorder capability in mobile devices, either as an all-SW implementation, an all-HW implementation, or as a hybrid SW + HW implementation.
  • imaging-enabled mobile handset architecture using aspects and features of the preceding paragraphs of this summary, where the mobile imaging application is incorporated in the handset baseband multimedia processing section of the handset, in the imager module, or in a removable storage medium.
  • a mobile imaging transcoder for universal compatibility of the above features of this summary with other standards-based or proprietary imaging formats, delivered as an all-SW application to and installed in an MMSC Video Gateway via automated OTN upgrade or via manual procedures.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Telephonic Communication Services (AREA)
PCT/US2006/005891 2005-02-16 2006-02-16 Mobile imaging application, device architecture, service platform architecture and services WO2006089254A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2007556380A JP2008537854A (ja) 2005-02-16 2006-02-16 Mobile imaging application, device architecture, service platform architecture and services (MOBILE IMAGING APPLICATION, DEVICE ARCHITECTURE, SERVICE PLATFORM ARCHITECTURE and SERVICES)
CA002611683A CA2611683A1 (en) 2005-02-16 2006-02-16 Mobile imaging application, device architecture, service platform architecture and services
EP06735520A EP1856805A4 (en) 2005-02-16 2006-02-16 MOBILE PICTURE PRODUCTION APPLICATION, DEVICE ARCHITECTURE, SERVICE PLATFORM ARCHITECTURE AND SERVICES
AU2006214055A AU2006214055A1 (en) 2005-02-16 2006-02-16 Mobile imaging application, device architecture, service platform architecture and services

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US65405805P 2005-02-16 2005-02-16
US60/654,058 2005-02-16

Publications (2)

Publication Number Publication Date
WO2006089254A2 true WO2006089254A2 (en) 2006-08-24
WO2006089254A3 WO2006089254A3 (en) 2007-11-29

Family

ID=36917143

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/005891 WO2006089254A2 (en) 2005-02-16 2006-02-16 Mobile imaging application, device architecture, service platform architecture and services

Country Status (7)

Country Link
EP (1) EP1856805A4 (ko)
JP (1) JP2008537854A (ko)
KR (1) KR20070112461A (ko)
CN (1) CN101160577A (ko)
AU (1) AU2006214055A1 (ko)
CA (1) CA2611683A1 (ko)
WO (1) WO2006089254A2 (ko)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8385725B2 (en) * 2008-02-26 2013-02-26 Samsung Electronics Co., Ltd. Method of the apparatus for recording digital multimedia based on buffering states of the multimedia service
JP5553533B2 (ja) * 2009-06-08 2014-07-16 キヤノン株式会社 画像編集装置およびその制御方法およびプログラム
KR20110000947A (ko) * 2009-06-29 2011-01-06 에스케이네트웍스 주식회사 영상 통화 블로그 서비스 제공 장치 및 방법

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1083762A1 (en) * 1999-09-09 2001-03-14 Alcatel Mobile telecommunication terminal with a codec and additional decoders
JP2001222500A (ja) * 1999-12-01 2001-08-17 Sharp Corp ネットワークゲートウェイにおけるプログラムの配布方法
EP1109400A1 (en) * 1999-12-16 2001-06-20 CANAL+ Société Anonyme Transmission of a command to a receiver or to a decoder
US20020012347A1 (en) * 2000-02-03 2002-01-31 Patrick Fitzpatrick System and method for downloading code
US20010030667A1 (en) * 2000-04-10 2001-10-18 Kelts Brett R. Interactive display interface for information objects
US20020112237A1 (en) * 2000-04-10 2002-08-15 Kelts Brett R. System and method for providing an interactive display interface for information objects
JP2002268892A (ja) * 2001-03-13 2002-09-20 Amada Co Ltd ソフトウェア配信方法及びそのシステム
JP2004038941A (ja) * 2002-04-26 2004-02-05 Matsushita Electric Ind Co Ltd ユニバーサル・マルチメディア・フレームワークの端末装置、サーバ、及びゲートウエイのコンテンツ適合方法
CA2505936A1 (en) * 2002-11-11 2004-05-27 Supracomm, Inc. Multicast videoconferencing
US7430602B2 (en) * 2002-12-20 2008-09-30 Qualcomm Incorporated Dynamically provisioned mobile station and method therefor
US20040133914A1 (en) * 2003-01-03 2004-07-08 Broadq, Llc Digital media system and method therefor
JP2005033664A (ja) * 2003-07-10 2005-02-03 Nec Corp 通信装置及びその動作制御方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of EP1856805A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9507609B2 (en) 2013-09-29 2016-11-29 Taplytics Inc. System and method for developing an application
US10169057B2 (en) 2013-09-29 2019-01-01 Taplytics Inc. System and method for developing an application
US10802845B2 (en) 2013-09-29 2020-10-13 Taplytics Inc. System and method for developing an application
US11614955B2 (en) 2013-09-29 2023-03-28 Taplytics Inc. System and method for developing an application

Also Published As

Publication number Publication date
EP1856805A2 (en) 2007-11-21
CA2611683A1 (en) 2006-08-24
JP2008537854A (ja) 2008-09-25
AU2006214055A1 (en) 2006-08-24
KR20070112461A (ko) 2007-11-26
CN101160577A (zh) 2008-04-09
EP1856805A4 (en) 2009-01-07
WO2006089254A3 (en) 2007-11-29

Similar Documents

Publication Publication Date Title
US8849964B2 (en) Mobile imaging application, device architecture, service platform architecture and services
US8896717B2 (en) Methods for deploying video monitoring applications and services across heterogeneous networks
US20060072837A1 (en) Mobile imaging application, device architecture, and service platform architecture
US20130039433A1 (en) System, method and apparatus of video processing and applications
KR101012618B1 (ko) 이미징 시스템 내의 이미지들의 프로세싱
EP1800415A2 (en) Mobile imaging application, device architecture, and service platform architecture
US20070064124A1 (en) Media spooler system and methodology providing efficient transmission of media content from wireless devices
US20140368672A1 (en) Methods for Deploying Video Monitoring Applications and Services Across Heterogeneous Networks
EP1856805A2 (en) Mobile imaging application, device architecture, service platform architecture and services
CA2583745A1 (en) Video monitoring application, device architectures, and system architecture
EP2210366B1 (en) Methods and systems for transferring multimedia content using an existing digital sound transfer protocol

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200680012489.8

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2007556380

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2006214055

Country of ref document: AU

Ref document number: 2006735520

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 1020077021306

Country of ref document: KR

ENP Entry into the national phase

Ref document number: 2006214055

Country of ref document: AU

Date of ref document: 20060216

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2611683

Country of ref document: CA