WO2009004636A2 - Method, device and system for providing rendered multimedia content to a message recipient device

Method, device and system for providing rendered multimedia content to a message recipient device

Info

Publication number
WO2009004636A2
Authority
WO
WIPO (PCT)
Prior art keywords
message
rendering
walkdata
rendered
new
Prior art date
Application number
PCT/IL2008/000926
Other languages
English (en)
Other versions
WO2009004636A3 (fr)
Inventor
Amir Cogan
Original Assignee
Playwagon Ltd.
Priority date
Filing date
Publication date
Application filed by Playwagon Ltd. filed Critical Playwagon Ltd.
Publication of WO2009004636A2 publication Critical patent/WO2009004636A2/fr
Publication of WO2009004636A3 publication Critical patent/WO2009004636A3/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/58 Message adaptation for wireless communication

Definitions

  • the present invention relates generally to the field of communication. More specifically, the present invention relates to a method, device and system for providing rendered multimedia content to a message recipient device.
  • The term "multi-media" was originally used to describe the Exploding Plastic Inevitable, a performance that combined live rock music, cinema, experimental lighting and performance art. In the intervening forty years the word has taken on different meanings. In the late 1970s the term was used to describe presentations consisting of multi-projector slide shows timed to an audio track. In the 1990s it took on its current meaning. In common usage, the term multimedia refers to an electronically delivered combination of media, including video, still images, audio and text, presented in such a way that it can be accessed interactively. Much of the content on the web today falls within this definition.
  • Multimedia Messaging Service is a standard for telephone messaging systems that allows sending messages that include multimedia objects (images, audio, video, rich text) and not just text as in Short Message Service (SMS). It is mainly deployed in cellular networks along with other messaging systems like SMS, Mobile Instant Messaging and Mobile E-mail. Its main standardization effort is done by 3GPP, 3GPP2 and Open Mobile Alliance (OMA).
  • MMS Multimedia Messaging Service
  • MMS was originally developed within the Third-Generation Partnership Program (3GPP), a standards organization focused on standards for the UMTS/GSM networks. [005] Since then, MMS has been deployed world-wide and across both GSM/GPRS and CDMA networks.
  • 3GPP Third-Generation Partnership Program
  • MMS has also been standardized within the Third-Generation Partnership Program 2 (3GPP2), a standards organization focused on specifications for CDMA2000 networks. As with most 3GPP standards, the MMS standards have three stages: Stage 1 - Requirements (3GPP TS 22.140)
  • Multimedia content created by one brand of MMS phone may not be entirely compatible with the capabilities of the recipient's MMS phone.
  • the recipient MMSC is responsible for providing for content adaptation (e.g., image resizing, audio codec transcoding, etc.), if this feature is enabled by the mobile network operator.
  • content adaptation e.g., image resizing, audio codec transcoding, etc.
  • When content adaptation is supported by a network operator, its MMS subscribers enjoy compatibility with a larger network of MMS users than would otherwise be available.
  • [0010] Distribution lists: Current MMS specifications do not include distribution lists, nor methods by which large numbers of recipients can be conveniently addressed, particularly by content providers, called Value Added Service Providers (VASPs) in 3GPP.
  • VASPs Value Added Service Providers
  • Unlike SMS, MMS requires a number of handset parameters to be set. Poor handset configuration is often blamed as the first point of failure for many users. Service settings are sometimes preconfigured on the handset, but mobile operators are now looking at new device management technologies as a means of delivering the necessary settings for data services (MMS, WAP, etc.) via over-the-air programming (OTA).
  • OTA over-the-air programming
  • the present invention is a method, device and system for providing multimedia content to one or more message recipient devices.
  • a message rendering system including: (1) one or more communication modules, (2) one or more graphics/video rendering engines, and (3) one or more data repositories in which rendering elements such as backgrounds, object models and animated character models may be stored.
  • the one or more communication modules may include an IP based communication module and a cellular communication module.
  • a message authoring device may be used by a message author to instruct the message rendering system to generate a multimedia message and to send the rendered message to one or more recipient devices designated by the message author.
  • the authoring device may be any wireless or wired communication device, including a cell phone, a smart-phone, a desktop or mobile computer, or any other device capable of transmitting data (e.g. instructions) to a remote computing system.
  • Data or instructions transmitted from the authoring device to the message rendering system may include: (1) the text content of the author's message (e.g. characters, words, phrases, recorded audio and recorded video), and (2) rendering parameters, which may indicate: (a) a selected background, which can be a still image or a video (e.g. an animated or basic movie), (b) selected object models, (c) selected animated character models (an animated character may also be referred to as a hero), (d) a selected rendering theme, (e) the color, size, orientation, velocity, opacity, timing and path of any rendered element, (f) one or more effects to apply to any rendered element, and (g) a "text effect" used to embed the text content into an animation, such that the animation/movement of one or more characters may depend on the content/text and the text effect, and the content/text becomes part of the animation according to a style defined by the text effect.
  • the rendering system may auto-select specific rendering parameters omitted by the user.
  • a message rendering system may render a multimedia message including the content within the data received from the authoring device, wherein the rendering is performed in accordance with at least one rendering parameter within the data/instructions received from the authoring device.
  • the data transmitted from the authoring device to the message rendering system may also include a message recipient device identifier (e.g. phone number, email address, URL or IP address), which identifier may be used by the system's communication module to address and transmit a rendered message to the message's intended recipient.
  • a message recipient device identifier e.g. phone number, email address, URL or IP address
  • The rendering system may include a message interpreter adapted to convert data and/or rendering instructions encoded in a message from an authoring device into commands, parameters and/or instructions usable by the one or more rendering engines.
  • One or more rendering parameters within data received from an authoring device may be used by one or more rendering engines associated with the message rendering system to select and retrieve rendering elements from one or more data repositories, including: (1) backgrounds, (2) object models, and (3) animated character models.
  • the background or another element in a rendered message may be an animated movie, which can also be referred to as a "basic movie", which basic movie may also be stored in a repository.
  • At least some rendered portion of a rendered animated character may be embedded within a "basic movie".
  • One or more rendering parameters within data received from an authoring device may be used by one or more rendering engines associated with the message rendering system to adjust the rendering of selected rendering elements, including adjusting the color, size, orientation, velocity, opacity, timing and path of any rendered element.
  • one or more rendering parameters may be associated with one or more rendering effects which may be applied to any rendered element.
  • the content (e.g. characters, words, phrases, recorded audio and recorded video)
  • the content within the data sent by the authoring device may be used by the one or more rendering engines.
  • a text rendering effect may also be selected.
  • the content may be used by the one or more rendering engines to select and/or to adjust a rendering aspect (e.g. color, size, orientation and movement) of a rendering element.
  • The one or more rendering engines may alter the rendering of: (1) movement of an animated character's limbs to correspond with a pattern correlated to characters/letters within the content; (2) movement of an animated character's lips to correlate with audio/words within the content; (3) the size of a rendering element to correlate with an image or video size within the content (e.g. the size of the rendered surface onto which the image or video is mapped). For example, a text string may be broken into different lines, and text may be rendered at different sizes and at different angles on different lines.
  • The color of the rendered text, or some other rendering effect of the text, may also be auto-selected by the message rendering system as a function of the background selected by the author, for example in order to contrast the text against the selected background. Any rendering methodology or system, known today or to be devised in the future, may be applicable to the present invention.
  • The one or more rendering engines may render a message in a first format (e.g. MPEG).
  • The message rendering system may include one or a set of transcoders adapted to convert a rendered message from the first format to a second format, which second format may be selected based on display capabilities (e.g. screen size and available video decoders) on the recipient device.
  • a rendered message may be transcoded prior to transmission to the recipient device.
  • Information regarding the display capabilities of the recipient device (e.g. make and model of the recipient device) may be transmitted to the message rendering system by the authoring device.
  • the message rendering system may include a data table for storing display capability information and/or encoding format information for a given recipient device indexed based on the recipient device identifier. Any encoding, decoding and transcoding technology, known today or to be devised in the future, may be applicable to the present invention.
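  • By way of illustration only, the following Java sketch shows one way such a capability table could be organized. The patent does not prescribe a schema or an implementation language (it only mentions Java and Action Script elsewhere), so all class and field names here are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical record of a recipient device's display capabilities.
class DeviceCapabilities {
    final int screenWidth;        // pixels
    final int screenHeight;       // pixels
    final String preferredFormat; // e.g. "3gp" or another supported encoding

    DeviceCapabilities(int w, int h, String format) {
        this.screenWidth = w;
        this.screenHeight = h;
        this.preferredFormat = format;
    }
}

// Table indexed by the recipient device identifier
// (phone number, email address, URL or IP address).
class CapabilityTable {
    private final Map<String, DeviceCapabilities> table = new HashMap<>();

    void register(String recipientId, DeviceCapabilities caps) {
        table.put(recipientId, caps);
    }

    // Fall back to a conservative default when the device is unknown
    // (the fallback values are an assumption, not from the patent).
    DeviceCapabilities lookup(String recipientId) {
        return table.getOrDefault(recipientId,
                new DeviceCapabilities(176, 144, "3gp"));
    }
}
```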
  • The authoring device may be a conventional mobile phone with a short message service ("SMS") application, and the data sent from the authoring device to the message rendering system may be a structured SMS including a content portion and one or more rendering parameters in the form of formatted character strings (e.g. the string "**B3**" may denote use Background No. 3).
  • the authoring device may include an authoring application, either an installed application or a browser application, which application provides an interface for the input of message content and the selection of possible rendering parameters.
  • The authoring application may intermittently download from the message rendering system, or from an application server associated with the rendering system, information about available rendering elements (e.g. rendering objects such as backgrounds, still or video, text effects and animated characters) and rendering options (e.g. themes and effects) available on the message rendering system.
  • the authoring application may graphically present the downloaded information through a user interface, and a message author may select which rendering options/parameters to use in conjunction with entered content through the application user interface.
  • the authoring application may combine all user entered data (e.g.
  • The message from the authoring application to the message rendering system may be in the form of a structured SMS or in the form of one or more data packets sent over a distributed data network. Any form of digital data communication, known today or to be devised in the future, may be applicable to the present invention.
  • One or more of the rendered elements in a message rendered according to an aspect of the present invention may include commercial advertising.
  • an author may authorize the rendering system to insert advertising into the message, and in return, the advertiser may offset some or all of the cost of the message to the author.
  • The advertising may be mapped onto or embedded into any rendered element, including animated objects, animated characters, the background picture or movie, or any other element.
  • Fig. 1 shows a block diagram of an exemplary message rendering system according to some embodiments of the present invention
  • Fig. 2 shows a flow chart including the steps of an exemplary method of operating a message rendering system according to some embodiments of the present invention
  • FIG. 3 shows a block diagram of a message authoring device according to some embodiments of the present invention.
  • FIG. 4 shows a flow chart including the steps of an exemplary method of operating a message authoring device according to some embodiments of the present invention
  • FIG. 5A shows a symbolic block diagram of an exemplary embodiment of a message rendering system according to some embodiments of the present invention
  • FIG. 5B shows a hybrid block and flow diagram of a further embodiment of a message rendering system
  • FIG. 6A shows an exemplary web browser interface of a message authoring application, in accordance with some embodiments of the present invention
  • FIG. 6B shows an exemplary mobile device interface of a message authoring application, in accordance with some embodiments of the present invention
  • FIG. 7A shows an exemplary flow of information within a rendering system according to some embodiments of the present invention.
  • FIG. 7B shows a further exemplary flow of information within a rendering system according to further embodiments of the present invention.
  • Fig. 8A shows an exemplary "font-meta-data" (fmd), describing the glyphs of the fonts used to display a message, in accordance with some embodiments of the present invention
  • Fig. 8B shows the steps of an exemplary fmd exposure process for the letter "F", in accordance with some embodiments of the present invention
  • Fig. 9 shows the steps of an exemplary pre-recorded hero-body-motion, showing a scene where a movie hero is writing the sender's message, in accordance with some embodiments of the present invention.
  • Fig. 10 shows an exemplary hero animation using three parts: his right-hand palm plus writing tool, his right arm, and the rest of his body;
  • Figs. 11A and 11B show the frames of a 'smoke effect' animation of the "A" character, in accordance with some embodiments of the present invention
  • Embodiments of the present invention may include apparatuses for performing the operations herein. This apparatus may be specially constructed for the desired purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable and programmable read only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions, and capable of being coupled to a computer system bus.
  • IP networking is a set of communications protocols that implement the protocol stack on which the Internet and most commercial networks run. It has also been referred to as the TCP/IP protocol suite, which is named after two of the most important protocols in it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were also the first two networking protocols defined.
  • TCP Transmission Control Protocol
  • IP Internet Protocol
  • The Internet Protocol suite, like many protocol suites, can be viewed as a set of layers. Each layer solves a set of problems involving the transmission of data, and provides a well-defined service to the upper layer protocols based on using services from some lower layers. Upper layers are logically closer to the user and deal with more abstract data, relying on lower layer protocols to translate data into forms that can eventually be physically transmitted.
  • the TCP/IP reference model consists of four layers.
  • the IP suite uses encapsulation to provide abstraction of protocols and services. Generally a protocol at a higher level uses a protocol at a lower level to help accomplish its aims.
  • The Internet protocol stack has never been altered by the IETF from the four layers defined in RFC 1122. The IETF makes no effort to follow the seven-layer OSI model and does not refer to it in standards-track protocol specifications and other architectural documents.
  • The four layers and representative protocols are:
  • Application: DNS, TFTP, TLS/SSL, FTP, Gopher, HTTP, IMAP, IRC, NNTP, POP3, SIP, SMTP, SNMP, SSH, TELNET, ECHO, RTP. Routing protocols like BGP, which for a variety of reasons run over TCP, may also be considered part of the application or network layer.
  • Transport: TCP, UDP, DCCP, SCTP, IL, RUDP.
  • Network: IP (IPv4, IPv6). ICMP and IGMP run over IP and are considered part of the network layer, as they provide control information. ARP and RARP operate underneath IP but above the link layer, so they belong somewhere in between.
  • Link: Ethernet, Wi-Fi, token ring, PPP, SLIP, FDDI, ATM, Frame Relay.
  • RFC 3439, on Internet architecture, contains a section entitled "Layering Considered Harmful": emphasizing layering as the key driver of architecture is not a feature of the TCP/IP model, but rather of OSI. Much confusion comes from attempts to force OSI-like layering onto an architecture that minimizes their use. [0047] Today, most commercial operating systems include and install the TCP/IP stack by default. For most users, there is no need to look for implementations. TCP/IP is included in all commercial Unix systems, Mac OS X, and all free-software Unix-like systems such as Linux distributions and BSD systems, as well as Microsoft Windows.
  • Mobile devices may connect with and access data from an enterprise data system over a communication network, at least some portion of which may be a wireless network. While the term wireless network may technically be used to refer to any type of network that is wireless, the term is most commonly used to refer to a telecommunications network whose interconnections between nodes are implemented without the use of wires, such as a computer network (which is a type of communications network).
  • Wireless telecommunications networks are generally implemented with some type of remote information transmission system that uses electromagnetic waves, such as radio waves, for the carrier and this implementation usually takes place at the physical level or "layer" of the network. (For example, see the Physical Layer of the OSI Model).
  • electromagnetic waves such as radio waves
  • GSM Global System for Mobile Communications
  • the GSM network is divided into three major systems which are the switching system, the base station system, and the operation and support system (Global System for Mobile Communication (GSM)).
  • The cell phone connects to the base station system, which then connects to the operation and support station; it then connects to the switching station, where the call is transferred to where it needs to go (Global System for Mobile Communication (GSM)).
  • PCS Personal Communications Service
  • D-AMPS Digital Advanced Mobile Phone Service
  • GSM Global standard for digital mobile communication, common in most countries except South Korea and Japan.
  • PCS: CDMA and GSM networks operating at 1900 MHz in North America. Mobitex: a pager-based network in the USA and Canada, built by Ericsson, now used by PDAs such as the Palm VII and Research in Motion BlackBerry.
  • GPRS General Packet Radio Service
  • An upgraded packet-based service within the GSM framework that gives higher data rates and always-on service.
  • UMTS: Universal Mobile Telephone Service (3rd-generation cell phone network), based on the W-CDMA radio access network. AX.25: amateur packet radio. NMT: Nordic Mobile Telephony, an analog system originally developed by PTTs in the Nordic countries. AMPS: Advanced Mobile Phone System, introduced in the Americas in about
  • D-AMPS Digital AMPS, also known as TDMA.
  • Wi-Fi Wireless Fidelity, widely used for Wireless LAN, and based on IEEE
  • Referring now to Fig. 1, there is shown an exemplary message rendering system interacting with two authoring devices: a mobile phone and a desktop computer. The operation of the system of Fig. 1 may be understood in view of the flow chart in Fig. 2.
  • the system may receive authoring messages from an authoring application running on one of the authoring devices.
  • Fig. 3 shows a functional block diagram of an authoring device, running an authoring application, according to some embodiments of the present invention. The operation of the device according to Fig. 3 may be understood in view of the flow diagram of Fig. 4, which includes the basic authoring functions provided by the authoring device running an authoring application according to some embodiments of the present invention.
  • Figs. 6A and 6B show exemplary authoring application interfaces, the first on a desktop web browser and the second on a mobile device.
  • the applications may be downloaded to the authoring device through an application server functionally associated with the system.
  • An authoring message generated and transmitted to the rendering system may be interpreted by an interpreter, which may convert the received authoring message into a set of rendering instructions used by one or more rendering engines in rendering a message.
  • After the message is rendered in a first format, for example as a Flash movie, it may be transcoded by one or more transcoders into a second format, for example MPEG.
  • The one or more transcoders may be functionally associated with a database including data about the display capabilities of a specific message recipient device. The one or more transcoders may select a second format based on the display capabilities of the recipient device.
  • a rendering system controller may coordinate the activity and dataflow between all the components of the rendering system.
  • the personal material is a part of the movie animation:
  • The personalized material is fully embedded in the movie, as if this material was part of the output of the professional studio that produced the movie. [0061] Taking text as an example: in the case where the personalized material is text, the text is not "pasted" into the movie; it does not just appear somewhere in the movie.
  • the system composes the final animated movie that is sent to the receiver, or receivers, from professional content that includes a movie scene with some backgrounds, characters, and certain activities.
  • One example is a short animated joke.
  • The movie is also composed from the personalized materials of the sender. There are many such movie scenes that the system uses to compose the final animated movie. These movie scenes are sometimes referred to in the rest of this document as "basic movies".
  • Personalized materials may include:
  • The professional content and the personalized materials are blended together to create the final animated movie that is sent to the receiver, or receivers.
  • The system may introduce a certain amount of randomization in the selection of the effects used to blend the personalized materials into the movie.
  • a key factor of the system is the simplicity of use by the sender, and by the receiver.
  • Although the animated movie is rich and versatile, it can be generated by the sender with minimal effort, and viewed by the receiver in the simplest manner.
  • the PASS system is in fact a full self-contained system, ready to be integrated into mobile/wireless operator deployments. It may however be used for additional messaging systems, or for other types of messaging.
  • Figure 5A describes the environment the system is deployed in, and the interfaces it implements.
  • the sender may interface the system from a regular web browser, for example from a desktop PC, or PDA or mobile phone.
  • The sender may interface the system from a mobile/wireless device in one of a few ways. He may simply compose an SMS message in a simple predefined format. He may surf to a WAP site and enter the personalized materials there, which is a method similar to surfing from a web browser, except that the surfing is done from a mobile/wireless device WAP browser.
  • He may also use a client application, i.e. an application that is downloaded to his device. This application may, for example, be written in Java or flash-lite.
  • One way the client may communicate the needed information to the PASS system is by composing and sending the SMS message described above, instead of the user doing so. That is, the user interacts with the client application, and the client application eventually composes and sends the SMS.
  • The client application should offer the user a selection of professional movies to choose from. In addition, it enables the user to type in his message, and possibly select or enter other information, like a picture or pictures, or a sound track. It also enables the user to select the destination of the message, either by typing the number or by selecting from the device's address book.
  • Figure 6A describes an example of how the web interface may look.
  • The sender selects a basic professional movie and inserts the text he wants to send in the textbox at the top right. He may then preview the combined professional content with his text. He also inserts the number of the receiver. He may also select the receiver in different manners, such as from a predefined personalized list of "buddies" he has.
  • When the SMS interface is used to send a message, the user sends an SMS with a predefined format. The SMS is sent to a number representing the PASS system (sometimes referred to as a "shortcode").
  • The SMS message should include the user text; possibly it can include an indication of which professional predefined movie is to be used in the message; and it includes information on the receiver, such as his phone number, email address or other indication. It may also include additional personalization materials. The different parts of information need to be identified by certain delimiters or identifiers.
  • One example of how the text structure of such an SMS can be arranged is:
  • T ⁇ user text> stands for the sender's text to be embedded into the message.
  • M ⁇ movie_id or movie key word or words> stands for the identifier of what movie to use for the message.
  • the movie can be identified by a specific id, or by a keyword, or keywords representing it. This identifier may be omitted. In that case the system will select a movie according to criteria described later here.
  • D<destination identifier> stands for the identifier of the receiver or receivers. It can be a phone number, an alias, an email, any group of these combined, or other identifiers. A parsing sketch of this structure is given below.
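  • By way of illustration only, a minimal Java sketch of parsing this structured SMS. The patent leaves the exact delimiters open, so this sketch assumes each field is a tag letter followed by angle-bracketed content, e.g. "T<happy birthday> M<7> D<+15551234567>"; the M field may be omitted:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical parser for the T<...> M<...> D<...> structured SMS.
class StructuredSmsParser {
    private static final Pattern FIELD = Pattern.compile("([TMD])<([^>]*)>");

    static void parse(String sms) {
        String text = null, movie = null, destination = null;
        Matcher m = FIELD.matcher(sms);
        while (m.find()) {
            switch (m.group(1)) {
                case "T": text = m.group(2); break;        // sender's message text
                case "M": movie = m.group(2); break;       // movie id or keyword (optional)
                case "D": destination = m.group(2); break; // receiver identifier(s)
            }
        }
        // When M is omitted, the system auto-selects a movie for the message.
        System.out.printf("text=%s movie=%s dest=%s%n", text, movie, destination);
    }
}
```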
  • the PASS system generates the final movie from this personalized content and from the professional content on the PASS system.
  • This movie is then sent to the receiver or receivers.
  • One method to do this is by packaging the movie in an MMS and sending it via an MM7 MMS Value Added Service interface (MMS VAS).
  • MMS VAS MM7 MMS Value Added Service interface
  • Another way is by sending the receiver a "WAP PUSH" link message. In this case the receiver receives an MMS or SMS message with a link. Once the receiver selects this link, his device will open the link in the device WAP browser, and then the movie is played via the WAP browser. In both these cases the movie is eventually played by the video movie player on the device.
  • The movie may eventually be played using the device's vector graphics capabilities.
  • Other means to send the movie to the destination exist, for example sending it to email or instant messaging, or uploading the movie to personal web sites.
  • the system is administered through a web interface, to constantly refresh the "basic movies" and PASS engine effects, or any other administrative action needed for the operation of the PASS system.
  • Figure 7A describes a typical flow of information in the PASS system.
  • The input from the sender is received at 1. It can be received from a WAP client communicating with the system's WAP server, or from a web browser communicating with the system's web server, or by SMS received by the system. Typically the SMS will not directly reach the system; it will first be translated to an HTTP POST that communicates with the system's web server.
  • The input is processed. If a movie identifier is missing in the input, the system selects a movie for the message. The system selects the effects and embedding methods to be used to integrate the user's personalized materials into the movie. Some effects may be hard coded for some movies; other effects can be added at random. The random effect selection helps make each final movie unique, not only in terms of the personalized material, but also in terms of the effects in the movie. Examples of effects are described further in the document. It is also possible at 2 to accumulate a commercial classification for the sender from the personalized material he used in this message, and in previous messages as well, for example the text he uses. Keywords in the text can be matched against categories, which are actually lists of words belonging to categories.
  • When a keyword is matched, the user is "credited" for that category.
  • A lookup of how many credits the user has for different categories can classify what kind of commercial content can be added to his movie. This type of commercial insertion can be useful in cases where users are willing to pay less for a service if it includes commercial or marketing content.
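  • By way of illustration only, a Java sketch of this keyword-to-category crediting. The category names and keyword lists are invented for the example; the patent only states that keywords are matched against word lists and that matches accumulate credits per category:

```java
import java.util.*;

// Hypothetical classifier that credits a sender's categories from message text.
class CommercialClassifier {
    private final Map<String, Set<String>> categories = new HashMap<>();
    private final Map<String, Integer> credits = new HashMap<>();

    CommercialClassifier() {
        // Illustrative category word lists, not from the patent.
        categories.put("sports", Set.of("game", "match", "goal"));
        categories.put("travel", Set.of("flight", "beach", "hotel"));
    }

    // Credit the sender once per category keyword found in the message text.
    void creditFromText(String messageText) {
        for (String word : messageText.toLowerCase().split("\\W+")) {
            for (Map.Entry<String, Set<String>> e : categories.entrySet()) {
                if (e.getValue().contains(word)) {
                    credits.merge(e.getKey(), 1, Integer::sum);
                }
            }
        }
    }

    // The category with the most accumulated credits could drive ad selection.
    Optional<String> bestCategory() {
        return credits.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey);
    }
}
```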
  • Commercial content can, for example, include some signs or logos in the background or foreground of the animation, or some commercial objects that the heroes in the movie somehow interact with.
  • A lookup of the user's profile can reveal if he/she has a "video signature". If so, it may be added at some part of the movie, usually the end. All the information processed in step 2 is loaded into some sort of description; an XML file (3) is used in this example. This description also includes all the personalized material, or pointers to where this material exists on the system.
  • The selected movie (4), selected either explicitly by the sender or implicitly by the system at 2, is composed of the movie's pre-recorded animation, Font Meta Data (5b) and the PASS engine software (5a).
  • The PASS engine generates the dynamic parts of the animation. It uses as input the XML file (3) and the Font-Meta-Data (5b). Font-Meta-Data represents information about font characters that is required for intimate animation built around the characters.
  • A virtual Flash player is loaded with the selected Flash movie (4) and the dynamically created description (an XML file in this example). The output of this player is the final movie, played virtually at the system; it is converted to the 3gp file format, the format most widely used on mobile devices, or possibly to any other video format needed.
  • the 3gp file is packaged in an MMS message, possibly with some SMIL script to describe how to present it.
  • the MMS is further delivered via some interface such as MM7 (MMS VAS interface) to the MMSC system of the receiver, or the MMSC of the sender.
  • The movie hero is animated as actually writing the message with some writing tool, on some surface.
  • the writing tool may be a piece of chalk, a pencil, a can of spray that he/she sprays on the surface with, etc.
  • the animation is such that the writing tool actually follows the message letters path, and it is synchronized with the exposure of the letters.
  • A letter like H (capital h) will be animated as three line strokes: a left-side vertical line stroke, a horizontal line stroke and a right-side vertical line stroke.
  • the dynamic animation built around the user's text is generated by the PASS animation generator.
  • This generator is composed of code and data. In one implementation this code is written in the animation language of the FLASH development framework, known as "Action Script". It is similar to the well-known software language "Java".
  • the data part is some description of the glyphs of the fonts used to display the message. This description is called hereby "font-metadata" or fmd in short.
  • the fmd is a collection of n rectangles in the geometry of the plane of the font.
  • The rectangles are ordered and directional. That means they are numbered from 1 to n, and each has a virtual arrow pointing from one side of the rectangle to the opposite side. All this information together defines how the font will be gradually exposed during the writing process. It also defines from where to where the writing tool of the hero will move to create the writing animation of the hero. The body-part location, rotation and other parameters of the hero during the writing are derived from the writing tool location and orientation.
  • The body-part motion can also be based on some pre-recorded animation motion of the hero, which is scaled in the x direction and the y direction, and possibly in rotation, so that the writing tool follows the font exposure in location and is synchronized with it in time, while the hero performs a predefined animation.
  • Figure 8A describes an exemplary "font-meta-data" (fmd), describing the glyphs of the fonts used to display a message
  • Fig. 8B describes the steps of an exemplary fmd exposure process for the letter "F".
  • The arrow direction of each rectangle is, for example, defined as follows: find the wider dimension of the rectangle; the arrow runs from the first point of this dimension toward the second point of this dimension.
  • In this example, the arrow direction of rectangle #1 will be right to left, the direction of rectangle #2 will be up to down, and that of #3 will be left to right.
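  • By way of illustration only, a Java sketch of an fmd rectangle and the arrow-direction rule just described. It assumes a coordinate system in which y grows downward (as on a Flash stage), and that the corner order (x1,y1) then (x2,y2) encodes the drawing direction; both conventions are assumptions, not specified in the text:

```java
// Hypothetical fmd building block: an ordered, directional rectangle
// in the geometry of the font plane.
class FmdRect {
    final double x1, y1, x2, y2; // first and second corner, in drawing order

    FmdRect(double x1, double y1, double x2, double y2) {
        this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2;
    }

    // The arrow runs along the wider dimension of the rectangle,
    // from the first point of that dimension toward the second.
    String arrowDirection() {
        double width = Math.abs(x2 - x1);
        double height = Math.abs(y2 - y1);
        if (width >= height) {
            return x2 >= x1 ? "left-to-right" : "right-to-left";
        } else {
            return y2 >= y1 ? "top-to-bottom" : "bottom-to-top"; // y grows downward
        }
    }
}
```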
  • the fmd model can be used on any font and on real handwriting as well. (A scan of a user's handwriting can be loaded to the system and have fmd applied to it).
  • the exposure process of the "F" may comprise the following steps:
  • The exposure of each mask can be made even smoother by exposing the area behind the mask gradually, in the direction of the arrow.
  • Figure 9 shows a scene where a movie hero is writing the sender's message.
  • The hero may be animated using the following three parts, shown (enlarged) in Figure 10: his right-hand palm plus writing tool, his right arm, and the rest of his body.
  • the following data represents pre-recorded x, y and rotation values for the right arm.
  • The x,y values are relative to the whole stage of the animation. They could also be relative to the registration point of the main body part, or to some other reference. The important information in these values is not the absolute location, but the relative delta-x and delta-y between adjacent locations.
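  • By way of illustration only, a Java sketch of such pre-recorded key-frame data, re-based to deltas relative to the first frame so the motion can be replayed from any reference point, as the paragraph above suggests (class and field names are hypothetical):

```java
// Hypothetical key frame for one body part (e.g. the right arm).
class PartFrame {
    final double x, y, rotation; // rotation in degrees

    PartFrame(double x, double y, double rotation) {
        this.x = x; this.y = y; this.rotation = rotation;
    }
}

class PreRecordedMotion {
    // Convert absolute recorded positions into deltas relative to the
    // first frame; only the frame-to-frame deltas carry information.
    static PartFrame[] toRelative(PartFrame[] recorded) {
        PartFrame base = recorded[0];
        PartFrame[] out = new PartFrame[recorded.length];
        for (int i = 0; i < recorded.length; i++) {
            out[i] = new PartFrame(
                    recorded[i].x - base.x,
                    recorded[i].y - base.y,
                    recorded[i].rotation);
        }
        return out;
    }
}
```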
  • The animation is dynamically generated frame by frame by the PASS engine from the fmd and the pre-recorded animation data.
  • Fmd is data that describes helpful information regarding each character of a specific font.
  • Fmd is size scalable, the same as fonts are size scalable. Fonts are usually kept in some vector representation like ttf (true type font). This means that scaling the font to larger sizes keeps the smooth look of the font. The font will not "pixelize”.
  • ttf true type font
  • The basic handling format of shapes is vectored as well. Since fmd data represents sizes, positions and relations of fonts, and is manipulated in a vector graphics environment, fmd is scalable; thus it is only required to produce fmd for a specific font set at some average size of the font, and the fmd for all other sizes is automatically derived by vector-graphics scaling.
  • Fmd may be produced completely manually, or with the aid of some pre-written computer program with human intervention (thus semi-automatically), or it may be produced fully automatically by a computer program with a set of rules that describe the characters of the fonts in some way.
  • a specific program can guide someone in positioning and sizing the masking rectangles, helping him realize if they fully cover the characters, and showing him easily the animation of revealing the characters according to the selected mask rectangles.
  • a fully automatic program will start out with a predefined set of rules, describing each character in relation to the fmd.
  • these rules can be an ordered set of line strokes: Each character has an ordered set of line strokes. Each stroke has a direction (up, down, left, right) and general vertical or horizontal position in the character plane.
  • This list of ordered strokes represents a description of how the character is generally written by most people. For example, the capital "F" list would include: stroke #1: direction is left and position is horizontal top; stroke #2: direction is down and position is middle; stroke #3: direction is right and position is middle.
  • a computer program can search and decide on the size and positions of the masking rectangles in a fully automatic manner. Manual corrections may be applied later if necessary.
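  • By way of illustration only, a Java sketch of the stroke-rule data such a program could start from. The capital "F" entry follows the description above; the enum encoding itself is an assumption:

```java
import java.util.List;

// Hypothetical per-character stroke rule for automatic fmd generation.
class StrokeRule {
    enum Direction { UP, DOWN, LEFT, RIGHT }
    enum Position { TOP, MIDDLE, BOTTOM }

    final Direction direction;
    final Position position;

    StrokeRule(Direction direction, Position position) {
        this.direction = direction;
        this.position = position;
    }

    // Ordered strokes for capital "F", per the description above:
    // top bar, vertical stem, middle bar.
    static List<StrokeRule> capitalF() {
        return List.of(
                new StrokeRule(Direction.LEFT, Position.TOP),
                new StrokeRule(Direction.DOWN, Position.MIDDLE),
                new StrokeRule(Direction.RIGHT, Position.MIDDLE));
    }
}
```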
  • The example refers only to the letter "F" at some location, following the segmentation of the letter shown above, but this example can be generalized in a similar manner to any sequence of characters in one or more lines.
  • In the first frame the character "F" is completely masked, i.e. it is not visible.
  • The writing tool, which is part of the palm object, is positioned at the middle of the side of rectangle #1, represented by x1,y1 and x1,y2 of that rectangle (the right side in this case).
  • The body of the hero is located in such a place that will cause the right arm to exactly connect to the right palm. How this is done is explained further below.
  • the arm is located at a fixed relative location to the body. This is at the right shoulder of the body. In this example this location is always a predefined delta-x and delta-y from the registration point of the body.
  • the engine constantly makes decisions whether to advance the body to the next pre-recorded location, in each new frame to be composed. This is done by comparing the x location indicated by the next pre-defined location with the x location where the body would be if it were to be exactly fit for the current palm location (derived from the font), and accounting for a nominal hand rotation.
  • We will refer to the x location derived from the font as the font-x-location. If the predefined x location is less advanced than the font-x-location, then the body will be advanced to the next pre-recorded location. The body will then get new x,y, and rotation values.
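  • By way of illustration only, a Java sketch of this per-frame advance decision, reusing the PartFrame class from the earlier sketch. It assumes writing proceeds left to right (increasing x); "fontX" stands for the font-x-location defined above:

```java
// Hypothetical per-frame decision of whether to advance the body
// to the next pre-recorded location.
class BodyAdvancer {
    private final PartFrame[] preRecorded; // body key frames, in order
    private int index = 0;

    BodyAdvancer(PartFrame[] preRecorded) {
        this.preRecorded = preRecorded;
    }

    // fontX: where the body would be if it were exactly fit to the
    // current palm location derived from the font, after accounting
    // for a nominal hand rotation.
    PartFrame nextBodyFrame(double fontX) {
        // Advance when the next pre-recorded x lags behind the writing.
        if (index + 1 < preRecorded.length && preRecorded[index + 1].x < fontX) {
            index++;
        }
        return preRecorded[index]; // new x, y and rotation for the body
    }
}
```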
  • The arm is now fit between the body shoulder connection point and the palm connection point, by scaling and rotating it so that it perfectly fits between them.
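  • By way of illustration only, a self-contained Java sketch of this fitting step: the arm is rotated to the angle of the shoulder-to-palm segment and scaled so that its natural length spans the distance exactly (function and parameter names are hypothetical):

```java
// Hypothetical helper that fits the arm between the shoulder anchor
// (sx, sy) and the palm anchor (px, py).
class ArmFitter {
    // Returns {angleDegrees, scale} for an arm whose un-scaled length
    // is naturalLength.
    static double[] fit(double sx, double sy, double px, double py,
                        double naturalLength) {
        double dx = px - sx;
        double dy = py - sy;
        double angle = Math.toDegrees(Math.atan2(dy, dx)); // segment angle
        double scale = Math.hypot(dx, dy) / naturalLength; // stretch factor
        return new double[] { angle, scale };
    }
}
```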
  • The effect of the appearance of the second frame is that the first path of the letter "F" is revealed, and it seems that the writing tool has exactly followed the "revealing path".
  • One action example may be to exchange this character with another character, by detaching the other character as well and then putting back each character in the location where the other character was (this can be used, for example, to correct spelling errors caused by originally mixed-up letters). Additional action examples are to strike or hit some other object or another hero, or to throw the letter at some object or another hero. After the character is detached from the surface, some "detached image" of the character may be displayed where the character was located, or possibly nothing may be displayed in that location.
  • the animation is derived from the character fmd, and pre-recorded animations.
  • the fmd in this case may include information on where the palm should grip around the character, certain changes in the character appearance once it is detached, and possibly other such information.
  • A few pre-recorded animations of the hero approaching the character to be detached may exist. The system selects one of these at random or by other criteria.
  • the animation is scaled in terms of x,y location, in such a way that the hero will end up in the correct position relative to the character to be detached, once all (or part) of the pre-recorded animation is played.
  • The palm of the hero is then right over the character to be detached, taking into account information in the character fmd on the exact location of the palm relative to the character location.
  • the actual detach animation may also be a pre-recorded action of pulling back and forth.
  • the detached character can be animated back and forth with the palm.
  • the palm may be broken into two or more parts, some in-front of the character and some hidden behind the character - to give a "grip" animation.
  • Pre-recorded animation of the detachment, and further pre-recorded animation of the action are used to continue the animation.
  • the character image follows the palm grip image in the prerecorded animation.
  • If the character is to be thrown, information on the path it goes through, its rotation, etc., is in the pre-recorded animation.
  • fmd data may be used to enhance the animation.
  • the hero can use his mouth or other body parts to manipulate the character.
  • The hero may add words in between words already written, by using smaller postscript letters (above the words already written). Again, the method here is to use fmd and pre-recorded animation movements of the hero.
  • the erase action can be used to strike a line over the words to be erased, or to scribble over them, or to erase them with an eraser so they disappear.
  • the prerecorded animation includes the approach of the hero to the location where he starts the erase action, the erase action he does - hand movement.
  • Fmd can be used to enhance the display of the erase process, for example to show that not all of the character was erased and parts still exist. Adding is done in a similar manner. Small scaling of fonts is needed, which is trivial in a vector graphics system such as FLASH.
  • The hero builds some puzzle-like object (a puzzle, broken glass or other). The message appears to be full once all the pieces are put in place.
  • Each piece object holds a part of the message, in order to create the effect of the full message when all pieces are in place. To do this, the system back-calculates how each piece object should look. One way to do this is by taking the full final message object bitmap and adding pre-defined cracks to it using a drawing tool, then using a fill tool to fill the pieces created by the cracks one by one, and manipulating the result such that everything but the piece now being calculated is transparent, while the piece retains its original drawing.
  • The hero makes a puff of smoke that does a smoke motion and gradually turns into smoky fonts.
  • The whole process is a continuous flow of smoke. This is done by creating an animation for each character individually, from a general cloud to the smoky appearance of the character. All the characters of the message are then animated leaving the hero's mouth. Each character moves to its location in the message, but they all overlap during most of the animation, so it seems like one smoky image.
  • Additional animations of smoke can be added, but they fade out at some time during the animation. The total effect is a continuous flow of smoke that looks real and turns very smoothly into the message letters.
  • FIG. 11A and 11 B An Additional Example of how the "A" character is animated in smoke effect is shown in figures 11A and 11 B. At the beginning the smoke object is small since it is just starting to leave the mouth. All characters will partially overlap at this time. Then the object grows, still all characters are partially overlapping, but each is starting to make its move to its location. Eventually each character is located and smoothly revealed. Space characters can be animated as well - the animation is like a regular character, but the final animation is that it completely disappears. [00106] A set of pre-recorded paths may be used, and scaled as needed to create to character motion to place, while the character is animated as shown above. The total effect can be to present the entire message in one puff or a few.
  • Fmd for any handwriting: the fmd can be created for any font, and even for any handwriting. If the system receives a scan of a user's handwriting, it can semi-automatically or automatically derive fmd information for the handwriting, thus enabling all writing effects for the user's handwriting. [00108] An additional example of fmd
  • fmd can be used to animate fonts that are some combination of objects that represent lines and dots.
  • One example is planks and nails.
  • Each character in this font is a specially crafted character composed of m planks and n nails that hold the planks attached to the surface.
  • The fmd includes the locations of the nails and planks in the character plane, as well as the sizes of the planks.
  • The PASS engine can then animate the hero creating the message by nailing down the planks. Pre-recorded animations of the hero taking a plank and nailing it down are used. The engine uses the fmd to animate how many planks the hero takes for a character, where he places them on the surface, and where he places the nails that he hits to fasten the planks. A sketch of such an fmd record follows below.
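A possible shape for such an fmd record, in Python. The field names, the normalized coordinate convention and the example "T" layout are illustrative assumptions; the text specifies only that plank/nail locations and plank sizes are stored in the character plane:

    from dataclasses import dataclass

    @dataclass
    class Plank:
        x: float          # center position in the character plane (0..1)
        y: float
        length: float     # plank size, as stored in the fmd
        angle_deg: float  # orientation on the surface

    @dataclass
    class Nail:
        x: float
        y: float

    @dataclass
    class PlankCharFmd:
        char: str
        planks: list[Plank]
        nails: list[Nail]

    # A rough capital "T": one horizontal plank and one vertical plank.
    T_FMD = PlankCharFmd(
        char="T",
        planks=[Plank(0.5, 0.1, 0.8, 0.0), Plank(0.5, 0.55, 0.8, 90.0)],
        nails=[Nail(0.2, 0.1), Nail(0.8, 0.1), Nail(0.5, 0.3), Nail(0.5, 0.9)],
    )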
  • Fonts can be made of candles and cream, both swiped off a birthday cake onto the surface where the message is displayed.
  • The cake is "shape tweened", using one or more intermediate shapes, into the final shapes of the characters on the surface. This can be done, for example, in the same manner as the smoke effect.
  • Fmd can also describe fonts formed by holes created by bullet shots, or some other kind of shooting. Each character is a collection of "hole locations" in the character plane. A pre-recorded animation of the hero shooting at something, with the bullets ending up hitting the message surface, is used. The fmd is used to animate how many shots each character takes, and how each shot adds another hole to the character currently being generated. Depending on the pace of the animation, one shot may account for several holes showing up.
  • Video Signature is another unique component of the invention.
  • a "video signature” is a short movie, typically a few seconds, that holds personal information of a user in an attractive amusing animated representation. The system enables the users to simply compose their "Video Signature”. This signature can then be appended to message movies sent to receivers. It can be used in any similar way a static signature (like the one people add at the end of emails) is used, i.e. attached to emails, uploaded to web sites, personal sites, blogs etc.
  • a video animation may include the following personal information: Name, phone number, address, web site, slogan (like "make love, not war” etc), it may include a picture, it may include an avatar representation of the user that is doing some action, like waving, making a ski jump or anything else.
  • the user can select among a predefined list of animations to embed all or some of his personal information to eventually create the "video signature".
  • fmd methods described above, and well known methods to animate avatars can be used to generate the "video signature” animation. Sound can be selected from a predefined list of sounds.
  • "Video signatures" can also be used similarly to business cards, i.e. it can be a standalone object, exchanged as necessary in-between mobile devices, or other electronic devices capable of displaying video.
  • a message received by a receiver includes the final layout of the message.
  • the receiver does not know anything about the process by which the message was created.
  • The writer might have hesitated over certain parts, changed some parts, added words later in certain places, removed certain words, replaced certain words with others, or completely dismissed a message he wrote and re-wrote it.
  • This invention enables the author of the message to express all these actions, thoughts and second thoughts in the message he sends. Emotions felt during the writing can also be accounted for, but they are only one specific aspect of the overall invention, which is much wider in terms of the timing of actions, the location of actions in the message, and the flexibility of expressing the process of writing a message.
  • a message will no longer be the final "baked" version.
  • The invention allows the author to deliberately express these actions in the final message, for example to make the message funny and full of humor and to show his actions during the writing. The invention can also simply trace the author's real actions while writing the message (such as a word he thought about for a long time, words he erased, words he added later, etc.) and reproduce them in the actual message sent to the receiver.
  • This capability may be applied in different ways to the different effects described above.
  • The hero is seen performing actions like thinking, then erasing words and adding words.
  • The hero may also be animated as thinking, then tearing off some planks, or putting a big plank across a word he wishes to erase.
  • The hero can make puffs to erase words (dispersing the smoke that forms the words), and can add additional puffs to add words later on.
  • the hero can create the full puzzle, then make changes on it with a writing tool.
  • PASS engine API for 3rd parties: One major aspect of the described system is the openness achieved by the PASS engine API. This API allows third parties to make use of all or some of the effects built into the PASS engine, and to incorporate these effects in their own animation, using their own scenes and hero characters. To comply with the API, the third party is required to supply a few items. These items may include: a basic movie with backgrounds, scenes and possibly a hero or heroes. Each piece of pre-recorded data in the PASS engine requires a library item or items, with specific defined names, representing the hero - for example, a library item for the hero's body, for the hero's arm and for the hero's palm.
  • One example of the API is an API for a short movie in which, after the hero writes the message on a surface using a can of spray, the characters drip. Here the 3rd party is required to supply the following:
  • The scenes should include only the backgrounds, without the heroes.
  • The scenes should have a surface on which to write the message.
  • The surface location and size are configurable, but must conform to some constraints, and should be made known to the PASS engine so it can calculate the layout of the message characters on the surface.
  • The PASS engine may need to resize the characters so that they fit in the designated area. Notifying the PASS engine can be done via a global variable in "Action Script" (the animation software language) or by adding a parameter to the xml description file that is read by the PASS engine; a sketch of reading such a file follows below.
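A minimal sketch of the engine side reading such a description file, in Python. The element and attribute names (surface, message, x, y, width, height) are assumptions for illustration, as the text does not specify the file's schema:

    import xml.etree.ElementTree as ET

    def read_description(path: str) -> dict:
        """Read the surface rectangle and the user's message text."""
        root = ET.parse(path).getroot()
        surface = root.find("surface")  # assumed element; schema is hypothetical
        return {
            "surface": tuple(float(surface.get(k))
                             for k in ("x", "y", "width", "height")),
            "message": root.findtext("message", default=""),
        }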
  • The PASS engine action script can then be added to this movie by including it in the first frame. In addition, in the frame where the short movie ends and the animated text part starts, a call to the API is required. From there on, the PASS engine takes control. The engine will move the movie to the needed scene, for example hand_writing_medium_shot. It will then start the text exposure and hero-writing animation generation, using the library items of the hero supplied by the third party and the user's text message available from the xml description file. In a similar manner, all effects can be exposed in the PASS engine API. [00120] Automatic selection of a basic movie, if it is missing from the sender's input:
  • the system will select one for him.
  • the movie selection may be based on the text in the sender's message, or the text he is replying to.
  • The system will have a predefined, dynamically updated library of categories. Each category will have a list of words that match it. One word may appear in more than one category. For example, a category named "time" can exist and include words like: late, time, today, tomorrow. [00121]
  • The system analyzes the message the user is sending (or the one he is replying to, or both) and, for each word, finds the categories it belongs to.
  • Fast lookup methods like hash tables or TRIE trees may be used to find word matches in categories.
  • Each word that matches a category raises the count for that category.
  • Eventually the categories with the highest counts are taken into account and, according to some pre-defined mapping, certain top-counted category combinations point to certain basic movies; a sketch of this selection follows below.
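A minimal sketch of the category counting and movie selection, in Python. The category word lists and the combination-to-movie mapping are illustrative assumptions; the text specifies only the mechanism, not concrete categories or movies:

    from collections import Counter

    CATEGORIES = {                       # hypothetical category word lists
        "time": {"late", "time", "today", "tomorrow"},
        "love": {"love", "kiss", "heart"},
    }

    MOVIE_BY_TOP_CATEGORIES = {          # hypothetical pre-defined mapping
        ("time",): "hero_checks_watch",
        ("love",): "hero_draws_heart",
        ("love", "time"): "hero_late_for_date",
    }

    def select_basic_movie(message: str, default: str = "hero_generic") -> str:
        counts = Counter()
        for word in message.lower().split():
            word = word.strip(".,!?")
            for category, words in CATEGORIES.items():
                if word in words:        # one word may raise several categories
                    counts[category] += 1
        if not counts:
            return default
        top = tuple(sorted(c for c, _ in counts.most_common(2)))
        return MOVIE_BY_TOP_CATEGORIES.get(
            top, MOVIE_BY_TOP_CATEGORIES.get(top[:1], default))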
  • The legend may include items such as the gesture items and the modify items described below.
  • More legend items can exist in a similar manner. Some of the legend items refer to gestures, some refer to manipulations of the message, and some control the hero's actions while writing the text. [00124]
  • The gesture items can be added by the user authoring the message at any point in the message. A gesture can cause one of a few effects in the generated movie. One possible effect is that the hero is seen drawing the gesture emoticon.
  • This emoticon can be any special emoticon, like the hero's face smiling, winking, etc. It can be animated to make some motion in the area of the sentence in which it is located. This can be done with some pre-recorded animation of the emoticon.
  • Gesture actions of the hero can be produced using pre-recorded animation values.
  • The modify items enable the user to exhibit his hesitations, fix-ups, corrections and "second thoughts" during the process of writing the message. This capability enriches the message's meaning and adds information about the process of writing it. The message is no longer just the "baked" final version; it becomes, in effect, the whole process of creating the message.
  • The /dx item means: delete x words. For example, /d2 means delete the 2 words before the current location, and /d3 means delete the 3 words before the current location.
  • /dx,y is an extension of /dx. It means erase x words, starting y words back. A small parser sketch follows below.
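A minimal sketch of extracting these modify items from an authored message, in Python. The token syntax follows the examples above, while the returned event format is an illustrative assumption:

    import re

    MODIFY = re.compile(r"/d(\d+)(?:,(\d+))?")

    def parse_modify_items(text: str) -> list[dict]:
        """Return delete events: x words, starting y words back (y defaults to 0)."""
        events = []
        for m in MODIFY.finditer(text):
            x = int(m.group(1))
            y = int(m.group(2)) if m.group(2) else 0
            events.append({"pos": m.start(), "delete": x, "back_offset": y})
        return events

    # parse_modify_items("see you tomorow /d1 tomorrow at 8")
    # -> [{'pos': 16, 'delete': 1, 'back_offset': 0}]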
  • the input required from the user (sender) is kept simple. This is a key feature of the system.
  • While the user interface is simple, the generated movie can be rich in effects, making it interesting and also versatile, i.e. different from another final movie that shares the same rendering parameters received from the sending authoring device. This is achieved by adding auxiliary effects to the movies at random.
  • Auxiliary effects are small effects, usually shown after the text is fully presented or while the text is being exposed - for example, paint dripping from a message sprayed on a surface, an insect walking over the message, or the hero detaching a character from the message and doing some action with it. A minimal sketch of such random injection follows below.
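A minimal sketch of randomly injecting an auxiliary effect so that two movies rendered from identical parameters still differ. The effect names, the timeline representation and the 50% probability are illustrative assumptions:

    import random

    AUX_EFFECTS = [
        "paint_drip",
        "insect_walks_over_message",
        "hero_detaches_character",
    ]

    def add_auxiliary_effects(timeline: list, rng: random.Random) -> list:
        """Maybe append one auxiliary effect after the text is fully presented."""
        if rng.random() < 0.5:           # not every rendering gets one
            timeline.append(("after_text_exposed", rng.choice(AUX_EFFECTS)))
        return timeline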
  • Video and Flash presentation: The animation in the system is developed using vector graphics, for example FLASH. A mobile vector graphics version of FLASH exists, called FLASH LITE.
  • The movies generated by the PASS system are designed to be FLASH LITE compatible. This requires following a few well-known rules.
  • Other vector graphic formats for mobile exist, such as SVG TINY.
  • the system may produce SVG TINY compatible content as well.
  • The PASS system supports sending the movie in video format as well.
  • 3gp is a well-known, widely adopted mobile video format and is supported by PASS.
  • To produce it, the FLASH dynamic movie is played in a virtual flash player, converted to video (avi, mov, mp4, etc.) and then converted to a 3gp file.
  • When FLASH LITE is to be used, WAP-push rather than MMS will be used, as MMS does not support FLASH LITE at the moment. If in the future MMS supports vector graphics of the FLASH LITE type, MMS will be used as well.
  • A message authoring device may be used to upload a personal image or video to the repository.
  • the uploaded image or video can be embedded within the rendered message.
  • the camera can focus on a billboard that happens to display the uploaded personal picture or video.
  • The hero may watch TV, and the image/video shown on the TV may change to the uploaded image/video.
  • Images and videos uploaded by one author may also be made accessible for use by other authors.
  • The system can offer the user the option to replace the hero of the movie with an avatar-based hero.
  • the avatar can be generated with a dedicated avatar engine designed for the system. In that case the hero becomes an avatar.
  • the avatar is composed of items like hair style, hair cover (cap, ribbon etc), hair color, face color, eyes, nose, mouth, clothes.
  • Each item has many options to select from, enabling the user to build the character he wants.
  • the avatar can be animated in the movie.
  • the avatar may also appear in a short animation in the "video signature" of the user.
  • Certain "basic movies” can embed personal user sound tracks in the scenes of the movie.
  • the PASS system has the capability to insert commercial content into a "basic movie".
  • certain "basic movies” can be prepared ahead of time with commercial content.
  • The commercial content may be the background where the scene takes place (for example, a fast food chain); it may be some object used by, or interacted with by, the hero; or it may be some written text or logo related to the animation.
  • A first-phase solution is to use a smaller font, down to a certain still-readable size, and try to fit the message onto the screen. Font scaling and fmd scaling make it possible to downsize the font and have the animation account for the smaller font size. Enlarging the font is also possible in a similar manner (for short messages that may look better with a larger font). If, however, the message is too long to fit on one screen even at the smallest readable size, other mechanisms are used (a sketch of the font-fitting step appears after the examples below). [00145] The mechanisms used depend on the "basic movie" and the effect used to present the text.
  • The camera can wait a few seconds on the initial text the hero writes - the part that fits the screen - then shift to a different location where an additional part of the message is displayed, or where the hero is seen finishing writing that additional part. The camera can make such shifts back and forth, or to further locations in the scene, until the entire message has been displayed. Another way is to have the hero erase the first part of the message and then write the second part, and so on.
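A minimal sketch of the first-phase fit computation, in Python. The fixed character aspect ratio and line spacing are simplifying assumptions; a real engine would use per-glyph metrics from the fmd:

    def fits(message: str, surface_w: float, surface_h: float,
             font_size: float, char_aspect: float = 0.6,
             line_gap: float = 1.2) -> bool:
        """Does the wrapped message fit the surface at this font size?"""
        chars_per_line = max(1, int(surface_w // (font_size * char_aspect)))
        lines = -(-len(message) // chars_per_line)   # ceiling division
        return lines * font_size * line_gap <= surface_h

    def pick_font_size(message: str, surface_w: float, surface_h: float,
                       preferred: float = 24.0, minimum: float = 10.0):
        """Largest readable size that fits, or None if other mechanisms apply."""
        size = preferred
        while size >= minimum:
            if fits(message, surface_w, surface_h, size):
                return size
            size -= 1.0
        return None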
  • The hero performs the gesture, i.e. he winks at the camera or smiles, etc., or the hero draws the gesture emoticon at his current location in the message.
  • The emoticon can be special and self-animated. Additional effects include a crack spreading in the ground texture until the message is revealed out of the ground cracks, or the hero typing the message on a PC or mobile device while being animated erasing, adding, changing, and moving the cursor back and forth, etc.


Abstract

Disclosed are a method, a device and a system for message rendering. In accordance with some embodiments of the present invention, one or more communication modules are provided, adapted to receive data from a message authoring device, where the received data may include a content portion, one or more rendering parameters, and a rendered-message recipient device identifier. One or more rendering engines on the system may be adapted to render a multimedia message based on the content portion and in accordance with the one or more rendering parameters. One or more rendering engines may also be adapted to adjust a rendered motion of a rendering element based on the content portion.
PCT/IL2008/000926 2007-07-05 2008-07-06 Procédé, dispositif et système pour fournir un contenu multimédia rendu à un dispositif de réception de message WO2009004636A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US94799307P 2007-07-05 2007-07-05
US60/947,993 2007-07-05

Publications (2)

Publication Number Publication Date
WO2009004636A2 true WO2009004636A2 (fr) 2009-01-08
WO2009004636A3 WO2009004636A3 (fr) 2010-03-04

Family

ID=40226627

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2008/000926 WO2009004636A2 (fr) 2007-07-05 2008-07-06 Procédé, dispositif et système pour fournir un contenu multimédia rendu à un dispositif de réception de message

Country Status (1)

Country Link
WO (1) WO2009004636A2 (fr)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050143136A1 (en) * 2001-06-22 2005-06-30 Tvsi Lev Mms system and method with protocol conversion suitable for mobile/portable handset display
US20040181550A1 (en) * 2003-03-13 2004-09-16 Ville Warsta System and method for efficient adaptation of multimedia message content
US20060053227A1 (en) * 2004-09-03 2006-03-09 Oracle International Corporation Multi-media messaging
US20070074097A1 (en) * 2005-09-28 2007-03-29 Vixs Systems, Inc. System and method for dynamic transrating based on content
US20070100904A1 (en) * 2005-10-31 2007-05-03 Qwest Communications International Inc. Creation and transmission of rich content media

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LEMLOUMA, T. ET AL.: 'Content Interaction and Formatting for Mobile Devices' WAM PROJECT, INRIA RHÔNE-ALPES, DOCENG, [Online] 05 November 2004, UNITED KINGDOM, Retrieved from the Internet: <URL:http://wam.inrialpes.fr/publications/2005/DocEng05-Layaida.pdf> [retrieved on 2009-03-16] *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011124880A3 (fr) * 2010-04-06 2012-01-12 Cm Online Limited Génération et remise d'une vidéo personnalisée
WO2014072739A1 (fr) * 2012-11-09 2014-05-15 Bradley Media Ltd Distribution de vidéo
CN113793404A (zh) * 2021-08-19 2021-12-14 西南科技大学 一种基于文本和轮廓的人为可控图像合成方法

Also Published As

Publication number Publication date
WO2009004636A3 (fr) 2010-03-04

Similar Documents

Publication Publication Date Title
US20080141175A1 (en) System and Method For Mobile 3D Graphical Messaging
US7991401B2 (en) Apparatus, a method, and a system for animating a virtual scene
US7813724B2 (en) System and method for multimedia-to-video conversion to enhance real-time mobile video services
US8115772B2 (en) System and method of customizing animated entities for use in a multimedia communication application
US9706040B2 (en) System and method for facilitating communication via interaction with an avatar
US8989786B2 (en) System and method for graphical expression during text messaging communications
US7035803B1 (en) Method for sending multi-media messages using customizable background images
JP4855501B2 (ja) データ入力方法および移動通信端末機
US20050078804A1 (en) Apparatus and method for communication
US20060019636A1 (en) Method and system for transmitting messages on telecommunications network and related sender terminal
EP2885764A1 Système et procédé destinés à augmenter la clarté et l'expressivité dans des communications réseau
US7671861B1 (en) Apparatus and method of customizing animated entities for use in a multi-media communication application
WO2009004636A2 (fr) Ppprocédé, dispositif et système pour fournir un contenu multimédia rendu à un dispositif de réception de message
JP2007066303A (ja) フラッシュ動画自動生成システム
US20050143102A1 (en) Method and system for user-definable fun messaging
US20050195927A1 (en) Method and apparatus for conveying messages and simple patterns in communications network
US20130210419A1 (en) System and Method for Associating Media Files with Messages
KR20050045779A (ko) 단문 서비스를 통한 멀티미디어 컨텐츠 전송 서비스 방법및 시스템
JP2011091725A (ja) サーバ装置、携帯電話および合成動画作成システム
TWI220357B (en) Method for automatically updating motion picture on sub-screen of mobile phone
EP1506648B1 Transmission de messages contenant des informations d'image
JP2004171217A (ja) アニメメール通信システム、アニメメール配信方法、およびそのプログラム
CN101111827A (zh) 电子邮件显示装置及电子数据显示装置

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08776586

Country of ref document: EP

Kind code of ref document: A2

122 Ep: pct application non-entry in european phase

Ref document number: 08776586

Country of ref document: EP

Kind code of ref document: A2