US20100310234A1 - Systems and methods for rendering text onto moving image content - Google Patents

Systems and methods for rendering text onto moving image content

Info

Publication number
US20100310234A1
Authority
US
United States
Prior art keywords
moving image
image content
block
receiving
translation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/281,942
Inventor
Thor Sigvaldason
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
dotSub LLC
dotSUB Inc
Original Assignee
dotSub LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by dotSub LLC filed Critical dotSub LLC
Priority to US12/281,942
Assigned to DOTSUB LLC reassignment DOTSUB LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SIGVALDASON, THOR
Assigned to DSUB ACQUISITION INC. reassignment DSUB ACQUISITION INC. ASSET PURCHASE AGREEMENT Assignors: DOTSUB LLC
Assigned to DOTSUB INC. reassignment DOTSUB INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: DSUB ACQUISITION INC.
Publication of US20100310234A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4126The peripheral being portable, e.g. PDAs or mobile phones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4888Data services, e.g. news ticker for displaying teletext characters
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2355Processing of additional data, e.g. scrambling of additional data or processing content descriptors involving reformatting operations of additional data, e.g. HTML pages
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/254Management at additional data server, e.g. shopping server, rights management server
    • H04N21/2541Rights Management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/254Management at additional data server, e.g. shopping server, rights management server
    • H04N21/2543Billing, e.g. for subscription services
    • H04N21/25435Billing, e.g. for subscription services involving characteristics of content or additional data, e.g. video resolution or the amount of advertising
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25891Management of end-user data being end-user preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4314Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/458Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules ; time-related management operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4627Rights management associated to the content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47211End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting pay-per-view content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • H04N21/6582Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835Generation of protective data, e.g. certificates
    • H04N21/8355Generation of protective data, e.g. certificates involving usage data, e.g. number of copies or viewings allowed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/85406Content authoring involving a specific file format, e.g. MP4 format
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/173Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N7/17309Transmission or handling of upstream communications
    • H04N7/17318Direct or substantially direct transmission and handling of requests

Definitions

  • This application discloses an invention that is related, generally and in various embodiments, to systems and methods for rendering text onto moving image content.
  • This application discloses a method for rendering text onto moving image content.
  • In various embodiments, the method comprises receiving a request to translate dialog associated with moving image content, transmitting an interface, transmitting a time-stamped transcription, and receiving a translation of the dialog.
  • In other embodiments, the method comprises transmitting a request to translate dialog associated with moving image content, receiving an interface, receiving a time-stamped transcription, and transmitting a translation of the dialog.
  • This application also discloses a system for rendering text onto moving image content.
  • The system comprises a provider system that comprises a host.
  • The host is configured to receive and transmit moving image content, receive and transmit a transcription of dialog associated with the moving image content, receive time-stamps associated with the transcription, and receive and transmit a translation of the dialog.
  • The system also comprises a client system that comprises a client module and a superimposing module that is configured to superimpose text onto moving image content as the moving image content is received by the client system.
  • Aspects of the disclosed invention may be implemented by a computer system and/or by a computer program stored on a computer-readable medium.
  • The computer-readable medium may comprise a disk, a device, and/or a propagated signal.
  • FIGS. 1A-1C illustrate various embodiments of a method for rendering text onto moving image content;
  • FIG. 2 illustrates various embodiments of a method for submitting moving image content to a provider;
  • FIG. 3 illustrates various embodiments of a method for transcribing dialog associated with moving image content;
  • FIG. 4 illustrates various embodiments of a method for time-stamping a transcription of dialog;
  • FIG. 5 illustrates various embodiments of a system for rendering text onto moving image content; and
  • FIGS. 6 through 14 illustrate various examples of screen displays which include displays of information, user interfaces and/or other tools that may be used in association with various embodiments of methods and systems for processing moving image content.
  • FIGS. 1A-1C illustrate various embodiments of a method 10 for rendering text onto moving image content.
  • the moving image content may be any moving image content such as, for example, a full feature film, a movie, a video clip, etc.
  • the method 10 may be implemented at least in part by hardware (e.g., device, computer, computer system, equipment, component, etc.); software (e.g., program, application, instruction set, code, etc.); storage medium (e.g., disk, device, propagated signal, etc.); or a combination thereof. It should be noted, however, that the method 10 may be performed in any manner consistent with the aspects of the disclosed invention.
  • Moving image content (e.g., a movie) is submitted to a provider.
  • The moving image content may be submitted to the provider by anyone in any suitable manner.
  • For example, the moving image content may be submitted by a producer, a director, a distributor, etc., and may be submitted electronically to the provider, mailed to the provider, hand-delivered to the provider, etc.
  • A submitter may access a website associated with the provider and cause the moving image content to be submitted to an IP address associated with the provider.
  • The process advances to block 14, where the provider receives the submitted moving image content.
  • Because the moving image content may be submitted in any suitable manner, it follows that the submitted moving image content may be received by the provider in any suitable manner.
  • The submitted moving image content is received electronically via a server associated with the provider. From block 14, the process advances to block 16 or to block 18.
  • If the moving image content received by the provider is not in a digital format, the process advances from block 14 to block 16, where the moving image content is converted to a digital format. From block 16, the process advances to block 18, where the moving image content is stored for use as described hereinbelow.
  • The moving image content may be stored as a flat file on a medium accessible by a server associated with the provider. If the moving image content received by the provider at block 14 is already in a digital format, the process advances directly from block 14 to block 18, where the moving image content is stored for use as described hereinbelow.
  • The moving image content stored at block 18 may serve as a master version of the moving image content. The master version may be used to create each different version of the moving image content subsequently viewed.
  • The process advances to block 20, where the moving image content is classified by title, producer, genre, etc., or any combination thereof.
  • The classification information is stored at block 22 for use as described hereinbelow. According to various embodiments, the classification information is stored on a medium accessible by a server associated with the provider.
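The classification record kept at block 22 can be as simple as a few descriptive fields per title. A minimal sketch, with field names and values that are illustrative assumptions rather than anything taken from the application:

```python
from dataclasses import dataclass, asdict

@dataclass
class ContentClassification:
    """Classification stored at block 22; the fields mirror the
    examples given at block 20 (title, producer, genre)."""
    title: str
    producer: str
    genre: str

record = ContentClassification(title="Example Film",
                               producer="Example Studio",
                               genre="Documentary")
print(asdict(record))  # could be persisted, e.g., as a row keyed by title
```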
  • The process advances to block 24, where a time-stamped transcription of the original dialog associated with the moving image content is generated. Generally, the time-stamped transcription is in the native language of the original dialog.
  • The process described at block 24 may be completed by the provider or by another party, and may be completed manually offline or may be completed online.
  • As used herein, the term “online” refers to being connected to a remote service such as, for example, the Internet.
  • The process advances to block 26, where the time-stamped transcription is stored for use as described hereinbelow.
  • The time-stamped transcription may be stored as a database file on a medium accessible by a server associated with the provider.
  • The time-stamped transcription may serve as a master version for all subsequent translations of the text associated with the moving image content, as described hereinbelow.
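The time-stamped transcription described above pairs each line of dialog with a start and stop time, which is what lets every later translation inherit the same timing. A minimal sketch of such a record; the field names, time units, and sample dialog are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class TranscriptLine:
    start: float  # time at which the line begins, in seconds
    stop: float   # time at which the line ends, in seconds
    text: str     # dialog in the original language

# Master version of a hypothetical time-stamped transcription (block 26).
master_transcript = [
    TranscriptLine(0.0, 2.5, "Hello, and welcome."),
    TranscriptLine(2.5, 5.0, "Today we visit the harbor."),
]
```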
  • At this point, the moving image content is ready for text rendering.
  • A request to translate the time-stamped transcription into another language is submitted to the provider.
  • The request may be submitted by anyone in any suitable manner.
  • For example, the request may be submitted by a professional translator, and may be submitted electronically to the provider, telephoned to the provider, mailed to the provider, hand-delivered to the provider, etc.
  • A translator may access a website associated with the provider and cause the request to be submitted to an IP address associated with the provider.
  • The process advances to block 30, where the request to translate is received by the provider. Responsive to the request, an interface is transmitted at block 32 to a client system associated with the person who made the request. From block 32, the process advances to block 34, where the client system receives the interface. From block 34, the process advances to block 36, where the interface is utilized to request a copy of the master version of the time-stamped transcription from the provider.
  • The request includes an indication of a particular moving image content (e.g., by the title of the moving image content).
  • The provider receives the request at block 38 and, responsive thereto, transmits a copy of the master version of the time-stamped transcription to the client system at block 40.
  • The client system receives the copy of the time-stamped transcription at block 42 and coordinates the presentation of the time-stamped transcription to the translator at block 44. From block 44, the process advances to block 46, where the translator selects a language other than the language of the original dialog and inputs text corresponding to the translation of the time-stamped transcription into the selected language. When the translator is finished inputting such text, the translator may cause the textual translation to be transmitted to the provider at block 48. Because the textual translation is based on the time-stamped transcript, the textual translation is also time-stamped to correspond with the original dialog. The textual translation is received by the provider at block 50; it is classified as to the appropriate language, the start and stop time for each line of text, etc., and is stored at block 54.
  • The textual translation stored at block 54 represents the text associated with the moving image content. According to various embodiments, the textual translation is stored as a database file on a medium accessible by a server associated with the provider. The process described from block 12 to block 54, or any portion thereof, may be repeated sequentially or concurrently for any number of submitters, any number of translators, and any amount of moving image content.
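Because the translator enters text line-for-line against the master time-stamped transcription, each translated line can simply inherit the start and stop times of the corresponding original line. A sketch under that assumption; the function name and sample data are hypothetical:

```python
def attach_timestamps(master, translated_lines):
    """master: list of (start, stop, original_text) tuples making up the
    master time-stamped transcription; translated_lines: one translated
    string per master line. Returns a time-stamped translation."""
    if len(master) != len(translated_lines):
        raise ValueError("translation must cover every transcript line")
    return [(start, stop, text)
            for (start, stop, _), text in zip(master, translated_lines)]

master = [(0.0, 2.5, "Hello."), (2.5, 5.0, "Welcome.")]
spanish = attach_timestamps(master, ["Hola.", "Bienvenido."])
# spanish == [(0.0, 2.5, "Hola."), (2.5, 5.0, "Bienvenido.")]
```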
  • If a translator wishes to edit a current version of a translation, the process advances to block 56, where the translator may submit a request to edit that version.
  • The request may be submitted by anyone in any suitable manner.
  • For example, the request may be submitted by a professional translator, and may be submitted electronically to the provider, telephoned to the provider, mailed to the provider, hand-delivered to the provider, etc.
  • A translator may access a website associated with the provider and cause the request to be submitted to an IP address associated with the provider.
  • The process advances to block 58, where the request to edit a current version of a translation is received by the provider. Responsive to the request, an interface is transmitted at block 60 to a client system associated with the person who made the request. From block 60, the process advances to block 62, where the client system receives the interface. From block 62, the process advances to block 64, where the interface is utilized to request a copy of the current version of the translation from the provider. The provider receives the request at block 66 and, responsive thereto, transmits a copy of the current version of the translation to the client system at block 68.
  • The client system receives the copy of the current version of the translation at block 70 and coordinates the presentation of the current version of the translation to the translator at block 72. From block 72, the process advances to block 74, where the translator inputs the text corresponding to the edits of the translation. When the translator is finished inputting such edits, the translator may cause the edits to be transmitted to the provider at block 76. Because the edits are based on the current version of the translation, which is based on the time-stamped transcript, the edits are also time-stamped to correspond with the original dialog. The edits are received by the provider at block 78 and are incorporated into the current stored version of the translation at block 80. The edit process described from block 56 to block 80 may be repeated sequentially or concurrently for any number of translators, for any number of translations, and for any amount of moving image content.
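Since the edits arrive against the current version of the translation and the timing comes from the master transcript, incorporating them at block 80 can amount to a per-line text replacement that leaves the time-stamps untouched. A hypothetical sketch:

```python
def incorporate_edits(current, edits):
    """current: list of (start, stop, text) tuples for the current
    version of a translation; edits: dict mapping line index ->
    replacement text. Returns the updated version, preserving each
    line's time window and leaving the input untouched."""
    updated = list(current)
    for index, new_text in edits.items():
        start, stop, _ = updated[index]
        updated[index] = (start, stop, new_text)
    return updated

current = [(0.0, 2.5, "Hola."), (2.5, 5.0, "Bienvenido!")]
revised = incorporate_edits(current, {1: "Bienvenidos."})
# revised == [(0.0, 2.5, "Hola."), (2.5, 5.0, "Bienvenidos.")]
```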
  • the process advances to block 82 , where a viewer may request to view the moving image content with text rendered thereon in a particular language.
  • a viewer may access a website associated with the provider, and cause the request to be submitted to an IP address associated with the provider.
  • the provider receives the request at block 84 . From block 84 , the process may advance to block 86 or block 96 .
  • the process advances from block 84 to block 86 , where the provider transmits the appropriate text to the client system, then transmits the requested moving image content at block 88 to the client system.
  • the text transmitted at block 86 may also include text in any number of other languages.
  • the client system receives the text, and then superimposes the particular text on the moving image content as the moving image content is received at block 92 . Therefore, the text is rendered onto the moving image content dynamically.
  • the client system coordinates the presentation of the moving image content with the text rendered thereon to the viewer at block 94 .
  • the process described at blocks 82 - 94 may be repeated sequentially or concurrently for any number of viewers for any amount of moving image content in any number of languages.
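The dynamic superimposition described above reduces to a timing lookup: as the moving image content plays, the client selects the text segment, if any, that covers the current playback position and draws it over the frame. A minimal sketch, assuming the text arrives as (start, end, text) triples:

```python
def active_text(segments, playback_time):
    """Return the subtitle text to superimpose at the given playback
    time, or None when no segment covers that instant."""
    for start, end, text in segments:
        if start <= playback_time < end:
            return text
    return None

# Hypothetical time-stamped text for one language.
subtitles = [(0.0, 2.5, "Hello"), (2.5, 5.0, "Goodbye")]
```

A superimposing module would call `active_text` on each frame (or on each timing event) so that the text is rendered onto the moving image content as it is received, rather than being burned into a stored copy.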
  • the process advances from block 84 to block 96 , where the provider utilizes the master version of the moving image content and the current version of the appropriate text to produce a physical copy of the moving image content complete with the appropriate text.
  • the physical medium may include text in any number of languages.
  • the process advances to block 98 , where the provider delivers or arranges for the delivery of the physical copy to the viewer. Once the viewer receives the physical copy, the viewer may view the physical copy in a suitable manner.
  • the above-described method 10 may be utilized by multiple people to work on the same or different moving image content, in the same or different language, at the same or different time, separately, collectively, or any combination thereof.
  • the method 10 may be utilized to increase the scope of what moving image content can be made available with text rendered thereon and to lower the cost associated with such offerings. It will also be appreciated that according to various embodiments, instead of the steps of the method 10 being performed sequentially as described hereinabove, many of the steps can be performed concurrently.
  • the provider may charge a fee to the person/entity who originally submits the moving image content.
  • the provider may also charge a fee to the viewer for providing the moving image content with the text rendered thereon in a given language.
  • the moving image content may be provided on a pay-per-view basis, and the provider may share a portion of the revenues generated by the pay-per-view with the appropriate translator or translators.
  • the translators may charge the provider and/or the submitter a fee for translating the dialog.
  • the translators may provide the translations for free as a public service.
  • the submission of moving image content, the transcription of the dialog associated with moving image content, and the time-stamping of the transcription may be accomplished online as described with respect to FIGS. 2-4 .
  • FIG. 2 illustrates various embodiments of a method 100 for submitting moving image content to a provider.
  • the process begins at block 102 , where a submitter utilizes a client module residing at a client system to electronically submit moving image content (e.g., a movie) to a provider system.
  • client module refers to any type of software application that may be utilized to access, interact with, and view content associated with various Internet resources.
  • the process advances to block 104 , where the provider system receives and stores the submitted moving image content.
  • the moving image content stored at block 104 may serve as a master version of the moving image content, and may be used to create each different version of the moving image content subsequently viewed.
  • the process advances to block 106 , where the provider system converts the stored moving image content to a digital format suitable for interactive work, and classifies the formatted moving image content as “not transcribed.”
  • FIG. 3 illustrates various embodiments of a method 120 for transcribing dialog associated with moving image content.
  • the process begins at block 122 , where a transcriber utilizes a client module residing at a client system to submit a request to a provider system, where the request is a request to transcribe dialog associated with a given piece of moving image content stored by the provider system (e.g., the moving image content stored at block 106 of FIG. 2 ).
  • the process advances to block 124 , where the provider system receives the request, and responsive thereto, transmits the requested moving image content in a suitable format along with an interface (e.g., HTML and Flash) to the client system.
  • the process advances to block 126 , where the client system receives the moving image content and the interface, and the transcriber utilizes the interface to interactively play the moving image content and transcribe lines of dialog associated therewith. From block 126 , the process advances to block 128 , where the transcriber causes the transcription to be electronically transmitted from the interface to the provider system. From block 128 , the process advances to block 130 , where the provider system receives and stores the transcription, and reclassifies the previously stored moving image content as “transcribed but not time-stamped.”
  • FIG. 4 illustrates various embodiments of a method 140 for time-stamping a transcription of dialog associated with a given piece of moving image content.
  • the process begins at block 142 , where a time-stamper utilizes a client module residing at a client system to submit a request to a provider system, where the request is a request to time-stamp a stored transcription of dialog associated with a given piece of stored moving image content (e.g., the transcript stored at block 130 of FIG. 3 and the moving image content stored at block 106 of FIG. 2 ).
  • the process advances to block 144 , where the provider system receives the request, and responsive thereto, transmits the moving image content and the transcript along with an interface to the client system.
  • the process advances to block 146 , where the client system receives the moving image content, the transcript and the interface, and the time-stamper utilizes the interface and its interactive elements (e.g., dialog begins button, dialog ends button, play clock, etc.) to play the moving image content and indicate starting and ending time-stamps for each segment of the transcript.
  • the process advances to block 148 , where the time-stamper causes the time-stamps to be electronically transmitted from the interface to the provider system.
  • the process advances to block 150 , where the provider system receives and stores the time-stamps, and reclassifies the previously stored moving image content as “transcribed and time-stamped.” At this point, the dialog associated with the moving image content is ready for subsequent translating.
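The classifications assigned at block 106 of FIG. 2 , block 130 of FIG. 3 , and block 150 of FIG. 4 form a simple linear lifecycle. A sketch of how a provider system might enforce it; only the three status labels come from the text, while the transition table and helper are illustrative assumptions:

```python
# The three status labels come from blocks 106, 130, and 150; the
# dict encoding of allowed transitions is an illustrative assumption.
TRANSITIONS = {
    "not transcribed": "transcribed but not time-stamped",
    "transcribed but not time-stamped": "transcribed and time-stamped",
}

def advance_status(current):
    """Move a piece of moving image content to its next processing
    stage, refusing to skip or repeat a step."""
    if current not in TRANSITIONS:
        raise ValueError(f"no further transition from {current!r}")
    return TRANSITIONS[current]
```

Once content reaches "transcribed and time-stamped", it is ready for translation, and no further classification change is permitted by this sketch.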
  • FIG. 5 illustrates various embodiments of a system 200 for rendering text onto moving image content.
  • one or more elements of the system 200 may perform the method 10 described hereinabove.
  • the system 200 includes a client system 210 for presenting information to and receiving information from a user.
  • the client system 210 may include one or more client devices such as, for example, a personal computer (PC) 212 , a workstation 214 , a laptop computer 216 , a network-enabled personal digital assistant (PDA) 218 , and a network-enabled mobile telephone 220 .
  • Other examples of a client device include, but are not limited to, a server, a microprocessor, an integrated circuit, a fax machine, or any other component, machine, tool, equipment, or some combination thereof capable of responding to and executing instructions and/or using data.
  • the client system 210 may include a client module 222 , and a superimposing module 224 for superimposing text onto the moving image content as the moving image content is received by the client system 210 .
  • the client module 222 may be utilized to access, interact with, and view content associated with various Internet resources.
  • the client system 210 may also include Macromedia Flash Player, and the superimposing module 224 may be embodied, for example, as a Flash plug-in.
  • the modules 222 - 224 may be implemented utilizing any suitable computer language (e.g., C, C++, Java, JavaScript, Visual Basic, VBScript, Delphi, etc.) and may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, storage medium, or propagated signal capable of delivering instructions to a device.
  • the modules 222 - 224 may be stored on a computer-readable medium (e.g., disk, device, and/or propagated signal) such that when a computer reads the medium, the functions described herein are performed.
  • the modules 222 - 224 may be installed on separate, distinct client devices and may be administered by different entities. Also, different functional aspects of the modules 222 - 224 may be installed on separate, distinct client devices.
  • the client system 210 operates under the command of a client controller 226 .
  • the broken lines are intended to indicate that in some implementations, the client controller 226 , or portions thereof considered collectively, may instruct one or more elements of the client system 210 to operate as described.
  • Examples of a client controller 226 include, but are not limited to a computer program, a software application, computer code, set of instructions, plug-in, applet, microprocessor, virtual machine, device, or combination thereof, for independently or collectively instructing one or more client devices to interact and operate as programmed.
  • the client controller 226 may be implemented utilizing any suitable computer language (e.g., C, C++, Java, JavaScript, Visual Basic, VBScript, Delphi, Flash/Actionscript, etc.) and may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, storage medium, or propagated signal capable of delivering instructions to a device.
  • the client system 210 may be connected through a network 230 having wired or wireless data pathways 232 , 234 to provider system 240 . Although only one client system 210 is shown in FIG. 2 , it is understood that any number of client systems 210 may be connected to the provider system 240 via the network 230 .
  • the network 230 may include any type of delivery system including, but not limited to, a local area network (e.g., Ethernet), a wide area network (e.g., the Internet and/or the World Wide Web), and may include elements, such as, for example, intermediate nodes, proxy servers, routers, switches, and adapters configured to direct and/or deliver data.
  • the client system 210 and the provider system 240 each include hardware and/or software components for communicating with the network 230 and with each other.
  • the client system 210 and provider system 240 may be structured and arranged to communicate through the network 230 using various communication protocols (e.g., HTTP, TCP/IP, UDP, WAP, WiFi, Bluetooth) and/or to operate within or in concert with one or more other communications systems.
  • the provider system 240 generally hosts a set of resources. As shown, the provider system 240 includes a host 242 , and may include data storage means 244 (e.g., storage arrays, disks, devices, etc.) in communication with the host 242 .
  • the host 242 may be implemented by one or more servers (e.g., IBM® OS/390 operating system servers, Linux operating system-based servers, Windows NT™ servers) providing one or more assets (e.g., data storage, applications, etc.).
  • the host 242 may be configured to perform one or more of the following functions: receiving and transmitting moving image content, receiving and transmitting a transcription of dialog associated with the moving image content, receiving time-stamps associated with the transcription, and receiving and transmitting a translation of the dialog.
  • the functionality of the host 242 may be implemented by more than one host.
  • the various hosts are configured to collaborate with one another to perform the method 10 described hereinabove.
  • the functionality of the host 242 may be implemented by one or more modules that comprise the host 242 .
  • a submission module 246 may be configured to manage the process of receiving and storing moving image content.
  • a transcription module 248 may be configured to manage the process of transcribing original dialogs associated with moving image content.
  • a time-stamp module 250 may be configured to manage the process of time-stamping transcriptions of dialogs associated with moving image content.
  • a translation module 252 may be configured to manage the process of translating time-stamped transcriptions of dialogs associated with moving image content into different languages.
  • a rendering module 254 may manage the process of retrieving stored moving image content, transcriptions thereof, and time-stamped transcriptions and translations thereof, and transmitting the moving image content, transcripts, time-stamped transcripts, and translations.
  • the modules 246 - 254 are configured to collaborate with one another to perform the method 10 described hereinabove.
  • the modules 246 - 254 may be implemented utilizing any suitable computer language (e.g., C, C++, Java, JavaScript, Visual Basic, VBScript, Delphi, etc.) and may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, storage medium, or propagated signal capable of delivering instructions to a device.
  • the modules 246 - 254 may be stored on a computer-readable medium (e.g., disk, device, and/or propagated signal) such that when a computer reads the medium, the functions described herein are performed.
  • the modules 246 - 254 are shown in FIG. 5 as part of the host 242 , according to various embodiments, the modules 246 - 254 may be installed on separate, distinct hosts and may be administered by different entities. Also, different functional aspects of the modules 246 - 254 may be installed on separate, distinct hosts.
  • the provider system 240 operates under the command of a provider controller 256 .
  • the broken lines are intended to indicate that in some implementations, the provider controller 256 , or portions thereof considered collectively, may instruct one or more elements of provider system 240 to operate as described.
  • Examples of a provider controller 256 include, but are not limited to a computer program, a software application, computer code, set of instructions, plug-in, microprocessor, virtual machine, device, or combination thereof, for independently or collectively instructing one or more computing devices to interact and operate as programmed.
  • the provider controller 256 may be implemented utilizing any suitable algorithms, computing language (e.g., C, C++, Java, JavaScript, Perl, Visual Basic, VBScript, Delphi, SQL, PHP, etc.) and may be embodied permanently or temporarily in any type of computer, computer system, device, machine, component, physical or virtual equipment, storage medium, or propagated signal capable of delivering instructions.
  • the provider controller 256 when implemented as software or a computer program, for example, may be stored on a computer-readable medium (e.g., device, disk, or propagated signal) such that when a computer reads the medium, the functions described herein are performed.
  • a single component described herein may be replaced by multiple components, and multiple components described herein may be replaced by a single component.
  • in FIGS. 6 through 14 , various examples of screen displays, tools and/or interfaces are illustrated that may be employed in connection with various embodiments of the methods, systems, and media described above. These screen displays and interfaces are included for the purpose of illustrating examples of certain practical implementations, operations, and functions of the invention.
  • a “Home” page 601 can be presented to a user upon login to the provider system 240 , for example, to allow the user to make various selections associated with processing or accessing moving image content. For example, selecting a “Featured” link 602 on the “Home” page 601 may present various works of moving image content that an administrator of the provider system 240 desires to highlight at a given time.
  • one or more films 604 , 606 may be presented to the user in a “Featured Films” section of the page 601 .
  • Each film 604 , 606 may have various associated data or other characteristics such as a number of times that the film has been viewed 604 A, 606 A; the number of translations initiated 604 B, 606 B; the number of translations completed 604 C, 606 C; and/or, the time and date 604 D, 606 D when the films 604 , 606 were posted to the web site.
  • a “Most Viewed” link 608 may be included that navigates the user to the most frequently accessed moving image content.
  • a “Latest” link 610 may be included that directs the user to the most recently posted moving image content.
  • a “Genre” link 612 may be provided that permits the user to search moving image content by various predefined types of content.
  • the screen display 701 illustrated in FIG. 7 includes a list 702 of various moving image content genres that can be selected.
  • a “Collections” link 614 may be included that takes the user to a screen that displays various selections of moving image content grouped into predefined collections.
  • a “Language” link 616 may be configured to navigate the user to a page 801 that permits the user to view moving image content by a selected language.
  • An excerpt of a list 802 of different languages that can be selected by the user is shown on the “Language” page 801 .
  • the user may be permitted to make a selection 804 for a given language from among content in original language 804 A, content having complete translations 804 B, or content having partial translations 804 C.
  • a “Country” link 618 may also be included that displays works with moving image content on a country-by-country basis.
  • the screen display 901 of FIG. 9 illustrates details of a particular film 902 that may be presented upon user selection of the film 902 .
  • a “Choose Language” function 904 permits the user to specify a presentation language for the film 902 that will be used when the film is played or displayed (e.g., using subtitles).
  • a “Share This Film” tool 906 provides a URL address 906 A and a HTML code reference 906 B that can be employed by the user to embed a link in an e-mail, a web site, or another medium that will navigate users from the link to the film 902 on the provider system 240 , for example.
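The URL address 906 A and HTML code reference 906 B amount to generated share artifacts for a given film. A sketch under assumed conventions; the base URL and markup below are hypothetical, not the provider system's actual scheme:

```python
def embed_snippet(film_id, base_url="https://example.com/films"):
    """Build a shareable URL (as at 906A) and an HTML embed reference
    (as at 906B) for a film; the URL scheme here is hypothetical."""
    url = f"{base_url}/{film_id}"
    html = f'<a href="{url}">Watch this film</a>'
    return url, html
```

A user could then paste the returned HTML into an e-mail, a web site, or another medium to navigate viewers back to the film on the provider system.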
  • content communicated by the provider system 240 to/from other entities, such as clients or other users may be formatted in accordance with RSS protocol or MPEG-4 encoding format, for example, to be accessed or presented on a variety of different types of access devices (e.g., laptops, personal computers, wireless phones, “iPod” devices, “iPhone” devices, smart phones, personal data assistants, and/or other like devices).
  • in addition to RSS and MPEG-4, any other protocol or standard may be used which governs formatting or compressing audio and/or visual content, such as for web streaming media, CD and DVD distribution, telephony, videophone, broadcast communications, or web syndication (e.g., as may be employed by news websites and web blogs).
  • a “Translate This Film” tool 908 permits the user to translate the film 902 from a variety of different languages into one or more other different languages.
  • One or more languages displayed by this tool 908 may include an associated designation (e.g., a completion percentage in brackets 908 A) that reflects how much of the film 902 has been translated into the given language. In the example shown, 29% of the film 902 has been translated into Bulgarian.
  • by selecting a “Translate!” button 908 B, for example, the user can direct the provider system 240 to proceed with presenting the film 902 on the site in transcript format, or to play the film 902 , in the desired target translation language.
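The bracketed completion figure at 908 A (e.g., 29% for Bulgarian) could be computed by comparing translated segments against the time-stamped transcript. The per-segment counting rule below is an illustrative assumption, not a method the application prescribes:

```python
def completion_percentage(transcript_segments, translated_segments):
    """Percentage of time-stamped transcript segments that have a
    non-empty translation in the target language."""
    if not transcript_segments:
        return 0
    done = sum(1 for seg_id in transcript_segments
               if translated_segments.get(seg_id, "").strip())
    return round(100 * done / len(transcript_segments))

# Hypothetical seven-segment transcript with two segments translated.
transcript = [f"s{i}" for i in range(7)]
bulgarian = {"s0": "Здравей", "s1": "Довиждане"}
```

With two of seven segments translated, this yields 29, matching the form of the figure shown for Bulgarian in the example.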
  • a screen display 1001 may be presented to the user upon making a “My Films” selection 1002 .
  • the “My Films” page 1001 may include films that are designated as “Most Viewed by Me” 1004 ; as “My Favorites” 1006 ; and/or as “Posted by Me” 1008 .
  • a “Post a New Film” tool 1010 can be selected by the user to download new moving image content to the web site.
  • the user can designate various characteristics for a new film, for example, in an “About This Film” section 1102 , such as a title 1104 for the film and a target location or file 1106 from which the film can be downloaded.
  • in a “Who Can View This Film?” section 1108 , the user can specify permissions for the film by restricting or permitting viewing access to certain predesignated users, individuals or groups.
  • in a “Who Can Transcribe This Film?” section 1110 and a “Who Can Translate This Film?” section 1112 , the user can likewise specify permissions for transcribing and/or translating the new film that will be posted.
  • a “Transcribe Film” function 1202 may be selected that navigates the user to a transcription tool 1302 which can be used to generate subtitles, for example, in a certain language when a selected film 1204 is played.
  • the user may enter text in fields 1306 A- 1306 E which can be time-stamped in corresponding time entry fields 1308 A- 1308 E.
  • a user may create a transcription of a film by entering text in fields 1306 A- 1306 E which corresponds to what the user hears while the film plays on the screen 1304 .
  • Similar functionality can also be employed to play a film in a first language, for example, and then enter a translation of the first language into a second language or additional languages.
  • the user may also press a “Reorder by Time” button 1310 to sort text entries in the fields 1306 A- 1306 E according to a chronological order defined by the time stamp information contained in fields 1308 A- 1308 E.
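The “Reorder by Time” behavior amounts to a chronological sort of the text entries on their time-stamp values. A sketch, assuming each entry pairs a time stamp from fields 1308 A- 1308 E with the corresponding text from fields 1306 A- 1306 E:

```python
def reorder_by_time(entries):
    """Sort (time_stamp, text) entries chronologically, as the
    "Reorder by Time" button does for the transcription fields."""
    return sorted(entries, key=lambda entry: entry[0])

# Entries as the user might have typed them, out of order.
entries = [(12.0, "Third line"), (0.5, "First line"), (4.2, "Second line")]
```

Sorting on the time stamp alone keeps each line of text attached to its timing, so a transcriber can enter lines in any order and normalize them afterward.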
  • the provider system 240 may be configured to retrieve video content automatically from a video podcast (e.g., www.rocketboom.com), for example, or another source or web site containing video content, so that the retrieved video content can be transcribed and/or translated in accordance with the methods and systems described herein.
  • the provider system 240 may be configured to retrieve new episodes of a video program automatically to make the episodes available for transcription and to alleviate the need for users to manually upload the video content to the system 240 .
  • the provider system 240 may be configured to allow users to subscribe to various types of communication feeds (e.g., RSS feeds) from the system 240 .
  • a particular user may want to receive all German-language videos once they have been translated.
  • a “Subscribe (RSS)” function 1402 may be accessed by the user to subscribe the user to receive content automatically in accordance with one or more criteria currently being viewed or accessed through the system 240 .
  • selection of the “Subscribe (RSS)” button 1402 can be configured to subscribe the user to receive all German-language content from the system 240 once the content has been fully translated.
  • the user may be able to select from among various criteria in a user interface to specify parameters by which automatic delivery of content will be executed by the system 240 .
  • criteria may include, for example and without limitation, language of content, genre of content, country of content, degree of translation completeness for content, when content is posted (e.g., the criteria may include delivering newly posted content), and/or various other criteria.
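Deciding whether newly posted content belongs in a subscriber's feed can be modeled as a predicate over content metadata. The field names and the German-language example below are assumptions for illustration; the system could key on any of the criteria listed above:

```python
def matches_subscription(content, criteria):
    """True when every criterion the subscriber set (language, genre,
    translation completeness, etc.) is satisfied by the content's metadata."""
    return all(content.get(field) == wanted
               for field, wanted in criteria.items())

# Hypothetical metadata for a newly posted, fully translated video.
video = {"language": "German", "genre": "Documentary",
         "translation_complete": True}
# A subscription for fully translated German-language content.
subscription = {"language": "German", "translation_complete": True}
```

A feed process would evaluate `matches_subscription` for each new or newly completed item and deliver only the matches to the subscriber's RSS feed.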
  • the provider system 240 may be configured to automatically convert moving image content into an MPEG-4 video encoding format, for example, or other suitable encoding formats employed by various access devices, computer systems, or content players.
  • the provider system 240 may be configured to render subtitles onto video frames of the content once a predetermined degree of translation completeness is achieved (e.g., 100% complete).
  • the encoded video files may be communicated from the system 240 to users in accordance with the various RSS feed processes described herein.
  • translated and/or transcribed moving image content can be saved, deleted, marked as complete, rendered into a specific file format (e.g., “Flash” format), rendered into a particular language or languages, exported to an access device, embedded in a web page, stored on CD, DVD, or other storage medium, and/or otherwise formatted for communication in a variety of ways.
  • the method 10 can be adapted to allow for an audio translation of the original dialog to be generated in a variety of languages, stored, for example, in an MPEG format, and transmitted as an audio stream to be presented concurrently with the moving image content.
  • This audio process may be utilized in lieu of or in addition to the text rendering process described hereinabove.

Abstract

A method for rendering text onto moving image content. The method comprises receiving a request to translate dialog associated with moving image content, transmitting an interface, transmitting a time-stamped transcription, and receiving a translation of the dialog.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application is a U.S. national stage application of PCT International Application No. PCT/US2007/05662, filed Mar. 5, 2007 and published as PCT Publication WO 2007/103357 on Sep. 13, 2007, and which claims priority to and is a continuation-in-part of U.S. application Ser. No. 11/368,647, filed Mar. 6, 2006 and published as U.S. Pub. No. US 2007/0211169 A1 on Sep. 13, 2007, the disclosures of all of which are hereby incorporated in their entirety by reference.
  • BACKGROUND
  • This application discloses an invention that is related, generally and in various embodiments, to systems and methods for rendering text onto moving image content.
  • Current processes for rendering text (e.g., subtitles, open captions, closed captions, etc.) onto moving image content are highly fragmented, labor intensive, and generally involve a plurality of contributors operating offline in a piecemeal manner. The current processes tend to be relatively inefficient and expensive, and as a result, a relatively small amount of moving image content having text rendered thereon is made available for viewing.
  • SUMMARY
  • In one general respect, this application discloses a method for rendering text onto moving image content. According to various embodiments, the method comprises receiving a request to translate dialog associated with moving image content, transmitting an interface, transmitting a time-stamped transcription, and receiving a translation of the dialog.
  • According to other embodiments, the method comprises transmitting a request to translate dialog associated with moving image content, receiving an interface, receiving a time-stamped transcription, and transmitting a translation of the dialog.
  • In another general respect, this application discloses a system for rendering text onto moving image content. According to various embodiments, the system comprises a provider system that comprises a host. The host is configured to receive and transmit moving image content, receive and transmit a transcription of dialog associated with the moving image content, receive time-stamps associated with the transcription, and receive and transmit a translation of the dialog.
  • According to other embodiments, the system comprises a client system that comprises a client module and a superimposing module that is configured to superimpose text onto moving image content as the moving image content is received by the client system.
  • Aspects of the disclosed invention may be implemented by a computer system and/or by a computer program stored on a computer-readable medium. The computer-readable medium may comprise a disk, a device, and/or a propagated signal.
  • DRAWINGS
  • Various embodiments of the disclosed invention are described herein by way of example in conjunction with the following figures.
  • FIGS. 1A-1C illustrate various embodiments of a method for rendering text onto moving image content;
  • FIG. 2 illustrates various embodiments of a method for submitting moving image content to a provider;
  • FIG. 3 illustrates various embodiments of a method for transcribing dialog associated with moving image content;
  • FIG. 4 illustrates various embodiments of a method for time-stamping a transcription of dialog;
  • FIG. 5 illustrates various embodiments of a system for rendering text onto moving image content; and,
  • FIGS. 6 through 14 illustrate various examples of screen displays which include displays of information, user interfaces and/or other tools that may be used in association with various embodiments of methods and systems for processing moving image content.
  • DETAILED DESCRIPTION
  • It is to be understood that at least some of the figures and descriptions of the disclosed invention have been simplified to illustrate elements that are relevant for a clear understanding of the disclosed invention, while eliminating, for purposes of clarity, other elements. Those of ordinary skill in the art will recognize, however, that these and other elements may be desirable. However, because such elements are well known in the art, and because they do not facilitate a better understanding of the disclosed invention, a discussion of such elements is not provided herein.
  • FIG. 1 illustrates various embodiments of a method 10 for rendering text onto moving image content. The moving image content may be any moving image content such as, for example, a full feature film, a movie, a video clip, etc. In various implementations, the method 10 may be implemented at least in part by hardware (e.g., device, computer, computer system, equipment, component, etc.); software (e.g., program, application, instruction set, code, etc.); storage medium (e.g., disk, device, propagated signal, etc.); or a combination thereof. It should be noted, however, that the method 10 may be performed in any manner consistent with the aspects of the disclosed invention.
  • The process begins at block 12, where moving image content (e.g., a movie) is submitted to a provider. The moving image content may be submitted to the provider by anyone in any suitable manner. For example, the moving image content may be submitted by a producer, a director, a distributor, etc. and may be submitted electronically to the provider, mailed to the provider, hand-delivered to the provider, etc. According to various embodiments, a submitter may access a website associated with the provider, and cause the moving image content to be submitted to an IP address associated with the provider.
  • From block 12, the process advances to block 14, where the provider receives the submitted moving image content. As the moving image content may be submitted in any suitable manner, it follows that the submitted moving image content may be received by the provider in any suitable manner. According to various embodiments, the submitted moving image content is received electronically via a server associated with the provider. From block 14, the process advances to block 16 or to block 18.
  • If the moving image content received by the provider at block 14 is not in a digital format, the process advances from block 14 to block 16, where the moving image content is converted to a digital format. From block 16, the process advances to block 18, where the moving image content is stored for use as described hereinbelow. According to various embodiments, the moving image content may be stored as a flat file on a medium accessible by a server associated with the provider. If the moving image content received by the provider at block 14 is already in a digital format, the process advances from block 14 to block 18, where the moving image content is stored for use as described hereinbelow. The moving image content stored at block 18 may serve as a master version of the moving image content. The master version may be used to create each different version of the moving image content subsequently viewed.
  • From block 18, the process advances to block 20, where the moving image content is classified by title, producer, genre, etc. or any combination thereof. The classification information is stored at block 22 for use as described hereinbelow. According to various embodiments, the classification information is stored on a medium accessible by a server associated with the provider. From block 22, the process advances to block 24, where a time-stamped transcription of the original dialog associated with the moving image content is generated. Generally, the time-stamped transcription is in the native language of the original dialog. The process described at block 24 may be completed by the provider or another party, and may be completed manually offline or may be completed online. As used herein, the term “online” refers to being connected to a remote service such as, for example, the Internet. From block 24, the process advances to block 26, where the time-stamped transcription is stored for use as described hereinbelow. According to various embodiments, the time-stamped transcription may be stored as a database file on a medium accessible by a server associated with the provider. The time-stamped transcription may serve as a master version for all subsequent translations of the text associated with the moving image content as described hereinbelow. After the time-stamped transcription is stored, the moving image content is ready for text rendering.
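The time-stamped transcription described at blocks 24-26 can be thought of as a set of dialog segments, each carrying a start time and a stop time, persisted as a flat, database-style file. The sketch below illustrates one such record in Python; the field names and the JSON serialization are assumptions for illustration only, not part of the disclosure.

```python
import json

# Illustrative master record for a time-stamped transcription: one row per
# line of dialog, with start and stop times in seconds. Field names are
# assumptions for illustration, not prescribed by the description.
rows = [
    {"line": 1, "start": 0.0, "stop": 2.5, "text": "Hello, and welcome."},
    {"line": 2, "start": 2.5, "stop": 5.0, "text": "Today we begin."},
]

# The master transcription may be persisted as a flat, database-style file
# on a medium accessible by the provider's server, and restored on demand.
serialized = json.dumps(rows)
restored = json.loads(serialized)
```

Because every later translation is keyed to these segments, the master record only needs to be time-stamped once.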
  • From block 26, the process advances to block 28, where a request to translate the time-stamped transcription into another language is submitted to the provider. The request may be submitted by anyone in any suitable manner. For example, the request may be submitted by a professional translator, and may be submitted electronically to the provider, telephoned to the provider, mailed to the provider, hand-delivered to the provider, etc. According to various embodiments, a translator may access a website associated with the provider and cause the request to be submitted to an IP address associated with the provider.
  • From block 28, the process advances to block 30, where the request to translate is received by the provider. Responsive to the request, an interface is transmitted at block 32 to a client system associated with the person who made the request. From block 32, the process advances to block 34, where the client system receives the interface. From block 34, the process advances to block 36, where the interface is utilized to request a copy of the master version of the time-stamped transcription from the provider. According to various embodiments, the request includes an indication of a particular moving image content (e.g., by the title of the moving image content). The provider receives the request at block 38, and responsive thereto, transmits a copy of the master version of the time-stamped transcription to the client system at block 40.
  • The client system receives the copy of the time-stamped transcription at block 42, and coordinates the presentation of the time-stamped transcription to the translator at block 44. From block 44, the process advances to block 46, where the translator selects a language other than the language of the original dialog, then inputs text corresponding to the translation of the time-stamped transcription into the selected language. When the translator is finished inputting such text, the translator may cause the textual translation to be transmitted to the provider at block 48. As the textual translation is based on the time-stamped transcript, the textual translation is also time-stamped to correspond with the original dialog. The textual translation is received by the provider at block 50, is classified as to the appropriate language, the start and stop time for each line of text, etc. at block 52, and is stored at block 54 for use as described hereinbelow. The textual translation stored at block 54 represents the text associated with the moving image content. According to various embodiments, the textual translation is stored as a database file on a medium accessible by a server associated with the provider. The process described from block 12 to block 54, or any portion thereof, may be repeated sequentially or concurrently for any number of submitters, any number of translators, and any amount of moving image content.
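Because the translator works against the master time-stamped transcript, each translated line can simply inherit the start and stop times of its source segment, which is why the textual translation transmitted at block 48 is automatically time-stamped to correspond with the original dialog. A minimal sketch, with illustrative function and field names:

```python
def attach_timing(master, translated_lines):
    """Pair each translated line with the timing of its source segment.

    master: list of {"start", "stop", "text"} segments (the time-stamped
    transcript); translated_lines: the translator's text, one entry per
    segment, in the same order. Names are illustrative assumptions.
    """
    if len(master) != len(translated_lines):
        raise ValueError("translation must cover every segment")
    return [
        {"start": seg["start"], "stop": seg["stop"], "text": line}
        for seg, line in zip(master, translated_lines)
    ]

master = [
    {"start": 0.0, "stop": 2.5, "text": "Hello."},
    {"start": 2.5, "stop": 5.0, "text": "Goodbye."},
]
spanish = attach_timing(master, ["Hola.", "Adiós."])
```

The resulting translation carries the same start and stop time for each line of text, which is the information classified and stored at blocks 52-54.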
  • From block 54, the process advances to block 56 or to block 80. If a translator wishes to edit a current version of a translation, the process advances to block 56, where the translator may submit a request to edit a current version of a translation. The request may be submitted by anyone in any suitable manner. For example, the request may be submitted by a professional translator, and may be submitted electronically to the provider, telephoned to the provider, mailed to the provider, hand-delivered to the provider, etc. According to various embodiments, a translator may access a website associated with the provider and cause the request to be submitted to an IP address associated with the provider.
  • From block 56, the process advances to block 58, where the request to edit a current version of a translation is received by the provider. Responsive to the request, an interface is transmitted at block 60 to a client system associated with the person who made the request. From block 60, the process advances to block 62, where the client system receives the interface. From block 62, the process advances to block 64, where the interface is utilized to request a copy of the current version of the translation from the provider. The provider receives the request at block 66, and responsive thereto, transmits a copy of the current version of the translation to the client system at block 68.
  • The client system receives the copy of the current version of the translation at block 70, and coordinates the presentation of the current version of the translation to the translator at block 72. From block 72, the process advances to block 74, where the translator inputs the text corresponding to the edits of the translation. When the translator is finished inputting such edits, the translator may cause the edits to be transmitted to the provider at block 76. As the edits are based on the current version of the translation, which is based on the time-stamped transcript, the edits are also time-stamped to correspond with the original dialog. The edits are received by the provider at block 78, and the edits are incorporated into the current stored version of the translation at block 80. The edit process described from block 56 to block 80 may be repeated sequentially or concurrently for any number of translators, for any number of translations, and for any amount of moving image content.
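The incorporation of edits at block 80 can be sketched as replacing the text of individual segments while preserving their timing, so the edited translation stays synchronized with the original dialog. The helper below is illustrative only; the disclosure does not prescribe a particular data structure or keying scheme.

```python
def apply_edits(current, edits):
    """Incorporate edits into the current version of a translation.

    current: list of timed segments; edits: mapping of segment index to
    replacement text. Timing fields are preserved, so the edited text
    stays in sync with the original dialog. Names are illustrative.
    """
    updated = [dict(seg) for seg in current]  # leave the stored copy intact
    for index, new_text in edits.items():
        updated[index]["text"] = new_text
    return updated

current = [{"start": 0.0, "stop": 2.0, "text": "Helo."}]
revised = apply_edits(current, {0: "Hello."})
```

Copying the segments before mutation lets the provider keep the prior stored version until the incorporated edits are committed.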
  • Following block 80, or block 54 if a translator does not wish to edit a current version of a translation, the process advances to block 82, where a viewer may request to view the moving image content with text rendered thereon in a particular language. According to various embodiments, a viewer may access a website associated with the provider, and cause the request to be submitted to an IP address associated with the provider. The provider receives the request at block 84. From block 84, the process may advance to block 86 or block 96.
  • If the request received at block 84 is a request to view the moving image content via a client system, the process advances from block 84 to block 86, where the provider transmits the appropriate text to the client system, then transmits the requested moving image content at block 88 to the client system. According to various embodiments, the text transmitted at block 86 may also include text in any number of other languages. At block 90, the client system receives the text, and then superimposes the particular text on the moving image content as the moving image content is received at block 92. Therefore, the text is rendered onto the moving image content dynamically. The client system coordinates the presentation of the moving image content with the text rendered thereon to the viewer at block 94. The process described at blocks 82-94 may be repeated sequentially or concurrently for any number of viewers for any amount of moving image content in any number of languages.
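The dynamic rendering described at blocks 90-94 amounts to selecting, for each frame, whichever line of the text track is active at the current playback time and superimposing it on the frame. A minimal sketch, with illustrative names:

```python
def line_for_time(track, t):
    """Pick the text to superimpose on the frame at playback time t."""
    for seg in track:
        if seg["start"] <= t < seg["stop"]:
            return seg["text"]
    return None  # no dialog at this moment: render the frame unmodified

track = [
    {"start": 0.0, "stop": 2.0, "text": "Bonjour."},
    {"start": 2.0, "stop": 4.5, "text": "Au revoir."},
]

def render(track, frame_times):
    """Simulate dynamic rendering: pair each frame time with its overlay."""
    return [(t, line_for_time(track, t)) for t in frame_times]
```

Because the lookup depends only on the text track and the play clock, the same master moving image content can be rendered with any stored language track on demand.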
  • If the request received at block 84 is a request to view the moving image content from a physical medium such as, for example, a digital video disk (DVD), the process advances from block 84 to block 96, where the provider utilizes the master version of the moving image content and the current version of the appropriate text to produce a physical copy of the moving image content complete with the appropriate text. According to various embodiments, the physical medium may include text in any number of languages. From block 96, the process advances to block 98, where the provider delivers or arranges for the delivery of the physical copy to the viewer. Once the viewer receives the physical copy, the viewer may view the physical copy in a suitable manner.
  • From the foregoing, it will be appreciated by one skilled in the art that the above-described method 10 may be utilized by multiple people to work on the same or different moving image content, in the same or different language, at the same or different time, separately, collectively, or any combination thereof. The method 10 may be utilized to increase the scope of what moving image content can be made available with text rendered thereon and to lower the cost associated with such offerings. It will also be appreciated that according to various embodiments, instead of the steps of the method 10 being performed sequentially as described hereinabove, many of the steps can be performed concurrently.
  • According to various embodiments, the provider may charge a fee to the person/entity who originally submits the moving image content. The provider may also charge a fee to the viewer for providing the moving image content with the text rendered thereon in a given language. According to various embodiments, the moving image content may be provided on a pay-per-view basis, and the provider may share a portion of the revenues generated by the pay-per-view with the appropriate translator or translators. For embodiments where a physical copy of the moving image content is provided to the viewer, the provider may share a portion of the revenues generated by the sale of the physical copy with the appropriate translator or translators. According to various embodiments, the translators may charge the provider and/or the submitter a fee for translating the dialog. According to other embodiments, the translators may provide the translations for free as a public service.
  • According to various embodiments, the submission of moving image content, the transcription of the dialog associated with moving image content, and the time-stamping of the transcription may be accomplished online as described with respect to FIGS. 2-4.
  • FIG. 2 illustrates various embodiments of a method 100 for submitting moving image content to a provider. The process begins at block 102, where a submitter utilizes a client module residing at a client system to electronically submit moving image content (e.g., a movie) to a provider system. As used herein, the term “client module” refers to any type of software application that may be utilized to access, interact with, and view content associated with various Internet resources. From block 102, the process advances to block 104, where the provider system receives and stores the submitted moving image content. The moving image content stored at block 104 may serve as a master version of the moving image content, and may be used to create each different version of the moving image content subsequently viewed. From block 104, the process advances to block 106, where the provider system converts the stored moving image content to a digital format suitable for interactive work, and classifies the formatted moving image content as “not transcribed.”
  • FIG. 3 illustrates various embodiments of a method 120 for transcribing dialog associated with moving image content. The process begins at block 122, where a transcriber utilizes a client module residing at a client system to submit a request to a provider system, where the request is a request to transcribe dialog associated with a given piece of moving image content stored by the provider system (e.g., the moving image content stored at block 106 of FIG. 2). From block 122, the process advances to block 124, where the provider system receives the request, and responsive thereto, transmits the requested moving image content in a suitable format along with an interface (e.g., HTML and Flash) to the client system. From block 124, the process advances to block 126, where the client system receives the moving image content and the interface, and the transcriber utilizes the interface to interactively play the moving image content and transcribe lines of dialog associated therewith. From block 126, the process advances to block 128, where the transcriber causes the transcription to be electronically transmitted from the interface to the provider system. From block 128, the process advances to block 130, where the provider system receives and stores the transcription, and reclassifies the previously stored moving image content as “transcribed but not time-stamped.”
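The status labels applied at blocks 106 and 130 (“not transcribed,” “transcribed but not time-stamped,” and, in connection with FIG. 4, “transcribed and time-stamped”) suggest a simple state machine on the provider system. The transition table below is an assumption for illustration; only the labels themselves come from the description.

```python
# Illustrative state machine for a work's processing status. The status
# labels come from the description; the event names and transition logic
# are assumptions for illustration.
TRANSITIONS = {
    ("not transcribed", "transcription received"):
        "transcribed but not time-stamped",
    ("transcribed but not time-stamped", "time-stamps received"):
        "transcribed and time-stamped",
}

def reclassify(status, event):
    """Advance a work's status when a workflow event arrives."""
    try:
        return TRANSITIONS[(status, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not valid in status {status!r}")
```

Tracking status this way lets the provider system advertise which works still need transcription or time-stamping.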
  • FIG. 4 illustrates various embodiments of a method 140 for time-stamping a transcription of dialog associated with a given piece of moving image content. The process begins at block 142, where a time-stamper utilizes a client module residing at a client system to submit a request to a provider system, where the request is a request to time-stamp a stored transcription of dialog associated with a given piece of stored moving image content (e.g., the transcript stored at block 130 of FIG. 3 and the moving image content stored at block 106 of FIG. 2). From block 142, the process advances to block 144, where the provider system receives the request, and responsive thereto, transmits the moving image content and the transcript along with an interface to the client system. From block 144, the process advances to block 146, where the client system receives the moving image content, the transcript and the interface, and the time-stamper utilizes the interface and its interactive elements (e.g., dialog begins button, dialog ends button, play clock, etc.) to play the moving image content and indicate starting and ending time-stamps for each segment of the transcript. From block 146, the process advances to block 148, where the time-stamper causes the time-stamps to be electronically transmitted from the interface to the provider system. From block 148, the process advances to block 150, where the provider system receives and stores the time-stamps, and reclassifies the previously stored moving image content as “transcribed and time-stamped.” At this point, the dialog associated with the moving image content is ready for subsequent translating.
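The interactive elements described at block 146 (a “dialog begins” button, a “dialog ends” button, and a play clock) can be sketched as a small recorder that captures a (start, stop) pair for each transcript segment. Class and method names below are illustrative assumptions:

```python
class TimeStamper:
    """Sketch of the interactive time-stamping tool: the user presses
    'dialog begins' and 'dialog ends' against a running play clock."""

    def __init__(self):
        self.stamps = []   # one (start, stop) pair per transcript segment
        self._start = None

    def dialog_begins(self, clock):
        """Record the play-clock time at which a dialog segment starts."""
        self._start = clock

    def dialog_ends(self, clock):
        """Close the open segment and store its (start, stop) pair."""
        if self._start is None:
            raise RuntimeError("'dialog ends' pressed before 'dialog begins'")
        self.stamps.append((self._start, clock))
        self._start = None
```

The collected pairs are what the time-stamper transmits to the provider system at block 148.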
  • FIG. 5 illustrates various embodiments of a system 200 for rendering text onto moving image content. In general, one or more elements of the system 200 may perform the method 10 described hereinabove.
  • As shown, the system 200 includes a client system 210 for presenting information to and receiving information from a user. The client system 210 may include one or more client devices such as, for example, a personal computer (PC) 212, a workstation 214, a laptop computer 216, a network-enabled personal digital assistant (PDA) 218, and a network-enabled mobile telephone 220. Other examples of a client device include, but are not limited to, a server, a microprocessor, an integrated circuit, a fax machine, or any other component, machine, tool, equipment, or some combination thereof capable of responding to and executing instructions and/or using data.
  • According to various embodiments, the client system 210 may include a client module 222, and a superimposing module 224 for superimposing text onto the moving image content as the moving image content is received by the client system 210. As explained previously, the client module 222 may be utilized to access, interact with, and view content associated with various Internet resources. The client system 210 may also include Macromedia Flash Player, and the superimposing module 224 may be embodied, for example, as a Flash plug-in.
  • The modules 222-224 may be implemented utilizing any suitable computer language (e.g., C, C++, Java, JavaScript, Visual Basic, VBScript, Delphi, etc.) and may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, storage medium, or propagated signal capable of delivering instructions to a device. The modules 222-224 may be stored on a computer-readable medium (e.g., disk, device, and/or propagated signal) such that when a computer reads the medium, the functions described herein are performed. According to various embodiments, the modules 222-224 may be installed on separate, distinct client devices and may be administered by different entities. Also, different functional aspects of the modules 222-224 may be installed on separate, distinct client devices.
  • In various implementations, the client system 210 operates under the command of a client controller 226. The broken lines are intended to indicate that in some implementations, the client controller 226, or portions thereof considered collectively, may instruct one or more elements of the client system 210 to operate as described. Examples of a client controller 226 include, but are not limited to a computer program, a software application, computer code, set of instructions, plug-in, applet, microprocessor, virtual machine, device, or combination thereof, for independently or collectively instructing one or more client devices to interact and operate as programmed.
  • The client controller 226 may be implemented utilizing any suitable computer language (e.g., C, C++, Java, JavaScript, Visual Basic, VBScript, Delphi, Flash/Actionscript, etc.) and may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, storage medium, or propagated signal capable of delivering instructions to a device. The client controller 226 (e.g., software application, computer program) may be stored on a computer-readable medium (e.g., disk, device, and/or propagated signal) such that when a computer reads the medium, the functions described herein are performed.
  • In general, the client system 210 may be connected through a network 230 having wired or wireless data pathways 232, 234 to provider system 240. Although only one client system 210 is shown in FIG. 5, it is understood that any number of client systems 210 may be connected to the provider system 240 via the network 230. The network 230 may include any type of delivery system including, but not limited to a local area network (e.g., Ethernet), a wide area network (e.g., the Internet and/or World Wide Web), a telephone network (e.g., analog, digital, wired, wireless, PSTN, ISDN, GSM, GPRS, and/or xDSL), a packet-switched network, a radio network, a television network, a cable network, a satellite network, and/or any other wired or wireless communications network configured to carry data. The network 230 may include elements, such as, for example, intermediate nodes, proxy servers, routers, switches, and adapters configured to direct and/or deliver data.
  • In general, the client system 210 and the provider system 240 each include hardware and/or software components for communicating with the network 230 and with each other. The client system 210 and provider system 240 may be structured and arranged to communicate through the network 230 using various communication protocols (e.g., HTTP, TCP/IP, UDP, WAP, WiFi, Bluetooth) and/or to operate within or in concert with one or more other communications systems.
  • The provider system 240 generally hosts a set of resources. As shown, the provider system 240 includes a host 242, and may include data storage means 244 (e.g., storage arrays, disks, devices, etc.) in communication with the host 242. The host 242 may be implemented by one or more servers (e.g., IBM® OS/390 operating system servers, Linux operating system-based servers, Windows NT™ servers) providing one or more assets (e.g., data storage, applications, etc.). According to various embodiments, the host 242 may be configured to perform one or more of the following functions: receiving and transmitting moving image content, receiving and transmitting a transcription of dialog associated with the moving image content, receiving time-stamps associated with the transcription, and receiving and transmitting a translation of the dialog. According to various embodiments, the functionality of the host 242 may be implemented by more than one host. For such embodiments, the various hosts are configured to collaborate with one another to perform the method 10 described hereinabove.
  • According to various embodiments, the functionality of the host 242 may be implemented by one or more modules that comprise the host 242. For example, according to various embodiments, a submission module 246 may be configured to manage the process of receiving and storing moving image content. A transcription module 248 may be configured to manage the process of transcribing original dialogs associated with moving image content. A time-stamp module 250 may be configured to manage the process of time-stamping transcriptions of dialogs associated with moving image content. A translation module 252 may be configured to manage the process of translating time-stamped transcriptions of dialogs associated with moving image content into different languages. A rendering module 254 may manage the process of retrieving stored moving image content, transcriptions thereof, and time-stamped transcriptions and translations thereof, and transmitting the moving image content, transcripts, time-stamped transcripts, and translations. In various embodiments, the modules 246-254 are configured to collaborate with one another to perform the method 10 described hereinabove.
  • The modules 246-254 may be implemented utilizing any suitable computer language (e.g., C, C++, Java, JavaScript, Visual Basic, VBScript, Delphi, etc.) and may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, storage medium, or propagated signal capable of delivering instructions to a device. The modules 246-254 may be stored on a computer-readable medium (e.g., disk, device, and/or propagated signal) such that when a computer reads the medium, the functions described herein are performed. Although the modules 246-254 are shown in FIG. 5 as part of the host 242, according to various embodiments, the modules 246-254 may be installed on separate, distinct hosts and may be administered by different entities. Also, different functional aspects of the modules 246-254 may be installed on separate, distinct hosts.
  • In various implementations, the provider system 240 operates under the command of a provider controller 256. The broken lines are intended to indicate that in some implementations, the provider controller 256, or portions thereof considered collectively, may instruct one or more elements of provider system 240 to operate as described. Examples of a provider controller 256 include, but are not limited to a computer program, a software application, computer code, set of instructions, plug-in, microprocessor, virtual machine, device, or combination thereof, for independently or collectively instructing one or more computing devices to interact and operate as programmed.
  • In general, the provider controller 256 may be implemented utilizing any suitable algorithms, computing language (e.g., C, C++, Java, JavaScript, Perl, Visual Basic, VBScript, Delphi, SQL, PHP, etc.) and may be embodied permanently or temporarily in any type of computer, computer system, device, machine, component, physical or virtual equipment, storage medium, or propagated signal capable of delivering instructions. The provider controller 256 when implemented as software or a computer program, for example, may be stored on a computer-readable medium (e.g., device, disk, or propagated signal) such that when a computer reads the medium, the functions described herein are performed. It will be appreciated that, to perform one or more of the above-described functions, a single component described herein may be replaced by multiple components, and multiple components described herein may be replaced by a single component.
  • With general reference to FIGS. 6 through 14, various examples of screen displays, tools and/or interfaces are illustrated that may be employed in connection with various embodiments of the methods, systems, and media described above. These screen displays and interfaces are included for the purpose of illustrating examples of certain practical implementations, operations, and functions of the invention.
  • With reference to FIG. 6, a “Home” page 601 can be presented to a user upon login to the provider system 240, for example, to allow the user to make various selections associated with processing or accessing moving image content. For example, selecting a “Featured” link 602 on the “Home” page 601 may present various works of moving image content that an administrator of the provider system 240 desires to highlight at a given time. In the example shown in FIG. 6, one or more films 604, 606 may be presented to the user in a “Featured Films” section of the page 601. Each film 604, 606 may have various associated data or other characteristics such as a number of times that the film has been viewed 604A, 606A; the number of translations initiated 604B, 606B; the number of translations completed 604C, 606C; and/or, the time and date 604D, 606D when the films 604, 606 were posted to the web site.
  • Other options for the user that may be offered on the “Home” page 601 include a “Most Viewed” link 608 that navigates the user to the most frequently accessed moving image content. A “Latest” link 610 may be included that directs the user to the most recently posted moving image content. A “Genre” link 612 may be provided that permits the user to search moving image content by various predefined types of content. For example, the screen display 701 illustrated in FIG. 7 includes a list 702 of various moving image content genres that can be selected. A “Collections” link 614 may be included that takes the user to a screen that displays various selections of moving image content grouped into predefined collections. With reference to FIG. 8, a “Language” link 616 may be configured to navigate the user to a page 801 that permits the user to view moving image content by a selected language. An excerpt of a list 802 of different languages that can be selected by the user is shown on the “Language” page 801. Also, the user may be permitted to make a selection 804 for a given language from among content in the original language 804A, content having complete translations 804B, or content having partial translations 804C. A “Country” link 618 may also be included that displays works of moving image content on a country-by-country basis.
  • The screen display 901 of FIG. 9 illustrates details of a particular film 902 that may be presented upon user selection of the film 902. A “Choose Language” function 904 permits the user to specify a presentation language for the film 902 that will be used when the film is played or displayed (e.g., using subtitles). A “Share This Film” tool 906 provides a URL address 906A and an HTML code reference 906B that can be employed by the user to embed a link in an e-mail, a web site, or another medium that will navigate users from the link to the film 902 on the provider system 240, for example.
  • In various embodiments, content communicated by the provider system 240 to/from other entities, such as clients or other users, may be formatted in accordance with RSS protocol or MPEG-4 encoding format, for example, to be accessed or presented on a variety of different types of access devices (e.g., laptops, personal computers, wireless phones, “iPod” devices, “iPhone” devices, smart phones, personal data assistants, and/or other like devices). In addition to RSS and MPEG-4, however, any other protocol or standard may be used which governs formatting or compressing audio and/or visual content, such as for web streaming media, CD and DVD distribution, telephony, videophone, broadcast communications, or web syndication (e.g., as may be employed by news websites and web blogs).
  • Referring again to FIG. 9, a “Translate This Film” tool 908 permits the user to translate the film 902 from a variety of different languages into one or more other different languages. One or more languages displayed by this tool 908 may include an associated designation (e.g., a completion percentage in brackets 908A) that reflects how much of the film 902 has been translated into the given language. In the example shown, 29% of the film 902 has been translated into Albanian. By selecting a “Translate!” button 908B, for example, the user can direct the provider system 240 to proceed with presenting the film 902 on the site in transcript format, or to play the film 902, in the desired target translation language.
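The bracketed completion percentage 908A can be derived directly from the stored translation, e.g. as the share of transcript segments that already have translated text. A sketch, in which the field names and the rounding convention are assumptions for illustration:

```python
def completion_percent(segments):
    """Share of transcript segments with a non-empty translation, as an
    integer percentage (e.g., the '29%' shown next to a language)."""
    if not segments:
        return 0
    translated = sum(1 for s in segments if s.get("translation"))
    return round(100 * translated / len(segments))
```

Recomputing this figure whenever a translator submits or edits text keeps the per-language designations on the “Translate This Film” tool current.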
  • With regard to FIG. 10, a screen display 1001 may be presented to the user upon making a “My Films” selection 1002. The “My Films” page 1001 may include films that are designated as “Most Viewed by Me” 1004; as “My Favorites” 1006; and/or as “Posted by Me” 1008. In addition, a “Post a New Film” tool 1010 can be selected by the user to upload new moving image content to the web site. As shown in FIGS. 11A and 11B, the user can designate various characteristics for a new film, for example, in an “About This Film” section 1102, such as a title 1104 for the film and a target location or file 1106 from which the film can be downloaded. In a “Who Can View This Film?” section 1108, the user can specify permissions for the film by restricting or permitting viewing access to certain predesignated users, individuals or groups. In a “Who Can Transcribe This Film?” section 1110 and a “Who Can Translate This Film?” section 1112, the user can likewise specify permissions for transcribing and/or translating the new film that will be posted.
  • Referring now to FIGS. 12 and 13, a “Transcribe Film” function 1202 may be selected that navigates the user to a transcription tool 1302 which can be used to generate subtitles, for example, in a certain language when a selected film 1204 is played. As shown, as the film 1204 plays on screen 1304, the user may enter text in fields 1306A-1306E which can be time-stamped in corresponding time entry fields 1308A-1308E. For example, a user may create a transcription of a film by entering text in fields 1306A-1306E which corresponds to what the user hears while the film plays on the screen 1304. It can be appreciated that similar functionality can also be employed to play a film in a first language, for example, and then enter a translation of the first language into a second language or additional languages. The user may also press a “Reorder by Time” button 1310 to sort text entries in the fields 1306A-1306E according to a chronological order defined by the time stamp information contained in fields 1308A-1308E.
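The transcription workflow above pairs each text field 1306A-1306E with a time-entry field 1308A-1308E, and the "Reorder by Time" button sorts the entries chronologically. A minimal sketch of that data structure and sort (illustrative names, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class SubtitleEntry:
    start: float  # seconds into the film (the time-stamp field)
    text: str     # transcript text typed while the film plays

entries = [
    SubtitleEntry(12.5, "Welcome back."),
    SubtitleEntry(3.0, "Hello, everyone."),
    SubtitleEntry(7.2, "Today we travel to Reykjavik."),
]

# "Reorder by Time": sort entries by their time-stamp information
entries.sort(key=lambda e: e.start)
print([e.start for e in entries])  # → [3.0, 7.2, 12.5]
```

The same structure serves translation: a second-language entry can simply reuse the first language's time stamps.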
  • In various embodiments, the provider system 240 may be configured to retrieve video content automatically from a video podcast (e.g., www.rocketboom.com), for example, or another source or web site containing video content, so that the retrieved video content can be transcribed and/or translated in accordance with the methods and systems described herein. For example, the provider system 240 may be configured to retrieve new episodes of a video program automatically to make the episodes available for transcription and to alleviate the need for users to manually upload the video content to the system 240.
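Automatic episode retrieval of this kind typically means polling a podcast's RSS feed and ingesting any enclosure URLs not yet seen. A self-contained sketch (the feed XML and function names are hypothetical, not from the patent):

```python
import xml.etree.ElementTree as ET

FEED_XML = """<rss><channel>
  <item><title>Episode 2</title><enclosure url="http://example.com/ep2.mp4"/></item>
  <item><title>Episode 1</title><enclosure url="http://example.com/ep1.mp4"/></item>
</channel></rss>"""

def new_episode_urls(feed_xml: str, already_ingested: set) -> list:
    """Return enclosure URLs from the feed that have not yet been ingested."""
    root = ET.fromstring(feed_xml)
    urls = [item.find("enclosure").get("url") for item in root.iter("item")]
    return [u for u in urls if u not in already_ingested]

# Episode 1 was ingested on a previous poll; only Episode 2 is new
print(new_episode_urls(FEED_XML, {"http://example.com/ep1.mp4"}))
```

A production poller would fetch the feed over HTTP on a schedule and queue each new URL for download and transcription.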
  • The provider system 240 may be configured to allow users to subscribe to various types of communication feeds (e.g., RSS feeds) from the system 240. For example, a particular user may want to receive all German-language videos once they have been translated. In an example shown in the screen display 1401 of FIG. 14, a “Subscribe (RSS)” function 1402 may be accessed by the user to subscribe the user to receive content automatically in accordance with one or more criteria currently being viewed or accessed through the system 240. For example, in the example illustrated by FIG. 14, selection of the “Subscribe (RSS)” button 1402 can be configured to subscribe the user to receive all German-language content from the system 240 once the content has been fully translated.
  • In certain embodiments, the user may be able to select from among various criteria in a user interface to specify parameters by which automatic delivery of content will be executed by the system 240. Such criteria may include, for example and without limitation, language of content, genre of content, country of content, degree of translation completeness for content, when content is posted (e.g., the criteria may include delivering newly posted content), and/or various other criteria.
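Matching content against such subscription criteria can be sketched as a simple conjunctive filter (illustrative only; field names like "completeness" are assumptions, not from the patent):

```python
def matches_subscription(video: dict, criteria: dict) -> bool:
    """True if the video satisfies every criterion the subscriber specified."""
    return all(video.get(key) == wanted for key, wanted in criteria.items())

videos = [
    {"title": "Doc A", "language": "German", "completeness": 100},
    {"title": "Doc B", "language": "German", "completeness": 40},
]

# e.g., the FIG. 14 subscription: all German-language content, fully translated
subscription = {"language": "German", "completeness": 100}
print([v["title"] for v in videos if matches_subscription(v, subscription)])  # → ['Doc A']
```

Criteria such as "degree of translation completeness" would more realistically use a threshold comparison rather than equality; exact matching keeps the sketch minimal.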
  • In accordance with various embodiments described above, the provider system 240 may be configured to automatically convert moving image content into an MPEG-4 video encoding format, for example, or other suitable encoding formats employed by various access devices, computer systems, or content players. For example, the provider system 240 may be configured to render subtitles onto video frames of the content once a predetermined degree of translation completeness is achieved (e.g., 100% complete). In certain embodiments, the encoded video files may be communicated from the system 240 to users in accordance with the various RSS feed processes described herein.
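The threshold-triggered rendering step can be sketched as a guard that fires the (expensive) render exactly once per film when completeness crosses the configured level (all names hypothetical; the patent does not prescribe an implementation):

```python
RENDER_THRESHOLD = 100  # percent translated before subtitles are burned in

def maybe_render(film: dict, render) -> bool:
    """Invoke the render callback once the film's translation reaches the threshold.

    Returns True if the film has been rendered (now or previously).
    """
    if film["completeness"] >= RENDER_THRESHOLD and not film["rendered"]:
        render(film)           # e.g., burn subtitles onto frames, encode to MPEG-4
        film["rendered"] = True
    return film["rendered"]

rendered_titles = []
film = {"title": "Doc A", "completeness": 100, "rendered": False}
maybe_render(film, lambda f: rendered_titles.append(f["title"]))
maybe_render(film, lambda f: rendered_titles.append(f["title"]))  # no-op: already rendered
print(rendered_titles)  # → ['Doc A']
```

In a full system this check would run whenever a translation is saved, and the resulting file would then be published via the RSS feed described above.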
  • In the various embodiments described above, translated and/or transcribed moving image content can be saved, deleted, marked as complete, rendered into a specific file format (e.g., “Flash” format), rendered into a particular language or languages, exported to an access device, embedded in a web page, stored on CD, DVD, or other storage medium, and/or otherwise formatted for communication in a variety of ways.
  • While several embodiments of the invention have been described, it should be apparent that various modifications, alterations and adaptations to those embodiments may occur to persons skilled in the art with the attainment of some or all of the advantages of the invention. For example, it will be appreciated that the method 10 can be adapted to allow an audio translation of the original dialog to be generated in a variety of languages, stored, for example, in an MPEG format, and transmitted as an audio stream to be presented concurrently with the moving image content. This audio process may be utilized in lieu of or in addition to the text rendering process described hereinabove. This application is therefore intended to cover all such modifications, alterations and adaptations without departing from the scope and spirit of the disclosed invention as defined by the appended claims.

Claims (20)

1. A method for rendering text onto moving image content, the method comprising:
receiving a request to translate dialog associated with moving image content;
transmitting an interface;
transmitting a time-stamped transcription; and
receiving a translation of the dialog.
2. The method of claim 1, further comprising:
receiving the moving image content;
receiving a transcription of the dialog; and
receiving time-stamps associated with the transcription.
3. The method of claim 2, further comprising:
storing the received moving image content; and
converting the received moving image content to a digital format.
4. The method of claim 3, further comprising classifying the received moving image content.
5. The method of claim 2, further comprising:
receiving a request to transcribe the dialog; and
storing the transcription.
6. The method of claim 5, further comprising reclassifying the received moving image content.
7. The method of claim 2, further comprising:
receiving a request to time-stamp the transcription; and
storing the time-stamps.
8. The method of claim 7, further comprising reclassifying the received moving image content.
9. The method of claim 1, further comprising:
receiving a request to edit a current version of the translation;
transmitting the current version of the translation; and
receiving an edited translation.
10. The method of claim 9, further comprising incorporating each edit to a stored version of the translation.
11. The method of claim 1, further comprising:
receiving a request to view the moving image content with text rendered thereon;
transmitting the translation; and
transmitting the moving image content.
12. A method for rendering text onto moving image content, the method comprising:
transmitting a request to translate dialog associated with moving image content;
receiving an interface;
receiving a time-stamped transcription; and
transmitting a translation of the dialog.
13. The method of claim 12, further comprising:
transmitting a transcription of the dialog; and
transmitting time-stamps associated with the transcription.
14. The method of claim 12, further comprising:
receiving the translation;
receiving the moving image content; and
superimposing text on the moving image content as the moving image content is received.
15. The method of claim 14, wherein the text is superimposed dynamically.
16. A system for rendering text onto moving image content, the system comprising:
a provider system, comprising:
a host configured to:
receive and transmit moving image content;
receive and transmit a transcription of dialog associated with the moving image content;
receive time-stamps associated with the transcription; and
receive and transmit a translation of the dialog.
17. The system of claim 16, further comprising data storage means in communication with the host.
18. A system for rendering text onto moving image content, the system comprising:
a client system, comprising:
a client module; and
a superimposing module configured to superimpose text onto moving image content as the moving image content is received at the client system.
19. A computer program stored on a computer-readable medium, the program comprising instructions which when executed by a processor, cause the processor to:
transmit text associated with moving image content; and
transmit the moving image content.
20. A computer program stored on a computer-readable medium, the program comprising instructions which when executed by a processor, cause the processor to superimpose text onto moving image content dynamically.
US12/281,942 2006-03-06 2007-03-05 Systems and methods for rendering text onto moving image content Abandoned US20100310234A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/281,942 US20100310234A1 (en) 2006-03-06 2007-03-05 Systems and methods for rendering text onto moving image content

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US11/368,647 US20070211169A1 (en) 2006-03-06 2006-03-06 Systems and methods for rendering text onto moving image content
US12/281,942 US20100310234A1 (en) 2006-03-06 2007-03-05 Systems and methods for rendering text onto moving image content
PCT/US2007/005662 WO2007103357A2 (en) 2006-03-06 2007-03-05 Systems and methods for rendering text onto moving image content

Related Parent Applications (3)

Application Number Title Priority Date Filing Date
US11/368,647 Continuation US20070211169A1 (en) 2006-03-06 2006-03-06 Systems and methods for rendering text onto moving image content
US11/368,647 Continuation-In-Part US20070211169A1 (en) 2006-03-06 2006-03-06 Systems and methods for rendering text onto moving image content
PCT/US2007/005662 A-371-Of-International WO2007103357A2 (en) 2006-03-06 2007-03-05 Systems and methods for rendering text onto moving image content

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/315,811 Continuation US9373359B2 (en) 2006-03-06 2011-12-09 Systems and methods for rendering text onto moving image content

Publications (1)

Publication Number Publication Date
US20100310234A1 true US20100310234A1 (en) 2010-12-09

Family

ID=38475490

Family Applications (8)

Application Number Title Priority Date Filing Date
US11/368,647 Abandoned US20070211169A1 (en) 2006-03-06 2006-03-06 Systems and methods for rendering text onto moving image content
US12/281,942 Abandoned US20100310234A1 (en) 2006-03-06 2007-03-05 Systems and methods for rendering text onto moving image content
US13/166,208 Expired - Fee Related US8863220B2 (en) 2006-03-06 2011-06-22 Systems and methods for rendering text onto moving image content
US13/315,811 Active 2026-05-28 US9373359B2 (en) 2006-03-06 2011-12-09 Systems and methods for rendering text onto moving image content
US13/366,743 Abandoned US20120201511A1 (en) 2006-03-06 2012-02-06 Systems and methods for rendering text onto moving image content
US14/789,515 Active US9538252B2 (en) 2006-03-06 2015-07-01 Systems and methods for rendering text onto moving image content
US15/397,139 Active US9936259B2 (en) 2006-03-06 2017-01-03 Systems and methods for rendering text onto moving image content
US15/943,305 Expired - Fee Related US10306328B2 (en) 2006-03-06 2018-04-02 Systems and methods for rendering text onto moving image content


Country Status (2)

Country Link
US (8) US20070211169A1 (en)
WO (1) WO2007103357A2 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110164175A1 (en) * 2010-01-05 2011-07-07 Rovi Technologies Corporation Systems and methods for providing subtitles on a wireless communications device
US8601526B2 (en) 2008-06-13 2013-12-03 United Video Properties, Inc. Systems and methods for displaying media content and media guidance information
US9014546B2 (en) 2009-09-23 2015-04-21 Rovi Guides, Inc. Systems and methods for automatically detecting users within detection regions of media devices
US9201627B2 (en) 2010-01-05 2015-12-01 Rovi Guides, Inc. Systems and methods for transferring content between user equipment and a wireless communications device
US9218122B2 (en) 2011-12-29 2015-12-22 Rovi Guides, Inc. Systems and methods for transferring settings across devices based on user gestures
US9674563B2 (en) 2013-11-04 2017-06-06 Rovi Guides, Inc. Systems and methods for recommending content
US9854318B2 (en) 2011-06-06 2017-12-26 Rovi Guides, Inc. Systems and methods for sharing interactive media guidance information
US10303357B2 (en) 2010-11-19 2019-05-28 TIVO SOLUTIONS lNC. Flick to send or display content
US20220020284A1 (en) * 2020-07-17 2022-01-20 Summit K12 Holdings, Inc. System and method for improving learning efficiency
US20230169275A1 (en) * 2021-11-30 2023-06-01 Beijing Bytedance Network Technology Co., Ltd. Video processing method, video processing apparatus, and computer-readable storage medium

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070211169A1 (en) * 2006-03-06 2007-09-13 Dotsub Llc Systems and methods for rendering text onto moving image content
US9710553B2 (en) * 2007-05-25 2017-07-18 Google Inc. Graphical user interface for management of remotely stored videos, and captions or subtitles thereof
US9244913B2 (en) * 2010-03-19 2016-01-26 Verizon Patent And Licensing Inc. Multi-language closed captioning
US9595015B2 (en) * 2012-04-05 2017-03-14 Nokia Technologies Oy Electronic journal link comprising time-stamped user event image content
CN104583983B (en) * 2012-08-31 2018-04-24 惠普发展公司,有限责任合伙企业 The zone of action of image with addressable link
US20140143218A1 (en) * 2012-11-20 2014-05-22 Apple Inc. Method for Crowd Sourced Multimedia Captioning for Video Content
WO2024059895A1 (en) * 2022-09-23 2024-03-28 Rodd Martin Systems and methods of client-side video rendering

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5889564A (en) * 1995-04-03 1999-03-30 Sony Corporation Subtitle colorwiping and positioning method and apparatus
US20010044726A1 (en) * 2000-05-18 2001-11-22 Hui Li Method and receiver for providing audio translation data on demand
US20030035063A1 (en) * 2001-08-20 2003-02-20 Orr Stephen J. System and method for conversion of text embedded in a video stream
US20040117820A1 (en) * 2002-09-16 2004-06-17 Michael Thiemann Streaming portal and system and method for using thereof
US20040168203A1 (en) * 2002-12-12 2004-08-26 Seo Kang Soo Method and apparatus for presenting video data in synchronization with text-based data
US20050086702A1 (en) * 2003-10-17 2005-04-21 Cormack Christopher J. Translation of text encoded in video signals
US20050162551A1 (en) * 2002-03-21 2005-07-28 Koninklijke Philips Electronics N.V. Multi-lingual closed-captioning
US20070011012A1 (en) * 2005-07-11 2007-01-11 Steve Yurick Method, system, and apparatus for facilitating captioning of multi-media content
US20070211169A1 (en) * 2006-03-06 2007-09-13 Dotsub Llc Systems and methods for rendering text onto moving image content

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7117231B2 (en) * 2000-12-07 2006-10-03 International Business Machines Corporation Method and system for the automatic generation of multi-lingual synchronized sub-titles for audiovisual data
US20020198699A1 (en) * 2001-06-21 2002-12-26 International Business Machines Corporation Apparatus, system and method for providing open source language translation
US20050078221A1 (en) * 2003-09-26 2005-04-14 Koji Kobayashi Apparatus for generating video contents with balloon captions, apparatus for transmitting the same, apparatus for playing back the same, system for providing the same, and data structure and recording medium used therein

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8601526B2 (en) 2008-06-13 2013-12-03 United Video Properties, Inc. Systems and methods for displaying media content and media guidance information
US8978088B2 (en) 2008-06-13 2015-03-10 Rovi Guides, Inc. Systems and methods for displaying media content and media guidance information
US9414120B2 (en) 2008-06-13 2016-08-09 Rovi Guides, Inc. Systems and methods for displaying media content and media guidance information
US10631066B2 (en) 2009-09-23 2020-04-21 Rovi Guides, Inc. Systems and method for automatically detecting users within detection regions of media devices
US9014546B2 (en) 2009-09-23 2015-04-21 Rovi Guides, Inc. Systems and methods for automatically detecting users within detection regions of media devices
US10085072B2 (en) 2009-09-23 2018-09-25 Rovi Guides, Inc. Systems and methods for automatically detecting users within detection regions of media devices
US9201627B2 (en) 2010-01-05 2015-12-01 Rovi Guides, Inc. Systems and methods for transferring content between user equipment and a wireless communications device
US20110164175A1 (en) * 2010-01-05 2011-07-07 Rovi Technologies Corporation Systems and methods for providing subtitles on a wireless communications device
US11662902B2 (en) 2010-11-19 2023-05-30 Tivo Solutions, Inc. Flick to send or display content
US11397525B2 (en) 2010-11-19 2022-07-26 Tivo Solutions Inc. Flick to send or display content
US10303357B2 (en) 2010-11-19 2019-05-28 TIVO SOLUTIONS lNC. Flick to send or display content
US9854318B2 (en) 2011-06-06 2017-12-26 Rovi Guides, Inc. Systems and methods for sharing interactive media guidance information
US9218122B2 (en) 2011-12-29 2015-12-22 Rovi Guides, Inc. Systems and methods for transferring settings across devices based on user gestures
US9674563B2 (en) 2013-11-04 2017-06-06 Rovi Guides, Inc. Systems and methods for recommending content
US20220020284A1 (en) * 2020-07-17 2022-01-20 Summit K12 Holdings, Inc. System and method for improving learning efficiency
US20230169275A1 (en) * 2021-11-30 2023-06-01 Beijing Bytedance Network Technology Co., Ltd. Video processing method, video processing apparatus, and computer-readable storage medium

Also Published As

Publication number Publication date
WO2007103357A2 (en) 2007-09-13
US20170280202A1 (en) 2017-09-28
US9538252B2 (en) 2017-01-03
US20120204218A1 (en) 2012-08-09
US20160192024A1 (en) 2016-06-30
US9936259B2 (en) 2018-04-03
US20120128323A1 (en) 2012-05-24
US9373359B2 (en) 2016-06-21
US20180227642A1 (en) 2018-08-09
US20070211169A1 (en) 2007-09-13
US10306328B2 (en) 2019-05-28
WO2007103357A3 (en) 2008-04-17
US8863220B2 (en) 2014-10-14
US20120201511A1 (en) 2012-08-09

Similar Documents

Publication Publication Date Title
US9373359B2 (en) Systems and methods for rendering text onto moving image content
US7415537B1 (en) Conversational portal for providing conversational browsing and multimedia broadcast on demand
KR101683323B1 (en) Media content retrieval system and personal virtual channel
US9595050B2 (en) Method of disseminating advertisements using an embedded media player page
US7849160B2 (en) Methods and systems for collecting data for media files
US8332886B2 (en) System allowing users to embed comments at specific points in time into media presentation
US8185477B2 (en) Systems and methods for providing a license for media content over a network
US20080284910A1 (en) Text data for streaming video
US20110083069A1 (en) Method and system for providing applications to various devices
US20100169942A1 (en) Systems, methods, and apparatus for tagging segments of media content
KR20130009498A (en) Apparatus and method for scalable application service

Legal Events

Date Code Title Description
AS Assignment

Owner name: DOTSUB LLC, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIGVALDASON, THOR;REEL/FRAME:021494/0156

Effective date: 20071025

AS Assignment

Owner name: DOTSUB INC., NEW YORK

Free format text: CHANGE OF NAME;ASSIGNOR:DSUB ACQUISITION INC.;REEL/FRAME:022302/0025

Effective date: 20090127

Owner name: DSUB ACQUISITION INC., NEW YORK

Free format text: ASSET PURCHASE AGREEMENT;ASSIGNOR:DOTSUB LLC;REEL/FRAME:022301/0870

Effective date: 20090121

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION