AU2006249275B2 - Thin video client editing - Google Patents

Thin video client editing

Info

Publication number
AU2006249275B2
Authority
AU
Australia
Prior art keywords
video
editing
timestamp
network
client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2006249275A
Other versions
AU2006249275A1 (en)
Inventor
Rajanish Calisa
Hayden Graham Fleming
Andrew Kisiliakov
Rupert William Galloway Reeve
Nicholas James Seow
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2006249275A priority Critical patent/AU2006249275B2/en
Publication of AU2006249275A1 publication Critical patent/AU2006249275A1/en
Application granted granted Critical
Publication of AU2006249275B2 publication Critical patent/AU2006249275B2/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical


Description

S&F Ref: 788627

AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan

Actual Inventor(s): Rajanish Calisa, Rupert William Galloway Reeve, Andrew Kisiliakov, Hayden Graham Fleming, Nicholas James Seow

Address for Service: Spruson & Ferguson, St Martins Tower Level 35, 31 Market Street, Sydney NSW 2000 (CCN 3710000177)

Invention Title: Thin video client editing

The following statement is a full description of this invention, including the best method of performing it known to me/us:

THIN VIDEO CLIENT EDITING

Field of the Invention

The current invention relates to the field of remote and distributed video editing in a home environment and, in particular, to a method and system for editing video data. The current invention also relates to a computer program product including a computer readable medium having recorded thereon a computer program for editing video data.

Background

Due to the explosive growth in consumer devices such as digital cameras and digital video cameras, a typical household tends to accumulate large collections of digital video data, still images and the like over a period of time. Furthermore, most households now own a personal computer (PC) which is used to store the large collections of digital video data. The video data is typically captured during vacations, parties, weddings and the like.

A person using a digital video camera tends to capture as much video data (or footage) as possible and is typically indiscriminate in using the video camera. Hence, over time the captured video data tends to include a large proportion of material which is rather uninteresting or simply not worth storing.

Easy editing of video data captured by a home user therefore becomes very important as part of managing the captured video data. Hence, video editing is no longer limited to professionals trained in the art of video editing. Furthermore, an average home user, who tends to have only limited knowledge of a PC, finds it rather challenging to use the sophisticated video editing software that is generally available in the market. This knowledge tends to be limited in relation to storing and accessing video data.
The popularity of digital media in general has been further amplified by recent advances in wireless technologies, as well as the availability of specialist devices such as media servers and media renderers. The advent of wireless networking has facilitated home networking, where various devices are interconnected by a combination of wired and wireless means.

A media server is typically a software application that runs on a PC and provides access to digital media stored on the PC. A media renderer is a device which is typically connected to a display such as a television (e.g., plasma, liquid crystal display (LCD) or cathode ray tube (CRT)). A media renderer allows the user to download video data from the media server and watch the video data on the television display. In environments such as home networks, video data is typically streamed using standard protocols such as the Hypertext Transfer Protocol (HTTP) in standard formats such as MPEG-2 to ensure interoperability between devices from different vendors.

Conventionally, an end-user has had to edit video data with specialised applications on a PC before the video data can be streamed to the television for display as one or more frames of video data.

Further, there also exists a class of video editing applications which allow video data stored on a remote PC acting as a server to be edited remotely on another PC acting as a client. In this instance, video data from the server is often annotated with frame identifiers so that the client PC can request the server to perform editing operations on video data representing a range of video frames addressed by these frame identifiers. In addition, a special proprietary protocol is required between the client PC and the server to enable editing of video data. Hence, standardised protocols such as HTTP and formats such as MPEG-2 cannot be used. The use of specialised protocols tends to increase the complexity of the client PC used to perform video editing.

Furthermore, there are other disadvantages of the aforementioned remote video editing. For example, the video data retrieved from the video server needs to be buffered before being streamed to the client PC. The client PC also needs to buffer the video data that the client PC wishes to edit. In cases where buffering is limited, the editing relies on the reaction time of the end-user.

Due to numerous sources of network latency, the video server does not know exactly what an end-user, watching one or more video frames represented by the video data to be edited, was viewing when the end-user decided to perform the editing. Typically, the end-user has to react quickly to ensure that a particular sequence of video frames is either edited in or out. Still further, network latency, such as delays due to decoding and network congestion, can introduce phase differences between the server and the client PC. Hence, the server may not be able to accurately determine the range of video data the user wishes to edit.

Thus, a need clearly exists for a more efficient method and system for editing video data.

Summary

It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.
According to one aspect of the present invention there is provided a method of displaying a video on a display device, said method comprising the steps of:

receiving at least one timestamp from a display controller, said timestamp having been extracted by said display controller from the video received by said display controller;

receiving an editing command associated with said timestamp from a video editing client; and

annotating a portion of said video with the editing command, said editing command determining how the portion of the video is displayed on the display device without altering the video.

According to another aspect of the present invention there is provided a method of displaying a video on a display device, said method comprising the steps of:

sending said video from a video data streamer over a network to a display controller connected to said network;

extracting at least one timestamp from said video, said extracted timestamp being sent by said display controller over said network to a video editor connected to said network; and

sending an editing command associated with said timestamp from a video editing client to said video editor, wherein said editing command is used to annotate a portion of the video, said editing command determining how the portion of the video is displayed on the display device without altering the video.

According to still another aspect of the present invention there is provided a method of displaying a video on a display device, said method comprising the steps of:

receiving said video from a video data streamer over a network;

extracting at least one timestamp from said video; and

sending said extracted timestamp over said network to a video editor connected to said network, wherein at least one editing command associated with the timestamp from a video editing client is used to annotate a portion of the video, said editing command determining how the portion of the video is displayed on the display device without altering the video.
According to still another aspect of the present invention there is provided a video editing system comprising:

a video data streamer for sending a video over a network to which said video data streamer is connected;

a display controller connected to said network, for receiving said video and for extracting at least one timestamp from said video, said extracted timestamps being sent by said display controller over said network to a video editor; and

a video editing client connected to said network, for sending at least one editing command associated with said timestamp to said video editor, wherein said editing command is used to annotate a portion of the video, said editing command determining how the portion of the video is displayed on the display device without altering the video.

According to still another aspect of the present invention there is provided a computer readable medium, having a program recorded thereon, where the program is configured to make a computer execute a procedure to display a video on a display device, said program comprising:

code for receiving at least one timestamp from a display controller, said timestamp having been extracted by said display controller from the video received by said display controller over a network from a video data streamer connected to said network;

code for receiving at least one editing command associated with said timestamp from a video editing client; and

code for annotating a portion of the video with said editing command, said editing command determining how the portion of the video is displayed on the display device without altering the video.

Other aspects of the invention are also disclosed.
Brief Description of the Drawings

Some aspects of the prior art and one or more embodiments of the present invention will now be described with reference to the drawings and appendices, in which:

Fig. 1 shows a video editing system upon which methods described below may be practiced;

Fig. 2 shows a video server and a video client of the system of Fig. 1;

Fig. 3 is a flow diagram showing a method of streaming video data;

Fig. 4 is a flow diagram showing a method of editing video data;

Fig. 5 is a flow diagram showing a method of retrieving video data, as executed in the method of Fig. 4;

Fig. 6 is a flow diagram showing a method of performing a session initialisation, as executed in the method of Fig. 4;

Fig. 7 is a flow diagram showing a method of reading tag annotations, as executed in the method of Fig. 4;

Fig. 8 is a flow diagram showing a method of processing a command, as executed in the method of Fig. 4;

Fig. 9 is a schematic block diagram of a general purpose computer upon which the video server of Fig. 1 may be practiced;

Fig. 10 is a schematic block diagram of a general purpose computer upon which the video client of Fig. 1 may be practiced; and

Fig. 11 is a flow diagram showing a method of editing video data performed by the video editor of Fig. 2.

Detailed Description including Best Mode

Where reference is made in any one or more of the accompanying drawings to steps and/or features which have the same reference numerals, those steps and/or features have, for the purposes of this description, the same function(s) or operation(s), unless the contrary intention appears.

It is to be noted that the discussions contained in the "Background" section and that above relating to prior art arrangements relate to discussions of documents or devices which form public knowledge through their respective publication and/or use. Such discussions should not be interpreted as a representation by the present inventor(s) or patent applicant that such documents or devices in any way form part of the common general knowledge in the art.

Methods of editing video data are described below with reference to Figs. 1 to 11. The described methods allow editing to be performed without prior modification of a source of video data. The described methods also eliminate the need for a specialised video format or transmission protocol. Still further, no non-trivial software applications are needed on a client computer, for example, to perform the video editing in accordance with the described embodiments. Such a client is preferably configured to send user input and timestamps back to a video server.

In the described methods, as video data is being received by the client for display, timestamps are extracted from the video data and sent back to the video server simultaneously with displaying the video data. The client also notifies the video server periodically as to what portion of video data is currently being viewed. When a user wants to perform an editing operation on the displayed video data, a command is sent from the client to the video server. Accordingly, the video server knows what section (or frame) of video data the command applies to, due to the video server having been updated about what the user is viewing on the client.

In one embodiment, the timestamps extracted from the video data are values associated with one or more frames of the video data in a video data stream.
Such timestamps are typically included in MPEG-2 data streams and are associated with frames of video data at the time that the video data is encoded, which is typically following capture of the video data. The timestamps are typically determined by sampling the state of a counter controlled by a system clock. In this instance, the timestamps associated with a particular frame of video data may include a presentation timestamp and a decoding timestamp. Alternatively, the timestamps extracted from the video data may be frame identifiers such as a frame sequence number, a unique identifier for a frame, or an absolute time value (e.g., hour, minute, second, date) representing the time that a particular frame of video data was captured.

Fig. 1 shows a video editing system 100 upon which the described methods may be practiced. The video editing system 100 comprises a home network 122. The home network 122 may be a Local Area Network (LAN), for example. The home network 122 may be configured using a combination of wired technologies like Ethernet and wireless technologies like 802.11b/g.

The video editing system 100 comprises a plurality of devices connected to the network 122, including a video server 101 and a video client 102. In the exemplary embodiment, the video client 102 is an enhanced networked television. The system 100 also comprises a digital video camera 103, a PC 104 and a media server 105. The media server 105 may be used for accessing digital video data stored in the PC 104. The system 100 also comprises a media renderer 106 connected to a television 107, via a link 108. The television 107 may be used to display video data stored in remote locations on the home network 122.

Each of the video server 101, the video client 102 (i.e., the networked display television), the digital video camera 103, the PC 104, the media server 105 and the media renderer 106 has a similar configuration. As an example, the video server 101 may be implemented in the form of a computer module, as seen in Fig. 9. In this instance, the video server 101 typically includes at least one processor unit 905 and a memory unit 906, for example, formed from semiconductor random access memory (RAM) and read only memory (ROM). The video server 101 also includes a number of input/output (I/O) interfaces, including an audio-video interface 907 that may be used to couple to a video display 914 and an I/O interface 913 that may be used to couple to a keyboard 902 and a mouse 903. The video server 101 also includes a local network interface 911 which, via a connection 923, permits coupling of the video server 101 to the network 122.

As also illustrated in Fig. 9, the local network 122 may also couple to a wide-area network (WAN) 920, such as the Internet or a private WAN, via a connection 924, which would typically include a so-called "firewall" device or similar functionality. The interface 911 may be formed by an Ethernet™ circuit card, a wireless Bluetooth™ arrangement or an IEEE 802.11 wireless arrangement.

In some embodiments, the video server 101 may also comprise an external Modulator-Demodulator (Modem) transceiver device (not shown) which may be used by the video server 101 for communicating to and from the WAN 920. In this instance, the video server 101 may also comprise an interface for the modem. Such a modem may be incorporated within the video server 101, for example, within an interface. The modem may be a traditional "dial-up" modem. Alternatively, the modem may be a broadband modem.
A wireless modem may also be used for wireless connection to the WAN 920.

The interface 913 may afford both serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 909 are provided and typically include a hard disk drive (HDD) 910. Other devices such as a floppy disk drive and a magnetic tape drive (not shown) may also be used. An optical disk drive 912 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD), USB-RAM and floppy disks, may then be used as appropriate sources of data.

The components 905 to 913 of the video server 101 typically communicate via an interconnected bus 904 and in a manner which results in a conventional mode of operation of the video server 101 known to those in the relevant art. Examples of computers on which the video server 101, the video client 102, the PC 104, the media server 105 and the media renderer 106 can be practised include IBM-PCs and compatibles, Sun SPARCstations, Apple Mac™ or similar computer systems evolved therefrom.

The video server 101, the video client 102, the digital video camera 103, the PC 104, the media server 105 and the media renderer 106 typically communicate with one another using industry standard protocols such as TCP/IP, UDP, HTTP and UPnP. Standard formats such as MPEG-2, JPEG, etc., may be used to exchange media between each of the video server 101, the video client 102, the digital video camera 103, the PC 104, the media server 105 and the media renderer 106. For example, an end-user can operate the media renderer 106 coupled with the television 107, using a remote control, to browse and download video from the video server 101 or the media server 105 and display the video on the television 107. The video client 102 may also provide the same functionality of streaming video and subsequent display of that streamed video. Further, the digital video camera 103 may download video data into storage of any one of the video server 101, the media server 105 or the PC 104.

The described methods may be implemented using the video system 100 comprising the network 122 with the devices 101 to 107 connected thereto, wherein the processes of Figs. 1 to 9 may be implemented as software, such as one or more application programs executable within one or more of the devices 102 to 107. In particular, the steps of the described methods may be effected by instructions in the software that are carried out within the one or more of the devices 101 to 107. The instructions may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the described methods and a second part and the corresponding code modules manage a user interface, for example, between the first part and the user.

The software may be stored in a computer readable medium, including the storage devices described above, for example. The software may be loaded into one or more of the devices 101 to 107 from the computer readable medium, and then be executed by the devices 101 to 107. A computer readable medium having such software or computer program recorded on it is a computer program product.
The use of the computer program product in the one or more of the devices 101 to 107 preferably effects an advantageous apparatus for implementing the described methods.

As described above, the video client 102 has a similar configuration to the video server 101. As seen in Fig. 10, the video client 102 comprises substantially similar components to the video server 101 as described above, where the components are numbered differently in Fig. 10. For example, the video client 102 comprises a processor 1005 and a hard disk drive 1010. The video client 102 also comprises loudspeakers 1017 that couple to an audio-video interface 1007. However, the hardware configuration of the video client 102 will not be explained in further detail herein.

Fig. 2 shows the video server 101 and the video client 102. The video server 101 communicates with the video client 102 over the network 122 using standardised protocols such as TCP/IP, UDP and HTTP. As seen in Fig. 2, the video server 101 comprises three functional software modules which are resident in the hard disk drive 910 and are controlled in their execution by the processor 905. The three software modules are a video storage module 201, a video editor module 202 and a video data streamer module 203. Accordingly, the video data streamer 203 and the video editor 202 are configured within the video server 101. However, in one embodiment, the video data streamer 203 and the video editor 202 may be physically decoupled. That is, the video data streamer 203 and the video editor 202 may be configured separately on different computers.

The video storage module 201 is configured for storing previously captured video data. Such video data may be captured using the digital video camera 103 and may be stored using standard mechanisms such as the UPnP and HTTP protocols. The video data received from the digital video camera 103 is preferably in a standardised format such as MPEG-2. Such MPEG-2 video data typically comprises a sequence of individual frames of video data in chronological order. The video data is stored in the hard disk drive 910, typically in one or more video files. The video data has metadata associated therewith. In particular, the video storage module 201 maintains suitable metadata to access and edit the video data stored in the video files. As described below, one or more portions of the video data are identified using the metadata. In the exemplary embodiment, the metadata stored for each portion of video data is shown in Table 1 below:

Table 1

Metadata          Description
Video Source ID   Unique identifier of the source which originally recorded the video.
Tag Type          Edit tags can be either "discard" or "keep".
Start Time        A value indicating the start time of the video.
End Time          A value indicating the end time of the video.

The metadata is typically stored in a relational database configured within the hard disk drive 910 for easy and quick access. As described in Table 1, the metadata comprises the Video Source ID, which is a unique identifier identifying a source of the video data. For example, the Video Source ID may be an identifier of the digital video camera 103. The Tag Type is an enumerated type which identifies the type of edit the user applies to the video data. In the exemplary embodiment, the user can either choose to keep a portion of video data, which is indicated by a "keep" Tag Type, or discard a portion of video data, which is indicated by a "discard" Tag Type.
For ease of explanation, such a portion of video data will be referred to below as "a sequence of video frames". However, the portion of video data may be in any form and may represent one or more video frames or even some part of a video frame.
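As a concrete illustration of Table 1, the tag metadata could be held in a relational table along the following lines. The sketch below uses Python's built-in sqlite3 module; the table name, column names and sample values are illustrative assumptions only, not identifiers from the specification.

```python
import sqlite3

# A minimal sketch of the edit-tag metadata store of Table 1.
# Table and column names are illustrative assumptions only.
conn = sqlite3.connect("edit_tags.db")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS edit_tags (
        video_source_id TEXT NOT NULL,     -- unique identifier of the recording source
        tag_type        TEXT NOT NULL CHECK (tag_type IN ('keep', 'discard')),
        start_time_ms   INTEGER NOT NULL,  -- start of the tagged sequence, in milliseconds
        end_time_ms     INTEGER NOT NULL   -- end of the tagged sequence, in milliseconds
    )
    """
)

# Example: mark a 10-second sequence from a hypothetical camera "cam-01" to be kept.
conn.execute(
    "INSERT INTO edit_tags VALUES (?, ?, ?, ?)",
    ("cam-01", "keep", 15000, 25000),
)
conn.commit()
```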
The video server 101 is configured to retain the sequence of video frames that is marked by the user as "keep" and eventually delete the sequence of video frames marked as "discard". The "Start Time" value indicates a start time of the sequence of video frames and the "End Time" value indicates the end time of the sequence of video frames. The video storage module 201 maintains the above metadata, typically stored as a table in a relational database configured within the hard disk drive 910. The metadata may be added, removed and accessed from the relational database using Structured Query Language (SQL).

The video data streamer module 203 is configured for sending (or streaming) video data, over the network 122 to which the video data streamer 203 is connected, to the video client 102. Typically, the video client 102 opens an HTTP connection to the video server 101 for the purposes of retrieving video data.

A method 300 of streaming video data will now be described in detail below with reference to Fig. 3. The method 300 is preferably implemented as software in the form of the video data streamer module 203, which is resident on the hard disk drive 910 and is controlled in its execution by the processor 905.

The method 300 begins at step 301, where the processor 905 waits for the video client 102 to establish a connection. Typically, this connection is in the form of a new HTTP connection to a port that the video data streamer module 203 is listening on. At the next step 302, if the processor 905 receives an HTTP request requesting that a new client connection be established, then the method 300 proceeds to step 303. Otherwise, the method 300 returns to step 301. At step 303, the processor 905 extracts the Video Source ID from the HTTP request. Then at the next step 304, the processor 905 extracts a time range from the HTTP request.
The time range is specified as a start time and an end time of a sequence of video frames, and is typically expressed in milliseconds. The Video Source ID, together with the start time and end time, uniquely identifies the sequence of video frames.

Then at step 305, the processor 905 retrieves video data representing the sequence of video frames identified by the extracted Video Source ID and time range, from the hard disk drive 910. A method 500 of retrieving video data, as executed at step 305, will be described in detail below with reference to Fig. 5. At step 306, the processor 905 streams the retrieved video data back to the video client 102 as a response to the HTTP request received at step 302. The video data retrieval from the video storage module 201 and subsequent streaming of the video data may be implemented through a call-back mechanism with appropriate rate control to ensure that the video data is transmitted from the video server 101 to the video client 102 at the correct bit rate. Further, in order to support multiple video clients concurrently, the method 300 may allocate a separate thread to stream the video data.

The method 500 of retrieving video data, as executed at step 305, will now be described with reference to Fig. 5. The method 500 is preferably implemented as software in the form of the video storage module 201, which is resident on the hard disk drive 910 and is controlled in its execution by the processor 905.

The method 500 begins at step 501, where the processor 905 receives the Video Source ID. Upon receipt, the processor 905 may store the Video Source ID in the memory 906. At the next step 502, the processor 905 receives the time range comprising the start time and end time of the sequence of video frames. As described above, the start time and the end time are preferably in milliseconds.
Then at step 503, the processor 905 locates a video file which contains the sequence of video frames corresponding to the start time specified. Preferably, the name of the file that contains the sequence of video frames comprises the Video Source ID and a timestamp of a first frame of the sequence of video frames as part of the file name. Accordingly, the processor 905 may locate the video file using the file name. Still further, the video file may be stored according to a suitable index which may be used by the processor 905 to look up the video file based on the Video Source ID and the start time.

Once the correct video file has been located and opened, the method 500 proceeds to step 504. At step 504, the processor 905 searches the video file to find a first frame having a time value that is greater than or equal to the start time specified. At the next step 505, the processor 905 reads video data representing one or more frames of the sequence of video frames from the located video file. The reading of the video data continues at step 505 until a frame with a time value that is greater than the specified end time is read or there are no more frames in the video file to read. At step 506, if the processor 905 determines that the end time has been reached, then the method 500 concludes. Otherwise, the method 500 proceeds to step 507. At step 507, if end of file is reached (i.e., there are no more frames in the video file to read), then the method 500 proceeds to step 508. Otherwise, the method 500 returns to step 505 and the processor 905 continues to read frames from the video file. At step 508, the processor 905 locates a next video file based on the Video Source ID and the time range, and the method 500 then returns to step 505, operating on the new video file. The video data that is read from the video file(s) at step 505 is passed back to the video data streamer module 203 at the correct bit rate, typically through a call-back interface.
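The file lookup and frame reading of the method 500 might be sketched as follows. This is a minimal illustration assuming the file-naming convention described above (Video Source ID plus first-frame timestamp in milliseconds); the helper names and the read_frames demultiplexer are hypothetical, and a real implementation would parse MPEG-2 frames at that point.

```python
import os
from typing import Iterator, Tuple

# Hypothetical frame record: (timestamp in milliseconds, encoded bytes).
Frame = Tuple[int, bytes]

def locate_video_files(directory: str, source_id: str) -> list:
    """Return the video files for one source, ordered by the first-frame
    timestamp embedded in the file name (e.g. 'cam-01_15000.mpg')."""
    found = []
    for name in os.listdir(directory):
        stem, _, _ = name.partition(".")
        prefix, _, ts = stem.rpartition("_")
        if prefix == source_id and ts.isdigit():
            found.append((int(ts), os.path.join(directory, name)))
    return [path for _, path in sorted(found)]

def read_frames(path: str) -> Iterator[Frame]:
    """Placeholder for a demultiplexer yielding (timestamp, data) pairs
    from one video file; a real system would parse MPEG-2 here."""
    raise NotImplementedError

def retrieve_sequence(directory: str, source_id: str,
                      start_ms: int, end_ms: int) -> Iterator[Frame]:
    """Steps 503 to 508: walk the matching files in order and yield every
    frame whose timestamp falls inside [start_ms, end_ms]."""
    for path in locate_video_files(directory, source_id):
        for ts, data in read_frames(path):
            if ts > end_ms:        # step 506: end time reached
                return
            if ts >= start_ms:     # step 504: first frame at/after start
                yield ts, data
```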
The video editor module 202 is responsible for receiving editing commands 204 and timestamps 205 from the video client 102 and processing the editing commands 204 and timestamps 205 appropriately. As described below, the editing commands 204 received by the video editor 202 are applied by the video editor 202 to one or more portions of the video data depending on the timestamps 205. The editing commands 204 and the timestamps 205 are received by the video editor module 202 via the network 122 from the video client 102.

The video data, the timestamps and the editing commands may be transmitted between the video data streamer 203, the display controller 206, the video editor 202 and the video editing client 207, as described above, on different communication channels or on the same communication channel.

A method 400 of editing video data will now be described with reference to Fig. 4. The method 400 is preferably implemented as software in the form of the video editor module 202, which is resident on the hard disk drive 910 and is controlled in its execution by the processor 905.

The method 400 begins at step 401, where the processor 905 waits for the video client 102 to establish a connection. This connection is typically in the form of a network connection. At the next step 402, if the processor 905 determines that the video client 102 connects to the TCP port that the video editor module 202 is listening on, then the method 400 proceeds to step 403. Otherwise, the method 400 returns to step 401. At step 403, the processor 905 receives an initialisation (INIT) message from the processor 1005 of the video client 102. The INIT message comprises the Video Source ID and a time range which specifies a sequence of video frames that the client is interested in editing. The INIT message may be stored in memory 906 upon being received by the processor 905. As above, this sequence of video frames may comprise one or more video frames or even a part of a video frame.

Then at the next step 404, the processor 905 extracts the Video Source ID from the INIT message. Then at step 405, the processor 905 extracts the time range from the INIT message. The Video Source ID and the time range represent the sequence of video frames that are to be edited. At the next step 406, the processor 905 performs a session initialisation. A method 600 of performing a session initialisation, as executed at step 406, will be described in detail below with reference to Fig. 6.

The method 400 continues at the next step 407, where the processor 905 waits for further commands and messages from the processor 1005 of the video client 102. At the next step 409, if the processor 905 determines that a new command is received and the command is "Exit", then the method 400 concludes. Otherwise, the method 400 proceeds to step 408. At step 408, the new command received from the video client 102 is processed. A method 800 of processing a command, as executed at step 408, will be described in detail below with reference to Fig. 8. Following step 408, the method 400 returns to step 407, where the processor 905 waits for the next new command from the video client 102.

The method 600 of performing a session initialisation, as executed at step 406, will be described in detail below with reference to Fig. 6. The method 600 is preferably implemented as one or more software modules of the video editor module 202, which is resident on the hard disk drive 910 and is controlled in its execution by the processor 905.
The method 600 begins at step 601, where the processor 905 receives the Video Source ID as extracted from the INIT message at step 404. Then at the next step 602, the processor 905 receives the time range comprising a start time and an end time (in milliseconds) of the sequence of video frames to be edited. Then at step 603, the processor 905 reads tag annotations matching the Video Source ID and the time range from the video storage module 201. A method 700 of reading tag annotations, as executed at step 603, will be described in detail below with reference to Fig. 7. The method 600 concludes at the next step 604, where a list of the tag annotations read at step 603 is stored in the memory 906 for processing during the editing session.

The method 700 of reading tag annotations, as executed at step 603, will now be described with reference to Fig. 7. The method 700 is preferably implemented as software in the form of the video editor module 202, which is resident on the hard disk drive 910 and is controlled in its execution by the processor 905.

The method 700 begins at step 701, where the processor 905 receives the Video Source ID as extracted from the INIT message at step 404. Then at step 702, the processor 905 receives the time range comprising the start time and end time (in milliseconds) of the sequence of video frames to be edited. At the next step 703, the processor 905 constructs a suitable SQL query to read records from the relational database whose Video Source ID is equal to the one received at step 701, and whose time range (comprising start and end time) overlaps with the time range received at step 702. The relational database is preferably configured within the hard disk drive 910.

Then at step 704, the processor 905 executes the query constructed at step 703, and the tag annotations matching the Video Source ID and the time range received by the processor 905 at steps 701 and 702 are retrieved from the relational database. At step 705, the retrieved tag annotations are returned to the video editor module 202.
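The overlap query constructed at step 703 can be expressed directly in SQL, since two time ranges overlap exactly when each one starts no later than the other ends. A minimal sketch, reusing the illustrative edit_tags table from the earlier schema sketch:

```python
import sqlite3

def read_tag_annotations(conn: sqlite3.Connection, source_id: str,
                         start_ms: int, end_ms: int) -> list:
    """Steps 703 and 704: fetch every tag annotation for the given source
    whose [start, end] range overlaps the requested range."""
    cursor = conn.execute(
        """
        SELECT video_source_id, tag_type, start_time_ms, end_time_ms
        FROM edit_tags
        WHERE video_source_id = ?
          AND start_time_ms <= ?   -- annotation starts before the request ends
          AND end_time_ms   >= ?   -- annotation ends after the request starts
        """,
        (source_id, end_ms, start_ms),
    )
    return cursor.fetchall()
```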
The method 800 of processing a command, as executed at step 408, will be described in detail below with reference to Fig. 8. The method 800 is preferably implemented as software in the form of the video editor module 202, which is resident on the hard disk drive 910 and is controlled in its execution by the processor 905.

The method 800 begins at step 801, where the processor 905 receives the command that was sent by the processor 1005 of the video client 102 and received by the processor 905 of the video server 101 at step 407. At the next step 802, if the processor 905 determines that the command is a timestamp, then the method 800 proceeds to step 804. Otherwise, if the command is an edit instruction, then the method 800 proceeds to step 803.

At step 804, the new timestamp, as sent by the client 102 and received at step 801, is processed. This timestamp indicates the time of the last frame of video data that was viewed on the video client 102, for example, by a user. Also at step 804, the list of tag annotations stored at step 604 is checked to find a tag annotation that includes the time of the timestamp received at step 801. Video edits are performed by updating the tag annotations based on the timestamp received at step 801 and the current editing mode. For example, if the current editing mode is "Discard", then the video data between the timestamp received at step 801 and a previous timestamp is discarded from the hard disk drive 910. The tag annotations are appropriately adjusted to reflect the discarding of the video data. Similarly, if the editing mode is "Keep", then the video data between the timestamp received at step 801 and the previous timestamp is retained in the hard disk drive 910.

At step 803, the current editing state is changed to either "Keep" or "Discard" video data based on the editing instruction received at step 801. This editing state is maintained throughout the editing session.
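The per-session state machine of the method 800 might look like the following sketch. The simple "TYPE value" message format is an assumption made for illustration; the specification does not define a wire format for the commands.

```python
# A minimal sketch of the per-session command loop of the method 800.
# The line-based "TYPE value" message format is an illustrative assumption.

class EditingSession:
    def __init__(self) -> None:
        self.mode = "keep"        # current editing state (changed at step 803)
        self.last_ts_ms = None    # previous timestamp received from the client

    def process(self, command: str) -> None:
        kind, _, value = command.partition(" ")
        if kind == "TIMESTAMP":               # step 804: apply the current mode
            ts = int(value)
            if self.last_ts_ms is not None:
                self.apply_tag(self.last_ts_ms, ts, self.mode)
            self.last_ts_ms = ts
        elif kind in ("KEEP", "DISCARD"):     # step 803: switch the editing state
            self.mode = kind.lower()

    def apply_tag(self, start_ms: int, end_ms: int, mode: str) -> None:
        # A real implementation would update the tag annotations in the
        # relational store; here we just record the decision.
        print(f"{mode} video between {start_ms} ms and {end_ms} ms")

session = EditingSession()
for msg in ["TIMESTAMP 1000", "DISCARD", "TIMESTAMP 4000", "KEEP", "TIMESTAMP 9000"]:
    session.process(msg)
```

Running the example applies a "discard" tag between 1000 ms and 4000 ms and a "keep" tag between 4000 ms and 9000 ms, mirroring the behaviour described for step 804.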
A method 1100 of editing video data, as performed by the video editor 202 described above, will now be described with reference to Fig. 11. The method 1100 may be implemented as software in the form of the video editor 202, resident on the hard disk drive 910 of the video server 101 and being controlled in its execution by the processor 905.

The method 1100 begins at step 1101, where the video editor 202 receives timestamps from the display controller 206. The timestamps are the timestamps that have been extracted by the display controller 206 from video data received by the display controller 206 over the network 122 from the video data streamer 203 connected to the network 122, as described above. Then at the next step 1103, the video editor 202 receives editing commands from the video editing client 207. The method 1100 then concludes at the next step 1105, where the video editor 202 applies the editing commands to one or more portions of the video data depending on the timestamps received at step 1101.

The video client 102 is typically responsible for downloading video data in a standard format such as MPEG-2 and rendering the downloaded video data on the display 1014 of the video client. Typically, the video client 102, in the form of a networked display, is controlled by a remote control device similar to a remote control that is used to operate a television. The video client 102 typically has a user interface which can be used to browse media, typically in the form of video data, stored on remote servers in the form of the media servers 105. Standard protocols such as UPnP and DLNA exist to support such media browsing. The video client 102 leverages existing technologies such as UPnP and DLNA and builds on these existing technologies to create a unique and novel method of editing remotely stored video.

In the exemplary embodiment, the video client 102 is a networked display providing a simple user interface for browsing video data and playing the video data for display on the display 1014, for example. The user controlling the video client 102 using a remote control can operate the user interface of the video client 102 and send commands to the video server 101. As seen in Fig. 2, the video client 102 comprises a display controller 206 connected to the network 122. The display controller 206 is configured for receiving the video data from the video data streamer 203, using standard protocols. The display controller 206 is also configured for extracting timestamps from the video data that the display controller 206 receives from the video data streamer 203. The extracted timestamps are sent by the display controller 206 over the network 122 to the video editor 202 of the video server 101. The extracted timestamps are the timestamps of the video frames displayed on the display 1014. As such, the video server 101 can keep track of what the end-user has just watched.

In one embodiment, rather than extracting timestamps from the video data, as described above, the timestamps may be generated by the display controller 206 based on a simple count of frames of video data received by the display controller 206.

As also seen in Fig. 2, in response to button selections by the user on the remote control being used to control the video client 102, a video editing client 207 connected to the network 122 is configured for sending editing commands to the video editor 202 of the video server 101.
The buttons on the remote control are suitably mapped to the video editing or annotation commands by the video editing client 207. Accordingly, the video editing client 207 and the display controller 206 are configured within the video client computer 102. However, in one embodiment, the video editing client 207 and the display controller 206 may be physically decoupled. That is, the video editing client 207 and the display controller 206 may be configured separately on different computers.
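As noted above, one embodiment has the display controller 206 generate timestamps from a simple count of received frames rather than parsing them out of the stream. A minimal sketch of that alternative, assuming a fixed 25 fps frame rate and a line-based socket connection to the video editor (both assumptions made for illustration):

```python
import socket

FRAME_PERIOD_MS = 40   # assumed 25 fps source; one frame every 40 ms
REPORT_EVERY = 25      # report roughly once per second of displayed video

def report_timestamps(editor_host: str, editor_port: int, frames) -> None:
    """Generate timestamps from a simple frame count (the alternative
    embodiment) and periodically report the latest one to the video
    editor while frames are handed to the renderer."""
    with socket.create_connection((editor_host, editor_port)) as sock:
        for count, frame in enumerate(frames, start=1):
            render(frame)  # hand the frame to the display
            if count % REPORT_EVERY == 0:
                ts_ms = count * FRAME_PERIOD_MS
                sock.sendall(f"TIMESTAMP {ts_ms}\n".encode("ascii"))

def render(frame: bytes) -> None:
    """Placeholder for the networked display's rendering path."""
    pass
```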
Further, in one embodiment, the timestamps may be extracted from the video data by the video server 101. In this instance, the video client 102 may be used for merely viewing the video data and forwarding editing commands.

In the exemplary embodiment, the only annotations that are used are "Keep" and "Discard". Further, video data marked as "Discard" is not physically deleted. The invention, however, is not limited to the annotations "Keep" and "Discard" only. For example, it is conceivable to have multiple users of the network 122 each annotating a same sequence of video frames in different ways. This may be achieved by introducing new annotation tag types or storing a user-id along with the annotation. Further, video marked as "Discard" may be deleted from the hard disk drive 910 as and when necessary to make room for new video data to be stored on the video server 101.

In the exemplary embodiment, the device that sends the timestamps and the editing commands is the video client 102, implemented in the form of the networked display. However, the timestamp-sending device may be different from the device that sends the video editing commands. In this instance, existing television displays which have no knowledge of video editing may be adapted for use in video editing as described above. Further, with the use of interpolation, the timestamps in the video data may be sparse.

The video editing client 102 may be completely physically decoupled from a display being used to view the video data to be edited. In this instance, the video client 102 sends commands independently to the video server 101. Further, the display used to view the video data to be edited is only responsible for sending the timestamps indicating the frames the user has viewed. Accordingly, the display used to view the video data does not need to be a complex computer. Any suitable display (e.g., an MPEG-2 display) that is already configured to operate on timestamps may be used to implement the described methods with minimal modification.

Another advantage of decoupling the video editing client 102 from a display being used to view the video data to be edited is that multiple users may view the same sequence of video frames on a display and independently and simultaneously annotate the viewed video frames in different ways. In this instance, the video server 101 maintains concurrent sessions for each video client that wishes to edit the sequence of video frames and handles the timestamps and editing instructions in each editing session.

The methods described above eliminate the effect of many sources of latency in a networked video system.

Industrial Applicability

It is apparent from the above that the arrangements described are applicable to the computer and data processing industries.

The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.

In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.

Claims (11)

1. A method of displaying a video on a display device, said method comprising the steps of:

receiving at least one timestamp from a display controller, said timestamp having been extracted by said display controller from the video received by said display controller;

receiving an editing command associated with said timestamp from a video editing client; and

annotating a portion of said video with the editing command, said editing command determining how the portion of the video is displayed on the display device without altering the video.
2. The method according to claim 1, wherein a video data streamer is configured within a server that performs said editing.
3. The method according to claim 1, wherein said video editing client and said display controller are configured within a client computer.
4. The method according to claim 1, wherein the video editing client and the display controller are physically decoupled.

5. The method according to claim 2, wherein said editing command, said video and said timestamp are transmitted on different communication channels over a network from the video data streamer to the display controller.
6. A method according to claim 1, wherein the annotated editing command is associated with a user, such that the video can be displayed on the display device according to the user.
7. A method of displaying a video on a display device, said method comprising the steps of:

sending said video from a video data streamer over a network to a display controller connected to said network;

extracting at least one timestamp from said video, said extracted timestamp being sent by said display controller over said network to a video editor connected to said network; and

sending an editing command associated with said timestamp from a video editing client to said video editor, wherein said editing command is used to annotate a portion of the video, said editing command determining how the portion of the video is displayed on the display device without altering the video.
8. A method of displaying a video on a display device, said method comprising the steps of:

receiving said video from a video data streamer over a network;

extracting at least one timestamp from said video; and

sending said extracted timestamp over said network to a video editor connected to said network, wherein at least one editing command associated with the timestamp from a video editing client is used to annotate a portion of the video, said editing command determining how the portion of the video is displayed on the display device without altering the video.
9. A video editing system comprising:

a video data streamer for sending a video over a network to which said video data streamer is connected;

a display controller connected to said network, for receiving said video and for extracting at least one timestamp from said video, said extracted timestamps being sent by said display controller over said network to a video editor; and

a video editing client connected to said network, for sending at least one editing command associated with said timestamp to said video editor, wherein said editing command is used to annotate a portion of the video, said editing command determining how the portion of the video is displayed on the display device without altering the video.
10. A computer readable medium, having a program recorded thereon, where the program is configured to make a computer execute a procedure to display a video on a display device, said program comprising:

code for receiving at least one timestamp from a display controller, said timestamp having been extracted by said display controller from the video received by said display controller over a network from a video data streamer connected to said network;

code for receiving at least one editing command associated with said timestamp from a video editing client; and

code for annotating a portion of the video with said editing command, said editing command determining how the portion of the video is displayed on the display device without altering the video.
11. A method of displaying a video on a display device, said method being substantially as hereinbefore described with reference to any one of the embodiments as that embodiment is shown in the accompanying drawings.

DATED this Twenty Fifth Day of January 2010

CANON KABUSHIKI KAISHA

Patent Attorneys for the Applicant

SPRUSON & FERGUSON
AU2006249275A 2006-12-08 2006-12-08 Thin video client editing Ceased AU2006249275B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2006249275A AU2006249275B2 (en) 2006-12-08 2006-12-08 Thin video client editing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2006249275A AU2006249275B2 (en) 2006-12-08 2006-12-08 Thin video client editing

Publications (2)

Publication Number Publication Date
AU2006249275A1 AU2006249275A1 (en) 2008-06-26
AU2006249275B2 true AU2006249275B2 (en) 2010-03-04

Family

ID=39580347

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2006249275A Ceased AU2006249275B2 (en) 2006-12-08 2006-12-08 Thin video client editing

Country Status (1)

Country Link
AU (1) AU2006249275B2 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0991928A (en) * 1995-09-25 1997-04-04 Nippon Telegr & Teleph Corp <Ntt> Method for editing image
US6597375B1 (en) * 2000-03-10 2003-07-22 Adobe Systems Incorporated User interface for video editing
US20050013243A1 (en) * 2001-12-11 2005-01-20 Dirk Adolph Method for editing a recorded stream of application packets, and corresponding stream recorder
US20040062525A1 (en) * 2002-09-17 2004-04-01 Fujitsu Limited Video processing system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Patent Abstracts of Japan *

Also Published As

Publication number Publication date
AU2006249275A1 (en) 2008-06-26


Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)
MK14 Patent ceased section 143(a) (annual fees not paid) or expired