CA2260077A1 - Digital video system having a data base of coded data for digital audio and video information - Google Patents

Digital video system having a data base of coded data for digital audio and video information

Info

Publication number
CA2260077A1
CA2260077A1 (application CA002260077A)
Authority
CA
Canada
Prior art keywords
digital
video
coding
information
video system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002260077A
Other languages
French (fr)
Inventor
James Stigler
Ken Mendoza
Rodney D. Kent
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DIGITAL LAVA Inc
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CA2260077A1 publication Critical patent/CA2260077A1/en
Abandoned legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34Indicating arrangements 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • H04N21/8405Generation or processing of descriptive data, e.g. content descriptors represented by keywords
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8545Content authoring for generating interactive applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/858Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot

Abstract

A digital video system having a coding and control station adapted to receive digital reference video information (3) and to code the digital reference video information to generate coded data (9); and a coded data store (2) for storing the coded data. The coded data may include time reference data, attribute data, multiple transcript data, annotations or static documents. Subtitling is performed with ease. Video and transcripts of audio are displayed simultaneously. Slow motion and fast action are permitted. Search and analysis of video and text are available.

Description

DIGITAL VIDEO SYSTEM HAVING A DATA BASE OF CODED DATA FOR DIGITAL AUDIO AND VIDEO INFORMATION

FIELD OF THE INVENTION

This invention relates to a digital video system and method for manipulating digital video information.

BACKGROUND OF THE INVENTION

Recent advances in computer hardware and software are advancing the digital revolution by making it possible to economically store video, in a digital format, on a personal computer. The software applications that use this technology are only now beginning to emerge, but they are expected to become as commonplace in the next decade as spreadsheet and text manipulation programs are today. In June of 1995, the Multimedia PC Working Group and the SPA, an influential organization comprised of leading PC software publishers, approved the new multimedia PC standard platform. It employs MPEG, a video compression industry standard that allows for the efficient storage and VHS-quality playback of digital video on a personal computer. The use of digital audio and video information has the advantage of nearly instant access to any point of information without a time delay. Popular digital video applications include multimedia publishing and digital video editing.
Multimedia publishing includes desktop delivery of games, reference materials, computer-based training courses, advertising, interactive music CDs and electronic books. The user can view and browse the digital information. Digital video editing is used to edit video and audio clips.
U.S. Patent No. 5,467,288 to Fasciano et al., issued November 14, 1995, is directed to a digital audio workstation for the audio portions of video programs. The Fasciano workstation combines audio editing capability with the ability to immediately display video images associated with the audio program. An operator's indication of a point or segment of audio information is detected and used to retrieve and display the video images that correspond to the indicated audio programming. The workstation includes a labeling and notation system for recording digitized audio or video information. It provides a means for storing, in association with a particular point of the audio or video information, a digitized voice or textual message for later reference regarding that information.
U.S. Patent No. 5,045,940 to Peters et al., issued September 3, 1991, is directed to a data pipeline system which synchronizes the display of digitized audio and video data regardless of the speed with which the data was recorded on its linear medium. The video data is played at a constant speed, synchronized by the audio speed. The above systems do not provide for the need to analyze, index, annotate, store and retrieve large amounts of video information. They cannot support an unlimited quantity of video. They do not permit a transcript to be displayed simultaneously with video or permit ease of subtitling. Subtitling is a painstaking and labor intensive process for the film industry and an impediment to entry into foreign markets. These systems do not permit searches of video or text for words or events or permit real time coding of video. Additionally, these systems do not permit changing the time domain during which video is displayed. They do not permit viewing video clips sequentially in the form of a presentation. They do not have an alarm feature which can designate the time to perform a system action.


OBJECTS OF THE INVENTION

It is therefore an object of the present invention to overcome the above-mentioned shortcomings of the background art.
It is another object of the present invention to provide an interactive digital video system which can accommodate an unlimited quantity of video information.
It is yet another object of the present invention to provide a digital video system relating digitized video information to digitized audio information.
It is an additional object of the present invention to provide a digital video system relating digitized video information to additional information such as a transcript, annotations, scanned documents or exhibits, or waveforms from an oscilloscope.
It is still an additional object of the present invention to provide a digital video system which permits ease of subtitling.
It is a further object of the present invention to provide a digital video system which permits multiple subtitles, for example in different languages.
It is still another object of the present invention to provide a digital system which permits analysis of video and audio information.
It is yet a further object of the present invention to provide a digital video system which permits searches of video information for an event or word or phrase.
It is an additional object of the present invention to permit viewing video information and a textual transcript of audio information simultaneously.
It is yet an additional object of the present invention to provide a digital video system that permits searches of a transcript of the audio information.
It is one more object of the present invention to provide a digital video system that provides multiple transcripts.

It is still a further object of the present invention to provide a digital video system that permits real time coding of video.
It is an object of the present invention to provide a digital video system which permits changing the time domain during which video is played from slow motion to real time to fast motion.
It is another object of the present invention to provide a digital video system which has an alarm feature.
It is yet another object of the present invention to provide a digital video system which has a presentation mode.
In order to achieve these and other objects of the invention, there is provided a digital video system comprising coding and control means, adapted to receive digital reference video information, for coding the digital reference video information to generate coded data; and coded data storing means for storing the coded data from the coding and control means.

BRIEF DESCRIPTION OF THE DRAWINGS
The present invention may be better understood, and further advantages and uses thereof more readily apparent, when considered in view of the following detailed description of exemplary embodiments, taken with the accompanying drawings in which:
FIG. 1A is a functional block diagram of a preferred embodiment of the present invention;
FIG. 1B is a functional block diagram of the coding and control means shown in FIG. 1A;
FIG. 1C is a chart showing the structure of the coded data store of FIG. 1A for indexing data;
FIG. 1D is a software flowchart of the preferred embodiment of the present invention;
FIG. 1E is a map of time reference information;
FIG. 2A is a drawing of the main button bar of the present invention;
FIG. 2B is a diagram of the manager button bar of the present invention;
FIG. 2C is a diagram of the application tool bar of the present invention;
FIG. 3 is a diagram of the user list window of the user module of the present invention;
FIG. 4 is a diagram of the user detail window of the user module of the present invention;
FIG. 5 is a table showing the coding and transcription rights of the user detail window of the user module of the present invention;
FIG. 6 is a table of the system management rights of the user detail window of the user module of the present invention;
FIG. 7 is a diagram of the module sub-menu of the study module of the present invention;
FIG. 8 is a diagram of the study list window of the study module of the present invention;
FIGS. 9A and 9B are diagrams of the study detail window of the study module of the present invention;
FIG. 10A is a diagram of the study outline of the study detail window of the study module of the present invention before dragging a characteristic;
FIG. 10B is a diagram of the study outline of the study detail window of the study module of the present invention after dragging a characteristic;
FIG. 11 is a diagram of the select an event/sampling method choice menu for creating a new event type and opening the event type detail window of the present invention;
FIG. 12 is a diagram illustrating creating a new pass in the study outline of the study detail window of the study module of the present invention;
FIG. 13 is a diagram of the event type detail window of the study module of the present invention;
FIG. 14 is a diagram of the characteristic detail window of the study module of the present invention;
FIG. 15 is a diagram of the unit selection window of the study module of the present invention;
FIG. 16 is a diagram of the use units from other study window of the study module of the present invention;
FIG. 17 is a diagram of the unit list window of the unit module of the present invention;
FIG. 18 is a diagram of the unit detail window of the unit module of the present invention;
FIG. 19 is a table of the palettes which may be opened over the video window of the present invention;
FIG. 20 is a diagram of the video window of the present invention;
FIG. 21A is a diagram of the title area of the video window of the present invention;
FIG. 21B is a diagram of the video area of the video window of the present invention;
FIG. 21C is a diagram of the mark area of the video window of the present invention;
FIG. 21D is a diagram of the instance area of the video window of the present invention;
FIG. 22 is a diagram of the mark area of the video window of the present invention;
FIG. 23 is a diagram of the instance area of the video window of the present invention;
FIG. 24 is a diagram of the List Area of the video window;
FIG. 25 is a diagram of the List Area with two transcripts displayed;
FIG. 26 is a diagram of the Select an Outline window;
FIG. 27 is a diagram of the outline description window;
FIG. 28 is a diagram of the outline palette;
FIG. 29 is a diagram of the outline item window;
FIG. 30 is a diagram of the sample definition window;
FIG. 31 is a diagram of the sample palette;
FIG. 32 is a diagram of the sample information window;
FIG. 33 is a diagram of the unit analysis window;
FIG. 34 is a diagram of the define unit variable window;
FIG. 35 is a diagram of the define event variable window;
FIG. 36 is a diagram of the instance analysis window;
FIG. 37 is a diagram of the define analysis variable window;
FIG. 38 is a diagram of the search window contents common for text and event instance searches;
FIG. 39 is a diagram of the event instance search window; and
FIG. 40 is a diagram of the text search window.


DETAILED DESCRIPTION OF THE INVENTION

With reference to FIG. 1A, there is shown a digital video system in accordance with the preferred embodiment of the present invention including coding and control means 1 for coding digital reference video information and generating coded data, and coded data store 2 for storing the coded data from the coding and control means. The coding and control means 1 is adapted to receive digital reference video information from video reference source 3. The coding and control means 1 is connected via a databus 5 to the coded data store 2. The coding and control means 1 includes a general multipurpose computer which operates in accordance with an operations program and an applications program. Also illustrated in FIG. 1A is an output 6 which may be a display connected to an input/output interface.
The video reference source 3 may be a video cassette recorder such as a SONY model EV-9850. The coding and control means 1 may be an Apple Macintosh 8500/132 computer system. The coded data store 2 may be a hard disk such as a Quantum XP32150 and a CD-ROM drive such as a SONY CPU 75.5-25.
The output 6 may be a display monitor such as an Apple Multiple Scan 17 M2A94.
As illustrated in FIG. 1A, video information from a video reference source 3 may be digitized by digital encoder 9 and compressed by compressor 10.
The digital video information may be stored in digital storage means 11.
Alternatively, if the video information is already digitized, it may be directly stored in digital storage means 11. Digital video information from digital storage means 11 may be decoded and decompressed by decode/decompression means 12 and input to the coding and control means 1. The video reference source 3 may be an analog video tape, a camera, or a video broadcast. The coding and control means 1 may generate coded data automatically, or by interactive operation with a user, by interactive operation with a user in real time, or semi-automatically. For semi-automatic control, the user inputs parameters.
When the only source of information is video information, the coding and control means performs the function of indexing only. Indexing is the process through which derivative information is added to the reference video information or stored separately. This derivative information provides the ability to encode instances of events and/or conduct searches based on event criteria.

Terms

"Reference" information is video or audio information such as a video tape and its corresponding audio sound track.
"Derivative" information is information generated during the coding process such as indices of events in the video, attributes, characteristics, choices, selected choices and time reference values associated with the above. Derivative information also includes linking data generated during the coding process which includes time reference values, and unit and segment designations.
"Additional"information is information that is input to the video system in addition to reference information. It includes digital or analog information such as a transcript of audio reference information, notes, annotations, a static picture, graphics, a document such as an exhibit, or input from an oscilloscope.
The coding and control means 1 may be used interactively by a user to mark the start point of a video clip and a time rer~lence value r~ ,scnting a mark in point is generated as coded data and stored in the coded data store 2. Further, the user may optionally interactively mark the end point of a video clip and a time reference value ,~ scnting the mark out point is generated a~ coded data and stored in the coded data store 2. The user may interactively mark an event type in one pass through the digital reference video information. The user may mark plural passesthrough the reference video information to mark plural event types. The mark in and mark out points are stored in indices for event types.
The coded data that is added may be codes of data that are transpar.,.,l to a standard player of video but which are capable of interpretation by an modified SUBSTITUTE SHEET (RULE 26) CA 02260077 l999-Ol-ll player. The coded data may be a time reference value indicating the unit of digital reference video information. Additionally the coded data may be a time referencevalue indicating the segment within a unit of digital reference video information.
Thus, unlimited quantities of digital reference video information may be identified and ~ccessed with the added codes. There may be more than one source of refer~ ce video information in the invention.
Also illustrated in FIG. lA is an audio reference source 4 which is optional. The digital system of the present invention may operate with simply a source of video reference information 3. Optionally, however! a source of audio reference information 4, a source of digital additional information XD 13 or a source of analog additional information XA 14 may be added. There may be plural sources 4 of audio reference information, plural sources of digital additional information 13 or plural sources of additional analog information 14. Any combination of sources of information may be included.
When there is a source of audio reference information 4, the audio reference information is input to digital storage means 1 l. If the audio lerelellce information from source 4 is already digital, it may be directly input and stored in digital storage means ll. Alternatively, if the audio reference information fromsource 4 is analog, the information may be di~iti~e~ and compressed by digital encoder 7 and co",pression means 8 before being stored in digital storage means 11.
The digitized audio reference information is output from digital storage means 11 to coding and control means l via decode/decol,.pression meanC 12. The compression and decompression means 8 and 12 are optional. The audio reference sources 4 maybe separate tracks of a stereo recording. Each track is considered a se~ le source of audio reference information.
The video reference source 3 and the audio reference source 4 may be a video cassette recorder such as SONY EVO-9850. The digital video encoder 9 and compressor l0 may be a MPEG-l Encoder sucn as the Future Tel Prime View II.
The digital audio encoder 7 and compressor 8 may be a sound encoder such as the SU~ JTE SHEET (RULE 26) WO 981'~2827 PCT/US97112061 Sound Blaster 16. A PC-compatible computer system such as a Gateway P5-133 stores the data to a digital storage means 11 such as a compact disc recording system like a Yamaha CDR00 or a hard disk like a Se~g~te ST72430N.
The coding and control means 1 codes the rere~ ce video and audio information to generate coded data. Whenever there is more than one source of information such as an audio reference source 4, a source of additional digital information 13 or a source of additional analog information 14, the coding and control means 1 pelrol---s a linking function. T.inking is the process by which information from dirrelent sources are synchronized or correlated with each other.
This is accomplished through the use of time reference data. T inking provides the ability to play and view video, audio and additional information in a synchronized manner. The linking data permits instant random access of information. The coding and control means 1 performs the linking function in addition to the indexing function discussed above. Linking and indexing are referred to as "coding". When there ismore than one source of information the coding and control means 1 pelrolnls linking and/or indexing. The linking data which comprises time reference values is stored as coded data in coded data store 2. Additionally, the indices of data that is added by the process of coding is stored in coded data store 2.
In addition to audio reference and video reference information, the digital video system may include a source of additional information which may be analog or digital. In the event that the additional information is analog, the additional information from source 14 may be digitized by digital encoder 15. The additional information from source 13 or 14 may be the transcript of the audio reference information, notes or annotations regarding the audio or video reference information, a static picture, graphics, or a document such as an exhibit for a videotaped deposition with or without comments. The source of the additional information may be a scanner or stored digital information or a transcript of a deposition being produced in real time by a stenographer. The annotations or notes may be produced in real time also. The coding and control means codes the reference video information, reference audio information, and additional analog or digital information to generate coded data which includes linking data and indexing data.
During the coding process, when indexing is performed interactively by a user, the coded data which is generated is attribute data. The attribute data may be an event type. This creates a first table which is an index of event types. For example, event types may be "Questions," "Pause Sounds," or "Writing on Board" for a study of a video of a teacher's teaching methods. These are events which take place in the video. The attribute data may regard a characteristic associated with an event type. This creates an index of characteristics. In the example mentioned, characteristics for the event type of "Questions" may be "administrative questions,"
"questions regarding discipline," or "content of questions." This creates a second table which is an index of characteristics associated with each event type. The attribute data may include a plurality of choices for a characteristic. In the above example, choices for the characteristic of '~dmini~trative questions"ma~ include '~lmini~trative questions regarding attendance,"'~rlmini~trative questions reg~rding grades,"or ~(lmini.~trative questions regarding homework." This creates a third table which is an index of choices of a characteristic. A fourth table designates a selected choice of a plurality of possible choices. Thus for example, the selection may be '~(lmini~trative questions regarding grades." A fifth table is created which includes time reference values associated with each in~t~nce of the event type. So for example, an index is created of time reference values associated with each time a question is asked for the event type '~uestions". During the coding process, the user interactively marks the mark in point of the video reference information that design~tes each instance of a question being asked. Additionally, the user may optionally mark the mark out point when the question is finished being asked.
The digital video system of the invention also permits automatic or semi-automatic coding and control. For example, the coding and control means 1 may create an index of the time reference values corresponding to each time the video scene changes. The coding and control means 1 may generate a time reference value
The indexing and control means 1 includes the ability to search for instances of an event type. The coding and control means 1 may search for instances of one event type occurring within a time interval Y of instances of a second event type. Thus, the system can determine each instance when one event occurred within a time interval Y of a second event.
The coding and control means 1 includes an alarm feature. An alarm may be set at each instance of an event type. When the alarm occurs, the coding and control means 1 controls a system action. Thus, for example, each time a question is asked in the video, the system may position the video and play. Other system actions such as stopping the video, highlighting text of a transcript or subtitling may occur.
The coded data stored 2 may be a relational database, an object data~ase or a hierarchical database.
The coding and control means 1 performs the linking function when there is more than one source of information. T inking data is stored to relate digital video and digital audio information. T inkinp data may also link digital video or digital audio information to additional information from sources 13 and 14. T.inkin~
SUBSTITUTE SHEET (RULE 26) CA 02260077 l999-01-ll data includes time reference values. Correlation and synchronization may occur automatically, semi-automatically or interactively. Synchronization is the addition of time reference information to data which has no time reference. Correlation is the translation or transformation of information with one time bas, to information with another time base to make sure that they occur at the same time.
The digital system of the present invention operates on time lefe~ ce values that are norm~li7erl unitless values. During synchronization, time leference values are added to information that includes no time reference such as a document which is an exhibit for a videotaped deposition. If both sources of information include time reference information, the correlation process transforms one or both to the time reference norm~li7p~l unitless values employed by the system. One or both sources of information may be transformed or points may be chosen that are synched together. The time reference information of one source can be transformed to a different time scale by a transformation function. The transformation function may be linear, non-linear, continuous, or not continuous. Additionally, the transformation function may be a simple offset. The transformation function may disregard blocks of video between time reference values, for skipping advertising commercials, for example.
Time codes with hour, minute, second and frame designations are frequently used in the film industry. The coding and control means 1 correlates these ~le~ign~tions to the norm~li7ed unitless time reference values employed by the system.
Likewise, the coding and control means 1 may transform a time scale to the time code designation with hour, minute, second and frame designations. The coding and control means 1 may correlate two sources of information by simply checking the drift over a time interval and selecting points to synch the two information sources together. The coding function of the digital system of the pre~ent invention is not just an editing function. Information is added. Indices are create~. Further, a database of linking data is created. The original reference data is not necessarily modified.
New data is created though the system can be used for editin~ The coded data store SUBSTITUTE SHEET (RULE 26) WO 98,~2&27 PCTr~S97/12061 may be in any format including edit decision list (EDL) which is the industry standard, or any other binary form. The coded data store 2 stores the data base indices which are created, link-ng data, and data from the additional sources 13 and 14 which may include static pictures, graphics, documents such as deposition exhibits, and text which may include transcripts, translations, annotations, or close captioned data. Subtitles are stored as a transcript. There may be multiple transcripts or translations or annotations or documents. This permits multiple subtitles.
FIG. 1B illustrates the coding and control means 1 of FIG. 1A. The coding and control means includes controller 16. Controller 16 is connected to derivative data coding means 17 and correlation and synch means 18. Controller 16 is also connected to the coded data store 2 and to the output 6. Digital information from the digital storage means 11 is input to the derivative data coding means 17. If information from one source only is input to the derivative data coding means 17, only the indexing function is performed. If information from two sources is input to the derivative data coding means 17, indexing and linking are performed. The coding and control means 1 may further include correlation and synch means 18 for receiving additional data XD and XA. The correlation and synch means 18 correlates data with a time reference to the video information from the digital storage means 11 and synchronizes data without a time reference base to the digital video information from the digital storage means 11. Control loop 19 illustrates the control operation of the controller 16. The user may be part of control loop 19 in interactive or semi-automatic operation. Control loop 20 illustrates the control function of controller 16 over correlation and synch means 18. The user may be a part of control loop 20 in interactive and semi-automatic operation.
For interactive or semi-automatic operation, the control loops 19 and 20 also include input/output interface devices which may include a keyboard, mouse, stylus, tablet, touchscreen, scanner or printer.

FIG. 1C is a chart showing the structure of the coded data store 2 of FIG. 1A for indexing data. FIG. 1D is a software flowchart. The following define the indices of the coded data store 2.

DEFINITIONS

Characteristics Characteristics are variables which are applicable to a particular event type. An example would be Event Type "Teacher Question" and a characteristic of the question might be "Difficulty Level."
CharChoices CharChoices contains valid values of the parent Characteristics variable. For example, in the example of the Characteristic "Difficulty Level" the CharChoices might be "High," "Medium" and "Low." CharChoices serves as a data validation tool to confine user data entry to a known input that is statistically analyzable.
Event Types Stores model information of the event code such as whether the code can have an in and out point. Serves as a parent to the characteristic table which includes possible variables to characterize the event type.
Instances Contains instances of particular event types with time reference information.
InstCharChoice Stores actual value attributes to a characteristic of a particular event instance. For example, one instance of the teacher question might have a value in the characteristic "Difficulty Level" of "Medium."
OutlineHdng Stores the major headings for a particular outline.
OutlineSubHdng Stores actual instances that are contained in an outline. These instances were originally coded and stored in the instance table, but when they are copied to an outline are completely independent of the original instance.
Pass Filters Stores filter records which are created by the sampling process.
These records are used to screen areas for coding.
Samples Stores samples for the purposes of further characterization.
These instances are either a random sample of previously coded instances or computer generated time slices created using sampling methodologies.
Segments Corresponds to physical media where the video is stored. This table is a "many" to the parent Units table.
SeqNums Stores Sequence numbers for all tables.
Sessions Sessions keeps track of coding for each study down to the pass and unit level. Therefore, a user may go back to his/her previous work and resume from where they left off.
Studies Parent table to all information used to code, view and analyze units. The following tables are children: Study_Units, Study_Events, Study_Pass.
Studies_Pass Stores information for a particular pass in the study such as pointers to filters and locked status for sampling.
StudyUnits Contains references to units that are attached to a particular study. Since there may be multiple units for each study and there may be multiple studies that utilize a particular unit, this table functions as a join table in the many-to-many relationship.
Study_Event This table stores particular information relevant to the use of a particular event type in a particular study and a particular pass.
Since there may be multiple Event Types for each study and there may be multiple studies that utilize a particular Event Type, this table functions as a join table in the many-to-many relationship.
Transcribe Stores textual transcript and notes and time reference values for each utterance which correspond to a Unit.
Units Parent table of videos viewable.
Users Contains all valid users of the system.
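By way of illustration only, the table definitions above suggest a relational layout along the following lines. The SQLite schema and column names below are assumptions made for the example; as noted elsewhere in the description, the coded data store may equally be an object database or a hierarchical database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Units          (unit_id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE Segments       (segment_id INTEGER PRIMARY KEY,
                             unit_id INTEGER REFERENCES Units, medium TEXT);
CREATE TABLE EventTypes     (event_type_id INTEGER PRIMARY KEY, name TEXT,
                             has_out_point INTEGER);
CREATE TABLE Characteristics(char_id INTEGER PRIMARY KEY,
                             event_type_id INTEGER REFERENCES EventTypes, name TEXT);
CREATE TABLE CharChoices    (choice_id INTEGER PRIMARY KEY,
                             char_id INTEGER REFERENCES Characteristics, value TEXT);
CREATE TABLE Instances      (instance_id INTEGER PRIMARY KEY,
                             event_type_id INTEGER REFERENCES EventTypes,
                             unit_id INTEGER REFERENCES Units,
                             mark_in REAL, mark_out REAL);
CREATE TABLE InstCharChoice (instance_id INTEGER REFERENCES Instances,
                             char_id INTEGER REFERENCES Characteristics,
                             choice_id INTEGER REFERENCES CharChoices);
CREATE TABLE Transcribe     (unit_id INTEGER REFERENCES Units,
                             time_ref REAL, utterance TEXT);
""")

# One "Teacher Question" instance whose "Difficulty Level" characteristic is "Medium":
conn.execute("INSERT INTO Units VALUES (1, 'Lesson 1')")
conn.execute("INSERT INTO EventTypes VALUES (1, 'Teacher Question', 1)")
conn.execute("INSERT INTO Characteristics VALUES (1, 1, 'Difficulty Level')")
conn.execute("INSERT INTO CharChoices VALUES (2, 1, 'Medium')")
conn.execute("INSERT INTO Instances VALUES (1, 1, 1, 12.4, 19.8)")
conn.execute("INSERT INTO InstCharChoice VALUES (1, 1, 2)")
```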
The coded data store 2 stores data representing time reference values relating the digital audio information to the digital video information and vice versa.
Accordingly, for any point in the video information, the corresponding audio information may be instantly and randomly accessed with no time delay.
Additionally, for any point in the audio information, the corresponding video frame information may be instantly and randomly accessed with no time delay. The coded data store 2 stores attribute data. The attribute data is stored in an index and is derivative data that is added during the coding process. The attribute data may be an event type, such as any action shown in the video such as a person in the video raising his hand or standing up or making a field goal. Attribute data may be time reference data indicating instances of an event type. The attribute data may also include a characteristic associated with an event, such as directness or acting shy.
The attribute data may also include a plurality of choices of characteristics such as being succinct or being vague. It may be the chosen choice of plural possible choices. The coded data store 2 stores time reference data corresponding to the attribute data.
Additionally in the invention, the coded data store 2 stores data representing the text of a transcript of the digitized audio information. For use in legal services such as recording and analyzing depositions, a video deposition can be digitized. The video information originates at reference source 3 and the audio information originates at reference source 4. The video and audio information may be digitized and/or compressed via digital encoders 7 and 9 and compressors 8 and 10 and stored in a digital storage means 11. Additionally, a transcript of the deposition may be stored in coded data store 2. More than one transcript, foreign language translations, for example, may be stored. Coding and control means 1 accesses video information from digital storage means 11, audio information from digital storage means 11 and the transcript information from coded data store 2 and simultaneously displays the video and the text of the transcript on output display 6. Additionally, the audio is played. The video is displayed in one area of a Video Window called the video area and the text of the transcript is displayed in a transcript area. More than one transcript may be displayed.
Notes and annotations and static documents in the form of text or pictures/graphics may be stored in the coded data store 2 and may be simultaneously displayed in a second transcript area of the Video Window. The Video Window is illustrated in FIG. 20 and is described in detail later.
Additionally or alternatively, subtitles can be added to the video information and displayed on output display 6 in the same area as the video. When the digital video system is operated employing subtitles, the viewer can view the video information with subtitles and simultaneously watch the text of the transcript on output display 6.
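By way of illustration only, because each utterance in the transcript carries a time reference value, displaying the transcript line or subtitle that matches the current playback position is a simple lookup. The sketch below assumes utterances sorted by time and is not part of the disclosure.

```python
import bisect
from typing import List, Tuple

def current_utterance(transcript: List[Tuple[float, str]], playback_time: float) -> str:
    """Return the transcript text whose time reference value most recently started."""
    times = [t for t, _ in transcript]
    i = bisect.bisect_right(times, playback_time) - 1
    return transcript[i][1] if i >= 0 else ""

transcript = [(0.0, "Good morning, class."),
              (4.2, "Please open your books."),
              (9.8, "Who can answer the first question?")]
print(current_utterance(transcript, 10.5))   # subtitle shown at playback time 10.5
```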
The attribute data that is stored may be regarding video scene changes.
Every time the scene in the video changes, the time reference data of the scene change is stored in the coded data store 2. This may be performed interactively or automatically or semi-automatically. If there are a number of times that an event occurs, the time reference values associated with each occurrence of the event are stored in the coded data store 2. The present invention has a presentation ability where a presentation may be displayed on output display 6. The video associated with each stored time reference value is displayed in sequence to create a presentation. For example, in an application dealing with legal services and videotaped depositions, every time a witness squints his eyes may be kept track of by storing a time reference value associated with each occurrence of the event during the coding process. The time reference values represent the times at which the pertinent video portion starts and finishes. Then a presentation may be made of each occurrence of the event one after the next.
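By way of illustration only, a presentation is then the stored (start, finish) time reference pairs replayed in order; the player interface named below is a placeholder supplied by the playback layer, not part of the disclosure.

```python
from typing import Iterable, Tuple

def play_presentation(player, occurrences: Iterable[Tuple[float, float]]) -> None:
    """Play each stored occurrence (start, finish) one after the next."""
    for start, finish in occurrences:
        player.seek(start)           # position the video at the mark in point
        player.play_until(finish)    # play through to the mark out point

# e.g. every stored occurrence of the witness squinting, in sequence:
squint_clips = [(102.0, 104.5), (311.2, 312.0), (958.7, 961.3)]
# play_presentation(player, squint_clips)   # 'player' supplied by the playback layer
```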
The digital system of the invention includes search abilities where a word or a phrase may be searched in the text of the transcript of the digitized audio information. A search of notes, annotations or a digitized document for a word or phrase may also be performed. Additionally, the present invention includes the ability to perform statistical analysis on the attribute data. Random sampling of instances of an event type can be performed. Coding and control means 1 accesses coded data store 2 and analyzes the data in accordance with standard statistical analysis. The invention includes a method of analyzing video information including storing digital video information, storing digital audio information, storing coded data linking the digital video and digital audio information, storing coded data regarding events in indices, and computing statistical quantities based on the coded data. The present invention results in a video analysis file for a multimedia spreadsheet containing time-dependent information linked to video information. The video information and textual (transcript, annotations or digitized documents) information can be searched. The video information may be stored on a CD-ROM disk employing the MPEG-1 video standard. Other video standards may be employed. Additionally, other storage media may be employed. The coded data store 2 and digital storage means 11 illustrated in FIG. 1A may actually be parts of the same memory.
Analog videotapes may be converted into digital video format by a standard digitized video transfer service that is fast and inexpensive, and deals with high volume at a low cost. The digital video service digitizes the video, compresses it and synchronizes it with the audio. Alternatively, the system may digitize the video and audio information from reference sources 3 and 4. The source of information may be a commercial, broadcast or analog video tape.
The present invention permits video analysis so that the user may view, index, link, organize, mark, annotate and analyze video information. This is referred to as "coding." On screen buttons and controls permit the marking, coding and annotation of the video. A transcription module permits synchronized subtitles.
Multiple subtitles are possible, which is of importance to the foreign market for films which may require subtitles for different languages. The present invention has note-taking abilities. Searches may be performed for the video information, notes, the transcript of the audio information, coded annotations or digitized documents. A presentation feature permits the selection and organization of video segments into an outline to present them sequentially on a display or to record them to a VCR or a computer file.
Complex coding and annotations are performed in several passes such that multiple users may code and annotate the digitized information. One user may make several passes through the video for coding, marking and annotating or several users may each make a pass coding, marking and annotating for separate reasons.
Information may be stored and displayed in a spreadsheet format and/or transferred to a statistical analysis program, and/or to a graphics program. Types of statistical analyses which may be conducted, for example, are random sampling, sequential analysis, cluster analysis and linear regression. Standard algorithms for statistical analysis are well known. Additionally, the information may be input to a project tracking program or standard reports may be prepared. Spreadsheets and graphs may be displayed and printed.
The present invention has use in video analysis for research and high end analysis, the legal field and the sports market. The present invention would be useful in research in fields of behavior, education, psychology, science, product marketing, market research and focus groups, and the medical fields. For example, teaching practices may be researched. Verbal utterances are transcribed, multiple analysts mark and code the events and annotate the video information for verbal and nonverbal events, lesson content and teacher behavior. The transcribed utterances, marks, codes, and annotations are linked to the video and stored. The information may be consolidated, organized, presented or input for statistical analysis and interpretation.

Other fields of research where the invention has application are industrial process improvement, quality control, human factors analysis, software usability testing, industrial engineering, and human/computer interactions evaluations.
For example, videos of operators at a computer system can be analyzed to determine if the computer system and software is user friendly. The present invention would be useful in legal services where videotaped depositions may be annotated and analyzed.
Deposition exhibits may be stored in the coded data store with or without notes on the documents. Additionally, there is an application for the present invention in the sports market where sports videos may be annotated and coded for analysis by coaches and athletes. The present invention includes applications and operations software, firmware, and functional hardware modules such as a User Module, a Menu Manager, a Unit Module, a Study Module, a Video Window, a Transcribe Mode, a List Manager, an Outline Presentation Feature, a Sampling Feature, an Analysis Module and a Search Module. Reports may be created and output.
A unit, as used herein, is composed of a video and transcript data. A unit may span several tapes, CD's or disks. These media are referred to as segments, and a unit has at least one segment. The present invention may handle multiple segments per unit. This permits the present invention to accommodate an unlimited quantity of video information. A unit may include plural transcripts stored in memory. A transcript is the text of speech in the video, foreign language translation, subtitles or description or comments about the video.
A study includes a collection of units. A study is defined to specify coding rules for the units associated with it, for example, what event types and characteristics are to be recorded. A unit may be associated with one or more studies.
When an analyst starts coding for a study, the basic unit of work is called a session. A session is a specific coding pass for a specific unit by one user.
The number of sessions that are created for a study is equal to the number of units included in the study multiplied by the number of coding passes defined for the study.

A session must be open in order to go into code mode on the coding window. If no session is open, the user is prompted to open one.

THE USER MODULE

The User Module includes all windows, controls, and areas that are used to define users, control security, logon, and do primary navigation through the interactive digital system. The User Module is briefly mentioned here for the purpose of describing logon and is explained in more detail later.
The interactive video analysis program of the present invention requires a logon before granting access to the program functions and data. The purpose of the logon process is not only to secure the database content, but also to identify the user, assign access privileges, and track information such as how long a user has been logged on. After a successful logon, the user is assigned access privileges and presented with the program's main button bar which contains icons that allow entry to various parts of the program. The number and type of icons that appear on the button bar for a given user are dependent on the privileges granted to him in his user record.
The main button bar, or alternatively the application tool bar, is part of the Menu Manager.

THE MENU MANAGER

The main button bar is illustrated in FIG. 2A. The manage button bar of FIG. 2B is accessed from the main button bar of FIG. 2A and is an extension of the main button bar. Access to commonly accessed modules is provided by the main and manage button bars. In another preferred embodiment, the application tool bar of FIG. 2C replaces the main button bar and manage button bar of FIGS. 2A and 2B.
With respect to FIG. 2C, icon 21 represents "view," icon 22 represents "code," icon 23 represents "transcribe," icon 24 represents "study," icon 25 represents "unit" for defining new units, icon 26 represents "search," and icon 27 represents "analysis."
Area 28 displays the units, for example, which unit is current, or permits selection of previously defined units. Area 29 represents the "outline" feature and area 30 is directed to "sessions" selection. The application wide tool bar provides access to the most commonly accessed modules including Video-View Mode, Video-Code Mode, Video-Transcribe Mode, Search Module, Unit Module, Analysis Module, Help Module, Session Selection, Unit Selection, and Outline Selection.

Video-View Mode The Video-View Mode opens the Video Window, making the view mode the active module. If the user has never accessed a unit record, the user will be presented with a unit selection dialog.

Video-Code Mode The Video-Code Mode opens the Video Window, making the code mode the active module. If the user has never accessed a session, the user will be presented with a session selection dialog.

Video-Transcribe Mode The Video-Transcribe Mode opens the Video Window, making the transcribe mode the active module. When the transcribe mode is activated the transcription looping palette will be displayed automatically.

Search Module The Search Module opens the search window, making it the current module.
Unit Module The Unit Module opens the Unit Module, making it the current module.


Study Module The Study Module opens the Study Module, making it the current window.

Analysis Module The Analysis Module opens the Analysis Module, making it the current window.

Help Module The Help Module opens the interactive video analysis help system.

Session Selection Popup The session selection popup provides the ability to change the current session when in Video-Code Mode.

Unit Selection Popup The unit selection popup provides the ability to change the current unit when in Video-View Mode.

Outline Selection Popup The outline selection popup provides the ability to change the current outline when in Video-Transcribe Mode.

THE USER MODULE

The User Module is now described in more detail. Users are added via an application preferences dialog.

User List Window FIG. 3 illustrates the user list window. The user list window lists the users.

User Detail Window The user detail window of the User Module is illustrated in FIG. 4. It is the primary window that contains all data needed to define a user, including information about the user and security privileges. This window is presented when adding a new user, or when editing an existing user. The fields and controls in the window include the window name "User Detail," the first name, the last name, the user code, phone number, e-mail address, department, custom fields, whether logged on now, last logon date, number of logons since, logged hours reset count, comments, logon id, set password, and login enabled. The user detail window includes coding and transcription rights area 31. This is a field of four check boxes that grant privileges to code video (create instances) or edit the transcription text as shown in the table of FIG. 5. The user detail window also includes system management rights area 32. This area is a field of five check boxes that grant privileges to manage setup of the study and various other resources as shown in the table of FIG. 6. The user detail window further includes the make-same-as button, navigation controls, a print user detail report button and a cancel/save user button.

THE STUDY MODULE

The collection of windows and procedures that together allow definition of studies, event types, characteristics, choices, units and samples comprise the "Study Module". The Study Module is reached from the main button bar or the applications tool bar that is presented when the interactive video analysis program is initiated. A study can be thought of as a plan for marking events that are seen in the video or in the transcription text of the audio information. A study contains one or more event types, which are labels for the events that are to be marked. Each event type may also have one or more characteristics, which are values recorded about the event. When an event is marked in the video or transcript text it is formally called an "event instance". When the project is first initialized, one study is created. A default study is used when the user does not choose to create a predefined coding plan (study), but rather wishes to use the program in a mode where event types can be assigned at will.
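The study, event type, characteristic and choice hierarchy described above can be pictured as a set of simple records. The following is a minimal sketch for illustration only; the class and field names are assumptions and are not taken from the patent's actual implementation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Choice:
    value: str          # user-defined value, e.g. "1"
    name: str           # e.g. "Succinct" or "Vague"

@dataclass
class Characteristic:
    code: str           # e.g. "D1"
    name: str           # e.g. "Directness"
    choices: List[Choice] = field(default_factory=list)

@dataclass
class EventType:
    code: str
    name: str           # e.g. "Question Asked"
    characteristics: List[Characteristic] = field(default_factory=list)

@dataclass
class Study:
    name: str
    event_types: List[EventType] = field(default_factory=list)

An event instance would then refer back to one event type and record the values chosen for its characteristics, as discussed later for the Video Window.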

General Navigation

The Study Module may be accessed by selecting the study button from the application tool bar or by selecting study from the module submenu. When the module is first opened the user is presented with a standard find dialog whereby he can search for specific records which he wishes to work with. The find dialog screen is illustrated in FIG. 7.

Generic Actions

Double-clicking on a list item results in opening that item for edit. For example, double-clicking on a study in the studies list window, as illustrated in FIG. 8, results in opening a study for edit in the study detail window. There is a close box in the title bar or a save button whereby the record is saved and the user is returned to the previous window. The ok/cancel button has the action of returning to the original window.

Navigation Controls

General navigation controls such as First, Prev, Next and Last are included. The First control goes to the first record in the selection displayed in the list. The Prev button goes to the record immediately before the current record in the selection displayed in the list. The Next button goes to the record immediately after the current record in the selection displayed in the list. The Last button goes to the last record in the selection displayed in the list.

Constrained Studies

A study can be constrained to be a subset of another study. This means that the study can only contain units that were specified in the other study (either all the units, or a subset of the units). If additional units are added to the "parent study", they become available to the constrained study but are not automatically added.
Constraining a study to be a subset of another study also means that the event types for the parent study are available as event filters in the sample definition for the constrained study. As explained in detail below, a study is constrained when the "constrain unit selection to be a subset of the specified study" button is checked on the "use units from other study" window as illustrated in FIG. 16. The constraint cannot be added after any sessions have been created for the study. The constraint can be removed at any time as long as the constrained study does not include any event types from the parent study in its sample criteria.
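The two constraint rules above can be sketched in a couple of lines; the function names and data shapes below are illustrative assumptions, not code from the patent.

def units_allowed_for_constrained_study(parent_units, requested_units):
    """Keep only units that the parent study specifies.

    Units added to the parent later become available but are not added
    to the constrained study automatically.
    """
    allowed = set(parent_units)
    return [u for u in requested_units if u in allowed]

def can_remove_constraint(sample_event_types, parent_event_types):
    """The constraint may be dropped only if the sample criteria reference
    no event types from the parent study."""
    return not (set(sample_event_types) & set(parent_event_types))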

The Default Study

Every project contains a default study that is created when the project is first created. The default study allows entry into code mode of the Video Window shown in FIG. 20 if no formal studies have been defined. Event types and characteristics may be added to the default study at will from the Video Window. The default study is maintained from the video window and not from the Study Module, hence, it does not appear in the study listing window shown in FIG. 8. It does appear whenever studies are listed in all other modules. A session is always open for the default study which is called the default session. If no other studies have been created in the project, the default session is opened without prompting when the user goes into code mode on the study window.
Applied Rules

Units may be added to a study. A unit cannot be added to a study once it is locked. The purpose of the lock is to ensure that the set of units for a specific study does not change once a pass has been locked.
Studies may be deleted. A study cannot be deleted if it is constrained by another study or if the study itself is locked. A study should not be allowed to be constrained to another study that is not locked yet.

Studies List Window

The studies list window shown in FIG. 8 presents all the studies defined for the project. The window displays only the three fields: study name, description, and author. Double-clicking on a study record opens the study detail window for the selected study.

Study Detail Window

The study detail window is the primary window that contains all data needed to define a study. This window is presented when creating a new study or when editing an existing study. The study detail window is illustrated in FIGS. 9A and 9B.
Referring to FIG. 9A the study detail window includes a number of fields and controls. Field 41 is the window title. The name of this window is "Study Detail". Field 42 is the study name. In this field the name of the study may be entered. Field 43 is the author. This is a non-enterable area that is filled by the program using login data. Field 44 is the create date area which includes the date and time when the study was initially created. This is a non-enterable area that is filled by the program when the study record is created. Field 45 is the study description which is a scrollable enterable area for text to describe the study. Field 46 is the study outline which is a scrollable area that shows the event types, characteristics, and choices created for the study. Event types are shown in bold in FIG. 9A.
Characteristics are displayed in plain text under each event type. Choices are displayed in italics under each characteristic. Thus, as shown in FIG. 9A the event type is "Question Asked", the characteristic is "Directness" and the choices are "Succinct" and "Vague".
A line separates each pass; the pass separation line is designated 46d. If an event type or characteristic is double-clicked, that item is opened for edit in its appropriate detail window. If a choice value is double-clicked on, its parent characteristic is opened for edit in the characteristic detail window.
FIG. 9A illustrates a study detail window for a study for video analysis of teaching practices. In the field of education research, teaching practices may be analyzed by videotaping teachers interacting with students in the classroom.
Various event types such as asking questions or raising one's hand or answering a question are analyzed by marking the events in the video. As shown in FIG. 9A, event type 46a is displayed in bold with the event code and event type name (e.g., "Questions Asked"); the type of marking associated with the event type (for example, "Vi/T" means "mark Video In Point and text" for each instance); and the pass in which the event type is to be marked (e.g., "1"). Detailed descriptions of the meaning of each of these are given under the "event type detail window" which is shown in FIG. 13.
Allowable mark values are:

V = Video In and Out points are to be marked
Vi = Video In point is to be marked
E = Exhaustive segmentation
T = Text is to be marked

"V", "Vi", and "E" are mutually exclusive. "T" may be used by itself or combined with "V" or "Vi". For example, "Vi/T" means the Video In point and the text are to be marked for the event type. If no marking is turned on, then nothing is displayed (for example, see "Answer" in Pass 3 in the illustration).
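The display code shown next to each event type in the study outline could be derived from its marking settings roughly as follows. This is only an illustrative sketch under the combination rules just listed; the function name and argument names are assumptions.

from typing import Optional

def mark_code(video_marking: Optional[str], text_marking: bool) -> str:
    """Build the outline display code ("V", "Vi", "E", optionally with "/T").

    video_marking is one of "V", "Vi", "E" or None; these three are mutually
    exclusive, and "T" may stand alone or be combined with "V" or "Vi".
    """
    if video_marking not in (None, "V", "Vi", "E"):
        raise ValueError("unknown video marking: %r" % video_marking)
    if video_marking == "E" and text_marking:
        raise ValueError('"T" may only be combined with "V" or "Vi"')
    parts = []
    if video_marking:
        parts.append(video_marking)
    if text_marking:
        parts.append("T")
    return "/".join(parts)

# mark_code("Vi", True) -> "Vi/T"; mark_code(None, False) -> "" (nothing displayed)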
When an event type is double-clicked, action opens that event type in the event type detail window.
The characteristic label 46b as shown in FIG. 9A is displayed in plain text with the characteristic code (e.g., "D1"), name of the characteristic, and data entry type (e.g., "select one"). Characteristics are displayed immediately under the event type to which they are associated. When a characteristic is double-clicked, the action is to open that characteristic in the characteristic detail window as shown in FIG. 14.
The order in which the characteristics are displayed under the event type is also the order in which they are displayed on the Video Window. The user can change the order by clicking on a characteristic and dragging it to a point above or below another characteristic belonging to the same event type. Characteristics cannot be dragged from one event type to a different event type (for example: the user cannot drag characteristic "Directness" from event type "Question Asked" to event type "Answer"), but characteristics can be dragged from one event type to the same event type that belongs to a different pass through the video (for example: the user can drag characteristic "Effectiveness" from "Answer" in pass 3 to "Answer" in pass 2). When a characteristic is moved all associated choice values are moved with the characteristic and retain their same order. FIGS. 10A and 10B illustrate dragging a characteristic. FIG. 10A illustrates the before condition. The characteristic "Appropriateness" in pass 1 will be dragged to pass 2. FIG. 10B illustrates the after condition. The characteristic "Appropriateness" was dragged from pass 1 to pass 2. The action is to create a new appearance in the event type "Question Asked" in pass 2, with "Appropriateness" underneath it.
The choice value 46c illustrated in FIG. 9A is displayed in plain text with a user-defined value (e.g., "1") and choice name. Choices are displayed immediately under the characteristic to which they are associated. The user can change the order of choices by clicking on a choice value and dragging it above or below another choice value belonging to the same characteristic. Choice values cannot be dragged from one characteristic to another or between passes.
The pass separator line 46d shown in FIG. 9A separates the passes through the video being analyzed. If more than one pass has been created, a pass separator line is drawn between the event types of each pass. The pass separator line cannot be dragged or selected. Button 47 is the add event type button. The action of this button is to create a new event type and open the event type detail window shown in FIG. 13. The "select an event/sampling method" menu for creating a new event type and opening an event type detail window is illustrated in FIG. 11. Button 48 of the study detail window of FIG. 9A is the "remove from study" button. The action of this button is to remove the highlighted item from the study along with all indented items under it. For example, removing an event type also removes the associated characteristics and choice values directly under it. If the last event type is removed from a pass, the pass is automatically deleted and removed from the "passes and sampling" area 55 of the study detail window. Pass 1 may not be deleted.
The pass display area 49 displays the pass to which the highlighted event is assigned. It is also a control tool to select the pass. The pass display 49a is a non-enterable area which displays the pass of the currently highlighted event type. The pass selector area 49b is a control tool that works only when an event type is selected.
Clicking the up-arrow moves the selected event type to the next higher pass.
Similarly, clicking the down-arrow has the action of moving the selected event to the next lower pass. If the pass number is set to a value greater than any existing pass, the action is to create a new pass. Each pass must contain at least one event type.
The show characteristics checkbox 50, when checked, is for displaying all characteristics under the appropriate event type in the study outline area 46 and to enable the "show choices" checkbox. The show choices checkbox 51, when checked, displays all choice values under the appropriate characteristic in the study outline area 46.
The add pass button 52 has the action of creating a new pass. FIG. 12 illustrates a newly created pass represented by a separator line and a pass number. New event types will be added to the pass, and existing event types can be dragged to the pass.
The specified units area 53 of the study detail window of FIG. 9A has the action of presenting the unit selection window shown in FIG. 15. The specified units area 53 is a non-enterable text area to the right of the button which displays the number of units selected for the study. The button is disabled when the checkbox titled "Include all units in project" is checked.
Area 54 includes a unit constraint message. If a constraint is in effect that affects the selection of units, the text describing the constraint is displayed in this area. There are two possible values of the message: "Constrained to the subset of [study]" and "Constrained to include all units in the project". The second constraint is imposed when the checkbox "Include all units in project" 60 is chosen. Area 56 is the unit selection description. This area is a scrollable enterable area for text to describe the units selected for the study. Area 55 is the "passes and sampling" area.
This is a scrollable non-enterable area that displays an entry for each pass with the pass number and its sample mode. Area 57 includes navigation controls: First, Prev, Next and Last.
Button 58 is the print study button which is used to print the study detail report. Buttons 59 are the cancel/save study buttons. The save button saves all the study data and returns to the studies list window shown in FIG. 8. The cancel button returns to the studies list window of FIG. 8 without saving the changes to the study data. Checkbox 60 is the "Include all units in project" checkbox which has the action of setting behavior of the study so that all units in the project are automatically included in the study. Units may be added any time to the project, and they are automatically added to the study.

The Event Type Detail Window

The event type detail window is illustrated in FIG. 13. This window is for entry of all attributes for an event type. The window is reached through the study detail window of FIG. 9A when either an event type is double-clicked in the study outline 46 or when a new event is added employing button 47. A number of fields and controls of the event type detail window are described below.
The window title area 61 gives the name of the window which is "Event Type Detail". The event code area 62 is an area for the code that uniquely identifies this event type when analysis data is created or exported. The event name area 63 is the area for the name of the event type. The saved search area 64 is a non-enterable text area which appears only if this event type was created by the Search Module to mark instances retrieved from a search. The area provides information only. An event type created to be a saved search can have characteristics, but cannot have video marking or text marking turned on. No new instances can be coded for a saved search event type.
The coding instruction area 65 is a scrollable text area for entry of instructions on how to mark this event type. This text area is presented when help is selected on the Video Window. The event instance coding area 66 contains checkboxes for specifying the rules identified at areas 67, 68 and 69 for how event instances are to be coded.
Typically instances will be marked using video, text or both. This means that "video marking", "text marking" or both will be checked. Instances can be marked for this event type in all passes in which the event type occurs, unless checkbox 69 entitled "restrict instance coding to earlier pass only" is checked. In this case, new instances can only be marked in the first pass in which the event type appears in the coding plan. For example, the same event type may appear in pass 2 and in pass 3. If the event instance coding is "mark video" and checkbox 69 "restrict to the earliest pass only" is checked, new instances may be marked in pass 2 but not in pass 3. An example of where this would be done is when one pass is for instance hunting (such as pass 2) and another pass is reserved for just characterizing (pass 3).
The event instance coding requirement determines what will be saved in the database for each instance. If an event type is defined as "Video In" only, then any "Video Out" or text marking is ignored when instances are created.
The "mark video" checkbox 67 specifies whether instances are to be marked using Video In or Out Points. The checked condition means that new instances are to be marked using the video mark controls. Th~ unchecked condition means that no video is to be marked for this event type. Three choices are presented for how the video is to be marked for an event type. This governs the behavior of the mark controls on the Video Window when an instance of this event type is marked.The choices are:
(1) mark with In Point only: only the Video In Point is to be marked for this event type;
(2) mark with In and Out Points: both the Video In Point and Out Point are to be marked for this event type; and
(3) exhaustive segmentation: only the Video In Point is to be marked for this event type; the program automatically assigns an Out Point equal to the time code of the next In Point (i.e., every In Point is also the Out Point for the prior In Point).
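The automatic Out Point assignment of exhaustive segmentation amounts to pairing each In Point with the next one. A minimal sketch follows; time codes are treated as plain frame numbers and the handling of the final instance (closing it at the segment end) is an assumption.

def assign_exhaustive_out_points(in_points, segment_end):
    """Pair each In Point with an Out Point equal to the next In Point.

    The final instance runs to segment_end, since no later In Point exists
    to close it.
    """
    in_points = sorted(in_points)
    out_points = in_points[1:] + [segment_end]
    return list(zip(in_points, out_points))

# Example: In Points at frames 0, 120 and 300 in a 450-frame segment
# yield the instances (0, 120), (120, 300) and (300, 450).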
The text marking checkbox 68 specifies whether instances are to be marked using text. The checked condition means that new instances are to be marked using the text mark control. The unchecked condition means that no text is to be marked for this event type.
The "restrict instance coding to earlier pass only" checkbox 69 specifies whether instances can be marked in all passes in which the event type appears in the coding plan, or only in one pass. The checked condition means that event instances can only be marked in the first pass (first means the first sequential pass, not necessarily pass 1) in which the event type appears. If the event type appears in other SUBSTITUTE SH~ET (RULE ~6) CA 02260077 l999-01-ll WO 981'~827 PCT/US97/12061 passes in the coding plan, it behaves as if "mark video" and "mark text" are both unchecked. For example, event types in other passes can only be for entering characteristic values, not for marking new instances.
The characteristics outline area 70 is a scrollable area that shows all the characteristics and choices associated with an event type for all passes.
Characteristics are displayed in plain text. Choices are displayed under each characteristic in italics. If a characteristic in the characteristics outline area 70 is double-clicked, the item is opened for edit in the characteristic detail window illustrated in FIG. 14. If a choice value is double-clicked, its parent characteristic is opened for edit in the characteristic detail window. The order in which the characteristics are displayed in the outline is also the order in which they are displayed on the Video Window. The user can change the order by clicking on a characteristic and dragging it to a point above or below another characteristic within the same pass. When a characteristic is moved, all associated choices are moved within the characteristic and retain their same order. A characteristic can belong to only one pass.
The add characteristic button 71 has the action of creating a new characteristic and displaying the characteristic detail window illustrated in FIG. 14.
The delete characteristic/choice button 72 has the action of deleting what is selected in the characteristics outline and all indented items under it. For example, deleting a characteristic also deletes all of its associated choice values.
The print event type button 73 has the action of printing the event detail report. The cancel/save event buttons 74 include a save button which has the action of saving all the event type data and returning to the study detail window and a cancel button which has the action of returning to the study detail window without saving the changes in the event type data.

Characteristic Detail Window

The characteristic detail window as illustrated in FIG. 14 is for entry of all attributes for a characteristic. This window is reached either through the study detail window illustrated in FIG. 9A or the event type detail window illustrated in FIG. 13 when either a characteristic is double-clicked in the outline or when a new characteristic is added. The fields and controls of the characteristic detail window are described below. The window title area 81 gives the name of this window which is "Characteristic Detail". The characteristic code area 82 is an enterable area for the code that identifies this characteristic when analysis data is created or exported. The characteristic name area 83 is an enterable area for the name of the characteristic.
The coding instruction area 84 is a scrollable text area for entry of instructions on how to mark the characteristic. This text is available when help is selected on the Video Window.
The data entry type area 85 presents four options on how data is to be collected for this characteristic. This governs the behavior of the mark controls on the Video Window when values for this characteristic are recorded. The options are:
(1) Enter Numeric Value: the value to be entered must be a real number.
(2) Enter Text Value: the value can be any keyboard-entered data.
(3) Select One from Choice List: the choices in the choice list are presented, and only one may be chosen. In this case, each choice must have a choice value (enterable by the user).
(4) Select All Applicable Choices: the choices in the choice list are presented, and all that apply may be chosen. In this case, each choice has a choice value that is programmatically determined; the first choice value is 1, then 2, then 4, then 8, etc.
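Because the programmatically assigned values for a "select all applicable" characteristic are successive powers of two (1, 2, 4, 8, ...), a set of selected choices can be collapsed into a single integer. The sketch below illustrates that idea only; it is an assumption about how such values could be combined, not the program's stored format.

def choice_flag_values(num_choices):
    """Assign 1, 2, 4, 8, ... to the choices of a select-all characteristic."""
    return [1 << i for i in range(num_choices)]

def encode_selection(selected_indices):
    """Combine the selected choices (by index) into one integer."""
    return sum(1 << i for i in selected_indices)

def decode_selection(mask, num_choices):
    """Recover the selected choice indices from a stored value."""
    return [i for i in range(num_choices) if mask & (1 << i)]

# Selecting the first and third of four choices stores 1 + 4 = 5.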

The data entry type cannot be changed once a session has been created for the pass in which this characteristic appears, nor can choices be added, changed, or deleted.
The choice list area 86 is a scrollable area that shows the choices associated with this characteristic. Choices can be added and deleted using add and delete choice buttons 87 and 88. Drag action allows the choices to be arranged in any order.
The add choice button 87 has the action of creating a new line in the characteristic outline for entry of a new choice. The new line is enterable. The delete choice button 88 has the action of deleting the selected choice after confirmation from the user. The print characteristic button 89 has the action of printing the characteristic detail report. The cancel/save characteristic buttons 90 can return to the study detail window or the event type detail window without saving the changes for cancel or with saving the changes for save.

Unit Selection Window

The unit selection window is illustrated in FIG. 15. The unit selection window allows specification of the units to be included in the study. The window is presented when the specified unit button is clicked on the study detail window illustrated in FIG. 9A. When the window is first opened, the "units selected for study" area 102 is filled with the units that have already been selected for the study.
No units are displayed in the "unit list" area 97 unless the study is constrained to be a subset of another study. In this case this area is filled with all the units in the parent study.
The fields and controls of the unit selection window are described below:
The window title area 91 gives the name of this window which is "Unit Selection for Study:" followed by the name of the current study such as "Math Lessons". The "unit selection description" area 92 is a scrollable text area for a description of this selection of units. This is the same text as appears in the "unit selection description" area on the study detail window of FIG. 9A.
The "show all units" button 93 has action which depends on the constraint condition. If the unit selection is constrained to be a subset of another study, the button action is to display all the units specified for the parent study.
Otherwise, the button action is to display all the units in the project in the video list.
The "find units" button 94 has the action of presenting a search enabling the user to search for video units that will be displa~ed in the units list 97.
The search allows search on any of the unit fields, using the user-defined custom fields. The units found as a result of this search are displayed in the unit listing area 97. If the unit selection is not constrained to be a subset of another study, the find action is to search all the units in the project. If the unit selection is constrained to be a subset of another study, the find action is to limit search to the units specified for the parent study.
The "copy from study" button 95 has the action of presenting the "use units from other study" window, pro~ ing the user to select a study. When a study is selected, the units from the selected study are displayed in the unit listing area 97.
If the checkbox entitled "constrain to be a subset of the specified study" is checked on the "use units from other study" window, the constraint message 96 is displayed on the window. Area 96 is the constraint mess~ge and checkbox. The message and checkbox only appear when a unit selection constraint is in e~fect. There are two possible values for the message:
If the unit selection was constrained to be a subset of another study in the "use unit from other study" window, the message appears as "constrained to be a subset of [study]". If the unit selection was constrained to include all the units in the project from the units menu on the study detail window of FIG. 9A, the message appears "Include all units in the project".
The unit listing area 97 is a scrollable area which lists video units by unit ID and name. This area is filled by action of the "all", "find", and "copy study" buttons 93-95. Units in this area 97 are copied to the "units selected for study" area 102 by clicking and dragging. When a unit is dragged to the "units selected for study" list 102, the unit appears grayed in the list. Units are removed from this list by clicking and dragging to the remove video icon 101.
The clear unit listing button 98 has the action of clearing the contents of the unit listing area 97. The copy to study button 99 has the action of copying the highlighted unit in the unit listing area 97 to the units selected for study listing area 102.
The checkbox 100 entitled "Randomly select units and add to the study" has the action of creating a random sample if the checkbox is checked. The action of checkbox 100 is to change the behavior of the copy to study button. When checked, the sample number area 100a becomes enterable. When unchecked, the sample number area is non-enterable. If checked, and a sample number is given, the copy to study button has the following action: a random sample consisting of (sample number) units is selected from the unit listing area and added to the existing selection in the "units selected for study" listing area. Units in the unit listing area that already appear in the "units selected for study" listing area are ignored for purposes of creating the sample.
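The random-selection behaviour just described can be sketched as below. The function and variable names are illustrative assumptions; only the described behaviour (draw the requested number of units, skipping any already selected for the study) is taken from the text.

import random

def add_random_sample(unit_listing, selected_for_study, sample_number, rng=random):
    """Randomly add sample_number units from the listing to the study selection.

    Units already in the "units selected for study" list are ignored when
    drawing the sample.
    """
    already = set(selected_for_study)
    candidates = [u for u in unit_listing if u not in already]
    sample = rng.sample(candidates, min(sample_number, len(candidates)))
    return selected_for_study + sample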
The "remove video from list" icon 101 is a drag destination. The action is to remove videos from the list from which they were dragged. For example, if a video is dragged from the unit listing 97 to this icon, it is removed from the unit listing area. This is not the same action as deleting the unit.
The "units selected for study" area 102 is a scrollable area which lists units associated with the study. Units are listed by unit ID and name. Units can be added to this list from the unit list 97 by clicking and dragging.
The "clear units selected for study" button 103 has the action of clearing the contents of the "units selected for study" listing area 102 a~ter confirmation with the user. The print unit selection information button 104 has 'he action of printing the units in study detail report. The cancel/save choice buttons 105 include the save SUBSTITUTE S~EET (RULE 26~

CA 02260077 l999-Ol-ll button which saves the units selected for study selection and returns to the window that made the call to this window. The cancel button returns lo the window that made the call without saving the changes.

Use Units From Other Study Window

The "use units from other study" window is illustrated in FIG. 16. This window is used to fill the units list 97 of the window shown in FIG. 15 with all the units that belong to another study. The window also contains a checkbox that imposes the constraint that only units from the selected study, the parent study, can be used in the current study. This window is opened when the "copy from study" button 95 of the window shown in FIG. 15 is clicked on the unit selection window. The "units from other study" window shown in FIG. 16 includes a number of fields and controls. The study list area 111 is a scrollable area which contains all the studies in the project, except for studies constrained to "Include all units in project" and the current study. The study description area 112 is a non-enterable scrollable area of text that contains the description of the highlighted study shown in area 111. This text is from the unit selection description area on the study detail window illustrated in FIG. 9A. The button 113 labeled "Replace current selection in unit list" causes the action of displaying the units for the highlighted study in the unit list on the unit selection window. The checkbox entitled "Constrain unit selection to be a subset of the specified study" 114 imposes a constraint on the study so that only units from the selected study in area 111 can be used for the current study. Action is to constrain the contents of the units listing area so it only contains the units specified for the selected study. The button entitled "Add units to current selection in unit list" displays the units for the highlighted study in the unit list on the unit selection window.

THE UNIT MODULE

The purpose of the unit module is to include all the windows, controls and areas that are used to define video units, open and work with sessions, and manage sessions in the interactive video analysis system. The unit module includes a unit list window.

Unit List Window

The unit list window is illustrated in FIG. 17. The unit list window presents all the units defined for the project. For example, this includes all the units in the database. The unit list window displays the unit ID and the unit name. Optionally, nine custom unit fields may also appear. Double-clicking on a record presents the unit detail window.

Unit Detail Window

The unit detail window is illustrated in FIG. 18. The unit detail window is the primary window that contains all data needed to define a unit, including creation of the transcript. This window is presented when adding a new unit or when editing an existing unit. The fields and controls of the unit detail window are described below:
The window title area 121 gives the name of this window which is "Unit Detail". The unit name area 122 is an enterable area for the name of the unit. This must be a unique name. Internally, all attributes for the unit are associated with an internal unit identifier, so the unit name can be changed. The unit ID area 123 is an enterable area for code to identify the unit. The create date area 124 gives the date and time when the unit was initially created. This is a non-enterable area that is filled by the program when the unit record is created. The description area 125 is a scrollable text area for entry in order to describe the unit. Nine custom unit fields are an optional feature. Each field is an enterable area for storing data up to 30 characters long. The field names are customizable.
The segment list area 126 is a scrollable area that displays all the segments for this unit. Each segment is displayed with its name (file name from the path), length (determined by reading the media on which the segment is recorded), start time (calculated using the sum of the previous segment lengths), and end time (calculated using the sum of the previous segment lengths plus the length of this segment). The sequence number is by default the order in which the segments were created. The sequence number determines the order in which the segments are to be viewed. Order of the segments can be changed by dragging. Dragging is only supported for a new record. When a segment is moved by dragging, the start and end times of all other segments are recalculated.
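The start and end times can be recomputed from the segment order and the individual lengths exactly as described. The short sketch below assumes lengths expressed in seconds and a simple dictionary per segment; both are illustrative assumptions.

def recalculate_segment_times(segments):
    """Recompute start/end times from segment order and lengths.

    Each segment starts where the previous segments' lengths end and
    finishes after its own length; re-running this after a drag keeps
    the whole unit consistent.
    """
    start = 0.0
    for seg in segments:              # each seg is a dict with a "length" key
        seg["start"] = start
        seg["end"] = start + seg["length"]
        start = seg["end"]
    return segments

# Example: lengths 600, 300 and 450 seconds give 0-600, 600-900 and 900-1350.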
The add segment button 128 has the action of presenting the open-file dialog, prompting the user to select the volume and file that contains the segment.
When the user selects a file, the file name is entered as the segment name in the segment list 126. The length is also determined and written into the segment list.
The first frame of the video is displayed in the video view area.
The delete segment button 129 has the action of prompting the user to confirm that the highlighted segment in the segment list 126 is to be deleted. Upon confirmation, the segment is deleted and the start and end times of all other segments are recalculated. The study membership area 130 is a scrollable area that lists all studies in which this unit is included. Units are assigned to a study on a study detail window on FIG. 9A. When such an assignment is made, the study is included in this area 130.
The transcript information area 131 is a non-enterable area which displays the size of each transcript (the number of characters). The import transcript button 132 prompts the user for which transcript to import, and then presents the standard open file dialog prompting for the file name to import. When the file is selected, the file is imported using tab-delimited fields. The size of the transcript is written to the transcript size area 135. The edit transcript button 133 opens the Video Window with the transcript text. The export transcript button 134 has the action of prompting the user for which transcript to export, then presents the standard new file dialog prompting for the file name for the export file.
Navigation controls 138 operate as previously described. The print button 137 has the action of printing the unit detail report. The cancel/save unit buttons 136 include the save button which prompts the user for confirmation that the segment sequence is correct. After confirmation, the user is either returned to the window or the unit data is saved. For an existing record, the save button action is to save the unit data and return to the unit list window of FIG. 17. If the cancel button is used, any changes to segments or to the transcript are rolled back after confirmation. A video position indicator/control 139 has the same operation as the video position indicator/control of the Video Window. It indicates the relative position of the current frame in the segment.
Session Handling And Management

Coding a unit for a specific study takes place in a session. When a user goes into code mode on the video window, a session must be opened that sets the coding parameters. The progress of coding can be tracked by monitoring the status of the sessions for a study. The present invention includes various windows to open and close sessions during coding and management windows that give detailed session information. If "code" is selected on the main button bar, and if the user has no other currently opened sessions, the user is prompted to open a session for the current study on the create session window. If the user has previous sessions that are still open, the resume session window is presented and the user may open an existing session or create a session. After a session is active, the user may move freely back and forth between view mode and code mode on the video window. While in code mode, the user may open the session info window to display information about the session.
Session management is performed from the session listing window which is accessed by clicking session on the manage button bar. Double-clicking on a session in the session listing window opens the session detail window which provides information similar to the session info window.
The session info window presents information about the current session including the name of the current study, the number of this particular pass, with the total number of passes in the study, a text description of the study, information about the unit including the unit name and unit I.D., the number of the segments that make up the unit and the total length in hours, minutes, seconds and frames for the entire unit, including all segments. Additionally, the session info window gives information about the sample that is in effect for the current pass, a pass outline which contains all the indented event types, characteristics and choice values for the current pass, a print button and a button to close the window. A session placemark saves a time code with the session so that when the session is resumed the video is automatically positioned at the placemark. This occurs when a session is ended without closing it.
When the user clicks sessions on the manage button bar, the select a study window appears. A select button chooses a selected study. The session list window is opened, listing all the sessions for the selected study. Clicking on a record listed presents the session detail window. The session detail window gives information about the session. The information includes the name of the study, the pass number for the particular session, along with the total number of passes defined for the study, the name of the video unit being coded, the unit I.D. for the video unit, the session status such as "never opened," "opened," "reopened," and "closed," the name of the user who opened the session, the length of the unit in hours, minutes, and seconds, the total elapsed time in the code mode between when the unit was opened and closed, the number of events that have been coded in the session, and the number of characteristics recorded for event instances. Sample information such as the sample method that is in effect for the pass in the session and the sample size created for the session is displayed.

THE VIDEO WINDOW

The Video Window is used to: (i) play the video belonging to a unit; (ii) display, edit, and/or synchronize transcription text belonging to the unit; (iii) create event types and structure characteristics under them (for the default study only); (iv) mark and characterize event instances; (v) retrieve previously marked event instances for editing or viewing.
An "event instance" is the marked occurrence ol a predefined event ("event type") within video or transcription text. The video and/or text is associated with an event type and characteristic to create a specific in~t~nce.
The Video Window may be opened through one of several actions:
(1) By selecting the code button on the application tool bar or selecting code from the view menu.
(2) By selecting the transcriber button on the application tool bar or selecting transcribe from the view menu.
(3) By selecting the view button on the application tool bar or selecting view from the view menu.
(4) The show in palette button is clicked on the search window, so that the window is opened in view mode.
In each case, the window is opened to display a specified unit (including video and transcription text).
The Video Window supports three modes of operation: view mode, transcribe mode, and code mode.

Code Mode

Event instances are marked only in code mode. During the coding process, when an event instance is observed, the following steps are performed:
(1) Click the Mark In button to mark an initial In Point, then "fine tune" the video position using the "In Point" frame control to incrementally position the video to precisely zero in on the video frame where the event instance begins. The In Point time code becomes part of the instance record.
(2) Highlight a text selection and click the Mark Text button. The location of the text selection becomes part of the instance record; if the event type requires text only, the time code of the beginning of the utterance in which the highlighted text begins also becomes part of the instance record.
(3) Click the Mark Out button to mark the Out Point of the instance, then "fine tune" the video position using the "Out Point" frame control.
(4) Click to select an event type. The event type listing displays all the event types that can be coded in a particular pass; no other event types may be coded.
(5) Scroll through the characteristics and enter or select values for each one.
These steps can be done in any order. Clicking the save instance button completes the marking of an instance. After save instance is clicked, the instance can only be edited by recalling it by clicking on it in the instance listing, editing it using the frame controls, selecting a different event type or characteristic values, and clicking save instance to save the updates. When a new instance is saved, the instance is sorted into the instance listing if the event type is checked and is displayed in a different color to distinguish it from previously created instances.
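The instance record assembled by these steps might look like the following sketch. The exact fields stored by the program are not given in this section, so the names and shapes below are assumptions based only on the description (In Point, Out Point, text location, event type, and characteristic values).

from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class InstanceRecord:
    event_type_code: str                         # e.g. the code for "Question Asked"
    in_point: Optional[str] = None               # time code where the instance begins
    out_point: Optional[str] = None              # time code where it ends, if required
    text_span: Optional[Tuple[int, int]] = None  # first/last character of highlighted text
    utterance_time_code: Optional[str] = None    # start of utterance, for text-only marking
    characteristic_values: Dict[str, str] = field(default_factory=dict)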

Transcribe Mode

In the transcribe mode, only the Mark In button and the In Point frame control are enabled. This allows marking the In Point of each utterance. This is not the same as marking an instance. Event types, event instances, and characteristics are not displayed in the transcribe mode.

View Mode

In the view mode, the buttons that mark or change the In/Out Points and text selection are disabled. The event type listing displays all the event types defined for the current study rather than for the current session and allows event types to be checked so instances for the event type are displayed in the instance listing.
Characteristic values may be viewed for each instance, but not changed. If there is no current study, nothing appears in the event type listing.

Pre-Processing

When the Video Window is first opened, initialization depends on the mode in which it is to be opened.
Various palettes may be opened over the Video Window. These palettes may only appear in certain modes. FIG. 19 includes a table of the palettes that may be opened over the Video Window. The palettes include the sample palette, the outline palette, search results palette, and transcribe video loop palette.

Segment Switching

The current video segment may be changed in a number of ways: (1) by selecting the segment switching buttons on the sides of the progress bar, (2) when the video plays to the end of the current segment, and (3) when an instance is clicked in the instance listing, or a time code is clicked in any palette, that is not in the current segment.
The path of the required segment is retrieved from the unit file. If the path does not exist because the segment is on removable media, the user is prompted to open the file containing the segment. If an invalid path is entered, an error is given and the user is prompted again. If cancel is clicked, the user is returned to the Video Window in the current segment.

Areas of the Video Window

The Video Window has five major areas: the title area, the video area, the mark area, the instance area, and the list area.
The title area is illustrated in FIG. 21A. The video area is illustrated in FIG. 21B and contains the video display area, play controls, relative position indicators, zoom and sound controls and drawing tools. The mark area is illustrated in FIG. 21C and contains controls to mark instances, refine In and Out Points on the video, and save marked instances. The instance area is illustrated in FIG. 21D and contains listings of event types, characteristic labels, characteristic choices, and event instances that have already been marked. The list area contains the transcript text and controls to change the mode of operation.
With respect to the video area 141 illustrated in FIG. 21B, the video position indicator/control 142 acts like a thermometer. As the video plays, the grey area moves from left to right, filling up the thermometer. It displays the relative position of the current frame in the current segment. At the end of the segment, the thermometer is completely filled with grey. Increments on the control indicate tenths of the segment. The end of the grey area can be dragged back and forth. When released, the action is to move the current video frame to the location in the video corresponding to the relative position of the control. The video resumes the current play condition. A small amount of grey is always displayed on the thermometer, even when the current frame is the first frame of the segment. This is so that the end of the grey can be picked up using the click and drag action even when the first frame of the video is the current location.
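The mapping between the current frame and the grey fill, and back again when the fill is dragged, is simple arithmetic. The sketch below treats positions as frame counts, which is an assumption made for illustration.

def relative_position(current_frame, segment_frames):
    """Fraction of the segment already played, for drawing the grey fill."""
    if segment_frames <= 0:
        return 0.0
    return min(max(current_frame / segment_frames, 0.0), 1.0)

def frame_for_drag(fraction, segment_frames):
    """Frame to seek to when the end of the grey fill is dragged and released."""
    fraction = min(max(fraction, 0.0), 1.0)
    return round(fraction * segment_frames)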
A subtitle area 143 displays the transcription text that corresponds to the video. Two lines of the text are displayed. Button 144 is the zoom tool. The action is to zoom the selected area to fill the frame of the video display. Button 145 is the unzoom tool which restores the video display to 1x magnification. Button 146 is the volume control. Click action pops a thermometer used to control the volume. Button 147 is the mute control. The button toggles the sound on or off. Area 148 gives the current video frame. Button 149 moves the video back five seconds. Button 150 goes to the beginning of the current video segment and resumes the current play condition. Button 151 is the pause button and button 152 is the play button. Button 153 is the subtitle control which toggles the video subtitle through three modes:
1) Display subtitles from transcript one;
2) Display subtitles from transcript two; and
3) Do not display subtitles.
Button 154 is the draw tool which enables drawing on the video display.
The cursor becomes a pencil and drawing starts upon mouse down and continues as the mouse is moved until mouse up. The draw tool can only be selected when the video is paused. Button 155 is the eraser tool which enables erasure of lines created using the draw tool. Button 156 is the scissor tool which copies the currently displayed frame to the clipboard. Drawings made over the video using the draw tool are copied as well. The scissors tool can only be selected when the video is paused.
Button 157 is the frame advance which advances the video by one frame. Button 158 is the open video dialog which opens a window to display the video in a larger area. The link control 159 controls the link between the video and transcript area.
When "on," the video is linked with the transcript. In other words, when the video is moved the closest utterance is highlighted in the transcript area. When the link control button is "off," moving the video has no effect on the transcript area.
With respect to the mark area of the Video Window, reference is made to FIG. 21C and FIG. 22. The action of the controls in the mark area is dependent on the current video mode (view, code, and transcribe).
The Mark In button 161 is disabled in the view mode. In the code mode the button action is to "grab" the time code of the current video frame regardless of the play condition and display it in the In Point area 162. In the transcribe mode, the button action is to "grab" the time code of the current video frame regardless of play condition and display it in the In Point area 162 and in the time code area for the utterance in which the insertion point is positioned. Button action is to overwrite any previous contents in the In Point area and the utterance time code area with the time code of the current video frame.
The In Point area 162 is a non-enterable area which displays the time code of the frame that is the beginning of the instance. This area is updated by one of five actions: (1) clicking the Mark In button in the code and transcribe modes such that the area gets the time code for the current frame; (2) manipulating the In Point frame control in the code and transcribe modes so that the area gets the time code for the current frame; (3) clicking an instance in the instance listing in the code and view modes for an event type that requires a video-in or exhaustive segmentation coding so that the area gets the In Point of the instance; (4) highlighting an utterance in the view and transcribe modes so the area gets the time code of the utterance; and (5) clicking an outline item on the outline palette so that the area gets the In Point of the outline item.
The In Point frame control button 163 has identical action in the code and transcribe modes. Control is disabled in the view mode. Control action is to incrementally move the video forwards or backwards a few frames to "fine tune" the In Point.
The Mark Out button 164 is enabled in code mode only. The button action is exactly analogous to the Mark In button 161, except the Out Point is set and displayed in the Out Point area 165.
The Out Point area 165 is a non-enterable area which displays the time code of the frame that is the end of the instance. If there is no Out Point for the instance, the area is blank. This area is updated by one of four actions: (1) clicking the Mark Out button in the code mode so that the area gets the time code for the current frame; (2) manipulating the Out Point frame control in the code mode so the area gets the time code for the current frame; (3) clicking an instance in the instance listing in the code and view modes for an event type that requires Video Out coding so that the area gets the Out Point of the instance or becomes a blank; and (4) highlighting an utterance in the view and transcribe modes so that the area becomes blank.
The Out Point frame control button 166 is only enabled in the code mode. The control is analogous to the In Point frame control 163 except the Out Point is adjusted.
The mark text button 167 is enabled only in the code mode. The button action is to register the position of the highlighted text as the instance marking. The button appearance changes to signify that text has been marked. Internally, the time code of the beginning of the utterance in which the highlighted text begins is retained, along with the position of the first and last characters of the highlighted text.
The event type listing area 170 is a scrollable area in which the action and contents depend on the mode. The area is blank in the transcribe mode. In the code mode, the scrollable area contains a row for each event type that can be coded in the current pass. Only event types that are listed here can be coded in a particular session. In code mode with the outline palette open, this area is blank. In view mode, the area contains a row for each event type defined in the study. If there is no current study, the area is blank.
The event type listing contains four columns. The first column is the checkmark that indicates that instances of this event type are to be displayed in the instance listing area. The second column is the unique event type code. The third column is the event type name. The fourth column is the event instance coding requirement. In both modes, if an event type is double-clicked the action is to place a checkmark next to it or to remove the checkmark. The checkmark indicates that event instances with this event type are to be listed in the "previously marked instances" area. In the illustration the event type "Question Asked" is checked. All the instances of questions being asked in this unit are listed in the "previously marked instances" area.
In code mode, clicking an event type has the action of refreshing the characteristic labels popup 171 to contain all the characteristics structured under the highlighted event type for the current pass. In the view mode, the action is to refresh the characteristics label popup to contain all the characteristics structured under the highlighted event type in the study.
The characteristics labels area 171 is a popup that contains labels (names) of all the characteristics structured under the highlighted event type in area 170 for the current pass. Selecting an item from the popup has the action of refreshing the characteristic value area 174 to display choices for the selected characteristic, or making it an enterable area, if the selected characteristic has a data entry type of "text" or "numeric". Selecting an item from the popup also has the action of refreshing the characteristic count 173 to display the sequence number of the selected characteristic.
The next/previous characteristic buttons 172 are a two-button cluster that have the action of selecting the next item in the characteristic label popup, or selecting the previous item in the popup. The characteristic count area 173 is a non-enterable text display of the sequence number of the currently displayed characteristic label and the total number of characteristic labels for the current pass. The characteristic value area 174 is either a scrollable area or an enterable area. The clear button 175 has the action of clearing the In Point and Out Point areas and resetting the Mark In, Mark Out, and mark text buttons to normal display (for example, removing any reverse video).
The save instance button 176 only has action in the code mode and is disabled in the other modes. The button name is "save instance" unless an event instance is selected in the event instance listing, in which case the button name is "save changes". The action of the button is to validate data entry. An event type must be selected. All characteristics must be given values. All the points must be marked to satisfy the event instance coding rules for the selected event type.
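The validation performed by the save instance button (an event type chosen, every characteristic given a value, and the marks satisfying the event type's coding requirement) could be sketched as below. Representing the coding requirement as a set of the mark codes from the study outline is an assumption made for illustration.

def validate_instance(event_type_selected, required_marking, in_point, out_point,
                      text_span, characteristic_values, required_characteristics):
    """Return a list of problems; an empty list means the instance may be saved.

    required_marking is a set drawn from {"Vi", "V", "E", "T"}.
    """
    problems = []
    if not event_type_selected:
        problems.append("an event type must be selected")
    if required_marking & {"Vi", "V", "E"} and in_point is None:
        problems.append("a Video In Point must be marked")
    if "V" in required_marking and out_point is None:
        problems.append("a Video Out Point must be marked")
    if "T" in required_marking and text_span is None:
        problems.append("a text selection must be marked")
    missing = [c for c in required_characteristics if c not in characteristic_values]
    if missing:
        problems.append("characteristics without values: " + ", ".join(missing))
    return problems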
The event type help button 177 only applies to the code and view modes. The action is to present a dialog containing the coding instruction for the highlighted event type. The show/add event type button 178 is only visible in the code mode for the default study. The action is to present a window to select one or more event types previously created for the default study to be included in the event type listing area. A button on this window allows users to create a new event type for the default study. The button is provided so that the user may select which of the previously defined event types for the default study are to be included in the event type listing area. This allows the user to select just those event types of immediate interest for addition to the listing. The user also has the option of creating new event types for the default study using the event type detail window.
The edit event type button 179 is only visible in the code mode for the default study. The action of the button is to allow the user to edit the highlighted event type.
The remove/delete event type button 180 is only visible in the code mode for the default study. The action of the button is to prompt the user for whether the highlighted event type is to be removed from the event type listing or is to be deleted permanently with all its instances.

Instance Area
With reference to FIG. 21D and 23 there is shown the instance area of the Video Window. The instance area provides a listing of instances that have been marked for selected event types and controls to retrieve an instance for viewing or editing, to add instances to an outline, and delete instances. This area is active only in the code and view modes. The area is disabled in code mode when the outline window is open.
The instance listing area 181 is a scrollable area that contains all the instances marked in the current session for the event types that are checked in the event type listing. Each instance is listed with a time code and event type code. The meaning of the time code depends on the event type. If the video is marked, the In Point is displayed. If only text is marked, the time code of the beginning of the utterance is displayed. A symbol is placed after the time code to indicate that the time code corresponds to the video frame closest to the beginning of the utterance in the event of marked text. Clicking an instance moves the video to the beginning of the instance and resumes the playing condition.
After selecting an instance, the following controls can be used. The delete instance button 182 is enabled in the code mode only. The action of the button is to delete the highlighted instance after confirmation with the user. The add to outline button 183 is enabled in the code and view modes only. Action is to add the instance to the current outline. The return to In Point button 184 is enabled in the code and view modes only. The action of the button is to move the video to the first frame of the highlighted event instance. The video resumes the prior play condition.
The pause button 185 is enabled in the code and view modes only. The action is to pause the video at the current frame. The play to Out Point button 186 is enabled in the code and view modes only. The action of the button is to play the video starting at the current frame and stop at the Out Point for the highlighted event instance. The go to Out Point button 187 is enabled in the code and view modes only. The action of the button is to move the video to three seconds before the Out Point of the highlighted event instance, play the video to the Out Point, and stop.

TRANSCRIBE MODE

The transcribe mode has two operations: (i) transcribing the spoken words or actions on the video into text; and (ii) assigning time reference values to each of the utterances in the video.
The first operation, transcribing video content into text, is largely accomplished by watching the video and entering text into the list area. This process is aided by the Transcribe-Video Loop palette. The palette provides a control that enables the user to play a short segment of video over and over without touching any controls. The user sets the loop start point and end point. When the contents of the loop have been successfully transcribed, a 'leap' button moves the loop to the next increment of video.
The second operation, assigning time reference values to utterances, is accomplished using the same frame controls and the Mark-In control as described in the "Video Window".
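As a rough illustration of the Transcribe-Video Loop palette described above, the sketch below replays a fixed segment until a "leap" advances the loop to the next increment of video. The class name, method names, and time units are assumptions.

```python
# Illustrative sketch of a looping playback control; not the described system's code.
class TranscribeLoop:
    def __init__(self, start: float, end: float):
        self.start = start
        self.end = end

    @property
    def duration(self) -> float:
        return self.end - self.start

    def position_for(self, elapsed: float) -> float:
        """Map elapsed playback time onto the looped segment (plays over and over)."""
        return self.start + (elapsed % self.duration)

    def leap(self) -> None:
        """Move the loop forward to the next increment of video."""
        self.start, self.end = self.end, self.end + self.duration


loop = TranscribeLoop(start=0.0, end=5.0)
loop.leap()                      # loop now covers the 5.0-10.0 second increment
print(loop.start, loop.end)
```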

LIST MANAGER

The list manager is used to display and work with the text associated with the video. Typically this text is the transcription of what is being said in the video, though the text may actually be anything - observations about the video, translation, etc. Because it is anticipated that its most common use will be to hold transcription of speech in the video, the text is referred to as the 'transcription' or transcript. In speech, each speaker takes turns speaking; the transcription of each turn is an 'utterance'; e.g. an utterance is the transcription of one speaker's turn at speech.
Utterances are records in the database; each utterance has a time reference value (In point), two transcription text fields, and a speaker field.
The area on the screen that the list manager controls is called the 'List Area'. The List Area is shown in FIG. 24. It is the right side of the Video Window of FIG. 20. The list manager gets its name because it is not a conventional text area;
it displays text from utterance records in the transcript so that the text looks like a contiguous block. Actions on the text block update the utterance records.
During transcription the video is transcribed and synchronized with the transcript. Each utterance is associated with a time reference value that synchronizes it with the video; an In point is marked that identifies where the utterance begins in the video. (Note: there is no Out point associated with an utterance; the out point is assumed to be the In point of the next consecutive utterance.) Each utterance is also associated with a speaker.
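A minimal sketch of the utterance records just described, with assumed field names: each utterance stores an In point, two transcript fields, and a speaker, and its Out point is taken to be the In point of the next consecutive utterance.

```python
# Illustrative sketch of the utterance record and the implied Out point convention.
from dataclasses import dataclass


@dataclass
class Utterance:
    in_point: float      # time reference value (seconds into the video)
    transcript1: str
    transcript2: str
    speaker: str


def implied_out_point(utterances: list[Utterance], index: int, video_end: float) -> float:
    """Out point = In point of the next utterance (or the end of the video for the last one)."""
    if index + 1 < len(utterances):
        return utterances[index + 1].in_point
    return video_end
```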
Utterances in the list area are always displayed in the order as entered or specified (in case of an insertion) by the user.

Mode of Operation
The list area supports three modes of operation: View Mode, Transcribe Mode and Code Mode. The area behaves differently in each of the three modes. For instance, the action of clicking in the area to create an insertion point is the same in all three modes, but a subsequent action of typing characters would have the effect of inserting characters into the text area only in Transcribe mode; it would have no effect at all in View mode or Code mode.

View Mode
The List Area in View Mode displays the transcript text next to the video.
Clicking on the text has the action of moving the video to the point closest to the utterance. Moving the video using other controls on the Video Window has the effect of highlighting the utterance closest to the video. The text can not be changed in any manner, nor may the time reference values associated with it be changed. View mode affects the other controls on the Video window as well: new event instances can not be marked or edited, and characteristic values can not be recorded or changed.

Transcribe Mode
The purpose of the Transcribe Mode is to allow text entry and edit, and to provide controls for marking the text to synchronize it with the video. The marking process is limited to marking the video In point for each utterance; event instances can not be marked or edited, and characteristic values can not be recorded or changed.

Code Mode
The purpose of Code Mode is to mark event instances and enter characteristic values. The coding process typically starts only after the entire Unit is transcribed and time reference values are associated with every utterance, as the time reference value is used during coding.
There are icons which can be clicked to change the mode.

The list area has a header area 191 with the mode icons 195. The time column 192 displays the time reference value associated with each utterance. This is the point on the video that was marked to correspond with the beginning of the utterance (e.g. the time reference value is the In point for when this utterance is made in the video). If the utterance has not been marked, the time reference value is displayed as 00:00:00.
The speaker column 193 identifies the speaker. The transcript 1 column 194 displays the text of the first transcript. This area is enterable in the Transcribe Mode.
Area splitter 196 allows the user to split the transcript text area into two halves so that a second transcript is displayed. This is shown in FIG. 25. A video may be on more than one media unit (disk, tape, etc.) (segments). Segment boundaries are identified in the list area as a heavy horizontal line that goes across all four columns.
Whenever an utterance is highlighted, the action is to move the video to the beginning time reference value of the utterance or the closest previous utterance that has a time reference value. In the transcribe mode the text is fully editable and selectable. In the view mode, all key actions have the effect of highlighting the entire utterance or navigating between highlighted utterances. In the Code Mode, instances are marked. A full set of actions is supported to select text so it can be marked. Highlighted text can not be changed.
Whenever an event instance that has text coding (as determined by the event type) is selected on the Video Window, the list area is updated to scroll to the marked utterance, and highlight the marked selection within the utterance. Whenever an event instance that does not have text coding is selected on the Video Window, the list area is updated to scroll to the closest utterance, and highlight the utterance. As the video plays or is moved, the list area is updated to scroll to the closest utterance to the current video frame and highlight the utterance.
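One plausible way to keep the list area synchronized with playback, sketched below with assumed names, is a binary search over the sorted utterance In points to find the utterance closest to (at or before) the current video frame.

```python
# Illustrative sketch; assumes utterance In points are already sorted ascending.
import bisect


def closest_utterance_index(in_points: list[float], current_time: float) -> int:
    """Index of the utterance that starts at or before current_time (first one otherwise)."""
    i = bisect.bisect_right(in_points, current_time)
    return max(i - 1, 0)


in_points = [0.0, 4.2, 9.8, 15.5]
print(closest_utterance_index(in_points, 10.0))   # -> 2 (the utterance starting at 9.8)
```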

Find/Find Again
The Video Window menubar contains commands for Find and Find Again.
The effect on the list area is identical for each of these commands. The user is prompted for a text value and/or speaker name; the list manager searches for the next instance starting at the current insertion point position.

Marking Time Reference Values
After the transcript text has been entered, each utterance is marked to identify the time reference value on the video to which it belongs. In Transcribe Mode, the Mark In button and controls are enabled to allow exact video positioning of the In point of each utterance.
When in Code Mode, the list area tracks the current insertion position and/or highlight range: the utterance ID, time reference value, and character offset are available to the mark controls so the exact insertion point position or highlight range can be recorded with the instance.

OUTLINE PRESENTATION FEATURE

The outline presentation feature allows the user to select and structure the video and transcript text from event instances. The intended use of this feature is to prepare presentations that include selected instances.
The outline palette for the current outline is opened when Show Outline is requested anywhere. If no current outline is active, the user is prompted to select one by the Select An Outline window shown in FIG. 26. It displays outlines that have been created. The author of each outline is displayed in the scrollable area.
The user may select an outline, or push the plus button to create a new outline.
The negative button will delete the selected outline if the user is the author.
The outline description window is displayed when an outline is created.
It has two enterable areas as shown in FIG. 27: the outline name and the description.
The outline palette is shown in FIG. 28.
Event instances dragged to the Outline icon on the Video Window of FIG. 20 become part of the current outline. If there is no current outline, the user is prompted to specify one, or create a new one. The current outline remains in effect until a different outline is selected.
When the event instance is dropped on the Outline icon, the Outline Item window, shown in FIG. 29, is opened to prompt the user for a description of the item. The Outline Item window displays all the headers for the current outline (in the same order as specified in the outline) so a header for the item can be specified as an optional step.
If the outline header is specified on the Outline Item window, the item is added as the last item under the header. If no outline header is specified, the item is added as the first item in the orphan area.
When an event instance is added to the outline, an Outline Item is created from the unit, time reference value, event type, and text selection of the instance. After creation, the outline item is completely independent of the instance.
The outline item may be edited for In/Out point, text selection, or deleted entirely, without affecting the event instance, and vice-versa.
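The following sketch (with an assumed record layout) illustrates how an Outline Item can be created as an independent copy of an event instance, so that later edits to one do not affect the other.

```python
# Illustrative sketch of Outline Item creation from an event instance.
import copy
from dataclasses import dataclass


@dataclass
class EventInstance:
    unit: str
    event_type: str
    in_point: float
    out_point: float
    text_selection: str


@dataclass
class OutlineItem:
    unit: str
    event_type: str
    in_point: float
    out_point: float
    text_selection: str
    header: str | None = None     # optional outline header; None places the item in the orphan area


def make_outline_item(instance: EventInstance, header: str | None = None) -> OutlineItem:
    # deepcopy is not strictly needed for these scalar fields, but it makes the
    # "completely independent of the instance" behaviour explicit.
    data = copy.deepcopy(vars(instance))
    return OutlineItem(header=header, **data)
```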
If the event instance has already been used to create an outline item for the current outline, the user is warned of this and prompted for whether the action should be ignored, or whether a second outline item should be created.
After outline items have been created, they can be structured in the Outline Palette. Outline items can be structured under and moved between headers, and the order of headers can be changed. Once the outline is complete, it can be printed and the video portion can be exported to an MPEG file.
When the outline palette is active, it can be used to control the video display. Clicking an outline item moves the video to the associated time reference value. The outline item's time reference value can be edited for In and Out points.
The outline item's transcript marking may also be edited.
Outline items retain association with the utterances (Transcript 1 and Transcript 2) associated with the outline item (by time reference value) corresponding to the video. The user may specify whether these are to be printed with the outline.
The outline area 200 is a scrollable area that contains all outline items and outline headers. Outline items are indented under outline headers. Drag and drop action is supported in this area to allow headers and outline items to be moved freely through the outline area. Outline headers appear in bold and are numbered with whole numbers. When a header is moved the outline items move with it. A header may be clicked and dragged anywhere in the outline area. Outline items appear in plain text and are numbered with decimal numbers that begin with the header number. Outline items appear with the event code that went along with the event instance from which the item was created. Items may be clicked and dragged anywhere in the outline area - under the same header, under a different header, or to the orphan area.
If an item is clicked, in the video window the video is moved to the In point of the outline item, the utterance closest to the current video frame is highlighted, and the current play condition is resumed. If the outline item points to video from a unit or segment not currently mounted, the user is prompted to insert it.
In the Video window, the In and Out points of the outline item appear in the Mark controls. The Mark controls are enabled when the Outline window is displayed, so the In and/or Out points of the outline item can be edited. This has no effect whatsoever on the instance from which the outline item was created. If an item is not associated with a header, it is displayed on the top of the outline area 200a and is called an "orphan".
If an outline item is highlighted, areas 201, 202, 203 and 204 of the outline palette are filled. The study area 201 displays the study from which the event instance was taken to create the highlighted outline item. The unit area 202 displays the name of the video unit associated with the highlighted outline item. The In point area 203 displays the In point of the video associated with the highlighted outline item. The duration area 204 displays the duration of the video associated with the outline item. The Play Outline button 205 has the action of playing the video starting at the In Point of the first outline item and continuing to play each outline item in the order of appearance in the outline. Play stops at the Out Point of the last outline item.

Export Mode
The system supports the creation of a new MPEG file based on the instances that have been moved into an outline. That is, given marked video in and video out points, the system can create a new MPEG file which contains only the marked video content. The new MPEG file also contains the relevant additional information such as transcript text, and derivative information such as event, characteristic and instance information. When viewed with one of the generally available MPEG viewers, the exported MPEG file is viewable. However, when viewed with a LAVA MPEG viewer (made by LAVA, L.L.C.), not only is the MPEG file viewable, but all of the relevant additional and derivative information such as the transcript text, event, characteristic and instance information is viewable and accessible for random positioning, searching, subtitling and manipulation.
Two types of output can be produced from an Outline: a printed Outline Report and an MPEG file, created from the outline items in the order specified on the outline, containing video from the outline.
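As a present-day illustration only, the segment cutting and concatenation described for export could be prototyped by shelling out to the ffmpeg command-line tool; ffmpeg, the file names, and the helper function below are assumptions and are not part of the described system.

```python
# Illustrative sketch: cut each marked (in, out) segment and join them into one file.
import subprocess
import tempfile
from pathlib import Path


def export_outline(source: str, segments: list[tuple[float, float]], output: str) -> None:
    """Cut each (in_point, out_point) segment from `source` and concatenate them into `output`."""
    with tempfile.TemporaryDirectory() as tmp:
        parts = []
        for i, (start, end) in enumerate(segments):
            part = Path(tmp) / f"part{i}.mpg"
            subprocess.run(
                ["ffmpeg", "-y", "-i", source, "-ss", str(start), "-to", str(end),
                 "-c", "copy", str(part)],
                check=True,
            )
            parts.append(part)
        # Build a concat list file and join the parts without re-encoding.
        list_file = Path(tmp) / "parts.txt"
        list_file.write_text("".join(f"file '{p}'\n" for p in parts))
        subprocess.run(
            ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", str(list_file),
             "-c", "copy", output],
            check=True,
        )
```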

SAMPLING

Sampling is the creation of a specific subset of video or event instances that can be used for new instance hunting or the creation of a specific subset of event instances that can be characterized. There are five methods for creating samples.
The sample method is specified on the sample definition window and displayed on the study definition window for each coding pass. The samples are presented to the coder in the Sample Palette so they can be visited one by one. The samples are saved in the database so they can be retrieved into the Sample Palette anytime. FIG. 30 shows the sample definition window. Area 210 permits a choice of sampling method.

No Sampling

This sample method means that no samples will be created. The coder can use all the video in the search for event instances.

Fractional Event Sample
This method means the sample is to be created from a specified percentage of the total event instances that occur in the Unit that belong to the "Specify Event" area. The default value for percentage is 100%. An event must be selected from the "Specify Event" popup if this sample method is chosen.

Quantitative Event Sample
This method means the sample is to be created from a specified number of the event instances that occur in the Unit that belong to the "Specify Event" area. An event must be selected from the "Specify Event" popup if this sample method is chosen.

Quantitative Time Sample
This method means the sample is to be created from a specified number of video clips from the Unit with a specified duration. Two parameters are required for this option: the number of samples to be created from the Unit, and the duration in seconds of each sample.
If "Occurring within Event" is specified, the number of clips refers to the entire video, not from each event.
If an "Occurring within Event" event filter is specified, the random selection of video is from the set of all instances of the event type that are at least as long as the value entered; that is, if the criteria was to randomly select 12 clips of 15 seconds each using "Teacher Questions" as the event filter, then the first processing pass would be to find all instances of "Teacher Questions" at least 15 seconds long. The second pass would be to randomly select twelve 15 second intervals within the selection, so that every possible 15 second period within the selection has an equal probability of being selected for the sample. Additional constraints are:
- Sample periods may not overlap.
- Sample periods may not span from one instance to another; e.g. the sample period must be wholly contained within a single event instance.
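A minimal sketch of this selection procedure, under the stated constraints and with assumed names: clip start positions are drawn uniformly over all positions that keep the clip wholly inside one eligible instance, and draws that would overlap an earlier clip are rejected.

```python
# Illustrative sketch of quantitative time sampling within event instances.
import random


def quantitative_time_sample(instances, n_clips, clip_len, max_tries=10000):
    """instances: list of (start, end) event-instance times; returns up to n_clips (start, end) clips."""
    # Valid start positions keep the whole clip inside a single instance.
    starts = [(s, e - clip_len) for s, e in instances if e - s >= clip_len]
    total = sum(hi - lo for lo, hi in starts)
    if total <= 0:
        return []
    chosen = []
    tries = 0
    while len(chosen) < n_clips and tries < max_tries:
        tries += 1
        u = random.uniform(0, total)          # every possible start position is equally likely
        clip = None
        for lo, hi in starts:
            if u <= hi - lo:
                clip = (lo + u, lo + u + clip_len)
                break
            u -= hi - lo
        if clip is None:                      # guard against floating-point edge cases
            continue
        # Constraints from the text: clips may not overlap, and each lies within one instance.
        if all(clip[1] <= c[0] or clip[0] >= c[1] for c in chosen):
            chosen.append(clip)
    return chosen


print(quantitative_time_sample([(10.0, 70.0), (100.0, 118.0)], n_clips=3, clip_len=15.0))
```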

Proportional Time Sample
This method means the sample is to be created from randomly selected video clips of a given duration from the Unit. The number of samples is given in terms of "samples per minute of video". Three parameters are required for this option: the number of samples desired, the interval of time over which the samples are to be chosen, and the duration in seconds of each sample.

The Event Filter
The event filter area 219 allows restriction of the selection of event instances or time samples to periods within another event type.
- Time samples restrict the creation of new instances to the sample periods, according to the Event Coding Constraint specified in the sample definition.
- Instance samples allow the retrieval of selected instances, typically for characterization.

A time sample is created by specifying one of the time sample methods (Random Time Sample or Proportional Random Time Sample).

An instance sample is created by specifying one of the event sample methods (Fractional Event Sample or Quantitative Event Sample).
If an "Occurring within Event" event filter is in effect, the instance listing on the Video window limits the display of existing instances to only instances with an In point within the time period of the highlighted sample in the Sample Palette. For example, if five event instances are listed in the Sample Palette and one is highlighted, only event instances with an In point within the time period (e.g. from Video In to Video Out) of the highlighted instance would be listed in the event listing (subject to the other controls that specify what event types are to be displayed in the instance listing).
The sample palette is shown in FIG. 31. Checkmarks 223 next to the sample list area 224 may be set. The sample list area contains an entry for each sample with time reference values for the In point and Out point of the sample.
FIG. 32 is the sample information window which is opened from choosing the Show Sample Info button 222 on the sample palette. The event filter area is a non-enterable scrollable area that contains text describing the event filter in effect for the current pass. The illustration shows the format for how the filter is to be described - it follows the same conventions as the "Within" area in the Sample Definition Window.

ANALYSIS MODULE

The analysis module is used to gather statistics about event instances across video units. The module provides functions for defining variables, searching for and retrieving information about event instances, displaying the results, and exporting the data. Typically the results of an analysis will be exported for further analysis in a statistical program.
From the main button bar the user may choose the analysis module. A window requests the user to designate a unit analysis or an instance analysis. The analysis module allows the user to produce statistical information about event instances on either a Unit by Unit basis or an Instance by Instance basis. The results can be displayed or exported for further analysis.
There are two "flavors" of analysis - Unit analysis and Event analysis.
Unit analysis aggregates information about the instances found in a unit and returns statistics such as count, mean, and standard deviation about the event instances found in the unit. Event Instance analysis returns characteristic values directly for each instance found in the units included in the analysis.
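A small sketch of the unit-analysis aggregation, assuming the input is simply a mapping from unit name to the durations of its matching event instances; the statistics follow the count, mean, and standard deviation mentioned above.

```python
# Illustrative sketch of per-unit statistics for unit analysis.
import statistics


def unit_statistics(instances_by_unit: dict[str, list[float]]) -> dict[str, dict[str, float]]:
    rows = {}
    for unit, durations in instances_by_unit.items():
        rows[unit] = {
            "count": len(durations),
            "mean": statistics.mean(durations) if durations else 0.0,
            "sd": statistics.stdev(durations) if len(durations) > 1 else 0.0,
        }
    return rows


data = {"Lesson 1": [12.0, 30.5, 18.2], "Lesson 2": [25.0]}
print(unit_statistics(data))
```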
In unit analysis the user specifies the event variables. In instance analysis the user specifies characteristics. FIG. 33 shows the unit analysis window.
Area 232 is the analysis description area. Area 236 is the variable definition area.
There are four columns: the sequence number, the variable description, the short variable name and the statistic that will be calculated for the variable such as: count, mean, SD (standard deviation). Variables may be dragged to change order and added or deleted. The execute analysis button 242 executes the analysis.
The analysis results area 243 has a column for each variable defined in variable listing area 237 and a row for each unit in the analysis. A unit variable may be added and defined. The unit value will be returned for each unit in the analysis.
An event variable may be added and defined. A calculated value will be returned for each unit in the analysis. The calculated variable is a statistic about instances matching a description. FIG. 34 shows the define unit variable window and FIG. 35 shows the define event variable window. The event criteria area 255 specifies event instances to be found for analysis. Event instances are found for the event type in area 254 that occur within other instances and/or have specific characteristic values.
Area 256 sets additional criteria. The event variable is calculated using the attribute designated in area 257. Area 258 indicates the calculation to perform (mean, count instances, total, standard deviation, total number, sum, minimum, maximum, range, or count before/after for exhaustive segmentation).
FIG. 36 illustrates the instance analysis window. Area 262 describes the analysis. Area 264 specifies the event type and is analogous to the define event variable window of FIG. 35 for unit analysis. Area 265 is the variable listing area.
It has four columns. The first three are the same as for unit analysis. The fourth column is "Origin". The origin is "Unit" for unit variables, "Inst." for instance properties, and "Char" for characteristic values. Variables may be added and deleted. There is a button 268 to execute analysis. Area 269 gives the analysis results with a column for each variable in variable listing area 265 and a row for each event instance in the analysis. FIG. 37 is the define analysis variable window.

SEARCH MODULE

The search module is used to perform ad-hoc searches for text or event instances, display the results, and allow the results to be used to control the Video Window.
The Search Module allows the user to search for text or event instances across multiple video units. The results can be displayed in a palette over the Video Window so each 'find' can be viewed.
The Search Window is designed to allow multiple iterative searches. Each search can begin with the results of the previous search: the new search results can be added to or subtracted from the previous search results.
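The add/subtract behaviour of iterative searches can be sketched as simple set operations over result identifiers; the identifier format and mode names below are assumptions.

```python
# Illustrative sketch of combining a new search with the previous result set.
def combine_results(previous: set[str], new: set[str], mode: str) -> set[str]:
    if mode == "add":
        return previous | new        # union with the previous search results
    if mode == "subtract":
        return previous - new        # remove the new hits from the previous results
    return new                       # otherwise start a fresh result set


prev = {"inst-1", "inst-2", "inst-3"}
hits = {"inst-2", "inst-9"}
print(combine_results(prev, hits, "add"))
print(combine_results(prev, hits, "subtract"))
```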
There are two types of searches: searches for text strings within the transcript text ('Text Search'), and searches for event instances that match a given event type and other criteria ('Instance Search'). Each search has its own window, but most of the controls in each window are identical.
The search module is accessed from the main button bar for a text search or an instance search. FIG. 38 is a search window with features common to text and instance searches. Area 271 indicates if it is a text or instance search. Area 272 shows the relationship to a previous search. Area 277 designates units to search.
Area 281 specifies what is being searched for: the event instance or word or phrase.
Multiple criteria may be set to identify the characteristic or position. Button 282 executes the search. Area 283 lists the results. Button 284 will add the result to an outline. Area 285 gives the instance count.
If the search within a study button is selected on the search window, a unit selection for search window permits the user to select individual units within a study to limit the search.
When the Show In Palette button is pushed a results palette permits the search results to be examined and there is a checkmark that may be set for each result. For event searching the results are event instances. FIG. 39 shows the event instance search window. A search is done for an event type occurring within an event type where a particular characteristic has a valid characteristic value. The operator area 290 may be =, <, >, ≤, ≥, ≠, contains, or includes.
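A sketch of how the operator area might be evaluated when matching a characteristic value; the ASCII operator spellings ("<=", ">=", "!=") and the treatment of "includes" as membership in a multi-valued characteristic are assumptions.

```python
# Illustrative sketch of operator dispatch for characteristic matching.
def matches(value, operator: str, target) -> bool:
    if operator == "=":
        return value == target
    if operator == "<":
        return value < target
    if operator == ">":
        return value > target
    if operator == "<=":
        return value <= target
    if operator == ">=":
        return value >= target
    if operator == "!=":
        return value != target
    if operator == "contains":
        return str(target) in str(value)
    if operator == "includes":
        return target in value        # e.g. a multi-valued characteristic
    raise ValueError(f"unknown operator: {operator}")


print(matches("Teacher Questions", "contains", "Question"))   # True
print(matches(12, ">=", 10))                                   # True
```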

Marking Results as Instances
After instances have been found, they may be marked as instances of a special class of event type (called a 'Saved Search' event type). This provides several capabilities:
- The user can quickly retrieve the instances in a future search by specifying the Saved Search event type instead of complex criteria;
- Characteristic values can be applied to the event instances, and a later pass can be created to record other characteristics.

FIG. 40 is the text search window. The text search can search the text of multiple units of video. It finds all instances of a word or phrase. The search term is input in area 291. The speaker is input in area 292. Area 293 indicates which transcripts are searched. Area 294 permits searching text within an event type with a characteristic and selected choice of characteristic.
REPORTS

Study Module Reports
The study listing report lists studies in the current selection, sorted in the current sort order. The study detail report details one study, giving all details about it. The event detail report details one event type belonging to a study, giving all details about it. The characteristic detail report details one characteristic belonging to the study, giving all details about it. The units in study detail report lists all the units that have been selected for a single study.

Unit and Session Reports
The unit listing report lists all units in the current selection, sorted in the current sort order. The unit detail report gives all details about a unit. The session listing report prints the contents of the current session list window. The session detail report prints the contents of the current session detail window.

User Reports
The user listing report lists all users in the current selection, sorted in the current sort order. The user detail report details one user. The system settings report prints all the system settings.

Outline Report
The outline report is printed from the outline palette.

Search Report
The search report gives results of an event instance search or a text search.
The search criteria report gives the search criteria.

Analysis Reports
The analysis results report prints the data created for the analysis that is displayed. The analysis variable definition report prints the description of all the variables defined in the analysis.

Sample Reports
The sample detail report describes the sample and lists the time reference values in the sample.

Transcript Report
The transcript report details the contents of the list manager.
The above description is included to illustrate the operation of the preferred embodiments of the present invention and not meant to limit the scope of the invention. The scope of the invention is to be limited only by the following claims. From the above discussion, many variations will be apparent to one skilled in the art that would yet be encompassed by the spirit and scope of the invention.


Claims

We claim:
1. A digital video system comprising:
coding and control means, adapted to receive digital reference video information, for coding said digital reference video information to generate coded data; and coded data storing means for storing said coded data from said coding and control means.
2. The digital video system of Claim 1, wherein said coding and control means include derivative data coding means and a controller in a control loop.
3. The digital video system of Claim 2, wherein said control loop includes the user.
4. The digital video system of Claim 2, wherein said derivative data coding means includes means for creating indices of coded data derived from said digital reference video information.
5. A digital video system comprising:
digital storage means for storing digital reference video information;
coding and control means for coding said digital reference video information to generate coded data; and coded data storing means for storing said coded data from said coding and control means.
6. The digital video system of Claim 5, further comprising output means.
7. The digital video system of Claim 5, wherein said digital reference video information is encoded and compressed.
8. The digital video system of Claim 7, further comprising means for decoding and decompressing said encoded and compressed digital reference video information.
9. The digital video system of Claim 6, wherein said output means further comprises display means.

10. The digital video system of Claim 5, wherein said digital storage means receives digital reference video information from more than one video source.
11. The digital video system of Claim 5, wherein said coding and control means include derivative data coding means and a controller in a control loop.
12. The digital video system of Claim 11, wherein said control loop includes the user.
13. The digital video system of Claim 11, wherein said derivative data coding means includes means for creating indices of coded data derived from said digital reference video information.
14. The digital video system of Claim 10, wherein said coding and control means include derivative data coding means and a controller in a control loop, and said derivative data coding means includes means for linking said digital reference video information from more than one video source together.
15. A digital video system comprising:
digital storage means for storing digital reference video information and digital reference audio information;
coding and control means for coding said digital reference video and audio information to generate coded data; and coded data storing means for storing said coded data from said coding and control means.
16. The digital video system of Claim 15, further comprising output means.
17. The digital video system of Claim 15, wherein said digital reference video and audio information is encoded and compressed.
18. The digital video system of Claim 17, further comprising means for decoding and decompressing said encoded and compressed digital reference video and audio information.
19. The digital video system of Claim 16, wherein said output means further comprises display means.

20. The digital video system of Claim 15, wherein said digital storage means receives digital reference video information from more than one video source.
21. The digital video system of Claim 15, wherein said digital storage means receives digital reference audio information from more than one audio source.
22. The digital video system of Claim 15, wherein said coding and control means include derivative data coding means and a controller in a control loop.
23. The digital video system of Claim 22, wherein said control loop includes the user.
24. The digital video system of Claim 22, wherein said derivative data coding means includes means for creating indices of coded data derived from at least one of said digital reference video and audio information.
25. The digital video system of Claim 22, wherein said derivative data coding means includes means for linking said digital reference video and audio information.
26. The digital video system of Claim 20, wherein said coding and control means include derivative data coding means and a controller in a control loop, and said derivative data coding means includes means for linking said digital reference video information from more than one video source together.
27. The digital video system of Claim 21, wherein said coding and control means includes derivative data coding means and controller in a control loop, and said derivative data coding means includes means for linking said digital reference audio information from more than one audio source together.
28. A digital video system comprising:
digital storage means for storing digital reference video information;
coding and control means, having an input means for receiving digital additional information, for coding said digital reference video information and said digital additional information to generate coded data; and coded data storing means for storing said coded data from said coding and control means.

29. The digital video system of Claim 28, further comprising a digital encoder for receiving analog additional information from an analog source and outputting digitally encoded analog additional information to said coding and control means.
30. The digital video system of Claim 28, wherein said coding and control means receives digital additional information from more than one source.
31. The digital video system of Claim 28, further comprising output means.
32. The digital video system of Claim 28, wherein said digital reference video information is encoded and compressed.
33. The digital video system of Claim 32, further comprising means for decoding and decompressing said encoded and compressed digital reference video information.
34. The digital video system of Claim 31, wherein said output means further comprises display means.
35. The digital video system of Claim 28, wherein said coding and control means include derivative data coding means and a controller in a first control loop and correlation and synch means in a second control loop with said controller.
36. The digital video system of Claim 35, wherein at least one of said control loops include the user.
37. The digital video system of Claim 35, wherein said derivative data coding means includes means for creating indices of coded data derived from said digital reference video information.
38. The digital video system of Claim 35, wherein said correlation and synch means links said digital reference video information and said digital additionalinformation together.
39. The digital video system of Claim 29, wherein said coding and control means include derivative data coding means and a controller in a first control loop and correlation and synch means in a second control loop with said controller.

40. The digital video system of Claim 39, wherein said correlation and synch means links said digital reference video information and said digitally encoded analog additional information together.
41. A digital video system comprising:
digital storage means for storing digital reference video information and digital reference audio information;
a digital encoder for receiving analog additional information from an analog source and outputting digitally encoded analog additional information to a coding and control means;
said coding and control means, having an input means for receiving digital additional information and digitally encoded analog additional information, for coding said reference video and audio information and said digital additional information and digitally encoded analog additional information to generate coded data; and coded data storing means for storing said coded data from said coding and control means.
42. The digital video system of Claim 41, further comprising output means.
43. The digital video system of Claim 41, wherein said digital reference video and audio information is encoded and compressed.
44. The digital video system of Claim 43, further comprising means for decoding and decompressing said encoded and compressed digital reference video and audio information.
45. The digital video system of Claim 42, wherein said output means further comprises display means.
46. The digital video system of Claim 41, wherein said digital storage means receives digital reference video information from more than one video source.
47. The digital video system of Claim 41, wherein said digital storage means receives digital reference audio information from more than one audio source.

48. The digital video system of Claim 41, wherein said input means of said coding and control means receives digital additional information from a digital source.
49. The digital video system of Claim 41, wherein said coding and control means include derivative data coding means and a controller in a first control loop and correlation and synch means in a second control loop with said controller.
50. The digital video system of Claim 49, wherein at least one of said control loops include the user.
51. The digital video system of Claim 49, wherein said derivative data coding means includes means for creating indices of coded data derived from at least one of said digital reference video and audio information and means for linking said digital reference video and audio information together.
52. The digital video system of Claim 49, wherein said correlation and synch means links said digital reference video and audio information and said digital additional information and said digitally encoded analog additional information together.
53. The digital video system of Claim 41, wherein said digital storage means receives digital additional information from more than one source.
54. The digital video system of Claim 41, wherein said digital storage means receives digitally encoded analog additional information from more than one source.
CA002260077A 1996-07-12 1997-07-11 Digital video system having a data base of coded data for digital audio and video information Abandoned CA2260077A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US67856396A 1996-07-12 1996-07-12
US08/678,563 1996-07-12

Publications (1)

Publication Number Publication Date
CA2260077A1 true CA2260077A1 (en) 1998-01-22

Family

ID=24723323

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002260077A Abandoned CA2260077A1 (en) 1996-07-12 1997-07-11 Digital video system having a data base of coded data for digital audio and video information

Country Status (6)

Country Link
EP (1) EP1027660A1 (en)
JP (1) JP2001502858A (en)
AU (1) AU3724497A (en)
CA (1) CA2260077A1 (en)
MX (1) MXPA99000549A (en)
WO (1) WO1998002827A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5963203A (en) 1997-07-03 1999-10-05 Obvious Technology, Inc. Interactive video icon with designated viewing position
US6573907B1 (en) 1997-07-03 2003-06-03 Obvious Technology Network distribution and management of interactive video and multi-media containers
JP2000262479A (en) * 1999-03-17 2000-09-26 Hitachi Ltd Health examination method, executing device therefor, and medium with processing program recorded thereon
US6771657B1 (en) * 1999-12-09 2004-08-03 General Instrument Corporation Non real-time delivery of MPEG-2 programs via an MPEG-2 transport stream
WO2002041634A2 (en) * 2000-11-14 2002-05-23 Koninklijke Philips Electronics N.V. Summarization and/or indexing of programs
MXPA03010679A (en) * 2001-05-23 2004-03-02 Tanabe Seiyaku Co Compositions for promoting healing of bone fracture.
EP1262881A1 (en) * 2001-05-31 2002-12-04 Project Automation S.p.A. Method for the management of data originating from procedural statements
US7756393B2 (en) 2001-10-23 2010-07-13 Thomson Licensing Frame advance and slide show trick modes
US8891020B2 (en) * 2007-01-31 2014-11-18 Thomson Licensing Method and apparatus for automatically categorizing potential shot and scene detection information
CN115471780B (en) * 2022-11-11 2023-06-06 荣耀终端有限公司 Sound-picture time delay testing method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2945692B2 (en) * 1988-05-27 1999-09-06 コダック・リミテッド Data processing system for processing images that can be annotated
US5524193A (en) * 1991-10-15 1996-06-04 And Communications Interactive multimedia annotation method and apparatus
US5600775A (en) * 1994-08-26 1997-02-04 Emotion, Inc. Method and apparatus for annotating full motion video and other indexed data structures
US5596705A (en) * 1995-03-20 1997-01-21 International Business Machines Corporation System and method for linking and presenting movies with their underlying source information

Also Published As

Publication number Publication date
MXPA99000549A (en) 2003-09-11
AU3724497A (en) 1998-02-09
JP2001502858A (en) 2001-02-27
EP1027660A1 (en) 2000-08-16
WO1998002827A1 (en) 1998-01-22

Similar Documents

Publication Publication Date Title
JP3185505B2 (en) Meeting record creation support device
US7739255B2 (en) System for and method of visual representation and review of media files
US6956593B1 (en) User interface for creating, viewing and temporally positioning annotations for media content
US6332147B1 (en) Computer controlled display system using a graphical replay device to control playback of temporal data representing collaborative activities
EP0774719B1 (en) A multimedia based reporting system with recording and playback of dynamic annotation
US7672864B2 (en) Generating and displaying level-of-interest values
US6938029B1 (en) System and method for indexing recordings of observed and assessed phenomena using pre-defined measurement items
US20030078973A1 (en) Web-enabled system and method for on-demand distribution of transcript-synchronized video/audio records of legal proceedings to collaborative workgroups
US5717869A (en) Computer controlled display system using a timeline to control playback of temporal data representing collaborative activities
US6789109B2 (en) Collaborative computer-based production system including annotation, versioning and remote interaction
US5786814A (en) Computer controlled display system activities using correlated graphical and timeline interfaces for controlling replay of temporal data representing collaborative activities
US7873258B2 (en) Method and apparatus for reviewing video
US20030124502A1 (en) Computer method and apparatus to digitize and simulate the classroom lecturing
US20050160113A1 (en) Time-based media navigation system
US20070101266A1 (en) Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing
KR20060128022A (en) Automated system and method for conducting usability testing
JP2000090121A (en) Media browser, media file browsing method and graphical user interface
US20040177317A1 (en) Closed caption navigation
CA2260077A1 (en) Digital video system having a data base of coded data for digital audio and video information
JP2001306599A (en) Method and device for hierarchically managing video, and recording medium recorded with hierarchical management program
Knoll et al. Management and analysis of large-scale video surveys using the software vPrism™
Roy et al. NewsComm: a hand-held interface for interactive access to structured audio
US20040056881A1 (en) Image retrieval system
US20140250056A1 (en) Systems and Methods for Prioritizing Textual Metadata
JP2001324988A (en) Audio signal recording and reproducing device and audio signal reproducing device

Legal Events

Date Code Title Description
FZDE Discontinued
FZDE Discontinued

Effective date: 20020711