EP1027660A1 - Digital video system with database for coded data for audio and video information
- Publication number
- EP1027660A1 (application EP97934108A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- digital
- video
- coding
- information
- video system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
- G11B27/105—Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4884—Data services, e.g. news ticker for displaying subtitles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
- H04N21/8405—Generation or processing of descriptive data, e.g. content descriptors represented by keywords
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8455—Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8545—Content authoring for generating interactive applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/858—Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
Definitions
- This invention relates to a digital video system and method for manipulating digital video information.
- U.S. Patent No. 5,467,288 to Fasciano et al. issued November 14, 1995, is directed to a digital audio workstation for the audio portions of video programs.
- The Fasciano workstation combines audio editing capability with the ability to immediately display video images associated with the audio program. An operator's indication of a point or segment of audio information is detected and used to retrieve and display the video images that correspond to the indicated audio programming.
- The workstation includes a labeling and notation system for recording digitized audio or video information. It provides a means for storing, in association with a particular point of the audio or video information, a digitized voice or textual message for later reference regarding that information.
- U.S. Patent No. 5,045,940 to Peters et al. issued September 3, 1991 is directed to a data pipeline system which synchronizes the display of digitized audio and video data regardless of the speed with which the data was recorded on its linear medium.
- the video data is played at a constant speed, synchronized by the audio speed.
- the above systems do not provide for the need to analyze, index, annotate, store and retrieve large amounts of video information. They cannot support an unlimited quantity of video. They do not permit a transcript to be displayed simultaneously with video or permit ease of subtitling. Subtitling is a painstaking and labor intensive process for the film industry and an impediment to entry into foreign markets.
- a digital video system comprising coding and control means, adapted to receive digital reference video information, for coding the digital reference video information to generate coded data; and coded data storing means for storing the coded data from the coding and control means.
- FIG. 1 A is a functional block diagram of a preferred embodiment of the present invention
- FIG. 1B is a functional block diagram of the coding and control means shown in FIG. 1A;
- FIG. 1C is a chart showing the structure of the coded data store of FIG. 1A for indexing data
- FIG. 1D is a software flowchart of the preferred embodiment of the present invention.
- FIG. 1E is a map of time reference information
- FIG. 2A is a drawing of the main button bar of the present invention.
- FIG. 2B is a diagram of the manager button bar of the present invention.
- FIG. 2C is a diagram of the application tool bar of the present invention.
- FIG. 3 is a diagram of the user list window of the user module of the present invention.
- FIG. 4 is a diagram of the user detail window of the user module of the present invention.
- FIG. 5 is a table showing the coding and transcription rights of the user detail window of the user module of the present invention.
- FIG. 6 is a table of the system management rights of the user detail window of the user module of the present invention.
- FIG. 7 is a diagram of the module sub-menu of the study module of the present invention.
- FIG. 8 is a diagram of the study list window of the study module of the present invention.
- FIGS. 9A and 9B are diagrams of the study detail window of the study module of the present invention.
- FIG. 10A is a diagram of the study outline of the study detail window of the study module of the present invention before dragging a characteristic
- FIG. 10B is a diagram of the study outline of the study detail window of the study module of the present invention after dragging a characteristic
- FIG. 11 is a diagram of the select an event/sampling method choice menu for creating a new event type and opening the event type detail window of the present invention
- FIG. 12 is a diagram illustrating creating a new pass in the study outline of the study detail window of the study module of the present invention.
- FIG. 13 is a diagram of the event type detail window of the study module of the present invention.
- FIG. 14 is a diagram of the characteristic detail window of the study module of the present invention.
- FIG. 15 is a diagram of the unit selection window of the study module of the present invention.
- FIG. 16 is a diagram of the use units from other study window of the study module of the present invention.
- FIG. 17 is a diagram of the unit list window of the unit module of the present invention.
- FIG. 18 is a diagram of the unit detail window of the unit module of the present invention.
- FIG. 19 is a table of the palettes which may be opened over the video window of the present invention.
- FIG. 20 is a diagram of the video window of the present invention
- FIG. 21A is a diagram of the title area of the video window of the present invention
- FIG. 21B is a diagram of the video area of the video window of the present invention.
- FIG. 21C is a diagram of the mark area of the video window of the present invention.
- FIG. 21D is a diagram of the instance area of the video window of the present invention.
- FIG. 22 is a diagram of the mark area of the video window of the present invention.
- FIG. 23 is a diagram of the instance area of the video window of the present invention.
- FIG. 24 is a diagram of the List Area of the video window
- FIG. 25 is a diagram of the List Area with two transcripts displayed
- FIG. 26 is a diagram of the Select an Outline window
- FIG. 27 is a diagram of the outline description window
- FIG. 28 is a diagram of the outline palette
- FIG. 29 is a diagram of the outline item window
- FIG. 30 is a diagram of the sample definition window
- FIG. 31 is a diagram of the sample palette
- FIG. 32 is a diagram of the sample information window
- FIG. 33 is a diagram of the unit analysis window
- FIG. 34 is a diagram of the define unit variable window
- FIG. 35 is a diagram of the define event variable window
- FIG. 36 is a diagram of the instance analysis window
- FIG. 37 is a diagram of the define analysis variable window
- FIG. 38 is a diagram of the search window contents common for text and event instance searches.
- FIG. 39 is a diagram of the event instance search window; and FIG. 40 is a diagram of the text search window.
- Referring to FIG. 1A, there is shown a digital video system in accordance with the preferred embodiment of the present invention, including coding and control means 1 for coding digital reference video information and generating coded data, and coded data store 2 for storing the coded data from the coding and control means.
- the coding and control means 1 is adapted to receive digital reference video information from video reference source 3.
- the coding and control means 1 is connected via a databus 5 to the coded data store 2.
- the coding and control means 1 includes a general multipurpose computer which operates in accordance with an operations program and an applications program.
- An output 6, which may be a display, is connected to an input/output interface.
- the video reference source 3 may be a video cassette recorder such as a SONY model EV-9850.
- the coding and control means 1 may be an Apple Macintosh 8500/132 computer system.
- the coded data store 2 may be a hard disk such as a Quantum XP32150 and a CD-ROM drive such as a SONY CPU 75.5-25.
- the output 6 may be a display monitor such as an Apple Multiple Scan 17 M2A94.
- video information from a video reference source 3 may be digitized by digital encoder 9 and compressed by compressor 10.
- the digital video information may be stored in digital storage means 11. Alternatively, if the video information is already digitized, it may be directly stored in digital storage means 11.
- Digital video information from digital storage means 11 may be decoded and decompressed by decode/decompression means 12 and input to the coding and control means 1.
- the video reference source 3 may be an analog video tape, a camera, or a video broadcast.
- the coding and control means 1 may generate coded data automatically, or by interactive operation with a user, by interactive operation with a user in real time, or semi-automatically. For semiautomatic control, the user inputs parameters.
- the coding and control means performs the function of indexing only. Indexing is the process through which derivative information is added to the reference video information or stored separately. This derivative information provides the ability to encode instances of events and/or conduct searches based on event criteria.
- Reference information is video or audio information, such as a video tape and its corresponding audio sound track.
- Derivative information is information generated during the coding process such as indices of events in the video, attributes, characteristics, choices, selected choices and time reference values associated with the above. Derivative information also includes linking data generated during the coding process which includes time reference values, and unit and segment designations.
- Additional “information” is information that is input to the video system in addition to reference information. It includes digital or analog information such as a transcript of audio reference information, notes, annotations, a static picture, graphics, a document such as an exhibit, or input from an oscilloscope.
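- As a rough sketch of how these three categories of information might be represented in an implementation, the following Python dataclasses mirror the definitions above; every field name here is an illustrative assumption, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReferenceInfo:
    """Reference information: the video or audio source itself."""
    source_id: str        # e.g. a tape, unit, or file identifier (assumed)
    media_type: str       # "video" or "audio"

@dataclass
class DerivativeInfo:
    """Derivative information generated during the coding process."""
    event_type: str                       # e.g. "Questions"
    characteristic: Optional[str] = None  # e.g. "administrative questions"
    choice: Optional[str] = None          # the selected choice
    time_in: Optional[float] = None       # time reference values (mark in/out)
    time_out: Optional[float] = None
    unit: Optional[str] = None            # unit designation (linking data)
    segment: Optional[str] = None         # segment designation (linking data)

@dataclass
class AdditionalInfo:
    """Additional information input alongside the reference information."""
    kind: str                             # "transcript", "note", "exhibit", ...
    payload: bytes                        # the digitized content
    time_ref: Optional[float] = None      # filled in once synchronized
```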
- The coding and control means 1 may be used interactively by a user to mark the start point of a video clip, and a time reference value representing the mark-in point is generated as coded data and stored in the coded data store 2. Further, the user may optionally interactively mark the end point of a video clip, and a time reference value representing the mark-out point is generated as coded data and stored in the coded data store 2. The user may interactively mark an event type in one pass through the digital reference video information. The user may make plural passes through the reference video information to mark plural event types. The mark-in and mark-out points are stored in indices for event types.
- The coded data that is added may be codes of data that are transparent to a standard video player but which are capable of interpretation by a modified player.
- the coded data may be a time reference value indicating the unit of digital reference video information. Additionally the coded data may be a time reference value indicating the segment within a unit of digital reference video information. Thus, unlimited quantities of digital reference video information may be identified and accessed with the added codes. There may be more than one source of reference video information in the invention.
- An audio reference source 4 is optional.
- The digital system of the present invention may operate with only a source of video reference information 3.
- A source of audio reference information 4, a source of digital additional information XD 13, or a source of analog additional information XA 14 may be added.
- the audio reference information is input to digital storage means 11. If the audio reference information from source 4 is already digital, it may be directly input and stored in digital storage means 11. Alternatively, if the audio reference information from source 4 is analog, the information may be digitized and compressed by digital encoder 7 and compression means 8 before being stored in digital storage means 11. The digitized audio reference information is output from digital storage means 11 to coding and control means 1 via decode/decompression means 12. The compression and decompression means 8 and 12 are optional.
- the audio reference sources 4 may be separate tracks of a stereo recording. Each track is considered a separate source of audio reference information.
- the video reference source 3 and the audio reference source 4 may be a video cassette recorder such as SONY EVO-9850.
- The digital video encoder 9 and compressor 10 may be an MPEG-1 encoder such as the Future Tel Prime View II.
- the digital audio encoder 7 and compressor 8 may be a sound encoder such as the Sound Blaster 16.
- a PC-compatible computer system such as a Gateway P5-133 stores the data to a digital storage means 11 such as a compact disc recording system like a Hyundai CDR00 or a hard disk like a Seagate ST72430N.
- The coding and control means 1 codes the reference video and audio information to generate coded data. Whenever there is more than one source of information, such as an audio reference source 4, a source of additional digital information 13, or a source of additional analog information 14, the coding and control means 1 performs a linking function. Linking is the process by which information from different sources is synchronized or correlated. This is accomplished through the use of time reference data. Linking provides the ability to play and view video, audio and additional information in a synchronized manner. The linking data permits instant random access of information. The coding and control means 1 performs the linking function in addition to the indexing function discussed above. Linking and indexing together are referred to as 'coding'.
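- A minimal sketch of the linking idea, assuming each source carries normalized time reference values: any point in one source can then be resolved to the corresponding point in another by table lookup. Names and data below are hypothetical.

```python
import bisect

# Hypothetical linking table: sorted normalized time reference values,
# each paired with the transcript utterance beginning at that time.
link_times = [0.00, 0.10, 0.25, 0.40]
utterances = ["Good morning.", "Any questions?", "One about grades.", "Noted."]

def utterance_at(t: float) -> str:
    """Instant random access: the utterance linked to video time t."""
    i = bisect.bisect_right(link_times, t) - 1
    return utterances[max(i, 0)]

print(utterance_at(0.30))  # -> "One about grades."
```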
- When there is more than one source of information, the coding and control means 1 performs linking and/or indexing.
- the linking data which comprises time reference values is stored as coded data in coded data store 2. Additionally, the indices of data that is added by the process of coding is stored in coded data store 2.
- the digital video system may include a source of additional information which may be analog or digital.
- The additional information from source 14 may be digitized by digital encoder 15.
- the additional information from source 13 or 14 may be the transcript of the audio reference information, notes or annotations regarding the audio or video reference information, a static picture, graphics, or a document such as an exhibit for a videotaped deposition with or without comments.
- the source of the additional information may be a scanner or stored digital information or a transcript of a deposition being produced in real time by a stenographer.
- the annotations or notes may be produced in real time also.
- the coding and control means codes the reference video information, reference audio information, and additional analog or digital information to generate coded data which includes linking data and indexing data.
- the coded data which is generated is attribute data.
- the attribute data may be an event type.
- Event types may be 'Questions,' 'Pause Sounds,' or 'Writing on Board' for a study of a video of a teacher's teaching methods. These are events which take place in the video.
- the attribute data may regard a characteristic associated with an event type. This creates an index of characteristics.
- Characteristics for the event type 'Questions' may be 'administrative questions,' 'questions regarding discipline,' or 'content of questions.'
- the attribute data may include a plurality of choices for a characteristic.
- Choices for the characteristic 'administrative questions' may include 'administrative questions regarding attendance,' 'administrative questions regarding grades,' or 'administrative questions regarding homework.'
- A fourth table designates a selected choice of a plurality of possible choices. Thus, for example, the selection may be 'administrative questions regarding grades.'
- A fifth table is created which includes time reference values associated with each instance of the event type. So, for example, an index is created of the time reference values associated with each time a question is asked for the event type 'Questions.'
- the user interactively marks the mark in point of the video reference information that designates each instance of a question being asked. Additionally, the user may optionally mark the mark out point when the question is finished being asked.
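- The interactive marking step can be pictured as appending each mark-in point (and optional mark-out point) to a per-event-type index. The function and structure below are assumptions, not the patent's own interface.

```python
# Index of event instances: time reference values stored per event type.
instances: dict = {}

def mark_instance(event_type, time_in, time_out=None):
    """Record one instance; the mark-out point is optional."""
    instances.setdefault(event_type, []).append((time_in, time_out))

# One pass through the video marking the "Questions" event type:
mark_instance("Questions", 12.4, 15.0)   # question asked and finished
mark_instance("Questions", 47.9)         # mark-in point only
```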
- the digital video system of the invention also permits automatic or semi-automatic coding and control.
- the coding and control means 1 may create an index of the time reference values corresponding to each time the video scene changes.
- the user may input the parameter N. For example, the user may change N to 5 and change the operation of the system so that the coding and control means 1 compares five frames to determine if a scene has been changed.
- the user may change the threshold amount T, from 50% to 20% for example.
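- One plausible reading of the automatic scene-change rule, as a sketch: frames N apart are compared and a change is flagged when more than a fraction T of the pixels differ. The pixel-difference metric is an assumption; the patent does not fix one.

```python
import numpy as np

def scene_changes(frames, n=2, t=0.5):
    """Return indices where a scene change is detected.

    frames: list of equally sized numpy arrays (decoded video frames)
    n: number of frames spanned by each comparison (the parameter N)
    t: fraction of differing pixels that signals a change (the threshold T)
    """
    changes = []
    for i in range(n - 1, len(frames)):
        # Fraction of pixels differing across the comparison window.
        diff = np.mean(frames[i] != frames[i - (n - 1)])
        if diff > t:
            changes.append(i)  # stored as a time reference value in practice
    return changes
```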
- The coding and control means 1 includes the ability to search for instances of an event type.
- the coding and control means 1 may search for instances of one event type occurring within a time interval Y of instances of a second event type.
- the system can determine each instance when one event occurred within a time interval Y of a second event.
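- The two-event search reduces to a comparison over stored time reference values: report every instance of the first event type that falls within interval Y of some instance of the second. Data values below are invented.

```python
def co_occurrences(a_times, b_times, y):
    """Instances of event A occurring within time interval y of event B."""
    return [ta for ta in a_times if any(abs(ta - tb) <= y for tb in b_times)]

field_goals = [12.0, 95.5, 130.2]   # time reference values for event A
timeouts = [90.0, 300.0]            # time reference values for event B
print(co_occurrences(field_goals, timeouts, y=30.0))  # -> [95.5]
```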
- the coding and control means 1 includes an alarm feature.
- An alarm may be set at each instance of an event type.
- the coding and control means 1 controls a system action.
- the system may position the video and play. Other system actions such as stopping the video, highlighting text of a transcript or subtitling may occur.
- The coded data store 2 may be a relational database, an object database or a hierarchical database.
- the coding and control means 1 performs the linking function when there is more than one source of information.
- Linking data is stored to relate digital video and digital audio information.
- Linking data may also link digital video or digital audio information to additional information from sources 13 and 14.
- Linking data includes time reference values. Correlation and synchronization may occur automatically, semi-automatically or interactively. Synchronization is the addition of time reference information to data which has no time reference. Correlation is the translation or transformation of information with one time base to information with another time base to ensure that they occur at the same time.
- the digital system of the present invention operates on time reference values that are normalized unitless values.
- time reference values are added to information that includes no time reference such as a document which is an exhibit for a videotaped deposition.
- both sources of information include time reference information
- the correlation process transforms one or both to the time reference normalized unitless values employed by the system.
- One or both sources of information may be transformed or points may be chosen that are synched together.
- the time reference information of one source can be transformed to a different time scale by a transformation function.
- the transformation function may be linear, non-linear, continuous, or not continuous. Additionally, the transformation function may be a simple offset. The transformation function may disregard blocks of video between time reference values, for skipping advertising commercials, for example.
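- These transformation functions lend themselves to small sketches. The offset, linear, and block-skipping variants below are illustrative readings, not formulas from the patent.

```python
def offset_transform(t, offset):
    """Simple offset between two time bases."""
    return t + offset

def linear_transform(t, scale, offset):
    """Linear correlation, e.g. compensating a constant-rate drift."""
    return scale * t + offset

def skip_blocks(t, skipped):
    """Discontinuous transform that disregards blocks of video between
    time reference values (e.g. skipping advertising commercials).
    skipped: list of (start, end) pairs in the source time base."""
    drop = sum(min(end, t) - start for start, end in skipped if t > start)
    return t - drop
```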
- Time codes with hour, minute, second and frame designations are frequently used in the film industry.
- the coding and control means 1 correlates these designations to the normalized unitless time reference values employed by the system.
- the coding and control means 1 may transform a time scale to the time code designation with hour, minute, second and frame designations.
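- A sketch of the correlation between such time codes and the system's normalized unitless values, assuming a fixed frame rate and a known total length (both hypothetical parameters):

```python
def timecode_to_normalized(h, m, s, f, fps=30, total_frames=108000):
    """Map an hour:minute:second:frame designation to a unitless value in [0, 1]."""
    frames = ((h * 60 + m) * 60 + s) * fps + f
    return frames / total_frames

def normalized_to_timecode(x, fps=30, total_frames=108000):
    """The reverse transform, back to an h:m:s:f designation."""
    frames = round(x * total_frames)
    s, f = divmod(frames, fps)   # whole seconds, leftover frames
    m, s = divmod(s, 60)
    h, m = divmod(m, 60)
    return h, m, s, f
```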
- the coding and control means 1 may correlate two sources of information by simply checking the drift over a time interval and selecting points to synch the two information sources together.
- The coding function of the digital system of the present invention is not just an editing function. Information is added. Indices are created. Further, a database of linking data is created. The original reference data is not necessarily modified.
- the coded data store may be in any format including edit decision list (EDL) which is the industry standard, or any other binary form.
- The coded data store 2 stores the database indices which are created, the linking data, and data from the additional sources 13 and 14, which may include static pictures, graphics, documents such as deposition exhibits, and text which may include transcripts, translations, annotations, or closed-caption data.
- Subtitles are stored as a transcript. There may be multiple transcripts or translations or annotations or documents. This permits multiple subtitles.
- FIG. IB illustrates the coding and control means 1 of FIG. 1A.
- The coding and control means includes controller 16. Controller 16 is connected to derivative data coding means 17 and correlation and synch means 18. Controller 16 is also connected to the coded data store 2 and to the output 6. Digital information from the digital storage means 11 is input to the derivative data coding means 17. If information from one source only is input to the derivative data coding means 17, only the indexing function is performed. If information from two sources is input to the derivative data coding means 17, indexing and linking are performed.
- The coding and control means 1 may further include correlation and synch means 18 for receiving additional data XD and XA.
- the correlation and synch means 18 correlates data with a time reference to the video information from the digital storage means 11 and synchronizes data without a time reference base to the digital video information from the digital storage means 11.
- Control loop 19 illustrates the control operation of the controller 16. The user may be part of control loop 19 in interactive or semiautomatic operation.
- Control loop 20 illustrates the control function of controller 16 over correlation and synch means 18. The user may be a part of control loop 20 in interactive and semi-automatic operation.
- control loops 19 and 20 also include input/output interface devices which may include a keyboard, mouse, stylus, tablet, touchscreen, scanner or printer.
- FIG. 1C is a chart showing the structure of the coded data store 2 of FIG. 1A for indexing data.
- FIG. 1D is a software flowchart. The following define the indices of the coded data store 2.
- Characteristics are variables which are applicable to a particular event type. An example would be the event type 'Teacher Question.'
- CharChoices contains valid values of the parent Characteristics variable. For example, for the characteristic 'Difficulty Level,' CharChoices would contain the valid values that may be entered.
- CharChoices serves as a data validation tool to confine user data entry to a known input that is statistically analyzable.
- Event Types Stores model information of the event code such as whether the code can have an in and out point. Serves as a parent to the characteristic table which includes possible variables to characterize the event type.
- Instances Contains instances of particular event types with time reference information.
- InstCharChoice Stores the actual value attributed to a characteristic of a particular event instance. For example, one instance of the teacher question might have a value recorded in the characteristic 'Difficulty Level.'
- OutlineHdng Stores the major headings for a particular outline.
- OutlineSubHdng Stores actual instances that are contained in an outline. These instances were originally coded and stored in the instance table, but once copied to an outline they are completely independent of the original instance.
- Pass Filters Stores filter records which are created by the sampling process.
- Samples Stores samples for the purposes of further characterization.
- These instances are either a random sample of previously coded instances or computer-generated time slices created using sampling methodologies.
- Segments Corresponds to the physical media where the video is stored. This table is a 'many' to the parent Units table.
- SeqNums Stores sequence numbers for all tables.
- Sessions Keeps track of coding for each study down to the pass and unit level. Therefore, a user may go back to his/her previous work and resume from where he/she left off.
- Studies Pass Stores information for a particular pass in the study such as pointers to filters and locked status for sampling.
- StudyUnits Contains references to units that are attached to a particular study. Since there may be multiple units for each study and there may be multiple studies that utilize a particular unit, this table functions as a join table in the many-to-many relationship.
- StudyEvent Stores information relevant to the use of a particular event type in a particular study and a particular pass. Since there may be multiple event types for each study and there may be multiple studies that utilize a particular event type, this table functions as a join table in the many-to-many relationship.
- Transcribe Stores the textual transcript, notes, and time reference values for each utterance corresponding to a unit.
- Units Parent table of the viewable videos.
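- The table descriptions above have the flavor of a relational schema. The condensed sqlite sketch below captures the parent-child and join relationships; the column names are guesses from the descriptions, not the patent's.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE EventTypes (id INTEGER PRIMARY KEY, code TEXT, name TEXT,
    has_out_point INTEGER);                       -- model info for the event code
CREATE TABLE Characteristics (id INTEGER PRIMARY KEY,
    event_type_id INTEGER REFERENCES EventTypes(id), code TEXT, name TEXT);
CREATE TABLE CharChoices (id INTEGER PRIMARY KEY,
    characteristic_id INTEGER REFERENCES Characteristics(id), value TEXT);
CREATE TABLE Instances (id INTEGER PRIMARY KEY,
    event_type_id INTEGER REFERENCES EventTypes(id),
    time_in REAL, time_out REAL);                 -- time reference information
CREATE TABLE InstCharChoice (instance_id INTEGER REFERENCES Instances(id),
    char_choice_id INTEGER REFERENCES CharChoices(id));
CREATE TABLE StudyUnits (study_id INTEGER, unit_id INTEGER);  -- join table
CREATE TABLE StudyEvent (study_id INTEGER, event_type_id INTEGER,
    pass INTEGER);                                -- join table
""")
```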
- the coded data store 2 stores data representing time reference values relating the digital audio information to the digital video information and vice versa. Accordingly, for any point in the video information, the corresponding audio information may be instantly and randomly accessed with no time delay. Additionally, for any point in the audio information, the corresponding video frame information may be instantly and randomly accessed with no time delay.
- the coded data store 2 stores attribute data.
- the attribute data is stored in an index and is derivative data that is added during the coding process.
- The attribute data may be an event type, i.e., any action shown in the video, such as a person in the video raising his hand, standing up, or making a field goal. Attribute data may be time reference data indicating instances of an event type.
- the attribute data may also include a characteristic associated with an event, such as directness or acting shy.
- The attribute data may also include a plurality of choices for a characteristic, such as being succinct or being vague. It may also be the selected choice among the plural possible choices.
- the coded data store 2 stores time reference data corresponding to the attribute data.
- the coded data store 2 stores data representing the text of a transcript of the digitized audio information.
- a video deposition can be digitized.
- the video information originates at reference source 3 and the audio information originates at reference source 4.
- the video and audio information may be digitized and/or compressed via digital encoders 7 and 9 and compressors 8 and 10 and stored in a digital storage means 11.
- A transcript of the deposition may be stored in coded data store 2. More than one transcript, foreign language translations, for example, may be stored.
- Coding and control means 1 accesses video information from digital storage means 11, audio information from digital storage means 11, and the transcript information from coded data store 2, and simultaneously displays the video and the text of the transcript on output display 6. Additionally, the audio is played.
- the video is displayed in one area of a Video Window called the video area and the text of the transcript is displayed in a transcript area. More than one transcript may be displayed.
- The Video Window is illustrated in FIG. 20 and is described in detail later.
- Subtitles can be added to the video information and displayed on output display 6 in the same area as the video.
- the viewer can view the video information with subtitles and simultaneously watch the text of the transcript on output display 6.
- the attribute data that is stored may be regarding video scene changes.
- The time reference data of the scene change is stored in the coded data store 2. This may be performed interactively, automatically or semi-automatically. If there are a number of times that an event occurs, the time reference values associated with each occurrence of the event are stored in the coded data store 2.
- the present invention has a presentation ability where a presentation may be displayed on output display 6.
- The video associated with each stored time reference value is displayed in sequence to create a presentation. For example, in an application dealing with legal services and videotaped depositions, every time a witness squints his eyes may be tracked by storing a time reference value associated with each occurrence of the event during the coding process.
- the time reference values represent the times at which the pertinent video portion starts and finishes.
- the digital system of the invention includes search abilities where a word or a phrase may be searched in the text of the transcript of the digitized audio information. A search of notes, annotations or a digitized document for a word or phrase may also be performed. Additionally, the present invention includes the ability to perform statistical analysis on the attribute data. Random sampling of instances of an event type can be performed. Coding and control means 1 accesses coded data store 2 and analyzes the data in accordance with standard statistical analysis.
- the invention includes a method of analyzing video information including storing digital video information, storing digital audio information, storing coded data linking the digital video and digital audio information, storing coded data regarding events in indices, and computing statistical quantities based on the coded data.
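- As a sketch of the kind of statistical quantity this method supports: with instances stored as (mark-in, mark-out) time reference pairs, event frequency and mean duration fall out directly. The data below are invented.

```python
from statistics import mean

# Coded instances of one event type: (time_in, time_out) reference pairs.
question_instances = [(12.4, 15.0), (47.9, 52.3), (88.0, 90.5)]

count = len(question_instances)
durations = [t_out - t_in for t_in, t_out in question_instances]
print(count, round(mean(durations), 2))  # frequency and mean duration
```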
- the present invention results in a video analysis file for a multimedia spreadsheet containing time-dependent information linked to video information.
- the video information and textual (transcript, annotations or digitized documents) information can be searched.
- the video information may be stored on a CD-ROM disk employing the MPEG-1 video standard. Other video standards may be employed. Additionally, other storage media may be employed.
- the coded data store 2 and digital storage means 11 illustrated in FIG. 1A may actually be parts of the same memory.
- Analog videotapes may be converted into digital video format by a standard digitized video transfer service that is fast, inexpensive, and handles high volume.
- the digital video service digitizes the video, compresses it and synchronizes it with the audio.
- The system may digitize the video and audio information from reference sources 3 and 4.
- the source of information may be a commercial, broadcast or analog video tape.
- The present invention permits video analysis so that the user may view, index, link, organize, mark, annotate and analyze video information. This is referred to as 'coding.'
- buttons and controls permit the marking, coding and annotation of the video.
- a transcription module permits synchronized subtitles. Multiple subtitles are possible, which is of importance to the foreign market for films which may require subtitles for different languages.
- The present invention has note-taking abilities. Searches may be performed for the video information, notes, the transcript of the audio information, coded annotations or digitized documents.
- a presentation feature permits the selection and organization of video segments into an outline to present them sequentially on a display or to record them to a VCR or a computer file.
- Complex coding and annotations are performed in several passes such that multiple users may code and annotate the digitized information.
- One user may make several passes through the video for coding, marking and annotating or several users may each make a pass coding, marking and annotating for separate reasons.
- Information may be stored and displayed in a spreadsheet format and/or transferred to a statistical analysis program, and/or to a graphics program. Types of statistical analyses which may be conducted, for example, are random sampling, sequential analysis, cluster analysis and linear regression. Standard algorithms for statistical analysis are well known. Additionally, the information may be input to a project tracking program or standard reports may be prepared. Spreadsheets and graphs may be displayed and printed.
- the present invention has use in video analysis for research and high end analysis, the legal field and the sports market.
- the present invention would be useful in research in fields of behavior, education, psychology, science, product marketing, market research and focus groups, and the medical fields.
- teaching practices may be researched.
- Verbal utterances are transcribed, multiple analysts mark and code the events and annotate the video information for verbal and nonverbal events, lesson content and teacher behavior.
- The transcribed utterances, marks, codes, and annotations are linked to the video and stored. The information may be consolidated, organized, presented or input for statistical analysis and interpretation.
- Other fields of research where the invention has application are industrial process improvement, quality control, human factors analysis, software usability testing, industrial engineering, and human/computer interactions evaluations.
- Videos of operators at a computer system can be analyzed to determine whether the computer system and software are user friendly.
- the present invention would be useful in legal services where videotaped depositions may be annotated and analyzed. Deposition exhibits may be stored in the coded data store with or without notes on the documents.
- the present invention includes applications and operations software, firmware, and functional hardware modules such as a User Module, a Menu Manager, a Unit Module, a Study Module, a Video Window, a Transcribe Mode, a List Manager, an Outline Presentation Feature, a Sampling Feature, an Analysis Module and a Search Module. Reports may be created and output.
- a unit is composed of a video and transcript data.
- A unit may span several tapes, CDs or disks. These media are referred to as segments, and a unit has at least one segment.
- the present invention may handle multiple segments per unit. This permits the present invention to accommodate an unlimited quantity of video information.
- a unit may include plural transcripts stored in memory.
- a transcript is the text of speech in the video, foreign language translation, subtitles or description or comments about the video.
- a study includes a collection of units.
- a study is defined to specify coding rules for the units associated with it, for example, what event types and characteristics are to be recorded.
- a unit may be associated with one or more studies.
- a session is a specific coding pass for a specific unit by one user.
- the number of sessions that are created for a study is equal to the number of units included in the study multiplied by the number of coding passes defined for the study.
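- With hypothetical numbers, the session arithmetic works out as follows:

```python
units = 12     # units included in the study
passes = 3     # coding passes defined for the study
sessions = units * passes
print(sessions)  # -> 36 sessions created for the study
```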
- a session must be open in order to go into code mode on the coding window. If no session is open, the user is prompted to open one.
- the User Module includes all windows, controls, and areas that are used to define users, control security, logon, and do primary navigation through the interactive digital system.
- the User Module is briefly mentioned here for the purpose of describing logon and is explained in more detail later.
- The interactive video analysis program of the present invention requires a logon before granting access to the program functions and data.
- the purpose of the logon process is not only to secure the database content, but also to identify the user, assign access privileges, and track information such as how long a user has been logged on.
- The user is assigned access privileges and presented with the program's main button bar, which contains icons that allow entry to various parts of the program. The number and type of icons that appear on the button bar for a given user depend on the privileges granted in his user record.
- the main button bar, or alternatively the application tool bar, is part of the Menu Manager.
- the main button bar is illustrated in FIG. 2A.
- The manage button bar of FIG. 2B is accessed from the main button bar of FIG. 2A and is an extension of the main button bar. Access to commonly accessed modules is provided by the main and manage button bars.
- FIG. 2C replaces the main button bar and manage button bar of FIGS. 2A and 2B.
- Icon 21 represents 'View' and icon 22 represents 'Code'.
- Area 28 displays the units, for example, which unit is current or permits selection of previously defined units.
- Area 29 represents the 'outline' feature and area 30 is directed to 'Sessions' selection.
- The application-wide tool bar provides access to the most commonly accessed modules, including Video-View Mode, Video-Code Mode, Video-Transcribe Mode, Search Module, Unit Module, Analysis Module, Help Module, Session Selection, Unit Selection, and Outline Selection.
- The Video-View Mode opens the Video Window, making the view mode the active module. If the user has never accessed a unit record, the user will be presented with a unit selection dialog.
- the Video-Code Mode opens the Video Window, making the code mode the active module. If the user has never accessed a session, the user will be presented with a session selection dialog.
- The Video-Transcribe Mode opens the Video Window, making the transcribe mode the active module.
- When the transcribe mode is activated, the transcription looping palette is displayed automatically.
- the Search Module opens the search window, making it the current module.
- The Unit Module button opens the Unit Module, making it the current module.
- the Study Module opens the Study Module, making it the current window.
- the Analysis Module opens the Analysis Module, making it the current window.
- the Help Module opens the interactive video analysis help system.
- the session selection popup provides the ability to change the current session when in Video-Code Mode.
- The unit selection popup provides the ability to change the current unit when in Video-View Mode.
- the outline selection popup provides the ability to change the current outline when in Video-Transcribe Mode.
- FIG. 3 illustrates the user list window.
- the user list window lists the users.
- the user detail window of the User Module is illustrated in FIG. 4. It is the primary window that contains all data needed to define a user, including information about the user and security privileges. This window is presented when adding a new user, or when editing an existing user.
- The fields and controls in the window include the window name 'User Detail,' the first name, the last name, the user code, phone number, e-mail address, department, custom fields, whether logged on now, last logon date, number of logons, logged hours reset count, comments, logon id, set password, and login enabled.
- the user detail window includes coding and transcription rights area 31. This is a field of four check boxes that grant privileges to code video (create instances) or edit the transcription text as shown in the table of FIG. 5.
- the user detail window also includes system management rights area 32. This area is a field of five check boxes that grant privileges to manage setup of the study and various other resources as shown in the table of FIG. 6.
- The user detail window further includes the make-same-as button, navigation controls, a print user detail report button and a cancel/save user button.
- the collection of windows and procedures that together allow definition of studies, event types, characteristics, choices, units and samples comprise the "Study Module” .
- the Study Module is reached from the main button bar or the applications tool bar that is presented when the interactive video analysis program is initiated.
- a study can be thought of as a plan for marking events that are seen in the video or in the transcription text of the audio information.
- A study contains one or more event types, which are labels for the events that are to be marked. Each event type may also have one or more characteristics, which are values recorded about the event.
- When an event is marked in the video or in the transcript text, it is formally called an "event instance".
- When the project is first initialized, one study is created. A default study is used when the user does not choose to create a predefined coding plan (study), but rather wishes to use the program in a mode where event types can be assigned at will.
- the Study Module may be accessed by selecting the study button from the application tool bar or by selecting study from the module submenu.
- When the module is first opened, the user is presented with a standard find dialog whereby he can search for the specific records with which he wishes to work.
- the find dialog screen is illustrated in FIG. 7.
- Double-clicking on a list item results in opening that item for edit. For example, double-clicking on a study in the studies list window, as illustrated in FIG. 8, results in opening a study for edit in the study detail window.
- the ok/cancel button has the action of returning to the original window.
- the First control goes to the first record in the selection displayed in the list.
- the Prev button goes to the record immediately before the current record in the selection displayed in the list.
- The Next button goes to the record immediately after the current record in the selection displayed in the list.
- A study can be constrained to be a subset of another study. This means that the study can only contain units that were specified in the other study (either all the units, or a subset of the units). If additional units are added to the "parent study", they become available to the constrained study but are not automatically added. Constraining a study to be a subset of another study also means that the event types for the parent study are available as event filters in the sample definition for the constrained study. As explained in detail below, a study is constrained when the "constrain unit selection to be a subset of the specified study" button is checked on the "use units from other study" window as illustrated in FIG. 16. The constraint cannot be added after any sessions have been created for the study. The constraint can be removed at any time as long as the constrained study does not include any event types from the parent study in its sample criteria.
- Every project contains a default study that is created when the project is first created.
- the default study allows entry into code mode of the Video Window shown in FIG. 20 if no formal studies have been defined. Event types and characteristics may be added to the default study at will from the Video Window.
- the default study is maintained from the video window and not from the Study Module, hence, it does not appear in the study listing window shown in FIG. 8. It does appear whenever studies are listed in all other modules.
- A session is always open for the default study, which is called the default session. If no other studies have been created in the project, the default session is opened without prompting when the user goes into code mode on the study window.
Applied Rules
- Units may be added to a study.
- A unit cannot be added to a study once the study has been locked.
- The purpose of the lock is to ensure that the set of units for a specific study does not change once a pass has been locked.
- Studies may be deleted. A study cannot be deleted if it is constrained by another study or if the study itself is locked. A study should not be allowed to be constrained to another study that is not locked yet.
- the studies list window shown in FIG. 8 presents all the studies defined for the project.
- the window displays only the three fields: study name, description, and author. Double-clicking on a study record opens the study detail window for the selected study.
- the study detail window is the primary window that contains all data needed to define a study. This window is presented when creating a new study or when editing an existing study.
- the study detail window is illustrated in FIGS. 9A and 9B.
- the study detail window includes a number of fields and controls.
- Field 41 is the window title. The name of this window is "Study Detail" .
- Field 42 is the study name. In this field the name of the study may be entered.
- Field 43 is the author. This is a non-enterable area that is filled by the program using login data.
- Field 44 is the create date area which includes the date and time when the study was initially created. This is a non-enterable area that is filled by the program when the study record is created.
- Field 45 is the study description which is a scrollable enterable area for text to describe the study.
- Field 46 is the study outline which is a scrollable area that shows the event types, characteristics, and choices created for the study. Event types are shown in bold in FIG. 9A.
- Characteristics are displayed in plain text under each event type. Choices are displayed in italics under each characteristic. Thus, as shown in FIG. 9A the event type is "Question Asked", the characteristic is a “Directness” and the choices are "Succinct” and "Vague”.
- FIG. 9A illustrates a study detail window for a study for video analysis of teaching practices.
- teaching practices may be analyzed by video taping teachers interacting with students in the school room.
- Various event types such as asking questions or raising one's hand or answering a question are analyzed by marking the events in the video.
- event type 46a is displayed in bold with the event code and event type name (e.g., "Questions Asked"); the type of marking associated with the event type (for example, "Vi/T" means "mark Video In Point and text" for each instance); and the pass in which the event type is to be marked (e.g., "1").
- the marking codes are: V: Video In and Out points are to be marked; Vi: Video In point only; T: text.
- Vi/T means the Video In point and the text are to be marked for the event type. If no marking is turned on, then nothing is displayed (for example, see "Answer" in Pass 3 in the illustration).
- double-clicking an event type opens that event type in the event type detail window shown in FIG. 13.
- the characteristic label 46b as shown in FIG. 9A is displayed in plain text with the characteristic code (e.g., "DI"), name of the characteristic, and data entry type (e.g., "select one"). Characteristics are displayed immediately under the event type to which they are associated. When a characteristic is double-clicked, the action is to open that characteristic in the characteristic detail window as shown in FIG. 14.
- the order in which the characteristics are displayed under the event type is also the order in which they are displayed on the Video Window.
- the user can change the order by clicking on a characteristic and dragging it to a point above or below another characteristic belonging to the same event type.
- Characteristics cannot be dragged from one event type to a different event type (for example: the user cannot drag characteristic "Directness” from event type "Question Asked” to event type "Answer”), but characteristics can be dragged from one event type to the same event type that belongs to a different pass through the video (for example: the user can drag characteristic "Effectiveness" from "Answer” in pass 3 to "Answer” in pass 2).
- When a characteristic is moved all associated choice values are moved with the characteristic and retain their same order.
- FIGS. 10A and 10B illustrate dragging a characteristic.
- FIG. 10A illustrates the before condition.
- the characteristic "Appropriateness” in pass 1 will be dragged to pass 2.
- FIG. 10B illustrates the after condition.
- the characteristic "Appropriateness” was dragged from pass 1 to pass 2. The action is to create a new appearance in the event type "Question Asked” in pass 2, with “Appropriateness” underneath it.
- the choice value 46c illustrated in FIG. 9A is displayed in plain text with a user-defined value (e.g., "1") and choice name. Choices are displayed immediately under the characteristic to which they are associated. The user can change the order of choices by clicking on a choice value and dragging it above or below another choice value belonging to the same characteristic. Choice values cannot be dragged from one characteristic to another or between passes.
- the pass separator line 46d shown in FIG. 9A separates the passes through the video being analyzed. If more than one pass has been created, a pass separator line is drawn between the event types of each pass. The pass separator line cannot be dragged or selected.
- Button 47 is the add event type button. The action of this button is to create a new event type and open the event type detail window shown in FIG. 13. The "select an event/sampling method" menu for creating a new event type and opening an event type detail window is illustrated in FIG. 11.
- Button 48 of the study detail window of FIG. 9A is the "remove from study" button. The action of this button is to remove the highlighted item from the study along with all indented items under it.
- removing an event type also removes the associated characteristics and choice values directly under it. If the last event type is removed from a pass, the pass is automatically deleted and removed from the "passes and sampling" area 55 of the study detail window. Pass 1 may not be deleted.
- the pass display area 49 displays the pass to which the highlighted event is assigned. It is also a control tool to select the pass.
- the pass display 49a is a non-enterable area which displays the pass of the currently highlighted event type.
- the pass selector area 49b is a control tool that works only when an event type is selected. Clicking the up-arrow moves the selected event type to the next higher pass. Similarly, clicking the down-arrow has the action of moving the selected event to the next lower pass. If the pass number is set to a value greater than any existing pass, the action is to create a new pass. Each pass must contain at least one event type.
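- the pass selector behavior described above might be modeled as in the following sketch; the names passes, pass_number, and event_types are assumptions, not part of the specification:

```python
# Illustrative sketch; all names are assumed.
def move_event_to_pass(study, event_type, new_pass):
    old_pass = event_type.pass_number
    study.passes.add(new_pass)        # setting a number beyond the last
                                      # existing pass creates a new pass
    event_type.pass_number = new_pass
    # Each pass must contain at least one event type: an emptied pass is
    # deleted automatically, except pass 1, which may not be deleted.
    if old_pass != 1 and not any(e.pass_number == old_pass
                                 for e in study.event_types):
        study.passes.discard(old_pass)
```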
- the show characteristics checkbox 50 when checked, is for displaying all characteristics under the appropriate event type in the study outline area 46 and to enable the "show choices" checkbox.
- the show choices checkbox 51 when checked, displays all choice values under the appropriate characteristic in the study outline area 46.
- the add pass button 52 has the action of creating a new pass.
- FIG. 12 illustrates a newly created pass represented by a separator line and a pass number. New event types will be added to the pass, and existing event types can be dragged to the pass.
- the specified units area 53 of the study detail window of FIG. 9A has the action of presenting the unit selection window shown in FIG. 15.
- the specified units area 53 is a non-enterable text area to the right of the button which displays the number of units selected for the study. The button is disabled when the checkbox titled "Include all units in project" is checked.
- Area 54 includes a unit constraint message. If a constraint is in effect that affects the selection of units, the text describing the constraint is displayed in this area. There are two possible values of the message: "Constrained to the subset of [study]" and "Constrained to include all units in the project". The second constraint is imposed when the checkbox "Include all units in project" 60 is chosen.
- Area 56 is the unit selection description. This area is a scrollable enterable area for text to describe the units selected for the study.
- Area 55 is the "passes and sampling” area. This is a scrollable non-enterable area that displays an entry for each pass with the pass number and its sample mode.
- Area 57 includes navigation controls: First, Prev, Next and Last.
- Button 58 is the print study button which is used to print the study detail report.
- Buttons 59 are the cancel/save study buttons.
- the save button saves all the study data and returns to the studies list window shown in FIG. 8.
- the cancel button returns to the studies list window of FIG. 8 without saving the changes to the study data.
- Checkbox 60 is the "Include all units in project" checkbox which has the action of setting behavior of the study so that all units in the project are automatically included in the study. Units may be added at any time to the project, and they are automatically added to the study.
Event Type Detail Window
- the event type detail window is illustrated in FIG. 13. This window is for entry of all attributes for an event type.
- the window is reached through the study detail window of FIG. 9A when either an event type is double-clicked in the study outline 46 or when a new event is added employing button 47.
- a number of fields and controls of the event type detail window are described below.
- the window title area 61 gives the name of the window which is "Event Type Detail".
- the event code area 62 is an area for the code that uniquely identifies this event type when analysis data is created or exported.
- the event name area 63 is the area for the name of the event type.
- the saved search area 64 is a non-enterable text area which appears only if this event type was created by the Search Module to mark instances retrieved from a search. The area provides information only. An event type created to be a saved search can have characteristics, but cannot have video marking or text marking turned on. No new instances can be coded for a saved search event type.
- the coding instruction area 65 is a scrollable text area for entry of instructions on how to mark this event type. This text area is presented when help is selected on the Video Window.
- the event instance coding area 66 contains checkboxes for specifying the rules identified at areas 67, 68 and 69 for how event instances are to be coded.
- instances will be marked using video, text or both. This means that "video marking”, “text marking” or both will be checked. Instances can be marked for this event type in all passes in which the event type occurs, unless checkbox 69 entitled “restrict instance coding to earlier pass only” is checked. In this case, new instances can only be marked in the first pass in which the event type appears in the coding plan. For example, the same event type may appear in pass 2 and in pass 3. If the event instance coding is "mark video” and checkbox 69 "restrict to the earliest pass only” is checked, new instances may be marked in pass 2, but not in pass 3. An example of where this would be done is when one pass is for instance hunting (such as pass 2) and another pass is reserved for just characterizing (pass 3).
- the event instance coding requirement determines what will be saved in the database for each instance. If an event type is defined as "Video In” only, then any "Video Out” or text marking is ignored when instances are created.
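- as an illustration (names such as mark_video, mark_text, and video_marking are assumptions standing in for checkboxes 67-69), ignoring unneeded marks could look like this:

```python
# Illustrative sketch; flag names are assumed.
def normalize_marks(event_type, video_in, video_out, text_mark):
    """Drop any marking that the coding requirement does not call for."""
    if not event_type.mark_video:
        video_in = video_out = None
    elif event_type.video_marking == "in_only":
        video_out = None          # "Video In" only: any Out point is ignored
    if not event_type.mark_text:
        text_mark = None          # text marking ignored
    return video_in, video_out, text_mark
```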
- the "mark video" checkbox 67 specifies whether instances are to be marked using Video In or Out Points.
- the checked condition means that new instances are to be marked using the video mark controls.
- The unchecked condition means that no video is to be marked for this event type.
- Three choices are presented for how the video is to be marked for an event type. This governs the behavior of the mark controls on the Video Window when an instance of this event type is marked. The choices are:
- the text marking checkbox 68 specifies whether instances are to be marked using text.
- the checked condition means that new instances are to be marked using the text mark control.
- the unchecked condition means that no text is to be marked for this event type.
- the "restrict instance coding to earlier pass only" checkbox 69 specifies whether instances can be marked in all passes in which the event type appears in the coding plan, or only in one pass.
- the checked condition means that event instances can only be marked in the first pass (first means the first sequential pass, not necessarily pass 1) in which the event type appears. If the event type appears in other passes in the coding plan, it behaves as if "mark video" and "mark text" are both unchecked. That is, event types in other passes can only be used for entering characteristic values, not for marking new instances.
- the characteristics outline area 70 is a scrollable area that shows all the characteristics and choices associated with an event type for all passes. Characteristics are displayed in plain text. Choices are displayed under each characteristic in italics. If a characteristic in the characteristics outline area 70 is double-clicked, the item is opened for edit in the characteristic detail window illustrated in FIG. 14. If a choice value is double-clicked, its parent characteristic is opened for edit in the characteristic detail window. The order in which the characteristics are displayed in the outline is also the order in which they are displayed on the Video Window. The user can change the order by clicking on a characteristic and dragging it to a point above or below another characteristic within the same pass. When a characteristic is moved, all associated choices are moved within the characteristic and retain their same order. A characteristic can belong to only one pass.
- the add characteristic button 71 has the action of creating a new characteristic and displaying the characteristic detail window illustrated in FIG. 14.
- the delete characteristic/choice button 72 has the action of deleting what is selected in the characteristics outline and all indented items under it. For example, deleting a characteristic also deletes all of its associated choice values.
- the print event type button 73 has the action of printing the event detail report.
- the cancel/save event buttons 74 includes a save button which has the action of saving all the event type data and returning to the study detail window and a cancel button which has the action of returning to the study detail window without saving the changes in the event type data.
- the characteristic detail window as illustrated in FIG. 14 is for entry of all attributes for a characteristic. This window is reached either through the study detail window illustrated in FIG. 9A or the event type detail window illustrated in FIG. 13 when either a characteristic is double-clicked in the outline or when a new characteristic is added.
- the fields and controls of the characteristic detail window are described below.
- the window title area 81 gives the name of this window which is "Characteristic Detail".
- the characteristic code area 82 is an enterable area for the code that identifies this characteristic when analysis data is created or exported.
- the characteristic name area 83 is an enterable area for the name of the characteristic.
- the coding instruction area 84 is a scrollable text area for entry of instructions on how to mark the characteristic. This text is available when help is selected on the Video Window.
- the data entry type area 85 presents four options on how data is to be collected for this characteristic. This governs the behavior of the mark controls on the Video Window when values for this characteristic are recorded. The options are:
- each choice has a choice value that is programmatically determined; the first choice value is 1, then 2, then 4, then 8, etc.
- the data entry type can not be changed once a session has been created for the pass in which this characteristic appears, nor can choices be added, changed, or deleted.
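- the programmatic choice values can be illustrated with a short sketch; powers of two allow any combination of selected choices to be recorded as a single number. "Leading" below is a hypothetical third choice, not taken from the specification:

```python
# Illustrative sketch of the programmatic choice values (1, 2, 4, 8, ...).
def choice_values(choice_names):
    return {name: 1 << i for i, name in enumerate(choice_names)}

values = choice_values(["Succinct", "Vague", "Leading"])  # 1, 2, 4
recorded = values["Succinct"] | values["Leading"]         # 5 encodes both
```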
- the choice list area 86 is a scrollable area that shows the choices associated with this characteristic. Choices can be added and deleted using add and delete choice buttons 87 and 88. Drag action allows the choices to be arranged in any order.
- the add choice button 87 has the action of creating a new line in the characteristic outline for entry of a new choice.
- the new line is enterable.
- the delete choice button 88 has the action of deleting the selected choice after confirmation from the user.
- the print characteristic button 89 has the action of printing the characteristic detail report.
- the cancel/save characteristic buttons 90 can return to the study detail window or the event type detail window without saving the changes for cancel or with saving the changes for save.
- the unit selection window is illustrated in FIG. 15.
- the unit selection window allows specification of the units to be included in the study.
- the window is presented when the specified unit button is clicked on the study detail window illustrated in FIG. 9A.
- the "units selected for study" area 102 is filled with the units that have already been selected for the study. No units are displayed in the "unit list” area 97 unless the study is constrained to be a subset of another study. In this case this area is filled with all the units in the parent study.
- the window title area 91 gives the name of this window which is "Unit Selection for Study: " followed by the name of the current study such as "Math Lessons".
- the "unit selection description” area 92 is a scrollable text area for a description of this selection of units. This is the same text as appears in the "unit selection description” area on the study detail window of FIG. 9A.
- the "show all units" button 93 has action which depends on the constraint condition. If the unit selection is constrained to be a subset of another study, the button action is to display all the units specified for the parent study. Otherwise, the button action is to display all the units in the project in the video list.
- the "find units" button 94 has the action of presenting a search enabling the user to search for video units that will be displayed in the units list 97.
- the search allows search on any of the unit fields, using the user-defined custom fields.
- the units found as a result of this search are displayed in the unit listing area 97. If the unit selection is not constrained to be a subset of another study, the find action is to search all the units in the project. If the unit selection is constrained to be a subset of another study, the find action is to limit search to the units specified for the parent study.
- the "copy from study” button 95 has the action of presenting the "use units from other study” window, prompting the user to select a study.
- the units from the selected study are displayed in the unit listing area 97. If the checkbox entitled “constrain to be a subset of the specified study” is checked on the "use units from other study” window, the constraint message 96 is displayed on the window.
- Area 96 is the constraint message and checkbox. The message and checkbox only appear when a unit selection constraint is in effect. There are two possible values for the message:
- the message appears as "constrained to be a subset of [study]". If the unit selection was constrained to include all the units in the project from the units menu on the study detail window of FIG. 9A, the message appears as "Include all units in the project".
- the unit listing area 97 is a scrollable area which lists video units by unit ID and name. This area is filled by action of the "all" , "find", and "copy study” buttons 93-95. Units in this area 97 are copied to the "units selected for study" area 102 by clicking and dragging. When a unit is dragged to the 'Units selected for study” list 102, the unit appears grayed in the list. Units are removed from this list by clicking and dragging to the remove video icon 101.
- the clear unit listing button 98 has the action of clearing the contents of the unit listing area 97.
- the copy to study button 99 has the action of copying the highlighted unit in the unit listing area 97 to the "units selected for study" listing area 102.
- the checkbox 100 entitled "Randomly select units and add to the study” has the action of creating a random sample if the checkbox is checked.
- the action of checkbox 100 is to change the behavior of the copy to study button. When checked, the sample number area 100a becomes enterable. When unchecked, the sample number area is non-enterable. If checked, and a sample number is given, the copy to study button has the following action:
- a random sample consisting of (sample number) units is selected from the unit listing area and added to the existing selection in the "units selected for study" listing area. Units in the unit listing area that already appear in the "units selected for study" listing area are ignored for the purpose of creating the sample.
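- the random-selection behavior of checkbox 100 might be sketched as follows (function and parameter names are assumptions):

```python
import random

# Illustrative sketch of the random-sample action of checkbox 100.
def add_random_sample(unit_listing, selected_units, sample_number):
    # Units that already appear in the "units selected for study" listing
    # are ignored for the purpose of creating the sample.
    candidates = [u for u in unit_listing if u not in selected_units]
    k = min(sample_number, len(candidates))
    selected_units.extend(random.sample(candidates, k))
```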
- the "remove video from list” icon 101 is a drag destination.
- the action is to remove videos from the list from which they were dragged. For example, if a video is dragged from the unit listing 97 to this icon, it is removed from the unit listing area. This is not the same action as deleting the unit.
- the "units selected for study" area 102 is a scrollable area which lists units associated with the study. Units are listed by unit ID and name. Units can be added to this list from the unit list 97 by clicking and dragging.
- the "clear units selected for study” button 103 has the action of clearing the contents of the "units selected for study” listing area 102 after confirmation with the user.
- the print unit selection information button 104 has the action of printing the units in study detail report.
- the cancel/save choice buttons 105 include the save button which saves the units selected for study selection and returns to the window that made the call to this window. The cancel button returns to the window that made the call without saving the changes.
- the "use units from other study” window is illustrated in FIG. 16.
- This window is used to fill the units list 97 of the window shown in FIG. 15 with all the units that belong to another study.
- the window also contains a checkbox that imposes the constraint that only units from the selected study, the parent study, can be used in the current study.
- This window is opened when the "copy from study” button 95 of the window shown in FIG. 15 is clicked on the unit selection window.
- the "units from other study” window shown in FIG. 16 includes a number of fields and controls.
- the study list area 111 is a scrollable area which contains all the studies in the project, except for studies constrained to "Include all units in project" and the current study.
- the study description area 112 is in a non-enterable scrollable area of text that contains the description of the highlighted study shown in area 111. This text is from the unit selection description area on the study detail window illustrated in FIG. 9A.
- the button 113 labeled "Replace current selection in unit list” causes the action of displaying the units for the highlighted study in the unit list on the unit selection window.
- the checkbox entitled “Constrain unit selection to be a subset of the specified study” 114 imposes a constraint on the study so that only units from the selected study in area 111 can be used for the current study. Action is to constrain the contents of the units listing area so it only contains the units specified for the selected study.
- the button entitled “Add units to current selection in unit list” displays the units for the highlighted study in the unit list on the unit selection window.
- the purpose of the unit module is to include all the windows, controls and areas that are used to define video units, open and work with sessions, and manage sessions in the interactive video analysis system.
- the unit module includes a unit list window.
- the unit list window is illustrated in FIG. 17.
- the unit list window presents all the units defined for the project. For example, this includes all the units in the database.
- the unit list window displays the unit ID and the unit name.
- nine custom unit fields may also appear. Double-clicking on a record presents the unit detail window.
- the unit detail window is illustrated in FIG. 18.
- the unit detail window is the primary window that contains all data needed to define a unit, including creation of the transcript. This window is presented when adding a new unit or when editing an existing unit.
- the fields and controls of the unit detail window are described below:
- the window title area 121 gives the name of this window which is "Unit Detail".
- the unit name area 122 is an enterable area for the name of the unit. This must be a unique name. Internally, all attributes for the unit are associated with an internal unit identifier, so the unit name can be changed.
- the unit ID area 123 is an enterable area for code to identify the unit.
- the create date area 124 gives the date and time when the unit was initially created. This is a non-enterable area that is filled by the program when the unit record is created.
- the description area 125 is a scrollable text area for entry in order to describe the unit. Nine custom unit fields are an optional feature. Each field is an enterable area for storing data up to 30 characters long. The field names are customizable.
- the segment list area 126 is a scrollable area that displays all the segments for this unit. Each segment is displayed with its name (file name from the path), length (determined by reading the media on which the segment is recorded), start time (calculated as the sum of the preceding segment lengths), and end time (calculated as the sum of the preceding segment lengths plus the length of this segment).
- the sequence number is by default the order in which the segments were created. The sequence number determines the order in which the segments are to be viewed. The order of the segments can be changed by dragging. Dragging is only supported for a new record. When a segment is moved by dragging, the start and end times of all other segments are recalculated.
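- the recalculation of start and end times can be illustrated as below (attribute names are assumptions); each segment's start is the sum of the preceding segment lengths:

```python
# Illustrative sketch; segments are kept in sequence order, and seg.length
# is assumed to have been read from the media.
def recalculate_segment_times(segments):
    start = 0
    for seg in segments:
        seg.start_time = start                 # sum of preceding lengths
        seg.end_time = start + seg.length      # plus this segment's length
        start = seg.end_time
```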
- the add segment button 128 has the action of presenting the open-file dialog, prompting the user to select the volume and file that contains the segment.
- the file name is entered as the segment name in the segment list 126.
- the length is also determined and written into the segment list.
- the first frame of the video is displayed in the video view area.
- the delete segment button 129 has the action of prompting the user to confirm that the highlighted segment in the segment list 126 is to be deleted. Upon confirmation, the segment is deleted and the start and end times of all other segments are recalculated.
- the study membership area 130 is a scrollable area that lists all studies in which this unit is included. Units are assigned to a study on a study detail window on FIG. 9A. When such an assignment is made, the study is included in this area 130.
- the transcript information area 131 is a non-enterable area which displays the size of each transcript (the number of characters).
- the import transcript button 132 prompts the user for which transcript to import, and then presents the standard open file dialog prompting the file name to import. When the file is selected, the file is imported using tab-delimited fields. The size of the transcript is written to the transcript size area 135.
- the edit transcript button 133 opens the Video Window so that the transcript may be edited.
- the export transcript button 134 has the action of prompting the user for which transcript to export, then presents the standard new-file dialog prompting for the file name for the export file.
- Navigation controls 138 operate as previously described.
- the print button 137 has the action of printing the unit detail report.
- the cancel/save unit buttons 136 include the save button which prompts the user for confirmation that the segment sequence is correct. Depending on the response, the user is either returned to the window or the unit data is saved. For an existing record, the save button action is to save the unit data and return to the unit list window of FIG. 17. If the cancel button is used, any changes to segments or to the transcript are rolled back after confirmation.
- a video position indicator/control 139 has the same operation as the video position indicator/control of the Video Window. It indicates the relative position of the current frame in the segment.
Session Handling And Management
- Coding a unit for a specific study takes place in a session.
- When a user goes into code mode on the video window, a session must be opened that sets the coding parameters. The progress of coding can be tracked by monitoring the status of the sessions for a study.
- the present invention includes various windows to open and close sessions during coding and management windows that give detailed session information. If "code" is selected on the main button bar, and if the user has no other currently opened sessions, the user is prompted to open a session for the current study on the create session window. If the user has previous sessions that are still open, the resume session window is presented and the user may open an existing session or create a session. After a session is active, the user may move freely back and forth between view mode and code mode on the video window. While in code mode, the user may open the session info window to display information about the session.
- Session management is performed from the session listing window which is accessed by clicking session on the manage button bar. Double-clicking on a session in the session listing window opens the session detail window which provides information similar to the session info window.
- the session info window presents information about the current session including the name of the current study, the number of this particular pass, with the total number of passes in the study, a text description of the study, information about the unit including the unit name and unit I.D., the number of the segments that make up the unit and the total length in hours, minutes, seconds and frames for the entire unit, including all segments. Additionally, the session info window gives information about the sample that is in effect for the current pass, a pass outline which contains all the indented event types, characteristics and choice values for the current pass, a print button and a button to close the window.
- a session placemark saves a time code with the session so that when the session is resumed the video is automatically positioned at the placemark. This occurs when a session is ended without closing it.
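- a minimal sketch of the placemark behavior, assuming attributes named placemark and a seek operation on the video, is:

```python
# Illustrative sketch; attribute and method names are assumed.
def end_session_without_closing(session, current_time_code):
    session.placemark = current_time_code     # saved with the session

def resume_session(session, video):
    if session.placemark is not None:
        video.seek(session.placemark)         # repositioned automatically
```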
- the select a study window appears.
- a select button chooses a selected study.
- the session list window is opened, listing all the sessions for the selected study. Clicking on a record listed presents the session detail window.
- the session detail window gives information about the session. The information includes the name of the study, the pass number for the particular session, along with the total number of passes defined for the study, the name of the video unit being coded, the unit I.D.
- the session status such as "never opened," "opened," "reopened," and "closed," the name of the user who opened the session, the length of the unit in hours, minutes, and seconds, the total elapsed time in the code mode between when the unit was opened and closed, the number of events that have been coded in the session, and the number of characteristics recorded for event instances.
- Sample information such as the sample method that is in effect for the pass in the session and the sample size created for the session is displayed.
- the Video Window is used to: (i) play the video belonging to a unit; (ii) display, edit, and/or synchronize transcription text belonging to the unit; (iii) create event types and structure characteristics under them (for the default study only); (iv) mark and characterize event instances; (v) retrieve previously marked event instances for editing or viewing.
- An "event instance" is the marked occurrence of a predefined event ("event type") within video or transcription text.
- the video and/or text is associated with an event type and characteristic to create a specific instance.
- the Video Window may be opened through one of several actions:
- the window is opened to display a specified unit (including video and transcription text).
- the Video Window supports three modes of operation: view mode, transcribe mode, and code mode.
- Event instances are marked only in code mode. During the coding process, when an event instance is observed, the following steps are performed:
- the event type listing displays all the event types that can be coded in a particular pass; no other event types may be coded.
- Clicking the save instance button completes the marking of an instance.
- the instance can only be edited by recalling it by clicking on it in the instance listing, editing it using the frame controls, selecting a different event type or characteristic values, and clicking save instance to save the updates.
- the instance is sorted into the instance listing if the event type is checked and is displayed in a different color to distinguish it from previously created instances.
- buttons that mark or change the In/Out Points and text selection are disabled.
- the event type listing displays all the event types defined for the current study rather than for the current session and allows event types to be checked so instances for the event type are displayed in the instance listing. Characteristic values may be viewed for each instance, but not changed. If there is no current study, nothing appears in the event type listing.
- initialization depends on the mode in which it is to be opened.
- FIG. 19 includes a table of the palettes that may be opened over the Video Window.
- the palettes include the sample palette, the outline palette, search results palette, and transcribe video loop palette.
- the current video segment may be changed in a number of ways: (1) by selecting the segment switching buttons on the sides of the progress bar, (2) when the video plays to the end of the current segment, and (3) when an instance that is not in the current segment is clicked in the instance listing, or a time code from another segment is clicked in any palette.
- the path of the required segment is retrieved from the unit file. If the path does not exist because the segment is on a removable volume, the user is prompted to open the file containing the segment. If an invalid path is entered, an error is given and the user is prompted again. If cancel is clicked, the user is returned to the Video Window in the current segment.
- the Video Window has five major areas: the title area, the video area, the mark area, the instance area, and the list area.
- the title area is illustrated in FIG. 21 A.
- the video area is illustrated in FIG. 21B and contains the video display area, play controls, relative position indicators, zoom and sound controls and drawing tools.
- the mark area is illustrated in FIG. 21C and contains controls to Mark Instances, refine In and Out Points on the video, and save marked instances.
- the instance area is illustrated in FIG. 21D and contains listings of event types, characteristic labels, characteristic choices, and events instances that have already been marked.
- the list area contains the transcript text and controls to change the mode of operation.
- the video position indicator/control 142 acts like a thermometer. As the video plays, the grey area moves from left to right, filling up the thermometer. It displays the relative position of the current frame in the current segment. At the end of the segment, the thermometer is completely filled with grey. Increments on the control indicate tenths of the segment. The end of the grey area can be dragged back and forth. When released, the action is to move the current video frame to the location in the video corresponding to the relative position of the control. The video resumes the current play condition. A small amount of grey is always displayed on the thermometer, even when the current frame is the first frame of the segment. This is so that the end of the grey can be picked up using the click and drag action even when the first frame of the video is the current location.
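- the mapping from the dragged indicator position to a video frame might be sketched as follows (names are assumptions):

```python
# Illustrative sketch of the indicator-to-frame mapping for control 142.
def frame_from_indicator(relative_position, segment_frame_count):
    """Map the dragged end of the grey bar (0.0 to 1.0) to a frame."""
    relative_position = min(max(relative_position, 0.0), 1.0)
    return round(relative_position * (segment_frame_count - 1))
```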
- a subtitle area 143 displays the transcription text that corresponds to the video. Two lines of the text are displayed.
- Button 144 is the zoom tool. The action is to zoom the selected area to fill the frame of the video display.
- Button 145 is the unzoom tool which restores the video display to 1x magnification.
- Button 146 is the volume control. Click action pops a thermometer used to control the volume.
- Button 147 is the mute control. The button toggles the sound on or off.
- Area 148 gives the current video frame.
- Button 149 moves the video back five seconds.
- Button 150 goes to the beginning of the current video segment and resumes the current play condition.
- Button 151 is the pause button and button 152 is the play button.
- Button 153 is the subtitle control which toggles the video subtitle through three modes: 1) display subtitles from transcript one; 2) display subtitles from transcript two; 3) display no subtitles.
- Button 154 is the draw tool which enables drawing on the video display. The cursor becomes a pencil and drawing starts upon mouse down and continues as the mouse is moved until mouse up. The draw tool can only be selected when the video is paused.
- Button 155 is the eraser tool which enables erasure of lines created using the draw tool.
- Button 156 is the scissor tool which copies the currently displayed frame to the clipboard. Drawings made over the video using the draw tool are copied as well. The scissors tool can only be selected when the video is paused.
- Button 157 is the frame advance which advances the video by one frame.
- Button 158 is the open video dialogue which opens a window to display the video in a larger area.
- the link control 159 controls the link between the video and transcript area. When "on", the video is linked with the transcript. In other words, when the video is moved the closest utterance is highlighted in the transcript area. When the link control button is "off," moving the video has no effect on the transcript area.
- With respect to the mark area of the Video Window, reference is made to FIG. 21C and FIG. 22.
- the action of the controls in the mark area is dependent on the current video mode (view, code, and transcribe).
- the Mark In button 161 is disabled in the view mode.
- In the code mode, the button action is to "grab" the time code of the current video frame regardless of the play condition and display it in the In Point area 162.
- In the transcribe mode, the button action is to "grab" the time code of the current video frame regardless of play condition and display it in the In Point area 162 and in the time code area for the utterance in which the insertion point is positioned.
- Button action is to overwrite any previous contents in the In Point area and the utterance time code area with the time code of the current video frame.
- the In Point area 162 is a non-enterable area which displays the time code of the frame that is the beginning of the instance. This area is updated by one of five actions: (1) clicking the Mark In button in the code and transcribe modes such that the area gets the time code for the current frame; (2) manipulating the In Point frame control in the code and transcribe modes so that the area gets the time code for the current frame; (3) clicking an instance in the instance listing in the code and view modes for an event type that requires a video-in or exhaustive segmentation coding so that the area gets the In Point of the instance; (4) highlighting an utterance in the view and transcribe modes so the area gets the time code of the utterance; and (5) clicking an outline item on the outline palette so that the area gets the In Point of the outline item.
- the In Point frame control button 163 has identical action in the code and transcribe modes. Control is disabled in the view mode. Control action is to incrementally move the video forwards or backwards a few frames to "fine tune" the In Point.
- the Mark Out button 164 is enabled in code mode only.
- the button action is exactly analogous to the Mark In button 161 , except the Out Point is set and displayed in the Out Point area 165.
- the Out Point area 165 is a non-enterable area which displays the time code of the frame that is the end of the instance. If there is no Out Point for the instance, the area is blank. This area is updated by one of four actions: (1) clicking the Mark Out button in the code mode so that the area gets the time code for the current frame; (2) manipulating the Out Point frame control in the code mode so the area gets the time code for the current frame; (3) clicking an instance in the instance listing in the code and view modes for an event type that requires Video Out coding so that the area gets the Out Point of the instance or becomes a blank; and (4) highlighting an utterance in the view and transcribe modes so that the area becomes blank.
- the Out Point frame control button 166 is only enabled in the code mode.
- the control is analogous to the In Point frame control 163 except the Out Point is adjusted.
- the mark text button 167 is enabled only in the code mode.
- the button action is to register the position of the highlighted text as the instance marking.
- the button appearance changes to signify that text has been marked. Internally, the time code of the beginning of the utterance in which the highlighted text begins is retained, along with the position of the first and last characters of the highlighted text.
- the event type listing area 170 is a scrollable area in which the action and contents depend on the mode.
- the area is blank in the transcribe mode.
- In the code mode, the scrollable area contains a row for each event type that can be coded in the current pass. Only event types that are listed here can be coded in a particular session. In code mode with the outline palette open, this area is blank. In view mode, the area contains a row for each event type defined in the study. If there is no current study, the area is blank.
- the event type listing contains four columns.
- the first column is the checkmark that indicates that instances of this event type are to be displayed in the instance listing area.
- the second column is the unique event type code.
- the third column is the event type name.
- the fourth column is the event instance coding requirement. In both modes, if an event type is double-clicked the action is to place a checkmark next to it or to remove the checkmark.
- the checkmark indicates that event instances with this event type are to be listed in the "previously marked instances” area. In the illustration the event type "Question Asked” is checked. All the instances of questions being asked in this unit are listed in the "previously marked instances” area.
- clicking an event type has the action of refreshing the characteristic labels popup 171 to contain all the characteristics structured under the highlighted event type for the current pass.
- In the view mode, the action is to refresh the characteristics label popup to contain all the characteristics structured under the highlighted event type in the study.
- the characteristics labels area 171 is a popup that contains the characteristic labels for the highlighted event type.
- the next/previous characteristic buttons 172 are a two-button cluster that have the action of selecting the next item in the characteristic label popup, or selecting the previous item in the popup.
- the characteristic count area 173 is a non- enterable text display of the sequence number of the currently displayed characteristic label and the total number of characteristic labels for the current pass.
- the characteristic value area 174 is either a scrollable area or an enterable area.
- the clear button 175 has the action of clearing the In Point and Out Point areas and resetting the Mark In, Mark Out, and mark text buttons to normal display (for example, removing any reverse video).
- the save instance button 176 only has action in the code mode and is disabled in the other modes.
- the button name is "save instance” unless an event instance is selected in the event instance listing, in which case the button name is "save changes”.
- the action of the button is to validate data entry.
- An event type must be selected. All characteristics must be given values. All the points must be marked to satisfy the event instance coding rules for the selected event type.
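- these validation rules might be expressed as in the following sketch (names such as mark_video, mark_text, and values are assumptions):

```python
# Illustrative sketch of the save-instance validation; names are assumed.
def validate_instance(instance, event_type, characteristics):
    if event_type is None:
        return "an event type must be selected"
    for c in characteristics:                    # every characteristic
        if instance.values.get(c.code) is None:  # needs a recorded value
            return "characteristic %s has no value" % c.code
    if event_type.mark_video and instance.video_in is None:
        return "a Video In point must be marked"
    if event_type.mark_text and instance.text_mark is None:
        return "text must be marked"
    return None   # the coding rules are satisfied; the instance may be saved
```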
- the event type help button 177 only applies to the code and view modes.
- the action is to present a dialog containing the coding instruction for the highlighted event type.
- the show/add event type button 178 is only visible in the code mode for the default study.
- the action is to present a window to select one or more event types previously created for the default study to be included in the event type listing area.
- a button on this window allows users to create a new event type for the default study.
- the button is provided so that the user may select which of the previously defined event types for the default study are to be included in the event type listing area. This allows the user to select just those event types of immediate interest for addition to the listing.
- the user also has the option of creating new event types for the default study using the event type detail window.
- the edit event type button 179 is only visible in the code mode for the default study. The action of the button is to allow the user to edit the highlighted event type.
- the remove/delete event type button 180 is only visible in the code mode for the default study. The action of the button is to prompt the user for whether the highlighted event type is to be removed from the event type listing or is to be deleted permanently with all its instances.
- the instance area provides a listing of instances that have been marked for selected event types and controls to retrieve an instance for viewing or editing, to add instances to an outline, and delete instances. This area is active only in the code and view modes. The area is disabled in code mode when the outline window is open.
- the instance listing area 181 is a scrollable area that contains all the instances marked in the current session for the event types that are checked in the event type listing. Each instance is listed with a time code and event type code. The meaning of the time code depends on the event type. If the video is marked, the In Point is displayed. If only text is marked, the time code of the beginning of the utterance is displayed. A symbol is placed after the time code to indicate that the time code corresponds to the video frame closest to the beginning of the utterance in the event of marked text. Clicking an instance moves the video to the beginning of the instance and resumes the playing condition.
- the delete instance button 182 is enabled in the code mode only. The action of the button is to delete the highlighted instance.
- the add to outline button 183 is enabled in the code and view modes only. Action is to add the instance to the current outline.
- the return to In Point button 184 is enabled in the code and view modes only.
- the action of the button is to move the video to the first frame of the highlighted event instance. The video resumes the prior play condition.
- the pause button 185 is enabled in the code and view modes only. The action is to pause the video at the current frame.
- the play to Out Point button 186 is enabled in the code and view modes only.
- the action of the button is to play the video starting at the current frame and stop at the Out Point for the highlighted event instance.
- the go to Out Point button 187 is enabled in the code and view modes only.
- the action of the button is to move the video to three seconds before the Out Point of the highlighted event instance, play the video to the Out Point, and stop.
- the transcribe mode has two operations: (i) transcribing the spoken words or actions on the video into text; and (ii) assigning time reference values to each of the utterances in the video.
- the first operation, transcribing video content into text, is largely accomplished by watching the video and entering text into the list area. This process is aided by the Transcribe-Video Loop palette.
- the palette provides a control that enables the user to play a short segment of video over and over without touching any controls. The user sets the loop start point and end point. When the contents of the loop have been successfully transcribed, a 'leap' button moves the loop to the next increment of video.
- the list manager is used to display and work with the text associated with the video.
- this text is the transcription of what is being said in the video, though the text may actually be anything - observations about the video, translation, etc.
- the text is referred to as the 'transcription' or transcript.
- each speaker takes turns speaking; the transcription of each turn is an 'utterance'; e.g. an utterance is the transcription of one speaker's turn at speech.
- Utterances are records in the database; each utterance has a time reference value (In point), two transcription text fields, and a speaker field.
- the area on the screen that the list manager controls is called the 'List Area' .
- the List Area is shown in FIG. 24. It is the right side of the Video Window of FIG. 20.
- the list manager gets its name because it is not a conventional text area; it displays text from utterance records in the transcript so that the text looks like a contiguous block. Actions on the text block update the utterance records.
- Each utterance is associated with a time reference value that synchronizes it with the video; an In point is marked that identifies where the utterance begins in the video. (Note: there is no Out point associated with an utterance; the out point is assumed to be the In point of the next consecutive utterance.)
- Each utterance is also associated with a speaker.
- Utterances in the list area are always displayed in the order as entered or specified (in case of an insertion) by the user.
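- an utterance record as described above might be modeled as follows (field names are assumptions); note that the Out point is derived, not stored:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of an utterance record; field names are assumed.
@dataclass
class Utterance:
    in_point: Optional[str]   # time reference value; no Out point is stored
    speaker: str
    transcript1: str = ""
    transcript2: str = ""

def utterance_out_point(utterances, i):
    """The implied Out point is the In point of the next utterance."""
    return utterances[i + 1].in_point if i + 1 < len(utterances) else None
```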
- the list area supports three modes of operation: View Mode, Transcribe Mode and Code Mode.
- the area behaves differently in each of the three modes. For instance, the action of clicking in the area to create an insertion point is the same in all three modes, but a subsequent action of typing characters would have the effect of inserting characters into the text area only in Transcribe mode; it would have no effect at all in View mode or Code mode.
- the List Area in View Mode displays the transcript text next to the video. Clicking on the text has the action of moving the video to the point closest to the utterance. Moving the video using other controls on the Video Window has the effect of highlighting the utterance closest to the video.
- the text can not be changed in any manner, nor may the time reference values associated with it be changed.
- View mode affects the other controls on the Video window as well: new event instances can not be marked or edited, and characteristic values can not be recorded or changed.
- the purpose of the Transcribe Mode is to allow text entry and edit, and to provide controls for marking the text to synchronize it with the video.
- the marking process is limited to marking the video In point for each utterance; event instances can not be marked or edited, and characteristic values can not be recorded or changed.
- Code Mode The purpose of Code Mode is to mark event instances and enter characteristic values.
- the coding process typically starts only after the entire Unit is transcribed and time reference values are associated with every utterance, as the time reference value is used during coding.
- the list area has a header area 191 with the mode icons 195.
- the time column 192 displays the time reference value associated with each utterance. This is the point on the video that was marked to correspond with the beginning of the utterance (e.g. the time reference value is the In point for when this utterance is made in the video). If the utterance has not been marked, the time reference value is displayed as 00:00:00.
- the speaker column 193 identifies the speaker.
- the transcript 1 column 194 displays the text of the first transcript. This area is enterable in the Transcribe Mode.
- Area splitter 196 allows the user to split the transcript text area into two halves so that a second transcript is displayed. This is shown in FIG. 25.
- a video may be on more than one media unit (disk, tape, etc.) (segments). Segment boundaries are identified in the list area as a heavy horizontal line that goes across all four columns.
- clicking an utterance has the action of moving the video to the beginning time reference value of the utterance or of the closest previous utterance that has a time reference value.
- the text is fully editable and selectable.
- all key actions have the effect of highlighting the entire utterance in the Code Mode or navigating between highlighted utterances.
- In the Code Mode, instances are marked. A full set of actions is supported to select text so it can be marked. Highlighted text can not be changed.
- the list area is updated to scroll to the marked utterance, and highlight the marked selection within the utterance.
- the list area is updated to scroll to the closest utterance, and highlight the utterance.
- the list area is updated to scroll to the closest utterance to the current video frame and highlight the utterance.
- the Video Window menubar contains commands for Find and Find Again. The effect on the list area is identical for each of these commands.
- the user is prompted for a text value and/or speaker name; the list manager searches for the next instance starting at the current insertion point position.
- each utterance is marked to identify the time reference value on the video to which it belongs.
- the Mark In button and controls are enabled to allow exact video positioning of the In point of each utterance.
- the list area tracks the current insertion position and/or highlight range in Code Mode: the utterance ID, time reference value, and character offset is available to the mark controls so the exact insertion point position or highlight range can be recorded with the instance.
- the outline presentation feature allows the user to select and structure the video and transcript text from event instances.
- the intended use of this feature is to prepare presentations that include selected instances.
- the outline palette for the current outline is opened when Show Outline is requested anywhere. If no current outline is active, the user is prompted to select one by the Select An Outline window shown in FIG. 26. It displays outlines that have been created. The author of each outline is displayed in the scrollable area. The user may select an outline, or push the plus button to create a new outline. The minus button deletes the selected outline if the user is the author.
- the outline description window is displayed when an outline is created. It has two enterable areas as shown in FIG. 27: the outline name and the description.
- the outline palette is shown in FIG. 28.
- Event instances dragged to the Outline icon on the Video Window of FIG. 20 become part of the current outline. If there is no current outline, the user is prompted to specify one, or create a new one. The current outline remains in effect until a different outline is selected.
- When the event instance is dropped on the Outline icon, the Outline Item window, shown in FIG. 29, is opened to prompt the user for a description of the item.
- the Outline Item window displays all the headers for the current outline (in the same order as specified in the outline) so a header for the item can be specified as an optional step.
- If a header is specified, the item is added as the last item under that header. If no outline header is specified, the item is added as the first item in the orphan area.
- an Outline Item is created from the unit, time reference value, event type, and text selection of the instance.
- the outline item is completely independent of the instance.
- the outline item may be edited for In/Out point, text selection, or deleted entirely, without affecting the event instance, and vice-versa.
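- the independence of outline items from event instances follows from copying the instance data rather than referencing it, as in this sketch (names are assumptions):

```python
# Illustrative sketch: the outline item copies the instance data, so later
# edits to either the item or the instance do not affect the other.
def make_outline_item(instance):
    return {
        "unit": instance.unit,
        "in_point": instance.video_in,
        "out_point": instance.video_out,
        "event_type": instance.event_type,
        "text_selection": instance.text_selection,
    }
```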
- After outline items have been created, they can be structured in the Outline Palette. Outline items can be structured under and moved between headers, and the order of headers can be changed. Once the outline is complete, it can be printed and the video portion can be exported to an MPEG file.
- When the outline palette is active, it can be used to control the video display. Clicking an outline item moves the video to the associated time reference value.
- the outline item's time reference value can be edited for In and Out points.
- the outline item's transcript marking may also be edited.
- Outline items retain an association, by time reference value, with the utterances (Transcript 1 and Transcript 2) corresponding to the video. The user may specify whether these are to be printed with the outline.
- the outline area 200 is a scrollable area that contains all outline items and outline headers. Outline items are indented under outline headers. Drag and drop action is supported in this area to allow headers and outline items to be moved freely through the outline area. Outline headers appear in bold and are numbered with whole numbers. When a header is moved, the outline items move with it. A header may be clicked and dragged anywhere in the outline area. Outline items appear in plain text and are numbered with decimal numbers that begin with the header number. Outline items appear with the event code that went along with the event instance from which the item was created. Items may be clicked and dragged anywhere in the outline area - under the same header, under a different header, or to the orphan area.
- if an item is clicked in the video window, the video is moved to the In point of the outline item, the utterance closest to the current video frame is highlighted, and the current play condition is resumed. If the outline item points to video from a unit or segment not currently mounted, the user is prompted to insert it.
- the In and Out points of the outline item appear in the Mark controls.
- the Mark controls are enabled when the Outline window is displayed, so the In and/or Out points of the outline item can be edited. This has no effect whatsoever on the instance from which the outline item was created. If an item is not associated with a header, it is displayed at the top of the outline area 200a and is called an 'orphan'.
- the study area 201 displays the study from which the event instance was taken to create the highlighted outline item.
- the unit area 202 displays the name of the video unit associated with the highlighted outline item.
- the In point area 203 displays the In point of the video associated with the highlighted outline item.
- the duration area 204 displays the duration of the video associated with the outline item.
- the Play Outline button 205 plays the video starting at the In point of the first outline item and continues playing each outline item in its order of appearance in the outline. Play stops at the Out point of the last outline item.
- the system supports the creation of a new MPEG file based on the instances that have been moved into an outline. That is, given marked video in and video out points, the system can create a new MPEG file which contains only the marked video content.
- the new MPEG file also contains the relevant additional information such as transcript text, and derivative information such as event, characteristic and instance information.
- the exported MPEG file is viewable with the LAVA MPEG viewer made by LAVA, L.L.C.
- not only is the MPEG file viewable, but all of the relevant additional and derivative information, such as the transcript text and event, characteristic, and instance information, is viewable and accessible for random positioning, searching, subtitling, and manipulation.
- Two types of output can be produced from an Outline: a printed Outline Report, and an MPEG file containing video from the outline items in the order specified in the outline.
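The specification describes the export capability but names no cutting tool; as one plausible modern stand-in, the marked In/Out points could be cut from the source file with ffmpeg, as in this hedged sketch (the function, paths, item attributes, and one-clip-per-item layout are all assumptions):

```python
import subprocess

def export_outline_clips(items, source_path, out_prefix):
    # Cut one clip per outline item, in outline order. Stream copy ("-c copy")
    # avoids re-encoding but may snap cut points to the nearest keyframe.
    for n, item in enumerate(items, start=1):
        duration = item.out_point - item.in_point
        subprocess.run(
            ["ffmpeg", "-ss", str(item.in_point), "-i", source_path,
             "-t", str(duration), "-c", "copy", f"{out_prefix}_{n:03d}.mpg"],
            check=True,
        )
```

The per-item clips would then be joined into the single output file described above, with the transcript and instance information carried alongside in the database.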
- Sampling is the creation of a specific subset of video that can be used for new instance hunting, or of a specific subset of event instances that can be characterized. There are five methods for creating samples.
- the sample method is specified on the sample definition window and displayed on the study definition window for each coding pass.
- the samples are presented to the coder in the Sample Palette so they can be visited one by one.
- the samples are saved in the database so they can be retrieved into the Sample Palette anytime.
- FIG. 30 shows the sample definition window. Area 210 permits a choice of sampling method.
- This sample method means that no samples will be created.
- the coder can use all the video in the search for event instances.
- This method means the sample is to be created from a specified percentage of the total event instances in the Unit that belong to the event named in the 'Specify Event' area. The default value for percentage is 100%. An event must be selected from the 'Specify Event' popup if this sample method is chosen.
- This method means the sample is to be created from a specified number of the event instances in the Unit that belong to the event named in the 'Specify Event' area. An event must be selected from the 'Specify Event' popup if this sample method is chosen.
- This method means the sample is to be created from a specified number of video clips from the Unit with a specified duration. Two parameters are required for this option: the number of samples to be created from the Unit, and the duration in seconds of each sample.
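A sketch of the two event-sample methods above, assuming a simple random draw (the specification does not state how the subset is selected, so the use of random.sample is an assumption):

```python
import random

def fractional_event_sample(instances, percentage=100):
    # 'Percentage of the total event instances' method; 100% keeps everything.
    k = round(len(instances) * percentage / 100)
    return random.sample(instances, k)

def quantitative_event_sample(instances, count):
    # 'Specified number of event instances' method.
    return random.sample(instances, min(count, len(instances)))
```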
- the number of clips refers to the entire video, not to each event.
- Sample periods may not overlap.
- Sample periods may not span from one instance to another; i.e. each sample period must be wholly contained within a single event instance.
- This method means the sample is to be created from randomly selected video clips of a given duration from the Unit.
- the number of samples is given in terms of 'Samples per minute of video'. Three parameters are required for this option: the number of samples desired, the interval of time over which the samples are to be chosen, and the duration in seconds of each sample.
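A sketch of such random time sampling under the constraints stated above (non-overlapping clips of a fixed duration); rejection sampling is an assumed strategy, not the patent's stated algorithm:

```python
import random

def random_time_sample(total_seconds, clip_count, clip_seconds, max_tries=10000):
    # For the proportional method, clip_count would be derived as
    # samples_per_minute * (interval_seconds / 60).
    clips = []
    tries = 0
    while len(clips) < clip_count and tries < max_tries:
        tries += 1
        start = random.uniform(0, total_seconds - clip_seconds)
        end = start + clip_seconds
        if all(end <= s or start >= e for s, e in clips):  # reject overlaps
            clips.append((start, end))
    return sorted(clips)
```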
- the event filter area 219 allows restriction of the selection of event instances or time samples to periods within another event type.
- Time samples restrict the creation of new instances to the sample periods, according to the Event Coding Constraint specified in the sample definition.
- Instance samples allow the retrieval of selected instances, typically for characterization.
- a time sample is created by specifying one of the time sample methods (Random Time Sample or Proportional Random Time Sample).
- An instance sample is created by specifying one of the event sample methods (Fractional Event Sample or Quantitative Event Sample).
- the instance listing on the Video window limits the display of existing instances to only instances with an In point within the time period of the highlighted sample in the Sample Palette. For example, if five samples are listed in the Sample Palette and one is highlighted, only event instances with an In point within the time period (i.e. from Video In to Video Out) of the highlighted sample would be listed in the event listing (subject to the other controls that specify what event types are to be displayed in the instance listing).
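The listing rule just described amounts to a simple interval filter; a minimal sketch, with the attribute names assumed:

```python
def instances_in_sample(instances, sample_in, sample_out):
    # Show only instances whose In point falls inside the highlighted
    # sample's period, as the instance listing does.
    return [i for i in instances if sample_in <= i.in_point <= sample_out]
```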
- the sample palette is shown in FIG. 31.
- Checkmarks 223 next to the sample list area 224 may be set.
- the sample list area contains an entry for each sample with time reference values for the In point and Out point of the sample.
- FIG. 32 is the sample information window which is opened from choosing the Show Sample Info button 222 on the sample palette.
- the event filter area is a non-enterable scrollable area that contains text describing the event filter in effect for the current pass.
- the illustration shows the format for how the filter is to be described - it follows the same conventions as the 'Within' area in the Sample Definition Window.
- the analysis module is used to gather statistics about event instances across video units.
- the module provides functions for defining variables, searching for and retrieving information about event instances, displaying the results, and exporting the data. Typically the results of an analysis will be exported for further analysis in a statistical program.
- a window requests the user to designate a unit analysis or an instance analysis.
- the analysis module allows the user to produce statistical information about event instances on either a Unit by Unit basis or an Instance by Instance basis.
- the results can be displayed or exported for further analysis.
- Unit analysis aggregates information about the instances found in a unit and returns statistics such as count, mean, and standard deviation about the event instances found in the unit.
- Event Instance analysis returns characteristic values directly for each instance found in the units included in the analysis.
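The distinction can be pictured as one aggregated row per unit versus one raw row per instance; a sketch under assumed instance attributes (duration as the default value is an illustrative choice):

```python
from collections import defaultdict
from statistics import mean, stdev

def unit_analysis(instances, value=lambda i: i.out_point - i.in_point):
    # One row per unit: count, mean, and SD of a value across its instances.
    by_unit = defaultdict(list)
    for inst in instances:
        by_unit[inst.unit].append(value(inst))
    return {u: {"count": len(v),
                "mean": mean(v),
                "sd": stdev(v) if len(v) > 1 else 0.0}
            for u, v in by_unit.items()}

def instance_analysis(instances, value):
    # One row per instance: the characteristic value is returned directly.
    return [(inst.unit, value(inst)) for inst in instances]
```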
- FIG. 33 shows the unit analysis window.
- Area 232 is the analysis description area.
- Area 236 is the variable definition area. There are four columns: the sequence number, the variable description, the short variable name, and the statistic that will be calculated for the variable, such as count, mean, or SD (standard deviation). Variables may be dragged to change their order, and may be added or deleted.
- the execute analysis button 242 executes the analysis.
- the analysis results area 243 has a column for each variable defined in variable listing area 237 and a row for each unit in the analysis.
- a unit variable may be added and defined.
- the unit value will be returned for each unit in the analysis.
- An event variable may be added and defined.
- a calculated value will be returned for each unit in the analysis.
- the calculated variable is a statistic about instances matching a description.
- FIG. 34 shows the define unit variable window and
- FIG. 35 shows the define event variable window.
- the event criteria area 255 specifies event instances to be found for analysis. Event instances are found for the event type in area 254 that occur within other instances and/or have specific characteristic values.
- Area 256 sets additional criteria.
- the event variable is calculated using the attribute designated in area 257.
- Area 258 indicates the calculation to perform (mean, count instances, total, standard deviation, total number, sum, minimum, maximum, range, or count before/after for exhaustive segmentation).
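One plausible way to realize the choice in area 258 is a dispatch table from the selected statistic to a function over the collected attribute values; the mapping below covers only a subset of the listed statistics and the names are illustrative:

```python
from statistics import mean, stdev

STATISTICS = {
    "mean": mean,
    "count": len,
    "sum": sum,
    "standard deviation": lambda v: stdev(v) if len(v) > 1 else 0.0,
    "minimum": min,
    "maximum": max,
    "range": lambda v: max(v) - min(v),
}

def apply_statistic(name, values):
    # values: the attribute values gathered for one unit's matching instances
    return STATISTICS[name](values)
```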
- FIG. 36 illustrates the instance analysis window.
- Area 262 describes the analysis.
- Area 264 specifies the event type and is analogous to the define event variable window of FIG. 35 for unit analysis.
- Area 265 is the variable listing area. It has four columns. The first three are the same as for unit analysis. The fourth column is 'origin'. The origin is
- Variables may be added and deleted.
- Area 269 gives the analysis results with a column for each variable in variable listing area 265 and a row for each event instance in the analysis.
- FIG. 37 is the define analysis variable window.
- the search module is used to perform ad-hoc searches for text or event instances, display the results, and allow the results to be used to control the Video Window.
- the Search Module allows the user to search for text or event instances across multiple video units.
- the results can be displayed in a palette over the Video Window so each 'find' can be viewed.
- the Search Window is designed to allow multiple iterative searches. Each search can begin with the results of the previous search: the new search results can be added to or subtracted from the previous search results.
- There are two types of searches: searches for text strings within the transcript text ('Text Search'), and searches for event instances that match a given event type and other criteria ('Instance Search'). Each search has its own window, but most of the controls in each window are identical.
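Treating each result set as a set of instance identifiers makes the iterative add/subtract behavior straightforward; a sketch in which the mode names are assumptions:

```python
def refine_search(previous_ids, new_ids, mode="new"):
    # previous_ids, new_ids: sets of result identifiers from the two searches
    if mode == "add":        # union with the prior results
        return previous_ids | new_ids
    if mode == "subtract":   # remove the new hits from the prior results
        return previous_ids - new_ids
    return new_ids           # start over from the new search alone
```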
- the search module is accessed from the main button bar for a text search or an instance search.
- FIG. 38 is a search window with features common to text and instance searches.
- Area 271 indicates if it is a text or instance search.
- Area 272 shows the relationship to a previous search.
- Area 277 designates units to search.
- Area 281 specifies what is being searched for: the event instance or word or phrase. Multiple criteria may be set to identify the characteristic or position.
- Button 282 executes the search.
- Area 283 lists the results. Button 284 will add the result to an outline.
- Area 285 gives the instance count.
- when the Search Within A Study button is selected on the search window, a Unit Selection For Search window permits the user to select individual units within a study to limit the search.
- a results palette permits the search results to be examined and there is a checkmark that may be set for each result.
- FIG. 39 shows the event instance search window.
- a search is done for an event type occurring within an event type where a particular characteristic has a valid characteristic value.
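A sketch of that instance search, assuming 'within' means period containment and that characteristic values live in a per-instance mapping (both assumptions, as the specification does not define the data layout):

```python
def within(inner, outer):
    # Containment: the inner instance's period lies inside the outer's period.
    return outer.in_point <= inner.in_point and inner.out_point <= outer.out_point

def instance_search(candidates, containers, characteristic, value):
    # candidates: instances of the searched event type
    # containers: instances of the enclosing event type
    return [c for c in candidates
            if c.characteristics.get(characteristic) == value
            and any(within(c, o) for o in containers)]
```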
- the results of a search can be saved under a 'Saved Search' event type. This provides several capabilities:
- Characteristic values can be applied to the event instances, and a later pass can be created to record other characteristics.
- FIG. 40 is the text search window.
- the text search can search the text of multiple units of video. It finds all instances of a word or phrase.
- the search term is input in area 291.
- the speaker is input in area 292.
- Area 293 indicates which transcripts are searched.
- Area 294 permits searching text within an event type with a characteristic and selected choice of characteristic.
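Putting areas 291 through 294 together, a text search reduces to a filtered scan of the utterances in the chosen units; a sketch with assumed attribute names, and case-insensitive matching as an illustrative choice:

```python
def text_search(units, term, speaker=None, transcripts=(1, 2)):
    hits = []
    for unit in units:
        for utt in unit.utterances:
            if utt.transcript not in transcripts:
                continue                      # area 293: transcripts searched
            if speaker and utt.speaker != speaker:
                continue                      # area 292: speaker filter
            if term.lower() in utt.text.lower():
                hits.append((unit.name, utt.time_reference, utt.speaker))
    return hits
```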
- the study listing report lists studies in the current selection, sorted in the current sort order.
- the study detail report details one study, giving all details about it.
- the event detail report details one event type belonging to a study, giving all details about it.
- the characteristic detail report details one characteristic belonging to the study, giving all details about it.
- the units in study detail report lists all the units that have been selected for a single study.
- the unit listing report lists all units in the current selection, sorted in the current sort order.
- the unit detail report gives all details about a unit.
- the session listing report prints the contents of the current session list window.
- the session detail report prints the contents of the current session detail window.
- the user listing report lists all users in the current selection, sorted in the current sort order.
- the user detail report details one user.
- the system settings report prints all the system settings.
- the outline report is printed from the outline palette.
- the search report gives results of an event instance search or a text search.
- the search criteria report gives the search criteria.
- the analysis results report prints the data created for the analysis that is displayed.
- the analysis variable definition report prints the description of all the variables defined in the analysis.
- the sample detail report describes the sample and lists the time reference values in the sample.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Security & Cryptography (AREA)
- Television Signal Processing For Recording (AREA)
- Color Television Systems (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US67856396A | 1996-07-12 | 1996-07-12 | |
US678563 | 1996-07-12 | ||
PCT/US1997/012061 WO1998002827A1 (en) | 1996-07-12 | 1997-07-11 | Digital video system having a data base of coded data for digital audio and video information |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1027660A1 (de) | 2000-08-16 |
Family
ID=24723323
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP97934108A Withdrawn EP1027660A1 (de) | 1996-07-12 | 1997-07-11 | Digitales videosystem mit datenbank für codierte daten für audio- und videoinformation |
Country Status (6)
Country | Link |
---|---|
EP (1) | EP1027660A1 (de) |
JP (1) | JP2001502858A (de) |
AU (1) | AU3724497A (de) |
CA (1) | CA2260077A1 (de) |
MX (1) | MXPA99000549A (de) |
WO (1) | WO1998002827A1 (de) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5963203A (en) | 1997-07-03 | 1999-10-05 | Obvious Technology, Inc. | Interactive video icon with designated viewing position |
US6573907B1 (en) | 1997-07-03 | 2003-06-03 | Obvious Technology | Network distribution and management of interactive video and multi-media containers |
JP2000262479A (ja) * | 1999-03-17 | 2000-09-26 | Hitachi Ltd | Health examination method, apparatus for implementing it, and medium recording its processing program |
US6771657B1 (en) | 1999-12-09 | 2004-08-03 | General Instrument Corporation | Non real-time delivery of MPEG-2 programs via an MPEG-2 transport stream |
WO2002041634A2 (en) * | 2000-11-14 | 2002-05-23 | Koninklijke Philips Electronics N.V. | Summarization and/or indexing of programs |
MXPA03010679A (es) * | 2001-05-23 | 2004-03-02 | Tanabe Seiyaku Co | A composition for accelerating the healing of bone fractures |
EP1262881A1 (de) * | 2001-05-31 | 2002-12-04 | Project Automation S.p.A. | Method for managing data originating from procedural statements |
US7756393B2 (en) | 2001-10-23 | 2010-07-13 | Thomson Licensing | Frame advance and slide show trick modes |
US8891020B2 (en) * | 2007-01-31 | 2014-11-18 | Thomson Licensing | Method and apparatus for automatically categorizing potential shot and scene detection information |
CN115471780B (zh) * | 2022-11-11 | 2023-06-06 | Honor Device Co., Ltd. | Method and device for testing audio-video delay |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0374241B1 (de) * | 1988-05-27 | 1997-08-27 | Kodak Limited | Dokumentenaufzeichnung und -bearbeitung in einem datenverarbeitungssystem |
US5524193A (en) * | 1991-10-15 | 1996-06-04 | And Communications | Interactive multimedia annotation method and apparatus |
US5600775A (en) * | 1994-08-26 | 1997-02-04 | Emotion, Inc. | Method and apparatus for annotating full motion video and other indexed data structures |
US5596705A (en) * | 1995-03-20 | 1997-01-21 | International Business Machines Corporation | System and method for linking and presenting movies with their underlying source information |
-
1997
- 1997-07-11 AU AU37244/97A patent/AU3724497A/en not_active Abandoned
- 1997-07-11 WO PCT/US1997/012061 patent/WO1998002827A1/en not_active Application Discontinuation
- 1997-07-11 JP JP10506161A patent/JP2001502858A/ja active Pending
- 1997-07-11 MX MXPA99000549A patent/MXPA99000549A/es unknown
- 1997-07-11 CA CA002260077A patent/CA2260077A1/en not_active Abandoned
- 1997-07-11 EP EP97934108A patent/EP1027660A1/de not_active Withdrawn
Non-Patent Citations (1)
Title |
---|
See references of WO9802827A1 * |
Also Published As
Publication number | Publication date |
---|---|
CA2260077A1 (en) | 1998-01-22 |
AU3724497A (en) | 1998-02-09 |
WO1998002827A1 (en) | 1998-01-22 |
JP2001502858A (ja) | 2001-02-27 |
MXPA99000549A (es) | 2003-09-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7739255B2 (en) | System for and method of visual representation and review of media files | |
JP3185505B2 (ja) | Meeting minutes creation support apparatus | |
US6332147B1 (en) | Computer controlled display system using a graphical replay device to control playback of temporal data representing collaborative activities | |
US5717879A (en) | System for the capture and replay of temporal data representing collaborative activities | |
US6789109B2 (en) | Collaborative computer-based production system including annotation, versioning and remote interaction | |
US6938029B1 (en) | System and method for indexing recordings of observed and assessed phenomena using pre-defined measurement items | |
US9348829B2 (en) | Media management system and process | |
US5717869A (en) | Computer controlled display system using a timeline to control playback of temporal data representing collaborative activities | |
US5786814A (en) | Computer controlled display system activities using correlated graphical and timeline interfaces for controlling replay of temporal data representing collaborative activities | |
US6366296B1 (en) | Media browser using multimodal analysis | |
US6571054B1 (en) | Method for creating and utilizing electronic image book and recording medium having recorded therein a program for implementing the method | |
US20030078973A1 (en) | Web-enabled system and method for on-demand distribution of transcript-synchronized video/audio records of legal proceedings to collaborative workgroups | |
US20050160113A1 (en) | Time-based media navigation system | |
US20050081159A1 (en) | User interface for creating viewing and temporally positioning annotations for media content | |
US20050080789A1 (en) | Multimedia information collection control apparatus and method | |
JP3574606B2 (ja) | Hierarchical video management method, hierarchical management apparatus, and recording medium storing a hierarchical management program | |
WO2010073695A1 (ja) | Editing information presentation device, editing information presentation method, program, and recording medium | |
EP1027660A1 (de) | Digitales videosystem mit datenbank für codierte daten für audio- und videoinformation | |
US20040056881A1 (en) | Image retrieval system | |
Knoll et al. | Management and analysis of large-scale video surveys using the software vPrism™ | |
US20070240058A1 (en) | Method and apparatus for displaying multiple frames on a display screen | |
WO2006030995A9 (en) | Index-based authoring and editing system for video contents | |
JP2565048B2 (ja) | Scenario presentation device | |
JPH07334523A (ja) | Information processing apparatus | |
Benedetti et al. | A structured video browsing tool |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 19990129 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20020201 |