GB2407445A - Selecting highlights from a recorded medium - Google Patents


Info

Publication number
GB2407445A
GB2407445A (application GB0324815A)
Authority
GB
Grant status
Application
Patent type
Prior art keywords
event
highlight
incident
case
method according
Prior art date
Legal status
Granted
Application number
GB0324815A
Other versions
GB2407445B (en)
GB0324815D0 (en)
Inventor
Catherine Mary Dolbear
Jonathan Teh
Michael Brady
Current Assignee
Motorola Solutions Inc
Original Assignee
Motorola Solutions Inc
Priority date
Filing date
Publication date

Classifications

    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02: Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031: Electronic editing of digitised analogue information signals, e.g. audio or video signals

Abstract

A method of selecting highlights from a recorded medium comprises the step of representing at least a first event from the recorded medium as one of a plurality of possible event states, then evaluating candidate highlights of the recorded medium by comparison with representations of at least a first contextually grouped set of events, identified as a highlight of a case selected from a plurality of prior cases. The method produces a relevant selection of highlights, as a final product for a user, from multimedia source material.

Description

A Method And Apparatus For Selecting Highlights From A Recorded Medium

Technical Field

The invention relates to a method and apparatus for selecting highlights from a recorded medium. In particular, it relates to selecting highlights that are relevant to a particular user.

Background

The increasing volume of multimedia information (e.g. text, audio and visual - video or still image) available to and generated by modern-day users is overwhelming.

Significantly, however, much of this information is either unwanted or irrelevant to the user's needs. Moreover, in a marketplace where multimedia downloads may be charged by the byte (or multiples thereof), extraneous information is in fact a costly nuisance to the user. A method of editing such information, for example at a download server, to produce a relevant selection of highlights (significant segments of the information) is eminently desirable.

Unlike browsing stills or video skims where the user is effectively still selecting from amongst the full information material, the intent is to generate a final product for the user's consumption.

Simply presenting video skims in such a finalised manner is unlikely to be satisfactory. Take for example a 90 minute recording of a football (soccer) match; key frame identification using chromaticity measures, lighting, scene movement or ambient sound might well capture all instances where the goal net comes into view - regardless of why - whilst ignoring all instances of fouls on the pitch.

Moreover, often such techniques select a few sample frames of, say, a lingering shot on the goal mouth, so possibly resulting in footage of an approaching player, followed by players celebrating, but no actual goal.

Clearly this does not result in a satisfactory highlights programme for the user.

Other techniques using low-level audio and visual features have been proposed, as recounted by Xie L., Chang S. F., Divakaran A. and Sun H. in "Structure Analysis of Soccer Video with Hidden Markov Models", IEEE Int. Conf. Acoustics, Speech and Signal Processing, 2002. In this paper, hidden Markov models are used to recognise goal events, break and play events etc. However, this method concerns the distinguishing features of events per se, rather than the interesting features of events. For the purposes of the user, it is their interest or relevance to the user that is paramount.

In contrast to multimedia applications, some attempts at capturing significant events have been described for plain textual analysis: Maybury M.T., in "Generating summaries from event data", Information Processing and Management 31(5) pp 735-751 1995 discusses assigning significance to events (e.g. figures 3, 4 and 5 of the cited document) according to frequency and/or cluster analysis, essentially highlighting any occurrence that is rare or whose pattern of activity deviates from past records. The paper also describes the generation of portmanteau terms comprising frequently associated individual events or objects (e.g. collective nouns), as a means of condensing the information to a shorthand form.

Capus L. and Tourigny N., in "Learning Summarization Using Similarities", Computer Assisted Language Learning 1998 11(5) pp 475-488 describe the use of prior cases of summarization in a text summarization task, but do not detail a case selection mechanism or a method of relating the summarization processes of prior cases to the current task, beyond stipulating a manual representation of the meanings of prior stories (e.g. Table 1 of the document).

Clearly, therefore, a need still exists for a method of producing a multimedia highlights programme from overlong source material that is of relevance to a user.

The purpose of the present invention is to address the above need.

Summary of the Invention

The present invention provides a method of selecting highlights from a recorded medium (hereinafter an 'input case'), comprising the step of representing at least a first event instance from the input case as one of a plurality of possible event states, the method further characterized by the step of evaluating candidate highlights of the input case by comparison with representations of at least a first contextually grouped set of events, identified as a highlight of a case selected from a plurality of prior cases.

In a first aspect, the present invention provides a method of selecting highlights from a recorded medium, as claimed in claim 1.

In a second aspect, the present invention provides an apparatus for selecting highlights from a recorded medium, as claimed in claim 13.

In a third aspect, the present invention provides an apparatus for editing media, as claimed in claim 16.

Further features of the present invention are as defined in the dependent claims.

Embodiments of the present invention will now be described by way of example with reference to the accompanying drawings, in which:

Brief description of the drawings

FIG.1 is a flowchart of a method of selecting highlights from a recorded medium according to an embodiment of the present invention.

FIG. 2 is a block schematic diagram of a hierarchical event ontology for the game of football, in accordance with an embodiment of the present invention.

Detailed description

Referring to FIG. 1, a method of selecting highlights from a recorded medium is disclosed.

For the purposes of example only, a particular football match is assumed to be the material recorded, and is referred to hereinafter as the input case.

Symbolic descriptions 110 of input case events are acquired. These may for example be generated by a descriptive means outside the scope of the present invention, or by tokenising a transcription of the football match. Embodiments of the present invention assume that such descriptions 110 are available.

In an embodiment of the present invention, in step 120 at least a first event instance from the input case is represented as one of a plurality of possible event states.

As seen in FIG. 2, this plurality of event states is typically organised into a hierarchical ontology describing the kinship and similarity of such states.

To reiterate for the purposes of clarity, the following terminology will be adopted: i. 'event' refers to a specific instance (e.g. a goal scored by Michael Owen after 86 minutes); ii. 'event state' refers to a categorical representation (i.e. class or type) of an event (e.g. 'goal').

Moreover, event states are related according to their position within a hierarchical ontology.
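The two-level organisation of event states can be pictured with a small data structure. The following is an illustrative sketch only; apart from 'stoppage', 'corner', 'free kick' and 'goal', which the FIG. 2 discussion names, the classes and states here are assumptions.

```python
# Illustrative two-level event-state ontology, loosely after FIG. 2.
# The 'attempt' class and states beyond those named in the description
# are assumptions for the sketch.
ONTOLOGY = {
    "stoppage": ["corner", "free kick", "throw in", "offside", "kick off"],
    "attempt": ["goal", "shot", "header"],
}

def supra_class(state):
    """Return the supra event state class of a given event state."""
    for supra, states in ONTOLOGY.items():
        if state in states:
            return supra
    return None

def siblings(state):
    """Return the sibling event states sharing the same supra class."""
    supra = supra_class(state)
    if supra is None:
        return []
    return [s for s in ONTOLOGY[supra] if s != state]
```

Sibling lookup of this kind is what permits the substitution described later (e.g. 'free kick' for 'corner') when no exact event-state sequence is found in the input case.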

Referring now back to FIG. 1, in step 130 a plurality of prior cases is stored. These prior cases each comprise at least the event descriptions of a prior football match and means to identify the highlighted events of said prior football match.

Such means may be a highlight flag attached to events, a lookup table of highlighted events, or a separate set of highlight event descriptions.

Clearly, step 130 may only need to be carried out once before any number of input cases are considered.

In an optional embodiment of the present invention, the states of each event are also explicitly stored.

In step 140, a reference case is selected from the plurality of prior cases according to the following steps: i. For each prior case, the frequency of occurrence of an event state in the prior case is compared with the frequency of occurrence of the same state within the input case, to produce a frequency difference value for each class of event.

ii. If a rank or weight is available for each event state, the frequency difference values are scaled to increase the values of high ranked or weighted event states relative to low ranked or weighted event states.

iii. The prior case with the smallest sum of frequency difference values is selected as the reference case.

Thus evaluation of each prior case comprises calculating substantially a metric such as:

Min over i of: a|I_save - C_save(i)| + b|I_shot - C_shot(i)| + c|I_assist - C_assist(i)| + d|I_goal - C_goal(i)|

where I refers to the input case, C(i) refers to the current of N prior cases being evaluated, the subscripts refer to the event states whose frequencies of occurrence are being compared, and a, b, c and d are ranks or weights, if used. So for example, |I_goal - C_goal(i)| gives the total goal difference between the input case and the currently evaluated prior case.
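A minimal sketch of this selection, assuming event-state frequencies are held as dictionaries (the function and variable names are illustrative, not from the patent):

```python
# Sketch of step 140: choose the prior case minimising the weighted sum
# of event-state frequency differences against the input case.
def select_reference(input_counts, prior_cases, weights=None):
    """prior_cases: list of (case_id, counts) pairs, where counts maps
    event state -> frequency of occurrence in that case."""
    weights = weights or {}

    def distance(counts):
        states = set(input_counts) | set(counts)
        return sum(weights.get(s, 1.0) *
                   abs(input_counts.get(s, 0) - counts.get(s, 0))
                   for s in states)

    return min(prior_cases, key=lambda pair: distance(pair[1]))

# A goal-heavy input case is matched to the prior case with similar totals.
input_counts = {"goal": 3, "shot": 10, "save": 4}
priors = [("case_A", {"goal": 0, "shot": 12, "save": 6}),
          ("case_B", {"goal": 2, "shot": 9, "save": 5})]
best_id, _ = select_reference(input_counts, priors)
```

Because the metric is a simple weighted sum, the exhaustive search over all prior cases noted below stays cheap.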

In an optional embodiment of the present invention, event states are given a rank or weight, the rank or weight being related to the significance of the state in highlight selection.

In a further embodiment, the rank or weight may in whole or part be responsive to user preferences. For example, the user may specifically request that any bookings of team A in the input case are included in the final programme. Such event specifications would boost the rank or weighting of the corresponding event states.

Advantageously, as the similarity measure and the metrics involved are simple to calculate, an exhaustive search can be performed on the totality of prior cases for little processing cost.

After selection of a reference case in step 140, the corresponding selected highlights for the reference case (hereinafter 'reference highlights') are identified.

Candidate highlights of the input case are then evaluated by comparison with representations of at least a first contextually grouped set of events, identified as a reference highlight.

The representations are typically event states. A causally grouped set of event states is hereinafter referred to as an 'incident'. In other words, an incident is an event-class level description of a contextually grouped set of events.

Advantageously, comparisons using incidents enable selection of causally related events within the input case, enabling in turn production of a coherent highlight for a programme.

This addresses the prior art problem of finding representative sections that are satisfying to the user; repeating a previous example, a clip of a player approaching the goal followed by a clip of players celebrating a goal is not a causally related sequence and is unlikely to be acceptable to the user. By contrast, a contextually related set of clips, either of an approach on goal followed by a goal, or a goal followed by a celebration, or all three elements in a causal sequence, is far more likely to be acceptable.

In step 150, candidate input case incidents are selected as highlights substantially according to the following overall principle: each highlight incident in the reference case is taken, and compared against each candidate incident in the input case in turn. The most similar incident in the input case is selected as a highlight incident, and then removed from further selection processes.

In an embodiment of the present invention, implementation of this overall principle comprises the following steps. For each reference highlight incident RefInc(i) from the reference case, find all input case incidents InInc(n) comprising a matching sequence of event states to those of RefInc(i), then:

i. If n=0, substitute a sibling state for at least one event state of RefInc(i) and repeat the search.

ii. If n=1, select InInc(1) as a highlight and remove it from further selection processes.

iii. If n>1, using an adaptation similarity measure, select InInc(j), 1≤j≤n, as a highlight and remove it from further selection processes.
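The three branches can be sketched as a single loop. This is a hedged outline, not the patent's implementation: incidents are represented here as dictionaries carrying a 'states' tuple, and the adaptation similarity measure detailed later in the text is passed in as a scoring function, where lower means more similar.

```python
# Outline of step 150: each reference highlight incident claims its most
# similar input case incident, which is then removed from further
# selection. Lower scores from 'adaptation_similarity' mean more similar.
def select_incidents(ref_incidents, input_incidents, adaptation_similarity):
    remaining = list(input_incidents)
    selected = []
    for ref_states in ref_incidents:
        matches = [inc for inc in remaining if inc["states"] == ref_states]
        if not matches:
            # step i: here one would substitute sibling event states in
            # ref_states and repeat the search (omitted in this sketch)
            continue
        if len(matches) == 1:
            chosen = matches[0]            # step ii: unique match
        else:
            chosen = min(matches,          # step iii: most similar wins
                         key=lambda m: adaptation_similarity(ref_states, m))
        remaining.remove(chosen)
        selected.append(chosen)
    return selected
```

A selected incident never competes again, so a second identical reference incident claims the next-best candidate.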

Steps i-iii above are now discussed in detail.

Step i. considers the situation where no input case incident produces a sequence of event states that matches that of a reference highlight incident. If so, at least a first event state of the reference highlight incident is substituted by a sibling event state, the sibling event state being in a same supra event state class as defined within an hierarchical ontology of event states.

For example, referring to FIG. 2, wherein a hierarchical event ontology 200 is split into supra event state classes such as 210 or 220, in turn split into event states such as 211 - 215, suppose a reference highlight incident comprised inter alia event state 'corner' 211 followed by event state 'goal' 221. If no specific instance of this sequence occurred in the input case, event state 'corner' 211 may be substituted by event state 'free kick' 212, as one of a set of sibling event states (211, 212, 213, 214, 215) within the same supra event class 'stoppage' 210 within the hierarchical ontology 200, to produce a highlight incident comprising inter alia event state 'free kick' followed by event state 'goal'.

In an optional embodiment of the present invention, the cumulative level of substitution within RefInc(i) may be limited to not exceed a total subsumption distance S between the original reference highlight incident and a progressively substituted one. The subsumption distance S is a measure of distance of relationship between event states in the hierarchical ontology 200.

In a further embodiment of the present invention, if a total subsumption distance S is reached or exceeded, then a contiguous sub-set of event states of the reference highlight incident may be compared instead with input case incidents.
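One possible reading of the subsumption-bounded substitution, as a sketch: generate substituted variants of a reference incident, counting each sibling swap as distance 1, and discard any variant whose cumulative distance exceeds S. The sibling table and the unit per-swap cost are assumptions for illustration.

```python
from itertools import product

# Illustrative sibling table; only 'corner'/'free kick' under 'stoppage'
# are named in the text, the rest is assumed for the sketch.
SIBLINGS = {"corner": ["free kick", "throw in"],
            "free kick": ["corner", "throw in"],
            "goal": ["shot", "header"]}

def bounded_variants(incident, max_distance):
    """Yield (substituted incident, cumulative distance) pairs whose
    total subsumption distance from the original stays within the bound."""
    options = [[(state, 0)] + [(sib, 1) for sib in SIBLINGS.get(state, [])]
               for state in incident]
    for combo in product(*options):
        dist = sum(d for _, d in combo)
        if 0 < dist <= max_distance:
            yield tuple(s for s, _ in combo), dist
```

With ('corner', 'goal') and a bound of 1, this yields single-swap variants such as ('free kick', 'goal'), matching the FIG. 2 example.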

Step ii. simply determines that if only one input case incident is identified as similar to a reference highlight incident, it is selected as a highlight incident and removed from further selection processes.

Note that the level of similarity required for identification in steps ii. above and iii. below should encompass the substitutions possible by the application of step i.

Step iii. determines that if more than one input case incident is identified as similar to a reference highlight incident, the most similar input case incident is selected as a highlight incident according to an adaptation similarity measure.

The adaptation similarity measure substantially comprises the following steps: i. For each event represented in the reference highlight incident, events belonging to the same class in the input case are scored according to at least a first comparative criterion; ii. Selecting as a highlight that incident within the input case whose corresponding events produce the best overall score.

The adaptation similarity measure thus compares the specific details of each event represented in the reference highlight with specific details of at least one event of the same class represented in the relevant instances of the input case. This enables selection of that input case incident whose constituent represented events are overall the most similar to the reference highlight.

For each reference highlight incident RefInc(i) from the reference case, each event RefEvt(j) is then compared with each selected input case event InEvt(k) corresponding to the same event state in the input case.

To compare an event RefEvt(j) with an event InEvt(k), a difference score D, based upon at least a first comparative criterion, and a subsumption distance S are calculated.

Typically the difference score D is a scaled sum of scores based upon comparisons between parameters describing the event, such as the event state or the supra event state class, the event start time, extra time or duration time stamps, case specific descriptors such as individual / group participant names or state specific descriptors such as (passed) from, taken(from), to(name), resulting in, (shot)type, off(who), reason(type), booked for(type), dismissed for(type) etc., as appropriate for that class.

It will be clear to a person skilled in the art that other parameters suitable to the given task may be selected.

In the example football case, D may be calculated using comparative criteria as follows:

1. Compare the class parameter values, by parameter.
For the 'by' parameter: if RefEvt(j).by.name ≠ InEvt(k).by.name && RefEvt(j).by.team ≠ InEvt(k).by.team, D++; else if RefEvt(j).by.name ≠ InEvt(k).by.name && RefEvt(j).by.team = InEvt(k).by.team, D += 0.5.
For the start time parameter, convert to seconds if necessary: D += |RefEvt(j).start_time - InEvt(k).start_time| / 5400 (i.e. absolute difference normalised over 90 minutes).
For the extra time parameter, convert to seconds if necessary: D += |RefEvt(j).extra_time - InEvt(k).extra_time| / 300 (i.e. absolute difference normalised over 5 minutes).
For the duration parameter, convert to seconds if necessary: D += |RefEvt(j).duration - InEvt(k).duration| / 60 (i.e. absolute difference normalised over 60 seconds).

2. If RefEvt(j) and InEvt(k) are from the same class (recall substitution is possible), then for each of L symbolic parameter values (e.g. player/team names) RefSPV(j,l) and InSPV(k,l): if RefSPV(j,l) ≠ InSPV(k,l), D++. Else, if from different classes, then for those M parameters RefPV(j,m) and InPV(k,m) of RefEvt(j) and InEvt(k) respectively that are the same: if RefPV(j,m) ≠ InPV(k,m), D++.

3. D is scaled using D = D/N, where N is the number of parameters in RefEvt(j).

The subsumption distance S is calculated as the distance between the two corresponding events within the hierarchical ontology 200. S = 0 for the same corresponding event state, S = 1 for sibling corresponding event states (the same parent supra class), and S = 2 for event states with the same grandparent class.

The scaled value of D and the subsumption distance S are added together. The lower the total value, the more similar events RefEvt(j) and InEvt(k) are.
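Putting the two measures together, a hedged sketch follows; the parameter set is trimmed to a start time and a 'by' pair, the ontology fragment is assumed, and the 5400-second normalisation and the 0.5 cost for a same-team, different-player mismatch follow the worked football example.

```python
# Sketch of the event-level comparison: a scaled difference score D over
# the compared parameters, plus the subsumption distance S between the
# two event states. Lower totals mean more similar events.
PARENT = {"corner": "stoppage", "free kick": "stoppage",
          "goal": "attempt", "shot": "attempt"}

def subsumption(state_a, state_b):
    if state_a == state_b:
        return 0
    if PARENT.get(state_a) == PARENT.get(state_b):
        return 1          # siblings: same parent supra class
    return 2              # same grandparent class assumed otherwise

def difference(ref, inp):
    d, n = 0.0, 0
    # start time: absolute difference normalised over 90 minutes (5400 s)
    d += abs(ref["start"] - inp["start"]) / 5400.0
    n += 1
    # 'by' parameter: full mismatch costs 1; matching team with a
    # different player costs 0.5
    if ref["by_name"] != inp["by_name"]:
        d += 1.0 if ref["by_team"] != inp["by_team"] else 0.5
    n += 1
    return d / n          # scaled by the number of parameters compared

def event_score(ref, inp):
    return difference(ref, inp) + subsumption(ref["state"], inp["state"])
```

An identical event scores 0; a sibling-state event scores at least 1, so exact state matches always win when the detail parameters tie.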

Once complete for each event RefEvt(j) of the reference highlight, those evaluated events in the input case will have a score (D + S) quantifying the similarity of the event to the event of the reference highlight.

The input case incident whose constituent events have the lowest overall score is then selected as the most similar incident.

The need to evaluate the specific event stems from the limited differentiating information within the event state representation, but it will be clear to a person skilled in the art that the number and relevance of the event parameters selected may be varied according to the selection task, as may any specific values. It will also be clear that the changes to the values of D and S can be selected to reflect the relative significance of different event parameters and states.

Once each reference highlight incident has been used in the selection process, a corresponding set of input case incidents will have been selected as highlights of the input case.

Typically, input case events contain time stamps or values, such as the start time and end time of the event. Thus, to provide sufficient information to produce a highlight programme from the input case highlights: in an embodiment of the present invention, the start time of the first event represented by an input case highlight incident and the end time of the last event similarly represented by the highlight incident are made available to a media editing means, enabling generation of a highlight of a complete incident.

In an enhanced embodiment of the present invention, the start and end times of each event represented by an input case highlight incident are made available to a media editing means, such as a multimedia server or audio/video editor.

This provides scope for more restricted highlights, for example removing any interstitial time periods that occur between events within an incident.

It also provides scope for flexibility in highlight programme length; if a 1-minute highlight programme was requested by a user, and the duration of events of the highlights added up to 1 minute and 20 seconds, then individual events within incidents may be dropped, providing the remaining events within incidents are still contiguous.

The selection of events to be dropped may for example be in inverse priority to the weights a, b, c and d used in reference case selection. This would also therefore incorporate user event preferences, if incorporated into these weights.
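A hedged sketch of this trimming, assuming each event carries a duration and each event state a weight (all values illustrative): events are dropped lowest-weight first, but only from the edges of an incident, so that the surviving events within each incident remain contiguous.

```python
# Trim a highlight programme to a requested duration by dropping
# low-weight events from incident edges, keeping each incident contiguous.
def trim_programme(incidents, weights, target_seconds):
    """incidents: list of lists of (state, duration_seconds), in order."""
    total = sum(dur for inc in incidents for _, dur in inc)
    while total > target_seconds:
        candidates = []  # (weight, incident index, event index)
        for i, inc in enumerate(incidents):
            if len(inc) > 1:  # never empty an incident entirely
                candidates.append((weights.get(inc[0][0], 0), i, 0))
                candidates.append((weights.get(inc[-1][0], 0), i, len(inc) - 1))
        if not candidates:
            break             # nothing left to drop safely
        _, i, j = min(candidates)
        total -= incidents[i][j][1]
        del incidents[i][j]
    return incidents
```

For a programme like the text's 1 minute 20 seconds example, a 60-second request drops the lowest-weight edge event (here the build-up) while every incident still reads as a contiguous clip.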

It will be clear to a person skilled in the art that any combination of the available time values suitable for a media editing means may be used, for example the start and end times of the incident and the start and end times of the most significant event within the incident, so that in this case a shortened version of the incident could still be sure to contain the most significant event.

In an embodiment of the present invention, a media server may hold a complete input case and generate a highlight programme in response to a user request, or may generate and store just one or more typical highlight programmes.

Note that if user-preferences are allowed in generating the highlight programme, then any prior case may itself be treated as an input case if selected by the user.

It will also be clear to a person skilled in the art that the event state (class) of events in the input case and/or prior cases may either be determined during the herein described embodiments, or be pre-determined and stored with the respective case.

It will be understood that the method of selecting highlights from a recorded medium, as described in the above embodiments, provides at least one or more of the following advantages: i. an exhaustive search of prior cases can be performed for little processing cost; ii. adaptation similarity is evaluated over a set of causally related events, selecting the most similar candidate incidents rather than individual events, and thus iii. causally related events within the input case are selected to form a coherent highlight; and iv. The user may specify duration and event preferences within a highlight programme whilst preserving the coherence of the highlights.


Whilst specific implementations of the present invention are described above, it is clear that one skilled in the art could readily apply variations and modifications of such inventive concepts.

Thus, a method of selecting highlights from a recorded medium has been provided where the disadvantages described with reference to prior art methods have been substantially alleviated.

Apparatus for selecting highlights from a recorded medium, according to a method as described in one or more of the above embodiments is also provided. The apparatus comprises comparison means to compare candidate highlights of the input case with representations of at least a first contextually grouped set of events, identified as a highlight of a case selected from a plurality of prior cases.

A storage medium is also provided, storing processor-implementable instructions for controlling one or more processors to carry out the method herein described. A data signal comprising highlight programme data derived using the method herein described is also provided. A signal carrier carrying a data signal comprising highlight programme data derived using the method herein described is also provided.

Claims (20)

  1. A method of selecting from an input case comprising a recorded medium at least a first highlight, comprising the step of representing at least a first event instance from the input case as one of a plurality of possible event states, the method including the step of: evaluating candidate highlights of the input case by comparison with representations of at least a first incident comprising a contextually grouped set of events, identified as a highlight of a reference case selected from a plurality of prior cases.
  2. A method according to claim 1, wherein if no input case incident comprises a sequence of event states that matches that of a reference highlight incident comprising a highlight incident of the reference case, then an event state of the reference highlight incident is substituted by at least a first sibling event state, the sibling event state being in a same supra event state class as defined within an hierarchical ontology of event states.
  3. A method according to any one of the preceding claims, wherein if only one input case incident is identified as similar to a reference highlight incident, it is selected as a highlight.
  4. A method according to any one of the above claims, wherein if more than one input case incident is identified as similar to a reference highlight incident, the most similar input case incident is selected as a highlight substantially according to the following steps: i. For each event represented in the reference highlight incident, events belonging to the same class in the input case are scored according to at least a first comparative criterion; ii. Selecting as a highlight that incident within the input case whose corresponding events produce the best overall score.
  5. A method according to any one of claims 3 and 4, wherein any input case incident selected as a highlight is removed from further selection processes.
  6. A method according to any one of claims 3, 4 and 5, wherein a similarity measurement employing comparative criteria may compare any or all of the following set; i. event state; ii. event state class; iii. event duration; iv. event time stamp/s; v. individual participant names; vi. group participant names; and vii. event specific descriptors.
  7. A method according to any one of the preceding claims, wherein event states are given a rank or weight, the rank or weight related to the significance of the event state to highlight selection.
  8. A method according to claim 7 wherein the rank or weight is responsive to user preferences.
  9. A method according to any one of the preceding claims, wherein a reference case is selected from a plurality of prior cases according to the following steps: i. For each prior case, the frequency of occurrence of an event state in the prior case is compared with the frequency of occurrence of the same state within the input case, to produce a frequency difference value for each class of event.
    ii. If a rank or weight is available for each event state, the frequency difference values are scaled to increase the values of high ranked or weighted event states relative to low ranked or weighted event states.
    iii. The prior case with the smallest sum of frequency difference values is selected as the reference case.
  10. A method according to any one of the preceding claims, wherein the information recorded in the recorded medium may comprise any or all of the following set; i. text; ii. audio; iii. video; and iv. still images.
  11. A method according to any one of the preceding claims, wherein at least a start time value associated with the first event represented by an input case highlight incident and the end time of the last event of the same incident are made available to a media editing means.
  12. A method according to any one of the preceding claims, wherein start and end time values of each event represented by an input case highlight incident are made available to a media editing means.
  13. Apparatus for selecting highlights from a recorded medium according to a method as claimed in any one of the preceding claims, the apparatus comprising; comparison means to compare candidate highlights of the input case with representations of at least a first contextually grouped set of events, identified as a highlight of a case selected from a plurality of prior cases.
  14. Apparatus according to claim 13 wherein the apparatus is operable to output at least a first time value associated with the events represented by a highlighted input case incident.
  15. Apparatus according to any one of claims 13 and 14 wherein the apparatus is operably coupled to a media editing means.
  16. Apparatus according to any one of claims 13 and 14 wherein the apparatus is within a media editing means.
  17. A storage medium storing processor-implementable instructions for controlling one or more processors to carry out the method of any one of claims 1 to 12.
  18. A data signal comprising highlight programme data derived using the method of any one of claims 1 to 12.
  19. A signal carrier carrying a data signal comprising highlight programme data derived using the method of any one of claims 1 to 12.
  20. A method according to claim 1 and substantially as hereinbefore described with reference to the accompanying drawings.
Amendments to the claims have been filed as follows

Claims

1. A method of providing time values for a media editing means, comprising the step of: selecting from an input case comprising a recorded medium at least a first highlight, wherein at least a first event from the input case is represented as one of a plurality of possible event states, and characterized by the step of: evaluating candidate highlights of the input case by comparison with representations of at least a first incident comprising a contextually grouped set of events, identified as a highlight of a reference case selected from a plurality of prior cases.
    2. A method according to claim 1, wherein if no input case incident comprises a sequence of event states that matches that of a reference highlight incident comprising a highlight incident of the reference case, then an event state of the reference highlight incident is substituted by at least a first sibling event state, the sibling event state being in a same supra event state class as defined within an hierarchical ontology of event states.
    3. A method according to any one of the preceding claims, wherein if only one input case incident is identified as similar to a reference highlight incident, it is selected as a highlight.
    4. A method according to any one of the above claims, wherein if more than one input case incident is identified as similar to a reference highlight incident, the most similar input case incident is selected as a highlight substantially according to the following steps: i. For each event represented in the reference highlight incident, events belonging to the same class in the input case are scored according to at least a first comparative criterion; ii. Selecting as a highlight that incident within the input case whose corresponding events produce the best overall score.
    5. A method according to any one of claims 3 and 4, wherein any input case incident selected as a highlight is removed from further selection processes.
    6. A method according to any one of claims 3, 4 and 5, wherein a similarity measurement employing comparative criteria may compare any or all of the following set: i. event state; ii. event state class; iii. event duration; iv. event time stamp/s; v. individual participant names; vi. group participant names; and vii. event specific descriptors.
    7. A method according to any one of the preceding claims, wherein event states are given a rank or weight, the rank or weight related to the significance of the event state to highlight selection.
    8. A method according to claim 7 wherein the rank or weight is responsive to user preferences.
    9. A method according to any one of the preceding claims, wherein a reference case is selected from a plurality of prior cases according to the following steps: i. For each prior case, the frequency of occurrence of an event state in the prior case is compared with the frequency of occurrence of the same state within the input case, to produce a frequency difference value for each class of event.
    ii. If a rank or weight is available for each event state, the frequency difference values are scaled to increase the values of high ranked or weighted event states relative to low ranked or weighted event states.
    iii. The prior case with the smallest sum of frequency difference values is selected as the reference case.
    10. A method according to any one of the preceding claims, wherein the information recorded in the recorded medium may comprise any or all of the following set: i. text; ii. audio; iii. video; and iv. still images.
    11. A method according to any one of the preceding claims, wherein at least a start time value associated with the first event represented by an input case highlight incident and the end time of the last event of the same incident are made available to a media editing means.
    12. A method according to any one of the preceding claims, wherein start and end time values of each event represented by an input case highlight incident are made available to a media editing means.

    13. Apparatus for selecting highlights from a recorded medium according to a method as claimed in any one of the preceding claims, the apparatus comprising: comparison means to compare candidate highlights of the input case with representations of at least a first contextually grouped set of events, identified as a highlight of a case selected from a plurality of prior cases.
    14. Apparatus according to claim 13 wherein the apparatus is operable to output at least a first time value associated with the events represented by a highlighted input case incident.
    15. Apparatus according to any one of claims 13 and 14 wherein the apparatus is operably coupled to a media editing means.
    16. Apparatus according to any one of claims 13 and 14 wherein the apparatus is within a media editing means.
    17. A storage medium storing processor-implementable instructions for controlling one or more processors to carry out the method of any one of claims 1 to 12.

    18. A data signal comprising highlight programme data derived using the method of any one of claims 1 to 12.
    19. A signal carrier carrying a data signal comprising highlight programme data derived using the method of any one of claims 1 to 12.
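The sibling substitution described in claim 2 can be illustrated with a short sketch. This is a hypothetical reading, not the patented implementation: the toy ontology, its event-state names, and the one-substitution-at-a-time strategy are all assumptions made for illustration.

```python
# Toy hierarchical ontology of event states: each state maps to its
# supra event state class (all names here are illustrative assumptions).
ONTOLOGY = {
    "goal": "score_event",
    "penalty_goal": "score_event",
    "own_goal": "score_event",
    "yellow_card": "discipline_event",
    "red_card": "discipline_event",
}

def siblings(state):
    """States sharing `state`'s supra event state class, excluding `state`."""
    parent = ONTOLOGY.get(state)
    return [s for s, p in ONTOLOGY.items() if p == parent and s != state]

def relaxed_sequences(reference_sequence):
    """All variants of the reference highlight incident's event-state
    sequence in which exactly one state is substituted by a sibling,
    for re-matching against input case incidents when no exact match exists."""
    variants = []
    for i, state in enumerate(reference_sequence):
        for sib in siblings(state):
            variants.append(reference_sequence[:i] + [sib] + reference_sequence[i + 1:])
    return variants
```

Each relaxed sequence would then be matched against the input case incidents in the same way as the original reference sequence.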
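The two-step selection in claim 4 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented method: the comparative criterion (absolute duration difference, lower is better) and the penalty for a missing event class are both invented here for concreteness.

```python
MISSING_CLASS_PENALTY = 100.0  # assumed cost when no same-class event exists

def incident_score(reference_events, candidate_events):
    """Step i: score a candidate input incident against the reference
    highlight incident. Events are dicts with 'class' and 'duration'
    keys (assumed representation); lower totals mean more similar."""
    total = 0.0
    for ref in reference_events:
        same_class = [e for e in candidate_events if e["class"] == ref["class"]]
        if same_class:
            total += min(abs(e["duration"] - ref["duration"]) for e in same_class)
        else:
            total += MISSING_CLASS_PENALTY
    return total

def select_highlight(reference_events, input_incidents):
    """Step ii: select the input incident whose corresponding events
    produce the best (here, lowest) overall score."""
    return min(input_incidents, key=lambda inc: incident_score(reference_events, inc))
```

Per claim 5, an incident returned by `select_highlight` would then be removed from the candidate pool before further selection.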
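Claim 6 enumerates seven comparable event attributes; a data-structure sketch makes the event representation concrete. The field names and types below are assumptions, chosen only to mirror the claim's list.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    state: str                 # i.   event state
    state_class: str           # ii.  event state class
    duration: float            # iii. event duration (e.g. seconds)
    timestamps: tuple          # iv.  event time stamp/s
    individuals: tuple = ()    # v.   individual participant names
    groups: tuple = ()         # vi.  group participant names
    descriptors: dict = field(default_factory=dict)  # vii. event specific descriptors
```

A similarity measurement under claim 6 could then compare any subset of these fields between a reference event and an input case event.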
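The reference-case selection of claim 9, steps i to iii, can be sketched as a weighted frequency comparison. The exact scaling rule for ranked or weighted event states (step ii) is not specified by the claim, so the multiplicative weighting below is an assumption.

```python
from collections import Counter

def frequency_difference(prior_states, input_states, weights=None):
    """Steps i-ii: sum of (optionally weighted) absolute per-state
    frequency differences between a prior case and the input case,
    each case given as a list of event states."""
    prior, inp = Counter(prior_states), Counter(input_states)
    total = 0.0
    for state in set(prior) | set(inp):
        weight = weights.get(state, 1.0) if weights else 1.0
        total += weight * abs(prior[state] - inp[state])
    return total

def select_reference_case(prior_cases, input_states, weights=None):
    """Step iii: the prior case with the smallest summed
    frequency-difference value becomes the reference case."""
    return min(prior_cases, key=lambda c: frequency_difference(c, input_states, weights))
```

Higher weights inflate the difference contributed by significant event states, so prior cases that mismatch the input case on those states are penalised more heavily.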
GB0324815A 2003-10-24 2003-10-24 A method and apparatus for selecting highlights from a recorded medium Active GB2407445B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0324815A GB2407445B (en) 2003-10-24 2003-10-24 A method and apparatus for selecting highlights from a recorded medium

Publications (3)

Publication Number Publication Date
GB0324815D0 (en) 2003-11-26
GB2407445A (en) 2005-04-27
GB2407445B (en) 2005-12-28

Family

ID=29595732

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0324815A Active GB2407445B (en) 2003-10-24 2003-10-24 A method and apparatus for selecting highlights from a recorded medium

Country Status (1)

Country Link
GB (1) GB2407445B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002008948A2 (en) * 2000-07-24 2002-01-31 Vivcom, Inc. System and method for indexing, searching, identifying, and editing portions of electronic multimedia files
US6408301B1 (en) * 1999-02-23 2002-06-18 Eastman Kodak Company Interactive image storage, indexing and retrieval system
US6544294B1 (en) * 1999-05-27 2003-04-08 Write Brothers, Inc. Method and apparatus for creating, editing, and displaying works containing presentation metric components utilizing temporal relationships and structural tracks

Also Published As

Publication number Publication date Type
GB2407445B (en) 2005-12-28 grant
GB0324815D0 (en) 2003-11-26 grant

Similar Documents

Publication Publication Date Title
US7801729B2 (en) Using multiple attributes to create a voice search playlist
US8078603B1 (en) Various methods and apparatuses for moving thumbnails
US7383497B2 (en) Random access editing of media
US6862038B1 (en) Efficient image categorization
US20060161867A1 (en) Media frame object visualization system
US6907397B2 (en) System and method of media file access and retrieval using speech recognition
US20020078029A1 (en) Information sequence extraction and building apparatus e.g. for producing personalised music title sequences
US20070053268A1 (en) Techniques and graphical user interfaces for categorical shuffle
US8132103B1 (en) Audio and/or video scene detection and retrieval
Foote An overview of audio information retrieval
US20090307207A1 (en) Creation of a multi-media presentation
Christel et al. Informedia digital video library
US6446083B1 (en) System and method for classifying media items
US20090199251A1 (en) System and Method for Voting on Popular Video Intervals
US20050055372A1 (en) Matching media file metadata to standardized metadata
US20050283752A1 (en) DiVAS-a cross-media system for ubiquitous gesture-discourse-sketch knowledge capture and reuse
US20110218997A1 (en) Method and system for browsing, searching and sharing of personal video by a non-parametric approach
US20060230065A1 (en) Methods, systems, and computer-readable media for generating a suggested list of media items based upon a seed
Chechik et al. Large-scale content-based audio retrieval from text queries
US20110004462A1 (en) Generating Topic-Specific Language Models
US20040128141A1 (en) System and program for reproducing information
Zhu et al. Video data mining: Semantic indexing and event detection from the association perspective
Sundaram et al. A utility framework for the automatic generation of audio-visual skims
US6751776B1 (en) Method and apparatus for personalized multimedia summarization based upon user specified theme
US20070255565A1 (en) Clickable snippets in audio/video search results