CN100538696C - System and method for integrated analysis of intrinsic and extrinsic audio-visual data - Google Patents

System and method for integrated analysis of intrinsic and extrinsic audio-visual data

Info

Publication number
CN100538696C
CN100538696C
Authority
CN
China
Prior art keywords
film
screen play
dialogue
data
extrinsic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2004800357507A
Other languages
Chinese (zh)
Other versions
CN1906610A (en)
Inventor
N. Dimitrova
R. Turetsky
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips Electronics NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics NV
Publication of CN1906610A
Application granted granted Critical
Publication of CN100538696C
Anticipated expiration


Abstract

A system and method are provided for the integrated analysis of intrinsic and extrinsic audio-visual information, for example for analyzing features of a film together with related features that do not appear in the film itself but are available via the Internet. The system comprises an intrinsic content analyser communicatively connected to an audio-visual source, e.g. a film source, for searching the film for intrinsic data and extracting the intrinsic data using an extraction algorithm. The system further comprises an extrinsic content analyser communicatively connected to an extrinsic information source, such as a screenplay available via the Internet, for searching the extrinsic information source and retrieving extrinsic data using a searching algorithm. The intrinsic data and the extrinsic data are correlated into a multi-source data structure. The multi-source data structure is converted into a high-level information structure, which is presented to the user of the system. The user may browse the high-level information structure, for example to view information identifying the actors in a film.

Description

System and method for integrated analysis of intrinsic and extrinsic audio-visual data
Technical field
The present invention relates to the integrated analysis of intrinsic and extrinsic audio-visual information, and in particular to the analysis and correlation of features that appear in a film with features that do not appear in the film but are available, for example, via the Internet.
Background art
For many years, people interested in film have had to consult books, printed magazines or printed encyclopedias to obtain additional information about a particular film. With the advent of the Internet, many Internet sites have taken up film-related data. One example is the Internet Movie Database (http://www.imdb.com), a very detailed and elaborate website providing a large amount of varied additional information for a large number of films. But even though the Internet makes additional film information convenient to access, the user must still manage to find what he is looking for in the vast amount of information available via the Internet.
With the advent of the Digital Versatile Disk (DVD) medium, additional information relating to a film is commonly available in menu form from the main menu of a DVD film. Interviews, alternative film scenes, extended credits, various trivia and so forth are often available. Further, the DVD format facilitates scene browsing, plot summaries, bookmarking of different scenes, etc. But even though additional information is available on many DVDs, this information is selected by the producer of the film; furthermore, it is limited by the free space on the DVD disc, and it is static information.
The number of available films, and the amount of available additional information relating to the various films, actors, directors, etc., is enormous, and the user suffers from "information overload". People interested in film often struggle with how to find exactly what they want, and how to discover new things they might like. To address this problem, various systems and methods have been developed for searching and analyzing audio-visual data. Different types of such systems are available, for example systems for automatic summarization, such as the system described in US application 2002/0093591. Another type is a system for performing a target search based on selected image data, for example an image of a film actor; such a system is described in US application 2003/0107592.
The inventors have appreciated that a system capable of integrating intrinsic and extrinsic audio-visual data (such as integrating the audio-visual data on a DVD film with additional information found on the Internet) would be of benefit, and have consequently devised the present invention.
Summary of the invention
The present invention seeks to provide an improved system for analyzing audio-visual data. Preferably, the invention alleviates or mitigates one or more of the above disadvantages, singly or in combination.
Accordingly, in a first aspect, there is provided a system for the integrated analysis of intrinsic and extrinsic audio-visual information, the system comprising:
an intrinsic content analyser communicatively connected to an audio-visual source, the intrinsic content analyser being adapted to search the audio-visual source for intrinsic data and to extract the intrinsic data using an extraction algorithm,
an extrinsic content analyser communicatively connected to an extrinsic information source, the extrinsic content analyser being adapted to search the extrinsic information source and to retrieve extrinsic data using a searching algorithm,
wherein the intrinsic data and the extrinsic data are correlated, thereby providing a multi-source data structure.
An audio-visual system, for example one suited for home use, may contain processing means capable of analyzing audio-visual information. Any type of audio-visual system may be envisioned, for example a system comprising a Digital Versatile Disk (DVD) unit, or a unit capable of displaying streaming video, for example video in MPEG format, or in any other format adapted for transmission over a data network. The audio-visual system may also be a "set-top" box type system adapted to receive and display audio-visual content, e.g. TV and films, via satellite or cable. The system comprises means for presenting audio-visual content, i.e. the intrinsic content, to the user, or means for outputting a signal enabling audio-visual content to be presented to the user. The adjective "intrinsic" should be construed broadly. Intrinsic content may be content that can be extracted from the signal of the film source: the video signal, the audio signal, text extracted from the signal, etc.
The system comprises an intrinsic content analyser. The intrinsic content analyser may typically be processing means capable of analyzing audio-visual data, and it is communicatively connected to an audio-visual source, for example a film source. The intrinsic content analyser searches the audio-visual source using an extraction algorithm and extracts data therefrom.
The system also comprises an extrinsic content analyser. The adjective "extrinsic" should be construed broadly. Extrinsic content is content that is not contained in the intrinsic content, cannot be extracted from it, or is simply difficult to extract from it. Extrinsic content may typically be content such as a screenplay, a storyboard, reviews, analyses, etc. The extrinsic information source may be the Internet, a data carrier containing relevant data, etc.
The system also comprises means for correlating the intrinsic and extrinsic data into the multi-source data structure. Rules governing this correlation may be part of the extraction and/or searching algorithms. A correlation algorithm may also be present, which correlates the intrinsic and extrinsic data in the multi-source data structure. The multi-source data structure may be a low-level data structure in which different types of data are correlated, for example by data pointers. The multi-source data structure may not be accessible to the user of the system, but may be accessible to the provider of the system. The multi-source data structure is usually formatted as a high-level information structure, which is presented to the user of the system.
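A low-level multi-source data structure of this kind can be sketched as follows. This is only an illustrative assumption, not the patent's implementation: the record types, field names and the simple keyword-matching correlation rule are all invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class IntrinsicRecord:
    start: float          # seconds into the film
    end: float
    kind: str             # e.g. "dialogue", "face", "audio-class"
    value: str

@dataclass
class ExtrinsicRecord:
    source: str           # e.g. "screenplay", "imdb"
    kind: str             # e.g. "scene-heading", "character"
    value: str

@dataclass
class MultiSourceEntry:
    intrinsic: IntrinsicRecord
    extrinsic: list = field(default_factory=list)  # pointers to extrinsic records

def correlate(intrinsic, extrinsic):
    """Correlate by a naive keyword match: link an extrinsic record to an
    intrinsic record when its value occurs inside the intrinsic value."""
    structure = []
    for rec in intrinsic:
        entry = MultiSourceEntry(rec)
        for ext in extrinsic:
            if ext.value.lower() in rec.value.lower():
                entry.extrinsic.append(ext)
        structure.append(entry)
    return structure
```

A real system would correlate by time stamps and fuzzy matching rather than substring containment; the sketch only shows the pointer-based shape of the structure.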
The intrinsic content may be extracted from the audio-visual source using a suitable extraction algorithm, and the extrinsic content may be retrieved from the extrinsic information source. The retrieval of extrinsic data may be based on the extracted data; however, the retrieval of extrinsic data may also be based on data provided to the searching algorithm independently of the intrinsic content.
The extraction and/or searching algorithms may be part of the system, in the same way that many electronic devices have inherent, fixed functionality. Alternatively, however, the extraction and/or searching algorithms may be provided by a module. Providing these algorithms in a module is advantageous because different users have different preferences and tastes, for example with respect to films, and greater flexibility can thereby be provided. The module may be a hardware module, for example an electronic module adapted to be inserted into a slot; alternatively, the module may be a software module, for example a data file on a data carrier, or a data file provided via a network connection.
The system may support functionality whereby a query is provided by the user, the query being provided to the extraction and/or searching algorithms so that intrinsic and/or extrinsic data are extracted in accordance with the query. Providing this functionality is advantageous because of the diversity of style and content in audio-visual data; a system of great flexibility can thereby be provided. The query may be a semantic query, i.e. a query expressed in a query language. The query may be selected from a look-up table, for instance in connection with a query button on a remote control which, when pressed, presents the user with a list of possible queries.
The audio-visual source may be a film, and the intrinsic data extracted therefrom may include, but are not limited to, text, audio and/or video features.
The extrinsic information source may be connected to, and accessible via, the Internet. The extrinsic information source may be a general Internet site, such as the Internet Movie Database; however, the extrinsic information source may also be a dedicated Internet site, for example one with the specific purpose of providing additional information to a system according to the present invention.
The extrinsic information source may be a screenplay. The finished film often departs from the screenplay: the film production process is usually based on the original script, versions of which evolve along with the storyboard. Using this information is like using the recipe of the film. High-level semantic information that cannot, or can only with difficulty, be extracted from the audio-visual content by automatic audio-visual signal processing can be extracted from the screenplay and used in the analysis of the film. This is advantageous because the screenplay may contain data about the film that cannot be extracted by visual analysis at all, or that can be extracted only with very low reliability.
The extrinsic content analyser may comprise knowledge of the screenplay grammar, the information extracted from the screenplay using the screenplay grammar being used to retrieve extrinsic data. The actual content of a screenplay usually follows a regular format. Using knowledge of this format, information such as whether a scene takes place inside or outside, the location, the time of day, etc. may be extracted. Extracting such information based only on intrinsic data is impossible, or at best possible only with very low certainty.
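The "regular format" of a screenplay can be exploited with very little machinery. As an illustrative sketch (the patent does not prescribe an implementation), the following parses a standard scene heading ("slug line") of the form `INT. LOCATION - TIME` to recover exactly the inside/outside flag, location and time of day mentioned above; the function name and return shape are assumptions.

```python
import re

# A screenplay scene heading ("slug line") typically reads e.g.
#   INT. KITCHEN - NIGHT   or   EXT. CITY STREET - DAY
SLUG = re.compile(r"^(INT|EXT)\.\s+(?P<location>.+?)\s*-\s*(?P<time>\w+)\s*$")

def parse_scene_heading(line):
    m = SLUG.match(line.strip())
    if not m:
        return None
    return {
        "inside": m.group(1) == "INT",   # interior vs. exterior scene
        "location": m.group("location"),
        "time_of_day": m.group("time"),
    }
```

Real screenplays also use variants such as `INT./EXT.` or omit the time of day, which a production parser would have to tolerate.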
One important aspect of any film is the identity of the characters in the film. Such information may be obtained by correlating the film content with the screenplay, since the screenplay lists all characters appearing in a given scene. By using the screenplay grammar, the characters' identities can be extracted per scene. The identities extracted from the screenplay may be combined, for example, with audio and/or video identification means, e.g. in order to distinguish several characters within a scene. Any feature extractable from the screenplay may be used in the film analysis presented to the user. Other possibilities that can be extracted and presented to the user include semantic scene description, film structure analysis, mood scene analysis, location/time/setting detection, costume analysis, character profiles, dialogue analysis, genre detection, auteurism detection, etc.
The correlation of the intrinsic and extrinsic data may be a time correlation, the result being a multi-source data structure in which features reflected in the intrinsic data are time-correlated with features reflected in the extrinsic data. The features reflected in the intrinsic and extrinsic data include, but are not limited to, text, audio and/or video features.
The time correlation may be obtained by aligning the dialogue in the screenplay with the dialogue (the spoken text) in the film. The dialogue in the film may be contained in the closed captions, may be extracted from the subtitles, may be extracted using a speech recognition system, or may be provided in some other way. Once the film dialogue is available, it can be compared with, and matched to, the dialogue in the screenplay. The time correlation may provide a time-stamped transcript of the film. The comparison and matching may be obtained, for example, by using a self-similarity matrix.
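A minimal sketch of this alignment step, using exact sequence matching from the Python standard library: screenplay dialogue lines are matched against time-stamped closed-caption entries, and the caption time stamps are propagated onto the matched screenplay lines. The function name and data shapes are assumptions; since closed captions only roughly follow the screenplay, a real system would need fuzzy rather than exact matching.

```python
import difflib

def align(script_lines, captions):
    """Propagate closed-caption time stamps onto screenplay dialogue lines.

    script_lines : list of dialogue strings from the screenplay
    captions     : list of (timestamp_seconds, text) closed-caption entries
    Returns a list of (timestamp or None, script_line).
    """
    cap_texts = [text for _, text in captions]
    matcher = difflib.SequenceMatcher(a=script_lines, b=cap_texts, autojunk=False)
    stamps = [None] * len(script_lines)
    for a, b, size in matcher.get_matching_blocks():
        for i in range(size):
            stamps[a + i] = captions[b + i][0]  # copy the caption's time stamp
    return list(zip(stamps, script_lines))
```

Lines with no caption counterpart (cut or rewritten during production) keep a `None` time stamp, matching the observation elsewhere in the text that only part of the dialogue can be time-stamped.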
As mentioned above, the high-level information structure may be generated from the multi-source data structure. The high-level information structure may provide an interface between the user and the different functionalities of the system, and may correspond to the user interfaces found, for example, in many electronic devices.
The high-level information structure may be stored on a medium. This is advantageous because considerable data processing may be needed to verify and extract the high-level information structure from the intrinsic and extrinsic information. Further, an updated high-level information structure may be generated, the updated high-level information structure being an existing high-level information structure updated from the multi-source data structure. This may be advantageous, for example, where the user requires only a limited analysis, or where the extrinsic information source has been updated and it is desired to update the high-level information structure accordingly.
The content analysis may include the results obtained by the searching algorithm. The content analysis and searching algorithms may be dynamic algorithms adapted to dynamically include additional functionality based on the retrieved extrinsic data. The content analysis and searching algorithms may thus be open algorithms, which learn continuously and update the initial classification (introducing new classes into the system). The additional functionality may be obtained by training the searching algorithm on feature sets from the intrinsic data, using labels from the extrinsic data, during operation of the system after it has been deployed in the user's home.
A feature set from the intrinsic data may be a specific data set, for example the speakers in a film, where labels such as speaker IDs (identities) are known by use of the present invention. The user may, for example, select the data set used in the training, the choice of data set being at the user's convenience. According to the invention, the data set may also be provided by the provider of the system. The training may be obtained using a neural network, i.e. the searching algorithm may, for example, comprise or be connected to a neural network.
The training may be performed using at least one screenplay, i.e. by selecting a data set from at least one screenplay. This is useful because the system can be trained to support new features: new actors appear, unknown actors may become popular, people's tastes differ, and so on. In this way, a more flexible and robust system can be provided. The training of the system may also be blind training for the classification of objects and semantic concepts in video understanding.
The multi-source data structure may be used to provide automatic ground-truth identification in films, which can be used for benchmarking algorithms on audio-visual content. Automatic annotation of films may also be obtained based on the multi-source data structure, which is advantageous for automatic processing of film content.
Another application is audio-visual scene content understanding, using the textual descriptions in the screenplay together with audio-visual features from the video content. A system can be provided which is trained to assign textual descriptions of scenes to low- and mid-level audio/video features. The training may be accomplished using a Support Vector Machine or a Hidden Markov Model. The classification may be based on the audio/video/text features only.
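The text names Support Vector Machines or Hidden Markov Models for this training step. As a dependency-free stand-in that shows the same supervised shape — screenplay-derived scene labels supervising a classifier over audio/video features — this sketch trains a nearest-centroid classifier; the two features (speech ratio, motion level) and the labels are hypothetical.

```python
def train_centroids(features, labels):
    """Compute one centroid (mean feature vector) per scene label."""
    sums, counts = {}, {}
    for x, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def classify(centroids, x):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda y: dist(centroids[y]))

# Hypothetical mid-level features per scene: [speech ratio, motion level],
# with labels taken from screenplay scene descriptions.
X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
y = ["dialogue", "dialogue", "action", "action"]
centroids = train_centroids(X, y)
```

An SVM or HMM would replace the centroid model without changing the surrounding pipeline: features in, screenplay-derived labels as supervision, scene descriptors out.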
By using the textual descriptions in the screenplay, automatic scene content understanding may be obtained. Such an understanding may not be extractable from the film itself.
According to a second aspect of the invention, there is provided a method for the integrated analysis of intrinsic and extrinsic audio-visual information, the method comprising the steps of:
searching an audio-visual source for intrinsic data, and extracting the intrinsic data using an extraction algorithm,
searching an extrinsic information source and, based on the extracted intrinsic data, retrieving extrinsic data using a searching algorithm,
correlating the intrinsic data and the extrinsic data, thereby providing a multi-source data structure.
The method may further comprise the step of generating a high-level information structure from the multi-source data structure.
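The three method steps can be sketched schematically. The claim leaves the extraction, searching and correlation algorithms open, so in this assumed sketch they appear as injected callables rather than fixed implementations.

```python
def integrated_analysis(av_source, ext_source, extract, search, correlate):
    """Schematic pipeline for the claimed method (all callables are supplied
    by the caller, mirroring the patent's pluggable-algorithm modules)."""
    intrinsic = extract(av_source)             # step 1: extract intrinsic data
    extrinsic = search(ext_source, intrinsic)  # step 2: retrieve extrinsic data
    return correlate(intrinsic, extrinsic)     # step 3: multi-source structure
```

Note that step 2 receives the intrinsic data, matching the claim's "based on the extracted intrinsic data".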
These and other aspects, features and/or advantages of the present invention will be apparent from, and elucidated with reference to, the embodiments described hereinafter.
Brief description of the drawings
Preferred embodiments of the invention will now be described in detail with reference to the accompanying drawings, in which:
Fig. 1 is a high-level diagram of an embodiment of the present invention,
Fig. 2 is a schematic diagram of another embodiment of the invention, being a sub-embodiment of the embodiment described in connection with Fig. 1,
Fig. 3 is an illustration of the alignment of a screenplay with closed captions, and
Fig. 4 is an illustration of speaker identification in a film.
Detailed description of embodiments
Fig. 1 sets out a high-level diagram of a preferred embodiment of the present invention. A specific embodiment according to this high-level diagram is provided in Fig. 2.
Table 1
Number  Title
1.   Text-based scene description
2.   Audio-based actor identification
3.   Audio-based scene description
4.   Face-based actor identification
5.   Supermodel for actor ID
6.   Plot-point detection
7.   Establishing-shot detection
8.   Condensed plot summary
9.   Scene boundary detection and semantic scene description
10.  Intrinsic resources
11.  Extrinsic resources
101. Feature film
102. Screenplay
103. The Internet
104. Subtitles
105. Audio
106. Video
107. Time stamps
108. MFCC
109. Pitch
110. Speaker change detection
111. Audio mood context
112. Speech/music/SFX segmentation
113. Histogram scene boundaries
114. Face detection
115. Videotext detection
116. High-level structure parsing
117. Characters
118. Scene location
119. Scene description
120. Dialogue
121. Text-based time-stamped screenplay
122. Cross-reference of character names with actors
123. Face model
124. Mood model
125. Voice model
The diagram 100 shown in Fig. 1 sets out a model for the integrated analysis of intrinsic and extrinsic audio-visual information according to the present invention. The names of the individual components are given in Table 1. In the figure, the intrinsic audio-visual information is exemplified by a feature film 101, i.e. by a feature on a data carrier such as a DVD disc. Intrinsic information is information that can be extracted from the audio-visual signal, i.e. information extracted from the image data, the audio data and/or the transcript data (in the form of subtitles, closed captions or a videotext transcript). The extrinsic audio-visual information is here exemplified by extrinsic access to a screenplay 102, for example via an Internet connection 103. Further, extrinsic information may also be storyboards, published books, additional scenes of the film, interviews with the director and/or cast, film reviews, etc. Such information may be obtained via the Internet connection 103. These further types of extrinsic information may, like the screenplay 102, undergo high-level structure parsing 116. The emphasis on the screenplay in block 102 is an example; in principle, any type of extrinsic information, and in particular the above-mentioned types, may usefully be inserted into block 102 of the diagram.
In a first step, the intrinsic information is processed using the intrinsic content analyser. The intrinsic content analyser may be a computer program adapted to search and analyze the intrinsic content of a film. The video content may be processed along three paths (104, 105, 106). Along path 1, the dialogue is extracted from the signal; the dialogue is usually represented by the subtitles 104. The extraction comprises speech-to-text conversion, closed-caption extraction from the MPEG user data, and/or videotext extraction from the video signal or a web page. The output is a time-stamped transcript 107. Along path 2, the audio 105 is processed. The audio processing steps comprise acoustic feature extraction, followed by audio segmentation and classification. Mel Frequency Cepstral Coefficients (MFCC) 108 may be used to detect speaker changes 110 and form part of the audio mood context determination. The mel scale is a frequency binning method based on the frequency resolution of the ear. The MFCCs are computed by binning frequencies on the mel scale in order to parameterize speech. MFCCs are good discriminators of speakers. Moreover, MFCCs can be used to compensate for channel distortion, since equalization becomes a subtraction in the cepstral domain, as opposed to a multiplication in the spectral domain. Pitch 109 may also form part of the audio mood context determination, and may further be used for the segmentation 112 into speech, music and sound effects. The speaker change detection 110, the audio mood context 111 and the speech/music/SFX segmentation 112 are coupled, through a voice model and a mood model, to the audio-based actor identification 2 and the audio-based scene description 3. Along path 3, the video image signal 106 is analyzed. The visual processing comprises visual feature extraction, such as color histograms 113, face detection 114, videotext detection 115, highlight detection, keynote detection, etc. The face detection is coupled, through a face model, to the face-based actor identification 4. A color histogram is a histogram representing the colors (in a selected color space) and their frequencies of occurrence in an image.
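The mel-scale binning and the cepstral-domain channel compensation mentioned above can be illustrated in a few lines. The constants 2595 and 700 are the common HTK-style mel conversion, one of several variants in use; the per-coefficient mean subtraction is the standard cepstral-mean-subtraction trick that removes a constant (multiplicative-in-spectrum) channel distortion.

```python
import math

def hz_to_mel(f):
    """HTK-style mel conversion: frequency binning modeled on the ear's resolution."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def cepstral_mean_subtraction(frames):
    """frames: list of MFCC vectors (one per audio frame).

    Subtracting the per-coefficient mean removes a constant channel
    distortion, because the log step of the cepstrum turns a spectral
    multiplication into an additive cepstral offset."""
    n = len(frames)
    means = [sum(col) / n for col in zip(*frames)]
    return [[v - m for v, m in zip(frame, means)] for frame in frames]
```

A full MFCC front end would additionally involve framing, windowing, an FFT, a mel filterbank and a DCT, which are omitted here.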
In a second step, the extrinsic information is processed using the extrinsic content analyser. The extrinsic content analyser may be adapted to search the extrinsic information based on the extracted intrinsic data. The extrinsic information may be as simple as a film title; however, the extracted intrinsic data may also relate to a complex set of data concerning the film. The extrinsic content analyser may comprise models for screenplay parsing, storyboard analysis, book parsing, and the analysis of additional audio-visual material such as interviews, movie trailers, etc. The output is a data structure that encodes high-level information about the scenes, the mood of the cast, etc. For example, high-level structure parsing 116 is performed on the screenplay 102. Based on this information, the characters 117 are determined, for example by consulting an Internet-based database such as the Internet Movie Database via Internet access, and the characters are cross-referenced with the actors. The scene locations 118 and scene descriptions 119 are used for the text-based scene description 1, and the dialogue 120 is correlated with the time-stamped transcript to obtain the text-based time-stamped screenplay. The text-based time-stamped screenplay provides approximate scene boundaries for the text-based scene description 1, based on the time stamps of the dialogue.
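As a hedged illustration of the screenplay-parsing step: in the common screenplay format, a dialogue block is an ALL-CAPS character cue followed by the spoken lines, so characters and their dialogue can be recovered with a simple state machine. The regular expression, function name and sample names are assumptions of the sketch, not the patent's parser.

```python
import re

# An ALL-CAPS line (letters, spaces, periods, apostrophes) is treated as a
# character cue; subsequent non-blank lines are that character's dialogue.
CUE = re.compile(r"^[A-Z][A-Z .']+$")

def extract_dialogue(script_text):
    blocks = []
    character = None
    for line in script_text.splitlines():
        stripped = line.strip()
        if not stripped:
            character = None          # blank line ends the dialogue block
        elif CUE.match(stripped):
            character = stripped      # new speaker
        elif character:
            blocks.append((character, stripped))
    return blocks
```

Scene headings such as `INT. KITCHEN - NIGHT` contain a hyphen and therefore do not match this cue pattern; a fuller parser would handle slug lines, parentheticals and action text separately.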
Once the cross-references between character names and actors 122, the text-based scene description 1, the text-based time-stamped screenplay 121, the audio-based actor identification 2, the audio-based scene description 3 and the face-based actor identification 4 have been established, the multi-source alignment can be performed. The intrinsic and extrinsic data can thus be correlated to obtain the multi-source data structure. Some external documents, such as the screenplay, contain no temporal information; by correlating the extrinsic and intrinsic data, the time-stamp information extracted from the intrinsic audio-visual signal can be aligned with the information provided by the external sources. The output is a very detailed multi-source data structure comprising a superset of the information obtainable from the extrinsic and intrinsic sources.
Using the multi-source data structure, the high-level information structure can be generated. In the present embodiment, the high-level information structure consists of three parts: the supermodel 5 for actor ID, the condensed plot summary 8, and the scene boundary detection and description, which can provide the semantic scene description 9. The supermodel for actor ID comprises audio-visual person recognition in addition to the person recognition from the multi-source data structure. The user can thus be presented with a list of all actors appearing in the film and, by selecting an actor, be presented with additional information about that actor, for example other films in which the actor appears, or other information related to the specific actor or character. The condensed plot summary module may comprise plot points and major and minor story arcs. These are the most interesting points in the film, and this high-level information is very important for film summarization. The user can thus obtain different types of plot summaries, generally of a kind not provided on the DVD, or the user may select the type of summary he is interested in. In the semantic scene detection, the shots belonging to a scene and the scene boundaries are established. The user can be presented with a complete list of scenes and the corresponding scenes in the screenplay, for example in order to compare the screenplay's directions for different scenes, or to locate the scenes containing a specific character.
In the following embodiment, the focus is on the alignment of the screenplay with the film.
Nearly all feature-length films are produced by means of a screenplay. The screenplay provides a unified description of the settings, dialogue and action of a photoplay, and gives the filmmakers, actors and crew a starting point from which to create their artistically innovative rendition. For those engaged in the content-based analysis of films, the screenplay is an important and currently under-utilized resource for obtaining a textual description of a film. It not only helps to bypass the semantic gap (e.g. transforming an audio-visual signal into a series of textual descriptors), but also ensures that the descriptions come directly from the filmmakers. Screenplays are available for thousands of films; they follow a semi-regular format standard and are therefore a reliable data source.
The difficulties encountered in using the screenplay as a shortcut to content-based analysis are twofold. First, there is no intrinsic correlation between the text in the screenplay and time periods in the film. To counter this limitation, the lines of dialogue in the screenplay are aligned with the time-stamped closed-caption stream extracted from the film DVD. A further obstacle is that, in many cases, the screenplay was finished before the film was shot, so lines of dialogue, or entire scenes, may have been added, deleted, modified or moved. In addition, the closed-caption text often only roughly matches the dialogue spoken by the characters on screen. To overcome these effects, it is essential to use an alignment method that is robust to scene/dialogue modifications. Our experience shows that only about 60% of the dialogue lines in a film can be time-stamped. However, the time-stamped dialogue found by the alignment procedure can be used as labels for statistical models, which can then estimate the descriptors that were not found. This amounts to a self-contained, unsupervised process for labelling the semantic objects and audio-visual material of a film for automatic video content analysis.
It should be pointed out here that an alternative to the screenplay is the continuity script. A continuity script is written after all work on the film is complete. The term is commonly used in two contexts: first, the shot-by-shot script of the film, which includes camera placement and movement in addition to the information from the screenplay; second, it can also refer to an exact transcript of the film's dialogue. Both forms may be used by closed-captioning organizations. Although continuity scripts for some films are printed and sold, they are generally not available online to the public. This motivates the analysis of the screenplay instead, despite its shortcomings.
One reason screenplays have not been more widely used in content-based analysis is that the dialogue, action, and scene descriptions that appear in a screenplay carry no associated time stamps. This limits the usefulness of assigning a passage of text to a particular segment of the film. Another source of film transcripts, closed captioning, contains the text of the film's dialogue, but it does not identify the character who speaks each line, nor does it contain the scene descriptions that are so difficult to extract from the video signal. We obtain the best of both by aligning the screenplay dialogue with the film's closed-caption text.
Second, lines and scenes are often incomplete, cut, or moved. To be robust against scene reordering, the screenplay and the closed captions can be aligned one scene at a time. This also relieves the memory-intensive work of building a complete self-similarity matrix.
Finally, since a counterpart cannot be found in the screenplay for every line of dialogue, the information extracted from the time-stamped screenplay must be combined with a multimodal segmentation of the film (audio, closed captions, information from external websites such as imdb.com) to create statistical models of events. These events may occur across films or within a film, and promise the ability to provide textual descriptions of scenes that are not found by the alignment alone.
An important aspect of screenplay alignment is speaker identification. Knowing which character is speaking at any particular time enables applications such as linking to external data about the corresponding actor and voice-based querying within the film. Unsupervised speaker identification on film dialogue is a very hard problem, because the voice characteristics are affected by the speaker's emotional changes and by the differing acoustic conditions of different real or simulated locations (for example, "room tone"), as well as by wide-ranging vocal activity, environmental noise, and background sound.
Our solution feeds the time-stamped, labeled examples produced by the self-alignment to a "black box" classifier, which learns the characteristics of each voice under different conditions and moods. In effect, with a large amount of training data coming from the alignment, we can "let the data do the talking"; our method is purely unsupervised, since once the screenplay and the film audio have been captured in machine-readable form, no manual preprocessing whatsoever is required.
After principal photography of a film is finished, the editor may assemble the various shots in a way that may or may not follow the screenplay. Scenes may be cut for reasons of pacing, continuity, or studio policy, and additional shots may be picked up where needed. As an extreme example, the ending of the film Double Indemnity, in which the protagonist is in the gas chamber, was left on the cutting-room floor. Swingers was originally a love story, but the editor sped up the pace of the dialogue and turned the film into a successful comedy.
The actual content of a screenplay usually follows formatting rules. For example, the first line of any scene or camera location is called a slug line. The slug line states whether the scene takes place indoors or outdoors, names the location, and can optionally specify the time of day. Slug lines are the best indicators of scene boundaries, since scenes can take place in many different locations. The slug line is followed by a description of the location. This description introduces any characters who appear and recounts any action that occurs without dialogue.
The bulk of a screenplay consists of dialogue and description. Dialogue is indented on the page for readability and to give the actors and filmmakers room to take notes. If the screenwriter's direction to an actor is not obvious from the dialogue itself, it can be noted in the description. The standard script format can be parsed using grammar rules:
SCENE_START:.*|SCENE_START|DIAL_START|SLUG|TRANSITION
DIAL_START:\t+<CHAR?NAME>(V.O.|O.S.)?\n
\t+DIALOGUE|PAREN
DIALOGUE: \t+.*?\n\n
PAREN: \t+(.*?)
TRANSITION: \t+<TRANS?NAME>:
SLUG: <SCENE#>?.<INT/EXT><ERNAL|.>?-<LOC><-TIME>?
In this grammar, "\n" denotes the newline character and "\t" the tab character. ".*?" is a term from Perl regular expressions meaning "any number of arbitrary characters, up to the next pattern match". A question mark after a character indicates that the character may or may not appear. "|" allows a choice; for example, <O.S.|V.O.> means that an occurrence of either V.O. or O.S. will count as a match. Finally, "+" means that one or more of the preceding character will be accepted as a match; for example, lines beginning with "\tHello", "\t\tHello", or "\t\t\tHello" could be dialogue, but a line beginning with "Hello" could not.
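Although the patent states the grammar only in the notation above, rules of this kind translate naturally into regular expressions. The following is a minimal illustrative sketch in Python; the pattern details and function name are our assumptions, not the patent's implementation.

```python
import re

# Illustrative patterns loosely following the grammar above (not the patent's code).
# SLUG: optional scene number, INT/EXT marker, location, optional time of day.
SLUG_RE = re.compile(
    r'^(?:(?P<scene_num>\d+)\.?\s*)?'
    r'(?P<int_ext>INT|EXT)(?:ERNAL|\.)?\s*'
    r'(?P<location>[^-\n]+?)'
    r'(?:\s*-\s*(?P<time>DAY|NIGHT|MORNING|EVENING))?\s*$')

# DIAL_START: an indented character name, optionally followed by (V.O.) or (O.S.).
DIAL_START_RE = re.compile(r'^\t+(?P<char>[A-Z][A-Z .]+?)\s*(?:\((?:V\.O\.|O\.S\.)\))?$')

# TRANSITION: an indented transition name ending in "TO:", e.g. "CUT TO:".
TRANSITION_RE = re.compile(r'^\t+[A-Z ]+TO:$')

def classify_line(line: str) -> str:
    """Classify one screenplay line as SLUG, DIAL_START, TRANSITION, or OTHER."""
    if TRANSITION_RE.match(line):
        return 'TRANSITION'
    if DIAL_START_RE.match(line):
        return 'DIAL_START'
    if not line.startswith('\t') and SLUG_RE.match(line.strip()):
        return 'SLUG'
    return 'OTHER'
```

A parser built on such patterns would walk the script line by line, opening a new scene at each SLUG and attaching subsequent dialogue to the most recent DIAL_START.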
The formatting guidelines for screenplays are suggestions, not a standard. Nevertheless, a simple but flexible grammar of the conventions captures the great majority of screenplays.
Hundreds of copies of a screenplay are produced for a film production of any scale, and screenplays are reused by amateurs and professionals learning the craft, so thousands of screenplays can be obtained online.
Figure 2 shows an overview of the system, comprising preprocessing, alignment, and speaker identification within a single film.
The text of the screenplay 20 is parsed, so that scene and dialogue boundaries as well as metadata are entered into a unified data structure. Closed captions 21 and acoustic features 22 are extracted from the film's video signal 23. In the crucial stage, the screenplay and closed-caption text are aligned 24. This alignment is described in detail below. In the alignment, dialogue is time-stamped and associated with specific characters. However, a counterpart cannot be found in the screenplay for every sentence of dialogue. The information extracted from the time-stamped screenplay must therefore be combined with multimodal segments of the film (audio, closed captions, information from external websites) to create statistical models of events 25.
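The unified data structure is not specified further in the patent; as a hedged sketch, the parsed screenplay might be represented with simple records like these (the field names are illustrative assumptions):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DialogueLine:
    character: str        # speaker as named in the screenplay
    text: str             # the line as written
    start_ms: int = -1    # filled in later by the alignment; -1 = not time-stamped
    end_ms: int = -1

@dataclass
class Scene:
    number: int
    interior: bool        # from the slug line: INT. vs EXT.
    location: str
    time_of_day: str      # e.g. "DAY" or "NIGHT"; may be empty
    description: str = ""
    dialogue: List[DialogueLine] = field(default_factory=list)
```

The alignment stage would then fill in the `start_ms`/`end_ms` fields of each `DialogueLine` it manages to match against the closed-caption stream.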
In this way, very high speaker identification accuracy can be obtained in the naturally noisy environment of a film. It is important to note that this identification can be carried out with supervised learning methods, yet the ground truth is generated automatically, so no human intervention is needed in the classification process.
As a result, the character speaking at any time during the film can be determined 26. This character ID can be correlated with an Internet database 27 to obtain an identification 28 of the actor playing the character in the film.
Besides speaker identification, it is also possible to extract the location, time, and description of each scene, the individual lines of dialogue together with their speakers, parenthetical remarks to the actors and action directions, and any suggested transitions between scenes (cut, fade, wipe, dissolve, etc.).
For the alignment and speaker identification tasks, the audio and closed-caption streams of the film DVD are needed.
The user data fields of a DVD may contain a subtitle stream in text form; this is not part of the official DVD standard, and is therefore not guaranteed to be present on all discs. For films without available subtitle information, the alternative is to obtain the closed captions by performing OCR (optical character recognition) on the DVD's subtitle image stream. This is a semi-interactive process that requires user intervention only when a new font is encountered (typically once per studio), but is otherwise fully automatic. The only problem we encountered was that the lowercase "l" and the uppercase "I" were sometimes confused, and we found that all l's had to be folded into I's to avoid mismatches during word comparison. The OCR can be performed with the SubRip program, which provides a time stamp with millisecond resolution for every line of closed captions.
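SubRip emits subtitles in the .srt format, whose time stamps carry the millisecond resolution mentioned above. A small sketch of the two post-processing details just described, converting an .srt time stamp to milliseconds and folding "l" into "I" before word comparison (the helper names are ours, not the patent's):

```python
import re

SRT_TIME_RE = re.compile(r'(\d{2}):(\d{2}):(\d{2}),(\d{3})')

def srt_time_to_ms(stamp: str) -> int:
    """Convert an .srt time stamp such as '00:12:03,450' to milliseconds."""
    h, m, s, ms = map(int, SRT_TIME_RE.match(stamp).groups())
    return ((h * 60 + m) * 60 + s) * 1000 + ms

def normalize_caption_word(word: str) -> str:
    """Fold lowercase 'l' into uppercase 'I', since OCR confuses the two glyphs."""
    return word.replace('l', 'I')
```

With this normalization, "Illinois" and an OCR misreading such as "IIIinois" compare equal, which is exactly the behavior the word-level alignment needs.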
The screenplay dialogue and the closed-caption text are aligned using dynamic programming, which finds a "best path" through their similarity matrix. Applying a median filter along the best path extracts the correct alignment for a scene. The result is a reasonably accurate segmentation of the dialogue into chunks the size of closed-caption lines, which means the dialogue chunks translate directly into a time-stamped segmentation. Each component is discussed below.
A similarity matrix is a way of comparing two different versions of similar media; it is an extension of the self-similarity matrix, which is now a staple tool of content-based audio analysis.
In the similarity matrix, each word i of a screenplay scene is compared with each word j in the closed captions of the entire film. This yields the matrix:
SM(i,j)←screenplay(scene_num,i)=subtitle(j)
In other words, SM(i, j) = 1 if word i of the scene is identical to word j of the closed captions, and SM(i, j) = 0 if they differ. Screen time progresses linearly along the diagonal i = j, so when the lines of the screenplay and the closed-caption text line up, we expect to see a solid diagonal of 1's. Figure 3 shows an example portion of a similarity matrix 30, comparing the closed captions 31 with the screenplay 32 of scene 87 of the film "Wall Street". In the similarity matrix, the words appearing in the screenplay and the closed captions are characterized by whether a match is found. Each matrix element can thus be marked as a non-match 32 if no match is found, or as a match 33 if one is. Naturally, many coincidental matches occur, but a discontinuous track can still be found, and the best path is built along this track. Words lying on the best track without a match are marked 34 accordingly.
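As a hedged sketch of the mechanics just described, the binary word-match matrix and a dynamic-programming best path through it can be computed as follows; the patent specifies only the matrix itself, so the recurrence and tie-breaking details here are illustrative assumptions:

```python
def similarity_matrix(scene_words, caption_words):
    """SM[i][j] = 1 if word i of the scene equals word j of the captions, else 0."""
    return [[int(a == b) for b in caption_words] for a in scene_words]

def best_path(sm):
    """Trace a monotone path through sm that maximizes the number of matches.

    Uses a longest-common-subsequence-style recurrence:
    cost[i][j] = sm[i][j] + max(cost[i-1][j-1], cost[i-1][j], cost[i][j-1]).
    """
    n, m = len(sm), len(sm[0])
    cost = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            prev = 0
            if i and j:
                prev = cost[i - 1][j - 1]
            if i:
                prev = max(prev, cost[i - 1][j])
            if j:
                prev = max(prev, cost[i][j - 1])
            cost[i][j] = sm[i][j] + prev
    # Backtrack from the bottom-right corner, preferring the diagonal on ties
    # so that matched words line up one-to-one.
    path, i, j = [], n - 1, m - 1
    while True:
        path.append((i, j))
        if i == 0 and j == 0:
            break
        candidates = []
        if i and j:
            candidates.append((cost[i - 1][j - 1], 2, i - 1, j - 1))
        if i:
            candidates.append((cost[i - 1][j], 1, i - 1, j))
        if j:
            candidates.append((cost[i][j - 1], 0, i, j - 1))
        _, _, i, j = max(candidates)
    return path[::-1]
```

A median filter would then be run over the path, as the text describes, to suppress the coincidental off-diagonal matches before reading off the time stamps.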
Speaker identification in films is difficult because voices vary over the course of a film and the acoustic conditions keep changing. Consequently, a great deal of data may be needed to classify under the different conditions. Figure 4 illustrates the particular problem. Two scenes 40, 41 are shown schematically. In the first scene 40, three people appear, all facing the audience, and we can expect that one of them is speaking at any moment. Using only intrinsic data, it is therefore possible to extract the speaker's identity with high certainty, for example using voice fingerprints and face masks. In the second scene 41, five people appear, only one of them facing the audience; much discussion may occur, people may speak at the same time, and dramatic background music may be used to heighten the tension. Using intrinsic information alone, speaker identification may not be possible. However, using the screenplay, in which the dialogue and speakers are indicated, speaker identification can detect all the speakers in the scene.
For classification, and to facilitate speaker identification, the following procedure based on acoustic features can be used:
1) Select the training/test/validation sets
2) Remove silence
3) Using an audio classifier (based on Martin McKinney's), potentially remove music/noise portions
4) Downsample to 8 kHz, since the peak frequency of speech is approximately 3.4 kHz
5) Compute CMS and delta features on 50 ms windows with a hop size of 12.5 ms
6) Stack feature vectors together to create longer analysis blocks
7) Perform PCA to reduce the dimensionality of the feature set
8) Train a neural network or GMM
9) Run the trained network/GMM over the entire film
10) Compare against the ground truth to evaluate how well we did
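Steps 4 and 5 fix the frame geometry: at an 8 kHz sampling rate, a 50 ms window is 400 samples and a 12.5 ms hop is 100 samples. A minimal sketch of that framing arithmetic and of the cepstral mean subtraction of step 5 (the actual feature extraction and classifier training are omitted; everything beyond the stated constants is an assumption):

```python
SAMPLE_RATE = 8000         # Hz, after downsampling (step 4)
WIN_MS, HOP_MS = 50, 12.5  # window and hop sizes (step 5)

def frame_bounds(n_samples):
    """Return (start, end) sample indices for every complete analysis window."""
    win = int(SAMPLE_RATE * WIN_MS / 1000)   # 400 samples
    hop = int(SAMPLE_RATE * HOP_MS / 1000)   # 100 samples
    frames, start = [], 0
    while start + win <= n_samples:
        frames.append((start, start + win))
        start += hop
    return frames

def cepstral_mean_subtraction(features):
    """CMS: subtract the per-dimension mean over all frames.

    This normalizes away slowly varying channel effects such as room tone,
    which is why it helps under the changing acoustic conditions of a film.
    """
    dims = len(features[0])
    means = [sum(f[d] for f in features) / len(features) for d in range(dims)]
    return [[f[d] - means[d] for d in range(dims)] for f in features]
```

One second of 8 kHz audio yields 77 full windows at this geometry, each overlapping its neighbor by 75%.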
It will be obvious to those skilled in the art that the present invention may also be embodied as a computer program, stored on a medium, that programs a computer to carry out the method according to the invention. The computer may be embodied as a general-purpose computer, such as a personal or network computer, but also as a dedicated consumer-electronics product with a programmable processing core.
As noted above, it is also understood that a reference in the singular is intended to include the plural, and vice versa. In addition, expressions such as "comprise", "include", "contain", "have", "incorporate", "hold", and "encompass" are to be interpreted non-exclusively, that is, as not excluding the presence of other items.
Although the invention has been described in conjunction with the preferred embodiments, it is not intended to be limited to the specific form set forth herein. On the contrary, the scope of the invention is limited only by the appended claims.

Claims (11)

1. A system for integrative analysis of film and screenplay data, the system comprising:
a film analyzer communicatively connected to a film, the film analyzer being adapted to search the film for dialogue text and to extract the dialogue text using an extraction algorithm;
a screenplay analyzer communicatively connected to a screenplay, the screenplay analyzer being adapted to search the screenplay and to retrieve dialogue lines in the screenplay using a search algorithm, wherein the screenplay analyzer is configured to retrieve the dialogue lines according to knowledge of screenplay grammar;
means for time-correlating by time-aligning the dialogue lines of the screenplay with the dialogue text and providing a time-stamped transcript of the film, whereby the dialogue text reflected in the film is correlated with the dialogue lines in the screenplay; and
means for creating statistical models of events by combining information extracted from the time-stamped transcript with multimodal segments of the film.
2. The system according to claim 1, wherein the dialogue text is a time-stamped closed-caption stream.
3. The system according to claim 2, wherein the system is configured to obtain the closed captions by performing optical character recognition on a subtitle stream of the film.
4. The system according to claim 1, wherein the film analyzer is adapted to extract the dialogue text according to a query, the query being provided by a user.
5. The system according to claim 1, wherein the screenplay analyzer is adapted to retrieve the screenplay dialogue lines according to a query, the query being provided by a user.
6. The system according to claim 1, wherein the screenplay is connected to the Internet and can be accessed via the Internet.
7. The system according to claim 1, wherein the film analyzer is configured to analyze features in the film based on information contained in the screenplay.
8. The system according to claim 1, wherein the system is configured to identify, from the time-stamped transcript, the character speaking in the film.
9. The system according to claim 8, wherein the system is configured to correlate a character identifier with an Internet database to obtain an identification of the actor playing the character in the film.
10. The system according to claim 1, wherein the system is configured to compare the dialogue lines in the screenplay with the dialogue text in the film using a self-similarity matrix.
11. A method for integrative analysis of film and screenplay data, the method comprising the steps of:
searching a film for dialogue text, and extracting the dialogue text using an extraction algorithm;
searching a screenplay, and retrieving dialogue lines in the screenplay using a search algorithm, wherein the dialogue lines are retrieved according to knowledge of screenplay grammar;
time-aligning the dialogue lines in the screenplay with the dialogue text, and providing a time-stamped transcript of the film, whereby the dialogue text reflected in the film is correlated with the dialogue lines in the screenplay; and
creating statistical models of events by combining information extracted from the time-stamped transcript with multimodal segments of the film.
CNB2004800357507A 2003-12-05 2004-11-30 System and method for integrative analysis of intrinsic and extrinsic audio-visual data Expired - Fee Related CN100538696C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US52747603P 2003-12-05 2003-12-05
US60/527,476 2003-12-05
EP04100622.2 2004-02-17

Publications (2)

Publication Number Publication Date
CN1906610A CN1906610A (en) 2007-01-31
CN100538696C true CN100538696C (en) 2009-09-09

Family

ID=37675003

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004800357507A Expired - Fee Related CN100538696C (en) 2003-12-05 2004-11-30 System and method for integrative analysis of intrinsic and extrinsic audio-visual data

Country Status (1)

Country Link
CN (1) CN100538696C (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8510453B2 (en) 2007-03-21 2013-08-13 Samsung Electronics Co., Ltd. Framework for correlating content on a local network with information on an external network
US8843467B2 (en) 2007-05-15 2014-09-23 Samsung Electronics Co., Ltd. Method and system for providing relevant information to a user of a device in a local network
US8935269B2 (en) 2006-12-04 2015-01-13 Samsung Electronics Co., Ltd. Method and apparatus for contextual search and query refinement on consumer electronics devices
US8938465B2 (en) 2008-09-10 2015-01-20 Samsung Electronics Co., Ltd. Method and system for utilizing packaged content sources to identify and provide information based on contextual information
DE102009060687A1 (en) 2009-11-04 2011-05-05 Siemens Aktiengesellschaft Method and device for computer-aided annotation of multimedia data
CN105744291B (en) * 2014-12-09 2018-11-27 北京奇虎科技有限公司 Video data handling procedure and system, video playback apparatus and cloud server
CN108882024B (en) * 2018-08-01 2021-08-20 北京奇艺世纪科技有限公司 Video playing method and device and electronic equipment
CN111931482B (en) * 2020-09-22 2021-09-24 思必驰科技股份有限公司 Text segmentation method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Named faces:putting names to faces. Houghton R.IEEE intelligent systems,IEEE computer society,Vol.14 No.5. 1999 *

Also Published As

Publication number Publication date
CN1906610A (en) 2007-01-31

Similar Documents

Publication Publication Date Title
EP1692629B1 (en) System & method for integrative analysis of intrinsic and extrinsic audio-visual data
US9066049B2 (en) Method and apparatus for processing scripts
Hauptmann et al. Informedia: News-on-demand multimedia information acquisition and retrieval
US6434520B1 (en) System and method for indexing and querying audio archives
EP0786114B1 (en) Method and apparatus for creating a searchable digital video library
US7292979B2 (en) Time ordered indexing of audio data
KR100828166B1 (en) Method of extracting metadata from result of speech recognition and character recognition in video, method of searching video using metadta and record medium thereof
US20110239107A1 (en) Transcript editor
Kamabathula et al. Automated tagging to enable fine-grained browsing of lecture videos
Fleischman et al. Grounded language modeling for automatic speech recognition of sports video
CN100538696C (en) System and method for integrative analysis of intrinsic and extrinsic audio-visual data
Wilcox et al. Annotation and segmentation for multimedia indexing and retrieval
Gagnon et al. A computer-vision-assisted system for videodescription scripting
JP2004302175A (en) System, method, and program for speech recognition
KR101783872B1 (en) Video Search System and Method thereof
Nouza et al. System for producing subtitles to internet audio-visual documents
Lindsay et al. Representation and linking mechanisms for audio in MPEG-7
Nouza et al. Large-scale processing, indexing and search system for Czech audio-visual cultural heritage archives
Hauptmann et al. Informedia news-on-demand: Using speech recognition to create a digital video library
Barbosa et al. Browsing videos by automatically detected audio events
KR102413514B1 (en) Voice data set building method based on subject domain
KR20070003778A (en) System & method for integrative analysis of intrinsic and extrinsic audio-visual data
Vander Wilt et al. Deploying Prerecorded Audio Description for Musical Theater Using Live Performance Tracking
Zdansky et al. Joint audio-visual processing, representation and indexing of TV news programmes
Yoshida et al. A keyword accessible lecture video player and its evaluation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090909

Termination date: 20121130