CN1842151A - Information processing apparatus and method - Google Patents


Info

Publication number
CN1842151A
CN1842151A (application CN200610066969A)
Authority
CN
China
Prior art keywords
data
keyword
information
unit
matching result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 200610066969
Other languages
Chinese (zh)
Inventor
桃崎浩平
上原龙也
井本和范
正井康之
阿部一彦
永尾学
筱岛宗彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Corp filed Critical Toshiba Corp
Publication of CN1842151A
Legal status: Pending

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An information processing apparatus is provided in which, for video/audio data to be recorded and stored, division points and control points suitable for viewing and listening can be determined, and relevant information can be given, without requiring a manual operation each time. The information processing apparatus includes a recording medium 90, a video data acquisition unit 48, a video data specification unit 47, an audio data separation unit 25, a key creation unit 31, a key-related-data acquisition unit 55, and a key data management unit 10. When a key is created by specifying a section in first audio data, a name and attribute information based on a nearby division point and control point are stored. When an audio section similar to the audio pattern of the key is detected in second audio data, a division point and a control point are determined from the start and end of the detected section in accordance with the stored attribute information, and the stored name, or a name given according to a stored naming method, is set for the divided section, the control point, or the whole audio data.

Description

Information processing apparatus and method
Technical field
The present invention relates to an information processing apparatus and an information processing method for performing processing relating to video/audio recording or audio recording.
Background Art
In recent years, mainstream equipment for recording audio and video has shifted from traditional analog tape to digital disks, semiconductor memory, and the like. In video recording and reproducing apparatuses that use large-capacity hard disks in particular, the recordable capacity has increased dramatically. With such equipment, the video of many programs provided by broadcasting or communication is stored, and the user can freely select and watch them.
Here, when managing the stored video, a file is formed with the title (program) as the unit, a title name and other information are given, and when the titles are listed, typical images (thumbnails), title names, and so on can be arranged and displayed. In addition, one program (title) is divided into units called chapters (segments), and reproduction and editing can also be performed chapter by chapter. When chapter names are given and typical images (thumbnails) of the chapters are displayed, a chapter containing a desired scene can be selected from the chapter list and reproduced, or selected chapters can be arranged to create a playlist. As a typical management method for such content, there is the VR (Video Recording) mode of DVD (Digital Versatile Disc).
Incidentally, a "mark" is used to label a specified segment (section) or position in a program (title), together with the corresponding reproduction time information used when reproducing the video and audio content. Depending on the apparatus, in addition to chapter marks representing chapter division points, there may also be edit marks for specifying a target segment during editing, and index marks for specifying a jump destination during cue operations. The term "mark" in this specification is used with these meanings.
As for the program name, when program information provided by an EPG (Electronic Program Guide) or the like is available, the program name can be automatically recorded and stored with the file. For program information provided by an EPG, there is the ARIB (Association of Radio Industries and Businesses of Japan) standard STD-B10.
However, for the internal content of a program, although various schemes can be conceived (such as information giving time division positions and names that make each divided portion easy to identify) as metadata useful for supporting viewing, editing, and automatic control, it is very difficult to supply such general-purpose metadata from outside. Therefore, in an ordinary viewer's apparatus, it is necessary for the apparatus itself to create metadata based on the recorded audio and video.
As a general-purpose description format for metadata relating to video and audio content, there is MPEG-7, and there is a method in which metadata is associated with content and stored in an XML (Extensible Markup Language) database. In addition, for the transmission of metadata in broadcasting, there is the ARIB standard STD-B38, and metadata can also be recorded according to these formats.
As something an apparatus can perform automatically, there are chapter division functions realized by detecting silent parts, video switching (cuts), or switching of the audio multiplexing mode (monaural, stereo, or bilingual dual-track broadcasting) (see, for example, Patent Document 1 (JP-A-2003-36653)). However, such division is not necessarily performed appropriately, and the user must still perform a considerable amount of manual operation, including judging the importance of each divided chapter and giving it a name.
In addition, for metadata creation using automatic keyword extraction from linguistic information obtained by telop (superimposed caption) image recognition and speech recognition, use in full-text search has become possible (see, for example, Patent Document 2 (JP-A-8-249343)). However, for division into parts such as chapters and the giving of names, full application is difficult under existing conditions.
On the other hand, although methods of acoustic retrieval and robust audio matching have been conceived for retrieving identical or similar sounds, most of these methods are used in such a manner that music or the like that one wishes to watch or listen to is retrieved and reproduced, and their structure is not suited to metadata creation for video (see, for example, Patent Document 3 (JP-A-2000-312343)).
As described above, in the related art, when managing a large amount of stored video, and particularly when dividing a program, there is the problem that division suitable for viewing and listening, determination of control points, and the giving of relevant information cannot be performed easily.
The present invention has been made in view of the above circumstances. An object of the invention is to provide an information processing apparatus in which, for video to be recorded and stored, division suitable for viewing and listening, determination of control points, and the giving of relevant information can be performed without requiring a manual operation each time.
Summary of the invention
According to one aspect of the present invention, an information processing apparatus creates support data to assist the user so that, when the user reproduces, edits, or retrieves use-object data comprising video/audio data or audio data only, the reproduction, editing, or retrieval can be performed as the user desires. The information processing apparatus comprises: a keyword audio data acquisition unit configured to acquire keyword audio data used to create the support data; a keyword specification information input unit configured to input keyword specification information specifying the whole or a partial segment of the keyword audio data; a keyword creation unit configured to create audio pattern data serving as a search key by cutting out the whole or the partial segment of the keyword audio data based on the keyword specification information; a keyword-related-data acquisition unit configured to acquire keyword-related data relating to the keyword audio data based on the keyword specification information; and a support data creation unit configured to create the support data by matching key data, which comprises the audio pattern data and the keyword-related data, against the use-object data.
According to another aspect of the invention, an information processing apparatus creates support data to assist the user so that, when the user reproduces, edits, or retrieves use-object data comprising video/audio data or audio data only, the reproduction, editing, or retrieval can be performed as the user desires. The information processing apparatus comprises: a first support data input unit configured to input first support data relating to first use-object data; a keyword audio data acquisition unit configured to acquire keyword audio data relating to the first support data; a keyword specification information creation unit configured to create, based on the input first support data, keyword specification information selecting a partial segment of the keyword audio data; a keyword creation unit configured to create audio pattern data serving as a search key by cutting out the partial segment of the keyword audio data based on the keyword specification information; a keyword-related-data acquisition unit configured to acquire keyword-related data relating to the keyword audio data based on the keyword specification information; and a second support data creation unit configured to create second support data by matching key data, which comprises the audio pattern data and the keyword-related data, against the use-object data.
According to still another aspect of the invention, an information processing apparatus creates support data to assist the user so that, when the user reproduces, edits, or retrieves use-object data comprising video/audio data or audio data only, the reproduction, editing, or retrieval can be performed as the user desires. The information processing apparatus comprises: a keyword audio data acquisition unit configured to acquire keyword audio data used to create first support data; a first support data creation unit configured to detect change points of the keyword audio data and create the first support data; a keyword specification information creation unit configured to create, based on the created first support data, keyword specification information selecting a partial segment of the keyword audio data; a keyword creation unit configured to create audio pattern data serving as a search key by cutting out the partial segment of the keyword audio data based on the keyword specification information; a keyword-related-data acquisition unit configured to acquire keyword-related data relating to the keyword audio data based on the keyword specification information; and a second support data creation unit configured to create second support data by matching key data, which comprises the audio pattern data and the keyword-related data, against the use-object data.
According to the embodiments of the invention, when a segment specified in the keyword audio data is cut out and the cut audio data, or the audio pattern data obtained from it by feature extraction, is used as a search key, information based on the division points and control points existing near the specified segment, and attributes such as the names given to them, are also saved.
Furthermore, according to the embodiments, when a division point or control point is set in the use-object audio data, a nearby segment is cut out as keyword audio data, the cut audio data or feature-extracted audio pattern data is made into a search key, and attributes based on the division point or control point, the names given to them, and other information are saved together with the search key.
Then, segments similar to the search key are detected in the use-object audio data. In accordance with the attributes saved with the search key, division points and control points are determined based on one or both of the start and end of the detected (audio) segment in the use-object audio data, and the previously specified name, or a name given according to the previously specified naming method, is set for the (audio) segment before or after the division, the control point, or the whole use-object audio data.
Therefore, according to the embodiments of the invention, the specific audio that appears in each instance (such as the theme music of a program corner) can be made into a keyword, and it becomes possible to reproduce the corner from its head, skip the theme music and reproduce the corner from its main part, give a chapter name to a time point, divide chapters, and set the program name of the program containing the corner.
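The overall flow described in this summary — cut out a keyword segment, save it with its name and attribute information, then search for similar segments in target audio — can be sketched in Python. The names `SearchKey`, `create_key`, and `find_similar` are illustrative only, and an exact sublist search stands in for the acoustic matching the patent assumes:

```python
from dataclasses import dataclass

@dataclass
class SearchKey:
    name: str        # e.g. the keyword name taken from a nearby mark
    title: str       # program title associated with the key
    attribute: str   # e.g. "BGM-1", controlling the operation at detection
    pattern: list    # audio pattern data cut from the keyword audio

def create_key(audio, start, end, name, title, attribute):
    """Cut the specified segment of the keyword audio data and store it
    together with name/attribute information (cf. keyword creation unit 31
    and keyword-related-data acquisition unit 55)."""
    return SearchKey(name, title, attribute, audio[start:end])

def find_similar(key, target):
    """Return (start, end) of the first segment of `target` matching the
    key's audio pattern, or None; exact comparison is a placeholder for
    acoustic similarity matching."""
    n = len(key.pattern)
    for i in range(len(target) - n + 1):
        if target[i:i + n] == key.pattern:
            return (i, i + n)
    return None
```

The returned (start, end) pair is what the attribute-driven operations below act on: depending on the saved attribute, a division point or mark is placed at or near these positions.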
Description of drawings
Fig. 1 is a block diagram showing the structure of a first embodiment of a video/audio processing apparatus of the invention;
Fig. 2 is a table showing an example of the information and search keys managed in the key data management unit 10 of the first embodiment;
Fig. 3 is a table showing examples of the operations performed and controlled in correspondence with the attributes in the first embodiment;
Fig. 4 is a schematic diagram showing an example of setting [BGM attribute 1] or corresponding attribute information in the keyword-related-data acquisition unit 55 of the first embodiment;
Fig. 5 is a schematic diagram showing an example of setting [opening music attribute 1] or corresponding attribute information in the keyword-related-data acquisition unit 55 of the first embodiment;
Fig. 6 is a schematic diagram showing an example of setting [corner music attribute 1] or corresponding attribute information in the keyword-related-data acquisition unit 55 of the first embodiment;
Fig. 7 is a schematic diagram showing an example of setting [game start event attribute 1] or corresponding attribute information in the keyword-related-data acquisition unit 55 of the first embodiment;
Fig. 8 is a flowchart showing the processing flow in the keyword-related-data acquisition unit 55 of the first embodiment;
Fig. 9 is a flowchart showing the detailed processing flow of the key attribute setting processing in step S131 of Fig. 8;
Fig. 10 is a block diagram showing the structure of a second embodiment of an audio processing apparatus of the invention;
Fig. 11 is a table showing an example of the information and search keys managed in the key data management unit 10 of the second embodiment;
Fig. 12 is a table showing examples of the operations performed and controlled in correspondence with the attributes in the second embodiment;
Fig. 13 is a block diagram showing the structure of a third embodiment of a video/audio processing apparatus of the invention;
Fig. 14 is a schematic diagram showing an example of the information recorded according to the control operation of [BGM attribute 1] in the matching result recording instruction unit 35 of the third embodiment;
Fig. 15 is a schematic diagram showing an example of the information recorded according to the control operation of [opening music attribute 1] in the matching result recording instruction unit 35 of the third embodiment;
Fig. 16 is a schematic diagram showing an example of the information recorded according to the control operation of [corner music attribute 1] in the matching result recording instruction unit 35 of the third embodiment;
Fig. 17 is a schematic diagram showing an example of the information recorded according to the control operation of [game start event attribute 1] in the matching result recording instruction unit 35 of the third embodiment;
Fig. 18 is a block diagram showing the structure of a fourth embodiment of an audio processing apparatus of the invention;
Fig. 19 is a block diagram showing the structure of a fifth embodiment of an audio processing apparatus of the invention;
Fig. 20 is a block diagram showing the structure of a sixth embodiment of an audio processing apparatus of the invention;
Fig. 21 is a diagram showing an example of the metadata recorded on the recording medium by the matching result recording instruction unit when search key A is detected in the keyword matching unit;
Fig. 22 is a diagram showing an example of the metadata recorded on the recording medium by the matching result recording instruction unit when search key B is detected in the keyword matching unit;
Fig. 23 is a block diagram showing the structure of a seventh embodiment of a video/audio processing apparatus of the invention;
Fig. 24 is a block diagram showing another structure of the seventh embodiment;
Fig. 25 is a block diagram showing the structure of an eighth embodiment of a video/audio processing apparatus of the invention;
Fig. 26 is a block diagram showing a second structure of the video/audio processing apparatus of the eighth embodiment;
Fig. 27 is a block diagram showing a third structure of the video/audio processing apparatus of the eighth embodiment;
Fig. 28 is a block diagram showing the structure of a ninth embodiment of a video/audio processing apparatus of the invention;
Fig. 29 is a block diagram showing another structure of the ninth embodiment;
Fig. 30 is a block diagram showing the structure of a tenth embodiment of an audio processing apparatus of the invention;
Fig. 31 is a block diagram showing another structure of the tenth embodiment;
Fig. 32 is a block diagram showing the structure of an eleventh embodiment of an audio processing apparatus of the invention;
Fig. 33 is a block diagram showing the structure of a twelfth embodiment of an audio processing apparatus of the invention;
Fig. 34 is a block diagram showing the structure of a thirteenth embodiment of a video/audio processing apparatus of the invention;
Fig. 35 is a block diagram showing a first structure of the structural elements relating to keyword retrieval in the thirteenth embodiment;
Fig. 36 is a block diagram showing another example of the first structure of the structural elements relating to keyword retrieval in the thirteenth embodiment;
Fig. 37 is a block diagram showing a second structure of the structural elements relating to keyword retrieval in the thirteenth embodiment;
Fig. 38 is a block diagram showing a third structure of the structural elements relating to keyword retrieval in the thirteenth embodiment;
Fig. 39 is a block diagram showing another example of the third structure of the structural elements relating to keyword retrieval in the thirteenth embodiment;
Fig. 40 is a block diagram showing a fourth structure of the structural elements relating to keyword retrieval in the thirteenth embodiment;
Fig. 41 is a block diagram showing another example of the fourth structure of the structural elements relating to keyword retrieval in the thirteenth embodiment;
Fig. 42 is a block diagram showing a fifth structure of the structural elements relating to keyword retrieval in the thirteenth embodiment;
Fig. 43 is a block diagram showing the structure of a fourteenth embodiment of a video/audio processing apparatus of the invention;
Fig. 44 is a block diagram showing the structural elements relating to keyword retrieval in the fourteenth embodiment;
Fig. 45 is a block diagram showing another example of the structural elements relating to keyword retrieval in the fourteenth embodiment;
Fig. 46 is a flowchart showing the processing in the fourteenth embodiment;
Fig. 47 is a flowchart showing the processing in another structure of the fourteenth embodiment;
Fig. 48 is a flowchart showing the processing of the data acquisition control unit 81 relating to [hold or execute retrieval] in step S341 of Fig. 46 and step S391 of Fig. 47;
Fig. 49 is a flowchart showing another processing of the data acquisition control unit 81 relating to [hold or execute retrieval] in step S341 of Fig. 46 and step S391 of Fig. 47;
Fig. 50 is a flowchart showing the processing of the data acquisition control unit 86 and related units in another structure of the fourteenth embodiment;
Fig. 51 is a diagram for explaining an example of the case where chapters (chapter division and chapter name setting) are used as the support data processing in the fourteenth embodiment.
Detailed Description Of The Invention
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings.
First embodiment
A video/audio processing apparatus according to the first embodiment of the present invention will be described with reference to Figs. 1 to 9. The apparatus of this embodiment creates key data, comprising search keys and keyword-related data, which serves as the origin of metadata created as support data to assist the user, so that when the user reproduces, edits, or retrieves video/audio data, the reproduction, editing, or retrieval can be performed as the user desires.
(1) Structure of the video/audio processing apparatus
Fig. 1 shows the structure of the video/audio processing apparatus of this embodiment.
The video/audio processing apparatus shown in Fig. 1 comprises a recording medium 90, a video data acquisition unit 48, a video data specification unit 47, an audio data separation unit 25, a keyword creation unit 31, a keyword-related-data acquisition unit 55, and a key data management unit 10.
Video/audio data or a video/audio signal is recorded on the recording medium 90 in advance. In addition, information for dividing the video and audio into units such as titles and chapters, and related information such as their names and attributes, are recorded on the recording medium 90.
The video data acquisition unit 48 reads and acquires the video/audio data recorded on the recording medium 90 and passes it to the video data specification unit 47. Alternatively, it may read and acquire an analog video/audio signal, convert it into digital video/audio data, and then pass the digital data to the video data specification unit 47. Incidentally, in addition to these processes, decryption (for example, B-CAS, the BS conditional access system), decoding (for example, MPEG-2), format conversion (for example, TS/PS), and rate (compression rate) conversion may be performed on the video/audio data as necessary.
The video data specification unit 47 specifies the whole or a partial segment of the video/audio data acquired by the video data acquisition unit 48. When the specified segment is obtained by a user operation, a device such as a mouse or remote control is conceivable, although other methods may also be used. The video/audio data is reproduced and displayed, and the user can specify the start and end positions while confirming the video/audio data. Alternatively, a chapter may be selected from a thumbnail image list of chapters, and the whole chapter may be treated as the specified segment.
The audio data separation unit 25 separates the audio data from the video/audio data specified by the video data specification unit 47 and passes it to the keyword creation unit 31. For example, the audio data separation unit 25 demultiplexes (Demux) MPEG-2 data, extracts the MPEG-2 audio elementary stream containing the audio data, and decodes it (AAC or the like).
The keyword creation unit 31 creates audio pattern data based on the audio data sent from the audio data separation unit 25; this audio pattern data is used in the keyword matching unit 30 described in the third to sixth embodiments. Here, the audio pattern data saved as a search key may be, for example, reproducible audio data, or data obtained by applying feature extraction to the audio data and parameterizing it.
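As a minimal illustration of "data obtained by feature extraction and parameterization", the following sketch reduces raw samples to per-frame energies. The patent does not fix any particular feature, so this choice (and the function name `frame_energies`) is an assumption:

```python
def frame_energies(samples, frame_len=4):
    """Split samples into non-overlapping frames of frame_len values and
    return the mean squared energy of each frame, a crude parameterized
    audio pattern."""
    feats = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        feats.append(sum(x * x for x in frame) / frame_len)
    return feats
```

A real keyword creation unit would more likely use spectral features robust to noise and level changes; the point is only that the search key may be a parameter sequence rather than the raw audio itself.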
The keyword-related-data acquisition unit 55 extracts from the recording medium 90 the keyword-related data, which is information relating to the segment of the video/audio data specified by the video data specification unit 47.
For example, when there is a name corresponding to the specified video/audio data, or a chapter name corresponding to the specified segment, that information is extracted. When a segment corresponding to a previous retrieval result is specified and the keyword-related data of that retrieval has been saved, keyword-related data such as that shown in Fig. 2 is extracted. Keyword-related data can also be input from outside.
Furthermore, even when there is no direct correspondence with the specified segment, if an adjacent chapter or mark is retrieved and found, its information is extracted and given together with the positional relationship between the specified segment and the chapter or mark.
The key data management unit 10 manages, as search keys, the plural pieces of audio pattern data produced by the keyword creation unit 31. In addition, for each search key, the keyword-related data acquired by the keyword-related-data acquisition unit 55, such as related names and attributes, can be added and managed.
In this specification, "matching" means that the use-object data (video/audio data or audio data) and the audio pattern data serving as the search key are compared with each other, and the position or segment in the use-object data corresponding to the audio pattern data is detected. This matching is performed by the keyword matching unit 30 in the third to sixth embodiments.
In this specification, "adding" means that keyword-related data, such as attribute information, is associated with the audio pattern data serving as the search key. This adding is performed by the key data management unit 10.
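The "matching" defined above can be illustrated by a minimal sliding-window comparison over parameterized audio. A real keyword matching unit 30 would use robust acoustic features and a tuned threshold; both the distance measure and the threshold here are assumptions for illustration:

```python
def match_positions(pattern, data, threshold=0.5):
    """Slide the search key's pattern over the use-object feature sequence
    and return the start indices whose mean absolute difference falls
    under the threshold."""
    n = len(pattern)
    hits = []
    for i in range(len(data) - n + 1):
        dist = sum(abs(a - b) for a, b in zip(pattern, data[i:i + n])) / n
        if dist <= threshold:
            hits.append(i)
    return hits
```

Each hit index, together with the pattern length, yields the start and end of a detected segment, which the attribute-controlled operations then turn into chapter divisions or marks.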
(2) Information managed in the key data management unit 10
Fig. 2 shows the information managed in the key data management unit 10, together with the audio pattern data A to D of the search keys created as described above. Here, a keyword name, a title, an attribute, a matching method, and parameters are managed. Hereinafter, these data are referred to as the keyword-related data.
For search key A, the information "fortune-telling corner", "Morning Info TV", "BGM attribute 1 (BGM-1)", "front match", and "BGM" is managed.
For search key B, the information "opening", "late-night drama series", "opening music attribute 1 (OPM-1)", "complete match", and "clear music (CLM)" is managed.
For search key C, the information "sports corner", "News at 10", "corner music attribute 1 (CNM-1)", "complete match", and "robust music (RBM)" is managed.
For search key D, the information "swimming start sound", "(empty)", "game start event attribute 1 (SGE-1)", "front match", and "robust effective sound (RBS)" is managed.
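For illustration, the Fig. 2 table can be transcribed as an in-memory structure of the kind the key data management unit 10 might hold. The field names and the English renderings of the keyword and program names are hypothetical, not fixed by the patent:

```python
# Hypothetical transcription of the Fig. 2 key data table (audio pattern
# data itself omitted; only the keyword-related data is shown).
SEARCH_KEYS = {
    "A": {"keyword": "fortune-telling corner", "title": "Morning Info TV",
          "attribute": "BGM-1", "matching": "front match",
          "type": "BGM"},
    "B": {"keyword": "opening", "title": "late-night drama series",
          "attribute": "OPM-1", "matching": "complete match",
          "type": "clear music (CLM)"},
    "C": {"keyword": "sports corner", "title": "News at 10",
          "attribute": "CNM-1", "matching": "complete match",
          "type": "robust music (RBM)"},
    "D": {"keyword": "swimming start sound", "title": None,
          "attribute": "SGE-1", "matching": "front match",
          "type": "robust effective sound (RBS)"},
}
```

The attribute field is what selects the recording-instruction operation applied when the key is detected, as Fig. 3 describes.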
Fig. 3 shows examples of the recording instruction and control operations corresponding to the attributes in Fig. 2. Incidentally, in the third to sixth embodiments, these attributes are used for the recording instruction operation in the matching result recording instruction unit 35, based on the detection results of the keyword matching unit 30.
"BGM attribute 1 (BGM-1)" is an attribute added to a search key for a recording instruction operation in which the whole detected segment becomes a marked segment as it is, and the name of the segment is set to "[keyword name]" (or "[keyword name]-number" when plural segments are detected). Incidentally, "#" in Fig. 3 represents a number.
"Opening music attribute 1 (OPM-1)" is an attribute added to a search key for a recording instruction operation in which chapter divisions are made at the start and end of the detected segment, the name of the chapter between the start and the end is set to "[opening]-number", the name of the chapter following the division at the end is set to "[main part]-number", and, when no title has been set, the "title" associated with the keyword is set as the title.
"Corner music attribute 1 (CNM-1)" is an attribute added to a search key for a recording instruction operation in which a chapter division is made at the start of the detected segment, the name of the chapter following the division is set to "[keyword name]" (or "[keyword name]-number" when plural segments are detected), and, when no title has been set, the "title" associated with the keyword is set as the title.
"Game start event attribute 1 (SGE-1)" is an attribute added to a search key for a recording instruction operation in which the point two seconds before the start of the detected segment becomes a mark point, and the name of the mark is set to "[keyword name]-number".
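The four recording-instruction behaviors above can be summarized as a hypothetical dispatch function. Times are in seconds, and the returned dictionaries are an assumed representation of what the matching result recording instruction unit 35 records, not a format from the patent:

```python
def apply_attribute(attr, start, end, key_name, index=1):
    """Map a detected segment (start, end) to the recording operation
    selected by the key's attribute."""
    if attr == "BGM-1":    # whole detected segment becomes a marked segment
        return {"mark_section": (start, end),
                "name": f"[{key_name}]-{index}"}
    if attr == "OPM-1":    # chapter divisions at both the start and the end
        return {"cuts": [start, end],
                "between_name": f"[opening]-{index}",
                "after_name": f"[main part]-{index}"}
    if attr == "CNM-1":    # chapter division at the start only
        return {"cuts": [start],
                "after_name": f"[{key_name}]-{index}"}
    if attr == "SGE-1":    # mark point two seconds before the start
        return {"mark_point": start - 2,
                "name": f"[{key_name}]-{index}"}
    raise ValueError(f"unknown attribute: {attr}")
```

For example, detecting search key D at 10 s with SGE-1 would place a mark at 8 s named "[swimming start sound]-1", matching the Fig. 3 rule.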
(3) Adding attribute information to search keyword A

With reference to the schematic diagram of Fig. 4, a description will be given of how an attribute is added when a part of the video/audio data recorded on the recording medium 90 is designated and search keyword A (audio pattern data A) is created. Incidentally, this attribute is information input from the keyword related data acquiring unit 55; how it is obtained by the processing of the keyword related data acquiring unit 55 will be described in detail later.

The hatched parts of the band shown in Fig. 4 represent segments for which marks have been recorded, the balloons represent the names of the marks, and the dense hatching on the band represents the segment designated for creating search keyword A.

When the keyword is created by designating the mark "Fortune-telling corner-1" in the title "Morning Information TV 12/22", in which the "Morning Information TV" program (1 hour 54 minutes) broadcast on December 22 is recorded, the coincidence between the keyword designated portion and the mark portion is judged, and based on the name of the mark and the like, the operation to be performed at detection time is added as the attribute information of search keyword A.

For example, the program name "Morning Information TV" is obtained from "Morning Information TV 12/22", and the keyword name "Fortune-telling corner" is obtained from "Fortune-telling corner-1"; an attribute is added with which the whole detected segment becomes a marked segment as it is, and the name "[Fortune-telling corner]-number" is given to the segment.

In addition, the start and the end of the mark "Fortune-telling corner-1" are compared with the start and the end of the keyword designated portion; when the starts are at substantially the same point and only the ends differ, the matching method is set to "front matching".

Incidentally, these operations can be set collectively as the control operation "BGM attribute 1 (BGM-1)".
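The two inferences just described — deriving a keyword name from a mark name such as "Fortune-telling corner-1", and setting the matching method by comparing the mark boundaries with the designated-segment boundaries — can be sketched as follows. The 0.2-second tolerance for "substantially the same point" is an assumed value; the patent only gives 200 milliseconds as an example of the first time width t1.

```python
import re

T1 = 0.2  # assumed tolerance (seconds) for "substantially the same point"

def strip_suffix(name):
    """Split a trailing numeric suffix: 'Fortune-telling corner-1' ->
    ('Fortune-telling corner', '1')."""
    m = re.fullmatch(r"(.+)-(\d+)", name)
    return (m.group(1), m.group(2)) if m else (name, None)

def infer_matching(mark_start, mark_end, sel_start, sel_end, tol=T1):
    """Compare mark and designated-segment boundaries within a tolerance."""
    same_start = abs(mark_start - sel_start) <= tol
    same_end = abs(mark_end - sel_end) <= tol
    if same_start and same_end:
        return "full matching"
    if same_start:
        return "front matching"
    if same_end:
        return "back matching"
    return None

base, num = strip_suffix("Fortune-telling corner-1")
print(base)                                        # the keyword name
print(infer_matching(100.0, 160.0, 100.1, 140.0))  # only the starts agree
```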
(4) Adding attribute information to search keyword B

With reference to the schematic diagram of Fig. 5, a description will be given of how an attribute is added when a part of the video/audio data recorded on the recording medium 90 is designated and search keyword B (audio pattern data B) is created. Incidentally, this attribute is information input from the keyword related data acquiring unit 55; how it is obtained by the processing of the keyword related data acquiring unit 55 will be described in detail later.

The vertical lines in the band shown in Fig. 5 represent points at which chapter division has been performed, the balloons represent chapter names, and the dense hatching on the band represents the portion designated for creating search keyword B.

When the keyword is created by selecting the chapter "Opening-2" in the title "Late-night Drama Series 12/23", in which the fifth-episode rebroadcast (1 hour 40 minutes) of the "Late-night Drama Series" broadcast on December 23 is recorded, the operation to be performed at detection time is added as the attribute information of search keyword B, based on the designated chapter "Opening-2", the name of the adjacent chapter "Main part-2" having the same suffix "2", and the like.

For example, the program name "Late-night Drama Series" is obtained from "Late-night Drama Series 12/23", and the keyword name "Opening" is obtained from "Opening-2"; an attribute is added with which chapter division is performed at the start and the end of the detected segment and the chapter lying between the start and the end is named "[Opening]-number". Further, the chapter following the division at the end is named "[Main part]-number".

Incidentally, these operations can be set collectively as the control operation "opening music attribute 1 (OPM-1)".

In addition, the genre "drama", the storage destination medium "HDD", the storage destination folder "My dramas", the recording rate (compression rate) "low" and the like set for the title "Late-night Drama Series 12/23" are set together, so that when a title in which keyword B is detected is recorded, the genre "drama" can be set instead of, or in addition to, the title, the storage destination can be made the "My dramas" folder of the HDD, or the data can be stored after its quality is lowered to the "low" rate in accordance with the recording rate.

In addition, when a playlist "Late-night Drama Series - Main part", which collects the "Main part" chapters of the program "Late-night Drama Series", exists, the new chapter "[Main part]-number" is added to the playlist "Late-night Drama Series - Main part", and its chapter name on the playlist is made "12/23 broadcast-number".

When the opening music begins with a crescendo, the start of the search keyword may be moved back (later in time) so that a stable segment can be designated as the search keyword. Even in this case, since the positional relation between the keyword segment and the chapters is taken into account, the chapter division in the detected segment is performed correctly regardless of how far the start is shifted.
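The patent does not spell out how the positional relation between the shifted keyword segment and the chapter boundary is kept. One plausible mechanism, shown here purely as an assumption, is to store the creation-time offset between the chapter boundary and the shifted keyword start, and to subtract that offset again at detection time:

```python
def chapter_point_for_detection(det_start, stored_offset):
    """At detection time, place the chapter division at the remembered
    offset from the keyword segment start, not at the raw detection start.

    det_start: start (seconds) of the keyword match in the new data
    stored_offset: (keyword start - chapter boundary) recorded at creation
    """
    return det_start - stored_offset

# at creation: chapter boundary at 300.0 s, keyword start shifted to 303.5 s
offset = 303.5 - 300.0
# at detection in new data the keyword starts at 1203.5 s
print(chapter_point_for_detection(1203.5, offset))
```

With this convention the chapter division lands at 1200.0 s, i.e. at the same distance before the detected keyword as the original boundary was before the shifted keyword start.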
(5) Adding attribute information to search keyword C

With reference to the schematic diagram of Fig. 6, a description will be given of how an attribute is added when a part of the video/audio data recorded on the recording medium 90 is designated and search keyword C (audio pattern data C) is created. Incidentally, this attribute is information input from the keyword related data acquiring unit 55; how it is obtained by the processing of the keyword related data acquiring unit 55 will be described in detail later.

The vertical lines in the band shown in Fig. 6 represent points at which chapter division has been performed, the balloons represent chapter names, and the dense hatching on the band represents the segment designated for creating search keyword C.

When the keyword is created by selecting the first corner-music portion of the chapter "Sports corner" in the title "News at 10 12/24", in which "News at 10" (60 minutes) broadcast on December 24 is recorded, the fact that the start of the designated segment is adjacent to the start of the chapter "Sports corner" is used, and the operation to be performed at detection time is added as the attribute information of search keyword C.

For example, an attribute is added with which chapter division is performed at the start of the detected segment, the chapter following the division is named "Sports corner", and the title is made "News at 10". These operations can be set collectively as the control operation "corner music attribute 1 (CNM-1)".
(6) Adding attribute information to search keyword D

With reference to the schematic diagram of Fig. 7, a description will be given of how an attribute is added when a part of the video/audio data recorded on the recording medium 90 is designated and search keyword D (audio pattern data D) is created. Incidentally, this attribute is information input from the keyword related data acquiring unit 55; how it is obtained by the processing of the keyword related data acquiring unit 55 will be described in detail later.

The balloons in the band shown in Fig. 7 represent mark points and their names, and the dense hatching on the band represents the portion designated for creating search keyword D.

The keyword is created by designating a part of the swimming start sound in the titles "ISWPDSS Tournament Live Broadcast 08/19", "News at 7 08/19" and "Today's Sports News 08/19"; in the title "ISWPDSS Tournament Live Broadcast 08/19", the "ISWPDSS Tournament Live Broadcast" of August 19 is recorded. By using the mark "Swimming start sound-5" existing two seconds before the start of the designated segment, the operation to be performed at detection time is added as the attribute information of search keyword D.

For example, an attribute is added with which the point two seconds before the start of the detected segment becomes a mark point, and the name of the mark is made "[Swimming start sound]-number".

In addition, as for the title, it can be set so that no title is given at detection time. These operations can be set collectively as the control operation "game start event attribute 1 (SGE-1)".
(7) Processing in the keyword related data acquiring unit 55

Next, the flow of the processing in the keyword related data acquiring unit 55 will be described with reference to the flowchart of Fig. 8.

As described above, the keyword related data acquiring unit 55 extracts, from the recording medium 90, the keyword related data (chapter names and marks) relevant to the segment of the video/audio data designated in the video data designating unit 47. The extraction processing will be described in order.

First, as the information relevant to the video/audio data designated in the video data designating unit 47, the start time Tsb and the end time Tse of the designated segment are obtained (step S101).

Next, information on the chapter boundaries and marks existing at the start time Tsb is obtained (step S111). The neighborhood of Tsb is checked within a first time width t1 regarded as substantially the same time, for example, 200 milliseconds. The information obtained for a chapter boundary includes the chapter boundary time Tbc, the chapter Cbf before the boundary, the chapter Cbl after the boundary, and so on. The information obtained for a mark includes the point mark time Tbm, the start time Tbp of a segment mark, the point mark Mbm, the segment mark Mbp, and so on.

Similarly, information on the chapter boundaries and marks existing at the end time Tse is obtained (step S112). The neighborhood of Tse is checked within the first time width t1 regarded as substantially the same time. The information obtained for a chapter boundary includes the chapter boundary time Tec, the chapter Cef before the boundary, the chapter Cel after the boundary, and so on. The information obtained for a mark includes the point mark time Tem, the start time Tep of a segment mark, the point mark Mem, the segment mark Mep, and so on.

When information on existing chapters and marks is obtained at step S111 and step S112, a chapter Cc and a segment mark Mcp whose start and end coincide with those of the designated segment are obtained from among the information (step S113).

In addition, when no chapter or mark existing at the start time Tsb is obtained at step S111, the neighborhood of Tsb is checked within a second time width t2 regarded as relevant to the designated segment, for example, 10 seconds (step S121).

At this time, since the portion after Tsb (between Tsb+t1 and Tsb+t2) is included in the selected segment, it can be given priority. Similarly to step S111, the information obtained for a chapter boundary includes the chapter boundary time Tbc', the chapter Cbf' before the boundary, the chapter Cbl' after the boundary, and so on. The information obtained for a mark includes the point mark time Tbm', the start time Tbp' of a segment mark, the point mark Mbm', the segment mark Mbp', and so on.

Likewise, when no chapter or mark existing at the end time Tse is obtained at step S112, the neighborhood of Tse is checked within the second time width t2 regarded as relevant to the designated segment (step S122). At this time, since the portion before Tse (between Tse-t2 and Tse-t1) is included in the selected segment, it can be given priority. Similarly to step S112, the information obtained for a chapter boundary includes the chapter boundary time Tec', the chapter Cef' before the boundary, the chapter Cel' after the boundary, and so on. The information obtained for a mark includes the point mark time Tem', the start time Tep' of a segment mark, the point mark Mem', the segment mark Mep', and so on.
Incidentally, when a chapter boundary or mark is found at step S121 or step S122, the history of segment designation is referred to, and when a portion closer to the found chapter or mark exists in that designation history, the chapter or mark is not adopted (step S123).

Next, based on the information of the chapters and marks found by the processing from step S111 to step S123, the attribute of the keyword corresponding to the designated segment is set (step S131).

Finally, the set attribute is compared with the existing attributes (step S141); when the set attribute partially coincides with an existing attribute, the existing attribute including the other attribute items is set (step S142), and when it does not coincide, it is set as an independent attribute as it is (step S143).
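Steps S111/S112 and S121/S122 amount to a two-stage neighborhood search: first within the narrow width t1 regarded as the same time, then, only if nothing is found, within the wider width t2, with priority given to the side lying inside the selected segment. A compact sketch of that search follows; the flat event list, function name, and the priority encoding are assumptions made for illustration.

```python
def find_nearby(t, events, t1=0.2, t2=10.0, prefer_inside=None):
    """Two-stage search around time t, mirroring steps S111/S121.

    events: list of (time, payload) for chapter boundaries or marks.
    First look within the narrow width t1 ("substantially the same
    time"); only if nothing is found, widen to t2.  prefer_inside=+1
    favours events after t (inside a segment that starts at t),
    prefer_inside=-1 favours events before t (inside a segment that
    ends at t).
    """
    for width in (t1, t2):
        hits = [(et, p) for et, p in events if abs(et - t) <= width]
        if hits:
            if prefer_inside:
                # events on the wrong side sort after in-segment events
                hits.sort(key=lambda h: (prefer_inside * (h[0] - t) < 0,
                                         abs(h[0] - t)))
            else:
                hits.sort(key=lambda h: abs(h[0] - t))
            return hits[0]
    return None

chapters = [(95.0, "boundary A"), (103.0, "boundary B")]
# nothing within 0.2 s of t=100, so the 10 s window applies;
# prefer_inside=+1 gives priority to the boundary after t
print(find_nearby(100.0, chapters, prefer_inside=+1))
```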
(8) Keyword attribute setting processing

Next, the details of the keyword attribute setting processing at step S131 will be described with reference to the flowchart of Fig. 9.

(8-1) When a chapter Cc coinciding with the start and the end is obtained

When a chapter Cc whose start and end coincide with those of the designated segment is obtained at step S113 of Fig. 8, the attributes are set as follows: keyword name = base name(chapter name(Cc)), matching method = full matching, operation = chapter, chapter start = detected segment start, chapter end = detected segment end, chapter name = existing naming rule(keyword name) (step S201).

(8-2) When a segment mark Mcp coinciding with the start and the end is obtained

When a segment mark Mcp whose start and end coincide with those of the designated segment is obtained at step S113, the attributes are set as follows: keyword name = base name(mark name(Mcp)), matching method = full matching, operation = segment mark, mark start = detected segment start, mark end = detected segment end, mark name = existing naming rule(keyword name) (step S202).
(8-3) When a chapter boundary coinciding with the start exists

When a chapter boundary coinciding with the start exists at step S111 of Fig. 8, the attributes are set as follows: matching method = front matching, operation = chapter, chapter division point = detected segment start (step S211).

In addition, it is judged whether the chapter Cbf before the boundary and the chapter Cbl after the boundary both have names and whether their suffixes are the same; when they are the same, the attributes are set as follows: keyword name = base name(chapter name(Cbl)), preceding chapter name = existing naming rule(base name(chapter name(Cbf))), following chapter name = existing naming rule(keyword name) (step S213).

When the suffixes differ or no suffix exists, and only the preceding chapter has no name, the attributes are set as follows: keyword name = base name(chapter name(Cbl)), following chapter name = existing naming rule(keyword name) (step S214). When only the following chapter has no name, the attributes are set as follows: keyword name = base name(chapter name(Cbf)), preceding chapter name = existing naming rule(keyword name) (step S215).
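The naming logic of steps S213 to S215 — which of the two chapters around a boundary supplies the keyword name — can be summarized for a boundary coinciding with the segment start. The helper below is a hedged sketch of that case only; the `-number` suffix convention follows the examples such as "Opening-2"/"Main part-2", and the function name is an assumption.

```python
import re

def split_name(name):
    """'Opening-2' -> ('Opening', '2'); unnamed/unsuffixed pass through."""
    m = re.fullmatch(r"(.+)-(\d+)", name or "")
    return (m.group(1), m.group(2)) if m else (name, None)

def keyword_name_at_start_boundary(prev_name, next_name):
    """Mirror of steps S213-S215 for a boundary matching the segment start:
    when both neighbouring chapters are named with the same numeric suffix
    (e.g. 'Opening-2' / 'Main part-2'), the chapter after the boundary
    supplies the keyword name; otherwise fall back to whichever is named."""
    pb, sp = split_name(prev_name)
    nb, sn = split_name(next_name)
    if prev_name and next_name and sp is not None and sp == sn:
        return nb          # S213: paired names, use the following chapter
    if next_name:
        return nb          # S214: only the following chapter is usable
    if prev_name:
        return pb          # S215: only the preceding chapter is named
    return None

print(keyword_name_at_start_boundary("Opening-2", "Main part-2"))
```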
(8-4) When a point mark Mbm coinciding with the start is obtained

When a point mark Mbm coinciding with the start is obtained at step S111 of Fig. 8, the attributes are set as follows: matching method = front matching, operation = point mark, mark point = detected segment start (step S221).

In addition, when the point mark Mbm has a name, the attributes are set as follows: keyword name = base name(mark name(Mbm)), mark name = existing naming rule(keyword name) (step S222).

Likewise, when a segment mark Mbp coinciding with the start is obtained, the attributes are set as follows: matching method = front matching, operation = segment mark, mark segment = the segment extending from the detected segment start for the mark length(Mbp) (step S223).

In addition, when the segment mark Mbp has a name, the attributes are set as follows: keyword name = base name(mark name(Mbp)), mark name = existing naming rule(keyword name) (step S224).
(8-5) When a chapter boundary coinciding with the end exists

When a chapter boundary coinciding with the end exists at step S112 of Fig. 8, the attributes are set as follows: matching method = back matching, operation = chapter, chapter division point = detected segment end (step S231).

In addition, it is judged whether the chapter Cef before the boundary and the chapter Cel after the boundary both have names and whether their suffixes are the same; when they are the same, the attributes are set as follows: keyword name = base name(chapter name(Cef)), preceding chapter name = existing naming rule(keyword name), following chapter name = existing naming rule(base name(chapter name(Cel))) (step S233).

When the suffixes differ or no suffix exists, and only the following chapter has no name, the attributes are set as follows: keyword name = base name(chapter name(Cef)), following chapter name = existing naming rule(keyword name) (step S234). When only the preceding chapter has no name, the attributes are set as follows: keyword name = base name(chapter name(Cel)), preceding chapter name = existing naming rule(keyword name) (step S235).
(8-6) When a point mark Mem coinciding with the end is obtained

When a point mark Mem coinciding with the end is obtained at step S112 of Fig. 8, the attributes are set as follows: matching method = back matching, operation = point mark, mark point = detected segment end (step S241).

In addition, when the point mark Mem has a name, the attributes are set as follows: keyword name = base name(mark name(Mem)), mark name = existing naming rule(keyword name) (step S242).

(8-7) When a segment mark Mep coinciding with the end is obtained

When a segment mark Mep coinciding with the end is obtained at step S112 of Fig. 8, the attributes are set as follows: matching method = back matching, operation = segment mark, mark segment = the segment extending back from the detected segment end for the mark length(Mep) (step S243).

In addition, when the segment mark Mep has a name, the attributes are set as follows: keyword name = base name(mark name(Mep)), mark name = existing naming rule(keyword name) (step S244).
Second Embodiment

An audio processing apparatus according to a second embodiment of the present invention will be described with reference to Figs. 10 to 12.

The difference between this embodiment and the first embodiment is that while video/audio data is handled in the first embodiment, only audio data is handled in this embodiment.
(1) Structure of the audio processing apparatus

Fig. 10 shows the structure of the audio processing apparatus of this embodiment.

The audio processing apparatus shown in Fig. 10 includes a recording medium 90, an audio data acquiring unit 28, an audio data designating unit 27, a keyword creating unit 31, a keyword related data acquiring unit 55 and a key data management unit 10.

Audio data, audio signals or video/audio signals are recorded in advance on the recording medium 90. In addition, information for division into units, such as the titles and chapters of the audio data, and the keyword related data, attributes and the like associated with these titles are recorded on the recording medium 90.

The audio data acquiring unit 28 reads and obtains the audio data recorded on the recording medium 90 and delivers it to the audio data designating unit 27. Alternatively, it may read and obtain an analog audio signal recorded on the recording medium 90, or read an analog video/audio signal recorded on the recording medium 90 and obtain only the audio signal, and deliver it to the audio data designating unit 27 after converting it into digital audio data. Incidentally, in addition to these processings, decryption processing, decoding processing, format conversion processing, rate conversion processing and the like of the audio data can be performed as required.

The audio data designating unit 27 designates the whole or a partial segment of the audio data obtained in the audio data acquiring unit 28. When the designated segment is obtained by a user operation, it is conceivable, for example, to use equipment such as a mouse or a remote control, although another method may be used: the audio data may be reproduced so that the user can designate the positions of the start and the end while confirming the audio, or a chapter may be selected from a chapter name list or the like so that the whole chapter is regarded as the designated segment.

The keyword creating unit 31 creates audio pattern data based on the audio data sent from the audio data designating unit 27; this audio pattern data is used in the keyword matching unit 30 of the third to sixth embodiments. Here, the audio pattern data kept as a keyword may be, for example, reproducible audio data, or data obtained by extracting features from the audio data and parameterizing them.

The keyword related data acquiring unit 55 extracts, from the recording medium 90, the keyword related data relevant to the segment of the audio data designated in the audio data designating unit 27.

For example, when a title corresponding to the designated audio data or a chapter name corresponding to the designated segment exists, it is extracted as the keyword related data. In addition, when the segment corresponding to a previous retrieval result is designated and the key data of that retrieval result is stored, the key data shown in Fig. 11 is extracted. The keyword related data may also be input from the outside.

Furthermore, even when no direct correspondence with the designated segment can be obtained, if a nearby chapter or mark is found by retrieval, the keyword related data is extracted, and information is given based on the positional relation between the designated segment and the chapter or mark.

Similarly to the first embodiment, the key data management unit 10 manages the plural audio pattern data created in the keyword creating unit 31 as search keywords. In addition, for each search keyword, the keyword related data obtained in the keyword related data acquiring unit 55, such as related names and attributes, can be managed together.
(2) Information managed in the key data management unit 10

Fig. 11 shows an example of the keyword related data managed in the key data management unit 10 of this embodiment, together with the audio pattern data of the search keywords created as a result.

Here, a keyword name, a title, an attribute, a matching method and a parameter are managed as the keyword related data.

For search keyword E (audio pattern data E), the information "Road congestion information", "Road Information Radio", "BGM attribute 2 (BGM-2)", "front matching" and "BGM" is managed.

For search keyword F (audio pattern data F), the information "Ending", "Mr. X's Talk Show", "ending music attribute 2 (EDM-2)", "back matching" and "strong-rhythm music (RBM)" is managed.

For search keyword G (audio pattern data G), the information "Cultural corner", "Travel Talk", "corner music attribute 2 (CNM-2)", "full matching" and "classical music (CLM)" is managed.

For search keyword H (audio pattern data H), the information "Sound of metal bat", "(no title)", "game highlight event attribute 2 (AGE-2)", "front matching" and "strong effect sound (RBS)" is managed.

In addition, for the search keywords J1 and J2 (audio pattern data J1 and J2), which operate as a pair, the information "Song 'A'", "(no title)", "music start attribute 2 (BOM-2)", "front matching" and "classical music (CLM)", and the information "Song 'A' end", "(no title)", "music end attribute 2 (EOM-2)", "back matching" and "classical music (CLM)" are managed.
Fig. 12 shows examples of the recording-instruction control operations corresponding to the attributes in Fig. 11. Incidentally, in the third to sixth embodiments, these attributes are used for the recording-instruction operation of the matching result recording instruction unit 35 based on the detection results of the keyword matching unit 30.

"BGM attribute 2 (BGM-2)" is an attribute added to a search keyword used in the recording-instruction operation. With it, the whole detected segment becomes a marked segment as it is, the broadcast time of the detection point is acquired in the form "HH:MM" (hours 00 to 23, minutes 00 to 59), and the name of the segment is set to "[keyword name]-time". Incidentally, "%R" in Fig. 12 represents time information in the "HH:MM" form.

"Ending music attribute 2 (EDM-2)" is an attribute added to a search keyword used in the recording-instruction operation. With it, chapter division is performed at the start and the end of the detected segment, the chapter lying between the start and the end is named "[Ending]" (or "[Ending]-number" when plural segments are detected), and when no title has been set, the "title" associated with the keyword is set as the title.

"Corner music attribute 2 (CNM-2)" is an attribute added to a search keyword used in the recording-instruction operation. With it, chapter division is performed at the start of the detected segment, the chapter following the division is named "[keyword name]", and when no title has been set, the "title" associated with the keyword is set as the title.

"Game highlight event attribute 2 (AGE-2)" is an attribute added to a search keyword used in the recording-instruction operation. With it, the point eight seconds before the start of the detected segment becomes a mark point, and the name of the mark is set to "[keyword name]-number".

"Music start attribute 2 (BOM-2)" is an attribute added to a search keyword used in the recording-instruction operation. With it, chapter division is performed at the start of the detected segment, and the chapter following the division is named "[keyword name]".

"Music end attribute 2 (EOM-2)" is an attribute added to a search keyword used in the recording-instruction operation. With it, chapter division is performed at the end of the detected segment.
(3) Adding attribute information to search keyword E

A description will be given of how an attribute is added when a part of the audio data recorded on the recording medium 90 is designated and search keyword E (audio pattern data E) is created. Incidentally, this attribute is information input from the keyword related data acquiring unit 55.

When the keyword is created by designating the portion of the mark "Road congestion information-10:28" in a title in which the "Road Information Radio" program is recorded, the coincidence between the keyword designated portion and the mark portion is judged, and based on the name of the mark, the operation to be performed at detection time is added as the attribute information of the keyword.

For example, the keyword name "Road congestion information" is obtained from the mark "Road congestion information-10:28", and an attribute is added with which the whole detected segment becomes a marked segment as it is, and the name "[Road congestion information]-time" is given to the segment. In addition, the start and the end of the mark "Road congestion information-10:28" are compared with the start and the end of the keyword designated portion; when the starts are at substantially the same point and only the ends differ, the matching method is set to "front matching".

Incidentally, these operations can be set collectively as the control operation "BGM attribute 2 (BGM-2)".
(4) Adding attribute information to search keyword H

A description will be given of how an attribute is added when a part of the audio data recorded on the recording medium 90 is designated and search keyword H (audio pattern data H) is created. Incidentally, this attribute is information input from the keyword related data acquiring unit 55.

When the keyword is created by designating a metal-bat sound portion in a title in which a "High School Baseball Tournament" program is recorded, the mark "Sound of metal bat-3" existing eight seconds before the designated segment is used, and the operation to be performed at detection time is added as the attribute information of the keyword.

For example, an attribute is added with which the point eight seconds before the detection position becomes a mark point, and the name of the mark is made "[Sound of metal bat]-number".

In addition, as for the title, it can be set so that no title is given at detection time. These operations can be set collectively as the control operation "game highlight event attribute 2 (AGE-2)".
(5) Adding attribute information to search keywords J1 and J2

A description will be given of how attributes are added when parts of the audio data recorded on the recording medium 90 are designated and search keyword J1 (audio pattern data J1) and search keyword J2 (audio pattern data J2) are created. Incidentally, these attributes are information input from the keyword related data acquiring unit 55.

When the keyword is created by designating the beginning part of the music of song "A" in a title in which a music program is recorded, the existence of a chapter of song "A" beginning at substantially the same point as the start of the designated segment is used, and the operation to be performed at detection time is added as the attribute information of the keyword. For example, an attribute is added with which chapter division is performed at the start of the detected segment and the chapter following the division is named with the title of song "A".

Likewise, when the keyword is created by designating the ending part of the same music, the existence of a chapter of song "A" ending at substantially the same point as the end of the designated segment is used, and the operation to be performed at detection time is added as the attribute information of the keyword. For example, an attribute is added with which chapter division is performed at the end of the detected segment and the chapter preceding the division is named with the title of song "A". In addition, as for the title, it can be set so that no title is given at detection time. These operations can be set collectively as the control operations "music start attribute 2 (BOM-2)" and "music end attribute 2 (EOM-2)".
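The paired operation of J1 and J2 — a chapter opened where the song's start keyword is detected and closed at the next detection of its end keyword — could be sketched as follows. The pairing rule (matching each detected start with the first later detected end) is an assumption made for illustration; the patent only states that the two keywords operate as a pair.

```python
def chapters_from_pair(start_hits, end_hits, title="Song A"):
    """Combine detections of a start keyword (J1, BOM-2) and an end
    keyword (J2, EOM-2) into named chapters.

    start_hits, end_hits: detection times in seconds.  Each start is
    paired with the first end that follows it; unpaired starts are
    dropped.
    """
    chapters = []
    for s in start_hits:
        ends = [e for e in end_hits if e > s]
        if ends:
            chapters.append((s, min(ends), title))
    return chapters

# J1 detects the opening of the song at 600 s, J2 its ending at 840 s
print(chapters_from_pair([600.0], [840.0]))
```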
Third embodiment
A video/audio processing apparatus according to a third embodiment of the invention will be described with reference to Figures 13 to 17.
The video/audio processing apparatus according to the first embodiment is an apparatus for creating key data, including a search key and keyword related data, as the origin of metadata serving as support data that assists the user so that, when the user reproduces, edits or retrieves video/audio data, the reproduction, editing or retrieval can be performed in accordance with the operation desired by the user. In addition to the above functions, the video/audio processing apparatus of this embodiment has a function of recording, based on the key data, the metadata serving as support data into video/audio data used as object data.
(1) Structure of the video/audio processing apparatus
Figure 13 shows the structure of the video/audio processing apparatus according to the present embodiment.
The video/audio processing apparatus shown in Figure 13 includes, as structural elements related to keyword creation, a video data acquiring unit 48, a video data specifying unit 47, an audio data separating unit 25, a keyword creating unit 31 and a keyword related data acquiring unit 55. As structural elements related to keyword retrieval, it includes a video data acquiring unit 41, an audio data separating unit 22, a keyword matching unit 30 and a matching result recording instruction unit 35. As common structural elements, it includes a recording medium 90 and a key data management unit 10.
The structural elements related to keyword creation are the same as those in the first embodiment, and their description will be omitted.
The video data acquiring unit 41 related to keyword retrieval acquires video/audio data input from an external digital video camera, a receiver tuner for digital broadcasting or the like, or another digital device, records it on the recording medium 90, and then delivers it to the audio data separating unit 22. Alternatively, it acquires an analog video/audio signal input from an external video camera, a broadcast reception tuner or another device, and after converting it into digital video/audio data, can record the digital video/audio data on the recording medium 90 or deliver it to the audio data separating unit 22. Incidentally, in addition to these processes, decryption processing of the video/audio data (for example, B-CAS), decoding processing (for example, MPEG2), format conversion processing (for example, TS/PS), rate (compression rate) conversion processing and the like may be performed as needed.
The audio data separating unit 22 separates audio data from the video/audio data acquired by the video data acquiring unit 41, and delivers it to the keyword matching unit 30.
Similarly to the first embodiment, the key data management unit 10 manages plural pieces of audio pattern data as search keys. In addition, for each search key, information such as a related name and attribute can be managed together. As the result of keyword creation, the information shown in Fig. 2 is managed.
The keyword matching unit 30 matches one or more pieces of audio pattern data, selected in advance from among the audio pattern data managed as search keys, against the audio data separated by the audio data separating unit 22, and detects similar sections.
(2) Description of keyword retrieval
For search key A, in accordance with the information "front match" and "BGM", an algorithm is used in which attention is paid to the music component by masking the frequency band of voice to evaluate the degree of coincidence, and a portion where the pattern coincides from the head of the search key is detected (while the terminal end is left free).
For search key B, in accordance with the information "full match" and "absolute music", an algorithm is used in which importance is attached to the music component to evaluate the degree of coincidence, and a place where the whole pattern of the search key coincides is detected.
For search key C, in accordance with the information "full match" and "loud music", an algorithm is used in which the degree of coincidence is evaluated while importance is attached to the music component and some noise is allowed, and a place where the whole pattern of the search key coincides is detected.
For search key D, in accordance with the information "front match" and "sound-effect music", an algorithm is used in which attention is paid to spectrum peaks to evaluate the degree of coincidence, and a portion where the pattern coincides from the head of the search key is detected (while the terminal end is left free).
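The "front match" evaluation described above can be illustrated, very loosely, as follows; the patent does not give the actual algorithm, so this sketch simply treats the audio as a sequence of per-frame scalar features (as might remain after masking the voice band) and reports positions where the head of the key coincides while the terminal end is left free. All names and thresholds are hypothetical.

```python
def front_match(key, stream, min_len=4, tol=0.2):
    """Illustrative front matching: find positions where the stream
    coincides with the *head* of the key for at least min_len frames,
    leaving the terminal end free. Frames are scalar features
    (e.g. per-frame energy after masking the voice band); tol is the
    allowed per-frame difference."""
    hits = []
    for start in range(len(stream)):
        run = 0
        while (run < len(key) and start + run < len(stream)
               and abs(stream[start + run] - key[run]) <= tol):
            run += 1
        if run >= min_len:
            hits.append((start, run))  # (position, matched length)
    return hits

key = [1.0, 0.8, 0.6, 0.4, 0.2]
stream = [0.0, 0.0, 1.0, 0.8, 0.6, 0.4, 0.9, 0.1]
print(front_match(key, stream))  # the key's head coincides at frame 2
```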
Incidentally, although it is assumed that each piece of information in Fig. 2 is set in advance and managed with the search key, part or all of the information may be changed and used when the search key is selected and set in the keyword matching unit 30 for actual detection and retrieval. For example, although search key B normally has "full match" and "absolute music (CLM)", when treated as "front match" and "BGM" it becomes suitable for retrieval and detection in a preview of the program.
(3) Matching result recording instruction unit 35
The matching result recording instruction unit 35 acquires, from the key data management unit 10, the key data detected by the keyword matching unit 30, and records metadata on the recording medium 90 so that reproduction, editing and retrieval can easily be performed in accordance with the attribute of the search key in the key data. The metadata recorded on the recording medium 90 has, for example, a structure managed in the VR (Video Recording) mode of a DVD (Digital Versatile Disc).
An example of the recording instruction operation performed by the matching result recording instruction unit 35 for the purpose and control corresponding to each attribute will be described with reference to Fig. 3.
For "BGM attribute 1 (BGM-1)", the matching result recording instruction unit 35 performs a recording instruction operation to the recording medium 90 so that the whole detected section becomes a marked section as it is, and the title of the section is set to "(title of the keyword)" (or "(title of the keyword)-number" when plural sections are detected); based on the recording instruction operation, the recording medium 90 records this as metadata. Incidentally, "#" in Fig. 3 denotes a number.
For "prelude music attribute 1 (OPM-1)", the matching result recording instruction unit 35 performs a recording instruction operation to the recording medium 90 so that chapter division is performed at the starting end and the terminal end of the detected section, the title of the chapter between the starting end and the terminal end is set to "(prelude)-number", the title of the succeeding chapter produced by the division at the terminal end is set to "[main part]-number", and, when no title has been set, the "title" related to the keyword is set as the title; based on the recording instruction operation, the recording medium 90 performs the recording as metadata.
For "corner music attribute 1 (CNM-1)", the matching result recording instruction unit 35 performs a recording instruction operation to the recording medium 90 so that chapter division is performed at the starting end of the detected section, the title of the succeeding chapter produced by this division is set to "(title of the keyword)" (or "(title of the keyword)-number" when plural sections are detected), and, when no title has been set, the "title" related to the keyword is set as the title; based on the recording instruction operation, the recording medium 90 performs the recording as metadata.
For "game start event attribute 1 (SGE-1)", the matching result recording instruction unit 35 performs a recording instruction operation to the recording medium 90 so that a point two seconds before the starting end of the detected section becomes a mark point and the title of the mark is set to "(title of the keyword)-number"; based on the recording instruction, the recording medium 90 performs the recording as metadata.
(4) Recording instruction operation when search key A is detected
When search key A is detected by the keyword matching unit 30, the matching result recording instruction unit 35 performs a recording instruction operation to the recording medium 90 in accordance with the control operation of "BGM attribute 1". Figure 14 is a schematic diagram showing the information recorded on the recording medium 90.
The "fortune-telling corner" is detected twice, at 58 minutes and at 1 hour 51 minutes from the beginning of the broadcast (indicated by the dense marks on the belt), in the "early morning information TV" program (1 hour 54 minutes) broadcast on December 22, and marked sections (the portions indicated by oblique lines in the belt) with the titles "fortune-telling corner 1" and "fortune-telling corner 2" are given.
By this, it becomes possible, for example, to extract only the fortune-telling corner portions, re-encode them at a high compression rate, and send them to a portable device.
(5) Recording instruction operation when search key B is detected
When search key B is detected by the keyword matching unit 30, the matching result recording instruction unit 35 performs a recording instruction operation to the recording medium 90 in accordance with the control operation of "prelude music attribute 1". Figure 15 is a schematic diagram showing the information recorded on the recording medium 90.
"Prelude" sections are detected five times in total, at 0 minutes 30 seconds, at 20 minutes 15 seconds and so on (indicated by the dense marks on the belt), in a rerun program (1 hour 40 minutes) collecting five episodes of the "late-night drama series" broadcast on December 23, and chapter division (indicated by the vertical lines in the belt) is performed so as to produce the chapter before the first "prelude" (no title), the first "prelude-1", "main part-1" after the first prelude, the second "prelude-2", "main part-2" after the second prelude, and so on. In addition, the title "late-night drama series" is set. Here, in the case where, in association with search key B, the genre "drama", the storage destination medium "HDD", the storage destination folder "my dramas", the post-storage rate (compression rate) "low" and the playlist "late-night drama series - main parts" are stored in addition to the title, then when search key B is detected, the genre "drama" can be set instead of or in addition to the title, the storage destination can be made the folder "my dramas" on the HDD, the data can be stored after being converted to the "low" rate (with lower quality) in accordance with the post-storage rate, or a new chapter "(main part)-number" can be added to the playlist "late-night drama series - main parts".
By this, for example, in the case where it is desired to watch only the third episode of the Wednesday rerun, "prelude-3" can be selected from the chapter list and reproduced, or, by performing a "jump to next chapter" operation during reproduction of a prelude, all the main parts can be watched without repeatedly watching the same prelude. In addition, it becomes possible to perform title setting independently of the EPG, and to automatically control genre setting, storage destination folder setting, and the like.
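The chapter naming performed for "prelude music attribute 1" can be sketched as follows, assuming detections arrive as (start, end) pairs in seconds; the naming rules mirror the Figure 15 example informally and the function name is hypothetical.

```python
def chapters_from_preludes(detections, program_len):
    """Illustrative chapter naming from detected prelude sections:
    each detection (start, end) yields "prelude-N", and the gap after
    it yields "main part-N". Returns (start, end, title) tuples;
    title None means an untitled chapter."""
    dets = sorted(detections)
    chapters = []
    for i, (s, e) in enumerate(dets, 1):
        if i == 1 and s > 0:
            chapters.append((0, s, None))   # untitled chapter before prelude-1
        chapters.append((s, e, f"prelude-{i}"))
        nxt = dets[i][0] if i < len(dets) else program_len
        chapters.append((e, nxt, f"main part-{i}"))
    return chapters

# Two preludes as in the example: 30 s (70 s long) and 1215 s
print(chapters_from_preludes([(30, 100), (1215, 1285)], 6000))
```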
(6) Recording instruction operation when search key C is detected
When search key C is detected by the keyword matching unit 30, the matching result recording instruction unit 35 performs a recording instruction operation to the recording medium 90 in accordance with the control operation of "corner music attribute 1". Figure 16 is a schematic diagram showing the information recorded on the recording medium 90.
The music of the "sports corner" in the "News at 10" program (60 minutes) broadcast on December 24 is detected, chapter division is performed at the head of the corner music (35 minutes 30 seconds), and the chapter title "sports corner" is given. By this, for example, a user interested only in sports can select and reproduce the "sports corner" from the chapter list. In addition, it becomes possible to watch and listen in such a manner that, after watching the main news from the head of the program for a while, when interest is lost, an operation such as "jump to next chapter" is performed so as to move directly to the "sports corner" without watching the remaining part.
(7) Recording instruction operation when search key D is detected
When search key D is detected by the keyword matching unit 30, the matching result recording instruction unit 35 performs a recording instruction operation to the recording medium 90 in accordance with the control operation of "game start event attribute 1". Figure 17 is a schematic diagram showing the information recorded on the recording medium 90.
The "swimming start sound" is detected 12 times in the "ISWPDSS competition live broadcast" program, twice in the "News at 7" program broadcast on the same day, and 5 times in the "today's sports news" program, and marks such as "swimming start sound-1" and "swimming start sound-2" are given to points two seconds before each detection. By this, the start scene of each race can be accessed by an operation such as "jump to next mark"; for example, in the case where there is a race desired to be watched because a particular athlete participates, jumps can be performed successively while watching the reproduced video, so that the desired race can be found.
Incidentally, the video data acquiring unit 41 and the audio data separating unit 22, which are structural elements related to keyword retrieval, perform processing similar to that of the video data acquiring unit 48 and the audio data separating unit 25, which are structural elements related to keyword creation, and these units can be shared.
Fourth embodiment
An audio processing apparatus according to a fourth embodiment of the invention will be described with reference to Figure 18.
The difference between this embodiment and the third embodiment is that, although video/audio data is processed in the third embodiment, only audio data is processed in this embodiment.
(1) Structure of the audio processing apparatus
Figure 18 shows the structure of the audio processing apparatus of the present embodiment.
The audio processing apparatus shown in this drawing includes, as structural elements related to keyword creation, an audio data acquiring unit 28, an audio data specifying unit 27, a keyword creating unit 31 and a keyword related data acquiring unit 55. As structural elements related to keyword retrieval, it includes an audio data acquiring unit 21, a keyword matching unit 30 and a matching result recording instruction unit 35. As common structural elements, it includes a recording medium 90 and a key data management unit 10.
The structural elements related to keyword creation are the same as those in the second embodiment, and their description will be omitted.
The audio data acquiring unit 21 related to keyword retrieval acquires audio data input from an external digital microphone, a receiver tuner for digital broadcasting or the like, or another digital device, records it on the recording medium 90, and then delivers it to the keyword matching unit 30. Alternatively, it acquires an analog audio signal input from an external microphone, a broadcast reception tuner or another device, and after converting it into digital audio data, can record the digital audio data on the recording medium 90 or deliver it to the keyword matching unit 30. In addition to these processes, decryption processing, decoding processing, format conversion processing, rate conversion processing and the like of the audio data may be performed as needed.
Similarly to the second embodiment, the key data management unit 10 manages plural pieces of audio pattern data as search keys. In addition, for each search key, information such as a related name and attribute is managed together.
(2) Description of keyword retrieval
It is assumed that the information shown in Figure 11 is managed as the result of keyword creation, and keyword retrieval will be described on that assumption.
The keyword matching unit 30 matches one or more pieces of audio pattern data, selected in advance from among the audio pattern data managed as search keys in the key data management unit 10, against the audio data acquired by the audio data acquiring unit 21, and detects similar sections.
For search key E, in accordance with the information "front match" and "BGM", an algorithm is used in which attention is paid to the music component by masking the frequency band of voice to evaluate the degree of coincidence, and a portion where the pattern coincides from the head of the search key is detected (while the terminal end is left free).
For search key F, in accordance with the information "back match" and "loud music", an algorithm is used in which the degree of coincidence is evaluated while importance is attached to the music component and some noise is allowed, and a portion where the pattern coincides up to the tail of the search key is detected (while the starting end is left free).
For search key G, in accordance with the information "full match" and "absolute music", an algorithm is used in which importance is attached to the music component to evaluate the degree of coincidence, and a place where the whole pattern of the search key coincides is detected.
For search key H, in accordance with the information "front match" and "sound-effect music", an algorithm is used in which attention is paid to spectrum peaks to evaluate the degree of coincidence, and a portion where the pattern coincides from the head of the search key is detected (while the terminal end is left free).
For search key J1, in accordance with the information "front match" and "absolute music", an algorithm is used in which importance is attached to the music component to evaluate the degree of coincidence, and a portion where the pattern coincides from the head of the search key is detected (while the terminal end is left free).
For search key J2, in accordance with the information "back match" and "absolute music", an algorithm is used in which importance is attached to the music component to evaluate the degree of coincidence, and a portion where the pattern coincides up to the tail of the search key is detected (while the starting end is left free).
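The difference between "front match", "back match" and "full match" in the descriptions above amounts to which portion of the key pattern must coincide; a minimal sketch under the assumption that key and window lengths are measured in frames, with hypothetical names:

```python
def compare_span(mode, key_len, window_len):
    """Illustrative choice of which part of the search key is compared,
    per match mode: "front" fixes the head and leaves the terminal end
    free, "back" fixes the tail and leaves the starting end free, and
    "full" compares the whole key. Returns (key_slice_start, key_slice_end)."""
    if mode == "full":
        return (0, key_len)                  # the whole pattern must coincide
    n = min(key_len, window_len)
    if mode == "front":
        return (0, n)                        # head of the key, tail free
    if mode == "back":
        return (key_len - n, key_len)        # tail of the key, head free
    raise ValueError(mode)

print(compare_span("back", 100, 60))
```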
Incidentally, although it is assumed that each piece of information shown in Figure 11 is set in advance and managed with the search key, part or all of the information may be changed and used when the search key is selected and set in the keyword matching unit 30 for actual detection and retrieval. For example, although search key J1 normally has "absolute music (CLM)", when treated as "BGM" it becomes suitable for retrieval and detection in a music program or CM (commercial message) having a form in which narration is placed over the beginning of the music.
(3) Matching result recording instruction unit 35
The matching result recording instruction unit 35 acquires, from the key data management unit 10, the key data detected by the keyword matching unit 30. The key data is recorded as metadata on the recording medium 90 so that reproduction, editing and retrieval can easily be performed in accordance with the attribute of the search key in the key data.
Figure 12 shows an example of the recording instruction operation performed by the matching result recording instruction unit 35 for the purpose and control corresponding to each attribute. Since the contents are the same as in the second embodiment, their description will be omitted.
(4) When search key E is detected
For example, when search key E is detected, "road congestion information" sections are repeatedly detected in a "road information radio broadcast" program in accordance with the control operation of "BGM attribute 2", and marks with titles such as "road congestion information-9:55", "road congestion information-10:28" and "road congestion information-10:56" are appended to the detected sections in accordance with the broadcast time. By this, it becomes possible, for example, to sequentially extract and listen to only the road congestion information, starting from the latest item.
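Appending the broadcast time to the mark title, as in "road congestion information-9:55", can be sketched as follows, assuming the recording is known to start at a given clock time; the start time and function name are hypothetical.

```python
def timed_mark_titles(base, detect_secs, start_hm=(9, 0)):
    """Illustrative mark titles carrying the broadcast time: detection
    offsets (seconds from the start of recording) are converted using
    an assumed recording start time (hour, minute)."""
    h0, m0 = start_hm
    titles = []
    for sec in detect_secs:
        total_min = h0 * 60 + m0 + sec // 60
        titles.append(f"{base}-{total_min // 60}:{total_min % 60:02d}")
    return titles

# Detections at 55, 88 and 116 minutes into a recording started at 9:00
print(timed_mark_titles("road congestion information", [3300, 5280, 6960], (9, 0)))
```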
(5) When search key H is detected
When search key H is detected, the "sound of a metal bat" is detected in a "high school baseball league" program in accordance with the control operation of "contest highlight event attribute 2", and since a mark is placed eight seconds before each detection point, only the portions just before the hits, starting from the pitching motion, can be reproduced continuously.
(6) When search keys J1 and J2 are detected
When search keys J1 and J2 are detected, chapter division is performed at both the beginning and the end of the music of "title of song "A"" in accordance with the combination of the control operations of "music beginning attribute 2" and "music end attribute 2", and the section of this music becomes the chapter "title of song "A"".
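Combining the two control operations amounts to pairing each "music beginning" detection with the next "music end" detection to form a titled chapter; a minimal sketch under that assumption, with hypothetical names:

```python
def music_chapters(starts, ends, title):
    """Illustrative pairing of "music beginning" detections (J1) with
    "music end" detections (J2) into titled chapters; unpaired
    detections are ignored. Times are seconds."""
    chapters, remaining = [], sorted(ends)
    for s in sorted(starts):
        later = [e for e in remaining if e > s]
        if later:
            e = later[0]
            chapters.append((s, e, title))   # chapter division at both ends
            remaining.remove(e)
    return chapters

print(music_chapters([300], [480], 'title of song "A"'))
```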
Fifth embodiment
A video/audio processing apparatus according to a fifth embodiment of the invention will be described with reference to Figure 19.
The difference between this embodiment and the third embodiment is that recording and processing are not performed on video/audio data acquired from the outside; instead, processing is performed on video/audio data that has already been recorded.
Figure 19 shows the structure of the video/audio processing apparatus of this embodiment.
As structural elements related to keyword creation, the video/audio processing apparatus shown in Figure 19 includes a video data acquiring unit 48, a video data specifying unit 47, an audio data separating unit 25, a keyword creating unit 31 and a keyword related data acquiring unit 55. As structural elements related to keyword retrieval, it includes a video data acquiring unit 46, an audio data separating unit 22, a keyword matching unit 30 and a matching result recording instruction unit 35. As common structural elements, it includes a recording medium 90 and a key data management unit 10.
The structural elements related to keyword creation are similar to those in the first embodiment, and their description will be omitted.
Video/audio data or a video/audio signal is recorded in advance on the recording medium 90. In addition, information for dividing it into plural units (such as titles and chapters of video and audio) and information on their names, attributes and the like are recorded on the recording medium 90.
The video data acquiring unit 46 related to keyword retrieval reads and acquires the video/audio data recorded on the recording medium 90, and delivers it to the audio data separating unit 22. Alternatively, it reads and acquires an analog video/audio signal, and after converting the analog video/audio signal into digital video/audio data, can deliver the digital video/audio data to the audio data separating unit 22. In addition to these processes, decryption processing, decoding processing, format conversion processing, rate conversion processing and the like of the video/audio data may be performed as needed.
The audio data separating unit 22 separates audio data from the video/audio data acquired by the video data acquiring unit 46, and delivers it to the keyword matching unit 30. For example, MPEG2 data is demultiplexed to extract an MPEG2 audio ES containing the audio data, which is then decoded (AAC or the like).
Similarly to the third embodiment, the key data management unit 10 manages plural pieces of audio pattern data as search keys. In addition, for each search key, information on a related name and attribute is managed together.
For example, as shown in Fig. 2, for search key A, "fortune-telling corner", "early morning information TV", "BGM attribute 1" and the like are managed as keyword related information, and for search key B, "prelude", "late-night drama series", "prelude music attribute 1" and the like are managed as keyword related information.
The keyword matching unit 30 matches one or more pieces of audio pattern data, selected in advance from among the audio pattern data managed as search keys in the key data management unit 10, against the acquired audio data, and detects similar sections.
The matching result recording instruction unit 35 acquires, from the key data management unit 10, the key data detected by the keyword matching unit 30, and records metadata on the recording medium 90 so that reproduction, editing and retrieval can easily be performed in accordance with the attribute of the search key in the key data.
Similarly to Fig. 3, the recording instruction operation is controlled for each attribute; for example, for "BGM attribute 1" of search key A, the whole detected section is given the title "(title of the keyword)", and for "prelude music attribute 1" of search key B, the portion between the starting end and the terminal end of the detected section is titled "prelude", the portion after the terminal end is titled "main part", and the program title is set.
In addition, the metadata recorded on the recording medium 90 by the matching result recording instruction unit 35 has a structure controlled by, for example, ARIB STD-B38.
Figure 21 shows an example of the metadata recorded on the recording medium 90 by the matching result recording instruction unit 35 when search key A is detected by the keyword matching unit 30. Two segments are recorded: the 120-second "fortune-telling corner-1" starting 3480 seconds (58 minutes) after the beginning of the program, and the 180-second "fortune-telling corner-2" starting at 6660 seconds (1 hour 51 minutes), together with a "fortune-telling corner" segment group in which these fortune-telling corners are collected.
Figure 22 shows an example of the metadata recorded on the recording medium 90 by the matching result recording instruction unit 35 when search key B is detected by the keyword matching unit 30. For the program, information such as the name (title) "late-night drama series" and the genre "drama" is recorded, and two segments are recorded: the 70-second "prelude-1" starting 30 seconds after the beginning of the program and "prelude-2" starting at 1215 seconds (20 minutes 15 seconds), together with "main part-1", "main part-2" and the like between and after them.
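The Figure 21 segment-group metadata can be pictured, informally, as the following structure; this is a hypothetical Python rendering, not the normative ARIB STD-B38 schema, and the field names are assumptions.

```python
# Two "fortune-telling corner" segments and the segment group that
# collects them, mirroring the Figure 21 example.
metadata = {
    "program": "early morning information TV",
    "segments": [
        {"id": 1, "start_sec": 3480, "dur_sec": 120, "title": "fortune-telling corner-1"},
        {"id": 2, "start_sec": 6660, "dur_sec": 180, "title": "fortune-telling corner-2"},
    ],
    "segment_groups": [
        {"title": "fortune-telling corner", "members": [1, 2]},
    ],
}
# Total extracted duration across the group's member segments
total = sum(s["dur_sec"] for s in metadata["segments"])
print(total)
```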
Sixth embodiment
An audio processing apparatus according to a sixth embodiment of the invention will be described with reference to Figure 20.
The difference between this embodiment and the fourth embodiment is that recording and processing are not performed on audio data acquired from the outside; instead, processing is performed on audio data that has already been recorded.
Figure 20 shows the structure of the audio processing apparatus of this embodiment.
The audio processing apparatus shown in this drawing includes, as structural elements related to keyword creation, an audio data acquiring unit 28, an audio data specifying unit 27, a keyword creating unit 31 and a keyword related data acquiring unit 55. As structural elements related to keyword retrieval, it includes an audio data acquiring unit 26, a keyword matching unit 30 and a matching result recording instruction unit 35. As common structural elements, it includes a recording medium 90 and a key data management unit 10.
The structural elements related to keyword creation are similar to those in the second embodiment, and their description will be omitted.
Similarly to the fourth embodiment, the key data management unit 10 manages plural pieces of audio pattern data as search keys. In addition, for each search key, information such as a related name and attribute is managed together.
Audio data, an audio signal or a video/audio signal is recorded in advance on the recording medium 90. In addition, information for dividing it into plural units (such as titles and chapters of the audio data) and information on their names, attributes and the like are recorded on the recording medium 90.
The audio data acquiring unit 26 related to keyword retrieval reads and acquires the audio data recorded on the recording medium 90, and delivers it to the keyword matching unit 30.
Alternatively, the audio data acquiring unit 26 reads and acquires an analog audio signal recorded on the recording medium 90, or reads an analog video/audio signal recorded on the recording medium 90 and acquires only the audio signal, and after converting the analog audio signal into digital audio data, delivers the digital audio data to the keyword matching unit 30. Incidentally, in addition to these processes, decryption processing, decoding processing, format conversion processing, rate conversion processing and the like of the audio data may be performed as needed.
It is assumed that the information shown in Figure 11 is managed as the result of keyword creation, and keyword retrieval will be described on that assumption.
The keyword matching unit 30 matches one or more pieces of audio pattern data, selected in advance from among the audio pattern data managed as search keys in the key data management unit 10, against the audio data acquired by the audio data acquiring unit 26, and detects similar sections.
The matching result recording instruction unit 35 acquires, from the key data management unit 10, the key data detected by the keyword matching unit 30, and records metadata on the recording medium 90 so that reproduction, editing and retrieval can easily be performed in accordance with the attribute of the search key in the key data.
The 7th embodiment
With reference to Figure 23 and Figure 24 according to a seventh embodiment of the invention video/audio processing unit is described.
Support the user in the metadata of creating as supporting data, make when user's reproduction, editor or retrieve video/voice data, can carry out under the situation of reproducing, editing or retrieving according to the operation of user expectation, the video/audio processing unit of present embodiment is to be used to create the device of key data as the source (source) of metadata, and this key data comprises search key and keyword related data.
The dissimilarity of this embodiment and first embodiment is: the fragment that becomes search key is from outside appointment, supports the input of data to determine and be based on first.
(1) Structure of the video/audio processing apparatus
Figure 23 is a block diagram showing the structure of the video/audio processing apparatus of this embodiment.
The video/audio processing apparatus shown in Figure 23 includes a first video data acquisition unit 43, a first audio data separation unit 25, a first support data input unit 66, a keyword specification information creation unit 61, a keyword creation unit 31, a keyword related data acquisition unit 55, a recording medium 91A, and a key data management unit 10.
The first video data acquisition unit 43 obtains, from the outside, the video/audio data of the first content related to keyword creation, and records it on the recording medium 91A.
The first audio data separation unit 25 separates the audio data from the video/audio data obtained by the first video data acquisition unit 43 and delivers it to the keyword specification information creation unit 61. For example, the first audio data separation unit demultiplexes MPEG-2 data, extracts the MPEG-2 audio elementary stream containing the audio data, and decodes it (AAC or the like).
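To illustrate the demultiplexing step (not the decoding), the following deliberately simplified sketch filters MPEG-2 transport stream packets by PID and concatenates their payloads. It ignores adaptation fields, PES headers, continuity counters, and AAC decoding; the function name and the treatment of the 4-byte header are simplifying assumptions:

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def extract_pid_payloads(ts_data: bytes, audio_pid: int) -> bytes:
    """Collect the payload bytes of all TS packets carrying the given PID.
    Ignores adaptation fields for simplicity."""
    out = bytearray()
    for off in range(0, len(ts_data) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = ts_data[off:off + TS_PACKET_SIZE]
        if pkt[0] != SYNC_BYTE:
            continue  # lost sync; a real demuxer would resynchronize
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        if pid == audio_pid:
            out += pkt[4:]  # payload follows the 4-byte packet header
    return bytes(out)
```

The collected bytes would then be fed to an audio decoder; that stage is outside this sketch.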
The first support data input unit 66 accepts input of support data related to the first content and records it on the recording medium 91A.
The keyword specification information creation unit 61 determines, in the audio data delivered from the first audio data separation unit 25, the segment that becomes the search keyword, based on the times given in the support data entered in the first support data input unit 66.
The keyword related data acquisition unit 55 obtains keyword related data concerning the segment of the video/audio data specified by the keyword specification information creation unit 61. The keyword related data may be the support data recorded on the recording medium 91A or the data entered in the first support data input unit 66. This first support data can include information about the program of the first content obtained from an electronic program guide or from metadata provided by broadcast or from the outside, such as the program title, program attributes such as genre, the broadcast date and time (day of the week, time, date), the broadcast channel (broadcasting station), the production source, and the program series (the source program from which the series derives).
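The categories of program information listed above could be gathered into a simple record. This dataclass is purely illustrative — the patent enumerates the kinds of information but defines no concrete data structure, so every field name here is an assumption:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KeywordRelatedData:
    """Illustrative container for the support data attached to one search keyword."""
    program_title: Optional[str] = None
    genre: Optional[str] = None
    broadcast_date: Optional[str] = None    # e.g. "2006-03-27"
    broadcast_time: Optional[str] = None    # e.g. "21:00"
    day_of_week: Optional[str] = None
    channel: Optional[str] = None           # broadcasting station
    production_source: Optional[str] = None
    series: Optional[str] = None            # program series / source program
```

Any subset of fields may be populated, mirroring the "or" phrasing above.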
As in the first embodiment, the keyword creation unit 31 creates, based on the audio data of the segment determined by the keyword specification information creation unit 61, the audio pattern data used by the keyword matching unit 30 of the third to sixth embodiments and of the embodiments described below. Here, the audio pattern data stored as a search keyword may be, for example, reproducible audio data, or it may be audio data described and parameterized by feature extraction.
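As one hypothetical reading of "audio data described and parameterized by feature extraction", a keyword could be stored as a sequence of per-frame feature vectors rather than as reproducible samples. The frame length and the particular features below (log energy plus magnitude spectrum) are assumptions, not taken from the patent:

```python
import numpy as np

def parameterize(audio: np.ndarray, frame_len: int = 256) -> np.ndarray:
    """Describe audio as a sequence of per-frame feature vectors
    (log energy followed by the magnitude spectrum)."""
    n_frames = len(audio) // frame_len
    feats = []
    for i in range(n_frames):
        frame = audio[i * frame_len:(i + 1) * frame_len]
        spectrum = np.abs(np.fft.rfft(frame))
        log_energy = np.log(np.sum(frame ** 2) + 1e-10)  # floor avoids log(0)
        feats.append(np.concatenate(([log_energy], spectrum)))
    return np.array(feats)
```

A matcher could then compare such feature sequences instead of raw samples.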
As in the first embodiment, the key data management unit 10 manages, as search keywords, the plurality of audio pattern data created by the keyword creation unit 31. In addition, for each search keyword, the keyword related data obtained by the keyword related data acquisition unit 55, such as a related name and attributes, is attached and managed.
(2) Another structure of the video/audio processing apparatus
Figure 24 is a block diagram showing another structure of this embodiment.
The first video data acquisition unit 48 in Figure 24 differs from the first video data acquisition unit 43 in Figure 23 in that it does not record and process video/audio data obtained from the outside, but processes video/audio data already recorded on the recording medium 91B.
The video/audio data or video/audio signal of the first content is recorded in advance on the recording medium 91B. In addition, information for dividing it into a plurality of units (such as video/audio titles and chapters) and information about the names and attributes of those units are recorded on the recording medium 91B.
Eighth Embodiment
A video/audio processing apparatus according to an eighth embodiment of the invention will be described with reference to Figures 25, 26, and 27.
(1) First structure of the video/audio processing apparatus
Figure 25 is a block diagram showing the first structure of the video/audio processing apparatus of this embodiment.
The video/audio processing apparatus shown in Figure 25 includes the first video data acquisition unit 43, the first audio data separation unit 25, a first support data creation unit 65, the keyword specification information creation unit 61, the keyword creation unit 31, the keyword related data acquisition unit 55, the recording medium 91A, and the key data management unit 10.
This embodiment differs from the seventh embodiment in that the first support data creation unit 65 is provided in place of the first support data input unit 66.
The first support data creation unit 65 detects change points in the audio data separated by the first audio data separation unit 25, creates first support data, and records it on the recording medium 91A. For example, it detects silent segments and divides the data at their start points, end points, midpoints, or the like. It also detects switching of the sound multiplex mode and divides the data at the boundary between the main part in monaural mode and a CM (commercial) section in stereo mode.
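The silence-based division just described can be sketched as an energy-threshold scan that proposes the start, end, and midpoint of each sufficiently long silent segment as candidate split points. The threshold, frame size, and minimum duration below are illustrative values only:

```python
import numpy as np

def silent_split_points(audio, rate, frame=1024, thresh=1e-3, min_len=0.5):
    """Find silent segments by frame energy and propose split points at
    their start, end, and midpoint (all parameter values are illustrative)."""
    energies = [np.mean(audio[i:i + frame] ** 2)
                for i in range(0, len(audio) - frame + 1, frame)]
    points = []
    run_start = None
    for idx, e in enumerate(energies + [thresh + 1]):  # sentinel ends a final run
        if e < thresh:
            if run_start is None:
                run_start = idx
        elif run_start is not None:
            start_s = run_start * frame / rate
            end_s = idx * frame / rate
            if end_s - start_s >= min_len:
                points.append({"start": start_s, "end": end_s,
                               "mid": (start_s + end_s) / 2})
            run_start = None
    return points
```

Each returned dictionary offers the three candidate positions mentioned in the text (start, end, midpoint of the silence).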
The keyword specification information creation unit 61 determines, in the audio data delivered from the first audio data separation unit 25, a segment based on the times given in the support data created by the first support data creation unit 65; this segment is used as the search keyword.
The keyword related data acquisition unit 55 obtains keyword related data concerning the segment of the video/audio data specified by the keyword specification information creation unit 61. The keyword related data may be the support data recorded on the recording medium 91A or the support data created by the first support data creation unit 65.
The first video data acquisition unit 43, the first audio data separation unit 25, the keyword creation unit 31, the recording medium 91A, and the key data management unit 10 are the same as those of the seventh embodiment, and their descriptions are omitted.
Incidentally, as in Figure 24 of the seventh embodiment, in this embodiment as well, a first video data acquisition unit 48 that processes video/audio data recorded on the recording medium 91B can be provided in place of the first video data acquisition unit 43, which records and processes video/audio data obtained from the outside.
(2) Second structure of the video/audio processing apparatus
Figure 26 is a block diagram showing the second structure of this embodiment.
The first support data creation unit 65 of Figure 26 differs from the first support data creation unit 65 of Figure 25 in that it creates the first support data based on the video/audio data obtained by the first video data acquisition unit 43, rather than on the audio data separated by the first audio data separation unit 25. For example, it detects switches (cuts) between video images and divides the data there. By also using the audio data at those moments, the data can be divided at cut points that fall within silent parts. In addition, as in JP-A-2005-130416, the data can be divided based on the similarity between video images.
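The patent leaves the cut-detection method open; one common, assumed approach is to flag a cut wherever the gray-level histograms of consecutive frames differ sharply. The bin count and threshold in this sketch are illustrative:

```python
import numpy as np

def detect_cuts(frames, bins=16, threshold=0.5):
    """Flag a cut between consecutive frames when their normalized
    gray-level histograms differ by more than the threshold."""
    cuts = []
    prev = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()
        # half the L1 distance between the two histograms, in [0, 1]
        if prev is not None and np.abs(hist - prev).sum() / 2 > threshold:
            cuts.append(i)  # cut between frame i-1 and frame i
        prev = hist
    return cuts
```

A silence check on the audio track around each flagged index could then filter these candidates, as the text suggests.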
(3) Third structure of the video/audio processing apparatus
Figure 27 is a block diagram showing the third structure of this embodiment.
The first support data creation unit 65 of Figure 27 differs from the first support data creation unit 65 of Figure 25 in that it creates the first support data based on the video/audio data obtained by the first video data acquisition unit 48, rather than on the audio data separated by the first audio data separation unit 25.
In addition, the first video data acquisition unit 48 in Figure 27 differs from the first video data acquisition unit 43 in Figure 25 in that it does not record and process video/audio data obtained from the outside, but processes video/audio data recorded on the recording medium 91B.
Ninth Embodiment
A video/audio processing apparatus according to a ninth embodiment of the invention will be described with reference to Figures 28 and 29.
(1) Structure of the video/audio processing apparatus
Figure 28 is a block diagram showing the structure of the video/audio processing apparatus of this embodiment.
The video/audio processing apparatus shown in Figure 28 includes the first video data acquisition unit 43, the first audio data separation unit 25, the first support data creation unit 65, the first support data input unit 66, the keyword specification information creation unit 61, the keyword creation unit 31, the keyword related data acquisition unit 55, the recording medium 91A, and the key data management unit 10.
This embodiment differs from the seventh and eighth embodiments in that both the first support data input unit 66 and the first support data creation unit 65 are provided.
The first support data creation unit 65 detects change points in the audio data separated by the first audio data separation unit 25 and creates support data. For example, it detects silent segments and divides the data at their start points, end points, midpoints, or the like. It can also detect switching of the sound multiplex mode and divide the data at the boundary between the main part in monaural mode and a CM section in stereo mode.
The first support data input unit 66 accepts input of support data related to the first content, based on or in addition to the first support data created by the first support data creation unit 65, and records it on the recording medium 91A. When nothing is entered, the support data created by the first support data creation unit 65 can be presented on the input screen as reference values, or adopted as initial values or set values. For example, text information such as the title of a CM is entered for a CM section divided off by the first support data creation unit 65.
The keyword specification information creation unit 61 determines, in the audio data delivered from the first audio data separation unit 25, the segment used as the search keyword, based on the times given in the support data created by the first support data creation unit 65 or entered in the first support data input unit 66.
The keyword related data acquisition unit 55 obtains keyword related data concerning the segment of the video/audio data specified by the keyword specification information creation unit 61. The keyword related data may be the support data recorded on the recording medium 91A, the support data entered in the first support data input unit 66, or the data created by the first support data creation unit 65.
The first video data acquisition unit 43, the first audio data separation unit 25, the keyword creation unit 31, the recording medium 91A, and the key data management unit 10 are the same as those of the seventh embodiment, and their descriptions are omitted.
Incidentally, as in Figure 24 of the seventh embodiment, in this embodiment as well, a first video data acquisition unit 48 that processes video/audio data recorded on the recording medium 91B can be provided in place of the first video data acquisition unit 43, which records and processes video/audio data obtained from the outside.
(2) Another structure of the video/audio processing apparatus
Figure 29 is a block diagram showing another structure of the video/audio processing apparatus of this embodiment.
As in Figure 26 of the eighth embodiment, the first support data creation unit 65 of Figure 29 differs from the first support data creation unit 65 of Figure 28 in that it creates the first support data based on the video/audio data obtained by the first video data acquisition unit 43, rather than on the audio data separated by the first audio data separation unit 25. For example, it detects switches (cuts) between video images and divides the data there. By also using the audio data at those moments, the data can be divided at cut points that fall within silent parts. In addition, as in JP-A-2005-130416, the data can be divided based on the similarity between video images. Furthermore, when a part divided by the first support data creation unit 65 based on the similarity of video images is a topic unit, text information such as a topic or corner title is entered in the first support data input unit 66.
Incidentally, as in Figure 27 of the eighth embodiment, a first video data acquisition unit 48 that processes video/audio data recorded on the recording medium 91B can be provided in place of the first video data acquisition unit 43, which records and processes video/audio data obtained from the outside.
Tenth Embodiment
An audio processing apparatus according to a tenth embodiment of the invention will be described with reference to Figures 30 and 31.
This embodiment differs from the seventh embodiment in that, whereas the seventh embodiment handles video/audio data, this embodiment handles audio data only.
(1) Structure of the audio processing apparatus
Figure 30 is a block diagram showing the structure of the audio processing apparatus of this embodiment.
The audio processing apparatus shown in Figure 30 includes a first audio data acquisition unit 23, the first support data input unit 66, the keyword specification information creation unit 61, the keyword creation unit 31, the keyword related data acquisition unit 55, the recording medium 91A, and the key data management unit 10.
The first audio data acquisition unit 23 obtains, from the outside, the audio data of the first content related to keyword creation, records it on the recording medium 91A, and delivers it to the keyword specification information creation unit 61.
The first support data input unit 66 accepts input of support data related to the first content and records it on the recording medium 91A.
The keyword specification information creation unit 61 determines, in the audio data delivered from the first audio data acquisition unit 23, the segment used as the search keyword, based on the times given in the support data entered in the first support data input unit 66.
The keyword related data acquisition unit 55 obtains keyword related data concerning the segment of the audio data specified by the keyword specification information creation unit 61. The keyword related data may be the support data recorded on the recording medium 91A or the support data entered in the first support data input unit 66. This first support data can include information about the program of the first content obtained from an electronic program guide or from metadata provided by broadcast or from the outside, such as the program title, program attributes such as genre, the broadcast date and time (day of the week, time, date), the broadcast channel (broadcasting station), the production source, and the program series (the source program from which the series derives).
The keyword creation unit 31, the recording medium 91A, and the key data management unit 10 are the same as those of the seventh embodiment, and their descriptions are omitted.
(2) Another structure of the audio processing apparatus
Figure 31 is a block diagram showing another structure of this embodiment.
The first audio data acquisition unit 28 in Figure 31 differs from the first audio data acquisition unit 23 in Figure 30 in that it does not record and process audio data obtained from the outside, but processes audio data recorded on the recording medium 91B.
The audio data or audio signal of the first content is recorded in advance on the recording medium 91B. In addition, information for dividing it into a plurality of units (such as audio titles and chapters) and information about the names and attributes of those units are recorded on the recording medium 91B.
Eleventh Embodiment
An audio processing apparatus according to an eleventh embodiment of the invention will be described with reference to Figure 32.
This embodiment differs from the eighth embodiment in that, whereas the eighth embodiment handles video/audio data, this embodiment handles audio data only.
Figure 32 is a block diagram showing the structure of the audio processing apparatus of this embodiment.
The audio processing apparatus shown in Figure 32 includes the first audio data acquisition unit 23, the first support data creation unit 65, the keyword specification information creation unit 61, the keyword creation unit 31, the keyword related data acquisition unit 55, the recording medium 91A, and the key data management unit 10.
This embodiment differs from the tenth embodiment in that the first support data creation unit 65 is provided in place of the first support data input unit 66.
The first support data creation unit 65 detects change points in the audio data obtained by the first audio data acquisition unit 23, creates first support data, and records it on the recording medium 91A. For example, it detects silent segments and divides the data at their start points, end points, midpoints, or the like. It also detects switching of the sound multiplex mode and divides the data at the boundary between the main part in monaural mode and a CM section in stereo mode.
The keyword specification information creation unit 61 determines, in the audio data delivered from the first audio data acquisition unit 23, the segment used as the search keyword, based on the times given in the support data created by the first support data creation unit 65.
The keyword related data acquisition unit 55 obtains keyword related data concerning the segment of the audio data specified by the keyword specification information creation unit 61. The keyword related data may be the support data recorded on the recording medium 91A or the support data created by the first support data creation unit 65.
The first audio data acquisition unit 23, the keyword creation unit 31, the recording medium 91A, and the key data management unit 10 are the same as those of the tenth embodiment, and their descriptions are omitted.
Incidentally, as in Figure 31 of the tenth embodiment, in this embodiment as well, a first audio data acquisition unit 28 that processes audio data recorded on the recording medium 91B can be provided in place of the first audio data acquisition unit 23, which records and processes audio data obtained from the outside.
Twelfth Embodiment
An audio processing apparatus according to a twelfth embodiment of the invention will be described with reference to Figure 33.
This embodiment differs from the ninth embodiment in that, whereas the ninth embodiment handles video/audio data, this embodiment handles audio data only.
Figure 33 is a block diagram showing the structure of the audio processing apparatus of this embodiment.
The audio processing apparatus shown in Figure 33 includes the first audio data acquisition unit 23, the first support data creation unit 65, the first support data input unit 66, the keyword specification information creation unit 61, the keyword creation unit 31, the keyword related data acquisition unit 55, the recording medium 91A, and the key data management unit 10.
This embodiment differs from the tenth and eleventh embodiments in that both the first support data input unit 66 and the first support data creation unit 65 are provided.
The first support data creation unit 65 detects change points in the audio data obtained by the first audio data acquisition unit 23 and creates support data. For example, it detects silent segments and divides the data at their start points, end points, midpoints, or the like. It can also detect switching of the sound multiplex mode and divide the data at the boundary between the main part in monaural mode and a CM section in stereo mode.
The first support data input unit 66 accepts input of support data related to the first content, based on or in addition to the first support data created by the first support data creation unit 65, and records it on the recording medium 91A. When nothing is entered, the support data created by the first support data creation unit 65 can be presented on the input screen as reference values, or adopted as initial values or set values. For example, text information such as the title of a CM is entered for a CM section divided off by the first support data creation unit 65.
The keyword specification information creation unit 61 determines, in the audio data delivered from the first audio data acquisition unit 23, the segment used as the search keyword, based on the times given in the support data created by the first support data creation unit 65 or entered in the first support data input unit 66.
The keyword related data acquisition unit 55 obtains keyword related data concerning the segment of the audio data specified by the keyword specification information creation unit 61. The keyword related data may be the support data recorded on the recording medium 91A, the support data entered in the first support data input unit 66, or the support data created by the first support data creation unit 65.
The first audio data acquisition unit 23, the keyword creation unit 31, the recording medium 91A, and the key data management unit 10 are the same as those of the tenth embodiment, and their descriptions are omitted.
Incidentally, as in Figure 31 of the tenth embodiment, in this embodiment as well, a first audio data acquisition unit 28 that processes audio data recorded on the recording medium 91B can be provided in place of the first audio data acquisition unit 23, which records and processes audio data obtained from the outside.
Thirteenth Embodiment
A video/audio processing apparatus according to a thirteenth embodiment of the invention will be described with reference to Figures 34, 35, 36, and 37.
The video/audio processing apparatus of the thirteenth embodiment is a device for creating key data, comprising a search keyword and keyword related data, as the source of metadata. That metadata serves as support data that assists the user so that, when the user reproduces, edits, or retrieves video/audio data, the operation can be carried out as the user intends. Besides that function, the video/audio processing apparatus of this embodiment has the function of causing metadata serving as support data to be recorded, based on the key data, for the video/audio data that is the object of use.
This embodiment differs from the third embodiment in that the segment that becomes the search keyword is determined based on the input of first support data rather than being specified from the outside.
(1) Structure of the video/audio processing apparatus
Figure 34 shows the structure of the video/audio processing apparatus of this embodiment.
The video/audio processing apparatus shown in Figure 34 includes, as structural elements related to keyword creation, the first video data acquisition unit 48, the first audio data separation unit 25, the first support data input unit 66, the keyword specification information creation unit 61, the keyword creation unit 31, the keyword related data acquisition unit 55, and the recording medium 91B on which the first content is recorded. As structural elements related to keyword search, a second video data acquisition unit 41, a second audio data separation unit 22, the keyword matching unit 30, the matching result recording instruction unit 35, and a recording medium 92A for recording second content are provided. As a common structural element, the key data management unit 10 is provided.
(2) Structural elements related to keyword creation
Although the structural elements related to keyword creation in this structure are the same as those in Figure 24 of the seventh embodiment, the structure of Figure 23 may also be adopted.
In addition, the structural elements related to keyword creation in this embodiment may have the structure described in the eighth embodiment. In that case, the difference is that the first support data creation unit 65 is provided in place of the first support data input unit 66.
The structural elements related to keyword creation in this embodiment may also have the structure described in the ninth embodiment. In that case, the difference is that both the first support data input unit 66 and the first support data creation unit 65 are provided.
(3) Structural elements related to keyword search
(3-1) First structure of the structural elements related to keyword search
Figure 35 is a block diagram showing the first structure of the structural elements related to keyword search in this embodiment.
The structural elements shown in Figure 35 include the second video data acquisition unit 41, the second audio data separation unit 22, the keyword matching unit 30, the matching result recording instruction unit 35, the recording medium 92A for recording the second content, and the key data management unit 10.
Figure 36 is a block diagram showing another example of the first structure of the structural elements related to keyword search in this embodiment.
The second video data acquisition unit 46 in Figure 36 differs from the second video data acquisition unit 41 in Figure 35 in that it does not record and process video/audio data obtained from the outside, but processes video/audio data recorded on the recording medium 92B.
The video/audio data or video/audio signal of the second content is recorded in advance on the recording medium 92B. In addition, information for dividing it into a plurality of units (such as video/audio titles and chapters) and information about the names and attributes of those units are recorded on the recording medium 92B.
(3-2) Second structure of the structural elements related to keyword search
Figure 37 is a block diagram showing the second structure of the structural elements related to keyword search in this embodiment.
In Figure 37, a matching result display control unit 39 is provided in place of the matching result recording instruction unit 35.
The matching result display control unit 39 obtains the key data detected by the keyword matching unit 30 from the key data management unit 10. Then, according to the attributes of the search keyword in that key data, it performs control so that the data is displayed as metadata on a display device, not shown. For example, together with the second video data, the display device shows text information such as captions, time information derived from the division positions, and numerals and characters representing other attributes.
Incidentally, the matching result display control unit 39 may be provided instead of, or in addition to, the matching result recording instruction unit 35. Similarly, with respect to the matching result recording instruction unit 35 in the third and fourth embodiments, the matching result display control unit 39 may be provided instead of, or in addition to, it. The display device may be separate from the display for the video data. Furthermore, the second video data acquisition unit 41 need not record the video/audio data obtained from the outside and may merely process it, in which case the recording medium 92A may be omitted.
(3-3) Third structure of the structural elements related to keyword search
Figure 38 is a block diagram showing the third structure of the structural elements related to keyword search in this embodiment.
In Figure 38, a second support data creation unit 62 is provided in addition to the first structure shown in Figure 35.
The second support data creation unit 62 detects change points in the video data and creates second support data. For example, it detects cut points between video images and creates division information. The audio data at those moments may also be used, or the division may be based on the similarity of video images.
The matching result recording instruction unit 35 creates support data related to the second content in addition to the second support data created by the second support data creation unit 62, and records it on the recording medium 92A. For example, text information such as a corner title is created for a part divided by the second support data creation unit 62.
Figure 39 is a block diagram showing another example of the third structure of the structural elements related to keyword search in this embodiment.
The second video data acquisition unit 46 in Figure 39 differs from the second video data acquisition unit 41 in Figure 38 in that it does not record and process video/audio data obtained from the outside, but processes video/audio data recorded on the recording medium 92B.
(3-4) Fourth structure of the structural elements related to keyword search
Figure 40 is a block diagram showing the fourth structure of the structural elements related to keyword search in this embodiment.
This structure differs from the first structure in that, whereas the first structure handles video/audio data, this structure handles audio data only.
Figure 41 is a block diagram showing another example of the fourth structure.
The second audio data acquisition unit 26 in Figure 41 differs from the second audio data acquisition unit 21 in Figure 40 in that it does not process audio data obtained from the outside, but processes audio data recorded on the recording medium 92B.
The audio data or audio signal of the second content is recorded in advance on the recording medium 92B. In addition, information for dividing it into a plurality of units (such as audio titles and chapters) and information about the names and attributes of those units are recorded on the recording medium 92B.
(3-5) Fifth structure of the structural elements related to keyword search
Figure 42 is a block diagram showing the fifth structure of the structural elements related to keyword search in this embodiment.
This structure differs from the second structure in that, whereas the second structure handles video/audio data, this structure handles audio data only.
Fourteenth Embodiment
A video/audio processing apparatus according to a fourteenth embodiment of the invention will be described with reference to Figures 43, 44, and 45.
The video/audio processing apparatus of the fourteenth embodiment is a device for creating key data, comprising a search keyword and keyword related data, as the source of metadata. That metadata serves as support data that assists the user so that, when the user reproduces, edits, or retrieves video/audio data, the operation can be carried out as the user intends. In addition to the above functions, the video/audio processing apparatus of this embodiment has the function of selectively obtaining, based on the key data, the video/audio data that is the object of use, and causing the metadata serving as support data to be recorded.
Figure 43 is a block diagram showing the structure of the video/audio processing apparatus according to the fourteenth embodiment of the invention.
This embodiment differs from the thirteenth embodiment in that a data acquisition control unit 81 is provided.
(1) Structure of the video/audio processing apparatus
Figure 43 shows the structure of the video/audio processing apparatus according to this embodiment.
The video/audio processing apparatus shown in Figure 43 includes, as structural elements related to keyword creation, the first video data acquisition unit 48, the first audio data separation unit 25, the first support data input unit 66, the keyword specification information creation unit 61, the keyword creation unit 31, the keyword related data acquisition unit 55, and the recording medium 91B on which the first content is recorded. As structural elements related to keyword search, the data acquisition control unit 81, the second video data acquisition unit 41, the second audio data separation unit 22, the keyword matching unit 30, the matching result recording instruction unit 35, and the recording medium 92A for recording the second content are provided. As a common structural element, the key data management unit 10 is provided.
(2) Structural elements relating to keyword creation

Although the structural elements relating to keyword creation in this embodiment are exemplified by elements similar to those of FIG. 24 in the 7th embodiment, as in the case of the 13th embodiment, any of the structures described in the 7th to 9th embodiments can be adopted. That is, a first support data creation unit 65 can be provided in place of, or in addition to, the first support data input unit 66; alternatively, in place of the first video data acquisition unit 48, which processes the video/audio data recorded on the recording medium 91B, a first video data acquisition unit 43 can be provided, which performs recording onto the recording medium 91A and performs recording and processing of video/audio data acquired from the outside.

In addition, similarly to the first embodiment, the recording medium 91B, the first video data acquisition unit 48, a video data specification unit 47, the first audio data separation unit 25, the keyword creation unit 31, the keyword-related data acquisition unit 55, and the key data management unit 10 can be provided. The first video data acquisition unit 43, which performs recording onto the recording medium 91A and performs recording and processing of video/audio data acquired from the outside, can also be provided in place of the first video data acquisition unit 48, which processes the video/audio data recorded on the recording medium 91B.
(3) Structural elements relating to keyword retrieval

FIG. 44 is a block diagram showing the structural elements relating to keyword retrieval in this embodiment.

The structural elements shown in FIG. 44 include the data acquisition control unit 81, the second video data acquisition unit 41, the second audio data separation unit 22, the keyword matching unit 30, the matching result recording instruction unit 35, the recording medium 92A for recording the second content, and the key data management unit 10.

The data acquisition control unit 81 performs control to limit the data acquired from the outside in the second video data acquisition unit 41 to data satisfying a predetermined condition. For example, the attributes of the program relating to the data acquired from the outside are obtained on the basis of an electronic program guide or program metadata, and only the video/audio data of a program whose attributes are consistent, or partially consistent, with the attributes of the program of the first content obtained as keyword-related data in the keyword-related data acquisition unit 55 are acquired. A program satisfying the predetermined condition is specified in advance or retrieved, and when the program is broadcast, channel setting and reception are carried out automatically, so that the processing can be performed automatically (reservation).

As the conditions relating to the program attributes, only the video/audio data of a program that is mutually consistent, or partially consistent, in program title, genre, broadcast date and time (day of the week, time, date), broadcast channel (broadcasting station), production source, program set (program series, derivation source program) and the like are acquired. By this, it is possible to avoid situations in which the load is increased by performing processing on unrelated video/audio data, or support data are created erroneously.
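As a rough illustration only — not the patent's implementation — the attribute filtering performed by the data acquisition control unit 81 could be sketched as follows. The attribute names, the EPG data model and the partial-consistency threshold are all assumptions for this sketch:

```python
# Hypothetical sketch of EPG-based filtering by data acquisition control
# unit 81: only programs whose attributes are fully or partially consistent
# with those of the first content pass through. All names are illustrative.

FIRST_CONTENT = {"title": "Program 1", "genre": "drama",
                 "channel": "CH-1", "series": "Series A"}

def attribute_hits(program, reference,
                   keys=("title", "genre", "channel", "series")):
    """Count how many program attributes agree with the reference content."""
    return sum(1 for k in keys if program.get(k) == reference.get(k))

def select_programs(epg, reference, min_hits=2):
    """Keep programs that are at least partially consistent (min_hits of 4)."""
    return [p for p in epg if attribute_hits(p, reference) >= min_hits]

epg = [
    {"title": "Program 1", "genre": "drama", "channel": "CH-1",
     "series": "Series A"},                     # fully consistent
    {"title": "Evening News", "genre": "news", "channel": "CH-2",
     "series": "News B"},                       # unrelated, filtered out
]
selected = select_programs(epg, FIRST_CONTENT)
print([p["title"] for p in selected])  # ['Program 1']
```

A recording reservation would then be made only for the surviving entries, avoiding keyword matching against unrelated broadcasts.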
FIG. 45 is a block diagram showing another example of the structural elements relating to keyword retrieval in this embodiment.

The second video data acquisition unit 46 in FIG. 45 differs from the second video data acquisition unit 41 in FIG. 44 in that it does not record and process video/audio data acquired from the outside, but processes the video/audio data recorded on the recording medium 92B.

The data acquisition control unit 86 performs control to limit the data acquired in the second video data acquisition unit 46 to those of the recorded data which satisfy a predetermined condition. For example, the attributes of the program relating to the recorded video data are obtained on the basis of support data including program metadata, and only the video/audio data of a program whose attributes are consistent, or partially consistent, with the attributes of the program of the first content obtained as keyword-related data in the keyword-related data acquisition unit 55 are acquired.

(4) Keyword-related data acquisition unit 55

The keyword-related data acquisition unit 55 acquires keyword-related data relating to the segment of the video/audio data specified in the keyword specification information creation unit 61. As the keyword-related data, there are the support data recorded on the recording medium 91B and the support data input in the first support data input unit 66. These first support data can include information relating to the program of the first content, obtained from an electronic program guide or from metadata provided by broadcast or from the outside, such as the program attributes of program title, genre, broadcast date and time (day of the week, time, date), broadcast channel (broadcasting station), production source, program set (program series, derivation source program) and the like.
(5) Processing in the video/audio processing apparatus

Next, the processing in the video/audio processing apparatus of this embodiment will be described with reference to FIGS. 46 to 50.

FIG. 46 is a flowchart showing the processing in this embodiment.

In the first support data input unit 66, viewing/listening support information is set: "viewing/listening support information setting" (step S301).

In the keyword specification information creation unit 61, a keyword segment is determined: "keyword segment determination" (step S311).

In the keyword creation unit 31, a keyword is created: "keyword creation" (step S321).

In the keyword-related data acquisition unit 55, related information is acquired: "related information acquisition" (step S331).

In the data acquisition control unit 81, retrieval and reservation are carried out, or in the data acquisition control unit 86 or a similar unit, execution is carried out: "retrieval, reservation or execution" (step S341).
FIG. 47 is a flowchart showing the processing of another structure of this embodiment.

In the first support data creation unit 65, viewing/listening support information is created: "viewing/listening support information creation" (step S351).

In the keyword specification information creation unit 61, a keyword segment is determined: "keyword segment determination" (step S361).

In the keyword creation unit 31, a keyword is created: "keyword creation" (step S371).

In the keyword-related data acquisition unit 55, related information is acquired: "related information acquisition" (step S381).

In the data acquisition control unit 81, retrieval and reservation are carried out, or in the data acquisition control unit 86 or a similar unit, execution is carried out: "retrieval, reservation or execution" (step S391).

Next, the processing of "retrieval, reservation or execution" of step S341 of FIG. 46 and step S391 of FIG. 47 will be described with reference to FIGS. 48 to 50.
FIG. 48 is a flowchart showing the processing in the data acquisition control unit 81 in this embodiment.

A loop is executed in which, according to the electronic program guide (EPG), programs satisfying the predetermined condition are retrieved from the programs that can be acquired from then on, and the processing is executed for each of these programs: "EPG program retrieval" (steps S401 to S421).

For each of them, a recording reservation is made, and the keyword used to carry out keyword retrieval at the time of recording is set: "recording reservation and keyword setting" (step S411).

FIG. 49 is a flowchart showing another processing of the data acquisition control unit 81 in this embodiment.

A loop is executed in which, according to a recording reservation list for managing recording reservations, programs satisfying the predetermined condition are retrieved from the programs reserved for recording, and the processing is executed for each of these programs: "reserved program retrieval" (steps S431 to S435).

For each of them, the keyword used to carry out keyword retrieval at the time of recording is associated with the recording reservation and set: "keyword setting" (step S441).

FIG. 50 is a flowchart showing the processing in the data acquisition control unit 86 or a similar unit in another structure of this embodiment.

A loop is executed in which, in the data acquisition control unit 86 and the second video data acquisition unit 46, programs satisfying the predetermined condition are retrieved and acquired from the programs recorded on the recording medium 92B, and the processing is executed for each of these programs: "recorded program retrieval" (steps S461 to S491).

For each of them, matching with the search keyword is carried out in the keyword matching unit 30: "keyword matching" (step S471).

In the matching result recording instruction unit 35, second support data are created on the basis of the keyword matching result: "viewing/listening support information creation" (step S481).
(6) Concrete example of the processing in the video/audio processing apparatus

Next, a concrete example of the processing in the video/audio processing apparatus of this embodiment will be described with reference to FIG. 51.

FIG. 51 is a view for explaining an example of the case where chapters (chapter division and chapter title setting) are treated as support data. The first content (program 1-1) and the second content (program 1-2) are 30-minute programs; the left side indicates the time corresponding to the starting end of the program (time 0:00:00.00), and the right side indicates the time corresponding to the finishing end of the program (time 0:30:00.00).

(a) An example of first support data input in the first support data input unit 66. For the first content, chapter division and chapter title setting are performed as the first support data.

(b1) An example of keyword segment determination in the keyword specification information creation unit 61 (chapter division point 1).

"Sound keywords having a determined time length before or after the division point": X1 and Y1. The determined time may be, for example, several seconds; it may be 8 seconds, corresponding to about 4 bars of music, or 15 seconds, corresponding to one CM unit. In a case where compensation by partial consistency or the like is performed at the time of matching, a longer time can be adopted. In addition, in consideration of the decay of the sound before a break, effects such as fade-out, the tendency of the length of silent parts and the like, the time lengths before and after the division point can be made different from each other; for example, X1 is 10 seconds before the chapter division point and Y1 is 5 seconds after the chapter division point. In any case, since the chapter division point lies at the terminal end of X1 and at the starting end of Y1, this is set as the keyword-related information of each.

In the input of the support data, in a case where the chapter title is input and specified, and it can be recognized which of the segments before and after a boundary is the viewed one, only the keyword of the viewed side (Y1) may be used. In addition, the sound characteristics before and after may be analyzed and, for example, in a case where silence exists immediately before the chapter division point and music exists immediately after it, only the music side (Y1) may be used.

"Sound keyword of a nearby music part": Z1. A music part (of a determined time or longer) near the chapter division point (within a range of several seconds) is retrieved, and a determined time from the boundary of the music part is made the keyword. For example, a 4-second segment beginning two seconds after the chapter division point is made the keyword. In this case, since the chapter division point is located two seconds before the starting end of Z1, this is managed as the keyword-related information.

Here, in addition, information relating the keywords to one another can also be managed as keyword-related information. For example, since X1, Y1 and Z1 relating to the same chapter division point exist, information for distinguishing between the keywords, or information merely indicating that three keywords exist, can be managed.
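The geometry of the keyword segments around one chapter division point can be sketched as follows. This is an illustration only, not the patent's implementation: the 10 s / 5 s / 2 s + 4 s values follow the example above, and the data model is an assumption:

```python
# Illustrative sketch of keyword segment determination (unit 61) around a
# chapter division point t (seconds). X1 precedes the point, Y1 follows it,
# and Z1 sits inside the nearby music part; the stored "relation" strings
# stand in for the keyword-related information described in the text.

def keyword_segments(t, x1=10.0, y1=5.0, z1_offset=2.0, z1_len=4.0):
    """Return each keyword segment with its relation to division point t."""
    return {
        # X1 ends exactly at the division point
        "X1": {"start": t - x1, "end": t,
               "relation": "division point at terminal end"},
        # Y1 starts exactly at the division point
        "Y1": {"start": t, "end": t + y1,
               "relation": "division point at starting end"},
        # Z1 begins 2 s after the division point, inside the music part
        "Z1": {"start": t + z1_offset, "end": t + z1_offset + z1_len,
               "relation": "division point 2 s before starting end"},
    }

segs = keyword_segments(600.0)  # a division point at time 0:10:00.00
print(segs["X1"]["start"], segs["Z1"]["end"])  # 590.0 606.0
```

Storing the relation alongside each segment is what later lets the matching side recover the division point from whichever keyword is actually detected.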
(b2) An example of keyword segment determination in the keyword specification information creation unit 61 (chapter division point 2).

"Sound keywords having a determined time length before or after the division point": X2 and Y2. Incidentally, keywords can be selected based on the sound of the determined segment. For example, the loudness of the sound and whether music is included may be judged, and keywords not containing loud sound, keywords not containing music and the like can be deleted from the keywords.

"Sound keyword of an effect sound part extending across both sides of the division point": Z2. The part near the chapter division point is a music (effect sound) part; the boundaries of the music part before and after the chapter division point (within several seconds) are retrieved, and the segment between them is made the keyword. Alternatively, the front boundary before the chapter division point may be retrieved, and a segment of a determined time length beginning from that front boundary can be made the keyword. For example, a 2-second segment beginning one second before the chapter division point is made the keyword.

Here, in addition, information relating the keywords to one another can also be managed as keyword-related information. For example, since X1, Y1, Z1, X2, Y2 and Z2 relating to the same chapter title are set, information for distinguishing between the keywords, or information merely indicating that six keywords exist, can be managed.

In addition, as another description of FIG. 51, in the structure of FIG. 26, 27 or 29, (a) chapter division based on video features is performed in the first support data creation unit 65. In addition, chapter title setting can be performed in the first support data input unit 66. For the chapter division based on video features, refer to (b). A scene switch (cut) having image features is detected, and chapter division is performed at the boundary between shot A1 and shot B1 and at the boundary between shot A2 and shot B2. A judgment of silent parts using sound characteristics can also be included.

(c) The matching result recording instruction unit 35. Suppose that the keywords X1, Z1, X2 and Z2 are detected for the second content. Based on the keyword-related information of each of them, chapter division and chapter title setting are performed as the second support data. For example, chapter division is performed at the point two seconds before the starting end of keyword Z1 or at the starting end of keyword Y1, chapter division is performed at the point one second after the starting end of keyword Z2 or at the terminal end of keyword X2, and chapter title setting is performed between them.

In a case where a plurality of keywords is detected, selection is made based on the matching score, a preference previously assigned to each keyword and the like, or the determination is made by a majority judgment. For example, in a case where the three keywords X1, Y1 and Z1 relating to chapter division point 1 are managed as the keyword-related information, chapter division can be performed when two keywords (two keywords forming the greater part of them) are detected. In addition, in a case where the six keywords X1, Y1, Z1, X2, Y2 and Z2 relating to the chapter title are managed, chapter title setting can be performed when four keywords (four keywords forming the greater part of them) are detected.
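A minimal sketch of that majority judgment, assuming a strict-majority threshold (the names and the exact threshold are illustrative, not taken from the patent):

```python
# Majority judgment sketch: an operation (chapter division, title setting)
# is carried out only when more than half of the keywords managed together
# in the keyword-related information are detected in the second content.

def majority_detected(related_keywords, detected):
    """True when the detected keywords form the greater part of the group."""
    hits = sum(1 for k in related_keywords if k in detected)
    return 2 * hits > len(related_keywords)

# Division point 1 manages X1, Y1, Z1; X1 and Z1 were detected -> divide.
print(majority_detected(["X1", "Y1", "Z1"], {"X1", "Z1"}))  # True
# The chapter title manages six keywords; four were detected -> set title.
print(majority_detected(["X1", "Y1", "Z1", "X2", "Y2", "Z2"],
                        {"X1", "Z1", "X2", "Z2"}))          # True
# Only one of three detected -> no operation.
print(majority_detected(["X1", "Y1", "Z1"], {"Y1"}))        # False
```

A score-weighted or preference-weighted vote would replace the plain count in `hits` when those criteria are used instead.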
As another example of the support data, a "playlist" is substituted for the "chapter title setting". Previously divided chapters are selected and a new playlist is created, or they are added to an existing playlist. A keyword relating to the chapters is created, and this playlist is managed as the keyword-related information. In the matching result recording instruction unit 35, when the keyword is detected, addition to the playlist is performed in addition to the chapter division, based on the keyword-related information. For example, in a case where the chapters of a corner part in a program are made into a playlist, a function is realized in which the corner is chapter-divided out of each broadcast and added to the playlist.
[Modification examples]

The invention is not limited to the embodiments described above, and various modifications can be made within a scope not departing from its spirit.

For example, although in each of the embodiments metadata are used as the support data, another data format may be used as long as the information can support reproduction, editing and retrieval.

Claims (35)

1. An information processing apparatus for creating support data to support a user so that, when the user reproduces, edits or retrieves use object data comprising video/audio data or only audio data, the user can perform the reproduction, editing or retrieval according to a desired operation, the information processing apparatus comprising:

a keyword audio data acquisition unit configured to acquire keyword audio data used to create the support data;

a keyword specification information input unit configured to input keyword specification information used to specify the whole or a partial segment of the keyword audio data;

a keyword creation unit configured to create, based on the keyword specification information, audio pattern data as a search keyword by cutting out the whole or the partial segment of the keyword audio data;

a keyword-related data acquisition unit configured to acquire keyword-related data relating to the keyword audio data based on the keyword specification information; and

a support data creation unit configured to create the support data by matching key data, comprising the audio pattern data and the keyword-related data, against the use object data.

2. The information processing apparatus according to claim 1, wherein the support data creation unit comprises:

an audio data acquisition unit configured to acquire only audio data from the use object data as use object audio data;

a key data management unit configured to add the keyword-related data to the audio pattern data and to record the result as key data;

a keyword matching unit configured to match the use object audio data against the audio pattern data based on a predetermined condition, and to output matching result information indicating a position in the use object audio data which satisfies the predetermined condition; and

a matching result recording instruction unit configured to record the output matching result information on a recording medium as the support data.
3. The information processing apparatus according to claim 2, wherein

the use object data are video/audio data; and

the audio data acquisition unit separates audio data from the use object data and acquires the audio data as the use object audio data.

4. The information processing apparatus according to claim 2 or 3, wherein

the key data include operational attribute information relating to the operation at the time of the reproduction, editing and retrieval; and

the matching result recording instruction unit records the support data on the recording medium according to the matching result information and the operational attribute information.
5. The information processing apparatus according to claim 4, further comprising a key data retrieval unit configured to retrieve, based on the keyword specification information, a mark position or a division position in the keyword audio data,

wherein the keyword-related data acquisition unit acquires the mark information or the division information retrieved in the key data retrieval unit as the keyword-related data.

6. The information processing apparatus according to claim 5, wherein the key data retrieval unit retrieves a mark range or a division unit consistent with the range specified in the keyword specification information, and acquires this mark range or this division unit as the mark information or the division information.

7. The information processing apparatus according to claim 5, wherein the key data retrieval unit retrieves a mark position or a division position consistent with the starting end and the terminal end of the range specified in the keyword specification information, and acquires the mark position, a mark range including the mark position, the division position, or a division unit including the division position, as the mark information or the division information.

8. The information processing apparatus according to claim 5, wherein the key data retrieval unit retrieves a mark position or a division position close to the starting end and the terminal end of the range specified in the keyword specification information, and acquires the mark position, a mark range including the mark position, the division position, or a division unit including the division position, as the mark information or the division information.

9. The information processing apparatus according to any one of claims 5 to 8, wherein the keyword-related data acquisition unit creates operational attribute information specifying the operation at the time of the matching, based on the positional relation between the positional information of the mark or division retrieved in the key data retrieval unit and the range specified in the keyword specification information.
10. The information processing apparatus according to any one of claims 5 to 8, wherein,

based on the positional relation between the positional information of the mark retrieved in the key data retrieval unit and the range specified in the keyword specification information, the keyword-related data acquisition unit creates operational attribute information for controlling the method of determining a recording position based on a segment detected in the matching result, and

the matching result recording instruction unit determines a position in the use object data according to the matching result information and the operational attribute information, and records the mark at the determined position as the support data.

11. The information processing apparatus according to any one of claims 5 to 8, wherein,

based on the positional relation between the positional information of the division retrieved in the key data retrieval unit and the range specified in the input keyword specification information, the keyword-related data acquisition unit creates operational attribute information for controlling the method of determining a recording position based on a segment detected in the matching result information, and

the matching result recording instruction unit determines a position in the use object data according to the matching result information and the operational attribute information, and records, at the determined position, information for dividing the use object data as the support data.

12. The information processing apparatus according to claim 10 or 11, wherein

the keyword-related data acquisition unit creates the operational attribute information so as to control a method of creating text information relating to the matching result, and

the matching result recording instruction unit creates the text information relating to the matching result information according to the matching result information and the operational attribute information, associates the created text information with the recorded mark or division part, and records the created text information as the support data.
13. The information processing apparatus according to claim 12, wherein

the key data include text information relating to the key data, and

the matching result recording instruction unit creates the text information relating to the matching result in accordance with the controlled text information creation method, and based on the text information relating to the key data.

14. The information processing apparatus according to claim 12, wherein

the keyword-related data acquisition unit acquires text information relating to the mark information or the division information retrieved in the key data retrieval unit, and

the matching result recording instruction unit creates the text information relating to the matching result in accordance with the controlled text information creation method, and based on the text information relating to the mark or division information, associates the created text information with the recorded mark or division part, and records the created text information as the support data.

15. The information processing apparatus according to claim 2 or 3, wherein

the key data include text information relating to the key data, and

the matching result recording instruction unit creates text information relating to the matching result according to a text information creation method controlled in advance, and based on the text information relating to the key data, and records the text information relating to the matching result as the support data.

16. The information processing apparatus according to claim 15, wherein

the keyword-related data acquisition unit acquires title information relating to the keyword audio data based on the keyword specification information, and

the matching result recording instruction unit records title information relating to the whole of a series of use object data included in the matching result as the support data.

17. The information processing apparatus according to any one of claims 1 to 16, wherein the support data are metadata.
18. An information processing method for creating support data to support a user so that, when the user reproduces, edits or retrieves use object data comprising video/audio data or only audio data, the user can perform the reproduction, editing or retrieval according to a desired operation, the information processing method comprising:

acquiring keyword audio data used to create the support data;

inputting keyword specification information used to specify the whole or a partial segment of the keyword audio data;

creating, based on the keyword specification information, audio pattern data as a search keyword by cutting out the whole or the partial segment of the keyword audio data;

acquiring keyword-related data relating to the keyword audio data based on the keyword specification information; and

creating the support data by matching key data, comprising the audio pattern data and the keyword-related data, against the use object data.
19. An information processing apparatus for creating support data to support a user so that, when the user reproduces, edits or retrieves use object data comprising video/audio data or only audio data, the user can perform the reproduction, editing or retrieval according to a desired operation, the information processing apparatus comprising:

a first support data input unit configured to input first support data relating to first use object data;

a keyword audio data acquisition unit configured to acquire keyword audio data relating to the first support data;

a keyword specification information creation unit configured to create, based on the input first support data, keyword specification information for selecting a partial segment of the keyword audio data;

a keyword creation unit configured to create, based on the keyword specification information, audio pattern data as a search keyword by cutting out the partial segment of the keyword audio data;

a keyword-related data acquisition unit configured to acquire, based on the keyword specification information, keyword-related data relating to the keyword audio data; and

a second support data creation unit configured to create second support data by matching key data, comprising the audio pattern data and the keyword-related data, against the use object data.

20. An information processing apparatus for creating support data to support a user so that, when the user reproduces, edits or retrieves use object data comprising video/audio data or only audio data, the user can perform the reproduction, editing or retrieval according to a desired operation, the information processing apparatus comprising:

a keyword audio data acquisition unit configured to acquire keyword audio data used to create first support data;

a first support data creation unit configured to detect a change point of the keyword audio data and to create the first support data;

a keyword specification information creation unit configured to create, based on the created first support data, keyword specification information for selecting a partial segment of the keyword audio data;

a keyword creation unit configured to create, based on the keyword specification information, audio pattern data as a search keyword by cutting out the partial segment of the keyword audio data;

a keyword-related data acquisition unit configured to acquire keyword-related data relating to the keyword audio data based on the keyword specification information; and

a second support data creation unit configured to create second support data by matching key data, comprising the audio pattern data and the keyword-related data, against the use object data.
21. The information processing apparatus according to claim 19 or 20, wherein the second support data creating unit comprises:
an audio data acquiring unit configured to acquire only audio data from the use-target data as use-target audio data;
a key data management unit configured to add the keyword related data to the audio pattern data and record the result as key data;
a keyword matching unit configured to match the use-target audio data against the audio pattern data based on a predetermined condition, and to output matching result information indicating positions in the use-target audio data that satisfy the predetermined condition; and
a matching result recording instruction unit configured to record the output matching result information on a recording medium as the second support data.
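The keyword matching unit of the claim above can be sketched as a sliding comparison of the audio pattern against the use-target audio. Everything concrete here is an assumption for illustration: the patent does not specify the similarity measure, so a mean-squared-difference score with a threshold stands in for the "predetermined condition".

```python
def match_keyword(target, pattern, threshold):
    """Slide the audio pattern over the use-target audio and report
    every offset whose mean squared difference falls below the
    threshold (standing in for the claim's 'predetermined
    condition'). The returned list of offsets plays the role of
    matching result information."""
    n, m = len(target), len(pattern)
    hits = []
    for i in range(n - m + 1):
        mse = sum((target[i + j] - pattern[j]) ** 2 for j in range(m)) / m
        if mse < threshold:
            hits.append(i)  # position satisfying the condition
    return hits
```

A production matcher would work on spectral features rather than raw samples and would use a vectorized correlation, but the input/output shape — pattern plus target in, list of matching positions out — mirrors the claimed unit.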
22. The information processing apparatus according to claim 21, wherein
the use-target data is the video/audio data, and
the audio data acquiring unit separates audio data from the use-target data and acquires the separated audio data as the use-target audio data.
23. The information processing apparatus according to claim 21 or 22, wherein
the key data includes operation attribute information relevant to an operation at the time of the reproduction, editing, or retrieval, and
the matching result recording instruction unit records the support data on the recording medium in accordance with the matching result information and the operation attribute information.
24. The information processing apparatus according to claim 23, further comprising a key data retrieval unit configured to retrieve a mark position or a division position in the keyword audio data based on the keyword specifying information,
wherein the keyword related data acquiring unit acquires the mark or division information retrieved by the key data retrieval unit as the keyword related data.
25. The information processing apparatus according to claim 23, wherein
the first support data is mark or division information, and
the keyword related data acquiring unit acquires the mark or division information in the first support data as the keyword related information.
26. The information processing apparatus according to claim 25, wherein the keyword specifying information creating unit creates a range coincident with a mark range or a divided segment in the first support data as the keyword specifying information.
27. The information processing apparatus according to claim 25, wherein the keyword specifying information creating unit
determines, as a first end point, one of a mark position, a division position, a starting end of a mark range, a terminal end of the mark range, a starting end of a divided segment, and a terminal end of the divided segment in the first support data,
determines a second end point on one of the front side and the rear side of the first end point by a previously specified method, and
creates the range between the first end point and the second end point as the keyword specifying information.
28. The information processing apparatus according to claim 25, wherein the keyword specifying information creating unit
determines, as a first end point, whichever is nearest among a mark position, a division position, a starting end of a mark range, a terminal end of the mark range, a starting end of a divided segment, and a terminal end of the divided segment in the first support data,
determines a second end point on one of the front side and the rear side of the first end point by a previously specified method, and
creates the range between the first end point and the second end point as the keyword specifying information.
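The end-point logic of the two claims above can be sketched as below. This is a hedged illustration: the patent leaves the "previously specified method" open, so a fixed offset on a chosen side stands in for it, and picking the candidate nearest a reference position stands in for the claim's nearest-end-point rule.

```python
def make_keyword_range(candidates, reference, offset, side="rear"):
    """Create a keyword range in the style of the claims: pick as the
    first end point whichever candidate position (mark position,
    division position, range start/end...) lies nearest a reference
    position, then place the second end point a previously specified
    offset away on the front or rear side. `offset`/`side` are
    stand-ins for the claim's 'previously specified method'."""
    first = min(candidates, key=lambda p: abs(p - reference))
    second = first + offset if side == "rear" else first - offset
    # normalize so the range is (start, end)
    return (min(first, second), max(first, second))
```

For example, with candidate positions 100, 250 and 900, a reference of 240 and a rear-side offset of 50, the created range is (250, 300).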
29. The information processing apparatus according to any one of claims 25 to 28, wherein the keyword related data acquiring unit creates operation attribute information specifying an operation at the time of the matching, based on a positional relation between the position information of the mark or division in the first support data and the range specified by the keyword specifying information.
30. The information processing apparatus according to any one of claims 25 to 28, wherein
the keyword related data acquiring unit creates operation attribute information that controls a method of determining a recording position relative to a segment detected in the matching result information, based on a positional relation between the mark position information in the first support data and the range specified by the keyword specifying information, and
the matching result recording instruction unit determines a position in the use-target data in accordance with the matching result information and the operation attribute information, and records a mark at the determined position as the second support data.
31. The information processing apparatus according to any one of claims 25 to 28, wherein
the keyword related data acquiring unit creates operation attribute information that controls a method of determining a recording position relative to a segment detected in the matching result information, based on a positional relation between the division position information in the first support data and the range specified by the keyword specifying information, and
the matching result recording instruction unit determines a position in the use-target data in accordance with the matching result information and the operation attribute information, and records, as the second support data, information indicating that the use-target data is divided at the determined position.
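The recording step of the two claims above can be sketched as follows. The dictionary-based representation of operation attribute information and of the recorded marks/division points is an assumption for illustration; the patent does not prescribe a data format.

```python
def record_support_data(matches, op_attr):
    """Turn matching result information (a list of detected
    (start, end) segments) into second support data. The operation
    attribute information controls where, relative to each detected
    segment, the mark or division point is recorded ('start' or
    'end') and what kind of record is made ('mark' or 'division'),
    mirroring the role it plays in the claims."""
    records = []
    for start, end in matches:
        pos = start if op_attr["anchor"] == "start" else end
        records.append({"type": op_attr["kind"], "position": pos})
    return records

# record a division point at the start of each detected segment
support = record_support_data([(120, 180), (560, 620)],
                              {"anchor": "start", "kind": "division"})
```

A recorder for an actual video/audio library would write these entries into the medium's chapter or bookmark table rather than return them, but the control flow — matching results plus operation attributes in, positioned records out — follows the claims.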
32. The information processing apparatus according to claim 30 or 31, wherein
the keyword related data acquiring unit creates operation attribute information that controls a method of creating text information relevant to the matching result, and
the matching result recording instruction unit creates the text information relevant to the matching result information in accordance with the matching result information and the operation attribute information, associates the created text information with the recorded mark or division portion, and records the created text information as the support data.
33. The information processing apparatus according to claim 32, wherein
the keyword related data acquiring unit acquires text information relevant to the mark or division information in the first support data, and
the matching result recording instruction unit creates the text information relevant to the matching result information in accordance with the controlled creation method of the text information and based on the text information relevant to the mark or division information, associates the created text information with the recorded mark or division portion, and records the created text information as the support data.
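One concrete naming rule of the kind the claims above leave to the "controlled creation method" can be sketched as below; the base-name-plus-sequence-number scheme is purely an illustrative assumption (the patent's abstract only speaks of setting "the stored name or a name given in accordance with a naming method").

```python
def name_matches(positions, base_name):
    """Attach text information to each recorded mark/division
    portion: the text taken from the original mark (base_name) plus
    a running number, so repeated detections of the same keyword get
    distinguishable names."""
    return [{"position": pos, "text": f"{base_name} #{i + 1}"}
            for i, pos in enumerate(positions)]

# e.g. two detections of a corner whose mark was named "News"
named = name_matches([30, 90], "News")
```

The resulting entries would then be associated with the marks or division portions recorded per claims 30/31.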
34. The information processing apparatus according to claim 21 or 22, wherein
the key data includes text information relevant to the key data, and
the matching result recording instruction unit creates text information relevant to the matching result in accordance with a previously controlled creation method of text information and based on the text information relevant to the key data, and records the text information relevant to the matching result as the support data.
35. The information processing apparatus according to claim 34, wherein
the keyword related data acquiring unit acquires title information relevant to the keyword audio data based on the keyword specifying information, and
the matching result recording instruction unit records, as the support data, the title information relevant to the entire series of use-target data included in the matching result.
CN 200610066969 2005-03-30 2006-03-30 Information processing apparatus and method Pending CN1842151A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP100176/2005 2005-03-30
JP2005100176 2005-03-30
JP058751/2006 2006-03-03

Publications (1)

Publication Number Publication Date
CN1842151A true CN1842151A (en) 2006-10-04

Family

ID=37030974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200610066969 Pending CN1842151A (en) 2005-03-30 2006-03-30 Information processing apparatus and method

Country Status (1)

Country Link
CN (1) CN1842151A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101821734B (en) * 2007-08-22 2013-09-25 谷歌公司 Detection and classification of matches between time-based media
CN101159834B (en) * 2007-10-25 2012-01-11 中国科学院计算技术研究所 Method and system for detecting repeatable video and audio program fragment
CN107430395A (en) * 2014-12-29 2017-12-01 Abb瑞士股份有限公司 For identifying the method with the sequence of events of the conditions relevant in processing factory
CN107430395B (en) * 2014-12-29 2019-11-19 Abb瑞士股份有限公司 For identification with the method for the sequence of events of the conditions relevant in processing factory
CN106571137A (en) * 2016-10-28 2017-04-19 努比亚技术有限公司 Terminal voice dotting control device and method

Similar Documents

Publication Publication Date Title
CN1184631C (en) Recording/replay device, method and recording medium
CN1290323C (en) Screen control method and equipment there of
CN1179562C (en) Program information retrieval apparatus, method, and system for retrieving and display information of broadcast programs in units of broadcast programs, recording medium storing program information
CN1816879A (en) Video processing apparatus, ic circuit for video processing apparatus, video processing method, and video processing program
CN1507266A (en) Information processing apparatus and method, programmebroadcasting system, storage media and program
CN1856993A (en) Information-signal process apparatus and information-signal processing method
CN1767610A (en) Information processing apparatus and method, and program
CN1717025A (en) Information processing apparatus, information processing method and program for the same
CN1922605A (en) Dictionary creation device and dictionary creation method
CN1892880A (en) Content providing system, content, providing apparatus and method, content distribution server, and content receiving terminal
CN1538444A (en) Image recording/reproducing apparatus and control method thereof
CN1940910A (en) Content providing system, content providing apparatus, content distribution server, and content receiving terminal
CN1671193A (en) Program guide displaying method, apparatus and computer program
CN1323031A (en) Recording reproducing, recording/reproducing apparatus and method. displaying and recording carrier
CN1885426A (en) Information playback system using storage information medium
CN101053252A (en) Information signal processing method, information signal processing device, and computer program product
CN1728792A (en) Information processor, its method and program
CN1677387A (en) Information processing apparatus, information processing method, and program
CN1643605A (en) Data recording method, data recording device, data recording medium, data reproduction method, and data reproduction device
CN1976427A (en) Information processing apparatus, information processing method, and computer program
CN1871850A (en) Reproducing apparatus, method and program
CN1755663A (en) Information-processing apparatus, information-processing methods and programs
CN1816986A (en) Display,display method and display control program
CN1725350A (en) Information record medium, apparatus and method for recording, reproduction and playback
CN1274138C (en) Transmission method and reception method for image information, its transmission and reception device and system thereof and information recoding medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20061004