CN102084645A - Related scene addition device and related scene addition method - Google Patents


Info

Publication number
CN102084645A
CN102084645A (application CN200980119475.X)
Authority
CN
China
Prior art keywords
retrieval
scene
starting point
search condition
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200980119475XA
Other languages
Chinese (zh)
Other versions
CN102084645B (en)
Inventor
井上刚
小泽顺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Intellectual Property Corp of America
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd filed Critical Matsushita Electric Industrial Co Ltd
Publication of CN102084645A publication Critical patent/CN102084645A/en
Application granted granted Critical
Publication of CN102084645B publication Critical patent/CN102084645B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • G11B27/32Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier
    • G11B27/322Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier used signal is digitally coded
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

A related scene addition device, which is capable of extracting a related scene even when information search is performed for moving image contents to which the information related to the scenes of text information or the like is not added for each of the scenes, is provided with an operation history storage section (105) for storing the operation history of information search performed by a user who is viewing a moving image, a search result storage section (103) for storing the result of the information search transmitted to other terminals, a search starting-point time estimating section (107) for estimating the time at which the search for the result of the information search was started by using the operation history stored in the operation history storage section (105), a related scene extracting section (108) for extracting the scene of the moving image at the time at which the estimated search was started, a search result output section (111) for adding the related scene to the result of the information search and outputting the result thereof to which the related scene is added, and a search result transmitting section (113) for transmitting the outputted result to the other terminals.

Description

Related scene addition device and related scene adding method
Technical field
The present invention relates to a related scene addition device and a related scene adding method that, when a user shares with others the result of a search for information associated with a certain scene, support attaching to the search result the scene of the moving image content associated with that result.
Background technology
In recent years, personal computers (hereinafter "PCs") have spread to many households, and the number of users who operate a PC while watching television is steadily increasing. Accordingly, more and more users, while viewing moving image content such as a TV program currently being broadcast, use an Internet search engine to search for information associated with that program. For example, if the program being viewed is a travel program, the retrieved information concerns the places and shops currently being shown. In the case of a quiz program, it is information relevant to the answer to the question currently being posed. In the case of an animal program, it is information about the name of the animal currently being shown and the places where that animal can be seen. And in the case of a sports broadcast such as soccer or baseball, it is information about the player currently being shown, or about plays and rules.
Thus, when a user starts searching for information associated with a certain scene in the program being viewed, taking that scene as the trigger, it is useful to share with others not only the search result but also the scene that triggered the search.
In view of the above, a digital video playback device has previously been proposed that uses caption information contained in the content to extract the scene that triggered a search (see, for example, Patent Literature 1). When accepting a recording request from the user, this digital video playback device creates a table associating caption character data with the presentation times of that data. When it then accepts a text-based video search instruction from the user, the device uses the table to retrieve the caption characters associated with the text entered by the user, and plays back the video at the presentation time of the retrieved caption characters.
(Prior Art)
Patent Literature
Patent Literature 1: Japanese Unexamined Patent Application Publication No. 2004-80476
However, the digital video playback device disclosed in Patent Literature 1 relies on caption information. Consequently, for content that contains no captions, it cannot extract the scene of the broadcast program that triggered the information search. For example, live broadcasts such as the soccer and baseball sports broadcasts mentioned above are almost never captioned. The range of content to which the above digital video playback device can be applied is therefore limited.
Furthermore, the above digital video playback device searches based on text entered by the user. The user therefore has to know a keyword that represents the scene that triggered the information search, and has to perform the input operation. In addition, if the entered keyword appears in multiple captions, multiple scenes are detected; the more scenes there are, the more search results there are. As a result, the user has to spend time finding the scene that triggered the desired information search.
Summary of the invention
The present invention solves the above problems, and its object is to provide a related scene addition device and a related scene adding method that can extract the scene that triggered an information search performed by the user, even when the scenes of the content are not annotated with text labels such as captions.
To achieve the above object, the related scene addition device of the present invention associates a related scene with a search result, the related scene being image data associated with the search. The related scene addition device comprises: an image storage unit that stores image data and the playback times of that image data; an information search execution unit that searches for information according to search conditions entered by the user; an operation history storage unit that stores an operation history in which each search condition is associated with the time at which that search condition was accepted; a search starting-point time estimating unit that, using the operation history, estimates the search starting-point time, which is the time at which input of the search conditions used to obtain the scene-addition target search result began, the scene-addition target search result being the search result, among the information retrieved by the information search execution unit, that the user has designated to be associated with a scene; and a related scene extracting unit that associates with the scene-addition target search result the image data played back in a period that includes the estimated search starting-point time.
With this configuration, among the operations highly relevant to the scene-addition target search result, the earliest operation record can be identified; that is, the record of the operation that began the search for the same content as the scene-addition target search result can be determined. When a user searches while viewing moving image data, the scene associated with the information the user searched for can be assumed to lie around the time at which the user began the search. Therefore, the image data around the time of the operation record that began the search is extracted as the related scene. This makes it possible to extract the related scene that triggered the information search.
The present invention can be realized not only as a related scene addition device having such characteristic processing units, but also as a related scene adding method whose steps correspond to the characteristic processing units included in the device. It can also be realized as a program causing a computer to execute the characteristic steps included in the related scene adding method. Naturally, such a program can be distributed via recording media such as a CD-ROM (Compact Disc-Read Only Memory) or via communication networks such as the Internet.
According to the related scene addition device of the present invention described above, the scene that triggered an information search can be extracted even for moving image content whose scenes are not annotated with scene-related information such as text. Moreover, with the related scene addition device of the present invention, the user does not need to enter keywords solely to extract the scene, which reduces the user's burden in attaching the related scene to the information search result.
Description of drawings
Fig. 1 is an external view showing the configuration of the search system in Embodiment 1 of the present invention.
Fig. 2 is a block diagram showing the functional configuration of the search system in Embodiment 1 of the present invention.
Fig. 3 is a flowchart showing the flow of the user's operating sequence in Embodiment 1 of the present invention.
Fig. 4 is a flowchart of the processing performed by the related scene addition device in Embodiment 1 of the present invention.
Fig. 5 shows an example of the operation history information in Embodiment 1 of the present invention.
Fig. 6A shows an example of the output screen of the related scene addition device in Embodiment 1 of the present invention.
Fig. 6B shows an example of the output screen of the related scene addition device in Embodiment 1 of the present invention.
Fig. 6C shows an example of the output screen of the related scene addition device in Embodiment 1 of the present invention.
Fig. 6D shows an example of the output screen of the related scene addition device in Embodiment 1 of the present invention.
Fig. 7 is a functional block diagram showing the detailed configuration of the search starting-point time estimating unit in Embodiment 1 of the present invention.
Fig. 8 is a detailed flowchart of the search starting-point estimation processing in Embodiment 1 of the present invention.
Fig. 9 shows an example of the similarity information stored in the similarity storage unit in Embodiment 1 of the present invention.
Fig. 10 shows an example of the output screen of the related scene addition device in Embodiment 1 of the present invention.
Fig. 11A shows an example of the output screen of the shared television set in Embodiment 1 of the present invention.
Fig. 11B shows an example of the output screen of the shared television set in Embodiment 1 of the present invention.
Fig. 12 shows an example of the similarity information stored in the similarity storage unit in Embodiment 1 of the present invention.
Fig. 13 is a block diagram showing the configuration of the search system in Embodiment 2 of the present invention.
Fig. 14 is a detailed flowchart of the related scene extraction processing in Embodiment 2 of the present invention.
Fig. 15 shows an example of the output screen of the related scene addition device in Embodiment 2 of the present invention.
Embodiment
Embodiments of the present invention are described below with reference to the drawings.
(Embodiment 1)
In the present embodiment, the assumed situation is that a user accesses a search server on the Internet from a device capable of information search, such as a PC or mobile phone (hereinafter an "information search device"), and performs an information search triggered by a certain scene in the TV program being viewed. The present embodiment describes a method of estimating the scene of the TV program that triggered the information search when the search result obtained with the information search device is displayed on a shared television set and shared with others, and also describes a method of supporting the operation of attaching the estimated scene to the search result. The concrete situation considered here is that, while watching a soccer broadcast with family members, an offside occurs and the user immediately searches for "offside". The search system used in this case to associate the offside scene with the search result about offside is described below.
Fig. 1 is an external view of the configuration of the search system in Embodiment 1 of the present invention. As shown in the figure, the search system includes a related scene addition device 100, a shared television set 114, and a portable terminal 1601, interconnected via a computer network 1602 such as a LAN (Local Area Network).
Fig. 2 is a block diagram showing the configuration of the search system in Embodiment 1 of the present invention.
The search system performs an information search, attaches to the information search result the scene of the TV program that triggered the search, and displays the result. The search system includes: the related scene addition device 100, the shared television set 114, and the portable terminal 1601.
The related scene addition device 100 performs an information search and attaches to the search result the scene of the TV program that triggered the search. The related scene addition device 100 includes: an input unit 101, an information search execution unit 102, a search result storage unit 103, an operation history collection unit 104, an operation history storage unit 105, a timer 106, and a search starting-point time estimating unit 107. The related scene addition device 100 further includes: a related scene extracting unit 108, an image acquisition unit 109, an image storage unit 110, a search result output unit 111, an output unit 112, and a search result transmitting unit 113.
The related scene addition device 100 is built from a general-purpose computer having a CPU (Central Processing Unit), memory, and a communication interface. Each processing unit is realized functionally by executing on the CPU a program that implements the processing units of the related scene addition device 100. Each storage unit is realized by memory and an HDD (Hard Disk Drive).
The input unit 101 is a processing unit, such as buttons, a touch screen, or cross keys, that accepts input from the user. The information search execution unit 102 is a processing unit that performs information searches by accessing a search server on the Internet. The search result storage unit 103 is a storage device that stores the results of information searches performed by the information search execution unit 102; the search results stored there are those which the user has designated for scene addition. The operation history collection unit 104 is a processing unit that collects the operation history of the information searches performed by the information search execution unit 102. That is, the operation history collection unit 104 collects records of operations including the keywords entered by the user when searching, the words contained in selected items, the URLs (Uniform Resource Locators) indicated by selected options, and the user's instructions. The operation history storage unit 105 is a storage device that stores the operation history collected by the operation history collection unit 104. The timer 106 is a processing unit that obtains the current time.
The search starting-point time estimating unit 107 is a processing unit that estimates the time at which the user began the information search. Specifically, using the search result in the search result storage unit 103 and the operation history in the operation history storage unit 105, the search starting-point time estimating unit 107 identifies the search-starting operation that began the search for the information corresponding to the stored search result. It then uses the time at which that operation was performed to estimate the time at which the search for the corresponding information began. In this way, from the history of the search conditions entered to obtain the search result designated by the user, the search starting-point time estimating unit 107 estimates the time at which the user began entering search conditions with the aim of obtaining that search result.
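The walk-back over the operation history can be sketched roughly as follows. This is a minimal illustration, not the patented algorithm (which, as described later, uses a similarity table); the history records, timestamps, and keyword-overlap criterion are all illustrative assumptions.

```python
from datetime import datetime

# Hypothetical operation-history records: (time, search word) pairs,
# mirroring the "operation time" and "search word" columns of Fig. 5.
HISTORY = [
    (datetime(2009, 6, 1, 20, 1), "Kimura Takuya"),
    (datetime(2009, 6, 1, 20, 2), "Kimura Takuya drama"),
    (datetime(2009, 6, 1, 20, 15), "offside"),
    (datetime(2009, 6, 1, 20, 17), "offside rule"),
]

def estimate_search_start(history, target_words):
    """Walk backwards from the newest operation while each operation shares
    a keyword with the designated search result; the earliest operation of
    that contiguous run gives the estimated search starting-point time."""
    start = None
    for time, words in reversed(history):
        if set(words.split()) & set(target_words.split()):
            start = time          # still within the same search session
        elif start is not None:
            break                 # session boundary: an unrelated operation
    return start

# The result designated for scene addition was obtained with "offside rule".
print(estimate_search_start(HISTORY, "offside rule"))  # 2009-06-01 20:15:00
```

Here the unrelated "Kimura Takuya" searches terminate the walk-back, so the starting point is the first "offside" operation rather than the final query.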
The related scene extracting unit 108 is a processing unit that extracts from the image storage unit 110, as the scene associated with the search result stored in the search result storage unit 103, the scene around the search starting-point time estimated by the search starting-point time estimating unit 107. Although the present embodiment assumes processing of moving image content, the same processing applies to still image content. The image acquisition unit 109 is a processing unit that acquires the moving image that triggered the search; the moving image it acquires is the TV program content being broadcast or accumulated moving image content. The image storage unit 110 is a storage device that stores the moving image data acquired by the image acquisition unit 109 together with the playback times of that data.
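Because the image storage unit keeps playback times alongside the image data, extraction reduces to a window query around the estimated starting point. The sketch below assumes an in-memory list of timestamped frames and an arbitrary 30 s / 10 s window; the embodiment does not prescribe these values.

```python
from datetime import datetime, timedelta

def extract_related_scene(frames, start_time, before=timedelta(seconds=30),
                          after=timedelta(seconds=10)):
    """Return the stored (time, frame) pairs whose playback time lies in a
    window around the estimated search starting-point time."""
    return [(t, f) for t, f in frames
            if start_time - before <= t <= start_time + after]

# Illustrative stored frames, one every 20 s of the broadcast.
t0 = datetime(2009, 6, 1, 20, 14)
frames = [(t0 + timedelta(seconds=20 * i), f"frame-{i}") for i in range(10)]

scene = extract_related_scene(frames, datetime(2009, 6, 1, 20, 15))
print([f for _, f in scene])  # ['frame-2', 'frame-3']
```

Taking frames from before as well as after the starting point reflects the assumption, stated above, that the triggering scene lies around the moment the user began the search.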
The search result output unit 111 is a processing unit that combines the search result stored in the search result storage unit 103 with the related scene extracted by the related scene extracting unit 108, and outputs the combined result to the output unit 112. The output unit 112 is a display device, such as a display, that shows the result output from the search result output unit 111. The search result transmitting unit 113 is a processing unit that transmits to external equipment the data obtained by combining the related scene with the search result produced by the search result output unit 111.
The shared television set 114 is a television set that receives the data transmitted from the search result transmitting unit 113 and can display the received data; for example, it is a large television set installed in a living room. By displaying on the shared television set 114 the search result data to which the related scene addition device 100 has attached the related scene, information can be shared among multiple users.
The portable terminal 1601 is a portable terminal that receives the data transmitted from the search result transmitting unit 113 and can display the received data. For example, the portable terminal 1601 may be equipment that the user can both carry and use, such as a mobile phone.
As an example of the related scene addition device 100 configured as above, the following processing is described: when the user performs a search triggered by a scene being viewed on the shared television set 114, the scene that triggered the search is estimated from the operation history and the search result, attached to the search result, and transmitted to the shared television set 114.
Fig. 3 is a flowchart showing the user's operating sequence. Fig. 4 is a flowchart showing the processing performed by the related scene addition device 100.
In the present embodiment, the moving image content of the program the user is viewing on the shared television set 114 is acquired by the image acquisition unit 109 and stored in the image storage unit 110. Although the image acquisition unit 109 and the image storage unit 110 are placed inside the related scene addition device 100 in Fig. 2, this is not a limitation; for example, the moving image content may be stored on an external recording device, and the related scene extracting unit 108 may extract the scene from the moving image content stored on that device.
Using the input unit 101, the user issues to the related scene addition device 100 an instruction to search for information, triggered by a certain scene on the television (S301). For example, while viewing a travel program, the user issues an instruction to search for information about the place or shop currently being shown. While viewing a quiz program, the user issues an instruction to search for information about the answer to the question currently being posed. While viewing an animal program, the user issues an instruction to search for information about the name of the animal currently being shown or the places where it can be seen. While viewing a sports broadcast such as soccer or baseball, the user issues an instruction to search for information about the player currently being shown, or about plays or rules. The situation assumed in the present embodiment is that an "offside" occurs while the user is watching a soccer match, and the user accesses a search server on the Internet to search for information about the "offside" rule.
The information search execution unit 102 of the related scene addition device 100 performs search processing in response to the information search instruction (S301), and the operation history collection unit 104 stores the operation history of the user's information searches in the operation history storage unit 105 (S401). Note that the user does not search only when triggered by a TV scene; there are also cases where the user simply searches for information of personal interest. Here, the operation history collection unit 104 stores in the operation history storage unit 105 the history of all information searches performed by the information search execution unit 102. To keep within the capacity of the operation history storage unit 105, stored history entries are deleted by, for example, using the end of each day as the reference, or using as the trigger the absence of any user operation for more than a certain period; entries may also be deleted in order from the oldest.
Fig. 5 shows an example of the operation history information stored in the operation history storage unit 105. Each entry of the operation history contains: an operation number 501, an operation time 502, a displayed URL 503, a search word 504, a selected option 505, and another operation 506.
The operation number 501 is a number identifying the history entry, assigned in ascending order from the oldest operation time 502. The operation time 502 is information indicating the time at which the user operated the related scene addition device 100 through the input unit 101. The displayed URL 503 is information indicating the URL of the Web page displayed on the output unit 112 at the time of the operation. The search word 504 is information indicating the search word entered by the user; the search word may be a word the user entered with the keyboard or buttons of the input unit 101, or a keyword suggested for searching by the related scene addition device 100. The selected option 505 is information about an item selected, in order to move to the Web page of another URL, from the list of search results displayed on the output unit 112, or from the displayed Web page. The other operation 506 is information indicating the remaining operations the information search execution unit 102 can accept, such as the search execution operation, the "back" operation returning to the previous Web page, and the "new window" operation.
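One history entry can be modeled as a simple record. The field names below follow the columns of Fig. 5 (501–506) but are otherwise illustrative assumptions; in any given entry only one of the search word, selected option, or other operation is typically populated.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class HistoryEntry:
    number: int                            # 501: entry id, oldest first
    time: datetime                         # 502: when the user operated the device
    displayed_url: str                     # 503: Web page shown at operation time
    search_word: Optional[str] = None      # 504: keyword the user entered
    selected_option: Optional[str] = None  # 505: link chosen from the results
    other_operation: Optional[str] = None  # 506: e.g. "search", "back", "new window"

entry = HistoryEntry(7, datetime(2009, 6, 1, 20, 15),
                     "http://search.example/", search_word="offside")
print(entry.search_word)  # offside
```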
For example, the operation history information shown in Fig. 5 represents operations performed by a user while a soccer match was being broadcast on the shared television set 114. The concrete example of the operation history shown in Fig. 5 would be produced in the following situation: the user first searches for a personal interest unrelated to the content broadcast on the shared television set 114, for example a TV drama starring "Kimura Takuya", and reads the search results. After that, an "offside" occurs in the soccer match broadcast on the shared television set 114, and because a family member asks for an explanation of "offside", the user begins searching about "offside" in order to explain it.
Screen examples during the information search about offside are shown in Figs. 6A-6D. The user first enters the keyword "offside" on the keyword input page shown in Fig. 6A (operation number 7 in Fig. 5). The search yields the results shown in Fig. 6B. From the search results, the user selects the option (link) "Offside (soccer) - Wiki" (operation number 8 in Fig. 5), which brings up the page shown in Fig. 6C. The user considers whether displaying this page on the shared television set 114 would let more users share it, but judges the page unsuitable for sharing and performs the operation of returning to the screen showing the previous search results (Fig. 6B) (operation number 9 in Fig. 5). The user then reselects "Rule book (offside)" from the search results shown in Fig. 6B (operation number 10 in Fig. 5), which displays the page shown in Fig. 6D. After examining this page, the user decides to share it among multiple users and requests the related scene addition device 100 to attach the related scene.
In addition, a search word or option such as "offside" may be entered with a keyboard or a numeric keypad. Thus, in Japanese, the word may be typed as the romanized characters "o, f, u, s, a, i, d, o"; in the present embodiment, however, each meaningful unit of input is treated as one operation, so that the history shown in Fig. 5 is generated.
The user decides which search result to share, that is, to which search result a related scene is to be added (hereinafter, the "scene addition target search result") (S302). The user's request to the related scene addition device 100 to add a related scene is made through a predetermined operation on the input unit 101 while the search result is displayed on the output unit 112.
Upon the user's request to add a related scene (S302), the information search execution unit 102 stores the scene addition target search result in the search result storage unit 103 (S402). Next, based on the operation with which the search for the scene addition target search result was started (hereinafter, the "search start point"), the search start time estimation unit 107 estimates the time at which that search was started (hereinafter, the "search start time") (S403). The related scene extraction unit 108 then extracts a related scene from the video storage unit 110 based on the search start time (S404).
In the search start point estimation process (S403), it is determined from the user's operation history whether the user performed searches on the same topic as the search that produced the scene addition target search result, and the determination result is used for estimating the search start point. That is, when the information the user entered or browsed during the search is close to the scene addition target search result, the search start time estimation unit 107 estimates the search start point by exploiting the fact that the user was searching for one and the same topic. The fact that the pieces of information the user entered or browsed in successive operations are close to each other may also be used to estimate the search start point. This process is described in detail below.
Fig. 7 is a functional block diagram showing the detailed configuration of the search start time estimation unit 107. Fig. 8 is a detailed flowchart of the search start point estimation process (S403 in Fig. 4) executed by the search start time estimation unit 107.
As shown in Fig. 7, the search start time estimation unit 107 includes a text corpus collection unit 701, a word information storage unit 702, a word similarity calculation unit 703, a page information collection unit 704, a page similarity calculation unit 705, and a search state determination unit 706.
The text corpus collection unit 701 is a processing unit that collects a text corpus used to quantify the semantic similarity between words, and creates the information used for calculating the similarity between words or between a word and a text. The word information storage unit 702 is a storage device that stores the information created by the text corpus collection unit 701 for use in the similarity calculation. The word similarity calculation unit 703 is a processing unit that calculates the similarity between words or between a word and a text using the information stored in the word information storage unit 702.
The page information collection unit 704 is a processing unit that collects information on the pages the user has browsed and information on the page concerned in the scene addition target search result. The page similarity calculation unit 705 is a processing unit that calculates the similarity between designated pages based on the page information collected by the page information collection unit 704. The search state determination unit 706 is a processing unit that determines, from the operation history and the scene addition target search result, whether searches on the same topic were performed across operations. The similarity storage unit 707 is a storage device that stores the information used for determining whether a search on the same topic was performed. The search start point estimation unit 708 is a processing unit that estimates the search start point and the search start time using the determination result of the search state determination unit 706.
Here, the process by which the text corpus collection unit 701 creates the information used for calculating the similarity is described in detail. The processing by the text corpus collection unit 701 is performed separately from the related scene addition processing performed by the related scene addition device 100.
The text corpus collection unit 701 collects a large number of texts and extracts from them the nouns, verbs, and other words useful for search (hereinafter, "index terms"). The text corpus collection unit 701 then creates a dimension-compressed matrix by applying singular value decomposition to an index-term/text matrix, that is, a matrix that represents the extracted index terms and each text. Using the dimension-compressed matrix, the text corpus collection unit 701 represents each index term and each text as a vector of the compressed dimensionality, calculates the index term vectors and the text vectors, and stores these vectors in the word information storage unit 702.
By using the index term vectors or text vectors created from the compressed matrix, the semantic distance between index terms can be obtained, and texts can accordingly be searched based on the semantic similarity between index terms. Such techniques are known as Latent Semantic Analysis (LSA) or Latent Semantic Indexing (LSI) (see Non-Patent Literature: Journal of the Japan Society for Fuzzy Theory and Intelligent Informatics, Vol. 17, No. 1, p. 76 (2005); "Information Retrieval Algorithms" (Kyoritsu Shuppan), p. 65 (2002)). With this method, the semantic similarity between words or between a word and a text can be quantified, and the calculation speed during search can be improved.
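The index-term/text matrix construction and SVD-based dimension compression described above can be sketched as follows. This is an illustrative Python sketch using NumPy: the toy corpus, the occurrence counts, and the compressed dimensionality of 2 are all assumptions for demonstration, not the patent's actual implementation.

```python
import numpy as np

# Toy index-term/text matrix: rows = index terms, columns = texts,
# entries = occurrence counts of each term in each text.
terms = ["offside", "football", "referee", "actor", "drama"]
X = np.array([
    [2, 1, 0],   # "offside" appears only in the football-related texts
    [1, 2, 0],
    [1, 1, 0],
    [0, 0, 2],   # "actor" appears only in the drama-related text
    [0, 0, 1],
], dtype=float)

# Singular value decomposition, then truncation to k dimensions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vectors = U[:, :k] * s[:k]   # one k-dimensional vector per index term
text_vectors = Vt[:k, :].T        # one k-dimensional vector per text

print(term_vectors.shape)  # → (5, 2)
```

In the compressed space, "offside" and "football" end up pointing in the same direction (they co-occur in the same texts), while "offside" and "actor" are nearly orthogonal, which is exactly the property the similarity comparisons below rely on.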
With LSA and LSI, not only is the semantic similarity expressed, but dimension compression is also performed to improve the calculation speed during search. However, the similarity between words or between a word and a text can be quantified even without dimension compression. The vectors may therefore be created without dimension compression, and the similarity calculated from those vectors. As methods for obtaining the semantic similarity between words, besides the method above, the user may define the semantic similarity in advance, or the semantic similarity may be calculated using a dictionary such as a thesaurus. For example, when a thesaurus that represents the semantic relations between words as a hierarchy is used, the distance between words can be defined by, for example, the number of links between them.
The texts collected by the text corpus collection unit 701 may be a general text collection prepared in advance by the system developer, but are not limited to this. For example, the text corpus collection unit 701 may collect a text collection obtained from pages with high similarity to the scene addition target search result, such pages being obtained by searching for text related to the scene addition target search result. A text collection obtained from the pages the user browsed during a certain period may also be collected. When the matrix needed for calculating the semantic similarity between words is created from such specialized text collections, the distances between the words the user actually uses during search can be represented accurately, so the search start point estimation unit 708 can estimate the search start point with higher accuracy. When the general text collection prepared in advance by the system developer is used, the matrix needed for calculating the semantic similarity between words needs to be created only once. In contrast, when a text corpus is obtained based on the scene addition target search result, the matrix must be created anew each time.
Furthermore, the distance between words can be defined automatically by, for example, using a text corpus obtained from the Web or the like. Searches related to news programs or other topical programs often use proper nouns that are popular at the time. Therefore, when the search start point is determined from a search history for such topical broadcast programs, the above matrix-creation method using a text corpus such as the Web is suitable. On the other hand, when the search start point is determined from a search history for videos of genres such as educational programs, new words are unlikely to appear, so the matrix may be created using an existing dictionary such as a thesaurus. When a recorded video program is played back, the date on which the program was recorded may be used, and the matrix may be created from a text corpus of that date and time.
The search start point estimation process (S403 in Fig. 4) is now described in detail with reference to Fig. 8.
The search state determination unit 706 obtains information on the operation history that is the object of the search state determination (S802). That is, from the operation history information shown in Fig. 5 stored in the operation history storage unit 105, the search state determination unit 706 selects the operation histories in which a search word was entered or an option was selected. From the selected operation histories, the search state determination unit 706 obtains the word set that was entered or selected in the operation history closest in time before the operation that decided the scene addition target search result. That operation history is the one with the latest operation number 501 in Fig. 5. In the specific example shown in Fig. 5, the word set is "Rule book (offside)" of operation number 10.
The search state determination unit 706 then compares two pieces of information: the information on the operation that decided the scene addition target search result, and the information on the operation history obtained in the operation history information acquisition process (S802) (S803). That is, the search state determination unit 706 obtains the scene addition target search result stored in the search result storage unit 103 and extracts the text information from that search result. The search state determination unit 706 causes the word similarity calculation unit 703 to vectorize the extracted text information using the dimension-compressed matrix described above. The vector generated by this vectorization is called the "search result vector". Similarly, the search state determination unit 706 causes the word similarity calculation unit 703 to vectorize, using the dimension-compressed matrix, the word set obtained in the operation history information acquisition process (S802). The vector generated by this vectorization is called the "input word vector". The search state determination unit 706 causes the word similarity calculation unit 703 to obtain the similarity between the input word vector and the search result vector, and stores the obtained similarity in the similarity storage unit 707. As the method for obtaining the similarity between vectors, the cosine measure (the angle between the two vectors) or the inner product, both often used in text retrieval, may be used, for example.
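The cosine measure between an input word vector and the search result vector can be written compactly. In this sketch, the per-word vectors standing in for the contents of the word information storage unit 702 are hypothetical two-dimensional values chosen for illustration; only the folding of a word set into one vector and the cosine comparison follow the text above.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine measure between two vectors (1.0 = same direction)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def word_set_vector(words, term_vectors):
    """Fold a word set into one vector by summing the stored term vectors
    (words missing from the store are simply skipped)."""
    vecs = [term_vectors[w] for w in words if w in term_vectors]
    return np.sum(vecs, axis=0) if vecs else np.zeros(2)

# Hypothetical compressed term vectors (stand-in for unit 702).
term_vectors = {"offside": np.array([0.9, 0.1]),
                "football": np.array([0.8, 0.2]),
                "drama": np.array([0.1, 0.9])}

search_result_vec = word_set_vector(["offside", "football"], term_vectors)
input_word_vec = word_set_vector(["offside"], term_vectors)
first_similarity = cosine_similarity(input_word_vec, search_result_vec)
print(first_similarity)  # close to 1.0, i.e. the same topic
```

A first similarity this close to 1.0 would be judged "No" in S803 (larger than the threshold), so the walk through the history would continue to the next older operation.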
Fig. 9 shows an example of the similarity information calculated by the search state determination unit 706 and stored in the similarity storage unit 707. The similarity information includes a similarity history made up of an operation number 901, an operation time 902, a search word 903, an option 904, a first similarity 905, and a second similarity 906.
Here, the operation number 901, operation time 902, search word 903, and option 904 correspond respectively to the operation number 501, operation time 502, search word 504, and option 505 of the operation history shown in Fig. 5. The first similarity 905 shows, for each operation history, the similarity between the search result vector and the input word vector created from the word set shown in the search word 903 or the option 904. The second similarity 906 shows the similarity between the input word vector contained in the temporally next operation history and the input word vector created from the word set shown in the search word 903 or the option 904. Although the first similarity is calculated in the comparison process (S803) described above, the second similarity may be calculated together with the first similarity, or instead of the first similarity.
Next, the search state determination unit 706 determines whether the similarity between the search result vector and the input word vector (the first similarity) is less than or equal to a threshold (S803). When the first similarity is larger than the threshold ("No" in S803), the operation is judged to be a search on the same topic as the search for the scene addition target search result. The search state determination unit 706 then obtains information on the next operation history to be the object of the search state determination (S802). That is, the search state determination unit 706 takes as objects those operation histories, among the operation history information stored in the operation history storage unit 105, in which a search word was entered or an option was selected. From the object operation histories, the search state determination unit 706 selects the one closest in time before the operation history from which the word set was obtained in the previous operation history information acquisition process (S802), and obtains the entered or selected word set from the selected operation history. In the specific example shown in Fig. 5, "Offside (football)" of operation number 8 is obtained.
When the first similarity is less than or equal to the threshold ("Yes" in S803), it is judged that the search on the same topic as the scene addition target search result has ended there. The search start point estimation unit 708 therefore decides the search start point, and thereby the search start time, by the method described below (S804).
For example, in the example shown in Fig. 9, when the threshold is 0.5, the first similarities 905 from operation number 10 down to operation number 7 are larger than the threshold ("No" in S803). It is therefore judged that the search on the same topic as the search for the scene addition target search result continues from operation number 10 back to operation number 7. Then, at the operation shown at operation number 5, in which the option "chance" was selected, the first similarity 905 becomes less than or equal to the threshold ("Yes" in S803). It is therefore judged that the search on the same topic as the scene addition target search result ends there.
Next, the search start time determination process (S804) is described in detail. The search start point estimation unit 708 takes, as the search start point, the operation history with the smallest operation number (the earliest operation history in time) among the operation histories judged in the comparison process (S803) to belong to the continuing search on the same topic as the scene addition target search result. Specifically, in Fig. 9, the operation histories whose first similarity is larger than the threshold are judged to be searches on the same topic. When the threshold is 0.5, the operation histories performing the same-topic search are those from operation number 10 to operation number 7. The search start point estimation unit 708 estimates the one with the smallest operation number among them, operation number 7, as the search start point. The search start point estimation unit 708 decides the time at which the search start point operation was performed as the search start time; the search start time is the operation time 902 contained in the operation history corresponding to the search start point. In the above example, since the operation history of operation number 7 is the search start point, "20:21:20" in the operation time 902 contained in that operation history is taken as the search start time.
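The walk backwards through the similarity history in S803/S804 can be sketched as follows. The tuple layout and the concrete similarity values are assumptions that mirror the Fig. 9 example (operation number 5 being the "chance" selection that falls below the threshold of 0.5); only the stopping rule comes from the text.

```python
# Each entry: (operation number, operation time, first similarity), newest first.
history = [
    (10, "20:25:30", 0.9),
    (8,  "20:23:10", 0.8),
    (7,  "20:21:20", 0.7),
    (5,  "20:15:30", 0.2),   # "chance": different topic, so the start is above
]

def estimate_search_start(history, threshold=0.5):
    """Walk back from the newest operation while the first similarity stays
    above the threshold; the earliest such operation is the search start point,
    and its operation time is the search start time."""
    start = None
    for op_no, op_time, sim in history:
        if sim <= threshold:
            break
        start = (op_no, op_time)
    return start

print(estimate_search_start(history))  # → (7, '20:21:20')
```

With these values the walk keeps operations 10, 8, and 7, stops at operation 5, and returns operation number 7 with search start time "20:21:20", matching the worked example above.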
After the search start time has been determined as described above (S403 in Fig. 4, Fig. 8), the related scene extraction unit 108 extracts the scene at the search start time from the video storage unit 110 (S404). Here, the related scene extraction unit 108 extracts, as the related scene, only the video data of the period Δt ending at the search start time. That is, the related scene extraction unit 108 extracts, as the related scene, the video data contained in the range from (search start time − Δt) to the search start time. Δt may be a fixed value decided in advance by the system developer. Δt may also be changeable and set by the user.
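The extraction range used in S404 is simply the Δt window ending at the search start time. In this sketch, representing times as "HH:MM:SS" strings and choosing Δt = 30 seconds are both illustrative assumptions.

```python
from datetime import datetime, timedelta

DELTA_T = timedelta(seconds=30)  # illustrative; fixed by the developer or user-set

def related_scene_range(search_start_time: str):
    """Return the (start, end) wall-clock range of the related scene:
    from (search start time - delta t) up to the search start time."""
    end = datetime.strptime(search_start_time, "%H:%M:%S")
    start = end - DELTA_T
    return start.strftime("%H:%M:%S"), end.strftime("%H:%M:%S")

print(related_scene_range("20:21:20"))  # → ('20:20:50', '20:21:20')
```

For the search start time "20:21:20" estimated above, the related scene would be the video data from 20:20:50 to 20:21:20.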
The search result output unit 111 combines the related scene of length Δt extracted by the related scene extraction unit 108 with the scene addition target search result, and outputs the combined result to the output unit 112 (S405). Fig. 10 shows an example of the screen output by the output unit 112. Here, the scene addition target search result 1001 is combined with the related scene 1002 and displayed on the screen. An icon group 1003 and menu keys 1006, which are used when sending the search result to which the related scene 1002 has been added to another terminal, are also displayed on the screen. The menu keys 1006 are buttons used to specify, when sending to another terminal, in what display mode the result is shown on the destination terminal.
When the user decides to send the search result to another terminal, the user presses one of the icons included in the icon group 1003 (icon 1004 or 1005) and a menu key 1006. By this operation, the user issues to the search result transmission unit 113 a request to transmit the search result to which the related scene has been added (S303).
In response to the user's request (S303), the search result transmission unit 113 transmits the search result to which the related scene has been added to the designated destination (S406). For example, the user can perform the transmission operation for this search result using the icon group 1003 shown on the screen in Fig. 10. The user selects the icon 1004 to send the search result to the shared television set 114, and selects the icon 1005 to send it to another mobile terminal 1601 such as a PC or mobile phone. When the search result with the related scene is sent to a terminal that is broadcasting or playing back video content, an awkward situation may arise in which the video content displayed on that terminal can no longer be seen. Therefore, when the user has selected icon 1004 or 1005, the output unit 112 may display the menu keys 1006 on the screen so that the display method after transmission can be selected. For example, when the user selects the menu key "full screen", the search result transmission unit 113 transmits data for displaying the search result with the related scene on the entire screen of the destination terminal. When the user selects the menu key "multi-screen", the search result transmission unit 113 transmits data for displaying the search result with the related scene on part of the screen of the destination terminal, together with the video content being played back or transmitted. Examples of these screens are shown in Figs. 11A and 11B. Fig. 11A shows an example of the display screen of the destination terminal when the user has selected the menu key "full screen" among the menu keys 1006 shown on the screen in Fig. 10. On the screen of the destination terminal, only the search result with the related scene sent from the related scene addition device 100 is displayed. Fig. 11B shows an example of the display screen of the destination terminal when the user has selected "multi-screen" among the menu keys 1006 shown on the screen in Fig. 10. A plurality of screens are shown on the destination terminal. That is, the destination terminal displays the display screen 1101 of the video content being played back or broadcast on the destination terminal, together with a screen 1102 containing the search results with related scenes that have been sent to this destination terminal so far. The search results with related scenes contained in the screen 1102 are displayed according to the search start time 1103 of each search result. In addition to the search start time 1103, information on the sender of each search result may also be shown. With the display method shown in Fig. 11B, search results can be shared among a plurality of users without disturbing the video content currently being viewed. Moreover, results shared in the past can easily be shared again; in this way, an explanation can easily be repeated for, for example, a user who was not present at the initial sharing. With the display method of Fig. 11A, on the other hand, the video content currently being viewed can no longer be seen. Therefore, when the user presses the menu key "full screen" on the screen shown in Fig. 10, control may further be performed to display additional menu keys "show now" and "show later". When the user selects the menu key "show later", the search result can be displayed on the destination terminal during the broadcast of the next commercial (commercial message).
Fig. 1 shows a display example of search results to which related scenes have been added. As shown in the figure, the results retrieved on the related scene addition device 100 and their related scenes are sent via the computer network 1602 to the shared television set 114 and the mobile terminal 1601, and displayed there.
As described above, the related scene addition device 100 according to the present embodiment can use the user's information search operation history and the information search results to extract the related scene of the video content that triggered the search. Therefore, a related scene can be extracted even for video content to which no per-scene text information has been assigned. Moreover, no keyword input is required solely for extracting the related scene. The burden on the user of adding a related scene to an information search result can thus be reduced.
In the determination in the comparison process (S803) of whether the same-topic search has ended, the first similarity was used, that is, the similarity between the input word vector generated from the word set the user entered or selected and the search result vector created from the scene addition target search result. However, the determination of whether the user performed a search on the same topic as the scene addition target search result is not limited to this method. For example, instead of the first similarity, the similarity between a search keyword vector and the input word vector may be used, where the search keyword vector is obtained by vectorizing, with the same method used to generate the input word vector, the search keyword that was entered in order to obtain the scene addition target search result.
Three further variations of the determination method that can be realized with this configuration are described below.
(Variation 1)
Variation 1 differs from Embodiment 1 in that, in the comparison process (S803), the search state determination unit 706 determines whether the user searched for the same topic using the second similarity described above. That is, Variation 1 uses the feature that, while the same topic is being searched for, the similarity between the words entered or selected is high. Using this feature, the search state determination unit 706 calculates the second similarity by comparing the input word vectors of adjacent operation histories. When the second similarity is larger than the threshold, it is judged that searches on the same topic were performed across those adjacent operation histories. For example, consider determining the search state of operation number 8 in the specific example of the similarity information shown in Fig. 9. The search state determination unit 706 calculates the second similarity, that is, the similarity between the input word vector of the input "Rule book (offside)" in the temporally next operation number 10 and the input word vector of the input "Offside (football)" in operation number 8, and stores it as the second similarity 906. The search state determination unit 706 judges whether a same-topic search was performed by comparing the calculated second similarity with the threshold. For example, in the search states of operation numbers 8 and 7 the second similarity is larger than the threshold, whereas in the search state of operation number 5 the second similarity is less than or equal to the threshold. In this case, it can be seen that a search on the same topic was performed in operation numbers 8 and 7, while the search of operation number 5 concerned something else. Therefore, the temporally earliest operation number 7 is the search start point.
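Variation 1's chain of adjacent comparisons might be sketched like this. The per-operation input word vectors are hypothetical stand-ins for the vectors behind Fig. 9; only the chaining of adjacent second similarities against a threshold follows the text.

```python
import numpy as np

def cosine(a, b):
    n = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / n) if n else 0.0

# (operation number, input word vector), newest first -- illustrative values.
inputs = [
    (10, np.array([0.9, 0.1])),  # "Rule book (offside)"
    (8,  np.array([0.8, 0.2])),  # "Offside (football)"
    (7,  np.array([0.9, 0.2])),  # "offside"
    (5,  np.array([0.1, 0.9])),  # "chance" (a different topic here)
]

def start_point_by_second_similarity(inputs, threshold=0.5):
    """Walk back while each adjacent pair of input word vectors is similar
    (second similarity above the threshold); the oldest operation still in
    the chain is the search start point."""
    start = inputs[0][0]
    for (_, newer_vec), (older_no, older_vec) in zip(inputs, inputs[1:]):
        if cosine(newer_vec, older_vec) <= threshold:
            break
        start = older_no
    return start

print(start_point_by_second_similarity(inputs))  # → 7
```

Note that, unlike Embodiment 1, only the input word vectors are touched here; the scene addition target search result itself is never vectorized.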
In Variation 1, the method of comparing adjacent input word vectors was described, but the method is not limited to this. For example, when the scene addition target search result is displayed, whether a same-topic search was performed may be judged by comparing against the threshold the similarity between the input word vector of the operation history immediately before the moment of display and each of the other input word vectors.
In this way, by determining the same-topic search using only the input word vectors, the analysis of the scene addition target search result and the processing of creating search result vectors for all displayed pages can be omitted, and the amount of computation can thus be reduced.
(Variation 2)
Variation 2 differs from Embodiment 1 in the method used to determine whether the user performed a search on the same topic. Specifically, in the comparison process (S803 in Fig. 8), the search state determination unit 706 makes the determination using the feature that, while the same topic is being searched for, the similarity between the words contained in the texts of the browsed pages is high. That is, the search state determination unit 706 compares adjacent browsed pages to calculate the similarity between them; when the similarity is larger than the threshold, it judges that a search on the same topic was performed across those browsed pages. Specifically, the page information collection unit 704 extracts the display URLs of the pages the user browsed from the operation history stored in the operation history storage unit 105. The information search execution unit 102 obtains the page information for each display URL. The search state determination unit 706 of the search start time estimation unit 107 extracts words from the text information contained in the obtained page information. The search state determination unit 706 requests the page similarity calculation unit 705 to calculate the similarity between the pages, and judges whether the user performed a same-topic search using the similarity calculated by the page similarity calculation unit 705.
Here, the method for obtaining the similarity between pages used for this determination may be the same as in the case using the entered or selected word sets described above. That is, the similarity between the vector representing the text information contained in the page of the scene addition target search result and the vector representing the text information contained in each page the user browsed may be used as the inter-page similarity. The similarity between the vectors representing the text information contained in adjacent browsed pages may also be used as the inter-page similarity. Alternatively, instead of the similarity calculation using the above matrix, the number of words contained in both pages may simply be counted, and when the number of common words is at or below a threshold it may be judged that no same-topic search was performed. Fig. 12 shows an example of the similarity information stored in the similarity storage unit 707 when pages are used. The similarity information includes a similarity history made up of an operation number 1201, an operation time 1202, a display URL 1203, a third similarity 1204, and a fourth similarity 1205. The operation number 1201, operation time 1202, and display URL 1203 correspond respectively to the operation number 501, operation time 502, and display URL 503 of the operation history shown in Fig. 5. The third similarity 1204 shows the similarity between the page of the scene addition target search result and the page browsed at each operation number. The fourth similarity 1205 shows the similarity between the page browsed at each operation number and the page browsed in the temporally next operation history.
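The simple common-word count mentioned above as an alternative to the matrix-based similarity can be sketched as follows; the page word lists and the threshold of 2 are illustrative assumptions.

```python
def common_word_count(page_a_words, page_b_words):
    """Number of distinct words occurring in both pages."""
    return len(set(page_a_words) & set(page_b_words))

def same_topic(page_a_words, page_b_words, threshold=2):
    """Judge that two browsed pages belong to the same search topic when
    they share more than `threshold` distinct words; at or below the
    threshold, no same-topic search is assumed."""
    return common_word_count(page_a_words, page_b_words) > threshold

rulebook_page = ["offside", "rule", "football", "referee", "penalty"]
wiki_page     = ["offside", "football", "referee", "history"]
drama_page    = ["actor", "drama", "episode"]

print(same_topic(rulebook_page, wiki_page))   # → True  (3 common words)
print(same_topic(rulebook_page, drama_page))  # → False (0 common words)
```

Because this variant needs no precomputed word vectors at all, it matches the point made below that the prior corpus collection and vector comparison steps can be dropped entirely.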
If only the similarity between individual words were used to judge whether the same topic was being searched, the word-to-word similarities would have to be constructed with high accuracy. That would require building, from a large volume of text, a text corpus rich enough in variation to quantify those similarities. When, as described above, the inter-page similarity is used for the judgement, many more words contribute to it; compared with the word-only approach, the word-to-word similarities prepared in advance therefore need not be as accurate. Moreover, if the similarity is calculated simply from the occurrence of common words as described above, no word-to-word similarities need to be obtained in advance at all. The corpus collection, the quantification of word-to-word similarities, and the vector comparison processing can then all be omitted, so the comparison process (S803 of FIG. 8) can be realized with a simpler system configuration.
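The two judgement variants described above — a vector similarity between the page texts, and a plain count of shared words — can be sketched roughly as follows. This is an illustrative reading, not the specification's implementation: the tokenizer, the 0.3 cosine threshold, and the 2-word count threshold are all assumed values.

```python
import math
import re
from collections import Counter

def term_vector(text):
    """Split page text into lowercase words and count occurrences."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two term-frequency vectors."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def same_topic_by_vectors(page1, page2, threshold=0.3):
    """Vector variant: judge same topic when similarity exceeds a threshold."""
    return cosine_similarity(term_vector(page1), term_vector(page2)) > threshold

def same_topic_by_common_words(page1, page2, min_common=2):
    """Simpler variant: count words appearing in both pages."""
    common = set(term_vector(page1)) & set(term_vector(page2))
    return len(common) > min_common
```

Either predicate could serve as the comparison step between adjacent browsed pages; the common-word variant avoids preparing any similarity data in advance, as the paragraph above notes.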
Furthermore, in Variation 2 the comparison is performed between pieces of text information. The amount of information compared is therefore larger than in the word-to-word comparison described in Embodiment 1, so a more accurate similarity can be calculated.
In the above, the similarity judgement used the text information contained in the pages. When web pages are used for the similarity judgement, a URL corresponds to each page, so whether two pages have identical content can also be judged from the URL information.
(variation 3)
Variation 3 differs from Embodiment 1 in the method of judging whether the user has been searching for the same topic. Specifically, in the comparison process (S803 of FIG. 8), the retrieval status judging unit 706 performs the judgement by exploiting the characteristic that, while the same topic is being searched, part of the keywords the user enters is shared between consecutive operations.
In keyword-based retrieval, when there are too many search results, or too few relevant ones, the user re-enters keywords to improve the retrieval accuracy. Typically, when narrowing down the search results or improving the retrieval accuracy, the user corrects the input keywords used for the search. For example, as shown in operation numbers 2 to 4 of FIG. 5, the user either adds an input keyword, or keeps part of the keywords and changes the rest. Accordingly, the retrieval status judging unit 706 compares the input words of the previous operation with those of the current operation; when only part of the words has been changed, or when words have been added, it can judge that the same topic is being searched across the operations. In the case where words are added, the input word newly added in the current operation is combined with the input words entered in the previous operation by a prescribed operator (for example, an AND operator or an OR operator), and the keyword search is performed.
When the relation between the keywords entered by the user is used in this way to judge whether the same topic was being searched, only the user's keyword input is needed; word-to-word similarities need not be obtained in advance. The corpus collection, the quantification of word-to-word similarities, and the vector comparison processing can therefore be omitted, and the judgement can be realized with a simpler system configuration.
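As a rough illustration, the keyword-relation rule of Variation 3 — same topic when keywords are added, or when part of the keywords is kept while the rest is changed — reduces to checking that consecutive queries share at least one input word. The sample queries and the whitespace tokenization are assumptions for the sketch.

```python
def same_topic(prev_query, curr_query):
    """Judge whether two consecutive keyword queries target the same topic.

    Covers both cases described in the text:
    - keywords were added (the previous word set survives intact), and
    - part of the keywords was kept while the rest was changed.
    Both reduce to a nonempty overlap between the input word sets.
    """
    prev_words = set(prev_query.split())
    curr_words = set(curr_query.split())
    return bool(prev_words & curr_words)
```

No corpus or precomputed word similarities are involved, which is what makes this variation the simplest of the three.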
(embodiment 2)
In Embodiment 1 above, a fixed value predetermined by the system developer was used as the length Δt of the extracted related scene. In the present Embodiment 2, on the premise that the appropriate length of Δt differs depending on what the user searched for, the length of Δt is decided using the words the user entered or selected and the search results. This makes it possible to automatically determine a Δt of suitable length according to the search result.
The main difference between the present embodiment and Embodiment 1 is that, in the related scene extraction process (S404 of FIG. 4), the related scene extraction unit also decides Δt. The other constituent elements and the processing they perform are the same as in Embodiment 1; the present embodiment is therefore described below with a focus on the differences from Embodiment 1.
FIG. 13 is a block diagram showing the configuration of the retrieval system in Embodiment 2 of the present invention. This retrieval system uses a related scene addition device 200 in place of the related scene addition device 100 in the configuration of the retrieval system shown in FIG. 1.
The related scene addition device 200 according to the present embodiment is obtained from the configuration of the related scene addition device 100 according to Embodiment 1 shown in FIG. 2 by replacing the related scene extraction unit 108 with a related scene extraction unit 1308, and by adding an electronic dictionary storage unit 1315 and a program information acquisition unit 1316.
The related scene addition device 200 is constituted by a general-purpose computer having a CPU, a memory, a communication interface, and so on. Each processing unit is realized functionally by executing, on the CPU, a program for realizing the processing units provided in the related scene addition device 200. Each storage unit is realized by the memory or an HDD (Hard Disk Drive).
The electronic dictionary storage unit 1315 is a storage device that stores explanations of words such as proper nouns and action words; for example, it stores explanations of person names, animal names, place names, and the like, as well as explanations of the rules of sports, actions, and so on. The electronic dictionary storage unit 1315 also accumulates, for each word, information on the part of speech of that word.
The program information acquisition unit 1316 is a processing unit that acquires information on the broadcast programs stored in the image storage unit 110; for example, it acquires program information such as EPG (Electric Program Guide) data. In general, the program information includes information such as the program name, broadcast date and time, genre, performers, and program content.
The related scene extraction process (S404 of FIG. 4) is described below in detail. FIG. 14 is a detailed flowchart of the related scene extraction process (S404 of FIG. 4).
After determining the length Δt of the related scene (S1401 to S1412), the related scene extraction unit 1308 extracts a related scene of length Δt from the image storage unit 110 (S1413).
That is, the related scene extraction unit 1308 decides words (S1401): from among the words entered or selected by the user when searching for information on the program or scene that is the target of scene extraction, a word considered important for the scene addition target search result (hereinafter, "important search word"), and a word representing the page of the scene addition target search result (hereinafter, "representative page word"). The related scene extraction unit 1308 weights each word so that words with a high occurrence frequency in the set of words entered or selected by the user, or words entered at the retrieval starting point, receive a larger weight, and calculates a score for each word. It then decides, as important search words, a prescribed number of words in descending order of the calculated score. The related scene extraction unit 1308 also decides, as representative page words, words with a high occurrence frequency in the text information contained in the page of the scene addition target search result, or words used as the title of that page.
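The word-scoring step (S1401) might be sketched as follows. The specification says only that frequent words and words entered at the retrieval starting point are weighted more heavily; the concrete weights (the count itself plus an assumed bonus of 2) and the sample inputs are illustrative assumptions.

```python
from collections import Counter

def important_search_words(user_words, starting_point_words, top_n=2):
    """Score each entered/selected word and return the top-scoring ones.

    Score = occurrence frequency, plus an assumed bonus when the word
    was entered at the retrieval starting point.
    """
    freq = Counter(user_words)
    scores = {}
    for word, count in freq.items():
        score = count                      # occurrence frequency
        if word in starting_point_words:   # entered at the retrieval starting point
            score += 2                     # assumed bonus weight
        scores[word] = score
    ranked = sorted(scores, key=lambda w: scores[w], reverse=True)
    return ranked[:top_n]
```

Representative page words could be chosen the same way, scoring the words of the result page's text and title instead of the user's inputs.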
The related scene extraction unit 1308 sets the length Δt of the related scene to an initial value of 10 seconds (S1402). Next, using the electronic dictionary storage unit 1315, the related scene extraction unit 1308 judges the parts of speech of the important search words and representative page words determined in the above processing (S1403).
Referring to the result of the part-of-speech judgement (S1403), the related scene extraction unit 1308 judges, for example, whether the important search words and representative page words include a proper noun such as a place name, a person name, or an animal name (S1404). When it is judged that a proper noun is included ("Yes" in S1404), the related scene extraction unit 1308 sets Δt to a short value (for example, Δt is set to 0, and a still image rather than a moving image is used) (S1405).
When it is judged that no proper noun is included ("No" in S1404), the related scene extraction unit 1308, referring to the result of the part-of-speech judgement process (S1403), judges whether the important search words and representative page words include a word expressing an action (S1406). When it is judged that a word expressing an action is included ("Yes" in S1406), the related scene extraction unit 1308 sets a certain length (for example, 3 minutes) as Δt (S1407).
For example, in the operation history information shown in FIG. 5, while a broadcast soccer program is being viewed, a search for information on "offside" is performed in the operation history of operation number 7. Here, the word "offside" is both a word entered by the user at the retrieval starting point and a word contained in the selection item 505. The related scene extraction unit 1308 therefore judges "offside" to be an important search word (S1401). Since "offside" is a word expressing an action ("No" in S1404, "Yes" in S1406), the related scene extraction unit 1308 sets the time taken to perform the action expressed by the word (for example, 3 minutes) as Δt (S1407).
Next, from the program information acquired by the program information acquisition unit 1316, the related scene extraction unit 1308 acquires the program information at the retrieval starting point time that triggered the search (S1408). The related scene extraction unit 1308 judges whether the program genre indicated by the acquired program information is a quiz program (S1409). When the genre is a quiz program ("Yes" in S1409), the user often searches for information related to the quiz questions. The related scene extraction unit 1308 therefore sets, as Δt, the average time required to present a quiz question and explain its answer (for example, 4 minutes) (S1410).
When the genre is not a quiz program ("No" in S1409), the related scene extraction unit 1308 judges whether the program genre indicated by the acquired program information is a news program (S1411). When the genre is a news program ("Yes" in S1411), the user often searches for information about a news topic. The related scene extraction unit 1308 therefore sets, as Δt, the average time required to report one topic in the news (for example, 2 minutes) (S1412). The value of Δt is decided by the above processing.
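One plausible reading of the decision cascade S1402–S1412 is the function below. The example values (Δt = 0 for proper nouns, 3 minutes for action words, 4 minutes for quiz programs, 2 minutes for news, 10 seconds as the initial value) come from the text; the dictionary lookups are stubbed with assumed word sets, and the ordering in which the genre check overrides the part-of-speech result is an assumption about the flowchart.

```python
# Assumed stand-ins for the electronic dictionary storage unit 1315.
PROPER_NOUNS = {"tokyo", "messi", "panda"}
ACTION_WORDS = {"offside", "dunk", "serve"}

def decide_delta_t(words, genre):
    """Decide the related-scene length Δt in seconds."""
    delta_t = 10                      # S1402: initial value, 10 seconds
    if any(w in PROPER_NOUNS for w in words):
        return 0                      # S1405: use a still image
    if any(w in ACTION_WORDS for w in words):
        delta_t = 3 * 60              # S1407: typical duration of the action
    if genre == "quiz":
        delta_t = 4 * 60              # S1410: question plus answer
    elif genre == "news":
        delta_t = 2 * 60              # S1412: one news topic
    return delta_t
```

With the "offside" example of FIG. 5 (action word, sports genre), this yields the 3 minutes used later in the FIG. 15 walkthrough.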
The related scene extraction unit 1308 extracts, from the image storage unit 110, the moving image data contained in the range from (retrieval starting point time − Δt) to (retrieval starting point time) as the related scene (S1413). When Δt = 0, the related scene extraction unit 1308 extracts the still image at the retrieval starting point time. When extracting the related scene, a margin α (α being a positive value) may be provided instead of extracting exactly the moving image data contained in the range from (retrieval starting point time − Δt) to (retrieval starting point time); that is, the moving image data contained in the range from (retrieval starting point time − Δt − α) to (retrieval starting point time + α) may be extracted.
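The extraction window of S1413, including the margin α, is just interval arithmetic and can be sketched as a small helper. Only the arithmetic (start − Δt − α, start + α) is from the specification; the datetime representation is an assumption.

```python
from datetime import datetime, timedelta

def scene_window(start_time, delta_t_sec, alpha_sec=0):
    """Return the (begin, end) playback interval of the related scene."""
    begin = start_time - timedelta(seconds=delta_t_sec + alpha_sec)
    end = start_time + timedelta(seconds=alpha_sec)
    return begin, end
```

With Δt = 3 minutes, α = 1 minute, and a retrieval starting point time of 20:21, this reproduces the 20:17–20:22 interval of the FIG. 15 example.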
Alternatively, the value of Δt may be decided only from the program genre, without using the parts of speech of words. That is, a value of Δt is set in advance for each program genre, and the value of Δt is decided according to the genre of the target program. This makes the decision of important search words and representative page words, and the determination of word attributes such as part of speech, unnecessary, so Δt can be decided with simpler processing. This genre-only method may also be used when the similarity between the important search words and the representative page words is small and it is unclear which word should be used to decide Δt.
FIG. 15 shows an example of the screen output to the output unit 112 by the search result output process (S405 of FIG. 4). Relative to the screen example shown in FIG. 10, the screen example shown in FIG. 15 additionally displays an operation bar 1507 used for moving image playback. The length of the related scene is indicated by the length of the operation bar 1507, and the time position of the currently displayed related scene 1002 is indicated by a marker 1508. In this example, Δt is set to 3 minutes and α to 1 minute, and the retrieval starting point time (the time of operation number 7 in FIG. 5) is 20:21. In this case, the moving image in the interval from 20:17 (retrieval starting point time − Δt − α) to 20:22 (retrieval starting point time + α) is extracted as the related scene. The related scene 1002 displayed on the screen is the moving image in the interval from 20:18 (retrieval starting point time − Δt) to 20:21 (retrieval starting point time). By moving the icons 1509 and 1510, the time span of the moving image displayed on the screen as the related scene 1002 can be changed.
As described above, according to the present embodiment, the time interval Δt of the related scene can be changed automatically according to the content of the search. A related scene of suitable length can therefore be attached to the search result.
The retrieval system according to the present embodiments has been described above; however, the present invention is not limited to these embodiments.
For example, in the above embodiments, the time at which the search for the scene addition target search result was started is used as the retrieval starting point time; in practice, however, a certain amount of time elapses between the user seeing the scene in the video and starting the search. Therefore, a time obtained by going back a prescribed time from the time at which the search for the scene addition target search result was started may be used as the retrieval starting point time.
The embodiments disclosed here are examples, and the present invention is not limited by them. The scope of the present invention is indicated not by the above description but by the scope of the claims, and all changes within the meaning and range of equivalency of the claims are included in the present invention.
Since the present invention is generally applicable to retrieval processing based on moving images, it can be applied not only to television broadcasts as described in the present embodiments, but also to moving image content on the Internet and to video shot by individuals. Further, the sharing method is not limited to display on a shared display as described in the present embodiments; the invention can also be used in various situations, such as attachment to e-mail, and its range of application is very broad.
The present invention is applicable to a related scene addition device, an information retrieval device, and the like that associate an information search result with a scene of moving image content when information retrieval is performed while the moving image content is being viewed.
Symbol description
100 related scene addition device
101 input unit
102 information retrieval execution unit
103 search result storage unit
104 operation history collection unit
105 operation history storage unit
106 timer
107 retrieval starting point time estimation unit
108 related scene extraction unit
109 image acquisition unit
110 image storage unit
111 search result output unit
112 output unit
113 search result transmission unit
114 shared television set
1601 portable terminal

Claims (14)

1. A related scene addition device that associates a related scene with a search result, the related scene being image data associated with a search, the related scene addition device comprising:
an image storage unit that stores image data and a playback time of the image data;
an information retrieval execution unit that searches for information according to a search condition entered by a user;
an operation history storage unit that stores a history of operations in which the search condition is associated with a time at which the search condition was accepted;
a retrieval starting point time estimation unit that estimates a retrieval starting point time from a retrieval starting point, the retrieval starting point being a history entry of the operations associated with a scene addition target search result, the scene addition target search result being a search result designated by the user from among the information searched by the information retrieval execution unit, and the retrieval starting point time being a time at which input of the search condition for obtaining the scene addition target search result was started; and
a related scene extraction unit that associates, with the scene addition target search result, image data played back in a time period including the retrieval starting point time estimated by the retrieval starting point time estimation unit.
2. The related scene addition device according to claim 1,
wherein the retrieval starting point time estimation unit estimates the retrieval starting point time from a retrieval starting point that is, among the history entries of the operations associated with the scene addition target search result, the history entry of the operation performed at the earliest time, the retrieval starting point time being a time at which input of the search condition for obtaining the scene addition target search result was started, and the scene addition target search result being a search result designated by the user from among the information searched by the information retrieval execution unit.
3. The related scene addition device according to claim 2,
wherein the retrieval starting point time estimation unit includes:
a word similarity calculation unit that calculates, for each history entry of the operations, a similarity between the scene addition target search result and a search condition entered before the time at which a first search condition was entered, the first search condition being the search condition entered in order to search for the scene addition target search result; and
a retrieval starting point estimation unit that, from among the similarities calculated by the word similarity calculation unit that are larger than a prescribed value, determines the similarity calculated using the search condition entered at the time farthest from the time at which the first search condition was entered, determines the history entry of the operation at that farthest time as the retrieval starting point, and estimates the retrieval starting point time from that retrieval starting point.
4. The related scene addition device according to claim 2,
wherein the retrieval starting point time estimation unit includes:
a word similarity calculation unit that calculates, for each history entry of the operations, a similarity between a first search condition and a search condition accepted before the time at which the first search condition was accepted, the first search condition being the search condition entered in order to search for the scene addition target search result; and
a retrieval starting point estimation unit that, from among the similarities calculated by the word similarity calculation unit that are larger than a prescribed value, determines the similarity calculated using the search condition accepted at the time farthest from the time at which the first search condition was accepted, determines the history entry of the operation at that farthest time as the retrieval starting point, and estimates the retrieval starting point time from that retrieval starting point.
5. The related scene addition device according to claim 2,
wherein the retrieval starting point time estimation unit includes:
a word similarity calculation unit that calculates, for each pair of temporally adjacent history entries of the operations entered before the time at which a first search condition was entered, a similarity between the search conditions contained in that pair of history entries, the first search condition being the search condition entered in order to search for the scene addition target search result; and
a retrieval starting point estimation unit that, from among the similarities calculated by the word similarity calculation unit that are at or below a prescribed value, determines the similarity calculated using the search condition entered at the time nearest to the time at which the first search condition was entered, determines the history entry of the operation at that nearest time as the retrieval starting point, and estimates the retrieval starting point time from that retrieval starting point.
6. The related scene addition device according to claim 2,
wherein the retrieval starting point time estimation unit includes:
a word similarity calculation unit that judges, for each pair of temporally adjacent history entries of the operations entered before the time at which a first search condition was entered, whether a common word exists between the search conditions contained in that pair of history entries, the first search condition being the search condition entered in order to search for the scene addition target search result; and
a retrieval starting point estimation unit that determines, as the retrieval starting point, the history entry, among the pairs of history entries judged to have no common word, of the operation whose search condition was entered at the time nearest to the time at which the first search condition was entered, and estimates the retrieval starting point time from that retrieval starting point.
7. The related scene addition device according to claim 2,
wherein the retrieval starting point time estimation unit includes:
a word similarity calculation unit that calculates, for each history entry of the operations, a similarity between the scene addition target search result and a search result based on a search condition entered before the time at which a first search condition was entered, the first search condition being the search condition entered in order to search for the scene addition target search result; and
a retrieval starting point estimation unit that, from among the similarities calculated by the word similarity calculation unit that are larger than a prescribed value, determines the similarity calculated using the search condition entered at the time farthest from the time at which the first search condition was entered, determines the history entry of the operation at that farthest time as the retrieval starting point, and estimates the retrieval starting point time from that retrieval starting point.
8. The related scene addition device according to claim 2,
wherein the retrieval starting point time estimation unit includes:
a word similarity calculation unit that calculates, for each pair of temporally adjacent history entries of the operations entered before the time at which a first search condition was entered, a similarity between search results based on the search conditions contained in that pair of history entries, the first search condition being the search condition entered in order to search for the scene addition target search result; and
a retrieval starting point estimation unit that, from among the similarities calculated by the word similarity calculation unit that are at or below a prescribed value, determines the similarity calculated using the search condition entered at the time nearest to the time at which the first search condition was entered, determines the history entry of the operation at that nearest time as the retrieval starting point, and estimates the retrieval starting point time from that retrieval starting point.
9. The related scene addition device according to any one of claims 1 to 8,
further comprising an electronic dictionary storage unit that stores words and information on the parts of speech of the words,
wherein the related scene extraction unit determines, by referring to the information stored in the electronic dictionary storage unit, the part of speech of a word associated with the scene addition target search result, decides a time span according to the determined part of speech, extracts, from the moving image data stored in the image storage unit, moving image data or still image data played back in the time span including the retrieval starting point time estimated by the retrieval starting point time estimation unit, and associates the extracted moving image data or still image data with the scene addition target search result.
10. The related scene addition device according to any one of claims 1 to 8,
further comprising a program information acquisition unit that acquires information on the genre of the moving image data stored in the image storage unit,
wherein the related scene extraction unit determines, by referring to the information acquired by the program information acquisition unit, the genre of the moving image data played back at the retrieval starting point time, decides a time span according to the determined genre, extracts, from the moving image data stored in the image storage unit, moving image data or still image data played back in the time span including the retrieval starting point time estimated by the retrieval starting point time estimation unit, and associates the extracted moving image data or still image data with the scene addition target search result.
11. The related scene addition device according to any one of claims 1 to 10,
further comprising a search result output unit that outputs, to an external device, the scene addition target search result with which the moving image data or the still image data has been associated by the related scene extraction unit.
12. searching system, comprise display unit and related scene applicator, described display unit shows dynamic image data, and described related scene applicator is associated related scene with result for retrieval, described related scene is dynamic image data or the static image data that becomes the retrieval opportunity
Described related scene applicator comprises:
an image storage unit that stores moving image data and the playback time of the moving image data, the moving image data being identical to the moving image data displayed by the display unit;
an information retrieval execution unit that accepts a search condition entered by a user and retrieves information according to the search condition;
an operation history storage unit that stores an operation history in which each search condition accepted by the information retrieval execution unit is associated with the time at which that search condition was accepted;
a retrieval start time estimation unit that determines a retrieval starting point according to the relevance between a scene-attachment target retrieval result and the operation history stored in the operation history storage unit, and estimates a retrieval start time from the retrieval starting point, the scene-attachment target retrieval result being the retrieval result specified by the user from among the retrieval results produced by the information retrieval execution unit, the retrieval starting point being the entry, in the operation history stored in the operation history storage unit, that is associated with the scene-attachment target retrieval result, and the retrieval start time being the time at which input of the search condition used to obtain the scene-attachment target retrieval result was started;
a related scene extraction unit that extracts, from the moving image data stored in the image storage unit, the moving image data or still image data that was being played back during a period including the retrieval start time estimated by the retrieval start time estimation unit, and associates the extracted moving image data or still image data with the scene-attachment target retrieval result; and
a retrieval result output unit that outputs, to the display unit, the scene-attachment target retrieval result with which the moving image data or the still image data has been associated by the related scene extraction unit;
wherein the display unit receives, from the retrieval result output unit, the scene-attachment target retrieval result with which the moving image data or the still image data has been associated by the related scene extraction unit, and displays the received scene-attachment target retrieval result.
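The estimation element above tracks, for each search condition, the time at which its input began, and walks the operation history back from the condition that produced the user-selected result to the start of the related chain of conditions. A minimal sketch of that idea, assuming a simple keyword-overlap notion of relevance (the `HistoryEntry` type and overlap heuristic are illustrative, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class HistoryEntry:
    search_condition: str   # keyword(s) the user typed
    input_time: float       # playback-clock time when input of this condition began

def estimate_retrieval_start_time(history, selected_result_terms):
    """Find the most recent history entry whose search condition relates to the
    user-selected retrieval result, follow the chain of term-sharing conditions
    backwards, and return the input time of the earliest entry in that chain --
    the estimated retrieval start time."""
    # Locate the most recent entry relevant to the selected result.
    start_index = None
    for i in range(len(history) - 1, -1, -1):
        if set(history[i].search_condition.split()) & set(selected_result_terms):
            start_index = i
            break
    if start_index is None:
        return None
    # Extend the chain backwards while consecutive conditions share a term.
    j = start_index
    while j > 0 and (set(history[j - 1].search_condition.split())
                     & set(history[j].search_condition.split())):
        j -= 1
    return history[j].input_time
```

For example, if the user typed "tower" at 12.0 s, refined it to "tower paris" at 15.5 s, and then selected a result matching "paris", the chain reaches back to the first query and 12.0 s is returned as the retrieval start time.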
13. A related scene addition method for use in a related scene addition device that associates a related scene with a retrieval result, the related scene being moving image data or still image data that served as the trigger for a retrieval,
the related scene addition device comprising:
an image storage unit that stores moving image data and the playback time of the moving image data; and
an operation history storage unit that stores an operation history in which each search condition entered by a user is associated with the time at which that search condition was accepted;
the related scene addition method comprising:
an information retrieval execution step of accepting a search condition entered by the user and retrieving information according to the search condition;
a retrieval start time estimation step of determining a retrieval starting point according to the relevance between a scene-attachment target retrieval result and the operation history stored in the operation history storage unit, and estimating a retrieval start time from the retrieval starting point, the scene-attachment target retrieval result being the retrieval result specified by the user from among the retrieval results produced in the information retrieval execution step, the retrieval starting point being the entry, in the operation history stored in the operation history storage unit, that is associated with the scene-attachment target retrieval result, and the retrieval start time being the time at which input of the search condition used to obtain the scene-attachment target retrieval result was started; and
a related scene extraction step of extracting, from the moving image data stored in the image storage unit, the moving image data or still image data that was being played back during a period including the retrieval start time estimated in the retrieval start time estimation step, and associating the extracted moving image data or still image data with the scene-attachment target retrieval result.
14. A program that causes a related scene to be associated with a retrieval result, the related scene being moving image data or still image data that served as the trigger for a retrieval,
wherein a storage unit stores moving image data and the playback time of the moving image data, and further stores an operation history in which each search condition entered by a user is associated with the time at which that search condition was accepted,
the program causing a computer to execute:
an information retrieval execution step of accepting a search condition entered by the user and retrieving information according to the search condition;
a retrieval start time estimation step of determining a retrieval starting point according to the relevance between a scene-attachment target retrieval result and the operation history stored in the storage unit, and estimating a retrieval start time from the retrieval starting point, the scene-attachment target retrieval result being the retrieval result specified by the user from among the retrieval results produced in the information retrieval execution step, the retrieval starting point being the entry, in the operation history stored in the storage unit, that is associated with the scene-attachment target retrieval result, and the retrieval start time being the time at which input of the search condition used to obtain the scene-attachment target retrieval result was started; and
a related scene extraction step of extracting, from the moving image data stored in the storage unit, the moving image data or still image data that was being played back during a period including the retrieval start time estimated in the retrieval start time estimation step, and associating the extracted moving image data or still image data with the scene-attachment target retrieval result.
CN200980119475XA 2008-08-22 2009-08-10 Related scene addition device and related scene addition method Active CN102084645B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2008214654 2008-08-22
JP2008-214654 2008-08-22
PCT/JP2009/003836 WO2010021102A1 (en) 2008-08-22 2009-08-10 Related scene addition device and related scene addition method

Publications (2)

Publication Number Publication Date
CN102084645A true CN102084645A (en) 2011-06-01
CN102084645B CN102084645B (en) 2013-06-19

Family

ID=41706995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200980119475XA Active CN102084645B (en) 2008-08-22 2009-08-10 Related scene addition device and related scene addition method

Country Status (4)

Country Link
US (1) US8174579B2 (en)
JP (1) JP4487018B2 (en)
CN (1) CN102084645B (en)
WO (1) WO2010021102A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102072989B1 (en) * 2013-01-14 2020-03-02 삼성전자주식회사 Apparatus and method for composing make-up for supporting the multi device screen
US10915543B2 (en) 2014-11-03 2021-02-09 SavantX, Inc. Systems and methods for enterprise data search and analysis
US9990441B2 (en) * 2014-12-05 2018-06-05 Facebook, Inc. Suggested keywords for searching content on online social networks
DE102015208060A1 (en) 2015-02-12 2016-08-18 Ifm Electronic Gmbh Method for operating a pulse generator for capacitive sensors and pulse generator
CN104866308A (en) * 2015-05-18 2015-08-26 百度在线网络技术(北京)有限公司 Scenario image generation method and apparatus
US11328128B2 (en) 2017-02-28 2022-05-10 SavantX, Inc. System and method for analysis and navigation of data
EP3590053A4 (en) 2017-02-28 2020-11-25 SavantX, Inc. System and method for analysis and navigation of data
US11488033B2 (en) * 2017-03-23 2022-11-01 ROVl GUIDES, INC. Systems and methods for calculating a predicted time when a user will be exposed to a spoiler of a media asset
WO2020213757A1 (en) * 2019-04-17 2020-10-22 엘지전자 주식회사 Word similarity determination method
WO2022198474A1 (en) * 2021-03-24 2022-09-29 Sas Institute Inc. Speech-to-analytics framework with support for large n-gram corpora

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11353325A (en) 1998-06-10 1999-12-24 Hitachi Ltd Synchronous display system for video and relative information
CN1371502B (en) * 1999-06-30 2010-05-05 夏普公司 Dynamic image search information recording apparatus and dynamic image searching device
WO2003039143A1 (en) * 2001-10-30 2003-05-08 Nikon Corporation Image accumulation apparatus, image accumulation support apparatus, image accumulation system, image control apparatus, image storage apparatus
JP2003150928A (en) * 2001-11-15 2003-05-23 Nikon Gijutsu Kobo:Kk Image management device and image saving device
JP2003304523A (en) * 2002-02-08 2003-10-24 Ntt Docomo Inc Information delivery system, information delivery method, information delivery server, content delivery server, and terminal
JP2004080476A (en) 2002-08-20 2004-03-11 Sanyo Electric Co Ltd Digital video reproducing device
JP2005033619A (en) * 2003-07-08 2005-02-03 Matsushita Electric Ind Co Ltd Contents management device and contents management method
JP4283080B2 (en) 2003-10-06 2009-06-24 株式会社メガチップス Image search system
JP2005333280A (en) * 2004-05-19 2005-12-02 Dowango:Kk Program link system
JP4252030B2 (en) * 2004-12-03 2009-04-08 シャープ株式会社 Storage device and computer-readable recording medium
JP4660824B2 (en) * 2006-10-11 2011-03-30 株式会社日立製作所 Information storage device for storing attribute information of media scene, information display device, and information storage method

Also Published As

Publication number Publication date
JPWO2010021102A1 (en) 2012-01-26
JP4487018B2 (en) 2010-06-23
US8174579B2 (en) 2012-05-08
WO2010021102A1 (en) 2010-02-25
US20110008020A1 (en) 2011-01-13
CN102084645B (en) 2013-06-19

Similar Documents

Publication Publication Date Title
CN102084645B (en) Related scene addition device and related scene addition method
CN100550014C (en) Information indexing device
US9202523B2 (en) Method and apparatus for providing information related to broadcast programs
US9008489B2 (en) Keyword-tagging of scenes of interest within video content
CN101595481B (en) Method and system for facilitating information searching on electronic devices
KR101811468B1 (en) Semantic enrichment by exploiting top-k processing
US20110106809A1 (en) Information presentation apparatus and mobile terminal
KR20130083829A (en) Automatic image discovery and recommendation for displayed television content
KR20030007727A (en) Automatic video retriever genie
JP2010055501A (en) Information providing server, information providing method and information providing system
JP4305080B2 (en) Video playback method and system
CN106815284A (en) The recommendation method and recommendation apparatus of news video
KR101140318B1 (en) Keyword Advertising Method and System Based on Meta Information of Multimedia Contents Information like Commercial Tags etc.
KR101122737B1 (en) Apparatus and method for establishing search database for knowledge node coupling structure
JP2005501343A (en) Automatic question construction from user selection in multimedia content
KR20200049192A (en) Providing Method for virtual advertisement and service device supporting the same
JP5335500B2 (en) Content search apparatus and computer program
KR20200024541A (en) Providing Method of video contents searching and service device thereof
KR20110043568A (en) Keyword Advertising Method and System Based on Meta Information of Multimedia Contents Information like Commercial Tags etc.
JP4774087B2 (en) Movie evaluation method, apparatus and program
KR101624172B1 (en) Appratus and method for management of contents information
CN112015972A (en) Information recommendation method and device, electronic equipment and storage medium
JP5600498B2 (en) Information selection device, server device, information selection method, and program
KR20200023094A (en) Method of simple image searching and service device thereof
KR101283726B1 (en) Method and System for Providing Information Relating to Moving Picture

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: MATSUSHITA ELECTRIC (AMERICA) INTELLECTUAL PROPERTY

Free format text: FORMER OWNER: MATSUSHITA ELECTRIC INDUSTRIAL CO, LTD.

Effective date: 20141009

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20141009

Address after: Room 200, No. 2000 Seaman Avenue, Torrance, California, United States

Patentee after: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA

Address before: Osaka Japan

Patentee before: Matsushita Electric Industrial Co.,Ltd.