CN107180058A - Method and apparatus for querying based on caption information - Google Patents
- Publication number: CN107180058A (application CN201610140826.7A)
- Authority: CN (China)
- Prior art keywords: information, candidate query, query information, caption information, caption
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7844—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using original textual content or text extracted from visual content or transcript of audio data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/68—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/683—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/685—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using automatically derived transcript of audio data, e.g. lyrics
Abstract
It is an object of the present invention to provide a method and apparatus for querying based on caption information. The method according to the invention comprises the following steps: when caption information corresponding to the audio/video being played is presented, providing one or more candidate query information items corresponding to the current caption information; and when the user selects one of the candidate query information items provided on the current playback interface, performing a search operation based on the selected candidate query information. The invention is advantageous in that, by providing candidate query information corresponding to the current caption information, the user can directly select candidate query information of interest from the displayed captions while watching a video, which offers a brand-new way of quickly initiating a search, reduces user operations, and improves efficiency.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a method and apparatus for querying based on caption information.
Background art
In the prior art, a user generally cannot operate on captions while watching audio or video. If the user finds information of interest in the captions presented with the audio or video and wants to search based on that information, the user has to navigate to a corresponding search interface and enter the information manually, and it may take a long time before the corresponding search results can be viewed. Therefore, under prior-art schemes, a user cannot quickly initiate a search based on the captions of audio or video.
Summary of the invention
It is an object of the invention to provide a kind of method and apparatus for being inquired about based on caption information.
According to one aspect of the invention, a method for querying based on caption information is provided, wherein the method comprises the following steps:
- when caption information corresponding to the audio/video being played is presented, providing one or more candidate query information items corresponding to the current caption information;
- when the user selects one of the candidate query information items provided on the current playback interface, performing a search operation based on the selected candidate query information.
According to another aspect of the invention, a query apparatus for querying based on caption information is also provided, wherein the query apparatus comprises:
- a device for providing, when caption information corresponding to the audio/video being played is presented, one or more candidate query information items corresponding to the current caption information;
- a device for performing, when the user selects one of the candidate query information items provided on the current playback interface, a search operation based on the selected candidate query information.
Compared with the prior art, the present invention has the following advantages: by providing candidate query information corresponding to the current caption information, the user can conveniently select candidate query information of interest directly from the displayed captions while watching a video in order to perform a search, which offers a brand-new way of quickly initiating a search, reduces user operations, and improves efficiency.
Brief description of the drawings
Other features, objects and advantages of the present invention will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
Fig. 1 illustrates a flow chart of a method for querying based on caption information according to the present invention;
Fig. 2 illustrates a structural schematic diagram of a query apparatus for querying based on caption information according to the present invention.
The same or similar reference numerals in the drawings denote the same or similar parts.
Embodiments
The present invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 illustrates a flow chart of a method for querying based on caption information according to the present invention. The method according to the present invention includes step S1 and step S2.
The method according to the invention is implemented by a query apparatus contained in a computer device. The computer device includes an electronic device capable of automatically performing numerical computation and/or information processing according to preset or stored instructions; its hardware includes but is not limited to a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The computer device includes a network device and/or a user device. The network device includes but is not limited to a single network server, a server group composed of multiple web servers, or a cloud composed of a large number of hosts or web servers based on cloud computing, where cloud computing is a kind of distributed computing: a super virtual computer composed of a group of loosely coupled computers. The user device includes but is not limited to any electronic product that can interact with the user via a keyboard, mouse, remote control, touch pad, voice-control device, or the like, for example a personal computer, tablet computer, smartphone, PDA, game console, or IPTV. The network in which the user device and the network device reside includes but is not limited to the Internet, a wide area network, a metropolitan area network, a local area network, a VPN, and the like.
It should be noted that the user device, the network device, and the network are merely examples; other existing or future user devices, network devices, and networks, insofar as applicable to the present invention, should also be included within the scope of the present invention and are incorporated herein by reference.
Before the steps of Fig. 1, the query apparatus first determines one or more candidate query information items by performing step S3 (not shown) and step S4 (not shown).
In step S3, the query apparatus obtains the caption information of the audio/video.
Wherein, the caption information includes the subtitle file corresponding to the audio/video.
In step S4, the query apparatus determines, based on the caption information, one or more candidate query information items that can be used for querying.
Specifically, the query apparatus performs semantic analysis on the content information in the caption information to determine one or more candidate query information items that can be used for querying.
For example, the query apparatus performs semantic analysis on the content information in the caption information and takes one or more keywords in the content information as candidate query information.
Preferably, the query apparatus performs semantic analysis on the content information in the caption information and, based on predetermined screening rules, filters out from the caption information one or more candidate query information items that can be used for querying.
For example, uncommon words contained in the caption information are taken as candidate query information.
As another example, hot query information contained in the caption information, or query information the user has previously searched for, is taken as candidate query information.
Preferably, when the query apparatus is contained in the network device, after performing the operations of step S3 and step S4, the query apparatus sends the determined one or more candidate query information items corresponding to the caption information to the corresponding user device.
According to a first example of the present invention, the query apparatus is contained in a player application of a mobile device. The query apparatus obtains the subtitle file of the video video_1 to be played in the player application, performs semantic analysis on the content information in the subtitle file of video_1, and obtains three candidate query information items that can be used for querying: query_1, query_2 and query_3.
Preferably, before the steps of Fig. 1, the method also includes step S5 (not shown).
In step S5, the query apparatus generates search link information corresponding respectively to the one or more candidate query information items.
Wherein, the search link information includes link information for obtaining the search results corresponding to the candidate query information.
Specifically, the query apparatus performs a search based on each of the one or more candidate query information items, obtains the search results corresponding respectively to the one or more candidate query information items, and generates the corresponding search link information for each of them.
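Step S5 — pre-generating a search link per candidate query item — can be sketched as below. The endpoint URL is a placeholder (the patent names no particular search engine), and `build_search_links` is an invented name for this sketch.

```python
from urllib.parse import urlencode

# Hypothetical search endpoint; any engine accepting a "q" parameter would do
SEARCH_ENDPOINT = "https://search.example.com/s"

def build_search_links(candidates):
    """Map each candidate query item to link information that retrieves
    its search results."""
    return {q: f"{SEARCH_ENDPOINT}?{urlencode({'q': q})}" for q in candidates}
```

The patent's variant, where results are fetched in advance, would follow each of these links once and cache the response alongside the link.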
Then, referring to Fig. 1, in step S1, when caption information corresponding to the audio/video being played is presented, the query apparatus provides one or more candidate query information items corresponding to the current caption information.
Preferably, step S1 further comprises step S101 (not shown) and step S102 (not shown).
In step S101, the query apparatus generates layer information for the candidate query information corresponding to the caption information.
Preferably, the layer information has a transparent background.
Specifically, based on the size of the current playback interface, the query apparatus generates layer information of the same size, and adds to the layer information search trigger elements for triggering search operations based on the candidate query information, for example a label or link corresponding to each candidate query information item.
Preferably, the query apparatus adds to the layer information the search link information, generated in step S5, that corresponds to each candidate query information item.
Preferably, the query apparatus determines the display position of each of the one or more candidate query information items on the current playback interface, and adds, in the corresponding region of the layer information, the search trigger element or search link information corresponding to each candidate query information item.
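Step S101 and its refinements (a transparent layer matching the playback interface, with a search trigger element placed at each candidate's display position) could be modeled with a small data structure like this. The dataclass names, label numbering, and coordinate scheme are assumptions made for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class TriggerElement:
    label: str   # e.g. label_1
    query: str   # candidate query this element triggers
    link: str    # search link information for the candidate
    x: int       # display position on the playback interface
    y: int

@dataclass
class Layer:
    width: int
    height: int
    transparent: bool = True   # transparent background, per the patent
    elements: list = field(default_factory=list)

def build_layer(interface_size, positioned_candidates):
    """Create a layer matching the playback interface size, adding one
    search trigger element per candidate at its display position."""
    w, h = interface_size
    layer = Layer(width=w, height=h)
    for i, (query, link, (x, y)) in enumerate(positioned_candidates, 1):
        layer.elements.append(TriggerElement(f"label_{i}", query, link, x, y))
    return layer
```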
Then, in step S102, when caption information corresponding to the audio/video being played is presented, the query apparatus outputs the layer information accordingly, so as to provide the user with one or more candidate query information items corresponding to the current caption information.
Preferably, when caption information corresponding to the audio/video being played is presented, the query apparatus provides the one or more candidate query information items corresponding to the current caption information in a predetermined display style.
For example, the one or more candidate query information items in the caption information are shown in bold, in a different font color, and so on.
Continuing the aforementioned first example, query_1 is contained in the caption information sub_t1 corresponding to temporal information time_1, and query_2 and query_3 are contained in the caption information sub_t2 corresponding to temporal information time_2.
In step S101, the query apparatus generates the layer layer_1 for the candidate query information query_1 corresponding to caption information sub_t1, and adds, in the region of layer_1 corresponding to the display position of query_1, a label label_1, which is used to trigger a search operation based on query_1. Similarly, the query apparatus generates the layer layer_2 corresponding to query_2 and query_3; layer_2 includes the labels label_2 and label_3 for triggering search operations for query_2 and query_3.
When caption information sub_t1 is presented, the query apparatus accordingly outputs layer information layer_1, overlaying it on the current playback interface, so as to provide the user with the candidate query information query_1 corresponding to the current caption information sub_t1. Similarly, when caption sub_t2 is presented, the query apparatus accordingly outputs its corresponding layer layer_2, so as to provide the user with the candidate query information query_2 and query_3 corresponding to the current caption information sub_t2.
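The timing behavior in this example — output layer_1 while sub_t1 is on screen, layer_2 while sub_t2 is — amounts to a lookup from playback time to layer. A minimal sketch, with made-up times standing in for time_1 and time_2:

```python
def layer_for_time(schedule, t):
    """Given a schedule of (start, end, layer_id) caption intervals and a
    playback time in seconds, return the layer to overlay, or None when
    no caption is being presented."""
    for start, end, layer_id in schedule:
        if start <= t < end:
            return layer_id
    return None

# Illustrative timing: sub_t1/layer_1 around time_1, sub_t2/layer_2 around time_2
schedule = [(1.0, 3.0, "layer_1"), (4.0, 6.0, "layer_2")]
```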
With continued reference to Fig. 1, in step S2, when the user selects one of the candidate query information items provided on the current playback interface, the query apparatus performs a search operation based on the selected candidate query information.
Specifically, the query apparatus determines the selected candidate query information based on the user's selection operation, and performs a search operation based on the selected candidate query information.
Wherein, the selection operation includes any of various operations that can be used to select candidate query information, for example a click operation, a long-press operation on a touch-screen device, and so on.
Preferably, the method further comprises step S6 (not shown).
In step S6, the query apparatus, based on the selected candidate query information, triggers a predetermined search engine to perform a search, so as to obtain the corresponding search results.
Continuing the aforementioned first example, the user clicks the label label_1 corresponding to query_1 in the playback interface; the query apparatus then, based on the selected candidate query information query_1, triggers the search engine to perform a search, so as to obtain the search results corresponding to the candidate query information query_1.
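Steps S2 and S6 — resolving the user's click or long-press to a trigger element and handing its candidate query to the search engine — might look like the following sketch. The hit-testing rule and the `search_engine` callback are assumptions; the patent specifies only that a selection triggers the search.

```python
def handle_selection(elements, click_x, click_y, search_engine, hit_radius=20):
    """elements: list of dicts with 'query', 'x', 'y' (one per label on the
    layer). Resolve a click or long-press to the trigger element whose
    display position it falls on, and run the search for that element's
    candidate query; return None if nothing was hit."""
    for el in elements:
        if abs(click_x - el["x"]) <= hit_radius and abs(click_y - el["y"]) <= hit_radius:
            return search_engine(el["query"])
    return None
```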
Preferably, the method also includes step S7 (not shown).
In step S7, the query apparatus presents to the user the search results obtained after performing the search operation based on the selected candidate query information.
Specifically, the query apparatus may present the obtained search results to the user in a new page.
Alternatively, the query apparatus generates a window in the current playback interface and presents the obtained search results to the user therein.
Preferably, step S7 further comprises step S701 (not shown) and step S702 (not shown).
In step S701, the query apparatus prompts the user whether they need to view the search results.
In step S702, when the user confirms that they need to view them, the query apparatus presents the search results to the user.
Continuing the aforementioned first example, the query apparatus pops up a window prompting the user whether to view the search results; the user chooses to view them, and the query apparatus then presents to the user, in a new page, the search results corresponding to the candidate query information query_1.
According to the method of the present invention, by providing candidate query information corresponding to the current caption information, the user can conveniently select candidate query information of interest directly from the displayed captions while watching a video in order to perform a search, which offers a brand-new way of quickly initiating a search, reduces user operations, and improves efficiency.
Fig. 2 illustrates a structural schematic diagram of a query apparatus for querying based on caption information according to the present invention.
The query apparatus according to the present invention includes: a device for providing, when caption information corresponding to the audio/video being played is presented, one or more candidate query information items corresponding to the current caption information (hereinafter referred to as the "providing device 1"), and a device for performing, when the user selects one of the candidate query information items provided on the current playback interface, a search operation based on the selected candidate query information (hereinafter referred to as the "search device 2").
The query apparatus first determines one or more candidate query information items; to this end, it includes a device for obtaining the caption information of the audio/video (not shown, hereinafter referred to as the "caption obtaining device"), and a device for determining, based on the caption information, one or more candidate query information items that can be used for querying (not shown, hereinafter referred to as the "candidate determining device").
The caption obtaining device obtains the caption information of the audio/video.
Wherein, the caption information includes the subtitle file corresponding to the audio/video.
The candidate determining device determines, based on the caption information, one or more candidate query information items that can be used for querying.
Specifically, the candidate determining device performs semantic analysis on the content information in the caption information to determine one or more candidate query information items that can be used for querying.
For example, the candidate determining device performs semantic analysis on the content information in the caption information and takes one or more keywords in the content information as candidate query information.
Preferably, the candidate determining device performs semantic analysis on the content information in the caption information and, based on predetermined screening rules, filters out from the caption information one or more candidate query information items that can be used for querying.
For example, uncommon words contained in the caption information are taken as candidate query information.
As another example, hot query information contained in the caption information, or query information the user has previously searched for, is taken as candidate query information.
Preferably, when the query apparatus is contained in the network device, after the candidate determining device has performed its operations, the query apparatus sends the determined one or more candidate query information items corresponding to the caption information to the corresponding user device.
According to the first example of the present invention, the query apparatus is contained in a player application of a mobile device. The caption obtaining device obtains the subtitle file of the video video_1 to be played in the player application, performs semantic analysis on the content information in the subtitle file of video_1, and obtains three candidate query information items that can be used for querying: query_1, query_2 and query_3.
Preferably, the query apparatus also includes a device for generating search link information corresponding respectively to the one or more candidate query information items (not shown, hereinafter referred to as the "link generating device").
The link generating device generates search link information corresponding respectively to the one or more candidate query information items.
Wherein, the search link information includes link information for obtaining the search results corresponding to the candidate query information.
Specifically, the link generating device performs a search based on each of the one or more candidate query information items, obtains the search results corresponding respectively to the one or more candidate query information items, and generates the corresponding search link information for each of them.
Then, referring to Fig. 2, when caption information corresponding to the audio/video being played is presented, the providing device 1 provides one or more candidate query information items corresponding to the current caption information.
Preferably, the providing device 1 further comprises a device for generating layer information for the candidate query information corresponding to the caption information (not shown, hereinafter referred to as the "layer generating device"), and a device for outputting the layer information accordingly when caption information corresponding to the audio/video being played is presented, so as to provide the user with one or more candidate query information items corresponding to the current caption information (not shown, hereinafter referred to as the "layer output device").
The layer generating device generates layer information for the candidate query information corresponding to the caption information.
Preferably, the layer information has a transparent background.
Specifically, based on the size of the current playback interface, the layer generating device generates layer information of the same size, and adds to the layer information search trigger elements for triggering search operations based on the candidate query information, for example a label or link corresponding to each candidate query information item.
Preferably, the layer generating device adds to the layer information the search link information, generated by the link generating device, that corresponds to each candidate query information item.
Preferably, the layer generating device determines the display position of each of the one or more candidate query information items on the current playback interface, and adds, in the corresponding region of the layer information, the search trigger element or search link information corresponding to each candidate query information item.
Then, when caption information corresponding to the audio/video being played is presented, the layer output device outputs the layer information accordingly, so as to provide the user with one or more candidate query information items corresponding to the current caption information.
Preferably, when caption information corresponding to the audio/video being played is presented, the providing device 1 provides the one or more candidate query information items corresponding to the current caption information in a predetermined display style.
For example, the one or more candidate query information items in the caption information are shown in bold, in a different font color, and so on.
Continuing the aforementioned first example, query_1 is contained in the caption information sub_t1 corresponding to temporal information time_1, and query_2 and query_3 are contained in the caption information sub_t2 corresponding to temporal information time_2.
The layer generating device generates the layer layer_1 for the candidate query information query_1 corresponding to caption information sub_t1, and adds, in the region of layer_1 corresponding to the display position of query_1, a label label_1, which is used to trigger a search operation based on query_1. Similarly, the layer generating device generates the layer layer_2 corresponding to query_2 and query_3, and adds in layer_2 the labels label_2 and label_3 for triggering search operations for query_2 and query_3.
When caption information sub_t1 is presented, the layer output device accordingly outputs layer information layer_1, overlaying it on the current playback interface, so as to provide the user with the candidate query information query_1 corresponding to the current caption information sub_t1. Similarly, when caption sub_t2 is presented, the layer output device accordingly outputs its corresponding layer layer_2, so as to provide the user with the candidate query information query_2 and query_3 corresponding to the current caption information sub_t2.
With continued reference to Fig. 2, when the user selects one of the candidate query information items provided on the current playback interface, the search device 2 performs a search operation based on the selected candidate query information.
Specifically, the search device 2 determines the selected candidate query information based on the user's selection operation, and performs a search operation based on the selected candidate query information.
Wherein, the selection operation includes any of various operations that can be used to select candidate query information, for example a click operation, a long-press operation on a touch-screen device, and so on.
Preferably, the query apparatus further comprises a device for triggering, based on the selected candidate query information, a predetermined search engine to perform a search so as to obtain the corresponding search results (not shown, hereinafter referred to as the "search triggering device").
The search triggering device, based on the selected candidate query information, triggers the predetermined search engine to perform a search, so as to obtain the corresponding search results.
Continuing the aforementioned first example, the user clicks the label label_1 corresponding to query_1 in the playback interface; the search triggering device then, based on the selected candidate query information query_1, triggers the search engine to perform a search, so as to obtain the search results corresponding to the candidate query information query_1.
Preferably, the query apparatus also includes a device for presenting to the user the search results obtained after performing the search operation based on the selected candidate query information (not shown, hereinafter referred to as the "result presenting device").
The result presenting device presents to the user the search results obtained after performing the search operation based on the selected candidate query information.
Specifically, the result presenting device may present the obtained search results to the user in a new page.
Alternatively, the result presenting device generates a window in the current playback interface and presents the obtained search results to the user therein.
Preferably, the result presenting device further comprises a device for prompting the user whether they need to view the search results (not shown, hereinafter referred to as the "prompting device"), and a device for presenting the search results to the user when the user confirms that they need to view them (not shown, hereinafter referred to as the "sub-presenting device").
The prompting device prompts the user whether they need to view the search results;
when the user confirms that they need to view them, the sub-presenting device presents the search results to the user.
Continuing the aforementioned first example, the prompting device pops up a window prompting the user whether to view the search results; the user chooses to view them, and the sub-presenting device then presents to the user, in a new page, the search results corresponding to the candidate query information query_1.
According to the solution of the present invention, by providing candidate query information corresponding to the current caption information, the user can conveniently select candidate query information of interest directly from the displayed captions while watching a video in order to perform a search, which offers a brand-new way of quickly initiating a search, reduces user operations, and improves efficiency.
The software program of the present invention may be executed by a processor to implement the steps or functions described above. Likewise, the software program of the present invention (including related data structures) may be stored in a computer-readable recording medium, for example a RAM memory, a magnetic or optical drive, a floppy disk, or a similar device. In addition, some steps or functions of the present invention may be implemented in hardware, for example as a circuit that cooperates with a processor to perform each function or step.
In addition, part of the present invention may be embodied as a computer program product, for example computer program instructions which, when executed by a computer, may invoke or provide the method and/or technical solution according to the present invention through the operation of the computer. The program instructions that invoke the method of the present invention may be stored in a fixed or removable recording medium, transmitted via broadcast or a data stream in another signal-bearing medium, and/or stored in the working memory of a computer device that runs according to the program instructions. Here, an embodiment of the present invention includes a device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the device is triggered to run the method and/or technical solution based on the foregoing embodiments of the present invention.
It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from its spirit or essential characteristics. Therefore, the embodiments should be regarded in every respect as exemplary and non-restrictive; the scope of the present invention is defined by the appended claims rather than by the above description, and it is intended that all changes falling within the meaning and scope of equivalency of the claims be included in the present invention. No reference sign in a claim should be construed as limiting the claim concerned. In addition, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. A plurality of units or devices stated in a system claim may also be implemented by a single unit or device through software or hardware. Words such as "first" and "second" are used to denote names and do not denote any particular order.
Although exemplary embodiments have been specifically shown and described above, it will be understood by those skilled in the art that variations in form and detail may be made without departing from the spirit and scope of the claims. The protection sought here is set out in the appended claims. These and other aspects of the embodiments are defined in the following numbered clauses:
1. A method for querying based on caption information, wherein the method comprises the following steps:
- when presenting caption information corresponding to the audio/video being played, providing one or more items of candidate query information corresponding to the current caption information;
- when the user selects one item of the provided candidate query information on the current playback interface, performing a search operation based on the selected candidate query information.
2. the method according to clause 1, wherein, it is described that caption information corresponding with played audio/video is being presented
When there is provided further comprise following step the step of the one or more candidate query information corresponding with current caption information
Suddenly:
The map data mining platform of-generation candidate query the information corresponding with the caption information;
- present caption information corresponding with played audio/video when, correspondingly export the map data mining platform, with to
User provides the one or more candidate query information corresponding with current caption information.
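The "layer information" of clause 2 is an overlay record generated per caption and output in step with it. As one hedged illustration (the `LayerInfo` dataclass and its field names are invented for this sketch; the clause does not fix a data structure), it could carry the caption text, the display interval, and the candidate queries to overlay:

```python
# Hypothetical shape for the layer information of clause 2 -- the patent
# does not prescribe these fields; they are chosen for illustration.
from dataclasses import dataclass, field

@dataclass
class LayerInfo:
    caption_text: str   # the caption currently being presented
    start_ms: int       # when the caption (and its layer) appears
    end_ms: int         # when both are withdrawn together
    candidates: list = field(default_factory=list)  # candidate query information

def make_layer(caption_text, start_ms, end_ms, candidates):
    """Generate the layer information corresponding to one caption."""
    return LayerInfo(caption_text, start_ms, end_ms, list(candidates))

layer = make_layer("Welcome to Paris", 1000, 3500, ["Paris"])
```

Tying the layer's lifetime to the caption's display interval is what makes "correspondingly outputting" the layer straightforward: the player shows and hides both on the same timestamps.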
3. the method according to clause 1 or 2, wherein, it is described that captions letter corresponding with played audio/video is being presented
There is provided further comprising the steps of the step of the one or more candidate query information corresponding with current caption information during breath:
- when caption information corresponding with played audio/video is presented, with predetermined Show Styles offer and currently
One or more corresponding of candidate query information of caption information.
4. the method according to any one of clause 1 to 3, wherein, methods described further comprises the steps:
- selected candidate query information is based on, trigger predetermined search engine and scan for, to obtain corresponding search
As a result.
5. the method according to any one of clause 1 to 4, wherein, methods described is further comprising the steps of:
- use will be presented to based on the search result obtained after selected candidate query information execution search operation
Family.
6. the method according to clause 4 or 5, wherein, it is described search to be performed based on selected candidate query information
The step of search result obtained after operation is presented to the user further comprises the steps:
- prompt the user whether to need to check search result;
- when user determines to need to check, the search result is presented to the user.
7. the method according to any one of clause 1 to 6, wherein, methods described is further comprising the steps of:
The caption information of-acquisition audio/video;
- determined based on the caption information available for one or more of the candidate query information inquired about.
8. the method according to clause 7, wherein, it is described to be determined based on the caption information available for one inquired about
The step of item or multinomial candidate query information, further comprises the steps:
- semantic analysis processing is carried out to the content information in the caption information, to determine can be used for being inquired about one
Item or multinomial candidate query information.
9. the method according to clause 7 or 8, wherein, methods described is further comprising the steps of:
- generation search link information corresponding with one or more of candidate query information difference.
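The per-candidate search links of clause 9 can be sketched minimally. The base URL below is a placeholder, not an engine named by the patent, and `search_links` is an invented name:

```python
# Illustrative sketch of clause 9: one search link per item of
# candidate query information. The engine URL is a placeholder.
from urllib.parse import urlencode

def search_links(candidates, base="https://search.example.com/search"):
    """Map each candidate query to its corresponding search link,
    URL-encoding the query string."""
    return {c: base + "?" + urlencode({"q": c}) for c in candidates}

links = search_links(["subtitle", "Baker Street"])
```

Generating the links ahead of time, alongside the layer information of clause 2, means selecting a candidate on the playback interface needs no further processing before the search is triggered.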
10. A query device for querying based on caption information, wherein the query device comprises:
- a device for providing, when presenting caption information corresponding to the audio/video being played, one or more items of candidate query information corresponding to the current caption information;
- a device for performing, when the user selects one item of the provided candidate query information on the current playback interface, a search operation based on the selected candidate query information.
11. The query device according to clause 10, wherein the device for providing, when presenting caption information corresponding to the audio/video being played, one or more items of candidate query information corresponding to the current caption information further comprises:
- a device for generating layer information that corresponds to the caption information and contains search links corresponding to the candidate query information;
- a device for correspondingly outputting the layer information when presenting caption information corresponding to the audio/video being played, so as to provide the user with the one or more items of candidate query information corresponding to the current caption information.
12. The query device according to clause 10 or 11, wherein the device for providing, when presenting caption information corresponding to the audio/video being played, one or more items of candidate query information corresponding to the current caption information is further used for:
- when presenting caption information corresponding to the audio/video being played, providing, in a predetermined display style, the one or more items of candidate query information corresponding to the current caption information.
13. The query device according to any one of clauses 10 to 12, wherein the query device further comprises:
- a device for triggering, based on the selected candidate query information, a predetermined search engine to perform a search, so as to obtain a corresponding search result.
14. The query device according to any one of clauses 10 to 13, wherein the query device further comprises:
- a device for presenting to the user the search result obtained after the search operation is performed based on the selected candidate query information.
15. The query device according to clause 13 or 14, wherein the device for presenting to the user the search result obtained after the search operation is performed based on the selected candidate query information further comprises:
- a device for prompting the user whether to view the search result;
- a device for presenting the search result to the user when the user chooses to view it.
16. The query device according to any one of clauses 8 to 12, wherein the query device further comprises:
- a device for obtaining the caption information of the audio/video;
- a device for determining, based on the caption information, one or more items of candidate query information available for querying.
17. The query device according to clause 16, wherein the device for determining, based on the caption information, one or more items of candidate query information available for querying further comprises:
- a device for performing semantic analysis on the content information in the caption information to determine the one or more items of candidate query information available for querying.
18. The query device according to clause 16 or 17, wherein the query device further comprises:
- a device for generating search link information corresponding respectively to the one or more items of candidate query information.
Claims (18)
1. A method for querying based on caption information, wherein the method comprises the following steps:
- when presenting caption information corresponding to the audio/video being played, providing one or more items of candidate query information corresponding to the current caption information;
- when the user selects one item of the provided candidate query information on the current playback interface, performing a search operation based on the selected candidate query information.
2. The method according to claim 1, wherein the step of providing, when presenting caption information corresponding to the audio/video being played, one or more items of candidate query information corresponding to the current caption information further comprises the following steps:
- generating layer information of the candidate query information corresponding to the caption information;
- when presenting caption information corresponding to the audio/video being played, correspondingly outputting the layer information, so as to provide the user with the one or more items of candidate query information corresponding to the current caption information.
3. The method according to claim 1 or 2, wherein the step of providing, when presenting caption information corresponding to the audio/video being played, one or more items of candidate query information corresponding to the current caption information further comprises the following step:
- when presenting caption information corresponding to the audio/video being played, providing, in a predetermined display style, the one or more items of candidate query information corresponding to the current caption information.
4. The method according to any one of claims 1 to 3, wherein the method further comprises the following step:
- based on the selected candidate query information, triggering a predetermined search engine to perform a search, so as to obtain a corresponding search result.
5. The method according to any one of claims 1 to 4, wherein the method further comprises the following step:
- presenting to the user the search result obtained after the search operation is performed based on the selected candidate query information.
6. The method according to claim 4 or 5, wherein the step of presenting to the user the search result obtained after the search operation is performed based on the selected candidate query information further comprises the following steps:
- prompting the user whether to view the search result;
- when the user chooses to view it, presenting the search result to the user.
7. The method according to any one of claims 1 to 6, wherein the method further comprises the following steps:
- obtaining the caption information of the audio/video;
- determining, based on the caption information, one or more items of candidate query information available for querying.
8. The method according to claim 7, wherein the step of determining, based on the caption information, one or more items of candidate query information available for querying further comprises the following step:
- performing semantic analysis on the content information in the caption information to determine the one or more items of candidate query information available for querying.
9. The method according to claim 7 or 8, wherein the method further comprises the following step:
- generating search link information corresponding respectively to the one or more items of candidate query information.
10. A query device for querying based on caption information, wherein the query device comprises:
- a device for providing, when presenting caption information corresponding to the audio/video being played, one or more items of candidate query information corresponding to the current caption information;
- a device for performing, when the user selects one item of the provided candidate query information on the current playback interface, a search operation based on the selected candidate query information.
11. The query device according to claim 10, wherein the device for providing, when presenting caption information corresponding to the audio/video being played, one or more items of candidate query information corresponding to the current caption information further comprises:
- a device for generating layer information that corresponds to the caption information and contains search links corresponding to the candidate query information;
- a device for correspondingly outputting the layer information when presenting caption information corresponding to the audio/video being played, so as to provide the user with the one or more items of candidate query information corresponding to the current caption information.
12. The query device according to claim 10 or 11, wherein the device for providing, when presenting caption information corresponding to the audio/video being played, one or more items of candidate query information corresponding to the current caption information is further used for:
- when presenting caption information corresponding to the audio/video being played, providing, in a predetermined display style, the one or more items of candidate query information corresponding to the current caption information.
13. The query device according to any one of claims 10 to 12, wherein the query device further comprises:
- a device for triggering, based on the selected candidate query information, a predetermined search engine to perform a search, so as to obtain a corresponding search result.
14. The query device according to any one of claims 10 to 13, wherein the query device further comprises:
- a device for presenting to the user the search result obtained after the search operation is performed based on the selected candidate query information.
15. The query device according to claim 13 or 14, wherein the device for presenting to the user the search result obtained after the search operation is performed based on the selected candidate query information further comprises:
- a device for prompting the user whether to view the search result;
- a device for presenting the search result to the user when the user chooses to view it.
16. The query device according to any one of claims 8 to 12, wherein the query device further comprises:
- a device for obtaining the caption information of the audio/video;
- a device for determining, based on the caption information, one or more items of candidate query information available for querying.
17. The query device according to claim 16, wherein the device for determining, based on the caption information, one or more items of candidate query information available for querying further comprises:
- a device for performing semantic analysis on the content information in the caption information to determine the one or more items of candidate query information available for querying.
18. The query device according to claim 16 or 17, wherein the query device further comprises:
- a device for generating search link information corresponding respectively to the one or more items of candidate query information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610140826.7A CN107180058B (en) | 2016-03-11 | 2016-03-11 | Method and device for inquiring based on subtitle information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107180058A true CN107180058A (en) | 2017-09-19 |
CN107180058B CN107180058B (en) | 2024-06-18 |
Family
ID=59830803
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610140826.7A Active CN107180058B (en) | 2016-03-11 | 2016-03-11 | Method and device for inquiring based on subtitle information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107180058B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101021852A (en) * | 2006-10-10 | 2007-08-22 | 鲍东山 | Video search dispatching system based on content |
CN101262494A (en) * | 2008-01-23 | 2008-09-10 | 华为技术有限公司 | Method, client, server and system for processing distributed information |
CN101267518A (en) * | 2007-02-28 | 2008-09-17 | 三星电子株式会社 | Method and system for extracting relevant information from content metadata |
CN101296362A (en) * | 2007-04-25 | 2008-10-29 | 三星电子株式会社 | Method and system for providing access to information of potential interest to a user |
CN101595481A (en) * | 2007-01-29 | 2009-12-02 | 三星电子株式会社 | Be used on electronic installation, promoting the method and system of information search |
CN104102683A (en) * | 2013-04-05 | 2014-10-15 | 联想(新加坡)私人有限公司 | Contextual queries for augmenting video display |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110620960A (en) * | 2018-06-20 | 2019-12-27 | 北京优酷科技有限公司 | Video subtitle processing method and device |
CN110620960B (en) * | 2018-06-20 | 2022-01-25 | 阿里巴巴(中国)有限公司 | Video subtitle processing method and device |
CN113068077A (en) * | 2020-01-02 | 2021-07-02 | 腾讯科技(深圳)有限公司 | Subtitle file processing method and device |
CN113068077B (en) * | 2020-01-02 | 2023-08-25 | 腾讯科技(深圳)有限公司 | Subtitle file processing method and device |
CN111753135A (en) * | 2020-05-21 | 2020-10-09 | 北京达佳互联信息技术有限公司 | Video display method, device, terminal, server, system and storage medium |
CN111753135B (en) * | 2020-05-21 | 2024-02-06 | 北京达佳互联信息技术有限公司 | Video display method, device, terminal, server, system and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |