CN108133058A - A kind of video retrieval method - Google Patents
- Publication number: CN108133058A
- Authority: CN (China)
- Legal status: Granted (an assumption by Google, not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/738—Presentation of query results
Abstract
The embodiments of the present invention disclose a video retrieval method and a video apparatus, relating to the field of electronic information technology, which can realize rapid retrieval of videos and improve video retrieval efficiency. The concrete scheme is: obtaining a first video collection; calculating the information entropy of at least two attribute classifications in the first video collection, each attribute classification comprising at least two subclassifications; and prompting the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy. The present invention is used in the process of video retrieval.
Description
The present application is a divisional application of the Chinese invention patent application No. 201410180892.8, filed on April 30, 2014 and entitled "Video retrieval method and video apparatus".
Technical field
The present invention relates to the field of electronic information technology, and in particular to a video retrieval method and a video apparatus.
Background technology
With the development of multimedia technology, many multimedia pages, multimedia applications and clients that can provide users with a video retrieval function have emerged in the field of multimedia technology.
In the prior art, the video apparatus in a multimedia page, multimedia application or client generally displays the videos in each attribute classification to the user in a fixed order on a retrieval interface for the user to select, then receives the user's selection of an attribute classification listed on the retrieval interface, displays the subclassifications in that attribute classification on the retrieval interface for the user, then receives the user's selection of a subclassification in that attribute classification, and retrieves all the videos included in the subclassification selected by the user. Here, an attribute classification is a classification of videos according to an attribute; for example, according to the type attribute videos can be divided into action films, comedies, science-fiction films and so on, and according to the region attribute videos can be divided into mainland films, Hong Kong and Taiwan films, Japanese and Korean films and so on. Each attribute classification includes multiple subclassifications; for example, the type attribute can at least include action films, comedies and science-fiction films.
In the process of realizing the above tree-type video retrieval, when the user's retrieval target is indefinite (i.e. the user does not know exactly which video to watch), the user may randomly select an attribute classification, then select a suitable subclassification within the randomly selected attribute classification, and retrieve all the videos included in the subclassification selected by the user. When the videos included in the selected subclassification do not include the video that the user wants to watch, the user returns to the attribute classification selection interface, and repeats the process until the user retrieves the video he or she wants to watch.
However, when the video apparatus displays the videos in each attribute classification to the user in a fixed order and retrieves videos according to an attribute classification randomly selected by the user, it may be necessary to repeatedly receive the user's selections of attribute classifications in order to retrieve videos for the user; the video that the user wants to watch cannot be found quickly, and the video retrieval efficiency is low.
Summary of the invention
The embodiments of the present invention provide a video retrieval method and a video apparatus, which can realize rapid retrieval of videos and improve video retrieval efficiency.
In order to achieve the above objectives, the embodiments of the present invention adopt the following technical solutions:
In a first aspect, an embodiment of the present invention provides a video retrieval method, including:
obtaining a first video collection;
calculating the information entropy of at least two attribute classifications in the first video collection, each of the attribute classifications including at least two subclassifications;
prompting the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
With reference to the first aspect, in a possible implementation, the calculating of the information entropy of at least two attribute classifications in the first video collection includes:
calculating the information entropy of an attribute classification according to the number of videos included in each subclassification of the attribute classification.
With reference to the first aspect and the above possible implementation, in another possible implementation, the calculating of the information entropy of the attribute classification according to the number of videos included in each subclassification of the attribute classification includes:
determining, according to the number of videos included in each subclassification of the attribute classification, the video distribution rate of each subclassification in the attribute classification;
calculating the information entropy of the attribute classification according to the distribution rates.
With reference to the first aspect and the above possible implementations, in another possible implementation, the calculating of the information entropy of at least two attribute classifications in the first video collection includes:
calculating the information entropy of at least two attribute classifications in the first video collection with reference to current scene information and/or user behavior parameters.
With reference to the first aspect and the above possible implementations, in another possible implementation, the method further includes:
obtaining a second video collection according to the selection of the user;
calculating the information entropy of at least two attribute classifications of the second video collection, each of the attribute classifications including at least two subclassifications;
prompting the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
With reference to the first aspect and the above possible implementations, in another possible implementation, the method further includes:
updating user behavior parameters according to the selection of the user.
With reference to the first aspect and the above possible implementations, in another possible implementation, the obtaining of the first video collection includes:
performing retrieval according to a retrieval term input by the user, to obtain the first video collection;
or, performing correlation retrieval according to the video currently selected by the user, to obtain the first video collection;
or, performing retrieval according to the voice input information of the user, to obtain the first video collection.
With reference to the first aspect and the above possible implementations, in another possible implementation, the prompting of the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy includes:
displaying the subclassification identifiers of the attribute classification with the maximum information entropy to prompt the user to make a selection;
or, prompting the user by voice to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
In a second aspect, an embodiment of the present invention further provides a video apparatus, including:
a first acquisition unit, configured to obtain a first video collection;
a first computing unit, configured to calculate the information entropy of at least two attribute classifications in the first video collection obtained by the first acquisition unit, each of the attribute classifications including at least two subclassifications;
a first prompt unit, configured to prompt the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy calculated by the first computing unit.
With reference to the second aspect, in a possible implementation, the first computing unit is further configured to calculate the information entropy of an attribute classification according to the number of videos included in each subclassification of the attribute classification.
With reference to the second aspect and the above possible implementation, in another possible implementation, the first computing unit includes:
a determining module, configured to determine, according to the number of videos included in each subclassification of the attribute classification, the video distribution rate of each subclassification in the attribute classification;
a computing module, configured to calculate the information entropy of the attribute classification according to the distribution rates.
With reference to the second aspect and the above possible implementations, in another possible implementation, the first computing unit is further configured to calculate the information entropy of at least two attribute classifications in the first video collection with reference to current scene information and/or user behavior parameters.
With reference to the second aspect and the above possible implementations, in another possible implementation, the video apparatus further includes:
a second acquisition unit, configured to obtain a second video collection according to the selection of the user;
a second computing unit, configured to calculate the information entropy of at least two attribute classifications of the second video collection obtained by the second acquisition unit, each of the attribute classifications including at least two subclassifications;
a second prompt unit, configured to prompt the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy calculated by the second computing unit.
With reference to the second aspect and the above possible implementations, in another possible implementation, the video apparatus further includes:
an updating unit, configured to update user behavior parameters according to the selection of the user.
With reference to the second aspect and the above possible implementations, in another possible implementation, the first acquisition unit is further configured to perform retrieval according to a retrieval term input by the user, to obtain the first video collection; or, to perform correlation retrieval according to the video currently selected by the user, to obtain the first video collection; or, to perform retrieval according to the voice input information of the user, to obtain the first video collection.
With reference to the second aspect and the above possible implementations, in another possible implementation, the first prompt unit is further configured to display the subclassification identifiers of the attribute classification with the maximum information entropy to prompt the user to make a selection; or, to prompt the user by voice to make a selection among the subclassifications of the attribute classification with the maximum information entropy;
the second prompt unit is further configured to display the subclassification identifiers of the attribute classification with the maximum information entropy to prompt the user to make a selection; or, to prompt the user by voice to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
According to the video retrieval method and the video apparatus provided by the embodiments of the present invention, a first video collection is obtained; the information entropy of at least two attribute classifications in the first video collection is calculated, each attribute classification including at least two subclassifications; and the user is prompted to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
The information entropy of a system can embody the probability distribution and the convergence of the information in the system; when information retrieval is carried out with reference to the probability distribution and convergence of the information in the system, the retrieval range can be effectively narrowed and the retrieval efficiency improved. In the present solution, the calculated information entropy of an attribute classification can embody the probability distribution and convergence of the videos in the first video collection when the videos are classified according to different attributes; by referring to the probability distribution and convergence of the videos under the different attribute classifications, the video retrieval range can be effectively narrowed and the retrieval efficiency improved.
Description of the drawings
In order to illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art will be briefly introduced below. Obviously, the accompanying drawings in the following description show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative labor.
Fig. 1 is a flow chart of a video retrieval method in Embodiment 1 of the present invention;
Fig. 2 is a flow chart of a video retrieval method in Embodiment 2 of the present invention;
Fig. 3 is a structural diagram of a video apparatus in Embodiment 3 of the present invention;
Fig. 4 is a structural diagram of another video apparatus in Embodiment 3 of the present invention;
Fig. 5 is a structural diagram of another video apparatus in Embodiment 3 of the present invention;
Fig. 6 is a structural diagram of another video apparatus in Embodiment 3 of the present invention.
Specific embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
In addition, the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects and represents that three kinds of relationships may exist; for example, A and/or B may represent: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" herein generally represents an "or" relationship between the preceding and following associated objects.
Embodiment 1
An embodiment of the present invention provides a video retrieval method, as shown in Fig. 1, including:
S101, the video apparatus obtains a first video collection.
The first video collection includes at least two videos.
Specifically, the video apparatus can perform retrieval according to a retrieval term input by the user, to obtain the first video collection; or can perform correlation retrieval according to the video currently selected by the user, to obtain the first video collection; or can perform retrieval according to the voice input information of the user, to obtain the first video collection.
Illustratively, the video apparatus can determine the input keywords of the user according to the user's retrieval information (including a retrieval term input by the user, the retrieval term corresponding to the video currently selected by the user, or the voice input information of the user), perform retrieval according to the input keywords, and determine the first video collection. The first video collection includes at least two videos, and the videos in the first video collection match the input keywords.
For example, the video apparatus can receive the retrieval information of the user (including a retrieval term input by the user, the retrieval term corresponding to the video currently selected by the user, or the voice input information of the user), and then apply natural language understanding to the retrieval information to obtain the input keywords. The video apparatus can determine the matching keywords corresponding to the input keywords through entity name tagging; the videos in the video information library are pre-classified using the different matching keywords, and then, according to the input keywords, the first video collection is determined among the videos included in the video classifications corresponding to the determined matching keywords.
It should be noted that, for the specific method by which the video apparatus applies natural language understanding to the retrieval information in this embodiment, reference can be made to other method embodiments of the present invention or to related descriptions of the prior art, and details are not repeated here; likewise, for the specific method by which the video apparatus determines the matching keywords corresponding to the input keywords through entity name tagging in this embodiment, reference can be made to other method embodiments of the present invention or to related descriptions of the prior art, and details are not repeated here.
The video apparatus in the embodiments of the present invention can be a search engine that has the function of retrieving videos according to the user's retrieval information, or can be a retrieval device that has such a function, or a retrieval module in such a retrieval device.
S102, the video apparatus calculates the information entropy of at least two attribute classifications in the first video collection, each attribute classification including at least two subclassifications.
Specifically, the video apparatus can respectively determine, for the case where the videos in the first video collection are classified according to each attribute classification, the number of videos included in each subclassification of each attribute classification, and calculate the information entropy of an attribute classification according to the number of videos included in each of its subclassifications.
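The count-based entropy computation of S102 can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the function name and the sample counts are assumptions:

```python
import math

def attribute_entropy(counts):
    """Shannon entropy (in bits) of one attribute classification,
    computed from the number of videos in each of its subclassifications."""
    total = sum(counts)
    entropy = 0.0
    for n in counts:
        if n > 0:
            p = n / total  # video distribution rate of this subclassification
            entropy -= p * math.log2(p)
    return entropy

# An even split carries the most uncertainty, hence the highest entropy;
# a lopsided split tells the user little and scores low.
print(attribute_entropy([50, 50, 50, 50]))  # 2.0
print(attribute_entropy([197, 1, 1, 1]))    # much closer to 0
```

Prompting on the maximum-entropy attribute therefore asks the question whose answer narrows the collection the most, which is the efficiency argument the summary makes.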
S103, the video apparatus prompts the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
Specifically, the video apparatus can display all the subclassifications of the attribute classification with the maximum information entropy to the user, for the user to select from (each attribute classification includes at least two subclassifications); receive the user's selection of a subclassification among those displayed by the video apparatus; and obtain a second video collection according to the user's selection. The second video collection is the video collection formed by the videos, queried from the first video collection, that belong to the subclassification selected by the user among the subclassifications of the attribute classification with the maximum information entropy.
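Steps S102-S103 taken together — count the videos per subclassification, score each attribute classification by entropy, prompt on the winner, and filter the second video collection by the user's pick — can be sketched like this (the video records, attribute names and titles are invented for illustration):

```python
import math

def entropy(counts):
    total = sum(counts)
    return -sum(n / total * math.log2(n / total) for n in counts if n > 0)

def pick_attribute(videos, attributes):
    """Return the attribute classification whose subclassification
    distribution over `videos` has the maximum information entropy."""
    def attr_entropy(attr):
        counts = {}
        for v in videos:
            counts[v[attr]] = counts.get(v[attr], 0) + 1
        return entropy(list(counts.values()))
    return max(attributes, key=attr_entropy)

# Hypothetical first video collection, each video tagged per attribute.
videos = [
    {"title": "A", "type": "action", "region": "mainland"},
    {"title": "B", "type": "comedy", "region": "mainland"},
    {"title": "C", "type": "action", "region": "mainland"},
    {"title": "D", "type": "sci-fi", "region": "hongkong"},
]

best = pick_attribute(videos, ["type", "region"])
# "type" splits 2/1/1 (H = 1.5 bits) vs. "region" 3/1 (H ~ 0.81 bits),
# so the user would be prompted with the type subclassifications first.

# Second video collection: videos in the subclassification the user selects.
second = [v for v in videos if v[best] == "action"]
```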
According to the video retrieval method provided by the embodiment of the present invention, a first video collection is obtained; the information entropy of at least two attribute classifications in the first video collection is calculated, each attribute classification including at least two subclassifications; and the user is prompted to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
The information entropy of a system can embody the probability distribution and the convergence of the information in the system; when information retrieval is carried out with reference to the probability distribution and convergence of the information in the system, the retrieval range can be effectively narrowed and the retrieval efficiency improved. In the present solution, the calculated information entropy of an attribute classification can embody the probability distribution and convergence of the videos in the first video collection when the videos are classified according to different attributes; by referring to the probability distribution and convergence of the videos under the different attribute classifications, the video retrieval range can be effectively narrowed and the retrieval efficiency improved.
Embodiment 2
An embodiment of the present invention provides a video retrieval method, as shown in Fig. 2, including:
S201, the video apparatus obtains a first video collection.
Specifically, S201 can be any one of S201a, S201b or S201c.
S201a, the video apparatus performs retrieval according to a retrieval term input by the user, to obtain the first video collection.
S201b, the video apparatus performs correlation retrieval according to the video currently selected by the user, to obtain the first video collection.
S201c, the video apparatus performs retrieval according to the voice input information of the user, to obtain the first video collection.
Illustratively, in the embodiments of the present invention, the video apparatus is provided with a retrieval box (search box); the video apparatus can receive the retrieval information of the user through the retrieval box, and then perform retrieval according to the retrieval information of the user to determine the first video collection.
It should be noted that the retrieval information of the user can be Chinese characters, Chinese pinyin, English letters and so on; the embodiments of the present invention do not limit the language or form of the retrieval information.
The video apparatus can apply natural language understanding to the retrieval information to obtain the input keywords, then determine the matching keywords corresponding to the input keywords through entity name tagging, and finally, among all the videos in the video information library maintained in the video apparatus, retrieve the videos included in the video classifications corresponding to the matching keywords, and determine the first video collection.
Natural Language Understanding (NLU) is an emerging technology that enables efficient communication between people and computers using natural language, commonly known as human-computer dialogue; it refers to making a computer respond appropriately according to the meaning expressed by the natural language of human society. Its main research is using computers to simulate the human language communication process, enabling computers to understand and use the natural languages of human society, such as Chinese and English, so as to realize natural-language communication between humans and machines and to replace part of people's mental labor, including querying data, answering questions, excerpting documents, compiling data, and all working processes related to natural language information.
It should be noted that in the embodiments of the present invention, the video apparatus may use the "natural language understanding" technique to understand and analyze the retrieval information, so as to obtain input keywords that can be used for video retrieval.
Illustratively, if the retrieval information of the user is "Liu's films", then after the video apparatus applies natural language understanding to the retrieval information "Liu's films", the input keywords obtained can be "Liu" and "films".
It should be noted that the above merely illustrates the basic principle and process of natural language understanding by way of example; for a detailed description of the "natural language understanding" technique in the embodiments of the present invention, reference can be made to related descriptions of the prior art, which are not repeated here.
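As a toy stand-in for the NLU step (not a real NLU system, and mirroring only the "Liu's films" example above), extracting input keywords from a query could look like:

```python
def extract_keywords(query):
    """Toy stand-in for the natural language understanding step: strip a
    possessive marker and split on whitespace to obtain input keywords.
    Real NLU involves parsing and semantics; this is illustration only."""
    return query.replace("'s", " ").split()

print(extract_keywords("Liu's films"))  # ['Liu', 'films']
```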
Entity name tagging is an important basic tool for application fields such as information extraction, question answering systems, syntactic analysis, machine translation and metadata tagging for the Semantic Web, and occupies an important position in the progress of natural language processing technology towards practical use. In general, the entity name tagging task is to identify, in the text to be processed, named entities of three major categories (entity, time and number) and seven subcategories (person name, organization name, place name, time, date, currency and percentage).
Specifically, after obtaining the input keywords, the video apparatus may use entity name tagging to determine the matching keywords corresponding to the input keywords.
Illustratively, if the input keywords are "Liu" and "films", the video apparatus can identify through entity name tagging that the matching keyword corresponding to "Liu" is "actor", and that the matching keyword corresponding to "films" is "movie".
It should be noted that the above merely illustrates the basic principle and process of entity name tagging by way of example; for a detailed description of the "entity name tagging" technique in the embodiments of the present invention, reference can be made to related descriptions of the prior art, which are not repeated here.
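A minimal gazetteer-style stand-in for the tagging step (a real tagger recognizes the entity classes statistically; here a lookup table is enough to show input keyword to matching keyword, and every entry is invented):

```python
GAZETTEER = {
    "Liu": "actor",    # a person name maps to the "actor" matching keyword
    "films": "movie",  # a media noun maps to the "movie" matching keyword
}

def match_keyword(input_keyword):
    """Return the matching keyword for an input keyword, or None."""
    return GAZETTEER.get(input_keyword)

print(match_keyword("Liu"))    # actor
print(match_keyword("films"))  # movie
```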
In the embodiments of the present invention, the video apparatus maintains a video information library, in which all the videos for which playing copyright has been obtained, or the links of all such videos, are saved.
A matching keyword can be a preset keyword. The videos in the video information library are pre-classified using the different matching keywords.
Illustratively, the matching keywords preset in the video information library can include: actor, video type (including comedy, romance, action and so on), region (including Europe and America, mainland, Japan and Korea, Hong Kong and Taiwan and so on), director and so on.
Specifically, the method by which the video apparatus determines the first video collection, according to the input keyword, among the videos included in the video classification corresponding to the determined matching keyword, can include: the video apparatus determines the video classification mode corresponding to the determined matching keyword; determines the subclassification in that video classification mode according to the input keyword; and determines that the resource collection formed by all the videos included in that subclassification in the video information library is the first video collection.
Illustratively, if the input keyword is "Liu", the video apparatus identifies through entity name tagging that the matching keyword corresponding to "Liu" is "actor"; the video apparatus then determines that the video classification mode corresponding to "actor" is the mode of classifying the videos in the video information library according to different actors; when the videos in the video information library are classified according to different actors, the video apparatus determines that the resource collection formed by all the videos included in the subclassification corresponding to "Liu", i.e. all the films (videos) in which "Liu" has acted, is the first video collection.
It should be noted that there may be more than one input keyword obtained by the video apparatus from the retrieval information of the user in the embodiments of the present invention; correspondingly, there may also be more than one matching keyword identified by the video apparatus through entity name tagging as corresponding to the input keywords.
When the video apparatus obtains at least two input keywords and recognizes at least two matching keywords, the video apparatus can respectively determine the video classification modes corresponding to the determined at least two matching keywords; respectively determine, according to each of the at least two input keywords, the subclassification in the video classification mode corresponding to the matching keyword of that input keyword; and determine that the resource collection formed by the videos, among all the videos included in the determined subclassifications in the video information library, that correspond to all of the at least two input keywords, is the first video collection.
Illustratively, if the input keywords are "Liu" and "films", the video apparatus identifies through entity name tagging that the matching keyword corresponding to "Liu" is "actor" and that the matching keyword corresponding to "films" is "movie"; the video apparatus can then determine that the video classification mode corresponding to "actor" is the mode of classifying the videos in the video information library according to different actors, and that the video classification corresponding to "movie" is all the films in the video information library; when the videos in the video information library are classified according to different actors, the video apparatus determines all the films (videos) included in the subclassification corresponding to "Liu", i.e. the resource collection formed by all the films (videos) in which "Liu" has acted is the first video collection.
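The multi-keyword case above amounts to intersecting the video classes selected by each (matching keyword, input keyword) pair. A sketch under invented data (the library records and field names are assumptions, not the patent's data model):

```python
# Hypothetical video information library: each video record carries its
# class under every classification mode (actor, kind, ...).
library = [
    {"title": "Film1", "actor": "Liu", "kind": "movie"},
    {"title": "Film2", "actor": "Liu", "kind": "series"},
    {"title": "Film3", "actor": "Chen", "kind": "movie"},
]

def first_collection(library, constraints):
    """Keep only the videos that satisfy every (classification mode,
    subclassification) pair - the multi-keyword intersection above."""
    return [v for v in library
            if all(v.get(mode) == sub for mode, sub in constraints)]

result = first_collection(library, [("actor", "Liu"), ("kind", "movie")])
print([v["title"] for v in result])  # ['Film1']
```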
It should be further noted that the method by which the video apparatus determines the first video collection, according to the input keywords, among the videos included in the video classifications corresponding to the determined matching keywords, includes but is not limited to the implementation methods listed above in the embodiments of the present invention; other methods by which the video apparatus obtains the first video collection are not repeated here.
Specifically, the method by which the video apparatus calculates the information entropy of at least two attribute classifications when the videos in the first video collection are classified according to different attribute classifications, each attribute classification including at least two subclassifications, can include S202-S204:
S202, the video apparatus respectively determines the number of videos included in each subclassification of each of at least two attribute classifications in the first video collection.
Videos can be divided according to different attribute classifications; for example, according to the type attribute, videos can be divided into action films, comedies, romance films, horror films and so on. An attribute classification is the set of classes into which videos are classified according to a given attribute.
Illustratively, the attribute classifications in the embodiments of the present invention can at least be: the type attribute, the year attribute, the region attribute, the rating attribute and so on. Each attribute classification includes at least two subclassifications.
For example, the subclassifications in the type attribute may at least include: action films, comedies, romance films, horror films, and so on. In the type attribute, videos are sorted into different types according to the film genre of the video.
Illustratively, the video device can determine the number of videos contained in each subclassification of the type attribute when the videos in the first video collection are classified according to the type attribute. For example, if the first video collection contains 200 videos in total, then when those 200 videos are classified according to the type attribute, the 200 videos include 30 action films, 80 comedies, 50 romance films, and 40 horror films.
For example, the subclassifications in the age attribute may at least include: the sixties, the seventies, the eighties, the nineties, and so on. In the age attribute, videos are sorted into different eras according to the shooting time or premiere time of the film.
The video device can determine the number of videos contained in each subclassification of the age attribute when the videos in the first video collection are classified according to the age attribute. For example, if the first video collection contains 200 videos in total, then when those 200 videos are classified according to the age attribute, the 200 videos include 10 videos of the sixties, 120 videos of the seventies, 60 videos of the eighties, and 10 videos of the nineties.
For example, the subclassifications in the region attribute may at least include: American-European films, Hong Kong and Taiwan films, mainland films, Japanese and Korean films, and so on. In the region attribute, videos are sorted into different regions according to the production region of the film.
The video device can determine the number of videos contained in each subclassification of the region attribute when the videos in the first video collection are classified according to the region attribute. For example, if the first video collection contains 200 videos in total, then when those 200 videos are classified according to the region attribute, they include 6 American-European films, 70 Hong Kong and Taiwan films, 120 mainland films, and 4 Japanese and Korean films.
For example, if videos are scored from 0 to 10, with 10 the highest score, the subclassifications in the scoring attribute may at least include: a first preset number of videos scored 8-10, a second preset number of videos scored 6-7, a third preset number of videos scored 0-5, and so on. The first, second, and third preset numbers in this embodiment are quantity thresholds preset by the system or set by the user.
The video device can determine the number of videos contained in each subclassification of the scoring attribute when the videos in the first video collection are classified according to the scoring attribute. For example, if the first video collection contains 200 videos in total, then when those 200 videos are classified according to the scoring attribute, there are 100 videos scored 8-10, 80 videos scored 6-7, and 20 videos scored 0-5.
S203: The video device determines the video distribution rate of each subclassification in an attribute classification according to the number of videos contained in each subclassification of that attribute classification.
Specifically, the video device can determine, for each attribute classification, the distribution rate of the videos in the first video collection over each subclassification according to the number of videos each subclassification contains.
Illustratively, assume the subclassifications in the type attribute include: action films, comedies, romance films, horror films. The first video collection contains 200 videos in total: 30 action films, 80 comedies, 50 romance films, and 40 horror films.
The video device can calculate that the distribution rates of the videos in the first video collection over the subclassifications of the type attribute are: action films 15%, comedies 40%, romance films 25%, horror films 20%.
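The count-to-rate computation of S203 can be sketched as follows (a hypothetical illustration with names of my own choosing, not the patent's implementation):

```python
# Hypothetical sketch of step S203: turn per-subclassification video
# counts into distribution rates by normalizing against the total.
def distribution_rates(counts):
    """Map {subclassification: video count} to {subclassification: rate}."""
    total = sum(counts.values())
    return {name: n / total for name, n in counts.items()}

# The 200-video type-attribute example from the text:
type_counts = {"action": 30, "comedy": 80, "romance": 50, "horror": 40}
print(distribution_rates(type_counts))
# {'action': 0.15, 'comedy': 0.4, 'romance': 0.25, 'horror': 0.2}
```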
Illustratively, assume the subclassifications in the age attribute include: the sixties, the seventies, the eighties, the nineties. The first video collection contains 200 videos in total: 10 videos of the sixties, 120 videos of the seventies, 60 videos of the eighties, and 10 videos of the nineties.
The video device can calculate that the distribution rates of the videos in the first video collection over the subclassifications of the age attribute are: videos of the sixties 5%, videos of the seventies 60%, videos of the eighties 30%, videos of the nineties 5%.
It should be noted that, for the method by which the video device determines the distribution rates of the videos in the first video collection over the subclassifications of other attribute classifications according to the number of videos each subclassification contains, reference may be made to the way the video device determines these distribution rates for the age attribute or the type attribute in the examples above; details are not repeated here.
S204: The video device calculates the information entropy of an attribute classification according to the distribution rates.
Illustratively, assume the type attribute S includes n subclassifications x, i.e., S = {x1, x2, ..., xi, ..., xn}, and that the probability distribution (distribution rates) of the subclassifications x in the type attribute S is P = {P(x1), P(x2), ..., P(xi), ..., P(xn)}; the video device may then use Formula 1 to calculate the information entropy of the attribute classification.
Formula 1:
H(X) = −Σ P(xi)log2P(xi), summed over i = 1 to n
where H(X) is the information entropy of the attribute classification and P(xi) is the distribution rate of the i-th subclassification in that attribute classification.
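Formula 1 is the standard Shannon entropy; a minimal sketch, assuming only that the distribution rates of one attribute classification are available as a list:

```python
import math

# Sketch of Formula 1: Shannon entropy over the distribution rates of
# the subclassifications in one attribute classification (log base 2).
def attribute_entropy(rates):
    """H(X) = -sum(P(xi) * log2(P(xi))), skipping empty subclassifications."""
    return -sum(p * math.log2(p) for p in rates if p > 0)

# Type-attribute example from the text: {0.15, 0.4, 0.25, 0.2}
h = attribute_entropy([0.15, 0.4, 0.25, 0.2])
print(round(h, 3))  # 1.904
```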
Illustratively, the distribution rates of the videos in the first video collection over the subclassifications of the type attribute are: action films 15% = 0.15, comedies 40% = 0.4, romance films 25% = 0.25, horror films 20% = 0.2, i.e., P = {P(x1), P(x2), P(x3), P(x4)} = {0.15, 0.4, 0.25, 0.2}; the video device may use Formula 1 to calculate the information entropy of the type attribute.
Formula 2:
H(X) = −(0.15×log2 0.15 + 0.4×log2 0.4 + 0.25×log2 0.25 + 0.2×log2 0.2) ≈ 1.904
In the above formula, x1 denotes action films and P(x1) denotes the distribution rate of the action-film subclassification in the type attribute; x2 denotes comedies and P(x2) its distribution rate in the type attribute; x3 denotes romance films and P(x3) its distribution rate in the type attribute; x4 denotes horror films and P(x4) its distribution rate in the type attribute.
Illustratively, the distribution rates of the videos in the first video collection over the subclassifications of the age attribute are: videos of the sixties 5% = 0.05, videos of the seventies 60% = 0.6, videos of the eighties 30% = 0.3, videos of the nineties 5% = 0.05, i.e., P = {P(x1), P(x2), P(x3), P(x4)} = {0.05, 0.6, 0.3, 0.05}; the video device may use Formula 1 to calculate the information entropy of the age attribute.
Formula 3:
H(X) = −(0.05×log2 0.05 + 0.6×log2 0.6 + 0.3×log2 0.3 + 0.05×log2 0.05) ≈ 1.395
In the above formula, x1 denotes videos of the sixties and P(x1) denotes the distribution rate of that subclassification in the age attribute; x2 denotes videos of the seventies and P(x2) its distribution rate in the age attribute; x3 denotes videos of the eighties and P(x3) its distribution rate in the age attribute; x4 denotes videos of the nineties and P(x4) its distribution rate in the age attribute.
It should be noted that, for the method by which the video device calculates the information entropy of other attribute classifications according to the distribution rates of their subclassifications, reference may be made to the calculation methods in the examples above; details are not repeated here.
Further optionally, the method by which the video device calculates the information entropy of an attribute classification may also specifically include: the video device calculates, with reference to current scene information and/or a user behavior parameter, the information entropy of at least two attribute classifications when the first video collection is divided according to different attribute classifications.
Illustratively, the current scene information may be temporal information about when the user retrieves videos (e.g., morning, afternoon, dusk, night, etc.).
The video device can set the weighting weight of the current scene information according to the temporal information of the user's video retrieval. For example, if the user retrieves videos at night and the type attribute contains horror films, the video device sets the weighting weight of the current scene information to a first weight threshold A, where A is less than 1; when calculating the information entropy of the type attribute, it can multiply the information amount of horror films, −P(x4)log2P(x4), by the first weight threshold A, so that the information amount of horror films becomes −P(x4)log2P(x4)×A.
Illustratively, the distribution rates of the videos in the first video collection over the subclassifications of the type attribute are: action films 15% = 0.15, comedies 40% = 0.4, romance films 25% = 0.25, horror films 20% = 0.2, i.e., P = {P(x1), P(x2), P(x3), P(x4)} = {0.15, 0.4, 0.25, 0.2}. Assuming A = 0.8, the video device may use Formula 1 to calculate the information entropy of the type attribute.
Formula 4:
H(X) = −(P(x1)log2P(x1) + P(x2)log2P(x2) + P(x3)log2P(x3) + P(x4)log2P(x4)×0.8)
= −(0.15×log2 0.15 + 0.4×log2 0.4 + 0.25×log2 0.25 + 0.2×log2 0.2×0.8)
≈ 1.811
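To make the scene weighting concrete, here is a minimal sketch (my own names and structure; the text only specifies that the horror-film term is scaled by A = 0.8 at night):

```python
import math

# Sketch of Formula 4: each subclassification's information amount
# -P(x)*log2(P(x)) is multiplied by a per-subclassification weight
# (1.0 by default; the first weight threshold A < 1 is applied to
# subclassifications that fit the current scene poorly, e.g. horror
# films when the user retrieves videos at night).
def weighted_entropy(rates, weights):
    return -sum(w * p * math.log2(p) for p, w in zip(rates, weights) if p > 0)

rates = [0.15, 0.4, 0.25, 0.2]   # action, comedy, romance, horror
weights = [1.0, 1.0, 1.0, 0.8]   # A = 0.8 on horror films at night
print(round(weighted_entropy(rates, weights), 3))  # 1.811
```

With all weights at 1.0 this reduces to Formula 1, which is why the night-time entropy (≈1.811) can be compared directly against the unweighted entropy of Formula 2 (≈1.904).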
From Formula 2 and Formula 4 it can be seen that the information entropy of the type attribute calculated when the user retrieves videos at night differs in size from that calculated when the user retrieves videos during the day, and the size of the information entropy may determine whether that attribute classification is the one with the maximum information entropy. In Formula 4, when the user retrieves videos at night, the information entropy of the type attribute is lower than when retrieving during the day, so the likelihood that the video device preferentially displays all the subclassifications of the type attribute to the user decreases; the device can then provide the user with videos that better fit the current scene for the user to choose from, which can improve the user experience.
The user behavior parameter may be the user's preference for the videos of the subclassifications in each attribute classification, obtained by the video device from statistics on the user's video retrieval records and on the records of videos played for the user by the video device or by the video terminal where the video device resides.
The video device can set the weighting weight of the user behavior parameter according to the user's preference for the videos of the subclassifications in each attribute classification. For example, if statistics show that within the type attribute the user's preference for action films is 70%, for comedies 15%, for romance films 10%, and for horror films 5%, the video device multiplies the user's preference for each subclassification by a second weight threshold B, where B is generally greater than 1, and uses the resulting product as the weighting weight of the user behavior parameter corresponding to that subclassification.
When calculating the information entropy of the type attribute, the video device can multiply the information amount of each subclassification by the product of that subclassification's preference and the second weight threshold B. For example, after the information amount of action films, −P(x1)log2P(x1), is multiplied by this product, it becomes −P(x1)log2P(x1)×(70%×B); the information amount of comedies becomes −P(x2)log2P(x2)×(15%×B); the information amount of romance films becomes −P(x3)log2P(x3)×(10%×B); and the information amount of horror films becomes −P(x4)log2P(x4)×(5%×B).
Illustratively, assume the second weight threshold B equals 2; the video device may use Formula 1 to calculate the information entropy of the type attribute.
Formula 5:
H(X) = −(P(x1)log2P(x1)×(70%×2) + P(x2)log2P(x2)×(15%×2) + P(x3)log2P(x3)×(10%×2) + P(x4)log2P(x4)×(5%×2)) ≈ 0.880
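The user-behavior weighting of Formula 5 has the same shape as the scene weighting, with each term scaled by (preference × B). A hypothetical sketch under the stated values (preferences 70/15/10/5% and B = 2):

```python
import math

# Sketch of Formula 5: weight each subclassification's information amount
# by (user's preference for that subclassification) * (second weight
# threshold B, generally > 1).
def behavior_weighted_entropy(rates, preferences, b):
    return -sum((pref * b) * p * math.log2(p)
                for p, pref in zip(rates, preferences) if p > 0)

rates = [0.15, 0.4, 0.25, 0.2]          # action, comedy, romance, horror
preferences = [0.70, 0.15, 0.10, 0.05]  # user's per-subclassification preference
print(round(behavior_weighted_entropy(rates, preferences, b=2), 3))  # 0.88
```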
From Formula 2 and Formula 5 it can be seen that the information entropy of the type attribute calculated by the video device according to the distribution rates combined with the user behavior parameter differs in size from that calculated according to the distribution rates alone, and the size of the information entropy may influence the priority of the attribute classification within the first attribute classification set. In Formula 5, the information entropy of the type attribute calculated with the user behavior parameter is lower than that calculated according to the distribution rates alone, so the likelihood that the video device preferentially displays all the subclassifications of the type attribute to the user decreases; the device can then provide the user with videos that better fit the user's preferences for the user to choose from, which can improve the user experience.
S205: The video device prompts the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
Since the information entropy of an attribute classification reflects the probability distribution and degree of convergence of the videos when they are classified according to that attribute classification, and since performing video retrieval with the subclassifications of the maximum-entropy attribute classification as the search condition can effectively narrow the video search range and improve retrieval efficiency, the video device can, after calculating the information entropy of at least two attribute classifications, prompt the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
Illustratively, the video device can display the subclassification identifiers of the maximum-entropy attribute classification to prompt the user to make a selection; alternatively, it can prompt the user by voice to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
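Steps S202-S205 taken together reduce to an argmax over per-attribute entropies. A hypothetical end-to-end sketch (the patent does not prescribe an implementation, and the toy data here is mine):

```python
import math
from collections import Counter

def entropy_of(videos, attribute):
    """Entropy of one attribute classification over a video collection."""
    counts = Counter(v[attribute] for v in videos)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def attribute_to_prompt(videos, attributes):
    """Pick the attribute classification with maximum information entropy
    (S205): its subclassifications are shown to the user for selection."""
    return max(attributes, key=lambda a: entropy_of(videos, a))

# Toy collection: the type attribute splits evenly (high entropy), while
# the age attribute is concentrated in one subclassification (low entropy).
videos = [{"type": "action", "age": "70s"}, {"type": "comedy", "age": "70s"},
          {"type": "romance", "age": "70s"}, {"type": "horror", "age": "80s"}]
print(attribute_to_prompt(videos, ["type", "age"]))  # type
```

Prompting on the most even split is what makes each user answer discard the largest expected share of candidates.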
It should be noted that, in the embodiment of the present invention, after prompting the user to make a selection among the subclassifications of the maximum-entropy attribute classification, the video device can directly display for the user the identification information of all videos in the subclassification the user selected, so that the user determines the video to be retrieved by selecting among the displayed identification information of all videos in that subclassification.
Further optionally, in order to further narrow the search range and improve retrieval efficiency through the calculation of information entropy, after prompting the user to make a selection among the subclassifications of the maximum-entropy attribute classification, the video device can obtain, according to the user's selection, all videos in the subclassification the user selected (the second video collection), calculate the information entropy of at least two attribute classifications when the videos in the second video collection are divided according to different attribute classifications, and continue to prompt the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy. Specifically, the method of the embodiment of the present invention can also include S206-S208:
S206: The video device obtains the second video collection according to the user's selection.
Specifically, the videos in the first video collection can be divided into sub-attributes according to any one attribute classification, so that the videos in the first video collection are assigned to different sub-attributes.
The video device can receive the user's selection of a subclassification in the maximum-entropy attribute classification and, when the videos in the first video collection are divided into sub-attributes according to the maximum-entropy attribute classification, determine the video collection formed by the videos assigned to the subclassification the user selected as the second video collection.
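Step S206 is then a filter of the first video collection on the selected subclassification; repeating S206-S208 yields the iterative narrowing described above. A minimal sketch with hypothetical names:

```python
# Sketch of step S206: the second video collection is the subset of the
# first collection whose value under the maximum-entropy attribute
# matches the subclassification the user selected.
def second_collection(first_collection, attribute, selected_subclass):
    return [v for v in first_collection if v[attribute] == selected_subclass]

first = [{"type": "comedy", "title": "A"}, {"type": "horror", "title": "B"},
         {"type": "comedy", "title": "C"}]
second = second_collection(first, "type", "comedy")
print([v["title"] for v in second])  # ['A', 'C']
```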
S207: The video device calculates the information entropy of at least two attribute classifications when the videos in the second video collection are divided according to different attribute classifications, each attribute classification including at least two subclassifications.
It should be noted that, for the method by which the video device calculates this information entropy for the second video collection, reference may be made to the method by which the video device calculates the information entropy of at least two attribute classifications when the videos in the first video collection are divided according to different attribute classifications; details are not repeated here.
S208: The video device prompts the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
As above, since the information entropy of an attribute classification reflects the probability distribution and degree of convergence of the videos when they are classified according to that attribute classification, and since performing video retrieval with the subclassifications of the maximum-entropy attribute classification as the search condition can effectively narrow the video search range and improve retrieval efficiency, the video device can, after calculating the information entropy of at least two attribute classifications, prompt the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
Illustratively, the video device can display the subclassification identifiers of the maximum-entropy attribute classification to prompt the user to make a selection; alternatively, it can prompt the user by voice to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
It should be noted that, in the embodiment of the present invention, the video device can calculate the information entropy of every attribute classification when the videos in the first video collection, the second video collection, or the Nth video collection are divided according to different attribute classifications; alternatively, it can calculate only the information entropy of the attribute classifications that the user habitually uses when the videos in the first, second, or Nth video collection are so divided.
Further optionally, the method of the embodiment of the present invention can also include: the video device updates the user behavior parameter according to the user's selection. Specifically, the video device can update the user behavior parameter according to each selection the user makes among the subclassifications of the maximum-entropy attribute classification suggested by the video device.
It should also be noted that, in the embodiment of the present invention, after prompting the user to make a selection among the subclassifications of the maximum-entropy attribute classification, the video device can directly display for the user the identification information of all videos in the subclassification the user selected, so that the user determines the video to be retrieved by selecting among the displayed identification information of all videos in that subclassification.
Further optionally, in order to further narrow the search range and improve retrieval efficiency through the calculation of information entropy, after prompting the user to make a selection among the subclassifications of the maximum-entropy attribute classification, the video device can obtain, according to the user's selection, all videos in the subclassification the user selected (a third video collection), calculate the information entropy of at least two attribute classifications when the videos in the third video collection are divided according to different attribute classifications, and continue to prompt the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
It should be noted that, for the methods by which the video device obtains the third video collection, calculates the information entropy of at least two attribute classifications when the videos in the third video collection are divided according to different attribute classifications, and prompts the user to make a selection among the subclassifications of the maximum-entropy attribute classification, reference may be made to the related descriptions in the embodiment of the present invention; this embodiment does not repeat them here.
The video retrieval method provided by the embodiment of the present invention obtains a first video collection; calculates the information entropy of at least two attribute classifications in the first video collection, each attribute classification including at least two subclassifications; and prompts the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
The information entropy of a system can reflect the probability distribution and degree of convergence of the information in the system; when performing information retrieval, combining the probability distribution and convergence of the information in the system can effectively narrow the search range and improve retrieval efficiency. In this solution, the calculated information entropy of an attribute classification can reflect the probability distribution and convergence of the videos in the first video collection when the videos are classified according to different attributes; combining the probability distribution and convergence of the videos under different attribute classifications can effectively narrow the video search range and improve retrieval efficiency.
Embodiment 3
The embodiment of the present invention provides a video device, as shown in Figure 3, including: a first acquisition unit 31, a first computing unit 32, and a first prompt unit 33.
The first acquisition unit 31 is configured to obtain the first video collection.
The first computing unit 32 is configured to calculate the information entropy of at least two attribute classifications in the first video collection obtained by the first acquisition unit 31, each attribute classification including at least two subclassifications.
The first prompt unit 33 is configured to prompt the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy calculated by the first computing unit 32.
Further, the first computing unit 32 is further configured to calculate the information entropy of an attribute classification according to the number of videos contained in each subclassification of that attribute classification.
Further, as shown in Figure 4, the first computing unit 32 includes: a determining module 321 and a computing module 322.
The determining module 321 is configured to determine the video distribution rate of each subclassification in an attribute classification according to the number of videos contained in each subclassification of that attribute classification.
The computing module 322 is configured to calculate the information entropy of the attribute classification according to the distribution rates.
Further, the first computing unit 32 is further configured to calculate, with reference to the current scene information and/or the user behavior parameter, the information entropy of the at least two attribute classifications in the first video collection.
Further, as shown in Figure 5, the video device further includes: a second acquisition unit 34, a second computing unit 35, and a second prompt unit 36.
The second acquisition unit 34 is configured to obtain the second video collection according to the user's selection.
The second computing unit 35 is configured to calculate the information entropy of at least two attribute classifications in the second video collection obtained by the second acquisition unit 34, each attribute classification including at least two subclassifications.
The second prompt unit 36 is configured to prompt the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy calculated by the second computing unit 35.
Further, as shown in Figure 6, the video device further includes an updating unit 37.
The updating unit 37 is configured to update the user behavior parameter according to the user's selection.
Further, the first acquisition unit 31 is further configured to retrieve according to a search term input by the user to obtain the first video collection; or to perform a relevance retrieval according to the video currently selected by the user to obtain the first video collection; or to retrieve according to the user's voice input information to obtain the first video collection.
Further, the first prompt unit 33 is further configured to display the subclassification identifiers of the maximum-entropy attribute classification to prompt the user to make a selection; or to prompt the user by voice to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
The second prompt unit 36 is further configured to display the subclassification identifiers of the maximum-entropy attribute classification to prompt the user to make a selection; or to prompt the user by voice to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
It should be noted that, for specific descriptions of some of the functional modules in the video device provided by the embodiment of the present invention, reference may be made to the corresponding content in the method embodiments; this embodiment does not describe them in detail again here.
The video device provided by the embodiment of the present invention obtains a first video collection; calculates the information entropy of at least two attribute classifications in the first video collection, each attribute classification including at least two subclassifications; and prompts the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
The information entropy of a system can reflect the probability distribution and degree of convergence of the information in the system; when performing information retrieval, combining the probability distribution and convergence of the information in the system can effectively narrow the search range and improve retrieval efficiency. In this solution, the calculated information entropy of an attribute classification can reflect the probability distribution and convergence of the videos in the first video collection when the videos are classified according to different attributes; combining the probability distribution and convergence of the videos under different attribute classifications can effectively narrow the video search range and improve retrieval efficiency.
Through the above description of the embodiments, it will be clearly understood by those skilled in the art that, for convenience and brevity of description, only the division into the above functional modules is used as an example; in practical applications, the above functions can be allocated to different functional modules as required, i.e., the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described here.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely exemplary; for instance, the division of the modules or units is only a division by logical function, and other divisions are possible in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed between the parts may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
The above is merely a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any change or replacement readily conceivable by those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (7)
1. A video retrieval method, comprising:
performing retrieval according to a search term input by a user, to obtain a first video collection;
wherein, when a video device obtains at least two input keywords and recognizes at least two matching keywords, the video device determines the video classification manners respectively corresponding to the at least two matching keywords; determines, for each input keyword of the at least two input keywords, a subclassification in the video classification manner corresponding to the matching keyword that corresponds to the input keyword; and determines, from all videos contained in the determined subclassifications in a video information library, the resource collection formed by the videos corresponding to all of the at least two input keywords as the first video collection;
calculating, with reference to current scene information and/or a user behavior parameter and according to a weighting weight of the current scene information and/or a weighting weight of the user behavior parameter, the information entropy of at least two attribute classifications in the first video collection, wherein each attribute classification comprises at least two subclassifications, the current scene information is time information of the user's video retrieval, and the user behavior parameter is the user's video retrieval record and the user's degree of preference for the videos of the subclassifications in each attribute classification; and
prompting the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
2. The video retrieval method according to claim 1, wherein the calculating the information entropy of at least two attribute classifications in the first video collection comprises:
calculating the information entropy of an attribute classification according to the number of videos contained in each subclassification of the attribute classification.
3. The video retrieval method according to claim 2, wherein the calculating the information entropy of the attribute classification according to the number of videos contained in each subclassification of the attribute classification comprises:
determining, according to the number of videos contained in each subclassification of the attribute classification, the video distribution rate of each subclassification in the attribute classification; and
calculating the information entropy of the attribute classification according to the distribution rates.
4. The video retrieval method according to claim 1, further comprising:
obtaining a second video collection according to the user's selection;
calculating the information entropy of at least two attribute classifications in the second video collection, wherein each attribute classification comprises at least two subclassifications; and
prompting the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
5. The video retrieval method according to claim 4, further comprising:
updating the user behavior parameter according to the user's selection.
6. The video retrieval method according to any one of claims 1-5, wherein the obtaining the first video collection comprises:
performing retrieval according to a search term input by the user, to obtain the first video collection;
or, performing relevance retrieval according to a video currently selected by the user, to obtain the first video collection;
or, performing retrieval according to voice input information of the user, to obtain the first video collection.
7. The video retrieval method according to any one of claims 1-4, wherein the prompting the user to make a selection among the subclassifications of the attribute classification with the maximum information entropy comprises:
displaying subclassification identifiers of the attribute classification with the maximum information entropy to prompt the user to make a selection;
or, prompting the user by voice to make a selection among the subclassifications of the attribute classification with the maximum information entropy.
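The entropy step in claims 1-3 can be sketched briefly. This is not part of the patent; the classification names and video counts below are invented for illustration. Each attribute classification's video counts are converted to distribution rates, the Shannon entropy of each distribution is computed, and the classification with the maximum entropy (i.e., the one that most evenly splits the result set, so a user choice there narrows results the most) is the one on which the user would be prompted.

```python
import math

def entropy(counts):
    """Shannon entropy (bits) of the video distribution over subclassifications."""
    total = sum(counts)
    if total == 0:
        return 0.0
    rates = [c / total for c in counts if c > 0]  # video distribution rates
    return -sum(p * math.log2(p) for p in rates)

# Hypothetical attribute classifications for a retrieved video collection:
# each maps a subclassification to the number of matching videos.
classifications = {
    "genre": {"action": 40, "comedy": 35, "drama": 25},  # evenly spread -> high entropy
    "year": {"2013": 95, "2014": 5},                     # skewed -> low entropy
}

entropies = {name: entropy(list(subs.values()))
             for name, subs in classifications.items()}
best = max(entropies, key=entropies.get)
print(best)  # classification whose subclassifications best partition the results
```

Under these invented counts, "genre" wins (entropy ≈ 1.56 bits vs. ≈ 0.29 bits for "year"), so the user would be prompted to pick among action/comedy/drama. The weighted variant of claim 1 would additionally scale each classification's entropy by weights derived from scene information and the user behavior parameter.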
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810095506.3A CN108133058B (en) | 2014-04-30 | 2014-04-30 | Video retrieval method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410180892.8A CN103942328B (en) | 2014-04-30 | 2014-04-30 | A kind of video retrieval method and video-unit |
CN201810095506.3A CN108133058B (en) | 2014-04-30 | 2014-04-30 | Video retrieval method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410180892.8A Division CN103942328B (en) | 2014-04-30 | 2014-04-30 | A kind of video retrieval method and video-unit |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108133058A true CN108133058A (en) | 2018-06-08 |
CN108133058B CN108133058B (en) | 2022-02-18 |
Family
ID=51189996
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410180892.8A Active CN103942328B (en) | 2014-04-30 | 2014-04-30 | A kind of video retrieval method and video-unit |
CN201810095506.3A Active CN108133058B (en) | 2014-04-30 | 2014-04-30 | Video retrieval method |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410180892.8A Active CN103942328B (en) | 2014-04-30 | 2014-04-30 | A kind of video retrieval method and video-unit |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN103942328B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109286833A (en) * | 2018-09-30 | 2019-01-29 | 湖南机电职业技术学院 | A kind of information processing method and system applied in network direct broadcasting |
CN109614517A (en) * | 2018-12-04 | 2019-04-12 | 广州市百果园信息技术有限公司 | Classification method, device, equipment and the storage medium of video |
CN111079015A (en) * | 2019-12-17 | 2020-04-28 | 腾讯科技(深圳)有限公司 | Recommendation method and device, computer equipment and storage medium |
CN114697748B (en) * | 2020-12-25 | 2024-05-03 | 深圳Tcl新技术有限公司 | Video recommendation method and computer equipment based on voice recognition |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107333149A (en) * | 2017-06-30 | 2017-11-07 | 环球智达科技(北京)有限公司 | The aggregation processing method of programme information |
CN110543862B (en) * | 2019-09-05 | 2022-04-22 | 北京达佳互联信息技术有限公司 | Data acquisition method, device and storage medium |
CN114120180B (en) * | 2021-11-12 | 2023-07-21 | 北京百度网讯科技有限公司 | Time sequence nomination generation method, device, equipment and medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101059814A (en) * | 2006-04-17 | 2007-10-24 | 株式会社理光 | Image processing device and image processing method |
JP2007293602A (en) * | 2006-04-25 | 2007-11-08 | Nec Corp | System and method for retrieving image and program |
CN102521321A (en) * | 2011-12-02 | 2012-06-27 | 华中科技大学 | Video search method based on search term ambiguity and user preferences |
CN102682132A (en) * | 2012-05-18 | 2012-09-19 | 合一网络技术(北京)有限公司 | Method and system for searching information based on word frequency, play amount and creation time |
CN102982153A (en) * | 2012-11-29 | 2013-03-20 | 北京亿赞普网络技术有限公司 | Information retrieval method and device |
CN103686236A (en) * | 2013-11-19 | 2014-03-26 | 乐视致新电子科技(天津)有限公司 | Method and system for recommending video resource |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080120328A1 (en) * | 2006-11-20 | 2008-05-22 | Rexee, Inc. | Method of Performing a Weight-Based Search |
JP2010055431A (en) * | 2008-08-28 | 2010-03-11 | Toshiba Corp | Display processing apparatus and display processing method |
2014
- 2014-04-30 CN CN201410180892.8A patent/CN103942328B/en active Active
- 2014-04-30 CN CN201810095506.3A patent/CN108133058B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101059814A (en) * | 2006-04-17 | 2007-10-24 | 株式会社理光 | Image processing device and image processing method |
JP2007293602A (en) * | 2006-04-25 | 2007-11-08 | Nec Corp | System and method for retrieving image and program |
CN102521321A (en) * | 2011-12-02 | 2012-06-27 | 华中科技大学 | Video search method based on search term ambiguity and user preferences |
CN102682132A (en) * | 2012-05-18 | 2012-09-19 | 合一网络技术(北京)有限公司 | Method and system for searching information based on word frequency, play amount and creation time |
CN102982153A (en) * | 2012-11-29 | 2013-03-20 | 北京亿赞普网络技术有限公司 | Information retrieval method and device |
CN103686236A (en) * | 2013-11-19 | 2014-03-26 | 乐视致新电子科技(天津)有限公司 | Method and system for recommending video resource |
Non-Patent Citations (2)
Title |
---|
XIAOLI WEI et al.: "A Novel Algorithm for Video Retrieval Using Video Metadata Information", 2009 First International Workshop on Education Technology and Computer Science * |
ZHANG Huan: "Analysis and Research on Content-Based Video Retrieval Technology", China Master's Theses Full-text Database (Information Science and Technology) * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109286833A (en) * | 2018-09-30 | 2019-01-29 | 湖南机电职业技术学院 | A kind of information processing method and system applied in network direct broadcasting |
CN109614517A (en) * | 2018-12-04 | 2019-04-12 | 广州市百果园信息技术有限公司 | Classification method, device, equipment and the storage medium of video |
CN111079015A (en) * | 2019-12-17 | 2020-04-28 | 腾讯科技(深圳)有限公司 | Recommendation method and device, computer equipment and storage medium |
CN114697748B (en) * | 2020-12-25 | 2024-05-03 | 深圳Tcl新技术有限公司 | Video recommendation method and computer equipment based on voice recognition |
Also Published As
Publication number | Publication date |
---|---|
CN103942328B (en) | 2018-05-04 |
CN108133058B (en) | 2022-02-18 |
CN103942328A (en) | 2014-07-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103942328B (en) | A kind of video retrieval method and video-unit | |
CN107832437B (en) | Audio/video pushing method, device, equipment and storage medium | |
US8321456B2 (en) | Generating metadata for association with a collection of content items | |
CN101281540B (en) | Apparatus, method and computer program for processing information | |
CN109408665A (en) | A kind of information recommendation method and device, storage medium | |
CN108108821A (en) | Model training method and device | |
CN106919575B (en) | Application program searching method and device | |
CN105426514A (en) | Personalized mobile APP recommendation method | |
KR20130119246A (en) | Apparatus and method for recommending contents based sensibility | |
CN103886081A (en) | Information sending method and system | |
JP6370434B1 (en) | Company information provision system and program | |
CN106777282B (en) | The sort method and device of relevant search | |
CN105654198B (en) | Brand advertisement effect optimization method capable of realizing optimal threshold value selection | |
CN103309869A (en) | Method and system for recommending display keyword of data object | |
KR20120101233A (en) | Method for providing sentiment information and method and system for providing contents recommendation using sentiment information | |
JP2007041721A (en) | Information classifying method and program, device and recording medium | |
CN106709851A (en) | Big data retrieval method and apparatus | |
WO2010096986A1 (en) | Mobile search method and device | |
CN109508441A (en) | Data analysing method, device and electronic equipment | |
Qu et al. | A novel approach based on multi-view content analysis and semi-supervised enrichment for movie recommendation | |
JP5302614B2 (en) | Facility related information search database formation method and facility related information search system | |
CN110263207A (en) | Image search method, device, equipment and computer readable storage medium | |
CN116882414B (en) | Automatic comment generation method and related device based on large-scale language model | |
JP2016081265A (en) | Picture selection device, picture selection method, picture selection program, characteristic-amount generation device, characteristic-amount generation method and characteristic-amount generation program | |
CN111435514A (en) | Feature calculation method and device, sorting method and device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||