CN108681606B - Big data-based storage and extraction method for movie programs - Google Patents

Big data-based storage and extraction method for movie programs Download PDF

Info

Publication number
CN108681606B
CN108681606B CN201810512725.7A
Authority
CN
China
Prior art keywords
movie
stored
keyword
storage
ith
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810512725.7A
Other languages
Chinese (zh)
Other versions
CN108681606A (en)
Inventor
付小双
胡安静
宋艳敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Zhuoying Digital Technology Co ltd
Original Assignee
Guangzhou Qiangui Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Qiangui Software Technology Co ltd filed Critical Guangzhou Qiangui Software Technology Co ltd
Priority to CN201810512725.7A priority Critical patent/CN108681606B/en
Publication of CN108681606A publication Critical patent/CN108681606A/en
Application granted granted Critical
Publication of CN108681606B publication Critical patent/CN108681606B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a big data-based storage and extraction method for movie programs, comprising the following steps: dividing a storage space into a plurality of storage units, each of which is subdivided into a plurality of sub-storage units; extracting the text content and played pictures of a movie to be stored; comparing the movie text set of the movie to be stored with the set keywords of each movie type to obtain a keyword comparison set, from which a movie type association degree coefficient is computed; comparing the playing picture set of the movie to be stored with the set feature image set of each region to obtain a feature image comparison set, from which a region association degree coefficient is computed, and storing the movie to be stored in the corresponding sub-storage unit; and, for extraction, inputting a plurality of keywords of the movie to be extracted, comparing the input keywords with the stored keyword set of each movie, and extracting the movie with the highest matching degree. The invention improves storage efficiency, classification accuracy, and the accuracy and convenience of movie extraction.

Description

Big data-based storage and extraction method for movie programs
Technical Field
The invention belongs to the technical field of storage and extraction of movies, and relates to a storage and extraction method of movie programs based on big data.
Background
With the continuous improvement of people's living standards, the pressures of daily life and work have gradually increased. Movie programs came into being to help relieve this pressure: movies allow people to set aside their worries for a time, and thus occupy an important position in people's lives.
In the storage process, existing movie programs cannot be effectively and accurately classified according to movie type and shooting region, so movies of all types are stored together in one movie storage space and cannot be effectively distinguished, resulting in disordered storage and poor retrieval.
Disclosure of Invention
The invention aims to provide a big data-based storage and extraction method for movie programs, which addresses the problems that existing movies cannot be effectively and accurately stored according to movie type and shooting region, and that the extraction process suffers from low efficiency, poor accuracy, and long screening time.
The purpose of the invention can be realized by the following technical scheme:
a big data-based storage and extraction method for movie programs comprises the following steps:
s1, dividing the storage space into a plurality of storage units with the same storage space and a spare storage unit, wherein different storage units store different movie types, the same storage unit is divided into a plurality of sub-storage units, and the plurality of sub-storage units in the storage units are used for storing movies shot in different areas under the same movie type;
s2, acquiring a space occupation request sent by the movie storage, and extracting the text content and the playing picture of the movie needing to be stored;
s3, sequencing the movies to be stored according to the playing time sequence, and intercepting the text content played in the movies at a fixed time interval T1 to obtain a movie text set Ai(ai1,ai2,...,aij,...,ain) in which AiThe text set represented as the ith movie to be stored, aij represents the character content corresponding to the ith time slot of the movie to be stored, n represents the number of the time slots divided by the movie to be stored, and according to the movie character set, all keywords corresponding to the movie to be stored are extracted to form a keyword set Ci(ci1,ci2,...,cis,...,cim) wherein CiAll keyword sets of the ith movie to be stored are expressed, and m is expressed as the total number of keywords;
s4, sequencing the movies to be stored according to the playing time sequence, and intercepting the pictures played by the movies at a fixed time interval T2 to obtain a movie playing picture set Bi(bi1,bi2,...,bij,...,bin),BiSet of pictures represented as the ith movie to be stored, bij represents the picture of the ith movie to be stored in the jth time period;
s5, and respectively setting the movie characters in each time slot in the movie character set acquired in the step S3The keywords of the movie types are compared to obtain a keyword comparison set V'ik(v′ik1,v′ik2,...,v′ikh,...,v′ikr), wherein, V'ikSet of keyword comparisons, v ', expressed as ith movie to be stored and kth movie genre'ikh represents the number of the h keyword of the k film type in the ith film to be stored;
s6, according to the obtained keyword comparison set, counting the film type association coefficient of the film to be stored and each film type, extracting the film type with the maximum association coefficient in the comparison of the film to be stored and each film type, and storing the film type to be stored to the storage unit corresponding to the film type;
s7, comparing the movie pictures in each time slot in the movie playing picture set acquired in the step S4 with the set feature image sets of each region to obtain a feature image comparison set W'ig(w′ig1,w′ig2,...,w′igh,...,w′igl), wherein, W'igIs represented as a set of feature image comparisons of the ith film to be stored and the g th region, w'igh is expressed as the number of times of the h characteristic image of the g area appearing in the ith movie to be stored;
s8, according to the obtained feature image contrast set, counting the area relevance coefficient of the film to be stored and each area, extracting the area with the maximum relevance coefficient in the comparison of the film to be stored and each area, and storing the film to be stored to the sub-storage unit of the area corresponding to the storage unit;
s9, completing the storage of the movie to be stored, receiving a next space occupation request sent by the movie storage, extracting the text content played in the movie to be stored and the picture played by the movie, and executing the steps S3-S8;
s10, detecting the residual storage space of each sub-storage unit in each storage unit in real time, if the residual storage space of each sub-storage unit is smaller than the set standard storage space, dividing the spare storage unit to the sub-storage unit of which the residual storage space is smaller than the set standard storage space by using a fixed storage capacity until the residual storage space of the sub-storage unit is larger than the set standard storage space;
s11, when extracting a movie, inputting a plurality of keywords of the movie to be extracted, where the input keywords constitute a keyword set D (D1, D2, a. # ds, a. # dp) to be detected, and the specific gravities corresponding to each keyword are different, and are gd1+ gd2+ # + gds + # gdp ═ 1;
s12, and a keyword set C for associating each input keyword with each movie stored in the storage spaceiComparing to obtain a screening keyword set D'k(d′k1,d′k2,...,d′ks,...,d′kp),D′kIs expressed as a keyword comparison set, d ', of the movie to be extracted and the keyword of the k-th movie stored in the storage space'ks is the number of times that the s-th keyword of the movie to be extracted appears in the keyword of the k-th movie, the movie with the highest matching degree coefficient of the movie to be extracted is screened according to a matching degree calculation method, and the movie is output
Gk = gd1 × d'k1 + gd2 × d'k2 + ... + gds × d'ks + ... + gdp × d'kp
where Gk denotes the matching degree coefficient between the movie to be extracted and the k-th movie, and gds denotes the weight of the s-th keyword of the movie to be extracted.
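The extraction steps S11-S12 reduce to a weighted scoring pass over the stored movies. A sketch under the reconstructed sum Gk = Σs gds × d'ks (the equation image is not recoverable, so this form is inferred from the variable definitions; names and sample data are illustrative):

```python
# Score each stored movie: sum over query keywords of
# (keyword weight) x (occurrences in the movie's keyword set),
# then return the title with the highest matching degree.
def best_match(query, stored_movies):
    """query: list of (keyword, weight) pairs, weights summing to 1;
    stored_movies: dict mapping title -> list of stored keywords."""
    def score(keywords):
        return sum(w * keywords.count(kw) for kw, w in query)
    return max(stored_movies, key=lambda title: score(stored_movies[title]))

query = [("space", 0.6), ("robot", 0.4)]
stored = {"Movie A": ["space", "war", "space"], "Movie B": ["robot", "love"]}
print(best_match(query, stored))  # Movie A
```

Here "Movie A" wins because the heavily weighted keyword "space" occurs twice in its stored keyword set.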
Furthermore, the movie types comprise romance, comedy, action, science fiction, costume drama, martial arts, thriller, and crime; the different movie types are sorted according to a set movie type ordering and numbered 1, 2, ..., k, ..., x; each movie type is provided with a plurality of keywords forming a keyword set Vk = (vk1, vk2, ..., vkh, ..., vkr), where Vk denotes the keyword set corresponding to the k-th movie type and vkh denotes the h-th keyword of the k-th movie type; different keywords carry different weights qk1, qk2, ..., qkh, ..., qkr, with qk1 + qk2 + ... + qkh + ... + qkr = 1.
Further, the different regions comprise China, Korea, America, Japan, India, Thailand, France, and the UK; the regions are sorted from high to low by the total number of movies in each region and numbered 1, 2, ..., g, ...; each region is provided with a set feature image set Wg = (wg1, wg2, ..., wgh, ..., wgl), where Wg denotes the feature image set corresponding to the g-th region and wgh denotes the h-th feature image of the g-th region.
Further, the time interval T1 is equal to the time interval T2.
Further, the movie type association degree coefficient is
Qik = u × (qk1 × v'ik1 + qk2 × v'ik2 + ... + qkh × v'ikh + ... + qkr × v'ikr)
where Qik denotes the association degree coefficient between the i-th movie to be stored and the k-th movie type, u is a parameter factor taken as 0.192, qkh denotes the weight of the h-th keyword of the k-th movie type, and v'ikh denotes the number of times the h-th keyword of the k-th movie type appears in the i-th movie to be stored.
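Assuming the coefficient takes the weighted-sum form Qik = u × Σh qkh × v'ikh (a reconstruction from the variable definitions; only u = 0.192 is fixed by the text, and the other names and numbers below are illustrative):

```python
# Association degree between one movie and one movie type:
# parameter factor u times the weighted sum of keyword counts.
U = 0.192  # parameter factor given in the method

def type_association(weights, counts):
    """weights: q_kh for each keyword of the type (summing to 1);
    counts: v'_ikh occurrences of each keyword in the movie."""
    return U * sum(q * v for q, v in zip(weights, counts))

# Keywords weighted 0.5 / 0.3 / 0.2 appear 4 / 0 / 3 times in the movie.
print(round(type_association([0.5, 0.3, 0.2], [4, 0, 3]), 4))  # 0.4992
```

The region coefficient Eig of step S8 follows the same pattern with f = 0.615 and unweighted feature-image counts.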
Further, the region association degree coefficient is
Eig = f × (w'ig1 + w'ig2 + ... + w'igh + ... + w'igl)
where Eig denotes the region association degree coefficient between the i-th movie to be stored and the g-th region, f is a parameter factor taken as 0.615, and w'igh denotes the number of times the h-th feature image of the g-th region appears in the i-th movie to be stored.
The invention has the beneficial effects that:
the invention provides a method for storing and extracting movie programs based on big data, which divides the storage space into different storage spaces according to movie types and shooting areas, extracts the text content of the movie to be stored and the pictures in the movie in the storage process, screens out the types and areas to which the movie belongs by comparing the movie characters and the pictures in the movie with the set keywords of each movie type and the set characteristic image sets of each area, facilitates regular storage of the movie, improves the storage efficiency and the classification accuracy, screens out the best movie by inputting the keywords of the movie and calculating the matching degree of the keywords of the movie and the keywords of the movie in storage in the movie extraction process, improves the convenience of storing and extracting the movie, and has simple operation, high reliability.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention relates to a big data-based storage and extraction method of movie programs, which comprises the following steps:
s1, the storage space is divided into a plurality of storage units with the same storage space and a spare storage unit, different storage units store different movie types, the same storage unit is divided into a plurality of sub-storage units, the plurality of sub-storage units in the storage units are used for storing movies shot in different areas under the same movie type, the movie types comprise love, comedy, action, science fiction, ancient costume, martial art knight, thriller and crime and the like, the different movie types are sorted according to the set movie type sorting sequence, the sequence is respectively 1,2, ak(vk1,vk2,...,vkh,...,vkr) wherein VkRepresenting a set of keywords, v, corresponding to the kth film typekh represents the h-th keyword corresponding to the k-th movie type, different keywords correspond to different weights, and the weights are qk1,qk2,...,qkh,...,qkr, and qk1+qk2+...+qkh+...+qkr is 1; the different regions comprise China, Korea, America, Japan, India, Thailand, France and UK, the different regions are sorted from high to low according to the total number of the movies in each region, the number of the movies is 1,2, theg(wg1,wg2,...,wgh,...,wgl) wherein WgRepresenting the g-th region corresponding feature image set, wgh represents the h image picture corresponding to the g area;
s2, acquiring a space occupation request sent by the movie storage, and extracting the text content and the playing picture of the movie needing to be stored;
s3, sequencing the movies to be stored according to the playing time sequence, and intercepting the text content played in the movies at a fixed time interval T1 to obtain a movie text set Ai(ai1,ai2,...,aij,...,ain) in which AiThe text set represented as the ith movie to be stored, aij represents the character content corresponding to the ith time slot of the movie to be stored, n represents the number of the time slots divided by the movie to be stored, and according to the movie character set, all keywords corresponding to the movie to be stored are extracted to form a keyword set Ci(ci1,ci2,...,cis,...,cim) wherein CiAll keyword sets of the ith movie to be stored are expressed, and m is expressed as the total number of keywords;
s4, sequencing the movies to be stored according to the playing time sequence, and intercepting the pictures played by the movies at a fixed time interval T2 to obtain a movie playing picture set Bi(bi1,bi2,...,bij,...,bin),BiSet of pictures represented as the ith movie to be stored, bij represents the picture of the ith movie to be stored in the jth time period, and the time interval T1 is equal to the time interval T2;
s5, collecting the electricity in each time slot in the movie character set acquired in the step S3The film characters are respectively compared with the set keywords of each film type to obtain a keyword comparison set V'ik(v′ik1,v′ik2,...,v′ikh,...,v′ikr), wherein, V'ikSet of keyword comparisons, v ', expressed as ith movie to be stored and kth movie genre'ikh represents the number of the h keyword of the k film type in the ith film to be stored;
s6, according to the obtained keyword comparison set, counting the film type association degree coefficient of the film to be stored and each film type
Qik = u × (qk1 × v'ik1 + qk2 × v'ik2 + ... + qkh × v'ikh + ... + qkr × v'ikr)
where Qik denotes the association degree coefficient between the i-th movie to be stored and the k-th movie type, u is a parameter factor taken as 0.192, qkh denotes the weight of the h-th keyword of the k-th movie type, and v'ikh denotes the number of times the h-th keyword of the k-th movie type appears in the i-th movie to be stored; the movie type with the largest association degree coefficient is extracted, and the i-th movie to be stored is stored in the storage unit corresponding to that movie type;
s7, comparing the movie pictures in each time slot in the movie playing picture set acquired in the step S4 with the set feature image sets of each region to obtain a feature image comparison set W'ig(w′ig1,w′ig2,...,w′igh,...,w′igl), wherein, W'igIs represented as a set of feature image comparisons of the ith film to be stored and the g th region, w'igh is expressed as the number of times of the h characteristic image of the g area appearing in the ith movie to be stored;
s8, according to the obtained feature image contrast set, counting the area association degree coefficient of the film to be stored and each area
Eig = f × (w'ig1 + w'ig2 + ... + w'igh + ... + w'igl)
where Eig denotes the region association degree coefficient between the i-th movie to be stored and the g-th region, f is a parameter factor taken as 0.615, and w'igh denotes the number of times the h-th feature image of the g-th region appears in the i-th movie to be stored; the region with the largest association degree coefficient is extracted, and the i-th movie to be stored is stored in the sub-storage unit of that region within the corresponding storage unit;
s9, completing the storage of the movie to be stored, receiving a next space occupation request sent by the movie storage, extracting the text content played in the movie to be stored and the picture played by the movie, and executing the steps S3-S8;
s10, detecting the residual storage space of each sub-storage unit in each storage unit in real time, if the residual storage space of each sub-storage unit is smaller than the set standard storage space, dividing the spare storage unit to the sub-storage unit of which the residual storage space is smaller than the set standard storage space by using a fixed storage capacity until the residual storage space of the sub-storage unit is larger than the set standard storage space;
s11, when extracting a movie, inputting a plurality of keywords of the movie to be extracted, where the input keywords constitute a keyword set D (D1, D2, a. # ds, a. # dp) to be detected, and the specific gravities corresponding to each keyword are different, and are gd1+ gd2+ # + gds + # gdp ═ 1;
s12, and a keyword set C for associating each input keyword with each movie stored in the storage spaceiComparing to obtain a screening keyword set D'k(d′k1,d′k2,...,d′ks,...,d′kp),D′kIs expressed as a keyword comparison set, d ', of the movie to be extracted and the keyword of the k-th movie stored in the storage space'ks is the number of times that the s-th keyword of the movie to be extracted appears in the keyword of the k-th movie, the movie with the highest matching degree coefficient of the movie to be extracted is screened according to a matching degree calculation method, and the movie is output
Gk = gd1 × d'k1 + gd2 × d'k2 + ... + gds × d'ks + ... + gdp × d'kp
where Gk denotes the matching degree coefficient between the movie to be extracted and the k-th movie, and gds denotes the weight of the s-th keyword of the movie to be extracted.
The invention provides a big data-based method for storing and extracting movie programs. The storage space is divided into separate storage areas by movie type and shooting region. During storage, the text content and pictures of the movie to be stored are extracted, and the type and region to which the movie belongs are screened out by comparing them with the set keywords of each movie type and the set feature image sets of each region, which facilitates orderly storage and improves storage efficiency and classification accuracy. During extraction, the user inputs keywords of the desired movie, and the best-matching movie is screened out by calculating the matching degree between the input keywords and the keywords of the stored movies, which improves the convenience of storing and extracting movies. The method is simple to operate and highly reliable.
The foregoing is merely exemplary and illustrative of the principles of the present invention; various modifications, additions, and substitutions of the specific embodiments described herein may be made by those skilled in the art without departing from the principles of the present invention or exceeding the scope set forth in the claims.

Claims (6)

1. A big data-based storage and extraction method for movie programs is characterized by comprising the following steps:
s1, dividing the storage space into a plurality of storage units with the same storage space and a spare storage unit, wherein different storage units store different movie types, the same storage unit is divided into a plurality of sub-storage units, and the plurality of sub-storage units in the storage units are used for storing movies shot in different areas under the same movie type;
s2, acquiring a space occupation request sent by the movie storage, and extracting the text content and the playing picture of the movie needing to be stored;
s3, sorting the movies to be stored according to playing time, and intercepting the text played in each movie at a fixed time interval T1 to obtain a movie text set Ai (ai1, ai2, ..., aij, ..., ain), wherein Ai represents the text set of the i-th movie to be stored, aij represents the text content of the i-th movie to be stored in the j-th time period, and n represents the number of time periods into which the movie to be stored is divided; according to the movie text set, all keywords corresponding to the movie to be stored are extracted to form a keyword set Ci (ci1, ci2, ..., cis, ..., cim), wherein Ci represents the set of all keywords of the i-th movie to be stored and m represents the total number of keywords;
s4, sorting the movies to be stored according to playing time, and intercepting the pictures played by each movie at a fixed time interval T2 to obtain a movie playing picture set Bi (bi1, bi2, ..., bij, ..., bin), wherein Bi represents the picture set of the i-th movie to be stored and bij represents the picture of the i-th movie to be stored in the j-th time period;
s5, comparing the movie text in each time period of the movie text set acquired in step S3 with the set keywords of each movie type, to obtain a keyword comparison set V'ik (v'ik1, v'ik2, ..., v'ikh, ..., v'ikr), wherein V'ik is the keyword comparison set of the i-th movie to be stored and the k-th movie type, and v'ikh is the number of times the h-th keyword of the k-th movie type appears in the i-th movie to be stored;
s6, computing, according to the obtained keyword comparison sets, the movie type association degree coefficient between the movie to be stored and each movie type, extracting the movie type with the largest association degree coefficient, and storing the movie to be stored in the storage unit corresponding to that movie type;
s7, comparing the movie pictures in each time period of the movie playing picture set acquired in step S4 with the set feature image set of each region, to obtain a feature image comparison set W'ig (w'ig1, w'ig2, ..., w'igh, ..., w'igl), wherein W'ig represents the feature image comparison set of the i-th movie to be stored and the g-th region, and w'igh represents the number of times the h-th feature image of the g-th region appears in the i-th movie to be stored;
s8, computing, according to the obtained feature image comparison sets, the region association degree coefficient between the movie to be stored and each region, extracting the region with the largest association degree coefficient, and storing the movie to be stored in the sub-storage unit of that region within the corresponding storage unit;
s9, upon completing storage of the movie, receiving the next space occupation request sent by the movie storage, extracting the text content and played pictures of the next movie to be stored, and executing steps S3-S8;
s10, detecting the remaining storage space of each sub-storage unit in each storage unit in real time; if the remaining storage space of a sub-storage unit is smaller than the set standard storage space, allocating fixed-capacity portions of the spare storage unit to that sub-storage unit until its remaining storage space is larger than the set standard storage space;
s11, when extracting a movie, inputting a plurality of keywords of the movie to be extracted, the input keywords constituting a keyword set to be detected D (d1, d2, ..., ds, ..., dp); each keyword carries a different weight gd1, gd2, ..., gds, ..., gdp, with gd1 + gd2 + ... + gds + ... + gdp = 1;
s12, comparing the input keywords with the keyword set Ci of each movie stored in the storage space, to obtain a screening keyword set D'k (d'k1, d'k2, ..., d'ks, ..., d'kp), wherein D'k represents the keyword comparison set of the movie to be extracted and the k-th movie stored in the storage space, and d'ks represents the number of times the s-th keyword of the movie to be extracted appears among the keywords of the k-th movie; the movie with the highest matching degree coefficient is screened according to the matching degree calculation and output:
Gk = gd1 × d'k1 + gd2 × d'k2 + ... + gds × d'ks + ... + gdp × d'kp
wherein Gk represents the matching degree coefficient between the movie to be extracted and the k-th movie, and gds represents the weight of the s-th keyword of the movie to be extracted.
2. The big data-based storage and extraction method for movie programs according to claim 1, wherein: the movie types comprise romance, comedy, action, science fiction, costume drama, martial arts, thriller, and crime; the movie types are sorted according to a set movie type ordering and numbered 1, 2, ..., k, ..., x respectively; each movie type is provided with a plurality of keywords forming a keyword set Vk (vk1, vk2, ..., vkh, ..., vkr), wherein Vk represents the keyword set corresponding to the k-th movie type and vkh represents the h-th keyword of the k-th movie type; different keywords correspond to different weights qk1, qk2, ..., qkh, ..., qkr, and qk1 + qk2 + ... + qkh + ... + qkr = 1.
3. The big data-based storage and extraction method for movie programs according to claim 1, wherein: the regions comprise China, Korea, America, Japan, India, Thailand, France, and the UK; the regions are sorted from high to low by the total number of movies in each region and numbered 1, 2, ..., g, ... respectively; each region is provided with a set feature image set Wg (wg1, wg2, ..., wgh, ..., wgl), wherein Wg represents the feature image set corresponding to the g-th region and wgh represents the h-th feature image of the g-th region.
4. The big data-based storage and extraction method for movie programs according to claim 1, wherein: the time interval T1 is equal to the time interval T2.
5. The big data-based storage and extraction method for movie programs according to claim 1, wherein: the movie type association degree coefficient is
Qik = u × (qk1 × v'ik1 + qk2 × v'ik2 + ... + qkh × v'ikh + ... + qkr × v'ikr)
wherein Qik represents the association degree coefficient between the i-th movie to be stored and the k-th movie type, u is a parameter factor taken as 0.192, qkh represents the weight of the h-th keyword of the k-th movie type, and v'ikh represents the number of times the h-th keyword of the k-th movie type appears in the i-th movie to be stored.
6. The big data-based storage and extraction method for movie programs according to claim 1, wherein: the region association degree coefficient is
Eig = f × (w'ig1 + w'ig2 + ... + w'igh + ... + w'igl)
wherein Eig represents the region association degree coefficient between the i-th movie to be stored and the g-th region, f is a parameter factor taken as 0.615, and w'igh represents the number of times the h-th feature image of the g-th region appears in the i-th movie to be stored.
CN201810512725.7A 2018-05-25 2018-05-25 Big data-based storage and extraction method for movie programs Expired - Fee Related CN108681606B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810512725.7A CN108681606B (en) 2018-05-25 2018-05-25 Big data-based storage and extraction method for movie programs

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810512725.7A CN108681606B (en) 2018-05-25 2018-05-25 Big data-based storage and extraction method for movie programs

Publications (2)

Publication Number Publication Date
CN108681606A CN108681606A (en) 2018-10-19
CN108681606B true CN108681606B (en) 2021-10-26

Family

ID=63808384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810512725.7A Expired - Fee Related CN108681606B (en) 2018-05-25 2018-05-25 Big data-based storage and extraction method for movie programs

Country Status (1)

Country Link
CN (1) CN108681606B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287161B (en) * 2019-07-02 2022-09-23 北京字节跳动网络技术有限公司 Image processing method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657376A (en) * 2013-11-20 2015-05-27 航天信息股份有限公司 Searching method and searching device for video programs based on program relationship
CN105611331A (en) * 2015-12-28 2016-05-25 康佳集团股份有限公司 Video pushing method and system based on smart television
CN106600343A (en) * 2016-12-30 2017-04-26 中广热点云科技有限公司 Method and system for managing online video advertisement associated with video content
CN106792018A (en) * 2016-12-12 2017-05-31 四川长虹电器股份有限公司 The low-end set top boxes system of integrated YouTube applications and the method for playing YouTube web videos

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011116422A1 (en) * 2010-03-24 2011-09-29 Annaburne Pty Ltd Method of searching recorded media content

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657376A (en) * 2013-11-20 2015-05-27 航天信息股份有限公司 Searching method and searching device for video programs based on program relationship
CN105611331A (en) * 2015-12-28 2016-05-25 康佳集团股份有限公司 Video pushing method and system based on smart television
CN106792018A (en) * 2016-12-12 2017-05-31 四川长虹电器股份有限公司 The low-end set top boxes system of integrated YouTube applications and the method for playing YouTube web videos
CN106600343A (en) * 2016-12-30 2017-04-26 中广热点云科技有限公司 Method and system for managing online video advertisement associated with video content

Also Published As

Publication number Publication date
CN108681606A (en) 2018-10-19

Similar Documents

Publication Publication Date Title
CN104050247B (en) The method for realizing massive video quick-searching
Zhou et al. Movie genre classification via scene categorization
US6094653A (en) Document classification method and apparatus therefor
US8718386B2 (en) Adaptive event timeline in consumer image collections
CN110427895A (en) A kind of video content similarity method of discrimination based on computer vision and system
WO2018137126A1 (en) Method and device for generating static video abstract
JP2012530287A (en) Method and apparatus for selecting representative images
CN103186538A (en) Image classification method, image classification device, image retrieval method and image retrieval device
CN106686460B (en) Video program recommendation method and video program recommendation device
WO2017075912A1 (en) News events extracting method and system
CN103226585A (en) Self-adaptation Hash rearrangement method for image retrieval
CN104284252A (en) Method for generating electronic photo album automatically
Sony et al. Video summarization by clustering using euclidean distance
CN105488212B (en) A kind of data quality checking method and device of repeated data
CN108681606B (en) Big data-based storage and extraction method for movie programs
Wang et al. Video inter-frame forgery identification based on optical flow consistency
CN111061894A (en) Processing method and device of peer data, electronic equipment and storage medium
CN109918529A (en) A kind of image search method based on the quantization of tree-like cluster vectors
CN107423297A (en) The screening technique and device of picture
CN108763465B (en) Video storage allocation method based on big data
CN102306179A (en) Image content retrieval method based on hierarchical color distribution descriptor
JP6062981B2 (en) Video search apparatus, method, and program
CN105843930A (en) Video search method and device
Marsala et al. High scale video mining with forests of fuzzy decision trees
GB2552969A (en) Image processing system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220415

Address after: 361000 room 602g, zone B, No. 178, Xinfeng Road, Huizhi space, torch high tech Zone, Xiamen, Fujian

Patentee after: Xiamen Zhuoying Digital Technology Co.,Ltd.

Address before: 511457 room 701, No.4 zhudian Road, Nansha District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU QIANGUI SOFTWARE TECHNOLOGY CO.,LTD.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20211026