CN108090497A - Video classification method, apparatus, storage medium and electronic device - Google Patents

Video classification method, apparatus, storage medium and electronic device

Info

Publication number
CN108090497A
CN108090497A (application CN201711464317.0A; granted as CN108090497B)
Authority
CN
China
Prior art keywords
feature point
characteristic
video file
point set
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711464317.0A
Other languages
Chinese (zh)
Other versions
CN108090497B (en)
Inventor
陈岩
刘耀勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711464317.0A
Publication of CN108090497A
Application granted
Publication of CN108090497B
Status: Expired - Fee Related

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques

Abstract

This application discloses a video classification method, apparatus, storage medium and electronic device. The method includes: obtaining multiple image frames of a video file; extracting the feature points of each frame to obtain a feature point set; selecting the feature points related to scene features from the feature point set to obtain a first feature point set, and selecting the feature points related to object features from the feature point set to obtain a second feature point set; and classifying the video file with a classification model according to the first feature point set and the second feature point set. Two feature point sets, one for scene features and one for object features, are obtained from the video file and input into the classification model, so the classification is more accurate.

Description

Video classification method, apparatus, storage medium and electronic device
Technical field
The application belongs to the field of communication technology, and in particular relates to a video classification method, apparatus, storage medium and electronic device.
Background technology
With the rapid growth in the number of video files, people generally perform a preliminary screening by category before watching, and then select an interesting video file from the files in the corresponding category. Video files therefore need to be classified effectively, so that each file is presented under a suitable category.
In existing video classification, a class label is first set for a video file, and the file is then assigned to the corresponding video category according to that label. However, the class labels may be set inaccurately or incompletely, which makes the classification inaccurate.
Summary of the invention
The application provides a video classification method, apparatus, storage medium and electronic device that classify video files automatically and reasonably, improving the accuracy of video classification.
In a first aspect, an embodiment of the present application provides a video classification method applied to an electronic device, the method including:
obtaining multiple image frames of a video file;
extracting the feature points of each frame in the multiple image frames to obtain a feature point set;
selecting the feature points related to scene features from the feature point set to obtain a first feature point set, and selecting the feature points related to object features from the feature point set to obtain a second feature point set;
classifying the video file with a classification model according to the first feature point set and the second feature point set.
In a second aspect, an embodiment of the present application provides a video classification apparatus applied to an electronic device, the apparatus including:
an image acquisition module for obtaining multiple image frames of a video file;
a feature point set acquisition module for extracting the feature points of each frame in the multiple image frames to obtain a feature point set;
a feature point set selection module for selecting the feature points related to scene features from the feature point set to obtain a first feature point set, and selecting the feature points related to object features from the feature point set to obtain a second feature point set;
a processing module for classifying the video file with a classification model according to the first feature point set and the second feature point set.
In a third aspect, an embodiment of the present application provides a storage medium on which a computer program is stored; when the computer program runs on a computer, the computer performs the above video classification method.
In a fourth aspect, an embodiment of the present application provides an electronic device including a processor and a memory storing a computer program; the processor performs the above video classification method by calling the computer program.
The video classification method, apparatus, storage medium and electronic device provided by the embodiments of the present application obtain multiple image frames of a video file; extract the feature points of each frame in the multiple image frames to obtain a feature point set; select the feature points related to scene features from the feature point set to obtain a first feature point set, and select the feature points related to object features from the feature point set to obtain a second feature point set; and classify the video file with a classification model according to the first feature point set and the second feature point set. Two feature point sets, one for scene features and one for object features, are obtained from the video file and input into the classification model for classification, so the classification is more accurate.
Description of the drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed in describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application scenario of the video classification apparatus provided by an embodiment of the present application.
Fig. 2 is a first flow diagram of the video classification method provided by an embodiment of the present application.
Fig. 3 is a second flow diagram of the video classification method provided by an embodiment of the present application.
Fig. 4 is a third flow diagram of the video classification method provided by an embodiment of the present application.
Fig. 5 is a fourth flow diagram of the video classification method provided by an embodiment of the present application.
Fig. 6 is a fifth flow diagram of the video classification method provided by an embodiment of the present application.
Fig. 7 is a sixth flow diagram of the video classification method provided by an embodiment of the present application.
Fig. 8 is a seventh flow diagram of the video classification method provided by an embodiment of the present application.
Fig. 9 is a first structural diagram of the video classification apparatus provided by an embodiment of the present application.
Fig. 10 is a second structural diagram of the video classification apparatus provided by an embodiment of the present application.
Fig. 11 is a third structural diagram of the video classification apparatus provided by an embodiment of the present application.
Fig. 12 is a structural diagram of an electronic device provided by an embodiment of the present application.
Fig. 13 is another structural diagram of an electronic device provided by an embodiment of the present application.
Detailed description of the embodiments
Referring to the drawings, in which identical reference numbers represent identical components, the principles of the application are illustrated as implemented in a suitable computing environment. The following description is based on the illustrated specific embodiments of the application and should not be regarded as limiting other specific embodiments not detailed herein.
In the following description, specific embodiments of the application are described with reference to steps and symbols of operations performed by one or more computers, unless otherwise stated. These steps and operations are therefore referred to as being computer-executed; computer execution as used herein includes manipulation, by a computer processing unit, of electronic signals that represent data in a structured form. This manipulation transforms the data, or maintains it at locations in the computer's memory system, which reconfigures or otherwise alters the operation of the computer in a manner well known to those skilled in the art. The data structures in which the data is maintained are physical locations of the memory that have particular properties defined by the data format. However, while the principles of the application are described in the foregoing text, this is not meant as a limitation; those skilled in the art will appreciate that various steps and operations described below may also be implemented in hardware.
The term "module" as used herein may be regarded as a software object executed on the computing system. The different components, modules, engines and services described herein may be regarded as implementation objects on the computing system. The apparatus and method described herein may be implemented in software, or alternatively in hardware; both fall within the protection scope of the application.
The terms "first", "second" and "third" in the application are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "including" and "having", and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device containing a series of steps or modules is not limited to the listed steps or modules; some embodiments further include steps or modules that are not listed, or further include other steps or modules inherent to these processes, methods, products or devices.
Reference herein to an "embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor to separate or alternative embodiments that are mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
Referring to Fig. 1, Fig. 1 is a schematic diagram of an application scenario of the video classification apparatus provided by an embodiment of the present application. For example, the video classification apparatus obtains multiple image frames of a video file; it then extracts the feature points of each frame to obtain a feature point set; it then selects the feature points related to scene features from the feature point set to obtain a first feature point set, and selects the feature points related to object features from the feature point set to obtain a second feature point set; the first feature point set and the second feature point set are input into a classification model, which classifies the video file according to them. For example, the video file can be assigned to the category of its starring actor, or to a sports video category.
An embodiment of the present application provides a video classification method. The executing entity of the method may be the video classification apparatus provided by the embodiment, or an electronic device integrating the apparatus; the apparatus may be implemented in hardware or in software. It can be understood that the executing entity of the embodiment may be a terminal device such as a smartphone or a tablet computer.
The embodiment is described from the perspective of the video classification apparatus, which may specifically be integrated in an electronic device. The video classification method includes: obtaining multiple image frames of a video file; extracting the feature points of each frame in the multiple image frames to obtain a feature point set; selecting the feature points related to scene features from the feature point set to obtain a first feature point set, and selecting the feature points related to object features from the feature point set to obtain a second feature point set; and classifying the video file with a classification model according to the first feature point set and the second feature point set.
Referring to Fig. 2, Fig. 2 is a first flow diagram of the video classification method provided by an embodiment of the present application. The video classification method provided by the embodiment is applied to an electronic device, and the specific flow can be as follows:
Step 101: obtain multiple image frames of a video file.
A video file is a multimedia file containing synchronized audio and video information, and is one of the important kinds of multimedia content on the Internet. Video files come in multiple formats, such as AVI and MPEG.
Consecutive frames of the video file can be obtained, or frames can be sampled at a preset frequency, for example one frame every 1 second; other frequencies are of course possible, such as one frame every 2 seconds, every 1 minute or every 5 minutes. Alternatively, a run of consecutive frames can be obtained every 1 minute, every 5 minutes or every 10 minutes, and the several runs merged into the final set of frames. Consecutive frames are correlated in the time dimension, so they are more accurate as reference data for video classification.
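The sampling schedule above can be sketched as a small helper that maps a sampling interval to frame indices. This is a minimal sketch, not part of the patent; the frame rate and interval values are illustrative assumptions.

```python
def sample_frame_indices(total_frames, fps, interval_seconds):
    """Return the indices of frames sampled once every `interval_seconds`."""
    step = int(fps * interval_seconds)  # frames between two consecutive samples
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, sampled once per second, yields 10 frames.
indices = sample_frame_indices(total_frames=300, fps=30, interval_seconds=1)
```

The same helper covers the other preset frequencies mentioned in the text by changing `interval_seconds`.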
Referring to Fig. 3, Fig. 3 is a second flow diagram of the video classification method provided by an embodiment of the present application. The specific flow of the step of obtaining multiple image frames of a video file can be as follows:
Step 1011: divide the video file into multiple sub-video files of equal playing duration.
The total playing duration of the video file is obtained first, followed by the number of sub-video files to divide into. The total duration divided by this number gives the playing duration of each sub-video file, from which the start time and end time of each sub-video file are obtained. The sub-video files are then extracted at the corresponding start and end times, dividing the video file into multiple sub-video files of equal playing duration.
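The duration arithmetic in this step can be sketched as follows; the function name and concrete durations are illustrative, not from the patent.

```python
def split_into_subvideos(total_duration, num_parts):
    """Divide a playing duration (seconds) into equal (start, end) intervals."""
    part = total_duration / num_parts  # sub playing duration of each sub-video
    return [(i * part, (i + 1) * part) for i in range(num_parts)]

# A 10-minute (600 s) video divided into 4 sub-videos of 150 s each.
segments = split_into_subvideos(total_duration=600.0, num_parts=4)
```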
Step 1012: obtain the multiple image frames of the initial time period of each sub-video file.
Only the frames of the initial time period of each sub-video file are obtained. Frames of other time periods could of course be obtained, but the frames of the initial time period are easy to obtain and do not raise the problem of an insufficient frame count.
Step 102: extract the feature points of each frame in the multiple image frames to obtain a feature point set.
For the multiple frames obtained from the video file, one or more corresponding feature points are extracted from each frame by a feature point extraction algorithm, yielding the feature point set of the multiple frames.
The feature point extraction algorithm can be the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, or the like.
Step 103: select the feature points related to scene features from the feature point set to obtain a first feature point set, and select the feature points related to object features from the feature point set to obtain a second feature point set.
After the feature point set of the multiple frames is obtained, the feature points related to scene features are selected from it, and all of them together form the first feature point set. The feature points related to object features are likewise selected from it, and all of them together form the second feature point set.
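The partition in this step can be sketched as below. The patent does not specify how a feature point is recognized as scene- or object-related, so each point is represented here as a (kind, descriptor) pair with the kind already assigned; that tagging is an assumption for illustration.

```python
def partition_feature_points(feature_points):
    """Split a feature point set into scene-related and object-related subsets."""
    first_set = [p for p in feature_points if p[0] == "scene"]    # scene features
    second_set = [p for p in feature_points if p[0] == "object"]  # object features
    return first_set, second_set

points = [("scene", "court"), ("object", "ball"), ("scene", "sky")]
scene_set, object_set = partition_feature_points(points)
```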
Step 104: the classification model classifies the video file according to the first feature point set and the second feature point set.
The feature point set of the multiple frames is input into the classification model, which classifies the video file according to it. Alternatively, the first feature point set and second feature point set of the multiple frames can be input, and the classification model classifies the video according to both. Because a video file is dynamic and adjacent frames are correlated with each other, obtaining adjacent frames can improve the accuracy of video classification. Moreover, the trend of change of the feature points can be obtained from adjacent frames, which further improves the accuracy; for example, a sports video file has a corresponding pattern of motion feature point changes.
Referring to Fig. 4, Fig. 4 is a third flow diagram of the video classification method provided by an embodiment of the present application. The specific flow of the step in which the classification model classifies the video file according to the first feature point set and the second feature point set can be as follows:
Step 1041: obtain the multiple classification labels of the classification model, each classification label corresponding to a video category.
The multiple classification labels of the classification model, i.e. the categories into which the classification model finally divides video files, are obtained first. The classification labels can include film video, sports video, education video, funny video, and so on; they can also include basketball video, football video, sprint video, and so on. The classification labels can be multi-level: a broader first-level classification label contains multiple second-level classification labels. For example, the first-level classification label is sports video, and a second-level classification label is basketball video. Labels of more levels, such as three or four, can of course also be included.
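A two-level label hierarchy like the one described can be represented as a simple mapping; the concrete label names below mirror the examples in the text, and the structure itself is an illustrative assumption.

```python
# First-level labels map to their second-level labels (empty list = no sublabels).
label_tree = {
    "film video": [],
    "sports video": ["basketball video", "football video", "sprint video"],
    "education video": [],
    "funny video": [],
}

def flatten_labels(tree):
    """List every label the classification model can emit, both levels."""
    labels = []
    for first_level, second_levels in tree.items():
        labels.append(first_level)
        labels.extend(second_levels)
    return labels

all_labels = flatten_labels(label_tree)
```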
Step 1042: input the first feature point set and the second feature point set into the classification model to obtain the probability value of the video file matching each classification label.
The first feature point set and the second feature point set are input into the classification model, which predicts from the input information and obtains the probability value of the video file matching each classification label.
Step 1043: obtain the classification labels whose probability value is greater than a preset probability value as target classification labels.
The probability value of the video file matching each classification label is compared in turn with the preset probability value, and the classification labels whose probability exceeds the preset value are selected.
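The threshold comparison in this step can be sketched as a one-line filter; the probability values and the 0.5 threshold are illustrative assumptions.

```python
def select_target_labels(probabilities, threshold):
    """Keep the classification labels whose matching probability exceeds the preset value."""
    return [label for label, p in probabilities.items() if p > threshold]

probs = {"sports video": 0.82, "film video": 0.35, "funny video": 0.61}
targets = select_target_labels(probs, threshold=0.5)
```

Note that more than one label can pass the threshold, which is why the next step allows multiple target classification labels.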
Step 1044: classify the video file according to the target classification labels.
The video file is classified according to the target classification labels. If a target classification label is the sports video label, the video file is classified as a sports video; specifically, it can be stored in the storage space allocated to sports videos. A set can be created for each category of video file, containing the storage link addresses of the video files of that category; when video files are browsed, the multiple videos are displayed by category in different regions, for example in different folders, with one folder per category. Note that there can be multiple target classification labels; correspondingly, the video file can be divided into multiple categories, each category corresponding to a set whose storage link addresses cover all video files of that category. The physical storage does not need to be colocated, so no space is wasted.
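The per-category sets of storage link addresses described above can be sketched as an inverted index; the paths and label names are illustrative assumptions.

```python
def index_by_category(video_files):
    """Map each target label to the set of storage link addresses of its videos.

    `video_files` maps a storage link address to that file's target labels;
    one file may appear under several categories, but its bytes are stored once.
    """
    index = {}
    for link, labels in video_files.items():
        for label in labels:
            index.setdefault(label, set()).add(link)
    return index

files = {
    "/videos/a.mp4": ["sports video", "basketball video"],
    "/videos/b.mp4": ["film video"],
}
category_index = index_by_category(files)
```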
Referring to Fig. 5, Fig. 5 is a fourth flow diagram of the video classification method provided by an embodiment of the present application. In the video classification method provided by the embodiment, after the step in which the classification model classifies the video file according to the first feature point set and the second feature point set, the method further includes:
Step 1051: identify the face images in each frame of the multiple image frames to obtain a face image set.
Face recognition is performed on each frame by face recognition technology to obtain the corresponding face images. The face images of the multiple frames are then merged to obtain a face image set, which includes all face images appearing in the multiple frames.
Step 1052: calculate the frequency with which each face image appears in the face image set, and determine the face image with the highest frequency of occurrence as the target face image.
The frequency with which each face image appears in the face image set is calculated, i.e. the number of times each face image appears across the multiple frames, and the face image with the highest frequency of occurrence is then determined as the target face image.
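The frequency count in this step can be sketched with a counter. Representing each detected face by a string identifier is a simplification: in a real system the identifiers would come from matching face images across frames.

```python
from collections import Counter

def most_frequent_face(face_ids):
    """Return the face that appears in the most frames, with its count."""
    (face, count), = Counter(face_ids).most_common(1)
    return face, count

# One identifier per detected face per frame; "A" appears most often.
faces = ["A", "B", "A", "C", "A", "B"]
target_face, appearances = most_frequent_face(faces)
```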
Step 1053: recognize the target face image to obtain the face label information of the target face image.
The target face image is recognized and the corresponding face label information, such as the name matching the target face image, is obtained.
Step 1054: add the face label information to the filename of the video file.
The face label information is added to the filename of the video file. For example, if the target face image is Liu Dehua, face label information such as the name Liu Dehua is added to the filename of the video file, and the file is correspondingly categorized as a Liu Dehua film, i.e. classified by starring actor.
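The filename update in this step can be sketched as below; inserting the label before the extension, and the example path, are illustrative assumptions (the patent only says the label is added to the filename).

```python
import os

def add_label_to_filename(path, label):
    """Insert a face label into a video file's name, before the extension."""
    root, ext = os.path.splitext(path)
    return f"{root}_{label}{ext}"

new_name = add_label_to_filename("/videos/movie01.mp4", "Liu Dehua")
```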
Referring to Fig. 6, Fig. 6 is a fifth flow diagram of the video classification method provided by an embodiment of the present application. In the video classification method provided by the embodiment, after the step in which the classification model classifies the video file according to the first feature point set and the second feature point set, the method further includes:
Step 1055: obtain first label information according to the first feature point set.
Step 1056: obtain second label information according to the second feature point set.
Step 1057: add the first label information and the second label information to the filename of the video file.
The first label information is obtained according to the first feature point set corresponding to the scene features: if the scene feature is the Lakers' basketball court, the first label information is "Lakers home court". The second label information is obtained according to the second feature points corresponding to the objects: for example, if the object feature point is an Autobot, the second label information can be "Autobot".
Referring to Fig. 7, Fig. 7 is a sixth flow diagram of the video classification method provided by an embodiment of the present application. The specific flow of the classification method provided by the embodiment can be as follows:
Step 201: obtain multiple image frames of a video file.
Step 202: extract the feature points of each frame in the multiple image frames to obtain multiple sub-feature point sets corresponding to the multiple frames.
Step 203: calculate the number of times each feature point appears across the multiple sub-feature point sets.
Step 204: merge the multiple sub-feature point sets into a feature point set, and obtain the weight value of each feature point according to the number of times it appears.
Step 205: select the feature points related to scene features from the feature point set to obtain a first feature point set, and select the feature points related to object features from the feature point set to obtain a second feature point set.
Step 206: multiply the first feature point vector and the second feature point vector by the corresponding weight values, and input the result into the classification model.
Step 207: the classification model classifies the video file according to the weighted first feature point set and second feature point set.
Different weight values are set according to the number of times a feature point appears: the more often a feature point appears, the more important it is; the less often it appears, the less important it is. Raising the proportion of the frequently appearing feature points improves the accuracy of video classification.
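Steps 203-206 can be sketched as weighting each distinct feature point by its occurrence count. The unit descriptor value 1.0 per point is an illustrative stand-in for a real feature descriptor; only the count-proportional weighting reflects the text.

```python
from collections import Counter

def weighted_feature_vector(feature_points):
    """Weight each distinct feature point by its number of occurrences."""
    counts = Counter(feature_points)  # occurrences across all sub-feature sets
    return {point: 1.0 * count for point, count in counts.items()}

# "edge" appears three times, so its weighted value is three times "corner"'s.
vector = weighted_feature_vector(["edge", "corner", "edge", "edge"])
```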
Referring to Fig. 8, Fig. 8 is a seventh flow diagram of the video classification method provided by an embodiment of the present application. The specific flow of the classification method provided by the embodiment can be as follows:
Step 301: obtain multiple image frames of a video file.
Step 302: obtain the foreground image and background image of each frame.
Step 303: obtain foreground feature points from the foreground image and background feature points from the background image.
Step 304: extract the feature points of each frame in the multiple image frames to obtain a feature point set.
Step 305: select the feature points related to scene features from the feature point set to obtain a first feature point set, and select the feature points related to object features from the feature point set to obtain a second feature point set.
Step 306: convert the first feature point set and the second feature point set into a first feature point vector and a second feature point vector.
Step 307: set the weight value of the foreground feature points to be greater than that of the background feature points.
Step 308: multiply the first feature point vector and the second feature point vector by the corresponding weight values, and input the result into the classification model.
Step 309: the classification model classifies the video file according to the weighted first feature point set and second feature point set.
The feature points of the foreground image are more important than those of the background image. Raising the proportion of the feature points of the foreground image improves the accuracy of video classification.
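Steps 307-308 can be sketched as scaling each feature value by a region weight. The concrete weights 2.0 and 1.0 are illustrative assumptions; the only constraint stated in the text is that the foreground weight exceeds the background weight.

```python
def apply_region_weights(points, fg_weight=2.0, bg_weight=1.0):
    """Scale each feature value by a foreground or background weight."""
    assert fg_weight > bg_weight, "foreground must outweigh background"
    return {
        name: value * (fg_weight if region == "foreground" else bg_weight)
        for name, (region, value) in points.items()
    }

weighted = apply_region_weights({
    "player": ("foreground", 0.5),  # foreground feature point
    "stand": ("background", 0.5),   # background feature point
})
```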
From the foregoing, the video classification method provided by the embodiments of the present application obtains multiple image frames of a video file; extracts the feature points of each frame in the multiple image frames to obtain a feature point set; selects the feature points related to scene features from the feature point set to obtain a first feature point set, and selects the feature points related to object features from the feature point set to obtain a second feature point set; and classifies the video file with a classification model according to the first feature point set and the second feature point set. Two feature point sets, one for scene features and one for object features, are obtained from the video file and input into the classification model for classification, so the classification is more accurate.
Referring to Fig. 9, Fig. 9 is a first structural diagram of the video classification apparatus provided by an embodiment of the present application. The video classification apparatus 500 is applied to an electronic device and includes an image acquisition module 501, a feature point set acquisition module 502, a feature point set selection module 503 and a processing module 504. Specifically:
The image acquisition module 501 is used to obtain multiple image frames of a video file.
A video file is a multimedia file containing synchronized audio and video information, and is one of the important kinds of multimedia content on the Internet. Video files come in multiple formats, such as AVI and MPEG.
Consecutive frames of the video file can be obtained, or frames can be sampled at a preset frequency, for example one frame every 1 second; other frequencies are of course possible, such as one frame every 2 seconds, every 1 minute or every 5 minutes. Alternatively, a run of consecutive frames can be obtained every 1 minute, every 5 minutes or every 10 minutes, and the several runs merged into the final set of frames.
The feature point set acquisition module 502 is configured to extract the feature points of each frame of image in the multiple frames of images to obtain a feature point set.
For the multiple frames of images obtained from the video file, one or more corresponding feature points are extracted from each frame of image by a feature point extraction algorithm, so as to obtain the feature point set corresponding to the multiple frames of images.
The feature point extraction algorithm may be the scale-invariant feature transform (SIFT) algorithm or the speeded-up robust features (SURF) algorithm, among others.
The feature point set selection module 503 is configured to select the feature points relating to scene features from the feature point set to obtain a first feature point set, and to select the feature points relating to object features from the feature point set to obtain a second feature point set.
After the feature point set corresponding to the multiple frames of images is obtained, the feature points relating to scene features are selected from the feature point set, and all feature points relating to scene features form the first feature point set. Likewise, the feature points relating to object features are selected from the feature point set, and all feature points relating to object features form the second feature point set.
The processing module 504 is configured to have the classification model classify the video file according to the first feature point set and the second feature point set.
The feature point set corresponding to the multiple frames of images may be input into the classification module, and the classification model classifies the video file according to the feature point set. Alternatively, the first feature point set and the second feature point set corresponding to the multiple frames of images may be input into the classification model, and the classification model classifies the video according to the two sets. Because a video file is dynamic and consecutive frames of images are correlated with each other, obtaining images of consecutive frames can improve the accuracy of video classification. Moreover, the variation trend of the feature points can be obtained from consecutive frames, further improving the accuracy; for example, a sports video file has a characteristic variation of motion feature points.
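The patent does not specify a particular classification model, so the following is only a minimal, hypothetical sketch of the step: the scene and object feature point sets are flattened into one histogram vector and scored against per-label centroids. The label names, centroid values, and the nearest-centroid rule are all illustrative assumptions, not the embodiment's actual model.

```python
# Hypothetical nearest-centroid classifier over concatenated scene/object
# feature histograms. Centroid values below are made up for illustration.
import math

LABELS = {
    "sports":    [0.8, 0.1, 0.7, 0.2],
    "education": [0.1, 0.9, 0.2, 0.6],
}

def to_vector(scene_points, object_points, bins=2):
    """Histogram each set of 1-D feature values into `bins` bins, concatenate."""
    def hist(points):
        counts = [0] * bins
        for p in points:
            counts[min(int(p * bins), bins - 1)] += 1
        total = len(points) or 1
        return [c / total for c in counts]
    return hist(scene_points) + hist(object_points)

def classify(scene_points, object_points):
    vec = to_vector(scene_points, object_points)
    # pick the label whose centroid is closest in Euclidean distance
    return min(LABELS, key=lambda lbl: math.dist(vec, LABELS[lbl]))

print(classify([0.1, 0.2, 0.3], [0.2, 0.1]))  # sports
```

A trained model (e.g. one producing per-label probabilities, as later embodiments describe) would replace the centroid table, but the input shape — two feature point sets combined into one vector — is the same.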
Referring to Fig. 10, Fig. 10 is a second structural diagram of the video classification apparatus provided by an embodiment of the present application. In this embodiment, the feature point set acquisition module 502 includes a sub-feature-point-set submodule 5021, a calculation submodule 5022, and a weight value acquisition submodule 5023. Wherein:
The sub-feature-point-set submodule 5021 is configured to extract the feature points of each frame of image in the multiple frames of images to obtain multiple sub-feature point sets corresponding to the multiple frames of images;
The calculation submodule 5022 is configured to calculate the number of times each feature point occurs in the multiple sub-feature point sets;
The weight value acquisition submodule 5023 is configured to merge the multiple sub-feature point sets to form the feature point set, and at the same time obtain the weight value corresponding to each feature point according to the number of times that feature point occurs;
The processing module 504 is further configured to multiply the first feature point vector and the second feature point vector by the corresponding weight values and then input them into the classification model.
Different weight values are set according to the number of times a feature point occurs: the more often a feature point occurs, the more important it is; the fewer times it occurs, the less important it is. Increasing the proportion of frequently occurring feature points improves the accuracy of video classification.
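The merge-and-weight step above can be sketched as follows; treating feature points as hashable ids and defining the weight as the fraction of frames in which a point occurs are illustrative choices, not mandated by the embodiment.

```python
# Hypothetical sketch: merge per-frame sub-feature-point sets, count how
# often each feature point appears across frames, and derive a weight
# proportional to that count (here: fraction of frames containing it).
from collections import Counter

def merge_with_weights(sub_sets):
    counts = Counter(p for sub in sub_sets for p in set(sub))
    total_frames = len(sub_sets)
    weights = {p: n / total_frames for p, n in counts.items()}
    return set(counts), weights

# three frames' feature points (illustrative string ids)
frames = [{"court", "ball"}, {"court", "player"}, {"court", "ball"}]
merged, w = merge_with_weights(frames)
print(sorted(merged))  # ['ball', 'court', 'player']
print(w["court"])      # 1.0
```

Feature points occurring in every frame get the maximum weight, matching the intuition above that frequent points matter most.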
Referring to Fig. 11, Fig. 11 is a third structural diagram of the video classification apparatus provided by an embodiment of the present application. In this embodiment, the processing module 504 includes a classification label acquisition submodule 5041, a probability value acquisition submodule 5042, a classification label selection submodule 5043, and a classification submodule 5044. Wherein:
The classification label acquisition submodule 5041 is configured to obtain multiple classification labels of the classification model, each classification label corresponding to one video category.
The multiple classification labels of the classification model, i.e., the categories into which the classification model finally divides video files, are obtained first. For example, the classification labels may include film video, sports video, education video, funny video, and so on; the classification labels may also include basketball video, football video, sprint video, and so on. The classification labels may be multi-level: a first-level classification label with a broader scope may contain multiple second-level classification labels. For example, the first-level classification label is sports video, and a second-level classification label under it is basketball video. Of course, more levels of classification labels, such as three or four levels, may also be included.
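A minimal sketch of the multi-level label structure just described, using the example labels from the text; the tree layout and lookup helper are illustrative assumptions.

```python
# Hypothetical two-level label tree: a first-level label maps to its
# second-level labels (empty list = no sub-labels defined here).
LABEL_TREE = {
    "film video": [],
    "sports video": ["basketball video", "football video", "sprint video"],
    "education video": [],
    "funny video": [],
}

def first_level_of(second_level):
    """Return the first-level label that contains a given second-level label."""
    for parent, children in LABEL_TREE.items():
        if second_level in children:
            return parent
    return None

print(first_level_of("basketball video"))  # sports video
```

Deeper hierarchies (three or four levels) would simply nest further dictionaries in the same way.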
The probability value acquisition submodule 5042 is configured to input the first feature point set and the second feature point set into the classification model to obtain the probability value of the video file matching each classification label.
The first feature point set and the second feature point set are input into the classification model, and the classification model makes a prediction according to the input information to obtain the probability value of the video file matching each classification label.
The classification label selection submodule 5043 is configured to select the classification labels whose probability values are greater than a preset probability value as target classification labels.
The probability value of the video file matching each classification label is compared with the preset probability value in turn, and the classification labels whose probability values are greater than the preset probability value are selected.
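The threshold comparison above amounts to a one-line filter; the function name, the example probabilities, and the 0.5 default threshold are assumptions for illustration.

```python
# Hypothetical sketch: keep the classification labels whose matching
# probability exceeds the preset threshold; these become target labels.
def select_target_labels(probabilities, threshold=0.5):
    return [label for label, p in probabilities.items() if p > threshold]

probs = {"sports video": 0.82, "film video": 0.31, "funny video": 0.55}
print(select_target_labels(probs))  # ['sports video', 'funny video']
```

Note that more than one label can pass the threshold, which is exactly the multi-category case discussed below.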
The classification submodule 5044 is configured to classify the video file according to the target classification labels.
The video file is classified according to the target classification labels. If the target classification label is a sports video label, the video file is classified as a sports video. Specifically, the video file may be stored in a storage space allocated to sports videos. A set may be created for each category of video file, each set containing the storage link addresses of the video files of that category; when video files are viewed, the multiple videos are displayed in different regions according to their categories, for example in different folders, with one folder corresponding to one category of video file. It should be noted that there may be multiple target classification labels; correspondingly, the video file may be divided into multiple categories, each category corresponding to one set, and each set storing the link addresses of all video files of that category. The physical storage locations do not need to be contiguous, so no space is wasted.
In some embodiments, the apparatus further includes a foreground/background image acquisition module and a feature point extraction module. The foreground/background image acquisition module is configured to obtain the foreground image and background image of each frame of image. The feature point extraction module is configured to obtain foreground feature points according to the foreground image and background feature points according to the background image.
The processing module includes a conversion submodule, a weight value setting submodule, a merging submodule, and a processing submodule. The conversion submodule is configured to convert the first feature point set and the second feature point set into a first feature point vector and a second feature point vector. The weight value setting submodule is configured to set the weight of the foreground feature points to be greater than the weight value of the background feature points. The merging submodule is configured to multiply the first feature point vector and the second feature point vector by the corresponding weight values and then input them into the classification model. The processing submodule is configured to have the classification model classify the video file according to the multiplied first feature point set and second feature point set.
The importance of the feature points of the foreground image is greater than that of the feature points of the background image. Increasing the proportion of the foreground image's feature points improves the accuracy of video classification.
In some embodiments, device further includes face image set acquisition module, target facial image acquisition module, people Face label information acquisition module and filename renamer module.Wherein face image set acquisition module, for identifying multiframe figure The facial image of each two field picture, obtains face image set as in.Target facial image acquisition module, for calculating face figure The image set frequency that each facial image occurs in closing, and determine that the highest facial image of the frequency of occurrences is target facial image.People Face label information acquisition module for identifying target facial image, obtains the face label information of target facial image.Filename Renamer module, for face label information to be added in the filename of video file.
By face recognition technology, recognition of face is carried out to each two field picture, obtains corresponding facial image.Then will be more The corresponding facial image of two field picture merges, and obtains face image set, and face image set includes the institute occurred in multiple image There is facial image.By face recognition technology, recognition of face is carried out to each two field picture, obtains corresponding facial image.Then The corresponding facial image of multiple image is merged, obtains face image set, face image set includes occurring in multiple image Face images.It identifies target facial image, obtains corresponding face label information.Such as the target facial image is corresponding Name etc..Face label information is added in the filename of the video file.Such as target facial image is Liu Dehua, by Liu De The face label information such as name of China are added in the filename of the video file, the corresponding film for being categorized as Liu Dehua is pressed Act the leading role classification.
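The frequency-counting step for choosing the target face can be sketched as below; the face recognition itself is outside the sketch, so recognized faces are represented by illustrative string ids rather than images.

```python
# Hypothetical sketch: count how often each recognized face id occurs
# across all frames and pick the most frequent one as the target face.
from collections import Counter

def target_face(per_frame_faces):
    counts = Counter(face for frame in per_frame_faces for face in frame)
    return counts.most_common(1)[0][0]

# per-frame lists of recognized face ids (illustrative)
frames = [["andy"], ["andy", "extra"], ["andy"], ["extra"]]
print(target_face(frames))  # andy
```

The returned id would then be looked up to obtain the face label information (e.g., the person's name) that is added to the file name.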
In some embodiments, the apparatus further includes a first label information acquisition module, a second label information acquisition module, and a file name renaming module. The first label information acquisition module is configured to obtain first label information according to the first feature point set. The second label information acquisition module is configured to obtain second label information according to the second feature point set. The file name renaming module is configured to add the first label information and the second label information to the file name of the video file.
The first label information is obtained according to the first feature point set corresponding to the scene features; if the scene feature is the Lakers' basketball court, the first label information is "Lakers home court". The second label information is obtained according to the second feature points corresponding to the objects; for example, if the object feature points correspond to an Autobot, the second label information may be "Autobot".
In some embodiments, the image acquisition module includes a division submodule and an image acquisition submodule. The division submodule is configured to divide the video file into multiple sub-video files of equal playing duration. The image acquisition submodule is configured to obtain the multiple frames of images of the initial time period of each sub-video file.
The total playing duration of the video file is obtained first, then the number of sub-video files to be divided is obtained, and the total duration is divided by the number of divisions to obtain the playing duration of each sub-video file. The start time and end time of each sub-video file are then obtained, and the sub-video files are extracted at the corresponding start and end times, thereby dividing the video file into multiple sub-video files of equal playing duration. Only the multiple frames of images of the initial time period of each sub-video file are obtained. Of course, frames from other time periods of each sub-video file could also be obtained, but the frames of the initial time period are easy to obtain and do not lead to the problem of an insufficient number of frames.
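The division arithmetic above can be sketched directly; the function name and returned (start, end) tuples are illustrative, and the actual extraction of sub-video files at those times is outside the sketch.

```python
# Hypothetical sketch: divide a total playing duration into `parts`
# equal sub-videos and return each sub-video's (start, end) time in seconds.
def split_video(total_seconds, parts):
    sub_len = total_seconds / parts
    return [(i * sub_len, (i + 1) * sub_len) for i in range(parts)]

print(split_video(600, 4))
# [(0.0, 150.0), (150.0, 300.0), (300.0, 450.0), (450.0, 600.0)]
```

Frames would then be sampled from the initial time period of each returned interval, as the embodiment describes.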
As can be seen from the above, the video classification apparatus provided by the embodiments of the present application obtains multiple frames of images of a video file; extracts the feature points of each frame of image to obtain a feature point set; selects the feature points relating to scene features from the feature point set to obtain a first feature point set, and selects the feature points relating to object features from the feature point set to obtain a second feature point set; and the classification model classifies the video file according to the first feature point set and the second feature point set. Two feature point sets, relating to scene features and object features respectively, are obtained from the video file and then input into the classification model for classification, so the classification is more accurate.
In specific implementations, each of the above modules may be implemented as an independent entity, or may be combined arbitrarily and implemented as the same entity or several entities. For the specific implementation of each of the above modules, reference may be made to the foregoing method embodiments, which will not be repeated here.
In the embodiments of the present application, the video classification apparatus and the video classification method in the foregoing embodiments belong to the same concept. Any method provided in the video classification method embodiments can be run on the video classification apparatus; for the specific implementation process, refer to the embodiments of the video classification method, which will not be repeated here.
An embodiment of the present application also provides an electronic device. Referring to Fig. 12, the electronic device 600 includes a processor 601 and a memory 602, the processor 601 being electrically connected with the memory 602.
The processor 601 is the control center of the electronic device 600. It connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device 600 and processes data by running or loading computer programs stored in the memory 602 and calling data stored in the memory 602, thereby monitoring the electronic device 600 as a whole.
The memory 602 may be used to store software programs and units. The processor 601 performs various function applications and data processing by running the computer programs and units stored in the memory 602. The memory 602 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and computer programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to the use of the electronic device, etc. In addition, the memory 602 may include a high-speed random access memory, and may also include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage component. Correspondingly, the memory 602 may also include a memory controller to provide the processor 601 with access to the memory 602.
In the embodiments of the present application, the processor 601 in the electronic device 600 loads instructions corresponding to the processes of one or more computer programs into the memory 602 according to the following steps, and runs the computer programs stored in the memory 602, thereby realizing various functions as follows:
obtaining multiple frames of images of a video file;
extracting the feature points of each frame of image in the multiple frames of images to obtain a feature point set;
selecting the feature points relating to scene features from the feature point set to obtain a first feature point set, and selecting the feature points relating to object features from the feature point set to obtain a second feature point set;
classifying, by a classification model, the video file according to the first feature point set and the second feature point set.
In some embodiments, the processor 601 is further configured to perform the following steps:
obtaining multiple classification labels of the classification model, each classification label corresponding to one video category;
inputting the first feature point set and the second feature point set into the classification model to obtain the probability value of the video file matching each classification label;
selecting the classification labels whose probability values are greater than a preset probability value as target classification labels;
classifying the video file according to the target classification labels.
In some embodiments, the processor 601 is further configured to perform the following steps:
extracting the feature points of each frame of image in the multiple frames of images to obtain multiple sub-feature point sets corresponding to the multiple frames of images;
calculating the number of times each feature point occurs in the multiple sub-feature point sets;
merging the multiple sub-feature point sets to form the feature point set, and at the same time obtaining the weight value corresponding to each feature point according to the number of times that feature point occurs;
multiplying the first feature point vector and the second feature point vector by the corresponding weight values and then inputting them into the classification model;
classifying, by the classification model, the video file according to the multiplied first feature point set and second feature point set.
The processor 601 is further configured to perform the following steps:
obtaining the foreground image and background image of each frame of image;
obtaining foreground feature points according to the foreground image, and background feature points according to the background image;
converting the first feature point set and the second feature point set into a first feature point vector and a second feature point vector;
setting the weight of the foreground feature points to be greater than the weight value of the background feature points;
multiplying the first feature point vector and the second feature point vector by the corresponding weight values and then inputting them into the classification model;
classifying, by the classification model, the video file according to the multiplied first feature point set and second feature point set.
In some embodiments, the processor 601 is further configured to perform the following steps:
recognizing the face image in each frame of image in the multiple frames of images to obtain a face image set;
calculating the frequency with which each face image occurs in the face image set, and determining the face image with the highest frequency of occurrence as the target face image;
recognizing the target face image to obtain the face label information of the target face image;
adding the face label information to the file name of the video file.
In some embodiments, the processor 601 is further configured to perform the following steps:
obtaining first label information according to the first feature point set;
obtaining second label information according to the second feature point set;
adding the first label information and the second label information to the file name of the video file.
In some embodiments, the processor 601 is further configured to perform the following steps:
dividing the video file into multiple sub-video files of equal playing duration;
obtaining the multiple frames of images of the initial time period of each sub-video file.
As can be seen from the above, the electronic device provided by the embodiments of the present application obtains multiple frames of images of a video file; extracts the feature points of each frame of image to obtain a feature point set; selects the feature points relating to scene features from the feature point set to obtain a first feature point set, and selects the feature points relating to object features from the feature point set to obtain a second feature point set; and the classification model classifies the video file according to the first feature point set and the second feature point set. Two feature point sets, relating to scene features and object features respectively, are obtained from the video file and then input into the classification model for classification, so the classification is more accurate.
Referring also to Fig. 13, in some embodiments the electronic device 600 may further include a display 603, a radio frequency circuit 604, an audio circuit 605, and a power supply 606, each electrically connected with the processor 601.
The display 603 may be used to display information input by the user or information provided to the user, as well as various graphical user interfaces, which may be composed of graphics, text, icons, video, and any combination thereof. The display 603 may include a display panel; in some embodiments, the display panel may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD) or an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display.
The radio frequency circuit 604 may be used to transmit and receive radio frequency signals, so as to establish wireless communication with network devices or other electronic devices and to transmit and receive signals between the electronic device and the network devices or other electronic devices.
The audio circuit 605 may be used to provide an audio interface between the user and the electronic device through a loudspeaker and a microphone.
The power supply 606 is used to supply power to the various components of the electronic device 600. In some embodiments, the power supply 606 may be logically connected with the processor 601 through a power management system, so as to realize functions such as managing charging, discharging, and power consumption through the power management system.
Although not shown in Fig. 13, the electronic device 600 may also include a camera, a Bluetooth unit, and so on, which will not be described here.
It can be understood that the electronic device of the embodiments of the present application may be a terminal device such as a smart phone or a tablet computer.
An embodiment of the present application also provides a storage medium storing a computer program. When the computer program runs on a computer, the computer is caused to perform the video classification method in any of the above embodiments, for example: obtaining multiple frames of images of a video file; extracting the feature points of each frame of image in the multiple frames of images to obtain a feature point set; selecting the feature points relating to scene features from the feature point set to obtain a first feature point set, and selecting the feature points relating to object features from the feature point set to obtain a second feature point set; and classifying, by a classification model, the video file according to the first feature point set and the second feature point set.
In the embodiments of the present application, the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
In the above embodiments, the description of each embodiment has its own emphasis. For a part not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.
It should be noted that, for the video classification method of the embodiments of the present application, those of ordinary skill in the art can understand that all or part of the flow of the video classification method of the embodiments of the present application can be completed by controlling the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, for example in the memory of an electronic device, and executed by at least one processor in the electronic device; the execution process may include the flow of the embodiments of the video classification method. The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, or the like.
For the video classification apparatus of the embodiments of the present application, each functional unit may be integrated in one processing chip, each unit may exist physically alone, or two or more units may be integrated in one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit. If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The video classification method, apparatus, storage medium, and electronic device provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method of the present application and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the scope of application according to the idea of the present application. In summary, the content of this specification should not be construed as a limitation of the present application.

Claims (12)

1. A video classification method applied to an electronic device, characterized in that the method includes:
obtaining multiple frames of images of a video file;
extracting the feature points of each frame of image in the multiple frames of images to obtain a feature point set;
selecting the feature points relating to scene features from the feature point set to obtain a first feature point set, and selecting the feature points relating to object features from the feature point set to obtain a second feature point set;
classifying, by a classification model, the video file according to the first feature point set and the second feature point set.
2. The video classification method according to claim 1, characterized in that the step of classifying, by the classification model, the video file according to the first feature point set and the second feature point set includes:
obtaining multiple classification labels of the classification model, each classification label corresponding to one video category;
inputting the first feature point set and the second feature point set into the classification model to obtain the probability value of the video file matching each classification label;
selecting the classification labels whose probability values are greater than a preset probability value as target classification labels;
classifying the video file according to the target classification labels.
3. The video classification method according to claim 1, characterized in that the step of extracting the feature points of each frame of image in the multiple frames of images to obtain a feature point set includes:
extracting the feature points of each frame of image in the multiple frames of images to obtain multiple sub-feature point sets corresponding to the multiple frames of images;
calculating the number of times each feature point occurs in the multiple sub-feature point sets;
merging the multiple sub-feature point sets to form the feature point set, and at the same time obtaining the weight value corresponding to each feature point according to the number of times that feature point occurs;
and the step of classifying, by the classification model, the video file according to the first feature point set and the second feature point set includes:
multiplying the first feature point vector and the second feature point vector by the corresponding weight values and then inputting them into the classification model;
classifying, by the classification model, the video file according to the multiplied first feature point set and second feature point set.
4. The video classification method according to claim 1, characterized in that the step of extracting the feature points of each frame of image in the multiple frames of images to obtain a feature point set includes:
obtaining the foreground image and background image of each frame of image;
obtaining foreground feature points according to the foreground image, and background feature points according to the background image;
and the step of classifying, by the classification model, the video file according to the first feature point set and the second feature point set includes:
converting the first feature point set and the second feature point set into a first feature point vector and a second feature point vector;
setting the weight of the foreground feature points to be greater than the weight value of the background feature points;
multiplying the first feature point vector and the second feature point vector by the corresponding weight values and then inputting them into the classification model;
classifying, by the classification model, the video file according to the multiplied first feature point set and second feature point set.
5. The video classification method according to claim 1, characterized in that the method further includes:
recognizing the face image in each frame of image in the multiple frames of images to obtain a face image set;
calculating the frequency with which each face image occurs in the face image set, and determining the face image with the highest frequency of occurrence as the target face image;
recognizing the target face image to obtain the face label information of the target face image;
adding the face label information to the file name of the video file.
6. The video classification method according to claim 1, characterized in that the method further includes:
obtaining first label information according to the first feature point set;
obtaining second label information according to the second feature point set;
adding the first label information and the second label information to the file name of the video file.
7. The video classification method according to claim 1, characterized in that the step of obtaining the multiple frames of images of the video file includes:
dividing the video file into multiple sub-video files of equal playing duration;
obtaining the multiple frames of images of the initial time period of each sub-video file.
8. A video classification apparatus applied to an electronic device, characterized in that the apparatus includes:
an image acquisition module, configured to obtain multiple frames of images of a video file;
a feature point set acquisition module, configured to extract the feature points of each frame of image in the multiple frames of images to obtain a feature point set;
a feature point set selection module, configured to select the feature points relating to scene features from the feature point set to obtain a first feature point set, and to select the feature points relating to object features from the feature point set to obtain a second feature point set;
a processing module, configured to have a classification model classify the video file according to the first feature point set and the second feature point set.
9. The video classification apparatus according to claim 8, wherein the processing module comprises:
A classification label acquisition submodule, configured to obtain multiple classification labels of the classification model, each classification label corresponding to one video category;
A probability value acquisition submodule, configured to input the first feature point set and the second feature point set into the classification model to obtain a probability value of the video file matching each classification label;
A classification label selection submodule, configured to take the classification labels whose probability values exceed a preset probability value as target classification labels;
A classification submodule, configured to classify the video file according to the target classification labels.
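The label-selection step in the claim above is a probability threshold over the model's per-label outputs. A minimal sketch, with hypothetical label names and a 0.5 example threshold (the patent only says "preset probability value"):

```python
def target_labels(label_probs, threshold=0.5):
    """Keep the classification labels whose matching probability exceeds
    the preset probability value; these become the target labels used to
    classify the video file."""
    return [label for label, p in label_probs.items() if p > threshold]

# Hypothetical per-label probabilities from the classification model.
probs = {"travel": 0.8, "sport": 0.3, "party": 0.6}
labels = target_labels(probs)  # "travel" and "party" exceed 0.5
```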
10. The video classification apparatus according to claim 8, wherein the feature point set acquisition module comprises:
A sub-feature point set submodule, configured to extract the feature points of each frame of the multiple frames of images to obtain multiple sub-feature point sets corresponding to the multiple frames;
A calculation submodule, configured to calculate the number of times each feature point occurs across the multiple sub-feature point sets;
A weight acquisition submodule, configured to merge the multiple sub-feature point sets into the feature point set, and to obtain the weight of each feature point according to the number of times it occurs;
The processing module is further configured to multiply the first feature point vector and the second feature point vector by the corresponding weights, input the results into the classification model, and classify the video file with the classification model according to the weighted first feature point set and second feature point set.
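Claim 10 derives each feature point's weight from how many frames it occurs in. The sketch below is one plausible reading, not the patent's formula: it counts a point at most once per frame and normalises by the frame count, so a point seen in every frame gets weight 1.0. The feature point names are illustrative placeholders.

```python
from collections import Counter

def feature_point_weights(frame_feature_sets):
    """Merge per-frame feature point sets into one set and derive a
    weight for each feature point from the number of frames it occurs in.

    Normalising by the frame count is an assumption; the claim only says
    the weight is obtained from the occurrence count."""
    counts = Counter()
    for points in frame_feature_sets:
        counts.update(set(points))  # count each point at most once per frame
    n_frames = len(frame_feature_sets)
    return {point: c / n_frames for point, c in counts.items()}

# Three frames' worth of (hypothetical) feature points.
weights = feature_point_weights([{"sky", "ball"}, {"sky", "tree"}, {"sky", "ball"}])
```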
11. A storage medium on which a computer program is stored, wherein, when the computer program runs on a computer, the computer is caused to perform the video classification method according to any one of claims 1 to 7.
12. An electronic device comprising a processor and a memory, the memory storing a computer program, wherein the processor, by calling the computer program, is configured to perform the video classification method according to any one of claims 1 to 7.
CN201711464317.0A 2017-12-28 2017-12-28 Video classification method and device, storage medium and electronic equipment Expired - Fee Related CN108090497B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711464317.0A CN108090497B (en) 2017-12-28 2017-12-28 Video classification method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711464317.0A CN108090497B (en) 2017-12-28 2017-12-28 Video classification method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN108090497A true CN108090497A (en) 2018-05-29
CN108090497B CN108090497B (en) 2020-07-07

Family

ID=62179811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711464317.0A Expired - Fee Related CN108090497B (en) 2017-12-28 2017-12-28 Video classification method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN108090497B (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764208A (en) * 2018-06-08 2018-11-06 Oppo广东移动通信有限公司 Image processing method and device, storage medium, electronic equipment
CN108804658A (en) * 2018-06-08 2018-11-13 Oppo广东移动通信有限公司 Image processing method and device, storage medium, electronic equipment
CN108881740A (en) * 2018-06-28 2018-11-23 Oppo广东移动通信有限公司 Image method and device, electronic equipment, computer readable storage medium
CN108875619A (en) * 2018-06-08 2018-11-23 Oppo广东移动通信有限公司 Method for processing video frequency and device, electronic equipment, computer readable storage medium
CN108921040A (en) * 2018-06-08 2018-11-30 Oppo广东移动通信有限公司 Image processing method and device, storage medium, electronic equipment
CN108921204A (en) * 2018-06-14 2018-11-30 平安科技(深圳)有限公司 Electronic device, picture sample set creation method and computer readable storage medium
CN109272041A (en) * 2018-09-21 2019-01-25 联想(北京)有限公司 Feature point selection method and device
CN109710802A (en) * 2018-12-20 2019-05-03 百度在线网络技术(北京)有限公司 Video classification method and device
CN109740019A (en) * 2018-12-14 2019-05-10 上海众源网络有限公司 Method and apparatus for labeling short videos, and electronic device
CN110046278A (en) * 2019-03-11 2019-07-23 北京奇艺世纪科技有限公司 Video classification methods, device, terminal device and storage medium
CN110163115A (en) * 2019-04-26 2019-08-23 腾讯科技(深圳)有限公司 Video processing method, device and computer-readable storage medium
CN110222649A (en) * 2019-06-10 2019-09-10 北京达佳互联信息技术有限公司 Video classification methods, device, electronic equipment and storage medium
CN110348367A (en) * 2019-07-08 2019-10-18 北京字节跳动网络技术有限公司 Video classification method, video processing method, device, mobile terminal and medium
CN110580428A (en) * 2018-06-08 2019-12-17 Oppo广东移动通信有限公司 image processing method, image processing device, computer-readable storage medium and electronic equipment
CN110580508A (en) * 2019-09-06 2019-12-17 捷开通讯(深圳)有限公司 video classification method and device, storage medium and mobile terminal
WO2019242222A1 (en) * 2018-06-21 2019-12-26 北京字节跳动网络技术有限公司 Method and device for use in generating information
CN111177466A (en) * 2019-12-23 2020-05-19 联想(北京)有限公司 Clustering method and device
CN111432138A (en) * 2020-03-16 2020-07-17 Oppo广东移动通信有限公司 Video splicing method and device, computer readable medium and electronic equipment
CN111553191A (en) * 2020-03-30 2020-08-18 深圳壹账通智能科技有限公司 Video classification method and device based on face recognition and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102207966A (en) * 2011-06-01 2011-10-05 华南理工大学 Video content quick retrieving method based on object tag
CN102819528A (en) * 2011-06-10 2012-12-12 中国电信股份有限公司 Method and device for generating video abstraction
CN105302906A (en) * 2015-10-29 2016-02-03 小米科技有限责任公司 Information labeling method and apparatus
CN106599907A (en) * 2016-11-29 2017-04-26 北京航空航天大学 Multi-feature fusion-based dynamic scene classification method and apparatus
CN107194419A (en) * 2017-05-10 2017-09-22 百度在线网络技术(北京)有限公司 Video classification methods and device, computer equipment and computer-readable recording medium
US20170308753A1 (en) * 2016-04-26 2017-10-26 Disney Enterprises, Inc. Systems and Methods for Identifying Activities and/or Events in Media Contents Based on Object Data and Scene Data
CN107316035A (en) * 2017-08-07 2017-11-03 北京中星微电子有限公司 Object recognition method and device based on deep learning neural network

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102207966A (en) * 2011-06-01 2011-10-05 华南理工大学 Video content quick retrieving method based on object tag
CN102819528A (en) * 2011-06-10 2012-12-12 中国电信股份有限公司 Method and device for generating video abstraction
CN105302906A (en) * 2015-10-29 2016-02-03 小米科技有限责任公司 Information labeling method and apparatus
US20170308753A1 (en) * 2016-04-26 2017-10-26 Disney Enterprises, Inc. Systems and Methods for Identifying Activities and/or Events in Media Contents Based on Object Data and Scene Data
US20170308754A1 (en) * 2016-04-26 2017-10-26 Disney Enterprises, Inc. Systems and Methods for Determining Actions Depicted in Media Contents Based on Attention Weights of Media Content Frames
CN106599907A (en) * 2016-11-29 2017-04-26 北京航空航天大学 Multi-feature fusion-based dynamic scene classification method and apparatus
CN107194419A (en) * 2017-05-10 2017-09-22 百度在线网络技术(北京)有限公司 Video classification methods and device, computer equipment and computer-readable recording medium
CN107316035A (en) * 2017-08-07 2017-11-03 北京中星微电子有限公司 Object recognition method and device based on deep learning neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YONGQING SUN ET AL.: "Multimedia Event Detection", TRECVID 2016 *
ZUXUAN WU ET AL.: "Harnessing object and scene semantics for large-scale video understanding", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2016 *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108804658A (en) * 2018-06-08 2018-11-13 Oppo广东移动通信有限公司 Image processing method and device, storage medium, electronic equipment
CN108875619A (en) * 2018-06-08 2018-11-23 Oppo广东移动通信有限公司 Method for processing video frequency and device, electronic equipment, computer readable storage medium
CN108921040A (en) * 2018-06-08 2018-11-30 Oppo广东移动通信有限公司 Image processing method and device, storage medium, electronic equipment
CN110580428A (en) * 2018-06-08 2019-12-17 Oppo广东移动通信有限公司 image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108764208A (en) * 2018-06-08 2018-11-06 Oppo广东移动通信有限公司 Image processing method and device, storage medium, electronic equipment
WO2019233394A1 (en) * 2018-06-08 2019-12-12 Oppo广东移动通信有限公司 Image processing method and apparatus, storage medium and electronic device
CN108804658B (en) * 2018-06-08 2022-06-10 Oppo广东移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN108764208B (en) * 2018-06-08 2021-06-08 Oppo广东移动通信有限公司 Image processing method and device, storage medium and electronic equipment
WO2019237558A1 (en) * 2018-06-14 2019-12-19 平安科技(深圳)有限公司 Electronic device, picture sample set generation method, and computer readable storage medium
CN108921204A (en) * 2018-06-14 2018-11-30 平安科技(深圳)有限公司 Electronic device, picture sample set creation method and computer readable storage medium
CN108921204B (en) * 2018-06-14 2023-12-26 平安科技(深圳)有限公司 Electronic device, picture sample set generation method, and computer-readable storage medium
WO2019242222A1 (en) * 2018-06-21 2019-12-26 北京字节跳动网络技术有限公司 Method and device for use in generating information
CN108881740B (en) * 2018-06-28 2021-03-02 Oppo广东移动通信有限公司 Image method and device, electronic equipment and computer readable storage medium
CN108881740A (en) * 2018-06-28 2018-11-23 Oppo广东移动通信有限公司 Image method and device, electronic equipment, computer readable storage medium
CN109272041A (en) * 2018-09-21 2019-01-25 联想(北京)有限公司 Feature point selection method and device
CN109740019A (en) * 2018-12-14 2019-05-10 上海众源网络有限公司 Method and apparatus for labeling short videos, and electronic device
CN109710802A (en) * 2018-12-20 2019-05-03 百度在线网络技术(北京)有限公司 Video classification method and device
CN110046278A (en) * 2019-03-11 2019-07-23 北京奇艺世纪科技有限公司 Video classification methods, device, terminal device and storage medium
CN110163115B (en) * 2019-04-26 2023-10-13 腾讯科技(深圳)有限公司 Video processing method, device and computer readable storage medium
CN110163115A (en) * 2019-04-26 2019-08-23 腾讯科技(深圳)有限公司 Video processing method, device and computer-readable storage medium
CN110222649A (en) * 2019-06-10 2019-09-10 北京达佳互联信息技术有限公司 Video classification methods, device, electronic equipment and storage medium
CN110222649B (en) * 2019-06-10 2020-12-18 北京达佳互联信息技术有限公司 Video classification method and device, electronic equipment and storage medium
CN110348367B (en) * 2019-07-08 2021-06-08 北京字节跳动网络技术有限公司 Video classification method, video processing device, mobile terminal and medium
CN110348367A (en) * 2019-07-08 2019-10-18 北京字节跳动网络技术有限公司 Video classification method, video processing method, device, mobile terminal and medium
CN110580508A (en) * 2019-09-06 2019-12-17 捷开通讯(深圳)有限公司 video classification method and device, storage medium and mobile terminal
CN111177466A (en) * 2019-12-23 2020-05-19 联想(北京)有限公司 Clustering method and device
CN111177466B (en) * 2019-12-23 2024-03-26 联想(北京)有限公司 Clustering method and device
CN111432138B (en) * 2020-03-16 2022-04-26 Oppo广东移动通信有限公司 Video splicing method and device, computer readable medium and electronic equipment
CN111432138A (en) * 2020-03-16 2020-07-17 Oppo广东移动通信有限公司 Video splicing method and device, computer readable medium and electronic equipment
CN111553191A (en) * 2020-03-30 2020-08-18 深圳壹账通智能科技有限公司 Video classification method and device based on face recognition and storage medium

Also Published As

Publication number Publication date
CN108090497B (en) 2020-07-07

Similar Documents

Publication Publication Date Title
CN108090497A (en) Video classification methods, device, storage medium and electronic equipment
CN108337358A (en) Using method for cleaning, device, storage medium and electronic equipment
CN107678845A (en) Application program management-control method, device, storage medium and electronic equipment
CN108121816A (en) Picture classification method, device, storage medium and electronic equipment
CN110009556A (en) Image background weakening method, device, storage medium and electronic equipment
CN108090203A (en) Video classification methods, device, storage medium and electronic equipment
CN104239535A (en) Method and system for matching pictures with characters, server and terminal
CN107678800A (en) Background application method for cleaning, device, storage medium and electronic equipment
US8358842B2 (en) Electronic device with function of separating panels of digital comic strip and method thereof
CN112052387B (en) Content recommendation method, device and computer readable storage medium
CN107704070A (en) Using method for cleaning, device, storage medium and electronic equipment
CN108108455A (en) Method for pushing, device, storage medium and the electronic equipment of destination
CN110489449A (en) A kind of chart recommended method, device and electronic equipment
CN109118288A (en) Target user's acquisition methods and device based on big data analysis
CN110031761B (en) Battery screening method, battery screening device and terminal equipment
CN108986125A (en) Object edge extracting method, device and electronic equipment
CN108197225A (en) Sorting technique, device, storage medium and the electronic equipment of image
CN107894827A (en) Using method for cleaning, device, storage medium and electronic equipment
CN107517312A (en) A kind of wallpaper switching method, device and terminal device
CN107807730B (en) Using method for cleaning, device, storage medium and electronic equipment
CN112241789A (en) Structured pruning method, device, medium and equipment for lightweight neural network
CN108133020A (en) Video classification methods, device, storage medium and electronic equipment
CN108197105B (en) Natural language processing method, device, storage medium and electronic equipment
CN107943571A (en) Background application management-control method, device, storage medium and electronic equipment
CN107643925A (en) Background application method for cleaning, device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong Province, 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong Province, 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200707