CN103079117A - Video abstract generation method and video abstract generation device - Google Patents

Video abstract generation method and video abstract generation device

Info

Publication number
CN103079117A
Authority
CN
China
Prior art keywords
moving target
target
video
image
target frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012105877576A
Other languages
Chinese (zh)
Other versions
CN103079117B (en)
Inventor
王海峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
IROBOTCITY (BEIJING) CO.,LTD.
TELEFRAME TECHNOLOGY (BEIJING) CO LTD
Original Assignee
XINZHENG ELECTRONIC TECHNOLOGY (BEIJING) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XINZHENG ELECTRONIC TECHNOLOGY (BEIJING) Co Ltd filed Critical XINZHENG ELECTRONIC TECHNOLOGY (BEIJING) Co Ltd
Priority to CN201210587757.6A priority Critical patent/CN103079117B/en
Publication of CN103079117A publication Critical patent/CN103079117A/en
Application granted granted Critical
Publication of CN103079117B publication Critical patent/CN103079117B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a video abstract generation method and a video abstract generation device. The method comprises the steps of: performing background modeling on a target frame image in an original video to obtain a background model; extracting the moving targets in the target frame image using the background model; classifying the extracted moving targets with a preset classifier; and arranging the trajectories of the extracted moving targets according to the classification result to generate an abstract. With the technical scheme of the invention, the use efficiency of video is improved; by applying object classification, an organized abstract is finally generated, and the purpose of quickly browsing multiple objects is met.

Description

Video abstract generation method and video abstract generation device
Technical field
The invention belongs to the fields of computer vision and pattern recognition, and specifically relates to a video abstract generation method and a video abstract generation device.
Background technology
A video abstract, also called video condensation, is a summary of video content: in an automatic or semi-automatic way, the moving targets are analyzed and extracted, the movement trajectory of each target is analyzed, and the different targets are spliced into a common background scene and combined in some way. With the development of video technology, the role of video abstracts in video analysis and content-based video retrieval becomes increasingly important.
The purpose of video abstract extraction is to quickly browse the moving targets that appear in a video. However, video content is very complex, and different users pay attention to different targets; the video abstract generation methods in the prior art cannot completely and accurately generate satisfactory video abstracts according to various user requirements.
At present, no effective solution has been proposed for the problem that the video abstract generation methods in the prior art cannot generate video abstracts separately for different kinds of moving targets.
Summary of the invention
The present invention proposes a video abstract generation method and a video abstract generation device, to solve the problem that the video abstract generation methods in the prior art cannot generate video abstracts separately for different kinds of moving targets.
The technical solution adopted by the present invention to solve the technical problem is as follows:
According to one aspect of the present invention, a video abstract generation method is provided. The video abstract generation method comprises: performing background modeling on the target frame image in the original video to obtain a background model; extracting the moving targets in the target frame image using the background model; classifying the extracted moving targets with a preset classifier; and arranging the trajectories of the extracted moving targets according to the classification result to generate an abstract.
Further, performing background modeling on the target frame image in the original video comprises: computing on the target frame image with a mixed Gaussian background algorithm, to obtain a mixed Gaussian model of the target frame image.
Further, classifying the extracted moving targets with a preset classifier comprises: classifying the extracted moving targets with a preset object classifier model trained by a support vector machine.
Further, after extracting the moving targets in the target frame image using the background model, the method also comprises:
performing trajectory tracking on the moving target of the target frame image and the moving target of the frame preceding the target frame, to obtain the movement trajectory of the moving target.
Further, performing trajectory tracking on the moving target of the target frame image and the moving target of the frame preceding the target frame comprises: calculating the intersection area of the moving target of the target frame image and the moving target of the preceding frame; judging whether the intersection area is greater than a preset area value; when the intersection area is greater than the preset area value, updating the movement trajectory according to the position of the moving target of the target frame image on the image; and when the intersection area is less than or equal to the preset area value, generating a new movement trajectory according to the position of the moving target of the target frame image on the image.
Further, arranging the trajectories of the extracted moving targets according to the classification result comprises: arranging the movement trajectory of each moving target according to the time at which the trajectory appears and its spatial position; and adding the arranged movement trajectories onto a background image.
According to another aspect of the present invention, a video abstract generation device is provided. The video abstract generation device comprises: a background modeling module, configured to perform background modeling on the target frame image in the original video to obtain a background model; a moving target extraction module, configured to extract the moving targets in the target frame image using the background model; a moving target classification module, configured to classify the extracted moving targets with a preset classifier; and an abstract generation module, configured to arrange the trajectories of the extracted moving targets according to the classification result and generate an abstract.
Further, the background modeling module is also configured to: compute on the target frame image with a mixed Gaussian background algorithm, to obtain a mixed Gaussian model of the target frame image.
Further, the moving target classification module is also configured to: classify the extracted moving targets with a preset object classifier model trained by a support vector machine.
Further, the video abstract generation device also comprises: a trajectory tracking module, configured to perform trajectory tracking on the moving target of the target frame image and the moving target of the frame preceding the target frame, to obtain the movement trajectory of the moving target.
With the technical scheme of the present invention, after the moving targets in the image are extracted, the preset classifier, built from models of objects commonly seen in video, classifies the extracted foreground, that is, the moving targets, and the video abstract is generated according to the classification result. A video abstract that meets user requirements is thus generated completely and accurately, so that the user can quickly obtain the needed video information from the abstract, which improves the use efficiency of video. Finally, the object classification method produces an organized abstract, meeting the purpose of quickly browsing multiple targets.
Description of drawings
Fig. 1 is a schematic diagram of the video abstract generation device according to an embodiment of the invention;
Fig. 2 is a schematic diagram of the video abstract generation method according to an embodiment of the invention;
Fig. 3 is a flow chart of the video abstract generation method according to an embodiment of the invention.
Detailed description of the embodiments
It should be noted that, provided there is no conflict, the embodiments of the application and the features in the embodiments can be combined with each other. The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
An embodiment of the invention provides a video abstract generation device. Fig. 1 is a schematic diagram of the video abstract generation device according to an embodiment of the invention. As shown in Fig. 1, the device comprises: a background modeling module 11, configured to perform background modeling on the target frame image in the original video to obtain a background model; a moving target extraction module 13, configured to extract the moving targets in the target frame image using the background model; a moving target classification module 15, configured to classify the extracted moving targets with a preset classifier; and an abstract generation module 17, configured to arrange the trajectories of the extracted moving targets according to the classification result and generate an abstract.
With the video abstract generation device of this embodiment, after the background model is established, the preset classifier, built from models of objects commonly seen in video, classifies the extracted foreground, that is, the moving targets, and the video abstract is generated according to the classification result. A video abstract that meets user requirements is thus generated completely and accurately, so that the user can quickly obtain the needed video information from the abstract, which improves the use efficiency of video.
The background modeling module 11 can use various image background modeling algorithms. After the background model is established, the current image is compared with the background model, and the foreground targets (that is, the moving targets to be extracted) are determined according to the comparison result. Specifically, the background modeling algorithm can adopt a color background model or a texture background model, where the color background model models the color value (grayscale or color) of each pixel in the image. If the color value of the pixel at image coordinate (x, y) in the current image differs greatly from the color value at (x, y) in the background model, the current pixel is considered foreground; otherwise it is background.
The background modeling module 11 of the video abstract generation device in this example preferably uses the mixed Gaussian background algorithm among the color background models. The Gaussian Mixture Model is an improvement on the single Gaussian background model: the weighted average of several Gaussian probability density functions smoothly approximates a probability density function of arbitrary shape, which is particularly suitable for processing images of outdoor environments. By exploiting this characteristic of the mixed Gaussian background algorithm, the background modeling module 11 can quickly and accurately identify the moving targets in video captured outdoors.
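For reference, the mixture density that such a model assigns to a pixel value x_t can be written as a weighted sum of K Gaussian components (this standard formulation is not given explicitly in the patent itself):

    p(x_t) = w_1·N(x_t; μ_1, σ_1²) + w_2·N(x_t; μ_2, σ_2²) + … + w_K·N(x_t; μ_K, σ_K²),  with  w_1 + w_2 + … + w_K = 1,

where N(·; μ_k, σ_k²) is a Gaussian probability density with mean μ_k and variance σ_k², w_k is the weight of the k-th component, and K (typically 3 to 5) is chosen according to the scene; pixels that do not match any of the high-weight, low-variance components are treated as foreground.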
When the background modeling module 11 performs background modeling, it can apply corresponding filtering to illumination and shadows in the target frame image, to avoid illumination and shadows being mistaken for moving targets and affecting the generation of the video abstract.
When the mixed Gaussian background algorithm is used, the background modeling module 11 can also be configured to: compute on the target frame image with the mixed Gaussian background algorithm, to obtain the mixed Gaussian model of the target frame image.
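As an illustration, a minimal sketch of this mixed-Gaussian foreground extraction is given below, assuming OpenCV in Python; the patent does not prescribe a library, and the function name, parameter values and post-processing steps are assumptions:

    import cv2

    # Mixed-Gaussian (MOG2) background subtractor; detectShadows=True marks shadow
    # pixels with the value 127 so they can be filtered out, as suggested above.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                    detectShadows=True)

    def extract_moving_targets(frame, min_area=200):
        """Return bounding boxes (x, y, w, h) of the moving targets in one frame."""
        mask = subtractor.apply(frame)        # update the mixed Gaussian model
        mask[mask == 127] = 0                 # drop pixels classified as shadow
        mask = cv2.medianBlur(mask, 5)        # simple noise filtering
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]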
The preset classifier used by the moving target classification module 15 can be based on a variety of classification algorithms. In this embodiment, a Support Vector Machine (SVM) can be used to train models for objects that often appear in video (such as vehicles like bicycles, cars and motorcycles, or animals). The support vector machine analyses the linearly separable case; for the linearly inseparable case, a nonlinear mapping converts the samples of the low-dimensional, linearly inseparable input space into a high-dimensional feature space in which they become linearly separable, so that a linear algorithm in the high-dimensional feature space can perform a linear analysis of the nonlinear characteristics of the samples. At the same time, based on the structural risk minimization theory, the support vector machine constructs an optimal separating hyperplane in the feature space, so that the learner is globally optimized and the expected risk over the whole sample space satisfies a certain upper bound with a certain probability.
The support vector machine algorithm maps the sample space, through a nonlinear mapping p, into a high-dimensional or even infinite-dimensional feature space (a Hilbert space), so that the nonlinearly separable problem in the original sample space is converted into a linearly separable problem in the feature space; that is, linear separability is achieved by raising the dimension. For the computational complexity that raising the dimension may bring, the support vector machine applies the expansion theorem of kernel functions, so the explicit expression of the nonlinear mapping does not need to be known. Since a linear learning machine is built in the high-dimensional feature space, the computational complexity hardly increases compared with a linear model, and different models can be generated by selecting different kernel functions.
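The kernel mechanism referred to here can be summarized by the standard support vector machine decision function (not spelled out in the patent):

    f(x) = sign( α_1·y_1·K(x_1, x) + … + α_n·y_n·K(x_n, x) + b ),  with  K(x_i, x) = <p(x_i), p(x)>,

where x_1, …, x_n are the training samples with labels y_i, α_i are the learned coefficients (non-zero only for the support vectors), b is the bias, and the kernel K evaluates the inner product in the high-dimensional feature space without computing the mapping p explicitly.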
In this embodiment, SVM training can be carried out in advance on images of multiple kinds of objects, and several typical models are obtained and stored. The moving target classification module 15 can thus also be configured to classify the extracted moving targets with the preset object classifier model trained by the support vector machine.
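A minimal sketch of pre-training and applying such an object classifier follows, assuming scikit-learn for the SVM and a HOG descriptor as the feature; the library, the feature, the kernel choice and the function names are illustrative assumptions and are not taken from the patent:

    import cv2
    import numpy as np
    from sklearn.svm import SVC

    hog = cv2.HOGDescriptor()  # illustrative feature; the patent names no descriptor

    def describe(patch):
        """Resize a target patch and compute a fixed-length feature vector."""
        return hog.compute(cv2.resize(patch, (64, 128))).ravel()

    def train_object_classifier(example_patches, labels):
        """Pre-train the object classifier on labelled example images
        (e.g. car, motorcycle, person, animal) and return it for storage."""
        features = np.stack([describe(p) for p in example_patches])
        classifier = SVC(kernel="rbf")        # kernel choice is an assumption
        classifier.fit(features, labels)
        return classifier

    def classify_target(classifier, frame, box):
        """Classify one extracted moving target given its bounding box (x, y, w, h)."""
        x, y, w, h = box
        return classifier.predict(describe(frame[y:y + h, x:x + w])[None, :])[0]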
The video abstract generation device of this embodiment can also comprise a trajectory tracking module, which tracks the moving targets detected in every frame. The tracking can adopt a nearest-neighbour method, and the concrete steps can be: perform trajectory tracking on the moving target of the target frame image and the moving target of the frame preceding the target frame, to obtain the movement trajectory of the moving target.
Trajectory tracking can comprise the judgement of several steps such as trajectory association, trajectory generation and trajectory disappearance. The concrete method of discrimination is: calculate the intersection area of the moving target of the target frame image and the moving target of the preceding frame; judge whether the intersection area is greater than the preset area value; when the intersection area is greater than the preset area value, update the movement trajectory according to the position of the moving target of the target frame image on the image; when the intersection area is less than or equal to the preset area value, generate a new movement trajectory according to the position of the moving target of the target frame image on the image. These steps are repeated until all the moving targets extracted in the current frame have been traversed.
Suppose the area of the moving target in the preceding frame is S_pre and the area of the moving target in the target frame image is S_temp. The preset area value can be set to min(S_pre, S_temp) × R, so that when the intersection area S_cross satisfies S_cross > min(S_pre, S_temp) × R, the moving target of the target frame image is judged to be associated with the trajectory of the moving target of the preceding frame, and that trajectory is updated according to the position of the moving target of the target frame image on the image. In this formula R is the intersection ratio; it can be set empirically, and a typical value is 0.4.
The intersection area is computed as S_cross = Width_cross × Height_cross, where
Width_cross = min(right_pre, right_temp) − max(left_pre, left_temp). Here right_pre is the maximum abscissa of the preceding-frame moving target in image coordinates, representing the rightmost position of the target in the preceding frame; left_pre is its minimum abscissa, representing the leftmost position; right_temp is the maximum abscissa of the moving target in the target frame, representing the rightmost position of the target in the target frame image; left_temp is its minimum abscissa, representing the leftmost position. Thus min(right_pre, right_temp) is the smaller of right_pre and right_temp, and max(left_pre, left_temp) is the larger of left_pre and left_temp.
Height_cross = min(top_pre, top_temp) − max(bottom_pre, bottom_temp). Here bottom_pre is the minimum ordinate of the preceding-frame moving target in image coordinates, representing the bottommost position of the target in the preceding frame; top_pre is its maximum ordinate, representing the topmost position; top_temp is the maximum ordinate of the moving target in the target frame, representing the topmost position of the target in the target frame image; bottom_temp is its minimum ordinate, representing the bottommost position. Thus min(top_pre, top_temp) is the smaller of top_pre and top_temp, and max(bottom_pre, bottom_temp) is the larger of bottom_pre and bottom_temp.
When the intersection area is less than or equal to the preset area value, that is, S_cross ≤ min(S_pre, S_temp) × R, a new movement trajectory is generated according to the position of the moving target of the target frame image on the image, and the earlier movement trajectory is judged to have disappeared.
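A minimal sketch of this intersection-area association rule follows, with boxes in (x, y, w, h) form so that right = x + w and the height overlap is computed analogously to the width overlap; the data structures and function names are illustrative:

    def intersection_area(a, b):
        """S_cross of two boxes given as (x, y, w, h): Width_cross * Height_cross."""
        width_cross = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
        height_cross = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
        return max(width_cross, 0) * max(height_cross, 0)

    def associate(tracks, detections, ratio=0.4):
        """Associate current-frame detections with stored trajectories.
        tracks: list of trajectories, each a list of boxes; ratio: the empirical R."""
        for det in detections:
            for track in tracks:
                prev = track[-1]
                preset = min(prev[2] * prev[3], det[2] * det[3]) * ratio  # min(S_pre, S_temp) * R
                if intersection_area(prev, det) > preset:
                    track.append(det)        # associated: update the trajectory
                    break
            else:
                tracks.append([det])         # not associated: start a new trajectory
        return tracks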
The flow of the abstract generation module 17 can specifically comprise: arrange the movement trajectory of each moving target according to the time at which the trajectory appears and its spatial position; add the arranged movement trajectories onto the background image, thereby generating the video abstract.
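A minimal sketch of this composition step follows, assuming that each arranged trajectory stores (box, patch) pairs, i.e. the target's bounding box together with the pixels cut out of the original frame; the arrangement rule (by appearance time and spatial position) is applied before calling this function, and all names are illustrative:

    def generate_summary(background, arranged_trajectories):
        """Paste arranged trajectories onto the stored background, frame by frame."""
        frames = []
        if not arranged_trajectories:
            return frames
        length = max(len(t) for t in arranged_trajectories)
        for i in range(length):
            frame = background.copy()
            for trajectory in arranged_trajectories:
                if i < len(trajectory):
                    (x, y, w, h), patch = trajectory[i]
                    frame[y:y + h, x:x + w] = patch   # overlay the moving target
            frames.append(frame)
        return frames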
An embodiment of the invention also provides a video abstract generation method, which can be executed by any of the video abstract generation devices provided by the above embodiments of the invention. Fig. 2 is a schematic diagram of the video abstract generation method according to an embodiment of the invention. The video abstract generation method comprises:
Step S21: perform background modeling on the target frame image in the original video to obtain a background model;
Step S23: extract the moving targets in the target frame image using the background model;
Step S25: classify the extracted moving targets with a preset classifier;
Step S27: arrange the trajectories of the extracted moving targets according to the classification result, and generate an abstract.
Step S21 can specifically comprise: compute on the target frame image with the mixed Gaussian background algorithm, to obtain the mixed Gaussian model of the target frame image. Step S21 exploits the characteristics of the mixed Gaussian background algorithm: the weighted average of several Gaussian probability density functions smoothly approximates a probability density function of arbitrary shape, which is suitable for processing images of outdoor environments, so the moving targets in video captured outdoors can be identified quickly and accurately.
Step S25 can specifically comprise: classify the extracted moving targets with the preset object classifier model trained by the support vector machine. The support vector machine maps the sample space, through a nonlinear mapping p, into a high-dimensional feature space, so that the nonlinearly separable problem in the original sample space is converted into a linearly separable problem in the feature space; that is, linear separability is achieved by raising the dimension. The object classifier model is pre-trained by the support vector machine on images of multiple kinds of objects and stored.
After step S23, the video abstract generation method of this embodiment can also comprise: perform trajectory tracking on the moving target of the target frame image and the moving target of the frame preceding the target frame, to obtain the movement trajectory of the moving target. The flow is, in particular: calculate the intersection area of the moving target of the target frame image and the moving target of the preceding frame; judge whether the intersection area is greater than the preset area value; when the intersection area is greater than the preset area value, update the movement trajectory according to the position of the moving target of the target frame image on the image; when the intersection area is less than or equal to the preset area value, generate a new movement trajectory according to the position of the moving target of the target frame image on the image. The concrete calculation method has been introduced in the description of the video abstract generation device and is not repeated here.
Step S27 can specifically comprise: arrange the movement trajectory of each moving target according to the time at which the trajectory appears and its spatial position; add the arranged movement trajectories onto the background image.
The video abstract generation method of this embodiment first performs background modeling and detects the moving objects, then tracks the objects and obtains their trajectories, stores these trajectories and the background image, classifies the detected objects with the classifier into several common classes such as vehicles, humans and animals, and then arranges the trajectories of each class of objects separately to generate the abstract. That is, the method is mainly divided into several steps: foreground detection, target tracking, object classification and abstract generation. Fig. 3 is a flow chart of the video abstract generation method according to an embodiment of the invention; as shown in the figure:
The step of foreground detection mainly comprises: perform background modeling on the image with the mixed Gaussian model and extract the moving foreground; the computation applies corresponding processing to illumination and shadows. When the mixed Gaussian model is used for background modeling, the number of Gaussian components can be selected according to the video scene, and a separate Gaussian model can be trained for shadows or illumination.
The step of target tracking mainly comprises: track the targets detected in every frame; the tracking can simply adopt the nearest-neighbour method, and the trajectories and the background image are stored. If a foreground target of the current frame is associated with a trajectory stored from the previous frame, the trajectory is updated; if it is not associated with any trajectory, a new trajectory is produced; if a stored trajectory has no associated foreground target detected in the current frame, the trajectory is terminated in the next operation and stored, to be used for the subsequent generation of the video abstract.
The step of object classification mainly comprises: pre-train models of several objects with the SVM, then classify the object of each detected trajectory. The detected targets are classified with the pre-trained models; here the SVM algorithm pre-trains the models of the several objects needed, for example models of motor vehicles, motorcycles, humans and animals can be trained.
The step of abstract generation mainly comprises: according to the trajectories of the extracted moving targets and the stored background image, arrange the trajectories according to the time and spatial relationships in which they appear, then add the moving target trajectories onto the stored background image, and generate the abstract. That is, using the trajectory information of all the extracted moving targets and the stored background, the trajectories are sorted by a certain rule and then added onto the background to generate the video abstract.
With the technical scheme of the present invention, after the moving targets in the image are extracted, the preset classifier, built from models of objects commonly seen in video, classifies the extracted foreground, that is, the moving targets, and the video abstract is generated according to the classification result. A video abstract that meets user requirements is thus generated completely and accurately, so that the user can quickly obtain the needed video information from the abstract, which improves the use efficiency of video. Finally, the object classification method produces an organized abstract, meeting the purpose of quickly browsing multiple targets.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented by software plus a necessary general hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the part of the technical scheme of the present invention that in essence contributes to the prior art can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as ROM/RAM, a magnetic disk or an optical disc, and comprises several instructions that cause a computer device (which can be a personal computer, a server, a network device, etc.) to execute the method described in each embodiment of the present invention or in certain parts of the embodiments.
The above are only preferred implementations of the present invention. It should be pointed out that, for those skilled in the art, several improvements and modifications can also be made without departing from the principle of the invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A video abstract generation method, characterized by comprising:
performing background modeling on a target frame image in an original video to obtain a background model;
extracting moving targets in the target frame image using the background model;
classifying the extracted moving targets with a preset classifier;
arranging the trajectories of the extracted moving targets according to the classification result, and generating an abstract.
2. The video abstract generation method according to claim 1, characterized in that performing background modeling on the target frame image in the original video comprises:
computing on the target frame image with a mixed Gaussian background algorithm, to obtain a mixed Gaussian model of the target frame image.
3. The video abstract generation method according to claim 1, characterized in that classifying the extracted moving targets with a preset classifier comprises:
classifying the extracted moving targets with a preset object classifier model trained by a support vector machine.
4. The video abstract generation method according to claim 1, characterized by further comprising, after extracting the moving targets in the target frame image using the background model:
performing trajectory tracking on the moving target of the target frame image and the moving target of the frame preceding the target frame, to obtain a movement trajectory of the moving target.
5. The video abstract generation method according to claim 4, characterized in that performing trajectory tracking on the moving target of the target frame image and the moving target of the frame preceding the target frame comprises:
calculating an intersection area of the moving target of the target frame image and the moving target of the frame preceding the target frame;
judging whether the intersection area is greater than a preset area value;
when the intersection area is greater than the preset area value, updating the movement trajectory according to the position of the moving target of the target frame image on the image;
when the intersection area is less than or equal to the preset area value, generating a new movement trajectory according to the position of the moving target of the target frame image on the image.
6. The video abstract generation method according to claim 5, characterized in that arranging the trajectories of the extracted moving targets according to the classification result comprises:
arranging the movement trajectory of the moving target according to the time at which the movement trajectory appears and its spatial position;
adding the arranged movement trajectories onto a background image.
7. A video abstract generation device, characterized by comprising:
a background modeling module, configured to perform background modeling on a target frame image in an original video to obtain a background model;
a moving target extraction module, configured to extract moving targets in the target frame image using the background model;
a moving target classification module, configured to classify the extracted moving targets with a preset classifier;
an abstract generation module, configured to arrange the trajectories of the extracted moving targets according to the classification result and generate an abstract.
8. The video abstract generation device according to claim 7, characterized in that the background modeling module is further configured to: compute on the target frame image with a mixed Gaussian background algorithm, to obtain a mixed Gaussian model of the target frame image.
9. The video abstract generation device according to claim 7, characterized in that the moving target classification module is further configured to: classify the extracted moving targets with a preset object classifier model trained by a support vector machine.
10. The video abstract generation device according to claim 7, characterized by further comprising:
a trajectory tracking module, configured to perform trajectory tracking on the moving target of the target frame image and the moving target of the frame preceding the target frame, to obtain a movement trajectory of the moving target.
CN201210587757.6A 2012-12-30 2012-12-30 Video abstract generation method and video abstract generation device Expired - Fee Related CN103079117B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210587757.6A CN103079117B (en) 2012-12-30 2012-12-30 Video abstract generation method and video abstract generation device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210587757.6A CN103079117B (en) 2012-12-30 2012-12-30 Video abstract generation method and video abstract generation device

Publications (2)

Publication Number Publication Date
CN103079117A true CN103079117A (en) 2013-05-01
CN103079117B CN103079117B (en) 2016-05-25

Family

ID=48155505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210587757.6A Expired - Fee Related CN103079117B (en) 2012-12-30 2012-12-30 Video abstract generation method and video abstract generation device

Country Status (1)

Country Link
CN (1) CN103079117B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202003350U (en) * 2010-12-14 2011-10-05 广东鑫程电子科技有限公司 Video summary system
CN102819528A (en) * 2011-06-10 2012-12-12 中国电信股份有限公司 Method and device for generating video abstraction
CN102256065A (en) * 2011-07-25 2011-11-23 中国科学院自动化研究所 Automatic video condensing method based on video monitoring network
CN102708182A (en) * 2012-05-08 2012-10-03 浙江捷尚视觉科技有限公司 Rapid video concentration abstracting method

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104134075A (en) * 2013-05-03 2014-11-05 贺江涛 Mineral content monitoring system based on videos
CN103413330A (en) * 2013-08-30 2013-11-27 中国科学院自动化研究所 Method for reliably generating video abstraction in complex scene
CN103473333A (en) * 2013-09-18 2013-12-25 北京声迅电子股份有限公司 Method and device for extracting video abstract from ATM (Automatic Teller Machine) scene
CN103686095A (en) * 2014-01-02 2014-03-26 中安消技术有限公司 Video concentration method and system
CN103686095B (en) * 2014-01-02 2017-05-17 中安消技术有限公司 Video concentration method and system
CN103957472B (en) * 2014-04-10 2017-01-18 华中科技大学 Timing-sequence-keeping video summary generation method and system based on optimal reconstruction of events
CN103957472A (en) * 2014-04-10 2014-07-30 华中科技大学 Timing-sequence-keeping video summary generation method and system based on optimal reconstruction of events
CN104683765A (en) * 2015-02-04 2015-06-03 上海依图网络科技有限公司 Video concentration method based on mobile object detection
CN105187801A (en) * 2015-09-17 2015-12-23 桂林远望智能通信科技有限公司 Condensed video generation system and method
CN105262932A (en) * 2015-10-20 2016-01-20 深圳市华尊科技股份有限公司 Video processing method, and terminal
CN105262932B (en) * 2015-10-20 2018-06-29 深圳市华尊科技股份有限公司 A kind of method and terminal of video processing
CN106096496A (en) * 2016-05-28 2016-11-09 张维秀 A kind of fire monitoring method and system
CN109661808A (en) * 2016-07-08 2019-04-19 汉阳大学校产学协力团 Simplify the recording medium of video-generating device, method and logger computer program
CN109661808B (en) * 2016-07-08 2021-10-26 汉阳大学校产学协力团 Simplified video generation device, method, and recording medium for recording computer program
CN110351597A (en) * 2018-04-08 2019-10-18 中兴通讯股份有限公司 A kind of method, apparatus and electronic equipment of video clipping
CN109862313A (en) * 2018-12-12 2019-06-07 科大讯飞股份有限公司 A kind of video concentration method and device
CN109862313B (en) * 2018-12-12 2022-01-14 科大讯飞股份有限公司 Video concentration method and device
CN110717068A (en) * 2019-08-27 2020-01-21 中山大学 Video retrieval method based on deep learning
CN110717068B (en) * 2019-08-27 2023-04-18 中山大学 Video retrieval method based on deep learning
CN110705287A (en) * 2019-09-27 2020-01-17 北京妙笔智能科技有限公司 Method and system for generating text abstract
CN111324744A (en) * 2020-02-17 2020-06-23 中山大学 Data enhancement method based on target emotion analysis data set
CN111324744B (en) * 2020-02-17 2023-04-07 中山大学 Data enhancement method based on target emotion analysis data set

Also Published As

Publication number Publication date
CN103079117B (en) 2016-05-25

Similar Documents

Publication Publication Date Title
CN103079117A (en) Video abstract generation method and video abstract generation device
Li et al. Traffic light recognition for complex scene with fusion detections
Zhu et al. Traffic sign detection and recognition using fully convolutional network guided proposals
Hu et al. Fast detection of multiple objects in traffic scenes with a common detection framework
Min et al. A new approach to track multiple vehicles with the combination of robust detection and two classifiers
CN107315998B (en) Vehicle class division method and system based on lane line
Romdhane et al. An improved traffic signs recognition and tracking method for driver assistance system
WO2017059576A1 (en) Apparatus and method for pedestrian detection
US8965115B1 (en) Adaptive multi-modal detection and fusion in videos via classification-based-learning
CN112836639A (en) Pedestrian multi-target tracking video identification method based on improved YOLOv3 model
Li et al. A highly efficient vehicle taillight detection approach based on deep learning
CN103092930A (en) Method of generation of video abstract and device of generation of video abstract
Zhang et al. Coarse-to-fine object detection in unmanned aerial vehicle imagery using lightweight convolutional neural network and deep motion saliency
Kavitha et al. Pothole and object detection for an autonomous vehicle using yolo
CN112487905A (en) Method and system for predicting danger level of pedestrian around vehicle
Ou et al. FAMN: feature aggregation multipath network for small traffic sign detection
Dewangan et al. Towards the design of vision-based intelligent vehicle system: methodologies and challenges
Liu et al. Research on vehicle object detection algorithm based on improved YOLOv3 algorithm
Gu et al. Embedded and real-time vehicle detection system for challenging on-road scenes
CN112241757A (en) Apparatus and method for operating a neural network
Lee et al. License plate detection via information maximization
Alam et al. Faster RCNN based robust vehicle detection algorithm for identifying and classifying vehicles
CN112614158B (en) Sampling frame self-adaptive multi-feature fusion online target tracking method
Jeong et al. Homogeneity patch search method for voting-based efficient vehicle color classification using front-of-vehicle image
CN111178158B (en) Rider detection method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP03 Change of name, title or address

Address after: 100096 A910-16, block A, 9 Sandi street, Haidian District, Beijing.

Patentee after: IROBOTCITY (BEIJING) CO.,LTD.

Address before: 100085 A-6, 28, Beijing, Haidian District, Shanghai.

Patentee before: Teleframe Electronic Technology (Beijing) Co.,Ltd.

CP03 Change of name, title or address
TR01 Transfer of patent right

Effective date of registration: 20180709

Address after: 100096 Haidian District, Beijing, west 2 new apartment building, three floor commercial room 337.

Patentee after: Teleframe Technology (Beijing) Co.,Ltd.

Address before: 100096 A910-16, block A, 9 Sandi street, Haidian District, Beijing.

Patentee before: IROBOTCITY (BEIJING) CO.,LTD.

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160525

Termination date: 20191230

CF01 Termination of patent right due to non-payment of annual fee