CN106227759A - Method and device for dynamically generating a video summary - Google Patents

Method and device for dynamically generating a video summary

Info

Publication number
CN106227759A
CN106227759A
Authority
CN
China
Prior art keywords
activity
value
pixel
frame
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610555529.9A
Other languages
Chinese (zh)
Other versions
CN106227759B (en)
Inventor
江大白
陈柏年
胡增
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sinotech Nantong Co Ltd
Original Assignee
China Applied Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Applied Technology Co Ltd filed Critical China Applied Technology Co Ltd
Priority to CN201610555529.9A priority Critical patent/CN106227759B/en
Publication of CN106227759A publication Critical patent/CN106227759A/en
Application granted granted Critical
Publication of CN106227759B publication Critical patent/CN106227759B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/738Presentation of query results
    • G06F16/739Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a method and device for dynamically generating a video summary. The method is: for each frame of the original video, each pixel and the corresponding pixels in the n frames before and after it together form a vector, and the median of the elements of this vector is taken to construct a background image; each frame of the original video is compared with the value of the pixel at the corresponding position in the background image to generate an activity map and an activity-level list; the activity level of each pixel in the activity-level list is compared with a set activity-level threshold to generate a binary activity mask function and a cumulative activity function; and the video summary is generated from the cumulative activity function. The device includes a background-image generation module, a pixel activity-level computation module, a binary activity mask function generation module and a video summary generation module. The present application solves the problems that traditional methods are easily affected by recognition and tracking errors and have high algorithmic complexity; the proposed approach has low algorithmic complexity, good real-time performance and convenient user interaction.

Description

Method and device for dynamically generating a video summary
Technical field
The present application belongs to the technical field of image processing, and specifically relates to a method for dynamically generating a video summary, and also to a device for dynamically generating a video summary.
Background art
A video summary is a condensed representation of video content: different targets are spliced into a common background scene and combined in some way to generate a new, condensed video. Through this fused analysis, all moving targets can be viewed within a few seconds, the original video can be recalled, and a target's position in the original video can be located instantly. Video summaries therefore greatly improve the efficiency of analyzing massive volumes of surveillance video. Traditional methods of generating a video summary first extract the moving targets, then analyze the trajectory of each target, and finally splice the targets together. Because this involves target recognition and tracking (both of which are themselves still developing fields), the traditional approach is easily affected by recognition and tracking errors, and its algorithmic complexity is high.
Summary of the invention
In view of this, to address the problems that traditional methods are easily affected by recognition and tracking errors and have high algorithmic complexity, the present application provides a method and device for dynamically generating a video summary that avoid the influence of recognition and tracking errors to which traditional methods are prone, and that have low algorithmic complexity, good real-time performance and convenient user interaction.
In order to solve the above technical problem, this application discloses a method for dynamically generating a video summary, comprising the following steps:
Step 1: for each frame of the original video, each pixel and the corresponding pixels in the n frames before and after it together form a vector; the median of the elements of this vector is taken to constitute the background image, where n is an integer greater than or equal to 1;
Step 2: compare each frame of the original video with the value of the pixel at the corresponding position in the background image, and generate an activity map and an activity-level list;
Step 3: compare the activity level of each pixel in the activity-level list with a set activity-level threshold to generate a binary activity mask function, and then generate a cumulative activity function from the binary activity mask function;
Step 4: generate background pixels and remapped foreground pixels from the cumulative activity function; the background pixels and the remapped foreground pixels together constitute the video frames of the video summary.
Further, comparing each frame of the original video with the value of the pixel at the corresponding position in the background image is specifically: subtract the value of the pixel at the corresponding position in the background image from each pixel of each frame of the original video and take the absolute value of the difference; if the result is 0, the pixel is a static background pixel; if it is 1-255, the value represents the activity level of the pixel, and the activity map is generated in this way.
Further, the activity-level list is generated as follows: the non-zero values in the activity map are saved to form a non-zero list, and the non-zero list is sorted in ascending order of activity level; each non-zero entry includes the position and the activity level of the pixel.
Further, the binary activity mask function is generated as follows: if the activity level of a pixel in the activity-level list is greater than or equal to the set activity-level threshold, the binary activity mask function value is set to 1; if the activity level of a pixel in the activity-level list is less than the set activity-level threshold, the binary activity mask function value is set to 0.
Further, the cumulative activity function is calculated with the following algorithm:
where C(x, y, t) denotes the cumulative activity function; N0 denotes the length of the original video, i.e. the number of video frames; N1 denotes the length of the generated video summary; t denotes the frame number in the video summary, 1 ≤ t ≤ N1; and M(x, y, kN1 + t) is the binary activity mask function.
Further, generating background pixels and remapped foreground pixels from the cumulative activity function is specifically: judge the value of the cumulative activity function; if the value of the cumulative activity function is 0, generate a background pixel; if the value of the cumulative activity function is not 0, generate a remapped foreground pixel.
Further, the remapped foreground pixel is calculated with the following algorithm:
where F(x, y, t) denotes the remapped foreground pixel; N0 denotes the length of the original video, i.e. the number of video frames; N1 denotes the length of the generated video summary; t denotes the frame number in the video summary, 1 ≤ t ≤ N1; C(x, y, t) denotes the cumulative activity function; O(x, y, kN1 + t) denotes each frame of the original video; and M(x, y, kN1 + t) is the binary activity mask function.
Further, the method also includes: receiving the original video and displaying the video summary.
Further, the method also includes: setting and adjusting a time compression factor to determine the length of the generated video summary; and adjusting the activity-level threshold.
Also disclosed herein is a device for dynamically generating a video summary, comprising:
a background-image generation module, configured to, for each frame of the original video, form a vector from each pixel and the corresponding pixels in the n frames before and after it, and take the median of the elements of the vector to constitute the background image, where n is an integer greater than or equal to 1;
a pixel activity-level computation module, configured to compare each frame of the original video with the value of the pixel at the corresponding position in the background image and to generate an activity map and an activity-level list;
a binary activity mask function generation module, configured to compare the activity level of each pixel in the activity-level list with the set activity-level threshold, to generate a binary activity mask function, and then to generate a cumulative activity function from the binary activity mask function;
a video summary generation module, configured to generate background pixels and remapped foreground pixels from the cumulative activity function, the background pixels and the remapped foreground pixels together constituting the video frames of the video summary.
Compared with the prior art, the present application can obtain the following technical effects:
The method of the present application for dynamically generating a video summary forms, for each frame of the original video, a vector from each pixel and the corresponding pixels in the n frames before and after it, and takes the median of the elements of the vector to constitute a background image; it then compares each frame of the original video with the value of the pixel at the corresponding position in the background image to generate an activity map and an activity-level list; it further compares the activity level of each pixel in the activity-level list with the set activity-level threshold to generate a binary activity mask function and a cumulative activity function; and it finally generates the video summary from the cumulative activity function. This shows that the method of the present application is pixel-based rather than based on target recognition and tracking; it therefore avoids the influence of recognition and tracking errors to which traditional methods based on target recognition and tracking are prone, and its algorithmic complexity is low, its real-time performance is good, and user interaction is convenient.
Of course, any product implementing the present application does not necessarily need to achieve all of the above technical effects at the same time.
Brief description of the drawings
The accompanying drawings described here are provided for a further understanding of the present application and constitute a part of the present application; the schematic embodiments of the present application and their description are used to explain the present application and do not constitute an improper limitation of the present application. In the drawings:
Fig. 1 is a step diagram of the method for dynamically generating a video summary according to an embodiment of the present application;
Fig. 2 is a flow chart of the method for dynamically generating a video summary according to an embodiment of the present application;
Fig. 3 is a process description diagram of the method for dynamically generating a video summary according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of the device for dynamically generating a video summary according to an embodiment of the present application.
Detailed description of the invention
The embodiments of the present application are described in detail below in conjunction with the embodiments, so that the process by which the present application applies technical means to solve the technical problem and achieve the technical effect can be fully understood and implemented accordingly.
A method for dynamically generating a video summary according to the present invention, referring to Fig. 1 and Fig. 2, comprises the following steps:
Step 1: generate an adaptive background image
Extract each frame O(x, y, t) of the original video and the n frames before and after it; each pixel of O(x, y, t), together with the pixels at the corresponding position in the n frames before and after it, forms a vector V; take the median of the elements of the vector V to constitute the background image B(x, y, t), where n is an integer greater than or equal to 1.
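As an illustration of Step 1 (not part of the patent text), the following is a minimal NumPy sketch of the adaptive background computation, assuming grayscale frames stored as a (T, H, W) uint8 array; the function and variable names are illustrative.

    import numpy as np

    def build_background(frames, n=2):
        # Step 1 sketch: for every pixel, gather its values in the current frame and the
        # n frames before and after it, and take the median of that vector as the
        # background value B(x, y, t).  frames has shape (T, H, W).
        T = frames.shape[0]
        background = np.empty(frames.shape, dtype=np.float64)
        for t in range(T):
            lo, hi = max(0, t - n), min(T, t + n + 1)   # clamp the temporal window at the ends
            window = frames[lo:hi]                      # the vector V for every pixel at once
            background[t] = np.median(window, axis=0)   # per-pixel median over the window
        return background

The per-frame median window is what makes the background adaptive: slow lighting changes move the median, while short-lived moving objects do not.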
Step 2: calculate the pixel activity levels
Compare the pixel of each frame of the original video with the value of the pixel at the corresponding position in the background image, and generate an activity map and an activity-level list.
Further, comparing the pixel of each frame of the original video with the value of the pixel at the corresponding position in the background image is specifically: subtract the value of the pixel at the corresponding position in the background image B(x, y, t) from the pixel of each frame O(x, y, t) of the original video and take the absolute value of the difference; if the result is 0, the pixel is a static background pixel; if it is 1-255, the value represents the activity level of the pixel. The activity map generated in this way is denoted A(x, y, t).
Further, the activity-level list is generated as follows: the non-zero values in the activity map are saved to form a non-zero list, and the non-zero list is sorted in ascending order of activity level; each non-zero entry includes the position and the activity level of the pixel.
In the embodiment of the present application, in many video surveillance scenes A(x, y, t) is a sparse matrix (most of its elements are 0), because most pixels in an original video frame are background pixels. Therefore only the non-zero values of A(x, y, t) need to be saved, including the position (x, y) and the activity level A(x, y, t) of each pixel, to generate a non-zero list, and this list is sorted in ascending order of activity level. In this way, when the user changes the activity-level threshold, the active pixels above the threshold can be found quickly (only the first active pixel above the threshold needs to be located).
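Continuing the same illustrative sketch for Step 2 (helper names are assumptions, not from the patent): the activity map is the per-pixel absolute difference against the background, and only its non-zero entries are kept, sorted ascending by activity level so that a changed threshold needs just one binary search, as described above.

    import numpy as np

    def activity_map(frame, background):
        # Step 2 sketch: A(x, y, t) = |O(x, y, t) - B(x, y, t)|; 0 marks a static
        # background pixel, 1-255 is the pixel's activity level.
        diff = frame.astype(np.int16) - background.astype(np.int16)
        return np.abs(diff).astype(np.uint8)

    def activity_list(act):
        # Keep only the non-zero entries of the (typically sparse) activity map as
        # (level, y, x) records, sorted in ascending order of activity level.
        ys, xs = np.nonzero(act)
        levels = act[ys, xs]
        order = np.argsort(levels, kind="stable")
        return levels[order], ys[order], xs[order]

    def active_above(levels, ys, xs, theta):
        # One binary search locates the first entry whose level is >= theta;
        # everything after it is an active pixel for that threshold.
        i = np.searchsorted(levels, theta, side="left")
        return ys[i:], xs[i:]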
Step 3: generate the binary activity mask function
Compare the activity level of each pixel in the activity-level list with the set activity-level threshold to generate the binary activity mask function, and then generate the cumulative activity function from the binary activity mask function.
Further, the binary activity mask function is generated as follows: if the activity level of a pixel in the activity-level list is greater than or equal to the set activity-level threshold, the binary activity mask function value is set to 1; if the activity level of a pixel in the activity-level list is less than the set activity-level threshold, the binary activity mask function value is set to 0.
In the embodiment of the present application, the binary activity mask function is calculated with the following algorithm:
$$M(x,y,t)=\begin{cases}1 & \text{if } A(x,y,t)\geq\theta\\ 0 & \text{if } A(x,y,t)<\theta\end{cases}$$
where θ denotes the set activity-level threshold and A(x, y, t) denotes the activity level of a pixel in the activity-level list.
Further, the cumulative activity function is calculated with the following algorithm:
where C(x, y, t) denotes the cumulative activity function; N0 denotes the length of the original video, i.e. the number of video frames; N1 denotes the length of the generated video summary; t denotes the frame number (which frame) in the video summary; and M(x, y, kN1 + t) is the binary activity mask function.
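A sketch of Step 3 follows. The mask M implements the formula above; the cumulative activity function is shown in the patent only as a formula image, so the code below assumes C(x, y, t) sums the mask over the original frames kN1 + t that fold onto summary frame t. That reading is inferred from the variable definitions and should not be taken as the patent's verbatim formula.

    import numpy as np

    def binary_mask(act, theta):
        # M(x, y, t) = 1 where A(x, y, t) >= theta, else 0.
        return (act >= theta).astype(np.uint8)

    def cumulative_activity(masks, N1):
        # Assumed form of C: the N0 original frames are folded onto N1 summary frames,
        # and C(x, y, t) counts how many of the frames k*N1 + t (k = 0, 1, ...) are
        # active at (x, y).  masks has shape (N0, H, W); the result has shape (N1, H, W).
        N0, H, W = masks.shape
        C = np.zeros((N1, H, W), dtype=np.int32)
        for t in range(N1):
            C[t] = masks[t::N1].sum(axis=0)   # sum of M(x, y, k*N1 + t) over k
        return C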
Step 4: remap the active pixels and generate the video summary
Generate background pixels and remapped foreground pixels from the cumulative activity function; the background pixels and the remapped foreground pixels together constitute the video frames of the video summary.
Further, generating background pixels and remapped foreground pixels from the cumulative activity function is specifically: judge the value of the cumulative activity function; if the value of the cumulative activity function is 0, generate a background pixel; if the value of the cumulative activity function is not 0, generate a remapped foreground pixel (active pixel).
In the embodiment of the present application, the video summary is obtained with the following algorithm:
$$S(x,y,t)=\begin{cases}B(x,y,t) & \text{if } C(x,y,t)=0\\ F(x,y,t) & \text{otherwise}\end{cases}$$
where S(x, y, t) denotes a video frame of the video summary; B(x, y, t) denotes the background image; C(x, y, t) denotes the cumulative activity function; and F(x, y, t) denotes the remapped foreground pixel.
Further, the remapped foreground pixel is calculated with the following algorithm:
where F(x, y, t) denotes the remapped foreground pixel; N0 denotes the length of the original video, i.e. the number of video frames; N1 denotes the length of the generated video summary; t denotes the frame number (which frame) in the video summary; C(x, y, t) denotes the cumulative activity function; O(x, y, kN1 + t) denotes each frame of the original video; and M(x, y, kN1 + t) is the binary activity mask function.
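A sketch of Step 4 follows. The composition rule (background pixel where C = 0, remapped foreground pixel elsewhere) matches the formula above; the formula for F itself is again only a formula image, so the code assumes F averages the active original pixel values O(x, y, kN1 + t), weighted by the mask M and normalised by C. Treat this as one plausible reading rather than the patent's exact remapping.

    import numpy as np

    def summary_frames(frames, background, masks, C, N1):
        # S(x, y, t) = B(x, y, t) where C(x, y, t) == 0, F(x, y, t) otherwise.
        # Assumed remapping: F is the mask-weighted mean of the original pixels that
        # fold onto summary frame t.
        N0, H, W = frames.shape
        S = np.empty((N1, H, W), dtype=np.float64)
        for t in range(N1):
            O = frames[t::N1].astype(np.float64)            # O(x, y, k*N1 + t) for all k
            M = masks[t::N1].astype(np.float64)
            F = (O * M).sum(axis=0) / np.maximum(C[t], 1)   # avoid division by zero
            S[t] = np.where(C[t] == 0, background[t], F)
        return S.astype(np.uint8)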
Further, before step 1 the method also includes: receiving the original video.
Further, after step 3 the method also includes: setting and adjusting a time compression factor to determine the length of the generated video summary.
Specifically, the time compression factor is a quantity set by the user and is used to determine the length of the generated video summary; it satisfies:
where r denotes the time compression factor; N0 denotes the length of the original video; and N1 denotes the length of the generated video summary.
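The relation that the time compression factor "satisfies" is also given only as a formula image; a natural reading of the definitions is r = N0 / N1, which the small helper below assumes.

    def summary_length(N0, r):
        # Assumed relation r = N0 / N1, hence N1 = ceil(N0 / r), with at least one frame.
        return max(1, -(-N0 // r))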
Further, the method also includes: adjusting the activity-level threshold. Specifically, if the set activity-level threshold is adjusted, return to step 3; otherwise display the video summary.
Further, after step 4 the method also includes: displaying the video summary.
Fig. 3 describes the above process. From the above method it can be seen that the video summary generated by the method of the present invention maps every pixel whose M(x, y, t) is 1 into the finally generated video summary; it can also be seen that the principle of the present invention is pixel-based rather than based on target recognition and tracking.
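Putting the sketches together, the illustrative driver below runs the hypothetical helpers defined above on a small synthetic grayscale clip; the frame size, threshold and compression factor are made-up demonstration values.

    import numpy as np

    rng = np.random.default_rng(0)
    frames = rng.integers(0, 20, size=(120, 64, 64), dtype=np.uint8)   # mostly static scene
    frames[40:80, 20:30, 20:30] = 200                                  # a moving-object-like blob

    background = build_background(frames, n=2)
    acts = np.stack([activity_map(f, b) for f, b in zip(frames, background)])
    masks = binary_mask(acts, theta=30)

    N1 = summary_length(N0=len(frames), r=4)        # user-chosen time compression factor
    C = cumulative_activity(masks, N1)
    summary = summary_frames(frames, background, masks, C, N1)
    print(summary.shape)                            # (30, 64, 64): the condensed clip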
A device for dynamically generating a video summary according to the present invention, referring to Fig. 4, comprises:
a background-image generation module, configured to, for each frame of the original video, form a vector from each pixel and the corresponding pixels in the n frames before and after it, and take the median of the elements of the vector to constitute the background image, where n is an integer greater than or equal to 1;
a pixel activity-level computation module, configured to compare each frame of the original video with the value of the pixel at the corresponding position in the background image and to generate an activity map and an activity-level list;
a binary activity mask function generation module, configured to compare the activity level of each pixel in the activity-level list with the set activity-level threshold, to generate a binary activity mask function, and then to generate a cumulative activity function from the binary activity mask function;
a video summary generation module, configured to generate background pixels and remapped foreground pixels from the cumulative activity function, the background pixels and the remapped foreground pixels together constituting the video frames of the video summary.
Further, comparing each frame of the original video with the value of the pixel at the corresponding position in the background image is specifically: subtract the value of the pixel at the corresponding position in the background image from each pixel of each frame of the original video and take the absolute value of the difference; if the result is 0, the pixel is a static background pixel; if it is 1-255, the value represents the activity level of the pixel, and the activity map is generated in this way.
Further, the activity-level list is generated as follows: the non-zero values in the activity map are saved to form a non-zero list, and the non-zero list is sorted in ascending order of activity level; each non-zero entry includes the position and the activity level of the pixel.
Further, the binary activity mask function is generated as follows: if the activity level of a pixel in the activity-level list is greater than or equal to the set activity-level threshold, the binary activity mask function value is set to 1; if the activity level of a pixel in the activity-level list is less than the set activity-level threshold, the binary activity mask function value is set to 0.
Further, the cumulative activity function is calculated with the following algorithm:
where C(x, y, t) denotes the cumulative activity function; N0 denotes the length of the original video, i.e. the number of video frames; N1 denotes the length of the generated video summary; t denotes the frame number (which frame) in the video summary, 1 ≤ t ≤ N1; and M(x, y, kN1 + t) is the binary activity mask function.
Further, generating background pixels and remapped foreground pixels from the cumulative activity function is specifically: judge the value of the cumulative activity function; if the value of the cumulative activity function is 0, generate a background pixel; if the value of the cumulative activity function is not 0, generate a remapped foreground pixel.
Further, the remapped foreground pixel is calculated with the following algorithm:
where F(x, y, t) denotes the remapped foreground pixel; N0 denotes the length of the original video, i.e. the number of video frames; N1 denotes the length of the generated video summary; t denotes the frame number in the video summary, 1 ≤ t ≤ N1; C(x, y, t) denotes the cumulative activity function; O(x, y, kN1 + t) denotes each frame of the original video; and M(x, y, kN1 + t) is the binary activity mask function.
Further, the device also includes: a receiving module, configured to receive the original video.
Further, the device also includes: a time compression factor setting module, configured to set and adjust the time compression factor and to determine the length of the generated video summary.
Further, the device also includes: a display module, configured to display the video summary.
Further, the device also includes: an activity-level threshold adjusting module, configured to adjust the set activity-level threshold.
The pixel-based method for dynamically generating a video summary proposed by the present invention avoids the susceptibility of traditional methods based on target recognition and tracking to recognition and tracking errors; moreover, its algorithmic complexity is low, its real-time performance is good, and user interaction is convenient.
Certain terms are used throughout the description and claims to refer to particular components or methods. Those skilled in the art will appreciate that the same component may be referred to by different names in different regions. This specification and the claims do not distinguish components by differences in name. "Comprising", as used throughout the description and claims, is an open-ended term and should therefore be interpreted as "comprising but not limited to". "Substantially" means that, within an acceptable error range, a person skilled in the art can solve the technical problem within a certain error range and basically achieve the technical effect. The following description is of preferred embodiments for implementing the present application; it is given for the purpose of illustrating the general principles of the present application and is not intended to limit the scope of the present application. The scope of protection of the present application is defined by the appended claims.
It should also be noted that the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that an article or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such an article or system. Without further limitation, an element defined by the statement "including a ..." does not exclude the presence of other identical elements in the article or system that includes the element.
The above illustrates and describes some preferred embodiments of the invention, but, as noted above, it should be understood that the invention is not limited to the forms disclosed herein; it should not be regarded as excluding other embodiments, and it can be used in various other combinations, modifications and environments, and can be changed within the scope of the inventive concept described herein through the above teachings or the technology or knowledge of the related art. Changes and modifications made by those skilled in the art that do not depart from the spirit and scope of the invention shall all fall within the scope of protection of the appended claims of the invention.

Claims (10)

1. A method for dynamically generating a video summary, characterised in that it comprises the following steps:
Step 1: for each frame of the original video, each pixel and the corresponding pixels in the n frames before and after it together form a vector; the median of the elements of this vector is taken to constitute the background image, where n is an integer greater than or equal to 1;
Step 2: compare each frame of the original video with the value of the pixel at the corresponding position in the background image, and generate an activity map and an activity-level list;
Step 3: compare the activity level of each pixel in the activity-level list with a set activity-level threshold to generate a binary activity mask function, and then generate a cumulative activity function from the binary activity mask function;
Step 4: generate background pixels and remapped foreground pixels from the cumulative activity function; the background pixels and the remapped foreground pixels together constitute the video frames of the video summary.
2. The method for dynamically generating a video summary according to claim 1, characterised in that comparing each frame of the original video with the value of the pixel at the corresponding position in the background image is specifically: subtracting the value of the pixel at the corresponding position in the background image from each pixel of each frame of the original video and taking the absolute value of the difference; if the result is 0, the pixel is a static background pixel; if it is 1-255, the value represents the activity level of the pixel, thereby generating the activity map.
3. The method for dynamically generating a video summary according to claim 1, characterised in that the activity-level list is generated as follows: the non-zero values in the activity map are saved to form a non-zero list, and the non-zero list is sorted in ascending order of activity level; each non-zero entry includes the position and the activity level of the pixel.
4. The method for dynamically generating a video summary according to claim 1, characterised in that the binary activity mask function is generated as follows: if the activity level of a pixel in the activity-level list is greater than or equal to the set activity-level threshold, the binary activity mask function value is set to 1; if the activity level of a pixel in the activity-level list is less than the set activity-level threshold, the binary activity mask function value is set to 0.
5. The method for dynamically generating a video summary according to claim 1, characterised in that the cumulative activity function is calculated with the following algorithm:
where C(x, y, t) denotes the cumulative activity function; N0 denotes the length of the original video, i.e. the number of video frames; N1 denotes the length of the generated video summary; t denotes the frame number in the video summary, 1 ≤ t ≤ N1; and M(x, y, kN1 + t) is the binary activity mask function.
6. The method for dynamically generating a video summary according to claim 1, characterised in that generating background pixels and remapped foreground pixels from the cumulative activity function is specifically: judging the value of the cumulative activity function; if the value of the cumulative activity function is 0, generating a background pixel; if the value of the cumulative activity function is not 0, generating a remapped foreground pixel.
7. The method for dynamically generating a video summary according to claim 1 or 6, characterised in that the remapped foreground pixel is calculated with the following algorithm:
where F(x, y, t) denotes the remapped foreground pixel; N0 denotes the length of the original video, i.e. the number of video frames; N1 denotes the length of the generated video summary; t denotes the frame number in the video summary, 1 ≤ t ≤ N1; C(x, y, t) denotes the cumulative activity function; O(x, y, kN1 + t) denotes each frame of the original video; and M(x, y, kN1 + t) is the binary activity mask function.
8. The method for dynamically generating a video summary according to claim 1, characterised in that it also comprises: receiving the original video and displaying the video summary.
9. The method for dynamically generating a video summary according to claim 1, characterised in that it also comprises: setting and adjusting a time compression factor to determine the length of the generated video summary; and adjusting the activity-level threshold.
10. A device for dynamically generating a video summary, characterised in that it comprises:
a background-image generation module, configured to, for each frame of the original video, form a vector from each pixel and the corresponding pixels in the n frames before and after it, and take the median of the elements of the vector to constitute the background image, where n is an integer greater than or equal to 1;
a pixel activity-level computation module, configured to compare each frame of the original video with the value of the pixel at the corresponding position in the background image and to generate an activity map and an activity-level list;
a binary activity mask function generation module, configured to compare the activity level of each pixel in the activity-level list with the set activity-level threshold, to generate a binary activity mask function, and then to generate a cumulative activity function from the binary activity mask function;
a video summary generation module, configured to generate background pixels and remapped foreground pixels from the cumulative activity function, the background pixels and the remapped foreground pixels together constituting the video frames of the video summary.
CN201610555529.9A 2016-07-14 2016-07-14 Method and device for dynamically generating a video summary Active CN106227759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610555529.9A CN106227759B (en) 2016-07-14 2016-07-14 Method and device for dynamically generating a video summary

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610555529.9A CN106227759B (en) 2016-07-14 2016-07-14 Method and device for dynamically generating a video summary

Publications (2)

Publication Number Publication Date
CN106227759A true CN106227759A (en) 2016-12-14
CN106227759B CN106227759B (en) 2019-09-13

Family

ID=57519950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610555529.9A Active CN106227759B (en) 2016-07-14 2016-07-14 Method and device for dynamically generating a video summary

Country Status (1)

Country Link
CN (1) CN106227759B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028262A (en) * 2019-12-06 2020-04-17 衢州学院 Multi-channel composite high-definition high-speed video background modeling method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184221A (en) * 2011-05-06 2011-09-14 北京航空航天大学 Real-time video abstract generation method based on user preferences
CN104093001A (en) * 2014-07-23 2014-10-08 山东建筑大学 Online dynamic video compression method
CN105025360A (en) * 2015-07-17 2015-11-04 江西洪都航空工业集团有限责任公司 Improved fast video summarization method and system
WO2015184768A1 (en) * 2014-10-23 2015-12-10 中兴通讯股份有限公司 Method and device for generating video abstract

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184221A (en) * 2011-05-06 2011-09-14 北京航空航天大学 Real-time video abstract generation method based on user preferences
CN104093001A (en) * 2014-07-23 2014-10-08 山东建筑大学 Online dynamic video compression method
WO2015184768A1 (en) * 2014-10-23 2015-12-10 中兴通讯股份有限公司 Method and device for generating video abstract
CN105025360A (en) * 2015-07-17 2015-11-04 江西洪都航空工业集团有限责任公司 Improved fast video summarization method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
韩小萱: "Research on Key Technologies for Efficient Surveillance Video Summarization", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028262A (en) * 2019-12-06 2020-04-17 衢州学院 Multi-channel composite high-definition high-speed video background modeling method

Also Published As

Publication number Publication date
CN106227759B (en) 2019-09-13


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 230601 floor 12, building e, intelligent equipment science and Technology Park, 3963 Susong Road, Hefei Economic and Technological Development Zone, Anhui Province

Patentee after: CHINA APPLIED TECHNOLOGY Co.,Ltd.

Address before: 230088 Anhui city of Hefei province high tech Zone Innovation Industrial Park Road Wenqu room B1-1102

Patentee before: CHINA APPLIED TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20201015

Address after: Room 102, building 8, Caizhi Tiandi garden, 255 Renmin Middle Road, Nantong City, Jiangsu Province 226000

Patentee after: Sinotech (Nantong) Co., Ltd

Address before: 230601 floor 12, building e, intelligent equipment science and Technology Park, 3963 Susong Road, Hefei Economic and Technological Development Zone, Anhui Province

Patentee before: CHINA APPLIED TECHNOLOGY Co.,Ltd.