CN104063883B - Surveillance video summary generation method based on the combination of objects and key frames - Google Patents

Surveillance video summary generation method based on the combination of objects and key frames Download PDF

Info

Publication number
CN104063883B
CN104063883B CN201410321281.0A CN201410321281A
Authority
CN
China
Prior art keywords
frame
image
video
background
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410321281.0A
Other languages
Chinese (zh)
Other versions
CN104063883A (en)
Inventor
裘加林
陈建群
王亚沛
张标标
李仁旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HANGZHOU YINJIANG ZHIHUI MEDICAL GROUP CO Ltd
Original Assignee
HANGZHOU YINJIANG ZHIHUI MEDICAL GROUP CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HANGZHOU YINJIANG ZHIHUI MEDICAL GROUP CO Ltd filed Critical HANGZHOU YINJIANG ZHIHUI MEDICAL GROUP CO Ltd
Priority to CN201410321281.0A priority Critical patent/CN104063883B/en
Publication of CN104063883A publication Critical patent/CN104063883A/en
Application granted granted Critical
Publication of CN104063883B publication Critical patent/CN104063883B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention relates to the field of video image processing, and in particular to a surveillance video summary generation method based on the combination of objects and key frames. The method includes: extracting motion segments, establishing a Gaussian mixture background model, detecting moving objects by background subtraction, tracking moving objects based on Kalman prediction, extracting key frames after image frames have been accurately clustered through preliminary extraction and further classification, and cutting out moving objects and compositing them into the summary. The beneficial effects of the present invention are: the method adopts an object-based video summarization pipeline and incorporates an improved key frame extraction method; because key frames are extracted only after accurate clustering through preliminary extraction and further classification, redundant video information is reduced as far as possible without omitting key information, giving a better degree of video compression; a relatively reasonable video composition procedure is also proposed, so that users can review surveillance content in a short time and locate useful information quickly and effectively.

Description

Surveillance video summary generation method based on the combination of objects and key frames
Technical Field
The invention relates to the field of video image processing, and in particular to a surveillance video summary generation method based on the combination of objects and key frames.
Background
With the development of digital video technology, more and more video surveillance equipment is deployed throughout society. These devices monitor in real time and acquire massive amounts of video information every day, so the demands on video analysis and processing keep growing. A surveillance installation normally requires personnel to watch the video, but browsing massive video data and finding, accurately and quickly, what happened during a given period is time-consuming and laborious work. Moreover, an operator usually has to attend to several monitors at once and cannot follow every detail in real time, so lapses of attention easily cause events to be missed.
To help monitoring personnel browse video data quickly and query video information conveniently, video summarization methods based on the analysis of video content have been proposed. A video summary, also called video synopsis, is a concise condensation of video content: moving objects are extracted by analysis, the motion trajectory of each object is determined, and the different objects are spliced into a common background scene and combined in a certain way to obtain new video content. Existing video summarization research is mostly based either on key frames alone or on objects alone. Key-frame-based summarization selects a sequence of key frames from the original video and combines them into a new video; its defect is that it cannot fully express the semantics of the video, so a large amount of video information is lost and cannot be obtained comprehensively. Object-based summarization can describe the trajectories of moving objects, but its degree of video compression is weak.
Disclosure of Invention
The object of the invention is to overcome these defects and provide a surveillance video summary generation method based on the combination of objects and key frames, which solves the problems of incomplete video semantics and low compression found in existing video summaries.
The invention achieves this object through the following technical scheme. A surveillance video summary generation method based on the combination of objects and key frames comprises the following steps:
1) Comparing two adjacent frames of images to obtain a change value and comparing it with a threshold, keeping the images whose change value is larger than the threshold as motion segments, the motion segments forming a new video;
2) Uniformly dividing the new video into several segments by video time, generating a background model for the image frames in each segment using a Gaussian mixture background model, adding timestamp information to the background model, and storing it in a background database;
3) At any time $t$, comparing the current frame image $I_t$ with the background model $B_t$ to obtain a difference value at each point $(x, y)$, and comparing the difference value with a threshold $T$; the parts where the difference exceeds $T$ are moving objects, and the rest is background;
4) Tracking the moving objects based on a Kalman filtering algorithm, and storing the tracking information of all moving objects in an object database;
5) Marking each detected moving object, comparing similarity using the object's color features, grouping image frames of high similarity into classes, computing the average of the image frames of each class, and selecting the frame with the smallest difference from the average as the key frame;
6) Cutting out the moving objects with the GraphCut algorithm inside the regions where they are marked, taking background images of different time periods from the background database and copying them, and placing the cut-out moving objects at their corresponding positions in the background images in time order to form the video summary.
Preferably, in step 5), the first frame is taken as the reference frame with color histogram $V_1$, the color histogram of a subsequent frame is $V_i$, and the similarity between $V_i$ and $V_1$ is $S_i$. Each frame is divided into $M \times N$ blocks, and the calculation formulas are

$$S_{ij} = |V_{ij} - V_{1j}|, \qquad S_i = \sum_{j} M_j S_{ij}$$

where $S_{ij}$ is the similarity value of the $j$th block of the $i$th frame and $M_j$ is the weighting factor of the $j$th block.

The image frames are classified according to these formulas: if $S_i \le \alpha$ (a threshold), the image frame is grouped with $V_1$ into the first class of image frames $L_1$; if $\alpha < S_i \le \alpha_0$, the image frame is classified as a pending frame; if $S_i > \alpha_0$, the image frame is classified into the second class $L_2$. The first frame of the second class $L_2$ is then taken as the new reference frame and compared for similarity with each subsequent frame, until the comparison of all image frames is completed.
Preferably, an auxiliary thread is established for the pending frames, using channel-separated color similarity comparison, specifically as follows:
the cvSplit function is used to separate the RGB images of the input reference frame and the pending frame into the three single-channel images bImg, gImg and rImg of the B, G and R channels; the similarities of the three color channels of the reference frame image and the pending frame image are then compared separately; if the similarities of all three channels are within their thresholds, the pending frame belongs to the same class as the reference frame, otherwise it belongs to the next class.
Preferably, each detected moving object is marked with its minimum bounding rectangle, and this rectangle serves as both the similarity comparison region and the moving object cut-out region.
Preferably, the step 6) specifically comprises the following steps:
6.1) Extract the background image frame obtained in the first time period;
6.2) Place all the moving objects cut out during that time period at their corresponding positions in the background image in time order, and number them;
6.3) Apply semi-transparency processing to object parts that overlap within one frame image;
6.4) Stop placing when the number of objects placed in one frame image exceeds a set threshold;
6.5) Copy the background image frame and continue placing objects until all are placed;
6.6) Extract the background image frame of the next time period and the objects cut out during that period, and repeat steps 6.2)-6.5);
6.7) Combine all the synthesized image frames to form the final video summary.
The invention has the following beneficial effects: the method adopts an object-based video summarization pipeline and integrates an improved key frame extraction method; key frames are extracted after the image frames have been accurately clustered through preliminary extraction and further classification, so redundant video information is reduced as far as possible without omitting key information, and a better degree of video compression is achieved. A more reasonable video composition procedure is also proposed, so that a user can review the surveillance content in a short time and locate useful information quickly and effectively.
Drawings
FIG. 1 is a flow chart of the method steps of the present invention;
FIG. 2 is a flowchart of the steps of key frame extraction;
fig. 3 is a diagram of the block weight distribution of image frames in an embodiment of the present invention.
Detailed Description
The invention will be further described with reference to specific examples, but the scope of the invention is not limited thereto:
example 1: as shown in fig. 1, a method for generating a surveillance video summary based on combination of an object and a key frame includes the following steps:
(1) Extracting motion segments:
Adjacent frames are compared continuously using an adjacent-frame comparison method; if the change between two frames is greater than a threshold, a moving object is considered to appear and the segment is extracted as a motion segment, otherwise it is discarded as a non-motion segment.
Taking an image frame of size $m \times n$ as an example, let the gray value of each pixel point be $G_{ij}$; the pixel matrix of the $K$th frame image is then

$$G_K = \begin{pmatrix} G_{11} & \cdots & G_{1n} \\ \vdots & \ddots & \vdots \\ G_{m1} & \cdots & G_{mn} \end{pmatrix}$$

Let $A = (1, 1, \ldots, 1)$ be a $1 \times m$ row vector and $B = (1, 1, \ldots, 1)^T$ an $n \times 1$ column vector; then the column-sum and row-sum vectors of frame $K$ are

$$C_K = A \, G_K, \qquad R_K = G_K \, B$$

The vectors $C_{K+1}$ and $R_{K+1}$ of frame $K+1$ are obtained in the same way. Subtracting the corresponding row and column sums of frames $K$ and $K+1$ and taking absolute values gives

$$C_d = |C_{K+1} - C_K|, \qquad R_d = |R_{K+1} - R_K|$$

Let

$$\Sigma_C = C_d \, B, \qquad \Sigma_R = A \, R_d$$

Finally, the two scalars $\Sigma_C$ and $\Sigma_R$, representing the column and row contrast respectively, are compared with thresholds $T_C$ and $T_R$. If

$$\Sigma_C > T_C \;\lor\; \Sigma_R > T_R \tag{7}$$

that is, if either the row or the column contrast exceeds its threshold, the segment is judged a motion segment; otherwise it is a non-motion segment and is discarded to reduce redundant video.
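For illustration, the row/column comparison above can be sketched in Python with NumPy as follows; the function name and the threshold values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

# Sketch of the motion-segment test of formula (7), assuming grayscale frames
# as NumPy arrays; the thresholds t_c and t_r are illustrative values.
def is_motion_segment(frame_k, frame_k1, t_c=5000.0, t_r=5000.0):
    g_k = frame_k.astype(np.float64)
    g_k1 = frame_k1.astype(np.float64)
    # Column sums C_K = A * G_K and row sums R_K = G_K * B, differenced.
    c_d = np.abs(g_k1.sum(axis=0) - g_k.sum(axis=0))  # C_d, length n
    r_d = np.abs(g_k1.sum(axis=1) - g_k.sum(axis=1))  # R_d, length m
    sigma_c = c_d.sum()   # Sigma_C = C_d * B
    sigma_r = r_d.sum()   # Sigma_R = A * R_d
    # Formula (7): a motion segment if either contrast exceeds its threshold.
    return sigma_c > t_c or sigma_r > t_r
```

Consecutive grayscale frames (e.g. read with cv2.VideoCapture and converted with cv2.cvtColor) that pass this test would be retained as a motion segment.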
(2) Gaussian mixture background modeling:
K Gaussian models are established for each pixel point to reflect the real background as it changes over time; new image frames are matched against them and the models are continuously updated. The Gaussian mixture model established for each pixel point is

$$P(x_t) = \sum_{i=1}^{K} w_{i,t} \, \eta(x_t, u_{i,t}, \Sigma_{i,t})$$

where $w_{i,t}$ is the weight of the $i$th Gaussian model, $\eta(x_t, u_{i,t}, \Sigma_{i,t})$ is the probability density function of the $i$th Gaussian model, and $\Sigma_{i,t}$ is the covariance matrix of the $i$th Gaussian model.

The weight $w_{i,t}$ of the $i$th Gaussian model at time $t$ is updated by

$$w_{i,t} = (1 - \alpha) w_{i,t-1} + \alpha M_{i,t} \tag{10}$$

where $\alpha$ is the learning rate and $M_{i,t}$ is the matching indicator, equal to 1 for the matched model and 0 for the rest. Unmatched models remain unchanged, and the matched model is updated as

$$u_t = (1 - \rho) u_{t-1} + \rho x_t \tag{11}$$

$$\sigma_t^2 = (1 - \rho) \sigma_{t-1}^2 + \rho (x_t - u_t)^T (x_t - u_t) \tag{12}$$

$$\rho = \alpha \, \eta(x_t \mid u_k, \sigma_k) \tag{13}$$

All Gaussian distributions are arranged in priority order, and the first $N$ distributions by weight are taken to generate the background model, with $N$ selected by

$$N = \arg\min_{n} \left( \sum_{k=1}^{n} w_k > T \right) \tag{14}$$

where $T$ is a predetermined threshold.
A mode of periodically extracting the background model is adopted: the video time is divided uniformly into N time periods, the Gaussian mixture method is applied to the image frames of each period to generate its background model, and the model is stored in the background database with timestamp information added.
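A minimal sketch of this periodic background extraction, using OpenCV's MOG2 mixture-of-Gaussians subtractor as a stand-in for the per-pixel model derived above; the period length and the plain-dict "database" are illustrative assumptions:

```python
import cv2

# Build one background image per time period from (timestamp, frame) pairs.
def build_background_database(frames_with_timestamps, period_len=500):
    background_db = {}
    for start in range(0, len(frames_with_timestamps), period_len):
        period = frames_with_timestamps[start:start + period_len]
        mog = cv2.createBackgroundSubtractorMOG2(history=period_len)
        for timestamp, frame in period:
            mog.apply(frame)  # updates the mixture weights, cf. formula (10)
        # Store the learned background keyed by the period's first timestamp.
        background_db[period[0][0]] = mog.getBackgroundImage()
    return background_db
```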
(3) Background subtraction detection of moving objects:
According to the background modeling above, the background image model $B_t$ at any time $t$ can be obtained; differencing the gray values of the current frame image $I_t$ and the background image at point $(x, y)$ gives

$$\Delta d_t(x, y) = |I_t(x, y) - B_t(x, y)| \tag{15}$$

where $\Delta d_t(x, y)$, $I_t(x, y)$ and $B_t(x, y)$ denote the gray values of the difference image, the current frame image and the background image at position $(x, y)$ at time $t$. The gray value of the difference image is compared with a threshold $T$: the parts greater than $T$ are determined to be moving object parts, and the rest background parts.
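A sketch of formula (15) in Python with OpenCV; the threshold value, the minimum contour area, and the bounding-rectangle output are illustrative choices:

```python
import cv2

# Gray-level difference against the stored background, thresholded at T,
# then bounding rectangles of the connected foreground regions.
def detect_moving_objects(frame, background, t=30):
    i_t = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    b_t = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    delta = cv2.absdiff(i_t, b_t)                    # delta_d_t(x, y)
    _, mask = cv2.threshold(delta, t, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
```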
(4) Moving object tracking based on Kalman prediction
Firstly, a Kalman filtering algorithm predicts the position of each moving object in the next frame image, yielding the predicted center coordinates of the object's rectangle; the two-dimensional distance between this prediction and the actually detected center coordinates of an object rectangle is then calculated. If the distance is smaller than a threshold, the match is considered successful and the object is updated; if matching fails, the original object is considered to have disappeared and a new object is tracked. Timestamp information is added to every tracked moving object, which is then stored in the object database.
The Kalman filtering algorithm flow is as follows:

Prediction equation of the state vector:

$$\hat{X}(k, k-1) = A(k, k-1) \hat{X}(k-1) \tag{17}$$

Prediction of the error covariance:

$$P(k, k-1) = A(k, k-1) P(k-1) A^T(k, k-1) + Q(k-1) \tag{18}$$

Kalman filter gain:

$$K(k) = P(k, k-1) H^T(k) \left( H(k) P(k, k-1) H^T(k) + R(k) \right)^{-1} \tag{19}$$

Correction of the state vector:

$$\hat{X}(k) = \hat{X}(k, k-1) + K(k) \left( Z(k) - H(k) \hat{X}(k, k-1) \right) \tag{20}$$

Correction of the error covariance matrix:

$$P(k) = P(k, k-1) - K(k) H(k) P(k, k-1) \tag{21}$$

Let the center coordinates of the $i$th object predicted by Kalman filtering be $\hat{X}_i(\hat{x}_{i,n+1}, \hat{y}_{i,n+1})$, and let the detected center coordinates of the $j$th object be $X_j(x_{j,n+1}, y_{j,n+1})$; the measured two-dimensional distance is then

$$d_{ij} = \sqrt{(\hat{x}_{i,n+1} - x_{j,n+1})^2 + (\hat{y}_{i,n+1} - y_{j,n+1})^2} \tag{22}$$

The judgment result after binarization is

$$m_{ij} = \begin{cases} 1, & d_{ij} < T_d \\ 0, & d_{ij} \ge T_d \end{cases} \tag{23}$$

where $T_d$ is the distance threshold and $m_{ij} = 1$ indicates a successful match.
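The tracking step can be sketched with OpenCV's cv2.KalmanFilter, one constant-velocity filter per object, matched to detections by the center distance of formula (22); the state layout and the distance threshold are illustrative assumptions:

```python
import cv2
import numpy as np

# One filter per object: state = (cx, cy, vx, vy), measurement = (cx, cy).
def make_center_tracker(cx, cy):
    kf = cv2.KalmanFilter(4, 2)  # 4 state dims, 2 measured dims
    kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                    [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.statePost = np.array([[cx], [cy], [0], [0]], np.float32)
    return kf

def match_detection(kf, detections, dist_threshold=50.0):
    pred = kf.predict()                      # predicted center of the rectangle
    px, py = float(pred[0]), float(pred[1])
    for (x, y) in detections:
        d = ((px - x) ** 2 + (py - y) ** 2) ** 0.5   # distance, formula (22)
        if d < dist_threshold:               # binarized judgment, formula (23)
            kf.correct(np.array([[x], [y]], np.float32))
            return (x, y)                    # match succeeded: update the object
    return None                              # no match: object has disappeared
```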
(5) Key frame extraction, as shown in fig. 2:
The invention adopts an improved key frame extraction method: preliminary extraction is performed in the main thread of the system by a weighted comparison method, while an auxiliary thread is established that performs further discrimination by comparing the separated color channels individually, without affecting the real-time performance of the system.
1) Weighted preliminary extraction:
For each detected moving object, every frame is marked with the object's minimum bounding rectangle, and this rectangle is used as the matching region for comparing the similarity features of pixel points across image frames.
Similarity comparison is performed using the color features of the object: a color histogram V represents the color features of an image frame, and the similarity between objects is denoted S (short for similarity).
With the first frame as the reference frame and its color histogram as $V_1$, the color histogram $V_i$ of each subsequent frame is compared with $V_1$ for similarity, giving $S_i$. If $S_i \le \alpha$ (a threshold), the frame is grouped with $V_1$ into the first class of image frames $L_1$; if $\alpha < S_i \le \alpha_0$, it is classified as a pending frame; if $S_i > \alpha_0$, it is classified into the second class $L_2$. The first frame of the second class is then taken as the new reference frame and compared for similarity with each subsequent frame, and so on.
Considering that different pixel points inside the object rectangle differ in importance for similarity comparison, a 4×4 matrix is used to divide the rectangle into 16 blocks, as shown in fig. 3-(b); similarity comparison is performed on each block, the weight of each matrix cell then enters the similarity computation of the frame, and the weight distribution is shown in fig. 3-(a).
Let the similarity of the $j$th block of the $i$th frame be $S_{ij}$; the similarity between each block of the $i$th frame and the corresponding region of the 1st frame is calculated separately, and after the weighting factor $M_j$ of each block is added, the similarity $S_i$ between the $i$th frame image $V_i$ and $V_1$ is obtained. The calculation formulas are

$$S_{ij} = |V_{ij} - V_{1j}| \tag{24}$$

$$S_i = \sum_{j} M_j S_{ij}$$
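A sketch of the weighted preliminary extraction, assuming the object rectangles are grayscale images; the center-weighted matrix (the patent's actual distribution is given in fig. 3-(a)), the histogram size, and the thresholds are all illustrative:

```python
import cv2
import numpy as np

# Illustrative 4x4 block weights, heavier toward the center, normalized to 1.
WEIGHTS = np.array([[1, 1, 1, 1],
                    [1, 2, 2, 1],
                    [1, 2, 2, 1],
                    [1, 1, 1, 1]], np.float64)
WEIGHTS /= WEIGHTS.sum()

def block_hist(block):
    h = cv2.calcHist([block], [0], None, [32], [0, 256])
    return cv2.normalize(h, h).flatten()

# S_i: weighted sum of per-block histogram differences, formula (24).
def weighted_similarity(ref_rect, cur_rect):
    s_i = 0.0
    bh, bw = ref_rect.shape[0] // 4, ref_rect.shape[1] // 4
    for r in range(4):
        for c in range(4):
            ref_b = ref_rect[r*bh:(r+1)*bh, c*bw:(c+1)*bw]
            cur_b = cur_rect[r*bh:(r+1)*bh, c*bw:(c+1)*bw]
            s_ij = np.abs(block_hist(cur_b) - block_hist(ref_b)).sum()
            s_i += WEIGHTS[r, c] * s_ij    # weighting factor M_j
    return s_i

def classify(s_i, alpha=0.5, alpha_0=1.0):
    if s_i <= alpha:
        return "L1"          # same class as the reference frame
    if s_i <= alpha_0:
        return "pending"     # handed to the auxiliary thread
    return "L2"              # new class; its first frame becomes the reference
```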
the auxiliary thread is established for further discrimination on the premise of not influencing the real-time performance of the main process, and the auxiliary process only processes the undetermined frame sent after each time of main process similarity comparison and the reference frame at the moment.
2) Channel separation and accurate extraction:
The auxiliary thread proceeds as follows:
For each input reference frame image and pending frame image, the cvSplit function in OpenCV separates the RGB image into the three single-channel images bImg, gImg and rImg of the B, G and R channels; the similarities of the three color channels are then compared separately, and the pending frame is grouped into the same class as the reference frame only if the similarities of all three channels are within their thresholds, i.e.

$$S_{ir} \le \alpha_r \;\wedge\; S_{ig} \le \alpha_g \;\wedge\; S_{ib} \le \alpha_b$$

where $S_{ir}$, $S_{ig}$ and $S_{ib}$ denote the similarities of the R, G and B channels respectively, and $\alpha_r$, $\alpha_g$ and $\alpha_b$ denote the similarity thresholds set for the three channels; otherwise the pending frame is classified into the next class.
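A sketch of the auxiliary-thread check in Python, where cv2.split plays the role of the cvSplit function from the old OpenCV C API; the histogram comparison and the per-channel thresholds are illustrative assumptions:

```python
import cv2
import numpy as np

# Same class only if all three channel similarities are within threshold.
def same_class_by_channels(ref_img, pending_img,
                           alpha_r=0.4, alpha_g=0.4, alpha_b=0.4):
    def hist(channel):
        h = cv2.calcHist([channel], [0], None, [32], [0, 256])
        return cv2.normalize(h, h).flatten()

    b1, g1, r1 = cv2.split(ref_img)      # bImg, gImg, rImg of the reference
    b2, g2, r2 = cv2.split(pending_img)  # bImg, gImg, rImg of the pending frame
    s_ib = np.abs(hist(b2) - hist(b1)).sum()
    s_ig = np.abs(hist(g2) - hist(g1)).sum()
    s_ir = np.abs(hist(r2) - hist(r1)).sum()
    return s_ir <= alpha_r and s_ig <= alpha_g and s_ib <= alpha_b
```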
For the image frames $V_1 \sim V_i$ grouped into one class, the average $\bar{V}$ of all frames is taken, the frame with the minimum difference from the average is kept as the key frame, and the remaining frames are deleted to reduce image redundancy.
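A minimal sketch of this selection step, assuming the class members are available as NumPy arrays:

```python
import numpy as np

# Average all frames of a class and keep the frame closest to the mean.
def pick_keyframe(frames):
    stack = np.stack([f.astype(np.float64) for f in frames])
    mean = stack.mean(axis=0)
    diffs = [np.abs(f - mean).sum() for f in stack]
    return frames[int(np.argmin(diffs))]
```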
A surveillance video 58 minutes long was selected for testing, with the test results shown in Table 1:
TABLE 1
(6) Moving object cut-out and summary composition:
Foreground detection and marking of the moving objects have been carried out above, but the detection alone is not sufficient to separate an object from the background for cut-out. To reduce the amount of computation, the rectangle of the detected object is used as the working region, the moving object is cut out within this region using the GraphCut algorithm, and the cut-out object is stored in the moving object database with timestamp information added.
The background images extracted above for the different time periods are taken out and copied, and the cut-out moving objects are placed in time order at their corresponding positions in the background images, finally forming the video summary. The specific steps, sketched in code after the list, are as follows:
1) Extract the background image frame obtained in the first time period;
2) Place all the moving objects cut out during that time period at their corresponding positions in the background image in time order, and number them;
3) Apply semi-transparency processing to object parts that overlap one another within one frame image, so as not to harm the visual effect;
4) Stop placing when the number of objects placed in one frame image exceeds a set threshold;
5) Copy the background image frame and continue placing objects until all are placed;
6) Take out the background image frame of the next time period and the objects cut out during that period, and place them in the same way, until placement is finished;
7) Combine all the synthesized image frames to form the final video summary.
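Under stated assumptions, steps 1)-7) can be sketched as follows; cv2.grabCut (a graph-cut based segmentation) stands in for the GraphCut algorithm named above, and the per-frame object cap and 50% opacity are illustrative values:

```python
import cv2
import numpy as np

# Cut the object out inside its marked rectangle; returns a 0/1 foreground mask.
def cut_out(frame, rect):
    mask = np.zeros(frame.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
    return fg.astype(np.uint8)

def compose_period(background, objects, max_per_frame=5):
    """objects: time-ordered list of (frame, rect) pairs from one period."""
    summary_frames, canvas, placed = [], background.copy(), 0
    for frame, rect in objects:
        fg = cut_out(frame, rect)[..., None]
        # Blend the object at 50% opacity onto the canvas, so that objects
        # overlapping previously placed content remain visible.
        canvas = np.where(fg == 1,
                          (0.5 * frame + 0.5 * canvas).astype(np.uint8),
                          canvas)
        placed += 1
        if placed >= max_per_frame:   # cap reached: start a fresh background copy
            summary_frames.append(canvas)
            canvas, placed = background.copy(), 0
    summary_frames.append(canvas)
    return summary_frames
```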
While the invention has been described in connection with specific embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (5)

1. A surveillance video summary generation method based on the combination of objects and key frames, characterized by comprising the following steps:
1) Comparing two adjacent frames of images to obtain a change value and comparing it with a threshold, keeping the images whose change value is larger than the threshold as motion segments, the motion segments forming a new video;
2) Uniformly dividing the new video into several segments by video time, generating a background model for the image frames in each segment using a Gaussian mixture background model, and storing the background model together with timestamp information in a background database;
3) At any time $t$, comparing the current frame image $I_t$ with the background model $B_t$ to obtain a difference value at each point $(x, y)$, and comparing the difference value with a threshold $T$, the parts where the difference exceeds $T$ being moving objects and the rest background;
4) Tracking the moving objects based on a Kalman filtering algorithm, and storing the tracking information of all moving objects in an object database;
5) Marking each detected moving object, comparing similarity using the object's color features, grouping image frames of high similarity into classes, computing the average of the image frames of each class, and selecting the frame with the smallest difference from the average as the key frame;
6) Cutting out the moving objects with the GraphCut algorithm inside the regions where they are marked, taking background images of different time periods from the background database and copying them, and placing the cut-out moving objects at their corresponding positions in the background images in time order to form the video summary.
2. The surveillance video summary generation method based on the combination of objects and key frames according to claim 1, characterized in that in step 5), the first frame is taken as the reference frame with color histogram $V_1$, the color histogram of a subsequent frame is $V_i$, and the similarity between $V_i$ and $V_1$ is $S_i$; each frame is divided into $M \times N$ blocks, and the calculation formulas are

$$S_{ij} = |V_{ij} - V_{1j}|, \qquad S_i = \sum_{j} M_j S_{ij}$$

where $S_{ij}$ is the similarity value of the $j$th block of the $i$th frame and $M_j$ is the weighting factor of the $j$th block;

the image frames are classified according to these formulas: if $S_i \le \alpha$ (a threshold), the image frame is grouped with $V_1$ into the first class of image frames $L_1$; if $\alpha < S_i \le \alpha_0$, the image frame is classified as a pending frame; if $S_i > \alpha_0$, the image frame is classified into the second class $L_2$; the first frame of the second class $L_2$ is then taken as the new reference frame and compared for similarity with each subsequent frame, until the comparison of all image frames is completed.
3. The surveillance video summary generation method based on the combination of objects and key frames according to claim 2, characterized in that an auxiliary thread is established for the pending frames, using channel-separated color similarity comparison, specifically as follows:
the cvSplit function is used to separate the RGB images of the input reference frame and the pending frame into the three single-channel images bImg, gImg and rImg of the B, G and R channels; the similarities of the three color channels of the reference frame image and the pending frame image are compared separately; if the similarities of all three channels are within their thresholds, the pending frame belongs to the same class as the reference frame, otherwise it belongs to the next class.
4. The method according to claim 1, wherein each detected moving object is marked with its minimum bounding rectangle, and this rectangle is the similarity comparison region and the moving object cut-out region.
5. The surveillance video summary generation method based on the combination of objects and key frames according to claim 1, wherein step 6) specifically comprises the following steps:
6.1) Extract the background image frame obtained in the first time period;
6.2) Place all the moving objects cut out during that time period at their corresponding positions in the background image in time order, and number them;
6.3) Apply semi-transparency processing to object parts that overlap within one frame image;
6.4) Stop placing when the number of objects placed in one frame image exceeds a set threshold;
6.5) Copy the background image frame and continue placing objects until all are placed;
6.6) Extract the background image frame of the next time period and the objects cut out during that period, and repeat steps 6.2)-6.5);
6.7) Combine all the synthesized image frames to form the final video summary.
CN201410321281.0A 2014-07-07 2014-07-07 Surveillance video summary generation method based on the combination of objects and key frames Active CN104063883B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410321281.0A CN104063883B (en) 2014-07-07 2014-07-07 Surveillance video summary generation method based on the combination of objects and key frames

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410321281.0A CN104063883B (en) 2014-07-07 2014-07-07 Surveillance video summary generation method based on the combination of objects and key frames

Publications (2)

Publication Number Publication Date
CN104063883A CN104063883A (en) 2014-09-24
CN104063883B true CN104063883B (en) 2018-03-16

Family

ID=51551575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410321281.0A Active CN104063883B (en) 2014-07-07 2014-07-07 Surveillance video summary generation method based on the combination of objects and key frames

Country Status (1)

Country Link
CN (1) CN104063883B (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104284158B (en) * 2014-10-23 2018-09-14 南京信必达智能技术有限公司 Method applied to event-oriented intelligent monitoring camera
US9544615B2 (en) * 2014-11-14 2017-01-10 Sony Corporation Method and system for processing video content
CN104394379B (en) * 2014-12-05 2017-10-13 北京厚吉科技有限公司 Monitoring video rapid preview system and rapid preview method
CN104469547B * 2014-12-10 2017-06-06 西安理工大学 Video summary generation method based on tree-shaped moving object trajectories
CN104867161B * 2015-05-14 2018-03-23 国家电网公司 Video processing method and device
CN104980707B (en) * 2015-06-25 2019-03-08 浙江立元通信技术股份有限公司 A kind of intelligent video patrol system
CN104954893B * 2015-06-25 2017-11-28 西安理工大学 False-detection target chain deletion method for video summary generation
TWI564822B (en) * 2015-11-20 2017-01-01 晶睿通訊股份有限公司 Preselectable Video File Playback System, Method Using The Same, and Computer Program Product Using The Same
CN105451023B (en) * 2015-11-20 2018-10-02 南京杰迈视讯科技有限公司 A kind of Video Storage System and method of motion perception
CN105469425A (en) * 2015-11-24 2016-04-06 上海君是信息科技有限公司 Video condensation method
CN107018352A (en) * 2016-01-27 2017-08-04 北京中科晶上科技有限公司 A kind of method of video image processing, apparatus and system
CN107360386A * 2016-05-09 2017-11-17 杭州登虹科技有限公司 Method for reducing multimedia file size
CN106534653A (en) * 2016-12-30 2017-03-22 哈尔滨理工大学 Network imaging device for detecting indoor moving target and detection method thereof
CN107223344A * 2017-01-24 2017-09-29 深圳大学 Generation method and device for static video summaries
CN107133580B (en) * 2017-04-24 2020-04-10 杭州空灵智能科技有限公司 Synthetic method of 3D printing monitoring video
CN107920213A (en) * 2017-11-20 2018-04-17 深圳市堇茹互动娱乐有限公司 Image synthesizing method, terminal and computer-readable recording medium
CN108446373A (en) * 2018-03-16 2018-08-24 五八有限公司 Animation playing method, device, equipment and storage medium
CN108848422B (en) * 2018-04-19 2020-06-02 清华大学 Video abstract generation method based on target detection
CN109145771B (en) * 2018-08-01 2020-11-20 武汉普利商用机器有限公司 Face snapshot method and device
CN110798656A (en) * 2018-08-03 2020-02-14 中国移动通信有限公司研究院 Method, device, medium and equipment for processing monitoring video file
CN111385672B (en) * 2018-12-31 2022-01-04 杭州登虹科技有限公司 Video concentration method based on event
CN111510724A (en) * 2019-01-31 2020-08-07 北京小犀智能科技中心(有限合伙) Equivalent video compression storage method and system based on image feature extraction
CN112333537B (en) * 2020-07-27 2023-12-05 深圳Tcl新技术有限公司 Video integration method, device and computer readable storage medium
CN113873200B (en) * 2021-09-26 2024-02-02 珠海研果科技有限公司 Image identification method and system
CN114245033A (en) * 2021-11-03 2022-03-25 浙江大华技术股份有限公司 Video synthesis method and device
CN114422848A (en) * 2022-01-19 2022-04-29 腾讯科技(深圳)有限公司 Video segmentation method and device, electronic equipment and storage medium
CN114697761B (en) 2022-04-07 2024-02-13 脸萌有限公司 Processing method, processing device, terminal equipment and medium


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101325690A (en) * 2007-06-12 2008-12-17 上海正电科技发展有限公司 Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow
CN101996307A (en) * 2009-08-10 2011-03-30 上海理视微电子有限公司 Intelligent video human body identification method
CN102222214A (en) * 2011-05-09 2011-10-19 苏州易斯康信息科技有限公司 Fast object recognition algorithm
CN103227963A (en) * 2013-03-20 2013-07-31 西交利物浦大学 Static surveillance video abstraction method based on video moving target detection and tracing

Also Published As

Publication number Publication date
CN104063883A (en) 2014-09-24

Similar Documents

Publication Publication Date Title
CN104063883B (en) Surveillance video summary generation method based on the combination of objects and key frames
CN106960195B (en) Crowd counting method and device based on deep learning
CN111368788B (en) Training method and device for image recognition model and electronic equipment
US20230289979A1 (en) A method for video moving object detection based on relative statistical characteristics of image pixels
CN107944427B (en) Dynamic face recognition method and computer readable storage medium
Avgerinakis et al. Recognition of activities of daily living for smart home environments
CN107358141B (en) Data identification method and device
CN111967313B (en) Unmanned aerial vehicle image annotation method assisted by deep learning target detection algorithm
CN107424175B (en) Target tracking method combined with space-time context information
CN111402298A (en) Grain depot video data compression method based on target detection and trajectory analysis
CN110930434A (en) Target object tracking method and device, storage medium and computer equipment
CN113239869A (en) Two-stage behavior identification method and system based on key frame sequence and behavior information
CN106570885A (en) Background modeling method based on brightness and texture fusion threshold value
CN112990122A (en) Complex behavior identification method based on video basic unit analysis
CN112102250A (en) Method for establishing and detecting pathological image detection model with training data as missing label
CN112489055A (en) Satellite video dynamic vehicle target extraction method fusing brightness-time sequence characteristics
US20240161461A1 (en) Object detection method, object detection apparatus, and object detection system
CN114373162A (en) Dangerous area personnel intrusion detection method and system for transformer substation video monitoring
CN116824641B (en) Gesture classification method, device, equipment and computer storage medium
KR101124560B1 (en) Automatic object processing method in movie and authoring apparatus for object service
CN112258525A (en) Image abundance statistics and population recognition algorithm based on bird high frame frequency sequence
CN112488165A (en) Infrared pedestrian identification method and system based on deep learning model
CN110490170A (en) A kind of face candidate frame extracting method
CN115512263A (en) Dynamic visual monitoring method and device for falling object
CN111626102B (en) Bimodal iterative denoising anomaly detection method and terminal based on video weak marker

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant