CN102810208B - Criminal investigation video preprocessing method based on traveling direction detection - Google Patents

Criminal investigation video preprocessing method based on traveling direction detection

Info

Publication number
CN102810208B
CN102810208B CN201210257970.0A CN201210257970A
Authority
CN
China
Prior art keywords
video
frame
image
moving
mtd
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210257970.0A
Other languages
Chinese (zh)
Other versions
CN102810208A (en
Inventor
严国建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WUHAN DAQIAN INFORMATION TECHNOLOGY Co Ltd
Original Assignee
WUHAN DAQIAN INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WUHAN DAQIAN INFORMATION TECHNOLOGY Co Ltd filed Critical WUHAN DAQIAN INFORMATION TECHNOLOGY Co Ltd
Priority to CN201210257970.0A priority Critical patent/CN102810208B/en
Publication of CN102810208A publication Critical patent/CN102810208A/en
Application granted granted Critical
Publication of CN102810208B publication Critical patent/CN102810208B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a criminal investigation video preprocessing method based on traveling direction detection. First, an attention line is drawn in a region of interest of the video picture; then, combined with the set attention line, the traveling direction of moving targets in the video content is detected and the key frame numbers are extracted; finally, the extracted video frames are merged into a new video. By combining traveling-direction detection with detection of targets crossing the attention line, this preprocessing method obtains the video frame numbers that contain targets within the attention range, filters out the uninteresting video frames, and reassembles the extracted key frames into a new video for browsing. This reduces the number of video frames that must be watched without omitting the information of interest, thereby improving the browsing efficiency of criminal investigation video.

Description

Criminal investigation video preprocessing method based on traveling direction detection
Technical Field
The invention relates to a video processing method in multimedia, in particular to a criminal investigation video preprocessing method based on traveling direction detection, which is used for criminal investigation video processing.
Background
To improve the comprehensive prevention and control capability of public security, the large number of video monitoring systems already in place are widely applied to criminal investigation work, and video investigation technology — finding and tracking suspected targets in surveillance recordings — has become the fourth major technical support for investigation and case solving, after criminal technology, action technology and network investigation technology. The essence of video investigation is to examine surveillance recordings for clues and find suspected targets.
However, a large number of video monitoring systems produce a large volume of surveillance recordings. In current video investigation work, investigators watch the recordings in a player and take notes as they watch; even at night or in remote areas, where few moving targets appear, the recordings must be browsed in full so that not a single second is missed. Browsing recordings for long periods easily causes visual fatigue, which degrades the quality of the browsing work and can even harm the investigators' eyesight. This working mode — manually browsing complete recordings in search of suspected targets — is time-consuming, labor-intensive and inefficient.
When investigators use surveillance video to discover and analyze criminal behavior, they usually pay most attention to the pictures that contain moving targets. In particular, they may care only about moving targets in a local area of the picture (such as a parking lot), or about moving targets crossing a certain warning line (such as a line drawn at the entrance of a residential compound); narrowing the area in this way often helps in discovering and analyzing criminal behavior. In such cases investigators care mainly about whether there are moving targets within the region of interest (a local area or a warning line), yet the recording usually contains a large proportion of "static" pictures (including pictures whose moving targets lie outside the region of interest). Common video players offer fast playback, but they do not distinguish the static parts of a video from the active parts. If a video preprocessing method could, before investigators watch a recording, automatically extract the frame numbers that contain moving targets (or moving targets only in the region of interest) — hereafter "motion frame numbers" — filter out the frame numbers without such targets — hereafter "non-motion frame numbers" — and then organize the extracted motion frames into video segments, investigators could watch only those segments instead of the whole recording, greatly improving the efficiency of browsing large amounts of video. No such preprocessing method has been available so far.
Disclosure of Invention
The invention provides a criminal investigation video preprocessing method based on traveling direction detection which overcomes the defects of the prior art: it extracts the segments in which moving targets appear in the user's region of interest and filters out the "static" segments, reducing the amount of video that must be watched without omitting important pictures, thereby improving the efficiency of video browsing.
The technical scheme adopted for realizing the aim of the invention is as follows: a criminal investigation video preprocessing method based on traveling direction detection comprises the following steps:
obtaining a video picture, demarcating a straight line in the region of interest on the video picture with a mouse, recording the mouse trajectory coordinates, drawing the mouse trajectory into another completely black image of the same size, setting the trajectory line to white and widening it by several pixels to obtain a rectangular closed region, and setting this closed region to white to obtain a mask image;
setting an attention direction according to a defined straight line, detecting moving targets, judging the advancing direction of each detected moving target, and recording a key frame number;
and merging the video frames corresponding to the extracted key frame numbers into a new video.
In the above technical solution, setting the attention direction means setting a reference direction in the video to be processed, the reference direction being specified as a direction on a clock dial.
In the above technical solution, detecting moving targets by the background difference method specifically comprises:
reading in a video to be processed, and creating a background image;
taking the frame difference between the current frame and the background image;
and comparing the area of each difference region with an area threshold; if the area is greater than the threshold, the region is determined to be a moving target.
Further, the frame difference is compared with the area threshold after binarization, dilation and median filtering operations are performed on it.
Further, the frame difference is binarized by the following formula:
$$D(x,y)=\begin{cases}255, & \text{if } D(x,y)\ge T\\ 0, & \text{if } D(x,y)< T\end{cases}$$
where D(x, y) is the gray value of the pixel at position (x, y) in the frame-difference image, and T is a given threshold.
In the above technical solution, the specific step of determining the traveling direction of each detected moving object is:
using timestamps, recording in a motion history image the gradient information of each moving component's track over a period of time;
calculating the direction of each component in a weighting mode according to the recorded gradient information;
and retaining the components whose calculated direction is consistent with the attention direction; this direction is the direction of the moving target.
Further, the global gradient direction of the moving part is calculated by the following formula:
$$\varphi=\varphi_{base}+\frac{\sum_{x,y} w(\text{stamp}(x,y))\,\Delta\varphi(x,y)}{\sum_{x,y} w(\text{stamp}(x,y))}$$

where $\varphi$ is the calculated global gradient direction of the component, i.e. the direction of the moving component, $\varphi_{base}$ is the base reference angle, $w(\text{stamp})$ is a weight set according to the timestamp, and $\Delta\varphi(x,y)$ is the minimum angular difference between the direction obtained at each pixel and the reference angle.
In the above technical solution, recording the key frame number comprises: judging whether the center point of each retained target is within the set attention-line area; if so, the current frame is considered a key frame and its frame number is recorded; otherwise it is judged a non-key frame.
In the above technical solution, the merging into a new video includes the following steps:
reading the first key frame according to the selected key frame numbers and taking it as the start frame of the output video;
and reading the key frames after the first key frame according to the sequence of the key frame numbers, and adding the key frames into the output video in sequence until the last key frame is read and is taken as the end frame of the output video.
Compared with the prior art of directly watching the original video, the method has the following advantages: the criminal investigation video preprocessing algorithm based on motion detection obtains the video frame numbers that contain moving targets — by global detection, detection within an arbitrarily designated area, or detection of targets crossing a warning line — filters out the "static" frame numbers, and then organizes the motion frames into video segments, so that only a fraction of the original video needs to be browsed.
Drawings
FIG. 1 is a flow chart of a criminal investigation video preprocessing method based on travel direction detection according to the present invention;
FIG. 2 is a flow chart of a method for setting a focus line of FIG. 1;
FIG. 3 is a flowchart of a key frame number extraction method of FIG. 1;
FIG. 4 is a flowchart of a key frame merging method of FIG. 1;
FIG. 5-1 is an example of a motion history map at a time;
fig. 5-2 is the result of the corresponding direction determination.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments.
As shown in fig. 1, the criminal investigation video preprocessing method based on the traveling direction detection comprises the following steps:
step S101: importing an original video, designating an interested area in a video picture of the original video, and setting a focus line in the interested area, as shown in fig. 2, the specific steps are as follows:
step S201: a frame of video picture is acquired.
Step S202: a straight line is drawn in the region of interest on the video picture with the mouse, and the mouse trajectory coordinates are recorded.
Step S203: a grayscale image of the same size as the video picture in step S202 is created and all its pixel values are set to 0, i.e. the image is made completely black; the mouse trajectory line recorded in step S202 is then drawn into this black image.
Step S204: the mouse-trajectory line in the image obtained in step S203 is set to white and widened to 5 pixels: the ideal straight line is only 1 pixel wide, so it is expanded to 5 pixels to form a rectangular region, which makes it convenient to judge whether a moving target crosses the line within that region. The rectangular closed region is set to white (pixel value 255), yielding a binary image: the widened attention line corresponds to the white part (pixel values all 255) and everything else is black (pixel values all 0). The resulting image is the mask image.
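The mask construction in steps S201–S204 amounts to rasterizing a roughly 5-pixel-wide band around the drawn segment. A minimal NumPy sketch follows; the `line_mask` helper and its endpoint convention are assumptions for illustration, not from the patent — a real implementation would more likely use OpenCV's `cv2.line` with `thickness=5` on a zeroed image.

```python
import numpy as np

def line_mask(shape, p0, p1, half_width=2):
    """Widen the segment p0->p1 into a band: pixels within `half_width`
    of the segment become 255 (white), all others stay 0 (black)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    p0 = np.asarray(p0, dtype=float)  # (x, y)
    p1 = np.asarray(p1, dtype=float)
    d = p1 - p0
    # Project every pixel onto the segment and clamp to its endpoints.
    t = ((xs - p0[0]) * d[0] + (ys - p0[1]) * d[1]) / max(d @ d, 1e-9)
    t = np.clip(t, 0.0, 1.0)
    dist = np.hypot(xs - (p0[0] + t * d[0]), ys - (p0[1] + t * d[1]))
    return np.where(dist <= half_width, 255, 0).astype(np.uint8)

# A horizontal attention line drawn across a 60x40 picture.
mask = line_mask((40, 60), p0=(5, 20), p1=(55, 20))
```

With `half_width=2` the band is 5 pixels wide, matching the widening described in step S204.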
Step S102: detecting the moving object moving direction of the video content according to the set attention line, and extracting the key frame number, as shown in fig. 3, the specific steps are as follows:
step S301: firstly, setting the attention direction for the video to be processed, wherein the direction is specified by adopting a 12-hour clock method in the embodiment, for example, for a horizontal road, if only an object moving to the right across the attention line is concerned, the direction can be specified to be along a dial 3 o' clock direction;
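The clock-dial direction of step S301 must eventually be compared with image-plane motion angles, so a mapping from clock hours to degrees is needed. One plausible convention is sketched below (assumed here, not specified in the patent): 3 o'clock maps to 0°, with angles increasing counter-clockwise as in most gradient-direction code.

```python
def clock_to_angle(hour):
    """Map a 12-hour clock direction to an angle in degrees, with 3 o'clock
    at 0 deg and angles increasing counter-clockwise (12 o'clock = 90 deg).
    This convention is an assumption for illustration."""
    return (90.0 - hour * 30.0) % 360.0
```

For a horizontal road, targets moving rightward across the line would then correspond to directions near `clock_to_angle(3)`.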
step S302: initializing a motion history map, which is initialized to a full black image (the pixel value is 0) in the embodiment, wherein the motion history map is used for keeping a history track of a moving target in a period of time to calculate the moving direction of the target;
step S303: reading a video to be processed;
step S304: creating a background image by using the pixel average value of the first 100 frames of images of the video, and converting the background image into a gray scale image;
step S305: reading the first frame image of the video again, converting the first frame image into a gray scale image, and starting a processing process;
step S306: and performing frame difference between the frame image and the background image, taking an absolute value of a frame difference result and storing the absolute value into a gray-scale image. The threshold used for making the background difference is adjustable, the background difference threshold can be adjusted a little lower for the video with inconspicuous contrast between the background and the moving foreground, such as night, and the threshold can be adjusted a little higher for the video with conspicuous contrast between the background and the moving foreground. The adjustment of the background difference threshold can be realized on a user operation interface, and the user can set the threshold according to a specific criminal investigation video to achieve the best detection effect.
Step S307: binarizing each pixel in the frame difference result image obtained in step S306 according to the following formula:
$$D(x,y)=\begin{cases}255, & \text{if } D(x,y)\ge T\\ 0, & \text{if } D(x,y)< T\end{cases}$$
where D(x, y) is the gray value of the pixel at position (x, y) in the frame-difference image and T is a given threshold: the gray value D of each pixel is compared with T, and the pixel is assigned 255 if D ≥ T and 0 otherwise;
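Steps S306–S307 — absolute background difference followed by thresholding — can be sketched directly from the formula above (the function name is illustrative, not from the patent):

```python
import numpy as np

def binarize_frame_diff(frame, background, T=30):
    """Absolute frame difference against the background, then the patent's
    binarization: 255 where the difference is >= T, 0 elsewhere."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return np.where(diff >= T, 255, 0).astype(np.uint8)

# |100-60| = 40 >= 30 -> foreground; |50-45| = 5 < 30 -> background.
result = binarize_frame_diff(np.array([[100, 50]], dtype=np.uint8),
                             np.array([[60, 45]], dtype=np.uint8))
```

The cast to a signed type before subtraction avoids uint8 wrap-around, which is the usual pitfall in this step.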
step S308: performing a dilation operation in the related art on the image obtained in step S307;
step S309: performing median filtering operation in the prior art on the image obtained in step S308;
step S310: and (4) judging whether the area of each moving target in the image obtained in the step (S309) is larger than S, and removing the target with the area smaller than S, wherein S is a given area threshold, and the S can be set to remove the influence of small noise such as leaf disturbance and the like. In addition, the value of S in different criminal investigation videos may not be fixed, S may be set higher when the target is closer to the lens, and S may be set lower when the target is farther from the lens. The adjustment of the area threshold value S can be realized on a user operation interface, when the user adjusts the size of the S, the detection result is fed back to the user in a dynamic visualization mode (all activity targets in the video are framed by an external rectangular frame, and the value of the S set by the user is also set intuitively by randomly adjusting the rectangular frames with different sizes), so that the adjustment of the user is facilitated. The best detection effect is achieved by the way that the user sets himself according to the specific criminal investigation video. The area threshold is set according to the required activity target to be determined, if the required activity target to be determined is a pedestrian, the area threshold is the minimum value in the area size range of the pedestrian, and the area size range of the pedestrian can be obtained through statistics.
Step S311: the moving-target pixels retained in the silhouette image (denoted silh) obtained in step S310 are assigned the current timestamp, and the motion history image (mhi) is updated:
$$mhi(x,y)=\begin{cases}\text{timestamp}, & \text{if } silh(x,y)\neq 0\\ 0, & \text{if } silh(x,y)=0 \text{ and } mhi(x,y)<\text{timestamp}-\text{duration}\\ mhi(x,y), & \text{otherwise}\end{cases}$$
the time stamp is designed to ensure that the latest detected moving object has a gray value larger than the previous track, so that in the motion history map, the history track of each moving object within a period of time (i.e. duration in the above formula, set to 0.5 second) is retained, and more importantly, the added time stamp enables the tracks in different periods to have different gray values, which is convenient for further calculating the moving direction of the object.
Fig. 5-1 shows the motion history image at a certain moment, and fig. 5-2 shows the corresponding direction judgment. The contour of the current frame's moving target has the brightest gray level; the older a historical contour is, the darker its gray level, and track pixels older than the set time interval revert to 0 (black). This forms the gradient information used for the direction calculation.
Step S312: the motion history image of step S311 may record several moving targets carrying gradient information (hereinafter referred to as moving components); they must be segmented one by one and their directions calculated separately;
step S313: for each motion component segmented in step S312, first, the gradient direction of each pixel point is calculated:whereinThe moving direction of the moving part at the point (x, y) is represented by 0-360 degrees, Fy(x, y) and Fx(x, y) is a gradient image with pixel points (x, y) calculated by the gradient mask.
Then, a weighting method is adopted to calculate the global gradient direction of the moving part, and the rule is that the more recent historical pixel point is, the larger the corresponding weight is:
$$\varphi=\varphi_{base}+\frac{\sum_{x,y} w(\text{stamp}(x,y))\,\Delta\varphi(x,y)}{\sum_{x,y} w(\text{stamp}(x,y))}$$

where $\varphi$ is the calculated global gradient direction of the component, $\varphi_{base}$ is the resulting base reference angle (the peak point in the direction histogram), $w(\text{stamp})$ is the weight set according to the timestamp, and $\Delta\varphi(x,y)$ is the minimum angular difference between the direction obtained at each pixel and the reference angle. $\varphi$ is the direction of the moving component, with an angle value of 0–360°; it is compared with the set attention direction after being converted to the clock-dial representation:
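The weighted global-direction rule can be sketched in a few lines. The linear recency weight below is an assumption — the patent only states that newer pixels weigh more — and angles are combined via the minimal signed difference from the base angle so the average does not break at the 0°/360° wrap.

```python
import numpy as np

def global_direction(angles_deg, stamps, base_deg, timestamp, duration=0.5):
    """Global gradient direction: base reference angle plus the
    recency-weighted mean of each pixel's minimal angular difference
    from that base angle (cf. OpenCV's calcGlobalOrientation)."""
    # Newer timestamps -> larger weight, linearly over the duration window.
    w = np.clip((stamps - (timestamp - duration)) / duration, 0.0, 1.0)
    # Minimal signed difference in (-180, 180] from the base angle.
    diff = (np.asarray(angles_deg) - base_deg + 180.0) % 360.0 - 180.0
    return (base_deg + (w * diff).sum() / max(w.sum(), 1e-9)) % 360.0

# Two equally recent pixels at 90 and 110 degrees around a base of 90.
phi = global_direction(np.array([90.0, 110.0]), np.array([1.0, 1.0]),
                       base_deg=90.0, timestamp=1.0)
```

The wrap-safe difference matters: pixels at 350° and 10° around a base of 0° average to 0°, not 180°.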
the pointers in fig. 5-1 and 5-2 show the direction judgment results of the respective components of the motion history map at a certain moment, and the indication results can also be synchronously displayed in the video.
Step S314: if the calculated direction of a component is close to the set attention direction — for example, within an angular deviation of 30 degrees — its motion direction is considered consistent with the attention direction, and such components are retained. Since attention lines and attention directions are set for videos of different scenes, the "direction consistency" rule can be customized on the user operation interface for best effect: the user defines, for the specific video, how large a deflection angle still counts as consistent.
Step S315: whether the center point of each component retained in step S314 lies in the set mask area is judged (this can be implemented as a logical AND with the mask image); if so, the frame is judged a key frame and its frame number is recorded;
step S316: judging whether the current frame is the last frame of the video, if so, ending the selection of the key frame number module, and if not, turning to the step S317;
step S317: reading the next frame image of the video, converting the next frame image into a gray image, and then updating the background image, wherein the background updating idea is that the original background and a newly read frame image respectively account for a certain proportion to form a new background:
background(x,y)=(1-α)·background(x,y)+α·newframe(x,y)
where background is the background image, newframe is the newly read frame, and α controls the update rate (set to 0.003 in our experiments). In practice a moving target may linger in the field of view, which would corrupt the accuracy of the background estimate. This is solved with an additional mask image: moving targets are masked out during the update, so the newframe passed into the background update contains only regions without moving targets — an approach that works well in experiments.
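The masked running-average update of step S317 is one blend plus one per-pixel select (the helper name is hypothetical; the motion mask is the silhouette retained in step S310):

```python
import numpy as np

def update_background(background, newframe, motion_mask, alpha=0.003):
    """background = (1-alpha)*background + alpha*newframe, except at pixels
    flagged as moving, which keep the old background value so a loitering
    target is not absorbed into the background."""
    blended = (1.0 - alpha) * background + alpha * newframe
    return np.where(motion_mask != 0, background, blended)

# Pixel 0 is static and drifts toward the new frame; pixel 1 is masked out.
bg = update_background(np.array([[100.0, 100.0]]),
                       np.array([[200.0, 200.0]]),
                       np.array([[0, 1]]))
```

With α = 0.003 the background adapts over a few hundred frames, matching the slow update rate stated in the text.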
And after the background updating is completed, the step S306 is carried out to continue the selection of the key frame.
Step S103: and merging the video frames corresponding to the extracted key frame numbers into a new video. As shown in fig. 4, the method comprises the following specific steps:
step S401: reading the 1 st key frame according to the selected key frame number and taking the key frame as a starting frame of the output video;
step S402: judging whether the frame is the last key frame, if so, turning to the step S404 to take the frame as the end frame of the output video, otherwise, turning to the step S403 to continue reading the next motion frame;
step S403: reading the next motion frame and continuing the judgment process of the step S402;
step S404: this frame is taken as the end frame of the output video.
While the present invention has been described with reference to the particular illustrative embodiments and examples, it will be understood by those skilled in the art that the foregoing and various other changes, omissions and deviations in the form and detail thereof may be made without departing from the scope of this invention.

Claims (5)

1. A criminal investigation video preprocessing method based on traveling direction detection is characterized by comprising the following steps:
obtaining a video picture, demarcating a straight line in the region of interest on the video picture with a mouse, recording the mouse trajectory coordinates, drawing the mouse trajectory into another completely black image of the same size, setting the trajectory line to white and widening it by several pixels to obtain a rectangular closed region, and setting this closed region to white to obtain a mask image;
setting an attention direction according to the defined straight line, detecting moving targets, judging the traveling direction of each detected moving target, and recording key frame numbers; setting the attention direction means setting a reference direction in the video to be processed, the reference direction being specified as a direction on a clock dial; the moving targets are detected by a background difference method, with the following specific steps:
step S102: detecting the traveling direction of moving targets in the video content according to the set attention line and extracting key frame numbers, with the following specific steps:
step S301: firstly, setting a concerned direction for a video to be processed, and designating the direction by adopting a 12-hour clock method;
step S302: initializing a motion history map, wherein the motion history map is initialized to be a completely black image, and the motion history map is used for keeping the history track of a moving target in a period of time to calculate the moving direction of the target;
step S303: reading a video to be processed;
step S304: creating a background image by using the pixel average value of the first 100 frames of images of the video, and converting the background image into a gray scale image;
step S305: reading the first frame image of the video again, converting the first frame image into a gray scale image, and starting a processing process;
step S306: making frame difference between the frame image and the background image, taking the absolute value of the frame difference result and storing the absolute value in a gray scale image;
step S307: binarizing each pixel in the frame difference result image obtained in step S306 according to the following formula:
$$D(x,y)=\begin{cases}255, & \text{if } D(x,y)\ge T\\ 0, & \text{if } D(x,y)< T\end{cases}$$
wherein D (x, y) represents the gray value of the pixel at the position (x, y) in the frame difference result image, T is a given threshold, i.e. the gray value D of each pixel in the frame difference result image is compared with the threshold T, if D is greater than or equal to T, the pixel value is assigned 255, otherwise, the pixel value is assigned 0;
step S308: performing a dilation operation in the related art on the image obtained in step S307;
step S309: performing median filtering operation in the prior art on the image obtained in step S308;
step S310: judging whether the area of each moving target in the image obtained in the step S309 is larger than S, and removing targets with the area smaller than S, wherein S is a given area threshold;
step S311: the moving-target pixels retained in the silhouette image silh obtained in step S310 are assigned the current timestamp, and the motion history image mhi is updated:
$$mhi(x,y)=\begin{cases}\text{timestamp}, & \text{if } silh(x,y)\neq 0\\ 0, & \text{if } silh(x,y)=0 \text{ and } mhi(x,y)<\text{timestamp}-\text{duration}\\ mhi(x,y), & \text{otherwise}\end{cases}$$
in the above formula, duration is a period of time for which each moving object continues in the movement history map;
step S312: in the motion history map in step S311, a plurality of moving objects with gradient information may be recorded, and the moving objects are divided one by one to calculate directions respectively;
step S313: for each motion component segmented in step S312, the gradient direction of each pixel point is first calculated:

$$\varphi(x,y)=\arctan\frac{F_y(x,y)}{F_x(x,y)}$$

where $\varphi(x,y)$ is the motion direction of the moving component at point (x, y), expressed as an angle of 0–360°, and $F_x(x,y)$ and $F_y(x,y)$ are the gradient images at pixel (x, y) computed with gradient masks; then the global gradient direction of the moving component is calculated by weighting, the rule being that more recent historical pixel points receive larger weights:
$$\varphi=\varphi_{base}+\frac{\sum_{x,y} w(\text{stamp}(x,y))\,\Delta\varphi(x,y)}{\sum_{x,y} w(\text{stamp}(x,y))}$$

where $\varphi$ is the calculated global gradient direction of the component, $\varphi_{base}$ is the resulting base reference angle, $w(\text{stamp})$ is a weight set according to the timestamp, and $\Delta\varphi(x,y)$ is the minimum angular difference between the direction obtained at each pixel and the reference angle; $\varphi$ is the direction of the moving component, with an angle value of 0–360°, and it is compared with the set attention direction after being converted to the clock-dial representation;
step S314: if the calculated direction of the component is close to the set attention direction, the movement direction of the component is considered to be consistent with the attention direction, and the components consistent with the attention direction are reserved;
step S315: judging whether the central point of the part reserved in the step S314 is in the set mask area, if so, judging that the frame is a key frame, and recording the frame number of the frame;
step S316: judging whether the current frame is the last frame of the video, if so, ending the selection of the key frame number module, and if not, turning to the step S317;
step S317: reading the next frame image of the video, converting the next frame image into a gray image, and then updating the background image, wherein the background updating idea is that the original background and a newly read frame image respectively account for a certain proportion to form a new background:
background(x,y)=(1-α)·background(x,y)+α·newframe(x,y)
where background is the background image, newframe is the newly read frame, and α controls the update rate; after the background update is finished, the process returns to step S306 to continue key frame selection;
and merging the video frames corresponding to the extracted key frame numbers into a new video.
2. The criminal investigation video preprocessing method based on traveling direction detection as claimed in claim 1, wherein the specific steps of judging the traveling direction of each detected moving object are as follows:
using timestamps, recording in a motion history image the gradient information of each moving component's track over a period of time;
calculating the direction of each component in a weighting mode according to the recorded gradient information;
and retaining the components whose calculated direction is consistent with the attention direction; this direction is the direction of the moving target.
3. The criminal investigation video pre-processing method based on traveling direction detection according to claim 2, characterized in that the global gradient direction of a moving component is calculated by the following formula:
φ = φ_base + ( Σ_stamp w(stamp)·Δφ(stamp) ) / ( Σ_stamp w(stamp) )
where φ is the calculated global gradient direction of the component, i.e. the direction of the moving component, φ_base is the reference angle, w(stamp) is a weight set according to the timestamp, and Δφ(stamp) is the minimum angular difference between the motion direction obtained at that timestamp and the reference angle.
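The weighted formula of claim 3 can be sketched as follows. The patent only states that w depends on the timestamp; the linear recency weighting and helper names below are assumptions made for illustration:

```python
def min_angle_diff(angle, base):
    """Signed minimal difference angle - base, mapped into (-180, 180]."""
    d = (angle - base) % 360.0
    return d - 360.0 if d > 180.0 else d

def global_orientation(samples, base_angle):
    """samples: list of (timestamp, local_gradient_direction) pairs
    recorded in the motion history image. Newer samples get larger
    weights w(stamp); the global direction is the base angle plus the
    weighted mean of the minimal angle differences."""
    t0 = min(t for t, _ in samples)
    weighted = sum((t - t0 + 1.0) * min_angle_diff(a, base_angle)
                   for t, a in samples)
    total = sum(t - t0 + 1.0 for t, _ in samples)
    return (base_angle + weighted / total) % 360.0
```

Weighting by recency makes the estimated direction follow where the component is heading now rather than where it came from, which is what the timestamped motion history image provides.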
4. The criminal investigation video preprocessing method based on traveling direction detection as claimed in claim 1, wherein recording the key frame number comprises: judging whether the center point of a retained target lies within the set attention line area; if so, the current frame is considered a key frame and its frame number is recorded; otherwise the current frame is judged to be a non-key frame.
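When the attention area is an axis-aligned rectangle, the center-point test of claim 4 is a point-in-rectangle check. A minimal sketch — the rectangle representation is an assumption, since the patent does not fix the shape of the attention line area:

```python
def center_in_region(cx, cy, region):
    """region = (x, y, w, h): top-left corner plus width and height.
    True when the target's center (cx, cy) lies inside the region."""
    x, y, w, h = region
    return x <= cx < x + w and y <= cy < y + h

def record_key_frame(frame_no, centers, region, key_frames):
    """Append frame_no to key_frames if any retained target's center
    falls inside the attention region; otherwise the frame is skipped."""
    if any(center_in_region(cx, cy, region) for cx, cy in centers):
        key_frames.append(frame_no)

key_frames = []
record_key_frame(5, [(10, 10)], (0, 0, 20, 20), key_frames)   # inside
record_key_frame(6, [(50, 50)], (0, 0, 20, 20), key_frames)   # outside
```

Only frames whose retained targets actually enter the attention area contribute their numbers to the key-frame list.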
5. The criminal investigation video pre-processing method based on traveling direction detection according to claim 1, wherein merging into a new video comprises the steps of:
reading the first key frame according to the selected key frame numbers and taking it as the starting frame of the output video;
and reading the key frames after the first one in the order of their frame numbers, adding them to the output video in sequence, until the last key frame is read and taken as the end frame of the output video.
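The merging of claim 5 can be sketched as reading the selected frames in order and appending each to the output. The sketch below abstracts video I/O behind a `read_frame` callable (an assumption for illustration; a real implementation would decode and re-encode frames with a video library):

```python
def merge_key_frames(read_frame, key_frame_numbers):
    """Build the output video as the ordered sequence of key frames:
    the first key frame starts the video, the last one ends it.
    read_frame(n) returns the decoded frame with frame number n."""
    output = []
    for n in sorted(key_frame_numbers):
        output.append(read_frame(n))
    return output

# toy stand-in for a decoded video: frame number -> frame data
frames = {n: "frame%d" % n for n in range(10)}
new_video = merge_key_frames(frames.__getitem__, [2, 5, 7])
```

Because only the recorded key frame numbers are read back, the output contains exactly the frames where a target crossed the attention area in the watched direction, which is what shortens browsing time without losing the events of interest.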
CN201210257970.0A 2012-07-24 2012-07-24 Criminal investigation video pre-filtering method based on traveling direction detection Active CN102810208B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210257970.0A CN102810208B (en) 2012-07-24 2012-07-24 Criminal investigation video pre-filtering method based on traveling direction detection

Publications (2)

Publication Number Publication Date
CN102810208A CN102810208A (en) 2012-12-05
CN102810208B true CN102810208B (en) 2015-12-16

Family

ID=47233910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210257970.0A Active CN102810208B (en) 2012-07-24 2012-07-24 Based on the criminal investigation video pre-filtering method that direct of travel detects

Country Status (1)

Country Link
CN (1) CN102810208B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103092929B (en) * 2012-12-30 2016-12-28 信帧电子技术(北京)有限公司 Method and device for generating a video summary
CN105744345B (en) * 2014-12-12 2019-05-31 深圳Tcl新技术有限公司 Video compression method and device
CN107770528B (en) * 2016-08-19 2023-08-25 中兴通讯股份有限公司 Video playing method and device
CN107133580B (en) * 2017-04-24 2020-04-10 杭州空灵智能科技有限公司 Synthetic method of 3D printing monitoring video
CN111866428B (en) * 2019-04-29 2023-03-14 杭州海康威视数字技术股份有限公司 Historical video data processing method and device
CN110933455B (en) * 2019-12-16 2023-03-14 云粒智慧科技有限公司 Video screening method and device, electronic equipment and storage medium
CN111738769B (en) * 2020-06-24 2024-02-20 湖南快乐阳光互动娱乐传媒有限公司 Video processing method and device
CN112312087B (en) * 2020-10-22 2022-07-29 中科曙光南京研究院有限公司 Method and system for quickly positioning event occurrence time in long-term monitoring video

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101119481A (en) * 2007-08-27 2008-02-06 刘文萍 Remote alarm video monitoring system and method
CN101123721A (en) * 2007-09-30 2008-02-13 湖北东润科技有限公司 An intelligent video monitoring system and its monitoring method
CN102054510A (en) * 2010-11-08 2011-05-11 武汉大学 Video preprocessing and playing method and system
CN102547244A (en) * 2012-01-17 2012-07-04 深圳辉锐天眼科技有限公司 Video monitoring method and system

Also Published As

Publication number Publication date
CN102810208A (en) 2012-12-05

Similar Documents

Publication Publication Date Title
CN102810208B (en) Criminal investigation video pre-filtering method based on traveling direction detection
CN107256225B (en) Method and device for generating heat map based on video analysis
CN108040221B (en) Intelligent video analysis and monitoring system
US9269245B2 (en) Region of interest based video synopsis
JP5280503B2 (en) Image display method, shared board system
Albiol et al. Detection of parked vehicles using spatiotemporal maps
US9398349B2 (en) Comment information generation device, and comment display device
US7982774B2 (en) Image processing apparatus and image processing method
US20110228978A1 (en) Foreground object detection system and method
CN104978567B (en) Vehicle checking method based on scene classification
Breitenstein et al. Hunting nessie-real-time abnormality detection from webcams
US8355079B2 (en) Temporally consistent caption detection on videos using a 3D spatiotemporal method
CN102833465A (en) Criminal investigation video pretreatment method based on movement detection
US20110280478A1 (en) Object monitoring system and method
CN101715070B (en) Method for automatically updating background in specifically monitored video
CN105608209B (en) Video annotation method and video annotation device
Höferlin et al. Uncertainty-aware video visual analytics of tracked moving objects
CN107105193B (en) Robot monitoring system based on human body information
CN102915542A (en) Image processing apparatus, image processing method, and program
US20160210759A1 (en) System and method of detecting moving objects
CN109905660B (en) Method, device and computer readable storage medium for searching video event
Li et al. Intelligent transportation video tracking technology based on computer and image processing technology
CN103049748A (en) Behavior-monitoring method and behavior-monitoring system
Sitara et al. Differentiating synthetic and optical zooming for passive video forgery detection: An anti-forensic perspective
CN113947744A (en) Fire image detection method, system, equipment and storage medium based on video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant