CN104504730A - Method for distinguishing parked vehicle from falling object - Google Patents

Method for distinguishing parked vehicle from falling object

Info

Publication number
CN104504730A
CN104504730A (application CN201410802663.5A)
Authority
CN
China
Prior art keywords
frame
area
video image
region
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410802663.5A
Other languages
Chinese (zh)
Other versions
CN104504730B (en)
Inventor
崔华
孙丽婷
宋焕生
李钢
朱龙生
公维宾
李怀宇
王璇
孙士杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changan University
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University
Priority to CN201410802663.5A
Publication of CN104504730A
Application granted
Publication of CN104504730B
Expired - Fee Related (current legal status)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/223 Analysis of motion using block-matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30264 Parking

Abstract

The invention discloses a method for distinguishing a parked vehicle from a dropped object, and belongs to the field of traffic. The method determines a suspicious region of a video image in which a parked vehicle or a dropped object may have appeared, and then examines the situation in that region in detail. The acquired video image is processed and, once it has been determined that a trajectory of a parked vehicle or a dropped object exists in the video image, the projected area of the target subject corresponding to the trajectory is compared with the projected area of a preset vehicle, so as to decide whether the target subject in the video image is a parked vehicle or a dropped object. This avoids the poor detection accuracy caused by the loss of object height information during the dimension reduction of existing detection techniques, and thereby improves detection accuracy.

Description

Method for discriminating between a parked vehicle and a dropped object
Technical field
The present invention relates to the field of traffic, and in particular to a method for discriminating between a parked vehicle and a dropped object.
Background art
With economic development, the number of private cars in use is growing rapidly. As this number grows, violations such as illegal parking and illegally dropping objects on the road are also increasing, and such violations can cause traffic accidents.
Existing techniques for detecting these violations mainly include radar, infrared, ultrasonic and video detection. Compared with the other techniques, video detection is applied ever more widely in this field because of its wide coverage, high detection accuracy, rich information content and flexible detection.
In the course of making the present invention, the inventors found that the prior art has at least the following problem:
video detection of such violations works mainly on two-dimensional images. Because the violation occurs in three-dimensional space, the processing involves a dimension reduction from three dimensions to two, which loses the height information of the detected object and therefore lowers the accuracy of the detection result.
Summary of the invention
To solve the above problem of the prior art, the invention provides a method for discriminating between a parked vehicle and a dropped object. The method is used for processing an acquired video image and comprises:
Step 1: performing edge enhancement on each frame of the video image, dividing each processed frame into multiple regions, and assigning a counter and an abnormality flag to each region;
Step 2: taking the first frame of the video image as a sample frame; for each remaining frame of the processed image, computing, within a single region, the sum of the absolute gray-level differences between that frame and the sample frame, denoted the first cumulative sum; comparing the first cumulative sum with a first threshold, and, if the first cumulative sum is smaller than the first threshold, incrementing by one the counter of the region corresponding to the first cumulative sum;
Step 3: if, in any of the remaining frames, the counter value of a first region exceeds a preset threshold, saving the gray levels of the first region as a first stable state of the first region and clearing the counter of the first region; if, in a later one of the remaining frames, the counter value of the first region exceeds the preset threshold again, saving the gray levels of the first region in the current frame as a second stable state of the first region;
Step 4: computing the sum of the absolute gray-level differences between the first stable state and the second stable state, denoted the second cumulative sum; comparing the second cumulative sum with a second threshold, and, if the second cumulative sum is greater than the second threshold, activating the abnormality flag of the first region and setting the gray levels of the first region to 255;
Step 5: in the frame containing the first region, obtaining the bounding rectangle formed by all regions whose abnormality flag is activated; if the area of the bounding rectangle is greater than a third threshold, taking the region enclosed by the rectangle as a suspicious region;
Step 6: obtaining the E frames of video nearest before the frame containing the suspicious region; placing a window over each region with an activated abnormality flag and computing the energy change value of the window; if the energy change value is greater than a fourth threshold, marking the center pixel of the window as a candidate feature point; within the suspicious region, selecting the candidate feature point with the largest energy change value as a corner point; if a motion trajectory existing in the E frames can be determined from the corner point, proceeding to the next step;
Step 7: obtaining the F frames of video nearest after the frame containing the suspicious region and attempting to obtain a motion trajectory in the F frames; if no motion trajectory is obtained, determining that a parked vehicle or a dropped object exists in the suspicious region;
Step 8: performing background removal on the suspicious region to obtain a target subject, and matching the target subject by area against preset vehicle projection samples; if the overlap area between the projected region of the target subject and a preset vehicle projection sample is greater than a fifth threshold, determining that the region contains a parked vehicle, and otherwise a dropped object.
Optionally, the method further comprises:
if the first cumulative sum is not smaller than the first threshold, resetting the counter of the region corresponding to the first cumulative sum and updating the data of that single region of the sample frame.
Optionally, the method further comprises:
if the second cumulative sum is not greater than the second threshold, replacing the gray levels of the first stable state with the gray levels of the second stable state.
Optionally, the method further comprises:
if the area of the bounding rectangle is not greater than the third threshold, discarding the region data of the abnormality flag.
Optionally, determining the motion trajectory in the E frames from the corner point comprises:
traversing each of the E frames within a preset search range to find the point matching the corner point in each frame of the video image; and
connecting the matched points and the corner point into a line, the line being the motion trajectory in the video image.
The technical solution provided by the invention brings the following beneficial effect:
by processing the acquired video image and, after determining that a trajectory of a parked vehicle or a dropped object exists in the video image, comparing the overlap area between the projection of the corresponding target subject and the projection sample of a preset vehicle with a preset threshold, the method decides whether the target subject in the video image is a parked vehicle or a dropped object. This avoids the loss of detection accuracy caused by the loss of object height information during the dimension reduction of existing detection techniques, and improves detection accuracy.
Brief description of the drawings
To explain the technical solution of the invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the invention, and a person of ordinary skill in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a method for discriminating between a parked vehicle and a dropped object provided by the invention;
Fig. 2 is a schematic diagram of comparing a local region of the second frame with the sample frame, provided by the invention;
Fig. 3 is a schematic diagram of the detected abnormal regions, provided by the invention;
Fig. 4 is a schematic diagram of the bounding rectangle of the suspicious region determined from multiple abnormal regions, provided by the invention;
Fig. 5 is a schematic diagram of computing the change of the moving window, provided by the invention;
Fig. 6 is a schematic diagram of block-matching tracking, provided by the invention;
Fig. 7 is a schematic diagram of the target subject obtained after background subtraction, provided by the invention;
Fig. 8 is a schematic diagram of the result of matching the projection of an abnormal region against a vehicle model, provided by the invention;
Fig. 9 is a schematic diagram of a message window indicating the position of a parked vehicle, provided by the invention.
Detailed description
To make the structure and advantages of the invention clearer, the invention is further described below with reference to the drawings.
Embodiment 1
The invention provides a method for discriminating between a parked vehicle and a dropped object. The method is used for processing an acquired video image and, as shown in Fig. 1, comprises:
Step 1: perform edge enhancement on each frame of the video image, divide each processed frame into multiple regions, and assign a counter and an abnormality flag to each region.
In implementation, before edge enhancement is applied to each frame of the video image, a histogram-based linear stretch is usually applied to the picture first. Compared with the image before stretching, the stretched image has enhanced brightness and contrast and more prominent edge details, so its sharpness is clearly improved, which facilitates the subsequent processing. After the stretching, a simplified first-order differential operator is used to extract the image edges in each video frame, and median filtering is then applied to the edges to reduce burrs on the image edges.
The median filter is a nonlinear filtering method whose central idea is to replace the gray level of a pixel with the median of the gray levels in its neighborhood, for example
f(x, y) = median{ S_f(x, y) },
where S_f(x, y) is a neighborhood of f(x, y). For the filtering object in this embodiment, the gray levels of the five nearest pixels can be sorted and the median of the sorted values taken as the pixel value of the filtering object, which removes the noise component from the image edges. Edge enhancement is performed here to highlight the edge information of the object being processed.
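By way of illustration only, the preprocessing chain described above can be sketched as follows. This is a minimal sketch assuming grayscale frames stored as NumPy uint8 arrays; the |dI/dx| + |dI/dy| gradient approximation, the cross-shaped five-pixel median footprint and the function name are illustrative assumptions, not the exact operators prescribed by the patent.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_frame(gray):
    """Linear stretch, simplified first-order edge extraction, median filtering.
    `gray` is a 2-D uint8 array; all parameter choices are illustrative."""
    # Histogram-based linear stretch to the full 0-255 range.
    lo, hi = int(gray.min()), int(gray.max())
    stretched = (gray.astype(np.float32) - lo) / max(hi - lo, 1) * 255.0

    # Simplified first-order differential operator: gradient magnitude
    # approximated by |dI/dx| + |dI/dy|.
    gx = np.abs(np.diff(stretched, axis=1, prepend=stretched[:, :1]))
    gy = np.abs(np.diff(stretched, axis=0, prepend=stretched[:1, :]))
    edges = np.clip(gx + gy, 0, 255)

    # Median filter over each pixel and its 4 nearest neighbours (5 values),
    # suppressing burrs on the extracted edges.
    cross = np.array([[0, 1, 0],
                      [1, 1, 1],
                      [0, 1, 0]], dtype=bool)
    return median_filter(edges, footprint=cross).astype(np.uint8)
```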
After the edge enhancement, each frame of the video image also needs to be divided into multiple regions; specifically, each video frame is partitioned into blocks according to
C × R = (W / w) × (H / h), that is, C = W / w and R = H / h,
where W and H are the width and height of the image and w and h are the width and height of a block, i.e. the numbers of pixels of the image in the horizontal and vertical directions and the numbers of pixels of any region in the horizontal and vertical directions; C is the number of regions in the horizontal direction and R is the number of regions in the vertical direction. A counter CN and an abnormality flag D are then set up for each region. Here the resolution of the video image is 720 × 288 and each block after partitioning is 6 × 8 pixels.
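The block partition and the per-region counter CN and abnormality flag D can be set up as in the following sketch, again only as an illustration under the 720 × 288 / 6 × 8 assumptions stated above; the array layout and names are not part of the patent.

```python
import numpy as np

W, H = 720, 288          # frame width and height in pixels
w, h = 6, 8              # block width and height in pixels
C, R = W // w, H // h    # number of regions horizontally (C) and vertically (R)

counters = np.zeros((R, C), dtype=np.int32)   # counter CN for each region
abnormal = np.zeros((R, C), dtype=bool)       # abnormality flag D for each region

def region(frame, r, c):
    """Return the block of `frame` (an H x W array) at region row r, column c."""
    return frame[r * h:(r + 1) * h, c * w:(c + 1) * w]
```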
Step 2: take the first frame of the video image as the sample frame; for each remaining frame of the processed image, compute, within a single region, the sum of the absolute gray-level differences between that frame and the sample frame, denoted the first cumulative sum; compare the first cumulative sum with the first threshold, and, if the first cumulative sum is smaller than the first threshold, increment by one the counter of the region corresponding to the first cumulative sum.
Optionally, the method further comprises:
if the first cumulative sum is not smaller than the first threshold, resetting the counter of the corresponding region and updating the sample frame.
In implementation, the first frame of the video is taken as the sample frame and the adjacent second frame is compared with it. The objects compared are the gray levels of pixels at corresponding positions within each region of each video frame. As shown in Fig. 2, the gray levels of all pixels in region 1 of the second frame are compared one by one with the gray levels of all pixels in the region at the same position in the sample frame. For example, if the gray levels of five pixels in the second frame are 122, 54, 18, 220 and 214, and the gray levels of the five pixels of the same region in the sample frame are 53, 85, 147, 56 and 244, then the differences of the five pixel gray levels are 69, -31, -129, 164 and -30, and the sum of their absolute values is 69 + 31 + 129 + 164 + 30 = 423, i.e. the first cumulative sum is 423. The first cumulative sum is then compared with the first threshold; if the first cumulative sum is smaller than the first threshold, the counter of the region corresponding to this first cumulative sum is incremented by one. Here the first threshold ranges from 500 to 600.
Correspondingly, if the first cumulative sum is not smaller than the first threshold, the counter of the region corresponding to the first cumulative sum is reset and the data of that single region of the sample frame is updated.
In implementation, when the first cumulative sum is not smaller than the first threshold, the gray levels of the region have changed beyond the preset standard, so the counter of the region is simply reset and the data of the region is updated, i.e. the gray levels of this region replace the gray levels of the same region in the sample frame.
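The comparison of a region against the sample frame and the counter and sample-frame update of step 2 can be sketched as follows; the concrete threshold of 550 is merely one value inside the 500 to 600 range given above, and the function name and array layout are illustrative assumptions.

```python
import numpy as np

w, h = 6, 8              # block size in pixels, as in this embodiment
FIRST_THRESHOLD = 550    # illustrative value inside the stated 500-600 range

def update_region(sample, frame, counters, r, c):
    """Step-2 sketch for one region: count while the region matches the sample
    frame, otherwise reset the counter and refresh the sample frame in place."""
    ys, xs = slice(r * h, (r + 1) * h), slice(c * w, (c + 1) * w)
    diff = frame[ys, xs].astype(np.int32) - sample[ys, xs].astype(np.int32)
    first_cum_sum = int(np.abs(diff).sum())   # first cumulative sum

    if first_cum_sum < FIRST_THRESHOLD:
        counters[r, c] += 1                   # region agrees with the sample
    else:
        counters[r, c] = 0                    # region has changed: reset counter
        sample[ys, xs] = frame[ys, xs]        # and update the sample frame
```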
Step 3: if, in any of the remaining frames, the counter value of a first region exceeds the preset threshold, save the gray levels of the first region as the first stable state of the first region and clear the counter of the first region; if, in a later one of the remaining frames, the counter value of the first region exceeds the preset threshold again, save the gray levels of the first region in the current frame as the second stable state of the first region.
In implementation, with the first frame as the sample frame, the differences between the gray levels of all pixels in the first region of the second frame and the gray levels of all pixels in the first region of the sample frame are computed, and the absolute values of all the differences are summed. If this sum is greater than the first threshold, the gray levels of the pixels in the first region of the sample frame are replaced with the gray levels of the pixels in the first region of the second frame, which ensures that the sample frame is updated in real time as the background changes; otherwise the counter of the first region is incremented by one. The same calculation of the cumulative sum is then performed between the first region of the third frame and the first region of the sample frame, then the first region of the fourth frame, the fifth frame, and so on. If, say, by frame 300 the counter of the first region reaches the preset threshold, the gray levels of the pixels in the first region of frame 300 are saved as stable state 1 of the first region. The preset threshold here ranges from 60 to 100.
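The capture of stable states described above can be sketched as follows; the counter threshold of 80 frames is one value inside the stated 60 to 100 range, and the dictionary used to hold the two most recent stable states per region is an illustrative data structure, not part of the patent.

```python
import numpy as np

w, h = 6, 8
COUNTER_THRESHOLD = 80   # illustrative value inside the stated 60-100 range

def maybe_capture_stable_state(frame, counters, states, r, c):
    """Step-3 sketch: when a region's counter shows it has stayed unchanged for
    long enough, snapshot its gray levels as a stable state.  states[(r, c)]
    is a list holding at most the two most recent stable states of the region."""
    if counters[r, c] < COUNTER_THRESHOLD:
        return
    snapshot = frame[r * h:(r + 1) * h, c * w:(c + 1) * w].copy()
    states.setdefault((r, c), []).append(snapshot)
    if len(states[(r, c)]) > 2:
        states[(r, c)].pop(0)                # keep only the two newest states
    counters[r, c] = 0                       # restart counting for the next state
```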
Step 4: compute the sum of the absolute gray-level differences between the first stable state and the second stable state, denoted the second cumulative sum; compare the second cumulative sum with the second threshold, and, if the second cumulative sum is greater than the second threshold, activate the abnormality flag of the first region and set the gray levels of the first region to 255.
Optionally, the method further comprises:
if the second cumulative sum is not greater than the second threshold, replacing the gray levels of the first stable state with the gray levels of the second stable state.
In implementation, continuing from step 3, after the gray levels of the pixels in the first region of frame 300 have been saved as stable state 1 of the first region, the counter restarts counting from zero. If, say, by frame 800 the counter of the first region again reaches the preset threshold, the gray levels of the pixels in the first region of frame 800 are saved as the second stable state of the first region, and the sum of the absolute differences of the 6 × 8 = 48 pixel gray levels of the first region between the first stable state and the second stable state, i.e. the second cumulative sum, is computed. If the second cumulative sum is greater than the second threshold, the first region is considered an abnormal region and its abnormality flag is set to TRUE. The second threshold here ranges from 500 to 600.
If both the first and the second stable state are available for the regions, then after the cumulative sum has been computed and compared with the threshold, the gray levels of each abnormal region are set to 255 and all the rest are set to 0, where "all the rest" means the part of the whole image outside all abnormal blocks. As shown in Fig. 3, the region enclosed by the white lines is the bounding rectangle of the abnormal regions, the white blocks are all the detected abnormal regions, and the black part is not abnormal.
It should be noted that, regardless of whether the second cumulative sum is larger or smaller than the threshold, the data stored in the second stable state replaces the data in the first stable state. All of the operations described above for the first region are performed on every region of the video image.
Taking the first region as an example: after the first stable state has been detected, the counter of the first region resumes accumulating in order to detect the second stable state, and after the second stable state has been detected the region continues to be processed to detect the third, fourth, fifth and further stable states; processing of the region never stops. However, each region is currently allocated only two storage slots for stable states, and these two slots always hold the two most recent stable states. For example, if three stable states have been detected so far, in chronological order the first, the second and the third, then the cumulative sum to compute is that of the third and the second stable states, not that of the third and the first. Each newly detected stable state is therefore stored in stable-state slot 2, and the state previously in slot 2, which is the stable state nearest in time to the newly detected one, is moved into slot 1; this is what is meant by "replacing the gray levels of the first stable state with the gray levels of the second stable state". In this way, only the cumulative sum of slot 2 and slot 1 needs to be computed to compare the two most recent states of the current region.
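The two-slot comparison and the raising of the abnormality flag (step 4) can be sketched as follows; the threshold of 550 is one value inside the stated 500 to 600 range, and the data structures follow the previous sketch.

```python
import numpy as np

SECOND_THRESHOLD = 550   # illustrative value inside the stated 500-600 range

def update_abnormal_flag(states, abnormal, r, c):
    """Step-4 sketch: compare the two most recent stable states of a region and
    raise its abnormality flag when they differ strongly."""
    history = states.get((r, c), [])
    if len(history) < 2:
        return
    older, newer = history[-2], history[-1]
    second_cum_sum = int(np.abs(newer.astype(np.int32) - older.astype(np.int32)).sum())
    if second_cum_sum > SECOND_THRESHOLD:
        abnormal[r, c] = True                # the region becomes an abnormal region
    # Either way the newer state remains the reference for the next comparison,
    # i.e. "the first stable state is replaced with the second stable state".
```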
Step 5: in the frame containing the first region, obtain the bounding rectangle formed by all regions whose abnormality flag is activated; if the area of the bounding rectangle is greater than the third threshold, take the region enclosed by the rectangle as a suspicious region.
Optionally, the method further comprises:
if the area of the bounding rectangle is not greater than the third threshold, discarding the region data of the abnormality flag.
In implementation, as shown in Fig. 4, several regions with activated abnormality flags, namely the parts filled with black hatching, exist in this video frame. The bounding rectangle is determined from the positions and edges of these regions and its area is computed. If the area is greater than the third threshold, the region enclosed by the rectangle is regarded as a suspicious region; if the area is smaller than the threshold, nothing further is done. Here the third threshold takes a value of 5 to 10.
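The bounding rectangle of the abnormal regions and the suspicious-region test of step 5 can be sketched as follows; the threshold of 8 is one value inside the stated 5 to 10 range, and expressing the rectangle in region coordinates is an illustrative choice.

```python
import numpy as np

THIRD_THRESHOLD = 8   # illustrative value inside the stated 5-10 range

def suspicious_rectangle(abnormal):
    """Step-5 sketch: bounding rectangle (in region coordinates) of all regions
    whose abnormality flag is set; returns None if there is none or it is too small."""
    rows, cols = np.nonzero(abnormal)
    if rows.size == 0:
        return None
    top, bottom = int(rows.min()), int(rows.max())
    left, right = int(cols.min()), int(cols.max())
    if (bottom - top + 1) * (right - left + 1) <= THIRD_THRESHOLD:
        return None                    # too small: discard the abnormal data
    return top, bottom, left, right    # this rectangle encloses the suspicious region
```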
Step 6: obtain the E frames of video nearest before the frame containing the suspicious region; place a window over each region with an activated abnormality flag and compute the energy change value of the window; if the energy change value is greater than the fourth threshold, mark the center pixel of the window as a candidate feature point; within the suspicious region, select the candidate feature point with the largest energy change value as the corner point; if a motion trajectory existing in the E frames can be determined from the corner point, proceed to the next step.
Optionally, determining the motion trajectory in the E frames from the corner point comprises:
traversing the E frames to find the point matching the corner point in each frame of the video image; and
connecting the matched points and the corner point into a line, the line being the motion trajectory in the video image.
In implementation, the procedure is as follows.
First, a local window of size w × w is set within the abnormal region.
Next, as this local window is moved in a given direction, the five nearest reference points on the diagonal of the moving direction are determined; the gray levels of these five reference points are differenced pairwise and the absolute values are summed. This sum is the energy change value of the local window. The energy change value is compared with the fourth threshold; if it is greater than the fourth threshold, the center pixel of the window is taken as a candidate feature point.
This embodiment uses the Moravec corner detection algorithm. The conventional Moravec algorithm computes the sum of squared gray-level differences of adjacent pixels along four directions (namely 0°, 45°, 90° and 135°). For example, the energy change of image pixel (x, y) is computed as
V_1 = \sum_{i=-k}^{k-1} (I_{x+i,y} - I_{x+i+1,y})^2, \quad V_2 = \sum_{i=-k}^{k-1} (I_{x,y+i} - I_{x,y+i+1})^2,
V_3 = \sum_{i=-k}^{k-1} (I_{x+i,y+i} - I_{x+i+1,y+i+1})^2, \quad V_4 = \sum_{i=-k}^{k-1} (I_{x+i,y-i} - I_{x+i+1,y-i-1})^2,
E = min(V_1, V_2, V_3, V_4),
where the summation bound k is a rounded half-window width, and V_1, V_2, V_3 and V_4 represent the energy change values along the 90°, 0°, 45° and 135° directions respectively. The minimum of the energy change values in the four directions is taken as the energy change value E of pixel (x, y); if E is greater than a certain threshold, pixel (x, y) is taken as a candidate feature point.
Then, within the suspicious region, the candidate feature point with the largest energy change value is selected as the corner point.
The energy change value is computed for every pixel in the abnormal region, and the pixel with the largest energy change value is chosen as the corner point.
When the window size is 5 × 5, as shown in Fig. 5, only 4 pixels (excluding the center point) participate in the computation in each single direction, so the Moravec algorithm is easily affected by noise when the image is noisy. This embodiment therefore slightly improves the traditional Moravec algorithm by increasing the number of pixels participating in the computation: instead of using only the points on the four directions through the point under test, small blocks of size W × H on those four directions are used for corner extraction.
To reduce the amount of computation, the squaring operation of the original algorithm is replaced with an absolute-value operation, and the new corner detection measure is computed as follows:
v_1 = \sum_{a=-w}^{w} \sum_{b=-h}^{h} | g(x-i+a, y+b) - g(x+i+a, y+b) |,
v_2 = \sum_{a=-w}^{w} \sum_{b=-h}^{h} | g(x-i+a, y-j+b) - g(x+i+a, y+j+b) |,
v_3 = \sum_{a=-w}^{w} \sum_{b=-h}^{h} | g(x+a, y-j+b) - g(x+a, y+j+b) |,
v_4 = \sum_{a=-w}^{w} \sum_{b=-h}^{h} | g(x+i+a, y-j+b) - g(x-i+a, y+j+b) |,
The minimum of the energy change values in the four directions is taken as the energy change value of pixel (x, y). If this energy change value is greater than a certain threshold (the fourth threshold here ranges from 200 to 300), pixel (x, y) is taken as a candidate feature point. The energy change value is computed for every pixel in the abnormal region, and the pixel with the largest energy change value is chosen as the corner point.
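The modified block-based Moravec measure above can be sketched as follows, assuming `g` is a 2-D integer array indexed as g[x, y] and that (x, y) lies far enough from the image border; the default offsets i, j and block half-sizes w, h are illustrative values, not those mandated by the patent.

```python
import numpy as np

def improved_moravec_response(g, x, y, i=2, j=2, w=2, h=2):
    """Block-wise absolute-difference Moravec measure: the minimum response over
    the four directions described above (a sketch, not the exact code)."""
    a = np.arange(-w, w + 1)[:, None]   # block row offsets
    b = np.arange(-h, h + 1)[None, :]   # block column offsets

    def block(dx, dy):
        return g[x + dx + a, y + dy + b].astype(np.int32)

    v1 = np.abs(block(-i, 0) - block(+i, 0)).sum()     # horizontally opposed blocks
    v2 = np.abs(block(-i, -j) - block(+i, +j)).sum()   # one diagonal pair of blocks
    v3 = np.abs(block(0, -j) - block(0, +j)).sum()     # vertically opposed blocks
    v4 = np.abs(block(+i, -j) - block(-i, +j)).sum()   # the other diagonal pair
    return int(min(v1, v2, v3, v4))
```

Pixels whose response exceeds the fourth threshold (200 to 300) would then become candidate feature points, and the candidate with the largest response in the suspicious region would be kept as the corner point.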
Finally, the motion trajectory is determined in the E frames of video according to the corner point. This specifically comprises:
traversing the E frames to find the point matching the corner point in each frame of the video image; and
connecting the matched points and the corner point into a line, the line being the motion trajectory in the video image.
The corner selection method used in this embodiment is the improved Moravec algorithm, and tracking is performed by block matching. As shown in Fig. 6, suppose that in frame N of the video sequence the corner obtained with the Moravec operator in the abnormal region is P0(x, y). A sub-block of size a × b is constructed with P0(x, y) as its center, and the matching block of this sub-block is searched for within a given search window (R × Q) of frame N-1; the position of the center of the matching block then represents the position of this block in frame N-1. The matching block is found by a full search, i.e. all positions within the search range are evaluated in turn and the small block most similar to the template is selected as the matching block; the center of the most similar small block is the position of the corner in that frame, say P1(x, y). Tracking through the E frames in this way yields a series of matched points, and connecting these matched points gives the trajectory of the corner. The criterion for the most similar small block is that the sum of the absolute differences between the pixels of the block to be matched and those of a small block in the search range is minimal. The value of E is typically 300.
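The full-search block matching used for tracking the corner can be sketched as follows; the sub-block size a × b, the search window R × Q and the function name are illustrative assumptions, and frames are assumed to be 2-D arrays indexed as frame[row, column].

```python
import numpy as np

def match_block(prev_frame, template, cx, cy, a=8, b=8, R=16, Q=16):
    """Full-search block matching by sum of absolute differences (SAD): the
    a x b template centred on (cx, cy) is compared with every candidate
    position inside an R x Q search window of the previous frame."""
    best_sad, best_pos = None, (cx, cy)
    for dy in range(-Q // 2, Q // 2 + 1):
        for dx in range(-R // 2, R // 2 + 1):
            x, y = cx + dx, cy + dy
            if y - b // 2 < 0 or x - a // 2 < 0:
                continue                      # candidate block leaves the image
            cand = prev_frame[y - b // 2:y + b // 2, x - a // 2:x + a // 2]
            if cand.shape != template.shape:
                continue
            sad = int(np.abs(cand.astype(np.int32) - template.astype(np.int32)).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_pos = sad, (x, y)
    return best_pos                           # centre of the most similar block
```

Repeating this match frame by frame through the E preceding frames (or the F following frames in step 7) and connecting the returned positions gives the trajectory of the corner.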
Step 7: obtain the F frames of video nearest after the frame containing the suspicious region and attempt to obtain a motion trajectory in the F frames; if no motion trajectory is obtained, determine that a parked vehicle or a dropped object exists in the suspicious region.
In implementation, the concrete procedure is the same as in step 6; the only difference is the direction of matching. Step 6 matches through the E frames before the frame containing the suspicious region, while step 7 matches through the F frames after it, i.e. one matches backward and the other matches forward. The matching process itself is the same; refer to step 6 for details, which are not repeated here. The value of F is typically 50 to 60.
Step 8: perform background removal on the suspicious region to obtain the target subject, and match the target subject by area against preset vehicle projection samples; if the overlap area between the projected region of the target subject and a preset vehicle projection sample is greater than the fifth threshold, determine that the region contains a parked vehicle, and otherwise a dropped object.
In implementation, the target subject obtained after background subtraction, as shown in Fig. 7, is itself a two-dimensional image, so it is matched by area, one by one, against the projections of several common vehicles prepared in advance. Fig. 8 shows the result of matching the projection of an abnormal region against a vehicle model: the gray overlap area is 456, which is greater than the specified threshold, so the object is judged to be a parked vehicle. Here the fifth threshold ranges from 400 to 500.
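The area matching of step 8 can be sketched as follows, assuming the target subject and the pre-stored vehicle projections are binary masks aligned to the same image grid; the threshold of 450 is one value inside the stated 400 to 500 range, and the function name is illustrative.

```python
import numpy as np

FIFTH_THRESHOLD = 450   # illustrative value inside the stated 400-500 range

def classify_target(target_mask, vehicle_masks):
    """Step-8 sketch: compare the silhouette of the target subject (after
    background subtraction) with pre-stored vehicle projection samples."""
    best_overlap = 0
    for vehicle_mask in vehicle_masks:
        overlap = int(np.logical_and(target_mask, vehicle_mask).sum())  # overlap area
        best_overlap = max(best_overlap, overlap)
    return "parked vehicle" if best_overlap > FIFTH_THRESHOLD else "dropped object"
```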
After the judgement has been made, the result can be presented in several ways; as shown in Fig. 9, a message window is used to indicate the position of the parked vehicle.
This embodiment proposes a method for discriminating between a parked vehicle and a dropped object. By processing the acquired video image, determining that a trajectory of a parked vehicle or a dropped object exists in the video image, and then comparing the projected area of the target subject corresponding to the trajectory with the projected area of a preset vehicle, the method decides whether the target subject in the video image is a parked vehicle or a dropped object. This avoids the loss of detection accuracy caused by the loss of object height information during the dimension reduction of existing detection techniques, and improves detection accuracy.
It should be noted that the discrimination method provided by the above embodiment is only illustrated with the example of distinguishing a parked vehicle from a dropped object; in practical applications the above discrimination method can also be used in other application scenarios as required, and the specific implementation process is similar to that of the above embodiment and is not repeated here.
The numbering in the above embodiment is for description only and does not represent any order of assembly or use of the components.
The foregoing is only an embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (5)

1. A method for discriminating between a parked vehicle and a dropped object, wherein the method is used for processing an acquired video image, characterized in that the method comprises:
step 1: performing edge enhancement on each frame of the video image, dividing each processed frame into multiple regions, and assigning a counter and an abnormality flag to each region;
step 2: taking the first frame of the video image as a sample frame; for each remaining frame of the processed image, computing, within a single region, the sum of the absolute gray-level differences between that frame and the sample frame, denoted the first cumulative sum; comparing the first cumulative sum with a first threshold, and, if the first cumulative sum is smaller than the first threshold, incrementing by one the counter of the region corresponding to the first cumulative sum;
step 3: if, in any of the remaining frames, the counter value of a first region exceeds a preset threshold, saving the gray levels of the first region as a first stable state of the first region and clearing the counter of the first region; if, in a later one of the remaining frames, the counter value of the first region exceeds the preset threshold again, saving the gray levels of the first region in the current frame as a second stable state of the first region;
step 4: computing the sum of the absolute gray-level differences between the first stable state and the second stable state, denoted the second cumulative sum; comparing the second cumulative sum with a second threshold, and, if the second cumulative sum is greater than the second threshold, activating the abnormality flag of the first region and setting the gray levels of the first region to 255;
step 5: in the frame containing the first region, obtaining the bounding rectangle formed by all regions whose abnormality flag is activated; if the area of the bounding rectangle is greater than a third threshold, taking the region enclosed by the rectangle as a suspicious region;
step 6: obtaining the E frames of video nearest before the frame containing the suspicious region; placing a window over each region with an activated abnormality flag and computing the energy change value of the window; if the energy change value is greater than a fourth threshold, marking the center pixel of the window as a candidate feature point; within the suspicious region, selecting the candidate feature point with the largest energy change value as a corner point; if a motion trajectory existing in the E frames can be determined from the corner point, proceeding to the next step;
step 7: obtaining the F frames of video nearest after the frame containing the suspicious region and attempting to obtain a motion trajectory in the F frames; if no motion trajectory is obtained, determining that a parked vehicle or a dropped object exists in the suspicious region;
step 8: performing background removal on the suspicious region to obtain a target subject, and matching the target subject by area against preset vehicle projection samples; if the overlap area between the projected region of the target subject and a preset vehicle projection sample is greater than a fifth threshold, determining that the region contains a parked vehicle, and otherwise a dropped object.
2. The method according to claim 1, characterized in that the method further comprises:
if the first cumulative sum is not smaller than the first threshold, resetting the counter of the region corresponding to the first cumulative sum and updating the data of that single region of the sample frame.
3. The method according to claim 1, characterized in that the method further comprises:
if the second cumulative sum is not greater than the second threshold, replacing the gray levels of the first stable state with the gray levels of the second stable state.
4. The method according to claim 1, characterized in that the method further comprises:
if the area of the bounding rectangle is not greater than the third threshold, discarding the region data of the abnormality flag.
5. The method according to claim 1, characterized in that determining the motion trajectory in the E frames from the corner point comprises:
traversing each of the E frames within a preset search range to find the point matching the corner point in each frame of the video image; and
connecting the matched points and the corner point into a line, the line being the motion trajectory in the video image.
CN201410802663.5A 2014-12-19 2014-12-19 Method for discriminating between a parked vehicle and a dropped object Expired - Fee Related CN104504730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410802663.5A CN104504730B (en) 2014-12-19 2014-12-19 Method for discriminating between a parked vehicle and a dropped object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410802663.5A CN104504730B (en) 2014-12-19 2014-12-19 Method for discriminating between a parked vehicle and a dropped object

Publications (2)

Publication Number Publication Date
CN104504730A true CN104504730A (en) 2015-04-08
CN104504730B CN104504730B (en) 2017-07-11

Family

ID=52946124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410802663.5A Expired - Fee Related CN104504730B (en) 2014-12-19 2014-12-19 Method for discriminating between a parked vehicle and a dropped object

Country Status (1)

Country Link
CN (1) CN104504730B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105869109A (en) * 2016-03-28 2016-08-17 长安大学 Method for differentiating parking vehicles and fallen objects based on inverse projective planes of different heights
CN106297278A (en) * 2015-05-18 2017-01-04 杭州海康威视数字技术股份有限公司 A kind of method and system shedding thing vehicle for inquiry
CN110956647A (en) * 2019-11-02 2020-04-03 上海交通大学 System and method for dynamically tracking object behaviors in video based on behavior dynamic line model
CN111274982A (en) * 2020-02-04 2020-06-12 浙江大华技术股份有限公司 Method and device for identifying projectile and storage medium
CN111753574A (en) * 2019-03-26 2020-10-09 顺丰科技有限公司 Throw area positioning method, device, equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002334386A (en) * 2001-05-08 2002-11-22 Seiwa Electric Mfg Co Ltd Object detection system and image processing method
US20070257814A1 (en) * 2006-05-05 2007-11-08 Tilton Scott K Combined speed detection, video and timing apparatus
CN101458871A (en) * 2008-12-25 2009-06-17 北京中星微电子有限公司 Intelligent traffic analysis system and application system thereof
CN101964145A (en) * 2009-07-23 2011-02-02 北京中星微电子有限公司 Automatic license plate recognition method and system
US8107678B2 (en) * 2008-03-24 2012-01-31 International Business Machines Corporation Detection of abandoned and removed objects in a video stream
CN102521979A (en) * 2011-12-06 2012-06-27 北京万集科技股份有限公司 High-definition camera-based method and system for pavement event detection
CN202486980U (en) * 2012-03-13 2012-10-10 长安大学 Traffic information detection device based on video sequence
CN102945603A (en) * 2012-10-26 2013-02-27 青岛海信网络科技股份有限公司 Method for detecting traffic event and electronic police device
CN103136514A (en) * 2013-02-05 2013-06-05 长安大学 Parking event detecting method based on double tracking system

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002334386A (en) * 2001-05-08 2002-11-22 Seiwa Electric Mfg Co Ltd Object detection system and image processing method
US20070257814A1 (en) * 2006-05-05 2007-11-08 Tilton Scott K Combined speed detection, video and timing apparatus
US8107678B2 (en) * 2008-03-24 2012-01-31 International Business Machines Corporation Detection of abandoned and removed objects in a video stream
CN101458871A (en) * 2008-12-25 2009-06-17 北京中星微电子有限公司 Intelligent traffic analysis system and application system thereof
CN101964145A (en) * 2009-07-23 2011-02-02 北京中星微电子有限公司 Automatic license plate recognition method and system
CN102521979A (en) * 2011-12-06 2012-06-27 北京万集科技股份有限公司 High-definition camera-based method and system for pavement event detection
CN202486980U (en) * 2012-03-13 2012-10-10 长安大学 Traffic information detection device based on video sequence
CN102945603A (en) * 2012-10-26 2013-02-27 青岛海信网络科技股份有限公司 Method for detecting traffic event and electronic police device
CN103136514A (en) * 2013-02-05 2013-06-05 长安大学 Parking event detecting method based on double tracking system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yingli Tian et al.: "Robust detection of abandoned and removed objects in complex surveillance videos", IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) *
理勤辉: "Research on video detection methods for objects dropped on highways", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106297278A (en) * 2015-05-18 2017-01-04 杭州海康威视数字技术股份有限公司 A kind of method and system shedding thing vehicle for inquiry
CN106297278B (en) * 2015-05-18 2019-12-20 杭州海康威视数字技术股份有限公司 Method and system for querying a projectile vehicle
CN105869109A (en) * 2016-03-28 2016-08-17 长安大学 Method for differentiating parking vehicles and fallen objects based on inverse projective planes of different heights
CN105869109B (en) * 2016-03-28 2018-12-07 长安大学 Parking based on different height inverse projection face and leave object differentiating method
CN111753574A (en) * 2019-03-26 2020-10-09 顺丰科技有限公司 Throw area positioning method, device, equipment and storage medium
CN110956647A (en) * 2019-11-02 2020-04-03 上海交通大学 System and method for dynamically tracking object behaviors in video based on behavior dynamic line model
CN111274982A (en) * 2020-02-04 2020-06-12 浙江大华技术股份有限公司 Method and device for identifying projectile and storage medium
CN111274982B (en) * 2020-02-04 2023-04-07 浙江大华技术股份有限公司 Method and device for identifying projectile and storage medium

Also Published As

Publication number Publication date
CN104504730B (en) 2017-07-11

Similar Documents

Publication Publication Date Title
CN105260713B (en) A kind of method for detecting lane lines and device
CN104504730A (en) Method for distinguishing parked vehicle from falling object
Broggi et al. Self-calibration of a stereo vision system for automotive applications
Arróspide et al. Homography-based ground plane detection using a single on-board camera
CN101620732A (en) Visual detection method of road driving line
CN111291603B (en) Lane line detection method, device, system and storage medium
CN106815583B (en) Method for positioning license plate of vehicle at night based on combination of MSER and SWT
GB2317066A (en) Method of detecting objects for road vehicles using stereo images
JPH07336669A (en) Stereo image corresponding method and stereo image parallax measuring method
CN108256445B (en) Lane line detection method and system
CN101930609A (en) Approximate target object detecting method and device
CN107392139A (en) A kind of method for detecting lane lines and terminal device based on Hough transformation
Panev et al. Road curb detection and localization with monocular forward-view vehicle camera
US8264526B2 (en) Method for front matching stereo vision
CN101436300B (en) Method and apparatus for dividing barrier
Huang et al. Robust lane marking detection under different road conditions
Williamson et al. A trinocular stereo system for highway obstacle detection
CN103632376A (en) Method for suppressing partial occlusion of vehicles by aid of double-level frames
Foresti et al. A change-detection method for multiple object localization in real scenes
Samadzadegan et al. Automatic lane detection in image sequences for vision-based navigation purposes
CN106875430A (en) Single movement target method for tracing and device based on solid form under dynamic background
CN110443142B (en) Deep learning vehicle counting method based on road surface extraction and segmentation
Coşkun et al. Real time lane detection and tracking system evaluated in a hardware-in-the-loop simulator
CN111260675A (en) High-precision extraction method and system for image real boundary
Dailey et al. An algorithm to estimate vehicle speed using uncalibrated cameras

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170711

Termination date: 20211219

CF01 Termination of patent right due to non-payment of annual fee