CN105404847A - Real-time detection method for object left behind - Google Patents

Real-time detection method for object left behind

Info

Publication number
CN105404847A
CN105404847A (application CN201410472162.5A; granted publication CN105404847B)
Authority
CN
China
Prior art keywords
foreground
period
gaussian
long
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410472162.5A
Other languages
Chinese (zh)
Other versions
CN105404847B (en)
Inventor
单海婧
常青
王子亨
赵倩
张琍
周锦源
侯祖贵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING AEROSPACE AIWEI ELECTRONIC TECHNOLOGY Co Ltd
Beijing Institute of Computer Technology and Applications
Original Assignee
BEIJING AEROSPACE AIWEI ELECTRONIC TECHNOLOGY Co Ltd
Beijing Institute of Computer Technology and Applications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING AEROSPACE AIWEI ELECTRONIC TECHNOLOGY Co Ltd and Beijing Institute of Computer Technology and Applications
Priority to CN201410472162.5A
Publication of CN105404847A
Application granted
Publication of CN105404847B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a real-time detection method for an object left behind. The method comprises the steps of: acquiring video image data; performing background modeling on the video image with an improved Gaussian mixture model algorithm and establishing a long-period background model and a short-period background model respectively, the background being divided into a stable region and a dynamic region; obtaining a long-period foreground F_L by subtracting the long-period background model from the current video frame, and a short-period foreground F_S by subtracting the short-period background model from the current video frame; and analyzing the long-period foreground F_L and the short-period foreground F_S to detect a left-behind object, mark it and raise an alarm. Because the invention models the background, divided into a stable region and a dynamic region, with a fast improved Gaussian mixture model algorithm, both the detection accuracy for left-behind objects and the execution speed of the algorithm are improved.

Description

Real-time detection method for objects left behind
Technical Field
The invention relates to the application of computer vision technology in all-weather high-definition video surveillance systems, and in particular to a method for real-time detection of left-behind objects in an all-weather high-definition video surveillance system under all-weather complex lighting conditions.
Background
The past two years have been a period of rapid development for the security industry. Public safety is a matter of concern for the whole of society, and left-behind object detection is an important measure for preventing danger and ensuring safety. It plays an important role in the security industry, particularly in high-risk settings with latent dangers, such as airports, historical and cultural relic sites, scenic spots and military-controlled areas. A left-behind object detection method is mainly applied in such high-risk places to automatically analyze and detect whether an object has been left behind in, or taken away from, an area the user cares about. When an object is left in an area, or left there for a certain time, the system detects the object, marks it and triggers an alarm so that accidents can be prevented.
Existing left-behind object detection methods are mainly aimed at low-definition monitoring environments. High-risk places impose high requirements on monitoring products, particularly ever higher requirements on definition, and 1080P full-HD network cameras have been deployed at scale in such places to facilitate later zoomed-in viewing and examination of detail. Full-HD network cameras bring a qualitative leap in image definition, but they also bring a series of practical problems to a high-definition monitoring system: the workload of managing and post-analyzing video images multiplies, and the real-time requirements on left-behind object analysis become more demanding.
In addition, existing left-behind object detection methods mainly suffer from the following problems:
At present, a Gaussian mixture model is directly adopted for background modeling, and the parameters of every Gaussian function of every color channel of every pixel must be updated after each frame of video is modeled; the computational load is huge and the real-time requirement of the algorithm is hard to meet, a limitation that is especially evident in a full-HD video monitoring system. Existing shadow detection methods perform poorly; in particular, under all-weather complex lighting changes they cannot reliably separate a moving target from the moving shadow it casts. Finally, current left-behind object detection does not strictly eliminate interfering targets, so the false detection rate for left-behind objects is too high.
Chinese patent application publications CN103714325A and CN102509075A each disclose a left-behind object detection method that uses a Gaussian mixture model for background modeling and establishes a long-period background model and a short-period background model, but both still suffer from a large parameter-update load and low processing speed.
Therefore, how to improve the real-time performance, accuracy and robustness of left-behind object detection in a full-HD video monitoring system under all-weather complex lighting conditions is a problem in urgent need of a solution.
Disclosure of Invention
The invention aims to solve the poor performance of existing left-behind object detection methods under all-weather complex lighting changes, and provides an efficient left-behind object detection method suitable for the real-time, accurate detection of left-behind objects in a full-HD video monitoring system under all-weather complex lighting conditions.
In order to achieve the above object, the real-time left-behind object detection method of the present invention comprises the following steps:
S10, acquiring video image data;
S30, performing background modeling on the video image with an improved Gaussian mixture model algorithm and establishing a long-period background model and a short-period background model respectively, wherein the background is divided into a stable region and a dynamic region; in the stable region, when the frequency with which one Gaussian distribution of a pixel's background model matches each newly arriving pixel value exceeds a set threshold, the Gaussian distribution parameters of that pixel's background model are no longer updated during the next N frames, and after N frames the Gaussian distribution parameters of the Gaussian mixture model are reset and learning restarts until the matching frequency of one Gaussian distribution with the newly arriving pixel values again exceeds the set threshold, the cycle repeating; in the dynamic region, when two or three Gaussian functions of a pixel's background model keep matching the newly arriving pixel values alternately and the sum of their weights exceeds a set threshold, the Gaussian distribution parameters of that pixel's background model are no longer updated during the next M frames, the means of those Gaussian distributions representing the background value of the pixel, and after M frames the Gaussian distribution parameters of the Gaussian mixture model are reset and learning restarts until the sum of the new Gaussian distribution weights again exceeds the set threshold, the cycle repeating, where M and N are integers;
S40, subtracting the long-period background model from the current video frame to obtain a long-period foreground F_L, and subtracting the short-period background model from the current video frame to obtain a short-period foreground F_S;
S50, analyzing the long-period foreground F_L and the short-period foreground F_S, detecting left-behind objects, marking them and raising an alarm.
In the above real-time left-behind object detection method, resetting the Gaussian distribution parameters of the Gaussian mixture model in step S30 comprises the following steps:
S31, the weight corresponding to the Gaussian distribution with the largest ω_{i,t}/σ_{i,t} is set to:

$$\omega_{i,t} = \bar{\mu} + \beta$$

S32, the weights corresponding to the remaining Gaussian distributions are set to:

$$\omega_{i,t} = \frac{1 - \bar{\mu} - \beta}{K - 1}$$

where ω_{i,t} is the weight and σ_{i,t} the variance of the i-th Gaussian distribution at time t, K is the number of Gaussian functions in the Gaussian mixture model, and β is a floating-point number in the interval [0, 1).
In the above real-time left-behind object detection method, in step S10 the video image data is acquired by frame extraction.
In the above real-time left-behind object detection method, a step S20 is further included between step S10 and step S30, and step S20 comprises the following step:
S21, down-sampling the video image by bilinear interpolation.
In the above real-time left-behind object detection method, step S20 further comprises the following step:
S22, performing noise-reduction processing with a Gaussian filter on the video image data acquired in night mode.
In the above real-time left-behind object detection method, step S50 further comprises the following step:
S510, eliminating moving shadows in the long-period foreground F_L and the short-period foreground F_S with a shadow suppression algorithm based on a Gaussian mixture model.
In the above real-time left-behind object detection method, step S510 comprises the following steps:
S511, detecting suspected shadows in the long-period foreground F_L and the short-period foreground F_S using a shadow model based on the HSV color space;
S512, performing learning updates of the Gaussian mixture shadow model on the pixels of the long-period foreground F_L and the short-period foreground F_S judged to be suspected shadows;
S513, judging whether the suspected shadows in the long-period foreground F_L and the short-period foreground F_S are moving shadows, and eliminating the moving shadows from F_L and F_S.
In the above real-time left-behind object detection method, step S50 further includes a step S520, which comprises the following steps:
S521, binarizing the long-period foreground F_L and the short-period foreground F_S to obtain a long-period foreground binary image F_L′ and a short-period foreground binary image F_S′;
S522, processing the long-period foreground binary image F_L′ and the short-period foreground binary image F_S′ with morphological methods;
S523, eliminating small connected regions in the long-period foreground binary image F_L′ and the short-period foreground binary image F_S′ with a region labeling method.
In the above real-time left-behind object detection method, step S523 further comprises the following step:
labeling the connected regions in the long-period foreground binary image F_L′ and the short-period foreground binary image F_S′ by a region growing method and calculating the area R_i of each connected region; if the area R_i of a connected region is less than a predetermined area threshold R_min, the connected region is removed from the foreground.
In the above real-time left-behind object detection method, step S50 further comprises the following step:
S530, classifying the targets in the foreground by analyzing the long-period foreground binary image F_L′ and the short-period foreground binary image F_S′ to obtain the suspected left-behind target objects O_cur, with the following classification rules:
if F_L′(x, y) = 1 and F_S′(x, y) = 1, the pixel at (x, y) belongs to a moving object;
if F_L′(x, y) = 1 and F_S′(x, y) = 0, the pixel at (x, y) belongs to a suspected left-behind target object O_cur;
if F_L′(x, y) = 0 and F_S′(x, y) = 1, the pixel at (x, y) belongs to a scene-change target or noise;
if F_L′(x, y) = 0 and F_S′(x, y) = 0, the pixel at (x, y) belongs to the background.
In the above real-time left-behind object detection method, step S50 further comprises the following step:
S540, detecting taken-away objects O_remove among the suspected left-behind target objects O_cur with a method combining the target contour with a color histogram of the target's center and surrounding regions, and removing the taken-away objects O_remove from the suspected left-behind objects O_cur to obtain the temporarily stationary target objects O_abandon.
In the above real-time left-behind object detection method, step S540 further comprises the following steps:
S541, determining candidate taken-away objects O_Etemp from the target contour features of the target objects O_cur;
S542, determining candidate taken-away objects O_Htemp from the color histogram features of the center and surrounding regions of the target objects O_cur;
S543, determining the taken-away objects O_remove from the candidate taken-away objects O_Etemp and O_Htemp, and removing the taken-away objects O_remove from the suspected left-behind objects O_cur to obtain the temporarily stationary target objects O_abandon, the decision formula distinguishing the temporarily stationary target objects O_abandon from the taken-away objects O_remove being:
in the real-time detection method for the carry-over, the step S50 further includes the following steps:
s550, detecting pedestrians by adopting a pedestrian detection algorithm based on HOG and skin color characteristics, and eliminating temporary static target object OabandonThe target object of the candidate carry-over is obtained.
In the real-time detection method for the carry-over, the step S50 further includes the following steps:
s560, performing blob tracking on the target object of each candidate remnant, and determining the number Num of continuous staying frames of the target object of each candidate remnantiRespectively counting, when the number of the accumulated staying frames of the target object of a certain candidate remnant exceeds a set threshold value TnumWhen is Num, i.e. Numi>TnumAnd marking the target object of the candidate carry-over as a carry-over, triggering a carry-over alarm, and marking a circumscribed rectangular area of the carry-over in the source image according to the logic position of the carry-over.
The left-behind object detection method of the present invention produces the following beneficial effects:
Effect 1: frame extraction and down-sampling of the video images reduce the workload of analyzing full-HD video and improve the real-time performance of the detection algorithm.
Effect 2: selective Gaussian filtering is adopted. Under all-weather monitoring, the daytime mode has little noise and high image quality and needs no filtering, whereas the night mode is strongly affected by noise and has poor image quality, so filtering is applied. This improves the robustness and flexibility of the detection algorithm.
Effect 3: a fast improved Gaussian mixture model algorithm is used for background modeling. The Gaussian mixture model is particularly suitable for outdoor scenes with complex lighting changes, but it is computationally complex, modeling not only the background but also the foreground, and its real-time performance is poor. The fast improved Gaussian mixture model algorithm raises the accuracy of left-behind object detection and increases the execution speed of the algorithm.
Effect 4: a Gaussian mixture shadow model is used to model the shadows in the foreground. Shadow detection works well, and a moving target can be separated from the moving shadow it casts even under all-weather complex lighting changes.
Effect 5: a strict elimination scheme removes the influence of interfering targets (noise, taken-away objects and stationary pedestrians) on left-behind objects, improving the accuracy of the algorithm and reducing the missed-detection and false-detection rates as much as possible.
The invention is described in detail below with reference to the drawings and specific examples, but the invention is not limited thereto.
Drawings
FIG. 1 is a flow chart of the real-time left-behind object detection method for a full-HD video monitoring system under all-weather complex lighting conditions;
FIG. 2 is a flow chart of the algorithm for detecting moving shadows based on a Gaussian mixture shadow model.
Detailed Description
The following detailed description of the embodiments of the present invention with reference to the drawings and specific examples is provided for further understanding the objects, aspects and effects of the present invention, but not for limiting the scope of the appended claims.
The invention provides a real-time left-behind object detection method for a full-HD video monitoring system under all-weather complex lighting conditions. The flow chart of the method is shown in FIG. 1, and the method specifically comprises the following steps:
Step 10: acquiring video image data by frame extraction.
This step pulls one frame out of every 30 from the streaming server for left-behind object detection. A monitoring camera captures at 25 or 30 frames per second, and after further upgrades of full-HD equipment the frame rate can reach 60 frames per second. The volume of image data is very large and consists largely of repeated scenes, so there is no need to process every frame; frame extraction reduces the analysis workload and improves the real-time performance of the detection algorithm.
Step 20: image preprocessing, which first down-samples the video image and then applies Gaussian filtering and noise reduction to video images acquired in night mode.
The step 20 further comprises the steps of:
step 21: the video image is subjected to down-sampling processing by utilizing a bilinear interpolation method, the video image after being zoomed by the interpolation method is high in quality, and the condition of discontinuous pixels is eliminated.
In the bilinear interpolation method, for a target pixel in an image, a floating point coordinate obtained by inverse transformation of coordinates is set as (i + u, j + v), where i and j are both non-negative integers, and u and v are floating point numbers in an interval of [0,1), so that a value f (i + u, j + v) of the pixel can be determined by values of four surrounding pixels corresponding to coordinates (i, j), (i +1, j), (i, j +1), (i +1, j +1) in an original image, that is:
f(i+u,j+v)=(1-u)(1-v)f(i,j)+(1-u)vf(i,j+1)+u(1-v)f(i+1,j)+uvf(i+1,j+1)
where f (i, j) represents the pixel value at the source image (i, j).
Step 22: applying a Gaussian filter for noise reduction to images acquired in night mode; images acquired in daytime mode need no noise reduction. Gaussian filtering is very effective at suppressing noise.
The Gaussian filter is a weighted averaging filter: the value of a pixel is the weighted average of the pixels in the square neighborhood centered on it. A 3 × 3 Gaussian filter template is adopted, namely:

$$\frac{1}{16}\begin{bmatrix}1 & 2 & 1\\ 2 & 4 & 2\\ 1 & 2 & 1\end{bmatrix}$$
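As a minimal sketch of this preprocessing stage in Python with OpenCV (the stream URL, frame-extraction interval, scale factor and night-mode flag are illustrative assumptions, not values fixed by the method):

```python
import cv2

FRAME_INTERVAL = 30  # process one frame out of every 30 (assumed interval)

def preprocess(frame, night_mode, scale=0.5):
    """Down-sample by bilinear interpolation; Gaussian-filter night frames only."""
    small = cv2.resize(frame, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_LINEAR)  # bilinear down-sampling
    if night_mode:
        # 3x3 Gaussian kernel, equivalent to (1/16)[[1,2,1],[2,4,2],[1,2,1]]
        small = cv2.GaussianBlur(small, (3, 3), 0)
    return small

cap = cv2.VideoCapture("rtsp://streaming-server/stream")  # hypothetical source
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % FRAME_INTERVAL == 0:
        img = preprocess(frame, night_mode=False)
        # ... pass img on to background modeling (step 30)
    idx += 1
```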
step 30: a fast Gaussian mixture model improvement algorithm is adopted to model the background, and a long-period background model and a short-period background model are respectively established.
The Gaussian mixture model algorithm is to establish a Gaussian mixture model containing K (3-5) Gaussian functions for each color channel of each pixel of a video image, in order to process complex scenes and obtain a good target detection effect, the larger the K is, the better the K is, and each function in the K Gaussian functions contains a weight omega, a mean value mu and a variance2Three parameters of each gaussian function of each color channel of each pixel point are required to be updated after each frame of new video image is obtained, the calculation amount of the algorithm is very large, and the requirement on real-time performance is difficult to meet.
The fast Gaussian mixture model improvement algorithm can process complex scenes to obtain a good target detection effect, improves the processing speed, updates Gaussian parameters of pixel points not in each frame of image, and reduces the parameter updating frequency of the Gaussian mixture model.
Quick Gaussian mixture model modificationThe algorithm is further used for noticing that only a small part of the area in the monitoring scene is relatively chaotic, and a large part of the area is static, and the background is divided into a stable area (a large part of the static area) and a dynamic area (a small part of the chaotic area). The pixel points in the stable region always present the same pixel value, and the pixel value is always matched with the same Gaussian distribution in the mixed Gaussian background model, the matching frequency of the Gaussian distribution and the newly entered pixel value in the image sequence is very high, and the distribution weight omega is learnedi,tWill be large and varianceSmaller, where ω isi,tThe weight of the ith gaussian distribution at time t,is the variance of the ith Gaussian distribution at time t, then ωi,ti,tThe maximum value is kept for a long time, so that the average value of the same Gaussian distribution can be used as the pixel value of the background in a long time, and therefore the background parameter model of the pixel point in the stable region does not need to be updated every frame. And improving a mixed Gaussian background modeling algorithm, wherein when the frequency of matching of a certain Gaussian distribution with each newly-entered pixel value in a background model of a certain pixel is higher than a certain threshold value, each Gaussian distribution parameter of the background model of the pixel in the next N (100-200) frame image is not updated, and after N (100-200) frames, each Gaussian distribution parameter omega is re-updatedi,tAnd starting learning under a relatively even state until the matching frequency of the Gaussian distribution and the newly entered pixel value is greater than a set threshold value, and repeating the steps in a circulating way. In addition to the improved algorithm for the stable region, the method is also used in the dynamic region to increase the speed of the algorithm. In a dynamic area with chaotic repeated motion, a pixel point always presents a plurality of values repeatedly, a Gaussian model is continuously trained through a newly obtained pixel value, two or three Gaussian functions are continuously and alternately matched with the newly obtained pixel value, and the weights omega of the Gaussian functions arei,tWill be large and varianceSmaller, ωi,ti,tWeight omega between several Gaussian functions, which remains large for a long timei,tThe difference is not large. When the several Gaussian function weights ωi,tWhen the sum is larger than a certain threshold value, each Gaussian distribution parameter of the background model of the pixel point in the next M (10-50) frame image is not updated any more, the average value of the Gaussian distributions represents the background value of the pixel point, after the M (10-50) frame image, each Gaussian distribution parameter of the mixed Gaussian model is reset and the learning is restarted until the sum of new Gaussian distribution weights is larger than the certain threshold value, and the process is repeated.
The Gaussian distribution weights ω_{i,t} of a pixel are reset to a relatively even state as follows:
1) the weight corresponding to the Gaussian distribution with the largest ω_{i,t}/σ_{i,t} is set to:

$$\omega_{i,t} = \bar{\mu} + \beta$$

2) the weights corresponding to the remaining Gaussian distributions are set to:

$$\omega_{i,t} = \frac{1 - \bar{\mu} - \beta}{K - 1}$$

where β is a floating-point number in the interval [0, 1).
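A minimal per-pixel sketch of this selective-update idea (the match test, learning rate, thresholds, N and the even-reset base value μ̄ = 1/K are illustrative assumptions):

```python
import numpy as np

K = 3             # Gaussians per pixel (3 to 5 in the text)
N_FREEZE = 150    # freeze updates for N frames (100 to 200 in the text)
BETA = 0.1        # assumed floating-point number in [0, 1)
MU_BAR = 1.0 / K  # assumed base weight of the "relatively even" state

class StablePixelModel:
    """One pixel's Gaussian mixture with the stable-region freeze/reset cycle."""

    def __init__(self):
        self.w = np.full(K, 1.0 / K)        # weights
        self.mu = np.random.rand(K) * 255   # means
        self.var = np.full(K, 30.0 ** 2)    # variances
        self.match_freq = np.zeros(K)       # running match frequency per Gaussian
        self.frozen_until = -1              # frame index until which updates pause

    def update(self, x, t, freq_thresh=0.9, lr=0.01):
        if t < self.frozen_until:
            return                          # stable: skip all parameter updates
        if t == self.frozen_until:
            self.reset_weights()            # after N frames, reset and relearn
            self.match_freq[:] = 0.0
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        i = int(np.argmin(d))               # closest Gaussian
        self.match_freq *= (1.0 - lr)
        if d[i] < 2.5:                      # matched
            self.match_freq[i] += lr
            self.w *= (1.0 - lr); self.w[i] += lr
            self.mu[i] += lr * (x - self.mu[i])
            self.var[i] += lr * ((x - self.mu[i]) ** 2 - self.var[i])
            if self.match_freq[i] > freq_thresh:
                self.frozen_until = t + N_FREEZE  # stop updating for N frames

    def reset_weights(self):
        """The 'relatively even' reset of steps S31/S32."""
        top = int(np.argmax(self.w / np.sqrt(self.var)))
        self.w[:] = (1.0 - MU_BAR - BETA) / (K - 1)
        self.w[top] = MU_BAR + BETA
```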
Step 40: subtracting the long-period background model and the short-period background model from the current video frame to obtain the long-period foreground F_L and the short-period foreground F_S respectively.
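A sketch of the dual-background foreground extraction, using two OpenCV MOG2 subtractors with different learning rates as stand-ins for the long-period and short-period models (the learning rates are illustrative assumptions):

```python
import cv2

# The slowly adapting model approximates the long-period background,
# the quickly adapting model the short-period background.
bg_long = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
bg_short = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def dual_foregrounds(frame):
    f_long = bg_long.apply(frame, learningRate=1e-4)    # long period: slow update
    f_short = bg_short.apply(frame, learningRate=1e-2)  # short period: fast update
    return f_long, f_short
```

An object set down in the scene is absorbed quickly by the short-period model but persists in the long-period foreground, which is exactly the combination exploited by the classification in step 530.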
Step 50: analyzing the long-period foreground F_L and the short-period foreground F_S, detecting left-behind objects, marking them and raising alarms.
Specifically, step 50 includes the steps of:
step 510: eliminating long-period foreground F by adopting shadow suppression algorithm based on mixed gaussLAnd short cycle foreground FSMoving shadows in (1).
Due to the fact that a monitoring scene is all-weather, light rays are complex, the flow of people is large, shadows exist under the conditions of strong light and weak light, even the shadows and other targets are overlapped, the dangerousness of the targets cannot be judged accurately in real time, and the alarm cannot be given in time. So the shadow in the foreground needs to be eliminated when detecting the carry-over.
Shadow suppression algorithms in the existing remnant detection methods mainly adopt shadow suppression algorithms based on color spaces, and the methods are effective in detection and suppression of weak shadows, but are not ideal in suppression of strong shadows under the condition of strong illumination.
A shadow suppression algorithm based on mixed Gaussian firstly utilizes the characteristic of shadow in HSV color space to judge whether a pixel detected as a moving foreground is a suspected shadow, and a non-suspected shadow is a moving target. If the foreground pixel is matched with the effective shadow state of the Gaussian mixture shadow model, the moving foreground pixel is judged as a moving shadow, otherwise, the moving foreground pixel is judged as a moving target. The shadow inhibition algorithm can effectively inhibit the influence of the shadow on the detection of the moving target, has good detection effect on weak shadow, can detect strong shadow to a great extent, and has strong real-time property.
A flow chart of an algorithm for detecting moving shadows based on a gaussian mixture shadow model is shown in fig. 2.
The step 510 further comprises the steps of:
step 511: detection of Long-period Foreground FLAnd short cycle foreground FSIs detected by the false shadow in (1).
The decision formula for detecting the shadow by using the shadow model based on the HSV color space and judging whether the foreground pixel is a suspected shadow is as follows:
SP ( x , y ) = α S ≤ I V ( x , y ) B V ( x , y ) ≤ β S ^ 1 , ( I S ( x , y ) - B S ( x , y ) ) ≤ τ S ^ | I H ( x , y ) - B H ( x , y ) | ≤ τ R 0 ,
in the formula IH(x,y)、IS(x,y)、IV(x, y) and SH(x,y)、SS(x,y)、SV(x, y) represent H, S, V components of the new input value I (x, y) and the background pixel value S (x, y) of the pixel at the coordinate point (x, y), respectively, if I (x, y) is determined to be a shadow, the point mask SP (x, y) is set to 1, otherwise SP (x, y) is set to 0, parameter 0 ≦ αS≤βSLess than or equal to 1, parameter αSThe value is taken to account for the intensity of the shadow, α when the more intense the shadow cast against the background isSThe smaller, βSTo enhance the robustness to noise, i.e. the luminance of the current frame cannot be too similar to the background. Parameter tauSLess than zero, parameter τRThe selection of (A) is mainly adjusted by experience.
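A sketch of this suspected-shadow test in Python with OpenCV (all four thresholds are illustrative assumptions; OpenCV stores hue in [0, 180), and hue wrap-around is ignored for brevity):

```python
import cv2
import numpy as np

ALPHA_S, BETA_S, TAU_S, TAU_R = 0.4, 0.9, -0.1, 30.0  # assumed thresholds

def suspected_shadow_mask(frame_bgr, bg_bgr, fg_mask):
    """SP mask: 1 where a moving-foreground pixel looks like a cast shadow."""
    I = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    B = cv2.cvtColor(bg_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    Ih, Is, Iv = I[..., 0], I[..., 1] / 255.0, I[..., 2] / 255.0
    Bh, Bs, Bv = B[..., 0], B[..., 1] / 255.0, B[..., 2] / 255.0
    ratio = Iv / np.maximum(Bv, 1e-6)
    sp = ((ALPHA_S <= ratio) & (ratio <= BETA_S)
          & ((Is - Bs) <= TAU_S)
          & (np.abs(Ih - Bh) <= TAU_R)
          & (fg_mask > 0))
    return sp.astype(np.uint8)
```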
Step 512: performing learning updates of the Gaussian mixture shadow model on the pixels of the long-period foreground F_L and the short-period foreground F_S judged to be suspected shadows. To ensure that the Gaussian mixture shadow model is fully trained, the background is divided into regions that need not be connected but within which the pixel color values are similar; whenever a pixel value in a region is detected as a suspected shadow, that pixel is used to update the parameters of the Gaussian mixture shadow models of all points in the region.
The Gaussian mixture shadow model differs from the Gaussian mixture background model used for moving-foreground detection in that the background model learns from all input values of a pixel, whereas the shadow model learns and updates only from input pixel values that have been detected as foreground and judged to be suspected shadows. In the Gaussian mixture shadow model, an input suspected-shadow value matches a Gaussian distribution of the shadow model if:

$$\left| I_t - \mu^S_{i,t-1} \right| \le D_S\,\sigma^S_{i,t-1}, \qquad i = 1, \ldots, K_S$$

where the superscript S denotes the Gaussian mixture shadow model. The parameters of the matched distribution are updated according to the following rules:

$$\omega^S_{i,t} = (1-\alpha_S)\,\omega^S_{i,t-1} + \alpha_S$$

$$\mu^S_{i,t} = (1-\rho_S)\,\mu^S_{i,t-1} + \rho_S I_t$$

$$(\sigma^S_{i,t})^2 = (1-\rho_S)\,(\sigma^S_{i,t-1})^2 + \rho_S\,(I_t - \mu^S_{i,t})^2$$

If no Gaussian distribution matches the suspected-shadow pixel value I_t, the Gaussian distribution with the smallest weight is replaced by a new Gaussian distribution with mean I_t, a larger initial standard deviation and a smaller initial weight. The remaining Gaussian distributions keep the same means and variances, but their weights decay, i.e.:

$$\omega^S_{i,t} = (1-\alpha_S)\,\omega^S_{i,t-1}$$

Finally, the weights of all Gaussian distributions are normalized and the distributions are sorted by ω^S/σ^S in descending order; the first N_S distributions are considered shadow distributions if they satisfy the following criterion:

$$\sum_{k=1}^{N_S} \omega^S_{k,t} \ge \tau_S$$
step 513: determine long period foreground FLAnd short cycle foreground FSWhether the suspected shadow in (1) is a moving shadow or not and eliminates the long-period foreground FLAnd short cycle foreground FSThe foreground of eliminating the moving shadow is the moving target.
The decision formula for judging whether the suspected shadow is the moving shadow is as follows:
in the formula, the superscript S represents a gaussian mixture model, i is 1,2, Λ, Kt. If I (x, y) is determined to be a moving shadow, then the point mask SPP (x, y) is set to 1, otherwise SPP (x, y) is set to 0. If the shadow is suspected ItAnd the absolute value of the difference between the mean value of each shadow distribution is less than or equal to the standard deviation of the distributionSMultiple, then ItAnd judging the shadow as a moving shadow, otherwise, judging the shadow as a moving target.
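A per-pixel sketch of the shadow-model update (step 512) and the moving-shadow decision (step 513); the learning rates, thresholds and initial values are illustrative assumptions:

```python
import numpy as np

K_S, D_S, ALPHA_S, RHO_S, TAU_SH = 3, 2.5, 0.01, 0.05, 0.7  # assumed values

class ShadowMixture:
    """Gaussian mixture over the suspected-shadow values of one pixel/region."""

    def __init__(self):
        self.w = np.full(K_S, 1.0 / K_S)
        self.mu = np.linspace(60.0, 180.0, K_S)  # arbitrary initial means
        self.sigma = np.full(K_S, 30.0)

    def learn(self, it):
        """Update the mixture from one suspected-shadow value it."""
        match = np.abs(it - self.mu) <= D_S * self.sigma
        if match.any():
            i = int(np.argmax(match))
            self.w *= (1.0 - ALPHA_S); self.w[i] += ALPHA_S
            self.mu[i] = (1.0 - RHO_S) * self.mu[i] + RHO_S * it
            var = (1.0 - RHO_S) * self.sigma[i] ** 2 \
                + RHO_S * (it - self.mu[i]) ** 2
            self.sigma[i] = np.sqrt(var)
        else:
            self.w *= (1.0 - ALPHA_S)        # decay all existing weights
            j = int(np.argmin(self.w))       # replace the weakest Gaussian
            self.mu[j], self.sigma[j], self.w[j] = it, 60.0, 0.05
        self.w /= self.w.sum()               # normalize the weights

    def is_moving_shadow(self, it):
        """Step-513 test: does it match one of the leading shadow distributions?"""
        order = np.argsort(self.w / self.sigma)[::-1]  # sort by w/sigma, descending
        acc, shadow_idx = 0.0, []
        for i in order:
            shadow_idx.append(i)
            acc += self.w[i]
            if acc >= TAU_SH:
                break
        return any(abs(it - self.mu[i]) <= D_S * self.sigma[i] for i in shadow_idx)
```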
Step 520: post-processing the long-period foreground F_L and the short-period foreground F_S respectively, to guarantee the integrity of the targets in the foreground and to eliminate the influence of small-area targets (noise points), obtaining the long-period foreground binary image F_L′ and the short-period foreground binary image F_S′.
The step 520 further comprises the steps of:
step 521: for long period foreground FLAnd short cycle foreground FSCarrying out binarization processing to obtain a long-period foreground binary image FL /And short-period foreground binary image FS /
Step 522: processing long-period foreground binary image F by adopting morphological methodL /And short-period foreground binary image FS /The integrity of the target in the foreground is guaranteed. To FL /And FS /And performing closed operation (firstly expanding and then corroding) on the image, filling fine holes in the target object according to different structural elements, and smoothing the boundary of the target object without obviously changing the area of the target object.
Step 523: eliminating F by using region marking methodL /And FS /Medium and small area targets (noise spots). Firstly, marking connected regions in the foreground by adopting a region growing method, and calculating the area R of each connected regioni(number of pixels included in a region), if the area R of the connected regioniLess than a predetermined area threshold RminThen the connected region is removed from the foreground.
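A sketch of this post-processing in Python with OpenCV; the kernel size and area threshold are illustrative, and OpenCV's connected-components routine stands in here for the region-growing labeling detailed next:

```python
import cv2
import numpy as np

def postprocess(fg, r_min=50):
    """Binarize, close, and drop connected regions smaller than r_min pixels."""
    _, binary = cv2.threshold(fg, 127, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # dilate, then erode
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
    out = np.zeros_like(closed)
    for i in range(1, n):                        # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= r_min:  # keep regions with R_i >= R_min
            out[labels == i] = 255
    return out
```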
The region labeling method uses the idea of region growing: each growth pass labels one connected region, and all connected regions are labeled in a single scan of the image. The algorithm steps are as follows:
1) input the foreground image to be labeled, and initialize a label matrix of the same size as the input image, a queue, and a label counter Index;
2) scan the foreground image from left to right and top to bottom; when an unlabeled foreground pixel p is encountered, increment Index by 1, label p in the label matrix (set the corresponding entry to Index), scan the eight neighbors of p, and for any unlabeled foreground pixels among them, label them in the label matrix and put them into the queue as seeds for region growth;
3) while the queue is not empty, take a growth seed point p1 from the queue, scan the eight neighbors of p1, and for any unlabeled foreground pixels, label them in the label matrix and put them into the queue;
4) repeat step 3 until the queue is empty; one connected region has then been labeled;
5) return to step 2 until the whole image has been scanned, yielding the label matrix and the number Index of connected regions.
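A direct Python rendering of these five steps (breadth-first region growth over the eight-neighborhood):

```python
from collections import deque
import numpy as np

def label_regions(fg):
    """Label connected regions by region growing; returns (labels, count)."""
    h, w = fg.shape
    labels = np.zeros((h, w), dtype=np.int32)
    index = 0
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                 (0, 1), (1, -1), (1, 0), (1, 1)]
    for y in range(h):
        for x in range(w):
            if fg[y, x] and labels[y, x] == 0:   # new unlabeled foreground pixel
                index += 1
                labels[y, x] = index
                queue = deque([(y, x)])
                while queue:                     # grow the current region
                    cy, cx = queue.popleft()
                    for dy, dx in neighbors:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and fg[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = index
                            queue.append((ny, nx))
    return labels, index
```

The per-region areas R_i can then be read off with np.bincount(labels.ravel()) and compared against R_min.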
Step 530: classifying the targets in the foreground by analyzing the long-period foreground binary image F_L′ and the short-period foreground binary image F_S′ to obtain the suspected left-behind target objects O_cur. The rules for classification are as follows:
1) F_L′(x, y) = 1 and F_S′(x, y) = 1: the pixel at (x, y) belongs to a moving object;
2) F_L′(x, y) = 1 and F_S′(x, y) = 0: the pixel at (x, y) belongs to a suspected left-behind target object O_cur;
3) F_L′(x, y) = 0 and F_S′(x, y) = 1: the pixel at (x, y) belongs to a scene-change target or noise;
4) F_L′(x, y) = 0 and F_S′(x, y) = 0: the pixel at (x, y) belongs to the background.
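These four rules reduce to elementwise mask logic; a minimal sketch over 0/1 masks:

```python
import numpy as np

def classify_pixels(fl, fs):
    """Split pixels by the four F_L'/F_S' combinations (fl, fs are 0/1 arrays)."""
    moving = (fl == 1) & (fs == 1)      # in both foregrounds: moving object
    left = (fl == 1) & (fs == 0)        # long-period only: suspected left-behind
    change = (fl == 0) & (fs == 1)      # short-period only: scene change or noise
    background = (fl == 0) & (fs == 0)  # in neither: background
    return moving, left, change, background
```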
step 540: method for detecting removed object O by combining target contour with target central peripheral area color histogramremoveEliminating the removed material OremoveEliminating the suspected objects left by the influence of the foreground on the targetcurIs taken away of the substance OremoveObtaining a temporarily stationary target object Oabandon
Judging taken-away object O in existing remnant detection methodremoveThe false alarm cannot be well removed by the edge matching method mainly according to the contour features of the target under the condition that the background contour is complex, and the defect that the edge matching color information is lost can be overcome by the color histogram analysis method of the peripheral region of the center of the target.
Target object O of suspected remainscurIncluding a temporarily stationary target object OabandonAnd removing the substance OremoveThe step is mainly to distinguish the temporarily stationary object OabandonAnd removing the substance OremoveAnd removing misjudgment.
The step 540 further comprises the steps of:
step 541: judging candidate taken object O according to target contour characteristicsEtemp
Extracting a target object O of suspected remnantscurAnd the ROI edge points in the current video frame, and counting OcurTotal number N of edge pixels per ROI in foregroundcurAnd the total number N of corresponding ROI edge pixels in the current video frametemp. Judging candidate taken object O according to target contour characteristicsEtempThe decision formula for the judgment is as follows:
wherein, TeIs the threshold value for judging the difference value of the edge pixel points of the taken object.
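The decision formula itself is not reproduced in the text; the sketch below assumes a simple rule in which an ROI becomes a candidate taken-away object when the relative difference of its edge-pixel counts exceeds T_e:

```python
import cv2
import numpy as np

def is_candidate_taken_away(fg_roi, frame_roi, t_e=0.5):
    """Edge-count comparison between a foreground ROI and the live-frame ROI.

    The comparison rule and the Canny thresholds are assumptions, since the
    patent's decision formula is not reproduced here.
    """
    n_cur = int(np.count_nonzero(cv2.Canny(fg_roi, 50, 150)))
    n_temp = int(np.count_nonzero(cv2.Canny(frame_roi, 50, 150)))
    if n_cur == 0:
        return False
    return abs(n_cur - n_temp) / float(n_cur) > t_e
```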
Step 542: judging candidate taken-away objects O_Htemp from the color histogram features of the target's center and surrounding regions.
The decision compares the histogram distance D with a distance threshold T_d between a temporarily stationary object and a taken-away object: the smaller D is, the more similar the two histograms are and the more likely a temporarily stationary object has been detected; otherwise, a taken-away object is more likely. The value of D is calculated as:

$$D(H_C, H_E) = 1 - \frac{\sum_i \sqrt{H_C(i)\,H_E(i)}}{\sqrt{\sum_i H_C(i) \cdot \sum_i H_E(i)}}$$

where H_E and H_C denote the histograms of the surrounding region A_E and the central region A_C respectively; the two histograms have the same number of gray levels and both are gray-normalized before the calculation.
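A sketch of the center-surround histogram distance, assuming a grayscale frame and a rectangular ROI; the margin and bin count are illustrative, and the surround histogram is obtained by subtracting the center's counts from the enclosing patch's counts:

```python
import cv2
import numpy as np

def center_surround_distance(gray, box, margin=10, bins=64):
    """Bhattacharyya-style distance D between center and surround histograms."""
    x, y, w, h = box
    y0, x0 = max(y - margin, 0), max(x - margin, 0)
    y1 = min(y + h + margin, gray.shape[0])
    x1 = min(x + w + margin, gray.shape[1])
    center = gray[y:y + h, x:x + w]
    patch = gray[y0:y1, x0:x1]
    h_c = cv2.calcHist([center], [0], None, [bins], [0, 256])
    h_full = cv2.calcHist([patch], [0], None, [bins], [0, 256])
    h_e = np.maximum(h_full - h_c, 0.0)  # surround = patch counts minus center counts
    h_c = h_c / max(h_c.sum(), 1e-12)    # gray-normalize both histograms
    h_e = h_e / max(h_e.sum(), 1e-12)
    num = float(np.sum(np.sqrt(h_c * h_e)))
    den = float(np.sqrt(h_c.sum() * h_e.sum())) + 1e-12
    return 1.0 - num / den
```

With both histograms normalized, the denominator is essentially 1, so D is one minus the Bhattacharyya coefficient of the two distributions.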
Step 543: determining the taken-away objects O_remove from the candidate taken-away objects O_Etemp and O_Htemp, and removing the taken-away objects O_remove from the suspected left-behind objects O_cur to obtain the temporarily stationary target objects O_abandon.
The decision formula for distinguishing a temporarily stationary target object O_abandon from a taken-away object O_remove is as follows:
step 550: detecting pedestrians by adopting a pedestrian detection algorithm based on HOG and skin color characteristics, eliminating the influence of the pedestrians which are suddenly static after moving for a distance on the target in the foreground, and eliminating the temporarily static target object OabandonThe target object of the candidate carry-over is obtained.
The pedestrian detection algorithm based on the HOG and the skin color features not only fully utilizes the excellent characteristics of the HOG features, but also overcomes the problems of large HOG vector dimension and slow calculation, and the addition of the skin color features obviously improves the detection precision and reduces the false detection rate and the missing detection rate of pedestrians.
The method mainly comprises the steps that HOG and skin color feature descriptors of positive and negative samples as many as possible are firstly applied to train an SVM classifier, the training of the SVM classifier is an off-line process, the more positive and negative sample data are selected, the wider the coverage is, and the more accurate the classification result of the classifier is obtained through training. Then extracting a temporarily static target object O in the foregroundabandonThe HOG and the skin color feature descriptor are classified by using a trained SVM classifier, and then the temporary static target object O can be judgedabandonWhether it is a pedestrian object that is temporarily stationary. Rejecting temporarily stationary target objects OabandonTo obtain a target object of the candidate carry-over.
The algorithm steps for extracting the HOG and skin-color feature descriptor of a temporarily stationary target object O_abandon are as follows:
1) treat each temporarily stationary target object O_abandon as an ROI of the current frame, and convert the ROI to grayscale;
2) normalize the color space of the ROI with a Gamma correction method, adjusting the image contrast, reducing the influence of local shadows and illumination changes, and suppressing noise interference;
3) compute the gradient (magnitude and direction) of each pixel of the ROI, in order to capture contour information while further attenuating the interference of illumination;
4) divide the ROI into a number of small cells;
5) accumulate the gradient histogram of each cell to form each cell's feature descriptor;
6) group every few cells into a block, and concatenate the feature descriptors of all cells in a block to obtain the block's HOG feature descriptor. To improve the computation speed, an integral vector image is introduced when computing the HOG features, avoiding the repeated computation caused by block overlap;
7) in each block, compute n-bin histograms of the pixel values in the Cb and Cr spaces respectively, with the histogram bin boundaries determined by R_Cb and R_Cr, giving each block a 2×n-dimensional skin-color feature descriptor, i.e. a 2×n-dimensional vector representing skin color is appended to each block. Human skin color clusters well in the YCrCb space, where skin values concentrate in a limited range of the CrCb plane; this property can be used to distinguish skin color from the background and other colors;
8) select blocks with stronger classification ability from all the blocks of the ROI as the final features, and concatenate the HOG and skin-color feature descriptors of the selected blocks to obtain the HOG and skin-color feature descriptor of the ROI, which is the final feature vector used for classification.
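A compact sketch of this descriptor and the offline/online SVM use, with OpenCV's stock HOG parameters standing in for the block/cell layout above (the window size, bin count and the omitted block selection are illustrative assumptions):

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()  # default 64x128 window, 8x8 cells, 9 orientation bins

def skin_color_descriptor(roi_bgr, n_bins=8):
    """n-bin Cr and Cb histograms as a 2*n-dim skin-color feature (assumed ranges)."""
    ycrcb = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2YCrCb)
    h_cr = cv2.calcHist([ycrcb], [1], None, [n_bins], [0, 256]).ravel()
    h_cb = cv2.calcHist([ycrcb], [2], None, [n_bins], [0, 256]).ravel()
    return np.concatenate([h_cr, h_cb]) / max(roi_bgr.size, 1)

def describe(roi_bgr):
    """HOG + skin-color feature vector for one candidate ROI."""
    resized = cv2.resize(roi_bgr, (64, 128))
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
    return np.concatenate([hog.compute(gray).ravel(),
                           skin_color_descriptor(resized)])

# Offline: train an SVM on labeled descriptors X (rows) with labels y in {-1, 1}:
#   svm = cv2.ml.SVM_create()
#   svm.train(X.astype(np.float32), cv2.ml.ROW_SAMPLE, y.astype(np.int32))
# Online: a candidate counts as a pedestrian if
#   svm.predict(describe(roi)[None].astype(np.float32))[1][0, 0] > 0
```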
Step 560: performing blob tracking on the target object of each candidate left-behind object and separately counting the number Num_i of frames each candidate stays. When the accumulated stay-frame count of an object exceeds a set threshold T_num, i.e. Num_i > T_num, the target object of the candidate is marked as a left-behind object and a left-behind object alarm is triggered; the circumscribed rectangular area of the left-behind object is marked in the source image according to its logical position.
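A minimal dwell counter keyed by blob identity; the bounding-box IoU matching rule, its threshold and T_num are illustrative assumptions (boxes are (x, y, w, h) tuples):

```python
T_NUM = 75  # assumed stay-frame threshold over the extracted frames

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    return inter / float(aw * ah + bw * bh - inter) if inter else 0.0

def update_dwell(tracks, candidates):
    """tracks: {box: frames_stayed}; candidates: boxes from step 550."""
    new_tracks, alarms = {}, []
    for box in candidates:
        prev = next((b for b in tracks if iou(b, box) > 0.5), None)
        count = tracks.get(prev, 0) + 1
        new_tracks[box] = count
        if count > T_NUM:
            alarms.append(box)  # mark circumscribed rectangle and raise alarm
    return new_tracks, alarms
```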
The present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof, and it should be understood that various changes and modifications can be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (14)

1. A real-time detection method for a left-behind object, characterized by comprising the following steps:
S10, acquiring video image data;
S30, performing background modeling on the video image with an improved Gaussian mixture model algorithm and establishing a long-period background model and a short-period background model respectively, wherein the background is divided into a stable region and a dynamic region; in the stable region, when the frequency with which one Gaussian distribution of a pixel's background model matches each newly arriving pixel value exceeds a set threshold, the Gaussian distribution parameters of that pixel's background model are no longer updated during the next N frames, and after N frames the Gaussian distribution parameters of the Gaussian mixture model are reset and learning restarts until the matching frequency of one Gaussian distribution with the newly arriving pixel values again exceeds the set threshold, the cycle repeating; in the dynamic region, when two or three Gaussian functions of a pixel's background model keep matching the newly arriving pixel values alternately and the sum of their weights exceeds a set threshold, the Gaussian distribution parameters of that pixel's background model are no longer updated during the next M frames, the means of those Gaussian distributions representing the background value of the pixel, and after M frames the Gaussian distribution parameters of the Gaussian mixture model are reset and learning restarts until the sum of the new Gaussian distribution weights again exceeds the set threshold, the cycle repeating, where M and N are integers;
S40, subtracting the long-period background model from the current video frame to obtain a long-period foreground F_L, and subtracting the short-period background model from the current video frame to obtain a short-period foreground F_S;
S50, analyzing the long-period foreground F_L and the short-period foreground F_S, detecting left-behind objects, marking them and raising an alarm.
2. The real-time left-behind object detection method according to claim 1, wherein resetting the Gaussian distribution parameters of the Gaussian mixture model in step S30 comprises the following steps:
S31, the weight corresponding to the Gaussian distribution with the largest ω_{i,t}/σ_{i,t} is set to:

$$\omega_{i,t} = \bar{\mu} + \beta$$

S32, the weights corresponding to the remaining Gaussian distributions are set to:

$$\omega_{i,t} = \frac{1 - \bar{\mu} - \beta}{K - 1}$$

where ω_{i,t} is the weight and σ_{i,t} the variance of the i-th Gaussian distribution at time t, K is the number of Gaussian functions in the Gaussian mixture model, and β is a floating-point number in the interval [0, 1).
3. The real-time left-behind object detection method according to claim 1, wherein in step S10 the video image data is acquired by frame extraction.
4. The real-time left-behind object detection method according to claim 1, further comprising a step S20 between step S10 and step S30, wherein step S20 comprises the following step:
S21, down-sampling the video image by bilinear interpolation.
5. The real-time left-behind object detection method according to claim 4, wherein step S20 further comprises the following step:
S22, performing noise-reduction processing with a Gaussian filter on the video image data acquired in night mode.
6. The real-time left-behind object detection method according to claim 1, wherein step S50 further comprises the following step:
S510, eliminating moving shadows in the long-period foreground F_L and the short-period foreground F_S with a shadow suppression algorithm based on a Gaussian mixture model.
7. The real-time left-behind object detection method according to claim 6, wherein step S510 comprises the following steps:
S511, detecting suspected shadows in the long-period foreground F_L and the short-period foreground F_S using a shadow model based on the HSV color space;
S512, performing learning updates of the Gaussian mixture shadow model on the pixels of the long-period foreground F_L and the short-period foreground F_S judged to be suspected shadows;
S513, judging whether the suspected shadows in the long-period foreground F_L and the short-period foreground F_S are moving shadows, and eliminating the moving shadows from F_L and F_S.
8. The real-time left-behind object detection method according to claim 7, wherein step S50 further comprises a step S520, which comprises the following steps:
S521, binarizing the long-period foreground F_L and the short-period foreground F_S to obtain a long-period foreground binary image F_L′ and a short-period foreground binary image F_S′;
S522, processing the long-period foreground binary image F_L′ and the short-period foreground binary image F_S′ with morphological methods;
S523, eliminating small connected regions in the long-period foreground binary image F_L′ and the short-period foreground binary image F_S′ with a region labeling method.
9. The real-time left-behind object detection method according to claim 8, wherein step S523 further comprises the following step:
labeling the connected regions in the long-period foreground binary image F_L′ and the short-period foreground binary image F_S′ by a region growing method and calculating the area R_i of each connected region; if the area R_i of a connected region is less than a predetermined area threshold R_min, the connected region is removed from the foreground.
10. The real-time left-behind object detection method according to claim 9, wherein step S50 further comprises the following step:
S530, classifying the targets in the foreground by analyzing the long-period foreground binary image F_L′ and the short-period foreground binary image F_S′ to obtain the suspected left-behind target objects O_cur, with the following classification rules:
if F_L′(x, y) = 1 and F_S′(x, y) = 1, the pixel at (x, y) belongs to a moving object;
if F_L′(x, y) = 1 and F_S′(x, y) = 0, the pixel at (x, y) belongs to a suspected left-behind target object O_cur;
if F_L′(x, y) = 0 and F_S′(x, y) = 1, the pixel at (x, y) belongs to a scene-change target or noise;
if F_L′(x, y) = 0 and F_S′(x, y) = 0, the pixel at (x, y) belongs to the background.
11. The real-time left-behind object detection method according to claim 10, wherein step S50 further comprises the following step:
S540, detecting taken-away objects O_remove among the suspected left-behind target objects O_cur with a method combining the target contour with a color histogram of the target's center and surrounding regions, and removing the taken-away objects O_remove from the suspected left-behind objects O_cur to obtain the temporarily stationary target objects O_abandon.
12. The real-time left-behind object detection method according to claim 11, wherein step S540 further comprises the following steps:
S541, determining candidate taken-away objects O_Etemp from the target contour features of the target objects O_cur;
S542, determining candidate taken-away objects O_Htemp from the color histogram features of the center and surrounding regions of the target objects O_cur;
S543, determining the taken-away objects O_remove from the candidate taken-away objects O_Etemp and O_Htemp, and removing the taken-away objects O_remove from the suspected left-behind objects O_cur to obtain the temporarily stationary target objects O_abandon, the decision formula distinguishing the temporarily stationary target objects O_abandon from the taken-away objects O_remove being:
13. The real-time left-behind object detection method according to claim 11, wherein step S50 further comprises the following step:
S550, detecting pedestrians with a pedestrian detection algorithm based on HOG and skin-color features, and eliminating pedestrian targets from the temporarily stationary target objects O_abandon to obtain the candidate left-behind target objects.
14. The real-time left-behind object detection method according to claim 13, wherein step S50 further comprises the following step:
S560, performing blob tracking on the target object of each candidate left-behind object and separately counting the number Num_i of frames each candidate's target object stays; when the accumulated stay-frame count of a candidate's target object exceeds a set threshold T_num, i.e. Num_i > T_num, the target object of the candidate is marked as a left-behind object, a left-behind object alarm is triggered, and the circumscribed rectangular area of the left-behind object is marked in the source image according to its logical position.
CN201410472162.5A 2014-09-16 2014-09-16 Real-time detection method for objects left behind Active CN105404847B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410472162.5A CN105404847B (en) Real-time detection method for objects left behind

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410472162.5A CN105404847B (en) Real-time detection method for objects left behind

Publications (2)

Publication Number Publication Date
CN105404847A 2016-03-16
CN105404847B CN105404847B (en) 2019-01-29

Family

ID=55470326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410472162.5A Active CN105404847B (en) Real-time detection method for objects left behind

Country Status (1)

Country Link
CN (1) CN105404847B (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650638A (en) * 2016-12-05 2017-05-10 成都通甲优博科技有限责任公司 Abandoned object detection method
CN107545559A (en) * 2016-06-29 2018-01-05 鸿富锦精密电子(天津)有限公司 Legacy decision-making system and legacy decision method
CN108053418A (en) * 2017-11-29 2018-05-18 中国农业大学 A kind of animal background modeling method and device
CN108629254A (en) * 2017-03-24 2018-10-09 杭州海康威视数字技术股份有限公司 A kind of detection method and device of moving target
CN109583414A (en) * 2018-12-10 2019-04-05 江南大学 Indoor road occupying detection method based on video detection
CN109948455A (en) * 2019-02-22 2019-06-28 中科创达软件股份有限公司 Left-behind object detection method and device
CN110232359A (en) * 2019-06-17 2019-09-13 中国移动通信集团江苏有限公司 It is detained object detecting method, device, equipment and computer storage medium
CN107292284B (en) * 2017-07-14 2020-02-28 成都通甲优博科技有限责任公司 Target re-detection method and device and unmanned aerial vehicle
CN111062273A (en) * 2019-12-02 2020-04-24 青岛联合创智科技有限公司 Tracing detection and alarm method for left-over articles
CN111369529A (en) * 2020-03-04 2020-07-03 厦门脉视数字技术有限公司 Article loss and leave-behind detection method and system
CN111524158A (en) * 2020-05-09 2020-08-11 黄河勘测规划设计研究院有限公司 Method for detecting foreground target in complex scene of hydraulic engineering
CN111723773A (en) * 2020-06-30 2020-09-29 创新奇智(合肥)科技有限公司 Remnant detection method, device, electronic equipment and readable storage medium
CN111914670A (en) * 2020-07-08 2020-11-10 浙江大华技术股份有限公司 Method, device and system for detecting left-over article and storage medium
CN112070033A (en) * 2020-09-10 2020-12-11 天津城建大学 Video carry-over detection method based on finite-state machine analysis
CN112560655A (en) * 2020-12-10 2021-03-26 瓴盛科技有限公司 Method and system for detecting masterless article
CN113743212A (en) * 2021-08-02 2021-12-03 日立楼宇技术(广州)有限公司 Detection method and device for jam or left object at entrance and exit of escalator and storage medium
CN114022468A (en) * 2021-11-12 2022-02-08 珠海安联锐视科技股份有限公司 Method for detecting article leaving and losing in security monitoring
CN114926463A (en) * 2022-07-20 2022-08-19 深圳市尹泰明电子有限公司 Production quality detection method suitable for chip circuit board
CN115272174A (en) * 2022-06-15 2022-11-01 武汉市市政路桥有限公司 Municipal road detection method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509075A (en) * 2011-10-19 2012-06-20 北京国铁华晨通信信息技术有限公司 Remnant object detection method and device
CN103714325A (en) * 2013-12-30 2014-04-09 中国科学院自动化研究所 Left object and lost object real-time detection method based on embedded system
US8798403B2 (en) * 2012-01-31 2014-08-05 Xerox Corporation System and method for capturing production workflow information

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509075A (en) * 2011-10-19 2012-06-20 北京国铁华晨通信信息技术有限公司 Remnant object detection method and device
US8798403B2 (en) * 2012-01-31 2014-08-05 Xerox Corporation System and method for capturing production workflow information
CN103714325A (en) * 2013-12-30 2014-04-09 中国科学院自动化研究所 Left object and lost object real-time detection method based on embedded system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI Yahui et al.: "Research on a left-behind object detection algorithm with an improved double background model" (改进双背景模型的遗留物检测算法研究), Computer Engineering and Design (计算机工程与设计) *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107545559A (en) * 2016-06-29 2018-01-05 鸿富锦精密电子(天津)有限公司 Legacy decision-making system and legacy decision method
CN106650638A (en) * 2016-12-05 2017-05-10 成都通甲优博科技有限责任公司 Abandoned object detection method
CN108629254A (en) * 2017-03-24 2018-10-09 杭州海康威视数字技术股份有限公司 A kind of detection method and device of moving target
CN107292284B (en) * 2017-07-14 2020-02-28 成都通甲优博科技有限责任公司 Target re-detection method and device and unmanned aerial vehicle
CN108053418B (en) * 2017-11-29 2020-10-23 中国农业大学 Animal background modeling method and device
CN108053418A (en) * 2017-11-29 2018-05-18 中国农业大学 A kind of animal background modeling method and device
CN109583414A (en) * 2018-12-10 2019-04-05 江南大学 Indoor road occupying detection method based on video detection
CN109948455A (en) * 2019-02-22 2019-06-28 中科创达软件股份有限公司 Left-behind object detection method and device
CN110232359A (en) * 2019-06-17 2019-09-13 中国移动通信集团江苏有限公司 It is detained object detecting method, device, equipment and computer storage medium
CN110232359B (en) * 2019-06-17 2021-10-01 中国移动通信集团江苏有限公司 Retentate detection method, device, equipment and computer storage medium
CN111062273A (en) * 2019-12-02 2020-04-24 青岛联合创智科技有限公司 Tracing detection and alarm method for left-over articles
CN111062273B (en) * 2019-12-02 2023-06-06 青岛联合创智科技有限公司 Method for tracing, detecting and alarming remaining articles
CN111369529A (en) * 2020-03-04 2020-07-03 厦门脉视数字技术有限公司 Article loss and leave-behind detection method and system
CN111369529B (en) * 2020-03-04 2021-05-14 厦门星纵智能科技有限公司 Article loss and leave-behind detection method and system
CN111524158B (en) * 2020-05-09 2023-03-24 黄河勘测规划设计研究院有限公司 Method for detecting foreground target in complex scene of hydraulic engineering
CN111524158A (en) * 2020-05-09 2020-08-11 黄河勘测规划设计研究院有限公司 Method for detecting foreground target in complex scene of hydraulic engineering
CN111723773A (en) * 2020-06-30 2020-09-29 创新奇智(合肥)科技有限公司 Remnant detection method, device, electronic equipment and readable storage medium
CN111723773B (en) * 2020-06-30 2024-03-29 创新奇智(合肥)科技有限公司 Method and device for detecting carryover, electronic equipment and readable storage medium
CN111914670A (en) * 2020-07-08 2020-11-10 浙江大华技术股份有限公司 Method, device and system for detecting left-over article and storage medium
CN112070033A (en) * 2020-09-10 2020-12-11 天津城建大学 Video carry-over detection method based on finite-state machine analysis
CN112560655A (en) * 2020-12-10 2021-03-26 瓴盛科技有限公司 Method and system for detecting masterless article
CN113743212A (en) * 2021-08-02 2021-12-03 日立楼宇技术(广州)有限公司 Detection method and device for jam or left object at entrance and exit of escalator and storage medium
CN113743212B (en) * 2021-08-02 2023-11-14 日立楼宇技术(广州)有限公司 Method and device for detecting congestion or carryover at entrance and exit of escalator and storage medium
CN114022468A (en) * 2021-11-12 2022-02-08 珠海安联锐视科技股份有限公司 Method for detecting article leaving and losing in security monitoring
CN115272174A (en) * 2022-06-15 2022-11-01 武汉市市政路桥有限公司 Municipal road detection method and system
CN114926463A (en) * 2022-07-20 2022-08-19 深圳市尹泰明电子有限公司 Production quality detection method suitable for chip circuit board
CN114926463B (en) * 2022-07-20 2022-09-27 深圳市尹泰明电子有限公司 Production quality detection method suitable for chip circuit board

Also Published As

Publication number Publication date
CN105404847B (en) 2019-01-29

Similar Documents

Publication Publication Date Title
CN105404847B (en) Real-time detection method for objects left behind
CN112052797B (en) MaskRCNN-based video fire disaster identification method and MaskRCNN-based video fire disaster identification system
CN108446617B (en) Side face interference resistant rapid human face detection method
CN102542289B (en) Pedestrian volume statistical method based on plurality of Gaussian counting models
CN107622258B (en) Rapid pedestrian detection method combining static underlying characteristics and motion information
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN103871029B An image enhancement and segmentation method
JP6330385B2 (en) Image processing apparatus, image processing method, and program
CN101957997B (en) Regional average value kernel density estimation-based moving target detecting method in dynamic scene
CN110276264B (en) Crowd density estimation method based on foreground segmentation graph
CN104978567B (en) Vehicle checking method based on scene classification
CN109255326B (en) Traffic scene smoke intelligent detection method based on multi-dimensional information feature fusion
CN105513053B A background modeling method for video analysis
CN103810703B A tunnel video moving-object detection method based on image processing
CN115620212B (en) Behavior identification method and system based on monitoring video
CN109919053A (en) A kind of deep learning vehicle parking detection method based on monitor video
CN111274964B (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
CN107066963B (en) A kind of adaptive people counting method
CN107578048B (en) Vehicle type rough classification-based far-view scene vehicle detection method
CN108765463B (en) Moving target detection method combining region extraction and improved textural features
CN111709300A (en) Crowd counting method based on video image
CN113177439B (en) Pedestrian crossing road guardrail detection method
CN107871315B (en) Video image motion detection method and device
CN104299234B Method and system for rain removal from video data
CN115731493A (en) Rainfall micro physical characteristic parameter extraction and analysis method based on video image recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant