CN113963431A - Moving target detection method fusing visual background extraction and an improved Wronskian function - Google Patents

Moving target detection method fusing visual background extraction and an improved Wronskian function

Info

Publication number
CN113963431A
CN113963431A (application CN202111045428.4A)
Authority
CN
China
Prior art keywords
vector
background model
target image
background
sample
Prior art date
Legal status
Granted
Application number
CN202111045428.4A
Other languages
Chinese (zh)
Other versions
CN113963431B (en)
Inventor
张运胜
冷凯君
张耀峰
Current Assignee
HUBEI UNIVERSITY OF ECONOMICS
Original Assignee
HUBEI UNIVERSITY OF ECONOMICS
Priority date
Filing date
Publication date
Application filed by HUBEI UNIVERSITY OF ECONOMICS filed Critical HUBEI UNIVERSITY OF ECONOMICS
Priority to CN202111045428.4A priority Critical patent/CN113963431B/en
Publication of CN113963431A publication Critical patent/CN113963431A/en
Application granted granted Critical
Publication of CN113963431B publication Critical patent/CN113963431B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F18/24 (Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Classification techniques)
    • G06F18/25 (Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Fusion techniques)
    • G06T7/194 (Physics; Computing; Image data processing; Image analysis; Segmentation or edge detection involving foreground-background segmentation)
    • G06T7/246 (Physics; Computing; Image data processing; Image analysis; Analysis of motion using feature-based methods, e.g. the tracking of corners or segments)
    • G06T2207/10016 (Physics; Computing; Image data processing; Indexing scheme for image analysis; Image acquisition modality: video, image sequence)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a moving target detection method fusing visual background extraction and an improved Wronskian function, comprising the following steps: constructing a pixel spatial background model based on the principle of spatio-temporal sample consistency; initializing the background model from multiple frames taken at short time intervals; constructing an improved Wronskian determinant; and importing a target image, judging the linear correlation between the vector constructed from each target image pixel and the sample vectors in the background model based on the improved Wronskian determinant, counting the number of linear correlations, and, if that number is smaller than a preset threshold, classifying the pixel point in the target image as a moving target, i.e. foreground, and otherwise as background. The beneficial effects of the invention are that the method can effectively handle various interference factors in night-time scenes with illumination change, and effectively extracts moving targets under slow or sudden illumination change.

Description

Moving target detection method fusing visual background extraction and an improved Wronskian function
Technical Field
The invention relates to the fields of intelligent transportation and target detection in complex environments, and in particular to a moving target detection method fusing visual background extraction and an improved Wronskian function.
Background
Digital images serve as a carrier for recording human visual information and are closely tied to daily life. Computer vision is an important research direction in the field of artificial intelligence, and digital images are widely applied in civilian and military fields. However, digital images are susceptible to the sensor, the shooting scene, the imaging environment and the like, and images captured in severe environments (such as fog, sand, dust, rain, snow, underwater or low light) often suffer from low contrast, poor definition and severe color cast. A quality-degraded image cannot clearly and accurately record the information in a scene; this directly impairs human visual perception, degrades subsequent computer vision tasks, and severely limits the application value of such images. With the development of artificial intelligence, research on high-level vision tasks in severe environments, including image enhancement and restoration and related scenarios, has attracted strong attention in image processing and computer vision and has gradually become a research hotspot in recent years. Meanwhile, as an important part of intelligent transportation systems and smart cities, the intelligentization of urban traffic is receiving growing attention. Video sensors are now installed at many traffic checkpoints in cities and generate vast amounts of video data every day. Urban traffic has high density and serious congestion, and road users are diverse. Extracting moving vehicles from such harsh urban traffic environments is of great significance for subsequent behavior analysis, risk identification and congestion relief of urban traffic vehicles. However, achieving both accuracy and real-time performance in target detection under the illumination changes of complex urban traffic scenes remains a challenge.
At present, moving target detection is a core component of intelligent video surveillance systems, and its accuracy is the basis of higher-level processing such as target tracking, classification and behavior understanding. The background difference method, which extracts moving targets using a background model, offers a good balance between detection performance and efficiency, so background modeling methods are widely applied in intelligent security, intelligent transportation and related fields. Background model methods perform very well in simple scenes, but in real, complex application environments they are easily disturbed by factors such as noise and illumination change. How to effectively suppress illumination change and noise and construct a background-model-based moving target detection method with stable performance has therefore become a challenging research topic in computer vision.
The Visual Background extraction (ViBe) method has attracted many scholars in computer vision thanks to its simple and fast background modeling logic, but it still faces a series of problems in real application scenarios, including ghosting, noise interference and illumination change. To address the ghost problem, methods that initialize the background model from two or more frames have been proposed; introducing reliable background samples into the model accelerates ghost elimination. To address the high false detection rate caused by noise in dynamic scenes, methods fusing regional information and texture features have been proposed; a background model fused with the spatio-temporal information of pixel points is better able to cope with noise and slow illumination change. Common approaches to illumination change are illumination-robust features and subspace weighted decomposition, but acquiring illumination-invariant features or constructing subspaces increases the complexity of the algorithm to some extent and impairs its real-time applicability. The Wronskian function handles illumination change efficiently by means of vector correlation and does not require the construction of illumination-invariant features. Combining the Wronskian function with a single Gaussian model yields a better moving target detection effect; a new model combining the Wronskian function with a Gaussian mixture model improves detection performance markedly; and a moving target detection method fusing the Wronskian function with a codebook background model enhances the stability of target detection. However, these Wronskian-function methods require the determinant calculation form to be selected manually and cannot handle scenes in which the moving target and the background are both dark or both light in color.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a moving target detection method fusing visual background extraction and an improved Wronskian function, so as to overcome the above defects of the prior art.
The technical scheme adopted by the invention to solve this technical problem is as follows. A moving target detection method fusing visual background extraction and an improved Wronskian function comprises the following steps:
S100, constructing a pixel spatial background model based on the principle of spatio-temporal sample consistency;
S200, initializing the background model from multiple frames taken at short time intervals;
S300, constructing an improved Wronskian determinant;
S400, importing a target image, judging the linear correlation between the vector constructed from the target image pixels and the sample vectors in the background model based on the improved Wronskian determinant, counting the number of linear correlations, and, if that number is smaller than a preset threshold, classifying the pixel point in the target image as a moving target, i.e. foreground, and otherwise as background.
On the basis of the technical scheme, the invention can be further improved as follows.
Further, S100 specifically is:
s110, collecting traffic scene videos with illumination changes at night in real time;
s120, acquiring multi-neighborhood pixel points of the pixel points in each image in the video, and forming a vector by the pixel points and the neighborhood pixel points thereof to be used as a sample vector;
s130, obtaining N sample vectors according to a sample consistency principle, and constructing a pixel space background model according to the N sample vectors.
Further, the multi-neighborhood is an 8-neighborhood.
Further, the formula of the pixel space background model is as follows:
B(x,y)={V1(x,y),V2(x,y),...,VM(x,y),VN(x,y)};
VM(x,y),M∈[1,N]is the mth vector in the sample vectors.
Further, the formula of the initial background model is:
B(x,y)={I1(x,y),...,I1+(N-2)×K(x,y),I1+(N-1)×K(x,y)}
I1(x, y) is the vector feature of frame 1, I1+(N-1)×K(x, y) is the vector feature for the 1+ (N-1) xK frame, K is a short time interval.
Further, the initialization background model uses the interval frames of the original background model.
Further, the improved Lansiki matrix determinant is as follows:
Figure BDA0003251026040000041
or the like, or, alternatively,
Figure BDA0003251026040000042
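The two determinant forms appear only as equation images in the original document. As a hedged reconstruction, assuming (this is an assumption, not confirmed by the source) that the improved function builds on the classical Wronskian change detector of Durucan and Ebrahimi applied to the 9-dimensional pixel vectors, the two forms would be:

W_1(V_I(x, y), V_M(x, y)) = (1/n) Σ_{i=1}^{n} (v_i / m_i)² − (1/n) Σ_{i=1}^{n} (v_i / m_i)

or

W_2(V_I(x, y), V_M(x, y)) = (1/n) Σ_{i=1}^{n} (m_i / v_i)² − (1/n) Σ_{i=1}^{n} (m_i / v_i)

where v_i and m_i are the i-th components of V_I(x, y) and V_M(x, y), n = 9 for an 8-neighborhood, and the two forms differ only in which vector supplies the numerator of the ratio.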
Further, S400 is specifically:
S410, importing a real traffic scene target image, recording the target image as I(x, y) and the target image vector as V_I(x, y);
S420, obtaining the matrices formed by the support region of the target image pixel point and by the support regions corresponding to the sample vectors of the background model, and calculating the sums of the eigenvalues of the corresponding matrices, V_I^se and V_M^se respectively;
S430, if the comparison condition between V_I^se and V_M^se holds [the condition is shown only as an equation image in the original], the first form of the improved Wronskian determinant is selected [equation image]; otherwise the second form is selected [equation image];
S440, according to the selected W(V_I(x, y), V_M(x, y)), judging whether the target image vector V_I(x, y) and the sample vector V_M(x, y) in the background model are linearly correlated:
P(x, y) = 1 if W(V_I(x, y), V_M(x, y)) < T, and P(x, y) = 0 otherwise,
where T is a distance threshold;
S450, counting the number of linear correlations T(x, y), i.e. the number of the N sample vectors V_M(x, y) for which P(x, y) = 1;
S460, producing the final foreground/background judgment D(x, y):
D(x, y) = 1 if T(x, y) < Th, and D(x, y) = 0 otherwise,
where Th is the preset threshold on the number of linear correlations;
if D(x, y) = 1, the pixel point in the target image is a moving target, i.e. foreground; otherwise it is background.
Further, according to the foreground/background judgment result, the sample vectors in the background model are updated based on a method combining conservative update and random sampling.
Further, when a target image pixel point is detected as background, the corresponding vector replaces a sample vector in the background model with probability 1/θ, where θ is a sampling factor;
and the vector corresponding to the target image pixel also replaces a randomly selected sample vector in the background model of a pixel point within its F×F neighborhood with probability 1/θ.
The beneficial effects of the invention are as follows:
A pixel spatial background model is first constructed according to the principle of spatio-temporal sample consistency, and the background model is initialized from multiple frames taken at short time intervals. Then the improved Wronskian determinant is used to judge the linear correlation between the vector constructed from the current pixel and the background sample vectors, and the current pixel is judged to be foreground or background based on the number of samples in the background model with which it is linearly correlated. Finally, the background model is updated based on the judgment result. The proposed model can effectively handle various interference factors in night-time scenes with illumination change, and effectively extracts moving targets under slow or sudden illumination change.
Drawings
FIG. 1 shows the construction of the 8-neighborhood vector of a pixel;
FIG. 2 shows the sample initialization process of the pixel background model;
FIG. 3 is a flow chart of the moving target detection method fusing visual background extraction and the improved Wronskian function;
FIG. 4 shows the moving target detection results on a real night-time traffic scene with illumination change.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, the examples of which are set forth to illustrate the invention and are not intended to limit the scope of the invention.
Example 1
As shown in fig. 3, a moving target detection method fusing visual background extraction and an improved Wronskian function comprises the following steps:
S100, constructing a pixel spatial background model based on the principle of spatio-temporal sample consistency;
S200, initializing the background model from multiple frames taken at short time intervals;
S300, constructing an improved Wronskian determinant;
S400, importing a target image, judging the linear correlation between the vector constructed from the target image pixels and the sample vectors in the background model based on the improved Wronskian determinant, counting the number of linear correlations, and, if that number is smaller than a preset threshold, classifying the pixel point in the target image as a moving target, i.e. foreground, and otherwise as background.
Example 2
This example is a further optimization performed on the basis of example 1, and specifically includes the following:
S100 is specifically as follows:
S110, collecting, in real time, traffic scene videos with night-time illumination change;
S120, acquiring the multi-neighborhood pixel points of each pixel point in each image of the video, and then forming a vector from the pixel point and its neighborhood pixel points to serve as a sample vector;
S130, obtaining the sample vectors formed by the pixel points and their neighborhood pixel points in the N observed images according to the sample consistency principle, and constructing the pixel spatial background model from these sample vectors.
The device for collecting the traffic scene video with night-time illumination change may be a video sensor, although other devices are not excluded.
Typically, the multi-neighborhood described in S120 is an 8-neighborhood, although in practice other forms are not excluded.
The vector construction process for a pixel point is shown in fig. 1; a sketch of this construction follows.
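A minimal sketch of the 8-neighborhood vector construction, assuming a grayscale frame and illustrative function names (not taken from the source):

```python
import numpy as np

def neighborhood_vectors(frame):
    """Build the 9-dimensional sample vector V(x, y) for every pixel of a
    grayscale frame: the pixel itself plus its 8-neighborhood, mirroring
    the construction of FIG. 1. Border pixels reuse the edge values."""
    padded = np.pad(frame.astype(np.float64), 1, mode="edge")
    h, w = frame.shape
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    # One shifted view per neighborhood position -> array of shape (h, w, 9)
    return np.stack(
        [padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] for dy, dx in offsets],
        axis=-1,
    )
```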
Detection of moving foreground targets was carried out on a real night-time traffic scene data set; the results are shown in fig. 4. The first row of the figure shows five representative frames with illumination change, whose dominant traffic light states are green, yellow, red, green and yellow respectively; the second row shows the ground-truth targets; and the third row shows the results of the detection method of the invention.
Example 3
This example is a further optimization performed on the basis of example 1 or 2, and is specifically as follows:
The formula of the pixel spatial background model is:
B(x, y) = {V_1(x, y), V_2(x, y), ..., V_M(x, y), ..., V_N(x, y)};
where V_M(x, y), M ∈ [1, N], is the M-th sample vector.
Example 4
This example is a further optimization performed on the basis of example 3, and it is specifically as follows:
The formula of the initialized background model is:
B(x, y) = {I_1(x, y), ..., I_{1+(N-2)×K}(x, y), I_{1+(N-1)×K}(x, y)}
where I_1(x, y) is the vector feature of frame 1, I_{1+(N-1)×K}(x, y) is the vector feature of frame 1+(N-1)×K, and K is a short time interval; K is usually taken as 25, which handles most dynamic scenes.
The process of initializing the background model is shown in FIG. 2. To avoid generating an incorrect initial background model, the model B(x, y) = {V_1(x, y), V_2(x, y), ..., V_N(x, y)} is initialized from interval-sampled frames, which reduces the probability that a slowly moving or temporarily parked vehicle is merged into the background model. A sketch of this initialization follows.
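A minimal sketch of the interval-frame initialization, reusing neighborhood_vectors() from the sketch above. N = 20 is an assumed sample count borrowed from the standard ViBe setting (the description does not fix N); K = 25 follows the text:

```python
import numpy as np

def init_background_model(frames, N=20, K=25):
    """Initialize B(x, y) from N frames sampled every K frames, i.e.
    frames 1, 1+K, ..., 1+(N-1)K of the sequence (0-based indices
    0, K, ..., (N-1)K). Requires len(frames) > (N-1)*K."""
    samples = [neighborhood_vectors(frames[j * K]) for j in range(N)]
    # Background model: shape (h, w, 9, N) -- N sample vectors per pixel
    return np.stack(samples, axis=-1)
```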
Example 5
The present embodiment is further optimized based on any one of embodiments 1 to 4, and specifically includes the following steps:
The improved Wronskian determinant takes one of two forms:
[equation image: first form of the improved Wronskian determinant]
or
[equation image: second form of the improved Wronskian determinant]
(see the hedged reconstruction given after the corresponding formulas in the disclosure above).
example 6
This example is a further optimization performed on the basis of example 5, and it is specifically as follows:
S410, importing a real traffic scene target image, recording the target image as I(x, y) and the target image vector as V_I(x, y);
S420, obtaining the matrices formed by the support region of the target image pixel point and by the support regions corresponding to the sample vectors of the background model, and calculating the sums of the eigenvalues of the corresponding matrices, V_I^se and V_M^se respectively;
S430, if the comparison condition between V_I^se and V_M^se holds [the condition is shown only as an equation image in the original], the first form of the improved Wronskian determinant is selected [equation image]; otherwise the second form is selected [equation image];
S440, according to the selected W(V_I(x, y), V_M(x, y)), judging whether the target image vector V_I(x, y) and the sample vector V_M(x, y) in the background model are linearly correlated:
P(x, y) = 1 if W(V_I(x, y), V_M(x, y)) < T, and P(x, y) = 0 otherwise,
where T is a distance threshold;
S450, counting the number of linear correlations T(x, y), i.e. the number of the N sample vectors V_M(x, y) for which P(x, y) = 1;
S460, producing the final foreground/background judgment D(x, y):
D(x, y) = 1 if T(x, y) < Th, and D(x, y) = 0 otherwise,
where Th is the preset threshold on the number of linear correlations; in general, this threshold is greater than or equal to 2;
if D(x, y) = 1, the pixel point in the target image is a moving target, i.e. foreground; otherwise it is background.
W(V_I(x, y), V_M(x, y)) is the improved Wronskian determinant. The original Wronskian model requires the determinant form to be chosen according to whether the moving target is darker or lighter than the background in the video scene; in a real scene this calculation mode must be selected manually, and scenes in which the moving target and the background are both dark or both light cannot be handled. The improved Wronskian function therefore introduces the regional eigenvalue sums V_I^se and V_M^se so that the calculation mode is selected automatically. If W(V_I(x, y), V_M(x, y)) < T, the probability that I(x, y) belongs to the background increases. A sketch of this per-pixel decision follows.
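A minimal per-pixel sketch of S410–S460, under stated assumptions: W takes the hedged Wronskian forms reconstructed earlier; the eigenvalue sum of the assumed 3×3 support-region matrix is computed as its trace; and the direction of the selection condition (V_I^se ≥ V_M^se chooses the first form) is a guess, since the condition appears only as an image in the original. T = 0.1 and Th = 2 are illustrative values (the text only states Th ≥ 2):

```python
import numpy as np

def classify_pixel(v_I, B_xy, T=0.1, Th=2):
    """Foreground/background decision for one pixel.
    v_I : (9,) vector of the current pixel; B_xy : (9, N) sample vectors."""
    eps = 1e-6                                   # guard against division by zero
    v_I_se = np.trace(v_I.reshape(3, 3))         # sum of eigenvalues = trace
    t_xy = 0                                     # T(x, y): linear correlation count
    for m in range(B_xy.shape[1]):
        v_M = B_xy[:, m]
        v_M_se = np.trace(v_M.reshape(3, 3))
        # Automatic selection of the determinant form via eigenvalue sums
        r = v_I / (v_M + eps) if v_I_se >= v_M_se else v_M / (v_I + eps)
        W = np.mean(r ** 2) - np.mean(r)         # improved Wronskian determinant
        if W < T:                                # P(x, y) = 1: linearly correlated
            t_xy += 1
    return 1 if t_xy < Th else 0                 # D(x, y): 1 = foreground
```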
Example 7
The present embodiment is further optimized based on any one of embodiments 1 to 6, and specifically includes the following steps:
According to the judgment result, the sample vectors in the background model are updated based on a method combining conservative update and random sampling.
The background model is updated so that the model samples adapt to frequent changes in the scene, such as changes of background objects or swaying branches. The fusion algorithm adopts a strategy combining conservative update and random subsampling update, which performs better than the currently popular first-in-first-out (FIFO) update strategy in real illumination-change scenes.
Example 8
This example is a further optimization performed on the basis of example 7, and it is specifically as follows:
Updating the background model comprises two steps, sketched after this list:
first, when a target image pixel point is detected as background, the corresponding vector replaces a sample vector in the background model with probability 1/θ, where θ is a sampling factor whose value is usually 16;
then, the vector corresponding to the target image pixel replaces a randomly selected sample vector in the background model of a pixel point within its F×F neighborhood, again with probability 1/θ; the F×F neighborhood is generally 3×3.
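A minimal sketch of the two-step update, using the array layout assumed in the earlier sketches (B of shape (h, w, 9, N), the current pixel vectors V of shape (h, w, 9), and D the 0/1 foreground mask from detection); function and parameter names are illustrative:

```python
import random

def update_background_model(B, V, D, theta=16, F=3):
    """Conservative update with random subsampling, following Example 8."""
    h, w, _, N = B.shape
    r = F // 2                                    # radius of the FxF neighborhood
    for y in range(h):
        for x in range(w):
            if D[y, x] == 1:                      # conservative: never use foreground
                continue
            if random.randrange(theta) == 0:      # probability 1/theta
                B[y, x, :, random.randrange(N)] = V[y, x]
            if random.randrange(theta) == 0:      # propagate to a random neighbor
                ny = min(max(y + random.randint(-r, r), 0), h - 1)
                nx = min(max(x + random.randint(-r, r), 0), w - 1)
                B[ny, nx, :, random.randrange(N)] = V[y, x]
```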
Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and not to be construed as limiting the invention, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the invention.

Claims (10)

1. A moving target detection method fusing visual background extraction and an improved Wronskian function, characterized by comprising the following steps:
S100, constructing a pixel spatial background model based on the principle of spatio-temporal sample consistency;
S200, initializing the background model from multiple frames taken at short time intervals;
S300, constructing an improved Wronskian determinant;
S400, importing a target image, judging the linear correlation between the vector constructed from the target image pixels and the sample vectors in the background model based on the improved Wronskian determinant, counting the number of linear correlations, and, if that number is smaller than a preset threshold, classifying the pixel point in the target image as a moving target, i.e. foreground, and otherwise as background.
2. The moving target detection method fusing visual background extraction and an improved Wronskian function according to claim 1, characterized in that S100 is specifically:
S110, collecting, in real time, traffic scene videos with night-time illumination change;
S120, acquiring the multi-neighborhood pixel points of each pixel point in each image of the video, and forming a vector from the pixel point and its neighborhood pixel points to serve as a sample vector;
S130, obtaining N sample vectors according to the sample consistency principle, and constructing the pixel spatial background model from the N sample vectors.
3. The moving target detection method fusing visual background extraction and an improved Wronskian function according to claim 1 or 2, characterized in that: the multi-neighborhood is an 8-neighborhood.
4. The moving target detection method fusing visual background extraction and an improved Wronskian function according to any one of claims 1 to 3, characterized in that:
the formula of the pixel spatial background model is:
B(x, y) = {V_1(x, y), V_2(x, y), ..., V_M(x, y), ..., V_N(x, y)};
where V_M(x, y), M ∈ [1, N], is the M-th sample vector.
5. The method according to claim 4, characterized in that:
the formula of the initialized background model is:
B(x, y) = {I_1(x, y), ..., I_{1+(N-2)×K}(x, y), I_{1+(N-1)×K}(x, y)}
where I_1(x, y) is the vector feature of frame 1, I_{1+(N-1)×K}(x, y) is the vector feature of frame 1+(N-1)×K, and K is a short time interval.
6. The method according to claim 5, characterized in that: the background model is initialized from interval-sampled frames rather than consecutive frames.
7. The moving target detection method fusing visual background extraction and an improved Wronskian function according to claim 1, characterized in that the improved Wronskian determinant takes one of two forms:
[equation image: first form of the improved Wronskian determinant]
or
[equation image: second form of the improved Wronskian determinant]
8. The method according to claim 7, characterized in that S400 is specifically:
S410, importing a real traffic scene target image, recording the target image as I(x, y) and the target image vector as V_I(x, y);
S420, obtaining the matrices formed by the support region of the target image pixel point and by the support regions corresponding to the sample vectors of the background model, and calculating the sums of the eigenvalues of the corresponding matrices, V_I^se and V_M^se respectively;
S430, if the comparison condition between V_I^se and V_M^se holds [the condition is shown only as an equation image in the original], the first form of the improved Wronskian determinant is selected [equation image]; otherwise the second form is selected [equation image];
S440, according to the selected W(V_I(x, y), V_M(x, y)), judging whether the target image vector V_I(x, y) and the sample vector V_M(x, y) in the background model are linearly correlated: P(x, y) = 1 if W(V_I(x, y), V_M(x, y)) < T, and P(x, y) = 0 otherwise, where T is a distance threshold;
S450, counting the number of linear correlations T(x, y), i.e. the number of the N sample vectors V_M(x, y) for which P(x, y) = 1;
S460, producing the final foreground/background judgment D(x, y): D(x, y) = 1 if T(x, y) < Th, and D(x, y) = 0 otherwise, where Th is the preset threshold on the number of linear correlations;
if D(x, y) = 1, the pixel point in the target image is a moving target, i.e. foreground; otherwise it is background.
9. The moving target detection method fusing visual background extraction and an improved Wronskian function according to claim 1, characterized in that:
according to the foreground/background judgment result, the sample vectors in the background model are updated based on a method combining conservative update and random sampling.
10. The method according to claim 9, characterized in that:
when a target image pixel point is detected as background, the corresponding vector replaces a sample vector in the background model with probability 1/θ, where θ is a sampling factor;
and the vector corresponding to the target image pixel replaces a randomly selected sample vector in the background model of a pixel point within its F×F neighborhood with probability 1/θ.
CN202111045428.4A 2021-09-07 2021-09-07 Moving target detection method fusing visual background extraction and an improved Wronskian function Active CN113963431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111045428.4A CN113963431B (en) Moving target detection method fusing visual background extraction and an improved Wronskian function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111045428.4A CN113963431B (en) Moving target detection method fusing visual background extraction and an improved Wronskian function

Publications (2)

Publication Number Publication Date
CN113963431A (en) 2022-01-21
CN113963431B (en) 2024-08-16

Family

ID=79461047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111045428.4A Active CN113963431B (en) 2021-09-07 2021-09-07 Moving object detection method integrating visual background extraction and improving Langerhans function

Country Status (1)

Country Link
CN (1) CN113963431B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104392468A (en) * 2014-11-21 2015-03-04 南京理工大学 Improved visual background extraction based movement target detection method
WO2017054455A1 (en) * 2015-09-30 2017-04-06 深圳大学 Motion target shadow detection method and system in monitoring video
CN106199743A (en) * 2016-07-13 2016-12-07 中国科学院电子学研究所 Magnetic anomaly signal detecting method
CN109978916A (en) * 2019-03-11 2019-07-05 西安电子科技大学 Vibe moving target detecting method based on gray level image characteristic matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张汝峰; 薛瑞; 项璟; 张亚娟; 陈鹏; 冯鑫鑫: "Research on a pedestrian flow counting algorithm based on ViBe" (基于ViBe的人流量统计算法研究), 南方农机 (South Agricultural Machinery), no. 07, 15 April 2020 (2020-04-15) *
李浩; 张运胜: "Foreground target detection at urban road intersections based on a feedback background model" (基于反馈背景模型的城市道路交叉口前景目标检测), 交通运输系统工程与信息 (Journal of Transportation Systems Engineering and Information Technology), no. 06, 15 December 2017 (2017-12-15) *

Also Published As

Publication number Publication date
CN113963431B (en) 2024-08-16

Similar Documents

Publication Publication Date Title
Xu et al. Background modeling methods in video analysis: A review and comparative evaluation
US9652863B2 (en) Multi-mode video event indexing
Li et al. Statistical modeling of complex backgrounds for foreground object detection
US7424175B2 (en) Video segmentation using statistical pixel modeling
CN101827204B (en) Method and system for detecting moving object
CN104978567B (en) Vehicle checking method based on scene classification
CN110781721B (en) Outdoor scene moving object detection method based on improved VIBE algorithm
CN103077539A (en) Moving object tracking method under complicated background and sheltering condition
CA2649389A1 (en) Video segmentation using statistical pixel modeling
Zhang et al. Moving vehicles segmentation based on Bayesian framework for Gaussian motion model
CN110334703B (en) Ship detection and identification method in day and night image
CN103077530A (en) Moving object detection method based on improved mixing gauss and image cutting
Patil et al. Motion saliency based generative adversarial network for underwater moving object segmentation
Filonenko et al. Real-time flood detection for video surveillance
CN109948474A (en) AI thermal imaging all-weather intelligent monitoring method
CN115393774A (en) Lightweight fire smoke detection method, terminal equipment and storage medium
Yaghoobi Ershadi et al. Vehicle tracking and counting system in dusty weather with vibrating camera conditions
Eng et al. Robust human detection within a highly dynamic aquatic environment in real time
Roy et al. A comprehensive survey on computer vision based approaches for moving object detection
EP2447912B1 (en) Method and device for the detection of change in illumination for vision systems
Kavasidis et al. Quantitative performance analysis of object detection algorithms on underwater video footage
Li et al. Intelligent transportation video tracking technology based on computer and image processing technology
CN113963431B (en) Moving target detection method fusing visual background extraction and an improved Wronskian function
UKINKAR et al. Object detection in dynamic background using image segmentation: A review
CN113936030A (en) Moving target detection method and system based on convolutional coding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant