CN107481269B - Multi-camera moving object continuous tracking method for mine - Google Patents

Multi-camera moving object continuous tracking method for mine

Info

Publication number
CN107481269B
CN107481269B (application CN201710671192.2A)
Authority
CN
China
Prior art keywords
image
tracking
target
frame
moving
Prior art date
Legal status
Active
Application number
CN201710671192.2A
Other languages
Chinese (zh)
Other versions
CN107481269A (en)
Inventor
逯彦
黄庆享
Current Assignee
Xian University of Science and Technology
Original Assignee
Xian University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Science and Technology filed Critical Xian University of Science and Technology
Priority to CN201710671192.2A priority Critical patent/CN107481269B/en
Publication of CN107481269A publication Critical patent/CN107481269A/en
Application granted granted Critical
Publication of CN107481269B publication Critical patent/CN107481269B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/292 Multi-camera tracking
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/254 Analysis of motion involving subtraction of images
    • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides a multi-camera moving target continuous tracking method for a mine. All cameras are installed at fixed positions during tracking, and the core of the method is as follows: a target detection and tracking algorithm searches a large amount of video data for the moving target of interest, so that a computer replaces manual monitoring, reducing manual search time and improving search efficiency; the walking route of underground workers is provided to underground managers, so that managers can promptly inform workers of changes in the surrounding environment or warn them of dangers in it; and a detection and tracking algorithm that works across different cameras makes it possible to grasp the approximate position of a worker, so that when a mine accident occurs the rescue target can be found in time, winning time for the rescue team and allowing rescue to be carried out promptly.

Description

Multi-camera moving object continuous tracking method for mine
Technical Field
The invention belongs to the field of moving target tracking, and particularly relates to a multi-camera moving target continuous tracking method for a mine.
Background
At the present stage, coal mine monitoring systems still rely mainly on manual monitoring, but research shows that visual fatigue sets in when human eyes watch a screen continuously for more than 20 minutes, leading to mental fatigue, so that various emergencies cannot be handled in real time. Introducing intelligent vision technology into the coal mine video monitoring system and replacing human observation with machine processing therefore improves monitoring efficiency. At present, monitoring systems are installed in most coal mines; such measures can avoid accidents to a certain extent or provide the analysis data needed for rescue after an accident occurs. However, once workers enter the underground mine, their actual positions, working states and any emergencies or dangerous situations are difficult for staff on the surface or rescue teams to grasp directly and in time. The underground roadways of a coal mine are complicated and hard to identify clearly; some areas may contain large amounts of harmful gas or other hazards, or heavy equipment, and such areas are usually off limits without effective protective measures. Because the underground relies on artificial lighting, visibility is poor and the roadways are complex, so ordinary warning signs are not conspicuous enough in this environment, and workers engaged in long hours of monotonous, heavy work easily overlook them because of fatigue.
Compared with ordinary environments, the underground coal mine environment is special, and underground video has the following characteristics:
(1) The environment in the mine is dim and relies on artificial lighting all year round. Although lighting devices are installed, artificial light differs from natural light, so the illumination underground is uneven; even within the same monitored scene, different distances from the light source produce different illumination.
(2) In the mine environment, apart from a few conspicuous markings or pieces of equipment that are easy to identify, the image information of other areas is almost entirely black, white and gray, so there is almost no rich color information to exploit when processing the surveillance video.
(3) Each coal mine worker generally carries a mine lamp, and when a worker carries the lamp into a monitored area, the brightness of that area is greatly affected.
In order to solve the above problems, the prior art discloses some tracking methods for moving objects. For example, CN104091350 discloses an object tracking method using motion blur information, which handles the image degradation caused by motion blur through sparse representation and combines the extracted motion information with a particle filtering algorithm to make tracking more effective. However, that method cannot solve the problem of continuously tracking a target across different fixed cameras in a mine environment. The video environment in an underground coal mine suffers from uneven illumination, a lack of color information and targets whose appearance is close to the background, so traditional target detection and tracking algorithms perform poorly underground and the target is frequently lost.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a multi-camera moving target continuous tracking method for a mine. The method uses a target detection and tracking algorithm to search a large amount of video data for the moving target of interest and replaces manual monitoring with a computer, reducing manual search time and improving search efficiency. It provides the walking route of underground workers to underground managers, so that managers can promptly inform workers of changes in the surrounding environment or warn them of dangers in it. It also provides a detection and tracking algorithm that works across different cameras, which makes it possible to grasp the approximate position of a worker, find the rescue target in time when a mine accident occurs, and win time for the rescue team so that rescue can be carried out promptly.
The specific technical scheme of the invention is as follows:
The invention provides a multi-camera moving target continuous tracking method for a mine, which comprises the following steps:
S1: extracting video frames from the video acquired by a first camera, and selecting a tracking target image with a tracking frame;
S2: moving the tracking frame by a pixels in different directions to obtain a plurality of moving target templates;
S3: extracting moving target features, performing dimension reduction on the moving target features, and creating a moving target dictionary;
S4: according to the tracking result of the previous video frame extracted in step S1, sampling N particles from the created moving target dictionary according to a Gaussian distribution to obtain an initial particle sample set Dj;
S5: obtaining the sparse representation coefficients over the moving target dictionary with the BLOOMP algorithm;
S6: obtaining the particle with the maximum weight by maximum a posteriori estimation, whose corresponding moving target template is the tracking result;
S7: updating the initial particle sample set Dj-1 with the template update algorithm;
S8: detecting a moving target in the video collected by the (k-1)th camera, and performing steps S3-S7.
The flow of the BLOOMP (band-excluded, locally optimized orthogonal matching pursuit) algorithm is as follows:
Input: a sparse basis matrix Ψ, a measurement vector b, and an iteration termination condition η (η > 0).
Initialization: object scene x^0 = 0, residual r^0 = b, support set S^0 = ∅.
Iteration: for n = 1, ..., s
1) i_max = argmax_i |⟨Ψ_i, r^(n-1)⟩|, with i restricted to indices outside the η-coherence band of S^(n-1) (band exclusion);
2) S^n = LO(S^(n-1) ∪ {i_max}), where LO denotes the local optimization of the support set;
3) x^n = argmin_z ||Ψz − b||_2 subject to supp(z) ⊆ S^n;
4) r^n = b − Ψx^n.
Output: object scene x^s.
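As an illustration only, the following Python/NumPy sketch implements a simplified version of the above flow; the coherence-band radius eta and the reduced LO step (a plain least-squares re-fit on the current support rather than a full swap search) are simplifying assumptions, not details taken from the patent.

```python
import numpy as np

def coherence_band(Psi, idx, eta):
    """Indices whose normalized columns have coherence greater than eta with column idx."""
    norms = np.linalg.norm(Psi, axis=0) + 1e-12
    corr = np.abs(Psi.T @ Psi[:, idx]) / (norms * norms[idx])
    return set(np.flatnonzero(corr > eta))

def bloomp(Psi, b, sparsity, eta=0.3):
    """Simplified band-excluded, locally optimized OMP (BLOOMP) sketch."""
    support, banned = [], set()
    x = np.zeros(Psi.shape[1])
    r = b.astype(float).copy()
    for _ in range(sparsity):
        # step 1: band-excluded matching - most correlated column outside the banned band
        corr = np.abs(Psi.T @ r)
        corr[list(banned)] = -np.inf
        i_max = int(np.argmax(corr))
        support.append(i_max)
        banned |= coherence_band(Psi, i_max, eta)
        # steps 2-3 (reduced LO): least-squares re-fit restricted to the current support
        sol, *_ = np.linalg.lstsq(Psi[:, support], b, rcond=None)
        x = np.zeros(Psi.shape[1])
        x[support] = sol
        # step 4: residual update
        r = b - Psi @ x
    return x
```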
In a further improvement, the invention adopts a Gabor feature extraction algorithm to extract the moving target features. The Gabor kernel function is defined as:

ψ_{μ,ν}(z) = (||k_{μ,ν}||² / σ²) · exp(−||k_{μ,ν}||² ||z||² / (2σ²)) · [exp(i k_{μ,ν} · z) − exp(−σ²/2)]

where z = (x, y) is the position parameter, μ is the orientation parameter of the Gabor kernel, ν is the scale parameter of the Gabor kernel, k_{μ,ν} = k_ν e^{iφ_μ} is the wavelet vector, φ_μ = πμ/8, k_ν = k_max / f^ν, k_max = π/2 is the maximum frequency, σ = 2π, f = √2 is the spacing factor between kernels in the frequency domain, and ||·|| is the norm operator.
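For illustration, a minimal Python/NumPy sketch of evaluating the Gabor kernel above on a square grid follows, assuming the parameter choices stated in the text (φ_μ = πμ/8, k_ν = k_max/f^ν, k_max = π/2, σ = 2π, f = √2); the kernel size and the 8-orientation, 5-scale filter bank are illustrative assumptions.

```python
import numpy as np

def gabor_kernel(mu, nu, size=31, k_max=np.pi / 2, f=np.sqrt(2), sigma=2 * np.pi):
    """Evaluate the Gabor kernel psi_{mu,nu}(z) on a size x size grid centred at the origin."""
    phi = np.pi * mu / 8.0                       # orientation angle phi_mu
    k = (k_max / f ** nu) * np.exp(1j * phi)     # complex wave vector k_{mu,nu}
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    z2 = x ** 2 + y ** 2                         # ||z||^2
    k2 = np.abs(k) ** 2                          # ||k_{mu,nu}||^2
    envelope = (k2 / sigma ** 2) * np.exp(-k2 * z2 / (2 * sigma ** 2))
    # oscillatory carrier minus the DC-compensation term exp(-sigma^2/2)
    carrier = np.exp(1j * (k.real * x + k.imag * y)) - np.exp(-sigma ** 2 / 2)
    return envelope * carrier

# Example: a bank of 40 kernels (8 orientations x 5 scales); features are the
# magnitudes of the target patch convolved with each kernel.
bank = [gabor_kernel(mu, nu) for mu in range(8) for nu in range(5)]
```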
In the invention, the PCA algorithm is adopted to carry out dimension reduction processing on the characteristics of the moving target.
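A minimal sketch of the PCA reduction step (NumPy-based; the number of retained components is an illustrative choice, not specified by the patent):

```python
import numpy as np

def pca_reduce(features, n_components=20):
    """Project row-wise feature vectors onto their top principal components."""
    mean = features.mean(axis=0)
    centered = features - mean
    # principal directions are the right singular vectors of the centred data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    reduced = centered @ components.T
    return reduced, components, mean
```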
In a further improvement, step S1 includes:
S11: extracting a video frame from the video collected by the first camera and displaying it to a manager;
S12: the manager judging, through a human-computer interaction interface, whether a target needing to be tracked exists; if so, performing step S13, and if not, performing step S14;
S13: manually box-selecting the target image to be tracked;
S14: inputting a command to re-extract video frames through the human-computer interaction interface until a video frame containing the target to be tracked is extracted, and then performing step S13.
In a further improvement, step S1 further includes:
S15: constructing a background image of the video frame containing the target to be tracked with an adaptive Gaussian mixture algorithm;
S16: intersecting the box-selected target image to be tracked with the background image to obtain a common image;
S17: judging whether the common image is empty; if so, performing step S18, and if not, performing step S19;
S18: determining the box-selected target image to be tracked as the tracking target image;
S19: subtracting the common image from the box-selected target image to obtain a first difference image, determining the first difference image as the tracking target image, and changing the shape of the tracking frame to the contour shape of the first difference image.
In a further improvement, step S2 includes:
S21: moving the tracking frame up, down, left and right by 1-5 pixels, or rotating it by 1-5 degrees, to obtain a plurality of candidate moving target templates;
S22: subtracting the box-selected tracking target image from each candidate moving target template to obtain a second difference image;
S23: counting the pixels of the second difference image, comparing the count with a pixel-count threshold, and selecting the candidates whose count exceeds the threshold as moving target templates.
In a further improvement, the specific method for extracting the moving target features comprises the following steps:
S31: acquiring the feature point sets of the tracking target image and of each moving target template;
S32: intersecting all the feature point sets to obtain a common feature point set;
S33: excluding mismatched points from the common feature point set and from the feature point set of the tracking target image obtained in step S31, to obtain a matching feature point set and a tracking target feature point set;
S34: merging the matching feature point set obtained in step S33 with the tracking target feature point set to obtain the moving target feature point set.
In a further improvement, step S8 includes:
S81: intersecting the q-th frame image with the (q-1)-th and (q-2)-th frame images to obtain an intersection image;
S82: subtracting the obtained intersection image from the q-th frame image to obtain a difference image;
S83: intersecting the difference image with the reconstructed background image of the (q-1)-th frame to obtain a secondary intersection image, and judging whether the secondary intersection image is empty; if so, the difference image is the moving target image, and if not, performing step S84;
S84: subtracting the secondary intersection image from the difference image to obtain a secondary difference image, which is the moving target image.
The invention has the following beneficial effects:
the invention provides a method for continuously tracking a moving target with multiple cameras for a mine, which adopts a target detection tracking algorithm to search an interested moving target in a large amount of video data, and uses a computer to replace manual monitoring, thereby reducing the manual searching time and improving the searching efficiency; the method provides a walking route of underground workers for the underground managers, so that the managers can timely inform the underground workers of the change of the surrounding environment or timely warn the underground workers of dangers in the surrounding environment; the method provides a detection tracking algorithm among different cameras, can master the approximate position of a worker, can find a rescue target in time when a mine accident occurs, and strives for time for a rescue team, so that rescue can be carried out on the rescue team in time.
Drawings
FIG. 1 is a flow chart of the multi-camera moving target continuous tracking method for a mine in embodiment 1;
FIG. 2 is a flowchart of step S1 in embodiment 2;
FIG. 3 is a flowchart of step S2 in embodiment 3;
FIG. 4 is a flowchart of the moving target feature extraction in embodiment 4;
FIG. 5 is a flowchart of step S8 in embodiment 5.
Detailed Description
The present invention will be described in further detail with reference to the following examples and drawings.
Example 1
Embodiment 1 of the invention provides a multi-camera moving target continuous tracking method for a mine, which comprises the following steps:
S1: extracting video frames from the video acquired by a first camera, and selecting a tracking target image with a tracking frame;
S2: moving the tracking frame by a pixels in different directions to obtain a plurality of moving target templates;
S3: extracting moving target features with a Gabor feature extraction algorithm, reducing their dimensionality with PCA (principal component analysis), and creating a moving target dictionary;
S4: according to the tracking result of the previous video frame extracted in step S1, sampling N particles from the created moving target dictionary according to a Gaussian distribution to obtain an initial particle sample set Dj;
S5: obtaining the sparse representation coefficients over the moving target dictionary with the BLOOMP algorithm;
S6: obtaining the particle with the maximum weight by maximum a posteriori estimation, whose corresponding moving target template is the tracking result;
S7: updating the initial particle sample set Dj-1 with the template update algorithm;
S8: detecting a moving target in the video collected by the (k-1)th camera, and performing steps S3-S7.
With the multi-camera moving target continuous tracking method for a mine according to the invention, the moving target of interest is searched for across different cameras by the detection and tracking algorithm and the result of each frame is output, so manual searching is avoided, time is greatly reduced, and search efficiency is improved. The method mainly comprises three parts: building a dictionary for the target template, target detection, and sparse representation tracking. All cameras are mounted in fixed positions during tracking. First, a video frame containing the target is found in the first camera and the moving target is box-selected manually. Then, in the videos of the other cameras, moving target detection is performed first and a dictionary is built for the detected moving target; finally, the candidate target templates are matched against the box-selected target template through sparse representation, and the result of each frame is output. The core of the invention is: (1) a target detection and tracking algorithm searches a large amount of video data for the moving target of interest, and a computer replaces manual monitoring, reducing manual search time and improving search efficiency; (2) the walking route of underground workers is provided to underground managers, so that managers can promptly inform workers of changes in the surrounding environment or warn them of dangers in it; (3) a detection and tracking algorithm that works across different cameras makes it possible to grasp the approximate position of a worker, find the rescue target in time when a mine accident occurs, and win time for the rescue team so that rescue can be carried out promptly.
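To make the per-frame flow concrete, here is a hedged Python sketch of one tracking step: Gaussian particle sampling around the previous result, sparse coding of each candidate over the target dictionary, and a maximum a posteriori pick by reconstruction error. The callables extract_features and sparse_code stand in for the Gabor/PCA feature step and the BLOOMP coding step sketched earlier, and the motion noise and weight model are illustrative assumptions.

```python
import numpy as np

def track_one_frame(frame, prev_box, dictionary, extract_features, sparse_code,
                    n_particles=100, motion_std=(4.0, 4.0)):
    """One tracking step: sample candidate boxes, weight them by how well the
    dictionary sparsely reconstructs their features, and keep the best (MAP) one."""
    best_box, best_weight = prev_box, -np.inf
    x, y, w, h = prev_box
    for _ in range(n_particles):
        dx = np.random.normal(0.0, motion_std[0])
        dy = np.random.normal(0.0, motion_std[1])
        cand = (x + dx, y + dy, w, h)              # Gaussian motion model around previous result
        feat = extract_features(frame, cand)       # e.g. Gabor magnitudes reduced by PCA
        coeff = sparse_code(dictionary, feat)      # e.g. the BLOOMP sketch above
        err = np.linalg.norm(feat - dictionary @ coeff)
        weight = np.exp(-err ** 2)                 # particle weight ~ reconstruction quality
        if weight > best_weight:
            best_box, best_weight = cand, weight
    return best_box, best_weight
```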
Example 2
Embodiment 2 of the invention provides a multi-camera moving target continuous tracking method for a mine which is basically the same as that of embodiment 1, except that, as shown in fig. 2, step S1 includes:
S11: extracting a video frame from the video collected by the first camera and displaying it to a manager;
S12: the manager judging, through a human-computer interaction interface, whether a target needing to be tracked exists; if so, performing step S13, and if not, performing step S14;
S13: manually box-selecting the target image to be tracked;
S14: inputting a command to re-extract video frames through the human-computer interaction interface until a video frame containing the target to be tracked is extracted, and then performing step S13;
S15: constructing a background image of the video frame containing the target to be tracked with an adaptive Gaussian mixture algorithm;
S16: intersecting the box-selected target image to be tracked with the background image to obtain a common image;
S17: judging whether the common image is empty; if so, performing step S18, and if not, performing step S19;
S18: determining the box-selected target image to be tracked as the tracking target image;
S19: subtracting the common image from the box-selected target image to obtain a first difference image, determining the first difference image as the tracking target image, and changing the shape of the tracking frame to the contour shape of the first difference image.
The invention thus further defines step S1. Specifically, a video frame is first extracted from the video collected by the first camera and it is judged whether a target to be tracked exists; if so, the target image is box-selected manually, and if not, video frames are re-extracted until a frame containing the target to be tracked is obtained. A background image is then constructed for that frame and intersected with the box-selected target image to obtain their common image. If the common image is empty, no background was included in the box-selected image, and the box-selected image is determined to be the tracking target image. If the common image is not empty, it is subtracted from the box-selected image, the resulting first difference image is taken as the tracking target image, and the shape of the tracking frame is changed to the contour shape of the first difference image.
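A rough OpenCV sketch of this refinement follows, assuming BGR or grayscale frames and an (x, y, w, h) operator-drawn box; cv2.createBackgroundSubtractorMOG2 is used as a stand-in for the patent's adaptive Gaussian mixture background model, and the 25-level difference threshold is an illustrative choice.

```python
import cv2

def refine_boxed_target(frames, box, diff_thresh=25):
    """Refine a hand-boxed target against an adaptive Gaussian-mixture background (S15-S19)."""
    mog = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    for f in frames:                               # feed frames up to and including the boxed one
        mog.apply(f)                               # updates the background model
    background = mog.getBackgroundImage()

    x, y, w, h = box
    boxed = frames[-1][y:y + h, x:x + w]
    bg_patch = background[y:y + h, x:x + w]

    # S16: pixels of the boxed patch that look like background (the "common image")
    diff = cv2.absdiff(boxed, bg_patch)
    gray = diff if diff.ndim == 2 else cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, common = cv2.threshold(gray, diff_thresh, 255, cv2.THRESH_BINARY_INV)

    # S17-S18: no background inside the box, so keep the boxed image as the target
    if cv2.countNonZero(common) == 0:
        return boxed, None

    # S19: subtract the shared pixels; what remains is the tracking target contour
    target_mask = cv2.bitwise_not(common)
    target = cv2.bitwise_and(boxed, boxed, mask=target_mask)
    return target, target_mask
```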
Example 3
Embodiment 3 of the invention provides a multi-camera moving target continuous tracking method for a mine which is basically the same as that of embodiment 2, except that, as shown in fig. 3, step S2 includes:
S21: moving the tracking frame up, down, left and right by 1-5 pixels, or rotating it by 1-5 degrees, to obtain a plurality of candidate moving target templates;
S22: subtracting the box-selected tracking target image from each candidate moving target template to obtain a second difference image;
S23: counting the pixels of the second difference image, comparing the count with a pixel-count threshold, and selecting the candidates whose count exceeds the threshold as moving target templates.
The invention thus further screens the moving target templates, so that the selected templates are highly similar to the tracking target image, which facilitates subsequent analysis and tracking.
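The template-generation step could look roughly like the following OpenCV sketch; the pixel-count threshold and the use of absolute differences are illustrative assumptions, and the threshold direction follows the text of S23.

```python
import cv2

def candidate_templates(frame, box, max_shift=5, max_angle=5, pixel_threshold=50):
    """Generate shifted/rotated copies of the tracking box and filter them (S21-S23)."""
    x, y, w, h = box
    target = frame[y:y + h, x:x + w]
    candidates = []

    # S21: shift the tracking frame by 1-5 pixels up, down, left and right
    for d in range(1, max_shift + 1):
        for dx, dy in ((d, 0), (-d, 0), (0, d), (0, -d)):
            if x + dx < 0 or y + dy < 0:
                continue
            patch = frame[y + dy:y + dy + h, x + dx:x + dx + w]
            if patch.shape[:2] == (h, w):          # skip shifts that leave the frame
                candidates.append(patch)

    # S21: rotate the boxed patch by 1-5 degrees
    for a in range(1, max_angle + 1):
        rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), a, 1.0)
        candidates.append(cv2.warpAffine(target, rot, (w, h)))

    # S22-S23: keep candidates whose difference with the boxed target has enough pixels
    kept = []
    for cand in candidates:
        diff = cv2.absdiff(cand, target)
        gray = diff if diff.ndim == 2 else cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
        if cv2.countNonZero(gray) > pixel_threshold:
            kept.append(cand)
    return kept
```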
Example 4
Embodiment 4 of the invention provides a multi-camera moving target continuous tracking method for a mine which is basically the same as that of embodiment 3, except that, as shown in fig. 4, the specific method for extracting the moving target features includes:
S31: acquiring the feature point sets of the tracking target image and of each moving target template;
S32: intersecting all the feature point sets to obtain a common feature point set;
S33: excluding mismatched points from the common feature point set and from the feature point set of the tracking target image obtained in step S31, to obtain a matching feature point set and a tracking target feature point set;
S34: merging the matching feature point set obtained in step S33 with the tracking target feature point set to obtain the moving target feature point set.
The extracted feature points are then matched. A traditional matching algorithm uses a fixed matching threshold: when the threshold is set large, the computational load increases and the efficiency of the algorithm suffers; when it is set small, the average number of matched points per image decreases, so that an optimal matching solution cannot be obtained. To solve these problems, the invention first obtains the feature point sets of the tracking target image and of each moving target template with a feature extraction algorithm, then intersects all the feature point sets to obtain a common feature point set, then excludes mismatched points from the common feature point set and from the feature point set of the tracking target image obtained in step S31 by a conventional method, obtaining a matching feature point set and a tracking target feature point set, and finally merges the two to obtain the moving target feature point set. This reduces the computational load and finds well-matched moving target feature points, which facilitates subsequent tracking and improves tracking efficiency.
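The matching-point intersection idea might be sketched as follows; the patent does not name a specific feature point detector or mismatch-removal rule, so ORB keypoints and a Lowe-style ratio test are used here purely as placeholders.

```python
import cv2

def common_feature_points(target_img, template_imgs, ratio=0.75):
    """Intersect the feature point sets of the target and every template (S31-S34 sketch)."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

    kp_t, des_t = orb.detectAndCompute(target_img, None)
    if des_t is None:
        return []
    kept = set(range(len(kp_t)))                   # target keypoints still matched by all templates

    for tmpl in template_imgs:
        kp_m, des_m = orb.detectAndCompute(tmpl, None)
        if des_m is None:
            return []
        matched = set()
        for pair in matcher.knnMatch(des_t, des_m, k=2):
            # ratio test stands in for the "excluding mismatched points" step
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                matched.add(pair[0].queryIdx)
        kept &= matched                            # intersection across all templates

    return [kp_t[i] for i in sorted(kept)]
```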
Example 5
Embodiment 5 of the invention provides a multi-camera moving target continuous tracking method for a mine which is basically the same as that of embodiment 4, except that, as shown in fig. 5, step S8 includes:
S81: intersecting the q-th frame image with the (q-1)-th and (q-2)-th frame images to obtain an intersection image;
S82: subtracting the obtained intersection image from the q-th frame image to obtain a difference image;
S83: intersecting the difference image with the reconstructed background image of the (q-1)-th frame to obtain a secondary intersection image, and judging whether the secondary intersection image is empty; if so, the difference image is the moving target image, and if not, performing step S84;
S84: subtracting the secondary intersection image from the difference image to obtain a secondary difference image, which is the moving target image.
The invention thus further defines step S8. By combining the frame difference method with background filtering, the detection of moving targets in video frames and the detection efficiency are clearly improved, which facilitates the subsequent extraction and tracking of the moving target and improves tracking efficiency.
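A rough grayscale sketch of S81-S84 follows, treating "intersection" as the set of pixels that did not change between images and "subtraction" as masking those pixels out; the change threshold is an illustrative assumption.

```python
import cv2

def detect_moving_target(frame_q, frame_q1, frame_q2, background_q1, thresh=25):
    """Combined frame-difference / background-filtering detection (S81-S84 sketch).

    All inputs are 8-bit grayscale images; background_q1 is the background
    reconstructed for frame q-1 (e.g. from the adaptive Gaussian mixture model).
    """
    def unchanged(a, b):
        # mask of pixels whose absolute difference is below the change threshold
        return cv2.threshold(cv2.absdiff(a, b), thresh, 255, cv2.THRESH_BINARY_INV)[1]

    # S81: "intersection" of frame q with frames q-1 and q-2 -> static pixels
    static = cv2.bitwise_and(unchanged(frame_q, frame_q1), unchanged(frame_q, frame_q2))

    # S82: subtract the static part from frame q -> candidate moving pixels
    moving_mask = cv2.bitwise_not(static)
    difference = cv2.bitwise_and(frame_q, frame_q, mask=moving_mask)

    # S83: intersect the candidate with the q-1 background; if empty we are done
    bg_like = cv2.bitwise_and(moving_mask, unchanged(frame_q, background_q1))
    if cv2.countNonZero(bg_like) == 0:
        return difference

    # S84: remove the background-like pixels; what remains is the moving target
    target_mask = cv2.bitwise_and(moving_mask, cv2.bitwise_not(bg_like))
    return cv2.bitwise_and(frame_q, frame_q, mask=target_mask)
```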
The present invention is not limited to the above preferred embodiments; anyone may derive other products in various forms in the light of the present invention, but any change in shape or structure whose technical solution is the same as or similar to that of the present application falls within the protection scope of the present invention.

Claims (7)

1. A multi-camera moving target continuous tracking method for a mine, characterized by comprising the following steps:
S1: extracting video frames from the video acquired by a first camera, and selecting a tracking target image with a tracking frame;
S2: moving the tracking frame by a pixels in different directions to obtain a plurality of moving target templates;
S3: extracting moving target features, performing dimension reduction on the moving target features, and creating a moving target dictionary; the specific method for extracting the moving target features comprises the following steps:
S31: acquiring the feature point sets of the tracking target image and of each moving target template;
S32: intersecting all the feature point sets to obtain a common feature point set;
S33: excluding mismatched points from the common feature point set and from the feature point set of the tracking target image obtained in step S31, to obtain a matching feature point set and a tracking target feature point set;
S34: merging the matching feature point set obtained in step S33 with the tracking target feature point set to obtain the moving target feature point set;
S4: according to the tracking result of the previous video frame extracted in step S1, sampling N particles from the created moving target dictionary according to a Gaussian distribution to obtain an initial particle sample set Dj;
S5: obtaining the sparse representation coefficients over the moving target dictionary with the BLOOMP algorithm;
S6: obtaining the particle with the maximum weight by maximum a posteriori estimation, whose corresponding moving target template is the tracking result;
S7: updating the initial particle sample set Dj-1 with the template update algorithm;
S8: detecting a moving target in the video collected by the (k-1)th camera, and performing steps S3-S7.
2. The multi-camera moving target continuous tracking method for a mine as claimed in claim 1, wherein a Gabor feature extraction algorithm is adopted to extract the moving target features.
3. The multi-camera moving target continuous tracking method for a mine as claimed in claim 1, wherein a PCA algorithm is adopted to perform the dimension reduction on the moving target features.
4. The multi-camera moving target continuous tracking method for a mine as claimed in claim 1, wherein step S1 includes:
S11: extracting a video frame from the video collected by the first camera and displaying it to a manager;
S12: the manager judging, through a human-computer interaction interface, whether a target needing to be tracked exists; if so, performing step S13, and if not, performing step S14;
S13: manually box-selecting the target image to be tracked;
S14: inputting a command to re-extract video frames through the human-computer interaction interface until a video frame containing the target to be tracked is extracted, and then performing step S13.
5. The multi-camera moving target continuous tracking method for a mine as claimed in claim 1, wherein step S1 further includes:
S15: constructing a background image of the video frame containing the target to be tracked with an adaptive Gaussian mixture algorithm;
S16: intersecting the box-selected target image to be tracked with the background image to obtain a common image;
S17: judging whether the common image is empty; if so, performing step S18, and if not, performing step S19;
S18: determining the box-selected target image to be tracked as the tracking target image;
S19: subtracting the common image from the box-selected target image to obtain a first difference image, determining the first difference image as the tracking target image, and changing the shape of the tracking frame to the contour shape of the first difference image.
6. The multi-camera moving target continuous tracking method for a mine as claimed in claim 1, wherein step S2 includes:
S21: moving the tracking frame up, down, left and right by 1-5 pixels, or rotating it by 1-5 degrees, to obtain a plurality of candidate moving target templates;
S22: subtracting the box-selected tracking target image from each candidate moving target template to obtain a second difference image;
S23: counting the pixels of the second difference image, comparing the count with a pixel-count threshold, and selecting the candidates whose count exceeds the threshold as moving target templates.
7. The multi-camera moving target continuous tracking method for a mine as claimed in claim 1, wherein step S8 includes:
S81: intersecting the q-th frame image with the (q-1)-th and (q-2)-th frame images to obtain an intersection image;
S82: subtracting the obtained intersection image from the q-th frame image to obtain a difference image;
S83: intersecting the difference image with the reconstructed background image of the (q-1)-th frame to obtain a secondary intersection image, and judging whether the secondary intersection image is empty; if so, the difference image is the moving target image, and if not, performing step S84;
S84: subtracting the secondary intersection image from the difference image to obtain a secondary difference image, which is the moving target image.
CN201710671192.2A 2017-08-08 2017-08-08 Multi-camera moving object continuous tracking method for mine Active CN107481269B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710671192.2A CN107481269B (en) 2017-08-08 2017-08-08 Multi-camera moving object continuous tracking method for mine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710671192.2A CN107481269B (en) 2017-08-08 2017-08-08 Multi-camera moving object continuous tracking method for mine

Publications (2)

Publication Number Publication Date
CN107481269A CN107481269A (en) 2017-12-15
CN107481269B true CN107481269B (en) 2020-07-03

Family

ID=60599876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710671192.2A Active CN107481269B (en) 2017-08-08 2017-08-08 Multi-camera moving object continuous tracking method for mine

Country Status (1)

Country Link
CN (1) CN107481269B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108665484B (en) * 2018-05-22 2021-07-09 国网山东省电力公司电力科学研究院 Danger source identification method and system based on deep learning
CN109165600B (en) * 2018-08-27 2021-11-26 浙江大丰实业股份有限公司 Intelligent search platform for stage performance personnel
CN114140493B (en) * 2021-12-03 2022-07-19 湖北微模式科技发展有限公司 Target multi-angle display action continuity detection method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102456225B (en) * 2010-10-22 2014-07-09 深圳中兴力维技术有限公司 Video monitoring system and moving target detecting and tracking method thereof
CN102004910A (en) * 2010-12-03 2011-04-06 上海交通大学 Video target tracking method based on SURF (speeded-up robust features) feature point diagram matching and motion generating model
CN104424638A (en) * 2013-08-27 2015-03-18 深圳市安芯数字发展有限公司 Target tracking method based on shielding situation
CN104091350B (en) * 2014-06-20 2017-08-25 华南理工大学 A kind of object tracking methods of utilization motion blur information
CN104239865B (en) * 2014-09-16 2017-04-12 宁波熵联信息技术有限公司 Pedestrian detecting and tracking method based on multi-stage detection
CN104376577A (en) * 2014-10-21 2015-02-25 南京邮电大学 Multi-camera multi-target tracking algorithm based on particle filtering
CN104361609B (en) * 2014-11-18 2017-12-01 电子科技大学 A kind of method for tracking target based on rarefaction representation
CN104680516B (en) * 2015-01-08 2017-09-29 南京邮电大学 A kind of acquisition methods of image quality features set of matches
CN105718899A (en) * 2016-01-22 2016-06-29 张健敏 Solar water heater based on visual characteristics

Also Published As

Publication number Publication date
CN107481269A (en) 2017-12-15

Similar Documents

Publication Publication Date Title
CN106709436B (en) Track traffic panoramic monitoring-oriented cross-camera suspicious pedestrian target tracking system
JP5325899B2 (en) Intrusion alarm video processor
CN107481269B (en) Multi-camera moving object continuous tracking method for mine
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN111339883A (en) Method for identifying and detecting abnormal behaviors in transformer substation based on artificial intelligence in complex scene
CN112287875B (en) Abnormal license plate recognition method, device, equipment and readable storage medium
CN104966054B (en) Detection method of small target in unmanned plane visible images
CN111292321A (en) Method for identifying defect image of insulator of power transmission line
CN110852179B (en) Suspicious personnel invasion detection method based on video monitoring platform
CN110441320B (en) Coal gangue detection method, device and system
CN114881869A (en) Inspection video image preprocessing method
Joshi et al. Damage identification and assessment using image processing on post-disaster satellite imagery
CN112966618A (en) Dressing identification method, device, equipment and computer readable medium
CN108320299A (en) A kind of target tracking algorism based on motor behavior analysis
CN116452976A (en) Underground coal mine safety detection method
Mandal et al. Real-time fast fog removal approach for assisting drivers during dense fog on hilly roads
Mandal et al. Human visual system inspired object detection and recognition
Nicolas et al. Video traffic analysis using scene and vehicle models
Nivetha et al. Video-object detection using background subtraction in spartan 3 fpga kit
Yin et al. Flue gas layer feature segmentation based on multi-channel pixel adaptive
Barhoun et al. A Machine Vision Based Method for Extracting Visual Features of Froth in Copper Floatation Process
Kadam et al. Rain Streaks Elimination Using Image Processing Algorithms
Pava et al. Object Detection and Motion Analysis in a Low Resolution 3-D Model
Ismael Comparative study for different color spaces of image segmentation based on Prewitt edge detection technique
Aqel et al. Shadow detection and removal for traffic sequences

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant