CN113194249A - Moving object real-time tracking system and method based on camera - Google Patents

Moving object real-time tracking system and method based on camera

Info

Publication number
CN113194249A
CN113194249A CN202110438641.5A
Authority
CN
China
Prior art keywords
camera
moving object
image
real-time tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110438641.5A
Other languages
Chinese (zh)
Inventor
谢洪途
吴轩
杨飚
唐佳浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202110438641.5A priority Critical patent/CN113194249A/en
Publication of CN113194249A publication Critical patent/CN113194249A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a camera-based system and method for tracking moving objects in real time. The system comprises a camera, a pan-tilt carrying the camera, steering engines driving the pan-tilt, a single-chip microcomputer and a power supply; the camera, the steering engines and the power supply are all connected to the single-chip microcomputer. The system offers low hardware cost, low power consumption and good concealment, making it suitable for remote areas without security-camera coverage and for simple civilian alarm equipment. It addresses the shortcomings of existing night-vision tracking cameras, which are expensive, complicated to deploy, costly to maintain and impractical to install in some remote areas, and thereby contributes to public safety. The method balances recognition and tracking accuracy against hardware cost, achieving high accuracy while keeping the hardware inexpensive, and is suited to surveillance and civilian alarm devices in remote areas.

Description

Moving object real-time tracking system and method based on camera
Technical Field
The invention relates to the field of image processing, in particular to a moving object real-time tracking system and method based on a camera.
Background
In recent years, artificial intelligence has been widely applied in fields such as speech recognition, object detection and autonomous driving, and techniques for recognizing and tracking specific dynamic targets have entered a period of intensive research and application in manufacturing, security, surveillance and consumer industries.
Some mobile tracking cameras are already on the market, and a few integrated models can be installed commercially off the shelf. Most, however, carry their own light source, typically an infrared illuminator or a visible-light LED fill light. Such cameras are poorly concealed and easily spotted, and they are generally expensive; deploying them in places with few moving objects, such as alleys and corridors at night, also requires supporting infrastructure such as fiber-optic data links, which further raises the cost of use. Moreover, existing patents in this area do not put their technical core on how to identify the moving object: they neither explain the recognition algorithm in detail nor give a clear procedure for analyzing the images captured by the camera and steering the pan-tilt that carries it. In addition, some systems can only track people or objects already registered in a database, whereas the present invention can track unknown moving objects.
On the algorithmic side, conventional object-detection algorithms first slide a window over the image to extract features and then apply a classifier. Common feature-extraction approaches include the Cascade, HOG/DPM and HAAR/SVM methods, along with their improved and optimized variants. Their drawback is that they run too slowly to track a dynamic target in real time.
Common dynamic-object detection algorithms such as the optical-flow method are widely used in some fields. The idea is to compute an optical-flow field: under suitable smoothness constraints, a motion field is estimated from the spatio-temporal gradients of the image sequence, and moving objects are detected and segmented from the scene by estimating changes in that field. Two schemes are generally used, based on the global optical-flow field and on feature-point optical flow. Their drawbacks are that the global optical-flow field is computationally expensive, feature-point optical flow cannot accurately extract the features of a moving target, and optical-flow methods have poor noise immunity; without dedicated hardware it is difficult to process and analyze a target's motion trajectory in real time.
In addition, there is the background-subtraction method, applied in special scenes. Background subtraction is an effective moving-object detection algorithm: a parametric background model approximates the pixel values of the background image, and the current frame is compared and differenced against that background to detect moving regions. Two main variants exist, non-recursive and recursive; the drawback is that the method places high demands on the environment and can only be used in specific settings.
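The background-subtraction idea can be illustrated with a minimal sketch; the running-average background model, the blending factor and the threshold below are illustrative assumptions, not details from this patent:

```python
# Minimal background-subtraction sketch on 1-D "images" (lists of pixel values).
# The running-average model, alpha and threshold are illustrative choices.

def update_background(background, frame, alpha=0.1):
    """Recursively blend the new frame into the background estimate."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def moving_mask(background, frame, threshold=30):
    """Pixels differing from the background by more than threshold are 'moving'."""
    return [1 if abs(f - b) > threshold else 0 for b, f in zip(background, frame)]

# A static scene in which one pixel suddenly changes (a "moving object").
background = [100.0, 100.0, 100.0, 100.0]
frame      = [100,   100,   200,   100]

mask = moving_mask(background, frame)            # only the changed pixel is flagged
background = update_background(background, frame)  # background slowly absorbs it
```

The recursive update is what makes the model adapt to gradual lighting changes while still flagging sudden motion, which is also why the method demands a fairly stable environment.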
Finally, there are detection algorithms based on neural networks. One implementation uses a Raspberry Pi as an edge-computing platform with a one-stage deep-learning detector; after experimental comparison, the MobileNet-SSD algorithm was selected and optimized for target detection and recognition, yielding a smaller and faster model with little loss of accuracy and thus a real-time detection system. Another uses the YOLO v3 algorithm on a GoogLeNet convolutional neural network, with a Jetson TX2 as the control core and a tracking and obstacle-avoidance algorithm based on fuzzy logic, to recognize the turning amplitude of the target's motion and the positions of obstacles, achieving accurate motion control and obstacle avoidance. The drawback of both deep-learning approaches is their high hardware requirement: training on data demands substantial local computing power, which raises hardware cost, and the algorithms are complex.
Disclosure of Invention
The invention provides a camera-based moving object real-time tracking system that can detect and track a small number of moving objects in real time.
The invention further aims to provide a real-time tracking method for this camera-based moving object tracking system.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
a moving object real-time tracking system based on a camera comprises the camera, a cradle head matched with the camera, a steering engine matched with the cradle head, a single chip microcomputer and a power supply; the camera, the steering engine and the power supply are all connected with the single chip microcomputer.
Preferably, the camera is an OpenMV camera.
Preferably, the pan-tilt carrying the camera has two degrees of freedom, and the steering engines support motion in two dimensions; the pan-tilt is fitted with two steering engines, one responsible for horizontal movement and the other for vertical movement.
Preferably, the two steering engines are driven through an L298N motor driver powered by two 18650 batteries; the single-chip microcomputer is an Arduino; the steering engines and the camera are connected to the single-chip microcomputer through serial ports.
A real-time tracking method for a moving object comprises the following steps:
S1: after the camera stabilizes, it captures six images into a buffer;
S2: adjacent frames are differenced in turn, yielding five frame-difference images of the moving target;
S3: each frame-difference image is denoised and morphologically dilated to obtain a binary image;
S4: the center of the largest connected white block in each binary image is taken as a target position;
S5: the target positions are weighted and summed to estimate the current position of the moving target, and the rotation angle of the pan-tilt is calculated;
S6: the pan-tilt rotates so that the moving target stays at the center of the camera frame.
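The steps above can be sketched end-to-end in pure Python on tiny synthetic frames. The threshold, the weights, and the simplification of "largest connected white block" to the centroid of all white pixels are illustrative assumptions, not values from this patent:

```python
# End-to-end sketch: adjacent-frame differencing, binarization, a target
# centre per difference image, then a weighted average of the centres.

def frame_diff(a, b):
    return [[abs(x - y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def binarize(img, thr=50):
    return [[1 if p > thr else 0 for p in row] for row in img]

def largest_block_centre(binimg):
    """Centroid of all white pixels -- a stand-in for the largest connected block."""
    pts = [(r, c) for r, row in enumerate(binimg)
                  for c, p in enumerate(row) if p]
    if not pts:
        return None
    return (sum(r for r, _ in pts) / len(pts),
            sum(c for _, c in pts) / len(pts))

def track(frames, weights):
    centres = []
    for a, b in zip(frames, frames[1:]):          # five adjacent-frame differences
        c = largest_block_centre(binarize(frame_diff(a, b)))
        if c:
            centres.append(c)
    w = weights[:len(centres)]
    total = sum(w)
    return (sum(wi * r for wi, (r, _) in zip(w, centres)) / total,
            sum(wi * c for wi, (_, c) in zip(w, centres)) / total)

# Six 4x4 frames: a bright pixel moving one column right, then stopping.
def frame(col):
    f = [[0] * 4 for _ in range(4)]
    f[1][col] = 255
    return f

frames = [frame(min(i, 3)) for i in range(6)]
pos = track(frames, weights=[1, 1, 2, 3, 5])      # later observations weighted more
```

The weighted average biases the estimate toward recent observations, which is one plausible reading of the "weighted summation" in step S5.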
Furthermore, each time the pan-tilt is driven to rotate, the camera moves relative to the ground and the image changes rapidly and substantially, which heavily disturbs the frame-difference result. The method therefore waits for the image to stabilize before applying the frame-difference method to detect objects moving relative to the ground, avoiding recognition errors caused by the camera's own motion.
Further, after two frames are extracted from the buffer, they are first converted to grayscale, and the two grayscale images are then differenced according to the frame-difference method, i.e. their two-dimensional matrices are subtracted. Since this difference produces large noise at abrupt-change points and at otherwise irrelevant points, a mean-filtering algorithm is used to eliminate the noise.
Further, after mean filtering the image retains only the contour of the target. A threshold is set dynamically following the idea of background subtraction, and the image is binarized into a two-dimensional matrix of 0s and 1s. After binarization, the remaining noise is filtered out with a median-filtering algorithm, and the image is convolved with a kernel by a morphological dilation algorithm, which adds information to the image and makes the contour of the moving object more definite. The center of the connected white block obtained at this point is the estimate of the moving target's position, from which the target's trajectory is obtained.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
1. Image acquisition and recognition are implemented with OpenMV, and the dynamic target is detected through a series of image-processing algorithms; meanwhile the pan-tilt serves as the camera's carrier, and the Arduino controls the two orthogonal steering engines that drive it, ensuring that the camera follows the target in real time and provides a better viewing angle to gather more information;
2. The camera carries no light source; instead it is mounted on a pan-tilt with two rotation axes, so a recognition algorithm can quickly capture a dynamic target and keep it within the camera's field of view for a long time. This reduces power consumption and gives better concealment and mobility. In sparsely populated places, continuously tracking moving targets greatly improves the utilization of surveillance resources and, when an incident occurs, provides more information for case investigation and accident analysis. For example, if a habitual thief flees after stealing and ordinary surveillance footage provides no useful information for the pursuit, the mobile tracking camera designed in this patent can supply highly useful evidence for solving the case. The camera also has some wireless communication capability; in the coming 5G era, cameras could cooperate in a coordinated network to track targets continuously and form regional surveillance, contributing to social stability. Beyond civilian use, the invention could also be applied in the military field, for example mounted on an electromagnetic gun to strike a moving object within the camera's field of view accurately.
Drawings
FIG. 1 is a block diagram of the system of the present invention;
FIG. 2 is a flow chart of the method of the present invention;
FIGS. 3-5 show the processing results of the target-detection algorithm on a surveillance video; in each figure, the panels from left to right and top to bottom show the results of graying, frame differencing, mean filtering, binarization, median filtering and morphological dilation, respectively;
FIG. 6 shows the results of the system when tested under actual conditions.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
As shown in fig. 1, a camera-based moving object real-time tracking system comprises a camera, a pan-tilt carrying the camera, steering engines driving the pan-tilt, a single-chip microcomputer and a power supply; the camera, the steering engines and the power supply are all connected to the single-chip microcomputer.
The camera is an OpenMV camera. The pan-tilt carrying it has two degrees of freedom, with two steering engines supporting motion in two dimensions: one handles horizontal movement and the other vertical movement. The two steering engines are driven through an L298N motor driver powered by two 18650 batteries; the single-chip microcomputer is an Arduino; the steering engines and the camera are connected to the microcomputer through serial ports.
Since the camera acquires the image information, choosing one with good performance is crucial. Comparing ordinary cameras with the OpenMV camera, ordinary cameras are bulky, poorly extensible and have relatively complicated interfaces, so this patent uses the OpenMV camera for image capture. It is small and light and can be programmed directly in MicroPython, a Python-like language, to implement specific functions. Images are captured with the OV7725 sensor on board the OpenMV. The OpenMV also serves as the upper computer: its STM32H7 processor binarizes the images directly and performs operations such as frame differencing and dilation.
The pan-tilt carrying the camera has two degrees of freedom, supported by two steering engines that provide motion in two dimensions: one handles horizontal movement and the other vertical movement. The two steering engines are driven through an L298N motor driver powered by two 18650 batteries. A single-chip microcomputer controls the deflection angles of the two steering engines: it receives the signal sent by the OpenMV, solves for the corresponding PID values, and outputs two PWM waves that adjust the angles of the two steering engines accordingly.
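The per-axis control path described above (pixel error in, PID correction out, PWM pulse to the steering engine) can be sketched as follows. The sketch is written in Python for readability even though the patent runs this step on an Arduino; the gains and the 500-2500 µs / 0-180° servo pulse mapping are common hobby-servo conventions assumed for illustration, not values from this patent:

```python
# Per-axis control sketch: pixel error -> PID -> servo angle -> PWM pulse width.
# Gains and the 500-2500 us / 0-180 degree mapping are illustrative assumptions.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def angle_to_pulse_us(angle):
    """Map 0..180 degrees to a typical 500..2500 us servo pulse width."""
    angle = max(0.0, min(180.0, angle))
    return 500.0 + angle / 180.0 * 2000.0

pan = PID(kp=0.05, ki=0.0, kd=0.0)   # pure proportional gain for the example
angle = 90.0                         # servo starts centred
error = 40.0                         # target is 40 px right of the image centre
angle += pan.step(error, dt=0.02)    # one 50 Hz control tick
pulse = angle_to_pulse_us(angle)     # pulse width the PWM wave would carry
```

A second identical controller would drive the vertical-axis steering engine; in practice the Arduino would emit `pulse` as the high time of a 50 Hz PWM signal.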
The image-processing results are displayed on a notebook computer linked to the OpenMV through serial communication (in a commercial application the data could be transmitted to a computer over Bluetooth or Wi-Fi). Thanks to the flexibility and good computing performance of the OpenMV's STM32H743II ARM Cortex-M7 processor, images can be processed with low latency, the corresponding PID parameters computed, and the result transmitted to the pan-tilt to control its motion and track the target in real time.
As shown in fig. 2, a method for tracking a moving object in real time includes the following steps:
S1: after the camera stabilizes, it captures six images into a buffer;
S2: adjacent frames are differenced in turn, yielding five frame-difference images of the moving target;
S3: each frame-difference image is denoised and morphologically dilated to obtain a binary image;
S4: the center of the largest connected white block in each binary image is taken as a target position;
S5: the target positions are weighted and summed to estimate the current position of the moving target, and the rotation angle of the pan-tilt is calculated;
S6: the pan-tilt rotates so that the moving target stays at the center of the camera frame.
Experiments showed that each time the pan-tilt is driven to rotate, the camera moves relative to the ground, the image changes rapidly and substantially, and the frame-difference result is heavily disturbed. The system therefore waits for the image to stabilize, then applies the frame-difference method to detect objects moving relative to the ground, avoiding recognition errors caused by the camera's own motion. After stabilization the camera captures six images into a buffer. The processor then differences adjacent frames in turn, yielding five frame-difference images of the moving target, and denoises and morphologically dilates each to obtain a binary image. The center of the largest connected white block in each binary image is taken as a target position. A weighted sum of these target positions estimates the current position of the moving target, from which the rotation angle of the pan-tilt is calculated. Finally the pan-tilt rotates so that the moving target stays at the center of the camera frame.
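One plausible way to turn the estimated target position into a pan-tilt rotation command is to map the target's pixel offset from the image centre to an angle through the camera's field of view; the patent does not spell out this mapping, and the QVGA resolution (native to the OV7725) and the field-of-view figures below are assumptions for illustration:

```python
# Map the target's pixel offset from the image centre to pan/tilt angle deltas.
# Resolution and field-of-view values are illustrative assumptions.

IMG_W, IMG_H = 320, 240        # QVGA, as the OV7725 sensor commonly outputs
FOV_H, FOV_V = 60.0, 45.0      # assumed horizontal / vertical field of view (deg)

def rotation_delta(target_x, target_y):
    """Degrees the pan and tilt servos should move to centre the target."""
    dx = target_x - IMG_W / 2
    dy = target_y - IMG_H / 2
    pan_deg  = dx / IMG_W * FOV_H
    tilt_deg = dy / IMG_H * FOV_V
    return pan_deg, tilt_deg

pan, tilt = rotation_delta(240, 120)   # target right of centre, vertically centred
```

A positive `pan` would then be fed to the horizontal steering engine and a positive `tilt` to the vertical one; the linear mapping is a small-angle approximation that suffices near the image centre.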
Because the pan-tilt must reset when no target has been detected for a long time, and because an operator who notices an anomaly may wish to adjust the camera angle by hand, the system adds a manual mode alongside the tracking mode: the x- and y-axis angles are entered manually to steer the steering engines and adjust the viewing angle.
After two frames are extracted from the buffer, they are first converted to grayscale, and the two grayscale images are then differenced according to the frame-difference method, which is in fact a subtraction of two-dimensional matrices and is therefore very fast. Since the difference produces large noise at abrupt-change points and at otherwise irrelevant points, the simplest mean-filtering algorithm is used to remove it. Mean filtering was chosen mainly because, after graying, the target is large and few of its details are needed; a subsequent dilation step can restore some detail. Mean filtering is simple and fast and adds no extra overhead, making it well suited to highly integrated, low-power equipment.
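The graying, frame-difference and mean-filtering steps can be sketched on small matrices in pure Python; the standard luminance weights and the 3x3 averaging window are assumed, typical choices rather than values stated in the patent:

```python
# Grayscale two RGB frames, subtract them, then 3x3 mean-filter the difference.
# The luminance weights and the 3x3 window are assumed, conventional choices.

def to_gray(rgb_img):
    """Luminance grayscale of an image given as rows of (r, g, b) tuples."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_img]

def frame_diff(a, b):
    return [[abs(x - y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def mean_filter(img):
    """3x3 mean filter; border pixels average over the pixels that exist."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[y][x]
                    for y in range(max(0, i - 1), min(h, i + 2))
                    for x in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out

white, black = (255, 255, 255), (0, 0, 0)
f1 = [[black] * 3 for _ in range(3)]
f2 = [[black] * 3 for _ in range(3)]
f2[1][1] = white                     # one pixel "moved" between the frames

diff = frame_diff(to_gray(f1), to_gray(f2))
smoothed = mean_filter(diff)         # the single spike is spread and attenuated
```

An isolated noise spike is attenuated by roughly a factor of nine, which is exactly why a cheap mean filter suffices at this stage before thresholding.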
After mean filtering the image retains only the contour of the target. To simplify subsequent analysis and reduce latency, a threshold is set dynamically following the idea of background subtraction, and the image is binarized into a two-dimensional matrix containing only 0s and 1s. A binarized image is easy to segment, but to locate the target's contour more accurately and avoid being misled by impulse noise and the like, a median filter is applied: it removes the noise while protecting signal edges from blurring, and the median-filtering algorithm is simple, fast and easy to implement in hardware. Finally the image is convolved with a kernel by a morphological dilation algorithm, which adds information to the image and makes the contour of the moving object clearer, facilitating observation and tracking. The center of the connected white block obtained at this point is the estimate of the moving target's position, from which the target's trajectory is obtained.
Figs. 3-5 show the processing results of the target-detection algorithm on a surveillance video; in each figure, the panels from left to right and top to bottom show the results of graying, frame differencing, mean filtering, binarization, median filtering and morphological dilation, respectively. As the figures show, the proposed algorithm identifies and detects moving targets very effectively and can recognize and track targets in fairly complex environments.
Fig. 6 is a performance of the system in a test under an actual condition, a computer in the figure is recorded as an actual field environment, and a pan-tilt is recorded as a moving target obtained after algorithm processing, so that in the actual environment, OpenMV can be matched with a self-made two-direction pan-tilt to realize real-time detection and tracking of the target.
The same or similar reference numerals correspond to the same or similar parts;
the positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A camera-based moving object real-time tracking system, characterized by comprising a camera, a pan-tilt carrying the camera, steering engines driving the pan-tilt, a single-chip microcomputer and a power supply; the camera, the steering engines and the power supply are all connected to the single-chip microcomputer.
2. The camera-based moving object real-time tracking system of claim 1, wherein the camera is an OpenMV camera.
3. The camera-based moving object real-time tracking system of claim 2, wherein the pan-tilt carrying the camera has two degrees of freedom and the steering engines support two-dimensional motion; the pan-tilt is fitted with two steering engines, one responsible for horizontal movement and the other for vertical movement.
4. The camera-based moving object real-time tracking system of claim 3, wherein the two steering engines are driven through an L298N motor driver powered by two 18650 batteries.
5. The camera-based moving object real-time tracking system of claim 4, wherein the single-chip microcomputer is an Arduino.
6. The camera-based moving object real-time tracking system of claim 5, wherein the steering engines and the camera are both connected to the single-chip microcomputer through serial ports.
7. A moving object real-time tracking method using the camera-based moving object real-time tracking system of claim 6, comprising the steps of:
S1: after the camera stabilizes, it captures six images into a buffer;
S2: adjacent frames are differenced in turn, yielding five frame-difference images of the moving target;
S3: each frame-difference image is denoised and morphologically dilated to obtain a binary image;
S4: the center of the largest connected white block in each binary image is taken as a target position;
S5: the target positions are weighted and summed to estimate the current position of the moving target, and the rotation angle of the pan-tilt is calculated;
S6: the pan-tilt rotates so that the moving target stays at the center of the camera frame.
8. The method of claim 7, wherein each time the pan-tilt is driven to rotate, the camera moves relative to the ground and the image changes rapidly and substantially, heavily disturbing the frame-difference result; the method therefore waits for the image to stabilize before applying the frame-difference method to detect objects moving relative to the ground, avoiding recognition errors caused by the camera's own motion.
9. The method of claim 8, wherein after two frames are extracted from the buffer they are first converted to grayscale, and the two grayscale images are then differenced according to the frame-difference method, i.e. their two-dimensional matrices are subtracted; since the difference produces large noise at abrupt-change points and at otherwise irrelevant points, a mean-filtering algorithm is used to eliminate the noise.
10. The method of claim 9, wherein after mean filtering the image retains only the contour of the target; a threshold is set dynamically following the idea of background subtraction, and the image is binarized into a two-dimensional matrix of 0s and 1s; after binarization the remaining noise is filtered out with a median-filtering algorithm, and the image is convolved with a kernel by a morphological dilation algorithm, adding information that makes the contour of the moving object more definite; the center of the connected white block obtained at this point is the estimate of the moving target's position, from which the target's trajectory is obtained.
CN202110438641.5A 2021-04-22 2021-04-22 Moving object real-time tracking system and method based on camera Pending CN113194249A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110438641.5A CN113194249A (en) 2021-04-22 2021-04-22 Moving object real-time tracking system and method based on camera

Publications (1)

Publication Number Publication Date
CN113194249A true CN113194249A (en) 2021-07-30

Family

ID=76978410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110438641.5A Pending CN113194249A (en) 2021-04-22 2021-04-22 Moving object real-time tracking system and method based on camera

Country Status (1)

Country Link
CN (1) CN113194249A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202929488U (en) * 2012-12-06 2013-05-08 江西理工大学 Indoor human motion target automatic tracker
CN103826105A (en) * 2014-03-14 2014-05-28 贵州大学 Video tracking system and realizing method based on machine vision technology
CN103914856A (en) * 2014-04-14 2014-07-09 贵州电网公司输电运行检修分公司 Moving object detection method based on entropy
CN109799844A (en) * 2019-01-30 2019-05-24 华通科技有限公司 Dynamic object tracking system and method for a pan-tilt camera

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113724324A (en) * 2021-08-30 2021-11-30 杭州华橙软件技术有限公司 Pan-tilt control method and device, storage medium, and electronic device
CN113724324B (en) * 2021-08-30 2023-12-19 杭州华橙软件技术有限公司 Pan-tilt control method and device, storage medium, and electronic device
CN114010155A (en) * 2021-10-29 2022-02-08 中山大学 Automated animal pain testing system
CN114010155B (en) * 2021-10-29 2024-06-11 中山大学 Automated animal pain testing system

Similar Documents

Publication Publication Date Title
CN105894702B (en) Intrusion detection warning system and detection method based on multi-camera data fusion
CN109872483B (en) Intrusion alert photoelectric monitoring system and method
CN112016414A (en) Method and device for detecting high-altitude parabolic event and intelligent floor monitoring system
CN104378582B (en) Intelligent video analysis system and method based on cruising PTZ cameras
CN103824070B (en) Rapid pedestrian detection method based on computer vision
CA2674311C (en) Behavioral recognition system
CN110619276B (en) Anomaly and violence detection system and method based on unmanned aerial vehicle mobile monitoring
KR101839827B1 (en) Smart monitoring system applied with recognition technic of characteristic information including face on long distance-moving object
WO2001084844A1 (en) System for tracking and monitoring multiple moving objects
Ekinci et al. Silhouette based human motion detection and analysis for real-time automated video surveillance
CN105611244A (en) Method for detecting airport foreign object debris based on monitoring video of dome camera
CN112785628B (en) Track prediction method and system based on panoramic view angle detection tracking
Dietsche et al. Powerline tracking with event cameras
WO2003098922A1 (en) An imaging system and method for tracking the motion of an object
CN109948474A (en) AI thermal imaging all-weather intelligent monitoring method
CN113194249A (en) Moving object real-time tracking system and method based on camera
Leonida et al. A Motion-Based Tracking System Using the Lucas-Kanade Optical Flow Method
Fahn et al. Abnormal maritime activity detection in satellite image sequences using trajectory features
Landabaso et al. Robust tracking and object classification towards automated video surveillance
Perez-Cutino et al. Event-based human intrusion detection in UAS using deep learning
CN106339666B (en) Night monitoring method for human body targets
Martínez-de Dios et al. Towards UAS surveillance using event cameras
CN113740847A (en) Multi-radar cooperative detection alarm system based on humanoid target recognition
Syahbana et al. Early detection of incoming traffic for automatic traffic light signaling during roadblock using vanishing point-guided object detection and tracking
CN113014861A (en) Video monitoring method for forbidden area in station

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20210730)