CN102646309A - Intelligent video perimeter rail system and control method thereof - Google Patents


Info

Publication number
CN102646309A
Authority
CN
China
Prior art keywords
video
target
alarm
Prior art date
Legal status
Pending
Application number
CN201210154871XA
Other languages
Chinese (zh)
Inventor
黄鹏宇
周建雄
何跃凯
彭元华
郭振中
Current Assignee
CHENGDU BESTVISION TECHNOLOGY Co Ltd
Original Assignee
CHENGDU BESTVISION TECHNOLOGY Co Ltd
Priority date
Filing date
Publication date
Application filed by CHENGDU BESTVISION TECHNOLOGY Co Ltd
Priority to CN201210154871XA
Publication of CN102646309A
Legal status: Pending


Landscapes

  • Closed-Circuit Television Systems (AREA)
  • Alarm Systems (AREA)

Abstract

The invention provides an intelligent video perimeter fence system and a control method thereof, belonging to the technical field of video monitoring. The system comprises front-end subsystems, an intermediate transmission subsystem, and a back-end central monitoring subsystem. Each front-end subsystem consists of an automatic tracker and several fixed video alarms; the intermediate transmission subsystem consists of a wireless router, a switch, and transmission cable; the back-end central monitoring subsystem comprises a central server, a monitoring host, and a client. Devices within a front-end subsystem, as well as the front-end subsystems and the back-end central monitoring subsystem, are connected by wired, wireless, or combined wired-and-wireless transmission. Intelligent analysis modules are embedded in the video alarms and the automatic tracker, reducing the missed-alarm and false-alarm rates of traditional solutions. The system records and alarms only after intelligent analysis determines that an abnormal target has entered the protected area, which effectively reduces network bandwidth usage, lowers cost, and solves the technical problems of automatic alarming, automatic tracking, alarm-triggered recording, and real-time long-distance transmission and monitoring.

Description

Intelligent video perimeter fence system and control method thereof
Technical Field
The invention relates to the technical field of video monitoring, in particular to an intelligent video perimeter fence system for protecting the outermost perimeter of buildings and areas such as villas, warehouses, and airports.
Background
Traditional perimeter products generally adopt infrared correlation beams, or a camera combined with infrared correlation, and share the technical defect that boundary-crossing targets and interference targets cannot be analyzed and identified, causing a large number of false alarms and missed alarms. A camera with infrared correlation has no intelligent analysis function, so all video images must be transmitted, occupying a large amount of network bandwidth and storage space; meanwhile, the details of a target far from the camera are unclear, making it very difficult for monitoring personnel to identify the target. For example, in a perimeter camera scheme with correlation beams, an optical transmitter or receiver is connected to a camera, and pairs of cameras are installed along the perimeter to be protected; video is transmitted back to the control center through video cables and power lines. This scheme has no intelligent analysis or wireless transmission module, and in actual use environmental interference causes high false-alarm and missed-alarm rates. The prior art closest to the invention is the comparison document entitled "Intelligent target detail capturing device and method in a video monitoring system," Chinese patent application No. 200710012370.7. In that method, one or more fixed cameras perform overall monitoring of a monitored area while a pan-tilt camera captures target details in a set region; the device and method combine large-scene monitoring with automatic capture of specific target details, resolving the prior-art contradiction between monitoring range and target detail. But it does not solve the problems of automatic alarming, automatic tracking, alarm-triggered recording, and real-time long-distance transmission and monitoring.
The present invention differs from the prior art in that an improved automatic tracker and video alarm are designed and combined, so that clear images of the target can be obtained, false alarms and missed alarms are reduced, and subsequent evidence collection and analysis are facilitated. Meanwhile, intelligent algorithm modules such as target analysis are added to the video alarm and the automatic tracker; recording and alarming occur only after intelligent analysis judges that an abnormal target has entered the protected area, greatly reducing network bandwidth and storage space occupation, saving resources, and lowering cost. The software modules run on the video alarm and the automatic tracker, and the hardware adopts high-definition digital cameras and wireless transmission. Although the subject matter is similar to the prior art, the technical solutions are different.
Disclosure of Invention
The invention aims to provide an intelligent video perimeter fence system; another aim is to provide a control method for such a system, which takes an intelligent video analysis module as its core and combines the automatic tracker with the video alarms deployed in a perimeter defense area, so as to effectively solve problems of the traditional perimeter scheme such as false alarms and missed alarms, unclear target details, and large video storage requirements.
The technical measures for realizing the purpose of the invention are as follows: an intelligent video perimeter fence system comprises front-end subsystems, an intermediate transmission subsystem, and a back-end central monitoring subsystem. Each front-end subsystem comprises at least one automatic tracker and one or more fixed video alarms; the intermediate transmission subsystem comprises a wireless router, a switch, and transmission cable; and the back-end central monitoring subsystem comprises a central server, a monitoring host, a client, and a UPS (uninterruptible power supply). The devices within a front-end subsystem, as well as the front-end subsystems and the back-end central monitoring subsystem, are connected in a wired, wireless, or mixed transmission mode.
The video alarm is built around a core processor with a dual-core ARM + DSP architecture; the two cores communicate over a PCIe bus, and the alarm integrates WIFI, 3G, and RJ45 network interface modules and a digital camera with an embedded intelligent video analysis module.
The automatic tracker is composed of a pan-tilt system, a communication system, and a camera system, where the camera system is a high-definition digital camera with an embedded intelligent video analysis module.
Further, wireless signal transmission is adopted among the video alarms, the automatic tracker, and the monitoring center; the specific communication mode is any one, or a combination, of 3G, WIFI, Bluetooth, COFDM, FSK, Zigbee, and wired communication.
Further, the control method of the intelligent video perimeter fence system comprises the following steps:
5.1) After the system is powered on or reset, the video acquisition module and the storage module are initialized; the video alarm and the automatic tracker load the operating system and application program from their respective FLASH, complete chip initialization and peripheral hardware configuration, and enter the normal working state.
5.2) A video acquisition thread is created.
5.3) Wait for audio/video input; if input arrives, proceed to the next step, otherwise loop back to the previous step.
5.4) The video analysis module judges from the key information in the video source whether a target intrusion event exists; if so, proceed to the next step, otherwise loop back to the previous step.
5.5) Raise an alarm; if running on a video alarm, send a target tracking instruction to the automatic tracker, instructing it to track the target, and simultaneously start alarm video recording and sharp-picture snapshots.
5.6) Encode, compress, and locally store the audio/video, and send the alarm signal, alarm picture, and alarm video to the monitoring center.
5.7) When the video analysis thread is started, also start a client service thread and a watchdog program.
5.8) The client service thread remains in a waiting state; when a client sends a connection request, it responds immediately, calling the XML parsing module to parse the connection request and the command processing module to handle it.
5.9) Wait for audio/video data to send; if there is data to send, send it.
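The alarm-side steps above can be sketched as a simplified event loop. This is a hypothetical Python illustration, not the patent's embedded ARM + DSP firmware; every class and function name here is invented for the sketch:

```python
import queue

class VideoAlarm:
    """Simplified sketch of the video-alarm control flow (steps 5.1-5.9)."""

    def __init__(self, analyzer, tracker):
        self.frames = queue.Queue()   # 5.2) video acquisition buffer
        self.analyzer = analyzer      # 5.4) stand-in for the intelligent analysis module
        self.tracker = tracker        # 5.5) paired automatic tracker
        self.alarms = []              # 5.6) locally stored alarm records

    def process_one(self, timeout=0.1):
        # 5.3) wait for audio/video input; loop back if nothing arrives
        try:
            frame = self.frames.get(timeout=timeout)
        except queue.Empty:
            return False
        # 5.4) judge whether a target intrusion event exists in the frame
        intrusion = self.analyzer(frame)
        if intrusion is not None:
            # 5.5) alarm and instruct the tracker to lock onto the target
            self.tracker.track(intrusion["position"])
            # 5.6) encode/store locally and notify the monitoring center
            self.alarms.append({"frame": frame, "target": intrusion})
            return True
        return False

class DummyTracker:
    def __init__(self):
        self.commands = []
    def track(self, position):
        self.commands.append(position)

def toy_analyzer(frame):
    # Stand-in for the intrusion-detection algorithm.
    return {"position": frame["pos"]} if frame["motion"] else None

tracker = DummyTracker()
alarm = VideoAlarm(toy_analyzer, tracker)
alarm.frames.put({"motion": False, "pos": None})
alarm.frames.put({"motion": True, "pos": (120, 80)})
first = alarm.process_one()   # no intrusion in the first frame
second = alarm.process_one()  # intrusion detected and handed to the tracker
```

In the real system the analysis runs on the DSP core and the tracking instruction travels over the wireless link; here both are stubbed so the control flow is visible.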
Furthermore, the control method of the intelligent video perimeter fence system comprises the following steps for starting the intrusion detection algorithm, confirming target intrusion, and tracking the target:
6.1) Acquire the number, position parameters, and focal-length parameters of each monitoring area.
6.2) Wait for a target tracking instruction; if none arrives, loop back to the previous step; if a target tracking instruction arrives, proceed to the next step.
6.3) Rotate the pan-tilt and position the target.
6.4) Perform video analysis.
6.5) Judge whether a target intrusion exists; if not, loop back to the previous step; if the target has intruded, proceed to the next step.
6.6) Record, capture, track, and send images of the intruding target.
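Steps 6.1)–6.6) can likewise be sketched in miniature; preset positions, pan-tilt slewing, and detection are stubbed out, and every name here is illustrative rather than taken from the patent:

```python
class AutoTracker:
    """Sketch of the tracker loop: slew to a preset, confirm, then record."""

    def __init__(self, presets):
        # 6.1) zone number -> (pan-tilt angle, focal length) preset table
        self.presets = presets
        self.aimed_at = None
        self.recordings = []

    def handle_instruction(self, zone, detect):
        # 6.2)/6.3) on a tracking instruction, slew the pan-tilt to the preset
        self.aimed_at = self.presets[zone]
        # 6.4)/6.5) run video analysis to confirm the intrusion
        if detect(self.aimed_at):
            # 6.6) record, capture, track and upload the intruding target
            self.recordings.append(zone)
            return True
        return False

# Usage: two illustrative zone presets; detection is stubbed to "confirmed".
presets = {1: (30.0, 2.5), 2: (55.0, 3.0)}
t = AutoTracker(presets)
confirmed = t.handle_instruction(2, detect=lambda pose: True)
```

The preset-position table corresponds to the sub-segment calibration described later in the detailed description.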
Compared with the prior art, the invention has the advantages and beneficial effects that:
1. The automatic tracker is combined with the video alarm, so clear images of targets can be obtained, false alarms and missed alarms are reduced, and later evidence collection and analysis are facilitated.
2. The perimeter fence intelligent video analysis module uses advanced algorithms. By adopting innovative perimeter intrusion detection, false-alarm suppression, and moving-target identification algorithms, performance remains extremely high even under severe environments and lighting conditions, effectively reducing the missed-alarm and false-alarm rates of traditional solutions.
3. Resources are saved and cost is reduced. Because automatic intelligent detection and alarm-triggered recording are adopted, network bandwidth and storage space occupation are greatly reduced.
4. The equipment is convenient to use, install, and maintain. With wireless transmission there are no transmission or control cables, installation has no special requirements, wiring is simple, and the system is easy to use and maintain.
Drawings
Fig. 1 is a schematic diagram of the intelligent video perimeter system architecture of the present invention.
Fig. 2 is a schematic diagram of the video alarm/automatic tracker circuit of the present invention.
FIG. 3 is an embodiment of an intelligent video perimeter system according to the present invention.
Fig. 4 is a schematic structural diagram of a software module of the video alarm according to the present invention.
Fig. 5 is a schematic diagram of a software flow of the video alarm according to the present invention.
FIG. 6 is a block diagram of the flow of the auto-tracker software according to the present invention.
Fig. 7 is a schematic diagram of the monitoring scene used for three-dimensional scene modeling in the intelligent video analysis module.
FIG. 8 is a schematic diagram of the calibration interaction of the alarm and tracker.
Interpretation of terms:
perimeter: the outermost boundary of buildings or areas such as villas, warehouses, and airports.
A perimeter defense area: a perimeter fortification area with a straight-line length of no more than 100 meters.
Perimeter fence: a perimeter protection system composed of a plurality of perimeter defense areas.
An abnormal target: a suspicious person lingering near the perimeter, scouting it, breaching a wall, or crossing or climbing the perimeter.
Abbreviations:
Bluetooth: a radio technology supporting short-range communication between devices.
COFDM: Coded Orthogonal Frequency Division Multiplexing.
FSK: Frequency Shift Keying, a digital modulation technique that transmits information by keying the carrier frequency with the discrete values of the baseband digital signal.
SD card: Secure Digital Memory Card, a memory device based on semiconductor flash memory.
VPP: video preprocessing.
WIFI: Wireless Fidelity, a wireless networking technology.
Zigbee: a short-range, low-power wireless communication technology based on the IEEE 802.15.4 protocol.
Detailed Description
Referring to figs. 1, 2 and 3, the intelligent video perimeter fence system of the present invention includes front-end subsystems, an intermediate transmission subsystem, and a back-end central monitoring subsystem. Each front-end subsystem comprises at least one automatic tracker and a plurality of fixed video alarms. In this embodiment, a defense line is formed by 3 video alarms and one automatic tracker monitoring perimeter defense area 1; perimeter defense areas 2 and 3 through N are configured the same as perimeter defense area 1, and together they form the front-end subsystems. The intermediate transmission subsystem comprises a wireless router, a switch, and transmission cable; a switch is adopted in this embodiment. The transmission subsystem supports the 3G, WIFI, Bluetooth, COFDM, FSK, Zigbee, and TCP/UDP/IP communication protocols, and wired, wireless, or mixed transmission is adopted among the front-end devices, and between the front-end and back-end devices, according to actual conditions.
The back-end central monitoring subsystem comprises a central server, a monitoring host, a client, and a UPS (uninterruptible power supply). The monitoring center centrally manages all video images in the perimeter defense areas, receives early-warning and alarm signals from the defense areas in real time, and triggers alarms such as prompt tones, acousto-optic warnings, loudspeaker announcements, short messages, and multimedia messages; it also records the alarm images and alarm videos uploaded by each defense area. Authorized monitoring personnel can monitor or play back images from any one or more defense areas in real time, and control and operate front-end equipment such as the video alarms.
The video alarm's core processor adopts a dual-core ARM + DSP architecture: the ARM is the HiSilicon Hi3516 processor, and the DSP is a Texas Instruments (TI) multimedia processing chip. The two cores communicate over a PCIe bus, and WIFI, 3G, and RJ45 network interface modules and a digital camera with an embedded intelligent video analysis module are built in.
The automatic tracker consists of a pan-tilt system, a communication system, and a camera system, the camera system being a high-definition digital camera with an embedded intelligent video analysis module. The tracker responds in real time to control instructions transmitted by the video alarm, zooms and rotates, automatically tracks and locks onto the target according to its position and trajectory information, and records video of the target. Compared with the video alarm, the automatic tracker requires a higher camera resolution. In this embodiment, the automatic tracker is a high-definition intelligent high-speed dome camera with omnidirectional pan-tilt movement, adaptive zoom control, and a resolution above 1 megapixel, with a video analysis module embedded in the camera. The camera system is the core of the tracker; the processor adopts a master-slave dual-core ARM + DSP architecture, and the design and selection of the main components are as follows:
lens: high-definition lenses with more than 100 pixels.
A Sensor: 100 ten thousand pixels SenSor OV10633 of the United states OV company
A CPU: ARM selects processor HI3516 from Haesi semiconductor; the DSP is selected from TMS320DM648 of a processing chip of Texas Instruments (TI) in America.
Communication interface: WIFI, 3G and RJ45 network interfaces are built in. It should be noted that the video alarm and the automatic tracker have the same circuit structure, but the camera configuration is different; firstly, the cameras have different resolutions, and the automatic tracker at least adopts 100 ten thousand pixels; and the other is an automatic tracker with a cloud platform device.
FIG. 4 is a schematic diagram of the software module structure of the intelligent video perimeter fence system according to the present invention. The software mainly implements audio/video acquisition, encoding, storage, video analysis, and network sharing. The intelligent video analysis module is a software module stored in FLASH memory (the video alarm's intelligent analysis module resides in the alarm's own FLASH) and mainly comprises the following two parts:
Three-dimensional scene modeling: mainly covers modeling of the area where the wall or fence is located, and mutual calibration of the alarm and the tracker. Performing image analysis with the three-dimensional scene modeling data improves detection and tracking accuracy to a great extent while reducing false alarms.
Intrusion detection: runs in the video alarm and is mainly used to detect illegal intruders in the scene, obtain their position parameters, notify the automatic tracker that an intrusion event has occurred and needs confirmation, and upload the intruders' position parameters to the tracker.
Three-dimensional scene modeling mainly models the area where the wall or fence is located.
Referring to fig. 7, which shows the monitoring scene of a single alarm: the outer wall surface, the top of the outer wall, and the road surface are visible within the monitoring field of view.
A user draws a region of interest through the human-computer interaction interface, comprising a wall area W, a ground area G, the intersection line L of the wall and the ground, and a line segment V perpendicular to the ground. Assuming the pedestrian height is H, coordinate point pairs $(h_i(x, y), f_i(x, y))$ of the head top and foot bottom of N groups of pedestrians are selected uniformly from near to far within the region of interest, with N taken between 3 and 5.
Assuming that the height of an object in the image varies linearly in the y-direction and is essentially constant in the x-direction, equation (1) is satisfied:
h(x,y)=ky+b (1)
where h(x, y) denotes the height in the image of a pedestrian of actual height H whose foot coordinates are (x, y), and y is the ordinate of the foot. At least two pairs of head-top and foot coordinates are needed to solve for the parameters k and b. In practice, because of measurement noise and modeling deviation, multiple groups of measurement data are generally used to fit the parameters, reducing these errors to some extent; k and b are solved as follows:
$$k_{ls} = \frac{N\sum_{i=1}^{N} h_i y_i - \sum_{i=1}^{N} h_i \sum_{i=1}^{N} y_i}{N\sum_{i=1}^{N} y_i^2 - \left(\sum_{i=1}^{N} y_i\right)^2} \qquad (2)$$

$$b_{ls} = \frac{\sum_{i=1}^{N} h_i \sum_{i=1}^{N} y_i^2 - \sum_{i=1}^{N} y_i \sum_{i=1}^{N} h_i y_i}{N\sum_{i=1}^{N} y_i^2 - \left(\sum_{i=1}^{N} y_i\right)^2} \qquad (3)$$
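The parameters k and b of equation (1) can be fitted by ordinary least squares. A minimal plain-Python sketch on synthetic measurements (the data values and the function name are illustrative):

```python
# Least-squares fit of h(x, y) = k*y + b from N head/foot measurements:
# ys are foot ordinates, hs are the corresponding pedestrian pixel heights.
def fit_height_model(ys, hs):
    n = len(ys)
    sy = sum(ys)
    sh = sum(hs)
    syy = sum(y * y for y in ys)
    shy = sum(h * y for h, y in zip(hs, ys))
    denom = n * syy - sy * sy
    k = (n * shy - sh * sy) / denom            # slope
    b = (sh * syy - sy * shy) / denom          # intercept
    return k, b

# Synthetic data lying exactly on h = 0.5*y + 10: the fit recovers k and b.
ys = [100, 200, 300, 400]
hs = [0.5 * y + 10 for y in ys]
k, b = fit_height_model(ys, hs)
```

With noisy real measurements the same formulas return the best-fitting line rather than an exact recovery, which is why the patent recommends several measurement groups.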
According to the invariance of the cross ratio under the camera's perspective projection, formula (4) holds; consequently, for an object at a given position in the scene, the ratio of its actual height to its pixel height is constant, as shown in equation (5):
$$\frac{H_o}{H_c} \approx \frac{h}{h_{hor}} \qquad (4)$$

$$\frac{H_o}{h} \approx R \qquad (5)$$
where $H_o$ denotes the height of the target in the actual scene, h denotes the height of the target in the image, $H_c$ denotes the mounting height of the camera, and $h_{hor}$ denotes the height of the horizontal vanishing line in the image.
Combining formula (1) and formula (5), the actual height of a target with a given pixel height at any position in the region of interest in the image can be obtained, as shown in formula (6):
$$H_o = \frac{H \times h_o}{h(x, y)} \qquad (6)$$
The actual height of a moving target in the scene can thus be calculated through formula (6), where $h_o$ is the target's pixel height. The user can set a target height range of interest; moving targets whose actual heights fall outside the set range are filtered out, which greatly reduces false alarms.
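The height filter built on formula (6) can be sketched as follows; the reference pedestrian height, the calibration values, and the filter bounds are illustrative assumptions, not values from the patent:

```python
# Formula (6): a target of pixel height h_o with foot ordinate y has actual
# height H_o = H * h_o / h(x, y), where h(x, y) = k*y + b is the calibrated
# pixel height of a reference-height pedestrian at that position.
H_REF = 1.7          # assumed reference pedestrian height in metres
K, B = 0.5, 10.0     # assumed parameters from the least-squares calibration

def actual_height(pixel_height, foot_y):
    model_h = K * foot_y + B   # pixel height of an H_REF-tall person here
    return H_REF * pixel_height / model_h

def passes_height_filter(pixel_height, foot_y, lo=1.2, hi=2.2):
    # Suppress false alarms: keep only targets whose estimated actual
    # height falls in the range of interest set by the user.
    return lo <= actual_height(pixel_height, foot_y) <= hi

h_person = actual_height(pixel_height=60, foot_y=100)  # matches H_REF here
```

A small animal at the same position produces a much smaller pixel height and is rejected by the filter, which is the false-alarm reduction the paragraph describes.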
The following case deserves special mention: when an intruder climbs over the enclosure (i.e., the target appears in the wall area W), the true height of the target must be corrected because of the ambiguity of two-dimensional imaging. The vertical projection point P of the target's foot coordinates f(x, y) onto the ground is found by drawing, through f(x, y) in the image, a straight line parallel to the vertical segment V and taking its intersection with the intersection line L. Point P is then taken as the target's foot coordinates and substituted into formula (6) to calculate the actual height of the target.
Referring to fig. 8, the calibration interaction between the alarm and the tracker is illustrated. Through mutual calibration, when the alarm sends an alarm signal and uploads the target position information, the tracker can adjust its monitoring angle and camera focal length according to that information and aim its field of view at the area where the intrusion occurred. The figure is a top view, with the alarm mounted on top of the enclosure and the tracker mounted inside the enclosure or fence. The monitored enclosure or fence is divided along its top into several sub-segments, and a calibration object such as a red flag is placed at the end point of each sub-segment. The tracker is configured with a number of preset positions, chosen so that when it moves to each preset position the midpoint of the corresponding sub-segment lies at the center of its field of view, while the field of view also covers the monitoring range of the two adjacent sub-segments. The alarm records the coordinates of the calibration objects within its own field of view and, using the established three-dimensional model of the enclosure, obtains the image-region distribution of each sub-segment; when an intrusion event occurs, it sends the tracker the position information of the sub-segment where the event occurred.
The intelligent tracking algorithm of the automatic tracker is designed as follows. The monitoring range of the alarm is normally fixed, and a target at the far end of the field of view occupies only a small pixel area. Accurately detecting such small far-end targets requires high detection sensitivity, but raising the sensitivity also increases false alarms. The tracker, whose monitoring range is large, can zoom in on far-end scenery by moving its angle and changing its focal length, so the tracker is used to confirm the target. After the target is confirmed, an alarm signal is sent to the center and the target is tracked; the tracker's angle is corrected according to position-coordinate feedback so that the target always remains at the center of the tracker's field of view.
When the tracker receives an intrusion alarm signal, it moves to the corresponding preset position according to the target position signal provided by the alarm and starts target detection, using a method similar to the motion detection in the alarm. When a moving object is detected, the target intrusion is confirmed, an alarm signal is sent, and target tracking begins. A mean-shift tracking algorithm based on texture and color features is employed here.
Establishing the target model and measuring similarity: the texture feature is the Local Binary Pattern (LBP). LBP is an effective texture description operator with strong texture discrimination capability and insensitivity to brightness changes; it is defined as formula (22):
$$LBP_{P,R}(x_c, y_c) = \sum_{p=0}^{P-1} s(g_p - g_c) \times 2^p \qquad (22)$$
where
$$s(x) = \begin{cases} 1 & x \ge 0 \\ 0 & x < 0 \end{cases}$$
R denotes the distance between the central pixel and its neighborhood pixels, P the number of neighborhood pixels, $g_c$ the gray value of the central pixel, and $g_p$ the gray value of the p-th equally spaced point on the ring of radius R centered at the central pixel. Here P = 8 and R = 1 are taken, i.e., the 8 neighborhood pixels are considered. An LBP histogram is formed by counting the LBP value of each pixel in the region; here the LBP histogram is quantized to 32 bins.
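A minimal sketch of the $LBP_{8,1}$ operator of formula (22) and its 32-bin histogram, in plain Python over a small grayscale patch (interior pixels only; the neighbor ordering is one arbitrary but fixed choice):

```python
# LBP_{8,1}: each of the 8 neighbours on the R=1 ring contributes 2^p
# when its gray value is >= the centre's, per s(g_p - g_c) in formula (22).
def lbp_8_1(img, x, y):
    gc = img[y][x]
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # fixed order p = 0..7
    code = 0
    for p, (dy, dx) in enumerate(ring):
        if img[y + dy][x + dx] >= gc:           # s(g_p - g_c) = 1
            code += 1 << p
    return code

def lbp_histogram(img, bins=32):
    # Quantize the 256 possible LBP codes into 32 bins, as the patent
    # does for the texture dimension of the feature histogram.
    hist = [0] * bins
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_8_1(img, x, y) * bins // 256] += 1
    return hist

patch = [[10, 10, 10],
         [10, 20, 10],
         [10, 10, 10]]
code = lbp_8_1(patch, 1, 1)   # bright centre: all neighbours darker
```

Because only the sign of $g_p - g_c$ matters, adding a constant brightness offset to the whole patch leaves every code unchanged, which is the brightness insensitivity the text mentions.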
The color features are the H component, reflecting the target's color, and the V component, reflecting its brightness; each color component is quantized to 32 levels.
The final target feature is expressed as a three-dimensional feature histogram comprising two color dimensions and one texture dimension, with each dimension quantized to 32 levels.
Here a weighted feature histogram, which reflects the statistical features of the target region, is selected as the target model; the kernel function is the Epanechnikov kernel, as shown in formula (23):
$$K_E(x) = \begin{cases} c\,(1 - \|x\|^2) & \|x\| \le 1 \\ 0 & \text{otherwise} \end{cases} \qquad (23)$$
The target model is then created by equation (24):

$$p_{x_0}(n)=\frac{1}{C}\sum_{i=1}^{N}k\!\left(\frac{\|x_i-x_0\|}{h}\right)\delta[h(x_i)-n]\qquad(24)$$
where $C$ is a normalization coefficient; $p_{x_0}(n)$ denotes the weight of the $n$-th histogram bin for the region centered at $x_0$; $N$ is the number of pixels in the region; $k(\cdot)$ is the Epanechnikov kernel function; $x_i$ is any point in the region; $\|x_i-x_0\|$ is the distance from $x_i$ to $x_0$; $\delta[\cdot]$ is the unit impulse function; and $h(x_i)$ is the bin index of $x_i$ in the three-dimensional feature histogram.
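A sketch of the weighted-histogram target model of equation (24), assuming per-pixel bin indices and coordinates are already available; the Epanechnikov profile (up to its constant) and the $1/C$ normalization are the only moving parts, and all names are illustrative:

```python
import numpy as np

def target_model(bin_idx, coords, x0, h, m):
    """Weighted feature histogram p_{x0}(n) of equation (24).

    bin_idx : per-pixel bin index h(x_i) in the 3-D feature histogram
    coords  : per-pixel coordinates x_i, shape (N, 2)
    x0      : region center; h : kernel bandwidth; m : number of bins
    """
    d2 = np.sum((coords - x0) ** 2, axis=1) / h ** 2
    w = np.where(d2 <= 1.0, 1.0 - d2, 0.0)     # Epanechnikov k(.), up to c
    p = np.bincount(bin_idx, weights=w, minlength=m)
    return p / p.sum()                          # the 1/C normalization
```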
For the similarity measure, the commonly used Bhattacharyya coefficient is selected:

$$\rho_{x_0}[p,q]=\sum_{n=1}^{m}\sqrt{p_{x_0}(n)\,q(n)}\qquad(25)$$
where $p_{x_0}(n)$ is the weighted feature histogram built at $x_0$ as the target model, and $q(n)$ is a pre-established template; the greater $\rho$, the higher the similarity. $m$ denotes the number of histogram bins.
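Equation (25) in code form (a sketch; both histograms are assumed normalized to sum to 1):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of equation (25) for two normalized histograms."""
    return float(np.sum(np.sqrt(p * q)))
```

For identical normalized histograms the coefficient is 1; for histograms with disjoint support it is 0.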
Mean shift tracking: target tracking comprises three parts: position prediction, mean shift search, and feature update.
Position prediction is realized by gray-level template matching and yields the approximate position of the target in the current frame; the accurate position is then obtained by mean shift search.
The mean shift position search is shown in equation (26):

$$\hat{y}_1=\frac{\sum_{i=1}^{W\times H} x_i\,\omega_i\, g\!\left(\left\|\frac{\hat{y}_0-x_i}{h}\right\|^2\right)}{\sum_{i=1}^{W\times H} \omega_i\, g\!\left(\left\|\frac{\hat{y}_0-x_i}{h}\right\|^2\right)}\qquad(26)$$

$$\omega_i=\sum_{u=1}^{m}\delta[h(x_i)-u]\sqrt{\frac{q_u}{p_u(y)}}\qquad(27)$$
where $W$ and $H$ are the width and height of the target template; $\hat{y}_0$ is the geometric center coordinate of the current target; $x_i$ is a sample point; $g(\cdot)$ is the derivative of the kernel profile; $h$ is the kernel bandwidth; and $\omega_i$ are the weighting coefficients.
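One iteration of equations (26)-(27), sketched under the assumption of an Epanechnikov kernel, for which $g(\cdot)$ is constant inside the bandwidth and the update reduces to a weighted centroid of the in-window samples; all names are illustrative:

```python
import numpy as np

def mean_shift_step(coords, bin_idx, p_y, q, y0, h):
    """One iteration of equation (26) with weights omega_i from equation (27).

    coords  : sample point coordinates x_i, shape (N, 2)
    bin_idx : per-pixel bin index h(x_i)
    p_y, q  : candidate-region histogram and target template histogram
    y0      : current center estimate; h : kernel bandwidth
    """
    w = np.sqrt(q[bin_idx] / np.maximum(p_y[bin_idx], 1e-12))   # omega_i
    d2 = np.sum((coords - y0) ** 2, axis=1) / h ** 2
    g = (d2 <= 1.0).astype(float)      # g(.) for the Epanechnikov kernel
    num = np.sum(coords * (w * g)[:, None], axis=0)
    den = np.sum(w * g)
    return num / np.maximum(den, 1e-12)    # new center estimate y_1
```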
Updating the target model is necessary for stable and accurate tracking. Blind updating, however, may mix external interference into the model, so that it no longer fully describes the target; over time the model then drifts further and further from the true target, reducing tracking accuracy.
The model update strategy is shown in equation (28):

if $|\rho_k-\rho_{k-1}|>\rho_{k-1}\times 0.9$ AND $\rho_k>0.9$, then
$$q_i=q_{i-1}\times 0.95+p_k\times(1-0.95)\qquad(28)$$

where $\rho_k$ is the Bhattacharyya coefficient at the best position of the $k$-th frame, $q_i$ is the target model after the $i$-th update, and $p_k$ is the model of the target acquired in the $k$-th frame.
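The conditional update of equation (28) as a minimal sketch (pure Python, list-based histograms; the function and parameter names are assumptions):

```python
def update_model(q_prev, p_k, rho_k, rho_prev):
    """Apply equation (28): refresh the model only on a strong, markedly
    changed match, guarding against drift from updating on every frame."""
    if abs(rho_k - rho_prev) > rho_prev * 0.9 and rho_k > 0.9:
        return [0.95 * q + (1 - 0.95) * p for q, p in zip(q_prev, p_k)]
    return q_prev
```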
FIG. 5 is a software flow chart of the video alarm and describes the control method of the intelligent video perimeter fence system, which comprises the following steps:
5.1) After the system is powered on or reset, the video acquisition module and the storage module are initialized; the video alarm and the automatic tracker load the operating system and application program from their respective FLASH, complete chip initialization and peripheral hardware configuration, and enter the normal working state.
5.2) Create a video acquisition thread.
5.3) Wait for audio/video input; if input arrives, go to the next step, otherwise loop back to the previous step.
5.4) The video analysis module judges whether a target intrusion event exists according to the key information in the video source; if so, go to the next step, otherwise loop back to the previous step.
5.5) Raise an alarm and send a target tracking instruction to the automatic tracker, instructing it to track the target; simultaneously start alarm video recording and clear-picture snapshot, and send the early-warning signal, alarm picture and alarm video to the monitoring center.
5.6) Encode and compress the audio/video, store it locally, and send an alarm signal to the monitoring center.
5.7) When the video acquisition thread is created and the video analysis thread is started, also start the client service thread and the watchdog program.
5.8) The client service thread stays in a waiting state; when a client sends a connection request, it responds immediately, calling the XML parsing module to parse the request name and the command processing module to handle it.
5.9) Wait for audio/video data to send; if there are data, send them.
The working process is as follows: after the embedded operating system boots, the application program starts; video acquisition initialization and storage management initialization are performed in turn; the video acquisition, video analysis and client service threads are created; and the watchdog program is started at the same time.
When audio/video data arrive, the acquisition thread immediately calls the video analysis module to detect whether an abnormal target is present within the perimeter fence; if so, it calls the encoding (compression) module and the storage module in turn to encode and compress the audio/video data and store it on the SD card.
The client service thread stays in a waiting state; when a client sends a connection request, it responds immediately, calling the XML parsing module to parse the request name and the command processing module to handle it.
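The producer/consumer relationship between the acquisition and analysis threads can be sketched as below; this is an illustrative model of the flow only, not the patent's embedded Linux code, and all names are assumptions:

```python
import queue
import threading

def acquisition_thread(source, frame_q):
    """Producer: push captured frames, then an end-of-stream marker."""
    for frame in source:
        frame_q.put(frame)
    frame_q.put(None)

def analysis_thread(frame_q, detect, on_alarm, processed):
    """Consumer: run intrusion detection on each frame; alarm on a hit."""
    while True:
        frame = frame_q.get()
        if frame is None:
            break
        if detect(frame):
            on_alarm(frame)   # e.g. start recording/snapshot, notify the center
        processed.append(frame)
```

A bounded queue between the two threads gives the same back-pressure behavior as the embedded pipeline: acquisition blocks when analysis falls behind.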
FIG. 6 is the software flow chart of the automatic tracker, i.e. the steps of starting the intrusion detection algorithm, confirming the target intrusion, and tracking the target in response:
6.1) Acquire the number, position parameter and focal-length parameter of each monitoring area.
6.2) Wait for a target tracking instruction; if none, loop back to the previous step; if a target tracking instruction arrives, go to the next step.
6.3) Rotate the pan-tilt head and position the target.
6.4) Perform video analysis.
6.5) Judge whether a target intrusion exists; if not, loop back to the previous step; if a target has intruded, go to the next step.
6.6) Record, snapshot and track the intruding target, and send the images.
After the target response tracking module is started, it first obtains the serial numbers of all set areas in the perimeter defense zone together with the corresponding position and focal-length parameters. On receiving a tracking instruction from a video alarm, it looks up the position and focal-length parameters for the target's area code given in the instruction, controls the pan-tilt head through the control port to rotate, adjusts the focal length and zooms the lens to lock onto the target, and simultaneously analyzes the target's behavior. If the target is intrusive, it immediately starts video recording, tracks and snapshots the target, and transmits the captured images to the alarm management center. Note that the operating system of the automatic tracker software module is still embedded Linux; apart from the added target tracking response module, its application software modules are identical to those of the video alarm.
The system work flow is as follows:
1. After the system is powered on or reset, the video alarm and the automatic tracker load the operating system and application program from their respective FLASH, complete chip initialization and peripheral hardware configuration, and enter the normal working state.
2. The video alarm continuously collects video images in the perimeter defense area through the video collection module and sends the video images to the video analysis module for analysis and processing.
3. The video analysis module judges abnormal events according to the key information in the video source: if the target's motion and behavior characteristics violate the rules set in the alarm, it immediately acquires the target position and motion track information and sends them to the automatic tracker, instructing it to track the target; at the same time it starts alarm video recording and clear-picture snapshot, and sends the early-warning signal, alarm picture and alarm video to the monitoring center.
4. After receiving a target tracking instruction, the automatic tracker controls its pan-tilt head to rotate in all directions with adaptive zoom control according to the target position and motion track information in the instruction, and zooms the lens to lock onto the target; it starts the intrusion detection algorithm, confirms the target intrusion, and tracks the target, while simultaneously recording video, snapshotting clear pictures, and sending the alarm signal, alarm picture and alarm video to the monitoring center.
5. After receiving an early warning or alarm, the monitoring center automatically switches the current picture to the alarm picture and issues alarms (warning tone, sound-and-light alarm, horn, short message and multimedia message); monitoring personnel confirm the alarm condition and take corresponding measures. If necessary, the automatic tracker of the defense area can be controlled manually to search for or track the target.

Claims (6)

1. An intelligent video perimeter fence system comprising a front-end subsystem, an intermediate transmission subsystem and a back-end central monitoring subsystem; characterized in that the front-end subsystem comprises at least one automatic tracker and one or more fixed video alarms, the intermediate transmission subsystem comprises a wireless router, a switch and transmission cables, and the back-end central monitoring subsystem comprises a central server, a monitoring host, a client and a UPS (uninterruptible power supply); the devices of the front-end subsystem, and the front-end subsystem and the back-end central monitoring subsystem, are connected in a wired, wireless, or mixed wired/wireless transmission manner.
2. The intelligent video perimeter fence system as claimed in claim 1, wherein the video alarm comprises a core processor with an "ARM + DSP" dual-core architecture, the two cores communicating via a PCIE bus, and a digital camera with built-in WIFI, 3G and RJ45 network interface modules and an embedded intelligent video analysis module.
3. The intelligent video perimeter fence system of claim 1 wherein said auto-tracker is comprised of a pan-tilt system, a communications system, and a camera system, the camera system being a high-definition digital camera with an embedded intelligent video analysis module.
4. The intelligent video perimeter fence system of claim 1, wherein the video alarm, the automatic tracker, and the monitoring center communicate by any one, or a combination, of the following methods: 3G, WIFI, Bluetooth, COFDM, FSK, Zigbee, or wired transmission.
5. The method of controlling an intelligent video perimeter fence system as claimed in claim 1, further comprising the steps of:
5.1) after the system is powered on or reset, initializing a video acquisition module, initializing a storage module, loading the system and an application program from respective FLASH by a video alarm and an automatic tracker, completing initialization of a chip and configuration of peripheral hardware, and entering a normal working state;
5.2) creating a video acquisition thread;
5.3) waiting for audio/video input; if input arrives, entering the next step, and if not, looping back to the previous step;
5.4) the video analysis module judges whether a target intrusion event exists according to the key information in the video source, if so, the next step is carried out, and if not, the LOOP LOOPs to the previous step;
5.5) alarming: if the video alarm is triggered, sending a target tracking instruction to the automatic tracker, instructing it to track the target, and simultaneously starting alarm video recording and clear-picture snapshot;
5.6) audio and video coding, compression and local storage are carried out, and an alarm signal, an alarm picture and an alarm video can be sent to a monitoring center;
5.7) creating a video acquisition thread, starting a client service thread and starting a watchdog program at the same time when the video analysis thread is started;
5.8) the client service thread is always in a waiting state, when the client sends a connection request, the client immediately responds, and simultaneously, the XML analysis module is called to analyze the client connection request name and the command processing module is called to process the client connection request name;
and 5.9) waiting whether the audio and video data are sent or not, and if so, sending the audio and video data.
6. The method for controlling the intelligent video perimeter fence system according to claim 1, wherein the steps of starting an intrusion detection algorithm, confirming the intrusion of the target, and tracking the target are as follows:
6.1) acquiring the number, the position parameter and the focal length parameter of the monitoring area;
6.2) waiting whether a target tracking instruction comes or not, and if not, looping to the previous step by the LOOP; if a target tracking instruction exists, entering the next step;
6.3) rotating the pan-tilt head and positioning the target;
6.4) video analysis;
6.5) judging whether the target invasion exists, if not, circulating the LOOP to the previous step; if the target is invaded, entering the next step;
6.6) recording, capturing, tracking and sending the images of the intrusion target.
CN201210154871XA 2012-05-18 2012-05-18 Intelligent video perimeter rail system and control method thereof Pending CN102646309A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210154871XA CN102646309A (en) 2012-05-18 2012-05-18 Intelligent video perimeter rail system and control method thereof

Publications (1)

Publication Number Publication Date
CN102646309A true CN102646309A (en) 2012-08-22

Family

ID=46659120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210154871XA Pending CN102646309A (en) 2012-05-18 2012-05-18 Intelligent video perimeter rail system and control method thereof

Country Status (1)

Country Link
CN (1) CN102646309A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6271752B1 (en) * 1998-10-02 2001-08-07 Lucent Technologies, Inc. Intelligent multi-access system
CN101483761A (en) * 2008-01-10 2009-07-15 上海诚丰数码科技有限公司 Intelligent video monitoring system based on complete IP network
CN201577164U (en) * 2009-08-10 2010-09-08 北京中海锦安高科技有限公司 Fire-fighting pre-warning device for mobile phone surveillance
CN101918989A (en) * 2007-12-07 2010-12-15 常州环视高科电子科技有限公司 Video surveillance system with object tracking and retrieval
CN202067379U (en) * 2010-11-26 2011-12-07 上海电力带电作业技术开发有限公司 Remote wireless video intelligent monitoring prewarning system for electricity transmission line
CN202600885U (en) * 2012-05-18 2012-12-12 成都百威讯科技有限责任公司 Intelligent video perimeter fence system


Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778754A (en) * 2012-10-25 2014-05-07 北京航天长峰科技工业集团有限公司 Water edge safety protection system
CN103905786A (en) * 2012-12-27 2014-07-02 龙永贤 Wireless network monitoring system
CN103338358A (en) * 2013-06-28 2013-10-02 四川优美信息技术有限公司 Video monitoring system with multi-angle adjusting function
CN103327308A (en) * 2013-06-28 2013-09-25 四川优美信息技术有限公司 Angle adjustable audio-video monitoring device
CN103533035A (en) * 2013-09-29 2014-01-22 中国水电顾问集团昆明勘测设计研究院有限公司 Reservoir inspection information acquisition terminal for hydropower station
CN104200592A (en) * 2014-09-25 2014-12-10 北京世纪之星应用技术研究中心 Perimeter protection alarm system utilizing linear displacement to detect invasion and linear displacement detector
US10645445B2 (en) 2015-07-31 2020-05-05 Tencent Technology (Shenzhen) Company Limited Barrage video live broadcast method and apparatus, video source device, and network access device
WO2017020663A1 (en) * 2015-07-31 2017-02-09 腾讯科技(深圳)有限公司 Live-comment video live broadcast method and apparatus, video source device, and network access device
CN106454215A (en) * 2016-04-26 2017-02-22 安徽师范大学 High speed video data acquisition display system and display method
CN106408833A (en) * 2016-11-02 2017-02-15 北京弘恒科技有限公司 Perimeter intrusion detection method and system
CN106297132A (en) * 2016-11-02 2017-01-04 北京弘恒科技有限公司 building intrusion detection early warning system
CN108377367A (en) * 2018-03-19 2018-08-07 广东电网有限责任公司中山供电局 Intelligent substation video surveillance system based on DSP
CN108540772A (en) * 2018-04-03 2018-09-14 南京理工大学 A kind of mobile control monitoring system and method preventing region for safety
CN108650489A (en) * 2018-04-17 2018-10-12 广州创龙电子科技有限公司 A kind of acquiring and processing method and system of audio and video
CN108683709A (en) * 2018-04-24 2018-10-19 安徽展航信息科技发展有限公司 A kind of teaching is mobile to be broadcast live platform and its application
CN109903499A (en) * 2019-03-27 2019-06-18 河南九乾电子科技有限公司 The intelligent control method and device of wireless self-organization network
CN110390288B (en) * 2019-04-26 2021-05-25 上海鹰觉科技有限公司 Target intelligent searching, positioning and tracking system and method based on computer vision
CN110390288A (en) * 2019-04-26 2019-10-29 上海鹰觉科技有限公司 Intelligent target search, positioning and evidence-obtaining system based on computer vision and method
CN110225215A (en) * 2019-06-06 2019-09-10 四川赛科安全技术有限公司 A method of realizing that signal transmits between fire telephone host and extension set
CN110225215B (en) * 2019-06-06 2020-12-29 四川赛科安全技术有限公司 Method for realizing signal transmission between fire-fighting telephone main unit and extension set
CN111880611A (en) * 2020-06-19 2020-11-03 深圳宏芯宇电子股份有限公司 Server for fast transaction and fast transaction data processing method
CN112738390A (en) * 2020-12-02 2021-04-30 北京飞讯数码科技有限公司 Control method and system of pan-tilt-zoom camera
CN112738390B (en) * 2020-12-02 2022-09-27 北京飞讯数码科技有限公司 Control method and system of pan-tilt-zoom camera
CN112767442A (en) * 2021-01-18 2021-05-07 中山大学 Pedestrian three-dimensional detection tracking method and system based on top view angle
CN112767442B (en) * 2021-01-18 2023-07-21 中山大学 Pedestrian three-dimensional detection tracking method and system based on top view angle
CN114302059A (en) * 2021-12-27 2022-04-08 维坤智能科技(上海)有限公司 Three-dimensional online intelligent inspection system and method thereof
CN114038146A (en) * 2022-01-10 2022-02-11 深圳市艾科维达科技有限公司 Camera identification alarm device supporting internet connection

Similar Documents

Publication Publication Date Title
CN102646309A (en) Intelligent video perimeter rail system and control method thereof
US11443555B2 (en) Scenario recreation through object detection and 3D visualization in a multi-sensor environment
CN109686109B (en) Parking lot safety monitoring management system and method based on artificial intelligence
US9412268B2 (en) Vehicle detection and counting
CN103270536B (en) Stopped object detection
US20150015787A1 (en) Automatic extraction of secondary video streams
CN107483889A (en) The tunnel monitoring system of wisdom building site control platform
KR101496390B1 (en) System for Vehicle Number Detection
US20130166711A1 (en) Cloud-Based Video Surveillance Management System
US20030043160A1 (en) Image data processing
KR102397837B1 (en) An apparatus and a system for providing a security surveillance service based on edge computing and a method for operating them
CN101448145A (en) IP camera, video monitor system and signal processing method of IP camera
CN204129891U (en) A kind of high ferro anti-intrusion system along the line
CN102081844A (en) Traffic video behavior analyzing and alarming server
JP2000295600A (en) Monitor system
CN202600885U (en) Intelligent video perimeter fence system
CN111914592B (en) Multi-camera combined evidence obtaining method, device and system
KR102434154B1 (en) Method for tracking multi target in traffic image-monitoring-system
KR101832274B1 (en) System for crime prevention of intelligent type by video photographing and method for acting thereof
KR101290782B1 (en) System and method for Multiple PTZ Camera Control Based on Intelligent Multi-Object Tracking Algorithm
KR20220000226A (en) A system for providing a security surveillance service based on edge computing
US20230046840A1 (en) Vehicular access control based on virtual inductive loop
CN113034828A (en) System for realizing target detection and identification based on embedded computing terminal and layout method
Fawzi et al. Embedded real-time video surveillance system based on multi-sensor and visual tracking
CN109841022B (en) Target moving track detecting and alarming method, system and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120822