CN104363426A - Traffic video monitoring system and method with target associated in multiple cameras - Google Patents
- Publication number
- CN104363426A (application CN201410685403.4A)
- Authority
- CN
- China
- Prior art keywords
- target
- layer
- central control
- video
- control layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a traffic video monitoring system and method with the target associated across multiple cameras. In this system a new feature is proposed: target features are described by combining ASIFT features with color histogram features, which solves the problem of slightly different camera shooting angles. The problem of completely different shooting angles among the cameras is solved by adopting an indirect feature matching idea. The central control layer controls the video traffic flow, which effectively reduces its network bandwidth pressure. The association algorithm module of the central control layer is distributed to the front-end processing layer, and the central control layer is only responsible for computing target features and the spatio-temporal prediction model; this effectively reduces the computing pressure of the central control layer and achieves automatic association of the target across multiple cameras.
Description
Technical field
The present invention relates to a traffic video monitoring system and monitoring method for multi-camera target association.
Background technology
The coverage of a single camera is limited and cannot span an entire urban area. In practice a system is needed that associates a target across cameras, obtains the target's trajectory over the whole urban area, and thereby helps analyze its behavior. By deploying multiple cameras across the urban area, the monitored region can cover the whole city, and through reasonable cooperation among the cameras the target's trajectory over the whole city can be obtained.
Traditional traffic video monitoring systems adopt centralized processing: the video data of every camera is transmitted over the network to the central control layer, where computation and target association are performed.
This architecture causes the following two main problems:
1. Excessive network load. Every camera sends its video to the central control layer, so the network bandwidth of the central control layer inevitably becomes the system bottleneck.
2. All algorithm modules execute at the central control layer, so the computing capability and storage capacity of the central control layer limit the performance of the whole system.
Traditional traffic video monitoring systems also have difficulty achieving automatic target association across multiple cameras. The main difficulties fall into two classes:
1. Differences in shooting background among cameras. Changes in the shooting background alter the features of the extracted moving target; a typical case is a drastic change in the moving target's color features caused by a change in illumination.
2. Differences in shooting angle among cameras. Because cameras are installed at different angles at each intersection or overpass, the target regions they capture differ. In the extreme case camera A captures the front of the vehicle while camera B captures the rear; in that case a traditional traffic video monitoring system can hardly match the target's features.
Summary of the invention
The present invention provides a traffic video monitoring system and monitoring method for multi-camera target association that overcomes the deficiencies of the prior art. For a traffic video monitoring system oriented to multi-camera target association, a new feature is proposed: the target is described by combining ASIFT features with color histogram features, which solves the problem of slightly different camera shooting angles. The invention further proposes an indirect feature matching idea to solve the problem of completely different shooting angles among the cameras. The central control layer controls the video traffic flow, which effectively reduces its network bandwidth pressure. The association algorithm module of the central control layer is distributed to the front-end processing layer, and the central control layer is only responsible for computing target features and the spatio-temporal prediction model; this effectively reduces the computing pressure of the central control layer and achieves automatic association of the target across multiple cameras.
The present invention is realized by the following technical means:
A traffic video monitoring system for multi-camera target association comprises a front-end processing layer, a transport network layer and a central control layer.
The front-end processing layer comprises a video acquisition module, a target association module and a front-end communication module.
The video acquisition module reads the video captured by the CCTV camera and passes the video to the target association module.
The target association module associates the target according to the target features transmitted by the central control layer and the surveillance video data.
The front-end communication module transfers the association result to the transport network layer, from which it is finally transferred to the central control layer for display.
The transport network layer comprises a communication control module, including switches, routers, a UDP communication module and a TCP communication module; the UDP communication module transmits video data and the TCP communication module transmits control commands.
The central control layer comprises a target feature computing module, a spatio-temporal prediction module, a central communication module and a GIS map display module. The central control layer maintains and controls global information: according to the spatio-temporal prediction model it determines the next camera where the target will appear, then transfers the target features and control commands to the corresponding camera to carry out the next association step.
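As an illustration of the split between the TCP control channel and the UDP video channel described above, a minimal Python sketch follows. The port numbers, the length-prefixed JSON framing and the function names are assumptions for illustration, not details specified by the patent.

```python
import json
import socket

CONTROL_PORT = 9000   # assumed TCP port for control commands
VIDEO_PORT = 9001     # assumed UDP port for video data

def send_command(host: str, command: str, payload) -> None:
    """Send a control command (e.g. OBJECT_TRACK) over the reliable TCP channel."""
    msg = json.dumps({"cmd": command, "payload": payload}).encode("utf-8")
    with socket.create_connection((host, CONTROL_PORT)) as sock:
        sock.sendall(len(msg).to_bytes(4, "big") + msg)  # simple length-prefixed framing

def send_video_chunk(host: str, chunk: bytes) -> None:
    """Send one encoded video chunk over the lossy but low-latency UDP channel."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(chunk, (host, VIDEO_PORT))
```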
Further, the front-end processing layer comprises front-end devices, and each front-end device consists of a hardware layer, a system layer and an application layer. The hardware layer comprises the video monitoring equipment, mainboard, CPU, memory, network card and graphics card; it provides the physical infrastructure, executes machine instructions and provides services to the system layer. The system layer comprises the operating system, video codec, network card driver and video driver; it converts the logical instructions of the application layer into machine instructions that the hardware layer can interpret, delivers them to the hardware layer for execution, and returns the hardware layer's results to the application layer. The application layer comprises the video acquisition module, the target association module and the front-end communication module.
Further, distributed cooperative processing can be carried out among the front-end devices, which greatly increases the computing capability of the whole system.
A monitoring method for the traffic video monitoring system for multi-camera target association comprises the following steps (a sketch of the front-end loop follows the steps):
S101: The central control layer transmits a control message and the target feature vector to the front-end communication module, and the front-end processing layer reads the surveillance video. Each pixel stores a sample set whose sampled values are past values of that pixel and values of its neighboring pixels; each new pixel value is compared with the sample set to decide whether it belongs to the background. If a pixel does not belong to the background, it belongs to the foreground, i.e. to a moving object; the system extracts moving objects by clustering foreground pixels.
S102: Compute the ASIFT features and color histogram features of the moving object extracted in the previous step, compare them with the data set of this front end to obtain the feature vector of the moving object, and compare the moving object's feature vector with the target feature vector transmitted by the central control layer to obtain the position of the target at this front end.
S103: Associate the detected moving target using the TLD algorithm.
S104: Return the target association result and the target's spatio-temporal information to the central control layer through the transport network layer; the spatio-temporal prediction module of the central control layer then notifies the next camera used for the relay to perform the next association step.
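The following is a minimal sketch of the front-end loop in steps S101 to S104; the concrete algorithms (ViBe-style extraction, ASIFT-plus-histogram description, indirect matching, TLD association) are passed in as callables, and all function names are illustrative assumptions rather than names taken from the patent.

```python
from typing import Callable, Iterable, Tuple

Box = Tuple[int, int, int, int]

def front_end_step(frame,
                   target_vector,
                   extract_foreground: Callable[[object], Iterable[Box]],
                   describe_and_match: Callable[[object, Box], list],
                   is_same_target: Callable[[list, list], bool],
                   track: Callable[[object, Box], Box],
                   report: Callable[[Box], None]) -> None:
    """One iteration of the front-end association loop (S101-S104),
    with the concrete algorithms injected as callables."""
    for box in extract_foreground(frame):          # S101: ViBe-style foreground extraction
        vec = describe_and_match(frame, box)       # S102: indirect feature vector via the local data set
        if is_same_target(vec, target_vector):     # S102: compare with the vector from the central layer
            report(track(frame, box))              # S103 + S104: TLD association, then report upstream
```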
Further, the central control layer determines the relay camera according to the spatio-temporal prediction module and sends the corresponding command to that camera. The detailed process is:
S201: The central control layer sends a CAMERA_OPEN command to a front end A; front end A sends real-time video to the central control layer, which displays the video captured by front end A.
S202: The target is frame-selected at the central control layer; its ASIFT features and color histogram features are computed and compared with the current data set to obtain the target's feature vector. The target's feature vector and an OBJECT_TRACK command are sent to front end A, which associates the target and returns the associated video and the target's spatio-temporal information to the central control layer.
S203: When front end A finishes the target association, the central control layer predicts, from the target feature information and spatio-temporal information returned by front end A, the next front end B where the target will appear, sends the target feature information and an OBJECT_TRACK command to front end B, and sends an OBJECT_CANCEL command to front end A.
S204: Steps S202 and S203 are repeated, yielding the target's trajectory across the whole urban area.
Compared with the prior art, the present invention has the following advantages:
1. A self-designed front-end hardware system assembles the monitoring camera with electronic chip devices and realizes distributed execution of the target association algorithm. New hardware is added on top of the existing monitoring cameras, so the cameras do not need to be redeployed, which greatly reduces deployment cost.
2. A layered architecture is adopted: the central control layer controls the network load, and target association is carried out by the distributed front-end systems. This significantly reduces the network bandwidth pressure and the computing pressure of the central control layer, improves the computing performance and scalability of the entire system, and enables intelligent monitoring of the target's trajectory across the whole metropolitan area.
3. The ViBe algorithm is adopted to extract moving objects. ViBe stores a sample set for each pixel; the sampled values are past values of that pixel and values of its neighboring pixels, and each new pixel value is compared with the sample set to decide whether it belongs to the background. ViBe is simple to implement, performs well, and can be fixed into the front-end system as firmware.
4. A new target description feature is proposed: the object is described by combining ASIFT features with color histogram features. ASIFT achieves full affine invariance by simulating the camera's longitude and latitude (tilt) angles and then matching the simulated images with the SIFT algorithm (SIFT is fully scale invariant). ASIFT features are insensitive to changes in illumination, scale and shooting angle, but ASIFT ignores the target's color information; the present invention supplements this shortcoming with the target's color histogram.
5. An indirect feature matching idea is adopted to match the target. Each front end stores a sample data set composed of a series of vehicles captured by that front end. The ASIFT features and color histogram features of the moving object are computed and compared with the front end's data set to obtain the moving object's feature vector. The data sets of different front ends point to the same objects, i.e. entries with the same index all point to the same moving object; only the shooting angles differ, so the raw data differ. The moving object's feature vector is then compared with the target's feature vector to judge whether the moving object is the target to be matched. This solves the problem of different installation angles of two front-end monitoring cameras.
6. The operator only needs to frame-select the target once; the subsequent target association and camera selection are carried out automatically by the system, which greatly reduces the operator's workload.
Accompanying drawing explanation
Fig. 1 is a structural diagram of the traffic video monitoring system for multi-camera target association of the present invention;
Fig. 2 is a composition diagram of the front-end processing layer of the present invention;
Fig. 3 is a workflow diagram of the association algorithm module in the front-end processing layer of the present invention;
Fig. 4 is a workflow diagram of multi-camera target association in the central control layer of the present invention.
Embodiment
The specific implementation of the present invention is described in detail below with reference to the accompanying drawings.
A traffic video monitoring system for multi-camera target association, as shown in Fig. 1, comprises a three-layer structure and seven functional modules. The three layers are the front-end processing layer, the transport network layer and the central control layer; the front-end processing layer comprises the video acquisition module, the target association module and the front-end communication module, the transport network layer comprises the communication control module, and the central control layer comprises the target feature computing module, the spatio-temporal prediction module and the central communication module.
Specifically, the seven functional modules are:
Video acquisition module: located in the front-end processing layer, composed of the front-end CCTV cameras erected across the metropolitan area and their main equipment (protective housing, camera, lens, bracket). It collects the video of each traffic section in real time and stores it on a local storage device for the central control layer to read.
Target association module: located in the front-end processing layer. It compares the moving-object feature vector extracted at this front end with the target feature vector transmitted by the central control layer; that is, the ASIFT features and color histogram features of the moving object at this front end are extracted and compared (SIFT-descriptor and color-histogram comparison) with the data set stored at this front end to obtain the moving-object feature vector. The module determines the position of the target at this front end, carries out the target association, and generates the association result video data.
Front-end communication module: located in the front-end processing layer. It controls the operation of the front-end processing layer according to the target features and control commands transmitted by the central control layer, and transmits the association result video data to the central control layer.
Network communication module: located in the transport network layer. It comprises a TCP communication module and a UDP communication module; the TCP communication module is mainly responsible for sending and receiving control commands, and the UDP communication module is mainly responsible for sending and receiving video data.
Target feature vector computing module: located in the central control layer. For the target frame-selected by the operator, it computes the target's ASIFT features and color histogram features, compares them with the current data set to obtain the target feature vector, and transfers the vector to each front end for target association.
Spatio-temporal prediction module: located in the central control layer. Based on the target motion information returned by the front ends, the road network and a Gaussian model, it predicts the next front-end device where the target will appear.
Central communication module: located in the central control layer. According to the prediction of the spatio-temporal prediction module, it issues commands to the corresponding front end and receives the association result video data and target spatio-temporal information returned by the front end.
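As one possible realization of the spatio-temporal prediction module described above, the sketch below scores candidate next cameras with a Gaussian travel-time model over the road network; the graph layout, the mean and standard-deviation values and the function names are illustrative assumptions, not values fixed by the patent.

```python
import math

# assumed road-network model: for each (camera, neighbour) edge, the mean and
# standard deviation of the travel time between the two fields of view, in seconds
ROAD_NETWORK = {
    "cam_A": {"cam_B": (40.0, 8.0), "cam_C": (75.0, 15.0)},
    "cam_B": {"cam_D": (30.0, 6.0)},
}

def gaussian_score(elapsed: float, mean: float, std: float) -> float:
    """Likelihood of the observed travel time under a Gaussian travel-time model."""
    return math.exp(-0.5 * ((elapsed - mean) / std) ** 2) / (std * math.sqrt(2.0 * math.pi))

def predict_next_camera(last_camera: str, elapsed: float) -> str:
    """Pick the neighbouring camera whose travel-time model best explains the elapsed time."""
    candidates = ROAD_NETWORK.get(last_camera, {})
    return max(candidates, key=lambda cam: gaussian_score(elapsed, *candidates[cam]))
```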
The three-layer architecture above realizes distributed computation and multi-camera target association. In the first layer, the present invention proposes a new moving-object description feature: the moving object is described by combining ASIFT features with color histogram features, so that the description is fully affine invariant and also carries color distribution information. In the third layer, the present invention adopts an indirect feature matching idea: as the moving object travels between cameras, its features in the previous camera are not compared directly with its features in the next camera; instead, the moving object is compared with each camera's own data set to obtain a feature vector, and the similarity between the feature vectors from the two cameras is compared. Experiments show that this idea improves the precision of multi-camera target matching.
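The sketch below illustrates the indirect matching idea under the assumption that each camera's local data set stores the same numbered vehicles seen from that camera's own angle; the cosine similarity measure and the threshold value are assumptions.

```python
from typing import List
import numpy as np

def indirect_vector(obj_descriptor: np.ndarray, local_dataset: List[np.ndarray]) -> np.ndarray:
    """Describe an object by its similarity to every entry of this camera's local data set,
    rather than by its raw appearance."""
    sims = [float(obj_descriptor @ ref /
                  (np.linalg.norm(obj_descriptor) * np.linalg.norm(ref)))
            for ref in local_dataset]
    return np.asarray(sims)

def is_same_target(vec_camera_a: np.ndarray, vec_camera_b: np.ndarray,
                   threshold: float = 0.8) -> bool:
    """Two observations are associated if their indirect vectors (over data sets that index
    the same vehicles) are similar, even though the raw appearances differ."""
    cos = float(vec_camera_a @ vec_camera_b /
                (np.linalg.norm(vec_camera_a) * np.linalg.norm(vec_camera_b)))
    return cos >= threshold
```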
As shown in Fig. 2, the front-end processing layer comprises front-end devices, each built as an architecture of a hardware layer, a system layer and an application layer. The hardware layer comprises the video monitoring equipment, mainboard, CPU, memory, network card, graphics card and so on; it provides the physical infrastructure, executes machine instructions and provides services to the system layer. The system layer comprises the operating system, video codec, network card driver, video driver and so on; it converts the logical instructions of the application layer into machine instructions that the hardware layer can interpret, delivers them to the hardware layer for execution, and returns the hardware layer's results to the application layer. The application layer comprises the video acquisition module, the target association module and the front-end communication module. The application layer receives the control commands and target features issued by the central control layer, sends the control commands to the system layer and finally delivers them to the hardware layer; it carries out the target association using the video collected by the video acquisition module and the target features, and returns the association result to the central control layer.
Fig. 3 shows the monitoring flow of the traffic video monitoring system for multi-camera target association of the present invention, which proceeds as follows:
First, the central control layer transmits a control message and the target feature vector to the front-end communication module; the front-end processing layer reads the surveillance video and extracts moving objects with the ViBe algorithm. ViBe stores a sample set for each pixel; the sampled values are past values of that pixel and values of its neighboring pixels, and each new pixel value is compared with the sample set to decide whether it belongs to the background.
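A minimal grayscale sketch of the ViBe decision rule follows; the sample count, matching radius, match threshold and subsampling factor are typical values from the ViBe literature rather than values fixed by the patent, and neighbour seeding and the neighbour-update step are omitted for brevity.

```python
import numpy as np

N_SAMPLES, RADIUS, MIN_MATCHES, SUBSAMPLING = 20, 20, 2, 16

def init_vibe(first_frame: np.ndarray) -> np.ndarray:
    """Seed every pixel's sample set from the first grayscale frame
    (full ViBe also seeds from spatial neighbours; omitted for brevity)."""
    return np.repeat(first_frame[None, ...], N_SAMPLES, axis=0).astype(np.int16)

def vibe_step(frame: np.ndarray, samples: np.ndarray,
              rng: np.random.Generator = np.random.default_rng()) -> np.ndarray:
    """Return a boolean foreground mask and randomly refresh background samples in place."""
    diff = np.abs(samples - frame.astype(np.int16))      # distance of the new value to each stored sample
    matches = (diff < RADIUS).sum(axis=0)                # how many stored samples are close enough
    foreground = matches < MIN_MATCHES                   # too few matches -> pixel belongs to a moving object
    rows, cols = np.nonzero(~foreground &
                            (rng.integers(0, SUBSAMPLING, frame.shape) == 0))
    samples[rng.integers(0, N_SAMPLES, rows.shape), rows, cols] = frame[rows, cols]
    return foreground
```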
Then the ASIFT features and color histogram features of the moving object extracted in the previous step are computed and matched against the data set of this front end to obtain the moving-object feature vector. The moving-object feature vector is compared with the target feature vector transmitted by the central control layer to obtain the position of the target at this front end. When the camera shoots the object, the direction of the camera's optical axis may change, which introduces affine distortion. ASIFT achieves full affine invariance by simulating the camera's longitude and latitude (tilt) angles and then matching the simulated images with the SIFT algorithm (SIFT is fully scale invariant), so that feature matching is finally realized. However, ASIFT matching is based on the object's contour features and ignores the target's color information, so the present invention improves the ASIFT feature by adding color information.
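A sketch of the combined descriptor is given below using OpenCV. ASIFT is not a built-in OpenCV function, so it is approximated here by running SIFT over a few affine-tilted copies of the image; the tilt set, the histogram bin count and the descriptor pooling are assumptions.

```python
import cv2
import numpy as np

def asift_like_descriptors(gray: np.ndarray, tilts=(1.0, 1.414, 2.0)) -> np.ndarray:
    """Approximate ASIFT: run SIFT on a few affine-tilted (horizontally squeezed) copies."""
    sift = cv2.SIFT_create()
    descs = []
    for t in tilts:
        tilted = cv2.resize(gray, None, fx=1.0 / t, fy=1.0)   # crude tilt simulation
        _, d = sift.detectAndCompute(tilted, None)
        if d is not None:
            descs.append(d)
    return np.vstack(descs) if descs else np.empty((0, 128), np.float32)

def color_histogram(bgr_patch: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalised HSV hue/saturation histogram, supplementing the contour-only ASIFT cue."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, None).flatten()
```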
Next, the detected moving target is associated using the TLD algorithm.
TLD (Tracking-Learning-Detection) is a long-term single-target association (tracking) algorithm proposed by Zdenek Kalal, a Czech doctoral student at the University of Surrey, during his doctoral studies. Its notable difference from traditional association algorithms is that it combines a traditional association algorithm with a traditional detection algorithm to handle problems such as deformation and partial occlusion that the target undergoes during association. Meanwhile, an improved online learning mechanism continuously updates the "significant feature points" of the association module as well as the object model and relevant parameters of the detection module, making the association more stable and reliable.
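A minimal usage sketch follows, assuming an opencv-contrib-python build that ships the legacy tracking module with a TLD tracker; the video source and the initial bounding box are placeholders.

```python
import cv2

def track_with_tld(video_path: str, init_box):
    """Yield (found, box) per frame using OpenCV's legacy TLD tracker
    (assumes an opencv-contrib-python build that provides cv2.legacy)."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if ok:
        tracker = cv2.legacy.TrackerTLD_create()   # TLD: tracking + detection + online learning
        tracker.init(frame, init_box)              # init_box = (x, y, w, h) from the detection stage
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            yield tracker.update(frame)            # (found, box); the detector branch recovers after occlusion
    cap.release()
```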
Finally, the target association result and the target's spatio-temporal information are returned to the central control layer through the transport network layer. The spatio-temporal prediction module of the central control layer then notifies the front end used for the relay to perform the next association step.
As shown in Table 1, a set of control message rules is defined in the communication protocol module of the present invention; the symbols are first defined as follows:
Table 1
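The content of Table 1 is not reproduced in this text. Purely as an illustration of the kind of command set the protocol defines, the sketch below encodes the three commands named in the description (CAMERA_OPEN, OBJECT_TRACK, OBJECT_CANCEL) with an assumed message layout.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Command(Enum):
    CAMERA_OPEN = auto()     # ask a front end to start streaming real-time video
    OBJECT_TRACK = auto()    # ask a front end to associate the target described by the feature vector
    OBJECT_CANCEL = auto()   # ask a front end to stop associating the target

@dataclass
class ControlMessage:
    command: Command
    front_end_id: str
    target_vector: list = field(default_factory=list)   # empty for CAMERA_OPEN / OBJECT_CANCEL
```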
As shown in Fig. 4, the workflow in the central control layer during multi-camera target association is as follows.
Video monitoring is carried out among the cameras numbered 1 to n. According to the spatio-temporal prediction model, the central control layer determines the relay camera and sends the corresponding command to that camera. This is implemented as follows:
First, the central control layer sends a CAMERA_OPEN command to a front end A; front end A sends real-time video to the central control layer, which displays the video captured by front end A.
Then the operator frame-selects the target at the central control layer; the target's ASIFT features and color histogram features are computed and compared with the current data set to obtain the target's feature vector. The target's feature vector and an OBJECT_TRACK command are sent to front end A, which associates the target and returns the associated video and the target's spatio-temporal information to the central control layer.
Next, when front end A finishes the target association, the central control layer predicts, from the target feature information and spatio-temporal information returned by A, the next front end B where the target will appear, sends the target feature information and an OBJECT_TRACK command to front end B, and sends an OBJECT_CANCEL command to front end A.
Finally, the second and third steps are repeated, yielding the target's trajectory across the whole urban area.
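Putting the Fig. 4 workflow together, the sketch below shows the central-layer relay loop; send_command, predict_next_camera, wait_for_result, the elapsed-time field and the stopping condition are assumptions carried over from the earlier sketches.

```python
def relay_loop(first_front_end: str, target_vector, send_command, predict_next_camera,
               wait_for_result, max_hops: int = 50):
    """Central control layer: hand the target from camera to camera (S201-S204 / Fig. 4)."""
    trajectory = []
    current = first_front_end
    send_command(current, "CAMERA_OPEN", None)                 # S201: open the first camera
    send_command(current, "OBJECT_TRACK", target_vector)       # S202: start association there
    for _ in range(max_hops):
        result = wait_for_result(current)                      # spatio-temporal info from the front end
        if result is None:                                     # target lost or left the network
            break
        trajectory.append(result)
        nxt = predict_next_camera(current, result["elapsed"])  # S203: spatio-temporal prediction
        send_command(nxt, "OBJECT_TRACK", target_vector)
        send_command(current, "OBJECT_CANCEL", None)
        current = nxt
    return trajectory                                          # the target's city-wide track
```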
In summary, the present invention is oriented to multi-camera target association across a whole metropolitan area. The target association module is distributed to the front end for execution, which relieves the computing pressure of the central control layer. A communication module is added to each front-end device to receive control commands and send video data. The central control layer issues commands to decide whether front-end video transmission is opened, which effectively relieves its network bandwidth pressure. The present invention describes the moving object by combining ASIFT features with color histogram features and adopts the indirect feature matching method to solve the target feature matching problems caused by the different installation angles and different shooting backgrounds of the front-end monitoring devices.
The content not described in detail in the specification of the present invention belongs to the prior art known to those skilled in the art.
The above is only a partial embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any change or replacement that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention.
Claims (5)
1. A traffic video monitoring system for multi-camera target association, characterized in that it comprises a front-end processing layer, a transport network layer and a central control layer;
the front-end processing layer comprises a video acquisition module, a target association module and a front-end communication module;
the video acquisition module reads the video captured by the CCTV camera and passes the video to the target association module;
the target association module associates the target according to the target features transmitted by the central control layer and the surveillance video data;
the front-end communication module transfers the association result to the transport network layer, from which it is finally transferred to the central control layer for display;
the transport network layer comprises a communication control module, including switches, routers, a UDP communication module and a TCP communication module; the UDP communication module transmits video data and the TCP communication module transmits control commands;
the central control layer comprises a target feature computing module, a spatio-temporal prediction module, a central communication module and a GIS map display module; the central control layer maintains and controls global information, namely: according to the spatio-temporal prediction model it determines the next camera where the target will appear; the moving object is compared with the data set of that camera to obtain a feature vector, and the similarity of the cameras' feature vectors is compared; the target features and control commands are then transferred to the corresponding camera to carry out the next association step.
2. The traffic video monitoring system for multi-camera target association according to claim 1, characterized in that the front-end processing layer comprises front-end devices, and each front-end device comprises a hardware layer, a system layer and an application layer; the hardware layer comprises the video monitoring equipment, mainboard, CPU, memory, network card and graphics card; it provides the physical infrastructure, executes machine instructions and provides services to the system layer; the system layer comprises the operating system, video codec, network card driver and video driver; it converts the logical instructions of the application layer into machine instructions that the hardware layer can interpret, delivers them to the hardware layer for execution, and returns the hardware layer's results to the application layer; the application layer comprises the video acquisition module, the target association module and the front-end communication module.
3. The traffic video monitoring system for multi-camera target association according to claim 2, characterized in that distributed cooperative processing can be carried out among the front-end devices, which greatly increases the computing capability of the whole system.
4. A monitoring method for the traffic video monitoring system for multi-camera target association according to claim 1, comprising the following steps:
S101: the central control layer transmits a control message and the target feature vector to the communication module of the front-end device, and the front-end processing layer reads the surveillance video; each pixel stores a sample set whose sampled values are past values of that pixel and values of its neighboring pixels, and each new pixel value is compared with the sample set to decide whether it belongs to the background; if a pixel does not belong to the background, it belongs to the foreground, i.e. to a moving object; the system extracts moving objects by clustering foreground pixels;
S102: compute the ASIFT features and color histogram features of the moving object extracted in the previous step, compare them with the data set of this front-end device to obtain the feature vector of the moving object, and compare the moving object's feature vector with the target feature vector transmitted by the central control layer to obtain the position of the target at this front-end device;
S103: associate the detected moving target using the TLD algorithm;
S104: return the target association result and the target's spatio-temporal information to the central control layer through the transport network layer; the spatio-temporal prediction module of the central control layer then notifies the next camera used for the relay to perform the next association step.
5. The monitoring method according to claim 4, characterized in that the central control layer determines the relay camera according to the spatio-temporal prediction module and sends the corresponding command to that camera; the detailed process is:
S201: the central control layer sends a CAMERA_OPEN command to a front end A; front end A sends real-time video to the central control layer, which displays the video captured by front end A;
S202: the target is frame-selected at the central control layer; its ASIFT features and color histogram features are computed and compared with the current data set to obtain the target's feature vector; the target's feature vector and an OBJECT_TRACK command are sent to front end A, which associates the target and returns the associated video and the target's spatio-temporal information to the central control layer;
S203: when front end A finishes the target association, the central control layer predicts, from the target feature information and spatio-temporal information returned by front end A, the next front end B where the target will appear, sends the target feature information and an OBJECT_TRACK command to front end B, and sends an OBJECT_CANCEL command to front end A;
S204: steps S202 and S203 are repeated, yielding the target's trajectory across the whole urban area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410685403.4A CN104363426A (en) | 2014-11-25 | 2014-11-25 | Traffic video monitoring system and method with target associated in multiple cameras |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410685403.4A CN104363426A (en) | 2014-11-25 | 2014-11-25 | Traffic video monitoring system and method with target associated in multiple cameras |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104363426A true CN104363426A (en) | 2015-02-18 |
Family
ID=52530649
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410685403.4A Pending CN104363426A (en) | 2014-11-25 | 2014-11-25 | Traffic video monitoring system and method with target associated in multiple cameras |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104363426A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106331650A (en) * | 2016-09-27 | 2017-01-11 | 北京乐景科技有限公司 | Video data transmission method and apparatus |
CN106559645A (en) * | 2015-09-25 | 2017-04-05 | 杭州海康威视数字技术股份有限公司 | Based on the monitoring method of video camera, system and device |
CN109740573A (en) * | 2019-01-24 | 2019-05-10 | 北京旷视科技有限公司 | Video analysis method, apparatus, equipment and server |
WO2019206143A1 (en) * | 2018-04-27 | 2019-10-31 | Shanghai Truthvision Information Technology Co., Ltd. | System and method for traffic surveillance |
CN111080637A (en) * | 2019-12-25 | 2020-04-28 | 深圳力维智联技术有限公司 | Cloud service-based advertisement remote method, device, system, product and medium |
CN111654668A (en) * | 2020-05-26 | 2020-09-11 | 李绍兵 | Monitoring equipment synchronization method and device and computer terminal |
CN112243029A (en) * | 2020-10-14 | 2021-01-19 | 河北中兴冀能电力发展有限公司 | Computing power integration multiplexing system applied to power instrument equipment |
CN113473091A (en) * | 2021-07-09 | 2021-10-01 | 杭州海康威视数字技术股份有限公司 | Camera association method, device, system, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101055662A (en) * | 2007-06-01 | 2007-10-17 | 北京汇大通业科技有限公司 | Multi-layer real time forewarning system based on the intelligent video monitoring |
US20070285511A1 (en) * | 2006-06-13 | 2007-12-13 | Adt Security Services, Inc. | Video verification system and method for central station alarm monitoring |
CN103607576A (en) * | 2013-11-28 | 2014-02-26 | 北京航空航天大学深圳研究院 | Traffic video monitoring system oriented to cross camera tracking relay |
- 2014-11-25: CN application CN201410685403.4A filed; published as CN104363426A; status: active, Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070285511A1 (en) * | 2006-06-13 | 2007-12-13 | Adt Security Services, Inc. | Video verification system and method for central station alarm monitoring |
CN101055662A (en) * | 2007-06-01 | 2007-10-17 | 北京汇大通业科技有限公司 | Multi-layer real time forewarning system based on the intelligent video monitoring |
CN103607576A (en) * | 2013-11-28 | 2014-02-26 | 北京航空航天大学深圳研究院 | Traffic video monitoring system oriented to cross camera tracking relay |
Non-Patent Citations (2)
Title |
---|
付小磊 (Fu Xiaolei): "Research on Moving Target Tracking Technology" (运动目标跟踪技术研究), China Master's Theses Full-text Database, Information Science and Technology *
曹蓓 (Cao Bei): "Research on Improved Particle Filter Algorithms and Their Applications" (粒子滤波改进算法及其应用研究), China Doctoral Dissertations Full-text Database, Information Science and Technology *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106559645A (en) * | 2015-09-25 | 2017-04-05 | 杭州海康威视数字技术股份有限公司 | Based on the monitoring method of video camera, system and device |
CN106559645B (en) * | 2015-09-25 | 2020-01-17 | 杭州海康威视数字技术股份有限公司 | Monitoring method, system and device based on camera |
CN106331650A (en) * | 2016-09-27 | 2017-01-11 | 北京乐景科技有限公司 | Video data transmission method and apparatus |
WO2019206143A1 (en) * | 2018-04-27 | 2019-10-31 | Shanghai Truthvision Information Technology Co., Ltd. | System and method for traffic surveillance |
CN111699679A (en) * | 2018-04-27 | 2020-09-22 | 上海趋视信息科技有限公司 | Traffic system monitoring and method |
US11689697B2 (en) * | 2018-04-27 | 2023-06-27 | Shanghai Truthvision Information Technology Co., Ltd. | System and method for traffic surveillance |
CN109740573A (en) * | 2019-01-24 | 2019-05-10 | 北京旷视科技有限公司 | Video analysis method, apparatus, equipment and server |
CN111080637A (en) * | 2019-12-25 | 2020-04-28 | 深圳力维智联技术有限公司 | Cloud service-based advertisement remote method, device, system, product and medium |
CN111654668A (en) * | 2020-05-26 | 2020-09-11 | 李绍兵 | Monitoring equipment synchronization method and device and computer terminal |
CN112243029A (en) * | 2020-10-14 | 2021-01-19 | 河北中兴冀能电力发展有限公司 | Computing power integration multiplexing system applied to power instrument equipment |
CN113473091A (en) * | 2021-07-09 | 2021-10-01 | 杭州海康威视数字技术股份有限公司 | Camera association method, device, system, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104363426A (en) | Traffic video monitoring system and method with target associated in multiple cameras | |
Wang et al. | Enabling edge-cloud video analytics for robotics applications | |
CN107705574A (en) | A kind of precisely full-automatic capturing system of quick road violation parking | |
CN102480615B (en) | Image target area tracking system and method | |
CN111710177B (en) | Intelligent traffic signal lamp networking cooperative optimization control system and control method | |
CN111526324B (en) | Monitoring system and method | |
Amato et al. | A wireless smart camera network for parking monitoring | |
CN110087041B (en) | Video data processing and transmitting method and system based on 5G base station | |
CN110210427B (en) | Corridor bridge working state detection system and method based on image processing technology | |
CN112836683B (en) | License plate recognition method, device, equipment and medium for portable camera equipment | |
CN112766038B (en) | Vehicle tracking method based on image recognition | |
CN113160272B (en) | Target tracking method and device, electronic equipment and storage medium | |
KR20210102122A (en) | Light color identifying method and apparatus of signal light, and roadside device | |
WO2024083113A1 (en) | Methods, systems, and computer-readable media for target tracking | |
Wang et al. | An end-to-end traffic vision and counting system using computer vision and machine learning: the challenges in real-time processing | |
CN114120165A (en) | Gun and ball linked target tracking method and device, electronic device and storage medium | |
Cho et al. | Object recognition network using continuous roadside cameras | |
CN117275216A (en) | Multifunctional unmanned aerial vehicle expressway inspection system | |
Sreekumar et al. | TPCAM: Real-time traffic pattern collection and analysis model based on deep learning | |
CN114782496A (en) | Object tracking method and device, storage medium and electronic device | |
ElHakim et al. | Traffisense: A smart integrated visual sensing system for traffic monitoring | |
CN113724295A (en) | Unmanned aerial vehicle tracking system and method based on computer vision | |
CN109697857A (en) | Intelligent traffic control system based on image recognition and neural network algorithm | |
CN117395378B (en) | Road product acquisition method and acquisition system | |
Koutsia et al. | Automated visual traffic monitoring and surveillance through a network of distributed units |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20150218 |