CN208241780U - One-master multi-slave intelligent video surveillance apparatus - Google Patents

One-master multi-slave intelligent video surveillance apparatus

Info

Publication number
CN208241780U
CN208241780U (application CN201820676070.2U)
Authority
CN
China
Prior art keywords
camera
video
processor module
image
control unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201820676070.2U
Other languages
Chinese (zh)
Inventor
郭志波
张懿
张越
李敬
龚张杰
郭笑言
薛莲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangzhou University
Original Assignee
Yangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangzhou University
Priority to CN201820676070.2U
Application granted
Publication of CN208241780U
Expired - Fee Related
Anticipated expiration

Landscapes

  • Closed-Circuit Television Systems (AREA)

Abstract

The utility model discloses a one-master multi-slave intelligent video surveillance apparatus in the field of surveillance, comprising a master camera and several slave cameras. The master camera and the slave cameras are connected to an image acquisition unit, the image acquisition unit is connected to a video processor module, and the video processor module is connected to a peripheral unit, an image display unit and a slave camera control unit, with the slave camera control unit connected to the several slave cameras. The video processor module processes the images transmitted by the image acquisition unit. The master camera and the slave cameras can thus work in concert, which helps the system grasp and monitor the situation of a target without installing additional master cameras or replacing them with high-definition cameras, and without a large amount of manpower spent interpreting the monitoring pictures; resources are saved and monitoring efficiency is improved. The utility model can be used in areas that require security surveillance.

Description

One-master multi-slave intelligent video surveillance apparatus
Technical field
The utility model relates to a video surveillance apparatus, and in particular to a one-master multi-slave intelligent video surveillance apparatus.
Background technique
At present, in the field of video surveillance, a video surveillance apparatus mainly consists of a single camera: one camera is responsible for shooting a fixed location. Such a surveillance system typically has a small monitoring range and limited clarity, and it cannot provide a wide viewing angle and local detail at the same time. Although coverage can be improved by deploying more cameras or replacing them with high-definition cameras, doing so increases cost, and a large amount of manpower is still needed to interpret the monitoring pictures; manual analysis of video content is inefficient and error-prone. Besides real-time monitoring, a video surveillance system must also support subsequent review, which normally relies on manual retrieval from a data server; this is time-consuming, laborious and liable to miss important information, and it greatly wastes resources. It is therefore of great significance to propose a more efficient surveillance apparatus and a control method for it.
Utility model content
The purpose of the utility model is to provide a one-master multi-slave intelligent video surveillance apparatus in which the master camera and the slave cameras work in concert, which helps the system grasp and monitor the situation of a target without installing additional master cameras or replacing them with high-definition cameras, and without a large amount of manpower spent interpreting the monitoring pictures, thereby saving resources and improving monitoring efficiency.
To achieve the above object, the utility model provides a one-master multi-slave intelligent video surveillance apparatus comprising a master camera and several slave cameras. The master camera and the slave cameras are connected to an image acquisition unit, the image acquisition unit is connected to a video processor module, and the video processor module is connected to a peripheral unit, an image display unit and a slave camera control unit, with the slave camera control unit connected to the several slave cameras;
the video processor module is used to process the images transmitted by the image acquisition unit;
the image acquisition unit is used to acquire images from the master camera and the slave cameras;
the slave camera control unit includes a slave camera controller, whose input terminal is connected to the video processor module and whose output terminal is connected to the slave cameras; the slave camera control unit is used to send the processing result of the video processor module to the slave camera controller and to dispatch the slave cameras for monitoring;
the peripheral unit is used for manually controlling the slave cameras for monitoring through the video processor module;
the image display unit includes a display, which is used to display the images transmitted by the video processor module.
Compared with the prior art, the beneficial effects of the utility model are as follows. The video processor module, image acquisition unit, slave camera control unit and peripheral unit described above are hardware modules. Monitoring is carried out by the master camera; when a target that needs to be tracked and observed appears, the information captured by the master camera is fed back to the slave camera control unit through the video processor, and the slave camera controller controls the slave cameras to continuously track and capture the target over multiple regions and from multiple angles. The master camera and the slave cameras thus work in concert, which helps the system grasp and monitor the situation of the target without installing additional master cameras or replacing them with high-definition cameras, and without a large amount of manpower spent interpreting the monitoring pictures, saving resources and improving monitoring efficiency. The utility model can be used in areas that require security surveillance.
As a further improvement of the utility model, the image acquisition unit includes a video capture card; the input terminal of the video capture card is connected to the master camera and the several slave cameras, and its output terminal is connected to the video processor module through USB communication module 1, so that the images shot by the master and slave cameras can be transmitted to the video processor module faster and better.
As a further improvement of the utility model, the peripheral unit includes a mouse and a keyboard, which are connected to the video processor module through USB communication module 2, and the display is connected to the video processor module through a display communication module, so that the slave cameras can be scheduled manually more conveniently with the mouse and keyboard, and the image on the display is clearer and more accurate.
As a further improvement of the utility model, the video processor module is connected to a data center through an Ethernet communication module and is used to send the image data processed by the video processor module to the data center for storage; the data center can retrieve and classify the data, so that the processed image data is well preserved and can be retrieved and reviewed later.
To achieve the above objectives, the utility model further provides a control method for one-master multi-slave intelligent video surveillance, comprising the following steps:
Step 1: obtain the camera transformation matrices K0, K1, K2, …, Kn by a camera calibration method;
Step 2: obtain the master-slave camera coordinate transformation matrices T1, T2, T3, …, Tn by a master-slave camera coordinate conversion method;
Step 3: obtain the centroid coordinates (U0, V0) of the moving target in the master camera image coordinate system using the three-frame difference method;
Step 4: from the centroid coordinates (U0, V0), the camera transformation matrices and the master-slave camera coordinate transformation matrices, calculate the world coordinates (XW1, YW1, ZW1) of the target in the slave camera coordinate system;
Step 5: obtain the angles θX and θY through which the slave camera needs to rotate, according to the angle conversion formulas;
Step 6: according to the rotation angles θX and θY, control the slave camera to rotate and capture the target.
Compared with the prior art, the beneficial effects are as follows: the camera transformation matrices are obtained by a camera calibration method; the position coordinates of the camera point are calculated from the camera transformation matrices; the coordinates of the target in the slave camera image coordinate systems are obtained from its coordinates in the master camera image according to the master-slave camera coordinate transformation matrices; these coordinates are converted into pan-tilt motion parameters of the slave cameras; and the slave cameras are controlled to capture the target. This improves the performance of the intelligent surveillance system and allows fully automatic video monitoring without manual labour, saving manpower, while the data center provides a systematic retrieval function that facilitates subsequent investigation. The utility model can be used in areas that require security surveillance.
As a further improvement of the utility model, the camera calibration method is a BP neural network camera calibration method, which improves the accuracy of the camera transformation matrix data so that the slave cameras capture the target with higher precision.
As a further improvement of the utility model, the master-slave camera coordinate conversion method is an ICP spatial coordinate conversion method, which improves the accuracy of the master-slave camera coordinate transformation matrices so that the slave cameras capture the target with higher precision.
As a further improvement of the utility model, the three-frame difference method obtains the centroid coordinates as follows: Step 3.1, select three consecutive frames from the master camera video image sequence; Step 3.2, compute the difference image of each pair of adjacent frames; Step 3.3, binarize the difference images with a suitably chosen threshold; Step 3.4, perform a logical AND on the resulting binary images at each pixel to obtain their common part and thus the contour information of the moving target; Step 3.5, calculate the centroid coordinates of the target in the master camera image coordinate system using the vector method. In this way the centroid coordinates are obtained reliably and their accuracy is ensured.
As a further improvement of the utility model, the world coordinates of the target in the slave camera coordinate system are calculated as follows: Step 4.1, convert the centroid coordinates of the target into world coordinates in the master camera coordinate system using the camera transformation matrix K; Step 4.2, convert the world coordinates of the target in the master camera coordinate system into world coordinates in the slave camera coordinate system using the master-slave camera coordinate transformation matrix T. The world coordinates in the slave camera coordinate system are thus obtained accurately, further improving the capture precision of the slave cameras.
As a further improvement of the utility model, the angle conversion formulas are θX = arcsin(X/2r) and θY = arcsin(Y/2r). The relative displacement (X, Y) from the optical axis of the slave camera to the target position is obtained by triangulation, and the angles θX and θY through which the slave camera needs to rotate are then obtained from the angle conversion formulas. The rotation angles of the slave camera are thus obtained easily and accurately, so that the slave camera captures the target at a more precise angle and the captured image is clearer and more accurate.
Detailed description of the invention
Fig. 1 is a structural diagram of the utility model.
Fig. 2 is a block diagram of the image acquisition unit of the utility model.
Fig. 3 is a block diagram of the video processor module of the utility model.
Fig. 4 is a circuit diagram of the USB communication module of the utility model.
Fig. 5 is a power supply circuit diagram of the utility model.
Fig. 6 is a display circuit diagram of the utility model.
Fig. 7 is a circuit diagram of the Ethernet module of the utility model.
Fig. 8 is a flow chart of the control method of the utility model.
Fig. 9 is a diagram of the master camera, slave camera and world coordinate transformations of the utility model.
Specific embodiment
The utility model is further described below with reference to the accompanying drawings:
As shown in Figs. 1-7, a one-master multi-slave intelligent video surveillance apparatus includes a master camera and several slave cameras. The master camera and the slave cameras are connected to an image acquisition unit, the image acquisition unit is connected to a video processor module, and the video processor module is connected to a peripheral unit, an image display unit and a slave camera control unit, with the slave camera control unit connected to the several slave cameras. The video processor module processes the images transmitted by the image acquisition unit; the image acquisition unit acquires images from the master camera and the slave cameras.
The slave camera control unit includes a slave camera controller, whose input terminal is connected to the video processor module and whose output terminal is connected to the slave cameras; the slave camera control unit sends the processing result of the video processor module to the slave camera controller and dispatches the slave cameras for monitoring. The peripheral unit is used for manually controlling the slave cameras for monitoring through the video processor module. The image display unit includes a display, which displays the images transmitted by the video processor module.
The image acquisition unit includes a video capture card; the input terminal of the video capture card is connected to the master camera and the several slave cameras, and its output terminal is connected to the video processor module through USB communication module 1. The peripheral unit includes a mouse and a keyboard, which are connected to the video processor module through USB communication module 2, and the display is connected to the video processor module through a display communication module. The video processor module is connected to a data center through an Ethernet communication module and sends the processed image data to the data center for storage; the data center can retrieve and classify the data.
Specifically, the image acquisition unit includes an ADV7181B video decoder connected to the master camera and the slave cameras. The ADV7181B is connected to its configuration module through the IC2_SCLK and IC2_SDAT ports, and to a video acquisition module through the TD_DA[7:0], TD_HS and TD_VS ports. The video acquisition module is connected to an H.264/AVC coding module, and the code-stream output of the H.264/AVC coding module is connected to the video processor module. The video acquisition module is also connected to an SRAM interface module; the ADDR, DATA, CE, OE and WE ports of the SRAM interface module are connected to an image buffer SRAM, and the output of the SRAM interface module is connected to the H.264/AVC coding module. The ADV7181B configuration module, the video acquisition module, the SRAM interface module and the H.264/AVC coding module are integrated in an FPGA chip. The video processor module includes a video processor that uses a processing chip of the ARM Cortex-A53 series, but is not limited to ARM Cortex-A53 series chips.
In operation, a power supply provides power. Under normal circumstances, the video pictures shot by the master camera are sent by the image acquisition unit, via USB communication module 1, to the video processor module, which sends them to the display for viewing. When a target that needs to be tracked and monitored appears, the video processor module passes the target information to the slave camera control unit, which dispatches the slave cameras to capture the target. The captured video pictures, together with the video pictures shot by the master camera, are sent through the image acquisition unit and the video processor to the display, so that the target can be monitored clearly and accurately. When manual monitoring is needed, the slave cameras can be dispatched manually with the mouse and keyboard. The video pictures shot by the master camera and the slave cameras are sent, through the Ethernet communication module connected to the video processor, to the data center for storage so that they can be retrieved at any time in the future.
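The operating flow described above can be summarized in a short control-loop sketch. This is a minimal illustration only: the objects passed in (acquisition, processor, slave_ctrl, display, datacenter) and all of their method names are hypothetical stand-ins for the hardware modules and are not defined by the utility model.

```python
# Minimal sketch of the one-master multi-slave monitoring workflow.
# All parameter objects and method names are hypothetical stand-ins.

def monitoring_loop(acquisition, processor, slave_ctrl, display, datacenter):
    """acquisition: supplies frames from the master and slave cameras.
    processor: detects moving targets and plans slave-camera rotations.
    slave_ctrl: dispatches the slave cameras; display/datacenter: outputs."""
    while True:
        master_frame = acquisition.read_master_frame()   # via USB module 1
        display.show(master_frame)                       # normal viewing
        datacenter.store(master_frame)                   # via Ethernet module

        target = processor.detect_moving_target(master_frame)
        if target is not None:
            # Feed the target information to the slave camera control unit,
            # which rotates the slave cameras and captures close-up views.
            for cam_id, angles in processor.plan_slave_angles(target).items():
                slave_ctrl.rotate_and_capture(cam_id, angles)
            for slave_frame in acquisition.read_slave_frames():
                display.show(slave_frame)
                datacenter.store(slave_frame)
```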
The control method of the one-master multi-slave intelligent video surveillance shown in Figs. 8-9 comprises the following steps: (1) calibrate using the BP neural network camera calibration method to obtain the camera transformation matrices K0, K1, K2, …, Kn;
(2) obtain the master-slave camera coordinate transformation matrices T1, T2, T3, …, Tn by the ICP spatial coordinate conversion method;
(3) apply the three-frame difference method: select three consecutive frames from the master camera video image sequence, compute the difference image of each pair of adjacent frames, binarize the difference images with a suitably chosen threshold, and perform a logical AND on the resulting binary images at each pixel to obtain their common part and thus the contour information of the moving target. Let the images of the (n+1)-th, n-th and (n−1)-th frames of the video sequence be fn+1, fn and fn−1, with corresponding pixel gray values fn+1(x, y), fn(x, y) and fn−1(x, y). The difference images Dn+1 and Dn are obtained according to:
Dn+1(x, y) = |fn+1(x, y) − fn(x, y)|
Dn(x, y) = |fn(x, y) − fn−1(x, y)|
The two difference images are then combined pixel by pixel:
D′n(x, y) = |fn+1(x, y) − fn(x, y)| ∩ |fn(x, y) − fn−1(x, y)|
which yields the image D′n. Thresholding and connectivity analysis are then applied to obtain the contour image of the moving target, and its centroid coordinates (U0, V0) in the master camera image coordinate system are calculated using the vector method;
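The three-frame difference step can be sketched with OpenCV and NumPy as below. This is a minimal sketch only: the frames are assumed to be already-loaded grayscale arrays, the threshold value is an arbitrary example, and image moments are used here as a stand-in for the "vector method" of computing the centroid.

```python
import cv2


def moving_target_centroid(f_prev, f_curr, f_next, thresh=25):
    """Three-frame difference: return the centroid (U0, V0) of the moving
    target in the master camera image, or None if no motion is found.
    f_prev, f_curr, f_next: consecutive grayscale frames (uint8 arrays).
    thresh: binarization threshold (illustrative value)."""
    # Difference images of adjacent frames: Dn and Dn+1.
    d_n = cv2.absdiff(f_curr, f_prev)
    d_n1 = cv2.absdiff(f_next, f_curr)

    # Binarize each difference image with the chosen threshold.
    _, b_n = cv2.threshold(d_n, thresh, 255, cv2.THRESH_BINARY)
    _, b_n1 = cv2.threshold(d_n1, thresh, 255, cv2.THRESH_BINARY)

    # Pixel-wise AND keeps only the common part: the moving target region.
    motion = cv2.bitwise_and(b_n, b_n1)

    # Centroid of the motion region via image moments.
    m = cv2.moments(motion, binaryImage=True)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```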
(4) From the centroid coordinates (U0, V0) of the target in the master camera image coordinate system and the camera transformation matrices K0, K1, K2, …, Kn, obtain the world coordinates (XW0, YW0, ZW0) of the target in the master camera coordinate system; then, using the master-slave camera coordinate transformation matrices T1, T2, T3, …, Tn, obtain the world coordinates (XW1, YW1, ZW1) of the target in the slave camera coordinate system;
(5) From the coordinates (UX, VX) obtained in the slave camera image coordinate system, the relative displacement (X, Y) from the optical axis of the slave camera to the target position is obtained by triangulation; the angles θX and θY through which the slave camera needs to rotate are then obtained from the angle conversion formulas
θX = arcsin(X/2r), θY = arcsin(Y/2r);
(6) According to the rotation angles θX and θY thus obtained, control the slave camera to rotate and capture the target.
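A compact sketch tying steps (1)-(6) together is given below. It is illustrative only: it assumes the intrinsic matrix K0 and a master-to-slave rigid transform (R, t) are already available from steps (1)-(2), that the target depth along the master camera's optical axis is known (the utility model does not state how depth is recovered), and it interprets (X, Y) as the target's offset from the slave camera's optical (Z) axis. The helper names, the parameter r and all numeric values are assumptions made for the example.

```python
import numpy as np


def target_world_in_master(K0, centroid_uv, depth):
    """Back-project the image centroid (U0, V0) into the master camera frame.
    depth: assumed distance along the optical axis (illustrative assumption)."""
    u0, v0 = centroid_uv
    ray = np.linalg.inv(K0) @ np.array([u0, v0, 1.0])
    return ray * depth                   # (XW0, YW0, ZW0) in the master frame


def slave_rotation_angles(R, t, p_master, r):
    """Transform the target into a slave camera frame and compute the angles
    theta_x = arcsin(X / 2r), theta_y = arcsin(Y / 2r), taking (X, Y) as the
    target's offset from the slave camera's optical (Z) axis."""
    p_slave = R @ p_master + t           # (XW1, YW1, ZW1) in the slave frame
    x_off, y_off = p_slave[0], p_slave[1]
    theta_x = np.arcsin(np.clip(x_off / (2 * r), -1.0, 1.0))
    theta_y = np.arcsin(np.clip(y_off / (2 * r), -1.0, 1.0))
    return theta_x, theta_y


# Illustrative usage with made-up values:
K0 = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
p_m = target_world_in_master(K0, (350.0, 260.0), depth=5.0)
R, t = np.eye(3), np.array([0.5, 0.0, 0.0])
print(slave_rotation_angles(R, t, p_m, r=2.0))
```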
In the utility model, camera calibration is performed in order to obtain the relationship between the master camera image coordinates (U0, V0), each slave camera's image coordinates (Un, Vn) and the world coordinates (Xw, Yw, Zw); the following camera matrix K can be obtained:
K = [ fx  d   θx
      0   fy  θy
      0   0   1 ]
where fx and fy are the focal lengths (normally equal), (θx, θy) are the principal point coordinates (relative to the imaging plane), and d is the axis skew parameter, which is 0 in the ideal case.
The camera matrix consists of an intrinsic matrix and an extrinsic matrix; performing a QR decomposition on the camera matrix yields the intrinsic matrix and the extrinsic matrix. The intrinsic parameters include the focal length, principal point, skew factor and distortion coefficients. The extrinsic parameters include the rotation matrix R (3×3) and the translation vector T (3×1), which together describe how a point is transformed from the world coordinate system to the camera coordinate system: the rotation matrix describes the directions of the world coordinate axes relative to the camera coordinate axes, and the translation vector describes the position of the origin of the world coordinate system in the camera coordinate system. A calibrated camera thus carries more information about the scene and the images, which helps the slave cameras capture the target accurately.
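As an illustration of splitting a camera (projection) matrix into its intrinsic and extrinsic parts, the sketch below uses OpenCV's RQ-based decomposition. It is a minimal sketch only: the projection matrix is built from invented values rather than a real calibration, and the function names used are standard OpenCV/NumPy calls, not anything defined by the utility model.

```python
import cv2
import numpy as np

# Illustrative 3x4 projection matrix P = K [R | t] built from made-up values.
K_true = np.array([[800.0, 0.0, 320.0],
                   [0.0, 800.0, 240.0],
                   [0.0,   0.0,   1.0]])
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))   # small rotation
t_true = np.array([[0.2], [0.0], [5.0]])
P = K_true @ np.hstack([R_true, t_true])

# Decompose P back into intrinsics, rotation and (homogeneous) camera center.
K, R, cam_center_h = cv2.decomposeProjectionMatrix(P)[:3]
K = K / K[2, 2]                                   # normalize so K[2, 2] == 1
cam_center = (cam_center_h[:3] / cam_center_h[3]).ravel()
t = -R @ cam_center                               # translation vector

print("intrinsics:\n", K)
print("rotation:\n", R)
print("translation:", t)
```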
BP neural network camera calibration method: a back-propagation network (Back Propagation Network) is a multilayer feed-forward neural network whose weights are trained for a nonlinear differentiable function. The neural network learns the relationship between two-dimensional pixel coordinates and three-dimensional world coordinates using a multilayer feed-forward network; it requires neither an accurate model of the imaging system to be established in advance nor prior assumptions about the parameters. Instead, the network directly learns the mapping between two-dimensional pixel coordinates and three-dimensional world coordinates to obtain the relationship between them. The procedure is as follows:
(1) Obtain the internal parameters of the cameras using a linear calibration method, and compute the theoretical image points of the three-dimensional control points used for calibration.
(2) Compute the deviation between the theoretical image point of each control point used for calibration and the corresponding real image point, as follows:
1) Network initialization.
Assign a random number in the interval (−1, 1) to each connection weight, set the error function e, and specify the required computational accuracy ε and the maximum number of learning iterations M.
2) Randomly select the k-th input sample and the corresponding desired output:
x(k) = (x1(k), x2(k), …, xn(k))
do(k) = (d1(k), d2(k), …, dq(k))
3) Compute the input and output of each hidden-layer and output-layer neuron:
hoh(k) = f(hih(k)),  h = 1, 2, …, p
yoo(k) = f(yio(k)),  o = 1, 2, …, q
4) Using the desired output and the actual output of the network, compute the partial derivative δo(k) of the error function with respect to each output-layer neuron.
5) Using the connection weights from the hidden layer to the output layer, the δo(k) of the output layer and the output of the hidden layer, compute the partial derivative δh(k) of the error function with respect to each hidden-layer neuron.
6) Using the δo(k) of each output-layer neuron and the output of each hidden-layer neuron, correct the connection weights who(k).
7) Using the δh(k) of each hidden-layer neuron and the input of each input-layer neuron, correct the corresponding connection weights.
8) Compute the global error E.
9) Check whether the network error meets the requirement. If the error reaches the preset accuracy or the number of learning iterations exceeds the set maximum M, the algorithm terminates; otherwise, select the next learning sample and the corresponding desired output, return to step 3), and enter the next round of learning.
(3) Each camera requires its own correction network: its actual image coordinates are used as the input of the neural network, and the corresponding deviation obtained in step (2) is used as the output. Training the network establishes a calibration model of the camera lens distortion, and the camera transformation matrix K is obtained, in the same form as given above,
where fx and fy are the focal lengths (normally equal), (θx, θy) are the principal point coordinates (relative to the imaging plane), and d is the axis skew parameter, which is 0 in the ideal case.
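The correction network of step (3) can be sketched as a small NumPy back-propagation network that learns the deviation between theoretical and real image points. This is a sketch under stated assumptions only: the one-hidden-layer structure, sigmoid activation, linear output, learning rate, epoch count and synthetic training data are all illustrative choices, not values specified by the utility model.

```python
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def train_correction_net(img_pts, deviations, hidden=10, lr=0.1, epochs=5000):
    """Train a one-hidden-layer MLP mapping actual image coordinates (u, v)
    to the deviation (du, dv) from the theoretical image point.
    img_pts, deviations: arrays of shape (N, 2). Returns the learned weights."""
    W1 = rng.uniform(-1, 1, (2, hidden))   # weights initialized in (-1, 1)
    b1 = np.zeros(hidden)
    W2 = rng.uniform(-1, 1, (hidden, 2))
    b2 = np.zeros(2)
    for _ in range(epochs):
        h = sigmoid(img_pts @ W1 + b1)      # hidden-layer outputs ho_h(k)
        y = h @ W2 + b2                     # output-layer outputs
        err = y - deviations                # network error
        # Back-propagate: output-layer delta, then hidden-layer delta.
        delta_o = err / len(img_pts)
        delta_h = (delta_o @ W2.T) * h * (1 - h)
        # Correct the connection weights (gradient descent step).
        W2 -= lr * h.T @ delta_o
        b2 -= lr * delta_o.sum(axis=0)
        W1 -= lr * img_pts.T @ delta_h
        b1 -= lr * delta_h.sum(axis=0)
    return W1, b1, W2, b2


# Illustrative usage with synthetic points and a made-up radial-like deviation:
pts = rng.uniform(0, 640, (200, 2))
dev = 1e-4 * (pts - 320.0) * np.linalg.norm(pts - 320.0, axis=1, keepdims=True)
weights = train_correction_net(pts / 640.0, dev)   # inputs normalized to [0, 1]
```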
ICP spatial coordinate conversion method: the ICP (Iterative Closest Point) algorithm is a coordinate point registration algorithm that merges coordinate data of points expressed in different coordinate systems into the same coordinate system. It first finds a usable transformation; the registration operation is really a matter of finding the rigid transformation from coordinate system 1 to coordinate system 2. Suppose there are two points in three-dimensional space, p = (xp, yp, zp) and q = (xq, yq, zq); their Euclidean distance is
d(p, q) = sqrt((xp − xq)² + (yp − yq)² + (zp − zq)²).
The purpose of the three-dimensional coordinate point matching problem is to find the rotation R and translation T that map the point set P onto the point set Q, i.e. to solve, by the least squares method, for the R and T that minimize
E(R, T) = Σi ‖qi − (R·pi + T)‖²,
where T is the transformation from coordinate system 1 to coordinate system 2.
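The least-squares step above has a well-known closed-form solution via the singular value decomposition of the cross-covariance matrix; the sketch below shows that single alignment step for two point sets whose correspondences are already known. It is illustrative only: the full ICP loop that re-establishes nearest-point correspondences at each iteration is omitted, and the sample points and transform are made up for the check.

```python
import numpy as np


def best_rigid_transform(P, Q):
    """Least-squares rigid transform (R, T) such that Q ≈ R @ P + T.
    P, Q: (N, 3) arrays of corresponding points in coordinate systems 1 and 2."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - p_mean, Q - q_mean
    H = Pc.T @ Qc                       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = q_mean - R @ p_mean
    return R, T


# Illustrative check with made-up points and a known rigid transform:
rng = np.random.default_rng(1)
P = rng.uniform(-1, 1, (20, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1
Q = P @ R_true.T + np.array([0.3, -0.2, 1.0])
R_est, T_est = best_rigid_transform(P, Q)
print(np.allclose(R_est, R_true), T_est)
```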
Master-slave camera coordinate conversion: to achieve accurate capture by the slave cameras, the target coordinates must be converted from the master camera coordinate system into the slave camera coordinate system. When the master camera and a slave camera are aimed at the same target, the real coordinates A0 (XW0, YW0, ZW0) of the target in the master camera coordinate system can be obtained from the master camera attitude description matrix [P0, D0] (where P is the pitch angle and D is the deflection angle) and the camera transformation matrix K, together with the real coordinates A1 (X1, Y1, Z1) of the target in the slave camera coordinate system under its current attitude; an axis-shift transformation then gives the coordinates O1 (XW0−XW1, YW0−YW1, ZW0−ZW1) of the slave camera relative to the master camera. Afterwards, using the slave camera attitude description matrix [P1, D1] and an axis conversion,
the real coordinates B (XW1, YW1, ZW1) of the target in the slave camera coordinate system are obtained. Finally, the master-slave camera coordinate transformation matrix T is obtained by the ICP spatial coordinate conversion method.
In operation, calibration is first performed using the BP neural network camera calibration method to obtain the camera transformation matrices K0, K1, K2, …, Kn; the master-slave camera coordinate transformation matrices T1, T2, T3, …, Tn are then obtained by the ICP spatial coordinate conversion method; next, the centroid coordinates (U0, V0) of the target in the master camera image coordinate system are obtained by the three-frame difference method. From the centroid coordinates (U0, V0) and the camera transformation matrices K0, K1, K2, …, Kn, the world coordinates (XW0, YW0, ZW0) of the target in the master camera coordinate system are obtained, and then, using the master-slave camera coordinate transformation matrices T1, T2, T3, …, Tn, the world coordinates (XW1, YW1, ZW1) of the target in the slave camera coordinate system are obtained.
From the coordinates (UX, VX) obtained in the slave camera image coordinate system, the relative displacement (X, Y) from the optical axis of the slave camera to the target position is obtained by triangulation, and the angles θX and θY through which the slave camera needs to rotate are then obtained from the angle conversion formulas θX = arcsin(X/2r) and θY = arcsin(Y/2r). According to the rotation angles θX and θY thus obtained, the slave camera is controlled to rotate and capture the target. This improves the performance of the intelligent surveillance system and allows fully automatic video monitoring without manual labour, saving manpower, while the data center with its systematic retrieval function facilitates subsequent review.
The utility model is not limited to the above embodiment. On the basis of the disclosed technical solution, those skilled in the art may, according to the disclosed technical content, replace or modify some of the technical features without creative labour, and such replacements and modifications fall within the scope of protection of the utility model.

Claims (4)

1. A one-master multi-slave intelligent video surveillance apparatus, characterized in that it comprises a master camera and several slave cameras, wherein the master camera and the slave cameras are connected to an image acquisition unit, the image acquisition unit is connected to a video processor module, the video processor module is connected to a peripheral unit, an image display unit and a slave camera control unit, and the slave camera control unit is connected to the several slave cameras;
the video processor module is used to process the images transmitted by the image acquisition unit;
the image acquisition unit is used to acquire images from the master camera and the slave cameras;
the slave camera control unit includes a slave camera controller, the input terminal of the slave camera controller being connected to the video processor module and its output terminal being connected to the slave cameras; the slave camera control unit is used to send the processing result of the video processor module to the slave camera controller and to dispatch the slave cameras for monitoring;
the peripheral unit is used for manually controlling the slave cameras for monitoring through the video processor module;
the image display unit includes a display, which is used to display the images transmitted by the video processor module.
2. The one-master multi-slave intelligent video surveillance apparatus according to claim 1, characterized in that the image acquisition unit includes a video capture card, the input terminal of the video capture card being connected to the master camera and the several slave cameras and its output terminal being connected to the video processor module through USB communication module 1.
3. The one-master multi-slave intelligent video surveillance apparatus according to claim 2, characterized in that the peripheral unit includes a mouse and a keyboard, the mouse and keyboard being connected to the video processor module through USB communication module 2, and the display being connected to the video processor module through a display communication module.
4. The one-master multi-slave intelligent video surveillance apparatus according to any one of claims 1 to 3, characterized in that the video processor module is connected to a data center through an Ethernet communication module and is used to send the image data processed by the video processor module to the data center for storage, the data center being able to retrieve and classify the data.
CN201820676070.2U 2018-05-08 2018-05-08 One-master multi-slave intelligent video surveillance apparatus Expired - Fee Related CN208241780U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201820676070.2U CN208241780U (en) 2018-05-08 2018-05-08 One-master multi-slave intelligent video surveillance apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201820676070.2U CN208241780U (en) 2018-05-08 2018-05-08 One-master multi-slave intelligent video surveillance apparatus

Publications (1)

Publication Number Publication Date
CN208241780U true CN208241780U (en) 2018-12-14

Family

ID=64581931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201820676070.2U Expired - Fee Related CN208241780U (en) 2018-05-08 2018-05-08 One-master multi-slave intelligent video surveillance apparatus

Country Status (1)

Country Link
CN (1) CN208241780U (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108377368A (en) * 2018-05-08 2018-08-07 扬州大学 A kind of one master and multiple slaves formula intelligent video monitoring apparatus and its control method

Similar Documents

Publication Publication Date Title
CN108921893B (en) Image cloud computing method and system based on online deep learning SLAM
CN111679291B (en) Inspection robot target positioning configuration method based on three-dimensional laser radar
CN105825511B (en) A kind of picture background clarity detection method based on deep learning
CN107953329B (en) Object recognition and attitude estimation method and device and mechanical arm grabbing system
CN114495274B (en) System and method for realizing human motion capture by using RGB camera
CN107680116A (en) A kind of method for monitoring moving object in video sequences
CN114529605A (en) Human body three-dimensional attitude estimation method based on multi-view fusion
Liu et al. Using unsupervised deep learning technique for monocular visual odometry
CN110555408A (en) Single-camera real-time three-dimensional human body posture detection method based on self-adaptive mapping relation
CN108377368A (en) A kind of one master and multiple slaves formula intelligent video monitoring apparatus and its control method
CN110135277B (en) Human behavior recognition method based on convolutional neural network
CN111027415A (en) Vehicle detection method based on polarization image
CN113159466A (en) Short-time photovoltaic power generation prediction system and method
CN114332942A (en) Night infrared pedestrian detection method and system based on improved YOLOv3
CN114549391A (en) Circuit board surface defect detection method based on polarization prior
CN110553650B (en) Mobile robot repositioning method based on small sample learning
CN114387679B (en) System and method for realizing sight estimation and attention analysis based on recurrent convolutional neural network
CN112365578A (en) Three-dimensional human body model reconstruction system and method based on double cameras
CN114494427B (en) Method, system and terminal for detecting illegal behaviors of person with suspension arm going off station
CN208241780U (en) A kind of one master and multiple slaves formula intelligent video monitoring apparatus
CN113420776B (en) Multi-side joint detection article classification method based on model fusion
CN111222459A (en) Visual angle-independent video three-dimensional human body posture identification method
Zhang et al. EventMD: High-speed moving object detection based on event-based video frames
Deng et al. An automatic body length estimating method for Micropterus salmoides using local water surface stereo vision
CN106971385A (en) A kind of aircraft Situation Awareness multi-source image real time integrating method and its device

Legal Events

Date Code Title Description
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181214