CN105979203B - Multi-camera cooperative monitoring method and device

Multi-camera cooperative monitoring method and device

Info

Publication number
CN105979203B
CN105979203B (application CN201610280010.4A)
Authority
CN
China
Prior art keywords
dimensional
foreground
point
fusion
image
Prior art date
Legal status
Expired - Fee Related
Application number
CN201610280010.4A
Other languages
Chinese (zh)
Other versions
CN105979203A (en)
Inventor
梁华庆
曹旭东
杨勇
李凤民
潘居臣
宋松
Current Assignee
China University of Petroleum Beijing
Petrochina Huabei Oilfield Co
Original Assignee
China University of Petroleum Beijing
Petrochina Huabei Oilfield Co
Priority date
Filing date
Publication date
Application filed by China University of Petroleum Beijing and Petrochina Huabei Oilfield Co
Priority to CN201610280010.4A
Publication of CN105979203A
Application granted
Publication of CN105979203B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00 Burglar, theft or intruder alarms
    • G08B 13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189 ... using passive radiation detection systems
    • G08B 13/194 ... using passive radiation detection systems using image scanning and comparing systems
    • G08B 13/196 ... using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B 13/19639 Details of the system layout
    • G08B 13/19641 Multiple cameras having overlapping views on a single scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; context of image processing
    • G06T 2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Alarm Systems (AREA)

Abstract

Embodiments of the present invention provide a multi-camera cooperative monitoring method and device. The method includes: combining the multiple cameras of a monitored site into multiple camera combinations; obtaining the frames of two-dimensional images each camera combination captures in each acquisition period, and performing foreground extraction and projection on each frame to obtain three-dimensional foreground images; fusing the three-dimensional foreground images of all camera combinations at the same acquisition moment into a three-dimensional fused foreground map; determining target objects and their spatial distribution features and motion distribution features from the three-dimensional fused foreground maps of all acquisition moments and from how the positions corresponding to real-world points change across those maps; and judging whether a target scenario occurs. The present invention makes full use of the three-dimensional spatial information of the monitored site and can obtain good monitoring results even under poor illumination and in wide-field, long-range monitoring.

Description

Multi-camera cooperative monitoring method and device
Technical field
Embodiments of the present invention relate to the field of intelligent monitoring; more specifically, embodiments of the present invention relate to a multi-camera cooperative monitoring method and device.
Background
This section is intended to provide background or context for the embodiments of the present invention set forth in the claims. The description herein is not admitted to be prior art merely by its inclusion in this section.
In the field of intelligent monitoring for equipment and personnel safety, the detection and analysis of abnormal patterns has long been a primary research direction, with a wide range of applications. However, accurate real-time detection of personnel and objects in real scenes, and judgment of their behavior, faces many challenges: illumination changes under irregular natural conditions, the viewing angles of the deployed cameras, and the loss of resolution over long distances are all factors that hinder accurate detection.
For camera-based detection of scene anomalies and alarming, mainstream methods mostly detect abnormal patterns from the two-dimensional color or grayscale image information captured by the cameras. One major class is detection-and-tracking methods for people: through per-person detection and tracking, a rough trajectory of each person in the two-dimensional image coordinate system can be calculated, and the trajectories of multiple people are finally analyzed for abnormal events. Such methods are usually only suitable for low-density, short-to-medium-range scenes; in real scenes, cluttered backgrounds, occlusion between people and illumination changes often cause the detection and tracking algorithms to fail, so accurate results cannot be obtained. Another class is methods based on low-level features: a background model of the scene is first built, the foreground in the scene is then extracted by background subtraction, and features of the foreground regions (such as the contour, texture and temporal features of the foreground) are taken as input to a trained method that detects abnormal events; the quality of the result is limited by illumination changes in the environment, the resolution of the images, and the distance of the event from the camera.
There are also methods that use depth information for anomaly monitoring. These are mostly based on active light, can accurately obtain depth values over a small range (typically 1 to 8 meters), and can be used for indoor monitoring.
Summary of the invention
However, existing methods that use depth information for anomaly monitoring are strongly affected by outdoor illumination and suffer from a small working distance and a small field angle, so they cannot be used for outdoor wide-range, wide-angle, long-distance anomaly monitoring.
To this end, the present invention provides a multi-camera cooperative monitoring method and device, to overcome the limitations of existing monitoring methods in outdoor wide-range, wide-angle, long-distance anomaly monitoring.
In a first aspect of embodiments of the present invention, a multi-camera cooperative monitoring method is provided, comprising:
Step A: combining the multiple cameras deployed at a monitored site into multiple camera combinations, each camera combination being formed of at least two cameras;
Step B: obtaining the two-dimensional images that each camera combination captures in each acquisition period, and performing the processing of steps B1 to B3 on the frames of two-dimensional images successively captured by the current camera combination in the current acquisition period;
Step B1: extracting the foreground from the frames of two-dimensional images to obtain multiple two-dimensional foreground images, and projecting the multiple two-dimensional foreground images into three-dimensional space to obtain multiple three-dimensional foreground images; wherein, at each acquisition moment of the current acquisition period, the frames of two-dimensional images, the multiple two-dimensional foreground images and the multiple three-dimensional foreground images are in one-to-one correspondence;
Step B2: calculating how the positions corresponding to real-world points change across the multiple two-dimensional foreground images; wherein the real-world points are the points in the real world corresponding to the two-dimensional foreground points of the two-dimensional foreground images, and a two-dimensional foreground point is a pixel of a two-dimensional foreground image;
Step B3: calculating, from the position changes of the real-world points across the multiple two-dimensional foreground images and from the correspondence between the two-dimensional foreground images and the three-dimensional foreground images, how the positions corresponding to the real-world points change across the multiple three-dimensional foreground images;
Step C: for each acquisition moment of the current acquisition period, fusing the three-dimensional foreground points in the three-dimensional foreground images corresponding to all camera combinations of the monitored site at the same acquisition moment, following the rule that all three-dimensional foreground points corresponding to the same real-world point are fused into one three-dimensional fused foreground point, and combining all three-dimensional fused foreground points obtained after fusion to form the three-dimensional fused foreground map of that acquisition moment; wherein a three-dimensional foreground point is a voxel of a three-dimensional foreground image;
Step D: calculating, from the position changes of the real-world points across the multiple three-dimensional foreground images and from the correspondence between each three-dimensional fused foreground point and the three-dimensional foreground points fused into it, how the positions corresponding to the real-world points change across the three-dimensional fused foreground maps of the acquisition moments of the current acquisition period;
Step E: determining target objects from the three-dimensional fused foreground maps of the acquisition moments of the current acquisition period and from the position changes of the real-world points across those maps, and determining the spatial distribution features and motion distribution features of the target objects;
Step F: judging, from the spatial distribution features and motion distribution features of the target objects, whether a target scenario occurs;
Step G: outputting the judgment result, and raising an alarm when a target scenario occurs.
In a second aspect of embodiments of the present invention, a multi-camera cooperative monitoring device is provided, comprising:
a camera grouping module, for combining the multiple cameras deployed at the monitored site into multiple camera combinations, each camera combination being formed of at least two cameras;
an image obtaining module, for obtaining the two-dimensional images that each camera combination captures in each acquisition period;
an image processing module, for performing image processing on the frames of two-dimensional images successively captured by the current camera combination in the current acquisition period;
the image processing module further comprising:
a first image processing module, for extracting the foreground from the frames of two-dimensional images to obtain multiple two-dimensional foreground images;
a second image processing module, for projecting the multiple two-dimensional foreground images into three-dimensional space to obtain multiple three-dimensional foreground images; wherein, at each acquisition moment of the current acquisition period, the frames of two-dimensional images, the multiple two-dimensional foreground images and the multiple three-dimensional foreground images are in one-to-one correspondence;
a third image processing module, for calculating how the positions corresponding to real-world points change across the multiple two-dimensional foreground images; wherein the real-world points are the points in the real world corresponding to the two-dimensional foreground points of the two-dimensional foreground images, and a two-dimensional foreground point is a pixel of a two-dimensional foreground image;
a fourth image processing module, for calculating, from the position changes of the real-world points across the multiple two-dimensional foreground images and from the correspondence between the two-dimensional foreground images and the three-dimensional foreground images, how the positions corresponding to the real-world points change across the multiple three-dimensional foreground images;
a fusion processing module, for fusing, at each acquisition moment of the current acquisition period, the three-dimensional foreground points in the three-dimensional foreground images corresponding to all camera combinations of the monitored site at the same acquisition moment, following the rule that all three-dimensional foreground points corresponding to the same real-world point are fused into one three-dimensional fused foreground point, and combining all three-dimensional fused foreground points obtained after fusion to form the three-dimensional fused foreground map of that acquisition moment; wherein a three-dimensional foreground point is a voxel of a three-dimensional foreground image;
a displacement calculation module, for calculating, from the position changes of the real-world points across the multiple three-dimensional foreground images and from the correspondence between each three-dimensional fused foreground point and the three-dimensional foreground points fused into it, how the positions corresponding to the real-world points change across the three-dimensional fused foreground maps of the acquisition moments of the current acquisition period;
a target search module, for determining target objects from the three-dimensional fused foreground maps of the acquisition moments of the current acquisition period and from the position changes of the real-world points across those maps, and determining the spatial distribution features and motion distribution features of the target objects;
a judgment module, for judging, from the spatial distribution features and motion distribution features of the target objects, whether a target scenario occurs;
an output module, for outputting the judgment result and raising an alarm when a target scenario occurs.
With the above technical scheme, the present invention cooperatively uses the images captured by all cameras of the monitored site, making full use of the three-dimensional spatial information of the captured scene points. Compared with existing methods, it provides more accurate detection results even under poor illumination and in wide-angle, long-range, wide-area scenes, improving the detection accuracy and robustness of outdoor monitoring; it is particularly suitable for monitoring sites with few patrols and poor weather conditions.
Brief description of the drawings
The above and other objects, features and advantages of the exemplary embodiments of the present invention will become easy to understand by reading the following detailed description with reference to the accompanying drawings. In the drawings, several embodiments of the present invention are shown by way of example rather than limitation, in which:
Fig. 1 is a schematic diagram of the present invention applied to an oil/gas well equipment safety monitoring scene;
Fig. 2 is a flow diagram of the multi-camera cooperative monitoring method provided by the present invention;
Fig. 3 is a schematic diagram of processing the two-dimensional images captured by one camera combination at each acquisition moment to obtain two-dimensional foreground images and three-dimensional foreground images;
Fig. 4 is a schematic diagram of fusing the three-dimensional foreground images of all camera combinations at the same acquisition moment into a three-dimensional fused foreground map;
Fig. 5 is a schematic diagram of the input and output of the multi-camera cooperative monitoring device;
Fig. 6 is a structural block diagram of the multi-camera cooperative monitoring device;
In the drawings, identical or corresponding reference signs indicate identical or corresponding parts.
Detailed description of embodiments
The principle and spirit of the invention are described below with reference to several illustrative embodiments. It should be appreciated that these embodiments are provided only so that those skilled in the art can better understand and implement the present invention, not to limit the scope of the invention in any way; rather, they are provided so that this disclosure will be thorough and complete and will fully convey the scope of the disclosure to those skilled in the art.
Those skilled in the art will understand that embodiments of the present invention can be implemented as a system, device, apparatus, method or computer program product. Therefore, the present disclosure may be embodied in the following forms: complete hardware, complete software (including firmware, resident software, microcode, etc.), or a combination of hardware and software.
It should be noted that the term "two-dimensional foreground point" herein refers to a pixel of a two-dimensional foreground image, and the term "three-dimensional foreground point" refers to a voxel of a three-dimensional foreground image.
According to embodiments of the present invention, a multi-camera cooperative monitoring method and device are proposed.
The principle and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Overview of the invention
The present invention uses the multiple existing cameras to monitor abnormal conditions in the scene in real time. First, the cameras of the monitored site are combined pairwise into multiple camera combinations, and the two-dimensional images captured by each camera combination are obtained in real time. Second, the two-dimensional images captured by each camera combination are processed in turn: computing a two-dimensional depth map, extracting two-dimensional foreground points and projecting them to obtain three-dimensional foreground points, and computing three-dimensional foreground optical flow. Then the three-dimensional foreground point clouds of all camera combinations are fused into a three-dimensional fused foreground point cloud, and a bird's-eye view and three-dimensional fused foreground optical flow are obtained from the fused point cloud and the three-dimensional foreground optical flow. Finally, template matching, pattern classification and similar processing are applied to the bird's-eye view and the three-dimensional fused foreground optical flow to judge and output target scenarios.
Having introduced the basic principle of the present invention, various non-limiting embodiments of the invention are introduced in detail below.
Application scenario overview
Fig. 1 shows the present invention applied to an oil/gas well equipment safety monitoring scene: the multiple cameras already deployed at the oil/gas well site (camera a, camera b, camera c and camera d) monitor the oil/gas well equipment and personnel. Within the monitoring range of the cameras, personnel and equipment are monitored in real time, and an alarm is raised as soon as abnormal behavior occurs. Abnormal behavior here includes, but is not limited to: people fighting, people running, people stealing oil/gas well equipment, people placing articles on oil/gas well equipment, people entering areas that should not be entered, people falling down, people holding dangerous articles (knives, box cutters), and so on.
It should be noted that Fig. 1 only representatively illustrates one application scenario of the present invention; the invention is not limited to the scene shown in Fig. 1. Those skilled in the art will understand that the present invention can also be applied to any other monitoring scene, such as logistics warehouse supervision, border defense monitoring, etc.
Illustrative methods
With reference to the application scenario of Fig. 1, the multi-camera cooperative monitoring method provided by the present invention is described below with reference to Fig. 2.
It should be noted that the above application scenario is shown only to facilitate understanding of the spirit and principles of the present invention, and embodiments of the invention are unrestricted in this regard; rather, embodiments of the present invention can be applied to any applicable scene.
As shown in Fig. 2, the multi-camera cooperative monitoring method provided by the present invention includes:
Step S1: combining the multiple cameras deployed at the monitored site into multiple camera combinations, each camera combination being formed of at least two cameras;
Step S2: obtaining the two-dimensional images that each camera combination captures in each acquisition period, and performing the processing of steps S21 to S23 on the frames of two-dimensional images successively captured by the current camera combination in the current acquisition period;
Step S21: extracting the foreground from the frames of two-dimensional images to obtain multiple two-dimensional foreground images, and projecting the multiple two-dimensional foreground images into three-dimensional space to obtain multiple three-dimensional foreground images; wherein, at each acquisition moment of the current acquisition period, the frames of two-dimensional images, the multiple two-dimensional foreground images and the multiple three-dimensional foreground images are in one-to-one correspondence;
Step S22: calculating how the positions corresponding to real-world points change across the multiple two-dimensional foreground images; wherein the real-world points are the points in the real world corresponding to the two-dimensional foreground points of the two-dimensional foreground images, and a two-dimensional foreground point is a pixel of a two-dimensional foreground image;
Step S23: calculating, from the position changes of the real-world points across the multiple two-dimensional foreground images and from the correspondence between the two-dimensional foreground images and the three-dimensional foreground images, how the positions corresponding to the real-world points change across the multiple three-dimensional foreground images;
Step S3: for each acquisition moment of the current acquisition period, fusing the three-dimensional foreground points in the three-dimensional foreground images corresponding to all camera combinations of the monitored site at the same acquisition moment, following the rule that all three-dimensional foreground points corresponding to the same real-world point are fused into one three-dimensional fused foreground point, and combining all three-dimensional fused foreground points obtained after fusion to form the three-dimensional fused foreground map of that acquisition moment; wherein a three-dimensional foreground point is a voxel of a three-dimensional foreground image;
Step S4: calculating, from the position changes of the real-world points across the multiple three-dimensional foreground images and from the correspondence between each three-dimensional fused foreground point and the three-dimensional foreground points fused into it, how the positions corresponding to the real-world points change across the three-dimensional fused foreground maps of the acquisition moments of the current acquisition period;
Step S5: determining target objects from the three-dimensional fused foreground maps of the acquisition moments of the current acquisition period and from the position changes of the real-world points across those maps, and determining the spatial distribution features and motion distribution features of the target objects;
Step S6: judging, from the spatial distribution features and motion distribution features of the target objects, whether a target scenario occurs;
Step S7: outputting the judgment result, and raising an alarm when a target scenario occurs.
The multi-camera cooperative monitoring method shown in Fig. 2 is introduced below through the image processing processes shown in Fig. 3 and Fig. 4.
Assume the monitored site has three camera combinations Z1, Z2, Z3, and that the acquisition moments of the current acquisition period are T1, T2, T3.
As shown in Fig. 3, camera combination Z1 captures a two-dimensional image at each of the acquisition moments T1, T2, T3. The foreground is extracted from each of these 3 frames, giving 3 corresponding two-dimensional foreground images, and these 3 two-dimensional foreground images are projected into three-dimensional space, giving 3 corresponding three-dimensional foreground images. The detailed process of projecting a two-dimensional foreground image into three-dimensional space is: each two-dimensional foreground point of the two-dimensional foreground image is projected into three-dimensional space to obtain a three-dimensional foreground point, and the three-dimensional foreground points corresponding to all the two-dimensional foreground points are combined to form the three-dimensional foreground image. A three-dimensional foreground image is in fact a point cloud; it corresponds to the real-world object depicted by the two-dimensional foreground image.
For points P and Q in the real world, their positions in the two-dimensional foreground image of acquisition moment T1 are P1 and Q1 respectively, and their positions in the three-dimensional foreground image of that moment are P1' and Q1'; their positions in the two-dimensional foreground image of acquisition moment T2 are P2 and Q2, and in the three-dimensional foreground image of that moment P2' and Q2'; their positions in the two-dimensional foreground image of acquisition moment T3 are P3 and Q3, and in the three-dimensional foreground image of that moment P3' and Q3'.
From the positions of P1, P2, P3, the change of the position corresponding to the real-world point P across the two-dimensional foreground images of the acquisition moments can be calculated; this change can be represented by the two-dimensional vectors (P1→P2) and (P2→P3). From the positions of Q1, Q2, Q3, the change of the position corresponding to the real-world point Q across the two-dimensional foreground images of the acquisition moments can be calculated; this change can be represented by the two-dimensional vectors (Q1→Q2) and (Q2→Q3).
From the correspondences P1↔P1' and P2↔P2' and the two-dimensional vector (P1→P2), the change of the position corresponding to the real-world point P between the three-dimensional foreground images of acquisition moments T1 and T2 can be determined, i.e. the three-dimensional vector (P1'→P2'). Similarly, from the correspondences P2↔P2' and P3↔P3' and the two-dimensional vector (P2→P3), the change between the three-dimensional foreground images of acquisition moments T2 and T3 can be determined, i.e. the three-dimensional vector (P2'→P3'). Taken together, the change of the position corresponding to the real-world point P across the three-dimensional foreground images of the acquisition moments is thus determined. In the same way, the change of the position corresponding to the real-world point Q across the three-dimensional foreground images of the acquisition moments can be determined.
As shown in Fig. 4, at acquisition moment T1 the positions corresponding to the real-world points X and Y are X-Z1-T1 and Y-Z1-T1 in the three-dimensional foreground image of camera combination Z1, X-Z2-T1 and Y-Z2-T1 in that of camera combination Z2, and X-Z3-T1 and Y-Z3-T1 in that of camera combination Z3. At acquisition moment T1, all three-dimensional foreground points corresponding to the real-world point X are fused into one three-dimensional fused foreground point, denoted X-T1, and all three-dimensional foreground points corresponding to the real-world point Y are fused into one three-dimensional fused foreground point, denoted Y-T1. All three-dimensional fused foreground points like X-T1 and Y-T1 together constitute the three-dimensional fused foreground map of acquisition moment T1.
Similarly, at acquisition moment T2 the positions corresponding to the real-world points X and Y are X-Z1-T2 and Y-Z1-T2 in the three-dimensional foreground image of camera combination Z1, X-Z2-T2 and Y-Z2-T2 in that of camera combination Z2, and X-Z3-T2 and Y-Z3-T2 in that of camera combination Z3. At acquisition moment T2, all three-dimensional foreground points corresponding to X are fused into one three-dimensional fused foreground point, denoted X-T2, and all those corresponding to Y into one, denoted Y-T2. All such three-dimensional fused foreground points together constitute the three-dimensional fused foreground map of acquisition moment T2.
Similarly, at acquisition moment T3 the positions corresponding to the real-world points X and Y are X-Z1-T3 and Y-Z1-T3 in the three-dimensional foreground image of camera combination Z1, X-Z2-T3 and Y-Z2-T3 in that of camera combination Z2, and X-Z3-T3 and Y-Z3-T3 in that of camera combination Z3. At acquisition moment T3, all three-dimensional foreground points corresponding to X are fused into one three-dimensional fused foreground point, denoted X-T3, and all those corresponding to Y into one, denoted Y-T3. All such three-dimensional fused foreground points together constitute the three-dimensional fused foreground map of acquisition moment T3.
Combining Fig. 3 and Fig. 4, the relationship among the three-dimensional foreground points X-Z1-T1, X-Z1-T2, X-Z1-T3 is consistent with the relationship among the three-dimensional foreground points P1', P2', P3'. As explained above, the change of the position corresponding to the real-world point P across the three-dimensional foreground images of acquisition moments T1, T2 and T3 has been calculated, namely the three-dimensional vectors (P1'→P2') and (P2'→P3'). In the same way, the change of the positions of the three-dimensional foreground points X-Z1-T1, X-Z1-T2, X-Z1-T3 can be calculated; denote it by the three-dimensional vector V-Z1.
Likewise, the change of the positions of the three-dimensional foreground points X-Z2-T1, X-Z2-T2, X-Z2-T3 can be calculated; denote it by the three-dimensional vector V-Z2.
Likewise, the change of the positions of the three-dimensional foreground points X-Z3-T1, X-Z3-T2, X-Z3-T3 can be calculated; denote it by the three-dimensional vector V-Z3.
At acquisition moment T1, the real-world point X corresponds to the three-dimensional foreground points X-Z1-T1, X-Z2-T1, X-Z3-T1; at acquisition moment T2, to X-Z1-T2, X-Z2-T2, X-Z3-T2; and at acquisition moment T3, to X-Z1-T3, X-Z2-T3, X-Z3-T3.
From the correspondence between the three-dimensional fused foreground point X-T1 and the three-dimensional foreground points X-Z1-T1, X-Z2-T1, X-Z3-T1, between X-T2 and X-Z1-T2, X-Z2-T2, X-Z3-T2, and between X-T3 and X-Z1-T3, X-Z2-T3, X-Z3-T3, and from the three-dimensional vectors V-Z1, V-Z2, V-Z3, the change of the positions of the three-dimensional fused foreground points X-T1, X-T2, X-T3 can be calculated; it can be represented by a three-dimensional vector V-X, i.e. the change of the position corresponding to the real-world point X across the three-dimensional fused foreground maps of the acquisition moments.
The present invention cooperatively uses the images captured by all cameras of the monitored site and makes full use of the three-dimensional spatial information of the captured scene. Even under poor illumination and in wide-angle, long-range, wide-area scenes it can provide more accurate detection results, improving the detection accuracy and robustness of outdoor monitoring; it is particularly suitable for monitoring sites with few patrols and poor weather conditions, such as oil well operation sites, logistics warehouse supervision, and border defense monitoring.
With reference to Fig. 3 and Fig. 4, each step of the method is described in detail below.
Step S1: combining the multiple cameras deployed at the monitored site into multiple camera combinations, each camera combination being formed of at least two cameras.
The purpose of forming camera combinations in this step is to use the images captured by the cameras within a combination to calculate the depth values of the captured scene.
In specific implementation, the cameras of the monitored site may optionally be grouped pairwise into several camera combinations. The two streams of two-dimensional images captured by the two cameras in a combination are equivalent to the two-dimensional images seen by a person's two eyes and can be used to calculate depth values, i.e. the distances from the camera to the real-world points (points on object surfaces in the captured scene) corresponding to the pixels of the two-dimensional images.
Based on the stereoscopic-vision principle of the human eyes, calculating depth values from the two-dimensional images of the two cameras in a camera combination requires that the scenes covered by the two cameras of the combination have a certain overlap. Under normal circumstances, cameras positioned close to each other cover overlapping scenes. Based on this consideration, in specific implementation any two cameras of the monitored site whose positions are less than a predetermined distance apart can form a camera combination. For example, matching the four cameras a, b, c, d shown in Fig. 1 by installation position finally gives three camera combinations: (a, b), (b, c), (c, d).
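As an illustration of this pairing rule, the sketch below groups cameras whose installation positions lie within a predetermined distance; the coordinates and the 30-meter threshold are assumed for illustration and chosen so that the result matches the combinations of Fig. 1.

```python
import itertools
import math

# Assumed installation positions of cameras a, b, c, d in meters (illustrative only).
positions = {"a": (0.0, 0.0), "b": (20.0, 5.0), "c": (40.0, 5.0), "d": (60.0, 0.0)}
MAX_PAIR_DISTANCE = 30.0  # assumed predetermined distance threshold

def pair_cameras(positions, max_dist):
    """Form a camera combination from any two cameras closer than max_dist."""
    pairs = []
    for (n1, p1), (n2, p2) in itertools.combinations(positions.items(), 2):
        if math.dist(p1, p2) < max_dist:
            pairs.append((n1, n2))
    return pairs

print(pair_cameras(positions, MAX_PAIR_DISTANCE))
# With the assumed layout this yields [('a', 'b'), ('b', 'c'), ('c', 'd')], matching Fig. 1.
```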
In this step, a remote terminal unit (RTU) can obtain the image frames, i.e. the two-dimensional images, that each camera captures based on the Network Time Protocol (NTP).
Depending on the capabilities of the cameras, the captured two-dimensional images may be grayscale images or color images (such as RGB color images).
At an actual monitored site, the models, parameters and so on of the cameras may differ, and the captured images may also differ in size and shape. Taking this into account, in specific implementation the present invention can also perform distortion correction and registration on the two-dimensional images captured by each camera. For example, this step can use a chessboard-based method to obtain the distortion matrix, intrinsic parameters and extrinsic parameters of each camera and, based on these, perform distortion correction and registration on the two-dimensional images captured by the corresponding camera; the specific implementation can refer to the methods provided by OpenCV (Open Source Computer Vision Library) and is not repeated here.
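A minimal sketch of such chessboard-based calibration and distortion correction, using standard OpenCV calls (the board size and image paths are assumed for illustration):

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)  # assumed inner-corner count of the calibration chessboard
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("calib/*.png"):  # assumed location of the calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics (camera matrix), distortion matrix and per-view extrinsics (rvecs, tvecs).
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)

def undistort(frame):
    """Distortion-correct one captured two-dimensional image."""
    return cv2.undistort(frame, mtx, dist)
```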
Step S2: obtaining the two-dimensional images that each camera combination captures in each acquisition period, and performing the image processing of steps S21 to S23 on the frames of two-dimensional images successively captured by the current camera combination in the current acquisition period:
Step S21: extracting the foreground from the frames of two-dimensional images to obtain multiple two-dimensional foreground images, and projecting the multiple two-dimensional foreground images into three-dimensional space to obtain multiple three-dimensional foreground images; wherein, at each acquisition moment of the current acquisition period, the frames of two-dimensional images, the multiple two-dimensional foreground images and the multiple three-dimensional foreground images are in one-to-one correspondence.
Specifically, when extracting the foreground from the two-dimensional images, this step can first perform background modeling (for example static background modeling or mixture-of-Gaussians background modeling) and then use background subtraction.
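A minimal foreground-extraction sketch using OpenCV's mixture-of-Gaussians background model, one of the options named above (parameter values are assumed):

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def extract_foreground(frame):
    """Return the binary foreground mask of one two-dimensional image frame."""
    mask = subtractor.apply(frame)                               # background subtraction
    mask = cv2.medianBlur(mask, 5)                               # suppress speckle noise
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)   # drop the shadow label (127)
    return mask
```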
Specifically, when projecting a two-dimensional foreground image into three-dimensional space to obtain a three-dimensional foreground image, this step needs to consider the transformation between the two-dimensional space of camera imaging and the three-dimensional space of the real world.
Optionally, step S21 can project the two-dimensional foreground image into three-dimensional space according to the process of steps S211 to S215:
Step S211: determining the two-dimensional coordinates of each two-dimensional foreground point in the two-dimensional image corresponding to the two-dimensional foreground image;
Step S212: determining the depth value of each two-dimensional foreground point in the two-dimensional depth map corresponding to the two-dimensional foreground image. The two-dimensional depth map uses the same coordinate axes as the two-dimensional image captured by the camera: for a pixel with the same coordinates, its value in the two-dimensional image is the image information of that pixel, while its value in the two-dimensional depth map is the distance (i.e. the depth value) from the corresponding real-world point to the camera.
When calculating depth values, traditional methods such as block matching, dynamic programming, graph cuts or semi-global matching can be used. Optionally, the present invention can also calculate depth values as follows: first, obtain an initial disparity according to a conventional disparity calculation method; then construct a graph model over all pixels of the two-dimensional image, where the nodes of the graph are the disparity values of the pixels and the edges are similarity measures between pixels; finally, propagate the pixels' disparity values through multiple iterations over the graph until a global optimum is reached, and convert the disparity information into depth values according to the extrinsic and intrinsic parameters of the camera.
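As a sketch of the conventional disparity-then-depth route, the code below uses OpenCV's semi-global matcher, one of the traditional methods listed above; the focal length and baseline would come from the camera parameters, and texture-poor pixels whose disparity fails are left invalid:

```python
import cv2
import numpy as np

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)

def depth_map(left_gray, right_gray, focal_px, baseline_m):
    """Two-dimensional depth map from one rectified image pair of a camera combination."""
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0  # SGBM returns fixed-point disparity
    depth = np.full_like(disp, np.nan)
    valid = disp > 0                      # depth fails where disparity is invalid; leave NaN
    depth[valid] = focal_px * baseline_m / disp[valid]
    return depth
```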
Step S213: projecting the two-dimensional foreground point into the camera coordinate system according to its two-dimensional coordinates and depth value. The camera coordinate system here is that of the camera, within the camera combination, which captured the two-dimensional depth map (and from whose images the two-dimensional foreground image was extracted).
The camera coordinate system is a coordinate system closely related to the observer. It is similar to the coordinate system of the two-dimensional image, except that the camera coordinate system is in three-dimensional space while the coordinate system of the two-dimensional image is in two-dimensional space. Of the three axes of the camera coordinate system, two are parallel to the two axes of the two-dimensional image, and the third is perpendicular to the two-dimensional image. The origin of the camera coordinate system is the center of the two-dimensional image.
Optionally, this step can project the two-dimensional foreground point into the camera coordinate system using the following formulas:

U_C = (u − c_u) · Z / f_u,  V_C = (v − c_v) · Z / f_v,  Z_C = Z

where (u, v) are the two-dimensional coordinates of the two-dimensional foreground point, corresponding to the two coordinate axes of the two-dimensional image; Z is the depth value of the two-dimensional foreground point; c_u and c_v are the center coordinates of the two-dimensional image; f_u and f_v are the focal lengths, in the directions of the two coordinate axes, of the camera that captured the frame; and U_C, V_C, Z_C are the coordinates of the projection of the two-dimensional foreground point in the camera coordinate system.
Step S214: further projecting the projection of the two-dimensional foreground point in the camera coordinate system into the world coordinate system, and determining the resulting projection in the world coordinate system as the three-dimensional foreground point.
Since the placement positions, heights and angles of the different cameras all differ, the world coordinate system is used to describe the absolute and relative positions, heights and angles of the cameras at the monitored site. The camera coordinate system and the world coordinate system can be converted into each other with a rotation transformation matrix and a translation transformation matrix.
Optionally, this step can project the point from the camera coordinate system into the world coordinate system using the following formula:

(U_W, V_W, Z_W)ᵀ = R · (U_C, V_C, Z_C)ᵀ + T

where (U_W, V_W, Z_W) are the three-dimensional coordinates of the three-dimensional foreground point, and R and T respectively denote the rotation transformation matrix and the translation transformation matrix, relative to the world coordinate system, of the coordinate system of the camera that captured the frame.
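A combined sketch of the projection steps S213 and S214 (R and T are assumed to come from the calibration of step S2145 below):

```python
import numpy as np

def backproject(u, v, z, cu, cv, fu, fv):
    """Pixel (u, v) with depth z -> point in the camera coordinate system."""
    return np.array([(u - cu) * z / fu, (v - cv) * z / fv, z])

def camera_to_world(p_cam, R, T):
    """Camera coordinate system -> world coordinate system: P_W = R @ P_C + T."""
    return R @ p_cam + T

def foreground_to_cloud(fg_pixels, depth, cu, cv, fu, fv, R, T):
    """All two-dimensional foreground points of one frame -> three-dimensional foreground point cloud."""
    cloud = []
    for (u, v) in fg_pixels:
        z = depth[v, u]
        if np.isfinite(z):  # skip points whose depth computation failed
            cloud.append(camera_to_world(backproject(u, v, z, cu, cv, fu, fv), R, T))
    return np.array(cloud)
```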
To complete the above process, the rotation transformation matrix and the translation transformation matrix of each camera's coordinate system relative to the world coordinate system must be determined.
Optionally, step S214 can determine the rotation transformation matrix and translation transformation matrix through the process of steps S2141 to S2145:
Step S2141: determining the points that show the ground in the two-dimensional image as ground points.
Specifically, this step determines as ground points those points whose image information in the two-dimensional image represents the ground.
Step S2142: determining the two-dimensional coordinates of all ground points in the two-dimensional image, and determining the depth values of all ground points in the two-dimensional depth map corresponding to that frame.
Suppose the two-dimensional coordinates of a ground point in the two-dimensional image are (u', v'). Since the two-dimensional image and its corresponding two-dimensional depth map use the same coordinate axes, the value z' of the point with coordinates (u', v') in the two-dimensional depth map is the depth value of that ground point.
In specific implementation, if the two-dimensional images captured by a camera combination lack clear texture at certain positions, the depth calculation may fail there, i.e. the depth values of those positions cannot be obtained. If a ground point determined in this step falls into this case, the ground point is discarded.
Step S2143: performing three-dimensional plane fitting using the two-dimensional coordinates and depth values of all ground points.
Step S2144: determining the parameters of the function corresponding to the fitted plane with the largest area as the deployment parameters of the camera combination that captured the frame.
The deployment parameters reflect the installation position, height, angle and similar aspects of the cameras.
Specifically, when this step fits three-dimensional planes with the two-dimensional coordinates and depth values of all ground points, calculation errors may yield multiple three-dimensional planes of different sizes, of which the plane with the largest area most likely corresponds to the ground in the real world. The real-world ground captured by a camera reflects the camera's installation position, height and angle, and the function parameters of a three-dimensional plane reflect the attributes of that plane; therefore the parameters of the function corresponding to the largest plane (corresponding to the real-world ground) reflect the installation position, height and angle of the camera, and the plane's function parameters can be determined as the deployment parameters of the camera combination.
Suppose the function corresponding to the largest fitted plane is Ax + By + Cz + D = 0, where x, y, z are variables and A, B, C, D are parameters; then the parameters A, B, C, D are the deployment parameters of the camera combination.
In specific implementation, this step can use the three-dimensional plane fitting method disclosed by the open-source PCL (Point Cloud Library, http://pointclouds.org/).
It should be noted that the present invention does not limit the three-dimensional plane fitting method used; the above is only a specific embodiment of the invention and is not intended to limit its scope of protection. Any other three-dimensional plane fitting method chosen within the spirit and principles of the present invention, such as least squares or gradient-descent-based methods, also falls within the protection scope of the invention.
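As one concrete possibility, a minimal RANSAC plane-fitting sketch in NumPy is given below, standing in for the PCL routine cited above; the iteration count and inlier tolerance are assumed:

```python
import numpy as np

def fit_ground_plane(points, iters=500, tol=0.05):
    """RANSAC fit of Ax + By + Cz + D = 0 to ground points; returns (A, B, C, D)."""
    rng = np.random.default_rng(0)
    best_params, best_inliers = None, 0
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        inliers = np.abs(points @ normal + d) < tol
        if inliers.sum() > best_inliers:
            best_inliers = inliers.sum()
            best_params = (*normal, d)       # deployment parameters A, B, C, D
    return best_params
```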
Step S2145: performing calibration calculation on the deployment parameters to obtain the rotation transformation matrix and translation transformation matrix, relative to the world coordinate system, of the camera combination that captured the frame.
Step S215: combining the three-dimensional foreground points corresponding to all two-dimensional foreground points of the two-dimensional foreground image to form the three-dimensional foreground image.
In fact, a three-dimensional foreground image is a point cloud, and this point cloud corresponds to the real-world object depicted by the two-dimensional foreground image. Step S215 is therefore actually the process of forming a point cloud from the set of point data.
Step S22: calculating how the positions corresponding to real-world points change across the multiple two-dimensional foreground images captured by the current camera combination in the current acquisition period; wherein the real-world points are the points in the real world corresponding to the two-dimensional foreground points of each two-dimensional foreground image.
Specifically, the change of the positions corresponding to real-world points across these frames of two-dimensional foreground images can be obtained by calculating the optical flow of the frames.
Optical flow is the representation, on the imaging plane of a visual sensor, of the motion velocity of points on object surfaces in space. In the present invention, optical flow refers to the representation of the motion velocity of real-world points on object surfaces on the two-dimensional images captured by the cameras; when the real-world surface points are restricted to those corresponding to the two-dimensional foreground points of the two-dimensional foreground images, the optical flow is exactly the representation of the motion velocity of these real-world points across the two-dimensional foreground images of the different acquisition moments.
Therefore, this step can obtain the change of the positions corresponding to the real-world points (the points in the real world corresponding to the two-dimensional foreground points of each two-dimensional foreground image) across the multiple two-dimensional foreground images by calculating the optical flow of those images.
Optionally, when calculating the optical flow of the multiple two-dimensional foreground images captured by the current camera combination in the current acquisition period, this step can first calculate the dense optical flow of the frames of two-dimensional images corresponding to these foreground images, and then filter the dense optical flow using the two-dimensional foreground images.
Specifically, this step can calculate the dense optical flow of the two-dimensional images using the Lucas-Kanade method; for calculation speed, a GPU (Graphics Processing Unit) can be used for acceleration. Filtering the dense optical flow with the two-dimensional foreground images removes the optical-flow information of the background parts of each two-dimensional image (the parts outside the two-dimensional foreground image); the filtering used here can be Gaussian filtering, mean filtering, median filtering or similar.
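A minimal sketch of this step, with OpenCV's Farneback dense flow standing in for the dense Lucas-Kanade variant named above, followed by the Gaussian filtering and background removal just described:

```python
import cv2

def foreground_flow(prev_gray, next_gray, fg_mask):
    """Dense optical flow between two frames, kept only on the two-dimensional foreground."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow = cv2.GaussianBlur(flow, (5, 5), 0)   # Gaussian filtering, as in the text
    flow[fg_mask == 0] = 0.0                   # drop the flow of background pixels
    return flow                                # HxWx2 per-pixel displacement (du, dv)
```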
Step S23: calculating, from the position changes of the real-world points across the multiple two-dimensional foreground images and from the correspondence between the two-dimensional foreground images and the three-dimensional foreground images, how the positions corresponding to the real-world points change across the multiple three-dimensional foreground images.
It should be noted that when performing the image processing of steps S21 to S23 on the frames of two-dimensional images successively captured by the current camera combination in the current acquisition period, in order to determine the positions corresponding to a real-world point in the two-dimensional and three-dimensional foreground images of each acquisition moment, these frames are preferably captured by the same camera within the current camera combination, so that as far as possible they cover the same scene; this helps find the same real-world point in the two-dimensional and three-dimensional foreground images of the acquisition moments. In fact, when each camera combination is formed of cameras that are geographically close, the scenes covered by the frames overlap to some extent even if the frames are captured by different cameras of the current combination, which likewise helps find the same real-world point in the two-dimensional and three-dimensional foreground images of the acquisition moments.
Step S3: for each acquisition moment of the current acquisition period, fusing the three-dimensional foreground points in the three-dimensional foreground images corresponding to all camera combinations of the monitored site at the same acquisition moment, following the rule that all three-dimensional foreground points corresponding to the same real-world point are fused into one three-dimensional fused foreground point, and combining all three-dimensional fused foreground points obtained after fusion to form the three-dimensional fused foreground map of that acquisition moment.
Since the scene captured by a single camera combination cannot completely cover the entire monitored site, in order to obtain the three-dimensional scene information of the whole site this step integrates the scenes captured by all camera combinations, specifically by fusing together the three-dimensional foreground images of all camera combinations at the same acquisition moment.
In specific implementation, step S3 can proceed according to the process of steps S31 to S34:
Step S31: successively taking each acquisition moment of the current acquisition period as the current acquisition moment, successively selecting each camera combination, and successively selecting each three-dimensional foreground point in the three-dimensional foreground image corresponding to the currently selected camera combination at the current acquisition moment as the current three-dimensional foreground point.
Step S32: judging whether, among the three-dimensional foreground images corresponding to the other camera combinations of the monitored site at the current acquisition moment, there exist three-dimensional foreground points corresponding to the same real-world point as the current three-dimensional foreground point.
Specifically, when judging whether two three-dimensional foreground points correspond to the same real-world point, this step checks whether the Euclidean distance between the two three-dimensional foreground points in the world coordinate system is less than a given threshold: if it is less than the threshold, the two points are judged to correspond to the same real-world point; otherwise they are judged not to correspond to the same real-world point.
In specific implementation, the color difference of the two three-dimensional foreground points can also be combined with their Euclidean distance in a weighted sum, and the result compared with a given threshold: if the result is less than the threshold, the two points are judged to correspond to the same real-world point, otherwise not.
Step S33: if no such point exists, take the current three-dimensional foreground point directly as a three-dimensional fusion foreground point; if such points exist, fuse all three-dimensional foreground points corresponding to the same real-world point into one three-dimensional fusion foreground point according to the following formula (the formula image is reconstructed here, from the definitions that follow, as a normalized inverse-distance weighted average):

    U = Σ(n=1..N) weight_n · UW_n,  V = Σ(n=1..N) weight_n · VW_n,  Z = Σ(n=1..N) weight_n · ZW_n,
    weight_n = (1/dist_n) / Σ(m=1..N) (1/dist_m)

where U, V, Z are the three-dimensional coordinates of the three-dimensional fusion foreground point, corresponding to the three axes of the world coordinate system; each three-dimensional foreground point corresponding to the same real-world point is called a point to be fused; N is the number of points to be fused corresponding to that real-world point; n is the serial number of a point to be fused; (UW_n, VW_n, ZW_n) is the three-dimensional coordinate of the point to be fused with serial number n; weight_n is the weight of that point; and dist_n is the distance from that point to the centre coordinate of the camera combination it comes from. The centre coordinate of a camera combination is the coordinate of the centre of symmetry of the projections, into the world coordinate system, of the installation positions of the combination's cameras.
When each camera combination consists of two cameras, the centre coordinate of the combination is the coordinate of the midpoint between the projections of the two cameras' installation positions in the world coordinate system.
For example, denote the two three-dimensional foreground points corresponding to the same real-world point as points to be fused A and B, where A comes from camera combination (a, b) and B comes from camera combination (b, c); the centre coordinate of (a, b) is (U1, V1, Z1) and that of (b, c) is (U2, V2, Z2); the three-dimensional coordinate of A is (UW1, VW1, ZW1) and that of B is (UW2, VW2, ZW2). The distances dist1 from A to the centre coordinate of (a, b) and dist2 from B to the centre coordinate of (b, c) are then

    dist1 = sqrt((UW1−U1)² + (VW1−V1)² + (ZW1−Z1)²),  dist2 = sqrt((UW2−U2)² + (VW2−V2)² + (ZW2−Z2)²),

and the weights weight1 of A and weight2 of B are

    weight1 = (1/dist1) / (1/dist1 + 1/dist2),  weight2 = (1/dist2) / (1/dist1 + 1/dist2).
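A minimal Python sketch of this fusion rule, under the inverse-distance weighting reconstructed above; the matching threshold and function names are illustrative assumptions, not taken from the patent:

    import numpy as np

    def same_world_point(p, q, tau=0.1):
        """Step S32 criterion: Euclidean distance below a given threshold tau (assumed)."""
        return np.linalg.norm(p - q) < tau

    def fuse(points, centres):
        """points:  (N, 3) world coordinates of the points to be fused
        centres: (N, 3) centre coordinate of each point's camera combination
        Returns the single fused three-dimensional fusion foreground point (U, V, Z)."""
        dist = np.linalg.norm(points - centres, axis=1)   # dist_n
        w = 1.0 / np.maximum(dist, 1e-9)                  # inverse distances
        w /= w.sum()                                      # normalized weight_n
        return (w[:, None] * points).sum(axis=0)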
Step S34: combine all three-dimensional fusion foreground points into the three-dimensional fusion foreground map of the current acquisition instant.
Step S4: based on the change of the position corresponding to a real-world point across the multiple three-dimensional foreground images, and on the correspondence between each three-dimensional fusion foreground point and the three-dimensional foreground points fused into it, compute the change of the position corresponding to that point across the three-dimensional fusion foreground maps of the acquisition instants of the current acquisition period.
Step S5: from the three-dimensional fusion foreground maps of the acquisition instants of the current acquisition period, and from the change of the position corresponding to a real-world point across those maps, determine the target object and the spatial distribution characteristics and motion distribution characteristics of the target object.
Specifically, step S5 may proceed as steps S51-a to S57-a:
Step S51-a: select in turn the three-dimensional fusion foreground map of each acquisition instant of the current acquisition period.
Step S52-a: divide the three-dimensional space containing the currently selected three-dimensional fusion foreground map into several three-dimensional subspaces, and collect one or more of the following statistics: the number of three-dimensional fusion foreground points contained in each three-dimensional subspace; the dominant color presented by the three-dimensional fusion foreground points contained in each three-dimensional subspace; the maximum height of the three-dimensional fusion foreground points contained in each three-dimensional subspace.
Specifically, this step may divide the three-dimensional space containing the currently selected three-dimensional fusion foreground map using cuboid bins of a preset size.
In a specific implementation, to ease later use of the statistics, the results may be represented as a histogram whose abscissa is the three-dimensional coordinate of each three-dimensional subspace and whose ordinate is the statistic.
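The per-subspace statistics can be sketched in Python as follows (an illustration only; the bin size is an assumed value, and the height is taken to be the Z coordinate):

    import numpy as np
    from collections import defaultdict

    def voxel_stats(points, bin_size=0.5):
        """points: (N, 3) three-dimensional fusion foreground points.
        Returns {bin index: [point count, maximum height]} per three-dimensional subspace."""
        idx = np.floor(points / bin_size).astype(int)     # cuboid bin index per point
        stats = defaultdict(lambda: [0, -np.inf])
        for key, h in zip(map(tuple, idx), points[:, 2]):
            stats[key][0] += 1                            # count of fused points in the bin
            stats[key][1] = max(stats[key][1], h)         # maximum height in the bin
        return stats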
Step S53-a: according to the statistics, aggregate all three-dimensional fusion foreground points contained in three-dimensional subspaces that satisfy the clustering condition into three-dimensional fusion foreground blocks.
Here the clustering condition is that the spatial distance is below a first preset threshold and the difference between the statistics is below a second preset threshold; that is, three-dimensional subspaces satisfying the clustering condition must be within the first preset threshold of each other in space and within the second preset threshold of each other in their statistics.
Specifically, when aggregating the three-dimensional fusion foreground points of qualifying three-dimensional subspaces into three-dimensional fusion foreground blocks, a clustering operation such as connected-domain analysis or the mean-shift algorithm may be used.
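For instance, connected-domain analysis over the occupied bins can be sketched with SciPy as follows; the occupancy threshold is an assumed simplification standing in for the statistic-similarity test of the clustering condition:

    import numpy as np
    from scipy import ndimage

    def cluster_bins(occupancy, min_count=3):
        """occupancy: 3-D integer array of per-bin point counts.
        Bins whose count clears min_count are kept, then adjacent surviving bins
        are merged into labelled three-dimensional fusion foreground blocks."""
        mask = occupancy >= min_count
        labels, n_blocks = ndimage.label(mask)   # 6-connectivity by default in 3-D
        return labels, n_blocks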
Step S54-a: perform template matching on the resulting three-dimensional fusion foreground blocks in the world coordinate system, and judge whether the three-dimensional fusion foreground blocks are the target object.
Specifically, templates may first be designed according to the target objects, and the three-dimensional fusion foreground blocks then matched against these templates; when a match succeeds, the block is judged to be the target object.
In a specific implementation, the target objects may be chosen according to the characteristics of the monitoring site; for an oil/gas well monitoring site, for example, people, vehicles, and crowds may be taken as target objects.
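A hedged sketch of such template matching, assuming templates are plain world-coordinate bounding-box extents for the targets mentioned above; the sizes below are illustrative, not taken from the patent:

    import numpy as np

    TEMPLATES = {                 # (min extent, max extent) per axis, in metres (assumed)
        "person":  ((0.3, 0.3, 1.2), (1.0, 1.0, 2.1)),
        "vehicle": ((1.4, 3.0, 1.2), (2.6, 7.0, 3.5)),
    }

    def match_block(block_points):
        """block_points: (N, 3) points of one three-dimensional fusion foreground block.
        Returns the first template whose extent brackets the block, if any."""
        extent = block_points.max(axis=0) - block_points.min(axis=0)
        for name, (lo, hi) in TEMPLATES.items():
            if np.all(extent >= lo) and np.all(extent <= hi):
                return name
        return None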
Step S55-a: when a three-dimensional fusion foreground block is judged to be the target object, use the change of the position corresponding to a real-world point across the three-dimensional fusion foreground maps of the acquisition instants of the current acquisition period to determine, across the three-dimensional fusion foreground maps of different acquisition instants within the period, the change in three-dimensional coordinates of the three-dimensional fusion foreground points that aggregate into the target object and of the other three-dimensional fusion foreground points corresponding to the same real-world points; from this, determine the position and displacement of the target object at each acquisition instant of the current acquisition period.
Step S56-a: take the positions of the target object at the acquisition instants of the current acquisition period as the spatial distribution characteristics of the target object.
Step S57-a: take the displacements of the target object at the acquisition instants of the current acquisition period as the motion distribution characteristics of the target object.
Specifically, the motion distribution characteristics of the target object include, but are not limited to, information such as the target object's motion amplitude, direction, and acceleration.
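These quantities can be read off finite differences of the per-instant positions, as in the following sketch (the frame interval dt is an assumed parameter):

    import numpy as np

    def motion_features(positions, dt=0.04):
        """positions: (T, 3) target position at each acquisition instant."""
        v = np.diff(positions, axis=0) / dt        # velocity between consecutive instants
        a = np.diff(v, axis=0) / dt                # acceleration
        amplitude = np.linalg.norm(v, axis=1)      # motion amplitude
        direction = np.arctan2(v[:, 1], v[:, 0])   # heading in the horizontal plane
        return amplitude, direction, a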
The above flow performs template matching on the three-dimensional fusion foreground blocks in the world coordinate system. Because this matching runs in three-dimensional space, it can involve a large amount of computation and take a long time. To reduce computation and increase speed, this step may optionally project the three-dimensional fusion foreground map onto a two-dimensional plane to obtain a fused foreground projection map and perform the template matching in two-dimensional space instead. Specifically, step S5 may also proceed as steps S51-b to S58-b:
Step S51-b: select in turn the three-dimensional fusion foreground map of each acquisition instant of the current acquisition period.
Step S52-b: project each three-dimensional fusion foreground point of the currently selected three-dimensional fusion foreground map onto a two-dimensional plane to obtain fused foreground projection points; combine the fused foreground projection points corresponding to all three-dimensional fusion foreground points of the map to obtain the fused foreground projection map.
The projection in this step may be onto a horizontal plane or onto a vertical plane.
Step S53-b: divide the two-dimensional space containing the fused foreground projection map into several two-dimensional subspaces, and collect one or more of the following statistics: the number of fused foreground projection points contained in each two-dimensional subspace; the dominant color presented by the fused foreground projection points contained in each two-dimensional subspace; the maximum height of the fused foreground projection points contained in each two-dimensional subspace.
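A sketch of the projection and the two-dimensional statistics, assuming projection onto the horizontal plane with Z as height and square bins of an assumed size:

    import numpy as np

    def project_and_bin(points, bin_size=0.5):
        """points: (N, 3) three-dimensional fusion foreground points, Z as height.
        Returns {cell index: (count, maximum height)} over the fused foreground
        projection map."""
        uv = points[:, :2]                          # fused foreground projection points
        idx = np.floor(uv / bin_size).astype(int)
        stats = {}
        for key, h in zip(map(tuple, idx), points[:, 2]):
            c, m = stats.get(key, (0, -np.inf))
            stats[key] = (c + 1, max(m, h))
        return stats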
Step S54-b: according to the statistics, aggregate all fused foreground projection points contained in two-dimensional subspaces that satisfy the clustering condition into two-dimensional fusion foreground blocks; the clustering condition is that the spatial distance is below the first preset threshold and the difference between the statistics is below the second preset threshold.
Step S55-b: perform template matching on the resulting two-dimensional fusion foreground blocks in the two-dimensional space, and judge whether the two-dimensional fusion foreground blocks are the target object.
Step S56-b: when a two-dimensional fusion foreground block is judged to be the target object, use the change of the position corresponding to a real-world point across the three-dimensional fusion foreground maps of the acquisition instants of the current acquisition period to determine, across the fused foreground projection maps corresponding to the three-dimensional fusion foreground maps of different acquisition instants, the change in two-dimensional coordinates of the fused foreground projection points that aggregate into the target object and of the other fused foreground projection points corresponding to the same real-world points; from this, determine the position and displacement of the target object at each acquisition instant of the current acquisition period.
Step S57-b: take the positions of the target object at the acquisition instants of the current acquisition period as the spatial distribution characteristics of the target object.
Step S58-b: take the displacements of the target object at the acquisition instants of the current acquisition period as the motion distribution characteristics of the target object.
Step S6: judge, from the spatial distribution characteristics and motion distribution characteristics of the target object, whether a target scenario has occurred.
In a specific implementation, the target scenarios may be chosen according to the characteristics of the monitoring site; for an oil/gas well monitoring site, scenarios such as intrusion, removal of objects, abandonment of objects, running, and fighting may be taken as target scenarios.
Specifically, this step may first perform pattern classification on the spatial distribution characteristics and motion distribution characteristics of the target object, and then judge from the classification result whether a target scenario has occurred.
The pattern classification adopted in this step may be performed with a multi-class classifier, such as a random forest, which maps the features directly to multiple classes; it may also be performed with multiple single (binary) classifiers, such as multiple SVM (Support Vector Machine) classifiers. For instance, one SVM classifier performs fighting detection by treating fighting as the positive class and all other situations as the negative class, thereby completing the training of a fighting classifier, and so on for the other scenarios.
For example, in a scheme for judging multiple anomaly types based on a random forest, the spatial distribution characteristics and motion distribution characteristics of the target object serve as input; the models of multiple trees are determined by learning from samples, prediction and weighted voting are performed over these tree models, and the anomaly type corresponding to the currently detected spatial distribution characteristics or motion distribution characteristics is obtained.
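As an illustration of the one-classifier-per-scenario scheme, the following scikit-learn sketch trains one binary SVM per target scenario; the event names and feature layout are placeholders, not taken from the patent:

    import numpy as np
    from sklearn.svm import SVC

    EVENTS = ["intrusion", "fighting", "running"]   # example target scenarios

    def train_event_classifiers(X, labels):
        """X: (M, D) feature vectors concatenating spatial and motion distribution
        characteristics; labels: length-M event-name strings. One SVM per event,
        trained positive-vs-rest as described above."""
        y = np.asarray(labels)
        return {e: SVC(kernel="rbf").fit(X, (y == e).astype(int)) for e in EVENTS}

    def detect(classifiers, x):
        """Returns the list of target scenarios whose classifier fires on feature x."""
        return [e for e, clf in classifiers.items() if clf.predict(x[None, :])[0]]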
Step S7: output the judgment result, and raise an alarm when a target scenario has occurred.
Specifically, so that monitoring personnel learn of the situation at the monitoring site in time, the present invention may output the judgment result directly, for example by display or by sound, or may upload the judgment result over a network to a network platform, where it can be browsed by computers or mobile terminals (such as mobile phones or portable notebook computers) connected to that platform.
It should be noted that although the operations of the multiple-camera cooperative monitoring method of the present invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be executed in that particular order, or that all of the depicted operations must be executed, to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be merged into a single step for execution, and/or a single step may be decomposed into multiple steps for execution.
Exemplary Device
Having described the multiple-camera cooperative monitoring method provided by the present invention, the multiple-camera cooperative monitoring device provided by the present invention is introduced next with reference to Fig. 5 and Fig. 6.
Fig. 5 is a schematic diagram of the input and output of the multiple-camera cooperative monitoring device. The input is the two-dimensional images acquired by the cameras at the monitoring site; the output is the detected target scenarios (abnormal situations such as intrusion, fighting, running, or theft) and alarm information.
In a specific implementation of the present invention, all cameras at the monitoring site may be monitored and controlled through an RTU (Remote Terminal Unit), with all of these cameras acquiring images on a time base synchronized via NTP (Network Time Protocol).
Fig. 6 is a structural block diagram of the multiple-camera cooperative monitoring device, which comprises:
a camera division module 601, configured to combine the multiple cameras deployed at the monitoring site into multiple camera combinations, with at least two cameras forming one camera combination;
an image acquisition module 602, configured to obtain the two-dimensional images acquired by each camera combination in each acquisition period;
an image processing module 603, configured to perform image processing on the multiple frames of two-dimensional images successively acquired by the current camera combination within the current acquisition period;
the image processing module 603 further comprises:
a first image processing module 604, configured to extract the foreground from the multiple frames of two-dimensional images to obtain multiple two-dimensional foreground images;
a second image processing module 605, configured to project the multiple two-dimensional foreground images into three-dimensional space to obtain multiple three-dimensional foreground images; wherein, at each acquisition instant of the current acquisition period, the multiple frames of two-dimensional images, the multiple two-dimensional foreground images, and the multiple three-dimensional foreground images are in one-to-one correspondence;
a third image processing module 606, configured to compute the change of the position corresponding to a real-world point across the multiple two-dimensional foreground images; wherein the real-world point is the point in the real world corresponding to a two-dimensional foreground point in the two-dimensional foreground images, and a two-dimensional foreground point is a pixel of a two-dimensional foreground image;
a fourth image processing module 607, configured to compute, from the change of the position corresponding to the real-world point across the multiple two-dimensional foreground images and from the correspondence between the two-dimensional foreground images and the three-dimensional foreground images, the change of the position corresponding to the real-world point across the multiple three-dimensional foreground images;
a fusion processing module 608, configured to, for each acquisition instant of the current acquisition period, fuse the three-dimensional foreground points in the three-dimensional foreground images of all camera combinations of the monitoring site at the same acquisition instant, following the rule that all three-dimensional foreground points corresponding to the same real-world point are fused into one three-dimensional fusion foreground point, and to combine all three-dimensional fusion foreground points obtained after fusion into the three-dimensional fusion foreground map of that acquisition instant; wherein a three-dimensional foreground point is a voxel of a three-dimensional foreground image;
a displacement computing module 609, configured to compute, from the change of the position corresponding to the real-world point across the multiple three-dimensional foreground images and from the correspondence between each three-dimensional fusion foreground point and the three-dimensional foreground points fused into it, the change of the position corresponding to the real-world point across the three-dimensional fusion foreground maps of the acquisition instants of the current acquisition period;
a target search module 610, configured to determine the target object, and the spatial distribution characteristics and motion distribution characteristics of the target object, from the three-dimensional fusion foreground maps of the acquisition instants of the current acquisition period and the change of the position corresponding to the real-world point across those maps;
a judgment module 611, configured to judge from the spatial distribution characteristics and motion distribution characteristics of the target object whether a target scenario has occurred;
an output module 612, configured to output the judgment result, and to raise an alarm when a target scenario has occurred.
The second image processing module 605 further comprises a two-dimensional coordinate computing submodule, a depth value computing submodule, a first projection submodule, a second projection submodule, and a three-dimensional foreground point cloud processing submodule:
the two-dimensional coordinate computing submodule, configured to determine, in the two-dimensional image corresponding to the two-dimensional foreground image, the two-dimensional coordinate of each two-dimensional foreground point;
the depth value computing submodule, configured to determine, in the two-dimensional depth map corresponding to the two-dimensional foreground image, the depth value of each two-dimensional foreground point; wherein the two-dimensional depth map is formed from the per-pixel depth values of the corresponding two-dimensional image, computed from the two-dimensional images acquired simultaneously by the cameras of the current camera combination;
the first projection submodule, configured to project each two-dimensional foreground point into the camera coordinate system according to its two-dimensional coordinate and depth value;
the second projection submodule, configured to further project the projection of each two-dimensional foreground point in the camera coordinate system into the world coordinate system, and to take the projection obtained in the world coordinate system as the three-dimensional foreground point;
the three-dimensional foreground point cloud processing submodule, configured to form the three-dimensional foreground image from the three-dimensional foreground points corresponding to all two-dimensional foreground points of the two-dimensional foreground image.
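The two projection submodules amount to a standard pinhole back-projection; a sketch follows, assuming intrinsics K and a camera-to-world pose (R, t), none of which are named in the patent:

    import numpy as np

    def backproject(u, v, depth, K, R, t):
        """(u, v): two-dimensional foreground point; depth: its depth value.
        First lift the point into the camera coordinate system, then into the
        world coordinate system to obtain the three-dimensional foreground point."""
        pc = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))   # camera coordinates
        return R @ pc + t                                         # world coordinates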
The third image processing module 606 obtains the change of the position corresponding to the real-world point across the multiple two-dimensional foreground images by computing optical flow over the multiple two-dimensional foreground images.
The fusion processing module 608 further comprises a first polling submodule, a search submodule, a first fusion submodule, a second fusion submodule, and a three-dimensional fusion foreground point cloud processing submodule:
the first polling submodule, configured to take each acquisition instant of the current acquisition period in turn as the current acquisition instant, select each camera combination in turn, and select in turn each three-dimensional foreground point in the three-dimensional foreground image of the currently selected camera combination at the current acquisition instant as the current three-dimensional foreground point;
the search submodule, configured to judge whether the three-dimensional foreground images of the other camera combinations of the monitoring site at the current acquisition instant contain a three-dimensional foreground point corresponding to the same real-world point as the current three-dimensional foreground point, to trigger the first fusion submodule when none exists, and to trigger the second fusion submodule when one exists;
the first fusion submodule, configured to take the current three-dimensional foreground point as a three-dimensional fusion foreground point;
the second fusion submodule, configured to fuse all three-dimensional foreground points corresponding to the same real-world point into one three-dimensional fusion foreground point according to the following formula (reconstructed as in step S33): U = Σ(n=1..N) weight_n · UW_n, V = Σ(n=1..N) weight_n · VW_n, Z = Σ(n=1..N) weight_n · ZW_n, with weight_n = (1/dist_n) / Σ(m=1..N) (1/dist_m); wherein U, V, Z are the three-dimensional coordinates of the three-dimensional fusion foreground point, corresponding to the three axes of the world coordinate system; each three-dimensional foreground point corresponding to the same real-world point is a point to be fused; N is the number of points to be fused corresponding to that real-world point; n is the serial number of a point to be fused; (UW_n, VW_n, ZW_n) is the three-dimensional coordinate of the point to be fused with serial number n; weight_n is the weight of that point; dist_n is the distance from that point to the centre coordinate of the camera combination it comes from; and the centre coordinate of a camera combination is the coordinate of the centre of symmetry of the projections, into the world coordinate system, of the installation positions of the combination's cameras;
the three-dimensional fusion foreground point cloud processing submodule, configured to combine all three-dimensional fusion foreground points into the three-dimensional fusion foreground map of the current acquisition instant.
In one embodiment, the target search module 610 further comprises a second polling submodule, a three-dimensional subspace division submodule, a three-dimensional clustering submodule, a three-dimensional template matching submodule, a three-dimensional position and displacement computing submodule, a three-dimensional spatial feature computing submodule, and a three-dimensional motion feature computing submodule:
the second polling submodule, configured to select in turn the three-dimensional fusion foreground map of each acquisition instant of the current acquisition period;
the three-dimensional subspace division submodule, configured to divide the three-dimensional space containing the currently selected three-dimensional fusion foreground map into several three-dimensional subspaces and to collect one or more of the following statistics: the number of three-dimensional fusion foreground points contained in each three-dimensional subspace; the dominant color presented by the three-dimensional fusion foreground points contained in each three-dimensional subspace; the maximum height of the three-dimensional fusion foreground points contained in each three-dimensional subspace;
the three-dimensional clustering submodule, configured to aggregate, according to the statistics, all three-dimensional fusion foreground points contained in three-dimensional subspaces satisfying the clustering condition into three-dimensional fusion foreground blocks; wherein the clustering condition is a spatial distance below a first preset threshold and a difference in statistics below a second preset threshold;
the three-dimensional template matching submodule, configured to perform template matching on the resulting three-dimensional fusion foreground blocks in the world coordinate system and to judge whether the three-dimensional fusion foreground blocks are the target object;
the three-dimensional position and displacement computing submodule, configured to, when a three-dimensional fusion foreground block is judged to be the target object, determine, from the change of the position corresponding to the real-world point across the three-dimensional fusion foreground maps of the acquisition instants of the current acquisition period, the change in three-dimensional coordinates, across the maps of different acquisition instants, of the three-dimensional fusion foreground points aggregating into the target object and of the other three-dimensional fusion foreground points corresponding to the same real-world points, and to determine from this the position and displacement of the target object at each acquisition instant of the current acquisition period;
the three-dimensional spatial feature computing submodule, configured to take the positions of the target object at the acquisition instants of the current acquisition period as the spatial distribution characteristics of the target object;
the three-dimensional motion feature computing submodule, configured to take the displacements of the target object at the acquisition instants of the current acquisition period as the motion distribution characteristics of the target object.
In another embodiment, the target search module 610 further comprises a third polling submodule, a two-dimensional projection submodule, a two-dimensional subspace division submodule, a two-dimensional clustering submodule, a two-dimensional template matching submodule, a two-dimensional position and displacement computing submodule, a two-dimensional spatial feature computing submodule, and a two-dimensional motion feature computing submodule:
the third polling submodule, configured to select in turn the three-dimensional fusion foreground map of each acquisition instant of the current acquisition period;
the two-dimensional projection submodule, configured to project each three-dimensional fusion foreground point of the currently selected three-dimensional fusion foreground map onto a two-dimensional plane to obtain fused foreground projection points, and to combine the fused foreground projection points corresponding to all three-dimensional fusion foreground points of the map into the fused foreground projection map;
the two-dimensional subspace division submodule, configured to divide the two-dimensional space containing the fused foreground projection map into several two-dimensional subspaces and to collect one or more of the following statistics: the number of fused foreground projection points contained in each two-dimensional subspace; the dominant color presented by the fused foreground projection points contained in each two-dimensional subspace; the maximum height of the fused foreground projection points contained in each two-dimensional subspace;
the two-dimensional clustering submodule, configured to aggregate, according to the statistics, all fused foreground projection points contained in two-dimensional subspaces satisfying the clustering condition into two-dimensional fusion foreground blocks; wherein the clustering condition is a spatial distance below the first preset threshold and a difference in statistics below the second preset threshold;
the two-dimensional template matching submodule, configured to perform template matching on the resulting two-dimensional fusion foreground blocks in the two-dimensional space and to judge whether the two-dimensional fusion foreground blocks are the target object;
the two-dimensional position and displacement computing submodule, configured to, when a two-dimensional fusion foreground block is judged to be the target object, determine, from the change of the position corresponding to the real-world point across the three-dimensional fusion foreground maps of the acquisition instants of the current acquisition period, the change in two-dimensional coordinates, across the fused foreground projection maps corresponding to the three-dimensional fusion foreground maps of different acquisition instants, of the fused foreground projection points aggregating into the target object and of the other fused foreground projection points corresponding to the same real-world points, and to determine from this the position and displacement of the target object at each acquisition instant of the current acquisition period;
the two-dimensional spatial feature computing submodule, configured to take the positions of the target object at the acquisition instants of the current acquisition period as the spatial distribution characteristics of the target object;
the two-dimensional motion feature computing submodule, configured to take the displacements of the target object at the acquisition instants of the current acquisition period as the motion distribution characteristics of the target object.
The judgment module 611 further comprises a pattern classification submodule and a pattern classification judging submodule:
the pattern classification submodule, configured to perform pattern classification on the spatial distribution characteristics and motion distribution characteristics of the target object;
the pattern classification judging submodule, configured to judge from the result of the pattern classification whether a target scenario has occurred.
The camera division module 601 forms a camera combination from any two cameras whose installation positions at the monitoring site are less than a preset distance apart.
When the image processing module 603 performs image processing on the multiple frames of two-dimensional images successively acquired by the current camera combination within the current acquisition period, those frames are acquired by the same camera within the current camera combination.
In the multiple-camera cooperative monitoring device provided by the present invention, the two-dimensional images acquired by the cameras may be grayscale images or color images.
The multiple-camera cooperative monitoring device and the multiple-camera cooperative monitoring method provided by the present invention are realized on the same inventive concept; for specific embodiments of the device, reference may be made to the foregoing introduction of the multiple-camera cooperative monitoring method, which is not repeated here.
It should be noted that although several units or subunits of the multiple-camera cooperative monitoring device are mentioned in the detailed description above, this division is merely exemplary and not mandatory. In fact, according to embodiments of the present invention, the features and functions of two or more of the units described above may be embodied in a single unit; conversely, the features and functions of one unit described above may be further divided and embodied in multiple units.
Although the spirit and principles of the present invention have been described with reference to several specific embodiments, it should be understood that the invention is not limited to the specific embodiments disclosed, and the division into aspects does not mean that features in those aspects cannot be combined to advantage; that division is merely for convenience of exposition. The present invention is intended to cover the various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
The specific embodiments described above further elaborate the objectives, technical solutions, and beneficial effects of the present invention. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit its protection scope; any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Those skilled in the art will further appreciate that the various illustrative logical blocks, units, and steps listed in the embodiments of the present invention may be implemented by electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the various illustrative components, units, and steps above have been described generally in terms of their function. Whether such functions are implemented by hardware or software depends on the specific application and the design requirements of the overall system. Those skilled in the art may implement the described functions in various ways for each specific application, but such implementations should not be understood as exceeding the protection scope of the embodiments of the present invention.
The various illustrative logical blocks, units, and devices described in the embodiments of the present invention may implement or perform the described functions with a general-purpose processor, a digital signal processor, an application-specific integrated circuit (ASIC), a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of the above. The general-purpose processor may be a microprocessor or, alternatively, any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, for example a digital signal processor and a microprocessor, multiple microprocessors, one or more microprocessors together with a digital signal processor core, or any other similar configuration.
The steps of the methods or algorithms described in the embodiments of the present invention may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. Illustratively, the storage medium may be coupled to the processor so that the processor can read information from, and write information to, the storage medium; optionally, the storage medium may be integrated into the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a user terminal; optionally, the processor and the storage medium may also reside in different components within a user terminal.
In one or more exemplary designs, the functions described in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination of the three. When implemented in software, the functions may be stored on, or transmitted as one or more instructions or code on, a computer-readable medium. Computer-readable media include computer storage media and communication media that facilitate the transfer of a computer program from one place to another; a storage medium may be any available medium that a general-purpose or special-purpose computer can access. For example, such computer-readable media may include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store program code in the form of instructions or data structures readable by a general-purpose or special-purpose computer or processor. In addition, any connection may properly be termed a computer-readable medium: for example, if software is transmitted from a website, server, or other remote source through a coaxial cable, fiber-optic cable, twisted pair, or digital subscriber line (DSL), or wirelessly, for example by infrared, radio, or microwave, these are also included in the definition of computer-readable media. Disks (disk) and discs (disc) include compact discs, laser discs, optical discs, DVDs, floppy disks, and Blu-ray discs; disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above may also be included in computer-readable media.

Claims (18)

1. A multiple-camera cooperative monitoring method, characterized by comprising:
step A: combining the multiple cameras deployed at a monitoring site into multiple camera combinations, with at least two cameras forming one camera combination;
step B: obtaining the two-dimensional images acquired by each camera combination in each acquisition period, and performing the processing of steps B1 to B3 on the multiple frames of two-dimensional images successively acquired by the current camera combination within the current acquisition period;
step B1: extracting the foreground from the multiple frames of two-dimensional images to obtain multiple two-dimensional foreground images; projecting the multiple two-dimensional foreground images into three-dimensional space to obtain multiple three-dimensional foreground images; wherein, at each acquisition instant of the current acquisition period, the multiple frames of two-dimensional images, the multiple two-dimensional foreground images, and the multiple three-dimensional foreground images are in one-to-one correspondence;
step B2: computing the change of the position corresponding to a real-world point across the multiple two-dimensional foreground images;
wherein the real-world point is the point in the real world corresponding to a two-dimensional foreground point in the two-dimensional foreground images, and a two-dimensional foreground point is a pixel of a two-dimensional foreground image;
step B3: computing, from the change of the position corresponding to the real-world point across the multiple two-dimensional foreground images,
and from the correspondence between the two-dimensional foreground images and the three-dimensional foreground images, the change of the position corresponding to the real-world point across the multiple three-dimensional foreground images;
step C: for each acquisition instant of the current acquisition period, fusing the three-dimensional foreground points in the three-dimensional foreground images of all camera combinations of the monitoring site at the same acquisition instant, following the rule that all three-dimensional foreground points corresponding to the same real-world point are fused into one three-dimensional fusion foreground point, and combining all three-dimensional fusion foreground points obtained after fusion into the three-dimensional fusion foreground map of that acquisition instant; wherein a three-dimensional foreground point is a voxel of a three-dimensional foreground image;
step D: computing, from the change of the position corresponding to the real-world point across the multiple three-dimensional foreground images, and from the correspondence between each three-dimensional fusion foreground point and the three-dimensional foreground points fused into it, the change of the position corresponding to the real-world point across the three-dimensional fusion foreground maps of the acquisition instants of the current acquisition period;
step E: determining a target object, and the spatial distribution characteristics and motion distribution characteristics of the target object, from the three-dimensional fusion foreground maps of the acquisition instants of the current acquisition period and the change of the position corresponding to the real-world point across those maps;
step F: judging, from the spatial distribution characteristics and motion distribution characteristics of the target object, whether a target scenario has occurred;
step G: outputting the judgment result, and raising an alarm when a target scenario has occurred;
wherein step C comprises:
step C1: taking each acquisition instant of the current acquisition period in turn as the current acquisition instant, selecting each camera combination in turn, and selecting in turn each three-dimensional foreground point in the three-dimensional foreground image of the currently selected camera combination at the current acquisition instant as the current three-dimensional foreground point;
step C2: judging whether the three-dimensional foreground images of the other camera combinations of the monitoring site at the current acquisition instant contain a three-dimensional foreground point corresponding to the same real-world point as the current three-dimensional foreground point;
step C3: if no such point exists, taking the current three-dimensional foreground point as a three-dimensional fusion foreground point; if such points exist, fusing all three-dimensional foreground points corresponding to the same real-world point into one three-dimensional fusion foreground point according to the following formula (reconstructed per step S33 of the description): U = Σ(n=1..N) weight_n · UW_n, V = Σ(n=1..N) weight_n · VW_n, Z = Σ(n=1..N) weight_n · ZW_n, with weight_n = (1/dist_n) / Σ(m=1..N) (1/dist_m);
wherein U, V, Z are the three-dimensional coordinates of the three-dimensional fusion foreground point, corresponding to the three axes of the world coordinate system; each three-dimensional foreground point corresponding to the same real-world point is a point to be fused; N is the number of points to be fused corresponding to that real-world point; n is the serial number of a point to be fused; (UW_n, VW_n, ZW_n) is the three-dimensional coordinate of the point to be fused with serial number n; weight_n is the weight of that point; dist_n is the distance from that point to the centre coordinate of the camera combination it comes from; and the centre coordinate of a camera combination is the coordinate of the centre of symmetry of the projections, into the world coordinate system, of the installation positions of the combination's cameras;
step C4: combining all three-dimensional fusion foreground points into the three-dimensional fusion foreground map of the current acquisition instant.
2. The multiple-camera cooperative monitoring method according to claim 1, characterized in that in step B1 the projecting of the multiple two-dimensional foreground images into three-dimensional space to obtain multiple three-dimensional foreground images comprises:
determining, in the two-dimensional image corresponding to the two-dimensional foreground image, the two-dimensional coordinate of each two-dimensional foreground point;
determining, in the two-dimensional depth map corresponding to the two-dimensional foreground image, the depth value of each two-dimensional foreground point; wherein the two-dimensional depth map is formed from the per-pixel depth values of the corresponding two-dimensional image, computed from the two-dimensional images acquired simultaneously by the cameras of the current camera combination;
projecting each two-dimensional foreground point into the camera coordinate system according to its two-dimensional coordinate and depth value;
further projecting the projection of each two-dimensional foreground point in the camera coordinate system into the world coordinate system, and taking the projection obtained in the world coordinate system as the three-dimensional foreground point;
forming the three-dimensional foreground image from the three-dimensional foreground points corresponding to all two-dimensional foreground points of the two-dimensional foreground image.
3. The multiple-camera cooperative monitoring method according to claim 2, characterized in that step B2 obtains the change of the position corresponding to the real-world point across the multiple two-dimensional foreground images by computing optical flow over the multiple two-dimensional foreground images.
4. The multiple-camera cooperative monitoring method according to claim 1, characterized in that step E comprises:
step E1: selecting in turn the three-dimensional fusion foreground map of each acquisition instant of the current acquisition period;
step E2: dividing the three-dimensional space containing the currently selected three-dimensional fusion foreground map into several three-dimensional subspaces, and collecting one or more of the following statistics: the number of three-dimensional fusion foreground points contained in each three-dimensional subspace; the dominant color presented by the three-dimensional fusion foreground points contained in each three-dimensional subspace; the maximum height of the three-dimensional fusion foreground points contained in each three-dimensional subspace;
step E3: aggregating, according to the statistics, all three-dimensional fusion foreground points contained in three-dimensional subspaces satisfying the clustering condition into three-dimensional fusion foreground blocks; wherein the clustering condition is a spatial distance below a first preset threshold and a difference in statistics below a second preset threshold;
step E4: performing template matching on the resulting three-dimensional fusion foreground blocks in the world coordinate system, and judging whether the three-dimensional fusion foreground blocks are the target object;
step E5: when a three-dimensional fusion foreground block is judged to be the target object, determining, from the change of the position corresponding to the real-world point across the three-dimensional fusion foreground maps of the acquisition instants of the current acquisition period, the change in three-dimensional coordinates, across the maps of different acquisition instants, of the three-dimensional fusion foreground points aggregating into the target object and of the other three-dimensional fusion foreground points corresponding to the same real-world points, and determining from this the position and displacement of the target object at each acquisition instant of the current acquisition period;
step E6: taking the positions of the target object at the acquisition instants of the current acquisition period as the spatial distribution characteristics of the target object;
step E7: taking the displacements of the target object at the acquisition instants of the current acquisition period as the motion distribution characteristics of the target object.
5. The multiple-camera cooperative monitoring method according to claim 1, characterized in that step E comprises:
step E1: selecting in turn the three-dimensional fusion foreground map of each acquisition instant of the current acquisition period;
step E2: projecting each three-dimensional fusion foreground point of the currently selected three-dimensional fusion foreground map onto a two-dimensional plane to obtain fused foreground projection points; combining the fused foreground projection points corresponding to all three-dimensional fusion foreground points of the map to obtain a fused foreground projection map;
step E3: dividing the two-dimensional space containing the fused foreground projection map into several two-dimensional subspaces, and collecting one or more of the following statistics: the number of fused foreground projection points contained in each two-dimensional subspace; the dominant color presented by the fused foreground projection points contained in each two-dimensional subspace; the maximum height of the fused foreground projection points contained in each two-dimensional subspace;
step E4: aggregating, according to the statistics, all fused foreground projection points contained in two-dimensional subspaces satisfying the clustering condition into two-dimensional fusion foreground blocks; wherein the clustering condition is a spatial distance below a first preset threshold and a difference in statistics below a second preset threshold;
step E5: performing template matching on the resulting two-dimensional fusion foreground blocks in the two-dimensional space, and judging whether the two-dimensional fusion foreground blocks are the target object;
step E6: when a two-dimensional fusion foreground block is judged to be the target object, determining, from the change of the position corresponding to the real-world point across the three-dimensional fusion foreground maps of the acquisition instants of the current acquisition period, the change in two-dimensional coordinates, across the fused foreground projection maps corresponding to the three-dimensional fusion foreground maps of different acquisition instants, of the fused foreground projection points aggregating into the target object and of the other fused foreground projection points corresponding to the same real-world points, and determining from this the position and displacement of the target object at each acquisition instant of the current acquisition period;
step E7: taking the positions of the target object at the acquisition instants of the current acquisition period as the spatial distribution characteristics of the target object;
step E8: taking the displacements of the target object at the acquisition instants of the current acquisition period as the motion distribution characteristics of the target object.
6. The multiple-camera cooperative monitoring method according to claim 4 or 5, characterized in that step F comprises:
performing pattern classification on the spatial distribution characteristics and motion distribution characteristics of the target object;
judging from the result of the pattern classification whether a target scenario has occurred.
7. The multiple-camera cooperative monitoring method according to claim 1, characterized in that step A forms a camera combination from any two cameras whose installation positions at the monitoring site are less than a preset distance apart.
8. The multiple-camera cooperative monitoring method according to claim 1, characterized in that in step B the multiple frames of two-dimensional images are acquired by the same camera within the current camera combination.
9. The multiple-camera cooperative monitoring method according to claim 1, characterized in that the two-dimensional images are grayscale images or color images.
10. a kind of multiple-camera cooperative monitoring device characterized by comprising
Video camera division module, for prison will to be deployed in such a way that at least two video cameras form a camera chain The multiple cameras at control scene is combined into multiple camera chains;
Striograph obtains module, the bidimensional image figure acquired for obtaining each camera chain in each collection period;
Image processing module, for combining the multiframe bidimensional image figure successively acquired in current collection period to current camera Carry out image procossing;
Described image processing module further comprises:
First image processing module obtains multiple two-dimensional foreground images for extracting prospect to the multiframe bidimensional image figure;
Second image processing module, for the multiple two-dimensional foreground image projection into three-dimensional space, to be obtained multiple three-dimensionals Foreground image;Wherein, each acquisition moment of current collection period, the multiframe bidimensional image figure, the multiple two-dimensional foreground Image and the multiple three-dimensional foreground image have one-to-one relationship;
Third image processing module, for calculating the corresponding position in the multiple two-dimensional foreground image of the point in real world Situation of change;Wherein, the point in the real world is each two-dimensional foreground point in the two-dimensional foreground image true Corresponding point in the world;The two-dimensional foreground point is the pixel in the two-dimensional foreground image;
4th image processing module, for according to the point in real world in the multiple two-dimensional foreground image corresponding position Situation of change, and according to the corresponding relationship between the two-dimensional foreground image and the three-dimensional foreground image,
Calculate the situation of change of point corresponding position in the multiple three-dimensional foreground image in real world;
a fusion processing module, configured to, for each acquisition moment of the current acquisition period, perform fusion processing on the three-dimensional foreground points in the three-dimensional foreground images corresponding to the camera combinations at the monitoring site at the same acquisition moment, according to the rule that all three-dimensional foreground points corresponding to the same point in the real world are fused into one three-dimensional fusion foreground point, and to combine all three-dimensional fusion foreground points obtained after fusion into the three-dimensional fusion foreground map at that acquisition moment; wherein a three-dimensional foreground point is a voxel of the three-dimensional foreground image;
a displacement calculation module, configured to calculate the change of the corresponding positions of the points in the real world in the multiple three-dimensional fusion foreground maps at the respective acquisition moments of the current acquisition period, according to the change of their corresponding positions in the multiple three-dimensional foreground images and according to the correspondence between each three-dimensional fusion foreground point and the three-dimensional foreground points fused into it;
a target search module, configured to determine a target object according to the multiple three-dimensional fusion foreground maps at the respective acquisition moments of the current acquisition period and to the change of the corresponding positions of the points in the real world in those maps, and to determine the spatial distribution characteristic and the operation distribution characteristic of the target object;
a judgment module, configured to judge, according to the spatial distribution characteristic and the operation distribution characteristic of the target object, whether a target scenario occurs;
an output module, configured to output the judgment result, and to raise an alarm when a target scenario occurs;
wherein the fusion processing module further comprises:
a first polling submodule, configured to take each acquisition moment of the current acquisition period in turn as the current acquisition moment, to select each camera combination in turn, and to select in turn each three-dimensional foreground point in the three-dimensional foreground image corresponding to the currently selected camera combination at the current acquisition moment as the current three-dimensional foreground point;
a search submodule, configured to judge whether, among the three-dimensional foreground images corresponding to the other camera combinations at the monitoring site at the current acquisition moment, there is a three-dimensional foreground point corresponding to the same point in the real world as the current three-dimensional foreground point, and to trigger the first fusion submodule when the judgment is negative and the second fusion submodule when the judgment is positive;
a first fusion submodule, configured to determine the current three-dimensional foreground point as a three-dimensional fusion foreground point;
a second fusion submodule, configured to fuse all three-dimensional foreground points corresponding to the same point in the real world into one three-dimensional fusion foreground point according to the following formula:
wherein U, V and Z are the three-dimensional coordinates of the three-dimensional fusion foreground point, corresponding to the three axes of the world coordinate system; each three-dimensional foreground point corresponding to the same point in the real world is determined as a point to be fused; N is the number of all points to be fused corresponding to the same point in the real world; n is the index of a point to be fused; (UW_n, VW_n, ZW_n) are the three-dimensional coordinates of the point to be fused with index n; weight_n is the weight of the point to be fused with index n; dist_n is the distance from the point to be fused with index n to the center coordinate of its corresponding camera combination; wherein the center coordinate of a camera combination is the coordinate, in the world coordinate system, of the center of symmetry of the projection points of the installation positions of the cameras in the camera combination;
a three-dimensional fusion foreground point cloud processing submodule, configured to compose all three-dimensional fusion foreground points into the three-dimensional fusion foreground map corresponding to the current acquisition moment.
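The formula itself appears as an image in the published patent and is not reproduced in this text. Under the variable definitions above, a weighted mean with normalized inverse-distance weights (a nearer camera combination contributes more) is one consistent reading; the sketch below rests on that assumption and is not the patent's verbatim formula:

```python
import numpy as np

def fuse_points(points_to_fuse, dists):
    """Fuse N three-dimensional foreground points corresponding to the
    same real-world point into one fusion foreground point (U, V, Z).
    ASSUMPTION: weight_n = (1/dist_n) / sum_m (1/dist_m), i.e.
    inverse-distance weighting; the patent's formula image is not
    reproduced here, so this is a plausible reading, not the verbatim
    formula."""
    pts = np.asarray(points_to_fuse, dtype=float)  # shape (N, 3): rows (UW_n, VW_n, ZW_n)
    d = np.asarray(dists, dtype=float)             # dist_n to each combination's center
    weights = (1.0 / d) / np.sum(1.0 / d)          # assumed weight_n, sums to 1
    return weights @ pts                           # fused (U, V, Z)

uvz = fuse_points([[1.0, 2.0, 0.9], [1.1, 2.1, 1.0]], dists=[2.0, 4.0])
```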
11. The multiple-camera cooperative monitoring device according to claim 10, characterized in that the second image processing module further comprises:
a two-dimensional coordinate calculation submodule, configured to determine the two-dimensional coordinates of each two-dimensional foreground point in the two-dimensional image corresponding to the two-dimensional foreground image;
a depth value calculation submodule, configured to determine the depth value of each two-dimensional foreground point in the two-dimensional depth map corresponding to the two-dimensional foreground image; wherein the two-dimensional depth map is formed from the depth values of the pixels of the corresponding two-dimensional image, calculated from the two two-dimensional images simultaneously acquired by the cameras in the current camera combination;
a first projection submodule, configured to project the two-dimensional foreground point into the camera coordinate system according to its two-dimensional coordinates and depth value;
a second projection submodule, configured to further project the projection point of the two-dimensional foreground point in the camera coordinate system into the world coordinate system, and to determine the resulting projection point in the world coordinate system as the three-dimensional foreground point;
a three-dimensional foreground point cloud processing submodule, configured to compose the three-dimensional foreground points corresponding to all two-dimensional foreground points in the two-dimensional foreground image into the three-dimensional foreground image.
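The two projection submodules correspond to the standard two-stage back-projection of a pixel with known depth. A pinhole-model sketch under the common convention p_cam = R @ p_world + t (the patent does not spell out the camera model, so the model and names are assumptions):

```python
import numpy as np

def backproject(u, v, depth, K, R, t):
    """Back-project a two-dimensional foreground point (pixel (u, v) with
    its depth value) into the camera coordinate system via the intrinsic
    matrix K, then into the world coordinate system via the extrinsics
    (R, t), mirroring the first and second projection submodules of
    claim 11."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    p_cam = np.array([(u - cx) * depth / fx,   # camera coordinate system
                      (v - cy) * depth / fy,
                      depth])
    return R.T @ (p_cam - t)                   # world coordinate system

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
p_world = backproject(400, 260, 5.0, K, np.eye(3), np.zeros(3))
```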
12. The multiple-camera cooperative monitoring device according to claim 11, characterized in that the third image processing module obtains the change of the corresponding positions of the points in the real world in the multiple two-dimensional foreground images by calculating optical flow over the multiple two-dimensional foreground images.
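The claim requires only that optical flow be calculated, not a specific algorithm; dense Farneback flow from OpenCV is one common realization:

```python
import cv2

def foreground_flow(prev_gray, next_gray):
    """Dense optical flow between two consecutive grayscale foreground
    images, one way to realize claim 12. Farneback flow is an
    illustrative choice. flow[y, x] gives the (dx, dy) position change
    of the real-world point imaged at pixel (x, y)."""
    return cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
```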
13. The multiple-camera cooperative monitoring device according to claim 10, characterized in that the target search module further comprises:
a second polling submodule, configured to select in turn the three-dimensional fusion foreground map at each acquisition moment of the current acquisition period;
a three-dimensional subspace division submodule, configured to divide the three-dimensional space in which the currently selected three-dimensional fusion foreground map lies into several three-dimensional subspaces, and to compute one or more of the following statistics: the number of three-dimensional fusion foreground points contained in each three-dimensional subspace; the dominant color information presented by the three-dimensional fusion foreground points contained in each three-dimensional subspace; the maximum height of the three-dimensional fusion foreground points contained in each three-dimensional subspace;
a three-dimensional clustering submodule, configured to aggregate, according to the statistics, all three-dimensional fusion foreground points contained in the three-dimensional subspaces that satisfy the clustering condition, so as to form three-dimensional fusion foreground blocks; wherein the clustering condition is that the spatial distance is less than a first preset threshold and the difference of the statistics is less than a second preset threshold;
a three-dimensional template matching submodule, configured to perform template matching on the formed three-dimensional fusion foreground blocks in the world coordinate system, so as to judge whether a three-dimensional fusion foreground block is a target object;
a three-dimensional position and displacement calculation submodule, configured to, when the three-dimensional fusion foreground block is determined to be a target object, determine, according to the change of the corresponding positions of points in the real world in the multiple three-dimensional fusion foreground maps at the respective acquisition moments of the current acquisition period, the change in the three-dimensional coordinates, across the three-dimensional fusion foreground images at different acquisition moments of the current acquisition period, of each three-dimensional fusion foreground point that aggregates to form the target object and of the other three-dimensional fusion foreground points corresponding to the same points in the real world, and to determine therefrom the position and displacement of the target object at each acquisition moment of the current acquisition period;
a three-dimensional spatial feature calculation submodule, configured to determine the positions of the target object at the respective acquisition moments of the current acquisition period as the spatial distribution characteristic of the target object;
a three-dimensional operation feature calculation submodule, configured to determine the displacements of the target object at the respective acquisition moments of the current acquisition period as the operation distribution characteristic of the target object.
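The subspace division and its per-subspace statistics amount to a voxel-grid pass over the fusion point cloud. A minimal sketch gathering two of the three listed statistics (color is omitted for brevity; the cell size and dict layout are illustrative choices, not fixed by the patent):

```python
import numpy as np

def subspace_statistics(points, cell=0.25):
    """Divide the space holding a three-dimensional fusion foreground map
    into cubic subspaces of side `cell` and gather, per occupied
    subspace, the point count and maximum height, as in claim 13."""
    pts = np.asarray(points, dtype=float)
    keys = map(tuple, np.floor(pts / cell).astype(int))  # subspace index per point
    stats = {}
    for key, p in zip(keys, pts):
        count, zmax = stats.get(key, (0, -np.inf))
        stats[key] = (count + 1, max(zmax, p[2]))        # Z taken as the height axis
    return stats

stats = subspace_statistics(np.random.rand(1000, 3) * 2.0)
```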
14. The multiple-camera cooperative monitoring device according to claim 10, characterized in that the target search module further comprises:
a third polling submodule, configured to select in turn the three-dimensional fusion foreground map at each acquisition moment of the current acquisition period;
a two-dimensional projection submodule, configured to project each three-dimensional fusion foreground point in the currently selected three-dimensional fusion foreground map onto a two-dimensional plane to obtain fusion foreground projection points, and to combine the fusion foreground projection points corresponding to all three-dimensional fusion foreground points in the currently selected three-dimensional fusion foreground map into the fusion foreground projection map;
a two-dimensional subspace division submodule, configured to divide the two-dimensional space in which the fusion foreground projection map lies into several two-dimensional subspaces, and to compute one or more of the following statistics: the number of fusion foreground projection points contained in each two-dimensional subspace; the dominant color information presented by the fusion foreground projection points contained in each two-dimensional subspace; the maximum height of the fusion foreground projection points contained in each two-dimensional subspace;
a two-dimensional clustering submodule, configured to aggregate, according to the statistics, all fusion foreground projection points contained in the two-dimensional subspaces that satisfy the clustering condition, so as to form two-dimensional fusion foreground blocks; wherein the clustering condition is that the spatial distance is less than a first preset threshold and the difference of the statistics is less than a second preset threshold;
a two-dimensional template matching submodule, configured to perform template matching on the formed two-dimensional fusion foreground blocks in the two-dimensional space, so as to judge whether a two-dimensional fusion foreground block is a target object;
a two-dimensional position and displacement calculation submodule, configured to, when the two-dimensional fusion foreground block is determined to be a target object, determine, according to the change of the corresponding positions of points in the real world in the multiple three-dimensional fusion foreground maps at the respective acquisition moments of the current acquisition period, the change in the two-dimensional coordinates, across the fusion foreground projection maps corresponding to the three-dimensional fusion foreground images at different acquisition moments of the current acquisition period, of each fusion foreground projection point that aggregates to form the target object and of the other fusion foreground projection points corresponding to the same points in the real world, and to determine therefrom the position and displacement of the target object at each acquisition moment of the current acquisition period;
a two-dimensional spatial feature calculation submodule, configured to determine the positions of the target object at the respective acquisition moments of the current acquisition period as the spatial distribution characteristic of the target object;
a two-dimensional operation feature calculation submodule, configured to determine the displacements of the target object at the respective acquisition moments of the current acquisition period as the operation distribution characteristic of the target object.
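A sketch of the two-dimensional projection submodule's step, assuming the projection plane is the ground plane Z = 0 (the choice of plane is an assumption, not fixed by the patent):

```python
import numpy as np

def fusion_foreground_projection(points):
    """Project three-dimensional fusion foreground points onto a
    two-dimensional plane to build the fusion foreground projection map
    of claim 14. With the plane taken as Z = 0, the projection keeps
    (U, V) and carries Z along as the height used by the max-height
    statistic of the two-dimensional subspace division submodule."""
    pts = np.asarray(points, dtype=float)
    return pts[:, :2], pts[:, 2]   # projection points (U, V), heights Z

proj, heights = fusion_foreground_projection([[1.0, 2.0, 0.9], [1.1, 2.1, 1.7]])
```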
15. The multiple-camera cooperative monitoring device according to claim 13 or 14, characterized in that the judgment module further comprises:
a pattern classification processing submodule, configured to perform pattern classification on the spatial distribution characteristic and the operation distribution characteristic of the target object;
a pattern classification judgment submodule, configured to judge, according to the result of the pattern classification, whether a target scenario occurs.
16. The multiple-camera cooperative monitoring device according to claim 10, characterized in that the camera division module forms any two cameras whose installation positions at the monitoring site are less than a predetermined distance apart into one camera combination.
17. The multiple-camera cooperative monitoring device according to claim 10, characterized in that, when the image processing module performs image processing on the multiple frames of two-dimensional images successively acquired by the current camera combination in the current acquisition period, the multiple frames of two-dimensional images are acquired by the same camera in the current camera combination.
18. The multiple-camera cooperative monitoring device according to claim 10, characterized in that the two-dimensional image is a grayscale image or a color image.
CN201610280010.4A 2016-04-29 2016-04-29 A kind of multiple-camera cooperative monitoring method and device Expired - Fee Related CN105979203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610280010.4A CN105979203B (en) 2016-04-29 2016-04-29 A kind of multiple-camera cooperative monitoring method and device

Publications (2)

Publication Number Publication Date
CN105979203A CN105979203A (en) 2016-09-28
CN105979203B true CN105979203B (en) 2019-04-23

Family

ID=56993443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610280010.4A Expired - Fee Related CN105979203B (en) 2016-04-29 2016-04-29 A kind of multiple-camera cooperative monitoring method and device

Country Status (1)

Country Link
CN (1) CN105979203B (en)


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190423
Termination date: 20200429