CN110458897A - Multi-camera automatic calibration method and system, monitoring method and system - Google Patents
- Publication number: CN110458897A (application CN201910746029.7A)
- Authority
- CN
- China
- Prior art keywords
- camera
- calibrated
- vision map
- information
- vision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The application belongs to the technical field of video surveillance, and in particular relates to a multi-camera automatic calibration method and system and a multi-camera cooperative monitoring method and system. The multi-camera automatic calibration method includes the following steps: obtaining first image information of a specified region; constructing a vision map of the specified region from the first image information; obtaining multiple pieces of second image information captured by multiple cameras to be calibrated, and matching feature points of each piece of second image information against the vision map one by one; obtaining the pose information and predetermined parameter information of each camera to be calibrated from the matching results, and then calibrating each camera to be calibrated according to its pose information and predetermined parameter information. The multi-camera automatic calibration method and system and the monitoring method and system of the application overcome the high complexity and poor generality of existing camera calibration methods, and realize a semi-automated multi-camera self-calibration scheme that requires operating only a single camera.
Description
Technical field
The application belongs to the technical field of video surveillance, and in particular relates to a multi-camera automatic calibration method and system and a multi-camera cooperative monitoring method and system.
Background technique
As the economic, political, and social environment changes and grows more complex, demand for security in every industry keeps increasing, and higher requirements are placed on the flexibility and user-friendliness of security technology. Video surveillance, as the most direct and effective integrated solution, has become one of the most popular research directions in both academia and industry in recent years. Camera-based security solutions, relying on their high performance, convenience, and stability, have rapidly taken a leading position in the market. Intelligent video surveillance (IVS) analyzes the video signal from cameras to perform functions such as pedestrian re-identification, license plate recognition, and human detection, reasons about and judges the recognized content, and acts on that basis; through this chain of processing, video surveillance can genuinely replace human observation of the world.
Most mature IVS products on the market are built on a single camera or a non-cascaded camera group. A single camera provides only a 2D picture, which limits the accuracy of information processing and copes poorly with real-world problems such as object occlusion and multi-target tracking. Multi-camera cooperation has therefore become a new research hotspot. In multi-camera cooperation, information is exchanged among multiple cameras, their observations are fused, and each camera is assigned the task best suited to its current environment so that the cameras jointly accomplish the overall system goal. This resolves the narrow field of view and target occlusion problems of a single camera while satisfying additional deployment requirements. This mode of integration has shown good results in research on pedestrian detection, target tracking, and pedestrian re-identification, and is gradually taking a key position in intelligent security systems.
Although multiple cameras bring great advantages in information processing and fusion, a multi-camera system can only work properly after an accurate calibration procedure. Camera calibration methods generally fall into three categories: traditional calibration, active-vision calibration, and self-calibration.
Traditional calibration methods use known scene structure information. They usually involve a complicated calibration procedure and high-precision known calibration targets, and require skilled personnel to perform complex operations. Moreover, every time a camera is moved, the calibration must be redone.
Active-vision calibration methods assume the motion of the camera is known and solve the equations of motion for the calibration result; they are not applicable when the camera motion is unknown or uncontrollable.
Camera self-calibration algorithms calibrate from the correspondences between multiple images; they only need to establish relationships between images to complete the calibration. However, self-calibration is a nonlinear problem and its robustness is low.
Summary of the invention
In order to solve at least one of the above technical problems, the application provides a multi-camera automatic calibration method and system and a multi-camera cooperative monitoring method and system.
In a first aspect, the application discloses a multi-camera automatic calibration method, including the following steps:
obtaining first image information of a specified region;
constructing a vision map of the specified region from the first image information;
obtaining multiple pieces of second image information captured by multiple cameras to be calibrated, and matching feature points of each piece of second image information against the vision map one by one;
obtaining the pose information and predetermined parameter information of each camera to be calibrated from the matching results, and then calibrating each camera to be calibrated according to its pose information and predetermined parameter information.
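The four steps above can be sketched end to end. Every name below, the toy string "descriptors", and the minimum-match threshold are illustrative assumptions for exposition, not the patent's implementation:

```python
# Minimal sketch of the four-step calibration pipeline (all names assumed).

def build_vision_map(first_image_info):
    """Step 2: build a vision map (3D point + 2D descriptor per feature)
    from the scan of the specified region; here just a dict by feature id."""
    return {fid: (xyz, desc) for fid, (xyz, desc) in first_image_info.items()}

def match_features(second_image_info, vision_map):
    """Step 3: match one uncalibrated camera's 2D features against the
    vision map (toy exact-descriptor matching)."""
    matches = []
    for pt2d, desc in second_image_info:
        for fid, (xyz, map_desc) in vision_map.items():
            if desc == map_desc:
                matches.append((pt2d, xyz))
    return matches

def calibrate(matches):
    """Step 4: with enough 2D-3D correspondences a pose can be solved
    (e.g. via PnP); here we only report whether calibration is possible."""
    return len(matches) >= 4  # PnP needs at least 4 correspondences

# Toy data: four map features with string "descriptors".
scan = {0: ((0.0, 0.0, 0.0), "a"), 1: ((1.0, 0.0, 0.0), "b"),
        2: ((0.0, 1.0, 0.0), "c"), 3: ((1.0, 1.0, 0.5), "d")}
vmap = build_vision_map(scan)
camera_view = [((10, 20), "a"), ((30, 20), "b"), ((10, 40), "c"), ((30, 40), "d")]
m = match_features(camera_view, vmap)
print(len(m), calibrate(m))  # → 4 True
```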
According to at least one embodiment of the application, the first image information includes video frame data and the initial-camera pose corresponding to each image frame, wherein constructing the vision map of the specified region from the first image information comprises:
superimposing the 3D point clouds of all image frames in order along the acquisition timeline of the video frame data, generating a preliminary 3D model of the specified region;
extracting the 2D feature points in each image frame along the acquisition timeline, matching feature points across all image frames, and generating the 3D structure of all image frames;
mapping the 3D structure onto the preliminary 3D model to generate the vision map.
According to at least one embodiment of the application, the multi-camera automatic calibration method further includes:
transmitting the resulting vision map in real time; and/or
obtaining the information that a camera to be calibrated has completed calibration, synchronizing that information into the vision map, and then transmitting the synchronized vision map in real time.
According to at least one embodiment of the application, matching each piece of second image information against the vision map comprises: matching the 2D feature points in the second image information with the feature points of the corresponding point cloud in the 3D vision map.
In a second aspect, the application discloses a multi-camera automatic calibration system, comprising:
an initial camera for obtaining first image information of a specified region;
a vision map construction module for constructing a vision map of the specified region from the first image information;
a feature point matching module for obtaining multiple pieces of second image information captured by multiple cameras to be calibrated and matching each piece of second image information against the vision map one by one;
a calibration module for obtaining the pose information and predetermined parameter information of each camera to be calibrated from the matching results and calibrating each camera to be calibrated accordingly.
According to at least one embodiment of the application, the initial camera is an RGBD camera with visual SLAM capability, wherein the first image information includes video frame data and the initial-camera pose corresponding to each image frame.
According to at least one embodiment of the application, the vision map construction module includes:
a preliminary 3D model construction unit for superimposing the 3D point clouds of all image frames in order along the acquisition timeline of the video frame data, generating a preliminary 3D model of the specified region;
a 3D structure generation unit for extracting the 2D feature points in each image frame along the acquisition timeline, matching feature points across all image frames, and generating the 3D structure of all image frames; and
a vision map generation unit for mapping the 3D structure onto the preliminary 3D model, thereby generating the vision map.
According to at least one embodiment of the application, the vision map construction module is also used to obtain the information that a camera to be calibrated has completed calibration and to synchronize that information into the vision map; wherein the multi-camera automatic calibration system further includes:
a visualization device for displaying the vision map in real time according to the vision map information transmitted by the vision map construction module.
In a third aspect, the application discloses a multi-camera cooperative monitoring method, including the following steps:
calibrating each camera in a specified region by the multi-camera automatic calibration method of any embodiment of the first aspect;
capturing multiple pieces of third image information of the specified region with the calibrated cameras;
performing target detection and/or multi-target tracking and/or pedestrian re-identification on the third image information;
dynamically displaying the processing results in 3D form in the vision map obtained in the multi-camera automatic calibration step.
In a fourth aspect, the application discloses a multi-camera cooperative monitoring system, comprising:
a multi-camera calibration system, which adopts the multi-camera calibration system of any embodiment of the second aspect and is used to calibrate each camera in a specified region;
multiple cameras for capturing, after calibration, multiple pieces of third image information of the specified region;
a monitoring processing module for performing target detection and/or multi-target tracking and/or pedestrian re-identification on the third image information;
a display module for dynamically displaying the processing results in 3D form in the vision map, wherein the vision map is generated when the multi-camera calibration system calibrates the cameras in the specified region.
The application has at least the following beneficial effects: in the multi-camera automatic calibration method and system and the cooperative monitoring method and system of the application, a 3D model of the specified region is generated, and the cameras to be calibrated are then calibrated against this existing model. This overcomes the high complexity and poor generality of existing camera calibration methods and realizes a semi-automated multi-camera self-calibration scheme that requires operating only a single camera. Moreover, the calibrated multi-camera cooperation mode supports a pipeline of intelligent analysis and provides 3D analysis information.
Brief description of the drawings
Fig. 1 is a flowchart of a preferred embodiment of the multi-camera automatic calibration method of the application;
Fig. 2 is a flowchart of a preferred embodiment of the vision map construction method in the multi-camera automatic calibration method of the application;
Fig. 3 is a block diagram of a preferred embodiment of the multi-camera automatic calibration system of the application;
Fig. 4 is a block diagram of a preferred embodiment of the vision map construction module in the multi-camera automatic calibration system of the application;
Fig. 5 is a flowchart of a preferred embodiment of the multi-camera cooperative monitoring method of the application;
Fig. 6 is a block diagram of a preferred embodiment of the multi-camera cooperative monitoring system of the application.
Detailed description of the embodiments
To make the purposes, technical solutions, and advantages of the embodiments of the application clearer, the technical solutions in the embodiments are described in more detail below with reference to the accompanying drawings, in which identical or similar labels denote identical or similar elements throughout. The described embodiments are a part, not all, of the embodiments of the application; they are exemplary, intended to explain the application, and are not to be understood as limiting it. All other embodiments obtained by those of ordinary skill in the art without creative effort, based on the embodiments of the application, fall within its scope of protection. The embodiments of the application are described in detail below with reference to the drawings.
Before the methods and systems of the application are formally described, some of the industry terms and English abbreviations involved are first expanded and explained, as follows:
1) SLAM
SLAM (simultaneous localization and mapping). The problem can be described as follows: a robot is placed at an unknown position in an unknown environment; is there a way for the robot to gradually draw a complete map of the environment while moving? A complete map (a consistent map) means reaching every accessible corner of the room that is not blocked by obstacles. SLAM requires real-time operation.
2) Visual SLAM
Visual SLAM: SLAM technology based on image matching algorithms.
3) SFM
SFM (structure from motion): a technique that obtains camera parameters and performs three-dimensional reconstruction by analyzing image sequences. SFM does not require real-time operation and can be run offline.
4) 3D map
SLAM or SFM projects the feature points in the camera views into three-dimensional space, generating a 3D point cloud map. These 3D point clouds are the key information for camera positioning.
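A minimal sketch of how a 2D feature point becomes a 3D map point when per-pixel depth is available (as with an RGBD camera). The intrinsic values below are assumed for illustration:

```python
import numpy as np

# Back-project a pixel with known depth into a camera-frame 3D point,
# the basic operation by which image features become 3D point cloud entries.
fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0  # assumed pinhole intrinsics

def back_project(u, v, depth):
    """Pixel (u, v) with depth z -> camera-frame 3D point (x, y, z)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

p = back_project(420.0, 240.0, 2.0)
print(p)  # → [0.4 0.  2. ]
```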
5) Camera calibration
In image measurement and machine vision applications, determining the relationship between the three-dimensional position of a point on the surface of an object in space and its corresponding point in the image requires a geometric model of camera imaging; the parameters of this geometric model are the camera parameters. In most cases these parameters can only be obtained through experiment and computation, and this process of solving for the parameters is called camera calibration.
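The geometric model referred to above is typically the pinhole projection x = K[R|t]X, and calibration recovers exactly these parameters (K intrinsic; R, t extrinsic). A minimal sketch with assumed values:

```python
import numpy as np

# Pinhole projection of a world point to pixel coordinates.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])      # intrinsic matrix (assumed)
R = np.eye(3)                        # extrinsic rotation (identity here)
t = np.array([0.0, 0.0, 1.0])        # extrinsic translation (assumed)

def project(X):
    """World point X -> pixel coordinates under the model above."""
    x_cam = R @ X + t                # world frame -> camera frame
    x_img = K @ x_cam                # camera frame -> homogeneous pixels
    return x_img[:2] / x_img[2]      # perspective division

print(project(np.array([0.0, 0.0, 1.0])))  # point on the optical axis → [320. 240.]
```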
6) Intelligent video surveillance
Intelligent video surveillance (IVS) applies image processing, pattern recognition, and computer vision techniques. By adding an intelligent video analysis module to the monitoring system, it uses the powerful data processing capability of computers to filter out useless frames and interference, automatically identify different objects, extract the key useful information in the video source, quickly and accurately locate incident scenes, judge abnormal situations in the monitored picture, and raise alarms or trigger other actions in the fastest and best way, thereby providing a fully automatic, all-weather, real-time intelligent system for advance early warning, timely handling, and after-the-fact evidence collection.
The multi-camera automatic calibration method and system and the multi-camera cooperative monitoring method and system of the application are further described below with reference to Figs. 1-6.
As discussed in the background section, in an intelligent monitoring system the multi-camera mode can greatly improve security effectiveness, but how to achieve multi-camera cooperation and how to complete the calibration are the key difficulties of this task. Using the ideas of SLAM/SFM to complete accurate, fast, semi-automatic camera calibration is therefore the focus of the application.
In a first aspect, as shown in Fig. 1, the application discloses a multi-camera automatic calibration method, including the following steps:
Step S11: obtain first image information of a specified region.
The specified region can be selected as any suitable scene area according to the user's needs, such as a living room, an office reception area, or a classroom.
Further, in step S11, the first image information of the specified region can be captured by any of a variety of suitable scanning acquisition devices; in this embodiment, the scanning acquisition device is defined as the initial camera. Note that the initial camera may exist independently of the subsequent cameras to be calibrated; in some cases, a camera to be calibrated may also serve as the initial camera.
Further, in this embodiment, the scanning acquisition device is preferably an RGBD (color + depth) camera, which may specifically be a binocular camera, a TOF (time-of-flight) sensor, or a module composed of structured light and RGB; the scanning acquisition device also has visual SLAM capability. The benefit of an RGBD camera is that it can generate a 3D point cloud while associating RGB texture with the point cloud, improving the precision of the model. With only an RGB camera, the precision and point cloud density of the model would drop substantially.
Further, shooting (or scanning) the specified region with the initial camera is a continuous process, and various suitable shooting modes can be adopted; this embodiment prefers a manual mode and a robot automatic mode.
In the manual mode, the user scans with a hand-held RGBD camera; at the same time, a visualization device (a mobile phone, computer, VR glasses, etc.) can be provided to display the currently built model in real time, helping the user complete the scan more easily.
In the robot automatic mode, a camera with autonomous mobility, such as one on a domestic robot or a drone, automatically completes a full-room scan through SLAM plus an autonomous navigation algorithm (scan path planning).
Step S12: construct the vision map (3D map) of the specified region from the first image information.
The vision map can be built in any of a variety of currently known suitable ways. In this embodiment, as described above, the initial camera is an RGBD camera with visual SLAM capability, so its motion track and real-time pose can be recorded during acquisition; specifically, the collected first image information includes video frame data and the initial-camera pose corresponding to each image frame.
Based on this, as shown in Fig. 2, step S12 can be divided in this embodiment into the following steps.
Step S121: superimpose the 3D point clouds of all image frames in order along the acquisition timeline of the video frame data; after a predetermined optimization and meshing process, a preliminary 3D model of the specified region can be generated in real time.
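Under stated assumptions, the superposition in step S121 amounts to transforming each frame's point cloud into a common world frame with the pose recorded by SLAM and concatenating the results; the optimization and meshing steps are omitted from this sketch:

```python
import numpy as np

# Superimpose per-frame point clouds using each frame's pose (R_i, t_i).
def superimpose(frames):
    """frames: list of (R, t, points) with points as an (N, 3) array in
    camera coordinates; returns all points stacked in the world frame."""
    world_points = [pts @ R.T + t for R, t, pts in frames]
    return np.vstack(world_points)   # preliminary (unoptimized) 3D model

# Two toy frames, the second captured 1 m further along x.
pts = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
frames = [(np.eye(3), np.zeros(3), pts),
          (np.eye(3), np.array([1.0, 0.0, 0.0]), pts)]
cloud = superimpose(frames)
print(cloud.shape)  # → (4, 3)
```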
Step S122: also along the acquisition timeline of the video frame data, extract the 2D feature points in each image frame, preferably using SFM techniques; match feature points across all 2D image frames and generate the 3D structure of all image frames.
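A toy version of the inter-frame feature matching in step S122. Real systems match ORB/SIFT-style descriptors with a ratio test; the two-dimensional "descriptors" here are purely illustrative:

```python
import numpy as np

# Nearest-neighbour descriptor matching between two frames.
def match(desc_a, desc_b):
    """Return index pairs (i, j) matching each descriptor in desc_a to its
    nearest descriptor in desc_b by Euclidean distance."""
    pairs = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        pairs.append((i, int(np.argmin(dists))))
    return pairs

a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.1, 0.9], [0.9, 0.1]])   # same features, reordered + noise
print(match(a, b))  # → [(0, 1), (1, 0)]
```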
Step S123: map the 3D structure onto the preliminary 3D model to generate the vision map. A suitable algorithm can globally optimize all point clouds to guarantee the best mapping effect.
Steps S122-S123 amount to mapping 2D image texture onto the 3D point cloud, so a textured full-room model can be generated. In addition, the feature points in the 2D images have a mapping relationship with the 3D points in space, so a vision map can be generated. The vision map consists of the 3D coordinates and 2D image features of the feature points together with the corresponding camera poses, and is used for the subsequent camera calibration.
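One possible in-memory shape for such a vision map; the field names and types are assumptions for illustration, not the patent's data format:

```python
from dataclasses import dataclass, field

# A vision map pairs each landmark's 3D coordinate with its 2D image
# descriptor, alongside the poses of the keyframes that observed it.
@dataclass
class Landmark:
    xyz: tuple          # 3D coordinate in the model frame
    descriptor: bytes   # 2D image feature descriptor

@dataclass
class VisionMap:
    landmarks: list = field(default_factory=list)
    keyframe_poses: list = field(default_factory=list)  # (R, t) per keyframe

    def add(self, xyz, descriptor):
        self.landmarks.append(Landmark(xyz, descriptor))

vmap = VisionMap()
vmap.add((0.0, 1.0, 2.0), b"\x01\x02")
print(len(vmap.landmarks))  # → 1
```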
It should also be noted that the vision map generation can be offline or online, depending on the computing capability of the server (processor); the server can be deployed in the cloud or locally (for example on a home PC, or even on a robot, mobile phone, or other device with sufficient computing power). The application does not restrict the specific SFM algorithm, key computation methods, or mapping optimization algorithms.
Step S13: obtain the multiple pieces of second image information captured by the multiple cameras to be calibrated, and match each piece of second image information against the vision map one by one.
Specifically, in this embodiment, the user can place the security cameras to be calibrated freely. After the indoor 3D vision map described above has been obtained, the 2D feature points in the second image information from each camera to be calibrated are matched with the feature points of the corresponding point cloud in the 3D vision map (also called key point matching).
Step S14: obtain the pose information and predetermined parameter information of each camera to be calibrated from the matching results, and then calibrate each camera to be calibrated according to its pose information and predetermined parameter information.
After the key point matching in step S14, the pose of the camera (the camera to be calibrated) that shot the feature picture (the second image) can be obtained; the parameters of the camera to be calibrated are then adjusted to complete its calibration.
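A hedged sketch of the pose recovery behind this step. The patent does not specify a solver; real systems typically run PnP on the 2D-3D matches, usually with RANSAC. To stay self-contained, this sketch assumes depth is also available for the matched points, so the pose reduces to rigid alignment of camera-frame and map-frame 3D points (the Kabsch algorithm):

```python
import numpy as np

# Least-squares rigid transform (R, t) with dst_i ≈ R @ src_i + t.
def kabsch(src, dst):
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check: rotate 90 degrees about z and translate by (1, 2, 3).
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
dst = src @ R_true.T + t_true
R, t = kabsch(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # → True True
```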
Steps S13-S14 amount to a camera self-calibration process, and the whole procedure is fully automatic. In addition, the position of each calibrated camera can be updated in the 3D vision map, and its field of view shown in the 3D visualization model.
Further, after the vision map has been built, the multi-camera automatic calibration method of the application may also include:
Step S15: transmit the resulting vision map in real time; and/or obtain the information that a camera to be calibrated has completed calibration, synchronize that information into the vision map, and then transmit the synchronized vision map in real time.
Note that step S15 is not fixed to follow any particular step: it can follow step S12, transmitting the resulting vision map in real time, or it can follow step S14, transmitting the vision map after synchronization. Specifically, the vision map can be transmitted to a corresponding visualization device, such as a mobile phone, computer, or VR glasses.
The multi-camera automatic calibration method of the application supports real-time dynamic calibration of the cameras to be calibrated: cameras with motorized gimbals or autonomous mobility (robots, etc.) can be calibrated in real time, with the camera's flat image projected directly onto the 3D model. Thus, during use, a camera to be calibrated can synchronize its calibration information into the vision map to adapt to changes in the indoor environment.
In conclusion the multi-cam automatic calibration method of the application, by generating specified region 3D model, further according to
Some models carry out camera calibration to be calibrated, and it is low to solve the height of complexity existing for existing camera marking method, universality
The disadvantages of, realize a kind of semi-automatic multiple-camera self-calibration mode that need to only operate single camera and can complete.
In a second aspect, as shown in Fig. 3, the application discloses a multi-camera automatic calibration system, which may include an initial camera 21, a vision map construction module 22, a feature point matching module 23, a calibration module 24, and other parts.
The initial camera 21 is used to obtain first image information of a specified region. For the type, shooting modes, etc. of the initial camera 21, refer to the description of the scanning acquisition device in the multi-camera automatic calibration method of the first aspect, which is not repeated here.
The vision map construction module 22 is used to construct the vision map of the specified region from the first image information.
Similarly, since initial camera is RGBD camera, and has vision SLAM function, so that the first image information packet
Include initial camera pose corresponding to video requency frame data and each picture frame;Therefore, as shown in figure 4, in the present embodiment, vision
It may include preliminary 3D model construction unit 221,3D structural generation unit 222 and visually that map structuring module 22 is specific again
Figure generation unit 223.
The preliminary 3D model construction unit 221 superimposes the 3D point clouds of all image frames in order along the acquisition timeline of the video frame data, generating the preliminary 3D model of the specified region.
The 3D structure generation unit 222 extracts the 2D feature points in each image frame along the acquisition timeline, matches feature points across all image frames, and generates the 3D structure of all image frames.
The vision map generation unit 223 maps the 3D structure onto the preliminary 3D model to generate the vision map.
The feature point matching module 23 obtains the multiple pieces of second image information captured by the multiple cameras to be calibrated and matches each piece of second image information against the vision map one by one.
The calibration module 24 obtains the pose information and predetermined parameter information of each camera to be calibrated from the matching results, and calibrates each camera to be calibrated accordingly.
Further, the vision map construction module 22 can also obtain the information that a camera to be calibrated has completed calibration, synchronize that information into the vision map, and finally transmit the synchronized vision map.
The multi-camera automatic calibration system of the application further includes a visualization device 25 for displaying the vision map in real time according to the vision map information transmitted by the vision map construction module (the initially generated vision map and/or the vision map after synchronization). Similarly, the visualization device 25 can be, for example, a mobile phone, computer, or VR glasses.
In conclusion the multi-cam automatic calibration system of the application, by generating specified region 3D model, further according to
Some models carry out camera calibration to be calibrated, and it is low to solve the height of complexity existing for existing camera marking method, universality
The disadvantages of, realize a kind of semi-automatic multiple-camera self-calibration mode that need to only operate single camera and can complete.
Further, after multiple cameras are calibrated, a 3D multi-camera spatial model can be obtained, which integrates the information of multiple cameras and achieves better results in tasks such as object detection, multi-target tracking, and pedestrian re-identification.
To this end, in a third aspect, as shown in Fig. 5, the present application discloses a multi-camera cooperative monitoring method, comprising the following steps:
Step S31: calibrate each camera in the designated region by the multi-camera automatic calibration method of any item of the first aspect above.
Step S32: capture a plurality of third image information of the designated region by the plurality of calibrated cameras.
Step S33: perform object detection processing and/or multi-target tracking processing and/or pedestrian re-identification processing according to the plurality of third image information.
Step S34: dynamically display the processing results in 3D form on the vision map obtained in the multi-camera automatic calibration step.
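Steps S32–S34 can be outlined structurally as one monitoring pass. Every name here (the function and its pluggable callables) is an illustrative placeholder; the disclosure deliberately leaves the concrete detection and tracking methods open:

```python
def monitoring_step(frames, detect, track, render):
    """One pass of steps S32-S34: `frames` are captures from the
    calibrated cameras (S32); `detect` and `track` stand in for any
    known detection / multi-target tracking method (S33); `render`
    displays the fused results in 3D on the vision map built during
    calibration (S34)."""
    detections = [d for f in frames for d in detect(f)]  # S33: per-frame detection
    tracks = track(detections)                           # S33: cross-camera tracking
    return render(tracks)                                # S34: 3D display

# Toy plug-ins: each "frame" is simply a list of detection labels.
result = monitoring_step(
    frames=[["person"], ["pet", "person"]],
    detect=lambda frame: frame,
    track=lambda dets: sorted(set(dets)),
    render=lambda tracks: tracks,
)
print(result)  # ['person', 'pet']
```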
In a fourth aspect, as shown in Fig. 6, the present application discloses a multi-camera cooperative monitoring system, comprising a multi-camera calibration system 41, a plurality of cameras 42, a monitoring processing module 43, and a display module 44.
The multi-camera calibration system 41 employs the multi-camera calibration system described in any item of the second aspect above, and is used to calibrate each camera 42 in the designated region.
The plurality of cameras 42 are mounted at predetermined positions in the designated region, and are used to capture a plurality of third image information of the designated region after calibration is completed.
The monitoring processing module 43 is configured to perform object detection processing and/or multi-target tracking processing and/or pedestrian re-identification processing according to the plurality of third image information. The specific processing methods for object detection, multi-target tracking, and pedestrian re-identification may adopt the corresponding processing methods of currently known intelligent video surveillance, and are not described in detail here.
The display module 44 is configured to dynamically display the processing results in 3D form on the vision map, where the vision map is generated when the multi-camera calibration system calibrates each camera in the designated region (i.e., the vision map described in step S12 of the first aspect). In addition, the display module 44 in this embodiment may be the visualization device described in the first and second aspects above.
To sum up, the multi-camera cooperative monitoring method and system of the present application directly calculate the geometric relationships between cameras from the 3D model and the camera poses, so that the 2D image information of multiple cameras can be integrated into 3D spatial information, thereby realizing 3D object detection and tracking algorithms and better handling tasks such as object occlusion and multi-target tracking.
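The 2D-to-3D integration described here amounts to lifting each detection into the shared world frame using the camera's calibrated parameters. The sketch below assumes a pinhole model with a known depth per detection; the intrinsics `K`, the pose convention, and the function name are illustrative assumptions, not the patent's stated method:

```python
import numpy as np

def detection_to_world(u, v, depth, K, pose):
    """Lift a 2D detection (pixel u, v with depth in metres) from one
    calibrated camera into the shared 3D frame of the vision map.
    `K` is the 3x3 intrinsic matrix, `pose` the 4x4 camera-to-world
    transform obtained from calibration."""
    # Back-project through the pinhole model into the camera frame.
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    p_cam = np.array([x, y, depth, 1.0])
    # Transform into world coordinates.
    return (pose @ p_cam)[:3]

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pose = np.eye(4)  # camera at the world origin
print(detection_to_world(320, 240, 2.0, K, pose))  # [0. 0. 2.]
```

Once all cameras report detections in this common frame, occlusion handling and multi-target tracking reduce to association in one 3D space rather than per-view 2D reasoning.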
Further, the multi-camera cooperative monitoring method and system of the present application upgrade 2D monitoring to a 3D monitoring scheme: all camera information resides in one spatial model, and all monitored dynamic information (people, pets, mobile devices, etc.) as well as changes in the environment (moved tables and chairs, etc.) can be dynamically displayed in 3D form within the 3D room model, thereby realizing all-round intelligent monitoring of the entire indoor scene.
In the following, the multi-camera automatic calibration method and system and the multi-camera cooperative monitoring method and system of the present application are further described with a specific example.
Specifically, a plurality of calibrated cameras are installed in a home scenario to realize functions such as all-round pet motion tracking, electronic nursing, and danger early warning. The steps are as follows:
First, a video is shot at home with one camera, and 3D modeling of the home scenario is completed following the prompts of a mobile app. During this process a built-in computer vision algorithm reconstructs a 3D model from the collected video sequence, and the visualized result can be viewed in the mobile app after reconstruction.
Then, four cameras are mounted at the previously planned positions, and calibration is likewise achieved by operating the mobile app according to the calibration method described above. At this point the cameras in use have been calibrated into a multi-camera system capable of cooperation.
Finally, with intelligent applications such as pet detection and motion tracking added, efficient and accurate generation and analysis of pet motion trajectories can be realized. Meanwhile, electronic nursing and danger early warning functions based on the above results enable all-round unattended care of pets.
The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that can be easily conceived by those familiar with the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A multi-camera automatic calibration method, characterized by comprising the following steps:
obtaining first image information of a designated region;
constructing a vision map of the designated region according to the first image information;
obtaining a plurality of second image information captured by a plurality of cameras to be calibrated, and matching feature points of each of the second image information against the vision map one by one;
obtaining, one by one, pose information and predefined parameter information of each camera to be calibrated according to the matching results, and then calibrating each camera to be calibrated one by one according to the pose information and predefined parameter information.
2. The multi-camera automatic calibration method according to claim 1, wherein the first image information includes video frame data and the initial camera pose corresponding to each image frame, and wherein constructing the vision map of the designated region according to the first image information comprises:
superimposing the 3D point clouds of all image frames in sequence according to the acquisition timeline of the video frame data, to generate a preliminary 3D model of the designated region;
extracting the 2D feature points in each image frame according to the acquisition timeline of the video frame data, performing feature point matching between all image frames, and generating the 3D structure of all image frames;
mapping the 3D structure onto the preliminary 3D model to generate the vision map.
3. The multi-camera automatic calibration method according to claim 1 or 2, further comprising:
transmitting the obtained vision map in real time; and/or
obtaining information indicating that a camera to be calibrated has completed calibration, synchronizing the information to the vision map, and then transmitting the synchronized vision map in real time.
4. The multi-camera automatic calibration method according to claim 1, wherein matching feature points of each of the second image information against the vision map one by one comprises:
matching the 2D feature points in the second image information with the feature points of the corresponding point clouds in the 3D vision map.
5. A multi-camera automatic calibration system, characterized by comprising:
an initial camera, configured to obtain first image information of a designated region;
a vision map construction module, configured to construct a vision map of the designated region according to the first image information;
a feature point matching module, configured to obtain a plurality of second image information captured by a plurality of cameras to be calibrated, and to match feature points of each of the second image information against the vision map one by one; and
a calibration module, configured to obtain, one by one, pose information and predefined parameter information of each camera to be calibrated according to the matching results, and then calibrate each camera to be calibrated one by one according to the pose information and predefined parameter information.
6. The multi-camera automatic calibration system according to claim 5, wherein the initial camera is an RGBD camera with a visual SLAM function, and wherein the first image information includes video frame data and the initial camera pose corresponding to each image frame.
7. The multi-camera automatic calibration system according to claim 6, wherein the vision map construction module comprises:
a preliminary 3D model construction unit, configured to superimpose the 3D point clouds of all image frames in sequence according to the acquisition timeline of the video frame data, to generate a preliminary 3D model of the designated region;
a 3D structure generation unit, configured to extract the 2D feature points in each image frame according to the acquisition timeline of the video frame data, perform feature point matching between all image frames, and generate the 3D structure of all image frames; and
a vision map generation unit, configured to map the 3D structure onto the preliminary 3D model to generate the vision map.
8. The multi-camera automatic calibration system according to claim 5 or 7, wherein the vision map construction module is further configured to obtain information indicating that a camera to be calibrated has completed calibration, and to synchronize the information to the vision map; and wherein the multi-camera automatic calibration system further comprises:
a visualization device, configured to display the vision map in real time according to the vision map information transmitted in real time by the vision map construction module.
9. A multi-camera cooperative monitoring method, characterized by comprising the following steps:
calibrating each camera in a designated region by the multi-camera automatic calibration method according to any one of claims 1-4;
capturing a plurality of third image information of the designated region by the plurality of calibrated cameras;
performing object detection processing and/or multi-target tracking processing and/or pedestrian re-identification processing according to the plurality of third image information;
dynamically displaying the processing results in 3D form on the vision map obtained in the multi-camera automatic calibration step.
10. A multi-camera cooperative monitoring system, characterized by comprising:
a multi-camera calibration system, being the multi-camera calibration system according to any one of claims 5-8, configured to calibrate each camera in a designated region;
a plurality of cameras, configured to capture a plurality of third image information of the designated region after calibration is completed;
a monitoring processing module, configured to perform object detection processing and/or multi-target tracking processing and/or pedestrian re-identification processing according to the plurality of third image information; and
a display module, configured to dynamically display the processing results in 3D form on the vision map, wherein the vision map is generated when the multi-camera calibration system calibrates each camera in the designated region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910746029.7A CN110458897B (en) | 2019-08-13 | 2019-08-13 | Multi-camera automatic calibration method and system and monitoring method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110458897A true CN110458897A (en) | 2019-11-15 |
CN110458897B CN110458897B (en) | 2020-12-01 |
Family
ID=68486268
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910746029.7A Active CN110458897B (en) | 2019-08-13 | 2019-08-13 | Multi-camera automatic calibration method and system and monitoring method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110458897B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101086681A (en) * | 2006-06-09 | 2007-12-12 | 中国科学院自动化研究所 | Game control system and method based on stereo vision |
CN101630406A (en) * | 2008-07-14 | 2010-01-20 | 深圳华为通信技术有限公司 | Camera calibration method and camera calibration device |
US20120257065A1 (en) * | 2011-04-08 | 2012-10-11 | Qualcomm Incorporated | Systems and methods to calibrate a multi camera device |
CN104374395A (en) * | 2014-03-31 | 2015-02-25 | 南京邮电大学 | Graph-based vision SLAM (simultaneous localization and mapping) method |
CN105300375A (en) * | 2015-09-29 | 2016-02-03 | 塔米智能科技(北京)有限公司 | Robot indoor positioning and navigation method based on single vision |
CN106652026A (en) * | 2016-12-23 | 2017-05-10 | 安徽工程大学机电学院 | Three-dimensional space automatic calibration method based on multi-sensor fusion |
CN107392964A (en) * | 2017-07-07 | 2017-11-24 | 武汉大学 | The indoor SLAM methods combined based on indoor characteristic point and structure lines |
CN108510530A (en) * | 2017-02-28 | 2018-09-07 | 深圳市朗驰欣创科技股份有限公司 | A kind of three-dimensional point cloud matching process and its system |
CN108734654A (en) * | 2018-05-28 | 2018-11-02 | 深圳市易成自动驾驶技术有限公司 | It draws and localization method, system and computer readable storage medium |
CN109754432A (en) * | 2018-12-27 | 2019-05-14 | 深圳市瑞立视多媒体科技有限公司 | A kind of automatic camera calibration method and optics motion capture system |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11830256B2 (en) | 2018-02-08 | 2023-11-28 | Genetec Inc. | Systems and methods for locating a retroreflective object in a digital image |
US11367267B2 (en) | 2018-02-08 | 2022-06-21 | Genetec Inc. | Systems and methods for locating a retroreflective object in a digital image |
US12067743B2 (en) | 2019-10-26 | 2024-08-20 | Genetec Inc. | Automated license plate recognition system and related method |
US11188776B2 (en) | 2019-10-26 | 2021-11-30 | Genetec Inc. | Automated license plate recognition system and related method |
CN110910493B (en) * | 2019-11-29 | 2021-05-14 | 广州极飞科技股份有限公司 | Three-dimensional reconstruction method and device and electronic equipment |
CN110910493A (en) * | 2019-11-29 | 2020-03-24 | 广州极飞科技有限公司 | Three-dimensional reconstruction method and device and electronic equipment |
CN110996087B (en) * | 2019-12-10 | 2021-04-02 | 腾讯科技(深圳)有限公司 | Video display method and device |
CN110996087A (en) * | 2019-12-10 | 2020-04-10 | 腾讯科技(深圳)有限公司 | Video display method and device |
CN111539299A (en) * | 2020-04-20 | 2020-08-14 | 上海曼恒数字技术股份有限公司 | Human motion capture method, device, medium and equipment based on rigid body |
CN111539299B (en) * | 2020-04-20 | 2024-03-01 | 上海曼恒数字技术股份有限公司 | Human motion capturing method, device, medium and equipment based on rigid body |
CN112150562A (en) * | 2020-10-20 | 2020-12-29 | 北京嘀嘀无限科技发展有限公司 | Camera calibration method, device, equipment and computer readable storage medium |
CN112270753A (en) * | 2020-11-09 | 2021-01-26 | 珠海格力智能装备有限公司 | Three-dimensional mapping method and device and electronic equipment |
CN114726996B (en) * | 2021-01-04 | 2024-03-15 | 北京外号信息技术有限公司 | Method and system for establishing a mapping between a spatial location and an imaging location |
CN114726996A (en) * | 2021-01-04 | 2022-07-08 | 北京外号信息技术有限公司 | Method and system for establishing a mapping between a spatial position and an imaging position |
CN113077519B (en) * | 2021-03-18 | 2022-12-09 | 中国电子科技集团公司第五十四研究所 | Multi-phase external parameter automatic calibration method based on human skeleton extraction |
CN113077519A (en) * | 2021-03-18 | 2021-07-06 | 中国电子科技集团公司第五十四研究所 | Multi-phase external parameter automatic calibration method based on human skeleton extraction |
CN113763481B (en) * | 2021-08-16 | 2024-04-05 | 北京易航远智科技有限公司 | Multi-camera visual three-dimensional map construction and self-calibration method in mobile scene |
CN113763481A (en) * | 2021-08-16 | 2021-12-07 | 北京易航远智科技有限公司 | Multi-camera visual three-dimensional map construction and self-calibration method in mobile scene |
CN113840159A (en) * | 2021-09-26 | 2021-12-24 | 北京沃东天骏信息技术有限公司 | Video processing method, device, computer system and readable storage medium |
US12125234B2 (en) | 2022-10-12 | 2024-10-22 | Genetec Inc. | Automated license plate recognition system and related method |
CN116588328A (en) * | 2023-05-18 | 2023-08-15 | 北京积加科技有限公司 | Helicopter period pitch-changing and collective pitch control device and method |
CN116588328B (en) * | 2023-05-18 | 2024-03-19 | 北京积加科技有限公司 | Helicopter period pitch-changing and collective pitch control device and method |
Also Published As
Publication number | Publication date |
---|---|
CN110458897B (en) | 2020-12-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110458897A (en) | Multi-cam automatic calibration method and system, monitoring method and system | |
CN110415342B (en) | Three-dimensional point cloud reconstruction device and method based on multi-fusion sensor | |
US11176704B2 (en) | Object pose estimation in visual data | |
JP6974873B2 (en) | Devices and methods for retrieving depth information from the scene | |
EP2870428B1 (en) | System and method for 3d measurement of the surface geometry of an object | |
US9947134B2 (en) | System and method for generating a dynamic three-dimensional model | |
US7825948B2 (en) | 3D video conferencing | |
JP5093053B2 (en) | Electronic camera | |
CN108154550A (en) | Face real-time three-dimensional method for reconstructing based on RGBD cameras | |
WO2018075053A1 (en) | Object pose based on matching 2.5d depth information to 3d information | |
CN109211103A (en) | Deduction system | |
US20070076090A1 (en) | Device for generating three dimensional surface models of moving objects | |
WO2019062056A1 (en) | Smart projection method and system, and smart terminal | |
CN101581874B (en) | Tele-immersion teamwork device based on multi-camera acquisition | |
CN111046725A (en) | Spatial positioning method based on face recognition and point cloud fusion of surveillance video | |
WO2022127181A1 (en) | Passenger flow monitoring method and apparatus, and electronic device and storage medium | |
KR20100090457A (en) | Non-intrusive 3d face data acquisition system and method thereof | |
Yang et al. | Seeing as it happens: Real time 3D video event visualization | |
KR101996907B1 (en) | Apparatus for tracking object | |
JP2003256804A (en) | Visual field video generating device and method, and visual field video generating program and recording medium with its program recorded | |
KR102494479B1 (en) | Augmented reality occlusion producing system using the data of the positioning space information aquired by drone | |
CN115223023B (en) | Human body contour estimation method and device based on stereoscopic vision and deep neural network | |
WO2018114581A1 (en) | Method and apparatus for constructing lighting environment representations of 3d scenes | |
CN211019070U (en) | Three-dimensional video acquisition system | |
KR20160071172A (en) | Panoramic three-dimensional map generation system using stereo camera. |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||