CN108111818A - Moving target active perception method and apparatus based on multiple-camera collaboration - Google Patents

Moving target active perception method and apparatus based on multiple-camera collaboration

Info

Publication number
CN108111818A
Authority
CN
China
Prior art keywords
camera
target
slave
picture
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711425735.9A
Other languages
Chinese (zh)
Other versions
CN108111818B (en)
Inventor
胡海苗
田荣朋
胡子昊
李波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Innovation Research Institute of Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201711425735.9A priority Critical patent/CN108111818B/en
Publication of CN108111818A publication Critical patent/CN108111818A/en
Application granted granted Critical
Publication of CN108111818B publication Critical patent/CN108111818B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G06F 18/24147: Classification techniques based on distances to closest patterns, e.g. nearest neighbour classification
    • G06V 10/462: Salient regional features, e.g. scale invariant feature transform [SIFT]
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/67: Focus control based on electronic image sensor signals
    • H04N 23/90: Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N 5/144: Movement detection
    • H04N 5/145: Movement estimation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present invention provides a moving-target active perception method and apparatus based on multi-camera collaboration. The method includes: calibrating the master camera picture against the slave camera pictures and establishing a position mapping relation; monitoring moving targets in the master camera picture in real time to obtain a candidate target set; selecting a candidate target according to target importance and, according to the position mapping relation, selecting a slave camera to track and photograph it; computing the lens azimuth angle and zoom magnification, adjusting the slave camera to aim at the candidate target region, and acquiring a high-quality image of the candidate target; analysing the candidate target's category from the high-quality image, confirming targets of a predetermined type as targets of interest, and placing them into the target-of-interest set. Using the position correspondence between master and slave cameras, the invention mobilizes slave cameras to obtain high-quality images of candidate targets in the master camera picture, extracts target image features, confirms the target category, and thereby achieves active confirmation of targets.

Description

Moving target active sensing method and device based on multi-camera cooperation
Technical Field
The invention relates to an imaging method and device in a multi-camera monitoring system, in particular to a target active sensing method and device based on multi-camera cooperation, and belongs to the field of video monitoring.
Background
Nowadays, various video monitoring devices are widely used in production and living environments. An important task of video monitoring is to discover and record objects in a scene and, further, to capture key information about their identity so as to assist subsequent identification. Image regions such as human faces, license plates, and vehicle inspection stickers can be used to determine a target's identity; they are the parts of the image that carry the target's unique descriptive information. The more of this unique descriptive information the camera captures, the easier it is to identify the target.
Existing video monitoring equipment identifies and discovers targets in a scene by acquiring video images of the monitored scene and analysing them. However, because surveillance scenes vary widely and targets appear in many positions and poses, imaging quality is poor when a target is far away or is seen from the side, and image-feature-based recognition performs badly on such targets. Moreover, existing monitoring devices use a passive imaging strategy: the equipment photographs the scene from a fixed position and cannot actively adjust the camera pose or imaging parameters to improve the result. Passive imaging equipment therefore cannot actively acquire a high-quality target image or the target's unique descriptive information, so the target cannot be effectively identified, leading to false alarms and missed detections in practice. It is therefore necessary to design an active sensing device with an active imaging capability to solve the problem of actively identifying targets.
One class of checkpoint (bayonet) camera systems installs monitoring equipment in areas where the pose of passing targets is constrained, avoiding the influence of target pose on imaging and solving the acquisition of unique target description information in such restricted scenes. In these systems a camera is installed at a location such as a doorway, with flash lamps, infrared illuminators, or similar auxiliary devices used to obtain high-quality images. The target images captured by such a system have high resolution and good quality, the target's key information is easy to extract, and identification accuracy is high. However, the system is constrained by scene requirements and can only be deployed at a small number of locations such as toll gates and building entrances, so its application scenarios are limited.
In the present method, the cameras cooperate: a slave camera is mobilized to track and snapshot a target to be confirmed that has been found by the master camera, and the target category is confirmed from the high-quality target image obtained by the snapshot. This satisfies the need for active target confirmation and overcomes the limited applicability of checkpoint cameras.
The invention designs a moving-target active sensing method based on multi-camera cooperation. Moving regions are detected in the master camera picture to obtain a candidate target set; using the camera linkage relation, a slave camera is mobilized to obtain a high-quality image of each candidate target; the high-quality target image is then analysed with a classifier, target image features are extracted, the target category is determined, and targets of a predetermined type are confirmed as targets of interest according to the classification result.
Disclosure of Invention
The problem to be solved by the invention is: for a candidate target detected by the master camera, mobilize a surrounding slave camera to acquire a high-quality image of the candidate target, extract target image features, analyse the target category, and confirm whether the candidate target is a target of interest.
The cameras used in the invention are divided into two types: a panoramic camera fixed relative to the monitored scene, and a PTZ camera with pan (rotation), tilt (pitch) and zoom functions.
The invention discloses a moving-target active sensing system, which comprises a fixed panoramic camera, a plurality of PTZ cameras, and a moving-target active sensing device. The panoramic camera is the master camera and is used to acquire the panoramic surveillance video; the PTZ cameras are slave cameras and are used to track and photograph targets and acquire high-quality target images. The moving-target active sensing device extracts targets to be confirmed from the master camera picture, mobilizes a slave camera to capture a high-quality target image, analyses the target category, and confirms the targets to be confirmed that belong to a predetermined type.
The invention discloses a moving-target active sensing method based on multi-camera cooperation, which is characterized by comprising the following steps:
(1) automatically calibrating the master camera and each slave camera by feature extraction and feature matching on their pictures, and establishing a position mapping relation;
(2) detecting moving regions in the master camera's field of view in real time according to a detection threshold, obtaining a candidate target set;
(3) selecting the candidate target of highest importance in the candidate target set according to an importance evaluation function, and, according to the position mapping relation, selecting a slave camera to track and photograph the selected candidate target;
(4) the slave camera computing the lens azimuth angle and zoom factor from the candidate target position and the master-slave position mapping relation, adjusting itself to aim at the candidate target region, and acquiring a high-quality image of the candidate target;
(5) extracting features of the high-quality target image, analysing and confirming the target category; according to the classification result, confirming targets belonging to a predetermined type as targets of interest and placing them into the target-of-interest set, while targets not belonging to the predetermined type are confirmed as non-interest targets and are not placed into the set;
(6) checking whether all candidate targets have been confirmed; if so, exiting, otherwise returning to step (3).
The moving-target active sensing method based on multi-camera cooperation is characterized in that step (1) comprises the following process:
1.1 Select any slave camera that has not yet been calibrated.
1.2 Manually adjust the slave camera's focal length to its minimum value and adjust the slave camera's lens direction until the slave camera's field of view maximally overlaps that of the master camera.
1.3 Extract Speeded-Up Robust Features (SURF) from the master camera picture and the slave camera picture, respectively.
1.4 Match the SURF feature points using the K-Nearest Neighbor (KNN) algorithm together with a brute-force search algorithm, obtaining the matching result GoodMatches.
1.5 From the matching result GoodMatches, compute the affine matrix between the master camera picture and the slave camera picture by the least-squares method, establish the position mapping relation, and complete the calibration of this master-slave pair.
1.6 Check whether any slave camera remains uncalibrated; if so, return to step 1.1, otherwise exit.
The moving-target active sensing method based on multi-camera cooperation is characterized in that, in step 1, the feature matching step uses the K-nearest-neighbor algorithm and a brute-force search to match the SURF feature points in the master camera picture and the slave camera picture. For each SURF feature point of the slave camera picture, the 3 feature points with the smallest Euclidean distance are searched in the master camera SURF feature point set using the KNN algorithm, and the results are recorded in a set Matches. The Euclidean distances of all matched SURF feature point pairs in Matches are computed; let the minimum distance be d. All point pairs in Matches whose distance is smaller than min(2d, minDist) form the set GoodMatches, which is the set of matched feature point pairs. Here minDist is a preset threshold that can be adjusted according to the actual situation, but the number of point pairs in GoodMatches should be no less than 15.
The moving-target active sensing method based on multi-camera cooperation is characterized in that, in step 1, the position mapping relation between the master camera and a slave camera consists of two parts: the correspondence between master camera picture coordinates and the slave camera, and the coordinate transformation relation between the master camera picture and the slave camera picture.
The correspondence between master camera picture coordinates and the slave camera is represented by a convex hull surrounding the matched feature points in the master camera picture: from the matched feature point pairs in GoodMatches, a convex hull enclosing all these feature points is computed in the master camera picture, and in step 3 the candidate targets falling inside this convex hull are assigned to that slave camera.
The coordinate transformation relation between the master camera picture and the slave camera picture is represented by an affine transformation: according to the image coordinate correspondence of the point pairs in the set GoodMatches, the affine transformation from the master camera picture to the slave camera picture is computed by the least-squares method.
The moving-target active sensing method based on multi-camera cooperation is characterized in that, in step 2, candidate targets in the master camera picture are detected by the frame-difference method, candidate targets in the master camera picture are tracked with the continuous adaptive mean-shift algorithm, and the real-time detection results for candidate targets have the following form:
[ObjectID,Time,PosX_Left,PosY_Left,PosX_Right,PosY_Right];
wherein:
ObjectID is the identification number of the candidate target,
Time is the time at which the candidate target appeared,
PosX_Left, PosY_Left, PosX_Right, PosY_Right are the time series of the coordinates of the top-left and bottom-right corners of the bounding box, respectively.
The moving-target active sensing method based on multi-camera cooperation is characterized in that, in step 3, the importance of a target is described by the following formula:

E = E_leave + α × E_wait

where E_leave is an evaluation function describing the time remaining before the target leaves the picture (the shorter this time, the larger the value of the function), E_wait is an evaluation function describing how long the target has gone without being captured (the longer the un-captured time, the larger the value), and α is an adjustable parameter: the larger α is, the more weight is given to the order in which targets entered the scene.
The moving-target active sensing method based on multi-camera cooperation is characterized in that, in step 3, the time before the target leaves the picture is estimated by a function of the picture size, the target's current position, its entry position, and its estimated velocity, where w, h are the width and height of the master camera picture, (x, y) is the target's current position, (x_0, y_0) is the target's position when it entered the picture, and (v_x, v_y) is an estimate of the target's velocity. The time computed by this function represents the time it would take the target, moving in a straight line at constant speed along its current motion direction, to reach the boundary of the master camera picture.
The moving-target active sensing method based on multi-camera cooperation is characterized in that, in step 4, the coordinates of the candidate target in the master camera picture are converted into relative coordinates in the slave camera's initial-position picture using the coordinate transformation relation between the master camera picture and the slave camera picture, and the relative coordinates in the slave camera's initial-position picture are then converted into angular coordinates of the slave camera lens direction according to the fisheye spherical projection rule. The slave camera focal length is estimated as follows: if the required maximum target length/width is l*, the focal length of the slave camera when the position mapping matrix was established is f, and the width and height of the candidate target in the slave camera picture are w and h, then the adjusted focal length can be derived as f' = f · l* / max(w, h).
drawings
Fig. 1 is a flowchart of a moving object active perception method based on multi-camera cooperation according to an embodiment of the invention.
Fig. 2 is a system configuration diagram of a moving object active sensing device based on multi-camera cooperation according to an embodiment of the invention.
Fig. 3 is a flowchart of master-slave camera calibration based on a multi-camera cooperative moving object active sensing method according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
The system configuration of the moving-target active sensing apparatus based on multi-camera cooperation according to one embodiment of the invention is shown in fig. 2. The apparatus realizing the moving-target active sensing method based on multi-camera cooperation comprises: at least two cameras, which implement the master camera and slave camera working modes, and a moving-target active sensing device. The master/slave working mode in the invention specifically means that a slave camera is mobilized to actively sense a target found by the master camera and to obtain a high-quality video image of it.
The cameras used in the present invention fall into two types: a panoramic camera whose monitored scene is fixed, and a PTZ camera with pan, tilt, and zoom functions. In one embodiment of the invention the master and slave cameras comprise one fixed-field-of-view panoramic camera and several PTZ cameras; the panoramic camera serving as the master camera has a large field of view that covers at least a large part of the monitored scene. In one embodiment according to the invention, the master camera is a fixed bullet (box) camera. In another embodiment according to the invention, the master camera is a PTZ camera with a fixed lens orientation.
In one embodiment according to the invention, the monitored area covered by each slave camera overlaps the area covered by the master camera but does not overlap the areas covered by the other slave cameras. In yet another embodiment according to the invention, the monitored area covered by one slave camera partially overlaps the areas covered by other slave cameras.
The moving target active sensing device collects video images of a master camera and a plurality of slave cameras and processes the collected video images.
The active sensing device for the moving object according to one embodiment of the invention is arranged on a Personal Computer (PC), an embedded processing box or a board card.
In one embodiment according to the invention, the hardware carrying the moving object active perception device according to the invention is integrated within the main camera hardware. In yet another embodiment according to the present invention, the moving object active perception device according to the present invention is provided on a computer connecting the master camera and the slave camera through a network.
As shown in fig. 2, the active sensing device for a moving target according to an embodiment of the present invention includes an image capturing unit, a candidate target detecting unit, a target selecting unit, a position mapping unit, and a target tracking confirming unit.
The image acquisition unit acquires images of the master camera and the slave camera. The image acquisition unit transmits the image from the main camera to the candidate target detection unit for detecting the candidate target; the image acquisition unit also transmits the images from the cameras to a target tracking and confirming unit for extracting target characteristics, analyzing and confirming target categories.
The candidate target detection unit receives the image of the main camera from the image acquisition unit, monitors the motion area in the image in real time according to the detection threshold setting to obtain a set of candidate targets, and transmits the set to the target selection unit. According to one embodiment of the present invention, the candidate target detecting unit extracts a motion region in the main camera picture using a frame difference method and tracks the candidate target using a continuous adaptive mean shift algorithm.
The target selection unit receives the candidate target set from the candidate target detection unit, selects the candidate target with the highest importance at the current moment in the candidate target set according to the target importance evaluation function, and sends the selected candidate target to the target position mapping unit.
The position mapping unit receives the selected candidate target, selects a slave camera according to the coordinate mapping relation between the master camera and the slave cameras, and sends the candidate target and its coordinates in the master camera picture to the selected slave camera. The selected slave camera, from the received target coordinates, computes the lens angle and focal length adjustments, aims at the candidate target, photographs it, and transmits the high-quality target image to the target tracking confirmation unit. In addition, during start-up of the moving-target active sensing device, the position mapping unit controls the master camera and the slave cameras to complete the automatic calibration process and records the position mapping relation.
The target tracking confirmation unit receives the high-quality target image from the selected slave camera, extracts image features from it, and classifies the target's image features with a classifier to obtain the target category; targets belonging to a predetermined type are confirmed as targets of interest and placed into the target-of-interest set, while targets not belonging to the predetermined type are confirmed as non-interest targets and are not placed into the set.
Fig. 1 shows an active perception method for a moving object based on multi-camera coordination according to an embodiment of the present invention, which includes 5 steps:
establishing a position mapping relation based on image feature matching;
acquiring a candidate target set based on moving target detection;
selecting a candidate target with the highest importance in the candidate target set according to an importance evaluation function, and selecting a slave camera for tracking confirmation according to a position mapping relation between the master camera and the slave camera;
according to the position mapping relation of the master camera and the slave camera, the slave camera calculates the lens position and focal length adjustment amount, adjusts the lens of the slave camera to align the candidate target position, and obtains a high-quality image;
and extracting target characteristics by using the target high-quality image, and analyzing and confirming the target category.
The above 5 steps in one embodiment according to the present invention are explained in turn.
(1) Image feature matching-based position mapping relation establishment
As shown in fig. 3, according to the method of the present invention, a position mapping relationship is established based on image feature matching, and the master camera and each slave camera are calibrated in a feature extraction and feature matching manner to establish a position mapping relationship. The calibration in the invention refers to a process of establishing coordinate mapping of the same object in the picture of the master camera and the picture of the slave camera.
The position mapping relation between the master camera and the slave camera comprises two parts: the picture coordinates of the master camera and the slave camera correspond to each other; and the coordinate conversion relation between the main camera picture and the slave camera picture.
The coordinate mapping relation between the master camera picture and the slave camera picture is described by an affine transformation. The master camera and each slave camera are calibrated by extracting Speeded-Up Robust Features (SURF) points from both pictures and using the positional correspondence of similar feature points in the master camera picture and the slave camera picture to establish the position mapping relation. In one embodiment according to the invention, the mapping between master camera picture coordinates and slave camera target position coordinates is represented as an affine transformation matrix.
An affine transformation is the composition of two kinds of mappings, a linear transformation and a translation. In the field of image processing, affine transformations are suitable for describing image translation, rotation, scaling and reflection (mirroring). Let the affine transformation be M; M can be represented as the 2×3 matrix

M = [ a1  a2  tx ]
    [ a3  a4  ty ]                               (1)

so that a point (x, y) in the master camera picture maps to (x', y') with x' = a1·x + a2·y + tx and y' = a3·x + a4·y + ty.
The coordinate correspondence of the same scene in the master camera picture and the slave camera picture can be described by this affine transformation. Given several known matching point pairs in the master camera picture and the slave camera picture, substituting them into equation (1) and solving for the parameters a1~a4, tx and ty by the least-squares method yields the affine transformation between the two pictures, i.e. the position mapping matrix in the present invention.
In one embodiment according to the invention, the matching point pairs are obtained by extracting SURF feature points from the initial-position pictures of the master camera and the slave camera and matching them. The feature matching process searches by brute force for similar SURF feature points in the master camera picture and the slave camera picture. First, SURF feature points are extracted from the master camera picture and the slave camera picture. For each SURF feature point of the slave camera picture, the 3 feature points with the smallest Euclidean distance are found in the master camera picture's SURF feature point set using the K-Nearest Neighbor (KNN) algorithm, and the results are recorded in a set Matches. The Euclidean distances of all matched SURF feature point pairs in Matches are computed; let the minimum distance be d. All point pairs in Matches whose distance is smaller than min(2d, minDist) form the set GoodMatches, which is the output set of matched feature point pairs. Here minDist is a preset threshold that can be adjusted to the actual situation. In one embodiment according to the invention the threshold minDist may be set to 1000.
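By way of illustration only, the following sketch shows how this matching-and-calibration step could be realised with OpenCV. It assumes the contrib SURF implementation (cv2.xfeatures2d.SURF_create) is available and uses cv2.estimateAffine2D for the least-squares affine fit; the function name calibrate_pair and its parameters are assumptions, not part of the patent.

```python
# Sketch of master-slave calibration: SURF + KNN matching + distance filter + affine fit.
# Assumes opencv-contrib-python (for SURF). Names are illustrative, not from the patent.
import cv2
import numpy as np

def calibrate_pair(master_img, slave_img, min_dist=1000, min_pairs=15):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_m, des_m = surf.detectAndCompute(master_img, None)   # master picture features
    kp_s, des_s = surf.detectAndCompute(slave_img, None)    # slave picture features

    # Brute-force KNN search: for each slave feature, the 3 nearest master features.
    bf = cv2.BFMatcher(cv2.NORM_L2)
    matches = [m for knn in bf.knnMatch(des_s, des_m, k=3) for m in knn]

    # Keep pairs whose distance is below min(2*d_min, min_dist) -> GoodMatches.
    d = min(m.distance for m in matches)
    good = [m for m in matches if m.distance < min(2 * d, min_dist)]
    if len(good) < min_pairs:
        raise RuntimeError("not enough matched pairs for calibration")

    src = np.float32([kp_m[m.trainIdx].pt for m in good])   # master picture coords
    dst = np.float32([kp_s[m.queryIdx].pt for m in good])   # slave picture coords

    # Least-squares affine transform (2x3 matrix M) from master picture to slave picture.
    M, _ = cv2.estimateAffine2D(src, dst)
    return M, src
```

The returned master-picture coordinates src can also serve as input for the convex hull described next.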
In an embodiment of the present invention, the position mapping unit selects, according to the position correspondence, which slave camera will track and photograph a target to be confirmed. The position correspondence is computed from the matched feature point pairs in GoodMatches: the position mapping unit computes, in the master camera picture, a convex hull that contains all the matched feature points from GoodMatches, and in a subsequent step assigns candidate targets falling inside this convex hull to the corresponding slave camera for tracking and photographing.
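A minimal sketch of this convex-hull assignment test, using cv2.convexHull and cv2.pointPolygonTest; the helper names build_hull and assign_slave and the per-slave bookkeeping are illustrative assumptions.

```python
# Sketch: build each slave camera's convex hull in the master picture and
# assign a candidate target to the first hull containing its centre point.
import cv2
import numpy as np

def build_hull(master_points):
    """master_points: Nx2 float32 array of matched feature coords in the master picture."""
    return cv2.convexHull(np.float32(master_points))

def assign_slave(hulls, target_center):
    """hulls: {slave_id: hull}; target_center: (x, y) in the master picture."""
    for slave_id, hull in hulls.items():
        # >= 0 means inside or on the hull boundary.
        if cv2.pointPolygonTest(hull, tuple(map(float, target_center)), False) >= 0:
            return slave_id
    return None   # no slave camera covers this target
```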
(2) Candidate object extraction and candidate object set
A moving object is the most interesting kind of target in video surveillance; therefore candidate target extraction extracts the moving objects in the scene of interest. Motion regions in the scene, called target potential regions, may contain passing candidate targets. In the present invention a target obtained by motion-region detection is called a candidate target, and the collection of all candidate targets constitutes the candidate target set. For each candidate target, the set records the time at which it entered the picture and the time series of its bounding-box coordinates.
In the invention, the candidate target detection unit obtains candidate targets in the master camera picture using the frame-difference method, tracks the candidate targets with the continuous adaptive mean-shift (Camshift) algorithm, and records each candidate target's position sequence into the candidate target set.
In one embodiment according to the present invention, the set of candidate targets includes, but is not limited to:
[ObjectID,Time,PosX_Left,PosY_Left,PosX_Right,PosY_Right];
where ObjectID indicates the candidate object number, Time indicates when the candidate object appears, and PosX _ Left, PosY _ Left, PosX _ Right, and PosY _ Right indicate the coordinates of the top Left corner and bottom Right corner of the bounding box, respectively.
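As an illustration of this detection step, the sketch below combines OpenCV frame differencing with one Camshift update and emits records in the [ObjectID, Time, ...] layout shown above; the threshold values and helper names are assumptions.

```python
# Sketch: frame-difference detection of motion regions in the master picture,
# followed by one Camshift update for a candidate target. Thresholds are illustrative.
import cv2
import time

def detect_motion_boxes(prev_gray, gray, thresh=25, min_area=200):
    diff = cv2.absdiff(prev_gray, gray)                       # frame difference
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

def track_camshift(frame, box):
    """One Camshift update for a candidate target; box is (x, y, w, h)."""
    x, y, w, h = box
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    roi_hist = cv2.calcHist([hsv[y:y+h, x:x+w]], [0], None, [16], [0, 180])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, new_box = cv2.CamShift(back_proj, box, crit)
    return new_box

def make_record(object_id, box):
    """Candidate-target record in the [ObjectID, Time, PosX/PosY ...] layout."""
    x, y, w, h = box
    return [object_id, time.time(), x, y, x + w, y + h]
```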
(3) Candidate target selection based on target importance evaluation function
Target importance evaluation comprehensively assesses a target's importance from the candidate target's position, motion direction and speed, and the length of time it has gone un-sensed since entering the monitored scene. In an embodiment according to the invention, the evaluation principle is: the faster the motion, the more the motion direction points toward the picture edge, and the longer the target has gone un-sensed since entering the scene, the higher the target's importance. The target selection unit sorts the candidates by importance and selects the most important target for sensing.
According to one embodiment of the invention, the importance evaluation function has the form

E = E_leave + α × E_wait

where E_leave is an evaluation function describing the time remaining before the target leaves the picture (the shorter this time, the larger its value), E_wait is an evaluation function describing how long the target has gone without being captured (the longer the un-captured time, the larger its value), and α is an adjustable parameter: the larger α is, the more weight is given to the order in which targets entered the scene.
According to one embodiment of the invention, the time before the target leaves the picture is estimated by a function of the master picture size, the target's current position, its entry position, and its estimated velocity, where w, h are the width and height of the master camera picture, (x, y) is the target's current position, (x_0, y_0) is the target's position when it entered the picture, and (v_x, v_y) is an estimate of the target's velocity. The time computed by this function represents the time it would take the target, moving in a straight line at constant speed along its current motion direction, to reach the boundary of the monitoring picture.
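The closed-form expression for the leave-time estimate appears only as a figure in the publication, so the sketch below uses a straightforward constant-velocity time-to-boundary estimate that is consistent with the description; the helper names and the exact form of E_leave are assumptions.

```python
# Sketch of the importance score E = E_leave + alpha * E_wait.
# The exact E_leave formula is not reproduced in the text; this uses a simple
# constant-velocity time-to-boundary estimate consistent with the description.
import math

def time_to_leave(w, h, x, y, vx, vy):
    """Time for a target at (x, y) moving at (vx, vy) to reach the picture border."""
    tx = ((w - x) / vx) if vx > 0 else (x / -vx) if vx < 0 else math.inf
    ty = ((h - y) / vy) if vy > 0 else (y / -vy) if vy < 0 else math.inf
    return min(tx, ty)

def importance(w, h, x, y, x0, y0, elapsed, wait_time, alpha=1.0):
    """elapsed: seconds since the target entered at (x0, y0); wait_time: un-captured time."""
    vx, vy = (x - x0) / max(elapsed, 1e-6), (y - y0) / max(elapsed, 1e-6)
    e_leave = 1.0 / (1.0 + time_to_leave(w, h, x, y, vx, vy))  # shorter time -> larger value
    e_wait = wait_time                                          # longer wait -> larger value
    return e_leave + alpha * e_wait
```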
And for the candidate target with the highest importance, the position mapping unit selects the slave camera according to the position mapping relation and sends the target coordinate sequence recorded by the candidate target set to the slave camera.
(4) Slave camera control parameter calculation based on position mapping relation between master camera and slave camera
According to one embodiment of the present invention, the position mapping matrix M generated in the initialization step is used for coordinate conversion between the cameras. The candidate target's centre point (x, y) in the master camera picture is converted through the master-slave position mapping matrix M into the relative coordinate (x', y') with respect to the slave camera's initial picture, i.e. (x', y') = M · (x, y, 1)^T.
the relative coordinates (x ', y') from the center of the initial view of the camera are the two-dimensional pixel coordinates in the view of the slave camera, which the camera cannot adjust to, and needs to be converted to the azimuth coordinates of the slave camera. And the slave camera converts the relative coordinates of the initial position picture into the angular coordinates in the lens direction of the slave camera according to the fisheye spherical projection rule.
According to one embodiment of the invention, given the target's relative coordinates (x, y) with respect to the centre of the slave camera's initial picture, the slave camera picture width and height (w, h), and the horizontal and vertical field-of-view angles of the lens, the pan and tilt angle adjustments of the slave camera are computed from these quantities according to the fisheye spherical projection rule.
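The exact fisheye spherical projection expression is likewise given only as a figure, so the following sketch uses a simple pinhole-style approximation (normalised pixel offset scaled by the field of view) purely as an illustration; fov_x_deg, fov_y_deg and the function name are assumptions.

```python
# Sketch: convert a pixel offset in the slave camera's initial picture into pan/tilt
# adjustments. This is a pinhole-style approximation, not the patent's exact
# fisheye spherical projection formula (which is given only as a figure).
import math

def pixel_to_pan_tilt(dx, dy, w, h, fov_x_deg, fov_y_deg, f_ratio=1.0):
    """dx, dy: target offset from picture centre in pixels; w, h: picture size;
    fov_x_deg, fov_y_deg: field of view at the calibration focal length;
    f_ratio: current focal length divided by calibration focal length."""
    # At a longer focal length the same pixel offset corresponds to a smaller angle.
    pan = math.degrees(math.atan((dx / (w / 2.0)) * math.tan(math.radians(fov_x_deg / 2.0)) / f_ratio))
    tilt = math.degrees(math.atan((dy / (h / 2.0)) * math.tan(math.radians(fov_y_deg / 2.0)) / f_ratio))
    return pan, tilt
```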
in one embodiment according to the present invention, a Bounding Box between master and slave (Bounding Box) size transformation is estimated by using coordinate transformation target top left and bottom right vertex coordinates. And the size of the bounding box enclosed by the converted upper left vertex and the converted lower right vertex is the estimated size of the target.
In one embodiment according to the invention, the change in target size caused by the slave camera adjusting its focal length (field of view) is computed from the inverse proportionality between field of view and focal length (equivalently, the target's image size is proportional to the focal length). The method and/or apparatus is required to acquire snapshot images of a fixed target size during operation, so given a required snapshot size the slave camera's focal length can be estimated. Given a required maximum target length/width of l* pixels, a slave camera focal length of f when the position mapping relation was established, and an estimated candidate target size of (w, h) in the slave camera picture, the adjusted focal length can be deduced as f' = f · l* / max(w, h).
and adjusting the camera according to the camera direction and the focal length adjustment amount calculated by the camera according to the method, and performing continuous tracking shooting for a period of time after the camera is aligned with the target to obtain a high-quality target image.
(5) Extracting target features using target high quality images, analyzing and confirming target classes
In one embodiment according to the present invention, the target tracking confirmation unit receives a target high-quality image photographed from a camera and extracts target features, analyzes a target class using a classifier, and updates the class of a candidate target according to a target classification result. In one embodiment according to the invention, objects of which the type belongs to a predetermined setting type are determined to be objects of interest (interested objects) and put into the set of objects of interest, and objects not belonging to the predetermined setting type are determined to be non-objects of interest and not put into the set of objects of interest.
In an embodiment according to the present invention, the extracted features are features capable of identifying a type of the target, and mainly refer to features such as a human face, a trunk, four limbs, or an appearance shape of a motor vehicle, wheels, a license plate region, and the like. The class of candidate objects is confirmed by such distinctive features. The classifier gives a classification result of the target by analyzing the target features in the picture.
In one embodiment according to the invention, every frame of the high-quality images shot by the slave camera is classified, the per-frame classification results are aggregated, and the class with the highest overall probability is selected as the target's classification result. Because the slave camera's rotation speed is limited, the first few frames of video may be strongly blurred or may fail to contain the target, which harms the classification result. In one embodiment according to the invention, low-quality images may therefore be discarded during target classification to avoid this adverse effect.
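A small sketch of this per-frame voting with a quality gate for blurred frames; the variance-of-Laplacian blur measure and the classify callback are assumptions used for illustration.

```python
# Sketch: aggregate per-frame class predictions, discarding blurred frames.
# The blur gate (variance of Laplacian) and `classify` callback are illustrative.
from collections import Counter
import cv2

def is_sharp(image, blur_thresh=100.0):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= blur_thresh

def confirm_target_class(frames, classify, blur_thresh=100.0):
    """frames: list of BGR images from the slave camera; classify(img) -> class label."""
    votes = Counter(classify(f) for f in frames if is_sharp(f, blur_thresh))
    if not votes:
        return None                      # no usable frame
    return votes.most_common(1)[0][0]    # class with the highest vote count
```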
The foregoing disclosure discloses only specific embodiments of the invention. Various changes and modifications can be made by those skilled in the art based on the basic technical concept of the present invention without departing from the scope of the claims of the present invention.

Claims (13)

1. A moving target active perception method based on multi-camera cooperation is characterized by comprising the following steps:
A) automatically calibrating the master camera and the slave camera by means of feature extraction and feature matching according to the pictures of the master camera and the slave camera, establishing a position mapping relation,
B) detecting a plurality of motion areas in the field of view of the main camera in real time according to the setting of a detection threshold value to obtain a set of candidate targets,
C) selecting the candidate target with the highest importance in the candidate target set according to the importance evaluation function, selecting the corresponding slave camera for tracking shooting according to the position mapping relation,
D) according to the mapping relation between the candidate target position and the position between the master camera and the slave camera, the slave camera calculates the lens azimuth angle and the zoom multiple, adjusts the slave camera to be aligned with the candidate target area, acquires the high-quality image of the candidate target,
E) extracting the characteristics of high-quality images of the targets, analyzing and confirming the target types, confirming the targets belonging to the preset setting types as attention targets according to the target classification results, putting the attention targets into an attention target set, confirming the targets not belonging to the preset setting types as non-attention targets, and not putting the attention target set.
2. The active perception method for moving objects based on multi-camera collaboration as claimed in claim 1, wherein the step a) includes:
A1) any slave camera that is not calibrated is selected,
A2) adjusting the focal length of the slave camera to the minimum value, adjusting the lens direction of the slave camera until the slave camera and the master camera have the maximized overlapped visual field,
A3) the accelerated robust features of the master and slave camera views are extracted separately,
A4) matching the accelerated robust feature points by using a K-Nearest Neighbor (KNN) algorithm and a brute-force search algorithm to obtain a matching result GoodMatches,
A5) calculating an affine matrix between the pictures of the master camera and the slave camera by a least-squares method according to the matching result GoodMatches, to finish the calibration of the master camera and the slave camera,
A6) judging whether any slave camera remains uncalibrated; if so, returning to step A1), otherwise exiting. And the moving-target active sensing method based on multi-camera cooperation further comprises the following step:
F) and C), confirming whether the confirmation of all the candidate targets is finished, if so, exiting, otherwise, returning to the step C).
3. The active perception method of moving objects based on multi-camera collaboration as claimed in claim 1, wherein:
the characteristic matching operation in the step A) uses a K nearest neighbor algorithm and a brute force search algorithm to match accelerated robust characteristic points in the pictures of the master camera and the slave camera,
for each acceleration robust feature point of the auxiliary camera picture, searching 3 feature points with the shortest Euclidean distance in the acceleration robust feature point set of the main camera by using a K nearest neighbor algorithm, recording the result into a set of Matches,
calculating Euclidean distances of all accelerated robust feature point pairs in a set of Matches, taking the minimum distance as d, and taking all point pairs with the distances smaller than min (2d, minDist) in the set of Matches to form a set of GoodMatches, wherein the set of GoodMatches is a matching feature point pair set, and minDist is a preset threshold value and can be adjusted according to actual conditions, but the number of the point pairs in the set of GoodMatches is not less than 15.
4. The active perception method of moving objects based on multi-camera collaboration as claimed in claim 1, wherein:
in step A), the position mapping relationship between the master camera and the slave camera includes two parts: the correspondence between the picture coordinates of the master camera and the slave camera; and the coordinate transformation relation between the pictures of the master camera and the slave camera,
the correspondence between the master camera picture coordinates and the slave camera is represented by a convex hull surrounding the matched feature points in the master camera picture,
from the pairs of matching feature points in the set GoodMatches, a convex hull is computed in the master camera view that can enclose all the feature points, and in step C, a candidate object falling in the convex hull is assigned to the slave camera.
The master camera and slave camera view coordinate conversion relationship is represented by affine transformation,
and (3) according to the corresponding relation of the image coordinate positions of the point pairs in the set GoodMatches, calculating affine transformation from the picture of the master camera to the picture of the slave camera by using a least square method.
5. The active perception method of moving objects based on multi-camera collaboration as claimed in claim 1, wherein:
in the step B), a candidate target in the main camera picture is detected by using a frame difference method,
a candidate target in the main camera view is tracked using a continuous adaptive mean shift algorithm,
and the results of the real-time detection of the candidate targets have the following form:
[ObjectID,Time,PosX_Left,PosY_Left,PosX_Right,PosY_Right];
wherein:
ObjectID indicates the number of candidate objects,
time denotes the Time of occurrence of the candidate object,
PosX _ Left, PosY _ Left, PosX _ Right, PosY _ Right represent the time series of coordinates of the upper Left and lower Right corners of the bounding box, respectively.
6. The active perception method for moving objects based on multi-camera coordination according to claim 1, characterized in that in the step C),
the importance of the target is characterized using the following formula:
E = E_leave + α × E_wait
wherein,
E_leave is an evaluation function describing the time for the target to leave the picture; the shorter the time for the target to leave the picture, the larger the value of the function,
E_wait is an evaluation function describing the waiting time of the target in the target queue; the longer the un-captured time, the larger the value of the function,
α is an adjustable parameter; the larger α is, the more weight is given to the order in which targets entered,
the time that the target leaves the frame is characterized by the following function:
wherein,
w and h are the width and height of the main camera image,
(x, y) is the current position of the target,
(x_0, y_0) is the position of the target when it enters the picture,
(v_x, v_y) is an estimate of the target's movement velocity,
the time characterized by the above equation represents the time for the object to move straight to the boundary of the picture at a uniform velocity in the current motion direction in the main camera picture.
7. The moving-target active sensing method based on multi-camera cooperation according to claim 1, wherein in step D), the slave camera calculates the angular coordinate of its lens direction and its focal length using the position mapping relationship between the master camera and the slave camera generated in step A),
the slave camera lens direction is calculated as follows:
converting the coordinates of the candidate target in the master camera picture into relative coordinates on the initial-position picture of the slave camera according to the coordinate mapping relation, and then converting the relative coordinates on the initial-position picture of the slave camera into the angular coordinates of the lens direction of the slave camera according to the fisheye spherical projection rule,
The camera focal length is calculated as follows:
if the required maximum target length/width is l* pixels, the focal length of the slave camera when the position mapping relationship was established is f, and the width and height of the candidate target in the picture of the slave camera are w and h, then the adjusted focal length can be deduced as f' = f · l* / max(w, h).
8. an active target sensing apparatus, comprising:
the image acquisition unit is used for acquiring video images of the master camera and the slave camera;
the candidate target detection unit is used for extracting candidate targets from the video image of the main camera to form a candidate target set;
the target selection unit is used for selecting a candidate target with the highest importance at the current moment in the candidate target set;
the position mapping unit is used for establishing a position mapping relation between the master camera and the slave camera, selecting the candidate target selected by shooting of the slave camera and sending the position information of the candidate target to the slave camera;
and a target tracking confirming unit for analyzing the target category according to the target high-quality image shot from the camera, confirming the target belonging to the preset setting type as the attention target, and putting the attention target set.
9. The active target perception device of claim 8, wherein the location mapping unit matches the accelerated robust feature points in the master and slave camera views using a K-nearest neighbor algorithm and a brute force search algorithm,
for each acceleration robust feature point of the auxiliary camera picture, searching 3 feature points with the shortest Euclidean distance in the acceleration robust feature point set of the main camera by using a K nearest neighbor algorithm, recording the result into a set of Matches,
calculating Euclidean distances of all accelerated robust feature point pairs in a set of Matches, taking the minimum distance as d, and taking all point pairs with the distances smaller than min (2d, minDist) in the set of Matches to form a set of GoodMatches, wherein the set of GoodMatches is a matching feature point pair set, and minDist is a preset threshold value and can be adjusted according to actual conditions, but the number of the point pairs in the set of GoodMatches is not less than 15.
10. The active target perception device of claim 8, wherein the position mapping relationship between the master camera and the slave camera in the position mapping unit includes two parts: the picture coordinates of the master camera and the slave camera correspond to each other; the coordinate transformation relation between the pictures of the master camera and the slave camera,
the correspondence between the master camera picture coordinates and the slave camera is represented by a convex hull surrounding the matched feature points in the master camera picture,
from the pairs of matching feature points in GoodMatches, a convex hull is computed in the master camera view that can enclose all the feature points, and in step C, the candidate targets that fall within the convex hull are assigned to the slave camera.
The master camera and slave camera view coordinate conversion relationship is represented by affine transformation,
and (3) according to the corresponding relation of the image coordinate positions of the point pairs in the set GoodMatches, calculating affine transformation from the picture of the master camera to the picture of the slave camera by using a least square method.
11. The active target perception device of claim 8, wherein the candidate target detection unit detects the candidate target in the main camera view using a frame difference method,
a candidate target in the main camera view is tracked using a continuous adaptive mean shift algorithm,
and the results of the real-time detection of the candidate targets have the following form:
[ObjectID,Time,PosX_Left,PosY_Left,PosX_Right,PosY_Right];
wherein:
ObjectID indicates the number of candidate objects,
time denotes the Time of occurrence of the candidate object,
PosX _ Left, PosY _ Left, PosX _ Right, PosY _ Right represent the time series of coordinates of the upper Left and lower Right corners of the bounding box, respectively.
12. The active target perception device of claim 8, wherein the target selection unit characterizes the importance of the target using the following formula:
E = E_leave + α × E_wait
wherein,
E_leave is an evaluation function describing the time for the target to leave the picture; the shorter the time for the target to leave the picture, the larger the value of the function,
E_wait is an evaluation function describing the waiting time of the target in the target queue; the longer the un-captured time, the larger the value of the function,
α is an adjustable parameter; the larger α is, the more weight is given to the order in which targets entered,
the time that the target leaves the frame is characterized by the following function:
wherein,
w and h are the width and height of the main camera image,
(x, y) is the current position of the target,
(x_0, y_0) is the position of the target when it enters the picture,
(v_x, v_y) is an estimate of the target's movement velocity,
the time characterized by the above equation represents the time for the object to move straight to the boundary of the picture at a uniform velocity in the current motion direction in the main camera picture.
13. The active target sensing apparatus of claim 8 or 11, wherein, in the target tracking confirmation unit, the slave camera calculates the angular coordinate of its lens direction and its focal length using the position mapping relationship between the master camera and the slave camera in the position mapping unit,
the slave camera lens direction is calculated as follows:
converting the coordinates of the candidate target in the master camera picture into relative coordinates on the initial-position picture of the slave camera according to the coordinate mapping relation, and then converting the relative coordinates on the initial-position picture of the slave camera into the angular coordinates of the lens direction of the slave camera according to the fisheye spherical projection rule,
The camera focal length is calculated as follows:
if the required maximum target length/width is l* pixels, the focal length of the slave camera when the position mapping relationship was established is f, and the width and height of the candidate target in the picture of the slave camera are w and h, then the adjusted focal length can be deduced as f' = f · l* / max(w, h).
CN201711425735.9A 2017-12-25 2017-12-25 Moving target actively perceive method and apparatus based on multiple-camera collaboration Active CN108111818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711425735.9A CN108111818B (en) 2017-12-25 2017-12-25 Moving target actively perceive method and apparatus based on multiple-camera collaboration


Publications (2)

Publication Number Publication Date
CN108111818A true CN108111818A (en) 2018-06-01
CN108111818B (en) 2019-05-03

Family

ID=62213191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711425735.9A Active CN108111818B (en) 2017-12-25 2017-12-25 Moving target actively perceive method and apparatus based on multiple-camera collaboration

Country Status (1)

Country Link
CN (1) CN108111818B (en)

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109377518A (en) * 2018-09-29 2019-02-22 佳都新太科技股份有限公司 Target tracking method, device, target tracking equipment and storage medium
CN109522846A (en) * 2018-11-19 2019-03-26 深圳博为教育科技有限公司 One kind is stood up monitoring method, device, server and monitoring system of standing up
CN110059641A (en) * 2019-04-23 2019-07-26 重庆工商大学 Depth birds recognizer based on more preset points
CN110177256A (en) * 2019-06-17 2019-08-27 北京影谱科技股份有限公司 A kind of tracking video data acquisition methods and device
CN110176039A (en) * 2019-04-23 2019-08-27 苏宁易购集团股份有限公司 A kind of video camera adjusting process and system for recognition of face
CN110191324A (en) * 2019-06-28 2019-08-30 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium
CN110430395A (en) * 2019-07-19 2019-11-08 苏州维众数据技术有限公司 Video data AI processing system and processing method
CN110493569A (en) * 2019-08-12 2019-11-22 苏州佳世达光电有限公司 Monitoring target tracking shooting method and system
WO2020029921A1 (en) * 2018-08-07 2020-02-13 华为技术有限公司 Monitoring method and device
CN110881117A (en) * 2018-09-06 2020-03-13 杭州海康威视数字技术股份有限公司 Inter-picture area mapping method and device and multi-camera observation system
CN111131697A (en) * 2019-12-23 2020-05-08 北京中广上洋科技股份有限公司 Multi-camera intelligent tracking shooting method, system, equipment and storage medium
CN111179305A (en) * 2018-11-13 2020-05-19 晶睿通讯股份有限公司 Object position estimation method and object position estimation device
CN111354011A (en) * 2020-05-25 2020-06-30 江苏华丽智能科技股份有限公司 Multi-moving-target information capturing and tracking system and method
CN111541851A (en) * 2020-05-12 2020-08-14 南京甄视智能科技有限公司 Face recognition equipment accurate installation method based on unmanned aerial vehicle hovering survey
CN111612812A (en) * 2019-02-22 2020-09-01 富士通株式会社 Target detection method, target detection device and electronic equipment
CN111684458A (en) * 2019-05-31 2020-09-18 深圳市大疆创新科技有限公司 Target detection method, target detection device and unmanned aerial vehicle
CN111698467A (en) * 2020-05-08 2020-09-22 北京中广上洋科技股份有限公司 Intelligent tracking method and system based on multiple cameras
CN111815722A (en) * 2020-06-10 2020-10-23 广州市保伦电子有限公司 Double-scene matting method and system
CN111866392A (en) * 2020-07-31 2020-10-30 Oppo广东移动通信有限公司 Shooting prompting method and device, storage medium and electronic equipment
CN111918023A (en) * 2020-06-29 2020-11-10 北京大学 Monitoring target tracking method and device
CN112215048A (en) * 2019-07-12 2021-01-12 中国移动通信有限公司研究院 3D target detection method and device and computer readable storage medium
CN112308924A (en) * 2019-07-29 2021-02-02 浙江宇视科技有限公司 Method, device and equipment for calibrating camera in augmented reality and storage medium
CN112492261A (en) * 2019-09-12 2021-03-12 华为技术有限公司 Tracking shooting method and device and monitoring system
CN112767452A (en) * 2021-01-07 2021-05-07 北京航空航天大学 Active sensing method and system for camera
CN112954188A (en) * 2019-12-10 2021-06-11 李思成 Human eye perception imitating active target snapshot method and device
CN113179371A (en) * 2021-04-21 2021-07-27 新疆爱华盈通信息技术有限公司 Shooting method, device and snapshot system
CN113190013A (en) * 2018-08-31 2021-07-30 创新先进技术有限公司 Method and device for controlling terminal movement
CN113518174A (en) * 2020-04-10 2021-10-19 华为技术有限公司 Shooting method, device and system
CN113792715A (en) * 2021-11-16 2021-12-14 山东金钟科技集团股份有限公司 Granary pest monitoring and early warning method, device, equipment and storage medium
CN114155433A (en) * 2021-11-30 2022-03-08 北京新兴华安智慧科技有限公司 Illegal land detection method and device, electronic equipment and storage medium
CN114938426A (en) * 2022-04-28 2022-08-23 湖南工商大学 Method and apparatus for creating a multi-device media presentation

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110285810A1 (en) * 2010-05-21 2011-11-24 Qualcomm Incorporated Visual Tracking Using Panoramas on Mobile Devices
CN102291569A (en) * 2011-07-27 2011-12-21 上海交通大学 Double-camera automatic coordination multi-target eagle eye observation system and observation method thereof
CN103198487A (en) * 2013-04-15 2013-07-10 厦门博聪信息技术有限公司 Automatic calibration method for video monitoring system
CN103607576A (en) * 2013-11-28 2014-02-26 北京航空航天大学深圳研究院 Traffic video monitoring system oriented to cross camera tracking relay
CN104125433A (en) * 2014-07-30 2014-10-29 西安冉科信息技术有限公司 Moving object video surveillance method based on multi-PTZ (pan-tilt-zoom)-camera linkage structure
CN105208327A (en) * 2015-08-31 2015-12-30 深圳市佳信捷技术股份有限公司 Master/slave camera intelligent monitoring method and device

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020029921A1 (en) * 2018-08-07 2020-02-13 华为技术有限公司 Monitoring method and device
US11790504B2 (en) 2018-08-07 2023-10-17 Huawei Technologies Co., Ltd. Monitoring method and apparatus
CN113190013B (en) * 2018-08-31 2023-06-27 创新先进技术有限公司 Method and device for controlling movement of terminal
CN113190013A (en) * 2018-08-31 2021-07-30 创新先进技术有限公司 Method and device for controlling terminal movement
CN110881117A (en) * 2018-09-06 2020-03-13 杭州海康威视数字技术股份有限公司 Inter-picture area mapping method and device and multi-camera observation system
CN109377518A (en) * 2018-09-29 2019-02-22 佳都新太科技股份有限公司 Target tracking method, device, target tracking equipment and storage medium
CN111179305A (en) * 2018-11-13 2020-05-19 晶睿通讯股份有限公司 Object position estimation method and object position estimation device
CN111179305B (en) * 2018-11-13 2023-11-14 晶睿通讯股份有限公司 Object position estimation method and object position estimation device thereof
CN109522846A (en) * 2018-11-19 2019-03-26 深圳博为教育科技有限公司 Standing monitoring method, device, server and standing monitoring system
CN109522846B (en) * 2018-11-19 2020-08-14 深圳博为教育科技有限公司 Standing monitoring method, device, server and standing monitoring system
CN111612812A (en) * 2019-02-22 2020-09-01 富士通株式会社 Target detection method, target detection device and electronic equipment
CN111612812B (en) * 2019-02-22 2023-11-03 富士通株式会社 Target object detection method, detection device and electronic equipment
CN110059641B (en) * 2019-04-23 2023-02-03 重庆工商大学 Depth bird recognition algorithm based on multiple preset points
CN110176039A (en) * 2019-04-23 2019-08-27 苏宁易购集团股份有限公司 Camera adjustment method and system for face recognition
CN110059641A (en) * 2019-04-23 2019-07-26 重庆工商大学 Depth bird recognition algorithm based on multiple preset points
CN111684458B (en) * 2019-05-31 2024-03-12 深圳市大疆创新科技有限公司 Target detection method, target detection device and unmanned aerial vehicle
CN111684458A (en) * 2019-05-31 2020-09-18 深圳市大疆创新科技有限公司 Target detection method, target detection device and unmanned aerial vehicle
CN110177256B (en) * 2019-06-17 2021-12-14 北京影谱科技股份有限公司 Tracking video data acquisition method and device
CN110177256A (en) * 2019-06-17 2019-08-27 北京影谱科技股份有限公司 Tracking video data acquisition method and device
CN110191324A (en) * 2019-06-28 2019-08-30 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium
CN112215048B (en) * 2019-07-12 2024-03-22 中国移动通信有限公司研究院 3D target detection method, device and computer readable storage medium
CN112215048A (en) * 2019-07-12 2021-01-12 中国移动通信有限公司研究院 3D target detection method and device and computer readable storage medium
CN110430395A (en) * 2019-07-19 2019-11-08 苏州维众数据技术有限公司 Video data AI processing system and processing method
CN112308924A (en) * 2019-07-29 2021-02-02 浙江宇视科技有限公司 Method, device and equipment for calibrating camera in augmented reality and storage medium
CN112308924B (en) * 2019-07-29 2024-02-13 浙江宇视科技有限公司 Method, device, equipment and storage medium for calibrating camera in augmented reality
CN110493569A (en) * 2019-08-12 2019-11-22 苏州佳世达光电有限公司 Monitoring target tracking shooting method and system
CN112492261A (en) * 2019-09-12 2021-03-12 华为技术有限公司 Tracking shooting method and device and monitoring system
CN112954188A (en) * 2019-12-10 2021-06-11 李思成 Human eye perception imitating active target snapshot method and device
CN111131697A (en) * 2019-12-23 2020-05-08 北京中广上洋科技股份有限公司 Multi-camera intelligent tracking shooting method, system, equipment and storage medium
CN111131697B (en) * 2019-12-23 2022-01-04 北京中广上洋科技股份有限公司 Multi-camera intelligent tracking shooting method, system, equipment and storage medium
CN113518174A (en) * 2020-04-10 2021-10-19 华为技术有限公司 Shooting method, device and system
CN111698467A (en) * 2020-05-08 2020-09-22 北京中广上洋科技股份有限公司 Intelligent tracking method and system based on multiple cameras
CN111541851A (en) * 2020-05-12 2020-08-14 南京甄视智能科技有限公司 Face recognition equipment accurate installation method based on unmanned aerial vehicle hovering survey
CN111354011A (en) * 2020-05-25 2020-06-30 江苏华丽智能科技股份有限公司 Multi-moving-target information capturing and tracking system and method
CN111815722A (en) * 2020-06-10 2020-10-23 广州市保伦电子有限公司 Double-scene matting method and system
CN111918023A (en) * 2020-06-29 2020-11-10 北京大学 Monitoring target tracking method and device
CN111918023B (en) * 2020-06-29 2021-10-22 北京大学 Monitoring target tracking method and device
CN111866392A (en) * 2020-07-31 2020-10-30 Oppo广东移动通信有限公司 Shooting prompting method and device, storage medium and electronic equipment
CN111866392B (en) * 2020-07-31 2021-10-08 Oppo广东移动通信有限公司 Shooting prompting method and device, storage medium and electronic equipment
CN112767452B (en) * 2021-01-07 2022-08-05 北京航空航天大学 Active sensing method and system for camera
CN112767452A (en) * 2021-01-07 2021-05-07 北京航空航天大学 Active sensing method and system for camera
CN113179371A (en) * 2021-04-21 2021-07-27 新疆爱华盈通信息技术有限公司 Shooting method, device and snapshot system
CN113792715A (en) * 2021-11-16 2021-12-14 山东金钟科技集团股份有限公司 Granary pest monitoring and early warning method, device, equipment and storage medium
CN114155433A (en) * 2021-11-30 2022-03-08 北京新兴华安智慧科技有限公司 Illegal land detection method and device, electronic equipment and storage medium
CN114938426A (en) * 2022-04-28 2022-08-23 湖南工商大学 Method and apparatus for creating a multi-device media presentation
CN114938426B (en) * 2022-04-28 2023-04-07 湖南工商大学 Method and apparatus for creating a multi-device media presentation

Also Published As

Publication number Publication date
CN108111818B (en) 2019-05-03

Similar Documents

Publication Publication Date Title
CN108111818B (en) Moving target active perception method and apparatus based on multiple-camera collaboration
CN109887040B (en) Moving target active sensing method and system for video monitoring
CN110738142B (en) Method, system and storage medium for adaptively improving face image acquisition
CN108419014B (en) Method for capturing human face by linkage of panoramic camera and multiple capturing cameras
JP5688456B2 (en) Security camera tracking and monitoring system and method using thermal image coordinates
CN107240124B (en) Cross-lens multi-target tracking method and device based on space-time constraint
CN107438173B (en) Video processing apparatus, video processing method, and storage medium
Wheeler et al. Face recognition at a distance system for surveillance applications
US7321386B2 (en) Robust stereo-driven video-based surveillance
WO2017045326A1 (en) Photographing processing method for unmanned aerial vehicle
CN105915784A (en) Information processing method and information processing device
CN105059190B (en) The automobile door opening collision warning device and method of view-based access control model
CN102819847A (en) Method for extracting movement track based on PTZ mobile camera
EP1946567A2 (en) Device for generating three dimensional surface models of moving objects
CN110717445B (en) Front vehicle distance tracking system and method for automatic driving
CN110633648B (en) Face recognition method and system in natural walking state
CN112307912A (en) Method and system for determining personnel track based on camera
Neves et al. Acquiring high-resolution face images in outdoor environments: A master-slave calibration algorithm
CN109905641A (en) A kind of target monitoring method, apparatus, equipment and system
CN111465937B (en) Face detection and recognition method employing light field camera system
CN109799844B (en) Dynamic target tracking system and method for pan-tilt camera
JP4882577B2 (en) Object tracking device and control method thereof, object tracking system, object tracking program, and recording medium recording the program
KR20120002723A (en) Device and method for recognizing person by using 3 dimensional image information
KR20170133666A (en) Method and apparatus for camera calibration using image analysis
JP3631541B2 (en) Object tracking method using stereo images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210427

Address after: No.18 Chuanghui street, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: BUAA HANGZHOU INNOVATION INSTITUTE

Address before: No. 37 Xueyuan Road, Haidian District, 100191

Patentee before: BEIHANG University