CN113674356B - Camera screening method and related device - Google Patents
Camera screening method and related device
- Publication number
- CN113674356B (application CN202110819845.3A)
- Authority
- CN
- China
- Prior art keywords
- camera
- observed
- cameras
- bounding box
- screening
- Prior art date: 2021-07-20
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Abstract
The application discloses a camera screening method and a related device. The camera screening method specifically comprises the following steps: acquiring position information of a target to be observed; acquiring, according to the position information, a first set formed by all cameras within a predetermined range of the target to be observed, wherein the first set includes cameras of a first type and cameras of a second type; screening, from the first set, a second set formed by all cameras whose field of view contains the target to be observed without occlusion, wherein the field of view is determined differently for cameras of different types; and screening out the camera with the best observation effect according to the imaging size of the target to be observed in each camera of the second set and the observation angle of each camera, wherein the imaging size and the observation angle are obtained differently for cameras of different types. In this way, the application can screen out the best camera more accurately and rapidly.
Description
Technical Field
The application belongs to the technical field of video monitoring, and particularly relates to a camera screening method and a related device.
Background
Augmented Virtual Environment (AVE) technology refers to a technology in which a three-dimensional model of the real world is built, the cameras already arranged in the actual scene are calibrated, and the two-dimensional pictures of those cameras are then fused into the three-dimensional model in real time. With the rapid development of computer and network technology, more and more methods are available for building three-dimensional models of real scenes, such as total station scanning, laser radar scanning, or unmanned aerial vehicle oblique photogrammetry. When a camera is installed at a key position in the scene, the camera must also be added to the three-dimensional model during modeling and be calibrated. After an observation target is selected anywhere in the three-dimensional model, its current state can then be monitored in real time through a remote camera.
Although the whole scene and the specific positions of the cameras can be displayed intuitively through the three-dimensional model, for a selected target to be observed it is not known in advance which cameras can observe the target, nor which camera achieves the best observation effect, so the cameras need to be screened according to the positions of the target to be observed and of the cameras. Some common camera screening strategies consider only one type of camera and rely on few screening criteria; in addition, existing methods directly traverse all cameras to find the optimal observation camera, but their processing time grows with the number of cameras, which seriously affects the interactive experience.
Disclosure of Invention
The application provides a camera screening method and a related device, which are used for screening out an optimal camera more accurately and rapidly.
In order to solve the above technical problem, the application adopts a technical scheme of providing a camera screening method including: acquiring position information of a target to be observed; acquiring, according to the position information, a first set formed by all cameras within a predetermined range of the target to be observed, wherein the first set includes cameras of a first type and cameras of a second type; screening, from the first set, a second set formed by all cameras whose field of view contains the target to be observed without occlusion, wherein the field of view is determined differently for cameras of different types; and screening out the camera with the best observation effect according to the imaging size of the target to be observed in each camera of the second set and the observation angle of each camera, wherein the imaging size and the observation angle are obtained differently for cameras of different types.
In order to solve the technical problems, the application adopts another technical scheme that: there is provided a camera screening device comprising a processor and a memory coupled to each other, and the processor and the memory cooperate to implement the camera screening method described in any of the above embodiments.
In order to solve the technical problems, the application adopts another technical scheme that: there is provided an apparatus having a storage function, on which program data is stored, the program data being executable by a processor to implement the camera screening method described in any of the above embodiments.
Different from the prior art, the application has the following beneficial effects: in the camera screening method provided by the application, all cameras in the space where the target to be observed is located are screened for the first time according to the position information of the target to be observed, to obtain a first set; a second round of screening is then performed on the first set to obtain a second set formed by all cameras whose field of view contains the target to be observed without occlusion; the camera with the best observation effect is then screened out from the second set according to the imaging size and the observation angle of each camera. That is, the multi-round screening process shortens the processing time; and the screening comprehensively considers the distance to the target to be observed, whether the target is within a camera's field of view, the imaging size of the target, the observation angle of the camera, and occlusion, so that the finally selected camera has the best observation effect. In addition, the application considers multiple camera types and can screen cameras of different types, improving the screening effect.
Drawings
For a clearer description of the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the description below are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art, wherein:
FIG. 1 is a flow chart of an embodiment of a camera screening method according to the present application;
fig. 2 is a flow chart of an embodiment corresponding to step S101 in fig. 1;
fig. 3 is a flow chart of an embodiment corresponding to step S103 in fig. 1;
FIG. 4 is a flowchart of an embodiment corresponding to step S301 when the camera in FIG. 3 is a box camera;
FIG. 5 is a flowchart of an embodiment corresponding to step S301 when the camera in FIG. 3 is a dome camera;
FIG. 6 is a flowchart of an embodiment corresponding to the step S302 in FIG. 3;
FIG. 7 is a flow chart of an embodiment of a camera screening framework of the present application;
FIG. 8 is a schematic diagram of an embodiment of a camera screening apparatus according to the present application;
fig. 9 is a schematic structural diagram of an embodiment of a device with a storage function according to the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 1, fig. 1 is a flow chart illustrating an embodiment of a camera screening method according to the present application, where the camera screening method specifically includes:
S101: Acquire the position information of the target to be observed.
Specifically, referring to fig. 2, fig. 2 is a flow chart of an embodiment corresponding to step S101 in fig. 1. The specific implementation process of the step S101 may be:
S201: Receive a point selected by the user in the pre-established three-dimensional model, and take the object at that point as the target to be observed.
Specifically, in the present embodiment, before step S201, the method may further include: performing three-dimensional modeling of a factory-, park-, or even city-level target scene by total station scanning, laser radar scanning, unmanned aerial vehicle oblique photogrammetry, or the like, to obtain the three-dimensional model; all cameras in the target scene must also be added to the three-dimensional model. In the built three-dimensional model, the user may select any observation target; for example, the point P picked by a mouse click can be obtained, thereby determining the target to be observed.
S202: Obtain the normal vector of the plane where the point lies, and take the normal vector as the observation target orientation.
Because the whole three-dimensional model is drawn with triangular surface elements formed by vertices, the triangle containing the point P can be found first; the normal vector n of the triangle's plane is then obtained from the cross product of two of its edges, and this n can also be regarded as the normal vector of the point P (i.e., the observation target orientation).
S203: Acquire the minimum bounding box of the target to be observed, and acquire the first coordinates of all vertices of the minimum bounding box in the world coordinate system.
Specifically, every object in the entire three-dimensional model can be regarded as an individual body, so all vertices constituting the target to be observed can be found. The minimum bounding box of the target to be observed is then calculated from this vertex set; it is the smallest cuboid that can enclose the target, and the calculation proceeds roughly as follows: first compute the covariance matrix of the vertex set, obtain its three eigenvectors, project all vertices of the target onto the three eigenvectors, and finally determine the extents of the minimum bounding box along the three directions and the first coordinates of its eight vertices in the world coordinate system.
S204: Obtain the center point of the minimum bounding box from the first coordinates of all vertices, and take the second coordinate of the center point as the position information of the target to be observed.
Specifically, the first coordinates of the eight vertices of the minimum bounding box are denoted V = {v_i, i = 0, 1, …, 7}, where v_i = (x_i, y_i, z_i). The center point C = (x_c, y_c, z_c) of the minimum bounding box is then computed as x_c = (x_0 + x_1 + … + x_7)/8, y_c = (y_0 + y_1 + … + y_7)/8, z_c = (z_0 + z_1 + … + z_7)/8.
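As a non-authoritative sketch of steps S203-S204 (the patent gives no code; NumPy and all function names here are illustrative assumptions), the covariance-based minimum bounding box and its center could be computed along these lines:

```python
import numpy as np

def min_bounding_box_corners(points):
    """S203 sketch: oriented minimum bounding box via the covariance
    matrix of the vertex set. points: (N, 3) vertices of the target."""
    mean = points.mean(axis=0)
    centered = points - mean
    _, eigvecs = np.linalg.eigh(np.cov(centered.T))   # three eigenvectors
    proj = centered @ eigvecs                         # project onto the axes
    lo, hi = proj.min(axis=0), proj.max(axis=0)       # box extents
    corners = np.array([[x, y, z]
                        for x in (lo[0], hi[0])
                        for y in (lo[1], hi[1])
                        for z in (lo[2], hi[2])])
    return corners @ eigvecs.T + mean                 # eight world-frame corners

def bounding_box_center(corners):
    """S204: center point C as the mean of the eight first coordinates."""
    return corners.mean(axis=0)
```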
S102: Acquire, according to the position information, a first set formed by all cameras within a predetermined range of the target to be observed; wherein the first set includes cameras of a first type and cameras of a second type.
Specifically, in this embodiment, the specific implementation of step S102 may be: obtain the maximum of the effective observation distances of all cameras in the three-dimensional model; determine the predetermined range as the sphere whose center is the center of the minimum bounding box and whose radius is that maximum; and obtain the first set of all cameras located within the predetermined range. Further, in the present embodiment, the first set may include cameras of a first type and cameras of a second type; for example, the first set includes box cameras and dome cameras.
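A minimal sketch of this first screening round, assuming each camera object carries illustrative position and effective_distance attributes (neither name comes from the patent):

```python
import numpy as np

def first_screening(cameras, center):
    """S102 sketch: keep all cameras within the predetermined range,
    a sphere centered at the bounding-box center whose radius is the
    maximum effective observation distance over all cameras."""
    radius = max(cam.effective_distance for cam in cameras)
    return [cam for cam in cameras
            if np.linalg.norm(cam.position - center) <= radius]
```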
S103: Screen, from the first set, a second set formed by all cameras whose field of view contains the target to be observed without occlusion; the field of view is determined differently for cameras of different types.
Specifically, referring to fig. 3, fig. 3 is a flow chart of an embodiment corresponding to step S103 in fig. 1. The specific implementation process of the step S103 may be:
S301: From all cameras of the first type in the first set, screen out all cameras whose field of view contains the target to be observed; likewise, from all cameras of the second type in the first set, screen out all cameras whose field of view contains the target to be observed; all screened cameras form an intermediate set.
Specifically, when the camera of the first type is a box camera, refer to fig. 4, which is a flowchart of an embodiment corresponding to step S301 when the camera in fig. 3 is a box camera. The step of screening, from all cameras of the first type in the first set, all cameras whose field of view contains the target to be observed specifically includes:
S401: For each box camera, convert the first coordinates of all vertices of the minimum bounding box into third coordinates in the camera coordinate system of that box camera.
Specifically, for a box camera, let the attitude and position (i.e., the position of the optical center) of the box camera in the three-dimensional model be R and t respectively, where R is a 3×3 matrix and t is a 3×1 vector representing, respectively, the rotation and translation between the camera coordinate system and the world coordinate system. From R and t, the first coordinate v_i of each of the eight vertices of the minimum bounding box in the world coordinate system can be converted into the third coordinate in the camera coordinate system as v_i^c = R⁻¹(v_i - t).
S402: Convert the third coordinates of all vertices of the minimum bounding box into fourth coordinates in the pixel coordinate system of the picture currently shot by the box camera.
Specifically, the conversion formula is u = f_x·x_c/z_c + c_x, v = f_y·y_c/z_c + c_y, where (x_c, y_c, z_c) is the third coordinate of a vertex of the minimum bounding box in the camera coordinate system, (u, v) is the fourth coordinate of the vertex in the pixel coordinate system, and f_x, f_y, c_x, c_y are intrinsic parameters of the box camera. A corner of the picture shot by the box camera can be taken as the origin of the pixel coordinate system, with the width direction of the picture as the X axis and the height direction as the Y axis, so that the abscissa and the ordinate of every pixel in the picture are greater than or equal to 0.
S403: Judge whether the fourth coordinates of all vertices of the minimum bounding box lie within the range of the picture shot by the box camera.
Specifically, assuming the width and height of the picture shot by the current box camera are width and height respectively, the specific implementation of step S403 may be: for the fourth coordinate of each vertex obtained in step S402, judge whether its abscissa is greater than or equal to 0 and less than width, and whether its ordinate is greater than or equal to 0 and less than height; that is, 0 ≤ u < width and 0 ≤ v < height.
S404: If so, the target to be observed is within the field of view of the box camera.
Specifically, the target to be observed is considered within the field of view of the box camera only if the fourth coordinates of all vertices of the minimum bounding box satisfy the above condition.
S405: Otherwise, the target to be observed is outside the field of view of the box camera.
Specifically, as long as the fourth coordinate of even one vertex of the minimum bounding box fails to satisfy the above condition, the target to be observed is considered outside the field of view of the box camera.
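The box-camera visibility test of steps S401-S405 might look as follows; this is a sketch under assumptions (the intrinsic-matrix layout and the z_c > 0 guard against points behind the camera are not spelled out in the patent text):

```python
import numpy as np

def in_box_camera_view(R, t, K, width, height, corners):
    """S401-S405 sketch: True iff all eight bounding-box corners project
    into the picture. R, t: camera-to-world rotation and optical center;
    K: [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]."""
    for v in corners:
        x_c, y_c, z_c = R.T @ (v - t)              # S401: world -> camera
        if z_c <= 0:                               # behind the camera
            return False
        u = K[0, 0] * x_c / z_c + K[0, 2]          # S402: pinhole projection
        w = K[1, 1] * y_c / z_c + K[1, 2]
        if not (0 <= u < width and 0 <= w < height):   # S403
            return False                           # S405: a corner is outside
    return True                                    # S404: all corners inside
```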
Referring to fig. 5, fig. 5 is a flowchart of an embodiment corresponding to step S301 when the camera in fig. 3 is a dome camera. The step of screening, from all cameras of the second type in the first set, all cameras whose field of view contains the target to be observed specifically includes:
S501: For each dome camera, obtain the blind-angle range of the dome camera at the current focal length.
Specifically, the specific implementation of step S501 may be: obtain the blind angle of the dome camera relative to the Z axis of the world coordinate system according to the field angle of the dome camera and the angle by which it can tilt up toward the Z axis of the world coordinate system. In a specific embodiment, the blind angle may be obtained as blindAngle = π/2 - PITCH - halfFov, where blindAngle is the angle between the blind cone and the Z axis of the world coordinate system; the dome camera can rotate freely through 360° in the horizontal direction, and the angle by which it can tilt up in the vertical direction (i.e., toward the Z axis of the world coordinate system) is assumed to be PITCH; halfFov is half of the field angle of the dome camera, calculated as halfFov = arctan(√(width² + height²)/(2f)), where width and height are the resolution of the dome camera and f is its focal length (in pixels).
Because the calculation of the blind angle takes into account the angle by which the dome camera can tilt up in the vertical direction, the calculated blind angle is more accurate.
S502: Judge whether all vertices of the minimum bounding box are outside the blind-angle range.
Specifically, the specific implementation process of the step S502 may be:
A. Obtain the first included angle between the line connecting each vertex of the minimum bounding box with the optical center of the dome camera and the Z axis of the world coordinate system. Specifically, the first included angle is calculated as
angle = cos⁻¹((v_i - t_b)·Z/(‖v_i - t_b‖·‖Z‖)), where v_i is the first coordinate of the vertex of the minimum bounding box in the world coordinate system, t_b is the coordinate of the optical center of the dome camera in the world coordinate system, and Z = (0, 0, 1) is the Z axis of the world coordinate system.
B. Judge whether the first included angles of all vertices of the minimum bounding box are greater than or equal to the blind angle. Specifically, when angle < blindAngle, the vertex lies within the blind-angle range of the dome camera.
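A sketch of the blind-angle test of steps S501-S502, with pitch and half_fov in radians (the function and parameter names are illustrative):

```python
import numpy as np

def outside_blind_cone(corners, t_b, pitch, half_fov):
    """S501-S502 sketch: True iff every bounding-box corner lies outside
    the blind cone around the world Z axis,
    blindAngle = pi/2 - PITCH - halfFov."""
    blind_angle = np.pi / 2 - pitch - half_fov
    for v in corners:
        d = v - t_b                                   # optical center -> corner
        angle = np.arccos(d[2] / np.linalg.norm(d))   # angle to Z = (0, 0, 1)
        if angle < blind_angle:                       # inside the blind cone
            return False
    return True
```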
S503: If all vertices of the minimum bounding box are outside the blind-angle range, convert the first coordinates of all vertices of the minimum bounding box into third coordinates in the camera coordinate system of the dome camera.
Specifically, since the dome camera can rotate through 360°, its rotation matrix R is calculated as follows: the vector (C - t_b), formed by connecting the dome camera position t_b (i.e., the optical center of the dome camera) with the center point C of the minimum bounding box, is taken as the direction of the z axis of the camera coordinate system in the world coordinate system and denoted z_c_w = (C - t_b); the directions x_c_w and y_c_w of the x and y axes of the camera coordinate system in the world coordinate system are then calculated as follows:
x_c_w = Z × z_c_w and y_c_w = z_c_w × x_c_w, where × denotes the vector cross product and Z = (0, 0, 1) is the Z axis of the world coordinate system. The three vectors are normalized to obtain z′_c_w = z_c_w/‖z_c_w‖, x′_c_w = x_c_w/‖x_c_w‖, y′_c_w = y_c_w/‖y_c_w‖, and the rotation matrix R from the camera coordinate system to the world coordinate system is then R = [x′_c_w y′_c_w z′_c_w], with the normalized vectors as its columns.
Thus, for each vertex v_i of the minimum bounding box, its third coordinate in the camera coordinate system is v_i^c = R⁻¹(v_i - t_b).
Corresponding to step S503, if at least one vertex of the minimum bounding box is within the blind-angle range, the flow proceeds directly to step S508.
S504: Convert the third coordinates of all vertices of the minimum bounding box into fourth coordinates in the pixel coordinate system of the picture currently shot by the dome camera.
Specifically, the fourth coordinate is calculated as u = f_x·x_c/z_c + c_x, v = f_y·y_c/z_c + c_y, where (x_c, y_c, z_c) is the third coordinate of a vertex in the camera coordinate system, (u, v) is the fourth coordinate in the pixel coordinate system, and f_x, f_y, c_x, c_y are intrinsic parameters of the dome camera at the current focal length.
S505: Judge whether the fourth coordinates of all vertices of the minimum bounding box lie within the range of the picture shot by the dome camera.
Specifically, assuming the width and height of the picture shot by the current dome camera are width and height respectively, the specific implementation of step S505 may be: for the fourth coordinate of each vertex obtained in step S504, judge whether its abscissa is greater than or equal to 0 and less than width, and whether its ordinate is greater than or equal to 0 and less than height; that is, 0 ≤ u < width and 0 ≤ v < height.
S506: If yes, the target to be observed is within the field of view of the dome camera at the current focal length; proceed to step S508.
S507: Otherwise, the target to be observed is outside the field of view of the dome camera at the current focal length; proceed to step S508.
S508: Judge whether all focal lengths of the current dome camera have been traversed.
S509: If all focal lengths of the current dome camera have been traversed, determine whether the target to be observed is within the field of view at at least one focal length of the dome camera, and proceed to step S511 or step S512.
S510: If not all focal lengths of the current dome camera have been traversed, adjust the focal length of the current dome camera and return to step S501.
S511: If the target to be observed is within the field of view of the dome camera at at least one focal length, the target to be observed is within the field of view of the dome camera.
S512: If the target to be observed is outside the field of view of the dome camera at all focal lengths, the target to be observed is outside the field of view of the dome camera.
Through this process, as long as the dome camera has at least one focal length at which it can capture the target to be observed, the dome camera is retained for the subsequent steps.
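Combining the pieces, steps S501-S512 for a dome camera could be sketched as below, reusing outside_blind_cone and in_box_camera_view from the earlier sketches; the simplified intrinsics (f_x = f_y = f, principal point at the picture center) are an assumption for illustration only:

```python
import numpy as np

def dome_rotation(t_b, center):
    """S503 sketch: look-at rotation (camera-to-world) aiming the dome
    camera at the bounding-box center; assumes the view direction is not
    parallel to the world Z axis."""
    z_c = center - t_b                             # camera z axis in world frame
    x_c = np.cross([0.0, 0.0, 1.0], z_c)           # x = Z x z
    y_c = np.cross(z_c, x_c)                       # y = z x x
    return np.stack([v / np.linalg.norm(v) for v in (x_c, y_c, z_c)], axis=1)

def in_dome_view(t_b, center, corners, focal_lengths, width, height, pitch):
    """S501-S512 sketch: True if the target is fully visible at any focal length."""
    R = dome_rotation(t_b, center)
    for f in focal_lengths:                        # S508-S510: traverse focal lengths
        half_fov = np.arctan(np.hypot(width, height) / (2 * f))
        if not outside_blind_cone(corners, t_b, pitch, half_fov):
            continue                               # some corner in the blind cone
        K = np.array([[f, 0, width / 2], [0, f, height / 2], [0, 0, 1]])
        if in_box_camera_view(R, t_b, K, width, height, corners):
            return True                            # S511: visible at this focal length
    return False                                   # S512: visible at none
```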
S302: From the intermediate set, screen out the second set formed by all cameras for which the target to be observed is not occluded.
Specifically, for both box cameras and dome cameras, occlusion can be determined in the following manner; refer to fig. 6, which is a flow chart of an embodiment corresponding to step S302 in fig. 3. Step S302 specifically includes:
S601: Obtain the imaging region of the minimum bounding box in the pixel coordinate system from the fourth coordinates of all vertices of the minimum bounding box.
S602: For each pixel in the imaging region, obtain the fifth coordinate of the current pixel on a virtual plane; the virtual plane lies between the camera and the target to be observed and is perpendicular to the optical axis of the camera.
Specifically, for a pixel p(u, v) within the imaging region, its fifth coordinate p′ on the virtual plane z = 1 in front of the camera is p′ = (x, y, 1) with x = (u - c_x)/f_x and y = (v - c_y)/f_y, where f_x, f_y, c_x, c_y are intrinsic parameters of the camera and p′ is expressed in the camera coordinate system. Of course, in other embodiments, the virtual plane in front of the camera may be another plane, e.g., z = 2, 3, etc., as long as it is perpendicular to the optical axis of the camera.
S603: the fifth coordinate is converted into a sixth coordinate in the world coordinate system.
Specifically, the p′ obtained in step S602 may be converted into the world coordinate system according to the attitude R and position t of the camera to obtain the sixth coordinate P″, calculated specifically as:
P″ = Rp′ + t.
S604: Emit, by ray tracing, a ray from the optical center of the camera through the sixth coordinate, and obtain the intersection point of the ray with the three-dimensional model.
Specifically, obtaining the intersection of the ray with the three-dimensional model amounts to intersecting the ray along the vector (P″ - t) with the whole three-dimensional model, where t is the position of the camera.
S605: If the intersection point lies inside the minimum bounding box or on the surface of the minimum bounding box, the current pixel is not occluded; otherwise, the current pixel is occluded.
Specifically, if the intersection point lies between the camera and the minimum bounding box, the current pixel is considered occluded.
S606: If all pixels in the imaging region are unoccluded, the target to be observed is not occluded.
Specifically, following steps S602-S605, all pixels of the imaging region of the target to be observed on the image are traversed and tested for occlusion; only if none of them is occluded is the camera considered to observe the target to be observed without occlusion.
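An illustrative sketch of the occlusion test of steps S601-S606; cast_ray and inside_or_on_box stand in for the ray-tracing query against the three-dimensional model and the bounding-box membership test, neither of which the patent text specifies:

```python
import numpy as np

def target_unoccluded(R, t, K, pixel_region, cast_ray, inside_or_on_box):
    """S601-S606 sketch: True iff no pixel of the imaging region is occluded.
    cast_ray(origin, direction) -> first model intersection point or None."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    for (u, v) in pixel_region:
        # S602: pixel -> fifth coordinate on the virtual plane z = 1.
        p_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
        p_world = R @ p_cam + t                    # S603: P'' = R p' + t
        hit = cast_ray(t, p_world - t)             # S604: ray through P''
        # S605: unoccluded only if the first hit lies inside or on the box.
        if hit is None or not inside_or_on_box(hit):
            return False
    return True                                    # S606: no pixel occluded
```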
Of course, in other embodiments, the order of steps S301 and S302 may be exchanged, but the whole screening process is more efficient when step S301 precedes step S302.
S104: Screen out the camera with the best observation effect according to the imaging size of the target to be observed in each camera of the second set and the observation angle of each camera; the imaging size and the observation angle are obtained differently for cameras of different types.
In particular, when the camera is a box camera, the imaging size of the target to be observed in the box camera may be obtained as follows: referring again to fig. 4, after step S404 determines that the target to be observed is within the field of view of the camera, the method further includes: using the fourth coordinates of all vertices of the minimum bounding box (i.e., their coordinates in the pixel coordinate system), obtain the smallest polygon surrounding all the vertices in the pixel coordinate system (i.e., by a minimum convex hull algorithm); the area of this smallest polygon is taken as the imaging size of the target to be observed in the camera.
In addition, when the camera is a box camera, the observation angle of the box camera relative to the target to be observed may be obtained as follows: A. Obtain the first included angle α between the reverse direction of the optical axis z = (0, 0, 1) of the box camera and the observation target orientation n; specifically, α = cos⁻¹(-z_w·n/(‖z_w‖·‖n‖)), with z_w = R⁻¹z, where z_w is the optical axis of the box camera expressed in the world coordinate system and R is the attitude matrix. Of course, in other embodiments, a fourth included angle between the optical axis of the box camera and the observation target orientation n may be obtained instead, and the difference between π and the fourth included angle taken as the first included angle.
B. Obtain the second included angle β between the vector (C - t), formed by connecting the optical center t of the box camera with the center point C of the minimum bounding box, and the optical axis of the box camera; the observation angle θ is half of the sum of the first included angle α and the second included angle β. Specifically, β = cos⁻¹((C - t)·z_w/(‖C - t‖·‖z_w‖)). The smaller α and β are, the more squarely the box camera faces the target to be observed, and the better the observation angle.
When the camera is a dome camera, the imaging size of the target to be observed in the dome camera may be obtained as follows: referring again to fig. 5, after step S506 determines that the target to be observed is within the field of view of the dome camera at the current focal length, the method further includes: using the fourth coordinates of all vertices of the minimum bounding box (i.e., their coordinates in the pixel coordinate system), obtain the smallest polygon surrounding all the vertices in the pixel coordinate system. After step S511 determines that the target to be observed is within the field of view of the dome camera, the method further includes: take the maximum of the areas of all these smallest polygons as the imaging size of the target to be observed in the camera. For a box camera, the imaging size of the target to be observed in the captured picture is fixed. For a dome camera, however, the focal length varies with the magnification: the larger the magnification, the longer the focal length, and the larger the image of the same object. Therefore, for the different magnifications of a dome camera, all focal lengths at which the target to be observed is within the field of view must first be obtained; the corresponding imaging size is then calculated from the pixel coordinates of the vertices of the minimum bounding box, and the largest is chosen as the final imaging size for the dome camera.
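A sketch of the imaging-size computation, using SciPy's convex hull purely for illustration (for 2-D inputs, ConvexHull.volume is the hull area):

```python
import numpy as np
from scipy.spatial import ConvexHull

def imaging_size(pixel_corners):
    """Area of the smallest polygon (convex hull) enclosing the eight
    projected bounding-box vertices, i.e. the imaging size of the target."""
    return ConvexHull(np.asarray(pixel_corners, dtype=float)).volume
```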
In addition, when the camera is a dome camera, the observation angle of the dome camera relative to the target to be observed may be obtained as follows: obtain the third included angle γ between the vector (t_b - C), formed by connecting the optical center t_b of the dome camera with the center point C of the minimum bounding box, and the observation target orientation n; the observation angle θ is the third included angle γ. Specifically, γ = cos⁻¹((t_b - C)·n/(‖t_b - C‖·‖n‖)). The smaller γ is, the more squarely the dome camera faces the target to be observed, and the better the observation angle.
Further, the specific implementation of step S104 may be: obtain the first ratio of the imaging size of the target to be observed in a camera to the picture size of that camera, and take the difference between the first ratio and the normalized observation angle as the score; the camera with the highest score is taken as the best camera. The score is calculated as:

F_i = S_i/(width_i × height_i) - θ_i/π

where S_i denotes the imaging size of the target to be observed in camera i, width_i and height_i denote the width and height of the picture of camera i respectively, and θ_i denotes the observation angle of camera i. From the calculations above, θ_i = γ for a dome camera, whereas θ_i = (α + β)/2 for a box camera. To account for the possibly different resolutions of the cameras, the imaging size S_i is normalized by the picture size, and the observation angle is likewise normalized (here by π, the largest possible angle). The camera with the largest F value among all remaining cameras is then selected as the camera with the best observation effect.
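The scoring of step S104 can be sketched in a few lines; each candidate is assumed to carry the imaging size, picture dimensions, and observation angle computed above (attribute names are illustrative):

```python
import math

def best_camera(candidates):
    """S104 sketch: maximize F = S / (width * height) - theta / pi."""
    return max(candidates,
               key=lambda c: c.imaging_size / (c.width * c.height)
                             - c.theta / math.pi)
```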
In a specific application scenario, the specific process of the above camera screening method may be:
A. Acquire the position information of the target to be observed.
B. Acquire, according to the position information, the first set formed by all cameras within the predetermined range of the target to be observed.
C. For each camera in the first set, obtain in sequence whether the target to be observed is within its field of view, the imaging size, the observation angle, and the occlusion status.
D. Screen, from the first set, the second set formed by all cameras whose field of view contains the target to be observed without occlusion.
E. Screen out the camera with the best observation effect according to the imaging size of the target to be observed in each camera of the second set and the observation angle of each camera. A command is then sent to the camera with the best observation effect in the real scene, and that camera observes the target to be observed.
In another specific application scenario, the specific process of the above camera screening method may be:
A. Acquire the position information of the target to be observed.
B. Acquire, according to the position information, the first set formed by all cameras within the predetermined range of the target to be observed.
C. For each camera in the first set, obtain whether the target to be observed is within its field of view and the occlusion status.
D. Screen, from the first set, the second set formed by all cameras whose field of view contains the target to be observed without occlusion.
E. For each camera in the second set, calculate the imaging size of the target to be observed in the camera and the observation angle of each camera.
F. Screen out the camera with the best observation effect according to the imaging size of the target to be observed in each camera of the second set and the observation angle of each camera. A command is then sent to the camera with the best observation effect in the real scene, and that camera observes the target to be observed.
Referring to fig. 7, fig. 7 is a flow chart of an embodiment of a camera screening framework according to the present application, where the camera screening framework specifically includes:
The first acquisition module 10 is configured to acquire the position information of the target to be observed.
A first screening module 12, coupled to the first acquisition module 10, for acquiring, according to the position information, a first set formed by all cameras within a predetermined range of the target to be observed; wherein the first set includes cameras of a first type and cameras of a second type.
A second screening module 14, coupled to the first screening module 12, for screening, from the first set, a second set formed by all cameras whose field of view contains the target to be observed without occlusion; the field of view is determined differently for cameras of different types.
A third screening module 16, coupled to the second screening module 14, for screening out the camera with the best observation effect according to the imaging size of the target to be observed in each camera of the second set and the observation angle of each camera; the imaging size and the observation angle are obtained differently for cameras of different types.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an embodiment of a camera screening device according to the present application. The camera screening device comprises a processor 20 and a memory 22 coupled to each other, which cooperate to implement the camera screening method described in any of the embodiments above. In this embodiment, the processor 20 may also be referred to as a CPU (Central Processing Unit). The processor 20 may be an integrated circuit chip having signal processing capabilities. The processor 20 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
In addition, the camera screening device provided by the application may also comprise other structures, such as a display screen, a communication circuit, and the like, which are not described in detail here.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of a device with a storage function according to the present application. The device 30 with a storage function has program data 300 stored thereon, the program data 300 being executable by a processor to implement the camera screening method described in any of the embodiments above. The program data 300 may be stored in the storage device as a software product and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage device includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code, or a terminal device such as a computer, a server, a mobile phone, or a tablet.
The foregoing is only illustrative of the present application and is not intended to limit the scope of the application; all equivalent structures or equivalent process modifications made using the description and drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of patent protection of the application.
Claims (13)
1. A camera screening method, comprising:
acquiring position information of a target to be observed;
acquiring a first set formed by all cameras within a predetermined range of the target to be observed according to the position information; wherein the first set includes cameras of a first type and cameras of a second type;
screening, from the first set, a second set formed by all cameras whose field of view contains the target to be observed without occlusion; wherein the field of view is determined differently for cameras of different types;
screening out the camera with the best observation effect according to the imaging size of the target to be observed in each camera of the second set and the observation angle of each camera; wherein the imaging size and the observation angle are obtained differently for cameras of different types;
wherein the step of screening out the camera with the best observation effect according to the imaging size and the observation angle of the target to be observed in each camera of the second set includes: obtaining a first ratio of the imaging size of the target to be observed in a camera to the picture size of the camera, and taking the difference between the first ratio and the normalized observation angle as a score; the camera with the highest score is taken as the best camera.
2. The method according to claim 1, wherein the step of screening, from the first set, the second set formed by all cameras whose field of view contains the target to be observed without occlusion comprises:
screening, from all cameras of the first type in the first set, all cameras whose field of view contains the target to be observed, and screening, from all cameras of the second type in the first set, all cameras whose field of view contains the target to be observed, all screened cameras forming an intermediate set;
screening, from the intermediate set, the second set formed by all cameras for which the target to be observed is not occluded.
3. The camera screening method according to claim 2, wherein the step of acquiring position information of the target to be observed includes:
Receiving a point position selected by a user in a pre-established three-dimensional model, and taking an object where the point position is located as the target to be observed;
obtaining a normal vector of a plane where the point is located, and taking the normal vector as an observation target orientation;
Acquiring a minimum bounding box of an object to be observed, and acquiring first coordinates of all vertexes of the minimum bounding box under a world coordinate system;
And obtaining the center point of the minimum bounding box according to the first coordinates of all the vertexes, and taking the second coordinates of the center point as the position information of the object to be observed.
4. A camera screening method according to claim 3, wherein the camera of the first type is a box camera, and the step of screening, from all cameras of the first type in the first set, all cameras whose field of view contains the target to be observed comprises:
For each camera, converting the first coordinates of all vertexes of the minimum bounding box into third coordinates under a camera coordinate system where the camera is located;
Converting the third coordinates of all vertexes of the minimum bounding box into fourth coordinates under a pixel coordinate system where a picture shot by the camera is currently located;
judging whether fourth coordinates of all vertexes of the minimum bounding box are located in a picture range shot by the camera or not;
If yes, the target to be observed is within the field of view of the camera.
5. The method according to claim 4, wherein after the step in which the target to be observed is within the field of view of the camera, the method further comprises:
Obtaining a minimum polygon surrounding all the vertexes from the pixel coordinate system by using fourth coordinates of all the vertexes of the minimum bounding box;
And taking the area of the minimum polygon as the imaging size of the object to be observed in the camera.
6. A camera screening method according to claim 3, wherein the camera of the second type is a dome camera, and the step of screening, from all cameras of the second type in the first set, all cameras whose field of view contains the target to be observed comprises:
for each camera, obtaining the blind-angle range of the camera at the current focal length;
judging whether all vertices of the minimum bounding box are outside the blind-angle range;
If yes, converting the first coordinates of all vertices of the minimum bounding box into third coordinates in the camera coordinate system of the camera;
Converting the third coordinates of all vertexes of the minimum bounding box into fourth coordinates under a pixel coordinate system where a picture shot by the camera is currently located;
judging whether fourth coordinates of all vertexes of the minimum bounding box are located in a picture range shot by the camera or not;
If yes, the target to be observed is within the field of view of the camera at the current focal length; otherwise, the target to be observed is outside the field of view of the camera at the current focal length;
in response to all focal lengths of the camera having been traversed, determining whether the target to be observed is within the field of view at at least one focal length of the camera;
If yes, the target to be observed is within the field of view of the camera.
7. The camera screening method of claim 6, wherein,
the step of obtaining the blind-angle range of the camera at the current focal length comprises: obtaining the blind angle of the camera relative to the Z axis of the world coordinate system according to the field angle of the camera and the angle by which the camera can tilt up toward the Z axis of the world coordinate system;
the step of judging whether all vertices of the minimum bounding box are outside the blind-angle range comprises:
obtaining a first included angle between the line connecting each vertex of the minimum bounding box with the optical center of the camera and the Z axis of the world coordinate system;
judging whether the first included angles of all vertices of the minimum bounding box are greater than or equal to the blind angle.
8. The method according to claim 6, wherein after the step in which the target to be observed is within the field of view of the camera at the current focal length, the method further comprises:
Obtaining a minimum polygon surrounding all the vertexes from the pixel coordinate system by using fourth coordinates of all the vertexes of the minimum bounding box;
and taking the maximum value of the areas in all the minimum polygons as the imaging size of the object to be observed in the camera.
9. A camera screening method according to claim 3, wherein before the step of screening out the camera with the best observation effect according to the imaging size of the target to be observed in each camera in the second set and the observation angle of each camera, the method further comprises:
in response to the camera being a box camera, obtaining a first included angle between the reverse direction of the optical axis of the camera and the observation target orientation, and a second included angle between the vector formed by connecting the optical center of the camera with the center point of the minimum bounding box and the optical axis of the camera; wherein the observation angle is half of the sum of the first included angle and the second included angle;
in response to the camera being a dome camera, obtaining a third included angle between the vector formed by connecting the optical center of the camera with the center point of the minimum bounding box and the observation target orientation; wherein the observation angle is the third included angle.
10. The method according to claim 4 or 6, wherein the step of screening, from the intermediate set, the second set formed by all cameras for which the target to be observed is not occluded comprises:
obtaining an imaging region of the minimum bounding box under the pixel coordinate system according to fourth coordinates of all vertexes of the minimum bounding box;
Obtaining a fifth coordinate of the current pixel point on the virtual plane for each pixel point in the imaging area; the virtual plane is positioned between the camera and the target to be observed, and is perpendicular to the optical axis of the camera;
converting the fifth coordinate into a sixth coordinate in a world coordinate system;
a ray passing through the sixth coordinate is sent out from the optical center of the camera in a ray tracing mode, and an intersection point of the ray and the three-dimensional model is obtained;
in response to the intersection point being located inside the minimum bounding box or on the surface of the minimum bounding box, the current pixel point is not occluded; otherwise, the current pixel point is occluded;
in response to all pixel points in the imaging region being unoccluded, the target to be observed is not occluded.
11. A camera screening method according to claim 3, wherein the step of acquiring a first set of all cameras within a predetermined range of the target to be observed based on the position information comprises:
obtaining the maximum value of the effective observation distances of all cameras in the three-dimensional model;
determining the preset range by taking the center of the minimum bounding box as a sphere center and the maximum value as a radius;
A first set of all cameras located within the predetermined range is obtained.
12. A camera screening device comprising a processor and a memory coupled to each other, the processor and the memory cooperating to implement the camera screening method of any one of claims 1-11.
13. An apparatus having a storage function, characterized in that program data is stored thereon, which program data is executable by a processor to implement the camera screening method according to any one of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110819845.3A CN113674356B (en) | 2021-07-20 | 2021-07-20 | Camera screening method and related device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113674356A CN113674356A (en) | 2021-11-19 |
CN113674356B (en) | 2024-08-02
Family
ID=78539633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110819845.3A Active CN113674356B (en) | 2021-07-20 | 2021-07-20 | Camera screening method and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113674356B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114442805A (en) * | 2022-01-06 | 2022-05-06 | 上海安维尔信息科技股份有限公司 | Monitoring scene display method and system, electronic equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108174090A (en) * | 2017-12-28 | 2018-06-15 | 北京天睿空间科技股份有限公司 | Ball machine interlock method based on three dimensions viewport information |
CN113079369A (en) * | 2021-03-30 | 2021-07-06 | 浙江大华技术股份有限公司 | Method and device for determining image pickup equipment, storage medium and electronic device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103491339B (en) * | 2012-06-11 | 2017-11-03 | 华为技术有限公司 | Video acquiring method, equipment and system |
EP2835792B1 (en) * | 2013-08-07 | 2016-10-05 | Axis AB | Method and system for selecting position and orientation for a monitoring camera |
CN104881870A (en) * | 2015-05-18 | 2015-09-02 | 浙江宇视科技有限公司 | Live monitoring starting method and device for to-be-observed point |
CN108986161B (en) * | 2018-06-19 | 2020-11-10 | 亮风台(上海)信息科技有限公司 | Three-dimensional space coordinate estimation method, device, terminal and storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |