CN110602376B - Snapshot method and device and camera - Google Patents

Snapshot method and device and camera

Info

Publication number
CN110602376B
Authority
CN
China
Prior art keywords
snapshot
parameter
camera
focus
preset
Prior art date
Legal status
Active
Application number
CN201810603096.9A
Other languages
Chinese (zh)
Other versions
CN110602376A (en)
Inventor
龚起
马伟民
尤灿
徐鹏
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201810603096.9A
Publication of CN110602376A
Application granted
Publication of CN110602376B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals

Abstract

The application relates to the technical field of monitoring, and in particular to a snapshot method, a snapshot device and a camera. The snapshot method comprises the following steps: after a predetermined snapshot position is determined, acquiring a position angle parameter and a magnification parameter corresponding to the predetermined snapshot position; obtaining a snapshot focus parameter of the camera according to the position angle parameter and the magnification parameter; adjusting the focus of the camera according to the snapshot focus parameter; and enabling the camera to perform image capturing at the predetermined snapshot position. Because the snapshot focus of the camera can be set directly, the time spent searching for the focus position is shortened and focusing takes little time, so the camera can capture images in real time during monitoring, the focusing effect is good, and blurred captured pictures are reduced to a certain extent.

Description

Snapshot method and device and camera
Technical Field
The application relates to the technical field of monitoring, in particular to a snapshot method and device and a camera.
Background
A PTZ camera is a camera whose orientation can be moved horizontally and vertically in space and whose magnification can be controlled, so that the shooting angle and focus can be changed during monitoring. PTZ is short for Pan/Tilt/Zoom.
The process of capturing a human face by an existing PTZ camera in monitoring is as follows: the camera rotates and zooms to a scene area to be monitored according to a preset position to capture a human face; when the camera moves in place, the automatic focusing function is started immediately at the moment to focus the current shooting picture clearly; and when the focus is clear, the face recognition function is started to capture the face in the current picture.
This face snapshot method has the following defect: the speed of automatic focusing cannot meet the real-time requirement of face snapshot. That is, when the camera moves into place, the automatic focusing function must be executed, and automatic focusing converges the current shot picture gradually until it is clear. A certain amount of time is therefore consumed to ensure the definition of the current picture, and the real-time performance of the snapshot cannot be guaranteed.
Disclosure of Invention
In view of this, the embodiments of the present application provide a snapshot method and apparatus, and a camera, so as to improve the real-time performance of camera snapshot. The technical scheme is as follows:
in a first aspect, a snapshot method is provided, and the method includes:
after a preset snapshot position is determined, acquiring a position angle parameter and a multiplying power parameter corresponding to the preset snapshot position;
obtaining a snapshot focus parameter of the camera according to the position angle parameter and the magnification parameter;
adjusting the focus of the camera according to the snapshot focus parameters;
and enabling the camera to shoot the image of the shot object at the preset shooting position.
Optionally, the obtaining of the snapshot focus parameter of the camera according to the position angle parameter and the magnification parameter includes:
obtaining a snapshot ground object distance parameter according to an established space plane equation of the snapshot ground area with the camera as the origin, the position angle parameter and the magnification parameter, wherein the snapshot ground object distance parameter indicates the distance between the camera and a corresponding snapshot place during snapshot, and the corresponding snapshot place is an intersection point of a lens optical axis of the camera and the snapshot ground area during snapshot;
obtaining object distance parameters according to the snapshot ground object distance parameters and the space plane equation, wherein the object distance parameters indicate the distance between the camera and a snapshot object during snapshot;
obtaining the snapshot focus parameter according to the snapshot object distance parameter and the magnification parameter.
Optionally, the spatial plane equation is established by a method comprising:
acquiring corresponding datum point focus parameters and datum point multiplying power parameters when the lens optical axis of the camera is respectively aligned with three datum points which are not on the same straight line on the snapshot ground area and are clear;
for each reference point, obtaining a corresponding reference point object distance parameter according to the corresponding reference point focus parameter and the reference point multiplying power parameter;
for each reference point, obtaining a reference point coordinate parameter corresponding to the reference point in a space coordinate system taking the position of the camera as an origin according to the corresponding reference point object distance parameter and the position angle parameter;
and obtaining a space plane equation of the snapshot ground area by taking the camera as an origin according to the coordinate parameters of the reference points of the three reference points.
Optionally, the obtaining the object distance parameter according to the snapshot ground object distance parameter and the spatial plane equation includes:
obtaining a position height parameter of a camera according to the space plane equation, wherein the position height parameter indicates the distance which the camera passes by reaching the snapshot ground area along a gravity line;
and obtaining the object distance parameter according to the position height parameter, the preset object height parameter of the snapshot object and the snapshot ground object distance parameter.
Optionally, the obtaining the object distance parameter according to the position height parameter, a preset object height parameter and the ground object distance parameter includes:
and multiplying the snapshot ground object distance parameter by the ratio of the difference obtained by subtracting the preset object height parameter from the position height parameter to the position height parameter, so as to obtain the object distance parameter.
Optionally, the obtaining of the snapshot focus parameter of the camera according to the position angle parameter and the magnification parameter includes:
and finding out the corresponding snapshot focal parameters according to the position angle parameters and the magnification parameters in a prestored corresponding table of the position angle parameters, the magnification parameters and the snapshot focal parameters.
Optionally, after adjusting the focus of the camera according to the capturing focus parameter and before causing the camera to capture an image of the captured object at the predetermined capturing position, the method further comprises:
the focus of the camera is adjusted by means of sharpness-based autofocus.
Optionally, after adjusting the focus of the camera according to the capturing focus parameter and before causing the camera to capture an image of the captured object at the predetermined capturing position, the method further comprises: performing the snapshot when it is determined, at the predetermined snapshot position, that at least one of the following conditions is satisfied:
the image of the snapshot object exists in the shooting picture;
the image size of the snapshot object reaches a preset size standard;
and the definition of the image of the snapshot object reaches a preset definition standard.
The predetermined snapshot position is determined by: detecting whether a snapshot object appears in a monitoring picture, and when the snapshot object is detected, determining the position where the camera needs to be positioned to be aligned with the snapshot object according to the image of the snapshot object in the monitoring picture as the preset snapshot position;
the position angle parameter is a horizontal angle value and a vertical angle value corresponding to the preset snapshot position; the magnification parameter is a preset value, a magnification value corresponding to the position angle parameter or a magnification value corresponding to an interval where the position angle parameter is located.
Optionally, the determining, according to the image of the captured object in the monitoring screen, a position where the camera needs to be located in order to aim at the captured object includes:
detecting whether the size of an image of a snapshot object in a monitoring picture reaches a preset threshold value, and when the size of the image of the snapshot object in the monitoring picture reaches the preset threshold value, determining the position where the camera needs to be located and the snapshot object to be aligned according to the image of the snapshot object in the monitoring picture.
In a second aspect, the present application provides a snapshot apparatus, the apparatus comprising:
the acquisition module is used for acquiring position angle parameters and multiplying power parameters corresponding to a preset snapshot position after the preset snapshot position is determined;
the calculation module is used for obtaining the snapshot focus parameters of the camera according to the position angle parameters and the multiplying power parameters;
the adjusting module is used for adjusting the focus of the camera according to the snapshot focus parameters;
and the shooting module enables the camera to shoot the image of the shot object at the preset shooting position.
In a third aspect, the present application provides a camera comprising a processor and a memory, the memory for storing a program; and the processor is used for executing the program stored on the memory to realize the snapshot method.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
after the preset snapshot position is determined, the snapshot focus parameter of the camera is obtained according to the position angle parameter and the multiplying power parameter corresponding to the preset snapshot position of the camera, so that the focus during snapshot of the camera can be directly set.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flow chart of a method of snap-shooting according to a first embodiment of the present application;
FIG. 2 is a flow chart of a method of snap-shooting according to a second embodiment of the present application;
fig. 3 is a flowchart of an illustrative manner of acquiring a snapshot focus parameter according to a snapshot method of a second embodiment of the present application;
FIG. 4 is a schematic diagram of an exemplary rectangular spatial coordinate system with the camera as the origin;
FIG. 5 is a schematic XOY plane view of the rectangular spatial coordinate system of FIG. 4;
FIG. 6 is a schematic XOZ plane view of the rectangular spatial coordinate system of FIG. 4;
FIG. 7 is a diagram illustrating exemplary relationships among object distances, magnifications, and focuses for ensuring a clear image;
FIG. 8 is a flow chart of an exemplary manner of establishing the spatial plane equation of a snapshot ground area;
FIG. 9 is an exemplary schematic diagram of a spatial object distance model of a snapshot ground area;
FIG. 10 is an exemplary illustration of a two-dimensional planar model extracted from the spatial object distance model of FIG. 9;
fig. 11 is a schematic block diagram of a snapshot apparatus according to a third embodiment of the present application;
fig. 12 is a schematic diagram of an exemplary unit configuration of a calculation module of a snapshot apparatus according to a third embodiment of the present application;
FIG. 13 is an exemplary structural schematic of the units that establish the spatial plane equation of a snapshot ground area.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
In the present application, the method may generally be performed by a camera, which may be, for example, a PTZ camera, a common form of which is a dome camera; it is of course also possible that a computer device or other control device other than the camera controls the camera to perform corresponding actions and processes the relevant data to implement the method of the present application.
Fig. 1 is a flowchart of a snapshot method according to an embodiment of the present application. Referring to fig. 1, the method includes:
step 102: after the preset snapshot position is determined, acquiring a position angle parameter and a multiplying power parameter corresponding to the preset snapshot position;
step 104: obtaining a snapshot focus parameter of the camera according to the position angle parameter and the magnification parameter;
step 106: adjusting the focus of the camera according to the snapshot focus parameters;
step 108: and enabling the camera to perform image capturing at a preset capturing position.
According to the snapshot method provided by the embodiment of the application, after the preset snapshot position is determined, the snapshot focus parameter of the camera is obtained according to the position angle parameter and the multiplying factor parameter corresponding to the preset snapshot position of the camera, so that the focus during snapshot of the camera can be directly set.
Wherein, according to the position angle parameter and the magnification parameter, the snapshot focus parameter of the camera is obtained, which may include:
according to an established space plane equation taking the camera as an origin, position angle parameters and multiplying power parameters of the snapshot ground area, obtaining snapshot ground object distance parameters, wherein the snapshot ground object distance parameters indicate the distance from the camera to a corresponding snapshot place during snapshot, and the corresponding snapshot place is the intersection point of a lens optical axis of the camera and the snapshot ground area during snapshot;
obtaining object distance parameters according to the snapshot ground object distance parameters and the space plane equation, wherein the object distance parameters indicate the distance between the camera and the snapshot object during snapshot;
and obtaining a snapshot focus parameter according to the snapshot object distance parameter and the magnification parameter.
Wherein the spatial plane equation may be established by the following means, which may include:
acquiring corresponding reference point focus parameters and reference point multiplying power parameters when the lens optical axis of the camera is respectively aligned with three reference points which are not on the same straight line on the snapshot ground area and are clear;
for each reference point, obtaining a corresponding reference point object distance parameter according to the corresponding reference point focus parameter and the reference point multiplying power parameter;
for each reference point, obtaining a reference point coordinate parameter corresponding to the reference point in a space coordinate system taking the position of the camera as an origin according to the corresponding reference point object distance parameter and the position angle parameter;
and obtaining a space plane equation of the snapshot ground area with the camera as the origin according to the coordinate parameters of the reference points of the three reference points.
Obtaining object distance parameters according to the snapshot ground object distance parameters and the space plane equation, wherein the obtaining of the object distance parameters can include:
obtaining a position height parameter of the camera according to a space plane equation, wherein the position height parameter indicates the distance which the camera passes by reaching the snapshot ground area along the gravity line;
and obtaining an object distance parameter according to the position height parameter, a preset object height parameter of the snapshot object and the snapshot ground object distance parameter.
Wherein, according to the height parameter of the position, the height parameter of the preset object of the snapshotted object and the object distance parameter of the snapshotted ground, the object distance parameter is obtained, which may include:
and multiplying the snapshot ground object distance parameter by the ratio of the difference obtained by subtracting the preset object height parameter from the position height parameter to the position height parameter, so as to obtain the object distance parameter.
Wherein, according to the position angle parameter and the magnification parameter, the snapshot focus parameter of the camera is obtained, which may include:
and finding out the corresponding snapshot focus parameter according to the position angle parameter and the magnification parameter in a prestored corresponding table of the position angle parameter, the magnification parameter and the snapshot focus parameter.
Wherein after adjusting the focus of the camera according to the capturing focus parameter and before causing the camera to capture an image of the captured object at the predetermined capturing position, the method may further comprise:
the focus of the camera is adjusted by means of sharpness-based autofocus.
Wherein after adjusting the focus of the camera according to the capturing focus parameter and before causing the camera to capture an image of the captured object at the predetermined capturing position, the method may further comprise: performing the snapshot when it is determined, at the predetermined snapshot position, that at least one of the following conditions is satisfied:
the image of the snapshot object exists in the shooting picture;
the image size of the snapshot object reaches a preset size standard;
the definition of the image of the snapshot object reaches a preset definition standard.
Wherein the predetermined snapshot position is determined by: detecting whether a snapshot object appears in a monitoring picture, and determining the position where a camera needs to be positioned to be aligned with the snapshot object according to the image of the snapshot object in the monitoring picture when the snapshot object is detected, wherein the position is used as a preset snapshot position;
the position angle parameter is a horizontal angle value and a vertical angle value corresponding to a preset snapshot position; the multiplying power parameter is a preset value, a multiplying power value corresponding to the position angle parameter or a multiplying power value corresponding to an interval where the position angle parameter is located.
The method for determining the position where the camera needs to be located to align the snapshot object according to the image of the snapshot object in the monitoring picture comprises the following steps:
detecting whether the size of an image of a snapshot object in a monitoring picture reaches a preset threshold value, and determining the position where a camera needs to be positioned to align the snapshot object according to the image of the snapshot object in the monitoring picture when the size of the image of the snapshot object in the monitoring picture reaches the preset threshold value.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
Fig. 2 is a flowchart of a snapshot method according to an embodiment of the present application. The method is performed by a camera. As shown in fig. 2, the method includes step 202, step 204, step 206, and step 208.
In step 202, after the predetermined capturing position is determined, the camera acquires a position angle parameter and a magnification parameter corresponding to the predetermined capturing position.
The magnification parameter generally refers to the ratio of the focal length of the camera lens setting to its minimum focal length.
The position angle parameter generally refers to the horizontal angle and the vertical angle of the camera, and represents the position of the camera, which may also be referred to as PT position, where P represents the horizontal angle and T represents the vertical angle.
The position angle parameter and the magnification parameter of the camera at the time of capturing can be acquired by acquiring a value detected by a sensor or the like or a stored predetermined value.
The position angle parameter depends on the position and angle of the camera at the time of capturing.
The magnification parameter may be constant or may be set according to actual conditions or preset rules, but it should be understood that after the position angle parameter corresponding to the predetermined snapshot position is determined, the corresponding magnification parameter is also determined.
In surveillance applications, the camera is typically rotatable, possibly manually; the camera can be periodically rotated according to a set track, for example, a certain position angle can be set as a snapshot position, and then the camera is set to reach the position angle at fixed time; it is also possible to rotate toward a position where a preset object of interest is aligned when the object is detected to appear in the image pickup picture of the local or collaborative camera.
When the position angle parameter and the multiplying power parameter corresponding to the preset snapshot position are obtained, the camera may be in the way of reaching the preset snapshot position, namely the current position angle parameter is inconsistent with the position angle parameter during snapshot; or may have reached the predetermined snapshot position or be originally at the predetermined snapshot position, that is, the current position angle parameter is consistent with the position angle parameter during snapshot, and the position angle parameter during snapshot is acquired in real time.
The predetermined snapshot position may be determined by: and detecting whether a snapshot object appears in the monitoring picture, and determining the position where the camera needs to be positioned to align the snapshot object according to the image of the snapshot object in the monitoring picture when the snapshot object is detected, wherein the position is used as a preset snapshot position.
The monitoring picture can be a real-time shot picture of the camera or a real-time shot picture of other cameras, and the snapshot object can be a human face, for example. When a snapshot object is detected in the monitoring picture, the position and size of the image of the object in the monitoring picture can be positioned, the position which the camera needs to be positioned when aiming at the object can be determined according to the position, namely the preset snapshot position, and the position angle parameter and the magnification parameter can be obtained. For example, the position angle parameter is a horizontal angle value and a vertical angle value corresponding to a predetermined snapshot position, that is, a PT position; and the multiplying power parameter is a preset value, a multiplying power value corresponding to the position angle parameter or a multiplying power value corresponding to the interval of the position angle parameter.
In order to ensure that the setting of the snapshot focus parameter is accurate and the snapshot frame is clear, the subsequent snapshot operation may be started only when the size of the image of the snapshot object in the monitor frame (for example, the size in the vertical direction in the frame) reaches a predetermined threshold value. One implementation is: and detecting whether the image size of the snapshot object in the monitoring picture reaches a preset threshold value, when the image size of the snapshot object in the monitoring picture reaches the preset threshold value, determining the position where the camera needs to be positioned to align the snapshot object according to the image of the snapshot object in the monitoring picture, and continuing to perform subsequent snapshot operation.
In step 204, the camera obtains a snapshot focus parameter according to the position angle parameter and the magnification parameter.
One way may be: the corresponding table of the position angle parameter, the magnification parameter and the snapshot focus parameter is stored in advance, and data in the corresponding table can be obtained through actual measurement or through a calculation mode described below. Therefore, the snapshot focus parameters of the camera corresponding to the position angle parameters and the multiplying power parameters can be directly searched in the pre-stored corresponding table. That is, the magnification parameter under the position angle parameter is searched in the corresponding table, and then the corresponding snapshot focus parameter is searched; or firstly, the position angle parameter under the magnification parameter is found, and then the corresponding snapshot focus parameter is found. The magnification parameter and the position angle parameter in the correspondence table may be ranges or determined values. It should be understood that, although the position angle parameters include a horizontal angle and a vertical angle, in general, the vertical angle may vary or not vary only between several positions, and the magnification parameter may also vary or not vary only between several positions, so that the stored table data is not necessarily huge.
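As a minimal sketch of this table-lookup approach (the binning of position angles into intervals and all sample entries below are assumptions for illustration, not data from the application), the correspondence table and the search could look like this:

```python
# A minimal sketch of the pre-stored correspondence table described above.
# The binning of position angles into intervals and all sample entries are
# assumptions for illustration; real entries would come from measurement or
# from the calculation described below.
# (pan_bin_deg, tilt_bin_deg, magnification) -> snapshot focus parameter
FOCUS_TABLE = {
    (90, -30, 4): 1520,
    (90, -20, 4): 1660,
    (180, -30, 8): 2310,
    (180, -20, 8): 2480,
}

def lookup_snapshot_focus(pan_deg, tilt_deg, magnification, bin_size=10):
    """Quantise the position angle into the table's intervals and return the
    stored snapshot focus parameter, or None if no entry exists."""
    pan_bin = int(round(pan_deg / bin_size)) * bin_size
    tilt_bin = int(round(tilt_deg / bin_size)) * bin_size
    return FOCUS_TABLE.get((pan_bin, tilt_bin, magnification))

print(lookup_snapshot_focus(92.3, -28.7, 4))   # -> 1520
```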
Another way may be by real-time calculations, which is detailed below. Fig. 3 shows a flow diagram of an illustrative manner of acquiring snapshot focus parameters. This approach includes step 2042, step 2044 and step 2046.
In step 2042, a snapshot ground object distance parameter is obtained according to the established spatial plane equation of the snapshot ground area corresponding to the camera and using the camera as the origin, the position angle parameter and the magnification parameter of the camera, the snapshot ground object distance parameter indicates the distance from the camera to a corresponding snapshot place during snapshot, and the corresponding snapshot place is the intersection point of the lens optical axis of the camera and the snapshot ground area during snapshot. The time of capturing here refers to the time when capturing is assumed to be performed, when the camera is in a predetermined capturing position.
It should be understood that, in the present application, a location corresponding to a snapshot generally refers to an intersection point of an optical axis of a lens of a camera during the snapshot and a ground area during the snapshot, that is, a location corresponding to a center of a picture captured by the lens when no other object exists between the lens and the ground during the snapshot, and a ground area corresponding to the camera is a ground area within a camera shooting monitoring range of the camera and including the location corresponding to the snapshot, which is a part or all of the monitored ground area.
When a spatial plane equation of a snapshot ground area corresponding to a camera is established, a rectangular coordinate system with the camera as an origin is generally established, and a correspondence relationship among an object distance, a magnification and a focus of a camera lens when a clear picture is ensured needs to be determined, which are described below.
Fig. 4 is a schematic diagram of an exemplary rectangular spatial coordinate system with the camera as the origin, in which the camera is abstracted to an origin O (which may be considered the point that the optical axis of the lens always passes through when the camera is at different positions), the X axis and the Y axis are two mutually perpendicular axes on the horizontal plane, and the Z axis is the vertical axis, parallel to the gravity line. Fig. 5 is a schematic XOY plane view of the rectangular spatial coordinate system of Fig. 4, illustrating the rectangular planar coordinate system swept out by one full rotation of the camera at any vertical angle (except 90 degrees): the positive X-axis direction is defined as a horizontal angle of 0 degrees, the negative X-axis direction as 180 degrees, the positive Y-axis direction as 90 degrees and the negative Y-axis direction as 270 degrees, so that the whole plane is divided into 360 degrees in the counterclockwise direction. Fig. 6 is a schematic XOZ plane view of the rectangular spatial coordinate system of Fig. 4, illustrating the rectangular planar coordinate system swept out by rotating the camera half a turn at any horizontal angle: the positive X-axis direction is defined as a vertical angle of 0 degrees, the positive Z-axis direction as 90 degrees and the negative Z-axis direction as -90 degrees, the half plane starting from the positive X-axis direction is divided into 180 degrees in the counterclockwise direction, and a vertical angle of 0 degrees corresponds to the horizontal direction.
Fig. 7 is a schematic diagram illustrating exemplary correspondence relationships among object distance, magnification and focus when a clear picture is ensured. The abscissa is the magnification K and the ordinate is the focus F, and the curves in the coordinate system are the relationship curves between the magnification K and the focus F for object distances L1, L2, L3 and L4 when the picture is kept clear. It can be seen that at magnification K1 different object distances L correspond to different focus values F, and at magnification K2 different object distances L likewise correspond to different focus values F; that is, once any two of L, K and F are determined, the third one is determined. From this correspondence diagram, a functional expression that must hold for the lens picture of the camera to be clear can be abstracted:
F(L,K,F)=0
wherein L is the object distance, K is the magnification, and F is the focus. It should be understood that the term "clear shot" herein mainly means that the subject has a clear image.
The functional expression shows that the other variable can be solved by any two variables, and the solution is unique, for example, the object distance L can be solved by two variables of the focus F and the multiplying power K. For example, the solution may be performed by using a fitting function or a corresponding data table.
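As one possible realization of this solving step, the sketch below recovers the object distance L from the magnification K and focus F by linear interpolation over a per-magnification calibration table; the calibration values are assumptions for illustration and would in practice come from measurement.

```python
# A sketch of one way to realise F(L, K, F) = 0: per-magnification calibration
# pairs of (focus value, object distance), with linear interpolation to recover
# the object distance L from a measured focus F. All values are assumed.
CALIBRATION = {
    4: [(1400, 5.0), (1550, 10.0), (1650, 20.0), (1700, 40.0)],
    8: [(2200, 5.0), (2380, 10.0), (2470, 20.0), (2520, 40.0)],
}

def object_distance_from_focus(k, f):
    """Linearly interpolate the object distance for focus f at magnification k."""
    table = CALIBRATION[k]
    for (f0, l0), (f1, l1) in zip(table, table[1:]):
        if f0 <= f <= f1:
            return l0 + (l1 - l0) * (f - f0) / (f1 - f0)
    raise ValueError("focus value outside the calibrated range")

print(object_distance_from_focus(4, 1600))  # about 15 m
```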
The following describes how to establish a spatial plane equation of the snapshot ground area by combining the above spatial rectangular coordinate system with the camera as the origin and the corresponding relationship among the object distance, the magnification and the focus of the camera when the picture is clear.
In this embodiment, the establishing of the spatial plane equation of the snapshot ground area is to perform plane calibration on the snapshot ground area in a spatial rectangular coordinate system with the camera as an origin, that is, determine a plane by three points which are not on the same straight line in the snapshot ground area, and calculate a plane equation, that is, the spatial plane equation.
Therefore, after the space plane equation has been solved, the ground object distance of each place in the snapshot ground area can be obtained in combination with the correspondence relation among the object distance, the magnification and the focus of the camera when the shot picture is clear.
It should be understood that the snapshot ground area should be approximately a plane, otherwise an error occurs. In order to reduce the error, the monitored ground area within the monitoring range of the camera may be divided into two or more snapshot ground areas according to the terrain, each snapshot ground area being kept as close to a plane as possible; when the snapshot focus parameter is obtained, it is first determined which snapshot ground area the corresponding snapshot place lies in, and the plane equation of that snapshot ground area is adopted as the established space plane equation of the snapshot ground area.
As an example, the spatial plane equation for the snapshot ground area may be established by the following method, as shown in Fig. 8, comprising the following steps:
in step 802, when the optical axis of the lens of the camera is respectively aligned with three reference points which are not on the same straight line on the snapshot ground area and the shot picture is clear, the corresponding reference point focus parameter and reference point magnification parameter are acquired.
For example, fig. 9 is an exemplary diagram of a spatial object distance model of a captured ground area, assuming that a circular area ABCD is an area to be monitored by a camera and three points M, N, P are feature points to be calibrated, and the calibration processes of the three points are all the same, and the calibration is performed for obtaining coordinate positions (X, Y, Z) in a spatial rectangular coordinate system of the three points with an initial position of the camera as an origin O.
Taking point N as an example, the calibration process is to first obtain the magnification parameter K and the focus parameter F of the monitoring position when the optical axis of the lens of the camera is aligned with point N and the picture is clear. The magnification parameter K can be set according to the desired picture range, and the focus parameter F is obtained by adjusting the focus until the picture is clear at the set magnification parameter K. The magnification parameter and focus parameter of point M and point P can be obtained in the same way.
In step 804, for each reference point, a corresponding reference point object distance parameter is obtained according to the corresponding reference point focus parameter and the reference point magnification parameter.
For example, for point N in fig. 9, after the magnification parameter K and the focus parameter F are obtained, the object distance parameter L can be solved from the expression F(L, K, F) = 0. The object distance parameters of point M and point P can be obtained in the same way.
In step 806, for each reference point, obtaining a corresponding reference point coordinate parameter in a space coordinate system with the position of the camera as an origin according to the corresponding reference point object distance parameter and the position angle parameter corresponding to the camera at the time;
for example, for point N in fig. 9, coordinate values (X, Y, Z) of the currently monitored point N on the spatial rectangular coordinate system can be calculated by solving the object distance parameter L and the current camera position angle parameter, where the position angle parameter is represented by P, T, P represents a horizontal angle, and T represents a vertical angle. The calculation formula of the coordinate values is as follows:
X=Fx(L,P,T)
Y=Fy(L,P,T)
Z=Fz(L,P,T)
where Fx(L, P, T) means that the X value can be calculated from L, P and T, and the Y value and Z value can be calculated from the corresponding formulas. This is equivalent to knowing the length and spatial angles of a line segment from the origin to a certain point and solving for the spatial coordinates of that point. Similarly, the coordinates of point M and point P can be calculated by this method.
In step 808, a spatial plane equation of the ground area is captured in a spatial coordinate system with the position of the camera as the origin according to the reference point coordinate parameters of the three reference points.
For example, let the coordinate parameters of point M, point N and point P be (X1, Y1, Z1), (X2, Y2, Z2) and (X3, Y3, Z3) respectively. The plane equation determined by the three points, aX + bY + cZ + d = 0, can be solved from the coordinates of the three points in the space coordinate system, i.e. the values of a, b, c and d can be obtained. The plane equation aX + bY + cZ + d = 0 of the snapshot ground area ABCD in the space coordinate system with the position of the camera as the origin O is thereby obtained, namely the space plane equation of the snapshot ground area corresponding to the camera.
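The two calibration steps above (converting each reference point's object distance and position angles into camera-centred coordinates, then determining the plane through the three points) can be sketched as follows. The explicit trigonometric forms used for Fx, Fy and Fz are an assumption consistent with the angle conventions of Figs. 5 and 6 (the application leaves them abstract), and the numeric reference-point values are invented examples.

```python
# Sketch of steps 806 and 808: converting each reference point's object
# distance and position angles into camera-centred coordinates, then fitting
# the plane aX + bY + cZ + d = 0 through the three points. The trigonometric
# forms below for Fx, Fy, Fz are assumed, matching Figs. 5 and 6.
import math

def reference_point_xyz(distance, pan_deg, tilt_deg):
    p, t = math.radians(pan_deg), math.radians(tilt_deg)
    x = distance * math.cos(t) * math.cos(p)   # Fx(L, P, T)
    y = distance * math.cos(t) * math.sin(p)   # Fy(L, P, T)
    z = distance * math.sin(t)                 # Fz(L, P, T)
    return (x, y, z)

def plane_through(p1, p2, p3):
    """Return (a, b, c, d) such that aX + bY + cZ + d = 0 passes through the
    three points (cross product of two edge vectors gives the plane normal)."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    a = u[1] * v[2] - u[2] * v[1]
    b = u[2] * v[0] - u[0] * v[2]
    c = u[0] * v[1] - u[1] * v[0]
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return a, b, c, d

# Assumed example: object distances for M, N, P already solved from (K, F)
M = reference_point_xyz(14.0, 100.0, -25.0)
N = reference_point_xyz(18.0, 130.0, -20.0)
P = reference_point_xyz(11.0, 160.0, -32.0)
print(plane_through(M, N, P))   # coefficients a, b, c, d of the ground plane
```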
After a space plane equation which takes the camera as an original point and corresponds to the snapshot ground area of the camera is established, a snapshot ground object distance parameter can be obtained according to the space plane equation, the position angle parameter and the magnification parameter of the camera, wherein the snapshot ground object distance parameter indicates the distance from the camera to the corresponding snapshot place during snapshot.
Still referring to fig. 9, after the spatial plane equation is established, the object distance parameter of the snapshot ground at any position can be obtained, for example, for the corresponding position E of the snapshot, the object distance is set as unknown number L1, the horizontal and vertical position angles corresponding to the camera are known and are respectively denoted as P1 and T1, and then the following equation set is solved:
aX+bY+cZ+d=0
X=Fx(L1,P1,T1)
Y=Fy(L1,P1,T1)
Z=Fz(L1,P1,T1)
the object distance L1 corresponding to the snapshot corresponding location E can be solved.
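A sketch of solving this equation set: substituting X = Fx(L1, P1, T1), Y = Fy(L1, P1, T1) and Z = Fz(L1, P1, T1) (the same assumed trigonometric forms as in the previous sketch) into aX + bY + cZ + d = 0 leaves a single linear equation in L1.

```python
# Sketch of solving the equation set for the snapshot ground object distance
# L1: the substitution reduces it to one linear equation in L1. The direction
# formulas are the same assumed Fx/Fy/Fz as before; plane values are examples.
import math

def ground_object_distance(plane, pan_deg, tilt_deg):
    a, b, c, d = plane
    p, t = math.radians(pan_deg), math.radians(tilt_deg)
    dx = math.cos(t) * math.cos(p)
    dy = math.cos(t) * math.sin(p)
    dz = math.sin(t)
    denom = a * dx + b * dy + c * dz
    if abs(denom) < 1e-9:
        raise ValueError("optical axis is parallel to the snapshot ground plane")
    return -d / denom

plane = (0.0, 0.0, 1.0, 6.0)   # assumed example: ground plane 6 m below the camera
print(ground_object_distance(plane, 120.0, -30.0))  # L1 for the snapshot place E
print(ground_object_distance(plane, 0.0, -90.0))    # H1: camera height OH (used below)
```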
In step 2044, obtaining object distance parameters of the camera according to the snapshot ground object distance parameters and the space plane equation;
the object-to-object distance parameter of the camera indicates the distance of the camera from the object to be captured at the time of capture. The object distance parameter of the camera at the time of capturing can be obtained in the following manner.
Firstly, calculating the position height parameter of the camera according to a space plane equation, wherein the position height parameter of the camera indicates the distance which the camera passes by reaching the snapshot ground area along the gravity line.
And then, obtaining the object distance parameter according to the position height parameter of the camera, the preset height parameter of the snapshot object and the snapshot ground object distance parameter. Specifically, the object distance parameter may be obtained by multiplying the ratio of the difference obtained by subtracting the preset height parameter from the position height parameter to the position height parameter by the snap-shot ground object distance parameter.
This way of obtaining the object distance parameter of the camera at the time of the snapshot is described below as an example.
For example, the two-dimensional plane model through the camera O, the camera's free landing point H (the intersection of the gravity line through the camera and the snapshot ground area) and the corresponding snapshot place E is extracted from the space object distance model of fig. 9 for analysis, i.e. the plane OEH, as shown in fig. 10. OH is the position height parameter of the camera, i.e. the distance travelled from the camera along the gravity line to the ground ABCD. The line segment GJ is the preset object height parameter of the snapshot object in the plane OEH. The line segment OG is the distance from the camera to the snapshot object, i.e. the object distance parameter. When the snapshot object is, for example, a human face, the face is located at the highest position of the human body, so the height of the face center is taken by default to be the whole height of the human body, or the error of the face center height relative to the whole body height is considered negligible, and the average human height can be used as the preset object height parameter. Point E is the intersection of the extension of line OG with the ground ABCD; that is, if the lens optical axis of the camera shoots along OG and no snapshot object is present, the center of the picture taken by the camera is the point E on the ground.
According to the two-dimensional plane model of fig. 10, the length of line segment OG, that is, the object distance of the human face from the camera, can be calculated. The position angle parameters of line OG, i.e. the parameters P and T, can be measured directly. The length of line segment OE can be calculated in the manner described above, that is, from the current parameters P, T and the established space plane equation, and is recorded as the snapshot ground object distance parameter L1. The segment OH is the length from the origin O along the vertical (gravity line) direction to the plane ABCD; since the equation of the plane ABCD is known, it can be calculated directly as the camera position height parameter H1. On the other hand, OH is essentially a special snapshot ground object distance whose position angle parameters are known, so the length H1 of OH can also be calculated by the same method used to calculate the object distance L1. It should be understood that in some embodiments, L1 and/or H1 may also be measured directly by a laser rangefinder or the like. The length of line segment GJ is the height of the snapshot object; it can be set manually or by default in a program, and is recorded as the preset object height parameter H2. Because triangle HOE is similar to triangle JGE, the following formula for the object distance L2 is obtained from the triangle similarity theorem:
L2=L1*(H1-H2)/H1
that is, the object distance L2 can be obtained by the above formula. It should be understood that the angle OHE need not be a right angle, but could be an acute or obtuse angle, i.e., the plane ABCD need not be a horizontal plane, and the above formula would still apply.
In step 2046, a snapshot focus parameter is obtained according to the snapshot object distance parameter and the magnification parameter of the camera;
in step 2044, the object distance parameter L2 of the camera has already been obtained, so the object distance parameter L2 and the magnification parameter K of the camera can be substituted directly into the formula F(L, K, F) = 0 to obtain the snapshot focus parameter F. In practice, for each magnification parameter K, a corresponding data table of L and F may be set up, or a fitting function expression of L and F may be established, to find the snapshot focus parameter F.
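A sketch of this last step, using a per-magnification data table of object distance versus focus (the inverse direction of the earlier calibration sketch; the values are again assumed):

```python
# Sketch of step 2046: for each magnification K a data table of (object
# distance, focus) pairs is kept, and the snapshot focus F is interpolated
# from the calculated object distance L2. The values are assumed and mirror
# the earlier calibration sketch with the pairs inverted.
FOCUS_BY_DISTANCE = {
    4: [(5.0, 1400), (10.0, 1550), (20.0, 1650), (40.0, 1700)],
    8: [(5.0, 2200), (10.0, 2380), (20.0, 2470), (40.0, 2520)],
}

def snapshot_focus(k, l2):
    """Linearly interpolate the snapshot focus parameter F for object distance l2."""
    table = FOCUS_BY_DISTANCE[k]
    for (l0, f0), (l1, f1) in zip(table, table[1:]):
        if l0 <= l2 <= l1:
            return f0 + (f1 - f0) * (l2 - l0) / (l1 - l0)
    raise ValueError("object distance outside the calibrated range")

print(snapshot_focus(4, 8.6))   # interpolated snapshot focus parameter F
```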
In step 206, the camera adjusts the focus according to the snapshot focus parameters;
after the snapshot focus parameters of the camera are obtained in the previous step 204, the focus motor of the camera may be controlled to drive the corresponding lens to move so that the focus matches the snapshot focus parameters F.
In step 208, the camera takes a snapshot of the image at a predetermined snapshot location.
After the focus of the camera has been adjusted in the previous step 206, the camera reaches the predetermined capturing position (rotates to the predetermined capturing position or is already at the predetermined capturing position), and image capturing is performed on the captured object at the set magnification and focus.
The camera can take the snapshot regardless of whether the snapshot object is present in the picture or how clear it is. Alternatively, the snapshot may be taken only after detecting that the snapshot object exists or ensuring that the picture is clear: if the snapshot object does not exist or the picture is not clear enough, the snapshot can be suspended and taken once the snapshot object moves into the area corresponding to the picture or reaches a position where it can be presented clearly; or the camera can move on to the next snapshot position without waiting.
One alternative is: after the focus is adjusted and before the snapshot is taken, the focus is adjusted again by means of sharpness-based autofocus. This is because the previously adjusted focus is a predictive focus; there may be errors in the calculation, or the captured object may not have completely reached the predetermined position, so capturing can be performed after the image of the object is made clear by adjusting the focus again in the sharpness-based autofocus manner. Compared with a purely sharpness-based autofocus mode, because coarse focusing has already been performed, the focus is already near the actual proper focus, which reduces the subsequent focusing time and improves the capturing speed; compared with the purely predictive focusing described earlier in this embodiment, it improves the focusing precision and makes the captured object image clearer. The sharpness-based autofocus mode refers to a focusing mode in which the focus is adjusted until the definition of the central area of the picture or of the snapshot object reaches a preset definition standard.
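A minimal sketch of such a refinement is a narrow sweep around the predicted snapshot focus; set_focus, capture_frame and sharpness are hypothetical camera/driver hooks, not part of the application.

```python
# A minimal sketch of the sharpness-based refinement: a narrow sweep around the
# predicted snapshot focus. set_focus(), capture_frame() and sharpness() are
# hypothetical camera/driver hooks, not part of the application.
def refine_focus(predicted_focus, set_focus, capture_frame, sharpness,
                 step=8, max_offsets=6):
    """Search a small window around predicted_focus and keep the sharpest setting."""
    best_f, best_score = predicted_focus, None
    for offset in range(-max_offsets, max_offsets + 1):
        f = predicted_focus + offset * step
        set_focus(f)
        score = sharpness(capture_frame())
        if best_score is None or score > best_score:
            best_f, best_score = f, score
    set_focus(best_f)
    return best_f
```

Because the coarse, predicted focus is already close, the sweep window can stay small, which is where the time saving over conventional autofocus comes from.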
Another alternative is: after the focus is adjusted and before the snapshot is taken, the snapshot is performed when it is determined, at the predetermined snapshot position, that at least one of the following conditions is satisfied: an image of the snapshot object exists in the shooting picture; the size of the image of the snapshot object in the shooting picture reaches a preset size standard; the definition of the image of the snapshot object in the shooting picture reaches a preset definition standard. That is, the picture shot by the camera is detected first, and the snapshot is taken only if an image of the object to be captured exists, which avoids capturing useless pictures when no object is present; or, further, the snapshot is taken only after the size of the image of the snapshot object in the shooting picture reaches the preset size standard or its definition reaches the preset definition standard, and if the size of the image has not reached the preset size standard, the snapshot can be taken after its definition reaches the preset definition standard. When the size of the image of the snapshot object in the shooting picture reaches the preset size standard or its definition reaches the preset definition standard, the distance between the snapshot object and the camera is just near the calculated object distance, so the image of the snapshot object is clearer.
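For illustration only, this gating might be sketched as follows; detect_object, image_size and image_sharpness are hypothetical helpers, the thresholds are assumed example values, and which conditions are enforced is a deployment choice.

```python
# For illustration only: the optional pre-snapshot gate described above.
# detect_object(), image_size() and image_sharpness() are hypothetical helpers,
# the thresholds are assumed example values, and which of the optional
# conditions are enforced is a deployment choice.
def should_snapshot(frame, detect_object, image_size, image_sharpness,
                    check_size=False, check_definition=False,
                    size_standard=120, definition_standard=0.6):
    obj = detect_object(frame)
    if obj is None:
        return False                                     # no snapshot object in the picture
    if check_size and image_size(obj) < size_standard:
        return False                                     # below the preset size standard
    if check_definition and image_sharpness(frame, obj) < definition_standard:
        return False                                     # below the preset definition standard
    return True
```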
According to the snapshot method provided by the embodiment, after the preset snapshot position is determined, the snapshot focus parameter of the camera is obtained according to the position angle parameter and the magnification parameter corresponding to the preset snapshot position, so that the focus during snapshot of the camera can be directly set.
A third embodiment of the present application provides a snapshot apparatus, as shown in fig. 11, including:
an obtaining module 1102, configured to obtain a position angle parameter and a magnification parameter corresponding to a predetermined snapshot position after the predetermined snapshot position is determined;
the calculation module 1104 is used for obtaining a snapshot focus parameter of the camera according to the position angle parameter and the magnification parameter;
an adjusting module 1106, which adjusts the focus of the camera according to the snapshot focus parameters;
and a shooting module 1108, which enables the camera to shoot images at the preset shooting position.
The obtaining module 1102 may further be configured to: detecting whether a snapshot object appears in a monitoring picture, and determining the position where a camera needs to be positioned to be aligned with the snapshot object according to the image of the snapshot object in the monitoring picture when the snapshot object is detected, wherein the position is used as a preset snapshot position;
the position angle parameter is a horizontal angle value and a vertical angle value corresponding to a preset snapshot position; the multiplying power parameter is a preset value, a multiplying power value corresponding to the position angle parameter or a multiplying power value corresponding to an interval where the position angle parameter is located.
The obtaining module 1102 may further be configured to:
detecting whether the size of an image of a snapshot object in a monitoring picture reaches a preset threshold value, and determining the position where a camera needs to be positioned to align the snapshot object according to the image of the snapshot object in the monitoring picture when the size of the image of the snapshot object in the monitoring picture reaches the preset threshold value.
As shown in fig. 12, the calculation module 1104 may include:
the ground object distance calculation unit 11042 obtains a snapshot ground object distance parameter according to an established spatial plane equation of the snapshot ground area with the camera as the origin, the position angle parameter and the magnification parameter, wherein the snapshot ground object distance parameter indicates the distance from the camera to a corresponding snapshot place during snapshot, and the corresponding snapshot place is the intersection point of the lens optical axis of the camera and the snapshot ground area during snapshot;
the object distance calculation unit 11044 obtains an object distance parameter according to the snapshot ground object distance parameter and the spatial plane equation, wherein the object distance parameter indicates the distance from the camera to the snapshot object during snapshot;
the snapshot focus calculation unit 11046 obtains a snapshot focus parameter according to the snapshot object distance parameter and the magnification parameter.
The spatial plane equation can be established by the following units, as shown in fig. 13, including:
a reference point parameter acquiring unit 1302, configured to acquire a focus parameter and a reference point magnification parameter of a corresponding reference point when the lens optical axis of the camera is respectively aligned with three reference points that are not on the same straight line on the snapshot ground area and is clear;
a reference point object distance calculation unit 1304 for obtaining a reference point object distance parameter corresponding to each reference point according to the reference point focus parameter and the reference point magnification parameter corresponding to each reference point;
a reference point coordinate calculation unit 1306, for each reference point, obtaining a reference point coordinate parameter corresponding to the reference point in a space coordinate system with the position of the camera as an origin according to the reference point object distance parameter and the position angle parameter corresponding to the reference point;
the plane equation calculation unit 1308 obtains a spatial plane equation of the snapshot ground area with the camera as the origin according to the reference point coordinate parameters of the three reference points.
The object distance calculation unit 11044 may be specifically configured to:
obtaining a position height parameter of the camera according to a space plane equation, wherein the position height parameter indicates the distance which the camera passes by reaching the snapshot ground area along the gravity line;
and obtaining an object distance parameter according to the position height parameter, a preset object height parameter of the snapshot object and the snapshot ground object distance parameter.
Wherein the object distance calculation unit 11044 is further operable to:
and multiplying the snapshot ground object distance parameter by the ratio of the difference obtained by subtracting the preset object height parameter from the position height parameter to the position height parameter, so as to obtain the object distance parameter.
The calculating module 1104 may be specifically configured to:
and finding out the corresponding snapshot focus parameter according to the position angle parameter and the magnification parameter in a prestored corresponding table of the position angle parameter, the magnification parameter and the snapshot focus parameter.
Wherein, the adjusting module 1106 is further configured to:
after adjusting the focus of the camera according to the snapshot focus parameters and before enabling the camera to conduct image snapshot on the snapshot object at the preset snapshot position, the focus of the camera is adjusted through a definition-based automatic focusing mode.
The shooting module 1108 is further configured to:
after adjusting the focus of the camera according to the snapshot focus parameters and before enabling the camera to conduct image snapshot on the snapshot object at the preset snapshot position, performing snapshot when at least one of the following conditions is determined to be met at the preset snapshot position:
an image of the snapshot object exists in the shooting picture;
the size of the image of the snap-shot object in the shot picture reaches a preset size standard;
and the definition of the image of the snap-shot object in the shot picture reaches a preset definition standard.
With the snapshot device provided by this embodiment, after the preset snapshot position is determined, the snapshot focus parameter of the camera is obtained according to the position angle parameter and the magnification parameter corresponding to that position, so the focus can be set directly when the camera takes the snapshot. Compared with conventional autofocus, the time spent searching for the focus position is reduced and focusing takes little time, so the camera can capture images in real time during monitoring; the focusing effect is good, and blurred pictures are reduced to a certain extent.
A fourth embodiment of the present application provides a camera including a processor and a memory, the memory being used for storing a program and the processor being used for executing the program stored in the memory to implement the snapshot method of the first and second embodiments. The camera may be a PTZ camera or another suitable type of camera; it is understood that the camera also has the hardware necessary for capturing and monitoring, the hardware necessary for other functions, and corresponding software, which are not described in detail herein.
The second embodiment is the most detailed of the above embodiments, and its implementation details also apply to the other embodiments.
It will be understood by those skilled in the art that all or part of the steps and modules for implementing the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
The present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed.
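For reference, a minimal end-to-end Python sketch of the described computation, chaining the pieces sketched above: the optical axis is intersected with the established ground plane to obtain the snapshot ground object distance, the camera height is taken along the gravity line, the object distance follows from the similar-triangles ratio, and a hypothetical lens calibration (`focus_from_distance`, an assumption) converts it, together with the magnification parameter, into the snapshot focus parameter. In this sketch the magnification parameter enters only through that final calibration.

```python
import numpy as np

def optical_axis_direction(pan_deg, tilt_deg):
    """Unit vector along the lens optical axis for the given position angle
    parameter (same angle convention as the reference-point sketch above)."""
    pan, tilt = np.radians(pan_deg), np.radians(tilt_deg)
    return np.array([np.cos(tilt) * np.cos(pan),
                     np.cos(tilt) * np.sin(pan),
                     np.sin(tilt)])

def ground_object_distance(normal, d, pan_deg, tilt_deg):
    """Distance from the camera (the origin) to the point where the optical
    axis meets the ground plane normal . p = d (the corresponding snapshot place)."""
    direction = optical_axis_direction(pan_deg, tilt_deg)
    denom = float(np.dot(normal, direction))
    if abs(denom) < 1e-9:
        raise ValueError("the optical axis is parallel to the ground plane")
    distance = d / denom
    if distance <= 0:
        raise ValueError("the optical axis does not reach the ground in front of the camera")
    return distance

def camera_height(normal, d):
    """Distance travelled straight down along gravity (the +z axis of this
    sketch) from the camera to the ground plane."""
    if abs(normal[2]) < 1e-9:
        raise ValueError("the ground plane is vertical in this frame")
    return d / float(normal[2])

def snapshot_focus(normal, d, pan_deg, tilt_deg, magnification,
                   object_height, focus_from_distance):
    """Snapshot ground object distance -> object distance -> snapshot focus
    parameter; focus_from_distance stands in for the lens calibration mapping
    (object distance, magnification) to a focus value."""
    d_ground = ground_object_distance(normal, d, pan_deg, tilt_deg)
    height = camera_height(normal, d)
    d_object = d_ground * (height - object_height) / height
    return focus_from_distance(d_object, magnification)
```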

Claims (10)

1. A snapshot method, the method comprising:
after a preset snapshot position is determined, acquiring a position angle parameter and a magnification parameter corresponding to the preset snapshot position;
obtaining a snapshot focus parameter of the camera according to the position angle parameter and the magnification parameter; adjusting the focus of the camera according to the snapshot focus parameter; and enabling the camera to perform image capturing at the preset snapshot position;
wherein the obtaining of the snapshot focus parameter of the camera according to the position angle parameter and the magnification parameter comprises:
obtaining a snapshot ground object distance parameter according to an established spatial plane equation of the snapshot ground area with the camera as the origin, the position angle parameter and the magnification parameter, wherein the snapshot ground object distance parameter indicates the distance between the camera and a corresponding snapshot place during the snapshot, and the corresponding snapshot place is the intersection point of the lens optical axis of the camera and the snapshot ground area during the snapshot;
obtaining a position height parameter of the camera according to the spatial plane equation, wherein the position height parameter indicates the distance from the camera to the snapshot ground area along the gravity line; and obtaining an object distance parameter according to the position height parameter, a preset object height parameter of the snapshot object and the snapshot ground object distance parameter, wherein the object distance parameter indicates the distance from the camera to the snapshot object during the snapshot;
and obtaining the snapshot focus parameter according to the object distance parameter and the magnification parameter.
2. The method of claim 1, wherein the spatial plane equation is established by:
acquiring, for each of three reference points on the snapshot ground area that are not on the same straight line, a corresponding reference point focus parameter and reference point magnification parameter when the lens optical axis of the camera is aligned with that reference point and the captured picture is clear;
for each reference point, obtaining a corresponding reference point object distance parameter according to the corresponding reference point focus parameter and reference point magnification parameter;
for each reference point, obtaining a reference point coordinate parameter corresponding to the reference point in a spatial coordinate system taking the position of the camera as the origin, according to the corresponding reference point object distance parameter and position angle parameter;
and obtaining the spatial plane equation of the snapshot ground area, with the camera as the origin, according to the reference point coordinate parameters of the three reference points.
3. The method of claim 2, wherein obtaining the object distance parameter from the snapshot ground object distance parameter and the spatial plane equation comprises:
obtaining the position height parameter of the camera according to the spatial plane equation, wherein the position height parameter indicates the distance from the camera to the snapshot ground area along the gravity line;
and obtaining the object distance parameter according to the position height parameter, the preset object height parameter of the snapshot object and the snapshot ground object distance parameter.
4. The method according to claim 3, wherein the obtaining of the object distance parameter according to the position height parameter, the preset object height parameter of the snapshot object and the snapshot ground object distance parameter comprises:
multiplying the ratio of the difference, obtained by subtracting the preset object height parameter from the position height parameter, to the position height parameter by the snapshot ground object distance parameter, to obtain the object distance parameter.
5. The method according to any one of claims 1-4, wherein after adjusting the focus of the camera according to the snapshot focus parameter and before causing the camera to capture an image of the snapshot object at the preset snapshot position, the method further comprises:
adjusting the focus of the camera by means of a sharpness (definition)-based autofocus mode.
6. The method according to any one of claims 1-4, wherein after adjusting the focus of the camera according to the snapshot focus parameter and before causing the camera to capture an image of the snapshot object at the preset snapshot position, the method further comprises: performing the snapshot when at least one of the following conditions is determined to be satisfied at the preset snapshot position:
the snapshot object appears in the captured picture;
the image size of the snapshot object reaches a preset size standard;
and the sharpness (definition) of the image of the snapshot object reaches a preset sharpness standard.
7. The method according to any one of claims 1 to 4, wherein
the preset snapshot position is determined by: detecting whether a snapshot object appears in the monitoring picture, and, when the snapshot object is detected, determining, according to the image of the snapshot object in the monitoring picture, the position at which the camera needs to be located in order to be aimed at the snapshot object, as the preset snapshot position;
the position angle parameter comprises a horizontal angle value and a vertical angle value corresponding to the preset snapshot position; and the magnification parameter is a preset value, a magnification value corresponding to the position angle parameter, or a magnification value corresponding to an interval in which the position angle parameter falls.
8. The method according to claim 7, wherein determining, from the image of the snapshot object in the monitoring picture, the position at which the camera needs to be located in order to be aimed at the snapshot object comprises:
detecting whether the size of the image of the snapshot object in the monitoring picture reaches a preset threshold, and, when it does, determining, according to the image of the snapshot object in the monitoring picture, the position at which the camera needs to be located in order to be aimed at the snapshot object.
9. A snapshot apparatus, the apparatus comprising:
an acquisition module, configured to acquire, after a preset snapshot position is determined, a position angle parameter and a magnification parameter corresponding to the preset snapshot position;
a calculation module, configured to obtain a snapshot focus parameter of the camera according to the position angle parameter and the magnification parameter;
an adjusting module, configured to adjust the focus of the camera according to the snapshot focus parameter;
a shooting module, configured to enable the camera to perform image capturing at the preset snapshot position;
the calculation module is configured to:
obtaining a snapshot ground object distance parameter according to an established spatial plane equation of the snapshot ground area with the camera as the origin, the position angle parameter and the magnification parameter, wherein the snapshot ground object distance parameter indicates the distance between the camera and a corresponding snapshot place during the snapshot, and the corresponding snapshot place is the intersection point of the lens optical axis of the camera and the snapshot ground area during the snapshot;
obtaining a position height parameter of the camera according to the spatial plane equation, wherein the position height parameter indicates the distance from the camera to the snapshot ground area along the gravity line; obtaining an object distance parameter according to the position height parameter, a preset object height parameter of the snapshot object and the snapshot ground object distance parameter, wherein the object distance parameter indicates the distance from the camera to the snapshot object during the snapshot; and obtaining the snapshot focus parameter according to the object distance parameter and the magnification parameter.
10. A camera, comprising a processor and a memory, the memory being configured to store a program, and the processor being configured to execute the program stored in the memory to implement the snapshot method of any one of claims 1-8.
CN201810603096.9A 2018-06-12 2018-06-12 Snapshot method and device and camera Active CN110602376B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810603096.9A CN110602376B (en) 2018-06-12 2018-06-12 Snapshot method and device and camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810603096.9A CN110602376B (en) 2018-06-12 2018-06-12 Snapshot method and device and camera

Publications (2)

Publication Number Publication Date
CN110602376A CN110602376A (en) 2019-12-20
CN110602376B true CN110602376B (en) 2021-03-26

Family

ID=68848933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810603096.9A Active CN110602376B (en) 2018-06-12 2018-06-12 Snapshot method and device and camera

Country Status (1)

Country Link
CN (1) CN110602376B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112835172A (en) * 2020-12-31 2021-05-25 华兴源创(成都)科技有限公司 Automatic focusing method and system for constant-magnification imaging
CN113422901B (en) * 2021-05-29 2023-03-03 华为技术有限公司 Camera focusing method and related equipment
CN113435483A (en) * 2021-06-10 2021-09-24 宁波帅特龙集团有限公司 Fixed-point snapshot method and system

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102045548A (en) * 2010-12-28 2011-05-04 天津市亚安科技电子有限公司 Method for controlling automatic zoom of PTZ (pan/tilt/zoom) camera
CN102591366A (en) * 2012-02-17 2012-07-18 广州盈可视电子科技有限公司 Method and device for controlling cloud deck
CN103679687A (en) * 2012-09-18 2014-03-26 杭州海康威视数字技术股份有限公司 Target tracking method of intelligent tracking high-speed dome camera
WO2014043973A1 (en) * 2012-09-24 2014-03-27 天津市亚安科技股份有限公司 Calculation method for automatic locating angle of pan-tilt-zoom camera
CN103905792A (en) * 2014-03-26 2014-07-02 武汉烽火众智数字技术有限责任公司 3D positioning method and device based on PTZ surveillance camera
CN103841333A (en) * 2014-03-27 2014-06-04 成都动力视讯科技有限公司 Preset bit method and control system
CN104361603A (en) * 2014-11-28 2015-02-18 苏州科达科技股份有限公司 Gun camera image target designating method and system
CN105763795A (en) * 2016-03-01 2016-07-13 苏州科达科技股份有限公司 Focusing method and apparatus, cameras and camera system
CN108076281A (en) * 2016-11-15 2018-05-25 杭州海康威视数字技术股份有限公司 A kind of auto focusing method and Pan/Tilt/Zoom camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Research on Auto-Zoom Method of PTZ Camera" (《PTZ相机的自动变焦方法研究》); Wang Shuai (王帅); China Master's Theses Full-text Database, Information Science and Technology Series (《中国优秀硕士学位论文全文数据库 信息科技辑》); 2018-01-31; full text *

Also Published As

Publication number Publication date
CN110602376A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
US10652452B2 (en) Method for automatic focus and PTZ camera
US8803992B2 (en) Augmented reality navigation for repeat photography and difference extraction
US8488001B2 (en) Semi-automatic relative calibration method for master slave camera control
WO2018098824A1 (en) Photographing control method and apparatus, and control device
WO2020014909A1 (en) Photographing method and device and unmanned aerial vehicle
CN110602376B (en) Snapshot method and device and camera
US20200267309A1 (en) Focusing method and device, and readable storage medium
WO2022000300A1 (en) Image processing method, image acquisition apparatus, unmanned aerial vehicle, unmanned aerial vehicle system, and storage medium
CN105763795A (en) Focusing method and apparatus, cameras and camera system
CN114838668B (en) Tunnel displacement monitoring method and system
WO2023087894A1 (en) Region adjustment method and apparatus, and camera and storage medium
CN112949478A (en) Target detection method based on holder camera
CN112207821B (en) Target searching method of visual robot and robot
CN113850137A (en) Power transmission line image online monitoring method, system and equipment
CN109712188A (en) A kind of method for tracking target and device
CN115299031A (en) Automatic focusing method and camera system thereof
CN117152243A (en) Alarm positioning method based on monocular zooming of PTZ camera
JP6483661B2 (en) Imaging control apparatus, imaging control method, and program
CN112702513B (en) Double-optical-pan-tilt cooperative control method, device, equipment and storage medium
EP3882846B1 (en) Method and device for collecting images of a scene for generating virtual reality data
WO2019031244A1 (en) Information processing device, imaging system, method for controlling imaging system, and program
CN117837153A (en) Shooting control method, shooting control device and movable platform
CN113840084A (en) Method for realizing control of panoramic tripod head based on PTZ (Pan/Tilt/zoom) return technology of dome camera
CN112119430A (en) Data processing method, device, terminal and storage medium
JP2005175852A (en) Photographing apparatus and method of controlling photographing apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant