CN110456829B - Positioning and tracking method, device and computer-readable storage medium

Info

Publication number
CN110456829B
Authority
CN
China
Prior art keywords
position information
camera
video image
rectangular frame
target
Prior art date
Legal status
Active
Application number
CN201910729307.8A
Other languages
Chinese (zh)
Other versions
CN110456829A (en)
Inventor
王丹飞
郑永勤
李海健
吕品
卢靖
Current Assignee
Shenzhen Valuehd Corp
Original Assignee
Shenzhen Valuehd Corp
Priority date
Filing date
Publication date
Application filed by Shenzhen Valuehd Corp filed Critical Shenzhen Valuehd Corp
Priority to CN201910729307.8A
Publication of CN110456829A
Application granted
Publication of CN110456829B
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D3/00 - Control of position or direction
    • G05D3/12 - Control of position or direction using feedback
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/67 - Focus control based on electronic image sensor signals

Abstract

The invention discloses a positioning and tracking method, which comprises the following steps: acquiring first 2D position information of a video image shot by a camera, second 2D position information of a target rectangular frame set in the video image, and camera parameters; determining 3D position information according to the first 2D position information, the second 2D position information and the camera parameters; and controlling the camera head of the camera to execute a steering operation according to the 3D position information, and controlling the camera to execute a shooting operation when the steering operation is finished, so as to obtain a video image of the target position corresponding to the target rectangular frame. The invention also discloses a positioning and tracking device and a computer-readable storage medium. The invention can determine the target observation range accurately and quickly, quickly steer the camera to the position of the target for shooting, quickly capture the video image of the target position, and improve both the efficiency of video monitoring and the accuracy of the monitored target position range.

Description

Positioning and tracking method, device and computer-readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a positioning and tracking method and apparatus, and a computer-readable storage medium.
Background
Positioning and tracking technology is widely applied in fields such as video monitoring, video conferencing and automatic recording and broadcasting systems, and has great practical significance. Traditional positioning and tracking is realized by operating an external control keyboard or similar means: after a target is observed in the video image, the joystick of the control keyboard is moved to send up, down, left and right rotation commands and zoom-in and zoom-out commands, so as to control the camera to rotate in each direction and to zoom.
However, the sensitivity of the joystick is relatively low, and capturing the observation target accurately and quickly with it requires a certain amount of experience from the user. It is therefore difficult to capture the observation target precisely by steering the camera with the joystick: the range of the video image easily becomes too large or too small, the positioning and tracking are not accurate enough, and the observation range is difficult to control.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a positioning and tracking method, a positioning and tracking device and a computer readable storage medium, and aims to solve the technical problems that the positioning and tracking are not accurate enough and the observation range is difficult to control.
In order to achieve the above object, the present invention provides a positioning and tracking method, including the following steps:
acquiring first 2D position information of a video image shot by a camera, second 2D position information of a target rectangular frame set in the video image and camera parameters;
determining 3D position information according to the first 2D position information, the second 2D position information and the camera parameters;
and controlling the camera head of the camera to execute a steering operation according to the 3D position information, and controlling the camera to execute a shooting operation when the steering operation is finished, so as to obtain a video image of the target position corresponding to the target rectangular frame.
In an embodiment, the 3D position information includes a horizontal offset, and the step of determining the 3D position information according to the first 2D position information, the second 2D position information, and the camera parameter includes:
determining a horizontal offset angle of the target rectangular frame deviating from the video image according to the first 2D position information, the second 2D position information and the camera parameter;
and determining the horizontal offset according to the horizontal offset angle and the pan-tilt step angle in the camera parameters.
In an embodiment, the step of determining a horizontal offset angle of the target rectangular frame from the video image according to the first 2D position information, the second 2D position information and the camera parameter includes:
acquiring a first width in the first 2D position information, and a second width and horizontal position coordinates in the second 2D position information;
and determining a horizontal offset angle according to the first width, the second width, the horizontal position coordinate and the horizontal angle of view in the camera parameters.
In an embodiment, the 3D position information further includes a vertical offset, and the step of determining the 3D position information according to the first 2D position information, the second 2D position information, and the camera parameter further includes:
determining a vertical offset angle of the target rectangular frame deviating from the video image according to the first 2D position information, the second 2D position information and the camera parameter;
and determining the vertical offset according to the vertical offset angle and the pan-tilt step angle in the camera parameters.
In an embodiment, the step of determining a vertical offset angle of the target rectangular frame from the video image according to the first 2D position information, the second 2D position information, and the camera parameter includes:
acquiring a first height in the first 2D position information, a second height in the second 2D position information and a vertical position coordinate of the target rectangular frame;
and determining a vertical offset angle according to the first height, the second height, the vertical position coordinate and the vertical angle of view in the camera parameters.
In an embodiment, the 3D position information further includes a focal length of a lens, and the step of determining the 3D position information according to the first 2D position information, the second 2D position information, and the camera parameter further includes:
acquiring a first width and a first height in the first 2D position information, and a second width, a second height and a first image multiple in the second 2D position information;
determining a second image multiple according to the first width, the first height, the second width, the second height and the first image multiple;
and determining the focal length of the lens according to the second image multiple.
In an embodiment, the step of acquiring first 2D position information of a video image captured by a camera, second 2D position information of a target rectangular frame set in the video image, and camera parameters includes:
if the video image shot by the camera is detected, acquiring a multimedia data stream of the video image;
displaying the video image based on the multimedia data stream, and acquiring first 2D position information and camera parameters of the video image;
if the setting operation corresponding to the video image is detected, determining the target rectangular frame based on the setting operation;
and acquiring second 2D position information of the target rectangular frame.
In an embodiment, the 3D position information includes the horizontal offset, the vertical offset and the lens focal length, and the step of controlling the camera head of the camera to execute a steering operation according to the 3D position information includes:
controlling the camera to execute a moving operation based on the horizontal offset and the vertical offset;
and controlling the camera to execute focal length adjustment operation based on the lens focal length.
In addition, to achieve the above object, the present invention further provides a positioning and tracking device, including: a memory, a processor, and a positioning and tracking program stored in the memory and executable on the processor, where the positioning and tracking program, when executed by the processor, implements the steps of the positioning and tracking method described above.
In addition, to achieve the above object, the present invention further provides a computer readable storage medium, which stores a positioning and tracking program, and the positioning and tracking program implements the steps of the positioning and tracking method when executed by a processor.
According to the method, the first 2D position information of a video image shot by a camera, the second 2D position information of a target rectangular frame and the camera parameters are obtained; the 3D position information is determined according to the first 2D position information, the second 2D position information and the camera parameters; the camera head of the camera is controlled to execute a steering operation according to the 3D position information, and the camera is controlled to execute a shooting operation when the steering operation is finished, so as to obtain a video image of the target position corresponding to the target rectangular frame. By setting the target rectangular frame and converting the 2D position information into 3D position information to control the steering of the camera, the target observation range is determined accurately and quickly, the camera is quickly moved to the position of the target for shooting, the video image of the target position can be captured quickly, and both the efficiency of video monitoring and the accuracy of the monitored target position range are improved.
Drawings
FIG. 1 is a schematic diagram of an apparatus in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of a positioning and tracking method according to the present invention;
FIG. 3 is a schematic comparison diagram before and after 3D zooming in the positioning and tracking method of the present invention.
The implementation, functional features and advantages of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
Fig. 1 is a schematic structural diagram of a positioning and tracking apparatus in a hardware operating environment according to an embodiment of the present invention.
The positioning and tracking device in the embodiment of the present invention may be a PC, or a mobile terminal device with a display function such as a smart phone, a tablet computer, an e-book reader or a portable computer.
As shown in fig. 1, the positioning and tracking device may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002, where the communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (such as a magnetic disk memory); optionally, the memory 1005 may also be a storage device separate from the processor 1001.
Optionally, the positioning and tracking device may further include a camera, a radio frequency (RF) circuit, a sensor, an audio circuit, a WiFi module and the like. The sensor may be, for example, a light sensor: specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the display screen according to the brightness of ambient light. Of course, the positioning and tracking device may also be configured with other sensors such as a barometer, a hygrometer, a thermometer and an infrared sensor, which are not described herein again.
Those skilled in the art will appreciate that the structure of the positioning and tracking device shown in fig. 1 does not limit the device, which may include more or fewer components than shown, combine some components, or arrange the components differently.
As shown in fig. 1, the memory 1005, which is one type of computer storage medium, may include an operating system, a network communication module, a user interface module, and a location tracking program therein.
In the positioning and tracking device shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and performing data communication with the background server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be used to invoke a location tracking program stored in the memory 1005.
In this embodiment, the positioning and tracking device includes: a memory 1005, a processor 1001, and a location tracking program stored on the memory 1005 and executable on the processor 1001, wherein the processor 1001, when calling the location tracking program stored in the memory 1005, performs the following operations:
acquiring first 2D position information of a video image shot by a camera, second 2D position information of a target rectangular frame set in the video image and camera parameters;
determining 3D position information according to the first 2D position information, the second 2D position information and the camera parameters;
and controlling the camera head of the camera to execute a steering operation according to the 3D position information, and controlling the camera to execute a shooting operation when the steering operation is finished, so as to obtain a video image of the target position corresponding to the target rectangular frame.
Further, the processor 1001 may call a location tracking program stored in the memory 1005, and also perform the following operations:
determining a horizontal offset angle of the target rectangular frame deviating from the video image according to the first 2D position information, the second 2D position information and the camera parameter;
and determining the horizontal offset according to the horizontal offset angle and the pan-tilt step angle in the camera parameters.
Further, the processor 1001 may call a location tracking program stored in the memory 1005, and also perform the following operations:
acquiring a first width in the first 2D position information, and a second width and horizontal position coordinates in the second 2D position information;
and determining a horizontal offset angle according to the first width, the second width, the horizontal position coordinate and the horizontal angle of view in the camera parameters.
Further, the processor 1001 may call the location tracking program stored in the memory 1005 to perform the following operations:
determining a vertical offset angle of the target rectangular frame deviating from the video image according to the first 2D position information, the second 2D position information and the camera parameter;
and determining the vertical offset according to the vertical offset angle and the pan-tilt step angle in the camera parameters.
Further, the processor 1001 may call a location tracking program stored in the memory 1005, and also perform the following operations:
acquiring a first height in the first 2D position information, a second height in the second 2D position information and a vertical position coordinate of the target rectangular frame;
and determining a vertical offset angle according to the first height, the second height, the vertical position coordinate and the vertical angle of view in the camera parameters.
Further, the processor 1001 may call a location tracking program stored in the memory 1005, and also perform the following operations:
acquiring a first width and a first height in the first 2D position information, and a second width, a second height and a first image multiple in the second 2D position information;
determining a second image multiple according to the first width, the first height, the second width, the second height and the first image multiple;
and determining the focal length of the lens according to the second image multiple.
Further, the processor 1001 may call a location tracking program stored in the memory 1005, and also perform the following operations:
if the video image shot by the camera is detected, acquiring a multimedia data stream of the video image;
displaying the video image based on the multimedia data stream, and acquiring first 2D position information and camera parameters of the video image;
if the setting operation corresponding to the video image is detected, determining the target rectangular frame based on the setting operation;
and acquiring second 2D position information of the target rectangular frame.
Further, the processor 1001 may call the location tracking program stored in the memory 1005 to perform the following operations:
controlling the camera to execute a moving operation based on the horizontal offset and the vertical offset;
and controlling the camera to execute focal length adjustment operation based on the lens focal length.
The present invention further provides a positioning and tracking method, and referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of the positioning and tracking method of the present invention.
In this embodiment, the positioning and tracking method comprises the following steps:
step S10, acquiring first 2D position information of a video image shot by a camera, second 2D position information of a target rectangular frame set in the video image and camera parameters;
In this embodiment, the first 2D position information of the video image shot by the camera is obtained from a display terminal, where the display terminal may be upper computer software on a PC, a video monitoring system, an automatic recording and broadcasting system, a video conference system, or the like; the camera may be a camera with video recording and network communication functions, such as an IPC (IP Camera), which can perform monitoring shooting and transmit information such as video images to the display terminal. After the camera shoots the video image, the video image is converted into a multimedia data stream; the display terminal obtains the multimedia data stream sent by the camera, displays the video image by parsing the stream, and can obtain the first 2D position information of the video image from the stream, where the first 2D position information includes a first width and a first height.
After the video image is displayed on the display terminal, a target rectangular frame can be set on the displayed video image; the target rectangular frame can be set by manual touch or with the mouse. The user may set a rectangular frame or another irregular shape, and an irregular shape is converted into the target rectangular frame through a certain rule. After the target rectangular frame is set, the second 2D position information corresponding to the target rectangular frame and the camera parameters of the current camera are obtained. The second 2D position information comprises a second width, a second height, a first image multiple, a horizontal position coordinate and a vertical position coordinate of the target rectangular frame, and the camera parameters comprise the horizontal angle of view, the vertical angle of view and the pan-tilt step angle of the camera.
For example, referring to fig. 3, in this scheme the PC-side upper computer software displays the video image obtained from the IPC camera, as shown in fig. 3 (a) before 3D zooming. A target rectangular frame can be drawn with the mouse on the video image displayed by the upper computer software, as shown by the target rectangular frame in which the letter A is located, and the upper computer software obtains the first 2D position information and the second 2D position information. The first 2D position information is the first width W and the first height H of the video image; the second 2D position information is the second width w, the second height h, the horizontal position coordinate x, the vertical position coordinate y and the first image multiple current_ratio of the set target rectangular frame. These 7 information parameters of the first 2D position information and the second 2D position information are sent to the IPC camera, which acquires them together with its own camera parameters, namely the horizontal angle of view α, the vertical angle of view β and the pan-tilt step angle of 0.069444°/step. The camera parameters of cameras from different manufacturers are different, and this embodiment does not specifically limit them.
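For concreteness, these seven parameters and the camera parameters can be grouped as below. This is a minimal Python sketch; the type and field names are illustrative only, mirroring the symbols used in this description rather than any structure defined by the patent.

    from dataclasses import dataclass

    @dataclass
    class Frame2D:
        W: int                # first width: width of the video image, in pixels
        H: int                # first height: height of the video image, in pixels
        w: int                # second width: width of the target rectangular frame
        h: int                # second height: height of the target rectangular frame
        x: float              # horizontal position coordinate of the target frame
        y: float              # vertical position coordinate of the target frame
        current_ratio: float  # first image multiple (the current zoom multiple)

    @dataclass
    class CameraParams:
        alpha_deg: float            # horizontal angle of view, in degrees
        beta_deg: float             # vertical angle of view, in degrees
        step_deg: float = 0.069444  # pan-tilt step angle, in degrees per step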
Step S20, determining 3D position information according to the first 2D position information, the second 2D position information and the camera parameters;
in this embodiment, after the camera acquires the first 2D position information, the second 2D position information, and the camera parameter, the 2D-3D algorithm conversion module in the camera converts the first 2D position information and the second 2D position information into 3D position information that can be recognized by the camera pan/tilt control execution module through a 2D-3D position information conversion algorithm according to the first 2D position information, the second 2D position information, and the camera parameter. Specifically, the 3D position information includes a horizontal offset, a vertical offset, and a lens focal length.
For example, after the IPC camera in this scheme acquires the first 2D position information, the second 2D position information and the camera parameters, the 3D position information is determined from them: according to the first width W and first height H of the video image, the second width w, second height h, horizontal position coordinate x, vertical position coordinate y and first image multiple current_ratio of the set target rectangular frame, and the lens parameters of the IPC camera such as the horizontal angle of view α, the vertical angle of view β and the pan-tilt step angle of 0.069444°/step, the first 2D position information and the second 2D position information are converted through the 2D-3D position information conversion algorithm into the 3D position information, namely the horizontal offset pan_off, the vertical offset tilt_off and the lens focal length target_zoom.
And S30, controlling the camera head of the camera to execute a steering operation according to the 3D position information, and controlling the camera to execute a shooting operation when the steering operation is finished, so as to obtain a video image of the target position corresponding to the target rectangular frame.
In this embodiment, after the first 2D position information and the second 2D position information are converted into 3D position information that can be recognized by the pan-tilt control execution module of the camera, the pan-tilt control execution module obtains the 3D position information from the 2D-3D algorithm conversion module, inputs the parameters in the 3D position information in turn, and obtains the protocol command corresponding to the 3D position information through the pan-tilt control protocol. The pan-tilt control protocol may be VISCA, Pelco-D/P or another pan-tilt control protocol, which this embodiment does not specifically limit. The pan-tilt control execution module obtains the protocol commands corresponding to the different parameters in the 3D position information, controls the camera head of the camera to execute steering operations in different directions through the obtained commands, and finally controls the camera head to steer to the target position corresponding to the previously set target rectangular frame and to shoot, so that the camera acquires a video image of the target position and performs real-time positioning and tracking.
For example, the pan-tilt control execution module in the IPC camera obtains the 3D position information converted from the first 2D position information and the second 2D position information by the 2D-3D algorithm conversion module, namely the horizontal offset pan_off, the vertical offset tilt_off and the lens focal length target_zoom. According to this 3D position information, the pan-tilt control execution module controls the camera head of the IPC camera to rotate to the target position corresponding to the target rectangular frame previously set on the video image displayed on the display terminal. If the target position is at the upper left corner, the camera is controlled to move left and up according to the horizontal offset pan_off and the vertical offset tilt_off, and is controlled to execute the focal length adjustment operation according to the lens focal length target_zoom, until the target position is reached for observation.
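As an illustration of the execution step only, the sketch below drives the three parameters through a single transport hook. send_ptz_command is a hypothetical stand-in for encoding one parameter into whichever pan-tilt control protocol is in use (VISCA, Pelco-D/P, etc.); it is not an API defined by the patent or by those protocols.

    def execute_steering(pan_off: int, tilt_off: int, zoom: float, send_ptz_command) -> None:
        """Steer the camera head to the target position, then adjust the focal length.

        send_ptz_command(name, value) is a hypothetical hook that encodes one
        3D-position parameter as a pan-tilt protocol command and sends it.
        """
        send_ptz_command("pan", pan_off)    # left/right movement, in pan-tilt steps
        send_ptz_command("tilt", tilt_off)  # up/down movement, in pan-tilt steps
        send_ptz_command("zoom", zoom)      # lens focal length adjustment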
According to the positioning and tracking method provided by this embodiment, the first 2D position information of the video image shot by the camera, the second 2D position information of the target rectangular frame set in the video image, and the camera parameters are obtained; the 3D position information is then determined according to the first 2D position information, the second 2D position information and the camera parameters; finally, the camera head of the camera is controlled to execute a steering operation according to the 3D position information, and the camera is controlled to execute a shooting operation when the steering operation is completed, so as to obtain a video image of the target position corresponding to the target rectangular frame. By setting the target rectangular frame and converting the 2D position information into 3D position information to control the steering of the camera, the target observation range is determined accurately and quickly, the camera is quickly moved to the position of the target for shooting, the video image of the target position can be captured quickly, and both the efficiency of video monitoring and the accuracy of the monitored target position range are improved.
Based on the first embodiment, a second embodiment of the positioning and tracking method of the present invention is proposed, in this embodiment, the 3D position information includes a horizontal offset, and step S20 includes:
step a, determining a horizontal offset angle of the target rectangular frame deviating from the video image according to the first 2D position information, the second 2D position information and the camera parameter;
the horizontal offset angle is a horizontal offset angle of the target rectangular frame from the video image, and may be a horizontal angle of the center position of the target rectangular frame from the center position of the video image or other horizontal angles capable of indicating that the target rectangular frame is offset from the video image, such as the horizontal offset angle θ in fig. 3 (a).
In this embodiment, after the 2D-3D algorithm conversion module acquires the first 2D position information, the second 2D position information and the camera parameters, the parameters required for calculating the horizontal offset are taken from them and the horizontal offset angle is calculated. The parameters required for calculating the horizontal offset may be the widths, the relevant horizontal coordinate, the horizontal angle of view in the camera parameters, or other parameters.
In one embodiment, step a comprises:
acquiring a first width in the first 2D position information, and a second width and horizontal position coordinates in the second 2D position information;
and determining the horizontal offset angle according to the first width, the second width, the horizontal position coordinate and the horizontal angle of view in the camera parameters.
In this embodiment, the 2D-3D algorithm conversion module obtains relevant parameters required for calculating the horizontal offset angle, including a first width of the video image in the first 2D position information, and a second width and a horizontal position coordinate of the target rectangular frame in the second 2D position information, where the first width may be a width of the video image, a half of the width, or a number of pixels capable of representing the width, and the like; the horizontal position coordinate may be an abscissa of the upper left corner or the lower left corner of the target rectangular frame, an abscissa of the center position of the target rectangular frame, or another abscissa capable of representing the position of the target rectangular frame.
In this embodiment, after the 2D-3D algorithm conversion module obtains the relevant parameters required for calculating the horizontal offset angle, that is, after obtaining the first width, the second width, the horizontal position coordinate and the horizontal viewing angle in the camera parameter, the first width, the second width, the horizontal position coordinate and the horizontal viewing angle in the camera parameter are input, and the horizontal offset angle is calculated through the 2D-3D conversion algorithm. Referring to fig. 3 (a), the formula of the horizontal offset angle θ is:
θ=arctan(BC/OC)
where OC = AC/tan(α/2), BC = x + w/2, AC = W/2, W is the first width, w is the second width, x is the horizontal position coordinate, and α is the horizontal angle of view in the camera parameters.
And b, determining the horizontal offset according to the horizontal offset angle and the pan-tilt step angle in the camera parameters.
In this embodiment, after the 2D-3D algorithm conversion module calculates the horizontal offset angle, the horizontal offset angle and the pan-tilt step angle are obtained, and from these two the horizontal offset, one of the parameters in the 3D position information, can be calculated. The horizontal offset angle is the angle by which the target rectangular frame deviates horizontally from the video image, and may be the horizontal angle between the center of the target rectangular frame and the center of the video image or another horizontal angle capable of indicating the deviation. The pan-tilt step angle is determined by the parameters of the camera and differs between manufacturers; it is a constant value. For example, knowing the horizontal offset angle θ and the pan-tilt step angle of 0.069444°/step, the horizontal offset pan_off = θ/0.069444 can be calculated.
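Written out in code, the horizontal branch of the conversion might look as follows. This is a sketch reusing the Frame2D and CameraParams types above; it assumes, per the BC = x + w/2 construction, that x is measured from the image center to the left edge of the target frame.

    import math

    def horizontal_offset(p: Frame2D, cam: CameraParams) -> int:
        """Compute the horizontal offset pan_off in pan-tilt steps."""
        AC = p.W / 2.0                                         # half the image width
        OC = AC / math.tan(math.radians(cam.alpha_deg) / 2.0)  # distance from O to the image plane
        BC = p.x + p.w / 2.0                                   # image center to frame center
        theta = math.degrees(math.atan(BC / OC))               # horizontal offset angle
        return round(theta / cam.step_deg)                     # pan_off = theta / step angle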
According to the positioning and tracking method provided by this embodiment, the horizontal offset angle of the target rectangular frame deviating from the video image is determined according to the first 2D position information, the second 2D position information and the camera parameters, and the horizontal offset is then determined according to the horizontal offset angle and the pan-tilt step angle in the camera parameters. Determining the horizontal offset angle first and then the horizontal offset allows the horizontal offset to be determined accurately from the offset angle and the pan-tilt step angle, which improves the accuracy of the horizontal offset calculation and, in turn, the accuracy with which the pan-tilt control module controls the horizontal movement distance of the camera.
Based on the first embodiment, a third embodiment of the positioning and tracking method of the present invention is provided, in this embodiment, the 3D position information includes a vertical offset, and step S20 further includes:
c, determining a vertical offset angle of the target rectangular frame deviating from the video image according to the first 2D position information, the second 2D position information and the camera parameter;
the vertical offset angle is a vertical offset angle of the target rectangular frame from the video image, and may be a vertical offset angle of the center position of the target rectangular frame from the center position of the video image or another vertical offset angle capable of indicating that the target rectangular frame is offset from the video image, such as the vertical offset angle Φ in fig. 3.
In this embodiment, after the 2D-3D algorithm conversion module acquires the first 2D position information, the second 2D position information and the camera parameters, the parameters required for calculating the vertical offset are taken from them and the vertical offset angle is calculated. The parameters required for calculating the vertical offset may be the heights, the relevant vertical coordinate, the vertical angle of view in the camera parameters, or other parameters.
In one embodiment, step c comprises:
acquiring a first height in the first 2D position information, a second height in the second 2D position information and a vertical position coordinate of the target rectangular frame;
and determining the vertical offset angle according to the first height, the second height, the vertical position coordinate and the vertical angle of view in the camera parameters.
In this embodiment, the 2D-3D algorithm conversion module obtains relevant parameters required for calculating the vertical offset angle, including a first height of the video image in the first 2D position information, and a second height and a vertical position coordinate of the target rectangular frame in the second 2D position information, where the first height may be a half of a height and a height of the video image or a number of pixels capable of representing the height, and the like; the vertical position coordinate may be a vertical coordinate of the upper left corner or the lower left corner of the target rectangular frame, a vertical coordinate of the center position of the target rectangular frame, or another vertical coordinate capable of representing the position of the target rectangular frame.
In this embodiment, after the 2D-3D algorithm conversion module obtains the relevant parameters required for calculating the vertical offset angle, that is, the first height, the second height, the vertical position coordinate and the vertical angle of view in the camera parameters, these values are input and the vertical offset angle is calculated through the 2D-3D conversion algorithm. Referring to fig. 3 (a), the formula of the vertical offset angle Φ is:
Φ = arctan((y + h/2) × tan(β/2)/(H/2))
wherein H is the first height, h is the second height, y is the vertical position coordinate, and β is the vertical angle of view in the camera parameters.
And d, determining the vertical offset according to the vertical offset angle and the pan-tilt step angle in the camera parameters.
In this embodiment, after the 2D-3D algorithm conversion module calculates the vertical offset angle, the vertical offset angle and the pan-tilt step angle are obtained, and from these two the vertical offset, one of the parameters in the 3D position information, can be calculated. The vertical offset angle is the angle by which the target rectangular frame deviates vertically from the video image, and may be the vertical angle between the center of the target rectangular frame and the center of the video image or another vertical angle capable of indicating the deviation. The pan-tilt step angle is determined by the parameters of the camera and differs between manufacturers; it is a constant value. For example, knowing the vertical offset angle Φ and the pan-tilt step angle of 0.069444°/step, the vertical offset tilt_off = Φ/0.069444 can be calculated.
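The vertical branch mirrors the horizontal one, with heights and the vertical angle of view in place of widths; the same sketch-level assumptions and types as above apply.

    import math

    def vertical_offset(p: Frame2D, cam: CameraParams) -> int:
        """Compute the vertical offset tilt_off in pan-tilt steps."""
        half_h = p.H / 2.0                                           # half the image height
        dist = half_h / math.tan(math.radians(cam.beta_deg) / 2.0)   # distance from O to the image plane
        off = p.y + p.h / 2.0                                        # image center to frame center
        phi = math.degrees(math.atan(off / dist))                    # vertical offset angle
        return round(phi / cam.step_deg)                             # tilt_off = phi / step angle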
According to the positioning and tracking method provided by this embodiment, the vertical offset angle of the target rectangular frame deviating from the video image is determined according to the first 2D position information, the second 2D position information and the camera parameters, and the vertical offset is then determined according to the vertical offset angle and the pan-tilt step angle in the camera parameters. Determining the vertical offset angle first and then the vertical offset allows the vertical offset to be determined accurately from the offset angle and the pan-tilt step angle, which improves the accuracy of the vertical offset calculation and, in turn, the accuracy with which the pan-tilt control module controls the vertical movement distance of the camera.
Based on the first embodiment, a fourth embodiment of the positioning and tracking method of the present invention is provided, in this embodiment, the 3D position information includes a lens focal length, and step S20 further includes:
step e, acquiring a first width and a first height in the first 2D position information, and a second width, a second height and a first image multiple in the second 2D position information;
In this embodiment, the 2D-3D algorithm conversion module obtains the relevant parameters required for calculating the lens focal length, including the first width and first height of the video image in the first 2D position information and the second width, second height and first image multiple of the target rectangular frame in the second 2D position information. The first width may be the width of the video image, half of the width, or a number of pixels capable of representing the width; the first height may be the height of the video image, half of the height, or a number of pixels capable of representing the height; the second width may be the width of the target rectangular frame, half of the width, or a number of pixels capable of representing the width; the second height may be the height of the target rectangular frame, half of the height, or a number of pixels capable of representing the height. The first image multiple is the second image multiple calculated the last time the positioning and tracking procedure was executed; it is understood that if the second image multiple is being calculated for the first time, the first image multiple is equal to a preset initial value, for example 1. Preferably, the second image multiple may be stored as the first image multiple when the steering operation of the camera is completed.
F, determining a second image multiple according to the first width, the first height, the second width, the second height and the first image multiple;
In this embodiment, after the 2D-3D algorithm conversion module obtains the relevant parameters required for calculating the second image multiple, that is, the first width, the first height, the second width, the second height and the first image multiple, these values are input and the second image multiple is calculated through the 2D-3D conversion algorithm. Referring to fig. 3 (a), the formula of the second image multiple target_ratio is:

target_ratio = source_area/target_area × current_ratio

where the area of the video image is source_area = W × H, the area of the target rectangular frame is target_area = w × h, W is the first width, H is the first height, w is the second width, h is the second height, and current_ratio is the first image multiple.
And g, determining the focal length of the lens according to the second image multiple.
In this embodiment, after the 2D-3D algorithm conversion module determines the second image multiple, the lens focal length, one of the parameters in the 3D position information, can be calculated from it according to the functional relationship between the image multiple and the lens focal length. This functional relationship is measured or fitted for the camera; it differs between camera manufacturers and is generally non-linear. For example, knowing the second image multiple target_ratio and the functional relationship f(x) between the image multiple and the lens focal length, the lens focal length target_zoom = f(target_ratio) can be calculated.
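In code, the area-ratio step and the multiple-to-focal-length lookup can be sketched as below; zoom_from_ratio stands in for the manufacturer-specific function f(x), which must be measured or fitted for the actual camera and is therefore only a placeholder here.

    def lens_focal_length(p: Frame2D, zoom_from_ratio) -> float:
        """Compute the second image multiple, then map it to a lens focal length."""
        source_area = p.W * p.H                                     # area of the video image
        target_area = p.w * p.h                                     # area of the target frame
        target_ratio = source_area / target_area * p.current_ratio  # second image multiple
        return zoom_from_ratio(target_ratio)                        # target_zoom = f(target_ratio)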
According to the positioning and tracking method provided by this embodiment, the first width and first height in the first 2D position information and the second width, second height and first image multiple in the second 2D position information are obtained; the second image multiple is then determined from them, and finally the lens focal length is determined from the second image multiple. Determining the second image multiple first and then the lens focal length allows the focal length to be determined accurately from the functional relationship between the image multiple and the lens focal length, which improves the accuracy of the focal length calculation and, in turn, the accuracy with which the pan-tilt control module controls the camera to execute the focal length adjustment operation.
Based on the first embodiment, a fifth embodiment of the positioning and tracking method of the present invention is provided, in this embodiment, step S10 includes:
step h, if the video image shot by the camera is detected, acquiring a multimedia data stream of the video image;
In this embodiment, the camera is connected to the display terminal through the network port. If it is detected that the camera has shot a video image, the camera converts the shot video image into a multimedia data stream and sends it to the display terminal, and the display terminal obtains the multimedia data stream corresponding to the video image.
Step i, displaying the video image based on the multimedia data stream, and acquiring first 2D position information and camera parameters of the video image;
In this embodiment, after the display terminal obtains the multimedia data stream corresponding to the video image, it parses the stream into a video image that can be displayed on the display terminal, displays the video image, and acquires the first 2D position information and camera parameters of the video image.
Step j, if the setting operation corresponding to the video image is detected, determining the target rectangular frame based on the setting operation;
In this embodiment, after the video image is displayed on the display terminal, a target rectangular frame may be set on the displayed video image; the target may be selected as a rectangle or another irregular shape, locked with the mouse or by manual touch on a touch screen. If the setting operation corresponding to the video image displayed on the display terminal is detected, the setting operation is converted into a target rectangular frame through a certain rule, and the target rectangular frame in which the target position is located is locked. The rule for converting a rectangular or irregular shape into the target rectangular frame is: if a rectangle is set, it is directly locked as the target rectangular frame; if an irregular shape is set, the target rectangular frame is determined from the positions of the highest, lowest, leftmost and rightmost points of the irregular shape, so that every point on the boundary of the irregular shape is included, as in the sketch below.
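The stated rule is an axis-aligned bounding box over the boundary points of the irregular shape; a minimal sketch:

    def to_target_rect(points):
        """Convert an irregular selection (a list of (x, y) boundary points)
        into the target rectangular frame that encloses all of them."""
        xs = [px for px, _ in points]
        ys = [py for _, py in points]
        left, top = min(xs), min(ys)
        return left, top, max(xs) - left, max(ys) - top  # x, y, w, h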
And k, acquiring second 2D position information of the target rectangular frame.
In this embodiment, after the target rectangular frame is set on the video image displayed by the display terminal, second 2D position information corresponding to the target rectangular frame is obtained, where the second 2D position information includes a second width, a second height, a first image multiple, a horizontal position coordinate, and a vertical position coordinate of the target rectangular frame.
According to the positioning and tracking method provided by this embodiment, if a video image shot by the camera is detected, the multimedia data stream of the video image is acquired; the video image is then displayed based on the multimedia data stream, and the first 2D position information and camera parameters of the video image are acquired; finally, if the setting operation corresponding to the video image is detected, the target rectangular frame is determined based on the setting operation and its second 2D position information is acquired. Setting the target rectangular frame on the video image allows the user to lock the target rectangular frame conveniently and quickly, capture the observation target accurately, and determine the observation position and range.
Based on the first embodiment, a sixth embodiment of the positioning and tracking method of the present invention is provided, in this embodiment, step S30 includes:
step l, controlling the camera to execute moving operation based on the horizontal offset and the vertical offset;
In this embodiment, after the 2D position information is converted into 3D position information, the pan-tilt control execution module acquires the 3D position information and inputs the horizontal offset in it, obtains the protocol instruction corresponding to the horizontal offset through the pan-tilt control protocol, and controls the camera to move leftwards or rightwards according to the obtained instruction; or inputs the vertical offset, obtains the protocol instruction corresponding to the vertical offset, and controls the camera to move upwards or downwards; or inputs both the horizontal offset and the vertical offset, obtains the corresponding protocol instructions, and controls the camera to move both horizontally and vertically according to them.
And m, controlling the camera to execute focal length adjustment operation based on the focal length of the lens.
In this embodiment, after the 2D position information is converted into 3D position information, the pan-tilt control execution module acquires the 3D position information, inputs the lens focal length in it, obtains the protocol instruction corresponding to the lens focal length through the pan-tilt control protocol, and controls the camera to execute the focal length adjustment operation according to the obtained instruction; the focal length adjustment operation may be increasing the focal length, decreasing the focal length, focusing, and the like. For example, in the zoom comparison diagram of fig. 3, diagram (a) is the video image displayed on the display terminal before the focal length is adjusted, together with the set target rectangular frame, and diagram (b) is the video image displayed after the focal length is adjusted: after the camera is controlled to execute the moving and focal length adjustment operations, it is turned to the position of the letter A in the target rectangular frame to shoot, and the letter A is finally positioned, tracked and enlarged into the video image shown in diagram (b).
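Putting the sketches above together, a hypothetical end-to-end run (all values invented for illustration) would read:

    # Full chain: 2D inputs -> pan/tilt/zoom -> protocol commands.
    p = Frame2D(W=1920, H=1080, w=480, h=270, x=240.0, y=135.0, current_ratio=1.0)
    cam = CameraParams(alpha_deg=60.0, beta_deg=34.0)

    pan_off = horizontal_offset(p, cam)
    tilt_off = vertical_offset(p, cam)
    zoom = lens_focal_length(p, zoom_from_ratio=lambda r: r)  # identity f as placeholder

    execute_steering(pan_off, tilt_off, zoom, send_ptz_command=print)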
According to the positioning and tracking method provided by this embodiment, the camera is controlled to execute the moving operation based on the horizontal offset and the vertical offset, and is then controlled to execute the focal length adjustment operation based on the lens focal length, so that the camera rotates to the target position corresponding to the target rectangular frame and shoots it, realizing positioning and tracking of the observation target.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where a positioning and tracking program is stored on the computer-readable storage medium, and when executed by a processor, the positioning and tracking program implements the following operations:
acquiring first 2D position information of a video image shot by a camera, second 2D position information of a target rectangular frame set in the video image and camera parameters;
determining 3D position information according to the first 2D position information, the second 2D position information and the camera parameter;
and controlling the camera head of the camera to execute a steering operation according to the 3D position information, and controlling the camera to execute a shooting operation when the steering operation is finished, so as to obtain a video image of the target position corresponding to the target rectangular frame.
Further, the positioning and tracking program when executed by the processor further performs the following operations:
determining a horizontal offset angle of the target rectangular frame deviating from the video image according to the first 2D position information, the second 2D position information and the camera parameter;
and determining the horizontal offset according to the horizontal offset angle and the pan-tilt step angle in the camera parameters.
Further, the positioning and tracking program when executed by the processor further realizes the following operations:
acquiring a first width in the first 2D position information, and a second width and horizontal position coordinates in the second 2D position information;
and determining a horizontal offset angle according to the first width, the second width, the horizontal position coordinate and the horizontal angle of view in the camera parameters.
Further, the positioning and tracking program when executed by the processor further realizes the following operations:
determining a vertical offset angle of the target rectangular frame deviating from the video image according to the first 2D position information, the second 2D position information and the camera parameter;
and determining the vertical offset according to the vertical offset angle and the pan-tilt step angle in the camera parameters.
Further, the positioning and tracking program when executed by the processor further performs the following operations:
acquiring a first height in the first 2D position information, a second height in the second 2D position information and a vertical position coordinate of the target rectangular frame;
and determining a vertical offset angle according to the first height, the second height, the vertical position coordinate and the vertical angle of view in the camera parameters.
Further, the positioning and tracking program when executed by the processor further performs the following operations:
acquiring a first width and a first height in the first 2D position information, and a second width, a second height and a first image multiple in the second 2D position information;
determining a second image multiple according to the first width, the first height, the second width, the second height and the first image multiple;
and determining the focal length of the lens according to the second image multiple.
Further, the positioning and tracking program when executed by the processor further realizes the following operations:
if the video image shot by the camera is detected, acquiring a multimedia data stream of the video image;
displaying the video image based on the multimedia data stream, and acquiring first 2D position information and camera parameters of the video image;
if the setting operation corresponding to the video image is detected, determining the target rectangular frame based on the setting operation;
and acquiring second 2D position information of the target rectangular frame.
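A sketch of the data this acquisition step collects, with hypothetical container types and a drag handler standing in for the setting operation (none of these names come from the patent):

    from dataclasses import dataclass

    @dataclass
    class FrameInfo:      # first 2D position information of the video image
        width: int
        height: int

    @dataclass
    class BoxInfo:        # second 2D position information of the target rectangular frame
        x: int            # horizontal position coordinate
        y: int            # vertical position coordinate
        width: int
        height: int
        ratio: float      # first image multiple (the last computed second multiple)

    def on_box_drawn(drag_start, drag_end, last_ratio):
        # Hypothetical handler: the user drags a rectangle on the displayed image.
        (x0, y0), (x1, y1) = drag_start, drag_end
        return BoxInfo(min(x0, x1), min(y0, y1), abs(x1 - x0), abs(y1 - y0), last_ratio)

    print(on_box_drawn((600, 300), (900, 520), 1.0))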
Further, the positioning and tracking program when executed by the processor further performs the following operations:
controlling the camera to execute a moving operation based on the horizontal offset and the vertical offset;
and controlling the camera to execute focal length adjustment operation based on the lens focal length.
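How these two control steps might be dispatched over a pan-tilt control protocol is sketched below. The send_* methods are placeholders for whatever instruction set the camera actually speaks (VISCA and Pelco-D are common examples); no real protocol bytes are shown, and the sign convention is assumed:

    class DebugLink:
        # Stub connection that prints the would-be protocol instructions.
        def send_pan(self, direction, steps): print('pan', direction, steps, 'steps')
        def send_tilt(self, direction, steps): print('tilt', direction, steps, 'steps')
        def send_zoom(self, focal_mm): print('zoom to', focal_mm, 'mm')

    def execute_move_and_zoom(conn, pan_steps, tilt_steps, focal_mm):
        # Positive pan = rightwards, positive tilt = downwards (assumption).
        if pan_steps:
            conn.send_pan('right' if pan_steps > 0 else 'left', abs(pan_steps))
        if tilt_steps:
            conn.send_tilt('down' if tilt_steps > 0 else 'up', abs(tilt_steps))
        conn.send_zoom(focal_mm)   # focal length adjustment operation

    execute_move_and_zoom(DebugLink(), 122, -44, 19.2)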
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system that comprises the element.
The serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) as described above, including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention. Any equivalent structural or equivalent process transformation made using the content of the present specification, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (6)

1. A positioning and tracking method, characterized in that the positioning and tracking method comprises the following steps:
acquiring first 2D position information of a video image shot by a camera, second 2D position information of a target rectangular frame set in the video image and camera parameters;
determining 3D position information according to the first 2D position information, the second 2D position information and the camera parameter;
controlling the camera head of the camera to perform a steering operation according to the 3D position information, and controlling the camera to perform a shooting operation when the steering operation is finished, so as to obtain a video image of the target position corresponding to the target rectangular frame;
wherein, the 3D position information includes a focal length of a lens, and the step of determining the 3D position information according to the first 2D position information, the second 2D position information, and the camera parameter further includes:
acquiring a first width and a first height in the first 2D position information, and a second width, a second height and a first image multiple in the second 2D position information;
determining a second image multiple according to the first width, the first height, the second width, the second height and the first image multiple, wherein the second image multiple target_ratio is given by: target_ratio = current_ratio × source_area / target_area1, where the area of the video image source_area = W × H, the area of the target rectangular frame target_area1 = w × h, W is the first width, H is the first height, w is the second width, h is the second height, and current_ratio is the first image multiple, namely the second image multiple calculated the last time the positioning and tracking program was executed;
calculating the focal length of the lens according to the functional relation between the second image multiple and the focal length of the lens;
the 3D position information includes a horizontal offset, and the step of determining the 3D position information according to the first 2D position information, the second 2D position information, and the camera parameter includes:
determining a horizontal offset angle by which the target rectangular frame deviates from the video image, according to the first 2D position information, the second 2D position information and the camera parameters;
determining the horizontal offset according to the horizontal offset angle and the pan-tilt step angle in the camera parameters;
the 3D position information further includes a vertical offset, and the step of determining the 3D position information according to the first 2D position information, the second 2D position information, and the camera parameter further includes:
determining a vertical offset angle by which the target rectangular frame deviates from the video image, according to the first 2D position information, the second 2D position information and the camera parameters;
determining the vertical offset according to the vertical offset angle and the pan-tilt step angle in the camera parameters;
the step of controlling the camera head of the camera to perform the steering operation according to the 3D position information includes:
acquiring a protocol instruction corresponding to the horizontal offset through a pan-tilt control protocol, and controlling the camera to move leftwards or rightwards according to the acquired protocol instruction; or acquiring a protocol instruction corresponding to the vertical offset through the pan-tilt control protocol, and controlling the camera to move upwards or downwards according to the acquired protocol instruction; or acquiring protocol instructions corresponding to the horizontal offset and the vertical offset through the pan-tilt control protocol, and controlling the camera to move leftwards and upwards, leftwards and downwards, rightwards and upwards, or rightwards and downwards according to the acquired protocol instructions;
and acquiring a protocol instruction corresponding to the focal length of the lens through the pan-tilt control protocol, and controlling the camera to perform the focal length adjustment operation according to the acquired protocol instruction.
2. The method of claim 1, wherein the step of determining a horizontal offset angle of the target rectangular frame from the video image according to the first 2D position information, the second 2D position information and the camera parameter comprises:
acquiring a first width in the first 2D position information, and a second width and horizontal position coordinates in the second 2D position information;
and determining a horizontal offset angle according to the first width, the second width, the horizontal position coordinate and the horizontal view angle in the camera parameters.
3. The method of claim 1, wherein the step of determining a vertical offset angle of the target rectangular frame from the video image according to the first 2D position information, the second 2D position information and the camera parameter comprises:
acquiring a first height in the first 2D position information, a second height in the second 2D position information and a vertical position coordinate of the target rectangular frame;
and determining a vertical offset angle according to the first height, the second height, the vertical position coordinate and the vertical view angle in the camera parameters.
4. The positioning and tracking method according to claim 1, wherein the step of acquiring first 2D position information of a video image captured by a camera, second 2D position information of a target rectangular frame set in the video image, and camera parameters comprises:
if the video image shot by the camera is detected, acquiring a multimedia data stream of the video image;
displaying the video image based on the multimedia data stream, and acquiring first 2D position information and camera parameters of the video image;
if the setting operation corresponding to the video image is detected, determining the target rectangular frame based on the setting operation;
and acquiring second 2D position information of the target rectangular frame.
5. A positioning and tracking device, comprising: memory, a processor and a location tracking program stored on the memory and executable on the processor, the location tracking program when executed by the processor implementing the steps of the location tracking method according to any one of claims 1 to 4.
6. A computer-readable storage medium, having a location tracking program stored thereon, which when executed by a processor implements the steps of the location tracking method of any one of claims 1 to 4.
CN201910729307.8A 2019-08-07 2019-08-07 Positioning tracking method, device and computer readable storage medium Active CN110456829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910729307.8A CN110456829B (en) 2019-08-07 2019-08-07 Positioning tracking method, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910729307.8A CN110456829B (en) 2019-08-07 2019-08-07 Positioning tracking method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110456829A CN110456829A (en) 2019-11-15
CN110456829B true CN110456829B (en) 2022-12-13

Family

ID=68485468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910729307.8A Active CN110456829B (en) 2019-08-07 2019-08-07 Positioning tracking method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110456829B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111131713B (en) * 2019-12-31 2022-03-08 深圳市维海德技术股份有限公司 Lens switching method, device, equipment and computer readable storage medium
CN111385476A (en) * 2020-03-16 2020-07-07 浙江大华技术股份有限公司 Method and device for adjusting shooting position of shooting equipment
CN112017210A (en) * 2020-07-14 2020-12-01 创泽智能机器人集团股份有限公司 Target object tracking method and device
CN113067962A (en) * 2021-03-17 2021-07-02 杭州寰宇微视科技有限公司 Method for realizing scene motion positioning based on movement camera image
CN113452913B (en) * 2021-06-28 2022-05-27 北京宙心科技有限公司 Zooming system and method
CN113938614B (en) * 2021-12-20 2022-03-22 苏州万店掌软件技术有限公司 Video image zooming method, device, equipment and storage medium

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6704048B1 (en) * 1998-08-27 2004-03-09 Polycom, Inc. Adaptive electronic zoom control
CN101068342A (en) * 2007-06-05 2007-11-07 西安理工大学 Video frequency motion target close-up trace monitoring method based on double-camera head linkage structure
CN103024276A (en) * 2012-12-17 2013-04-03 沈阳聚德视频技术有限公司 Positioning and focusing method of pan-tilt camera
CN103713652A (en) * 2012-09-28 2014-04-09 浙江大华技术股份有限公司 Holder rotation speed control method, device and system
CN103929583A (en) * 2013-01-15 2014-07-16 北京三星通信技术研究有限公司 Method for controlling intelligent terminal and intelligent terminal
JP2015180091A (en) * 2015-05-08 2015-10-08 ルネサスエレクトロニクス株式会社 digital camera
CN105763795A (en) * 2016-03-01 2016-07-13 苏州科达科技股份有限公司 Focusing method and apparatus, cameras and camera system
CN106161941A (en) * 2016-07-29 2016-11-23 深圳众思科技有限公司 Dual camera chases after burnt method, device and terminal automatically
CN106251334A (en) * 2016-07-18 2016-12-21 华为技术有限公司 A kind of camera parameters method of adjustment, instructor in broadcasting's video camera and system
CN106657727A (en) * 2016-11-02 2017-05-10 深圳市维海德技术股份有限公司 Camera varifocal sensing mechanism and camera assembly
CN107079106A (en) * 2016-09-26 2017-08-18 深圳市大疆创新科技有限公司 Focusing method and device, image capturing method and device and camera system
CN107277359A (en) * 2017-07-13 2017-10-20 深圳市魔眼科技有限公司 Method, device, mobile terminal and the storage medium of adaptive zoom in 3D scannings
CN107507243A (en) * 2016-06-14 2017-12-22 华为技术有限公司 A kind of camera parameters method of adjustment, instructor in broadcasting's video camera and system
CN107959793A (en) * 2017-11-29 2018-04-24 努比亚技术有限公司 A kind of image processing method and terminal, storage medium
CN108225278A (en) * 2017-11-29 2018-06-29 维沃移动通信有限公司 A kind of distance measuring method, mobile terminal
WO2018120460A1 (en) * 2016-12-28 2018-07-05 平安科技(深圳)有限公司 Image focal length detection method, apparatus and device, and computer-readable storage medium
CN108495028A (en) * 2018-03-14 2018-09-04 维沃移动通信有限公司 A kind of camera shooting focus adjustment method, device and mobile terminal
CN108549413A (en) * 2018-04-27 2018-09-18 全球能源互联网研究院有限公司 A kind of holder method of controlling rotation, device and unmanned vehicle
CN108668099A (en) * 2017-03-31 2018-10-16 鸿富锦精密工业(深圳)有限公司 video conference control method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5384172B2 (en) * 2009-04-03 2014-01-08 富士フイルム株式会社 Auto focus system
JP5682168B2 (en) * 2010-07-30 2015-03-11 ソニー株式会社 Camera device, camera system, control device, and program
CN102148965B (en) * 2011-05-09 2014-01-15 厦门博聪信息技术有限公司 Video monitoring system for multi-target tracking close-up shooting
CN107925713B (en) * 2015-08-26 2020-05-08 富士胶片株式会社 Image pickup system and image pickup control method
CN105163024A (en) * 2015-08-27 2015-12-16 华为技术有限公司 Method for obtaining target image and target tracking device
CN105718862A (en) * 2016-01-15 2016-06-29 北京市博汇科技股份有限公司 Method, device and recording-broadcasting system for automatically tracking teacher via single camera
CN109391762B (en) * 2017-08-03 2021-10-22 杭州海康威视数字技术股份有限公司 Tracking shooting method and device

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6704048B1 (en) * 1998-08-27 2004-03-09 Polycom, Inc. Adaptive electronic zoom control
CN101068342A (en) * 2007-06-05 2007-11-07 西安理工大学 Video frequency motion target close-up trace monitoring method based on double-camera head linkage structure
CN103713652A (en) * 2012-09-28 2014-04-09 浙江大华技术股份有限公司 Holder rotation speed control method, device and system
CN103024276A (en) * 2012-12-17 2013-04-03 沈阳聚德视频技术有限公司 Positioning and focusing method of pan-tilt camera
CN103929583A (en) * 2013-01-15 2014-07-16 北京三星通信技术研究有限公司 Method for controlling intelligent terminal and intelligent terminal
JP2015180091A (en) * 2015-05-08 2015-10-08 ルネサスエレクトロニクス株式会社 digital camera
CN105763795A (en) * 2016-03-01 2016-07-13 苏州科达科技股份有限公司 Focusing method and apparatus, cameras and camera system
CN107507243A (en) * 2016-06-14 2017-12-22 华为技术有限公司 A kind of camera parameters method of adjustment, instructor in broadcasting's video camera and system
CN106251334A (en) * 2016-07-18 2016-12-21 华为技术有限公司 A kind of camera parameters method of adjustment, instructor in broadcasting's video camera and system
CN106161941A (en) * 2016-07-29 2016-11-23 深圳众思科技有限公司 Dual camera chases after burnt method, device and terminal automatically
CN107079106A (en) * 2016-09-26 2017-08-18 深圳市大疆创新科技有限公司 Focusing method and device, image capturing method and device and camera system
CN106657727A (en) * 2016-11-02 2017-05-10 深圳市维海德技术股份有限公司 Camera varifocal sensing mechanism and camera assembly
WO2018120460A1 (en) * 2016-12-28 2018-07-05 平安科技(深圳)有限公司 Image focal length detection method, apparatus and device, and computer-readable storage medium
CN108668099A (en) * 2017-03-31 2018-10-16 鸿富锦精密工业(深圳)有限公司 video conference control method and device
CN107277359A (en) * 2017-07-13 2017-10-20 深圳市魔眼科技有限公司 Method, device, mobile terminal and the storage medium of adaptive zoom in 3D scannings
CN107959793A (en) * 2017-11-29 2018-04-24 努比亚技术有限公司 A kind of image processing method and terminal, storage medium
CN108225278A (en) * 2017-11-29 2018-06-29 维沃移动通信有限公司 A kind of distance measuring method, mobile terminal
CN108495028A (en) * 2018-03-14 2018-09-04 维沃移动通信有限公司 A kind of camera shooting focus adjustment method, device and mobile terminal
CN108549413A (en) * 2018-04-27 2018-09-18 全球能源互联网研究院有限公司 A kind of holder method of controlling rotation, device and unmanned vehicle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zoom Motion Estimation Using Block-Based Fast Local Area Scaling; Hyo-Sung Kim, et al.; IEEE Transactions on Circuits and Systems for Video Technology; 2012-12-31; Vol. 22, No. 9; full text *
Analysis of a monocular stereo imaging system based on dual focal lengths; Liu Xinxin, et al.; Computer Measurement & Control; 2008-12-31; No. 9; full text *

Also Published As

Publication number Publication date
CN110456829A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
CN110456829B (en) Positioning tracking method, device and computer readable storage medium
US9007400B2 (en) Image processing device, image processing method and computer-readable medium
US9712745B2 (en) Method and apparatus for operating camera function in portable terminal
EP2426637B1 (en) Method for generating panoramic image
JP5740884B2 (en) AR navigation for repeated shooting and system, method and program for difference extraction
KR102124617B1 (en) Method for composing image and an electronic device thereof
KR20050003402A (en) Remotely-operated robot, and robot self position identifying method
US20130314547A1 (en) Controlling apparatus for automatic tracking camera, and automatic tracking camera having the same
US10951879B2 (en) Method, system and apparatus for capture of image data for free viewpoint video
JP2013228267A (en) Display device, display method, and program
JP2013236215A (en) Display video forming apparatus and display video forming method
US20220182551A1 (en) Display method, imaging method and related devices
JP7204346B2 (en) Information processing device, system, information processing method and program
JP2016197797A (en) Image processing apparatus, image processing method, and image processing system
CN112672051B (en) Shooting method and device and electronic equipment
CN113302908B (en) Control method, handheld cradle head, system and computer readable storage medium
CN111935410B (en) Quick view finding method and system for multi-camera shooting
JP5200800B2 (en) Imaging apparatus and imaging system
JP6269014B2 (en) Focus control device and focus control method
JP2014039166A (en) Controller of automatic tracking camera and automatic tracking camera equipped with the same
CN113840084A (en) Method for realizing control of panoramic tripod head based on PTZ (Pan/Tilt/zoom) return technology of dome camera
CN112565866A (en) Focus control method, system, device and storage medium
CN112584110A (en) White balance adjusting method and device, electronic equipment and storage medium
JP2005333628A (en) Camera control apparatus, and monitoring camera system using same
JP4984791B2 (en) Pointing device, pointing method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant