CN110456829A - Positioning and tracing method, device and computer readable storage medium - Google Patents


Info

Publication number
CN110456829A
CN110456829A
Authority
CN
China
Prior art keywords
location information, camera, video image, video, positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910729307.8A
Other languages
Chinese (zh)
Other versions
CN110456829B (English)
Inventor
王丹飞
郑永勤
李海健
吕品
卢靖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hi Tech Ltd By Share Ltd
Original Assignee
Shenzhen Hi Tech Ltd By Share Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Hi Tech Ltd By Share Ltd
Priority to CN201910729307.8A
Publication of CN110456829A
Application granted
Publication of CN110456829B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 3/00 Control of position or direction
    • G05D 3/12 Control of position or direction using feedback
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals

Abstract

The invention discloses a positioning and tracking method comprising the following steps: obtaining first 2D location information of a video image captured by a video camera, second 2D location information of a target rectangle frame set in the video image, and camera parameters; determining 3D location information according to the first 2D location information, the second 2D location information and the camera parameters; controlling the camera of the video camera to execute a steering operation according to the 3D location information, and controlling the video camera to execute a shooting operation when the steering operation is completed, so as to obtain a video image of the target position corresponding to the target rectangle frame. The invention also discloses a positioning and tracking device and a computer-readable storage medium. The invention can accurately and quickly determine the target observation range and rapidly steer the video camera to the position of the target for shooting, so that the video image of the target position can be captured rapidly, improving the efficiency of video monitoring and the accuracy of the monitored target position range.

Description

Positioning and tracing method, device and computer readable storage medium
Technical field
The present invention relates to the technical field of image processing, and more particularly to a positioning and tracking method, device and computer-readable storage medium.
Background technique
Positioning and tracking technology is widely used in fields such as video surveillance, video conferencing and automatic recording and broadcasting systems, and has great practical value. Traditional positioning and tracking is realized by manipulating an external control keyboard: after a target is observed in the video image, the joystick of the control keyboard is moved to send up/down/left/right rotation commands and zoom-in/zoom-out commands, so as to control the camera to rotate and zoom.
However, the sensitivity of the joystick is low and its user needs a certain amount of experience to capture the observation target accurately and quickly. It is therefore difficult to capture the observation target precisely by controlling the camera rotation with a joystick: the video image range easily becomes too large or too small, so positioning and tracking is not accurate enough and the observation range is hard to control.
The above content is only intended to assist understanding of the technical solution of the present invention, and does not constitute an admission that it is prior art.
Summary of the invention
The main purpose of the present invention is to provide a positioning and tracking method, device and computer-readable storage medium, aiming to solve the technical problem that positioning and tracking is not accurate enough and the observation range is hard to control.
To achieve the above object, the present invention provides a positioning and tracking method comprising the following steps:
obtaining first 2D location information of a video image captured by a video camera, second 2D location information of a target rectangle frame set in the video image, and camera parameters;
determining 3D location information according to the first 2D location information, the second 2D location information and the camera parameters;
controlling the camera of the video camera to execute a steering operation according to the 3D location information, and controlling the video camera to execute a shooting operation when the steering operation is completed, so as to obtain a video image of the target position corresponding to the target rectangle frame.
In one embodiment, the 3D location information includes a horizontal offset, and the step of determining the 3D location information according to the first 2D location information, the second 2D location information and the camera parameters includes:
determining the horizontal offset angle by which the target rectangle frame deviates from the video image according to the first 2D location information, the second 2D location information and the camera parameters;
determining the horizontal offset according to the horizontal offset angle and the pan-tilt step angle in the camera parameters.
In one embodiment, the step of determining the horizontal offset angle by which the target rectangle frame deviates from the video image according to the first 2D location information, the second 2D location information and the camera parameters includes:
obtaining the first width in the first 2D location information, and the second width and the horizontal position coordinate in the second 2D location information;
determining the horizontal offset angle according to the first width, the second width, the horizontal position coordinate and the horizontal view angle in the camera parameters.
In one embodiment, the 3D location information further includes a vertical offset, and the step of determining the 3D location information according to the first 2D location information, the second 2D location information and the camera parameters further includes:
determining the vertical offset angle by which the target rectangle frame deviates from the video image according to the first 2D location information, the second 2D location information and the camera parameters;
determining the vertical offset according to the vertical offset angle and the pan-tilt step angle in the camera parameters.
In one embodiment, the step of determining the vertical offset angle by which the target rectangle frame deviates from the video image according to the first 2D location information, the second 2D location information and the camera parameters includes:
obtaining the first height in the first 2D location information, and the second height and the vertical position coordinate of the target rectangle frame in the second 2D location information;
determining the vertical offset angle according to the first height, the second height, the vertical position coordinate and the vertical view angle in the camera parameters.
In one embodiment, the 3D location information further includes a lens focal length, and the step of determining the 3D location information according to the first 2D location information, the second 2D location information and the camera parameters further includes:
obtaining the first width and the first height in the first 2D location information, and the second width, the second height and the first image multiple in the second 2D location information;
determining a second image multiple according to the first width, the first height, the second width, the second height and the first image multiple;
determining the lens focal length according to the second image multiple.
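The second-image-multiple step above can be sketched in a few lines. The exact formula is not given at this point in the text, so the scaling rule used here, multiplying the current multiple by the smaller of the width and height ratios so that the whole target rectangle frame stays in view after zooming, is an assumption, and the function name is illustrative only.

```python
def second_image_multiple(W, H, w, h, current_ratio):
    """Estimate the second image multiple (target zoom) from frame sizes.

    Assumed rule: grow the first image multiple by the smaller of the
    width ratio W/w and height ratio H/h, so the target rectangle frame
    still fits entirely in the zoomed video image.
    """
    return current_ratio * min(W / w, H / h)

# A 1920x1080 video image, a 480x270 target rectangle frame, current multiple 1.0:
print(second_image_multiple(1920, 1080, 480, 270, 1.0))  # 4.0
```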
In one embodiment, the step of obtaining the first 2D location information of the video image captured by the video camera, the second 2D location information of the target rectangle frame set in the video image, and the camera parameters includes:
if it is detected that the video camera has captured a video image, obtaining the multimedia data stream of the video image;
displaying the video image based on the multimedia data stream, and obtaining the first 2D location information of the video image and the camera parameters;
if a setting operation corresponding to the video image is detected, determining the target rectangle frame based on the setting operation;
obtaining the second 2D location information of the target rectangle frame.
In one embodiment, the 3D location information includes the horizontal offset, the vertical offset and the lens focal length, and the step of controlling the camera of the video camera to execute the steering operation according to the 3D location information includes:
controlling the camera to execute a moving operation based on the horizontal offset and the vertical offset;
controlling the camera to execute a focal length adjustment operation based on the lens focal length.
In addition, to achieve the above object, the present invention also provides a positioning and tracking device comprising: a memory, a processor and a positioning and tracking program stored on the memory and executable on the processor, wherein the positioning and tracking program, when executed by the processor, implements the steps of the positioning and tracking method described above.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium storing a positioning and tracking program, wherein the positioning and tracking program, when executed by a processor, implements the steps of the positioning and tracking method described above.
The present invention obtains the first 2D location information of the video image captured by the video camera, the second 2D location information of the target rectangle frame set in the video image, and the camera parameters; determines the 3D location information according to the first 2D location information, the second 2D location information and the camera parameters; controls the camera of the video camera to execute a steering operation according to the 3D location information; and controls the video camera to execute a shooting operation when the steering operation is completed, so as to obtain the video image of the target position corresponding to the target rectangle frame. By setting a target rectangle frame and converting 2D location information into 3D location information to control camera steering, the target observation range is determined accurately and quickly, the video camera is rapidly steered to the position of the target for shooting, and the video image of the target position can be captured rapidly, improving the efficiency of video monitoring and the accuracy of the monitored target position range.
Detailed description of the invention
Fig. 1 is a schematic structural diagram of the device in the hardware running environment involved in the embodiments of the present invention;
Fig. 2 is a schematic flowchart of the first embodiment of the positioning and tracking method of the present invention;
Fig. 3 is a schematic 3D zoom comparison diagram of the positioning and tracking method of the present invention.
The realization of the object, functional characteristics and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
As shown in Fig. 1, Fig. 1 is a schematic structural diagram of the positioning and tracking device in the hardware running environment involved in the embodiments of the present invention.
The positioning and tracking device of the embodiments of the present invention may be a PC, or a portable terminal device with a display function such as a smartphone, tablet computer, e-book reader or portable computer.
As shown in Fig. 1, the positioning and tracking device may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002, where the communication bus 1002 realizes connection and communication between these components. The user interface 1003 may include a display and an input unit such as a keyboard, and may optionally also include standard wired and wireless interfaces. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory such as a magnetic disk storage, and may optionally be a storage device independent of the aforementioned processor 1001.
Optionally, the positioning and tracking device may also include a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, etc. The sensors include, for example, optical sensors among others. Specifically, the optical sensors may include an ambient light sensor and a proximity sensor, where the ambient light sensor can adjust the brightness of the display screen according to the brightness of the ambient light. Of course, the positioning and tracking device may also be configured with other sensors such as a barometer, hygrometer, thermometer and infrared sensor, which are not described in detail here.
Those skilled in the art will understand that the positioning and tracking device structure shown in Fig. 1 does not constitute a limitation on the positioning and tracking device, which may include more or fewer components than illustrated, combine certain components, or have a different component arrangement.
As shown in Fig. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module and a positioning and tracking program.
In the positioning and tracking device shown in Fig. 1, the network interface 1004 is mainly used to connect to a background server and communicate data with it; the user interface 1003 is mainly used to connect to a client (user terminal) and communicate data with it; and the processor 1001 can be used to call the positioning and tracking program stored in the memory 1005.
In this embodiment, the positioning and tracking device includes: a memory 1005, a processor 1001 and a positioning and tracking program stored on the memory 1005 and executable on the processor 1001, wherein the processor 1001 calls the positioning and tracking program stored in the memory 1005 and executes the following operations:
obtaining first 2D location information of a video image captured by a video camera, second 2D location information of a target rectangle frame set in the video image, and camera parameters;
determining 3D location information according to the first 2D location information, the second 2D location information and the camera parameters;
controlling the camera of the video camera to execute a steering operation according to the 3D location information, and controlling the video camera to execute a shooting operation when the steering operation is completed, so as to obtain a video image of the target position corresponding to the target rectangle frame.
Further, the processor 1001 may call the positioning and tracking program stored in the memory 1005 and also execute the following operations:
determining the horizontal offset angle by which the target rectangle frame deviates from the video image according to the first 2D location information, the second 2D location information and the camera parameters;
determining the horizontal offset according to the horizontal offset angle and the pan-tilt step angle in the camera parameters.
Further, the processor 1001 may call the positioning and tracking program stored in the memory 1005 and also execute the following operations:
obtaining the first width in the first 2D location information, and the second width and the horizontal position coordinate in the second 2D location information;
determining the horizontal offset angle according to the first width, the second width, the horizontal position coordinate and the horizontal view angle in the camera parameters.
Further, the processor 1001 may call the positioning and tracking program stored in the memory 1005 and also execute the following operations:
determining the vertical offset angle by which the target rectangle frame deviates from the video image according to the first 2D location information, the second 2D location information and the camera parameters;
determining the vertical offset according to the vertical offset angle and the pan-tilt step angle in the camera parameters.
Further, the processor 1001 may call the positioning and tracking program stored in the memory 1005 and also execute the following operations:
obtaining the first height in the first 2D location information, and the second height and the vertical position coordinate of the target rectangle frame in the second 2D location information;
determining the vertical offset angle according to the first height, the second height, the vertical position coordinate and the vertical view angle in the camera parameters.
Further, the processor 1001 may call the positioning and tracking program stored in the memory 1005 and also execute the following operations:
obtaining the first width and the first height in the first 2D location information, and the second width, the second height and the first image multiple in the second 2D location information;
determining a second image multiple according to the first width, the first height, the second width, the second height and the first image multiple;
determining the lens focal length according to the second image multiple.
Further, the processor 1001 may call the positioning and tracking program stored in the memory 1005 and also execute the following operations:
if it is detected that the video camera has captured a video image, obtaining the multimedia data stream of the video image;
displaying the video image based on the multimedia data stream, and obtaining the first 2D location information of the video image and the camera parameters;
if a setting operation corresponding to the video image is detected, determining the target rectangle frame based on the setting operation;
obtaining the second 2D location information of the target rectangle frame.
Further, the processor 1001 may call the positioning and tracking program stored in the memory 1005 and also execute the following operations:
controlling the camera to execute a moving operation based on the horizontal offset and the vertical offset;
controlling the camera to execute a focal length adjustment operation based on the lens focal length.
The present invention also provides a positioning and tracking method. Referring to Fig. 2, Fig. 2 is a schematic flowchart of the first embodiment of the positioning and tracking method of the present invention.
In this embodiment, the positioning and tracking system implementing the positioning and tracking method includes a display terminal and a video camera, where the video camera includes a 2D-3D algorithm conversion module and a pan-tilt control execution module. The positioning and tracking method includes:
Step S10: obtaining first 2D location information of the video image captured by the video camera, second 2D location information of the target rectangle frame set in the video image, and camera parameters;
In this embodiment, the first 2D location information of the video image captured by the video camera is obtained from the display terminal, where the display terminal may be host computer software on a PC, a video surveillance system, an automatic recording and broadcasting system, a video conferencing system, etc., and the video camera may be a camera with video recording and network communication functions, such as an IPC (IP Camera, network camera), which performs surveillance shooting and sends information such as video images to the display terminal. After the video camera captures a video image, it converts the video image into a multimedia data stream; the display terminal receives the multimedia data stream sent by the video camera and displays the video image by parsing it. The display terminal can obtain the first 2D location information of the video image from the multimedia data stream, where the first 2D location information includes a first width and a first height.
After the display terminal displays the video image, a target rectangle frame can be set on the displayed video image, for example by manual touch or with a mouse. The target rectangle frame may be a rectangle set by the user, or another irregular shape converted into a target rectangle frame by certain rules. After the target rectangle frame is set, the second 2D location information corresponding to the target rectangle frame and the camera parameters of the current camera are obtained. The second 2D location information includes the second width, second height, first image multiple, horizontal position coordinate and vertical position coordinate of the target rectangle frame; the camera parameters include the horizontal view angle, vertical view angle and pan-tilt step angle of the camera.
For example, referring to Fig. 3, in this solution the host computer software on the PC displays the video image obtained from the IPC camera, shown in Fig. 3(a) before 3D zooming. A target rectangle frame can be delimited with the mouse on the video image displayed by the host computer software, such as the target rectangle frame around the letter A, and the host computer software obtains the first 2D location information and the second 2D location information. The first 2D location information is the first width W and first height H of the video image; the second 2D location information is the second width w, second height h, horizontal position coordinate x, vertical position coordinate y and first image multiple current_ratio of the set target rectangle frame. These 7 parameters of the first and second 2D location information are sent down to the IPC camera, which obtains them together with the camera parameters of its own camera, i.e. the horizontal view angle α, the vertical view angle β and the pan-tilt step angle of 0.069444°/step. The camera parameters of cameras from different manufacturers differ and are not specifically limited in this embodiment.
Step S20: determining 3D location information according to the first 2D location information, the second 2D location information and the camera parameters;
In this embodiment, after the video camera obtains the first 2D location information, the second 2D location information and the camera parameters, the 2D-3D algorithm conversion module in the video camera converts the first and second 2D location information, through a 2D-3D location information conversion algorithm, into 3D location information recognizable by the camera pan-tilt control execution module. Specifically, the 3D location information includes a horizontal offset, a vertical offset and a lens focal length.
For example, after the IPC camera in this solution obtains the first 2D location information, the second 2D location information and the camera parameters, it determines the 3D location information accordingly: from the first width W and first height H of the video image; the second width w, second height h, horizontal position coordinate x, vertical position coordinate y and first image multiple current_ratio of the set target rectangle frame; and the lens parameters of the IPC camera, i.e. the horizontal view angle α, the vertical view angle β and the pan-tilt step angle of 0.069444°/step, the 2D-3D location information conversion algorithm converts the first and second 2D location information into 3D location information, namely the horizontal offset pan_off, the vertical offset tilt_off and the lens focal length target_zoom.
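As a rough illustration, the conversion just described can be sketched as follows. This is not the patent's own code: the horizontal branch follows the formula θ = arctan(BC/OC) given in the second embodiment, the vertical branch mirrors it with the vertical view angle β on the assumption that it is symmetric, the zoom rule is an assumed one, and x and y are taken as measured from the image center (matching BC = x + w/2).

```python
import math

STEP_ANGLE = 0.069444  # pan-tilt step angle in degrees/step, as in this solution's IPC camera

def convert_2d_to_3d(W, H, w, h, x, y, current_ratio, alpha, beta):
    """Convert first/second 2D location information plus camera parameters
    into (pan_off, tilt_off, target_zoom). Sketch only; see caveats above."""
    # Horizontal offset angle: theta = arctan(BC/OC), OC = (W/2)/tan(alpha/2), BC = x + w/2
    OC = (W / 2) / math.tan(math.radians(alpha / 2))
    theta = math.degrees(math.atan((x + w / 2) / OC))
    pan_off = theta / STEP_ANGLE

    # Vertical offset angle, mirrored with heights and the vertical view angle (assumed symmetric)
    OC_v = (H / 2) / math.tan(math.radians(beta / 2))
    phi = math.degrees(math.atan((y + h / 2) / OC_v))
    tilt_off = phi / STEP_ANGLE

    # Lens focal length as a zoom multiple (assumed rule: scale by the
    # smaller frame-size ratio so the whole target rectangle stays in view)
    target_zoom = current_ratio * min(W / w, H / h)
    return pan_off, tilt_off, target_zoom
```

For a 1920x1080 video image, a 100x60 target rectangle frame 200 px right of and 100 px below the image center, and 60°/34° view angles, this yields roughly pan_off ≈ 123 steps.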
Step S30: controlling the camera of the video camera to execute a steering operation according to the 3D location information, and controlling the video camera to execute a shooting operation when the steering operation is completed, so as to obtain the video image of the target position corresponding to the target rectangle frame.
In this embodiment, after the first and second 2D location information are converted into 3D location information recognizable by the camera pan-tilt control execution module, the pan-tilt control execution module obtains the 3D location information from the 2D-3D algorithm conversion module, inputs the parameters of the 3D location information in turn, and obtains the protocol commands corresponding to the 3D location information through a pan-tilt control protocol, where the pan-tilt control protocol may be VISCA, PELCO-D/P or another pan-tilt control protocol, which is not specifically limited in this embodiment. The pan-tilt control execution module obtains the protocol commands corresponding to the different parameters in the 3D location information and uses them to control the camera of the video camera to execute steering operations in different directions, for example moving the lens up, down, left or right, zooming, focusing or stopping. Finally the camera of the video camera is turned to the target position corresponding to the previously positioned target rectangle frame and shoots, so that the video camera obtains the video image of the target position for real-time positioning and tracking.
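To make the protocol-command step concrete, here is a sketch of building a frame in PELCO-D, one of the pan-tilt control protocols named above. The 7-byte frame layout and command bits follow the widely published PELCO-D convention; the camera address and the pan/tilt speeds are illustrative values, not taken from this patent.

```python
def pelco_d_frame(address, command2, pan_speed=0x20, tilt_speed=0x20):
    """Build a 7-byte PELCO-D pan-tilt command frame:
    sync (0xFF), address, command1, command2, data1 (pan speed),
    data2 (tilt speed), checksum (sum of bytes 2-6 mod 256)."""
    command1 = 0x00
    body = [address, command1, command2, pan_speed, tilt_speed]
    checksum = sum(body) % 256
    return bytes([0xFF] + body + [checksum])

# Command2 bit flags: 0x04 = pan left, 0x02 = pan right, 0x08 = tilt up, 0x10 = tilt down.
# Steer left and up (as when the target rectangle frame is at the upper left):
frame = pelco_d_frame(address=0x01, command2=0x04 | 0x08)
print(frame.hex())  # ff01000c20204d
```

The execution module would send such frames until the accumulated steps match pan_off and tilt_off, then issue the zoom and stop commands.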
For example, the pan-tilt control execution module in the IPC camera in this solution obtains the 3D location information converted from the first and second 2D location information by the 2D-3D algorithm conversion module, i.e. the horizontal offset pan_off, the vertical offset tilt_off and the lens focal length target_zoom. According to this 3D location information, the pan-tilt control execution module controls the camera of the IPC camera to turn to the target position corresponding to the target rectangle frame previously set on the video image displayed by the display terminal. If the target position is at the upper left, the camera is controlled to move leftward and upward according to the horizontal offset pan_off and the vertical offset tilt_off, and a focal length adjustment operation is executed according to the lens focal length target_zoom, so that the lens turns to the target position for observation.
The positioning and tracking method proposed in this embodiment obtains the first 2D location information of the video image captured by the video camera, the second 2D location information of the target rectangle frame set in the video image, and the camera parameters; then determines the 3D location information according to the first 2D location information, the second 2D location information and the camera parameters; and finally controls the camera of the video camera to execute a steering operation according to the 3D location information, and controls the video camera to execute a shooting operation when the steering operation is completed, so as to obtain the video image of the target position corresponding to the target rectangle frame. By setting a target rectangle frame and converting 2D location information into 3D location information to control camera steering, the target observation range is determined accurately and quickly, the video camera is rapidly steered to the position of the target for shooting, and the video image of the target position can be captured rapidly, improving the efficiency of video monitoring and the accuracy of the monitored target position range.
Based on the first embodiment, a second embodiment of the positioning and tracking method of the present invention is proposed. In this embodiment, the 3D location information includes a horizontal offset, and step S20 includes:
Step a: determining the horizontal offset angle by which the target rectangle frame deviates from the video image according to the first 2D location information, the second 2D location information and the camera parameters;
The horizontal offset angle is the horizontal angle by which the target rectangle frame deviates from the video image; it may be the horizontal angle by which the center of the target rectangle frame deviates from the center of the video image, or another horizontal angle that can express the deviation of the target rectangle frame from the video image, such as the horizontal offset angle θ in Fig. 3(a).
In this embodiment, after the 2D-3D algorithm conversion module obtains the first 2D location information, the second 2D location information and the camera parameters, it takes from them the parameters required for the calculation and calculates the horizontal offset angle. The parameters required for calculating the horizontal offset may be widths, the relevant horizontal coordinates, the horizontal view angle in the camera parameters, or other parameters.
In one embodiment, step a includes:
Obtaining a first width in the first 2D location information, and a second width and a horizontal position coordinate in the second 2D location information;
Determining the horizontal offset angle according to the first width, the second width, the horizontal position coordinate, and the horizontal field-of-view angle in the camera parameters.
In this embodiment, the 2D-3D algorithm conversion module obtains the relevant parameters required to calculate the horizontal offset angle, including the first width of the video image in the first 2D location information, and the second width and horizontal position coordinate of the target rectangle frame in the second 2D location information. The first width may be the width of the video image, half of that width, a pixel count that can represent the width, or the like; the horizontal position coordinate may be the abscissa of the upper-left or lower-left corner of the target rectangle frame, the abscissa of its center, or any other abscissa that can represent the position of the target rectangle frame.
In this embodiment, after the 2D-3D algorithm conversion module obtains the relevant parameters required to calculate the horizontal offset angle (the first width, the second width, the horizontal position coordinate, and the horizontal field-of-view angle in the camera parameters), it inputs them into the 2D-3D conversion algorithm to calculate the horizontal offset angle. Referring to Fig. 3(a), the horizontal offset angle θ is given by:
θ = arctan(BC / OC)
Wherein, OC = AC / tan(α/2), BC = x + w/2, and AC = W/2; W is the first width, w is the second width, x is the horizontal position coordinate, and α is the horizontal field-of-view angle in the camera parameters.
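The formula above can be sketched in Python as follows (an illustrative sketch only, not part of the claimed method; variable names follow the description):

```python
import math

def horizontal_offset_angle(W, w, x, alpha_deg):
    """Horizontal offset angle theta in degrees, per theta = arctan(BC / OC).

    W: first width (video image), w: second width (target rectangle frame),
    x: horizontal position coordinate, alpha_deg: horizontal field of view.
    """
    AC = W / 2.0                                        # half the image width
    OC = AC / math.tan(math.radians(alpha_deg) / 2.0)   # derived from the field of view
    BC = x + w / 2.0                                    # rectangle-centre offset
    return math.degrees(math.atan(BC / OC))
```

For example, with a 90° horizontal field of view, an image of width 2, and a rectangle of width 1 at x = 0.5, the offset angle is 45°.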
Step b: determining the horizontal offset according to the horizontal offset angle and the pan-tilt step angle in the camera parameters.
In this embodiment, after the 2D-3D algorithm conversion module calculates the horizontal offset angle, it obtains the horizontal offset angle and the pan-tilt step angle, and from these it can calculate one of the parameters of the 3D location information: the horizontal offset. The pan-tilt step angle is determined by the parameters of the camera; it is a constant value, and it differs between the cameras of different manufacturers. For example, given a horizontal offset angle θ and a pan-tilt step angle of 0.069444°/step, the horizontal offset can be calculated as pan_off = θ / 0.069444.
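The conversion from angle to pan-tilt steps can be sketched as follows (an illustration only; 0.069444°/step is the example value from this embodiment, and real step angles are manufacturer-specific):

```python
def pan_offset(theta_deg, step_angle_deg=0.069444):
    # Horizontal offset in pan-tilt step units: pan_off = theta / step angle.
    # The default step angle is the example value used in this embodiment.
    return theta_deg / step_angle_deg
```

A 6.9444° offset angle therefore corresponds to 100 pan-tilt steps.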
The positioning and tracing method proposed in this embodiment first determines the horizontal offset angle by which the target rectangle frame deviates from the video image according to the first 2D location information, the second 2D location information, and the camera parameters, and then determines the horizontal offset according to the horizontal offset angle and the pan-tilt step angle in the camera parameters. Because the horizontal offset is derived from the horizontal offset angle and the pan-tilt step angle, it can be determined accurately, which improves the accuracy of the calculated horizontal offset and, in turn, the accuracy of the horizontal distance by which the pan-tilt control module moves the camera.
Based on the first embodiment, a third embodiment of the positioning and tracing method of the present invention is proposed. In this embodiment, the 3D location information includes a vertical offset, and step S20 further includes:
Step c: determining, according to the first 2D location information, the second 2D location information, and the camera parameters, a vertical offset angle by which the target rectangle frame deviates from the video image;
Wherein, the vertical offset angle is the vertical angle by which the target rectangle frame deviates from the video image. It may be the vertical angle by which the center of the target rectangle frame deviates from the center of the video image, or any other vertical angle that can indicate the deviation of the target rectangle frame from the video image, such as the vertical offset angle φ in Fig. 3.
In this embodiment, after the 2D-3D algorithm conversion module obtains the first 2D location information, the second 2D location information, and the camera parameters, it calculates the vertical offset on that basis: the parameters required for the calculation are extracted from the first 2D location information, the second 2D location information, and the camera parameters, and the vertical offset angle is calculated. The parameters required for calculating the vertical offset may be heights, relevant vertical coordinates, the vertical field-of-view angle among the camera parameters, or other parameters.
In one embodiment, step c includes:
Obtaining a first height in the first 2D location information, and a second height and the vertical position coordinate of the target rectangle frame in the second 2D location information;
Determining the vertical offset angle according to the first height, the second height, the vertical position coordinate, and the vertical field-of-view angle in the camera parameters.
In this embodiment, the 2D-3D algorithm conversion module obtains the relevant parameters required to calculate the vertical offset angle, including the first height of the video image in the first 2D location information, and the second height and vertical position coordinate of the target rectangle frame in the second 2D location information. The first height may be the height of the video image, half of that height, a pixel count that can represent the height, or the like; the vertical position coordinate may be the ordinate of the upper-left or lower-left corner of the target rectangle frame, the ordinate of its center, or any other ordinate that can represent the position of the target rectangle frame.
In this embodiment, after the 2D-3D algorithm conversion module obtains the relevant parameters required to calculate the vertical offset angle (the first height, the second height, the vertical position coordinate, and the vertical field-of-view angle in the camera parameters), it inputs them into the 2D-3D conversion algorithm to calculate the vertical offset angle. Referring to Fig. 3(a), and by symmetry with the horizontal case, the vertical offset angle φ is given by:
φ = arctan((y + h/2) / ((H/2) / tan(β/2)))
Wherein, H is the first height, h is the second height, y is the vertical position coordinate, and β is the vertical field-of-view angle in the camera parameters.
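By symmetry with the horizontal case, the vertical offset angle can be sketched as follows (an illustration assuming the vertical formula mirrors the horizontal one, since the printed formula is elided in this text):

```python
import math

def vertical_offset_angle(H, h, y, beta_deg):
    """Vertical offset angle phi in degrees, mirroring the horizontal case:
    phi = arctan((y + h/2) / ((H/2) / tan(beta/2)))."""
    OD = (H / 2.0) / math.tan(math.radians(beta_deg) / 2.0)  # from the vertical FOV
    BD = y + h / 2.0                                         # rectangle-centre offset
    return math.degrees(math.atan(BD / OD))
```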
Step d: determining the vertical offset according to the vertical offset angle and the pan-tilt step angle in the camera parameters.
In this embodiment, after the 2D-3D algorithm conversion module calculates the vertical offset angle, it obtains the vertical offset angle and the pan-tilt step angle, and from these it can calculate one of the parameters of the 3D location information: the vertical offset. The pan-tilt step angle is determined by the parameters of the camera; it is a constant value, and it differs between the cameras of different manufacturers. For example, given a vertical offset angle φ and a pan-tilt step angle of 0.069444°/step, the vertical offset can be calculated as tilt_off = φ / 0.069444.
The positioning and tracing method proposed in this embodiment first determines the vertical offset angle by which the target rectangle frame deviates from the video image according to the first 2D location information, the second 2D location information, and the camera parameters, and then determines the vertical offset according to the vertical offset angle and the pan-tilt step angle in the camera parameters. Because the vertical offset is derived from the vertical offset angle and the pan-tilt step angle, it can be determined accurately, which improves the accuracy of the calculated vertical offset and, in turn, the accuracy of the vertical distance by which the pan-tilt control module moves the camera.
Based on the first embodiment, a fourth embodiment of the positioning and tracing method of the present invention is proposed. In this embodiment, the 3D location information includes a lens focus, and step S20 further includes:
Step e: obtaining a first width and a first height in the first 2D location information, and a second width, a second height, and a first image multiple in the second 2D location information;
In this embodiment, the 2D-3D algorithm conversion module obtains the relevant parameters required to calculate the lens focus, including the first width and first height of the video image in the first 2D location information, and the second width, second height, and first image multiple of the target rectangle frame in the second 2D location information. The first width may be the width of the video image, half of that width, a pixel count that can represent the width, or the like; the first height may be the height of the video image, half of that height, or a pixel count that can represent the height; the second width may be the width of the target rectangle frame, half of that width, or a pixel count that can represent the width; and the second height may be the height of the target rectangle frame, half of that height, or a pixel count that can represent the height. The first image multiple is the second image multiple calculated the last time the locating and tracking program was executed; it is to be understood that if the second image multiple is being calculated for the first time, the first image multiple equals a preset initial value, for example 1. Preferably, when the steering operation of the camera is completed, the second image multiple is stored as the new first image multiple.
Step f: determining a second image multiple according to the first width, the first height, the second width, the second height, and the first image multiple;
In this embodiment, after the 2D-3D algorithm conversion module obtains the relevant parameters required to calculate the second image multiple (the first width, first height, second width, second height, and first image multiple), it inputs them into the 2D-3D conversion algorithm to calculate the second image multiple. Referring to Fig. 3(a), the second image multiple target_ratio is given by:
target_ratio = target_area / source_area * current_ratio
Wherein, source_area = W*H is the area of the video image, target_area = w*h is the area of the target rectangle frame, W is the first width, H is the first height, w is the second width, h is the second height, and current_ratio is the first image multiple.
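Assuming the numerator of the published formula was intended to be the area of the target rectangle frame (the printed version is garbled), the calculation can be sketched as:

```python
def second_image_multiple(W, H, w, h, current_ratio):
    # target_ratio = target_area / source_area * current_ratio
    source_area = W * H   # area of the video image
    target_area = w * h   # area of the target rectangle frame
    return target_area / source_area * current_ratio
```

For a 1920x1080 image and a 192x108 rectangle at a first image multiple of 1, the second image multiple is 0.01.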
Step g: determining the lens focus according to the second image multiple.
In this embodiment, after the 2D-3D algorithm conversion module determines the second image multiple, it obtains the second image multiple and, according to the functional relationship between the second image multiple and the lens focus, calculates one of the parameters of the 3D location information: the lens focus. The functional relationship between the second image multiple and the lens focus is measured or fitted on the video camera; it differs between the video cameras of different manufacturers and is generally nonlinear. For example, given the second image multiple target_ratio and the functional relationship f(x) between the second image multiple and the lens focus, the lens focus can be calculated as target_zoom = f(target_ratio).
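As an illustration of target_zoom = f(target_ratio), the following sketch substitutes a hypothetical calibration table and piecewise-linear interpolation for the manufacturer's measured or fitted curve (the table values are invented for illustration only):

```python
from bisect import bisect_left

# Hypothetical calibration table (image multiple -> zoom position); a real
# camera needs a per-manufacturer measured or fitted curve, generally nonlinear.
CALIBRATION = [(1.0, 0.0), (2.0, 1200.0), (4.0, 2600.0), (8.0, 4200.0)]

def lens_focus(target_ratio):
    """Piecewise-linear stand-in for the function f in target_zoom = f(target_ratio)."""
    ratios = [r for r, _ in CALIBRATION]
    zooms = [z for _, z in CALIBRATION]
    if target_ratio <= ratios[0]:
        return zooms[0]
    if target_ratio >= ratios[-1]:
        return zooms[-1]
    i = bisect_left(ratios, target_ratio)
    r0, z0 = ratios[i - 1], zooms[i - 1]
    r1, z1 = ratios[i], zooms[i]
    # Linear interpolation between the two bracketing calibration points.
    return z0 + (z1 - z0) * (target_ratio - r0) / (r1 - r0)
```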
The positioning and tracing method proposed in this embodiment first obtains the first width and first height in the first 2D location information, and the second width, second height, and first image multiple in the second 2D location information; then determines the second image multiple according to these parameters; and finally determines the lens focus according to the second image multiple. Because the lens focus is derived from the second image multiple and the functional relationship between the second image multiple and the lens focus, it can be determined accurately, which improves the accuracy of the calculated lens focus and, in turn, the accuracy of the focal-length adjustment operations executed by the camera under the control of the pan-tilt control module.
Based on the first embodiment, a fifth embodiment of the positioning and tracing method of the present invention is proposed. In this embodiment, step S10 includes:
Step h: if it is detected that the video camera has captured a video image, obtaining a multimedia data stream of the video image;
In this embodiment, the video camera and the display terminal are connected through a network port, and the video image is captured by the video camera. If it is detected that the video camera has captured a video image, the video camera converts the captured video image into a multimedia data stream and sends the multimedia data stream to the display terminal, which then obtains the multimedia data stream corresponding to the video image.
Step i: displaying the video image based on the multimedia data stream, and obtaining the first 2D location information of the video image and the camera parameters;
In this embodiment, after the display terminal obtains the multimedia data stream corresponding to the video image, it converts the multimedia data stream back into a video image that can be displayed on the display terminal.
Step j: if a setting operation corresponding to the video image is detected, determining the target rectangle frame based on the setting operation;
In this embodiment, after the display terminal displays the video image, the target rectangle frame can be set on the displayed video image. The target rectangle frame may be set by locking a rectangle or another irregular shape around the target position with a mouse, or by manual touch on a touch screen. If a setting operation corresponding to the video image displayed on the display terminal is detected, the setting operation is converted into a target rectangle frame according to a fixed rule, and the target rectangle frame around the target position is locked. The rule for converting a rectangle or an irregular shape into a target rectangle frame is as follows: if a rectangle is set, that rectangle is locked directly as the target rectangle frame; if an irregular shape is set, the highest, lowest, leftmost, and rightmost points of the irregular shape are detected, and a target rectangle frame is set that encloses all the points on the boundary of the irregular shape.
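The rectangle-locking rule for irregular shapes described above can be sketched as follows (an illustration only; the function name and coordinate convention are assumptions):

```python
def bounding_rectangle(points):
    """Smallest axis-aligned rectangle enclosing every boundary point of an
    irregular shape, per the conversion rule described above.
    Returns (x, y, width, height), with (x, y) the top-left corner in
    image coordinates (y grows downward)."""
    xs = [px for px, _ in points]
    ys = [py for _, py in points]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))
```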
Step k: obtaining the second 2D location information of the target rectangle frame.
In this embodiment, after the target rectangle frame is set on the video image displayed by the display terminal, the second 2D location information corresponding to the target rectangle frame is obtained. The second 2D location information includes the second width, second height, first image multiple, horizontal position coordinate, and vertical position coordinate of the target rectangle frame.
In the positioning and tracing method proposed in this embodiment, if it is detected that the video camera has captured a video image, the multimedia data stream of the video image is obtained; the video image is then displayed based on the multimedia data stream, and the first 2D location information of the video image and the camera parameters are obtained; finally, if a setting operation corresponding to the video image is detected, the target rectangle frame is determined based on the setting operation, and the second 2D location information of the target rectangle frame is obtained. Setting the target rectangle frame directly on the video image helps the user lock the target rectangle frame quickly, capture the observation target accurately, and determine the observation location and range.
Based on the first embodiment, a sixth embodiment of the positioning and tracing method of the present invention is proposed. In this embodiment, step S30 includes:
Step l: controlling the camera to execute a moving operation based on the horizontal offset and the vertical offset;
In this embodiment, after the 2D location information is converted into 3D location information, the pan-tilt control execution module obtains the 3D location information. It inputs the horizontal offset in the 3D location information into the pan-tilt control protocol to obtain the protocol instruction corresponding to the horizontal offset, and controls the camera to move left or right according to the obtained protocol instruction; or it inputs the vertical offset in the 3D location information into the pan-tilt control protocol to obtain the protocol instruction corresponding to the vertical offset, and controls the camera to move up or down; or it inputs both the horizontal offset and the vertical offset into the pan-tilt control protocol to obtain the corresponding protocol instructions, and controls the camera to move toward the upper left, lower left, upper right, or lower right.
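The dispatch of signed offsets to direction commands described above can be sketched as follows (the command names are placeholders, not an actual protocol; real pan-tilt protocols and their framing are manufacturer-specific):

```python
def movement_commands(pan_off, tilt_off):
    """Map signed step offsets to illustrative direction commands.

    Positive pan is taken as rightward and positive tilt as upward;
    both conventions are assumptions for this sketch.
    """
    commands = []
    if pan_off:
        commands.append(("PAN_RIGHT" if pan_off > 0 else "PAN_LEFT", abs(pan_off)))
    if tilt_off:
        commands.append(("TILT_UP" if tilt_off > 0 else "TILT_DOWN", abs(tilt_off)))
    return commands
```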
Step m: controlling the camera to execute a focal-length adjustment operation based on the lens focus.
In this embodiment, after the 2D location information is converted into 3D location information, the pan-tilt control execution module obtains the 3D location information, inputs the lens focus in the 3D location information into the pan-tilt control protocol to obtain the protocol instruction corresponding to the lens focus, and controls the camera to execute a focal-length adjustment operation according to the obtained protocol instruction. The focal-length adjustment operation may be increasing the focal length, decreasing the focal length, focusing, and so on. For example, in the zoom comparison diagram in Fig. 3, figure (a) shows the video image displayed on the display terminal before focusing, with the target rectangle frame set, and figure (b) shows the video image displayed after focusing: after the camera executes the moving and focal-length adjustment operations, it turns to the position of the letter A inside the target rectangle frame, determines the position of the letter A, and tracks and zooms in to the video image shown in figure (b).
In the positioning and tracing method proposed in this embodiment, the camera is controlled to execute a moving operation based on the horizontal offset and the vertical offset, and then to execute a focal-length adjustment operation based on the lens focus. By controlling the camera to execute the moving and focal-length adjustment operations, the camera is turned to shoot the target position corresponding to the target rectangle frame, realizing positioning and tracking of the observation target.
In addition, an embodiment of the present invention also proposes a computer-readable storage medium on which a locating and tracking program is stored, and when the locating and tracking program is executed by a processor, the following operations are realized:
obtaining first 2D location information of a video image captured by a video camera, second 2D location information of a target rectangle frame set in the video image, and camera parameters;
determining 3D location information according to the first 2D location information, the second 2D location information, and the camera parameters;
controlling, according to the 3D location information, the camera of the video camera to execute a steering operation, and controlling the video camera to execute a shooting operation when the steering operation is completed, so as to obtain a video image of a target position corresponding to the target rectangle frame.
Further, when the locating and tracking program is executed by the processor, the following operations are also realized:
determining, according to the first 2D location information, the second 2D location information, and the camera parameters, the horizontal offset angle by which the target rectangle frame deviates from the video image;
determining the horizontal offset according to the horizontal offset angle and the pan-tilt step angle in the camera parameters.
Further, when the locating and tracking program is executed by the processor, the following operations are also realized:
obtaining the first width in the first 2D location information, and the second width and horizontal position coordinate in the second 2D location information;
determining the horizontal offset angle according to the first width, the second width, the horizontal position coordinate, and the horizontal field-of-view angle in the camera parameters.
Further, when the locating and tracking program is executed by the processor, the following operations are also realized:
determining, according to the first 2D location information, the second 2D location information, and the camera parameters, the vertical offset angle by which the target rectangle frame deviates from the video image;
determining the vertical offset according to the vertical offset angle and the pan-tilt step angle in the camera parameters.
Further, when the locating and tracking program is executed by the processor, the following operations are also realized:
obtaining the first height in the first 2D location information, and the second height and the vertical position coordinate of the target rectangle frame in the second 2D location information;
determining the vertical offset angle according to the first height, the second height, the vertical position coordinate, and the vertical field-of-view angle in the camera parameters.
Further, when the locating and tracking program is executed by the processor, the following operations are also realized:
obtaining the first width and first height in the first 2D location information, and the second width, second height, and first image multiple in the second 2D location information;
determining the second image multiple according to the first width, the first height, the second width, the second height, and the first image multiple;
determining the lens focus according to the second image multiple.
Further, when the locating and tracking program is executed by the processor, the following operations are also realized:
if it is detected that the video camera has captured a video image, obtaining the multimedia data stream of the video image;
displaying the video image based on the multimedia data stream, and obtaining the first 2D location information of the video image and the camera parameters;
if a setting operation corresponding to the video image is detected, determining the target rectangle frame based on the setting operation;
obtaining the second 2D location information of the target rectangle frame.
Further, when the locating and tracking program is executed by the processor, the following operations are also realized:
controlling the camera to execute a moving operation based on the horizontal offset and the vertical offset;
controlling the camera to execute a focal-length adjustment operation based on the lens focus.
It should be noted that, in this document, the terms "include", "comprise", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or system that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the advantages or disadvantages of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, magnetic disk, or optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A positioning and tracing method, characterized in that the positioning and tracing method comprises the following steps:
obtaining first 2D location information of a video image captured by a video camera, second 2D location information of a target rectangle frame set in the video image, and camera parameters;
determining 3D location information according to the first 2D location information, the second 2D location information, and the camera parameters;
controlling, according to the 3D location information, the camera of the video camera to execute a steering operation, and controlling the video camera to execute a shooting operation when the steering operation is completed, so as to obtain a video image of a target position corresponding to the target rectangle frame.
2. The positioning and tracing method according to claim 1, characterized in that the 3D location information includes a horizontal offset, and the step of determining the 3D location information according to the first 2D location information, the second 2D location information, and the camera parameters includes:
determining, according to the first 2D location information, the second 2D location information, and the camera parameters, a horizontal offset angle by which the target rectangle frame deviates from the video image;
determining the horizontal offset according to the horizontal offset angle and the pan-tilt step angle in the camera parameters.
3. The positioning and tracing method according to claim 2, characterized in that the step of determining, according to the first 2D location information, the second 2D location information, and the camera parameters, the horizontal offset angle by which the target rectangle frame deviates from the video image includes:
obtaining a first width in the first 2D location information, and a second width and a horizontal position coordinate in the second 2D location information;
determining the horizontal offset angle according to the first width, the second width, the horizontal position coordinate, and the horizontal field-of-view angle in the camera parameters.
4. The positioning and tracing method according to claim 1, characterized in that the 3D location information further includes a vertical offset, and the step of determining the 3D location information according to the first 2D location information, the second 2D location information, and the camera parameters further includes:
determining, according to the first 2D location information, the second 2D location information, and the camera parameters, a vertical offset angle by which the target rectangle frame deviates from the video image;
determining the vertical offset according to the vertical offset angle and the pan-tilt step angle in the camera parameters.
5. The positioning and tracing method according to claim 4, characterized in that the step of determining, according to the first 2D location information, the second 2D location information, and the camera parameters, the vertical offset angle by which the target rectangle frame deviates from the video image includes:
obtaining a first height in the first 2D location information, and a second height and the vertical position coordinate of the target rectangle frame in the second 2D location information;
determining the vertical offset angle according to the first height, the second height, the vertical position coordinate, and the vertical field-of-view angle in the camera parameters.
6. The positioning and tracing method according to claim 1, characterized in that the 3D location information further includes a lens focus, and the step of determining the 3D location information according to the first 2D location information, the second 2D location information, and the camera parameters further includes:
obtaining a first width and a first height in the first 2D location information, and a second width, a second height, and a first image multiple in the second 2D location information;
determining a second image multiple according to the first width, the first height, the second width, the second height, and the first image multiple;
determining the lens focus according to the second image multiple.
7. The positioning and tracing method according to claim 1, characterized in that the step of obtaining the first 2D location information of the video image captured by the video camera, the second 2D location information of the target rectangle frame set in the video image, and the camera parameters includes:
if it is detected that the video camera has captured a video image, obtaining a multimedia data stream of the video image;
displaying the video image based on the multimedia data stream, and obtaining the first 2D location information of the video image and the camera parameters;
if a setting operation corresponding to the video image is detected, determining the target rectangle frame based on the setting operation;
obtaining the second 2D location information of the target rectangle frame.
8. The positioning and tracing method according to any one of claims 1 to 7, characterized in that the 3D location information includes the horizontal offset, the vertical offset, and the lens focus, and the step of controlling, according to the 3D location information, the camera of the video camera to execute the steering operation includes:
controlling the camera to execute a moving operation based on the horizontal offset and the vertical offset;
controlling the camera to execute a focal-length adjustment operation based on the lens focus.
9. A positioning and tracking device, characterized in that the positioning and tracking device includes: a memory, a processor, and a locating and tracking program stored on the memory and executable on the processor, wherein when the locating and tracking program is executed by the processor, the steps of the positioning and tracing method according to any one of claims 1 to 8 are implemented.
10. a kind of computer readable storage medium, which is characterized in that be stored on the computer readable storage medium positioning with Track program realizes such as locating and tracking described in any item of the claim 1 to 8 when the locating and tracking program is executed by processor The step of method.
CN201910729307.8A 2019-08-07 2019-08-07 Positioning tracking method, device and computer readable storage medium Active CN110456829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910729307.8A CN110456829B (en) 2019-08-07 2019-08-07 Positioning tracking method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110456829A true CN110456829A (en) 2019-11-15
CN110456829B CN110456829B (en) 2022-12-13

Family

ID=68485468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910729307.8A Active CN110456829B (en) 2019-08-07 2019-08-07 Positioning tracking method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110456829B (en)

Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6704048B1 (en) * 1998-08-27 2004-03-09 Polycom, Inc. Adaptive electronic zoom control
CN101068342A (en) * 2007-06-05 2007-11-07 西安理工大学 Close-up tracking and surveillance method for moving video targets based on a dual-camera linkage structure
US20100254689A1 (en) * 2009-04-03 2010-10-07 Kunio Yata Autofocus system
US20120026343A1 (en) * 2010-07-30 2012-02-02 Toshihiro Ezoe Camera device, camera system, control device and program
WO2012151777A1 (en) * 2011-05-09 2012-11-15 上海芯启电子科技有限公司 Multi-target tracking close-up shooting video monitoring system
CN103024276A (en) * 2012-12-17 2013-04-03 沈阳聚德视频技术有限公司 Positioning and focusing method of pan-tilt camera
CN103713652A (en) * 2012-09-28 2014-04-09 浙江大华技术股份有限公司 Pan-tilt rotation speed control method, device and system
CN103929583A (en) * 2013-01-15 2014-07-16 北京三星通信技术研究有限公司 Method for controlling intelligent terminal and intelligent terminal
JP2015180091A (en) * 2015-05-08 2015-10-08 ルネサスエレクトロニクス株式会社 Digital camera
CN105163024A (en) * 2015-08-27 2015-12-16 华为技术有限公司 Method for obtaining target image and target tracking device
CN105718862A (en) * 2016-01-15 2016-06-29 北京市博汇科技股份有限公司 Method, device and recording-broadcasting system for automatically tracking a teacher via a single camera
CN105763795A (en) * 2016-03-01 2016-07-13 苏州科达科技股份有限公司 Focusing method and apparatus, camera and camera system
CN106161941A (en) * 2016-07-29 2016-11-23 深圳众思科技有限公司 Automatic focus-tracking method, device and terminal with dual cameras
CN106251334A (en) * 2016-07-18 2016-12-21 华为技术有限公司 Camera parameter adjustment method, director camera and system
CN106657727A (en) * 2016-11-02 2017-05-10 深圳市维海德技术股份有限公司 Camera zoom sensing mechanism and camera assembly
CN107079106A (en) * 2016-09-26 2017-08-18 深圳市大疆创新科技有限公司 Focusing method and device, image capturing method and device and camera system
CN107277359A (en) * 2017-07-13 2017-10-20 深圳市魔眼科技有限公司 Adaptive zoom method, device, mobile terminal and storage medium for 3D scanning
CN107507243A (en) * 2016-06-14 2017-12-22 华为技术有限公司 Camera parameter adjustment method, director camera and system
CN107925713A (en) * 2015-08-26 2018-04-17 富士胶片株式会社 Imaging system and imaging control method
CN107959793A (en) * 2017-11-29 2018-04-24 努比亚技术有限公司 Image processing method, terminal and storage medium
CN108225278A (en) * 2017-11-29 2018-06-29 维沃移动通信有限公司 Distance measuring method and mobile terminal
WO2018120460A1 (en) * 2016-12-28 2018-07-05 平安科技(深圳)有限公司 Image focal length detection method, apparatus and device, and computer-readable storage medium
CN108495028A (en) * 2018-03-14 2018-09-04 维沃移动通信有限公司 Camera focus adjustment method, device and mobile terminal
CN108549413A (en) * 2018-04-27 2018-09-18 全球能源互联网研究院有限公司 Pan-tilt rotation control method, device and unmanned aerial vehicle
CN108668099A (en) * 2017-03-31 2018-10-16 鸿富锦精密工业(深圳)有限公司 Video conference control method and device
CN109391762A (en) * 2017-08-03 2019-02-26 杭州海康威视数字技术股份有限公司 Tracking shooting method and apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HYO-SUNG KIM, et al.: "Zoom Motion Estimation Using Block-Based Fast Local Area Scaling", IEEE Transactions on Circuits and Systems for Video Technology *
LIU XINXIN, et al.: "Analysis of a Bifocal-Based Monocular Stereo Imaging System", Computer Measurement & Control *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111131713A (en) * 2019-12-31 2020-05-08 深圳市维海德技术股份有限公司 Lens switching method, device, equipment and computer readable storage medium
CN111131713B (en) * 2019-12-31 2022-03-08 深圳市维海德技术股份有限公司 Lens switching method, device, equipment and computer readable storage medium
CN111385476A (en) * 2020-03-16 2020-07-07 浙江大华技术股份有限公司 Method and device for adjusting shooting position of shooting equipment
CN112017210A (en) * 2020-07-14 2020-12-01 创泽智能机器人集团股份有限公司 Target object tracking method and device
CN113067962A (en) * 2021-03-17 2021-07-02 杭州寰宇微视科技有限公司 Method for realizing scene motion positioning based on moving camera images
CN113452913A (en) * 2021-06-28 2021-09-28 北京宙心科技有限公司 Zooming system and method
CN113452913B (en) * 2021-06-28 2022-05-27 北京宙心科技有限公司 Zooming system and method
CN113938614A (en) * 2021-12-20 2022-01-14 苏州万店掌软件技术有限公司 Video image zooming method, device, equipment and storage medium
CN113938614B (en) * 2021-12-20 2022-03-22 苏州万店掌软件技术有限公司 Video image zooming method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN110456829B (en) 2022-12-13

Similar Documents

Publication Publication Date Title
CN110456829A (en) Positioning and tracing method, device and computer readable storage medium
US9516214B2 (en) Information processing device and information processing method
EP3092603B1 (en) Dynamic updating of composite images
CN109934931B (en) Method and device for collecting image and establishing target object recognition model
KR100657522B1 (en) Apparatus and method for out-focusing photographing of portable terminal
KR101988152B1 (en) Video generation from video
CN104104867A (en) Method for controlling image photographing device for photographing and device thereof
GB2529943A (en) Tracking processing device and tracking processing system provided with same, and tracking processing method
CN103997598A (en) Method of tracking object using camera and camera system for object tracking
CN110383335A (en) The background subtraction inputted in video content based on light stream and sensor
CN112887598A (en) Image processing method and device, shooting support, electronic equipment and readable storage medium
JP2018530177A (en) Method and system for assisting a user in capturing an image or video
CN102663731B (en) Fast calibration method and system of optical axis of camera lens in optical touch system
CN112738397A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114640833A (en) Projection picture adjusting method and device, electronic equipment and storage medium
CN113194263A (en) Gun and ball linkage control method and device, computer equipment and storage medium
CN104168407A (en) Panorama photographing method
CN107645628B (en) Information processing method and device
CN110602376B (en) Snapshot method and device and camera
JP6483661B2 (en) Imaging control apparatus, imaging control method, and program
CN110636204B (en) Face snapshot system
CN109211185A (en) Flight device, and method and device for obtaining location information
CN105472232B (en) Image acquisition method and electronic device
CN111277750B (en) Shooting control method and device, electronic equipment and computer storage medium
KR101990252B1 (en) Method for producing virtual reality image, portable device in which VR photographing program for performing the same is installed, and server supplying the VR photographing program to the portable device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant