CN109391762B - Tracking shooting method and device - Google Patents

Tracking shooting method and device

Info

Publication number
CN109391762B
CN109391762B (application CN201710656922.1A)
Authority
CN
China
Prior art keywords
image
monitoring target
shooting
target
determining
Prior art date
Legal status
Active
Application number
CN201710656922.1A
Other languages
Chinese (zh)
Other versions
CN109391762A (en)
Inventor
潘科辰
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201710656922.1A
Publication of CN109391762A
Application granted
Publication of CN109391762B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a tracking shooting method and device, belonging to the field of computer technology. The method comprises the following steps: acquiring a first image captured by an image capturing component; determining the position of a monitoring target in the first image; determining first shooting direction adjustment information for the image capturing component according to the relative position information between the position of the monitoring target in the first image and a preset image reference position; and adjusting the shooting direction of the image capturing component based on the first shooting direction adjustment information. With the invention, the quality of tracking shooting can be improved.

Description

Tracking shooting method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for tracking shooting.
Background
In daily life there are many situations that call for tracking shots: for example, filming a worker as they handle a traffic accident, or tracking wild animals to make a documentary.
A common approach to tracking shots uses a pan-tilt camera: an operator watches the target to be tracked and manually rotates the pan-tilt (or slides it along a guide rail) so that the tracked target stays within the camera's shooting range.
In the process of implementing the invention, the inventor finds that the prior art has at least the following problems:
In this approach, the operator must rely entirely on visual observation to control the movement of the pan-tilt and keep the camera on the tracked target. Manual operation inevitably introduces shooting errors, which degrades the quality of the tracking shot.
Disclosure of Invention
In order to solve the problems in the prior art, embodiments of the present invention provide a method and an apparatus for tracking shooting. The technical scheme is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a method of tracking shots, the method including:
acquiring a first image captured by an image capturing part;
determining the position of a monitoring target in the first image;
determining first shooting direction adjustment information of the image shooting component according to relative position information between the position of the monitoring target in the first image and a preset image reference position;
adjusting the photographing direction of the image photographing part based on the first photographing direction adjustment information.
Optionally, the determining the position of the monitoring target in the first image includes:
and determining the position of the monitoring target in the first image according to a pre-trained target detection algorithm model for detecting the position of the monitoring target.
Optionally, the determining the position of the monitoring target in the first image according to a pre-trained target detection algorithm model for detecting the position of the monitoring target includes:
determining the position of at least one target in the first image according to a pre-trained target detection algorithm model for detecting the position of a monitored target;
in the displayed first image, displaying a mark corresponding to each target according to the position of each target in the first image, and when a selection instruction corresponding to the first target is received, determining the position of the first target in the first image as the position of the monitoring target in the first image.
Optionally, the method further comprises:
after a selection instruction corresponding to a first target is received, when a second image shot by an image shooting component is acquired, determining the position of the monitoring target in the second image according to the image characteristic information of the monitoring target, wherein the image characteristic information of the monitoring target is extracted from a previous frame image of the second image;
determining second shooting direction adjustment information of the image shooting component according to relative position information between the position of the monitoring target in the second image and a preset image reference position;
adjusting the photographing direction of the image photographing part based on the second photographing direction adjustment information.
Optionally, the method further comprises:
calculating the shooting distance between the image shooting component and the monitoring target according to the pitch angle of the image shooting component, the installation height of the image shooting component and the preset height of the monitoring target;
determining a target focal length corresponding to the calculated shooting distance according to a pre-stored corresponding relationship between the shooting distance and the focal length;
and adjusting the focal length of the image shooting part to the target focal length.
Optionally, the image reference position is an image center position.
In a second aspect, there is provided an apparatus for tracking shots, the apparatus comprising:
an acquisition module for acquiring a first image captured by the image capturing section;
the determining module is used for determining the position of a monitoring target in the first image; the first shooting direction adjusting information of the image shooting component is determined according to the relative position information between the position of the monitoring target in the first image and a preset image reference position;
and the adjusting module is used for adjusting the shooting direction of the image shooting component based on the first shooting direction adjusting information.
Optionally, the determining module is configured to:
and determining the position of the monitoring target in the first image according to a pre-trained target detection algorithm model for detecting the position of the monitoring target.
Optionally, the determining module is further configured to:
determining the position of at least one target in the first image according to a pre-trained target detection algorithm model for detecting the position of a monitored target;
in the displayed first image, displaying a mark corresponding to each target according to the position of each target in the first image, and when a selection instruction corresponding to the first target is received, determining the position of the first target in the first image as the position of the monitoring target in the first image.
Optionally, the determining module is further configured to, after receiving a selection instruction corresponding to the first target, determine, according to image feature information of the monitoring target, a position of the monitoring target in a second image when the second image captured by the image capturing component is acquired, where the image feature information of the monitoring target is extracted from a previous image of the second image; determining second shooting direction adjustment information of the image shooting component according to relative position information between the position of the monitoring target in the second image and a preset image reference position;
and the adjusting module is also used for adjusting the shooting direction of the image shooting component based on the second shooting direction adjusting information.
Optionally, the determining module is further configured to calculate a shooting distance between the image capturing component and the monitoring target according to a pitch angle of the image capturing component, an installation height of the image capturing component, and a preset height of the monitoring target; determining a target focal length corresponding to the calculated shooting distance according to a pre-stored corresponding relationship between the shooting distance and the focal length;
and the adjusting module is also used for adjusting the focal length of the image shooting component to the target focal length.
Optionally, the image reference position is an image center position.
According to a third aspect of the embodiments of the present disclosure, there is provided a terminal including a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the tracking shooting method described in the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing at least one instruction that is loaded and executed by a processor to implement the tracking shooting method described in the first aspect.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, a first image captured by an image capturing component is acquired, the position of a monitoring target in the first image is determined, first shooting direction adjustment information for the image capturing component is determined according to the relative position information between the position of the monitoring target in the first image and a preset image reference position, and the shooting direction of the image capturing component is adjusted based on that information. The pan-tilt camera can therefore track and shoot the monitored target automatically, without an operator manually moving the pan-tilt; this avoids the shooting errors of manual operation and improves the quality of tracking shooting.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a tracking shooting method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a display interface of a method for detecting a target according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a display interface of a method for clicking a monitoring target according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for adjusting focus according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method for determining a monitored target according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an apparatus for tracking shooting according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The embodiment of the invention provides a method for tracking shooting, which can be realized by a terminal. The terminal can be provided with an image shooting component or externally connected with the image shooting component, and the image shooting component can be a pan-tilt camera and the like. In this embodiment, a pan-tilt camera externally connected to the terminal is taken as an example to perform detailed description of the scheme, and other situations are similar to the above, which are not described again in this embodiment.
The terminal may include a processor, memory, a screen, and so on. The processor may be a Central Processing Unit (CPU) and may be configured to receive instructions and control display processing. In an embodiment of the present disclosure, the processor may receive an image sent by the pan-tilt camera, identify a target in the image, send the relevant information to the display screen for display, calculate the angle by which the pan-tilt camera needs to rotate based on the image information, and control the pan-tilt camera to make the adjustment. The memory may be RAM (Random Access Memory), Flash memory, or the like, and may store received data, data required by the processing procedure, and data generated during processing. The screen may be a touch screen, used to display the device list and control pages and to detect touch signals.
The terminal may further include a transceiver, an image detection part, an audio output part, an audio input part, and the like. The transceiver, which may be used for data transmission with other devices, may include an antenna, matching circuitry, a modem, and the like. The image detection means may be a camera or the like. The audio output component may be a speaker, headphones, or the like. The audio input means may be a microphone or the like.
The monitoring target for tracking shooting can be various, such as wild animals, people, vehicles and the like. In the embodiment of the present disclosure, the process of tracking and shooting the traffic accident by the staff is taken as an example to perform detailed description of the scheme, and other situations are similar to the above, and the embodiment of the present disclosure is not described again.
The process flow shown in fig. 1 will be described in detail below with reference to the embodiments, and the contents may be as follows:
in step 101, a first image captured by an image capturing means is acquired.
The first image is any image frame in the shooting video when the monitoring target is detected.
In implementation, a worker installs the pan-tilt camera on top of the patrol car and the terminal inside the car; the terminal is connected to the pan-tilt camera for data transmission and shooting control. When handling a traffic accident, the worker opens the tracking shooting system installed on the terminal, selects "start shooting", and aims the pan-tilt camera at the accident site. After getting out of the car, the worker walks to the accident site to start handling the accident, while the pan-tilt camera continuously sends the captured video to the terminal. The terminal can then obtain each image frame of the video captured by the pan-tilt camera; the first image may be any image frame in that video.
In step 102, the position of the monitored target in the first image is determined.
The position of the monitoring target in the first image may be coordinates of a center point of the identified monitoring target in the first image, or may be coordinates of a contour point of the monitoring target in the image, or the like.
In implementation, for each image obtained from the pan-tilt camera, the terminal may identify a monitoring target with certain characteristics through a preset image recognition method and determine the target's position in the image (the first image may be any image obtained from the pan-tilt camera and subjected to this recognition processing). For example, if the monitoring target is a worker, the worker's yellow vest may be recognized in the first image; if the monitoring target is a zebra, its black and white stripes may be recognized. When the worker walks to the accident site and enters the shooting range of the pan-tilt camera, the terminal detects the monitoring target in the captured image and determines its position.
If the terminal cannot identify the monitoring target in the first image, subsequent processing may be skipped for that image. In this case the terminal may either do nothing, or control the pan-tilt camera to change its shooting direction randomly and continue trying to identify the monitoring target in subsequent images, carrying out the processing of this flow once the target is identified.
Alternatively, the position detection of the monitoring target may be performed by a target detection algorithm model, and accordingly, the specific processing of step 102 may be as follows: and determining the position of the monitoring target in the first image according to a pre-trained target detection algorithm model for detecting the position of the monitoring target.
The target detection algorithm model may be, for example, an HOG + AdaBoost model (HOG, Histogram of Oriented Gradients, features classified by AdaBoost, an iterative boosting algorithm). The input of the model is the image data; the model scans the image with a moving rectangular frame, and at each position outputs whether the image inside the rectangular frame matches the image feature information of the target. If the output is "yes", a target has been detected, and the terminal obtains the coordinates of the four vertices of the current rectangular frame. The terminal may use the coordinates of the four vertices as the position of the monitoring target, or may compute the coordinates of the center point of the rectangular frame from the four vertices and use that as the position. If the model outputs "no" everywhere, no target can be detected in the current image; see the handling described in step 102 above.
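As an illustration of the scanning procedure described above, the sketch below mimics the rectangular-frame scan with a stand-in classifier. The real system would invoke the pre-trained HOG + AdaBoost model at each window position; the image size, window size, step, and classifier used here are hypothetical.

```python
# Illustrative sketch of the rectangular-frame scan described above.
# `classify` stands in for the pre-trained HOG + AdaBoost model, which
# answers "yes"/"no" for each window position.
def sliding_window_detect(img_w, img_h, win_w, win_h, step, classify):
    """Scan a win_w x win_h frame across the image and collect, for every
    accepted position, the four vertex coordinates and the center point."""
    detections = []
    for y in range(0, img_h - win_h + 1, step):
        for x in range(0, img_w - win_w + 1, step):
            if classify(x, y, win_w, win_h):  # model outputs "yes"
                vertices = [(x, y), (x + win_w, y),
                            (x, y + win_h), (x + win_w, y + win_h)]
                center = (x + win_w // 2, y + win_h // 2)
                detections.append((vertices, center))
    return detections
```

Each entry keeps both representations of a target's position mentioned above: the four vertices of the rectangular frame and the center point computed from them.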
Alternatively, a plurality of targets (e.g. a plurality of workers) may be detected in the image, and then the monitoring target may be selected from the plurality of targets by manual selection, and the corresponding steps are as follows:
in step 102A, a position of at least one target in the first image is determined according to a pre-trained target detection algorithm model for detecting a position of the monitored target.
In an implementation, when the terminal acquires the first image captured by the image capturing component, the first image may be input into the trained HOG + ADABOOST algorithm model, and whenever the HOG + ADABOOST algorithm model scans and detects an object, a result "yes" is output, and the terminal may acquire coordinates of four vertices of the rectangular frame scanned to the object. That is, when a plurality of targets are included in one image, the terminal can acquire coordinates of four vertices of a rectangular frame of each scanned target. The terminal may use the coordinates of the four vertices of each object as the position of the object, or may calculate the coordinates of the center point of the rectangular frame based on the coordinates of the four vertices of each object as the position of each object.
In step 102B, as shown in fig. 2, in the displayed first image, a mark is displayed corresponding to each object according to the position of each object in the first image, and when a selection instruction corresponding to the first object is received, the position of the first object in the first image is determined as the position of the monitoring object in the first image.
Wherein the mark may be a rectangular frame or the like.
In implementation, the terminal displays the video received from the pan-tilt camera on its screen in real time. In the displayed first image, a mark is shown at the position of each target detected by the target detection algorithm model. For example, if a target's position is the coordinates of the four vertices of the rectangular frame bounding it, the mark may be that rectangular frame; if the position is the center point computed from the four vertices, the mark may be a dot, square, or circle at that center point. When a user (e.g. another worker seated in the vehicle) manually selects one of the targets as the monitoring target, the terminal takes that target as the monitoring target and determines its position in the first image.
In step 103, as shown in fig. 3, first photographing direction adjustment information of the image photographing part is determined according to relative position information between the position of the monitoring target in the first image and a preset image reference position.
The image reference position may be the image center position, or any other position in the image selected according to actual requirements. It is the preset position where the monitoring target is expected to appear in the image.
In implementation, the terminal determines the position of the monitoring target in the first image and the preset image reference position, then determines the direction of the target's position relative to the reference position as their relative position information; this direction may then be used as the first shooting direction adjustment information.
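The relative-direction computation in this step can be sketched as follows. The dead-zone tolerance and the "left"/"right"/"up"/"down" labels are assumptions for illustration, not details given in the original.

```python
# Minimal sketch of turning relative position into shooting direction
# adjustment information. Pixel coordinates grow rightward and downward.
def direction_adjustment(target_pos, ref_pos, dead_zone=10):
    """Compare the target position with the image reference position and
    return the pan and tilt directions that move the target toward it."""
    dx = target_pos[0] - ref_pos[0]   # positive: target right of reference
    dy = target_pos[1] - ref_pos[1]   # positive: target below reference
    pan = "right" if dx > dead_zone else "left" if dx < -dead_zone else "hold"
    tilt = "down" if dy > dead_zone else "up" if dy < -dead_zone else "hold"
    return pan, tilt
```

For example, a target at (400, 300) with the reference at the center (320, 240) of a 640x480 image yields a pan to the right and a tilt downward.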
In step 104, the image capturing direction of the image capturing means is adjusted based on the first image capturing direction adjustment information.
In an implementation, after obtaining the first shooting direction adjustment information, the terminal may send a direction adjustment notification to the image capturing component, carrying the first shooting direction adjustment information. After receiving the notification, the image capturing component adjusts its shooting direction according to that information.
Alternatively, the adjustment of the image capturing section may adjust the focal length in addition to the adjustment of the capturing direction as described above, and the corresponding processing may include the following steps as shown in fig. 4:
in step 401, a photographing distance between the image pickup device and the monitoring target is calculated according to a pitch angle of the image pickup device, a mounting height of the image pickup device, and a preset height of the monitoring target.
Wherein the pitch angle of the image pickup section is a pitch angle after the image pickup section is adjusted according to the shooting direction adjustment information.
In implementation, the image capturing component is first adjusted according to the shooting direction adjustment information, and the shooting distance to the monitored target is then calculated from the adjusted pitch angle. The calculation uses basic right-triangle trigonometry. Subtracting the preset height of the monitored target from the installation height of the image capturing component gives the length of one leg of a right triangle; the pitch angle of the image capturing component is the angle between the hypotenuse and that known leg. Since the cosine of that angle equals the known leg divided by the hypotenuse, the length of the hypotenuse can be calculated. The hypotenuse is the shooting distance between the image capturing component and the monitored target.
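The right-triangle calculation above can be sketched as below. It assumes, as the text describes, that the pitch angle is the angle between the hypotenuse (the camera-to-target line) and the vertical leg; the heights in the usage example are hypothetical.

```python
import math

def shooting_distance(mount_height, target_height, pitch_angle_deg):
    """Hypotenuse length: the vertical leg is the mounting height minus
    the preset target height, and cos(pitch) = leg / hypotenuse."""
    leg = mount_height - target_height
    return leg / math.cos(math.radians(pitch_angle_deg))
```

For instance, with a camera mounted at 3.7 m, a preset target height of 1.7 m, and a pitch angle of 60 degrees, the vertical leg is 2 m and the shooting distance is 2 / cos(60°) = 4 m.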
In step 402, a target focal length corresponding to the calculated shooting distance is determined based on a correspondence relationship between the shooting distance and the focal length stored in advance.
In implementation, to achieve a good follow-up shooting effect, the size of the monitored target in the image should stay near a certain value. A technician may set this value first, then derive the correspondence between shooting distance and focal length from it, and store the correspondence in the terminal, for example in the form of a correspondence table, as shown in Table 1.
TABLE 1
(Table 1 appears as an image in the original publication; it lists shooting distances and their corresponding focal lengths.)
After the shooting distance between the image capturing component and the monitored target is calculated, the focal length corresponding to that shooting distance, i.e. the target focal length, can be looked up in the correspondence table.
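A minimal sketch of the lookup: since Table 1 is an image in the original, the distance/focal-length pairs below are invented for illustration, and nearest-distance matching is one plausible way to resolve distances that fall between table entries.

```python
# Hypothetical correspondence table between shooting distance (m) and
# focal length (mm); the real values would be set by a technician for
# the desired target size in the image.
DISTANCE_TO_FOCAL = [(5.0, 12.0), (10.0, 25.0), (20.0, 50.0), (40.0, 100.0)]

def target_focal_length(distance):
    """Look up the focal length whose stored shooting distance is closest
    to the calculated one."""
    return min(DISTANCE_TO_FOCAL, key=lambda df: abs(df[0] - distance))[1]
```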
In step 403, the focal length of the image capturing part is adjusted to the target focal length.
In implementation, after determining the target focal length, the terminal may send a focal length adjustment message to the image capturing component, carrying the target focal length. After receiving the message, the image capturing component sets its focal length to the target focal length.
Alternatively, for the above-mentioned case that the user manually selects the monitoring target, the image feature of the monitoring target may be obtained in the image for determining the position of the monitoring target in the subsequently captured image, and the corresponding processing may be as shown in fig. 5, and includes the following steps:
in step 501, after receiving a selection instruction corresponding to a first target, when a second image captured by an image capturing component is acquired, a position of a monitoring target in the second image is determined according to image feature information of the monitoring target. And extracting the image characteristic information of the monitoring target from the previous frame image of the second image.
The second image is any image frame shot by the image shooting component after the terminal receives the selection instruction of the first target after the user manually selects the monitoring target.
In implementation, the user manually selects the first target from the detected targets, and the terminal receives the selection instruction for the first target. After that, whenever the terminal receives a new image (the second image) from the image capturing component, it determines the position of the monitoring target in the previous frame and defines a search range around that position in the second image: the center of the range coincides with the center of the monitoring target in the previous frame, and its area is a preset multiple of the target's area in the previous frame. Image patches at different positions within this range are matched against the image of the monitoring target in the previous frame, and the best match determines the position of the monitoring target in the second image.
For each frame of image received after the selection instruction, the position of the monitoring target can be identified according to the processing mode.
In step 502, second shooting direction adjustment information of the image shooting component is determined according to relative position information between the position of the monitoring target in the second image and a preset image reference position.
In step 503, the image capturing direction of the image capturing means is adjusted based on the second image capturing direction adjustment information.
In practice, the processing of steps 502 and 503 is similar to that of steps 103 and 104, respectively, and reference can be made to the contents of the above embodiments.
The whole process can be divided into two stages. In the first stage, before the user manually selects the monitoring target from one or more detected targets, the position of the monitoring target is detected by the HOG + AdaBoost model and the shooting direction is adjusted accordingly; see steps 101 to 104. In the second stage, after the user has selected the monitoring target, its position is detected in the current image frame based on its image in the previous frame, and the shooting direction is adjusted; see steps 501 to 503.
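The two stages can be tied together in a top-level loop like the hypothetical sketch below; `detect`, `track`, and `adjust` stand in for the model-based detection, previous-frame matching, and pan-tilt control steps described above.

```python
# Hypothetical top-level loop over video frames, switching from
# model-based detection (stage 1) to previous-frame matching (stage 2)
# once a target position is available.
def tracking_loop(frames, detect, track, adjust):
    prev_pos = None                       # set once a target is locked on
    for frame in frames:
        if prev_pos is None:              # stage 1: model-based detection
            pos = detect(frame)
        else:                             # stage 2: previous-frame matching
            pos = track(frame, prev_pos)
        if pos is not None:
            adjust(pos)                   # steer the pan-tilt toward pos
            prev_pos = pos
```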
In the embodiment of the invention, a first image captured by an image capturing component is acquired, the position of a monitoring target in the first image is determined, first shooting direction adjustment information for the image capturing component is determined according to the relative position information between the position of the monitoring target in the first image and a preset image reference position, and the shooting direction of the image capturing component is adjusted based on that information. The pan-tilt camera can therefore track and shoot the monitored target automatically, without an operator manually moving the pan-tilt; this avoids the shooting errors of manual operation and improves the quality of tracking shooting.
Still another exemplary embodiment of the present disclosure provides an apparatus for tracking photographing, as shown in fig. 6, including: an obtaining module 610, a determining module 620 and an adjusting module 630.
The acquisition module 610 is configured to acquire a first image captured by an image capturing part;
the determining module 620 is configured to determine a position of a monitoring target in the first image; determining first shooting direction adjustment information of the image shooting component according to relative position information between the position of the monitoring target in the first image and a preset image reference position;
the adjusting module 630 is configured to adjust the photographing direction of the image photographing part based on the first photographing direction adjustment information.
Optionally, the determining module 620 is further configured to:
and determining the position of the monitoring target in the first image according to a pre-trained target detection algorithm model for detecting the position of the monitoring target.
Optionally, the determining module 620 is further configured to:
determining the position of at least one target in the first image according to a pre-trained target detection algorithm model for detecting the position of a monitored target;
in the displayed first image, displaying a mark corresponding to each target according to the position of each target in the first image, and when a selection instruction corresponding to the first target is received, determining the position of the first target in the first image as the position of the monitoring target in the first image.
Optionally, the determining module 620 is further configured to: when a second image shot by the image shooting component is acquired after a selection instruction corresponding to a first target is received, determine a position of the monitoring target in the second image according to image feature information of the monitoring target, wherein the image feature information of the monitoring target is extracted from a previous frame image of the second image; and determine second shooting direction adjustment information of the image shooting component according to relative position information between the position of the monitoring target in the second image and the preset image reference position;
an adjusting module 630, further configured to adjust the shooting direction of the image capturing part based on the second shooting direction adjustment information.
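Claim 1 details this matching as a scan of a window centred on the target's previous centre, whose area is a preset multiple of the target's area in the previous frame. A dependency-free sketch using the sum of absolute differences as a stand-in for the stored image feature information (the actual feature representation and matching criterion are not specified in the text):

```python
def best_match_in_window(frame, template, prev_center, scale=2.0):
    """Locate the monitoring target near its previous position.

    `frame` and `template` are 2-D lists of pixel intensities; `prev_center`
    is the target's (row, col) centre in the previous frame. A window with
    the same centre, whose area is a preset multiple (`scale`) of the
    template area, is scanned, and the candidate position minimising the
    sum of absolute differences (SAD) to the template wins."""
    th, tw = len(template), len(template[0])
    fh, fw = len(frame), len(frame[0])
    cy, cx = prev_center
    # Half-extents of a window scaled by sqrt(scale) per dimension,
    # so its area is `scale` times the template area.
    half_h = int(th * (scale ** 0.5)) // 2
    half_w = int(tw * (scale ** 0.5)) // 2
    y0, y1 = max(0, cy - half_h), min(fh - th, cy + half_h)
    x0, x1 = max(0, cx - half_w), min(fw - tw, cx + half_w)
    best, best_pos = None, None
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            sad = sum(abs(frame[y + i][x + j] - template[i][j])
                      for i in range(th) for j in range(tw))
            if best is None or sad < best:
                best, best_pos = sad, (y, x)
    return best_pos
```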
Optionally, the determining module 620 is further configured to: calculate a shooting distance between the image shooting component and the monitoring target according to a pitch angle of the image shooting component, a mounting height of the image shooting component, and a preset height of the monitoring target; and determine a target focal length corresponding to the calculated shooting distance according to a pre-stored correspondence between shooting distance and focal length;
an adjusting module 630, further configured to adjust the focal length of the image capturing part to the target focal length.
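The text states that the shooting distance is computed from the pitch angle, the mounting height, and the preset target height, but does not give the formula. One plausible geometry (an assumption, not the claimed method) has the optical axis aimed at the target's mid-height, giving d = (H_mount − h_target/2) / sin(pitch); the focal length is then looked up in the prestored distance-to-focal-length table, here by nearest prestored distance:

```python
import bisect
import math

def shooting_distance(pitch_deg, mount_height_m, target_height_m):
    """Line-of-sight distance from camera to target, under the hypothetical
    geometry that the camera at mount_height_m looks down by pitch_deg at
    the target's mid-height."""
    drop = mount_height_m - target_height_m / 2.0
    return drop / math.sin(math.radians(pitch_deg))

def target_focal_length(distance_m, table):
    """Return the focal length whose prestored distance is nearest to
    distance_m. `table` is a list of (distance_m, focal_mm) pairs sorted
    by distance."""
    distances = [d for d, _ in table]
    i = bisect.bisect_left(distances, distance_m)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    before, after = table[i - 1], table[i]
    return before[1] if distance_m - before[0] <= after[0] - distance_m else after[1]
```

For example, a camera mounted at 4.5 m with a 30° pitch tracking a 1.7 m target gives a shooting distance of about 7.3 m, which would then select the nearest entry in the table.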
Optionally, the image reference position is an image center position.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In the embodiment of the present invention, a first image shot by the image shooting component is acquired, the position of the monitoring target in the first image is determined, first shooting direction adjustment information of the image shooting component is determined according to relative position information between the position of the monitoring target in the first image and a preset image reference position, and the shooting direction of the image shooting component is adjusted based on the first shooting direction adjustment information. In this way, the pan-tilt camera can automatically track and shoot the monitoring target without an operator manually moving the pan-tilt head, so that manual shooting errors are avoided and the tracking shooting quality is improved.
It should be noted that the division into the above functional modules in the tracking shooting apparatus provided by the above embodiment is merely illustrative. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the tracking shooting apparatus and the tracking shooting method provided by the above embodiments belong to the same concept; for the specific implementation process, reference may be made to the method embodiment, and details are not repeated here.
Referring to fig. 7, a schematic structural diagram of a terminal according to an embodiment of the present invention is shown. The terminal may be used to implement the tracking shooting method provided in the foregoing embodiments. Specifically:
the terminal 700 may include RF (Radio Frequency) circuitry 710, memory 720 including one or more computer-readable storage media, an input unit 730, a display unit 740, a sensor 750, audio circuitry 760, a WiFi (wireless fidelity) module 770, a processor 780 including one or more processing cores, and a power supply 790. Those skilled in the art will appreciate that the terminal structure shown in fig. 7 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
RF circuit 710 may be used for receiving and transmitting signals during a message transmission or call, and in particular, for receiving downlink information from a base station and processing the received downlink information by one or more processors 780; in addition, data relating to uplink is transmitted to the base station. In general, RF circuit 710 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuit 710 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), email, SMS (Short Messaging Service), and the like.
The memory 720 may be used to store software programs and modules, and the processor 780 performs various functional applications and data processing by running the software programs and modules stored in the memory 720. The memory 720 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal 700, and the like. Further, the memory 720 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 720 may also include a memory controller to provide the processor 780 and the input unit 730 with access to the memory 720.
The input unit 730 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 730 may include a touch-sensitive surface 731 as well as other input devices 732. Touch-sensitive surface 731, also referred to as a touch display screen or touch pad, can collect touch operations by a user on or near touch-sensitive surface 731 (e.g., operations by a user on or near touch-sensitive surface 731 using a finger, stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface 731 may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts it to touch point coordinates, and sends the touch point coordinates to the processor 780, and can receive and execute commands from the processor 780. In addition, the touch-sensitive surface 731 can be implemented in a variety of types, including resistive, capacitive, infrared, and surface acoustic wave. The input unit 730 may also include other input devices 732 in addition to the touch-sensitive surface 731. In particular, other input devices 732 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 740 may be used to display information input by or provided to the user and various graphic user interfaces of the terminal 700, which may be configured by graphics, text, icons, video, and any combination thereof. The Display unit 740 may include a Display panel 741, and optionally, the Display panel 741 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, touch-sensitive surface 731 can overlay display panel 741, such that when touch-sensitive surface 731 detects a touch event thereon or nearby, processor 780 can determine the type of touch event, and processor 780 can then provide a corresponding visual output on display panel 741 based on the type of touch event. Although in FIG. 7 the touch-sensitive surface 731 and the display panel 741 are implemented as two separate components to implement input and output functions, in some embodiments the touch-sensitive surface 731 and the display panel 741 may be integrated to implement input and output functions.
The terminal 700 can also include at least one sensor 750, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 741 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 741 and/or a backlight when the terminal 700 is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal 700, detailed descriptions thereof are omitted.
Audio circuitry 760, speaker 761, and microphone 762 may provide an audio interface between a user and the terminal 700. The audio circuit 760 can convert received audio data into an electrical signal and transmit it to the speaker 761, which converts it into a sound signal for output; conversely, the microphone 762 converts a collected sound signal into an electrical signal, which the audio circuit 760 receives and converts into audio data; the audio data is then output to the processor 780 for processing and sent via the RF circuit 710 to, for example, another terminal, or output to the memory 720 for further processing. The audio circuitry 760 may also include an earbud jack to allow a peripheral headset to communicate with the terminal 700.
WiFi belongs to a short-distance wireless transmission technology, and the terminal 700 can help a user send and receive e-mails, browse web pages, access streaming media, and the like through the WiFi module 770, and provides wireless broadband internet access for the user. Although fig. 7 shows the WiFi module 770, it is understood that it does not belong to the essential constitution of the terminal 700 and can be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 780 is the control center of the terminal 700; it connects the various parts of the entire mobile phone using various interfaces and lines, and performs the various functions of the terminal 700 and processes data by running or executing the software programs and/or modules stored in the memory 720 and calling the data stored in the memory 720, thereby monitoring the mobile phone as a whole. Optionally, the processor 780 may include one or more processing cores; preferably, the processor 780 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 780.
The terminal 700 also includes a power supply 790 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 780 via a power management system that may be used to manage charging, discharging, and power consumption. The power supply 790 may also include any component including one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown, the terminal 700 may further include a camera, a bluetooth module, etc., which will not be described herein. Specifically, in the present embodiment, the display unit of the terminal 700 is a touch screen display, the terminal 700 further includes a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors to perform the tracking shooting method according to the above embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (6)

1. A method of tracking shots, the method comprising:
acquiring a first image captured by an image capturing part;
determining the position of a monitoring target in the first image, if the monitoring target cannot be identified in the first image, adjusting the shooting direction of the image shooting component, and continuing to identify the monitoring target for other images subsequently shot by the image shooting component until the monitoring target is identified in the other images, and determining the position of the monitoring target in the other images;
determining first shooting direction adjustment information of the image shooting component according to relative position information between the position of the monitoring target in the image of the monitoring target which is recognized for the first time and a preset image reference position; adjusting the shooting direction of the image shooting component based on the first shooting direction adjustment information;
acquiring a second image shot by an image shooting component and image characteristic information of the monitoring target, wherein the second image is any image frame after the image of the monitoring target is recognized for the first time, shot by the image shooting component, and the image characteristic information of the monitoring target is extracted from a previous frame image of the second image;
determining the corresponding position of the monitoring target in the previous frame of image, determining a range at the corresponding position in the second image, wherein the central position of the range is the same as the central position of the monitoring target in the previous frame of image, the area corresponding to the range is a preset multiple of the area of the monitoring target in the previous frame of image, and performing matching judgment on the images at different positions in the range and the image characteristic information of the monitoring target to determine the position of the monitoring target in the second image;
determining second shooting direction adjustment information of the image shooting component according to relative position information between the position of the monitoring target in the second image and a preset image reference position;
adjusting the photographing direction of the image photographing part based on the second photographing direction adjustment information;
calculating the shooting distance between the image shooting component and the monitoring target according to the pitch angle of the image shooting component, the installation height of the image shooting component and the preset height of the monitoring target;
determining a target focal length corresponding to the calculated shooting distance according to a pre-stored corresponding relationship between the shooting distance and the focal length;
and adjusting the focal length of the image shooting part to the target focal length.
2. The method of claim 1, wherein the image reference position is an image center position.
3. An apparatus for tracking shots, the apparatus comprising:
an acquisition module for acquiring a first image captured by the image capturing section;
a determining module, configured to determine a position of a monitoring target in the first image, adjust a shooting direction of the image shooting component if the monitoring target cannot be identified in the first image, and continue to identify the monitoring target for other images subsequently shot by the image shooting component until the monitoring target is identified in the other images, and determine the position of the monitoring target in the other images; determining first shooting direction adjustment information of the image shooting component according to relative position information between the position of the monitoring target in the image of the monitoring target which is recognized for the first time and a preset image reference position;
an adjusting module, configured to adjust a shooting direction of the image shooting component based on the first shooting direction adjustment information;
the determining module is further configured to acquire a second image captured by the image capturing component and image feature information of the monitoring target, where the second image is any image frame after the image of the monitoring target is initially identified, and the image feature information of the monitoring target is extracted from a previous image frame of the second image;
determining the corresponding position of the monitoring target in the previous frame of image, determining a range at the corresponding position in the second image, wherein the central position of the range is the same as the central position of the monitoring target in the previous frame of image, the area corresponding to the range is a preset multiple of the area of the monitoring target in the previous frame of image, and performing matching judgment on the images at different positions in the range and the image characteristic information of the monitoring target to determine the position of the monitoring target in the second image;
determining second shooting direction adjustment information of the image shooting component according to relative position information between the position of the monitoring target in the second image and a preset image reference position;
the adjusting module is further configured to adjust the shooting direction of the image shooting component based on the second shooting direction adjustment information;
the determining module is further configured to calculate a shooting distance between the image capturing component and the monitoring target according to the pitch angle of the image capturing component, the installation height of the image capturing component, and a preset height of the monitoring target; determining a target focal length corresponding to the calculated shooting distance according to a pre-stored corresponding relationship between the shooting distance and the focal length;
the adjusting module is further configured to adjust the focal length of the image capturing component to the target focal length.
4. The apparatus of claim 3, wherein the image reference position is an image center position.
5. A terminal, characterized in that it comprises a processor and a memory, in which at least one instruction, at least one program, a set of codes or a set of instructions is stored, which is loaded and executed by the processor to implement the method of tracking shots according to any of the claims 1 to 2.
6. A computer-readable storage medium, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the storage medium, and wherein the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the method of tracking shots according to any one of claims 1 to 2.
CN201710656922.1A 2017-08-03 2017-08-03 Tracking shooting method and device Active CN109391762B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710656922.1A CN109391762B (en) 2017-08-03 2017-08-03 Tracking shooting method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710656922.1A CN109391762B (en) 2017-08-03 2017-08-03 Tracking shooting method and device

Publications (2)

Publication Number Publication Date
CN109391762A CN109391762A (en) 2019-02-26
CN109391762B true CN109391762B (en) 2021-10-22

Family

ID=65412997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710656922.1A Active CN109391762B (en) 2017-08-03 2017-08-03 Tracking shooting method and device

Country Status (1)

Country Link
CN (1) CN109391762B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111756990B (en) * 2019-03-29 2022-03-01 阿里巴巴集团控股有限公司 Image sensor control method, device and system
WO2020258164A1 (en) * 2019-06-27 2020-12-30 深圳市大疆创新科技有限公司 Target tracking method and device, and computer storage medium
CN110456829B (en) * 2019-08-07 2022-12-13 深圳市维海德技术股份有限公司 Positioning tracking method, device and computer readable storage medium
CN110719403A (en) * 2019-09-27 2020-01-21 北京小米移动软件有限公司 Image processing method, device and storage medium
CN110719406B (en) * 2019-10-15 2022-06-14 腾讯科技(深圳)有限公司 Shooting processing method, shooting equipment and computer equipment
CN111123959B (en) * 2019-11-18 2023-05-30 亿航智能设备(广州)有限公司 Unmanned aerial vehicle control method based on gesture recognition and unmanned aerial vehicle adopting method
CN111243030B (en) * 2020-01-06 2023-08-11 浙江大华技术股份有限公司 Target focusing dynamic compensation method and device and storage device
CN113766175A (en) * 2020-06-04 2021-12-07 杭州萤石软件有限公司 Target monitoring method, device, equipment and storage medium
CN112648477B (en) * 2020-07-06 2022-12-27 深圳市寻视光电有限公司 Automatic tracking cradle head support and tracking method thereof
CN112648476B (en) * 2020-07-06 2022-10-18 深圳市寻视光电有限公司 Automatic tracking cradle head support and tracking method thereof
CN111862620B (en) * 2020-07-10 2022-10-18 浙江大华技术股份有限公司 Image fusion processing method and device
CN112017210A (en) * 2020-07-14 2020-12-01 创泽智能机器人集团股份有限公司 Target object tracking method and device
CN113489893B (en) * 2020-07-31 2023-04-07 深圳技术大学 Real-time target object tracking and positioning method and real-time target object tracking and positioning device
CN111901528B (en) * 2020-08-05 2022-01-18 深圳市浩瀚卓越科技有限公司 Shooting equipment stabilizer
CN112843734A (en) * 2020-12-31 2021-05-28 上海米哈游天命科技有限公司 Picture shooting method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103149939A (en) * 2013-02-26 2013-06-12 北京航空航天大学 Dynamic target tracking and positioning method of unmanned plane based on vision
CN103197491A (en) * 2013-03-28 2013-07-10 华为技术有限公司 Method capable of achieving rapid automatic focusing and image acquisition device
CN103248824A (en) * 2013-04-27 2013-08-14 天脉聚源(北京)传媒科技有限公司 Method and device for determining shooting angle of camera and picture pick-up system
CN103248799A (en) * 2012-02-01 2013-08-14 联想(北京)有限公司 Photographing method, photographing device and electronic equipment all for tracking target object
CN106303195A (en) * 2015-05-28 2017-01-04 中兴通讯股份有限公司 Capture apparatus and track up method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101888479B (en) * 2009-05-14 2012-05-02 汉王科技股份有限公司 Method and device for detecting and tracking target image
JP6012982B2 (en) * 2012-02-24 2016-10-25 京セラ株式会社 Calibration processing apparatus, camera calibration apparatus, camera system, and camera calibration method
CN105898136A (en) * 2015-11-17 2016-08-24 乐视致新电子科技(天津)有限公司 Camera angle adjustment method, system and television
CN105357442A (en) * 2015-11-27 2016-02-24 小米科技有限责任公司 Shooting angle adjustment method and device for camera
CN105718887A (en) * 2016-01-21 2016-06-29 惠州Tcl移动通信有限公司 Shooting method and shooting system capable of realizing dynamic capturing of human faces based on mobile terminal


Also Published As

Publication number Publication date
CN109391762A (en) 2019-02-26

Similar Documents

Publication Publication Date Title
CN109391762B (en) Tracking shooting method and device
CN107038681B (en) Image blurring method and device, computer readable storage medium and computer device
CN107124555B (en) Method and device for controlling focusing, computer equipment and computer readable storage medium
US20170187566A1 (en) Alerting Method and Mobile Terminal
CN107124556B (en) Focusing method, focusing device, computer readable storage medium and mobile terminal
CN108763999B (en) Bar code identification method and terminal equipment
CN109583271B (en) Method, device and terminal for fitting lane line
US10922846B2 (en) Method, device and system for identifying light spot
CN109660723B (en) Panoramic shooting method and device
CN105989572B (en) Picture processing method and device
CN110300267B (en) Photographing method and terminal equipment
CN108038825B (en) Image processing method and mobile terminal
CN108763998B (en) Bar code identification method and terminal equipment
CN107749046B (en) Image processing method and mobile terminal
CN107423238B (en) Screen projection connection method and device and computer readable storage medium
CN108495349B (en) Switching method of operator network and mobile terminal
CN210093385U (en) Device for monitoring on-off state of switch
CN111401463B (en) Method for outputting detection result, electronic equipment and medium
CN109493821B (en) Screen brightness adjusting method and device and storage medium
CN110881105B (en) Shooting method and electronic equipment
CN107330867B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN109784234B (en) Right-angled bend identification method based on forward fisheye lens and vehicle-mounted equipment
CN108512616B (en) Signal intensity display method, mobile terminal and computer readable storage medium
CN107832714B (en) Living body identification method and device and storage equipment
CN115589529A (en) Photographing method, apparatus, system, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant