WO2022141197A1 - Control method and device for gimbal, movable platform, and storage medium - Google Patents


Info

Publication number
WO2022141197A1
WO2022141197A1 (PCT/CN2020/141400; CN2020141400W)
Authority
WO
WIPO (PCT)
Prior art keywords: target object, pan, tilt, control, target
Prior art date
Application number
PCT/CN2020/141400
Other languages
English (en)
French (fr)
Inventor
王协平
楼致远
王振动
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to CN202080067425.8A priority Critical patent/CN114982217A/zh
Priority to PCT/CN2020/141400 priority patent/WO2022141197A1/zh
Priority to PCT/CN2021/135818 priority patent/WO2022143022A1/zh
Priority to CN202180086440.1A priority patent/CN116783568A/zh
Publication of WO2022141197A1 publication Critical patent/WO2022141197A1/zh
Priority to US18/215,871 priority patent/US20230341079A1/en

Classifications

    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F16 ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
    • F16M FRAMES, CASINGS OR BEDS OF ENGINES, MACHINES OR APPARATUS, NOT SPECIFIC TO ENGINES, MACHINES OR APPARATUS PROVIDED FOR ELSEWHERE; STANDS; SUPPORTS
    • F16M11/00 Stands or trestles as supports for apparatus or articles placed thereon; Stands for scientific apparatus such as gravitational force meters
    • F16M11/20 Undercarriages with or without wheels
    • F16M11/2007 Undercarriages with or without wheels comprising means allowing pivoting adjustment
    • F16M11/2035 Undercarriages with or without wheels comprising means allowing pivoting adjustment in more than one direction
    • F16M11/2042 Undercarriages with or without wheels comprising means allowing pivoting adjustment in more than one direction constituted of several dependent joints
    • F16M11/205 Undercarriages with or without wheels comprising means allowing pivoting adjustment in more than one direction constituted of several dependent joints the axis of rotation intersecting in a single point, e.g. gimbals
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B17/00 Details of cameras or camera bodies; Accessories therefor
    • G03B17/02 Bodies
    • G03B17/12 Bodies with means for supporting objectives, supplementary lenses, filters, masks, or turrets
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D3/00 Control of position or direction
    • G05D3/12 Control of position or direction using feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/94 Hardware or software architectures specially adapted for image or video understanding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/672 Focus control based on electronic image sensor signals based on the phase difference signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/673 Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70 SSIS architectures; Circuits associated therewith
    • H04N25/76 Addressed sensors, e.g. MOS or CMOS sensors
    • H04N25/768 Addressed sensors, e.g. MOS or CMOS sensors for time delay and integration [TDI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/617 Upgrading or updating of programs or applications for camera control

Definitions

  • Embodiments of the present invention relate to the technical field of gimbals (pan-tilt heads), and in particular to a gimbal control method and device, a movable platform, and a storage medium.
  • The intelligent follow function is a common shooting function of a gimbal. In existing solutions, the gimbal obtains a real-time image from the camera through a preset interface and inputs it to a machine learning unit, which outputs the real-time position of the subject to be captured; the gimbal is then controlled according to the acquired real-time position.
  • Embodiments of the present invention provide a pan-tilt control method and device, a movable platform, and a storage medium, which address the large delay incurred when the real-time image obtained by the camera is transmitted, via the machine learning unit, to the pan-tilt controller. That delay leads to a poor follow effect; reducing it ensures the quality and effect of the gimbal's follow operation.
  • A first aspect of the present invention provides a control method for a pan-tilt, the method comprising:
  • acquiring the acquisition position of a target object in a captured image, wherein the acquisition position is determined by an image acquisition device, and the image acquisition device is communicatively connected with the pan-tilt;
  • determining, based on the acquisition position, a control parameter for performing a follow operation on the target object; and
  • controlling the pan-tilt according to the control parameter, so as to realize the follow operation on the target object.
  • A second aspect of the present invention provides a control device for a pan-tilt, the device comprising:
  • a memory for storing a computer program; and a processor, configured to run the computer program stored in the memory to implement:
  • acquiring the acquisition position of a target object in a captured image, wherein the acquisition position is determined by an image acquisition device, and the image acquisition device is communicatively connected with the pan-tilt;
  • determining, based on the acquisition position, a control parameter for performing a follow operation on the target object; and
  • controlling the pan-tilt according to the control parameter, so as to realize the follow operation on the target object.
  • A third aspect of the present invention provides a control system for a pan-tilt, comprising:
  • the control device of the pan-tilt according to the second aspect above, which is provided on the pan-tilt, is used to communicate with the image acquisition device, and is used to control the pan-tilt by means of the image acquisition device.
  • A fourth aspect of the present invention provides a movable platform, comprising:
  • the control device of the pan-tilt according to the second aspect above, which is disposed on the pan-tilt, is used to communicate with the image acquisition device, and is used to control the pan-tilt by means of the image acquisition device.
  • A fifth aspect of the present invention provides a computer-readable storage medium having program instructions stored therein, the program instructions being used to implement the control method of the pan-tilt according to the first aspect above.
  • a sixth aspect of the present invention is to provide a control method of a pan-tilt system, wherein the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected to the pan-tilt, and the method includes:
  • controlling the image acquisition device to acquire an image, and to acquire the acquisition position of the target object in the image, where the acquisition position is determined by the image acquisition device;
  • the pan/tilt is controlled to move according to a control parameter, so as to implement a follow-up operation on the target object, wherein the control parameter is determined based on the collection position.
  • a seventh aspect of the present invention is to provide a control device for a pan-tilt system, wherein the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected to the pan-tilt, and the device includes:
  • a memory for storing a computer program; and a processor, configured to run the computer program stored in the memory to implement:
  • controlling the image acquisition device to acquire an image, and to acquire the acquisition position of the target object in the image, where the acquisition position is determined by the image acquisition device;
  • the pan/tilt is controlled to move according to a control parameter, so as to implement a follow-up operation on the target object, wherein the control parameter is determined based on the collection position.
  • the eighth aspect of the present invention is to provide a control system of a PTZ, comprising:
  • the control device of the pan-tilt system according to the seventh aspect above is disposed on the pan-tilt, and is used to communicate with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
  • a ninth aspect of the present invention is to provide a movable platform, comprising:
  • the control device of the pan-tilt system according to the seventh aspect above is disposed on the pan-tilt, and is used to communicate with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
  • A tenth aspect of the present invention provides a computer-readable storage medium having program instructions stored therein, the program instructions being used to implement the control method of the pan-tilt system according to the sixth aspect above.
  • An eleventh aspect of the present invention provides a control method for a pan-tilt, applied to a pan-tilt that is communicatively connected with an image acquisition device, the method comprising:
  • acquiring a captured image, wherein the captured image includes a target object;
  • determining the position of the target object in the captured image; and
  • sending the position of the target object to the image acquisition device, so that the image acquisition device determines a focus position corresponding to the target object based on the position of the target object and performs a focusing operation on the target object based on the focus position.
  • A twelfth aspect of the present invention provides a control device for a pan-tilt, applied to a pan-tilt that is communicatively connected with an image acquisition device, the control device comprising:
  • a memory for storing a computer program; and a processor, configured to run the computer program stored in the memory to implement:
  • acquiring a captured image, wherein the captured image includes a target object;
  • determining the position of the target object in the captured image; and
  • sending the position of the target object to the image acquisition device, so that the image acquisition device determines a focus position corresponding to the target object based on the position of the target object and performs a focusing operation on the target object based on the focus position.
  • a thirteenth aspect of the present invention is to provide a control system for a PTZ, comprising:
  • the control device of the pan-tilt according to the eleventh aspect above is disposed on the pan-tilt, and is used to communicate with the image acquisition device, and to control the image acquisition device through the pan-tilt.
  • a fourteenth aspect of the present invention is to provide a movable platform, comprising:
  • the control device of the pan-tilt according to the eleventh aspect above is disposed on the pan-tilt, and is used to communicate with the image acquisition device, and to control the image acquisition device through the pan-tilt.
  • A fifteenth aspect of the present invention provides a computer-readable storage medium having program instructions stored therein, the program instructions being used to implement the control method of the pan-tilt according to the eleventh aspect above.
  • A sixteenth aspect of the present invention provides a control method for a pan-tilt system, the pan-tilt system comprising a pan-tilt and an image acquisition device communicatively connected to the pan-tilt, the method comprising:
  • controlling the image acquisition device to capture an image, the image including a target object; and
  • controlling, based on the position of the target object, the pan-tilt to follow the target object, and controlling the image acquisition device to focus on the target object according to the position of the target object.
  • A seventeenth aspect of the present invention provides a control device for a pan-tilt system, wherein the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected to the pan-tilt, and the control device comprises:
  • a memory for storing a computer program; and a processor, configured to run the computer program stored in the memory to implement:
  • controlling the image acquisition device to capture an image, the image including a target object; and
  • controlling, based on the position of the target object, the pan-tilt to follow the target object, and controlling the image acquisition device to focus on the target object according to the position of the target object.
  • An eighteenth aspect of the present invention is to provide a control system of a PTZ, comprising:
  • the control device of the pan-tilt system according to the seventeenth aspect above is disposed on the pan-tilt, and is used to communicate with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
  • a nineteenth aspect of the present invention is to provide a movable platform, comprising:
  • the control device of the pan-tilt system according to the seventeenth aspect above is disposed on the pan-tilt, and is used to communicate with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
  • A twentieth aspect of the present invention provides a computer-readable storage medium having program instructions stored therein, the program instructions being used to implement the control method of the pan-tilt system according to the sixteenth aspect above.
  • A twenty-first aspect of the present invention provides a control method for a pan-tilt system, the pan-tilt system comprising a pan-tilt and an image acquisition device communicatively connected to the pan-tilt, the method comprising:
  • acquiring the acquisition position of a second object in the captured image, so that the pan-tilt changes from performing a follow operation on the first object to performing a follow operation on the second object based on the acquisition position of the second object, and the image acquisition device changes from performing a focus operation on the first object to performing a focus operation on the second object based on the position of the second object.
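The handover described in this aspect can be sketched as a small state holder that, on receiving the second object's acquisition position, redirects both the follow target and the focus target in one step. This is an illustrative sketch only; the class and method names are hypothetical and do not come from the patent.

```python
class FollowFocusState:
    """Tracks which object the gimbal follows and the camera focuses on."""

    def __init__(self, obj_id, position):
        self.target_id = obj_id
        self.position = position

    def switch_target(self, new_id, new_position):
        # The follow operation and the focus operation move to the new
        # object's acquisition position together, so the gimbal and the
        # camera never act on different targets during the handover.
        self.target_id = new_id
        self.position = new_position

state = FollowFocusState("first_object", (320, 240))
state.switch_target("second_object", (500, 260))
print(state.target_id, state.position)
```

Updating both fields in a single call models the patent's requirement that following and focusing change over to the second object consistently.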
  • a twenty-second aspect of the present invention is to provide a control device for a pan-tilt system, the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected to the pan-tilt, and the control device includes:
  • a memory for storing a computer program; and a processor, configured to run the computer program stored in the memory to implement:
  • acquiring the acquisition position of a second object in the captured image, so that the pan-tilt changes from performing a follow operation on the first object to performing a follow operation on the second object based on the acquisition position of the second object, and the image acquisition device changes from performing a focus operation on the first object to performing a focus operation on the second object based on the position of the second object.
  • a twenty-third aspect of the present invention is to provide a control system for a PTZ, comprising:
  • the control device of the pan-tilt system according to the twenty-second aspect above is disposed on the pan-tilt, and is used to communicate with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
  • a twenty-fourth aspect of the present invention is to provide a movable platform, comprising:
  • the control device of the pan-tilt system according to the twenty-second aspect above is disposed on the pan-tilt, and is used to communicate with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
  • A twenty-fifth aspect of the present invention provides a computer-readable storage medium having program instructions stored therein, the program instructions being used to implement the control method of the pan-tilt system according to the twenty-first aspect above.
  • The control method, device, movable platform, and storage medium provided by the embodiments acquire the acquisition position of a target object in the captured image, determine control parameters for following the target object based on that position, and control the pan-tilt according to the control parameters so that the target object is followed.
  • Because the image acquisition device determines the acquisition position directly, the delay incurred when the pan-tilt obtains the acquisition position from the image acquisition device is effectively reduced. This solves the problem of poor follow performance caused by large delay, ensures the quality and effect of the follow operation on the target object, and effectively improves the stability and reliability of the method.
  • FIG. 1 is a schematic structural diagram of a pan-tilt system provided in the prior art;
  • FIG. 2 is a schematic flowchart of a method for controlling a pan/tilt according to an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of a communication connection between a pan-tilt and an image acquisition device according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram of acquiring the acquisition position of the target object in the acquired image according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of obtaining a collection position of a target object in a collected image according to an embodiment of the present invention
  • FIG. 6 is a schematic flowchart of acquiring a target focus position corresponding to the target object by using an image acquisition device according to an embodiment of the present invention
  • FIG. 7 is a schematic diagram 1 of a historical object part corresponding to the historical focus position and a current object part corresponding to the current focus position according to an embodiment of the present invention
  • FIG. 8 is a schematic diagram 2 of a historical object part corresponding to the historical focus position and a current object part corresponding to the current focus position provided by an embodiment of the present invention
  • FIG. 9 is a schematic flowchart of another pan/tilt control method provided by an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of a target object being changed according to an embodiment of the present invention.
  • FIG. 11 is a schematic flowchart of calculating a current position prediction value corresponding to the collection position according to an embodiment of the present invention.
  • FIG. 12 is a schematic flowchart of determining the current position prediction value corresponding to the acquisition position based on the acquisition position, the exposure time, the delay time, the previous reception time, and the previous position prediction value according to an embodiment of the present invention;
  • FIG. 13 is a schematic flowchart 1 of determining a control parameter for performing a follow-up operation on the target object based on the predicted value of the current position according to an embodiment of the present invention
  • FIG. 14 is a second schematic flowchart of determining a control parameter for performing a following operation on the target object based on the predicted value of the current position according to an embodiment of the present invention
  • FIG. 15 is a schematic flowchart of controlling the pan/tilt based on the pan/tilt motion model and the control parameters provided by an embodiment of the present invention;
  • FIG. 16 is a schematic flowchart 1 of controlling the pan/tilt according to the control parameter according to an embodiment of the present invention
  • FIG. 17 is a second schematic flowchart of controlling the pan/tilt according to the control parameters according to an embodiment of the present invention.
  • FIG. 18 is a schematic flowchart of another method for controlling a pan/tilt according to an embodiment of the present invention.
  • FIG. 19 is a schematic flowchart of another control method of a pan/tilt according to an embodiment of the present invention.
  • FIG. 20 is a schematic flowchart of another control method of a pan-tilt head provided by an embodiment of the present invention.
  • FIG. 21 is a schematic flowchart of a control method of a pan-tilt system according to an embodiment of the present invention.
  • FIG. 22 is a schematic flowchart of another method for controlling a pan/tilt according to an embodiment of the present invention.
  • FIG. 23 is a schematic flowchart of another control method of a pan-tilt system provided by an embodiment of the present invention.
  • FIG. 24 is a schematic flowchart of another control method of a pan-tilt system provided by an embodiment of the present invention.
  • FIG. 25 is a schematic diagram 1 of a principle of a method for controlling a pan-tilt head provided by an application embodiment of the present invention
  • FIG. 26 is a second schematic diagram of the principle of a method for controlling a pan-tilt head provided by an application embodiment of the present invention.
  • FIG. 27 is a schematic structural diagram of a control device for a pan/tilt according to an embodiment of the present invention.
  • FIG. 28 is a schematic structural diagram of a control device of a pan-tilt system according to an embodiment of the present invention.
  • FIG. 29 is a schematic structural diagram of another control device of a pan/tilt according to an embodiment of the present invention;
  • FIG. 30 is a schematic structural diagram of a control device of another pan-tilt system provided by an embodiment of the present invention.
  • FIG. 31 is a schematic structural diagram of another control device of a pan-tilt system provided by an embodiment of the present invention.
  • FIG. 32 is a schematic structural diagram of a control system of a pan/tilt according to an embodiment of the present invention.
  • FIG. 33 is a schematic structural diagram of a control system of a pan/tilt according to an embodiment of the present invention.
  • FIG. 34 is a schematic structural diagram of another pan/tilt control system provided by an embodiment of the present invention.
  • FIG. 35 is a schematic structural diagram of another pan/tilt control system provided by an embodiment of the present invention.
  • FIG. 36 is a schematic structural diagram of another pan/tilt control system according to an embodiment of the present invention.
  • FIG. 37 is a schematic structural diagram 1 of a movable platform according to an embodiment of the present invention.
  • FIG. 38 is a second schematic structural diagram of a movable platform according to an embodiment of the present invention;
  • FIG. 39 is a third schematic structural diagram of a movable platform according to an embodiment of the present invention.
  • FIG. 40 is a fourth schematic structural diagram of a movable platform according to an embodiment of the present invention.
  • FIG. 41 is a fifth schematic structural diagram of a movable platform according to an embodiment of the present invention.
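FIG. 11 and FIG. 12 concern predicting the target's current position from the acquisition position, the exposure time, the delay time, the previous reception time, and the previous position prediction value. As a rough illustration of this kind of latency compensation, the sketch below uses a constant-velocity extrapolation; the function name, timestamp handling, and the predictor itself are assumptions, not the patent's actual formula.

```python
def predict_position(pos, exposure_time, now, prev_time, prev_pred):
    """Extrapolate the target's current position from a delayed measurement.

    pos           -- (x, y) acquisition position measured at exposure_time
    exposure_time -- timestamp (s) at which the image was exposed
    now           -- current reception/processing timestamp (s)
    prev_time     -- previous reception timestamp (s)
    prev_pred     -- previous (x, y) position prediction

    A plain constant-velocity model: estimate velocity from the change
    since the previous prediction, then advance the measurement by the
    total measurement latency (now - exposure_time).
    """
    dt = now - prev_time
    if dt <= 0:
        return pos  # no usable history; fall back to the raw measurement
    vx = (pos[0] - prev_pred[0]) / dt
    vy = (pos[1] - prev_pred[1]) / dt
    delay = now - exposure_time
    return (pos[0] + vx * delay, pos[1] + vy * delay)

# Target moving +10 px per 0.1 s, image exposed 0.05 s before reception.
print(predict_position((110, 0), exposure_time=0.95, now=1.0,
                       prev_time=0.9, prev_pred=(100, 0)))
```

The predicted point leads the measured point by the velocity times the latency, which is the general idea behind compensating the delay between exposure and control.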
  • A stabilizer is not limited to anti-shake and stabilization during video shooting; it can also support additional operating modes, which helps ensure a good shooting experience for the user.
  • The gimbal stabilizer can not only perform a stabilization operation but also control the camera to rotate; combining the two closes the loop and makes an intelligent follow operation on the subject possible.
  • For intelligent following, two points matter most: one is how to obtain the position information of the subject in the picture; the other is how to control the movement of the gimbal so as to keep the subject at the composition position, such as the center of the frame.
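The second point, keeping the subject at the composition position, can be illustrated with a minimal proportional controller that converts the pixel error between the subject and the frame center into gimbal angular rates. This is a sketch under stated assumptions: the function name, the gain value, and the field-of-view mapping are hypothetical, not taken from the patent.

```python
def follow_rates(target_px, frame_size, fov_deg, kp=1.5):
    """Map the pixel offset of the subject from the frame center
    to yaw/pitch angular-rate commands (deg/s).

    target_px  -- (x, y) position of the subject in the image
    frame_size -- (width, height) of the image in pixels
    fov_deg    -- (horizontal, vertical) field of view in degrees
    kp         -- proportional gain (hypothetical value)
    """
    cx, cy = frame_size[0] / 2, frame_size[1] / 2
    # Pixel error, normalized to a fraction of the frame.
    ex = (target_px[0] - cx) / frame_size[0]
    ey = (target_px[1] - cy) / frame_size[1]
    # Convert to an angular error via the field of view, then apply gain.
    yaw_rate = kp * ex * fov_deg[0]
    pitch_rate = kp * ey * fov_deg[1]
    return yaw_rate, pitch_rate

# A subject to the right of center yields a positive yaw command.
print(follow_rates((1280, 360), (1920, 720), (80, 50)))
```

A real controller would typically add integral/derivative terms and rate limits, but the proportional form already captures "drive the error to zero so the subject stays centered."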
  • The existing approach of pairing the stabilizer with a third-party camera to realize intelligent following mainly introduces an image processing device.
  • The real-time image is input to an AI machine learning unit (implemented in software), which outputs the real-time position of the subject to be photographed by the third-party camera.
  • In this arrangement, the camera 100 serves as a third-party payload and can be connected through an HDMI interface to an image signal processing (Image Signal Processing, ISP) device. The image signal processing device may include: an ISP module 1011, a buffer 1012, a real-time video output device 1013, a format converter 1014, a machine learning model 1015, and a policy processor 1016. The ISP module 1011 analyzes and processes the received image and transmits the processed image data to the buffer 1012 for buffering. The buffered image data can be output in real time by the real-time video output device 1013, and can also undergo a format conversion operation by the format converter 1014, so that the converted image data can be input into the machine learning model 1015 for machine learning, thereby identifying the to-be-followed subject set by the user.
  • The policy processor 1016 can determine the control parameters of the gimbal according to a strategy, and the gimbal controller 102 can then control the gimbal based on those control parameters, so that the gimbal performs an intelligent follow operation on the subject to be followed.
  • However, the video signal transmitted over the HDMI interface suffers a large delay, which directly degrades the follow operation; moreover, the delay of the HDMI interface differs across cameras, making the algorithm difficult to normalize.
  • To address this, the present embodiment provides a control method and device for a pan-tilt, a movable platform, and a storage medium.
  • The control method obtains the acquisition position of the target object in the captured image and determines, based on the acquisition position, the control parameters for following the target object; it then controls the pan-tilt according to those control parameters so that the target object is followed.
  • Since the acquisition position is determined by the image acquisition device itself, and the pan-tilt can obtain it directly from the image acquisition device, the delay incurred when the pan-tilt acquires the acquisition position is effectively reduced. This solves the problem of a poor follow effect caused by a relatively large delay, further ensures the quality and effect of the pan-tilt's follow operation, and effectively improves the stability and reliability of the method.
  • FIG. 2 is a schematic flowchart of a control method of a pan/tilt according to an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of a communication connection between a pan-tilt and an image acquisition device provided by an embodiment of the present invention. As shown in FIG. 2 and FIG. 3, this embodiment provides a control method for a pan-tilt, wherein the pan-tilt is communicatively connected with an image acquisition device.
  • The image acquisition device refers to a device with both image acquisition capability and image processing capability, for example a camera, a video camera, or another device having both capabilities.
  • Specifically, a universal serial bus (USB) interface may be provided on the pan-tilt for a wired communication connection with the image acquisition device; that is, the pan-tilt communicates with the image acquisition device through the USB interface.
  • When the pan-tilt communicates with the image acquisition device in this way, the delay time corresponding to data transmission is relatively short. For example, when the pan-tilt exchanges data with the image acquisition device through an HDMI interface, the corresponding delay time is t1; when the pan-tilt exchanges data with the image acquisition device through the USB interface, the corresponding delay time is t2, where t2 < t1.
  • It should be noted that the communication connection between the pan-tilt and the image acquisition device is not limited to the implementations above; those skilled in the art may configure it according to specific application requirements and scenarios, as long as the delay time for data transmission between the pan-tilt and the image acquisition device remains relatively short. Details are not repeated here.
  • The execution body of this pan-tilt control method can be a control device of the pan-tilt; it can be understood that the control device can be implemented as software, or as a combination of software and hardware. When the control device executes the control method of the pan-tilt, it can solve the problem of poor following effect caused by the long delay of data transmission through the interface, thereby ensuring the quality and effect of the following operation on the target object.
  • the method may include:
  • Step S201: Acquire the acquisition position of the target object in the captured image, where the acquisition position is determined by the image acquisition device.
  • Step S202: Based on the acquisition position, determine control parameters for performing a following operation on the target object.
  • Step S203: Control the pan-tilt according to the control parameters to realize the following operation on the target object.
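  • The patent does not prescribe how the control parameters are derived from the acquisition position. As an illustrative sketch of the three-step loop (S201 to S203), the snippet below uses a simple proportional controller, which is an assumption for illustration only, to map the pixel offset of the target from the desired composition point to gimbal angular rates:

```python
# Illustrative sketch of the three-step follow loop (S201-S203).
# The proportional mapping is an assumption; the patent does not
# specify how control parameters are computed.

def follow_step(capture_pos, composition_target=(0.5, 0.5), gain=2.0):
    """Map a normalized acquisition position (x, y) in [0, 1], as reported
    by the image acquisition device (S201), to (yaw_rate, pitch_rate)
    control parameters (S202) that would be sent to the gimbal (S203)."""
    ex = capture_pos[0] - composition_target[0]  # horizontal error -> yaw
    ey = capture_pos[1] - composition_target[1]  # vertical error -> pitch
    return (gain * ex, gain * ey)

# Target sits right of the composition point: yaw right, no pitch change.
print(follow_step((0.75, 0.5)))  # (0.5, 0.0)
```

In a real system the returned rates would be fed to the gimbal's yaw and pitch motors each frame; the gain and normalized coordinates here are placeholders.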
  • Step S201: Acquire the acquisition position of the target object in the captured image, where the acquisition position is determined by the image acquisition device.
  • the image acquisition device can be set on the pan-tilt and used for image acquisition operation. After acquiring the acquired image, the image acquisition device can analyze and process the acquired image to determine the acquisition position of the target object in the acquired image.
  • the collection position of the target object in the captured image may include: key point positions corresponding to the target object in the captured image, or a coverage area corresponding to the target object in the captured image, and so on.
  • The acquisition position of the target object in the captured image can be actively or passively transmitted to the PTZ through the USB interface, so that the PTZ can obtain the acquisition position of the target object in the captured image.
  • Step S202: Based on the acquisition position, determine control parameters for performing a following operation on the target object.
  • The acquisition position can be analyzed and processed to determine control parameters for following the target object, and the control parameters may include at least one of the following: attitude information, angular velocity information, and acceleration information.
  • Determining the control parameters for following the target object based on the acquisition position may include: calculating a current position prediction value corresponding to the acquisition position; and determining, based on the current position prediction value, the control parameters for following the target object.
  • the acquired image can be analyzed and processed, so that the acquisition position of the target object in the acquired image can be obtained.
  • Because the acquisition position must be transmitted to the PTZ, there is a certain delay when the acquisition position is acquired through the image acquisition device. Therefore, to reduce the influence of this delay on the intelligent following operation, the current position prediction value corresponding to the acquisition position can be calculated. It can be understood that the current position prediction value and the acquisition position are different positions.
  • After the current position prediction value is obtained, it can be analyzed and processed to determine the control parameters used to follow the target object, thereby effectively ensuring the accuracy and reliability of the determination of the control parameters.
  • Step S203: Control the pan-tilt according to the control parameters to realize the following operation on the target object.
  • the PTZ can be controlled based on the control parameters, so that the following operation of the target object can be realized.
  • the pan/tilt may correspond to different motion states, for example, the pan/tilt may be in uniform motion, uniform acceleration motion, uniform deceleration motion, and the like.
  • the gimbal with different motion states can have different control strategies.
  • controlling the pan-tilt according to the control parameters may include: acquiring a pan-tilt motion model corresponding to the target object; and controlling the pan-tilt based on the pan-tilt motion model and the control parameters.
  • The motion model of the pan/tilt can be determined according to the motion state of the target object. For example, if the target object moves at a uniform speed, the pan/tilt may be in uniform motion; if the target object is in uniform acceleration, the pan/tilt may be in uniform acceleration; and if the target object is in uniform deceleration, the pan/tilt may be in uniform deceleration.
  • the motion model of the gimbal is related to the following duration, for example, during initial following, it may be a uniform acceleration motion; it may also be related to the following state, for example, when the following target is lost, it may be a uniform acceleration motion.
  • the pan-tilt motion model corresponding to the target object can be acquired.
  • This embodiment does not limit the specific method for acquiring the pan-tilt motion model corresponding to the target object; those skilled in the art can set it according to specific application requirements and design requirements. For example, multi-frame captured images can be obtained through the image capture device and analyzed to determine the moving speed corresponding to the PTZ, and a pan-tilt motion model corresponding to the target object can be determined based on the moving speed; the pan-tilt motion model may include any one of the following: a uniform acceleration motion model, a uniform deceleration motion model, a uniform speed motion model, and the like.
  • an inertial measurement unit may be provided on the gimbal, and a gimbal motion model corresponding to the target object is obtained through the inertial measurement unit, and the like. After the gimbal motion model is obtained, the gimbal can be controlled based on the gimbal motion model and control parameters to realize the following operation of the target object, thereby effectively improving the quality and efficiency of the following operation of the target object.
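  • As a sketch of how a pan-tilt motion model might be selected from multi-frame measurements, the heuristic below classifies per-frame speed samples into the three model types named above. The sampling scheme and tolerance are assumptions; the patent only names the model types:

```python
# Hypothetical heuristic for choosing a pan-tilt motion model from the
# target's per-frame speed samples (e.g. derived from multi-frame images
# or an inertial measurement unit). The eps tolerance is a placeholder.

def classify_motion(speeds, eps=1e-3):
    """Return a motion-model label from consecutive speed samples."""
    deltas = [b - a for a, b in zip(speeds, speeds[1:])]
    if all(abs(d) <= eps for d in deltas):
        return "uniform"
    if all(d > eps for d in deltas):
        return "uniform_acceleration"
    if all(d < -eps for d in deltas):
        return "uniform_deceleration"
    return "mixed"

print(classify_motion([1.0, 1.0, 1.0]))  # uniform
print(classify_motion([1.0, 1.5, 2.0]))  # uniform_acceleration
print(classify_motion([2.0, 1.4, 0.9]))  # uniform_deceleration
```

The chosen label would then select the corresponding control strategy for the gimbal.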
  • In the control method of the pan-tilt provided by this embodiment, the acquisition position of the target object in the captured image is obtained, the control parameters for following the target object are determined based on the acquisition position, and the pan-tilt is controlled according to the control parameters, thereby realizing the following operation on the target object. Since the acquisition position is determined by the image acquisition device and the PTZ can directly acquire it from the image acquisition device, the delay incurred when the PTZ obtains the acquisition position is effectively reduced. This solves the problem of poor following effect caused by a relatively large delay, further ensures the quality and effect of the following operation on the target object, and effectively improves the stability and reliability of the method.
  • FIG. 5 is a schematic flowchart of obtaining the acquisition position of the target object in the captured image according to an embodiment of the present invention. On the basis of the above embodiment, with continued reference to FIG. 5, this embodiment provides an implementation of acquiring the acquisition position in the captured image. Specifically, acquiring the acquisition position of the target object in the captured image in this embodiment may include:
  • Step S501: Acquire a target focus position corresponding to the target object through the image acquisition device.
  • Step S502: Determine the target focus position as the acquisition position of the target object in the captured image.
  • the focusing operation of the image acquisition device and the following operation of the pan/tilt or drone are two completely independent operations.
  • the tracking object of the gimbal or the drone cannot be adjusted in time based on the change of the focusing object, so the quality and effect of the following operation cannot be guaranteed.
  • For example, an image acquisition device can be mounted on a UAV through the gimbal; the control parameters of the UAV and/or the gimbal can then be adjusted to realize the following operation.
  • To address this, the present embodiment provides a technical solution in which the focusing operation of the image acquisition device and the following operation of the pan/tilt or UAV are related operations. Specifically, in the field of camera technology, the acquisition position of the target object in the captured image is acquired by the image acquisition device; when the focus position of the target object differs from the acquisition position, and the pan-tilt is controlled to follow the target object based on the acquisition position, the target object imaged by the image acquisition device is likely to be out of focus.
  • In this embodiment, the target focus position corresponding to the target object can be acquired through the image acquisition device. It can be understood that the above-mentioned target focus position may be a focus position selected by a user or an automatically recognized focus position.
  • The target focus position corresponding to the target object can be directly determined as the acquisition position of the target object in the captured image; that is, the focus position corresponding to the target object is consistent with the acquisition position of the target object in the captured image, thereby effectively avoiding the target object going out of focus.
  • In some embodiments, determining the target focus position as the acquisition position of the target object in the captured image may include: acquiring a preset area range corresponding to the target focus position, and directly determining the preset area range as the acquisition position of the target object in the captured image.
  • The preset area range corresponding to the target focus position may be at least a partial coverage area corresponding to the target object in the captured image. In this case, the focus position corresponding to the target object is basically the same as the acquisition position of the target object in the captured image, so the target object going out of focus can likewise be avoided.
  • In this embodiment, the target focus position corresponding to the target object is acquired by the image capture device, and the target focus position is then determined as the acquisition position of the target object in the captured image. This effectively ensures that the focus position corresponding to the target object is basically the same as the acquisition position of the target object in the captured image, thereby preventing the target object from going out of focus and further improving the quality and effect of the following operation on the target object.
  • FIG. 6 is a schematic flowchart of obtaining the target focus position corresponding to the target object through the image acquisition device according to an embodiment of the present invention. On the basis of the above embodiment, with continued reference to FIG. 6, this embodiment provides an implementation of acquiring the target focus position. Specifically, in this embodiment, acquiring the target focus position corresponding to the target object through the image acquisition device may include:
  • Step S601: Acquire a historical focus position and a current focus position corresponding to the target object through the image acquisition device.
  • Step S602: Determine the target focus position corresponding to the target object based on the historical focus position and the current focus position.
  • When the image acquisition device performs the image acquisition operation on the target object, the target object may be in a moving state, for example, uniform motion, uniform acceleration, or uniform deceleration, and these different moving states easily cause the corresponding focus position to change during the image acquisition operation. Therefore, the historical focus position and the current focus position corresponding to the target object can be acquired through the image acquisition device.
  • The historical focus position refers to the focus position corresponding to a historical image frame obtained by the image capture device, while the current focus position refers to the focus position corresponding to the current image frame obtained by the image capture device.
  • In some embodiments, determining the target focus position corresponding to the target object based on the historical focus position and the current focus position may include: determining a historical object part corresponding to the historical focus position and a current object part corresponding to the current focus position; and determining the target focus position corresponding to the target object according to the historical object part and the current object part.
  • multiple focus positions (including historical focus positions and current focus positions) corresponding to the multiple frames of images can be determined.
  • the focus positions can be the same or different.
  • Specifically, the historical image corresponding to the historical focus position and the current image corresponding to the current focus position can be determined, and the historical image can then be analyzed based on the historical focus position to determine the historical object part corresponding to the historical focus position. For example, the historical image can be analyzed to determine the target object contour and the target object type in the historical image, and the historical object part corresponding to the historical focus position can then be determined based on the historical focus position, the target object contour, and the target object type.
  • the current image can also be analyzed and processed based on the current focus position to determine the current object part corresponding to the current focus position.
  • the historical object part and the current object part can be analyzed and processed to determine the target focus position corresponding to the target object.
  • an image recognition algorithm or a pre-trained machine learning model can be used to analyze and identify the acquired image to identify at least one object included in the acquired image and the region where the object is located.
  • When certain focus positions fall within the area where one object is located, it can be determined that those focus positions correspond to the same object; when certain focus positions fall within the areas where different objects are located, it can be determined that those focus positions correspond to different objects.
  • Alternatively, the distance between any two focus positions can be determined. When the distance is less than or equal to a preset threshold, it can be determined that the two focus positions correspond to the same object; when the distance is greater than the preset threshold, it can be determined that the two focus positions correspond to different parts of the same object.
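  • The distance rule just described can be sketched as follows; the threshold value and pixel coordinates are placeholders, not values from the patent:

```python
import math

# Sketch of the distance rule: two focus positions within a preset
# threshold are taken to correspond to the same object; beyond it,
# to different parts of the same object. Threshold is a placeholder.

def relate_focus_positions(p1, p2, threshold=50.0):
    return "same_object" if math.dist(p1, p2) <= threshold else "different_parts"

print(relate_focus_positions((0, 0), (30, 40)))   # same_object (distance 50)
print(relate_focus_positions((0, 0), (120, 90)))  # different_parts (distance 150)
```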
  • After the historical focus position and the current focus position are obtained, it can be determined whether they correspond to the same target object; when they do, it can further be determined whether they correspond to the same part of that object.
  • After the above-mentioned information is determined, it can be transmitted to the PTZ, so that the PTZ can perform the follow-up control operation based on it, thereby ensuring the quality and effect of the intelligent following operation.
  • In some embodiments, there is a mapping relationship between the focus position and the focused object (and the focused part of that object), each with respective attribute information, and the attribute information may carry a corresponding identifier. The mapping relationship and the attribute information may be sent by the image acquisition device to the PTZ, so that the PTZ can make corresponding judgments based on this information and formulate corresponding execution strategies.
  • In some embodiments, determining the target focus position corresponding to the target object according to the historical object part and the current object part may include: when the historical object part and the current object part are different parts of the same target object, acquiring the relative position information between the historical object part and the current object part; and adjusting the current focus position based on the relative position information to obtain the target focus position corresponding to the target object.
  • the historical object part and the current object part can be analyzed and processed.
  • When the historical object part and the current object part are different parts of the same target object, it means that the focus position has shifted between the historical image and the current image; for example, the historical object part corresponding to the historical image frame is the eyes of person A, while the current object part corresponding to the current image frame is the shoulder of person A.
  • At this time, the relative position information between the historical object part and the current object part can be obtained, for example, the relative position information between the eyes of person A and the shoulder of person A. After the relative position information is acquired, the current focus position can be adjusted based on it to obtain the target focus position corresponding to the target object.
  • adjusting the current focus position based on the relative position information, and obtaining the target focus position corresponding to the target object may include: when the relative position information is greater than or equal to a preset threshold, adjusting the current focus position based on the relative position information, Obtain the target focus position corresponding to the target object; when the relative position information is smaller than the preset threshold, determine the current focus position as the target focus position corresponding to the target object.
  • the relative position information can be analyzed and compared with the preset threshold.
  • When the relative position information is greater than or equal to the preset threshold, it means that, when the image acquisition device focuses on the target object, the focus positions for the same target object differ at different times; the current focus position can then be adjusted based on the relative position information to obtain the target focus position corresponding to the target object. When the relative position information is less than the preset threshold, it means that the focus position on the target object is basically unchanged at different times, and the current focus position can be determined directly as the target focus position corresponding to the target object.
  • the corresponding historical object part can be determined based on the historical focus position, and the corresponding current object part can be determined based on the current focus position.
  • the historical object part and the current object part can be analyzed and processed to determine the target focus position corresponding to the target object.
  • For example, the relative position information d1 between part 1 and part 2 can be obtained and compared with the preset threshold. When d1 is less than the preset threshold, it means that the focus position changes only slightly when the image acquisition device focuses on the person, and the current focus position can be determined as the target focus position corresponding to the target object.
  • Similarly, the relative position information d2 between part 3 and part 4 can be obtained and compared with the preset threshold. When d2 is greater than the preset threshold, it means that the focus position changes greatly when the image acquisition device focuses on the person; the current focus position can then be adjusted based on the relative position information to obtain the target focus position corresponding to the target object. That is, when the target object being followed has not changed but the focus position has, the current focus position can be adjusted automatically based on the relative positional relationship between the various parts of the target object, which effectively avoids image jumps.
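  • The thresholded adjustment described above (keep the current focus position for small offsets, adjust it for large ones) can be sketched as below. The midpoint blend used for the adjustment branch is our assumption, since the patent leaves the concrete formula unspecified:

```python
# Sketch of the thresholded focus adjustment: when the historical and
# current focus positions are different parts of the same target and
# their offset is below the preset threshold, keep the current focus
# position; otherwise adjust it. The midpoint blend is an assumed rule.

def adjust_focus(current, historical, threshold=30.0):
    dx = current[0] - historical[0]
    dy = current[1] - historical[1]
    if (dx * dx + dy * dy) ** 0.5 < threshold:
        return current  # small change: focus position basically unchanged
    # large jump within the same object: smooth toward the historical part
    return ((current[0] + historical[0]) / 2, (current[1] + historical[1]) / 2)

print(adjust_focus((10.0, 0.0), (0.0, 0.0)))   # (10.0, 0.0): kept
print(adjust_focus((100.0, 0.0), (0.0, 0.0)))  # (50.0, 0.0): adjusted
```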
  • the historical object part and the current object part may be analyzed and processed to determine the target focus position corresponding to the target object.
  • In some embodiments, determining the target focus position corresponding to the target object according to the historical object part and the current object part may include: when the historical object part and the current object part are different parts of the same target object, updating the composition target position based on the current focus position to obtain a first updated composition target position; and performing the following operation on the target object based on the first updated composition target position.
  • When the historical object part and the current object part are different parts of the same target object, the composition target position may be updated based on the current focus position to obtain the first updated composition target position. For example, when the preset composition target position is the center of the screen, in order to prevent the image from shaking due to the change in the target object, the composition target position can be updated based on the current focus position; that is, the current focus position can be determined as the first updated composition target position. After the first updated composition target position is obtained, the target object can be followed based on it, thereby ensuring the quality and efficiency of the following operation on the target object.
  • In this embodiment, the historical focus position and the current focus position corresponding to the target object are acquired by the image acquisition device, and the target focus position corresponding to the target object is then determined based on them. This effectively guarantees the accuracy and reliability of determining the target focus position, facilitates following the target object based on the target focus position, and further improves the practicability of the method.
  • FIG. 9 is a schematic flowchart of another pan/tilt control method provided by an embodiment of the present invention. On the basis of the above embodiment, with continued reference to FIG. 9, the method in this embodiment may further include:
  • Step S901: Detect whether the target object of the following operation changes.
  • Step S902: When the target object changes from a first object to a second object, acquire the acquisition position of the second object in the captured image.
  • Step S903: Update the composition target position based on the acquisition position of the second object in the captured image to obtain a second updated composition target position corresponding to the second object, so that the second object can be followed based on the second updated composition target position.
  • Whether the target object of the following operation changes can be detected in real time. Specifically, the historical focus position and the current focus position can be obtained, the historical target object corresponding to the historical focus position and the current target object corresponding to the current focus position can be identified, and it can be determined whether the target object has changed.
  • When the target object changes from the first object to the second object, the acquisition position of the second object in the captured image can be obtained, and the composition target position can then be updated based on that acquisition position to obtain the second updated composition target position corresponding to the second object.
  • the collection position of the second object in the captured image may be determined as the second updated composition target position corresponding to the second object, and then the second object may be followed based on the second updated composition target position, In this way, the situation of image shaking due to the change of the target object can be effectively avoided, and the quality and efficiency of the control of the PTZ can be further improved.
  • FIG. 11 is a schematic flowchart of calculating the current position prediction value corresponding to the acquisition position according to an embodiment of the present invention. On the basis of the above embodiment, with continued reference to FIG. 11, this embodiment provides an implementation of calculating the current position prediction value corresponding to the acquisition position. Specifically, in this embodiment, calculating the current position prediction value corresponding to the acquisition position may include:
  • Step S1101: Determine a delay time corresponding to the acquisition position, where the delay time indicates the time required for the PTZ to obtain the acquisition position via the image acquisition device.
  • In some embodiments, determining the delay time corresponding to the acquisition position may include: acquiring the exposure time corresponding to the captured image; when the PTZ acquires the current acquisition position, determining the current reception time corresponding to the current acquisition position; and determining the time interval between the current reception time and the exposure time as the delay time corresponding to the acquisition position.
  • When the image acquisition device performs the image acquisition operation, the exposure time t_n corresponding to the captured image can be recorded and stored in a preset area, so that the PTZ can acquire the exposure time t_n corresponding to the captured image through the image acquisition device.
  • After the image acquisition device transmits the current acquisition position of the target object in the currently captured image to the PTZ, the current reception time t_{n+1} corresponding to the current acquisition position can be determined.
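  • Step S1101 then amounts to a timestamp subtraction; the sketch below uses illustrative millisecond timestamps, as the patent prescribes no units or clock source:

```python
# Sketch of S1101: the delay time is the interval between the exposure
# time t_n that the image acquisition device records for a frame and the
# time t_{n+1} at which the gimbal receives that frame's acquisition
# position. Millisecond timestamps here are illustrative.

def delay_time(exposure_tn, reception_tn1):
    return reception_tn1 - exposure_tn

# Frame exposed at t_n = 100 ms, position received at t_{n+1} = 145 ms.
print(delay_time(100, 145))  # 45
```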
  • Step S1102: Based on the delay time and the acquisition position, determine the current position prediction value corresponding to the acquisition position.
  • In some embodiments, determining the current position prediction value corresponding to the acquisition position based on the delay time and the acquisition position may include: when the PTZ acquires the previous acquisition position, determining the previous reception time corresponding to the previous acquisition position; determining the previous position prediction value corresponding to the previous acquisition position; and calculating the current position prediction value corresponding to the acquisition position according to the acquisition position, the exposure time, the delay time, the previous reception time, and the previous position prediction value.
  • When the image acquisition device captures multiple frames of images, it can determine multiple acquisition positions corresponding to the target object in those frames; when these acquisition positions are transmitted to the PTZ, the PTZ acquires multiple acquisition positions, which may include the previous acquisition position and the current acquisition position.
  • When the PTZ acquires the previous acquisition position, the previous reception time corresponding to the previous acquisition position can be determined, and the previous position prediction value corresponding to the previous acquisition position can also be determined. The specific implementation of determining the previous position prediction value is similar to that of determining the current position prediction value in this embodiment; for details, reference may be made to the above statements, which will not be repeated here.
  • calculating the current position prediction value corresponding to the acquisition position based on the acquisition position, the exposure time, the delay time, the previous reception time, and the previous position prediction value may include: based on the acquisition position, the exposure time, the delay time The time, the previous reception time and the previous position prediction value are used to determine the position adjustment value corresponding to the collection position; the sum of the position adjustment value and the collection position is determined as the current position prediction value corresponding to the collection position.
  • After the acquisition position, the exposure time, the delay time, the previous reception time, and the previous position prediction value are obtained, they can be analyzed to determine the position adjustment value Δx corresponding to the acquisition position. The sum of the position adjustment value and the acquisition position can then be determined as the current position prediction value corresponding to the acquisition position, which effectively improves the accuracy and reliability of determining the current position prediction value.
  • In this embodiment, the delay time corresponding to the acquisition position is determined, and the current position prediction value corresponding to the acquisition position is then determined based on the delay time and the acquisition position. Since the current position prediction value takes the delay time into account, the accuracy and reliability of its determination are effectively ensured. In addition, when different image acquisition devices and/or different transmission interfaces are used to transmit the acquisition position, the delay times corresponding to those devices and/or interfaces can be obtained, which solves the prior-art problem that the delay times of data transmission differ between image acquisition devices and transmission interfaces; the algorithm is thus normalized, further improving the quality and efficiency of the following operation on the target object.
  • FIG. 12 is a schematic flowchart of determining the position adjustment value corresponding to the collection position based on the collection position, the exposure time, the delay time, the previous reception time and the previous position prediction value according to the embodiment of the present invention;
  • on the basis of the foregoing embodiments, this embodiment provides an implementation manner of determining the position adjustment value corresponding to the collection position; specifically, this may include:
  • Step S1201 Determine the moving speed corresponding to the target object based on the acquisition position, the previous position prediction value, the exposure time, and the previous reception time.
  • in some instances, determining the moving speed corresponding to the target object may include: acquiring the position difference between the collection position and the previous position prediction value, and the time difference between the exposure time and the previous reception time; and determining the ratio between the position difference and the time difference as the moving speed corresponding to the target object.
  • specifically, the position difference between the collection position and the previous position prediction value and the time difference (t_n - t_{n-1}) between the exposure time and the previous reception time can be obtained, and the ratio between the position difference and the time difference can be determined as the moving speed corresponding to the target object, that is, v = (collection position - previous position prediction value) / (t_n - t_{n-1}).
  • Step S1202 Determine the product value between the moving speed and the time interval as the position adjustment value corresponding to the collection position.
  • specifically, the product of the moving speed and the time interval can be determined as the position adjustment value corresponding to the collection position, that is, Δx = v × Δt, where Δt is the time interval.
  • in this embodiment, the moving speed corresponding to the target object is determined based on the collection position, the previous position prediction value, the exposure time and the previous reception time, and the product of the moving speed and the time interval is then determined as the position adjustment value corresponding to the collection position, thereby effectively ensuring the accuracy and reliability of determining the position adjustment value, and further improving the accuracy of calculating the current position prediction value corresponding to the collection position based on the position adjustment value.
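The computation in Steps S1201 and S1202, together with the prediction sum described earlier, can be sketched as follows. This is a minimal illustration: the function and variable names, and the choice of the extrapolation interval `dt_interval` (e.g. the delay time), are assumptions rather than taken from the specification.

```python
def predict_current_position(x_n, x_prev_pred, t_exposure, t_prev_recv, dt_interval):
    """Predict the target's current position from the reported collection position.

    x_n:          collection position reported by the image acquisition device
    x_prev_pred:  previous position prediction value
    t_exposure:   exposure time of the current frame
    t_prev_recv:  reception time of the previous frame's result
    dt_interval:  time interval to extrapolate over (assumed, e.g. the delay time)
    """
    # Step S1201: moving speed = position difference / time difference.
    v = (x_n - x_prev_pred) / (t_exposure - t_prev_recv)
    # Step S1202: position adjustment = moving speed * time interval.
    delta_x = v * dt_interval
    # Current position prediction = collection position + adjustment.
    return x_n + delta_x
```

For example, a target that moved from a predicted 90.0 to a reported 100.0 over 0.5 s is extrapolated forward by its estimated speed over the interval.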
  • FIG. 13 is a schematic flow chart 1 of determining a control parameter for a follow-up operation of a target object based on the predicted value of the current position according to an embodiment of the present invention.
  • on the basis of the foregoing embodiments, this embodiment provides an implementation manner of determining the control parameters used for the following operation on the target object; specifically, determining the control parameters based on the current position prediction value may include:
  • Step S1301 Determine the position deviation between the current position prediction value and the composition target position.
  • Step S1302 Based on the position deviation, determine the control parameters used to follow the target object.
  • the composition target position is pre-configured when following the target object, and the composition target position is the position where the target object is expected to remain in the image during the following operation of the target object.
  • for example, the composition target position may refer to the center position of the image, that is, the target object is continuously kept at the center position of the image, so that the quality and effect of the following operation on the target object can be guaranteed.
  • in some instances, determining the control parameters used for the following operation on the target object based on the position deviation may include: acquiring the picture field of view (FOV) angle corresponding to the captured image; and determining, based on the picture FOV angle and the position deviation, the control parameters used for the following operation on the target object.
  • specifically, obtaining the picture FOV angle corresponding to the image captured by the image acquisition device may include: obtaining focal length information of the image acquisition device; and determining, according to the focal length information, the picture FOV angle corresponding to the captured image. After the picture FOV angle corresponding to the captured image is acquired, the picture FOV angle and the position deviation can be analyzed to determine the control parameters used for the following operation on the target object.
  • in some instances, the control parameter is negatively correlated with the picture FOV angle; that is, when the picture FOV angle increases, the size of the target object in the image becomes smaller, and the control parameter (for example, the rotation speed of the gimbal) can decrease as the picture FOV angle increases.
  • in other instances, determining the control parameters used for the following operation on the target object based on the position deviation may include: acquiring the gimbal attitude corresponding to the collection position through an inertial measurement unit (IMU) disposed on the gimbal; and converting the position deviation into the geodetic coordinate system based on the gimbal attitude and the picture FOV angle to obtain the control parameters used for the following operation on the target object, which likewise realizes accurate and reliable determination of the control parameters.
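One hedged way to realize the FOV-dependent control parameter described above: map the pixel deviation to a rotation-speed command whose gain shrinks as the picture FOV grows. The proportional form and the gain constant `k` are illustrative assumptions; the specification only states the negative correlation.

```python
def follow_speed(deviation_px, image_width_px, fov_deg, k=500.0):
    """Map a pixel-domain position deviation to a gimbal rotation speed.

    deviation_px:   pixel offset between the predicted position and the
                    composition target position
    image_width_px: width of the captured image in pixels
    fov_deg:        picture FOV angle, in degrees
    k:              overall gain (illustrative assumption)
    """
    frac = deviation_px / image_width_px   # normalized position deviation
    return k * frac / fov_deg              # gain shrinks as the FOV grows
```

With a fixed pixel deviation, doubling the FOV halves the commanded speed, matching the negative correlation stated in the text.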
  • FIG. 14 is a second schematic flowchart of determining a control parameter for a follow-up operation of a target object based on the predicted value of the current position provided by an embodiment of the present invention.
  • on the basis of the foregoing embodiments, this embodiment provides another implementation manner of determining the control parameters used for the following operation on the target object; specifically, this may include:
  • Step S1401 Acquire a follow mode corresponding to the gimbal, and the follow mode includes any one of the following: a single-axis follow mode, a two-axis follow mode, and a full follow mode.
  • Step S1402 Based on the current position prediction value and the following mode, determine the control parameters used to follow the target object.
  • specifically, the follow mode corresponding to the gimbal may include any one of the following: a single-axis follow mode, a two-axis follow mode, and a full follow mode. It can be understood that those skilled in the art can adjust the follow mode of the gimbal based on different application scenarios and application requirements, which will not be repeated here.
  • the gimbal in different follow modes can correspond to different control parameters.
  • in the single-axis follow mode, the control parameters can correspond to a single axis of the gimbal; for example, the yaw axis can be controlled to move based on the target attitude.
  • in the two-axis follow mode, the control parameters can correspond to two axes of the gimbal; for example, the yaw axis and the pitch axis can be controlled to move based on the target attitude.
  • in the full follow mode, the control parameters can correspond to the three axes of the gimbal; for example, the yaw axis, the pitch axis and the roll axis can be controlled to move based on the target attitude.
  • in some instances, determining the control parameters for the following operation on the target object may include: determining, based on the current position prediction value, candidate control parameters for the following operation on the target object; and determining, among the candidate control parameters, target control parameters corresponding to the follow mode.
  • specifically, the candidate control parameters for the following operation on the target object can be determined based on the correspondence between the current position prediction value and the control parameters. It can be understood that there may be multiple candidate control parameters; for example, when the gimbal is a three-axis gimbal, the candidate control parameters may include control parameters corresponding to the yaw axis, the pitch axis, and the roll axis.
  • after that, the target control parameters corresponding to the follow mode may be determined from the candidate control parameters, wherein the target control parameters may be at least a part of the candidate control parameters.
  • in some instances, determining the target control parameters corresponding to the follow mode may include: when the follow mode is the single-axis follow mode, determining, among the candidate control parameters, the single-axis control parameter corresponding to the single-axis follow mode, and setting the other candidate control parameters to zero; when the follow mode is the two-axis follow mode, determining, among the candidate control parameters, the two-axis control parameters corresponding to the two-axis follow mode, and setting the other candidate control parameters to zero; and when the follow mode is the full follow mode, determining the candidate control parameters as the three-axis control parameters corresponding to the full follow mode.
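The mode-dependent selection of target control parameters can be sketched as follows. The axis-to-mode assignment follows the examples above (yaw for single-axis, yaw plus pitch for two-axis); the names are assumptions.

```python
def apply_follow_mode(candidate, mode):
    """Keep only the axis control parameters matching the follow mode.

    candidate: dict with 'yaw', 'pitch', 'roll' candidate control parameters
    mode:      'single' (yaw only), 'dual' (yaw + pitch), or 'full' (all axes)
    """
    kept = {'single': ('yaw',),
            'dual': ('yaw', 'pitch'),
            'full': ('yaw', 'pitch', 'roll')}[mode]
    # Axes outside the follow mode are set to zero, as described above.
    return {axis: (v if axis in kept else 0.0) for axis, v in candidate.items()}
```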
  • FIG. 15 is a schematic flowchart of controlling the pan/tilt based on the motion model and control parameters of the pan/tilt according to an embodiment of the present invention; on the basis of the foregoing embodiments, referring to FIG. 15, this embodiment provides an implementation manner of controlling the pan/tilt. Specifically, controlling the pan/tilt based on the motion model of the pan/tilt and the control parameters may include:
  • Step S1501 Acquire duration information corresponding to the following operation on the target object.
  • specifically, the control device of the pan/tilt can be provided with a timer, and the timer can be used to time the follow operation on the target object; therefore, the duration information corresponding to the follow operation on the target object can be obtained through the timer.
  • Step S1502 When the duration information is less than the first time threshold, update the control parameters based on the motion model of the pan/tilt, obtain the updated control parameters, and control the pan/tilt based on the updated control parameters.
  • specifically, in the process of moving, the gimbal can correspond to different motion models, and different motion models can correspond to different control parameters. Therefore, after the duration information is obtained, it can be analyzed and compared with the preset first time threshold; when the duration information is less than the first time threshold, the control parameters can be updated based on the motion model of the gimbal to obtain the updated control parameters, and the gimbal can be controlled based on the updated control parameters.
  • in some instances, updating the control parameters based on the motion model of the pan/tilt to obtain the updated control parameters may include: determining, based on the motion model of the pan/tilt, an update coefficient corresponding to the control parameters, wherein the update coefficient is less than 1; and determining the product of the update coefficient and the control parameters as the updated control parameters.
  • specifically, determining the update coefficient corresponding to the control parameters based on the motion model of the pan/tilt may include: when the motion model of the pan/tilt is uniform acceleration motion, determining the ratio between the duration information and the first time threshold as the update coefficient corresponding to the control parameters, the update coefficient at this time being less than 1. The product of the update coefficient and the control parameters can then be determined as the updated control parameters; that is, when the duration information t < the first time threshold T, the updated control parameters can be determined as E_n' = (t/T) × E_n, where E_n is the control parameter and E_n' is the updated control parameter.
  • for example, when the gimbal starts to follow a target object and obtains the control parameters used to follow the target object, in order to prevent the gimbal from suddenly following the target object, the updated control parameters corresponding to the control parameters can be obtained, the updated control parameters being transition control parameters from 0 to the control parameters; that is, when the duration information is less than the first time threshold, the gimbal is controlled based on the updated control parameters, thereby realizing a slow start operation: the gimbal can be controlled to ramp up slowly to the control parameters, thereby ensuring the quality and effect of the follow operation on the target object.
  • correspondingly, when the motion model of the pan/tilt is uniform deceleration motion, an update coefficient less than 1 can likewise be obtained from the ratio between the duration information and the first time threshold, and the product of the update coefficient and the control parameters determined as the updated control parameters; that is, when the duration information t < the first time threshold T, the updated control parameters can be determined as E_n' = (1 - t/T) × E_n, where E_n is the control parameter and E_n' is the updated control parameter.
  • for example, when the gimbal starts to stop following a target object and obtains the control parameters for stopping the follow operation on the target object, in order to prevent the gimbal from suddenly stopping following the target object, the updated control parameters corresponding to the control parameters can be obtained, the updated control parameters being transition control parameters from the control parameters to 0; that is, when the duration information is less than the first time threshold, the gimbal is controlled based on the updated control parameters, thereby realizing a slow stop operation: the gimbal can be controlled to ramp down slowly to 0, thereby ensuring the quality and effect of the stop-follow operation on the target object.
  • Step S1503 When the duration information is greater than or equal to the first time threshold, and the motion model of the pan/tilt is uniform acceleration motion, use the control parameters to control the pan/tilt.
  • specifically, when the duration information t ≥ the first time threshold T, the updated control parameters are the same as the control parameters, and thus the control parameters can be used directly to control the gimbal.
  • in addition, when the duration information is greater than or equal to the first time threshold and the motion model of the gimbal is uniform deceleration motion, the control parameters may be configured as 0.
  • in this embodiment, when the duration information is less than the first time threshold, the control parameters are updated based on the motion model of the pan/tilt to obtain the updated control parameters, and the pan/tilt is controlled based on the updated control parameters; when the duration information is greater than or equal to the first time threshold and the motion model of the pan/tilt is uniform acceleration motion, the control parameters are used to control the pan/tilt. This effectively realizes controlling the pan/tilt with a slow start strategy and a slow stop strategy, further ensuring the quality and efficiency of the follow operation on the target object.
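The slow start and slow stop behaviour described above might be sketched as follows. The (1 - t/T) form for the deceleration case is a reconstruction consistent with the transition-to-zero description, not a formula quoted from the specification; all names are assumptions.

```python
def ramp_control(E_n, t, T, model):
    """Scale a control parameter for slow start / slow stop.

    E_n:   control parameter computed for the follow operation
    t:     duration since the (stop-)follow operation began
    T:     first time threshold
    model: 'accelerate' for slow start, 'decelerate' for slow stop
    """
    if model == 'accelerate':
        # Slow start: ramp from 0 up to E_n, then hold E_n.
        return (t / T) * E_n if t < T else E_n
    # Slow stop: ramp from E_n down to 0, then hold 0.
    return (1.0 - t / T) * E_n if t < T else 0.0
```

At t = T the two branches meet their steady-state values (E_n and 0 respectively), so the transition is continuous.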
  • Fig. 16 is a first schematic flowchart of controlling the pan/tilt according to the control parameters provided by an embodiment of the present invention; on the basis of any of the foregoing embodiments, with continued reference to Fig. 16, this embodiment provides an implementation manner of controlling the pan/tilt according to the control parameters. Specifically, controlling the pan/tilt according to the control parameters in this embodiment may include:
  • Step S1601 Acquire the following state corresponding to the target object.
  • specifically, when the target object is being followed, the target object may correspond to different following states.
  • for example, the following states corresponding to the target object may include at least one of the following: a keep-following state and a lost state. It can be understood that, when the target object corresponds to different following states, different control parameters can be used to control the pan/tilt, so as to ensure the safety and reliability of the control of the pan/tilt.
  • this embodiment does not limit the specific implementation manner of acquiring the following state corresponding to the target object.
  • Those skilled in the art can obtain the following state corresponding to the target object according to specific application requirements and design requirements.
  • specifically, when the target object exists in the image collected by the image acquisition device, it can be determined that the following state corresponding to the target object is the keep-following state; when the target object does not exist in the image collected by the image acquisition device, it can be determined that the following state corresponding to the target object is the lost state.
  • in some instances, acquiring the following state corresponding to the target object may include: detecting whether the target object of the follow operation changes; when the target object changes from a first object to a second object, it may also be determined that the first object is in the lost state.
  • Step S1602 Control the PTZ based on the following state and control parameters.
  • in some instances, controlling the pan/tilt based on the following state and the control parameters may include: when the target object is in the lost state, acquiring lost duration information corresponding to the process of following the target object; updating the control parameters according to the lost duration information to obtain updated control parameters; and controlling the pan/tilt based on the updated control parameters.
  • the lost duration information corresponding to the following operation of the target object can be obtained through a timer, and then the control parameters can be updated according to the lost duration information to obtain the updated control parameters.
  • specifically, updating the control parameters according to the lost duration information to obtain the updated control parameters may include: when the lost duration information is greater than or equal to a second time threshold, updating the control parameters to zero; when the lost duration information is less than the second time threshold, obtaining the ratio between the lost duration information and the second time threshold, determining the difference between 1 and the ratio as the update coefficient corresponding to the control parameters, and determining the product of the update coefficient and the control parameters as the updated control parameters.
  • specifically, after the lost duration information is obtained, it can be analyzed and compared with the second time threshold.
  • when the lost duration information is greater than or equal to the second time threshold, it means that the followed target object has been in the lost state for a long time, and the control parameters can be updated to zero; that is, when the lost duration information t ≥ the second time threshold T, the control parameters are updated to zero.
  • when the lost duration information is less than the second time threshold, it means that the followed target object has been in the lost state only for a short time; the ratio between the lost duration information and the second time threshold can then be obtained, the difference between 1 and the ratio determined as the update coefficient corresponding to the control parameters, and the product of the update coefficient and the control parameters determined as the updated control parameters; that is, when the lost duration information t < the second time threshold T, the updated control parameters can be determined as E_n' = (1 - t/T) × E_n, where E_n is the control parameter.
  • in this embodiment, the following state corresponding to the target object is obtained, and the pan/tilt is then controlled based on the following state and the control parameters, thereby effectively ensuring the accuracy and reliability of the control of the pan/tilt.
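The lost-state handling described above can be sketched as follows; the function name and the keep-following branch (returning the control parameter unchanged) are assumptions.

```python
def lost_state_control(E_n, lost, t_lost, T2):
    """Attenuate the control parameter while the target object is lost.

    E_n:    current control parameter
    lost:   whether the target object is in the lost state
    t_lost: how long the target object has been lost
    T2:     second time threshold
    """
    if not lost:
        return E_n                       # keep-following state: use E_n as-is
    if t_lost >= T2:
        return 0.0                       # lost too long: stop the follow motion
    return (1.0 - t_lost / T2) * E_n     # decay toward zero while lost
```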
  • FIG. 17 is a second schematic flowchart of controlling the pan/tilt according to the control parameters according to an embodiment of the present invention; on the basis of any of the foregoing embodiments, with continued reference to FIG. 17, this embodiment provides another implementation manner of controlling the pan/tilt. Specifically, controlling the pan/tilt according to the control parameters in this embodiment may include:
  • Step S1701 Obtain the object type of the target object.
  • Step S1702 Control the PTZ according to the object type and control parameters.
  • specifically, when the pan/tilt is used to follow the target object, the target object may correspond to different object types, and the object types may include any one of the following: a stationary object, a moving object with a greater height, and a moving object with a lesser height. Since different object types may correspond to different control requirements, after the object type is obtained, the pan/tilt can be controlled according to the object type and the control parameters.
  • controlling the pan-tilt head according to the object type and the control parameters may include: adjusting the control parameters according to the object type to obtain the adjusted parameters; and controlling the pan-tilt head based on the adjusted parameters.
  • specifically, adjusting the control parameters according to the object type to obtain the adjusted parameters may include: when the target object is a stationary object, reducing the control bandwidth corresponding to the gimbal in the yaw direction and the control bandwidth corresponding to the gimbal in the pitch direction; when the target object is a moving object and the height of the moving object is greater than or equal to a height threshold, increasing the control bandwidth of the gimbal in the yaw direction and reducing the control bandwidth of the gimbal in the pitch direction; and when the target object is a moving object and the height of the moving object is less than the height threshold, increasing the control bandwidth corresponding to the gimbal in the yaw direction and the control bandwidth corresponding to the gimbal in the pitch direction.
  • that is, when the target object is a stationary object, reducing the control bandwidth corresponding to the yaw direction (yaw axis direction) of the gimbal and the control bandwidth corresponding to the pitch direction (pitch axis direction) of the gimbal can reduce the pan following performance and the pitch following performance; when the target object is a moving object whose height is greater than or equal to the height threshold, increasing the control bandwidth in the yaw direction and reducing the control bandwidth in the pitch direction can improve the pan following performance of the gimbal.
  • when the target object is a moving object and the height of the moving object is less than the height threshold, increasing the control bandwidth corresponding to the yaw direction (yaw axis direction) of the gimbal and the control bandwidth corresponding to the pitch direction (pitch axis direction) of the gimbal can improve both the pan following performance and the pitch following performance.
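The bandwidth adjustment by object type might look like the sketch below. The specification only says increase or decrease; the scaling factor, the height threshold default, and the names are illustrative assumptions.

```python
def adjust_bandwidth(yaw_bw, pitch_bw, object_type, height=None,
                     height_threshold=1.0, factor=2.0):
    """Adjust yaw/pitch control bandwidth by the followed object's type.

    object_type: 'stationary' or 'moving'; for moving objects, `height`
                 is compared against `height_threshold`.
    Returns the (yaw_bandwidth, pitch_bandwidth) pair after adjustment.
    """
    if object_type == 'stationary':
        # Stationary target: soften both axes.
        return yaw_bw / factor, pitch_bw / factor
    if height >= height_threshold:
        # Tall moving target: track pan aggressively, damp pitch.
        return yaw_bw * factor, pitch_bw / factor
    # Short moving target: track both axes aggressively.
    return yaw_bw * factor, pitch_bw * factor
```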
  • FIG. 18 is a schematic flowchart of another pan/tilt control method provided by an embodiment of the present invention; on the basis of any of the above embodiments, with continued reference to FIG. 18 , the method in this embodiment may further include:
  • Step S1801 Acquire an execution operation input by a user with respect to the image capturing apparatus through a display interface.
  • Step S1802 Control the image capture device according to the execution operation, so that the image capture device determines the capture position.
  • a display interface that can be used for interactive operation by the user is preset.
  • the display interface may be a display interface on a control device of the pan/tilt head, or the display interface may be a display interface on an image acquisition device.
  • specifically, the execution operation input by the user for the image capture device can be obtained through the display interface, and the image capture device can then be controlled according to the execution operation, so that the image capture device determines, based on the execution operation, the capture position of the target object in the captured image.
  • for example, when the display interface is the display interface on the control device of the pan/tilt, the control device of the pan/tilt may be provided with an application (APP) for controlling the image acquisition device. By operating the control device of the pan/tilt, the APP can be started, and a display interface for controlling the image acquisition device can be displayed. The execution operation input by the user for the image acquisition device can then be obtained through the display interface, and the image acquisition device can be controlled according to the execution operation so that it determines the capture position; in this way, the user can control the image acquisition device through the control device of the pan/tilt.
  • for another example, when the display interface is the display interface on the image capture device, the execution operation input by the user for the image capture device can be obtained through the display interface, and the image capture device can be controlled according to the execution operation so that it determines the capture position; in this way, the user can control the image capture device directly through the image capture device.
  • in this embodiment, the execution operation input by the user for the image capture device is obtained through the display interface, and the image capture device is then controlled according to the execution operation so that it determines the capture position, thereby effectively realizing control of the image capture device and further improving the quality and effect of following the target object.
  • FIG. 19 is a schematic flowchart of another pan/tilt control method provided by an embodiment of the present invention; on the basis of any of the above embodiments, with continued reference to FIG. 19 , the method in this embodiment may further include:
  • Step S1901 Obtain distance information corresponding to the target object through a ranging sensor provided on the image acquisition device.
  • Step S1902 Send the distance information to the image acquisition device, so that the image acquisition device determines the acquisition position of the target object in the acquired image in combination with the distance information.
  • specifically, when the image acquisition device acquires the capture position of the target object in the captured image, in order to improve the accuracy of determining the capture position, a ranging sensor may be provided on the image acquisition device, and the ranging sensor can communicate with the image acquisition device through the pan/tilt.
  • in this way, the distance information corresponding to the target object can be obtained through the ranging sensor provided on the image acquisition device, and the distance information can then be sent to the image acquisition device; combined with the distance information, the image acquisition device can determine the capture position of the target object in the captured image, which effectively improves the accuracy and reliability of determining the capture position. That is, the image acquisition device fuses or calibrates the capture position obtained based on image recognition with the position derived from the distance information.
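Since the specification only says the vision-based position is fused or calibrated with the range-derived position, the sketch below uses a simple weighted average with an outlier gate; the weighting scheme, the gate, and all names are purely illustrative assumptions.

```python
def fuse_position(vision_pos, range_pos, w_vision=0.7, max_gap=50.0):
    """Fuse the image-recognition capture position with a range-derived one.

    vision_pos: position obtained by image recognition
    range_pos:  position derived from the ranging sensor's distance info
    w_vision:   weight given to the vision estimate (assumption)
    max_gap:    disagreement threshold beyond which fusion is skipped
    """
    if abs(vision_pos - range_pos) > max_gap:
        # Readings disagree strongly: trust the vision estimate alone.
        return vision_pos
    return w_vision * vision_pos + (1.0 - w_vision) * range_pos
```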
  • FIG. 20 is a schematic flowchart of another pan/tilt control method provided by an embodiment of the present invention; on the basis of any of the above embodiments, and continuing to refer to FIG. 20, the method in this embodiment may further include:
  • Step S2001 Determine a working mode corresponding to the image acquisition device, and the working mode includes any one of the following: a first-following-then-focus mode, and a first-focusing-then-following mode;
  • Step S2002 use the working mode to control the image acquisition device.
  • specifically, when the following operation is performed based on the image acquisition device, the image acquisition device may correspond to different working modes, and the working modes may include: the follow-then-focus mode and the focus-then-follow mode. The follow-then-focus mode means that when the image acquisition device needs to perform a follow operation and a focus operation, the image acquisition device may perform the follow operation first and then perform the focus operation.
  • the first focus and then follow mode means that when the image capture device needs to perform a follow operation and a focus operation, the image capture device can perform the focus operation first, and then perform the follow operation.
  • that is, when the working mode of the image capture device is the follow-then-focus mode, the composition following operation may be performed preferentially based on the captured image, and the focusing operation may then be performed on the target object that has undergone the composition following operation.
  • the working mode of the image capture device is the focus first and then follow mode, the focus operation can be performed on the target object in the captured image first, and then the composition follow operation can be performed on the target object subjected to the focus operation.
  • in some instances, an operation interface/operation control for controlling the image capture device is preset. The working mode of the image capture device can be configured/selected through the operation interface/operation control; after the configuration of the working mode is completed, the working mode corresponding to the image capture device can be determined through a working mode identifier.
  • after the working mode is determined, it can be used to control the image capture device, thereby effectively enabling the image capture device to meet the requirements of different application scenarios, and further improving the flexibility and reliability of controlling the image capture device.
  • FIG. 21 is a schematic flowchart of a control method of a pan/tilt system provided by an embodiment of the present invention; referring to FIG. 21, this embodiment provides a control method of a pan/tilt system, wherein the pan/tilt system includes: a pan/tilt and an image acquisition device.
  • the image acquisition device can be integrated on the PTZ.
• In this case, the pan-tilt and the image acquisition device provided on it can be sold or maintained as a whole.
  • the image capturing device may be separately installed on the pan/tilt, and in this case, the image capturing device and the pan/tilt may be separately sold or maintained.
  • the image acquisition device refers to a device with image acquisition capability and image processing capability, such as cameras, video cameras, other devices with image acquisition capabilities, and the like.
• The pan-tilt is provided with a universal serial bus (USB) interface, and the USB interface is used for a wired communication connection with the image acquisition device; that is, the pan-tilt is connected to the image acquisition device through the USB interface.
• When data is transmitted between the pan-tilt and the image acquisition device through the USB interface, the delay corresponding to the transmission is relatively short.
• The communication connection between the pan-tilt and the image acquisition device is not limited to the above implementations; those skilled in the art can also set it according to specific application requirements and scenarios, as long as it ensures that the delay corresponding to data transmission between the pan-tilt and the image acquisition device is relatively short, which will not be repeated here.
• The execution body of the control method of the pan-tilt system may be a control device of the pan-tilt system. It can be understood that the control device may be implemented as software, or as a combination of software and hardware; in addition, the control device can be set on the pan-tilt or on the image acquisition device.
• When the control device of the pan-tilt system is set on the image acquisition device, the pan-tilt and the image acquisition device can be an integrated product.
• When the control device executes the control method of the pan-tilt system, it can solve the problem of a poor following effect caused by the long delay generated when transmitting data through the interface, so as to ensure the quality and effect of the following operation on the target object.
  • the method may include:
  • Step S2101 Control the image acquisition device to acquire images, and acquire the acquisition position of the target object in the image, where the acquisition position is determined by the image acquisition device.
  • Step S2102 Transmit the collection position to the PTZ.
  • Step S2103 Control the pan/tilt to move according to the control parameters, so as to realize the following operation of the target object, wherein the control parameters are determined based on the collection position.
  • Step S2101 Control the image acquisition device to acquire images, and acquire the acquisition position of the target object in the image, where the acquisition position is determined by the image acquisition device.
• After the following function is enabled, the image capture device can be controlled to perform an image capture operation according to the following requirement. After the image capture device acquires the image, it can analyze and process the image to determine the capture position of the target object in the image. Specifically, the capture position of the target object in the image may include: the position of a key point corresponding to the target object in the image, the coverage area corresponding to the target object in the image, and so on.
  • Step S2102 Transmit the collection position to the PTZ.
• After the capture position of the target object in the captured image is obtained, it can be actively or passively transmitted to the pan-tilt through the USB interface, so that the pan-tilt can obtain the capture position of the target object in the image.
  • Step S2103 Control the pan/tilt to move according to the control parameters, so as to realize the following operation of the target object, wherein the control parameters are determined based on the collection position.
• After the pan-tilt obtains the capture position, the capture position can be analyzed and processed to determine the control parameters used to control the pan-tilt, and the pan-tilt can then be controlled to move according to the control parameters so as to follow the target object.
• It should be noted that the method in this embodiment may also include the methods of the embodiments shown in FIG. 2 to FIG. 20; for parts not described in detail in this embodiment, reference may be made to the relevant descriptions of the embodiments shown in FIG. 2 to FIG. 20. For the execution process and technical effects of the technical solution, refer to the descriptions in the embodiments shown in FIG. 2 to FIG. 20, which will not be repeated here.
• In the control method of the pan-tilt system provided by this embodiment, the image acquisition device is controlled to acquire an image and the capture position of the target object in the image is obtained; the capture position is then transmitted to the pan-tilt, and the pan-tilt is controlled to move according to control parameters determined based on the capture position, so that the target object can be followed.
• Since the pan-tilt can directly obtain the capture position from the image acquisition device, the delay corresponding to obtaining the capture position is effectively reduced, which solves the problem of a poor following effect caused by a long delay, further ensures the quality and effect of the following operation on the target object, and effectively improves the stability and reliability of the method.
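A minimal sketch of one iteration of steps S2101 to S2103, assuming stub Camera/Gimbal classes and a simple proportional rule for deriving control parameters (all names here are illustrative, not part of the original method):

```python
class Camera:
    """Stand-in for the image acquisition device: captures a frame and
    detects the target's normalized position in it."""
    def capture(self):
        return "frame"
    def detect_target(self, frame):
        return (0.6, 0.5)  # (x, y) in normalized image coordinates

class Gimbal:
    """Stand-in for the pan-tilt: receives the position and moves."""
    def __init__(self):
        self.position = None
        self.moved_with = None
    def receive(self, position):
        self.position = position
    def move(self, params):
        self.moved_with = params

def compute_control(position, target=(0.5, 0.5), gain=1.0):
    # Proportional control toward the composition target (illustrative).
    return (gain * (target[0] - position[0]),
            gain * (target[1] - position[1]))

def follow_step(camera, gimbal):
    frame = camera.capture()                 # Step S2101: acquire image
    position = camera.detect_target(frame)   # camera determines capture position
    gimbal.receive(position)                 # Step S2102: transmit to pan-tilt
    gimbal.move(compute_control(position))   # Step S2103: move per control params
    return gimbal.moved_with
```

The point of the sketch is the division of labor: detection runs on the camera, and only the compact position (not the image) crosses the USB link, which keeps the transmission delay short.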
  • FIG. 22 is a schematic flowchart of another pan/tilt control method provided by an embodiment of the present invention; with reference to FIG. 22 , the present embodiment provides another pan/tilt control method, and the method is suitable for a pan/tilt.
  • the PTZ is communicatively connected with the image acquisition device.
  • the execution subject of the PTZ control method may be the PTZ control device.
  • the control device can be implemented as software or a combination of software and hardware.
  • the method can include:
  • Step S2201 Acquire a captured image, where the captured image includes a target object.
  • Step S2202 Determine the position of the target object in the captured image, so as to perform a following operation on the target object based on the position of the target object.
  • Step S2203 Send the position of the target object to the image capture device, so that the image capture device determines a focus position corresponding to the target object based on the position of the target object, and performs a focus operation on the target object based on the focus position.
  • Step S2201 Acquire a captured image, where the captured image includes a target object.
• An image acquisition device is communicatively connected to the pan-tilt, and the image acquisition device can perform an image capture operation on a target object to obtain a captured image. After the captured image is obtained by the image acquisition device, it can be actively or passively transmitted to the gimbal, so that the gimbal can stably obtain the captured image.
  • Step S2202 Determine the position of the target object in the captured image, so as to perform a following operation on the target object based on the position of the target object.
  • the acquired image can be analyzed and processed to determine the position of the target object, and the acquired position of the target object is used to implement the following operation of the target object.
• In one implementation, the captured image can be displayed through a display interface, the user can input an operation on the captured image through the display interface, and the position of the target object can be determined according to that operation. That is, the user can perform a box-selection operation on the target object included in the captured image, so that the location of the target object can be determined.
  • a preset image processing algorithm may be used to automatically analyze and process the captured image to determine the position of the target object.
  • Step S2203 Send the position of the target object to the image capture device, so that the image capture device determines a focus position corresponding to the target object based on the position of the target object, and performs a focus operation on the target object based on the focus position.
• After the position of the target object is determined, it can be sent to the image acquisition device, and the image acquisition device obtains the position of the target object.
• The focus position corresponding to the target object can be determined based on the position of the target object, so that when the target object is followed, the following position used to follow the target object and the focus position corresponding to the target object are the same.
• This effectively avoids the situation in which the target object is out of focus due to an inconsistency between the focusing position and the following position, and further improves the quality and effect of the following operation on the target object.
• It should be noted that the method in this embodiment may also include the methods of the embodiments shown in FIG. 2 to FIG. 20; for parts not described in detail in this embodiment, reference may be made to the relevant descriptions of the embodiments shown in FIG. 2 to FIG. 20. For the execution process and technical effects of the technical solution, refer to the descriptions in the embodiments shown in FIG. 2 to FIG. 20, which will not be repeated here.
• In the pan-tilt control method provided by this embodiment, the captured image is acquired, the position of the target object is determined in it so as to follow the target object based on that position, and the position of the target object is then sent to the image acquisition device,
• so that the image acquisition device determines the focus position corresponding to the target object based on the position of the target object and performs a focus operation on the target object based on the focus position.
• This effectively ensures that, when the following operation is performed on the target object, the following position used for the following operation is the same as the focus position corresponding to the target object, which avoids the situation in which the target object is out of focus due to an inconsistency between the focusing position and the following position,
• thereby effectively improving the quality and effect of the following operation on the target object and further improving the stability and reliability of the method.
  • FIG. 23 is a schematic flowchart of another control method of a pan-tilt system provided by an embodiment of the present invention; with reference to FIG. 23 , the present embodiment provides a control method of a pan-tilt system, wherein the pan-tilt system includes: The PTZ and the image capture device connected to the PTZ in communication.
  • the image capture device can be integrated on the PTZ.
  • the PTZ and the image capture device installed on the PTZ can be sold or maintained as a whole.
  • the image capturing device may be separately installed on the pan/tilt, and in this case, sales or maintenance operations may be performed separately between the image capturing device and the pan/tilt.
  • the image acquisition device refers to a device with image acquisition capability and image processing capability, such as cameras, video cameras, other devices with image acquisition capabilities, and the like.
• The pan-tilt is provided with a universal serial bus (USB) interface, and the USB interface is used for a wired communication connection with the image acquisition device; that is, the pan-tilt is connected to the image acquisition device through the USB interface.
• When data is transmitted between the pan-tilt and the image acquisition device through the USB interface, the delay corresponding to the transmission is relatively short.
• The communication connection between the pan-tilt and the image acquisition device is not limited to the above implementations; those skilled in the art can also set it according to specific application requirements and scenarios, as long as it ensures that the delay corresponding to data transmission between the pan-tilt and the image acquisition device is relatively short, which will not be repeated here.
• The execution body of the control method of the pan-tilt system may be a control device of the pan-tilt system. It can be understood that the control device may be implemented as software, or as a combination of software and hardware; in addition, the control device can be set on the pan-tilt or on the image acquisition device.
• When the control device executes the control method of the pan-tilt system, it can solve the problem of a poor following effect caused by the long delay generated when transmitting data through the interface, so as to ensure the quality and effect of the following operation on the target object.
  • the method may include:
  • Step S2301 Control the image acquisition device to acquire an image, where the image includes a target object.
  • Step S2302 Determine the position of the target object in the image.
  • Step S2303 control the pan-tilt head to follow the target object based on the position of the target object, and control the image acquisition device to focus on the target object according to the position of the target object.
  • Step S2301 Control the image acquisition device to acquire an image, where the image includes a target object.
• After the following function is enabled, the image acquisition device can be controlled to perform an image capture operation according to the following requirement. After the image acquisition device acquires the image, the image can be actively or passively transmitted to the pan-tilt, so that the pan-tilt can obtain the image.
  • Step S2302 Determine the position of the target object in the image.
  • the image can be analyzed and processed to determine the acquisition position of the target object in the image.
  • the collection position of the target object in the image may include: the position of the key point corresponding to the target object in the image, or the coverage area corresponding to the target object in the image, and so on.
  • Step S2303 control the pan-tilt head to follow the target object based on the position of the target object, and control the image acquisition device to focus on the target object according to the position of the target object.
• After the position of the target object in the image is obtained, the pan-tilt can be controlled to follow the target object based on that position.
• In addition, the focus position corresponding to the target object can also be determined based on the position of the target object.
• The following position can be the same as the focus position corresponding to the target object, so that when the target object is followed, the following position used to follow the target object and the focus position corresponding to the target object are the same,
• which effectively avoids the situation in which the target object is out of focus due to an inconsistency between the focus position and the following position, and further improves the quality and effect of the following operation on the target object.
• It should be noted that the method in this embodiment may also include the methods of the embodiments shown in FIG. 2 to FIG. 20; for parts not described in detail in this embodiment, reference may be made to the relevant descriptions of the embodiments shown in FIG. 2 to FIG. 20. For the execution process and technical effects of the technical solution, refer to the descriptions in the embodiments shown in FIG. 2 to FIG. 20, which will not be repeated here.
• In the control method of the pan-tilt system provided by this embodiment, the image acquisition device is controlled to acquire an image, the position of the target object is determined in the image, the pan-tilt is then controlled to follow the target object based on that position, and
• the image acquisition device is controlled to focus on the target object according to the same position. This effectively ensures that, when the target object is followed, the following position used to follow the target object is the same as the focus position corresponding to the target object, which avoids the situation in which the target object is out of focus due to an inconsistency between the focusing position and the following position, thereby effectively improving the quality and effect of the following operation on the target object and further improving the stability and reliability of the method.
• FIG. 24 is a schematic flowchart of another control method of a pan-tilt system provided by an embodiment of the present invention. With reference to FIG. 24, the present embodiment provides another control method of a pan-tilt system, wherein the pan-tilt system includes: a pan-tilt and an image acquisition device communicatively connected to the pan-tilt.
  • the execution subject of the control method of the PTZ system can be the control device of the PTZ system. It is understood that the control device of the PTZ system can be implemented as software, Or a combination of software and hardware; in addition, the control device of the PTZ system can be set on the PTZ or the image acquisition device. When the control device of the PTZ system is set on the image acquisition device, the PTZ and the image acquisition device can be integrated.
  • the method in this embodiment may further include:
  • Step S2401 Acquire a capture position of the first object in the captured image, where the capture position of the first object is used for the pan/tilt head to follow the first object, and for the image capture device to focus on the first object.
• Step S2402: When the first object is changed to a second object, obtain the capture position of the second object in the captured image, so that the pan-tilt changes, based on the capture position of the second object, from following the first object to following the second object,
• and the image capture device changes from focusing on the first object to focusing on the second object based on the position of the second object.
  • Step S2401 Acquire a capture position of the first object in the captured image, where the capture position of the first object is used for the pan/tilt head to follow the first object, and for the image capture device to focus on the first object.
  • an image acquisition operation may be performed for the first object through the image acquisition device, so that an acquired image including the first object can be obtained.
• After the captured image is obtained, it can be analyzed and processed to determine the capture position of the first object in the captured image. The determined capture position of the first object in the captured image is used by the pan-tilt to follow the first object,
• and is also used by the image capture device to focus on the first object.
• It should be noted that the execution subject for determining the capture position of the first object in the captured image may be the image capture device or the pan-tilt.
• Step S2402: When the first object is changed to a second object, obtain the capture position of the second object in the captured image, so that the pan-tilt changes, based on the capture position of the second object, from following the first object to following the second object,
• and the image capture device changes from focusing on the first object to focusing on the second object based on the position of the second object.
• While the following operation is performed on the first object, the followed object may be changed; that is, the first object may be changed to a second object.
• After the followed object is changed, the capture position of the second object in the captured image can be acquired, and the pan-tilt can then be controlled based on that capture position, thereby effectively realizing that
• the pan-tilt changes, based on the capture position of the second object, from following the first object to following the second object.
• The acquired capture position of the second object in the captured image can also be used by the image acquisition device for the focusing operation.
• Specifically, the image acquisition device can change from focusing on the first object to focusing on the second object based on the second object's position, so that when the second object is followed, the following position used to follow the second object is the same as the focus position corresponding to the second object. This effectively avoids the situation in which the second object is out of focus due to an inconsistency between the focusing position and the following position, and further improves the quality and effect of the following operation on the second object.
• The implementation of acquiring the capture position of the second object in the captured image is similar to the above-mentioned implementation of acquiring the capture position of the first object, and will not be repeated here.
• It should be noted that the method in this embodiment may also include the methods of the embodiments shown in FIG. 2 to FIG. 20; for parts not described in detail in this embodiment, reference may be made to the relevant descriptions of the embodiments shown in FIG. 2 to FIG. 20. For the execution process and technical effects of the technical solution, refer to the descriptions in the embodiments shown in FIG. 2 to FIG. 20, which will not be repeated here.
• In the control method of the pan-tilt system provided by this embodiment, when the first object is changed to a second object, the capture position of the second object in the captured image can be acquired, so that the pan-tilt changes from following the first object to following the second object based on that capture position, and the image acquisition device changes from focusing on the first object to focusing on the second object based on the second object's position.
• This effectively ensures that, when the followed object is changed from the first object to the second object, the following operation can be performed on the second object; and when the second object is followed, by ensuring that
• the following position used for the following operation on the second object is the same as the focus position corresponding to the second object, the situation in which the second object is falsely focused due to an inconsistency between the focusing position and the following position is effectively avoided, thereby
• improving the quality and effect of the following operation on the second object and improving the stability and reliability of the method.
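The object-switching behavior can be sketched as follows; the coordinator class and identifiers are hypothetical, and the point is only that following and focusing are driven by the same capture position, so a change of object moves both together:

```python
class FollowFocusCoordinator:
    """Keep the follow position and the focus position consistent when the
    followed object changes from a first object to a second one."""
    def __init__(self):
        self.target_id = None
        self.follow_position = None
        self.focus_position = None

    def update(self, detected_id, capture_position):
        # True when the followed object has changed since the last update.
        switched = (self.target_id is not None
                    and detected_id != self.target_id)
        self.target_id = detected_id
        # Both operations consume the same capture position, so the
        # focus position can never drift away from the follow position.
        self.follow_position = capture_position
        self.focus_position = capture_position
        return switched
```

Because one position feeds both the pan-tilt and the camera, the out-of-focus case the text describes (focus position inconsistent with follow position) cannot arise in this sketch.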
• In a specific application, the present invention provides an intelligent following method based on a camera (which can be a third-party camera set on the gimbal, or a camera integrated on the gimbal).
• The system executing the method can include a camera and a gimbal.
• The method in this embodiment includes the following steps:
  • Step 1 Camera plane bias prediction.
• The camera exposure timestamp of the current image frame is directly obtained by the camera and can be denoted t_n.
• The camera sends the detection information of the current image frame (which may include the coordinate information of the target object in the image frame) to the gimbal; the timestamp at which the gimbal receives this detection information is t_{n+1}.
• Similarly, the timestamp at which the gimbal received the detection information corresponding to the previous image frame is t_{n-1}.
• There is therefore a deviation between the time at which the camera obtains the position of the target object in the current image frame and the time at which the gimbal receives the detection information. To ensure the quality and effect of the following operation on the target object, the influence of this link delay on the intelligent following operation must be considered. Specifically, the following steps can be performed:
  • Step 1.1 Obtain the link delay corresponding to the communication link formed by the camera and the gimbal.
  • Step 1.2 Based on the current image frame, obtain the acquisition position of the target object in the current image frame.
• After obtaining the current image frame, the camera may analyze and process it to determine the capture position of the target object in the current image frame as (x_n, y_n).
• Step 1.3: Based on the capture position of the target object in the current image frame, determine the current position prediction value (x̂_n, ŷ_n) corresponding to the capture position. Specifically, this can be achieved based on the following formula:

  x̂_n = x_n + (x_n − x_{n−1}) · Δt / (t_n − t_{n−1})
  ŷ_n = y_n + (y_n − y_{n−1}) · Δt / (t_n − t_{n−1})

• where Δt is the link delay corresponding to the communication link formed by the camera and the gimbal, t_n is the camera exposure timestamp corresponding to the current image frame, t_{n−1} is the timestamp at which the gimbal received the detection information of the previous image frame, and (x_{n−1}, y_{n−1}) is the capture position of the target object in the previous image frame.
• Step 1.4: Based on the current position prediction value (x̂_n, ŷ_n) corresponding to the capture position, determine the camera-plane deviation.
• The camera-plane deviation is the normalized coordinate-value deviation, denoted e_x and e_y.
• After the composition target is obtained, it can be denoted (tgt_x, tgt_y), and the camera-plane deviation is then determined based on the composition target and the current position prediction value.
• Specifically, the camera-plane deviation can be obtained based on the following formula:

  e_x = tgt_x − x̂_n
  e_y = tgt_y − ŷ_n
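Steps 1.3 and 1.4 can be sketched together; the constant-velocity linear extrapolation over the link delay is an assumption about the prediction rule, since the original formulas are given only as figures:

```python
def predict_position(curr, prev, t_n, t_prev, link_delay):
    """Step 1.3 sketch: estimate velocity from the last two detections and
    extrapolate the capture position forward over the link delay."""
    dt = t_n - t_prev
    vx = (curr[0] - prev[0]) / dt
    vy = (curr[1] - prev[1]) / dt
    return (curr[0] + vx * link_delay, curr[1] + vy * link_delay)

def plane_deviation(predicted, composition_target):
    """Step 1.4 sketch: normalized camera-plane deviation (e_x, e_y)
    between the composition target and the predicted position."""
    return (composition_target[0] - predicted[0],
            composition_target[1] - predicted[1])
```

For example, a target moving right at 1.0 frame-widths per second with a 50 ms link delay is predicted 0.05 further right than its last detection before the deviation is computed.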
  • Step 2 Perform a coordinate transformation operation on the camera plane deviation, and determine the deviation angle used to follow the target object.
  • Step 2.1 Obtain the actual image field of view fov information of the camera and the current attitude information of the gimbal.
• In this embodiment, the focal length information of the camera can be obtained first, and the actual picture field-of-view (fov) information of the camera can then be determined based on the focal length information. It should be noted that the focal length information can be obtained directly from the camera, or can be configured by the user based on specific application scenarios and requirements.
• Step 2.2: Convert the camera-plane deviation to the NED (north-east-down) geodetic coordinate system according to the actual picture field-of-view (fov) information and the current attitude information, so as to obtain the deviation angle.
• The camera coordinate system can be denoted as the b system, and the NED coordinate system can be denoted as the n system.
• In this embodiment, the per-axis deviation angle in the camera coordinate system can be obtained by the following formula:

  E_x = e_x · FOV_x
  E_y = e_y · FOV_y
  E_z = 0

• where e_x and e_y are the normalized coordinate-value deviations on the camera plane, FOV_x and FOV_y are the fov angles corresponding to the camera in the horizontal (x-axis) and vertical (y-axis) directions respectively, and E_x, E_y and E_z are the deviation angles corresponding to each axis in the camera coordinate system. The matrix representation is E_b = [E_x, E_y, E_z]^T.
• The attitude of the gimbal can be measured through the IMU to obtain the corresponding rotation matrix R_b^n, and the angle deviation in the NED coordinate system can then be obtained according to the following formula:

  E_n = R_b^n · E_b

• where E_n is the deviation angle corresponding to the geodetic coordinate system NED, R_b^n is the rotation matrix corresponding to the gimbal attitude, and E_b is the corresponding deviation angle in the camera coordinate system.
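The plane-deviation-to-NED conversion can be sketched as follows, assuming the per-axis camera-frame angles are e·FOV (a small-angle treatment) and that the IMU supplies the body-to-NED rotation matrix; the helper names are illustrative:

```python
def mat_vec(R, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

def deviation_to_ned(e_x, e_y, fov_x, fov_y, R_bn):
    """Scale the normalized plane deviation by the fov to get camera-frame
    deviation angles E_b, then rotate into NED: E_n = R_b^n · E_b."""
    E_b = (e_x * fov_x, e_y * fov_y, 0.0)
    return mat_vec(R_bn, E_b)
```

With an identity attitude matrix the NED deviation equals the camera-frame deviation; a rotated gimbal redistributes the same angular error across the yaw/pitch/roll axes.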
• The gimbal may correspond to different following modes; the following modes may include a single-axis following mode, a two-axis following mode and a three-axis following mode, and different following modes may correspond to different deviation angles.
• In the single-axis following mode, the obtained deviation angle can correspond to a single axis of the gimbal;
• for example, the deviation angle corresponds to the yaw axis, and the deviation angles corresponding to the other two axes are set to zero.
• In the two-axis following mode, the obtained deviation angle can correspond to two axes of the gimbal; for example, the deviation angle corresponds to the yaw axis and the pitch axis, and the deviation angle corresponding to the other axis is set to zero.
• In the three-axis following mode, the obtained deviation angle can correspond to the three axes of the gimbal; for example, the deviation angle corresponds to the yaw axis, the pitch axis and the roll axis.
  • Step 3 Control the gimbal based on the deviation angle so as to follow the target object.
• Specifically, a gimbal controller may be provided on the gimbal, and the gimbal controller may include three proportional-integral-derivative (PID) controllers; referring to the corresponding figure, the structure includes, among others, a position-loop PID controller and a velocity-loop PID controller.
• After the deviation angle E_n is obtained, it can be input into the PID controllers, so as to obtain the control parameters for controlling the rotation of the gimbal motors.
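A minimal discrete PID step, as a sketch of one of the cascaded controllers; the gains, sample time and loop structure here are illustrative assumptions, not the patent's specific controller design:

```python
class PID:
    """Minimal discrete PID controller: u = kp*e + ki*∫e dt + kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        # Accumulate the integral term and approximate the derivative.
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a cascaded arrangement, the output of the position-loop controller (fed with the deviation angle) would become the setpoint of the velocity-loop controller, whose output drives the motor.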
• Step 4: Gimbal intelligent following strategy.
• In this embodiment, the gimbal intelligent following strategy can include the following three aspects: a slow start-stop strategy for followed and lost objects, adjusting the gimbal controller for different object following speeds, and determining the focus offset according to the historical focus position.
• Step 4.1: Slow start-stop strategy for followed targets and lost targets.
• In this embodiment, the slow start-stop strategy is a uniform acceleration strategy or a uniform deceleration strategy, and the acceleration/deceleration time threshold is set to T.
• With E_n known as the deviation angle of each axis in the NED coordinate system, the actual deviation angle Ê_n output to the gimbal controller can be obtained. Specifically, the actual deviation angle output to the gimbal controller can be obtained based on the following formula:

  Ê_n = E_n · t / T, when t < T;  Ê_n = E_n, when t ≥ T

• where t is the duration for which the target object has been followed and T is the preset time threshold.
• The user can set the specific length of the preset time threshold according to the specific application scenario and requirements; in general, T can be 0.5 s or 1 s.
• When t < T, the actual deviation angle Ê_n is a transition parameter between 0 and the deviation angle E_n, so that a slow start of the following operation on the target object can be realized.
• When the duration of the following operation on the target object is greater than or equal to the preset time threshold, the actual deviation angle Ê_n can be determined as the deviation angle E_n; that is, the target object can be followed stably.
  • T is the preset time threshold.
  • the user can set the specific length of the preset time threshold according to specific application scenarios and application requirements; in general, T can be 1s, 1.5s or 2s.
  • multiplying the ratio of t to T by the deviation angle En yields the actual deviation angle.
  • the actual deviation angle is thus a transition parameter between 0 and the deviation angle En, so that a slow start of following the target object can be realized.
  • when the duration of the following operation on the target object is greater than or equal to the preset time threshold, the actual deviation angle can be determined as the deviation angle En; that is, the target object can be followed stably.
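The slow start-stop scaling described above can be sketched as follows (a minimal illustration; the linear ratio t/T follows from the uniform acceleration/deceleration strategy, and the function name is illustrative):

```python
def ramp_deviation(e_n, t, T):
    """Scale the deviation angle En for a slow start of following.

    t: duration for which the target has been followed; T: preset threshold.
    """
    if t >= T:
        return e_n              # ramp finished: follow the target stably
    return (t / T) * e_n        # transition value between 0 and En
```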
  • Step 4.2: Adjust the gimbal controller according to the following speeds of different objects.
  • the gimbal can adjust the gimbal controller according to the different types of objects it follows, which can include the following categories:
  • Step 4.3 Determine the focus offset according to the historical focus position.
  • the historical focus position corresponding to a historical image frame and the current focus position corresponding to the current image frame may be different.
  • the focus offset between the current focus position and the historical focus position can therefore be obtained, and the target focus position corresponding to the target object can be determined based on the focus offset.
  • when the focus offset is less than or equal to the preset threshold, it means that the current focus position is relatively close to the historical focus position, and the current focus position can be determined as the target focus position corresponding to the target object.
  • when the focus offset is greater than the preset threshold, it means that the current focus position is relatively far from the historical focus position; the current focus position can then be adjusted based on the focus offset to obtain the target focus position corresponding to the target object.
  • when the focus offset is greater than the preset threshold, it means that the current focus position is relatively far from the historical focus position.
  • in this case, it can be detected whether the target object has changed; after the target object has changed, the composition target position can be updated based on the changed target object,
  • and the updated target position is obtained, so that the gimbal is controlled based on the updated target position to realize the following operation on the changed target object.
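A minimal sketch of the focus-offset decision described above (the blending factor `alpha` is an assumption; the text only states that the current focus position is adjusted based on the focus offset):

```python
def target_focus_position(current, historical, threshold, alpha=0.5):
    """Pick the target focus position from the current and historical ones.

    alpha is an assumed smoothing factor; the source only says the current
    position is "adjusted based on the focus offset".
    """
    offset = abs(current - historical)
    if offset <= threshold:
        return current          # positions are close: keep the current focus
    # positions are far apart: blend toward the historical position
    return current + alpha * (historical - current)
```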
  • the camera-based intelligent following method provided by the embodiments of this application effectively solves the following problems: (1) the poor real-time following effect caused by the long delay of HDMI transmission; (2) the extra development cost of AI machine-learning algorithms and hardware design incurred when the gimbal itself implements target following; (3) the jump of the coordinate point caused by a change in the type of the followed target; (4) the inconsistency between the focus point and the follow point, ensuring that the followed target will not go out of focus. The quality and effect of the following operation on the target object are thus further ensured, and the stability and reliability of the method are effectively improved.
  • FIG. 27 is a schematic structural diagram of a control device for a pan/tilt according to an embodiment of the present invention.
  • the present embodiment provides a control device for a pan/tilt, wherein the pan/tilt is communicatively connected with an image acquisition device , the control device of the pan/tilt can execute the control method of the pan/tilt corresponding to FIG. 2 .
  • the apparatus in this embodiment may include:
  • a first memory 12 for storing computer programs
  • the first processor 11 is used for running the computer program stored in the first memory 12 to realize:
  • the acquisition position is determined by the image acquisition device
  • the PTZ is controlled according to the control parameters to realize the following operation of the target object.
  • the structure of the control device of the PTZ may further include a first communication interface 13 for the electronic device to communicate with other devices or a communication network.
  • when the first processor 11 acquires the acquisition position of the target object in the acquired image, the first processor 11 is configured to: acquire the target focus position corresponding to the target object through the image acquisition device; and determine, based on the target focus position, the acquisition position of the target object in the acquired image.
  • the first processor 11 when the first processor 11 obtains the target focus position corresponding to the target object through the image capture device, the first processor 11 is configured to: obtain the historical focus position corresponding to the target object through the image capture device and The current focus position; the target focus position corresponding to the target object is determined based on the historical focus position and the current focus position.
  • when the first processor 11 determines the target focus position corresponding to the target object based on the historical focus position and the current focus position,
  • the first processor 11 is configured to: determine the historical object part corresponding to the historical focus position and the current object part corresponding to the current focus position; and determine, according to the historical object part and the current object part, the target focus position corresponding to the target object.
  • when the first processor 11 determines the target focus position corresponding to the target object according to the historical object part and the current object part,
  • the first processor 11 is configured to: when the historical object part and the current object part are the same part of the target object,
  • obtain the relative position information between the historical object part and the current object part; and adjust the current focus position based on the relative position information to obtain the target focus position corresponding to the target object.
  • the first processor 11 when the first processor 11 adjusts the current focus position based on the relative position information to obtain the target focus position corresponding to the target object, the first processor 11 is configured to: when the relative position information is greater than or equal to the preset When the threshold is set, the current focus position is adjusted based on the relative position information to obtain the target focus position corresponding to the target object; when the relative position information is less than the preset threshold, the current focus position is determined as the target focus corresponding to the target object Location.
  • when the first processor 11 determines the target focus position corresponding to the target object according to the historical object part and the current object part, the first processor 11 is configured to: when the historical object part and the current object part are different parts of the target object, update the composition target position based on the current focus position to obtain the first updated composition target position; and follow the target object based on the first updated composition target position.
  • the first processor 11 is used to: detect whether the target object performing the following operation changes; when the target object changes from the first object to the second object, obtain the acquisition position of the second object in the acquired image; The composition target position is updated based on the collection position of the second object in the captured image, and the second updated composition target position corresponding to the second object is obtained, so as to follow the second object based on the second updated composition target position .
  • when the first processor 11 determines, based on the acquisition position, a control parameter for performing a following operation on the target object,
  • the first processor 11 is configured to: calculate a current position prediction value corresponding to the acquisition position; and determine, based on the current position prediction value, the control parameters used to follow the target object.
  • when the first processor 11 calculates the predicted value of the current position corresponding to the acquisition position, the first processor 11 is configured to: determine a delay time corresponding to the acquisition position, where the delay time is used to indicate the duration required for the pan/tilt to obtain the acquisition position via the image acquisition device; and determine, based on the delay time and the acquisition position, the current position prediction value corresponding to the acquisition position.
  • when the first processor 11 determines the delay time corresponding to the acquisition position,
  • the first processor 11 is configured to: acquire the exposure time corresponding to the acquired image; when the pan/tilt obtains the current acquisition position, determine the current reception time corresponding to the current acquisition position; and determine the time interval between the current reception time and the exposure time as the delay time corresponding to the acquisition position.
  • when the first processor 11 determines a current position prediction value corresponding to the acquisition position based on the delay time and the acquisition position,
  • the first processor 11 is configured to: when the pan/tilt obtains the previous acquisition position, determine the previous reception time corresponding to the previous acquisition position; determine the previous position prediction value corresponding to the previous acquisition position; and calculate the current position prediction value corresponding to the acquisition position according to the acquisition position, the exposure time, the delay time, the previous reception time and the previous position prediction value.
  • when the first processor 11 calculates the current position prediction value corresponding to the acquisition position based on the acquisition position, the exposure time, the delay time, the previous reception time and the previous position prediction value,
  • the first processor 11 is configured to: determine the position adjustment value corresponding to the acquisition position based on the acquisition position, the exposure time, the delay time, the previous reception time and the previous position prediction value; and determine the sum of the position adjustment value and the acquisition position as the current position prediction value corresponding to the acquisition position.
  • when the first processor 11 determines the position adjustment value corresponding to the acquisition position based on the acquisition position, the exposure time, the delay time, the previous reception time and the previous position prediction value, the first processor 11 is configured to: determine the moving speed corresponding to the target object based on the acquisition position, the previous position prediction value, the exposure time and the previous reception time; and determine the product value between the moving speed and the time interval as the position adjustment value corresponding to the acquisition position.
  • when the first processor 11 determines the moving speed corresponding to the target object based on the acquisition position, the previous position prediction value, the exposure time and the previous reception time,
  • the first processor 11 is configured to: acquire the position difference value between the acquisition position and the previous position prediction value, and the time difference value between the exposure time and the previous reception time; and determine the ratio between the position difference value and the time difference value as the moving speed corresponding to the target object.
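The delay-compensation steps above (delay = reception time minus exposure time; speed = position difference over time difference; prediction = acquisition position plus speed times the delay interval) can be sketched as:

```python
def predict_current_position(p_meas, t_expose, t_recv, t_prev_recv, p_prev_pred):
    """Predict where the target is now, compensating the transmission delay.

    p_meas is the acquisition position for the frame exposed at t_expose and
    received by the gimbal at t_recv; the delay is t_recv - t_expose.
    """
    delay = t_recv - t_expose
    # movement speed: position difference over time difference
    speed = (p_meas - p_prev_pred) / (t_expose - t_prev_recv)
    adjustment = speed * delay      # position adjustment value
    return p_meas + adjustment      # current position prediction value
```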
  • when the first processor 11 determines a control parameter for performing a following operation on the target object based on the predicted value of the current position,
  • the first processor 11 is configured to: determine the position deviation between the predicted value of the current position and the composition target position; and determine, based on the position deviation, the control parameters used to follow the target object.
  • when the first processor 11 determines, based on the position deviation, a control parameter for performing a follow-up operation on the target object,
  • the first processor 11 is configured to: acquire the picture field of view angle corresponding to the captured image; and determine, according to the picture field of view angle and the position deviation, the control parameters used to follow the target object,
  • wherein the control parameter is inversely related to the picture field of view angle.
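One possible realization of the inverse relation between the control parameter and the picture field of view is a gain proportional to the reciprocal of the field of view; the gain constant `k` and the exact mapping are assumptions, not specified in the text:

```python
def follow_control_param(position_deviation, fov_deg, k=1.0):
    """Control parameter for following; inversely related to the field of view."""
    return k * position_deviation / fov_deg
```

With a fixed position deviation, a wider field of view thus yields a smaller control parameter, as the text requires.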
  • when the first processor 11 determines a control parameter for performing a following operation on the target object based on the predicted value of the current position,
  • the first processor 11 is configured to: obtain a following mode corresponding to the pan/tilt, where the following
  • mode includes any one of the following: single-axis following mode, dual-axis following mode, and full-following mode; and determine, based on the predicted value of the current position and the following mode, the control parameters used to follow the target object.
  • when the first processor 11 determines, based on the current position prediction value and the following mode, the control parameters for performing the following operation on the target object,
  • the first processor 11 is configured to: determine, based on the current position prediction value, the candidate control parameters for the following operation on the target object; and determine, among the candidate control parameters, the target control parameters corresponding to the following mode.
  • when the first processor 11 determines the target control parameters corresponding to the following mode among the candidate control parameters,
  • the first processor 11 is configured to: when the following mode is the single-axis following mode, determine, among the candidate control parameters, the single-axis control parameter corresponding to the single-axis following mode, and set the other candidate control parameters to zero; when the following mode is the dual-axis following mode, determine, among the candidate control parameters,
  • the two-axis control parameters corresponding to the dual-axis following mode, and set the other candidate control parameters to zero; when the following mode is the full-following mode, determine the candidate control parameters as the three-axis control parameters corresponding to the full-following mode.
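The mode-dependent zeroing of candidate control parameters can be sketched as follows; which axes belong to the single-axis and dual-axis modes is an illustrative assumption:

```python
def apply_follow_mode(params, mode):
    """Zero the candidate control parameters the follow mode does not use.

    params: candidate parameters as (yaw, pitch, roll); the axis-to-mode
    assignment here is illustrative.
    """
    yaw, pitch, roll = params
    if mode == "single":            # single-axis: e.g. yaw only
        return (yaw, 0.0, 0.0)
    if mode == "dual":              # dual-axis: e.g. yaw and pitch
        return (yaw, pitch, 0.0)
    return (yaw, pitch, roll)       # full-follow: all three axes
```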
  • when the first processor 11 controls the pan/tilt according to the control parameters, the first processor 11 is configured to: acquire a pan/tilt motion model corresponding to the target object; and control the pan/tilt based on the pan/tilt motion model and the control parameters.
  • the first processor 11 when the first processor 11 controls the pan/tilt based on the motion model of the pan/tilt and the control parameters, the first processor 11 is configured to: acquire the duration information corresponding to the following operation on the target object; When the duration information is less than the first time threshold, the control parameters are updated based on the motion model of the gimbal, the updated control parameters are obtained, and the gimbal is controlled based on the updated control parameters; when the duration information is greater than or equal to the first time threshold, And when the motion model of the gimbal is uniform acceleration motion, the control parameters are used to control the gimbal.
  • the first processor 11 when the first processor 11 updates the control parameters based on the pan-tilt motion model and obtains the updated control parameters, the first processor 11 is configured to: determine the corresponding control parameters based on the pan-tilt motion model. update coefficient, wherein the update coefficient is less than 1; the product value of the update coefficient and the control parameter is determined as the updated control parameter.
  • when the first processor 11 determines the update coefficient corresponding to the control parameter based on the motion model of the pan/tilt,
  • the first processor 11 is configured to: when the motion model of the pan/tilt is uniform acceleration motion, determine the ratio of the duration information to the first time threshold as the update coefficient corresponding to the control parameter.
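A minimal sketch of the start-up update coefficient (the ratio of the follow duration to the first time threshold, applied only while the duration is below the threshold):

```python
def updated_control(control, t_follow, t_threshold):
    """During uniform-acceleration start-up, scale control by t/T (< 1)."""
    if t_follow >= t_threshold:
        return control                  # past the threshold: use as-is
    coeff = t_follow / t_threshold      # update coefficient, less than 1
    return coeff * control
```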
  • the first processor 11 when the first processor 11 controls the pan/tilt according to the control parameters, the first processor 11 is configured to: acquire a follow state corresponding to the target object; control the pan/tilt based on the follow state and the control parameter .
  • the first processor 11 when the first processor 11 acquires the following state corresponding to the target object, the first processor 11 is configured to: detect whether the target object performing the following operation has changed; when the target object is changed from the first object to When the second object is used, it is determined that the first object is in a lost state.
  • the first processor 11 when the first processor 11 controls the pan/tilt based on the follow state and the control parameter, the first processor 11 is configured to: when the target object is in the lost state, obtain the process of the follow operation on the target object. Corresponding loss duration information; update the control parameters according to the lost duration information to obtain the updated control parameters; control the PTZ based on the updated control parameters.
  • when the first processor 11 updates the control parameters according to the loss duration information and obtains the updated control parameters, the first processor 11 is configured to: when the loss duration information is greater than or equal to the second time threshold, update the control parameter to zero; when the loss duration information is less than the second time threshold, obtain the ratio between the loss duration information and the second time threshold, determine the difference between 1 and the ratio as the update coefficient corresponding to the control parameter, and determine the product of the update coefficient and the control parameter as the updated control parameter.
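The lost-target decay described above can be sketched as (coefficient = 1 minus the ratio of the lost duration to the second time threshold):

```python
def lost_target_control(control, t_lost, t_threshold):
    """Decay the control parameter smoothly after the target is lost."""
    if t_lost >= t_threshold:
        return 0.0                          # stop following entirely
    coeff = 1.0 - t_lost / t_threshold      # update coefficient in (0, 1)
    return coeff * control
```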
  • the first processor 11 when the first processor 11 controls the pan-tilt according to the control parameters, the first processor 11 is configured to: acquire the object type of the target object; and control the pan-tilt according to the object type and the control parameters.
  • the first processor 11 when the first processor 11 controls the pan/tilt according to the object type and the control parameters, the first processor 11 is configured to: adjust the control parameters according to the object type, and obtain the adjusted parameters; based on the adjusted parameters Control the PTZ.
  • when the first processor 11 adjusts the control parameters according to the object type and obtains the adjusted parameters, the first processor 11 is configured to: when the target object is a stationary object, reduce the control bandwidth corresponding to the gimbal in the yaw direction and the control bandwidth corresponding to the gimbal in the pitch direction; when the target object is a moving object and the height of the moving object is greater than or equal to the height threshold, increase the control bandwidth corresponding to the gimbal in the yaw direction.
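An illustrative sketch of the object-type bandwidth adjustment; the scale factor and height threshold values are assumptions:

```python
def adjust_bandwidth(bw_yaw, bw_pitch, moving, height=0.0,
                     height_thresh=1.5, scale=0.5):
    """Adjust yaw/pitch control bandwidth by object type.

    The scale factor and height threshold are illustrative values only.
    """
    if not moving:
        # stationary object: reduce both yaw and pitch bandwidth
        return bw_yaw * scale, bw_pitch * scale
    if height >= height_thresh:
        # tall moving object: increase yaw bandwidth for faster tracking
        return bw_yaw / scale, bw_pitch
    return bw_yaw, bw_pitch
```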
  • the first processor 11 is configured to: acquire the execution operation input by the user on the image capturing apparatus through the display interface; and control the image capturing apparatus according to the execution operation, so that the image capturing apparatus determines the capturing position.
  • the first processor 11 is configured to: acquire distance information corresponding to the target object through a ranging sensor disposed on the image acquisition device; send the distance information to the image acquisition device, so that the image acquisition device combines the distance The information determines the acquisition position of the target object in the acquired image.
  • the first processor 11 is configured to: determine a working mode corresponding to the image capture device, where the working mode includes any one of the following: a follow-then-focus mode and a focus-then-follow mode; and control the image capture device using the working mode.
  • the PTZ is provided with a universal serial bus (USB) interface, and the USB interface is used for a wired communication connection with the image capture device.
  • the apparatus shown in FIG. 27 may execute the methods of the embodiments shown in FIGS. 2 to 20 and FIGS. 25 to 26 .
  • for the parts not described in detail in this embodiment, reference may be made to the descriptions of the embodiments shown in FIGS. 2 to 20 and FIGS. 25 to 26, and details are not repeated here.
  • FIG. 28 is a schematic structural diagram of a control device of a pan-tilt system provided by an embodiment of the present invention.
  • the present embodiment provides a control device of a pan-tilt system, wherein the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected with the pan-tilt; the control device of the pan-tilt system can execute the control method of the pan-tilt system corresponding to FIG. 21.
  • the apparatus in this embodiment may include:
  • the second processor 21 is used for running the computer program stored in the second memory 22 to realize:
  • the control pan/tilt moves according to the control parameters, so as to realize the following operation of the target object, wherein the control parameters are determined based on the collection position.
  • the structure of the control device of the pan-tilt system may also include a second communication interface 23 for the electronic device to communicate with other devices or a communication network.
  • the apparatus shown in FIG. 28 may execute the method of the embodiment shown in FIG. 21 and FIG. 25 to FIG. 26 .
  • for the parts not described in detail in this embodiment, and for the execution process and technical effects of the technical solution, refer to the descriptions in the embodiments shown in FIG. 21 and FIGS. 25 to 26; details are not repeated here.
  • FIG. 29 is a schematic structural diagram of another pan/tilt control device provided by an embodiment of the present invention.
  • the present embodiment provides a pan/tilt control device, which is applied to a pan/tilt communicatively connected with an image acquisition device.
  • the apparatus in this embodiment may include:
  • the third processor 31 is used for running the computer program stored in the third memory 32 to realize:
  • the position of the target object is sent to the image acquisition device, so that the image acquisition device determines a focus position corresponding to the target object based on the position of the target object, and performs a focus operation on the target object based on the focus position.
  • the structure of the control device of the PTZ may further include a third communication interface 33 for the electronic device to communicate with other devices or a communication network.
  • the apparatus shown in FIG. 29 may execute the method of the embodiment shown in FIG. 22 and FIG. 25 to FIG. 26 .
  • for the parts not described in detail in this embodiment, and for the execution process and technical effects of the technical solution, refer to the descriptions in the embodiments shown in FIG. 22 and FIGS. 25 to 26; details are not repeated here.
  • FIG. 30 is a schematic structural diagram of a control device of another pan-tilt system provided by an embodiment of the present invention.
  • the present embodiment provides another control device of a pan-tilt system, wherein the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected with the pan-tilt; the control device of the pan-tilt system can execute the control method of the pan-tilt system corresponding to FIG. 23.
  • the apparatus in this embodiment may include:
  • a fourth memory 42 for storing computer programs
  • the fourth processor 41 is used for running the computer program stored in the fourth memory 42 to realize:
  • the pan/tilt is controlled to follow the target object, and the image acquisition device is controlled to focus on the target object according to the position of the target object.
  • the structure of the control device of the pan-tilt system may further include a fourth communication interface 43 for the electronic device to communicate with other devices or a communication network.
  • the apparatus shown in FIG. 30 may execute the method of the embodiment shown in FIG. 23 and FIG. 25 to FIG. 26 .
  • for the parts not described in detail in this embodiment, and for the execution process and technical effects of the technical solution, refer to the descriptions in the embodiments shown in FIG. 23 and FIGS. 25 to 26; details are not repeated here.
  • FIG. 31 is a schematic structural diagram of a control device of another pan-tilt system provided by an embodiment of the present invention.
  • the present embodiment provides another control device of a pan-tilt system, wherein the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected with the pan-tilt; the control device of the pan-tilt system can execute the control method of the pan-tilt system corresponding to FIG. 24.
  • the apparatus in this embodiment may include:
  • the fifth processor 51 is used for running the computer program stored in the fifth memory 52 to realize:
  • the collection position of the first object is used for the pan-tilt head to follow the first object, and for the image collection device to perform a focusing operation on the first object;
  • the acquisition position of the second object in the acquired image is acquired, so that the pan/tilt changes from following the first object to following the second object based on the acquisition position of the second object operation, and causing the image capture device to change from focusing on the first object to focusing on the second object based on the position of the second object.
  • the structure of the control device of the pan-tilt system may further include a fifth communication interface 53 for the electronic device to communicate with other devices or a communication network.
  • the apparatus shown in FIG. 31 can execute the methods of the embodiments shown in FIGS. 24 to 26 .
  • for the parts not described in detail in this embodiment, and for the execution process and technical effects of the technical solution, reference may be made to the related descriptions of the embodiments shown in FIGS. 24 to 26; details are not repeated here.
  • control device in any of the above embodiments may be independent of the pan/tilt or the image acquisition device, or may be integrated in the pan/tilt or the image acquisition device.
  • FIG. 32 is a schematic structural diagram of a control system for a pan/tilt according to an embodiment of the present invention.
  • the present embodiment provides a control system for a pan/tilt.
  • the control system may include:
  • the control device 62 of the pan/tilt shown in FIG. 27 is disposed on the pan/tilt 61 and used to communicate with the image acquisition device, and to control the pan/tilt 61 through the image acquisition device.
  • control system in this embodiment may further include:
  • the ranging sensor 63 is arranged on the image acquisition device, and is used for acquiring distance information corresponding to the target object;
  • the control device 62 of the pan/tilt is connected in communication with the ranging sensor 63 for sending the distance information to the image acquisition device, so that the image acquisition device can determine the acquisition position of the target object in the acquired image in combination with the distance information.
  • FIG. 33 is a schematic structural diagram of a control system of a pan/tilt provided by an embodiment of the present invention.
  • the present embodiment provides a control system of a pan/tilt.
  • the control system of the pan/tilt may include:
  • the control device 73 of the pan-tilt system corresponding to FIG. 28 is disposed on the pan-tilt 71 and is used to communicate with the image acquisition device 72 and to control the image acquisition device 72 and the pan-tilt 71 respectively.
  • FIG. 34 is a schematic structural diagram of another pan/tilt control system provided by an embodiment of the present invention. Referring to FIG. 34, this embodiment provides another pan/tilt control system. Specifically, the control system may include:
  • the control device 82 of the pan/tilt in the above-mentioned FIG. 29 is disposed on the pan/tilt 81 , and is used to communicate with the image capture device, and to control the image capture device through the pan/tilt 81 .
  • FIG. 35 is a schematic structural diagram of another pan/tilt control system provided by an embodiment of the present invention. Referring to FIG. 35, this embodiment provides another pan/tilt control system. Specifically, the control system may include:
  • the control device 92 of the pan-tilt system shown in FIG. 30 is disposed on the pan-tilt 91 and used to communicate with the image acquisition device, and to control the image acquisition device and the pan-tilt 91 respectively.
  • the specific implementation principle, implementation process and implementation effect of the control system of the pan-tilt system shown in FIG. 35 are similar to those of the control device of the pan-tilt system shown in FIG. 30; for the parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 30.
  • FIG. 36 is a schematic structural diagram of another pan/tilt control system provided by an embodiment of the present invention. Referring to FIG. 36, this embodiment provides another pan/tilt control system. Specifically, the control system may include:
  • the control device 103 of the pan-tilt system corresponding to FIG. 31 is disposed on the pan-tilt 101 and used to communicate with the image acquisition device 102 and to control the image acquisition device 102 and the pan-tilt 101 respectively.
  • the control device in the control system of the PTZ in the above embodiments may be integrated in the PTZ; the system may further include an image acquisition device, and the image acquisition device may be integrated on the PTZ or detachably connected to the PTZ.
  • FIG. 37 is a schematic structural diagram 1 of a movable platform provided by an embodiment of the present invention. Referring to FIG. 37 , this embodiment provides a movable platform. Specifically, the movable platform may include:
  • the control apparatus 113 of the pan-tilt shown in FIG. 27 above, disposed on the pan-tilt 112, configured to communicate with the image capture device 114, and configured to control the pan-tilt 112 through the image capture device 114.
  • The support mechanism 111 varies with the type of the movable platform.
  • For example, when the movable platform is a handheld pan-tilt, the support mechanism 111 can be a handle.
  • Movable platforms include, but are not limited to, the types described above.
  • FIG. 38 is a second schematic structural diagram of a movable platform provided by an embodiment of the present invention.
  • this embodiment provides a movable platform.
  • the movable platform may include:
  • the control apparatus 123 of the pan-tilt system shown in FIG. 28, disposed on the pan-tilt 122, configured to communicate with the image capture device 124, and configured to control the image capture device 124 and the pan-tilt 122 respectively.
  • The support mechanism 121 varies with the type of the movable platform.
  • For example, when the movable platform is a handheld pan-tilt, the support mechanism 121 can be a handle.
  • Movable platforms include, but are not limited to, the types described above.
  • The specific implementation principle, process, and effect of the movable platform shown in FIG. 38 are similar to those of the control apparatus of the pan-tilt system shown in FIG. 28; for the parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 28.
  • FIG. 39 is a third structural schematic diagram of a movable platform provided by an embodiment of the present invention.
  • this embodiment provides a movable platform.
  • the movable platform may include:
  • the control apparatus 133 of the pan-tilt shown in FIG. 29 above, disposed on the pan-tilt 132, configured to communicate with the image capture device 134, and configured to control the image capture device 134 through the pan-tilt 132.
  • The support mechanism 131 varies with the type of the movable platform.
  • For example, when the movable platform is a handheld pan-tilt, the support mechanism 131 can be a handle.
  • Movable platforms include, but are not limited to, the types described above.
  • FIG. 40 is a fourth schematic structural diagram of a movable platform provided by an embodiment of the present invention. Referring to FIG. 40, this embodiment provides a movable platform. Specifically, the movable platform may include:
  • the control apparatus 143 of the pan-tilt system shown in FIG. 30, disposed on the pan-tilt 142, configured to communicate with the image capture device 144, and configured to control the image capture device 144 and the pan-tilt 142 respectively.
  • The support mechanism 141 varies with the type of the movable platform.
  • For example, when the movable platform is a handheld pan-tilt, the support mechanism 141 can be a handle.
  • Movable platforms include, but are not limited to, the types described above.
  • FIG. 41 is a schematic structural diagram 5 of a movable platform provided by an embodiment of the present invention. Referring to FIG. 41 , this embodiment provides a movable platform. Specifically, the movable platform may include:
  • the control apparatus 153 of the pan-tilt system shown in FIG. 31, disposed on the pan-tilt 152, configured to communicate with the image capture device 154, and configured to control the image capture device 154 and the pan-tilt 152 respectively.
  • The support mechanism 151 varies with the type of the movable platform.
  • For example, when the movable platform is a handheld pan-tilt, the support mechanism 151 can be a handle.
  • Movable platforms include, but are not limited to, the types described above.
  • The specific implementation principle, process, and effect of the movable platform shown in FIG. 41 are similar to those of the control apparatus of the pan-tilt system shown in FIG. 31; for the parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 31.
  • The control apparatus in the movable platforms of the above embodiments may be integrated into the pan-tilt; the movable platform may further include an image acquisition device, which may be integrated in the pan-tilt or detachably connected to it.
  • An embodiment of the present invention provides a computer-readable storage medium in which program instructions are stored, the program instructions being used to implement the control method for a pan-tilt of FIG. 2 to FIG. 20 and FIG. 25 to FIG. 26 above.
  • An embodiment of the present invention provides a computer-readable storage medium in which program instructions are stored, the program instructions being used to implement the control method for a pan-tilt system of FIG. 21 and FIG. 25 to FIG. 26 above.
  • An embodiment of the present invention provides a computer-readable storage medium in which program instructions are stored, the program instructions being used to implement the control method for a pan-tilt of FIG. 22 and FIG. 25 to FIG. 26 above.
  • An embodiment of the present invention provides a computer-readable storage medium in which program instructions are stored, the program instructions being used to implement the control method for a pan-tilt system of FIG. 23 and FIG. 25 to FIG. 26 above.
  • An embodiment of the present invention provides a computer-readable storage medium in which program instructions are stored, the program instructions being used to implement the control method for a pan-tilt system of FIG. 24 to FIG. 26 above.
  • The disclosed apparatus and method may be implemented in other manners.
  • The apparatus embodiments described above are only illustrative.
  • The division into modules or units is only a division by logical function.
  • In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection between apparatuses or units may be electrical, mechanical, or in other forms.
  • The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • Each functional unit in each embodiment of the present invention may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
  • The above integrated units may be implemented in the form of hardware or in the form of software functional units.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on this understanding, the essence of the technical solution of the present invention, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer processor to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • The aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.


Abstract

A control method and apparatus for a pan-tilt, a movable platform, and a storage medium. The control method comprises: acquiring a capture position of a target object in a captured image, the capture position being determined by an image capture device that is communicatively connected to the pan-tilt; determining, based on the capture position, control parameters for performing a follow operation on the target object; and controlling the pan-tilt according to the control parameters, so as to perform the follow operation on the target object.

Description

Control method and apparatus for a pan-tilt, movable platform, and storage medium — Technical field
Embodiments of the present invention relate to the technical field of pan-tilts, and in particular to a control method and apparatus for a pan-tilt, a movable platform, and a storage medium.
Background
With the rapid development of science and technology, pan-tilts are being applied in an increasingly wide range of fields, especially in photography. Intelligent following is a commonly used shooting function of a pan-tilt: the pan-tilt obtains real-time images from a camera through a preset interface and feeds them into a machine-learning unit to obtain the real-time position of the object to be photographed, and can then follow and shoot the object according to the acquired real-time position.
Summary of the invention
Embodiments of the present invention provide a control method and apparatus for a pan-tilt, a movable platform, and a storage medium, which can solve the problem that a large delay arises when real-time images obtained by a camera are transmitted to the pan-tilt controller through a machine-learning unit, resulting in a poor following effect, thereby guaranteeing the quality and effect of the pan-tilt's follow operation.
A first aspect of the present invention provides a control method for a pan-tilt, the method comprising:
acquiring a capture position of a target object in a captured image, the capture position being determined by an image capture device communicatively connected to the pan-tilt;
determining, based on the capture position, control parameters for performing a follow operation on the target object;
controlling the pan-tilt according to the control parameters, so as to perform the follow operation on the target object.
A second aspect of the present invention provides a control apparatus for a pan-tilt, the apparatus comprising:
a memory for storing a computer program;
a processor for running the computer program stored in the memory to:
acquire a capture position of a target object in a captured image, the capture position being determined by an image capture device communicatively connected to the pan-tilt;
determine, based on the capture position, control parameters for performing a follow operation on the target object;
control the pan-tilt according to the control parameters, so as to perform the follow operation on the target object.
A third aspect of the present invention provides a control system for a pan-tilt, comprising:
a pan-tilt;
the control apparatus for a pan-tilt according to the second aspect, disposed on the pan-tilt, configured to be communicatively connected to the image capture device, and configured to control the pan-tilt through the image capture device.
A fourth aspect of the present invention provides a movable platform, comprising:
a pan-tilt;
a support mechanism for connecting the pan-tilt;
the control apparatus for a pan-tilt according to the second aspect, disposed on the pan-tilt, configured to be communicatively connected to an image capture device, and configured to control the pan-tilt through the image capture device.
A fifth aspect of the present invention provides a computer-readable storage medium in which program instructions are stored, the program instructions being used for the control method for a pan-tilt according to the first aspect.
A sixth aspect of the present invention provides a control method for a pan-tilt system, wherein the pan-tilt system includes a pan-tilt and an image capture device communicatively connected to the pan-tilt, and the method comprises:
controlling the image capture device to capture an image, and acquiring a capture position of a target object in the image, the capture position being determined by the image capture device;
transmitting the capture position to the pan-tilt;
controlling the pan-tilt to move according to control parameters, so as to perform a follow operation on the target object, wherein the control parameters are determined based on the capture position.
A seventh aspect of the present invention provides a control apparatus for a pan-tilt system, wherein the pan-tilt system includes a pan-tilt and an image capture device communicatively connected to the pan-tilt, and the apparatus comprises:
a memory for storing a computer program;
a processor for running the computer program stored in the memory to:
control the image capture device to capture an image, and acquire a capture position of a target object in the image, the capture position being determined by the image capture device;
transmit the capture position to the pan-tilt;
control the pan-tilt to move according to control parameters, so as to perform a follow operation on the target object, wherein the control parameters are determined based on the capture position.
An eighth aspect of the present invention provides a control system for a pan-tilt, comprising:
a pan-tilt;
the control apparatus for a pan-tilt system according to the seventh aspect, disposed on the pan-tilt, configured to be communicatively connected to an image capture device, and configured to control the image capture device and the pan-tilt respectively.
A ninth aspect of the present invention provides a movable platform, comprising:
a pan-tilt;
a support mechanism for connecting the pan-tilt;
the control apparatus for a pan-tilt system according to the seventh aspect, disposed on the pan-tilt, configured to be communicatively connected to an image capture device, and configured to control the image capture device and the pan-tilt respectively.
A tenth aspect of the present invention provides a computer-readable storage medium in which program instructions are stored, the program instructions being used for the control method for a pan-tilt system according to the sixth aspect.
An eleventh aspect of the present invention provides a control method for a pan-tilt, used for a pan-tilt communicatively connected to an image capture device, the method comprising:
acquiring a captured image that includes a target object;
determining the position of the target object in the captured image, so as to perform a follow operation on the target object based on that position;
sending the position of the target object to the image capture device, so that the image capture device determines, based on the position of the target object, a focus position corresponding to the target object and performs a focusing operation on the target object based on the focus position.
A twelfth aspect of the present invention provides a control apparatus for a pan-tilt, used for a pan-tilt communicatively connected to an image capture device, the control apparatus comprising:
a memory for storing a computer program;
a processor for running the computer program stored in the memory to:
acquire a captured image that includes a target object;
determine the position of the target object in the captured image, so as to perform a follow operation on the target object based on that position;
send the position of the target object to the image capture device, so that the image capture device determines, based on the position of the target object, a focus position corresponding to the target object and performs a focusing operation on the target object based on the focus position.
A thirteenth aspect of the present invention provides a control system for a pan-tilt, comprising:
a pan-tilt;
the control apparatus for a pan-tilt according to the eleventh aspect, disposed on the pan-tilt, configured to be communicatively connected to an image capture device, and configured to control the image capture device through the pan-tilt.
A fourteenth aspect of the present invention provides a movable platform, comprising:
a pan-tilt;
a support mechanism for connecting the pan-tilt;
the control apparatus for a pan-tilt according to the eleventh aspect, disposed on the pan-tilt, configured to be communicatively connected to an image capture device, and configured to control the image capture device through the pan-tilt.
A fifteenth aspect of the present invention provides a computer-readable storage medium in which program instructions are stored, the program instructions being used for the control method for a pan-tilt according to the eleventh aspect.
A sixteenth aspect of the present invention provides a control method for a pan-tilt system, the pan-tilt system including a pan-tilt and an image capture device communicatively connected to the pan-tilt, the method comprising:
controlling the image capture device to capture an image, the image including a target object;
determining the position of the target object in the image;
controlling, based on the position of the target object, the pan-tilt to perform a follow operation on the target object, and controlling, according to the position of the target object, the image capture device to perform a focusing operation on the target object.
A seventeenth aspect of the present invention provides a control apparatus for a pan-tilt system, the pan-tilt system including a pan-tilt and an image capture device communicatively connected to the pan-tilt, the control apparatus comprising:
a memory for storing a computer program;
a processor for running the computer program stored in the memory to:
control the image capture device to capture an image, the image including a target object;
determine the position of the target object in the image;
control, based on the position of the target object, the pan-tilt to perform a follow operation on the target object, and control, according to the position of the target object, the image capture device to perform a focusing operation on the target object.
An eighteenth aspect of the present invention provides a control system for a pan-tilt, comprising:
a pan-tilt;
the control apparatus for a pan-tilt system according to the seventeenth aspect, disposed on the pan-tilt, configured to be communicatively connected to an image capture device, and configured to control the image capture device and the pan-tilt respectively.
A nineteenth aspect of the present invention provides a movable platform, comprising:
a pan-tilt;
a support mechanism for connecting the pan-tilt;
the control apparatus for a pan-tilt system according to the seventeenth aspect, disposed on the pan-tilt, configured to be communicatively connected to an image capture device, and configured to control the image capture device and the pan-tilt respectively.
A twentieth aspect of the present invention provides a computer-readable storage medium in which program instructions are stored, the program instructions being used for the control method for a pan-tilt system according to the sixteenth aspect.
A twenty-first aspect of the present invention provides a control method for a pan-tilt system, the pan-tilt system including a pan-tilt and an image capture device communicatively connected to the pan-tilt, the method comprising:
acquiring a capture position of a first object in a captured image, the capture position of the first object being used by the pan-tilt to perform a follow operation on the first object and by the image capture device to perform a focusing operation on the first object;
when the first object changes to a second object, acquiring a capture position of the second object in the captured image, so that the pan-tilt changes from following the first object to following the second object based on the capture position of the second object, and the image capture device changes from focusing on the first object to focusing on the second object based on the position of the second object.
A twenty-second aspect of the present invention provides a control apparatus for a pan-tilt system, the pan-tilt system including a pan-tilt and an image capture device communicatively connected to the pan-tilt, the control apparatus comprising:
a memory for storing a computer program;
a processor for running the computer program stored in the memory to:
acquire a capture position of a first object in a captured image, the capture position of the first object being used by the pan-tilt to perform a follow operation on the first object and by the image capture device to perform a focusing operation on the first object;
when the first object changes to a second object, acquire a capture position of the second object in the captured image, so that the pan-tilt changes from following the first object to following the second object based on the capture position of the second object, and the image capture device changes from focusing on the first object to focusing on the second object based on the position of the second object.
A twenty-third aspect of the present invention provides a control system for a pan-tilt, comprising:
a pan-tilt;
the control apparatus for a pan-tilt system according to the twenty-second aspect, disposed on the pan-tilt, configured to be communicatively connected to an image capture device, and configured to control the image capture device and the pan-tilt respectively.
A twenty-fourth aspect of the present invention provides a movable platform, comprising:
a pan-tilt;
a support mechanism for connecting the pan-tilt;
the control apparatus for a pan-tilt system according to the twenty-second aspect, disposed on the pan-tilt, configured to be communicatively connected to an image capture device, and configured to control the image capture device and the pan-tilt respectively.
A twenty-fifth aspect of the present invention provides a computer-readable storage medium in which program instructions are stored, the program instructions being used for the control method for a pan-tilt system according to the twenty-first aspect.
According to the control method and apparatus for a pan-tilt, the movable platform, and the storage medium provided by the embodiments of the present invention, the capture position of a target object in a captured image is acquired, control parameters for performing a follow operation on the target object are then determined based on the capture position, and the pan-tilt is controlled according to the control parameters, so that the follow operation on the target object can be realized. Since the capture position is determined by the image capture device and the pan-tilt can obtain the capture position directly from the image capture device, the delay incurred when the pan-tilt obtains the capture position via the image capture device is effectively reduced, which solves the problem of a poor following effect caused by a large delay, further guarantees the quality and effect of the follow operation on the target object, and effectively improves the stability and reliability of the method.
Brief description of the drawings
The drawings described here are intended to provide a further understanding of the present application and constitute a part of it; the illustrative embodiments of the present application and their description are used to explain the present application and do not constitute an undue limitation on it. In the drawings:
FIG. 1 is a schematic structural diagram of a pan-tilt system provided in the prior art;
FIG. 2 is a schematic flowchart of a control method for a pan-tilt provided by an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a communication connection between a pan-tilt and an image capture device provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of acquiring the capture position of a target object in a captured image provided by an embodiment of the present invention;
FIG. 5 is a schematic flowchart of acquiring the capture position of a target object in a captured image provided by an embodiment of the present invention;
FIG. 6 is a schematic flowchart of obtaining, through the image capture device, the target focus position corresponding to the target object provided by an embodiment of the present invention;
FIG. 7 is a first schematic diagram of the historical object part corresponding to the historical focus position and the current object part corresponding to the current focus position provided by an embodiment of the present invention;
FIG. 8 is a second schematic diagram of the historical object part corresponding to the historical focus position and the current object part corresponding to the current focus position provided by an embodiment of the present invention;
FIG. 9 is a schematic flowchart of another control method for a pan-tilt provided by an embodiment of the present invention;
FIG. 10 is a schematic diagram of a change of the target object provided by an embodiment of the present invention;
FIG. 11 is a schematic flowchart of calculating the current position prediction value corresponding to the capture position provided by an embodiment of the present invention;
FIG. 12 is a schematic flowchart of determining, based on the capture position, the exposure time, the delay time, the previous reception time, and the previous position prediction value, the position adjustment value corresponding to the capture position provided by an embodiment of the present invention;
FIG. 13 is a first schematic flowchart of determining, based on the current position prediction value, the control parameters for performing a follow operation on the target object provided by an embodiment of the present invention;
FIG. 14 is a second schematic flowchart of determining, based on the current position prediction value, the control parameters for performing a follow operation on the target object provided by an embodiment of the present invention;
FIG. 15 is a schematic flowchart of controlling the pan-tilt based on the pan-tilt motion model and the control parameters provided by an embodiment of the present invention;
FIG. 16 is a first schematic flowchart of controlling the pan-tilt according to the control parameters provided by an embodiment of the present invention;
FIG. 17 is a second schematic flowchart of controlling the pan-tilt according to the control parameters provided by an embodiment of the present invention;
FIG. 18 is a schematic flowchart of yet another control method for a pan-tilt provided by an embodiment of the present invention;
FIG. 19 is a schematic flowchart of another control method for a pan-tilt provided by an embodiment of the present invention;
FIG. 20 is a schematic flowchart of yet another control method for a pan-tilt provided by an embodiment of the present invention;
FIG. 21 is a schematic flowchart of a control method for a pan-tilt system provided by an embodiment of the present invention;
FIG. 22 is a schematic flowchart of another control method for a pan-tilt provided by an embodiment of the present invention;
FIG. 23 is a schematic flowchart of another control method for a pan-tilt system provided by an embodiment of the present invention;
FIG. 24 is a schematic flowchart of yet another control method for a pan-tilt system provided by an embodiment of the present invention;
FIG. 25 is a first schematic principle diagram of a control method for a pan-tilt provided by an application embodiment of the present invention;
FIG. 26 is a second schematic principle diagram of a control method for a pan-tilt provided by an application embodiment of the present invention;
FIG. 27 is a schematic structural diagram of a control apparatus for a pan-tilt provided by an embodiment of the present invention;
FIG. 28 is a schematic structural diagram of a control apparatus for a pan-tilt system provided by an embodiment of the present invention;
FIG. 29 is a schematic structural diagram of another control apparatus for a pan-tilt provided by an embodiment of the present invention;
FIG. 30 is a schematic structural diagram of another control apparatus for a pan-tilt system provided by an embodiment of the present invention;
FIG. 31 is a schematic structural diagram of yet another control apparatus for a pan-tilt system provided by an embodiment of the present invention;
FIG. 32 is a schematic structural diagram of a control system for a pan-tilt provided by an embodiment of the present invention;
FIG. 33 is a schematic structural diagram of a control system for a pan-tilt provided by an embodiment of the present invention;
FIG. 34 is a schematic structural diagram of another control system for a pan-tilt provided by an embodiment of the present invention;
FIG. 35 is a schematic structural diagram of yet another control system for a pan-tilt provided by an embodiment of the present invention;
FIG. 36 is a schematic structural diagram of another control system for a pan-tilt provided by an embodiment of the present invention;
FIG. 37 is a first schematic structural diagram of a movable platform provided by an embodiment of the present invention;
FIG. 38 is a second schematic structural diagram of a movable platform provided by an embodiment of the present invention;
FIG. 39 is a third schematic structural diagram of a movable platform provided by an embodiment of the present invention;
FIG. 40 is a fourth schematic structural diagram of a movable platform provided by an embodiment of the present invention;
FIG. 41 is a fifth schematic structural diagram of a movable platform provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present invention. The terms used in the specification are only for the purpose of describing specific embodiments and are not intended to limit the present invention.
To facilitate understanding of the technical solution of the present application, the prior art is briefly described below:

With the rapid development of science and technology, more and more users shoot video with digital cameras and SLR cameras, and pan-tilt stabilizers have accordingly become widely used as auxiliary tools for video shooting. A stabilizer, however, is not limited to anti-shake stabilization during video shooting; more operation methods can be developed on top of it, which helps guarantee the user's video shooting experience.

During video shooting, keeping the photographed object (the target object) at a fixed position in the video composition (the composition position) has been a key problem that depends on the photographer's shooting skill. With the development of artificial intelligence (AI) technology, identifying the position of the photographed object in the picture has become possible. In addition, besides stabilization, a pan-tilt stabilizer can also control the camera to rotate; combining the two into a closed loop realizes intelligent following of the photographed object. When performing an intelligent follow operation on a target object, two points are important to guarantee its quality and effect: first, how to obtain the position information of the photographed object in the picture; second, how to control the pan-tilt to move so as to keep the photographed object at the composition position, such as the center of the picture.

In the prior art, the method for realizing intelligent following with a third-party camera mainly introduces an image processing apparatus, which can obtain real-time images of the camera through a High Definition Multimedia Interface (HDMI) or another interface and input them into an AI machine-learning unit (implemented in software) to obtain the real-time position of the object to be photographed in the third-party camera.

Specifically, as shown in FIG. 1, taking the camera 100 as the image capture device, the camera 100, as a third-party payload, can be connected through an HDMI interface to an Image Signal Processing (ISP) apparatus, which may include an ISP module 1011, a buffer 1012, a real-time video output 1013, a format converter 1014, a machine-learning model 1015, and a strategy processor 1016. The ISP module 1011 can analyze and process the received image and transmit the processed image data to the buffer 1012 for buffering. The buffered image data can not only be output in real time through the real-time video output 1013, but can also be format-converted by the format converter 1014 so that it can be input into the machine-learning model 1015 for machine learning to identify the subject to be followed as set by the user. After the subject to be followed is identified, the strategy processor 1016 can determine the control parameters of the pan-tilt according to a strategy, and the pan-tilt controller 102 can then control the pan-tilt based on these control parameters so that the pan-tilt performs an intelligent follow operation on the subject.

However, the above technical solution has the following problem: the video signal transmitted through the HDMI interface has a large delay, which directly makes the follow operation perform poorly; moreover, the delays of the HDMI interfaces of different cameras differ, making normalization in the algorithm difficult.

In addition, with the rise of short videos, camera manufacturers have successively launched their own automatic follow algorithms. A manufacturer's follow algorithm is used for following the focus point, but the basic principle is the same: obtain the position of the followed object (the photographed object) in the picture. That is, the camera manufacturer can compute on the real-time image to obtain the real-time position of the photographed object without the delay caused by HDMI video transmission, and the coordinate information of the followed object computed inside the camera is highly timely.

However, when following with a third-party camera, the more precisely the target coordinate position is identified the better, which easily causes the follow coordinates of the third-party camera to jump, for example when switching from head-and-shoulders following to face following, or from face following to eye following.

In summary, the intelligent follow methods in the prior art have the following defects:

(1) When an image processing module communicatively connected to the camera is used for intelligent following, although the real-time position of the followed object can be computed, additional development cost for the AI machine-learning algorithm and additional hardware design cost are incurred.

(2) When the image is transmitted to the image processing module through the HDMI interface, the corresponding delay is large; the larger the delay, the worse the follow effect on the followed object, and when the delay reaches a bottleneck, target following may even become impossible.

(3) The delays of the HDMI interfaces of different cameras are different and vary considerably, making normalization in the algorithm difficult.

(4) When a third-party camera is used for intelligent following, the coordinates of the follow frame are prone to jumps, and existing follow algorithms cannot solve this kind of problem.
To solve at least one of the above technical problems, this embodiment provides a control method and apparatus for a pan-tilt, a movable platform, and a storage medium. The control method acquires the capture position of the target object in the captured image, determines control parameters for following the target object based on the capture position, and then controls the pan-tilt according to the control parameters, thereby realizing the follow operation on the target object. Since the capture position is determined by the image capture device and the pan-tilt can obtain it directly from the image capture device, the delay incurred when the pan-tilt obtains the capture position is effectively reduced, which solves the problem of a poor following effect caused by a large delay, further guarantees the quality and effect of the pan-tilt follow operation, and effectively improves the stability and reliability of the method.

Some implementations of the control method and apparatus for a pan-tilt, the movable platform, and the storage medium of the present invention are described in detail below with reference to the accompanying drawings. Where there is no conflict between the embodiments, the following embodiments and their features may be combined with each other.

FIG. 2 is a schematic flowchart of a control method for a pan-tilt provided by an embodiment of the present invention; FIG. 3 is a schematic structural diagram of a communication connection between a pan-tilt and an image capture device provided by an embodiment of the present invention. Referring to FIG. 2 and FIG. 3, this embodiment provides a control method for a pan-tilt, wherein the pan-tilt is communicatively connected to an image capture device. It can be understood that an image capture device is a device with image capture and image processing capabilities, for example a camera, a video camera, or another device with such capabilities. In some examples, the pan-tilt may be provided with a universal serial bus (USB) interface for wired communication with the image capture device, i.e., the pan-tilt is communicatively connected to the image capture device through the USB interface. In specific applications, when the pan-tilt transmits data with the image capture device through the USB interface, the corresponding delay is relatively short. For example: when the pan-tilt transmits data with the image capture device through an HDMI interface, the corresponding delay is t1; when the pan-tilt transmits data with the image capture device through the USB interface, the corresponding delay is t2, where t2 < t1 or t2 << t1.

It can be understood that the communication connection between the pan-tilt and the image capture device is not limited to the implementation defined above; those skilled in the art can also configure it according to specific application requirements and scenarios, as long as the delay of data transmission between the pan-tilt and the image capture device is relatively short, which will not be repeated here.
In addition, the executing body of the control method for a pan-tilt may be a control apparatus of the pan-tilt. It can be understood that the control apparatus may be implemented as software, or as a combination of software and hardware. When the control apparatus executes the control method, it can solve the problem of a poor follow effect caused by the long delay of transmitting data through an interface, thereby guaranteeing the quality and effect of the follow operation on the target object. Specifically, the method may include:

Step S201: acquiring the capture position of the target object in the captured image, the capture position being determined by the image capture device.

Step S202: determining, based on the capture position, control parameters for performing a follow operation on the target object.

Step S203: controlling the pan-tilt according to the control parameters, so as to perform the follow operation on the target object.
The implementation of each of the above steps is elaborated below:

Step S201: acquiring the capture position of the target object in the captured image, the capture position being determined by the image capture device.

The image capture device may be disposed on the pan-tilt and used for image capture. After obtaining the captured image, the image capture device can analyze and process it to determine the capture position of the target object in the captured image. Specifically, the capture position of the target object in the captured image may include: the position of a key point corresponding to the target object in the captured image, or the area covered by the target object in the captured image, and so on.

After the capture position of the target object in the captured image is obtained, it can be actively or passively transmitted to the pan-tilt through the USB interface, so that the pan-tilt can obtain the capture position of the target object in the captured image.
Step S202: determining, based on the capture position, control parameters for performing a follow operation on the target object.

After the capture position is obtained, it can be analyzed and processed to determine the control parameters for following the target object. The control parameters may include at least one of the following: attitude information, angular velocity information, acceleration information, and so on.

Since the capture position is determined by the image capture device, there is a certain delay when the pan-tilt obtains the capture position through the image capture device. Therefore, when determining the control parameters, the capture position needs to be analyzed in combination with the delay time corresponding to the capture position, so that the control parameters for following the target object can be determined accurately. In some examples, determining the control parameters based on the capture position may include: calculating a current position prediction value corresponding to the capture position; and determining, based on the current position prediction value, the control parameters for performing the follow operation on the target object.

Specifically, as shown in FIG. 4, after the image capture device obtains the captured image, it can analyze and process it to obtain the capture position of the target object in the captured image. When the capture position is transmitted to the pan-tilt, there is a certain delay in obtaining it through the image capture device; therefore, to reduce the influence of the delay on the intelligent follow operation, the current position prediction value corresponding to the capture position can be calculated based on the above delay time. It can be understood that the current position prediction value and the capture position are different positions.

After the current position prediction value is obtained, it can be analyzed and processed to determine the control parameters for following the target object. Since the current position prediction value is determined with the delay corresponding to the transmission of the capture position taken into account, the accuracy and reliability of determining the control parameters are effectively guaranteed.
Step S203: controlling the pan-tilt according to the control parameters, so as to perform the follow operation on the target object.

After the control parameters are obtained, the pan-tilt can be controlled based on them so as to follow the target object. In some examples, when following the target object, the pan-tilt may be in different motion states, for example uniform velocity, uniform acceleration, or uniform deceleration. To guarantee the quality and efficiency of the follow operation, pan-tilts in different motion states may correspond to different control strategies. Specifically, controlling the pan-tilt according to the control parameters may include: obtaining a pan-tilt motion model corresponding to the target object; and controlling the pan-tilt based on the pan-tilt motion model and the control parameters.

In some embodiments, the pan-tilt motion model may be determined according to the motion state of the target object: if the target object moves at uniform velocity, the pan-tilt may move at uniform velocity; if the target object accelerates uniformly, the pan-tilt may accelerate uniformly; if the target object decelerates uniformly, the pan-tilt may decelerate uniformly.

In some embodiments, the pan-tilt motion model is related to the follow duration, for example: at the start of following, it may be uniform acceleration. It may also be related to the follow state, for example: when the followed target is lost, it may be uniform acceleration.

When the pan-tilt is used to follow the target object, the pan-tilt motion model corresponding to the target object can be obtained. This embodiment does not limit the specific way of obtaining it; those skilled in the art can configure it according to specific application and design requirements, for example: obtain multiple frames of captured images through the image capture device and analyze them to determine the moving speed corresponding to the pan-tilt, and determine the pan-tilt motion model corresponding to the target object based on the moving speed, where the pan-tilt motion model may include any one of the following: a uniform-acceleration motion model, a uniform-deceleration motion model, a uniform-velocity motion model, and so on. Alternatively, an inertial measurement unit may be provided on the pan-tilt, and the pan-tilt motion model corresponding to the target object may be obtained through the inertial measurement unit. After the pan-tilt motion model is obtained, the pan-tilt can be controlled based on the motion model and the control parameters so as to follow the target object, thereby effectively improving the quality and efficiency of the follow operation on the target object.

In the control method for a pan-tilt provided by this embodiment, the capture position of the target object in the captured image is acquired, the control parameters for following the target object are then determined based on the capture position, and the pan-tilt is controlled according to the control parameters, so that the follow operation on the target object can be realized. Since the capture position is determined by the image capture device and the pan-tilt can obtain it directly from the image capture device, the delay incurred when the pan-tilt obtains the capture position is effectively reduced, which solves the problem of a poor following effect caused by a large delay, further guarantees the quality and effect of the follow operation, and effectively improves the stability and reliability of the method.
FIG. 5 is a schematic flowchart of acquiring the capture position of the target object in the captured image provided by an embodiment of the present invention. On the basis of the above embodiments and with continued reference to FIG. 5, this embodiment provides an implementation for acquiring the capture position of the target object in the captured image, which may include:

Step S501: obtaining, through the image capture device, a target focus position corresponding to the target object.

Step S502: determining the target focus position as the capture position of the target object in the captured image.

In the prior art, when a follow operation is performed by a pan-tilt or an unmanned aerial vehicle (UAV) equipped with an image capture device, the focusing operation of the image capture device and the follow operation of the pan-tilt or UAV are two completely independent operations. In this case, when the focus object of the image capture device changes, the followed object of the pan-tilt or UAV cannot be adjusted in time based on the change of the focus object, and the quality and effect of the follow operation cannot be guaranteed.

It can be understood that when a UAV is used for a follow operation, an image capture device may be mounted on the UAV through a pan-tilt, and the control parameters of the UAV and/or the pan-tilt can be adjusted to realize the follow operation.

Therefore, to avoid the above problem that, when the focus object of the image capture device changes, the followed object of the pan-tilt or UAV cannot be adjusted in time, this embodiment provides a technical solution in which the focusing operation of the image capture device and the follow operation of the pan-tilt or UAV are associated operations. Specifically, in the field of photography, when the capture position of the target object in the captured image is obtained through the image capture device and the focus point of the image capture device for the target object differs from the capture position, controlling the pan-tilt to follow the target object based on the capture position easily causes the target object obtained through the image capture device to be out of focus. Therefore, to avoid the followed target object being out of focus, when acquiring the capture position of the target object in the captured image, the target focus position corresponding to the target object can be obtained through the image capture device. It can be understood that the target focus position may be a focus position selected by the user or an automatically identified focus position.

After the target focus position corresponding to the target object is obtained, it can be directly determined as the capture position of the target object in the captured image, i.e., the focus position corresponding to the target object coincides with the capture position of the target object in the captured image, thereby effectively avoiding the target object being out of focus.

In other examples, after the target focus position corresponding to the target object is obtained, determining it as the capture position of the target object in the captured image may include: obtaining a preset area range corresponding to the target focus position and directly determining the preset area range as the capture position of the target object in the captured image. The preset area corresponding to the target focus position may be at least part of the area covered by the target object in the captured image; in this case, the focus position corresponding to the target object and the capture position of the target object in the captured image are substantially consistent, so the target object can likewise be prevented from being out of focus.

In this embodiment, the target focus position corresponding to the target object is obtained through the image capture device and then determined as the capture position of the target object in the captured image, so that the focus position corresponding to the target object and the capture position in the captured image are substantially consistent, which can effectively prevent the target object from being out of focus and further improves the quality and effect of the follow operation on the target object.
FIG. 6 is a schematic flowchart of obtaining, through the image capture device, the target focus position corresponding to the target object provided by an embodiment of the present invention. On the basis of the above embodiments and with continued reference to FIG. 6, this embodiment provides an implementation for obtaining the target focus position, which may include:

Step S601: obtaining, through the image capture device, a historical focus position and a current focus position corresponding to the target object.

Step S602: determining, based on the historical focus position and the current focus position, the target focus position corresponding to the target object.

When the image capture device captures images of the target object, the target object may be moving, for example at uniform velocity, uniform acceleration, or uniform deceleration, and different motion states of the target object easily cause the focus position of the image capture operation to change. In this case, to guarantee the accuracy and reliability of obtaining the target focus position corresponding to the target object, the historical focus position and the current focus position corresponding to the target object can be obtained through the image capture device. It can be understood that the historical focus position refers to the focus position corresponding to a historical image frame obtained through the image capture device, and the current focus position refers to the focus position corresponding to the current image frame obtained through the image capture device.

After the historical focus position and the current focus position are obtained, they can be analyzed and processed to determine the target focus position corresponding to the target object. In some examples, this may include: determining a historical object part corresponding to the historical focus position and a current object part corresponding to the current focus position; and determining, according to the historical object part and the current object part, the target focus position corresponding to the target object.

When multiple frames of images corresponding to the target object are obtained through the image capture device, multiple focus positions corresponding to the multiple frames (including the historical focus position and the current focus position) can be determined; the focus positions corresponding to the multiple frames may be the same or different. After the above focus positions are obtained, the historical image corresponding to the historical focus position and the current image corresponding to the current focus position can be determined, and the historical image can then be analyzed based on the historical focus position to determine the historical object part corresponding to it. Specifically, a preset image recognition algorithm can be used to analyze the historical image to determine the contour and type of the target object in the historical image, and the correspondence between the historical focus position and that contour and type can then be determined, so as to determine the historical object part corresponding to the historical focus position. Similarly, the current image can be analyzed based on the current focus position to determine the current object part corresponding to it. After the historical object part and the current object part are obtained, they can be analyzed to determine the target focus position corresponding to the target object.

Specifically, after the image capture device obtains the captured image, an image recognition algorithm or a pre-trained machine-learning model can be used to analyze and identify at least one object included in the captured image and the area where each object is located. After the multiple focus positions are obtained, they can be compared with the areas where the objects are located: when several focus positions are part of the area of one object, it can be determined that they correspond to the same object; when several focus positions are part of the areas of different objects, it can be determined that they correspond to different objects. When several focus positions are determined to correspond to the same object, the distance between any two of them can be determined: when the distance is less than or equal to a preset threshold, the two focus positions can be determined to correspond to the same part of the same object; when the distance is greater than the preset threshold, the two focus positions can be determined to correspond to different parts of the same object.

Of course, those skilled in the art may also use other ways to determine whether several focus positions correspond to the same target object, or to the same part of the same target object, which will not be repeated here.

After the historical focus position and the current focus position are obtained, it can be determined whether they correspond to the same target object, and when they do, whether they correspond to the same part of that object. After the above information is determined, it can be transmitted to the pan-tilt so that the pan-tilt can perform follow control based on it, thereby guaranteeing the quality and effect of the intelligent follow operation.
It can be understood that there may be corresponding mapping relationships between a focus position, a focus object, and the focused part of the focus object, each with its own attribute information, and the attribute information may have a corresponding identifier; the mapping relationships and the attribute information can be sent to the pan-tilt via the image capture device, so that the pan-tilt can make corresponding judgments and adopt corresponding execution strategies based on this information.

In some examples, determining the target focus position according to the historical object part and the current object part may include: when the historical object part and the current object part are different parts of the same target object, obtaining the relative position information between the historical object part and the current object part; and adjusting the current focus position based on the relative position information to obtain the target focus position corresponding to the target object.

After the historical object part and the current object part are obtained, they can be analyzed. When they are different parts of the same target object, the historical image and the current image follow different parts of the same target object, for example: the historical object part in the historical frame is person A's eyes, while the current object part in the current frame is person A's shoulder. In this case, to avoid jitter caused by the change of focus position, the relative position information between the historical object part and the current object part can be obtained, for example: the relative position between person A's eyes and person A's shoulder. After the relative position information is obtained, the current focus position can be adjusted based on it to obtain the target focus position corresponding to the target object.

Specifically, adjusting the current focus position based on the relative position information to obtain the target focus position may include: when the relative position information is greater than or equal to a preset threshold, adjusting the current focus position based on the relative position information to obtain the target focus position; when the relative position information is less than the preset threshold, determining the current focus position as the target focus position.

After the relative position information is obtained, it can be compared with the preset threshold. When it is greater than or equal to the preset threshold, the focused parts of the same target object differ at different moments during focusing with the image capture device, so the current focus position can be adjusted based on the relative position information to obtain the target focus position. When it is less than the preset threshold, the focused part of the target object is essentially unchanged at different moments, so the current focus position can be determined as the target focus position.

For example, when images of a person are captured through the image capture device, at least two frames can be obtained; the historical focus position and the current focus position corresponding to the at least two frames can then be determined, the corresponding historical object part can be determined based on the historical focus position, and the corresponding current object part can be determined based on the current focus position.

The historical object part and the current object part can then be analyzed to determine the target focus position corresponding to the target object. Referring to FIG. 7, when the historical object part is part 1 and the current object part is part 2, the relative position information d1 between part 1 and part 2 can be obtained and compared with the preset threshold. When d1 is less than the preset threshold, the focus position has changed only slightly during focusing on the person, so the current focus position can be determined as the target focus position.

In other examples, referring to FIG. 8, when the historical object part is part 3 and the current object part is part 4, the relative position information d2 between part 3 and part 4 can be obtained and compared with the preset threshold. When d2 is greater than the preset threshold, the focus position has changed considerably during focusing on the person, so the current focus position can be adjusted based on the relative position information to obtain the target focus position. That is, when the target object to be followed has not changed and only the focus position has changed, the current focus position can be automatically adjusted based on the relative positions of the parts of the target object, which can effectively prevent the picture from jumping.
In still other examples, after the historical object part and the current object part are obtained, they can be analyzed to determine the target focus position corresponding to the target object. Specifically, determining the target focus position according to the historical object part and the current object part may include: when the historical object part and the current object part are different parts of the same target object, updating the composition target position based on the current focus position to obtain a first updated composition target position; and following the target object based on the first updated composition target position.

After the historical object part and the current object part are obtained, it can be identified whether they are different parts of the same target object. When they are, as shown in FIG. 8, the composition target position can be updated based on the current focus position to obtain the first updated composition target position. For example, when the preset composition target position is the center of the picture, to avoid picture jitter caused by the change of target part, the composition target position can be updated based on the current focus position, i.e., the current focus position can be determined as the first updated composition target position. After the first updated composition target position is obtained, the target object can be followed based on it, which can guarantee the quality and efficiency of the follow operation on the target object.

In this embodiment, the historical focus position and the current focus position corresponding to the target object are obtained through the image capture device, and the target focus position corresponding to the target object is then determined based on them, which effectively guarantees the accuracy and reliability of determining the target focus position and facilitates following the target object based on it, further improving the practicality of the method.
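The threshold test on the relative position of the two focused parts can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, the 2-D point representation, and the damping rule used when the focus jumps (moving only partway toward the new point, controlled by a hypothetical `alpha`) are all assumptions, since the embodiment does not fix the exact adjustment formula.

```python
import math

def select_target_focus(prev_focus, curr_focus, threshold, alpha=0.5):
    """Pick a target focus position from the previous and current focus
    points on the same subject. Below the threshold the current focus is
    kept as-is; at or above it the jump is damped (hypothetical rule)."""
    dist = math.dist(prev_focus, curr_focus)
    if dist < threshold:
        # Focus barely moved (FIG. 7 case): keep the current focus position.
        return curr_focus
    # Focus jumped to another body part (FIG. 8 case): move only partway
    # toward the new focus point to suppress the jump.
    return (prev_focus[0] + alpha * (curr_focus[0] - prev_focus[0]),
            prev_focus[1] + alpha * (curr_focus[1] - prev_focus[1]))
```

A larger `alpha` tracks the new part faster at the cost of a more visible jump; the value here is purely illustrative.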
FIG. 9 is a schematic flowchart of another control method for a pan-tilt provided by an embodiment of the present invention. On the basis of the above embodiments and with continued reference to FIG. 9, the method in this embodiment may further include:

Step S901: detecting whether the target object being followed has changed.

Step S902: when the target object changes from a first object to a second object, acquiring the capture position of the second object in the captured image.

Step S903: updating the composition target position based on the capture position of the second object in the captured image to obtain a second updated composition target position corresponding to the second object, so as to follow the second object based on the second updated composition target position.

When the target object is followed through the image capture device, to avoid pan-tilt jitter caused by a change of the followed target object, whether the followed target object has changed can be detected in real time. Specifically, the historical focus position and the current focus position can be obtained, the historical target object corresponding to the historical focus position and the current target object corresponding to the current focus position can be identified, and it can be identified whether the target object has changed.

When the historical target object and the current target object are the same target object, it can be determined that the followed target object has not changed, as shown in FIG. 8. When they are different target objects, as shown in FIG. 10, i.e., the target object has changed from the first object to the second object, it can be determined that the followed target object has changed. In this case, to guarantee the quality and effect of following the second object, the capture position of the second object in the captured image can be obtained, and the composition target position can then be updated based on it to obtain the second updated composition target position corresponding to the second object. Specifically, the capture position of the second object in the captured image can be determined as the second updated composition target position, and the second object can then be followed based on it. This effectively avoids picture jitter caused by the change of the target object and further improves the quality and efficiency of controlling the pan-tilt.
FIG. 11 is a schematic flowchart of calculating the current position prediction value corresponding to the capture position provided by an embodiment of the present invention. On the basis of the above embodiments and with continued reference to FIG. 11, this embodiment provides an implementation for calculating the current position prediction value corresponding to the capture position, which may include:

Step S1101: determining the delay time corresponding to the capture position, the delay time indicating the time the pan-tilt needs to obtain the capture position via the image capture device.

When the pan-tilt obtains the capture position directly through the image capture device, there is a certain delay in data transmission. Therefore, to obtain the current position prediction value corresponding to the capture position accurately and reliably, the delay time corresponding to the capture position can be determined; it indicates how long the pan-tilt needs to obtain the capture position via the image capture device. In some examples, determining the delay time may include: obtaining the exposure time corresponding to the captured image; determining, when the pan-tilt obtains the current capture position, the current reception time corresponding to it; and determining the time interval between the current reception time and the exposure time as the delay time corresponding to the capture position.

Specifically, when the image capture device performs an image capture operation, the exposure time t_n corresponding to the captured image can be recorded and stored in a preset area, so that the pan-tilt can obtain the exposure time t_n of the captured image through the image capture device. In addition, when the image capture device transmits the current capture position of the target object in the current captured image to the pan-tilt, the current reception time t_{n+1} corresponding to the current capture position can be determined when the pan-tilt receives it. After t_{n+1} and t_n are obtained, the time interval between them can be determined as the delay time corresponding to the capture position, i.e., Δt = t_{n+1} − t_n.

Step S1102: determining, based on the delay time and the capture position, the current position prediction value corresponding to the capture position.

After the delay time and the capture position are obtained, they can be analyzed to determine the current position prediction value corresponding to the capture position. In some examples, this may include: determining, for the time when the pan-tilt obtained the previous capture position, the previous reception time corresponding to it; determining the previous position prediction value corresponding to the previous capture position; and calculating the current position prediction value according to the capture position, the exposure time, the delay time, the previous reception time, and the previous position prediction value.

When the image capture device obtains multiple frames, the multiple capture positions of the target object in them can be determined; when the multiple capture positions are transmitted to the pan-tilt, the pan-tilt can obtain them, and they may include the previous capture position and the current capture position. When the pan-tilt obtained the previous capture position, the previous reception time corresponding to it can be determined, as well as the previous position prediction value corresponding to it; the way of determining the previous position prediction value is similar to the way of determining the current position prediction value in the above embodiments, which can be referred to and will not be repeated here.

After the capture position, the exposure time, the delay time, the previous reception time, and the previous position prediction value are obtained, they can be analyzed to calculate the current position prediction value corresponding to the capture position. In some examples, this may include: determining, based on the capture position, the exposure time, the delay time, the previous reception time, and the previous position prediction value, a position adjustment value corresponding to the capture position; and determining the sum of the position adjustment value and the capture position as the current position prediction value.

After the position adjustment value Δx is obtained, the sum of the position adjustment value and the capture position can be determined as the current position prediction value corresponding to the capture position:

x̂_n = x_n + Δx,

which effectively improves the accuracy and reliability of determining the current position prediction value corresponding to the capture position.

In this embodiment, the delay time corresponding to the capture position is determined, and the current position prediction value is then determined based on the delay time and the capture position. Since the current position prediction value takes into account the delay time corresponding to the capture position, the accuracy and reliability of determining it are effectively guaranteed. In addition, when different image capture devices and/or different transmission interfaces are used to transmit the capture position, the delay times corresponding to them can be obtained, which effectively solves the prior-art problem that different image capture devices and/or different transmission interfaces have different transmission delays, realizes normalization of the algorithm, and further improves the quality and efficiency of the follow operation on the target object.
FIG. 12 is a schematic flowchart of determining, based on the capture position, the exposure time, the delay time, the previous reception time, and the previous position prediction value, the position adjustment value corresponding to the capture position provided by an embodiment of the present invention. On the basis of the above embodiments and with continued reference to FIG. 12, to improve the accuracy of calculating the current position prediction value, this embodiment provides an implementation for determining the position adjustment value corresponding to the capture position, which may include:

Step S1201: determining, based on the capture position, the previous position prediction value, the exposure time, and the previous reception time, the moving speed corresponding to the target object.

After the capture position, the previous position prediction value, the exposure time, and the previous reception time are obtained, they can be analyzed to determine the moving speed corresponding to the target object. Specifically, this may include: obtaining the position difference between the capture position and the previous position prediction value and the time difference between the exposure time and the previous reception time; and determining the ratio of the position difference to the time difference as the moving speed corresponding to the target object.

Taking the capture position x_n, the previous position prediction value x̂_{n−1}, the exposure time t_n, and the previous reception time t_{n−1} as an example, after they are obtained, the position difference (x_n − x̂_{n−1}) and the time difference (t_n − t_{n−1}) can be obtained, and the ratio of the position difference to the time difference can be determined as the moving speed corresponding to the target object:

v = (x_n − x̂_{n−1}) / (t_n − t_{n−1}).

Step S1202: determining the product of the moving speed and the time interval as the position adjustment value corresponding to the capture position.

After the moving speed and the time interval are obtained, the product of the moving speed and the time interval can be determined as the position adjustment value corresponding to the capture position:

Δx = v · Δt = v · (t_{n+1} − t_n).

In this embodiment, the moving speed corresponding to the target object is determined based on the capture position, the previous position prediction value, the exposure time, and the previous reception time, and the product of the moving speed and the time interval is then determined as the position adjustment value corresponding to the capture position, which effectively guarantees the accuracy and reliability of determining the position adjustment value and further improves the precision of calculating the current position prediction value based on it.
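The delay-compensated prediction of FIG. 11 and FIG. 12 can be sketched in a few lines. The function name and the use of a scalar coordinate are illustrative assumptions; the arithmetic follows the formulas above: Δt = t_{n+1} − t_n, v = (x_n − x̂_{n−1}) / (t_n − t_{n−1}), Δx = v·Δt, and x̂_n = x_n + Δx.

```python
def predict_position(x_n, t_exposure, t_recv, t_prev_recv, x_prev_pred):
    """Delay-compensated position prediction for one image coordinate.

    x_n         -- capture position reported for the current frame
    t_exposure  -- exposure time t_n of the current frame
    t_recv      -- time t_{n+1} at which the gimbal received x_n
    t_prev_recv -- previous reception time t_{n-1}
    x_prev_pred -- previous position prediction value x̂_{n-1}
    """
    dt = t_recv - t_exposure                              # delay Δt = t_{n+1} - t_n
    v = (x_n - x_prev_pred) / (t_exposure - t_prev_recv)  # target speed v
    dx = v * dt                                           # position adjustment Δx
    return x_n + dx                                       # prediction x̂_n = x_n + Δx
```

Applied per axis, the predicted value rather than the raw (stale) capture position is what feeds the follow controller.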
FIG. 13 is a first schematic flowchart of determining, based on the current position prediction value, the control parameters for performing a follow operation on the target object provided by an embodiment of the present invention. On the basis of the above embodiments and with continued reference to FIG. 13, this embodiment provides an implementation for determining the control parameters for following the target object, which may include:

Step S1301: determining the position deviation between the current position prediction value and the composition target position.

Step S1302: determining, based on the position deviation, the control parameters for performing a follow operation on the target object.

When following the target object, a composition target position is preconfigured; it is the position in the image where the target object is expected to remain during the follow operation. In general, the composition target position may be the center of the image, i.e., the target object is kept at the center of the image, which can guarantee the quality and effect of the follow operation on the target object.

After the current position prediction value is obtained, the position deviation between it and the composition target position can be determined and then analyzed to determine the control parameters for following the target object. In some examples, determining the control parameters based on the position deviation may include: obtaining the picture field of view corresponding to the captured image; and determining, based on the picture field of view and the position deviation, the control parameters for following the target object.

The picture field of view corresponding to the captured image is obtained through the image capture device. Specifically, this may include: obtaining the focal length information corresponding to the captured image through the image capture device; and determining the picture field of view corresponding to the captured image according to the focal length information. After the picture field of view is obtained, it can be analyzed together with the position deviation to determine the control parameters for following the target object.

In some examples, the control parameters are negatively correlated with the picture field of view: when the field of view increases, the target object in the image becomes smaller, and the control parameters (for example the rotation speed of the pan-tilt) can decrease as the field of view increases; when the field of view decreases, the target object in the image becomes larger, and the control parameters can increase as the field of view decreases.

In other examples, determining the control parameters based on the position deviation may include: obtaining, through an inertial measurement unit (IMU) disposed on the pan-tilt, the pan-tilt attitude corresponding to the capture position; and converting the position deviation into the earth coordinate system based on the pan-tilt attitude and the picture field of view to obtain the control parameters for following the target object, which likewise achieves accurate and reliable determination of the control parameters.

In this embodiment, the position deviation between the current position prediction value and the composition target position is determined, and the control parameters for following the target object are then determined based on the position deviation, which not only effectively guarantees the accuracy and reliability of determining the control parameters but also improves the practicality of the method.
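The deviation-to-control mapping with its negative correlation to the field of view can be sketched as below. The proportional form and the `gain` constant are illustrative assumptions (the embodiment only states that the control parameter shrinks as the field of view grows); the attitude-based conversion to the earth coordinate system is omitted.

```python
def follow_control(pred_pos, target_pos, fov_deg, gain=1.0):
    """Map the deviation between the predicted position and the
    composition target to per-axis gimbal rates, scaled so that a wider
    field of view yields a slower correction (negative correlation)."""
    err_x = target_pos[0] - pred_pos[0]   # position deviation, x
    err_y = target_pos[1] - pred_pos[1]   # position deviation, y
    scale = gain / fov_deg                # wider FOV -> smaller apparent target -> lower rate
    return (err_x * scale, err_y * scale)
```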
FIG. 14 is a second schematic flowchart of determining, based on the current position prediction value, the control parameters for performing a follow operation on the target object provided by an embodiment of the present invention. On the basis of the above embodiments and with continued reference to FIG. 14, this embodiment provides another implementation for determining the control parameters for following the target object, which may include:

Step S1401: obtaining the follow mode corresponding to the pan-tilt, the follow mode including any one of the following: a single-axis follow mode, a dual-axis follow mode, a full follow mode.

Step S1402: determining, based on the current position prediction value and the follow mode, the control parameters for performing a follow operation on the target object.

When the pan-tilt follows the target object, it may correspond to different follow modes. Specifically, the follow mode corresponding to the pan-tilt may include any one of: a single-axis follow mode, a dual-axis follow mode, and a full follow mode. It can be understood that those skilled in the art can adjust the control modes of the pan-tilt for different application scenarios and requirements, which will not be repeated here.

When controlling the pan-tilt, pan-tilts in different follow modes may correspond to different control parameters. For example: in the single-axis follow mode, the control parameters may correspond to a single axis of the pan-tilt, e.g., the yaw axis can be controlled to move based on the target attitude. In the dual-axis follow mode, the control parameters may correspond to two axes of the pan-tilt, e.g., the yaw axis and the pitch axis can be controlled to move based on the target attitude. In the three-axis follow mode, the control parameters may correspond to three axes of the pan-tilt, e.g., the yaw, pitch, and roll axes can be controlled to move based on the target attitude.

Based on the above, since the pan-tilt may correspond to different follow modes and different follow modes may correspond to different control parameters, to improve the accuracy and reliability of determining the control parameters, after the follow mode is obtained, the current position prediction value and the follow mode can be analyzed to determine the control parameters for following the target object. In some examples, this may include: determining, based on the current position prediction value, candidate control parameters for following the target object; and determining, among the candidate control parameters, the target control parameters corresponding to the follow mode.

After the current position prediction value is obtained, the candidate control parameters for following the target object can be determined based on the correspondence between the current position prediction value and the control parameters. It can be understood that there may be multiple candidate control parameters; for example, for a three-axis pan-tilt, the candidate control parameters may include the control parameters corresponding to the yaw, pitch, and roll axes.

After the candidate control parameters are obtained, the target control parameters corresponding to the follow mode can be determined among them; the target control parameters may be at least a part of the candidate control parameters. Specifically, determining the target control parameters corresponding to the follow mode among the candidate control parameters may include: in the single-axis follow mode, determining among the candidate control parameters the single-axis control parameter corresponding to the single-axis follow mode and setting the other candidate control parameters to zero; in the dual-axis follow mode, determining among the candidate control parameters the dual-axis control parameters corresponding to the dual-axis follow mode and setting the other candidate control parameters to zero; in the full follow mode, determining the candidate control parameters as the three-axis control parameters corresponding to the full follow mode.

In this embodiment, the follow mode corresponding to the pan-tilt is obtained, and the control parameters for following the target object are then determined based on the current position prediction value and the follow mode. This not only achieves accurate and reliable determination of the control parameters corresponding to pan-tilts in different follow modes, but also effectively meets the requirements of various application scenarios, further improving the flexibility and reliability of the method.
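The mode-dependent selection among the candidate control parameters can be sketched as a simple masking step. The mode names and the choice of yaw as the single follow axis (and yaw plus pitch for the dual-axis case) follow the examples given above; the embodiment allows other axis assignments.

```python
def apply_follow_mode(yaw, pitch, roll, mode):
    """Keep the candidate control parameters for the axes active in the
    given follow mode and set the other candidates to zero."""
    if mode == "single":      # single-axis follow, e.g. yaw only
        return yaw, 0.0, 0.0
    if mode == "dual":        # dual-axis follow, e.g. yaw + pitch
        return yaw, pitch, 0.0
    if mode == "full":        # full (three-axis) follow
        return yaw, pitch, roll
    raise ValueError("unknown follow mode: " + mode)
```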
图15为本发明实施例提供的基于云台运动模型和控制参数对云台进行控制的流程示意图;在上述实施例的基础上,继续参考附图15所示,本实施例提供了一种对云台进行控制的实现方式,具体的,本实施例中的基于云台运动模型和控制参数对云台进行控制可以包括:
步骤S1501:获取用于对目标对象进行跟随操作所对应的时长信息。
其中,云台的控制装置上可以设置有计时器,该计时器可以用于对目标对象进行跟随操作所对应的时长信息进行计时操作,因此,通过计时器可以获取到用于对目标对象进行跟随操作所对应的时长信息。
步骤S1502:在时长信息小于第一时间阈值时,则基于云台运动模型对控制参数进行更新,获得更新后控制参数,并基于更新后控制参数对云台进行控制。
其中,在云台进行移动的过程中,可以对应有不同的云台运动模型,而不同的云台运动模型可以对应有不同的控制参数,因此在获取到时长信息之后,则可以将时长信息与预设的第一时间阈值进行分析比较,在时长信息小于第一时间阈值时,则可以基于云台运动模型对控制参数进行更新,从而可以获得更新后控制参数,并可以基于更新后控制参数对云台进行控制。
在一些实例中,基于云台运动模型对控制参数进行更新,获得更新后控制参数可以包括:基于云台运动模型,确定与控制参数相对应的更新系数,其中,更新系数小于1;将更新系数与控制参数的乘积值,确定为更新后控制参数。
具体的,基于云台运动模型,确定与控制参数相对应的更新系数可以包括:在云台运动模型为匀加速运动时,将时长信息与第一时间阈值之间的比值确定为与控制参数相对应的更新系数,此时的更新系数小于1。而后可以将更新系数与控制参数的乘积值确定为更新后控制参数,即在时长信息t<第一时间阈值T时,则可以基于以下公式确定更新后控制参数:

Ê_n = (t/T)·E_n。

其中,E_n为控制参数,Ê_n为更新后控制参数。
举例来说,在云台开始针对某一目标对象进行跟随操作时,在云台获取到用于对目标对象进行跟随操作的控制参数时,为了避免云台突然对目标对象进行跟随操作,则在时长信息小于第一时间阈值时,则可以获取与控制参数相对应的更新后控制参数,该更新后控制参数即为由0到控制参数之间的过渡控制参数,即在时长信息小于第一时间阈值时,基于更新后控制参数对云台进行控制,从而实现了缓慢启动操作,即可以控制云台缓慢地调整至控制参数,进而保证了对目标对象进行跟随操作的质量和效果。
另一些实例中,在云台运动模型为匀减速运动时,则获取时长信息与第一时间阈值之间的比值,并将1与该比值之间的差值确定为与控制参数相对应的更新系数,此时的更新系数小于1。而后可以将更新系数与控制参数的乘积值确定为更新后控制参数,即在时长信息t<第一时间阈值T时,则可以基于以下公式确定更新后控制参数:

Ê_n = (1-t/T)·E_n。

其中,E_n为控制参数,Ê_n为更新后控制参数。
举例来说,在云台开始针对某一目标对象停止跟随操作时,在云台获取到用于对目标对象停止跟随操作的控制参数时,为了避免云台突然停止对目标对象进行跟随操作,则在时长信息小于第一时间阈值时,则可以获取与控制参数相对应的更新后控制参数,该更新后控制参数即为由控制参数到0之间的过渡控制参数,即在时长信息小于第一时间阈值时,基于更新后控制参数对云台进行控制,从而实现了缓慢停止操作,即可以控制云台缓慢地调整至0,进而保证了对目标对象停止跟随操作的质量和效果。
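缓慢启动与缓慢停止两种过渡可以统一写成如下Python草图(函数名与参数命名为示例性假设,数值关系与上文的t/T、1-t/T系数一致):

```python
def ramp_control_param(e_n, t, threshold, model):
    """e_n: 控制参数;t: 已进行跟随(或停止)操作的时长信息;
    threshold: 第一时间阈值T;model: 'accel'(匀加速,缓慢启动)
    或 'decel'(匀减速,缓慢停止)。返回更新后控制参数。"""
    if threshold <= 0:
        raise ValueError("时间阈值必须为正值")
    if model == 'accel':
        coeff = min(t / threshold, 1.0)        # t<T时系数为t/T,t>=T时为1
    elif model == 'decel':
        coeff = max(1.0 - t / threshold, 0.0)  # t<T时系数为1-t/T,t>=T时为0
    else:
        raise ValueError("未知的云台运动模型")
    return coeff * e_n
```

缓慢启动时控制参数由0线性过渡到E_n,缓慢停止时由E_n线性过渡到0,与上文公式一一对应。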
步骤S1503:在时长信息大于或等于第一时间阈值、且云台运动模型为匀加速运动时,利用控制参数对云台进行控制。
其中,在时长信息与第一时间阈值的比较结果为时长信息大于或等于第一时间阈值、且云台运动模型为匀加速运动时,则直接利用控制参数对云台进行控制,即在时长信息t≥第一时间阈值T时,更新后控制参数与控制参数相同:

Ê_n = E_n。

而后可以利用控制参数对云台进行控制。
在另一些实例中,在时长信息大于或等于第一时间阈值、且云台运动模型为匀减速运动时,则可以将控制参数配置为0。
本实施例中,通过获取用于对目标对象进行跟随操作所对应的时长信息,在时长信息小于第一时间阈值时,则基于云台运动模型对控制参数进行更新,获得更新后控制参数,并基于更新后控制参数对云台进行控制;在时长信息大于或等于第一时间阈值、且云台运动模型为匀加速运动时,利用控制参数对云台进行控制,从而有效地实现了利用缓慢启动策略和缓慢停止策略对云台进行控制操作,进一步保证了对目标对象进行跟随操作的质量和效率。
图16为本发明实施例提供的根据控制参数对云台进行控制的流程示意图一;在上述任意一个实施例的基础上,继续参考附图16所示,本实施例提供了一种根据控制参数对云台进行控制的实现方式,具体的,本实施例中的根据控制参数对云台进行控制可以包括:
步骤S1601:获取与目标对象相对应的跟随状态。
其中,在对目标对象进行跟随操作时,目标对象可以对应有不同的跟随状态,在一些实例中,与目标对象相对应的跟随状态可以包括以下至少之一:保持跟随状态、丢失状态。可以理解的是,在目标对象对应有不同的跟随状态时,可以利用不同的控制参数对云台进行控制,以保证对云台进行控制的安全可靠性。
另外,本实施例对于获取与目标对象相对应的跟随状态的具体实现方式不做限定,本领域技术人员可以根据具体的应用需求和设计需求进行设置。在一些实例中,通过图像采集装置可以获取与目标对象相对应的跟随状态,具体的,在通过图像采集装置所采集的图像中存在目标对象时,则可以确定与目标对象相对应的跟随状态为保持跟随状态;在通过图像采集装置所采集的图像中不存在目标对象时,则可以确定与目标对象相对应的跟随状态为丢失状态。
在另一些实例中,获取与目标对象相对应的跟随状态可以包括:检测进行跟随操作的目标对象是否发生改变;在目标对象由第一对象改变为第二对象时,同样可以确定第一对象为丢失状态。
步骤S1602:基于跟随状态和控制参数对云台进行控制。
在获取到跟随状态和控制参数之后,则可以基于跟随状态和控制参数对云台进行控制。在一些实例中,基于跟随状态和控制参数对云台进行控制可以包括:在目标对象为丢失状态时,则获取对目标对象进行跟随操作过程中所对应的丢失时长信息;根据丢失时长信息对控制参数进行更新,获得更新后控制参数;基于更新后控制参数对云台进行控制。
其中,在目标对象为丢失状态时,则可以通过计时器获取对目标对象进行跟随操作过程中所对应的丢失时长信息,而后可以根据丢失时长信息对控制参数进行更新,获得更新后控制参数。在一些实例中,根据丢失时长信息对控制参数进行更新,获得更新后控制参数可以包括:在丢失时长信息大于或等于第二时间阈值时,将控制参数更新为零;在丢失时长信息小于第二时间阈值时,获取丢失时长信息与第二时间阈值之间的比值,并将1与比值之间的差值确定为与控制参数相对应的更新系数,将更新系数与控制参数的乘积值确定为更新后控制参数。
具体的,在获取到丢失时长信息之后,则可以将丢失时长信息与第二时间阈值进行分析比较,在丢失时长信息大于或等于第二时间阈值时,则说明所跟随的目标对象处于丢失状态的时间较长,进而可以将控制参数更新为零,即在丢失时长信息t≥第二时间阈值T时:

Ê_n = 0。
在丢失时长信息小于第二时间阈值时,则说明所跟随的目标对象处于丢失状态的时间较短,进而则可以获取丢失时长信息与第二时间阈值之间的比值,并将1与比值之间的差值确定为与控制参数相对应的更新系数,而后则可以将更新系数与控制参数的乘积值确定为更新后控制参数,即在丢失时长信息t<第二时间阈值T时:

Ê_n = (1-t/T)·E_n。
本实施例中,通过获取与目标对象相对应的跟随状态,而后基于跟随状态和控制参数对云台进行控制,从而有效地保证了对云台进行控制的准确可靠性。
图17为本发明实施例提供的根据控制参数对云台进行控制的流程示意图二;在上述任意一个实施例的基础上,继续参考附图17所示,本实施例提供了另一种对云台进行控制的实现方式,具体的,本实施例中的根据控制参数对云台进行控制可以包括:
步骤S1701:获取目标对象的对象类型。
步骤S1702:根据对象类型和控制参数对云台进行控制。
其中,在利用云台对目标对象进行跟随操作时,目标对象可以对应有不同的对象类型,上述的对象类型可以包括以下任意之一:静止对象、高度较高的移动对象、高度较低的移动对象等等,而为了能够保证对不同的目标对象进行跟随操作的质量,在对不同的目标对象进行跟随操作时,则可以根据对象类型和控制参数对云台进行控制。在一些实例中,根据对象类型和控制参数对云台进行控制可以包括:根据对象类型对控制参数进行调整,获得调整后参数;基于调整后参数对云台进行控制。
具体的,根据对象类型对控制参数进行调整,获得调整后参数可以包括:在目标对象为静止对象时,则降低云台在偏航方向所对应的控制带宽和云台在俯仰方向所对应的控制带宽;在目标对象为移动对象、且移动对象的高度大于或等于高度阈值时,则提高云台在偏航方向所对应的控制带宽,并降低云台在俯仰方向所对应的控制带宽;在目标对象为移动对象、且移动对象的高度小于高度阈值时,则提高云台在偏航方向所对应的控制带宽和云台在俯仰方向所对应的控制带宽。
举例来说,在目标对象为建筑物时,为了能够保证对建筑物进行跟随操作的质量和效果,则可以降低云台在偏航方向(yaw轴方向)所对应的控制带宽和云台在俯仰方向(pitch轴方向)所对应的控制带宽,从而可以降低平移跟随性能和俯仰跟随性能。
在目标对象为移动对象、且移动对象的高度大于或等于高度阈值时,例如:目标对象为某一人物时,为了能够保证对人物进行跟随操作的质量和效果,则可以提高云台在偏航方向(yaw轴方向)所对应的控制带宽,并降低云台在俯仰方向(pitch轴方向)所对应的控制带宽,从而可以提高平移跟随性能、并降低俯仰跟随性能。
在目标对象为移动对象、且移动对象的高度小于高度阈值时,例如:目标对象为某一宠物时,为了能够保证对宠物进行跟随操作的质量和效果,则可以提高云台在偏航方向(yaw轴方向)所对应的控制带宽和云台在俯仰方向(pitch轴方向)所对应的控制带宽,从而可以提高平移跟随性能和俯仰跟随性能。
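按对象类型调整偏航/俯仰控制带宽的逻辑可以用如下Python草图示意(其中0.5与2.0的调整系数、高度阈值默认值、类型标识等均为示例性假设,专利文本仅限定"提高/降低"的方向):

```python
def adjust_bandwidth(obj_type, yaw_bw, pitch_bw, height=None, height_thresh=1.0):
    """返回调整后的 (偏航控制带宽, 俯仰控制带宽)。

    obj_type: 'static'(静止对象)、'moving'(移动对象)或其他;
    height: 移动对象的高度;height_thresh: 高度阈值。"""
    if obj_type == 'static':
        # 静止对象(如建筑物):同时降低偏航与俯仰方向的控制带宽
        return yaw_bw * 0.5, pitch_bw * 0.5
    if obj_type == 'moving':
        if height is not None and height >= height_thresh:
            # 高度较高的移动对象(如人物):提高偏航带宽、降低俯仰带宽
            return yaw_bw * 2.0, pitch_bw * 0.5
        # 高度较低的移动对象(如宠物):同时提高偏航与俯仰带宽
        return yaw_bw * 2.0, pitch_bw * 2.0
    return yaw_bw, pitch_bw  # 其他类型:保持控制参数不变
```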
本实施例中,通过获取目标对象的对象类型,而后根据对象类型和控制参数对云台进行控制,从而有效地实现了可以针对不同类型的目标对象进行不同的跟随控制操作,进而保证了对目标对象进行跟随操作的质量和效果。
图18为本发明实施例提供的又一种云台的控制方法的流程示意图;在上述任意一个实施例的基础上,继续参考附图18所示,本实施例中的方法还可以包括:
步骤S1801:通过显示界面获取用户针对图像采集装置所输入的执行操作。
步骤S1802:根据执行操作对图像采集装置进行控制,以使得图像采集装置确定采集位置。
其中,预先设置有可以供用户进行交互操作的显示界面,具体的,显示界面可以为云台的控制装置上的显示界面,或者,显示界面可以为图像采集装置上的显示界面。在获取到显示界面之后,则可以通过显示界面获取用户针对图像采集装置所输入的执行操作,而后可以根据执行操作对图像采集装置进行控制,以使得图像采集装置可以基于执行操作来确定目标对象在采集图像中的采集位置。
举例1,在显示界面为云台的控制装置上的显示界面时,云台的控制装置上可以设置有用于对图像采集装置进行控制的应用程序APP,通过对云台的控制装置进行操作即可启动上述APP,并可以在显示器上显示用于对图像采集装置进行控制的显示界面,用户可以通过显示界面获取用户针对图像采集装置所输入的执行操作,而后则可以根据执行操作对图像采集装置进行控制,以使得图像采集装置确定采集位置,从而实现了用户可以通过云台的控制装置对图像采集装置进行控制。
举例2,在显示界面为图像采集装置上的显示界面时,用户可以通过显示界面获取用户针对图像采集装置所输入的执行操作,而后则可以根据执行操作对图像采集装置进行控制,以使得图像采集装置确定采集位置,从而实现了用户可以通过图像采集装置对图像采集装置进行控制。
本实施例中,通过显示界面获取用户针对图像采集装置所输入的执行操作,而后根据执行操作对图像采集装置进行控制,以使得图像采集装置确定采集位置,从而有效地实现了对图像采集装置进行控制,进一步提高了对目标对象进行跟随操作的质量和效果。
图19为本发明实施例提供的另一种云台的控制方法的流程示意图;在上述任意一个实施例的基础上,继续参考附图19所示,本实施例中的方法还可以包括:
步骤S1901:通过设置于图像采集装置上的测距传感器获取与目标对象相对应的距离信息。
步骤S1902:将距离信息发送至图像采集装置,以使图像采集装置结合距离信息确定目标对象在采集图像中的采集位置。
其中,在图像采集装置获取目标对象在采集图像中的采集位置时,为了提高对目标对象在采集图像中的采集位置进行确定的精确度,在图像采集装置上可以设置有测距传感器,测距传感器可以通过云台与图像采集装置通信连接,在具体应用时,可以通过设置于图像采集装置上的测距传感器获取与目标对象相对应的距离信息,而后可以将距离信息发送至图像采集装置,在图像采集装置获得距离信息之后,则可以结合距离信息确定目标对象在采集图像中的采集位置,这样有效地提高了对目标对象在采集图像中的采集位置进行确定的准确可靠性。也即,使得图像采集装置通过距离信息获取的采集位置,对基于图像识别获取的采集位置进行融合或校准。
图20为本发明实施例提供的又一种云台的控制方法的流程示意图;在上述任意一个实施例的基础上,继续参考附图20所示,本实施例中的方法还可以包括:
步骤S2001:确定图像采集装置所对应的工作模式,工作模式包括以下任意之一:先跟随后对焦模式、先对焦后跟随模式;
步骤S2002:利用工作模式对图像采集装置进行控制。
其中,在基于图像采集装置进行跟随操作时,图像采集装置可以对应有不同的工作模式,该工作模式可以包括:先跟随后对焦模式、先对焦后跟随模式,上述的先跟随后对焦模式是指在图像采集装置需要执行跟随操作和对焦操作时,图像采集装置可以优先进行跟随操作,而后再进行对焦操作。先对焦后跟随模式是指在图像采集装置需要执行跟随操作和对焦操作时,图像采集装置可以优先进行对焦操作,而后再进行跟随操作。
举例来说,在控制云台和图像采集装置进行跟随操作时,可以通过图像采集装置获得采集图像时,是先基于采集图像进行构图跟随操作,还是先对采集图像中的目标对象进行对焦操作;在图像采集装置的工作模式为先跟随后对焦模式时,则可以优先基于采集图像进行构图跟随操作,而后再对经过构图跟随操作的目标对象进行对焦操作。在图像采集装置的工作模式为先对焦后跟随模式时,则可以优先对采集图像中的目标对象进行对焦操作,而后再对进行对焦操作的目标对象进行构图跟随操作。
具体应用时,预先设置有用于对图像采集装置进行控制的操作界面/操作控件,在获取到操作界面/操作控件之后,则可以通过操作界面对图像采集装置的工作模式进行配置/选择,在配置完图像采集装置的工作模式之后,则可以通过工作模式标识来确定该图像采集装置所对应的工作模式。
在确定图像采集装置所对应的工作模式之后,则可以利用工作模式对图像采集装置进行控制,从而有效地实现了图像采集装置可以满足不同的应用场景需求,进一步提高了对图像采集装置进行控制的灵活可靠性。
图21为本发明实施例提供的一种云台系统的控制方法的流程示意图;参考附图21所示,本实施例提供了一种云台系统的控制方法,其中,云台系统包括:云台和与云台通信连接的图像采集装置,在一些实例中,图像采集装置可以集成在云台上,此时,云台和设置于云台上的图像采集装置可以整体进行销售或者维护操作。在另一些实例中,图像采集装置可以单独设置于云台上,此时,图像采集装置和云台可以单独进行销售或者维护操作。
另外,图像采集装置是指具有图像采集能力和图像处理能力的装置,例如:照相机、摄像机、具有图像采集能力的其他装置等等。在一些实例中,云台设有通信串行总线USB接口,USB接口用于与图像采集装置有线通信连接,即云台通过USB接口与图像采集装置通信连接,具体应用时,在云台通过USB接口与图像采集装置进行传输数据时,传输数据所对应的延时时间比较短。
可以理解的是,云台与图像采集装置之间的通信连接方式并不限于上述所限定的实现方式,本领域技术人员还可以根据具体的应用需求和应用场景进行设置,只要能够保证在云台与图像采集装置进行数据传输时,所对应的延时时间比较短即可,在此不再赘述。
此外,该云台系统的控制方法的执行主体可以是云台系统的控制装置,可以理解的是,该云台系统的控制装置可以实现为软件、或者软件和硬件的组合;另外,云台系统的控制装置可以设置于云台或者图像采集装置上,当云台系统的控制装置设置于图像采集装置上时,云台与图像采集装置可以是集成的产品。在控制装置执行该云台系统的控制方法时,可以解决因接口传输数据所产生的延时时间比较长而导致跟随效果差的问题,从而可以保证对目标对象进行跟随操作的质量和效果。具体的,该方法可以包括:
步骤S2101:控制图像采集装置采集图像,并获取目标对象在图像中的采集位置,采集位置是通过图像采集装置所确定的。
步骤S2102:将采集位置传输至云台。
步骤S2103:控制云台按照控制参数进行运动,以实现对目标对象进行跟随操作,其中,控制参数是基于采集位置所确定的。
下面针对上述各个步骤的实现过程进行详细阐述:
步骤S2101:控制图像采集装置采集图像,并获取目标对象在图像中的采集位置,采集位置是通过图像采集装置所确定的。
当针对一目标对象存在跟随需求时,则可以根据跟随需求控制图像采集装置进行图像采集操作,在图像采集装置获取到图像之后,图像采集装置可以对图像进行分析处理,以确定目标对象在图像中的采集位置。具体的,目标对象在图像中的采集位置可以包括:目标对象在图像中所对应的关键点位置,或者,目标对象在图像中所对应的覆盖区域等等。
步骤S2102:将采集位置传输至云台。
在获取到目标对象在采集图像中的采集位置之后,则可以将目标对象在采集图像中的采集位置通过USB接口主动或者被动地传输至云台,从而使得云台可以获取到目标对象在图像中的采集位置。
步骤S2103:控制云台按照控制参数进行运动,以实现对目标对象进行跟随操作,其中,控制参数是基于采集位置所确定的。
在云台获取到采集位置之后,则可以对采集位置进行分析处理,以确定用于对云台进行控制的控制参数,而后则可以控制云台按照控制参数进行运动,以实现对目标对象进行跟随操作。
需要注意的是,本实施例中的方法还可以包括上述图2至图20中所示的实施例的方法,本实施例未详细描述的部分,可参考对图2至图20中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图2至图20所示实施例中的描述,在此不再赘述。
本实施例中提供的云台系统的控制方法,通过控制图像采集装置采集图像,并获取目标对象在图像中的采集位置,而后将采集位置传输至云台,并控制云台按照控制参数进行运动,其中,控制参数是基于采集位置所确定的,从而可以实现对目标对象进行跟随操作,另外,由于采集位置是通过图像采集装置所确定的,而云台可以通过图像采集装置直接获取采集位置,这样有效地降低了在云台通过图像采集装置获取采集位置时所对应的延时时间,从而解决了因延时比较长而导致跟随效果差的问题,进一步保证了对目标对象进行跟随操作的质量和效果,有效地提高了该方法使用的稳定可靠性。
图22为本发明实施例提供的另一种云台的控制方法的流程示意图;参考附图22所示,本实施例提供了另一种云台的控制方法,该方法适用于云台,该云台通信连接有图像采集装置,另外,云台的控制方法的执行主体可以是云台的控制装置,可以理解的是,该控制装置可以实现为软件、或者软件和硬件的组合,具体的,该方法可以包括:
步骤S2201:获取采集图像,采集图像中包括目标对象。
步骤S2202:在采集图像中确定目标对象的位置,以基于目标对象的位置对目标对象进行跟随操作。
步骤S2203:将目标对象的位置发送至图像采集装置,以使得图像采集装置基于目标对象的位置,确定与目标对象相对应的对焦位置,并基于对焦位置对目标对象进行对焦操作。
下面针对上述各个步骤的实现过程进行详细阐述:
步骤S2201:获取采集图像,采集图像中包括目标对象。
其中,云台通信连接有图像采集装置,上述的图像采集装置可以针对一目标对象进行图像采集操作,从而可以获取到采集图像,在图像采集装置获取到采集图像之后,则可以将采集图像主动或者被动地传输至云台,从而使得云台可以稳定地获取到采集图像。
步骤S2202:在采集图像中确定目标对象的位置,以基于目标对象的位置对目标对象进行跟随操作。
其中,在获取到采集图像之后,则可以对采集图像进行分析处理,以确定目标对象的位置,所获取到的目标对象的位置用于实现对目标对象进行跟随操作。具体的,可以通过显示界面对采集图像进行显示,而后用户可以通过显示界面针对采集图像输入执行操作,根据执行操作即可确定目标对象的位置,即用户可以对采集图像中所包括的目标对象进行框选操作,从而可以确定目标对象的位置。或者,在获取到采集图像之后,可以利用预设图像处理算法对采集图像进行自动分析处理,以确定目标对象的位置。
当然的,本领域技术人员也可以采用其他的方式在采集图像中确定目标对象的位置,只要能够保证对目标对象的位置进行确定的准确可靠性即可,在此不再赘述。
步骤S2203:将目标对象的位置发送至图像采集装置,以使得图像采集装置基于目标对象的位置,确定与目标对象相对应的对焦位置,并基于对焦位置对目标对象进行对焦操作。
其中,在获取到目标对象的位置之后,为了保证通过图像采集装置对目标对象进行跟随操作的质量和效果,则可以将目标对象的位置发送至图像采集装置,在图像采集装置获取到目标对象的位置之后,则可以基于目标对象的位置确定与目标对象相对应的对焦位置,从而实现了在对目标对象进行跟随操作时,用于对目标对象进行跟随操作的跟随位置与目标对象相对应的对焦位置相同,这样有效地避免了因对焦位置与跟随位置不一致而导致对目标对象出现虚焦的情况,进一步提高了对目标对象进行跟随操作的质量和效果。
需要注意的是,本实施例中的方法还可以包括上述图2至图20中所示的实施例的方法,本实施例未详细描述的部分,可参考对图2至图20中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图2至图20所示实施例中的描述,在此不再赘述。
本实施例中提供的云台的控制方法,通过获取采集图像,并在采集图像中确定目标对象的位置,以基于目标对象的位置对目标对象进行跟随操作,而后将目标对象的位置发送至图像采集装置,以使得图像采集装置基于目标对象的位置,确定与目标对象相对应的对焦位置,并基于对焦位置对目标对象进行对焦操作,从而有效地保证了在对目标对象进行跟随操作时,用于对目标对象进行跟随操作的跟随位置与目标对象相对应的对焦位置相同,这样有效地避免了因对焦位置与跟随位置不一致而导致对目标对象出现虚焦的情况,从而有效地提高了对目标对象进行跟随操作的质量和效果,进一步提高了该方法使用的稳定可靠性。
图23为本发明实施例提供的另一种云台系统的控制方法的流程示意图;参考附图23所示,本实施例提供了一种云台系统的控制方法,其中,云台系统包括:云台和与云台通信连接的图像采集装置,在一些实例中,图像采集装置可以集成在云台上,此时,云台和设置于云台上的图像采集装置可以整体进行销售或者维护操作。在另一些实例中,图像采集装置可以单独设置于云台上,此时,图像采集装置和云台之间可以单独进行销售或者维护操作。
另外,图像采集装置是指具有图像采集能力和图像处理能力的装置,例如:照相机、摄像机、具有图像采集能力的其他装置等等。在一些实例中,云台设有通信串行总线USB接口,USB接口用于与图像采集装置有线通信连接,即云台通过USB接口与图像采集装置通信连接,具体应用时,在云台通过USB接口与图像采集装置进行传输数据时,传输数据所对应的延时时间比较短。
可以理解的是,云台与图像采集装置之间的通信连接方式并不限于上述所限定的实现方式,本领域技术人员还可以根据具体的应用需求和应用场景进行设置,只要能够保证在云台与图像采集装置进行数据传输时,所对应的延时时间比较短即可,在此不再赘述。
此外,该云台系统的控制方法的执行主体可以是云台系统的控制装置,可以理解的是,该云台系统的控制装置可以实现为软件、或者软件和硬件的组合;另外,云台系统的控制装置可以设置于云台或者图像采集装置上。在控制装置执行该云台系统的控制方法时,可以解决因接口传输数据所产生的延时长而导致跟随效果差的问题,从而可以保证对目标对象进行跟随操作的质量和效果。具体的,该方法可以包括:
步骤S2301:控制图像采集装置采集图像,图像包括目标对象。
步骤S2302:在图像中确定目标对象的位置。
步骤S2303:基于目标对象的位置控制云台对目标对象进行跟随操作,并根据目标对象的位置控制图像采集装置对目标对象进行对焦操作。
下面针对上述各个步骤的实现过程进行详细阐述:
步骤S2301:控制图像采集装置采集图像,图像包括目标对象。
当针对一目标对象存在跟随需求时,则可以根据跟随需求控制图像采集装置进行图像采集操作,在图像采集装置获取到图像之后,可以将图像主动或者被动地传输至云台,从而使得云台可以获取到图像。
步骤S2302:在图像中确定目标对象的位置。
其中,在获取到图像之后,则可以对图像进行分析处理,以确定目标对象在图像中的采集位置。具体的,目标对象在图像中的采集位置可以包括:目标对象在图像所对应的关键点位置,或者,目标对象在图像中所对应的覆盖区域等等。
步骤S2303:基于目标对象的位置控制云台对目标对象进行跟随操作,并根据目标对象的位置控制图像采集装置对目标对象进行对焦操作。
在获取到目标对象的位置之后,则可以基于目标对象的位置控制云台对目标对象进行跟随操作,此外,在获取到目标对象的位置之后,还可以基于目标对象的位置确定与目标对象相对应的对焦位置,具体的,目标对象的位置可以与目标对象相对应的对焦位置相同,从而实现了在对目标对象进行跟随操作时,用于对目标对象进行跟随操作的跟随位置与目标对象相对应的对焦位置相同,这样有效地避免了因对焦位置与跟随位置不一致而导致对目标对象出现虚焦的情况,进一步提高了对目标对象进行跟随操作的质量和效果。
需要注意的是,本实施例中的方法还可以包括上述图2至图20中所示的实施例的方法,本实施例未详细描述的部分,可参考对图2至图20中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图2至图20所示实施例中的描述,在此不再赘述。
本实施例提供的云台系统的控制方法,通过控制图像采集装置采集图像,并在图像中确定目标对象的位置,而后基于目标对象的位置控制云台对目标对象进行跟随操作,并根据目标对象的位置控制图像采集装置对目标对象进行对焦操作,从而有效地保证了在对目标对象进行跟随操作时,用于对目标对象进行跟随操作的跟随位置与目标对象相对应的对焦位置相同,这样有效地避免了因对焦位置与跟随位置不一致而导致对目标对象出现虚焦的情况,从而有效地提高了对目标对象进行跟随操作的质量和效果,进一步提高了该方法使用的稳定可靠性。
图24为本发明实施例提供的又一种云台系统的控制方法的流程示意图;参考附图24所示,本实施例提供了又一种云台系统的控制方法,其中,云台系统包括:云台和与云台通信连接的图像采集装置,该云台系统的控制方法的执行主体可以是云台系统的控制装置,可以理解的是,该云台系统的控制装置可以实现为软件、或者软件和硬件的组合;另外,云台系统的控制装置可以设置于云台或者图像采集装置上,当云台系统的控制装置设置于图像采集装置上时,云台与图像采集装置可以是集成的产品。具体的,本实施例中的方法还可以包括:
步骤S2401:获取第一对象在采集图像中的采集位置,第一对象的采集位置用于云台对第一对象进行跟随操作,以及用于图像采集装置对第一对象进行对焦操作。
步骤S2402:在第一对象改变为第二对象时,获取第二对象在采集图像中的采集位置,以使得云台由对第一对象的跟随操作变化为基于第二对象的采集位置对第二对象进行跟随操作,以及使得图像采集装置由对第一对象的对焦操作变化为基于第二对象的位置对第二对象进行对焦操作。
下面针对上述各个步骤的实现过程进行详细阐述:
步骤S2401:获取第一对象在采集图像中的采集位置,第一对象的采集位置用于云台对第一对象进行跟随操作,以及用于图像采集装置对第一对象进行对焦操作。
其中,针对第一对象存在跟随需求时,则可以通过图像采集装置针对第一对象进行图像采集操作,从而可以获得包括第一对象的采集图像。在获取到采集图像之后,则可以对采集图像进行分析处理,确定第一对象在采集图像中的采集位置,所确定的第一对象在采集图像中的采集位置用于供云台对第一对象进行跟随操作,此外,所确定的第一对象在采集图像中的采集位置用于供图像采集装置对第一对象进行对焦操作。另外,对采集图像进行分析处理,确定第一对象在采集图像中的采集位置的执行主体可以为“图像采集装置”或者“云台”。
步骤S2402:在第一对象改变为第二对象时,获取第二对象在采集图像中的采集位置,以使得云台由对第一对象的跟随操作变化为基于第二对象的采集位置对第二对象进行跟随操作,以及使得图像采集装置由对第一对象的对焦操作变化为基于第二对象的位置对第二对象进行对焦操作。
在对第一对象进行跟随操作时,可能会出现跟随对象发生变更的情况,即第一对象可能改变为第二对象。在第一对象改变为第二对象时,则可以获取第二对象在采集图像中的采集位置,而后可以基于第二对象在采集图像中的采集位置对云台进行控制,从而有效地实现了可以控制云台由对第一对象的跟随操作变化为基于第二对象的采集位置对第二对象进行跟随操作。
此外,所获得的第二对象在采集图像中的采集位置还可以用于供图像采集装置进行对焦操作,具体的,可以使得图像采集装置由对第一对象的对焦操作变化为基于第二对象的位置对第二对象进行对焦操作,从而实现了在对第二对象进行跟随操作时,用于对第二对象进行跟随操作的跟随位置与第二对象相对应的对焦位置相同,这样有效地避免了因对焦位置与跟随位置不一致而导致对第二对象出现虚焦的情况,进一步提高了对第二对象进行跟随操作的质量和效果。
此外,对第二对象在采集图像中的采集位置进行获取的实现方式与上述对第一对象在采集图像中的采集位置进行获取的实现方式相类似,具体可参考上述陈述内容,在此不再赘述。
需要注意的是,本实施例中的方法还可以包括上述图2至图20中所示的实施例的方法,本实施例未详细描述的部分,可参考对图2至图20中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图2至图20所示实施例中的描述,在此不再赘述。
本实施例提供的云台系统的控制方法,通过获取第一对象在采集图像中的采集位置,在第一对象改变为第二对象时,则可以获取第二对象在采集图像中的采集位置,以使得云台由对第一对象的跟随操作变化为基于第二对象的采集位置对第二对象进行跟随操作,以及使得图像采集装置由对第一对象的对焦操作变化为基于第二对象的位置对第二对象进行对焦操作,从而有效地保证了跟随对象由第一对象改变为第二对象时,则可以对第二对象进行跟随操作,并且,在对第二对象进行跟随操作时,通过保证用于对第二对象进行跟随操作的跟随位置与第二对象相对应的对焦位置相同,这样有效地避免了因对焦位置与跟随位置不一致而导致对第二对象出现虚焦的情况,从而有效地提高了对第二对象进行跟随操作的质量和效果,进一步提高了该方法使用的稳定可靠性。
具体应用时,参考附图25所示,本发明提供了一种基于相机(可以为设置于云台上的第三方相机、或者集成于云台上的相机)而实现的智能跟随方法,该方法的执行主体可以包括:相机和云台。具体的,本实施例中的方法包括以下步骤:
步骤1:相机平面偏差预测。
通过相机直接获取当前图像帧的相机曝光时间戳,当前图像帧的相机曝光时间戳可以为t_n;在相机将当前图像帧的检测信息(可以包括目标对象位于图像帧中的坐标信息)发送至云台时,云台接收当前图像帧的检测信息的时间戳为t_{n+1};相对应的,在相机将上一图像帧的检测信息发送至云台时,云台接收上一图像帧的检测信息的时间戳为t_{n-1}。
考虑到由相机和云台所构成的通信链路上所存在的链路延时以及与链路相对应的其他不稳定因素,相机获得目标对象在当前图像帧中的检测信息的时间与云台接收上述检测信息的时间之间存在偏差。因此,为了保证对目标对象进行跟随操作的质量和效果,则需要考虑链路延时对智能跟随操作的影响,具体可以进行如下步骤:
步骤1.1:获取由相机和云台所构成的通信链路所对应的链路延时。
具体的,链路延时即为当前图像帧的曝光时间与云台接收到与当前图像帧相对应的检测信息的接收时间之间的时间间隔,即:Δt = t_{n+1} - t_n。
步骤1.2:基于当前图像帧,获得目标对象在当前图像帧的采集位置。
具体的,相机可以对当前图像帧进行分析处理,以确定目标对象在当前图像帧的采集位置为(x_n, y_n)。
步骤1.3:基于目标对象在当前图像帧的采集位置,确定与采集位置相对应的当前位置预测值(x̂_n, ŷ_n),具体的可以基于以下公式来实现:

x̂_n = x_n + (x_n - x̂_{n-1})/(t_n - t_{n-1})·Δt;

ŷ_n = y_n + (y_n - ŷ_{n-1})/(t_n - t_{n-1})·Δt。
其中,(x̂_{n-1}, ŷ_{n-1})为目标对象在上一图像帧中的前一位置预测值,(x_n, y_n)为目标对象在当前图像帧的采集位置,(x̂_n, ŷ_n)为与采集位置相对应的当前位置预测值,Δt为由相机和云台所构成的通信链路所对应的链路延时,t_n为与当前图像帧所对应的相机曝光时间戳,t_{n-1}为云台接收到上一图像帧的检测信息的时间戳。
需要注意的是,在n=1时,目标对象在上一图像帧中的前一位置预测值(x̂_{n-1}, ŷ_{n-1})与目标对象在当前图像帧的采集位置(x_n, y_n)相同。
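上述位置预测公式可以用如下Python草图加以说明(函数与参数的组织方式为示例性假设,数值关系与前述公式一致):

```python
def predict_position(p_n, p_prev_hat, t_n, t_prev, delta_t):
    """p_n: 当前采集位置 (x_n, y_n);p_prev_hat: 上一帧的前一位置预测值;
    t_n: 当前图像帧的相机曝光时间戳;t_prev: 云台接收上一帧检测信息的时间戳;
    delta_t: 链路延时Δt。返回当前位置预测值。"""
    # 以(采集位置-前一位置预测值)/(曝光时间-前一接收时间)作为目标移动速度
    vx = (p_n[0] - p_prev_hat[0]) / (t_n - t_prev)
    vy = (p_n[1] - p_prev_hat[1]) / (t_n - t_prev)
    # 移动速度×链路延时即为位置调整值,叠加到采集位置上得到预测值
    return (p_n[0] + vx * delta_t, p_n[1] + vy * delta_t)
```

例如,目标在图像横向匀速移动时,预测值会在采集位置的基础上沿移动方向前推一个链路延时所对应的位移,从而补偿链路延时的影响。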
步骤1.4:基于与采集位置相对应的当前位置预测值(x̂_n, ŷ_n),确定相机平面偏差。
具体的,相机平面偏差为归一化后的坐标值偏差,记作e_x和e_y。为了能够获取到相机平面偏差,则可以获取构图目标,具体实现时,可以将构图目标记作(tgt_x, tgt_y),而后基于构图目标和当前位置预测值确定相机平面偏差,具体的,可以基于以下公式来获取相机平面偏差:

e_x = x̂_n - tgt_x;

e_y = ŷ_n - tgt_y。
步骤2:对相机平面偏差进行坐标转换操作,确定用于对目标对象进行跟随操作的偏差角度。
步骤2.1:获取相机的实际画面视场角fov信息和云台的当前姿态信息。
其中,为了能够获取相机的实际画面视场角fov信息,则可以先获取到相机的焦距信息,基于焦距信息确定相机的实际画面视场角fov信息,需要注意的是,上述的焦距信息可以直接通过相机获取,或者也可以是用户基于具体的应用场景和应用需求进行配置所获得的。
步骤2.2:根据实际画面视场角fov信息和当前姿态信息,将相机平面偏差转换至大地坐标系NED(北、东、地坐标系),从而可以获得偏差角度。
具体的,相机坐标系可以记作b系,NED坐标系可以记作n系。
在相机坐标系下的偏差角度可以通过以下公式来获得:

E_x = 0;

E_y = e_y·FOV_y;

E_z = e_x·FOV_x。

其中,e_x和e_y为在相机平面进行归一化之后的坐标值偏差,FOV_x、FOV_y分别为相机在横向(x轴方向)和纵向(y轴方向)所对应的fov角度,E_x、E_y、E_z为相机坐标系下各个轴所对应的偏差角度,其矩阵表示如下:

E^b = [E_x, E_y, E_z]^T。
通过IMU可以测得云台姿态和所对应的旋转矩阵R_b^n,并可以根据下式得到NED坐标系下的角度偏差:

E^n = R_b^n·E^b。

其中,E^n为在大地坐标系NED中所对应的偏差角度,R_b^n为与云台姿态所对应的旋转矩阵,E^b为在相机坐标系中所对应的偏差角度。
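从相机平面偏差到NED坐标系偏差角度的换算,可以用如下不依赖第三方库的Python草图示意(旋转矩阵以嵌套列表表示;E_y、E_z与e_y·FOV_y、e_x·FOV_x的对应关系按上文公式给出,具体轴向约定以实际产品为准):

```python
def deviation_to_ned(e_x, e_y, fov_x, fov_y, r_bn):
    """e_x, e_y: 归一化后的相机平面坐标值偏差;
    fov_x, fov_y: 相机横向/纵向所对应的fov角度;
    r_bn: IMU测得的云台姿态对应的3x3旋转矩阵(相机系b→大地系n)。
    返回NED坐标系下各轴的偏差角度 E^n。"""
    e_b = [0.0,            # 滚转轴(x)不产生构图偏差
           e_y * fov_y,    # 俯仰轴(y)偏差角度
           e_x * fov_x]    # 偏航轴(z)偏差角度
    # E^n = R_b^n · E^b(矩阵-向量乘法)
    return [sum(r_bn[i][j] * e_b[j] for j in range(3)) for i in range(3)]
```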
在一些实例中,云台可以对应有不同的跟随模式,该跟随模式可以包括单轴跟随模式、双轴跟随模式和全跟随模式,不同的跟随模式可以对应有不同的偏差角度。在云台为单轴跟随模式时,所获得的偏差角度可以与云台的单轴相对应,例如:偏差角度与yaw轴相对应,与其他两个轴所对应的偏差角度调整为零。在云台为双轴跟随模式时,所获得的偏差角度可以与云台的两个轴相对应,例如:偏差角度与yaw轴和pitch轴相对应,与其他轴所对应的偏差角度调整为零。在云台为全跟随模式时,所获得的偏差角度可以与云台的三个轴相对应,例如:偏差角度与yaw轴、pitch轴和roll轴相对应。
步骤3:基于偏差角度对云台进行控制,以实现对目标对象进行跟随操作。
其中,云台上可以设置有云台控制器,该云台控制器可以包括三个比例积分微分(Proportional Integral Derivative,简称PID)控制器,具体结构参考附图26,其具体可以包括跟踪环PID控制器、位置环PID控制器和速度环PID控制器。
在获取到偏差角度E^n之后,则可以将偏差角度E^n输入至PID控制器中,从而可以获得用于控制云台电机进行转动的控制参数。
需要注意的是,本领域技术人员可以根据不同的需求调节三个PID控制器的相关参数。在PID控制器的带宽越高时,跟随性能越好,但是其平滑性会降低,反之带宽越低跟随性能越差,平滑度增高。
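作为示意,单个PID控制器及三环串级的骨架可以写成如下Python草图(类名、串级组织方式与具体增益均为示例性假设,实际的跟踪环/位置环/速度环实现以具体产品为准):

```python
class PID:
    """离散形式的比例-积分-微分控制器。"""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def step(self, err, dt):
        self.integral += err * dt  # 积分项累加
        # 首次调用无历史误差,微分项取0
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv


def cascade_step(track, pos, vel, err, dt):
    """跟踪环→位置环→速度环串级:上一环的输出作为下一环的输入。"""
    return vel.step(pos.step(track.step(err, dt), dt), dt)
```

提高各环增益相当于提高控制带宽:跟随性能更好,但平滑性下降;反之则跟随性能下降、平滑度提高,与上文描述一致。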
步骤4:云台智能跟随策略。
其中,云台智能跟随策略可以包括以下三大方面:跟随目标和丢失目标的缓启停策略、根据所跟随的不同物体类型调整云台控制器、根据历史对焦位置来确定对焦偏移量,下面对上述三大方面的云台智能跟随策略进行说明:
步骤4.1:开始跟随目标和丢失目标缓启停策略。
其中,缓启停策略即为匀加速策略或匀减速策略。设置加减速时间阈值为T,已知E^n为NED坐标系下各个轴的偏差角度,实际输出给云台控制器的实际偏差角度记作Ê^n。具体的,输出给云台控制器的实际偏差角度Ê^n与偏差角度E^n之间存在以下关系:
(a)开始跟随目标的匀加速运动中:

Ê^n = (t/T)·E^n(t<T);Ê^n = E^n(t≥T)。
其中,t为开始跟随目标对象的时长信息,T为预设时间阈值,用户可以根据具体的应用场景和应用需求对预设时间阈值的具体时间长短进行设置,一般情况下,T可以为0.5s或者1s。
具体的,在开始对目标对象进行跟随操作时,为了避免云台突然对目标对象进行跟随操作,在对目标对象进行跟随操作的时长信息小于预设时间阈值时,则可以获取与偏差角度E^n相对应的实际偏差角度Ê^n,该实际偏差角度Ê^n即为0与偏差角度E^n之间的过渡参数,从而可以实现缓慢启动对目标对象进行跟随操作。在对目标对象进行跟随操作的时长信息大于或等于预设时间阈值时,则可以将实际偏差角度Ê^n确定为偏差角度E^n,即可以稳定地对目标对象进行跟随操作。
(b)丢失目标的匀减速运动中:

Ê^n = (1-t/T)·E^n(t<T);Ê^n = 0(t≥T)。
其中,t为开始丢失目标对象的时长信息,T为预设时间阈值,用户可以根据具体的应用场景和应用需求对预设时间阈值的具体时间长短进行设置,一般情况下,T可以为1s、1.5s或者2s等等。
具体的,在目标对象丢失之后,为了避免云台突然停止对目标对象进行跟随操作,在丢失目标对象的时长信息小于预设时间阈值时,则可以获取与偏差角度E^n相对应的实际偏差角度Ê^n,该实际偏差角度Ê^n即为偏差角度E^n与0之间的过渡参数,从而可以实现缓慢停止对目标对象进行跟随操作。在丢失目标对象的时长信息大于或等于预设时间阈值时,则可以将实际偏差角度Ê^n置为0,即停止对目标对象进行跟随操作。
步骤4.2:根据所跟随的不同物体类型调整云台控制器。
其中,云台可以根据所跟随的不同物体类型调整云台控制器,具体可以包括以下几类:
(a)在待跟随的目标对象为人物时,为了提高跟随质量和效果,则可以提高平移跟随性能(与yaw方向相对应),并降低俯仰跟随性能(与pitch方向相对应);
(b)在待跟随的目标对象为宠物时,为了提高跟随质量和效果,则可以提高平移跟随性能(与yaw方向相对应),并提高俯仰跟随性能(与pitch方向相对应);
(c)在待跟随的目标对象为建筑物时,为了提高跟随质量和效果,则可以降低平移跟随性能(与yaw方向相对应),并降低俯仰跟随性能(与pitch方向相对应)。
(d)在待跟随的目标对象为其他物体时,则可以保持控制参数不变。
步骤4.3:根据历史对焦位置来确定对焦偏移量。
其中,在利用图像采集装置针对某一目标对象进行图像采集操作的过程中,历史图像帧所对应的历史对焦位置与当前图像帧所对应的当前对焦位置可以不同。此时,为了避免因对焦位置偏移而使得云台发生抖动或者抽动的情况,当检测到当前对焦位置和历史对焦位置不同时,则可以获取当前对焦位置与历史对焦位置之间的对焦偏移量,基于对焦偏移量来确定与目标对象相对应的目标对焦位置。
具体的,在对焦偏移量小于或等于预设阈值时,即说明当前对焦位置与历史对焦位置之间的距离比较近,进而则可以将当前对焦位置确定为与目标对象相对应的目标对焦位置。
在对焦偏移量大于预设阈值时,即说明当前对焦位置与历史对焦位置之间的距离比较远,进而则可以基于对焦偏移量对当前对焦位置进行调整,从而可以获得与目标对象相对应的目标对焦位置。
在对焦偏移量大于预设阈值时,即说明当前对焦位置与历史对焦位置之间的距离比较远,此时则可以检测目标对象是否发生改变,在目标对象发生改变之后,则可以基于更改后的目标对象对构图目标位置进行更新,获得更新后目标位置,以基于更新后目标位置对云台进行控制,实现对更改后的目标对象进行跟随操作。
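根据对焦偏移量与预设阈值的比较来确定目标对焦位置的逻辑,可以用如下Python草图示意(其中"偏移较大时取历史与当前对焦位置的中点"仅为一种示例性的调整方式,专利文本并未限定具体的调整算法):

```python
def target_focus_position(current, history, thresh):
    """current / history: 当前与历史对焦位置 (x, y);thresh: 预设阈值。"""
    dx = current[0] - history[0]
    dy = current[1] - history[1]
    offset = (dx * dx + dy * dy) ** 0.5  # 当前与历史对焦位置之间的对焦偏移量
    if offset <= thresh:
        # 偏移量较小:直接将当前对焦位置确定为目标对焦位置
        return current
    # 偏移量较大:基于偏移量对当前对焦位置进行调整(此处取中点,仅为示例)
    return ((current[0] + history[0]) / 2.0, (current[1] + history[1]) / 2.0)
```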
本应用实施例提供的基于相机而实现的智能跟随方法,有效地解决了以下问题:(1)解决实时图像因为HDMI传输产生的延时时间比较长而导致跟随效果差的问题;(2)解决云台实现目标跟随,增加了额外的AI机器学习算法的开发成本以及硬件设计成本问题;(3)解决相机跟随目标类型变化导致坐标点跳跃的问题;(4)解决对焦点和跟随点不统一问题,不会出现被跟随目标虚焦情况;进一步保证了对目标对象进行跟随操作的质量和效果,有效地提高了该方法使用的稳定可靠性。
图27为本发明实施例提供的一种云台的控制装置的结构示意图;参考附图27所示,本实施例提供了一种云台的控制装置,其中,云台通信连接有图像采集装置,该云台的控制装置可以执行与图2相对应的云台的控制方法。具体的,本实施例中的装置可以包括:
第一存储器12,用于存储计算机程序;
第一处理器11,用于运行第一存储器12中存储的计算机程序以实现:
获取目标对象在采集图像中的采集位置,采集位置是通过图像采集装置所确定的;
基于采集位置,确定用于对目标对象进行跟随操作的控制参数;
根据控制参数对云台进行控制,以实现对目标对象进行跟随操作。
其中,云台的控制装置的结构中还可以包括第一通信接口13,用于电子设备与其他设备或通信网络通信。
在一些实例中,在第一处理器11获取目标对象在采集图像中的采集位置时,第一处理器11用于:通过图像采集装置获取与目标对象相对应的目标对焦位置;将目标对焦位置确定为目标对象在采集图像中的采集位置。
在一些实例中,在第一处理器11通过图像采集装置获取与目标对象相对应的目标对焦位置时,第一处理器11用于:通过图像采集装置获取与目标对象相对应的历史对焦位置和当前对焦位置;基于历史对焦位置和当前对焦位置,确定与目标对象相对应的目标对焦位置。
在一些实例中,在第一处理器11基于历史对焦位置和当前对焦位置,确定与目标对象相对应的目标对焦位置时,第一处理器11用于:确定与历史对焦位置相对应的历史对象部位和与当前对焦位置相对应的当前对象部位;根据历史对象部位与当前对象部位,确定与目标对象相对应的目标对焦位置。
在一些实例中,在第一处理器11根据历史对象部位与当前对象部位,确定与目标对象相对应的目标对焦位置时,第一处理器11用于:在历史对象部位与当前对象部位为同一目标对象的不同部位时,则获取历史对象部位与当前对象部位之间的相对位置信息;基于相对位置信息对当前对焦位置进行调整,获得与目标对象相对应的目标对焦位置。
在一些实例中,在第一处理器11基于相对位置信息对当前对焦位置进行调整,获得与目标对象相对应的目标对焦位置时,第一处理器11用于:在相对位置信息大于或等于预设阈值时,基于相对位置信息对当前对焦位置进行调整,获得与目标对象相对应的目标对焦位置;在相对位置信息小于预设阈值时,将当前对焦位置确定为与目标对象相对应的目标对焦位置。
在一些实例中,在第一处理器11根据历史对象部位与当前对象部位,确定与目标对象相对应的目标对焦位置时,第一处理器11用于:在历史对象部位与当前对象部位为同一目标对象的不同部位时,基于当前对焦位置对构图目标位置进行更新,获得第一更新后构图目标位置;基于第一更新后构图目标位置对目标对象进行跟随操作。
在一些实例中,第一处理器11用于:检测进行跟随操作的目标对象是否发生改变;在目标对象由第一对象改变为第二对象时,获取第二对象在采集图像中的采集位置;基于第二对象在采集图像中的采集位置对构图目标位置进行更新,获得与第二对象相对应的第二更新后构图目标位置,以基于第二更新后构图目标位置对第二对象进行跟随操作。
在一些实例中,在第一处理器11基于采集位置,确定用于对目标对象进行跟随操作的控制参数时,第一处理器11用于:计算与采集位置相对应的当前位置预测值;基于当前位置预测值,确定用于对目标对象进行跟随操作的控制参数。
在一些实例中,在第一处理器11计算与采集位置相对应的当前位置预测值时,第一处理器11用于:确定与采集位置相对应的延时时间,延时时间用于指示云台经由图像采集装置获得采集位置所需要的时长;基于延时时间和采集位置,确定与采集位置相对应的当前位置预测值。
在一些实例中,在第一处理器11确定与采集位置相对应的延时时间时,第一处理器11用于:获取与采集图像相对应的曝光时间;在云台获取到当前采集位置时,确定与当前采集位置相对应的当前接收时间;将当前接收时间与曝光时间之间的时间间隔,确定为与采集位置相对应的延时时间。
在一些实例中,在第一处理器11基于延时时间和采集位置,确定与采集位置相对应的当前位置预测值时,第一处理器11用于:在云台获取到前一采集位置时,确定与前一采集位置相对应的前一接收时间;确定与前一采集位置相对应的前一位置预测值;根据采集位置、曝光时间、延时时间、前一接收时间和前一位置预测值,计算与采集位置相对应的当前位置预测值。
在一些实例中,在第一处理器11根据采集位置、曝光时间、延时时间、前一接收时间和前一位置预测值,计算与采集位置相对应的当前位置预测值时,第一处理器11用于:基于采集位置、曝光时间、延时时间、前一接收时间和前一位置预测值,确定与采集位置相对应的位置调整值;将位置调整值与采集位置的和值,确定为与采集位置相对应的当前位置预测值。
在一些实例中,在第一处理器11基于采集位置、曝光时间、延时时间、前一接收时间和前一位置预测值,确定与采集位置相对应的位置调整值时,第一处理器11用于:基于采集位置、前一位置预测值、曝光时间和前一接收时间,确定与目标对象相对应的移动速度;将移动速度与时间间隔之间的乘积值,确定为与采集位置相对应的位置调整值。
在一些实例中,在第一处理器11基于采集位置、前一位置预测值、曝光时间和前一接收时间,确定与目标对象相对应的移动速度时,第一处理器11用于:获取采集位置与前一位置预测值之间的位置差值以及曝光时间与前一接收时间之间的时间差值;将位置差值与时间差值之间的比值,确定为与目标对象相对应的移动速度。
在一些实例中,在第一处理器11基于当前位置预测值,确定用于对目标对象进行跟随操作的控制参数时,第一处理器11用于:确定当前位置预测值与构图目标位置之间的位置偏差;基于位置偏差,确定用于对目标对象进行跟随操作的控制参数。
在一些实例中,在第一处理器11基于位置偏差,确定用于对目标对象进行跟随操作的控制参数时,第一处理器11用于:获取与采集图像相对应的画面视场角;基于画面视场角和位置偏差,确定用于对目标对象进行跟随操作的控制参数。
在一些实例中,控制参数与画面视场角呈负相关。
在一些实例中,在第一处理器11基于当前位置预测值,确定用于对目标对象进行跟随操作的控制参数时,第一处理器11用于:获取与云台相对应的跟随模式,跟随模式包括以下任意之一:单轴跟随模式、双轴跟随模式、全跟随模式;基于当前位置预测值和跟随模式,确定用于对目标对象进行跟随操作的控制参数。
在一些实例中,在第一处理器11基于当前位置预测值和跟随模式,确定用于对目标对象进行跟随操作的控制参数时,第一处理器11用于:基于当前位置预测值,确定用于对目标对象进行跟随操作的备选控制参数;在备选控制参数中,确定与跟随模式相对应的目标控制参数。
在一些实例中,在第一处理器11在备选控制参数中,确定与跟随模式相对应的目标控制参数时,第一处理器11用于:在跟随模式为单轴跟随模式时,在备选控制参数中,确定与单轴跟随模式相对应的单轴控制参数,并将其他备选控制参数置零;在跟随模式为双轴跟随模式时,在备选控制参数中,确定与双轴跟随模式相对应的双轴控制参数,并将其他备选控制参数置零;在跟随模式为全跟随模式时,将备选控制参数确定为与全跟随模式相对应的三轴控制参数。
在一些实例中,在第一处理器11根据控制参数对云台进行控制时,第一处理器11用于:获取与目标对象所对应的云台运动模型;基于云台运动模型和控制参数对云台进行控制。
在一些实例中,在第一处理器11基于云台运动模型和控制参数对云台进行控制时,第一处理器11用于:获取用于对目标对象进行跟随操作所对应的时长信息;在时长信息小于第一时间阈值时,则基于云台运动模型对控制参数进行更新,获得更新后控制参数,并基于更新后控制参数对云台进行控制;在时长信息大于或等于第一时间阈值、且云台运动模型为匀加速运动时,利用控制参数对云台进行控制。
在一些实例中,在第一处理器11基于云台运动模型对控制参数进行更新,获得更新后控制参数时,第一处理器11用于:基于云台运动模型,确定与控制参数相对应的更新系数,其中,更新系数小于1;将更新系数与控制参数的乘积值,确定为更新后控制参数。
在一些实例中,在第一处理器11基于云台运动模型,确定与控制参数相对应的更新系数时,第一处理器11用于:在云台运动模型为匀加速运动时,将时长信息与第一时间阈值之间的比值确定为与控制参数相对应的更新系数。
在一些实例中,在第一处理器11根据控制参数对云台进行控制时,第一处理器11用于:获取与目标对象相对应的跟随状态;基于跟随状态和控制参数对云台进行控制。
在一些实例中,在第一处理器11获取与目标对象相对应的跟随状态时,第一处理器11用于:检测进行跟随操作的目标对象是否发生改变;在目标对象由第一对象改变为第二对象时,则确定第一对象为丢失状态。
在一些实例中,在第一处理器11基于跟随状态和控制参数对云台进行控制时,第一处理器11用于:在目标对象为丢失状态时,则获取对目标对象进行跟随操作过程中所对应的丢失时长信息;根据丢失时长信息对控制参数进行更新,获得更新后控制参数;基于更新后控制参数对云台进行控制。
在一些实例中,在第一处理器11根据丢失时长信息对控制参数进行更新,获得更新后控制参数时,第一处理器11用于:在丢失时长信息大于或等于第二时间阈值时,将控制参数更新为零;在丢失时长信息小于第二时间阈值时,获取丢失时长信息与第二时间阈值之间的比值,并将1与比值之间的差值确定为与控制参数相对应的更新系数,将更新系数与控制参数的乘积值确定为更新后控制参数。
在一些实例中,在第一处理器11根据控制参数对云台进行控制时,第一处理器11用于:获取目标对象的对象类型;根据对象类型和控制参数对云台进行控制。
在一些实例中,在第一处理器11根据对象类型和控制参数对云台进行控制时,第一处理器11用于:根据对象类型对控制参数进行调整,获得调整后参数;基于调整后参数对云台进行控制。
在一些实例中,在第一处理器11根据对象类型对控制参数进行调整,获得调整后参数时,第一处理器11用于:在目标对象为静止对象时,则降低云台在偏航方向所对应的控制带宽和云台在俯仰方向所对应的控制带宽;在目标对象为移动对象、且移动对象的高度大于或等于高度阈值时,则提高云台在偏航方向所对应的控制带宽,并降低云台在俯仰方向所对应的控制带宽;在目标对象为移动对象、且移动对象的高度小于高度阈值时,则提高云台在偏航方向所对应的控制带宽和云台在俯仰方向所对应的控制带宽。
在一些实例中,第一处理器11用于:通过显示界面获取用户针对图像采集装置所输入的执行操作;根据执行操作对图像采集装置进行控制,以使得图像采集装置确定采集位置。
在一些实例中,第一处理器11用于:通过设置于图像采集装置上的测距传感器获取与目标对象相对应的距离信息;将距离信息发送至图像采集装置,以使图像采集装置结合距离信息确定目标对象在采集图像中的采集位置。
在一些实例中,第一处理器11用于:确定图像采集装置所对应的工作模式,工作模式包括以下任意之一:先跟随后对焦模式、先对焦后跟随模式;利用工作模式对图像采集装置进行控制。
在一些实例中,云台设有通信串行总线USB接口,USB接口用于与图像采集装置有线通信连接。
图27所示装置可以执行图2至图20、图25至图26中所示的实施例的方法,本实施例未详细描述的部分,可参考对图2至图20、图25至图26中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图2至图20、图25至图26所示实施例中的描述,在此不再赘述。
图28为本发明实施例提供的一种云台系统的控制装置的结构示意图;参考附图28所示,本实施例提供了一种云台系统的控制装置,其中,云台系统包括云台和与云台通信连接的图像采集装置,该云台系统的控制装置可以执行与图21相对应的云台系统的控制方法。具体的,本实施例中的装置可以包括:
第二存储器22,用于存储计算机程序;
第二处理器21,用于运行第二存储器22中存储的计算机程序以实现:
控制图像采集装置采集图像,并获取目标对象在图像中的采集位置,采集位置是通过图像采集装置所确定的;
将采集位置传输至云台;
控制云台按照控制参数进行运动,以实现对目标对象进行跟随操作,其中,控制参数是基于采集位置所确定的。
其中,云台系统的控制装置的结构中还可以包括第二通信接口23,用于电子设备与其他设备或通信网络通信。
图28所示装置可以执行图21、图25至图26中所示的实施例的方法,本实施例未详细描述的部分,可参考对图21、图25至图26中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图21、图25至图26所示实施例中的描述,在此不再赘述。
图29为本发明实施例提供的另一种云台的控制装置的结构示意图;参考附图29所示,本实施例提供了一种云台的控制装置,用于云台,云台通信连接有图像采集装置,该云台的控制装置可以执行与图22相对应的云台的控制方法。具体的,本实施例中的装置可以包括:
第三存储器32,用于存储计算机程序;
第三处理器31,用于运行第三存储器32中存储的计算机程序以实现:
获取采集图像,采集图像中包括目标对象;
在采集图像中确定目标对象的位置,以基于目标对象的位置对目标对象进行跟随操作;
将目标对象的位置发送至图像采集装置,以使得图像采集装置基于目标对象的位置,确定与目标对象相对应的对焦位置,并基于对焦位置对目标对象进行对焦操作。
其中,云台的控制装置的结构中还可以包括第三通信接口33,用于电子设备与其他设备或通信网络通信。
图29所示装置可以执行图22、图25至图26中所示的实施例的方法,本实施例未详细描述的部分,可参考对图22、图25至图26中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图22、图25至图26所示实施例中的描述,在此不再赘述。
图30为本发明实施例提供的另一种云台系统的控制装置的结构示意图;参考附图30所示,本实施例提供了另一种云台系统的控制装置,其中,云台系统包括云台和与云台通信连接的图像采集装置,该云台系统的控制装置可以执行与图23相对应的云台系统的控制方法。具体的,本实施例中的装置可以包括:
第四存储器42,用于存储计算机程序;
第四处理器41,用于运行第四存储器42中存储的计算机程序以实现:
控制图像采集装置采集图像,图像包括目标对象;
在图像中确定目标对象的位置;
基于目标对象的位置控制云台对目标对象进行跟随操作,并根据目标对象的位置控制图像采集装置对目标对象进行对焦操作。
其中,云台系统的控制装置的结构中还可以包括第四通信接口43,用于电子设备与其他设备或通信网络通信。
图30所示装置可以执行图23、图25至图26中所示的实施例的方法,本实施例未详细描述的部分,可参考对图23、图25至图26中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图23、图25至图26所示实施例中的描述,在此不再赘述。
图31为本发明实施例提供的又一种云台系统的控制装置的结构示意图,参考附图31所示,本实施例提供了又一种云台系统的控制装置,其中,云台系统包括云台和与云台通信连接的图像采集装置,该云台系统的控制装置可以执行与图24相对应的云台系统的控制方法。具体的,本实施例中的装置可以包括:
第五存储器52,用于存储计算机程序;
第五处理器51,用于运行第五存储器52中存储的计算机程序以实现:
获取第一对象在采集图像中的采集位置,第一对象的采集位置用于云台对第一对象进行跟随操作,以及用于图像采集装置对第一对象进行对焦操作;
在第一对象改变为第二对象时,获取第二对象在采集图像中的采集位置,以使得云台由对第一对象的跟随操作变化为基于第二对象的采集位置对第二对象进行跟随操作,以及使得图像采集装置由对第一对象的对焦操作变化为基于第二对象的位置对第二对象进行对焦操作。
其中,云台系统的控制装置的结构中还可以包括第五通信接口53,用于电子设备与其他设备或通信网络通信。
图31所示装置可以执行图24至图26中所示的实施例的方法,本实施例未详细描述的部分,可参考对图24至图26中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图24至图26所示实施例中的描述,在此不再赘述。
可以理解,上述的任一实施例的控制装置可以独立于云台或图像采集装置,也可以是集成于云台或图像采集装置。
图32为本发明实施例提供的一种云台的控制系统的结构示意图;参考附图32所示,本实施例提供了一种云台的控制系统,具体的,该控制系统可以包括:
云台61;
上述图27所示的云台的控制装置62,设置于云台61上,且用于与图像采集装置通信连接,并用于通过图像采集装置对云台61进行控制。
在一些实例中,本实施例中的控制系统还可以包括:
测距传感器63,设置于图像采集装置上,用于获取与目标对象相对应的距离信息;
其中,云台的控制装置62与测距传感器63通信连接,用于将距离信息发送至图像采集装置,以使图像采集装置结合距离信息确定目标对象在采集图像中的采集位置。
图32所示云台的控制系统的具体实现原理、实现过程和实现效果与图27所示的云台的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图27中所示的实施例的相关说明。
图33为本发明实施例提供的一种云台的控制系统的结构示意图;参考附图33所示,本实施例提供了一种云台的控制系统,具体的,该云台的控制系统可以包括:
云台71;
上述图28所对应的云台系统的控制装置73,设置于云台71上,且用于与图像采集装置72通信连接,并用于分别对图像采集装置72以及云台71进行控制。
图33所示云台的控制系统的具体实现原理、实现过程和实现效果与图28所示的云台系统的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图28中所示的实施例的相关说明。
图34为本发明实施例提供的另一种云台的控制系统的结构示意图;参考附图34所示,本实施例提供了另一种云台的控制系统,具体的,该云台的控制系统可以包括:
云台81;
上述图29的云台的控制装置82,设置于云台81上,且用于与图像采集装置通信连接,并用于通过云台81对图像采集装置进行控制。
图34所示云台的控制系统的具体实现原理、实现过程和实现效果与图29所示的云台的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图29中所示的实施例的相关说明。
图35为本发明实施例提供的又一种云台的控制系统的结构示意图;参考附图35所示,本实施例提供了又一种云台的控制系统,具体的,该云台的控制系统可以包括:
云台91;
上述图30的云台系统的控制装置92,设置于云台91上,且用于与图像采集装置通信连接,并用于分别对图像采集装置以及云台91进行控制。
图35所示云台的控制系统的具体实现原理、实现过程和实现效果与图30所示的云台系统的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图30中所示的实施例的相关说明。
图36为本发明实施例提供的另一种云台的控制系统的结构示意图;参考附图36所示,本实施例提供了另一种云台的控制系统,具体的,该云台的控制系统可以包括:
云台101;
上述图31所对应的云台系统的控制装置103,设置于云台101上,且用于与图像采集装置102通信连接,并用于分别对图像采集装置102以及云台101进行控制。
图36所示云台的控制系统的具体实现原理、实现过程和实现效果与图31所示的云台系统的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图31中所示的实施例的相关说明。
可以理解,上述各个实施例中的云台的控制系统中的控制装置可以集成于云台,其还可以进一步包括图像采集装置,该图像采集装置可以集成于云台上,或者,也可以与云台可拆卸连接。
图37为本发明实施例提供的一种可移动平台的结构示意图一;参考附图37所示,本实施例提供了一种可移动平台,具体的,该可移动平台可以包括:
云台112;
支撑机构111,用于连接云台112;
上述图27的云台的控制装置113,设置于云台112上,且用于与图像采集装置114通信连接,并用于通过图像采集装置114对云台112进行控制。
其中,支撑机构111随可移动平台的类型而不同,例如,当可移动平台为手持云台时,支撑机构111可以为手柄,当可移动平台为机载云台时,支撑机构111可以为用于搭载云台的机身。可以理解,可移动平台包括但不限于上述说明的类型。
图37所示可移动平台的具体实现原理、实现过程和实现效果与图27所示的云台的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图27中所示的实施例的相关说明。
图38为本发明实施例提供的一种可移动平台的结构示意图二;参考附图38所示,本实施例提供了一种可移动平台,具体的,该可移动平台可以包括:
云台122;
支撑机构121,用于连接云台122;
上述图28的云台系统的控制装置123,设置于云台122上,且用于与图像采集装置124通信连接,并用于分别对图像采集装置124以及云台122进行控制。
其中,支撑机构121随可移动平台的类型而不同,例如,当可移动平台为手持云台时,支撑机构121可以为手柄,当可移动平台为机载云台时,支撑机构121可以为用于搭载云台的机身。可以理解,可移动平台包括但不限于上述说明的类型。
图38所示可移动平台的具体实现原理、实现过程和实现效果与图28所示的云台系统的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图28中所示的实施例的相关说明。
图39为本发明实施例提供的一种可移动平台的结构示意图三;参考附图39所示,本实施例提供了一种可移动平台,具体的,该可移动平台可以包括:
云台132;
支撑机构131,用于连接云台132;
上述图29的云台的控制装置133,设置于云台132上,且用于与图像采集装置134通信连接,并用于通过云台132对图像采集装置134进行控制。
其中,支撑机构131随可移动平台的类型而不同,例如,当可移动平台为手持云台时,支撑机构131可以为手柄,当可移动平台为机载云台时,支撑机构131可以为用于搭载云台的机身。可以理解,可移动平台包括但不限于上述说明的类型。
图39所示可移动平台的具体实现原理、实现过程和实现效果与图29所示的云台的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图29中所示的实施例的相关说明。
图40为本发明实施例提供的一种可移动平台的结构示意图四;参考附图40所示,本实施例提供了一种可移动平台,具体的,该可移动平台可以包括:
云台142;
支撑机构141,用于连接云台142;
上述图30的云台系统的控制装置143,设置于云台142上,且用于与图像采集装置144通信连接,并用于分别对图像采集装置144以及云台142进行控制。
其中,支撑机构141随可移动平台的类型而不同,例如,当可移动平台为手持云台时,支撑机构141可以为手柄,当可移动平台为机载云台时,支撑机构141可以为用于搭载云台的机身。可以理解,可移动平台包括但不限于上述说明的类型。
图40所示可移动平台的具体实现原理、实现过程和实现效果与图30所示的云台系统的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图30中所示的实施例的相关说明。
图41为本发明实施例提供的一种可移动平台的结构示意图五;参考附图41所示,本实施例提供了一种可移动平台,具体的,该可移动平台可以包括:
云台152;
支撑机构151,用于连接云台152;
上述图31的云台系统的控制装置153,设置于云台152上,且用于与图像采集装置154通信连接,并用于分别对图像采集装置154以及云台152进行控制。
其中,支撑机构151随可移动平台的类型而不同,例如,当可移动平台为手持云台时,支撑机构151可以为手柄,当可移动平台为机载云台时,支撑机构151可以为用于搭载云台的机身。可以理解,可移动平台包括但不限于上述说明的类型。
图41所示可移动平台的具体实现原理、实现过程和实现效果与图31所示的云台系统的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图31中所示的实施例的相关说明。
可以理解,上述各个实施例中的可移动平台中的控制装置可以集成于云台,其还可以进一步包括图像采集装置,该图像采集装置也可以是集成于云台,或与云台可拆卸连接。
另外,本发明实施例提供了一种计算机可读存储介质,存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,程序指令用于实现上述图2至图20、图25至图26的云台的控制方法。
本发明实施例提供了一种计算机可读存储介质,存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,程序指令用于实现上述图21、图25至图26的云台系统的控制方法。
本发明实施例提供了一种计算机可读存储介质,存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,程序指令用于实现上述图22、图25至图26的云台的控制方法。
本发明实施例提供了一种计算机可读存储介质,存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,程序指令用于实现上述图23、图25至图26的云台系统的控制方法。
本发明实施例提供了一种计算机可读存储介质,存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,程序指令用于实现上述图24至图26的云台系统的控制方法。
以上各个实施例中的技术方案、技术特征在不相冲突的情况下均可以单独使用,或者进行组合,只要未超出本领域技术人员的认知范围,均属于本申请保护范围内的等同实施例。
在本发明所提供的几个实施例中,应该理解到,所揭露的相关控制装置和方法,可以通过其它的方式实现。例如,以上所描述的控制装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得计算机处理器(processor)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁盘或者光盘等各种可以存储程序代码的介质。
以上所述仅为本发明的实施例,并非因此限制本发明的专利范围,凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本发明的专利保护范围内。
最后应说明的是:以上各实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述各实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。

Claims (96)

  1. 一种云台的控制方法,其特征在于,所述方法包括:
    获取目标对象在采集图像中的采集位置,所述采集位置是通过图像采集装置所确定的,所述图像采集装置与所述云台通信连接;
    基于所述采集位置,确定用于对所述目标对象进行跟随操作的控制参数;
    根据所述控制参数对所述云台进行控制,以实现对所述目标对象进行跟随操作。
  2. 根据权利要求1所述的方法,其特征在于,获取目标对象在采集图像中的采集位置,包括:
    通过图像采集装置获取与所述目标对象相对应的目标对焦位置;
    将所述目标对焦位置确定为目标对象在采集图像中的采集位置。
  3. 根据权利要求2所述的方法,其特征在于,通过图像采集装置获取与所述目标对象相对应的目标对焦位置,包括:
    通过图像采集装置获取与目标对象相对应的历史对焦位置和当前对焦位置;
    基于所述历史对焦位置和所述当前对焦位置,确定与所述目标对象相对应的目标对焦位置。
  4. 根据权利要求3所述的方法,其特征在于,基于所述历史对焦位置和所述当前对焦位置,确定与所述目标对象相对应的目标对焦位置,包括:
    确定与所述历史对焦位置相对应的历史对象部位和与所述当前对焦位置相对应的当前对象部位;
    根据所述历史对象部位与所述当前对象部位,确定与所述目标对象相对应的目标对焦位置。
  5. 根据权利要求4所述的方法,其特征在于,根据所述历史对象部位与所述当前对象部位,确定与所述目标对象相对应的目标对焦位置,包括:
    在所述历史对象部位与所述当前对象部位为同一目标对象的不同部位时,则获取所述历史对象部位与所述当前对象部位之间的相对位置信息;
    基于所述相对位置信息对所述当前对焦位置进行调整,获得与所述目标对象相对应的目标对焦位置。
  6. 根据权利要求5所述的方法,其特征在于,基于所述相对位置信息对所述当前对焦位置进行调整,获得与所述目标对象相对应的目标对焦位置,包括:
    在所述相对位置信息大于或等于预设阈值时,基于所述相对位置信息对所述当前对焦位置进行调整,获得与所述目标对象相对应的目标对焦位置;
    在所述相对位置信息小于预设阈值时,将所述当前对焦位置确定为与所述目标对象相对应的目标对焦位置。
  7. 根据权利要求4所述的方法,其特征在于,根据所述历史对象部位与所述当前对象部位,确定与所述目标对象相对应的目标对焦位置,包括:
    在所述历史对象部位与所述当前对象部位为同一目标对象的不同部位时,基于所述当前对焦位置对构图目标位置进行更新,获得第一更新后构图目标位置;
    基于所述第一更新后构图目标位置对所述目标对象进行跟随操作。
  8. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    检测进行跟随操作的目标对象是否发生改变;
    在所述目标对象由第一对象改变为第二对象时,获取所述第二对象在所述采集图像中的采集位置;
    基于所述第二对象在所述采集图像中的采集位置对构图目标位置进行更新,获得与所述第二对象相对应的第二更新后构图目标位置,以基于所述第二更新后构图目标位置对所述第二对象进行跟随操作。
  9. 根据权利要求1所述的方法,其特征在于,基于所述采集位置,确定用于对所述目标对象进行跟随操作的控制参数,包括:
    计算与所述采集位置相对应的当前位置预测值;
    基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的控制参数。
  10. 根据权利要求9所述的方法,其特征在于,计算与所述采集位置相对应的当前位置预测值,包括:
    确定与所述采集位置相对应的延时时间,所述延时时间用于指示所述云台经由所述图像采集装置获得所述采集位置所需要的时长;
    基于所述延时时间和所述采集位置,确定与所述采集位置相对应的当前位置预测值。
  11. 根据权利要求10所述的方法,其特征在于,确定与所述采集位置相对应的延时时间,包括:
    获取与所述采集图像相对应的曝光时间;
    在所述云台获取到当前采集位置时,确定与当前采集位置相对应的当前接收时间;
    将所述当前接收时间与所述曝光时间之间的时间间隔,确定为与所述采集位置相对应的延时时间。
  12. 根据权利要求10所述的方法,其特征在于,基于所述延时时间和所述采集位置,确定与所述采集位置相对应的当前位置预测值,包括:
    在所述云台获取到前一采集位置时,确定与前一采集位置相对应的前一接收时间;
    确定与所述前一采集位置相对应的前一位置预测值;
    根据所述采集位置、曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,计算与所述采集位置相对应的当前位置预测值。
  13. 根据权利要求12所述的方法,其特征在于,根据所述采集位置、所述曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,计算与所述采集位置相对应的当前位置预测值,包括:
    基于所述采集位置、所述曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,确定与所述采集位置相对应的位置调整值;
    将所述位置调整值与所述采集位置的和值,确定为与所述采集位置相对应的当前位置预测值。
  14. 根据权利要求13所述的方法,其特征在于,基于所述采集位置、所述曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,确定与所述采集位置相对应的位置调整值,包括:
    基于所述采集位置、所述前一位置预测值、所述曝光时间和所述前一接收时间,确定与所述目标对象相对应的移动速度;
    将所述移动速度与所述时间间隔之间的乘积值,确定为与所述采集位置相对应的位置调整值。
  15. 根据权利要求14所述的方法,其特征在于,基于所述采集位置、所述前一位置预测值、所述曝光时间和所述前一接收时间,确定与所述目标对象相对应的移动速度,包括:
    获取所述采集位置与前一位置预测值之间的位置差值以及所述曝光时间与所述前一接收时间之间的时间差值;
    将所述位置差值与所述时间差值之间的比值,确定为与所述目标对象相对应的移动速度。
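权利要求10-15描述的延时补偿流程(延时时间、移动速度、位置调整值、当前位置预测值)可以用如下示意代码串联起来。这只是按权利要求文字给出的一种假设性实现草图,函数名与参数名均为示例,并非申请文件中的实际实现:

```python
def predict_position(pos, exposure_time, recv_time, prev_recv_time, prev_pred):
    """对采集位置做延时补偿的示意实现。

    pos:            当前采集位置(目标对象在采集图像中的位置)
    exposure_time:  采集图像的曝光时间
    recv_time:      云台获取到当前采集位置的当前接收时间
    prev_recv_time: 云台获取到前一采集位置的前一接收时间
    prev_pred:      前一采集位置对应的前一位置预测值
    """
    # 延时时间 = 当前接收时间 - 曝光时间(权利要求11)
    delay = recv_time - exposure_time
    # 移动速度 = 位置差值 / 时间差值(权利要求15)
    velocity = (pos - prev_pred) / (exposure_time - prev_recv_time)
    # 位置调整值 = 移动速度 × 时间间隔(权利要求14)
    adjustment = velocity * delay
    # 当前位置预测值 = 采集位置 + 位置调整值(权利要求13)
    return pos + adjustment
```

例如,采集位置为 10、曝光时间 1.0、接收时间 1.2、前一接收时间 0.8、前一预测值 8 时,速度为 10,延时 0.2,预测值为 12。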
  16. 根据权利要求9所述的方法,其特征在于,基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的控制参数,包括:
    确定所述当前位置预测值与构图目标位置之间的位置偏差;
    基于所述位置偏差,确定用于对所述目标对象进行跟随操作的控制参数。
  17. 根据权利要求16所述的方法,其特征在于,基于所述位置偏差,确定用于对所述目标对象进行跟随操作的控制参数,包括:
    获取与所述采集图像相对应的画面视场角;
    基于所述画面视场角和所述位置偏差,确定用于对所述目标对象进行跟随操作的控制参数。
  18. 根据权利要求17所述的方法,其特征在于,所述控制参数与所述画面视场角呈负相关。
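权利要求16-18把当前位置预测值与构图目标位置之间的位置偏差、画面视场角共同折算为控制参数,且控制参数与视场角呈负相关。下面是体现这一关系的一种假设性草图,其中增益 k 以及"以视场角为分母"这一具体形式均为示例性假设:

```python
def follow_control(pred, target, fov, k=1.0):
    """由位置偏差与画面视场角确定跟随控制参数的示意实现。

    pred:   当前位置预测值
    target: 构图目标位置
    fov:    画面视场角(需为正数)
    k:      假设的控制增益
    """
    deviation = pred - target  # 位置偏差(权利要求16)
    # 控制参数与画面视场角呈负相关(权利要求18):视场角越大,参数越小
    return k * deviation / fov
```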
  19. 根据权利要求9所述的方法,其特征在于,基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的控制参数,包括:
    获取与所述云台相对应的跟随模式,所述跟随模式包括以下任意之一:单轴跟随模式、双轴跟随模式、全跟随模式;
    基于所述当前位置预测值和跟随模式,确定用于对所述目标对象进行跟随操作的控制参数。
  20. 根据权利要求19所述的方法,其特征在于,基于所述当前位置预测值和跟随模式,确定用于对所述目标对象进行跟随操作的控制参数,包括:
    基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的备选控制参数;
    在所述备选控制参数中,确定与所述跟随模式相对应的目标控制参数。
  21. 根据权利要求20所述的方法,其特征在于,在所述备选控制参数中,确定与所述跟随模式相对应的目标控制参数,包括:
    在所述跟随模式为单轴跟随模式时,在所述备选控制参数中,确定与所述单轴跟随模式相对应的单轴控制参数,并将其他备选控制参数置零;
    在所述跟随模式为双轴跟随模式时,在所述备选控制参数中,确定与所述双轴跟随模式相对应的双轴控制参数,并将其他备选控制参数置零;
    在所述跟随模式为全跟随模式时,将所述备选控制参数确定为与所述全跟随模式相对应的三轴控制参数。
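权利要求19-21中"按跟随模式保留对应轴的控制参数、其余备选控制参数置零"的逻辑,可以用如下示意代码表达。其中单轴对应偏航、双轴对应偏航加俯仰只是为说明而做的假设,权利要求本身并未限定具体轴:

```python
def apply_follow_mode(params, mode):
    """按跟随模式筛选备选控制参数的示意实现。

    params: 三轴备选控制参数,如 {"yaw": ..., "pitch": ..., "roll": ...}
    mode:   "single"(单轴)、"dual"(双轴)或 "full"(全跟随)
    """
    # 各模式保留的轴(单轴取偏航、双轴取偏航+俯仰,均为示例性假设)
    kept_axes = {
        "single": {"yaw"},
        "dual": {"yaw", "pitch"},
        "full": {"yaw", "pitch", "roll"},
    }[mode]
    # 保留对应轴的控制参数,其余备选控制参数置零(权利要求21)
    return {axis: (v if axis in kept_axes else 0.0) for axis, v in params.items()}
```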
  22. 根据权利要求1-21中任意一项所述的方法,其特征在于,根据所述控制参数对所述云台进行控制,包括:
    获取与所述目标对象所对应的云台运动模型;
    基于所述云台运动模型和所述控制参数对所述云台进行控制。
  23. 根据权利要求22所述的方法,其特征在于,基于所述云台运动模型和所述控制参数对所述云台进行控制,包括:
    获取用于对所述目标对象进行跟随操作所对应的时长信息;
    在所述时长信息小于第一时间阈值时,则基于所述云台运动模型对所述控制参数进行更新,获得更新后控制参数,并基于所述更新后控制参数对所述云台进行控制;
    在所述时长信息大于或等于第一时间阈值、且所述云台运动模型为匀加速运动时,利用所述控制参数对所述云台进行控制。
  24. 根据权利要求23所述的方法,其特征在于,基于所述云台运动模型对所述控制参数进行更新,获得更新后控制参数,包括:
    基于所述云台运动模型,确定与所述控制参数相对应的更新系数,其中,所述更新系数小于1;
    将所述更新系数与所述控制参数的乘积值,确定为所述更新后控制参数。
  25. 根据权利要求24所述的方法,其特征在于,基于所述云台运动模型,确定与所述控制参数相对应的更新系数,包括:
    在所述云台运动模型为匀加速运动时,将所述时长信息与所述第一时间阈值之间的比值确定为与所述控制参数相对应的更新系数。
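权利要求23-25给出了跟随起始阶段的参数平滑方式:时长不足第一时间阈值时,以"时长/阈值"作为小于1的更新系数缩小控制参数;达到阈值后直接使用原控制参数。按权利要求文字可写成如下示意代码(仅为匀加速运动模型下的一种假设性草图):

```python
def smooth_start(param, elapsed, t1):
    """匀加速运动模型下按跟随时长缩放控制参数的示意实现。

    param:   原始控制参数
    elapsed: 对目标对象进行跟随操作所对应的时长信息
    t1:      第一时间阈值
    """
    if elapsed < t1:
        # 更新系数 = 时长 / 第一时间阈值,小于1(权利要求24、25)
        return (elapsed / t1) * param
    # 时长达到阈值后直接使用原控制参数(权利要求23)
    return param
```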
  26. 根据权利要求1-21中任意一项所述的方法,其特征在于,根据所述控制参数对所述云台进行控制,包括:
    获取与所述目标对象相对应的跟随状态;
    基于所述跟随状态和所述控制参数对所述云台进行控制。
  27. 根据权利要求26所述的方法,其特征在于,获取与所述目标对象相对应的跟随状态,包括:
    检测进行跟随操作的目标对象是否发生改变;
    在所述目标对象由第一对象改变为第二对象时,则确定所述第一对象为丢失状态。
  28. 根据权利要求26所述的方法,其特征在于,基于所述跟随状态和所述控制参数对所述云台进行控制,包括:
    在所述目标对象为丢失状态时,则获取对所述目标对象进行跟随操作过程中所对应的丢失时长信息;
    根据所述丢失时长信息对所述控制参数进行更新,获得更新后控制参数;
    基于所述更新后控制参数对所述云台进行控制。
  29. 根据权利要求28所述的方法,其特征在于,根据所述丢失时长信息对所述控制参数进行更新,获得更新后控制参数,包括:
    在所述丢失时长信息大于或等于第二时间阈值时,将所述控制参数更新为零;
    在所述丢失时长信息小于第二时间阈值时,获取所述丢失时长信息与所述第二时间阈值之间的比值,并将1与所述比值之间的差值确定为与所述控制参数相对应的更新系数,将所述更新系数与所述控制参数的乘积值确定为所述更新后控制参数。
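权利要求28-29描述了目标丢失后的参数衰减:丢失时长达到第二时间阈值时控制参数归零,否则以"1 - 丢失时长/阈值"作为更新系数线性衰减。按权利要求文字可写成如下示意代码(函数名与参数名为示例):

```python
def decay_on_loss(param, lost, t2):
    """目标丢失后按丢失时长衰减控制参数的示意实现。

    param: 原始控制参数
    lost:  跟随操作过程中对应的丢失时长信息
    t2:    第二时间阈值
    """
    if lost >= t2:
        # 丢失时长达到阈值,控制参数更新为零(权利要求29)
        return 0.0
    # 更新系数 = 1 - 丢失时长/第二时间阈值(权利要求29)
    return (1.0 - lost / t2) * param
```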
  30. 根据权利要求1-21中任意一项所述的方法,其特征在于,根据所述控制参数对所述云台进行控制,包括:
    获取所述目标对象的对象类型;
    根据所述对象类型和所述控制参数对所述云台进行控制。
  31. 根据权利要求30所述的方法,其特征在于,根据所述对象类型和所述控制参数对所述云台进行控制,包括:
    根据所述对象类型对所述控制参数进行调整,获得调整后参数;
    基于所述调整后参数对所述云台进行控制。
  32. 根据权利要求31所述的方法,其特征在于,根据所述对象类型对所述控制参数进行调整,获得调整后参数,包括:
    在所述目标对象为静止对象时,则降低所述云台在偏航方向所对应的控制带宽和所述云台在俯仰方向所对应的控制带宽;
    在所述目标对象为移动对象、且所述移动对象的高度大于或等于高度阈值时,则提高所述云台在偏航方向所对应的控制带宽,并降低所述云台在俯仰方向所对应的控制带宽;
    在所述目标对象为移动对象、且所述移动对象的高度小于高度阈值时,则提高所述云台在偏航方向所对应的控制带宽和所述云台在俯仰方向所对应的控制带宽。
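权利要求32按对象类型升降偏航与俯仰方向的控制带宽,三种情形可整理为如下示意代码。其中具体的升降倍率 gain 是权利要求未限定的假设值,仅用于体现"提高/降低"的方向:

```python
def adjust_bandwidth(yaw_bw, pitch_bw, moving, height=None,
                     height_threshold=1.0, gain=2.0):
    """按对象类型调整偏航/俯仰控制带宽的示意实现。

    yaw_bw, pitch_bw: 偏航、俯仰方向的当前控制带宽
    moving:           目标对象是否为移动对象
    height:           移动对象的高度(静止对象可为 None)
    height_threshold: 高度阈值(示例值)
    gain:             假设的升降倍率
    """
    if not moving:
        # 静止对象:降低偏航和俯仰方向的控制带宽
        return yaw_bw / gain, pitch_bw / gain
    if height >= height_threshold:
        # 高大的移动对象:提高偏航带宽,降低俯仰带宽
        return yaw_bw * gain, pitch_bw / gain
    # 矮小的移动对象:提高偏航和俯仰方向的控制带宽
    return yaw_bw * gain, pitch_bw * gain
```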
  33. 根据权利要求1-21中任意一项所述的方法,其特征在于,所述方法还包括:
    通过显示界面获取用户针对所述图像采集装置所输入的执行操作;
    根据所述执行操作对所述图像采集装置进行控制,以使得所述图像采集装置确定所述采集位置。
  34. 根据权利要求1-21中任意一项所述的方法,其特征在于,所述方法还包括:
    通过设置于图像采集装置上的测距传感器获取与所述目标对象相对应的距离信息;
    将所述距离信息发送至所述图像采集装置,以使所述图像采集装置结合所述距离信息确定目标对象在采集图像中的采集位置。
  35. 根据权利要求1-21中任意一项所述的方法,其特征在于,所述方法还包括:
    确定所述图像采集装置所对应的工作模式,所述工作模式包括以下任意之一:先跟随后对焦模式、先对焦后跟随模式;
    利用所述工作模式对所述图像采集装置进行控制。
  36. 根据权利要求1-21中任意一项所述的方法,其特征在于,所述云台设有通用串行总线USB接口,所述USB接口用于与所述图像采集装置有线通信连接。
  37. 一种云台系统的控制方法,其特征在于,其中,所述云台系统包括云台和与云台通信连接的图像采集装置,所述方法包括:
    控制所述图像采集装置采集图像,并获取目标对象在所述图像中的采集位置,所述采集位置是通过所述图像采集装置所确定的;
    将所述采集位置传输至所述云台;
    控制所述云台按照控制参数进行运动,以实现对所述目标对象进行跟随操作,其中,所述控制参数是基于所述采集位置所确定的。
  38. 一种云台的控制方法,其特征在于,用于云台,所述云台通信连接有图像采集装置,所述方法包括:
    获取采集图像,所述采集图像中包括目标对象;
    在所述采集图像中确定所述目标对象的位置,以基于所述目标对象的位置对所述目标对象进行跟随操作;
    将所述目标对象的位置发送至所述图像采集装置,以使得所述图像采集装置基于所述目标对象的位置,确定与所述目标对象相对应的对焦位置,并基于所述对焦位置对所述目标对象进行对焦操作。
  39. 一种云台系统的控制方法,其特征在于,所述云台系统包括云台和与所述云台通信连接的图像采集装置,所述方法包括:
    控制所述图像采集装置采集图像,所述图像包括目标对象;
    在所述图像中确定所述目标对象的位置;
    基于所述目标对象的位置控制所述云台对所述目标对象进行跟随操作,并根据所述目标对象的位置控制所述图像采集装置对所述目标对象进行对焦操作。
  40. 一种云台系统的控制方法,其特征在于,所述云台系统包括云台和与所述云台通信连接的图像采集装置,所述方法包括:
    获取第一对象在采集图像中的采集位置,所述第一对象的采集位置用于所述云台对所述第一对象进行跟随操作,以及用于所述图像采集装置对所述第一对象进行对焦操作;
    在所述第一对象改变为第二对象时,获取所述第二对象在采集图像中的采集位置,以使得所述云台由对所述第一对象的跟随操作变化为基于所述第二对象的采集位置对所述第二对象进行跟随操作,以及使得所述图像采集装置由对所述第一对象的对焦操作变化为基于所述第二对象的位置对所述第二对象进行对焦操作。
  41. 一种云台的控制装置,其特征在于,所述装置包括:
    存储器,用于存储计算机程序;
    处理器,用于运行所述存储器中存储的计算机程序以实现:
    获取目标对象在采集图像中的采集位置,所述采集位置是通过图像采集装置所确定的,所述图像采集装置与所述云台通信连接;
    基于所述采集位置,确定用于对所述目标对象进行跟随操作的控制参数;
    根据所述控制参数对所述云台进行控制,以实现对所述目标对象进行跟随操作。
  42. 根据权利要求41所述的装置,其特征在于,在所述处理器获取目标对象在采集图像中的采集位置时,所述处理器用于:
    通过图像采集装置获取与所述目标对象相对应的目标对焦位置;
    将所述目标对焦位置确定为目标对象在采集图像中的采集位置。
  43. 根据权利要求42所述的装置,其特征在于,在所述处理器通过图像采集装置获取与所述目标对象相对应的目标对焦位置时,所述处理器用于:
    通过图像采集装置获取与目标对象相对应的历史对焦位置和当前对焦位置;
    基于所述历史对焦位置和所述当前对焦位置,确定与所述目标对象相对应的目标对焦位置。
  44. 根据权利要求43所述的装置,其特征在于,在所述处理器基于所述历史对焦位置和所述当前对焦位置,确定与所述目标对象相对应的目标对焦位置时,所述处理器用于:
    确定与所述历史对焦位置相对应的历史对象部位和与所述当前对焦位置相对应的当前对象部位;
    根据所述历史对象部位与所述当前对象部位,确定与所述目标对象相对应的目标对焦位置。
  45. 根据权利要求44所述的装置,其特征在于,在所述处理器根据所述历史对象部位与所述当前对象部位,确定与所述目标对象相对应的目标对焦位置时,所述处理器用于:
    在所述历史对象部位与所述当前对象部位为同一目标对象的不同部位时,则获取所述历史对象部位与所述当前对象部位之间的相对位置信息;
    基于所述相对位置信息对所述当前对焦位置进行调整,获得与所述目标对象相对应的目标对焦位置。
  46. 根据权利要求45所述的装置,其特征在于,在所述处理器基于所述相对位置信息对所述当前对焦位置进行调整,获得与所述目标对象相对应的目标对焦位置时,所述处理器用于:
    在所述相对位置信息大于或等于预设阈值时,基于所述相对位置信息对所述当前对焦位置进行调整,获得与所述目标对象相对应的目标对焦位置;
    在所述相对位置信息小于预设阈值时,将所述当前对焦位置确定为与所述目标对象相对应的目标对焦位置。
  47. 根据权利要求44所述的装置,其特征在于,在所述处理器根据所述历史对象部位与所述当前对象部位,确定与所述目标对象相对应的目标对焦位置时,所述处理器用于:
    在所述历史对象部位与所述当前对象部位为同一目标对象的不同部位时,基于所述当前对焦位置对构图目标位置进行更新,获得第一更新后构图目标位置;
    基于所述第一更新后构图目标位置对所述目标对象进行跟随操作。
  48. 根据权利要求41所述的装置,其特征在于,所述处理器用于:
    检测进行跟随操作的目标对象是否发生改变;
    在所述目标对象由第一对象改变为第二对象时,获取所述第二对象在所述采集图像中的采集位置;
    基于所述第二对象在所述采集图像中的采集位置对构图目标位置进行更新,获得与所述第二对象相对应的第二更新后构图目标位置,以基于所述第二更新后构图目标位置对所述第二对象进行跟随操作。
  49. 根据权利要求41所述的装置,其特征在于,在所述处理器基于所述采集位置,确定用于对所述目标对象进行跟随操作的控制参数时,所述处理器用于:
    计算与所述采集位置相对应的当前位置预测值;
    基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的控制参数。
  50. 根据权利要求49所述的装置,其特征在于,在所述处理器计算与所述采集位置相对应的当前位置预测值时,所述处理器用于:
    确定与所述采集位置相对应的延时时间,所述延时时间用于指示所述云台经由所述图像采集装置获得所述采集位置所需要的时长;
    基于所述延时时间和所述采集位置,确定与所述采集位置相对应的当前位置预测值。
  51. 根据权利要求50所述的装置,其特征在于,在所述处理器确定与所述采集位置相对应的延时时间时,所述处理器用于:
    获取与所述采集图像相对应的曝光时间;
    在所述云台获取到当前采集位置时,确定与当前采集位置相对应的当前接收时间;
    将所述当前接收时间与所述曝光时间之间的时间间隔,确定为与所述采集位置相对应的延时时间。
  52. 根据权利要求50所述的装置,其特征在于,在所述处理器基于所述延时时间和所述采集位置,确定与所述采集位置相对应的当前位置预测值时,所述处理器用于:
    在所述云台获取到前一采集位置时,确定与前一采集位置相对应的前一接收时间;
    确定与所述前一采集位置相对应的前一位置预测值;
    根据所述采集位置、曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,计算与所述采集位置相对应的当前位置预测值。
  53. 根据权利要求52所述的装置,其特征在于,在所述处理器根据所述采集位置、所述曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,计算与所述采集位置相对应的当前位置预测值时,所述处理器用于:
    基于所述采集位置、所述曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,确定与所述采集位置相对应的位置调整值;
    将所述位置调整值与所述采集位置的和值,确定为与所述采集位置相对应的当前位置预测值。
  54. 根据权利要求53所述的装置,其特征在于,在所述处理器基于所述采集位置、所述曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,确定与所述采集位置相对应的位置调整值时,所述处理器用于:
    基于所述采集位置、所述前一位置预测值、所述曝光时间和所述前一接收时间,确定与所述目标对象相对应的移动速度;
    将所述移动速度与所述时间间隔之间的乘积值,确定为与所述采集位置相对应的位置调整值。
  55. 根据权利要求54所述的装置,其特征在于,在所述处理器基于所述采集位置、所述前一位置预测值、所述曝光时间和所述前一接收时间,确定与所述目标对象相对应的移动速度时,所述处理器用于:
    获取所述采集位置与前一位置预测值之间的位置差值以及所述曝光时间与所述前一接收时间之间的时间差值;
    将所述位置差值与所述时间差值之间的比值,确定为与所述目标对象相对应的移动速度。
  56. 根据权利要求49所述的装置,其特征在于,在所述处理器基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的控制参数时,所述处理器用于:
    确定所述当前位置预测值与构图目标位置之间的位置偏差;
    基于所述位置偏差,确定用于对所述目标对象进行跟随操作的控制参数。
  57. 根据权利要求56所述的装置,其特征在于,在所述处理器基于所述位置偏差,确定用于对所述目标对象进行跟随操作的控制参数时,所述处理器用于:
    获取与所述采集图像相对应的画面视场角;
    基于所述画面视场角和所述位置偏差,确定用于对所述目标对象进行跟随操作的控制参数。
  58. 根据权利要求57所述的装置,其特征在于,所述控制参数与所述画面视场角呈负相关。
  59. 根据权利要求49所述的装置,其特征在于,在所述处理器基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的控制参数时,所述处理器用于:
    获取与所述云台相对应的跟随模式,所述跟随模式包括以下任意之一:单轴跟随模式、双轴跟随模式、全跟随模式;
    基于所述当前位置预测值和跟随模式,确定用于对所述目标对象进行跟随操作的控制参数。
  60. 根据权利要求59所述的装置,其特征在于,在所述处理器基于所述当前位置预测值和跟随模式,确定用于对所述目标对象进行跟随操作的控制参数时,所述处理器用于:
    基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的备选控制参数;
    在所述备选控制参数中,确定与所述跟随模式相对应的目标控制参数。
  61. 根据权利要求60所述的装置,其特征在于,在所述处理器在所述备选控制参数中,确定与所述跟随模式相对应的目标控制参数时,所述处理器用于:
    在所述跟随模式为单轴跟随模式时,在所述备选控制参数中,确定与所述单轴跟随模式相对应的单轴控制参数,并将其他备选控制参数置零;
    在所述跟随模式为双轴跟随模式时,在所述备选控制参数中,确定与所述双轴跟随模式相对应的双轴控制参数,并将其他备选控制参数置零;
    在所述跟随模式为全跟随模式时,将所述备选控制参数确定为与所述全跟随模式相对应的三轴控制参数。
  62. 根据权利要求41-61中任意一项所述的装置,其特征在于,在所述处理器根据所述控制参数对所述云台进行控制时,所述处理器用于:
    获取与所述目标对象所对应的云台运动模型;
    基于所述云台运动模型和所述控制参数对所述云台进行控制。
  63. 根据权利要求62所述的装置,其特征在于,在所述处理器基于所述云台运动模型和所述控制参数对所述云台进行控制时,所述处理器用于:
    获取用于对所述目标对象进行跟随操作所对应的时长信息;
    在所述时长信息小于第一时间阈值时,则基于所述云台运动模型对所述控制参数进行更新,获得更新后控制参数,并基于所述更新后控制参数对所述云台进行控制;
    在所述时长信息大于或等于第一时间阈值、且所述云台运动模型为匀加速运动时,利用所述控制参数对所述云台进行控制。
  64. 根据权利要求63所述的装置,其特征在于,在所述处理器基于所述云台运动模型对所述控制参数进行更新,获得更新后控制参数时,所述处理器用于:
    基于所述云台运动模型,确定与所述控制参数相对应的更新系数,其中,所述更新系数小于1;
    将所述更新系数与所述控制参数的乘积值,确定为所述更新后控制参数。
  65. 根据权利要求64所述的装置,其特征在于,在所述处理器基于所述云台运动模型,确定与所述控制参数相对应的更新系数时,所述处理器用于:
    在所述云台运动模型为匀加速运动时,将所述时长信息与所述第一时间阈值之间的比值确定为与所述控制参数相对应的更新系数。
  66. 根据权利要求41-61中任意一项所述的装置,其特征在于,在所述处理器根据所述控制参数对所述云台进行控制时,所述处理器用于:
    获取与所述目标对象相对应的跟随状态;
    基于所述跟随状态和所述控制参数对所述云台进行控制。
  67. 根据权利要求66所述的装置,其特征在于,在所述处理器获取与所述目标对象相对应的跟随状态时,所述处理器用于:
    检测进行跟随操作的目标对象是否发生改变;
    在所述目标对象由第一对象改变为第二对象时,则确定所述第一对象为丢失状态。
  68. 根据权利要求66所述的装置,其特征在于,在所述处理器基于所述跟随状态和所述控制参数对所述云台进行控制时,所述处理器用于:
    在所述目标对象为丢失状态时,则获取对所述目标对象进行跟随操作过程中所对应的丢失时长信息;
    根据所述丢失时长信息对所述控制参数进行更新,获得更新后控制参数;
    基于所述更新后控制参数对所述云台进行控制。
  69. 根据权利要求68所述的装置,其特征在于,在所述处理器根据所述丢失时长信息对所述控制参数进行更新,获得更新后控制参数时,所述处理器用于:
    在所述丢失时长信息大于或等于第二时间阈值时,将所述控制参数更新为零;
    在所述丢失时长信息小于第二时间阈值时,获取所述丢失时长信息与所述第二时间阈值之间的比值,并将1与所述比值之间的差值确定为与所述控制参数相对应的更新系数,将所述更新系数与所述控制参数的乘积值确定为所述更新后控制参数。
  70. 根据权利要求41-61中任意一项所述的装置,其特征在于,在所述处理器根据所述控制参数对所述云台进行控制时,所述处理器用于:
    获取所述目标对象的对象类型;
    根据所述对象类型和所述控制参数对所述云台进行控制。
  71. 根据权利要求70所述的装置,其特征在于,在所述处理器根据所述对象类型和所述控制参数对所述云台进行控制时,所述处理器用于:
    根据所述对象类型对所述控制参数进行调整,获得调整后参数;
    基于所述调整后参数对所述云台进行控制。
  72. 根据权利要求71所述的装置,其特征在于,在所述处理器根据所述对象类型对所述控制参数进行调整,获得调整后参数时,所述处理器用于:
    在所述目标对象为静止对象时,则降低所述云台在偏航方向所对应的控制带宽和所述云台在俯仰方向所对应的控制带宽;
    在所述目标对象为移动对象、且所述移动对象的高度大于或等于高度阈值时,则提高所述云台在偏航方向所对应的控制带宽,并降低所述云台在俯仰方向所对应的控制带宽;
    在所述目标对象为移动对象、且所述移动对象的高度小于高度阈值时,则提高所述云台在偏航方向所对应的控制带宽和所述云台在俯仰方向所对应的控制带宽。
  73. 根据权利要求41-61中任意一项所述的装置,其特征在于,所述处理器用于:
    通过显示界面获取用户针对所述图像采集装置所输入的执行操作;
    根据所述执行操作对所述图像采集装置进行控制,以使得所述图像采集装置确定所述采集位置。
  74. 根据权利要求41-61中任意一项所述的装置,其特征在于,所述处理器用于:
    通过设置于图像采集装置上的测距传感器获取与所述目标对象相对应的距离信息;
    将所述距离信息发送至所述图像采集装置,以使所述图像采集装置结合所述距离信息确定目标对象在采集图像中的采集位置。
  75. 根据权利要求41-61中任意一项所述的装置,其特征在于,所述处理器用于:
    确定所述图像采集装置所对应的工作模式,所述工作模式包括以下任意之一:先跟随后对焦模式、先对焦后跟随模式;
    利用所述工作模式对所述图像采集装置进行控制。
  76. 根据权利要求41-61中任意一项所述的装置,其特征在于,所述云台设有通用串行总线USB接口,所述USB接口用于与所述图像采集装置有线通信连接。
  77. 一种云台系统的控制装置,其特征在于,其中,所述云台系统包括云台和与云台通信连接的图像采集装置,所述装置包括:
    存储器,用于存储计算机程序;
    处理器,用于运行所述存储器中存储的计算机程序以实现:
    控制所述图像采集装置采集图像,并获取目标对象在所述图像中的采集位置,所述采集位置是通过图像采集装置所确定的;
    将所述采集位置传输至所述云台;
    控制所述云台按照控制参数进行运动,以实现对所述目标对象进行跟随操作,其中,所述控制参数是基于所述采集位置所确定的。
  78. 一种云台的控制装置,其特征在于,用于云台,所述云台通信连接有图像采集装置,所述控制装置包括:
    存储器,用于存储计算机程序;
    处理器,用于运行所述存储器中存储的计算机程序以实现:
    获取采集图像,所述采集图像中包括目标对象;
    在所述采集图像中确定所述目标对象的位置,以基于所述目标对象的位置对所述目标对象进行跟随操作;
    将所述目标对象的位置发送至所述图像采集装置,以使得所述图像采集装置基于所述目标对象的位置,确定与所述目标对象相对应的对焦位置,并基于所述对焦位置对所述目标对象进行对焦操作。
  79. 一种云台系统的控制装置,其特征在于,所述云台系统包括云台和与所述云台通信连接的图像采集装置,所述控制装置包括:
    存储器,用于存储计算机程序;
    处理器,用于运行所述存储器中存储的计算机程序以实现:
    控制所述图像采集装置采集图像,所述图像包括目标对象;
    在所述图像中确定所述目标对象的位置;
    基于所述目标对象的位置控制所述云台对所述目标对象进行跟随操作,并根据所述目标对象的位置控制所述图像采集装置对所述目标对象进行对焦操作。
  80. 一种云台系统的控制装置,其特征在于,所述云台系统包括云台和与所述云台通信连接的图像采集装置,所述控制装置包括:
    存储器,用于存储计算机程序;
    处理器,用于运行所述存储器中存储的计算机程序以实现:
    获取第一对象在采集图像中的采集位置,所述第一对象的采集位置用于所述云台对所述第一对象进行跟随操作,以及用于所述图像采集装置对所述第一对象进行对焦操作;
    在所述第一对象改变为第二对象时,获取所述第二对象在采集图像中的采集位置,以使得所述云台由对所述第一对象的跟随操作变化为基于所述第二对象的采集位置对所述第二对象进行跟随操作,以及使得所述图像采集装置由对所述第一对象的对焦操作变化为基于所述第二对象的位置对所述第二对象进行对焦操作。
  81. 一种云台的控制系统,其特征在于,包括:
    云台;
    权利要求41-76中任一项所述的云台的控制装置,设置于所述云台上,且用于与所述图像采集装置通信连接,并用于通过图像采集装置对所述云台进行控制。
  82. 根据权利要求81所述的控制系统,其特征在于,所述控制系统还包括:
    测距传感器,设置于所述图像采集装置上,用于获取与所述目标对象相对应的距离信息;
    其中,所述云台的控制装置与所述测距传感器通信连接,用于将所述距离信息发送至图像采集装置,以使所述图像采集装置结合所述距离信息确定目标对象在采集图像中的采集位置。
  83. 一种云台的控制系统,其特征在于,包括:
    云台;
    权利要求77所述的云台系统的控制装置,设置于所述云台上,且用于与图像采集装置通信连接,并用于分别对图像采集装置以及所述云台进行控制。
  84. 一种云台的控制系统,其特征在于,包括:
    云台;
    权利要求78所述的云台的控制装置,设置于所述云台上,且用于与图像采集装置通信连接,并用于通过所述云台对所述图像采集装置进行控制。
  85. 一种云台的控制系统,其特征在于,包括:
    云台;
    权利要求79所述的云台系统的控制装置,设置于所述云台上,且用于与图像采集装置通信连接,并用于分别对图像采集装置以及所述云台进行控制。
  86. 一种云台的控制系统,其特征在于,包括:
    云台;
    权利要求80所述的云台系统的控制装置,设置于所述云台上,且用于与图像采集装置通信连接,并用于分别对图像采集装置以及所述云台进行控制。
  87. 一种可移动平台,其特征在于,包括:
    云台;
    支撑机构,用于连接所述云台;
    权利要求41-76中任一项所述的云台的控制装置,设置于所述云台上,且用于与图像采集装置通信连接,并用于通过图像采集装置对所述云台进行控制。
  88. 一种可移动平台,其特征在于,包括:
    云台;
    支撑机构,用于连接所述云台;
    权利要求77所述的云台系统的控制装置,设置于所述云台上,且用于与图像采集装置通信连接,并用于分别对图像采集装置以及所述云台进行控制。
  89. 一种可移动平台,其特征在于,包括:
    云台;
    支撑机构,用于连接所述云台;
    如权利要求78所述的云台的控制装置,设置于所述云台上,且用于与图像采集装置通信连接,并用于通过所述云台对所述图像采集装置进行控制。
  90. 一种可移动平台,其特征在于,包括:
    云台;
    支撑机构,用于连接所述云台;
    如权利要求79所述的云台系统的控制装置,设置于所述云台上,且用于与图像采集装置通信连接,并用于分别对图像采集装置以及所述云台进行控制。
  91. 一种可移动平台,其特征在于,包括:
    云台;
    支撑机构,用于连接所述云台;
    如权利要求80所述的云台系统的控制装置,设置于所述云台上,且用于与图像采集装置通信连接,并用于分别对图像采集装置以及所述云台进行控制。
  92. 一种计算机可读存储介质,其特征在于,所述存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,所述程序指令用于实现权利要求1-36中任意一项所述的云台的控制方法。
  93. 一种计算机可读存储介质,其特征在于,所述存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,所述程序指令用于实现权利要求37所述的云台系统的控制方法。
  94. 一种计算机可读存储介质,其特征在于,所述存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,所述程序指令用于实现权利要求38所述的云台的控制方法。
  95. 一种计算机可读存储介质,其特征在于,所述存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,所述程序指令用于实现权利要求39所述的云台系统的控制方法。
  96. 一种计算机可读存储介质,其特征在于,所述存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,所述程序指令用于实现权利要求40所述的云台系统的控制方法。
PCT/CN2020/141400 2020-12-30 2020-12-30 云台的控制方法、装置、可移动平台和存储介质 WO2022141197A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN202080067425.8A CN114982217A (zh) 2020-12-30 2020-12-30 云台的控制方法、装置、可移动平台和存储介质
PCT/CN2020/141400 WO2022141197A1 (zh) 2020-12-30 2020-12-30 云台的控制方法、装置、可移动平台和存储介质
PCT/CN2021/135818 WO2022143022A1 (zh) 2020-12-30 2021-12-06 基于图像采集装置的控制方法、云台的控制方法及装置
CN202180086440.1A CN116783568A (zh) 2020-12-30 2021-12-06 基于图像采集装置的控制方法、云台的控制方法及装置
US18/215,871 US20230341079A1 (en) 2020-12-30 2023-06-29 Control method based on image capturing apparatus, control method and apparatus for gimbal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/141400 WO2022141197A1 (zh) 2020-12-30 2020-12-30 云台的控制方法、装置、可移动平台和存储介质

Publications (1)

Publication Number Publication Date
WO2022141197A1 true WO2022141197A1 (zh) 2022-07-07

Family

ID=82258765

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2020/141400 WO2022141197A1 (zh) 2020-12-30 2020-12-30 云台的控制方法、装置、可移动平台和存储介质
PCT/CN2021/135818 WO2022143022A1 (zh) 2020-12-30 2021-12-06 基于图像采集装置的控制方法、云台的控制方法及装置

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/135818 WO2022143022A1 (zh) 2020-12-30 2021-12-06 基于图像采集装置的控制方法、云台的控制方法及装置

Country Status (3)

Country Link
US (1) US20230341079A1 (zh)
CN (2) CN114982217A (zh)
WO (2) WO2022141197A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004015516A (ja) * 2002-06-07 2004-01-15 Chuo Electronics Co Ltd 動画像の自動追尾撮影方法および自動追尾撮影装置
CN103019024A (zh) * 2012-11-29 2013-04-03 浙江大学 实时精确观测和分析乒乓球旋转系统与系统运行方法
CN204859351U (zh) * 2015-03-31 2015-12-09 深圳市莫孚康技术有限公司 基于视频目标跟踪的自动跟焦装置
CN108259703A (zh) * 2017-12-31 2018-07-06 深圳市秦墨科技有限公司 一种云台的跟拍控制方法、装置及云台
CN112055158A (zh) * 2020-10-16 2020-12-08 苏州科达科技股份有限公司 目标跟踪方法、监控设备、存储介质及系统
CN112073641A (zh) * 2020-09-18 2020-12-11 深圳市众志联城科技有限公司 影像拍摄方法、装置、移动终端以及存储介质
CN112616019A (zh) * 2020-12-16 2021-04-06 重庆紫光华山智安科技有限公司 目标跟踪方法、装置、云台及存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616322A (zh) * 2015-02-10 2015-05-13 山东省科学院海洋仪器仪表研究所 船载红外目标图像辨识跟踪方法及其装置
JP2017034320A (ja) * 2015-07-29 2017-02-09 キヤノン株式会社 移動機構を持った撮像装置
CN107295244A (zh) * 2016-04-12 2017-10-24 深圳市浩瀚卓越科技有限公司 一种稳定器的跟踪拍摄控制方法及系统
CN108475075A (zh) * 2017-05-25 2018-08-31 深圳市大疆创新科技有限公司 一种控制方法、装置及云台
CN109688323A (zh) * 2018-11-29 2019-04-26 深圳市中科视讯智能系统技术有限公司 无人机视觉跟踪系统及其控制方法

Also Published As

Publication number Publication date
CN116783568A (zh) 2023-09-19
WO2022143022A1 (zh) 2022-07-07
US20230341079A1 (en) 2023-10-26
CN114982217A (zh) 2022-08-30

Similar Documents

Publication Publication Date Title
WO2022126436A1 (zh) 延时检测方法、装置、系统、可移动平台和存储介质
CN108399642B (zh) 一种融合旋翼无人机imu数据的通用目标跟随方法和系统
US10217021B2 (en) Method for determining the position of a portable device
US9479703B2 (en) Automatic object viewing methods and apparatus
WO2019113966A1 (zh) 一种避障方法、装置和无人机
WO2018214078A1 (zh) 拍摄控制方法及装置
CN105120146A (zh) 一种利用无人机进行运动物体自动锁定拍摄装置及拍摄方法
CN106973221B (zh) 基于美学评价的无人机摄像方法和系统
CN105487552A (zh) 无人机跟踪拍摄的方法及装置
US20200267309A1 (en) Focusing method and device, and readable storage medium
US10545215B2 (en) 4D camera tracking and optical stabilization
CN104950726B (zh) 遥控行驶装置用的延时校正方法及其装置
WO2020042581A1 (zh) 一种图像获取设备的对焦方法及装置
US20220067974A1 (en) Cloud-Based Camera Calibration
WO2017117749A1 (zh) 基于多种测距方式的跟焦系统、方法及拍摄系统
CN101656883B (zh) 基于最小二乘支持向量机运动预测的实时补偿方法
WO2021081707A1 (zh) 数据处理方法、装置、可移动平台及计算机可读存储介质
CN112639652A (zh) 目标跟踪方法和装置、可移动平台以及成像平台
WO2020038720A1 (en) Apparatus, method and computer program for detecting the form of a deformable object
WO2020237478A1 (zh) 一种飞行规划方法及相关设备
CN108419052B (zh) 一种多台无人机全景成像方法
WO2022193081A1 (zh) 无人机的控制方法、装置及无人机
WO2022151473A1 (zh) 拍摄控制方法、拍摄控制装置及云台组件
WO2022141197A1 (zh) 云台的控制方法、装置、可移动平台和存储介质
WO2020019113A1 (zh) 移动机器人的控制方法、装置及移动机器人系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20967538

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20967538

Country of ref document: EP

Kind code of ref document: A1