GB2481027A - Image stabilising apparatus and method - Google Patents

Image stabilising apparatus and method

Info

Publication number
GB2481027A
GB2481027A
Authority
GB
United Kingdom
Prior art keywords
mapping
display
motion
image
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1009586.7A
Other versions
GB201009586D0 (en)
Inventor
David Paul Thompson
Peter Richard Cronshaw
Stuart Pooley
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DREAMPACT Ltd
Original Assignee
DREAMPACT Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DREAMPACT Ltd filed Critical DREAMPACT Ltd
Priority to GB1009586.7A
Publication of GB201009586D0
Publication of GB2481027A
Legal status: Withdrawn

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • H04N5/145Movement estimation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6811Motion detection based on the image signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6812Motion detection based on additional sensors, e.g. acceleration sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/683Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • H04N5/23248
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

An imaging method comprises obtaining a reference mapping for mapping at least one reference position on an array of sensors to at least one position on a display, determining the motion of the array of sensors and determining, in dependence on the determined motion, a further mapping. The further mapping transforms at least one position on the array of sensors to at least one position on the display such that an image generated on the display from the image data is generated in a position on the display such as to at least partially compensate for the measured motion. The difference between the further mapping and the reference mapping is determined and the further mapping adjusted to reduce the determined difference. The method can be used to compensate for the movement of imaging devices mounted on moving platforms such as aircraft and other vehicles.

Description

Image processing apparatus and method
Field of the invention
The present invention relates to an image processing apparatus and method, for example an image processing apparatus and method for processing image data obtained from imaging devices that move in operation, for example projectile imaging devices, handheld imaging devices or imaging devices that are mounted on aircraft or other vehicles.
Background to the invention
A wide range of imaging devices that are intended to be used, whilst in motion, to image a scene or an object within a scene are known.
For example, imaging devices are often mounted on civilian or military aircraft or on unmanned aerial vehicles (UAVs) or other vehicles. Such imaging devices can be used for reconnaissance or surveillance purposes. The imaging devices can be subject to a wide range of rapid movements, for example due to motion of the vehicle on which they are installed. It can be difficult for an operator to monitor or identify a scene or object, or changes of the scene or object, from the resulting images produced by the imaging device due to the variations in the images caused by the motion of the device.
A common approach to stabilising images produced by mobile imaging devices that are subject to movement during use is to attempt to reduce the motion of the imaging device itself. For example, it is known to install vehicle-mounted imaging devices on physical structures such as gimbals that are intended to maintain the imaging device in a stable position and orientation, despite movement of the vehicle. For example, US 2010/0110187 describes an airborne camera system in which the camera is mounted within a gimbal system, and measurements of heading, pitch and roll of the aircraft from aircraft gyros and calculations concerning the overflight velocity of the aircraft are used in controlling gimbals of the gimbal system in order to maintain the line of sight of the camera fixed on a target.
However, such systems for stabilising images by physical stabilisation of the imaging device itself become increasingly ineffective as the size of movements and frequency of movements of a vehicle on which the device is mounted increase, due to the inertia of the imaging device itself. Such issues become increasingly significant for smaller camera platforms, such as mini-UAVs. Furthermore, it is a feature of gyros that they suffer from drift in their measurements over relatively short periods of time, for example over periods of minutes, and such drift effects often cannot be effectively calibrated out. Therefore, the images produced by gyro-stabilised imaging devices can also suffer from drift. In order to reduce the effects of drift for applications where high accuracy is required it is usually necessary to obtain more accurate gyros, that are also usually more expensive and larger. The increased physical size of more accurate gyros is undesirable for airborne systems in particular, due to the requirement to keep excess weight of aircraft to a minimum.
In addition to the use of mobile imaging devices for reconnaissance or surveillance purposes, it is also known to mount TV or film cameras on moving platforms or objects. For example, it has been suggested to mount TV cameras on or in balls or other moving objects in filming sporting events. In such arrangements it is generally less important to know the exact position or orientation of an object or of the imaging device itself than is the case for reconnaissance or surveillance systems, but it is important that a reasonably stable image of an object or other point of interest, for example a goal, is provided to the viewer. One known approach is to use motion estimation techniques based upon tracking the content of images themselves to estimate the motion of the device, and then to compensate the images in an attempt to provide an acceptably stable image to the viewer. Such techniques usually require the use of image recognition techniques to identify and track common objects between different frames. WO 2006/033061 describes a technique that is based upon such image tracking techniques, and is used in a system in which several cameras are mounted on or in a ball, for example a soccer ball or basketball, in different orientations. Initial measurements of rotational speed of the ball using strain gauges are performed and are used to obtain a subset of possible motion vectors. A motion vector is then selected from the subset using motion tracking techniques based upon tracking images of common objects between frames and the motion vector is then used in the merging of images obtained from the different cameras. The image stabilisation is thus performed in dependence on the contents of the images themselves.
An alternative approach is described in WO 2008/090345, which is directed to a projectile imaging device that can provide stabilised images of a scene in-flight from a desired perspective despite rotation of the device in-flight. The device includes image sensors comprising an array of pixel sensors. Motion of the device is measured in-flight using gyroscopes or accelerometers. The pixel sensor signals are mapped frame-by-frame to corresponding pixels on a display in dependence on the measured motion and so as to compensate for the measured motion. The techniques described in WO 2008/090345 allow for the production of stabilised images without requiring physical stabilisation of the imaging device to compensate for motion, and without using image tracking techniques or any other techniques that require analysis of the content of the images. Indeed, the techniques described in WO 2008/090345 can provide for image stabilisation and compensation independent of the content of images. However, they are reliant on motion measurement by gyroscopes or other devices and so stabilised images may drift if the measurements by the gyroscopes or other devices drift. Such drift may not be significant if the device of WO 2008/090345 is in the form of a hand-held device for throwing into a room or other location as image stabilisation or compensation may only be required for images obtained during a short period of time. However, drift problems may become more significant if the device of WO 2008/090345 is used for a longer period of time.
It is an aim of the present invention to provide an improved or at least alternative imaging apparatus or method.
Summary of the invention
In a first, independent aspect of the invention there is provided an image processing method comprising:-obtaining image data representative of an image formed on an array of sensors, the image data being for generation of the image on a display; obtaining a reference mapping for mapping at least one reference position on the array of sensors to at least one position on the display; determining motion of the array of sensors; determining in dependence on the determined motion a further mapping for mapping at least one position on the array of sensors to at least one position on the display, the further mapping being such that an image generated on the display from the image data in accordance with the further mapping would be generated in a position on the display such as to at least partially compensate for the measured motion; determining a difference between the further mapping and the reference mapping; and adjusting the further mapping to reduce the difference between the further mapping and the reference mapping.
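By way of illustration only, the sequence of steps set out above may be sketched in Python as follows; the helper names measure_motion and build_mapping, the representation of a mapping as an array of display co-ordinates for the reference positions, and the gain value are assumptions introduced for the sketch and do not form part of the described method.

    import numpy as np

    def process_frame(reference_mapping, measure_motion, build_mapping, gain=0.05):
        """One iteration of the method sketched above.

        measure_motion is assumed to return the motion of the sensor array
        since a reference orientation, and build_mapping to turn that motion
        into a motion-compensating mapping of the reference positions on the
        array to display positions (here an (N, 2) array of display
        co-ordinates)."""
        motion = measure_motion()                          # determine motion of the array
        further_mapping = build_mapping(motion)            # motion-compensating mapping
        difference = further_mapping - reference_mapping   # difference between the mappings
        # Adjust the further mapping to reduce the determined difference.
        return further_mapping - gain * difference

The adjusted mapping returned by the sketch would then be used when the frame of image data is written to the display.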
Each sensor of the array of sensors may be a pixel sensor. The array may form part of an image sensor integrated circuit device. The sensor array may be a focal plane array and/or may be responsive to electromagnetic radiation at any suitable frequencies, for example visible light, infra-red or x-ray frequencies.
By thus adjusting the motion-compensation mapping, images obtained from the array of sensors may be displayed in a stable fashion whilst also providing for at least partial cancellation of drift effects arising from motion sensors that may be used to measure the motion.
Such drift effects can be particularly problematic in vehicle mounted systems in which imaging is performed for a long period of time. By providing for at least partial cancellation of drift effects it can be possible to obtain a desired level of imaging performance whilst also using cheaper and/or smaller motion sensors. That can be particularly beneficial for aircraft mounted systems, for which it is generally desired to reduce excess weight.
By using a motion compensation mapping of the image data, together with the adjustment to the mapping, the use of a gimballed system may be avoided, whilst still providing for image stability and at least some cancellation of drift effects. That can provide for a more compact and lighter imaging system.
The adjustment to the motion compensation mapping to reduce the difference to the reference mapping can also provide for an imaging method that enables displayed images to be stabilised to counteract the effects of motion of the array over the short term and for the displayed images to track longer term motion of the array.
The determining of motion may comprise determining a change of position, for example orientation and/or translational position. The change of position may be determined with respect to a reference position, for example a reference orientation and/or reference translational position.
The determining of motion of the array may comprise obtaining at least one measurement from at least one motion sensor and determining the motion from the at least one measurement.
The at least one motion sensor may comprise at least one gyroscope. Gyroscopes can be particularly susceptible to drift effects and so the method may be particularly useful if one or more gyroscopes are used as the at least one motion sensor.
Alternatively the or each motion sensor may comprise any other suitable type of motion sensor, for example at least one accelerometer. The or each motion sensor may be any suitable type of sensor that provides measurements from which change in position can be determined. Thus, the or each motion sensor may be a position sensor that can be used to measure change in position over time.
The adjusting of the difference between the further mapping and the reference mapping may be performed to at least partially cancel a drift effect of the at least one motion sensor.
The adjusting may be performed in accordance with an algorithm comprising at least one parameter, and the value of the at least one parameter may be selected so that the adjusting at least partially cancels a drift effect of the at least one motion sensor.
The at least one parameter may be tuned to compensate for drift effects of a particular motion sensor or sensors.
Adjusting the further mapping may comprise applying an algorithm that reduces the difference over time according to at least one pre-determined time constant.
The algorithm may be such as to substantially remove any difference between the reference mapping and the further mapping over time, in the absence of further determined motion. Thus, it may be ensured that a viewed image gradually becomes aligned with the perspective of the sensor array following movement of the sensor array. That can be particularly useful if the sensor array is installed in a vehicle, as the method may provide for displayed images to become gradually aligned with the vehicle direction following a change of intended direction of the vehicle.
The algorithm may comprise a proportional-integral-differential (PID) algorithm.
The method may further comprise displaying the image data on a display at a position on the display determined in accordance with the adjusted further mapping.
The reference mapping may be representative of a reference position of the image on the display, and adjusting the mapping may comprise adjusting the mapping so as to reduce the difference between the reference position of the image and the motion-compensated position of the image.
The image data may comprise a frame of image data representative of the image on the array of sensors at a particular time, and the method further comprises repeatedly obtaining a series of further frames of image data, each further frame of image data being representative of the image on the array of sensors at a respective further time; and, for each frame of image data, determining motion of the array of sensors at the respective further time, determining a respective further mapping, determining a difference between the further mapping for that frame and the reference mapping, and adjusting the further mapping for that frame to reduce the difference between the further mapping and the reference mapping.
Thus the method may provide motion compensation and/or drift reduction and/or motion tracking for a series of frames of image data, for example video data.
The determining of a difference between the mapping and the reference mapping may comprise determining, for the or each reference position, a difference between the position relative to the display to which the reference position is mapped according to the reference mapping and the position relative to the display to which that position is mapped according to the further mapping.
The at least one reference position may comprise a plurality of pixel sensor positions, for example two or three pixel sensor positions. That can provide a particularly simple way of determining the reference mapping, and can provide for a computationally efficient way of determining the adjustment of the further mapping.
The image data may comprise a plurality of pixel signals each obtained from a respective position on the sensor array, and the further mapping comprises assigning each of the plurality of pixel signals to a respective position relative to the display.
The determining of the motion of the array may be performed independently of the content of the image. For example, the image data may comprise a plurality of pixel signals and the determining of the motion of the array may be performed independently of the value of any of the pixel signals. In this way the compensation may be made without regard to the image content.
The array of image sensors may be installed on a vehicle or other moveable object, or the array of image sensors may be otherwise arranged to move in use, for example by suspension of a device containing the array of image sensors, for instance from a tree or lamppost. The vehicle may comprise, for example, an airborne vehicle, a land vehicle or a waterborne vehicle, for example a car, truck, tank, aeroplane, UAV, ship or rocket. The moveable object may comprise a buoy. The array of image sensors may be included in a handheld device that may move in use, for example during an imaging process.
The further mapping may be determined to be such as to generate the image on the display to maintain a selected perspective.
The method may further comprise receiving user input and selecting the selected perspective in dependence on the user input.
The method may further comprise selecting a portion of the image data, and performing a digital zoom operation on the selected portion of the image data.
That feature is particularly important, and so in another independent aspect of the invention there is provided an imaging method comprising obtaining image data representative of an image formed on an array of sensors, the image data being for generation of the image on a display; determining motion of the array of sensors; determining in dependence on the determined motion a mapping for mapping at least one position on the array of sensors to at least one position on the display, the mapping being such that an image generated on the display from the image data in accordance with the mapping would be generated in a position on the display such as to at least partially compensate for the measured motion; selecting a portion of the image data; and performing a digital zoom operation on the selected portion of image data.
By providing digital zooming, optical zoom components may be omitted thus providing for reduced weight and complexity.
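As an illustration only, such a digital zoom operation may be sketched as a crop of the selected portion of the motion-compensated frame followed by resampling to the display resolution; the nearest-neighbour resampling used below is an assumption made for brevity, and a practical system would typically interpolate.

    import numpy as np

    def digital_zoom(frame, top, left, height, width, out_height, out_width):
        """Crop the selected portion of the frame and enlarge it to the
        display resolution by nearest-neighbour resampling (illustrative only)."""
        crop = frame[top:top + height, left:left + width]
        rows = np.arange(out_height) * crop.shape[0] // out_height
        cols = np.arange(out_width) * crop.shape[1] // out_width
        return crop[np.ix_(rows, cols)]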
The method may further comprise displaying the digitally zoomed selected portion of the image data in accordance with the adjusted further mapping.
The method may further comprise selecting the portion of the image data and/or determining at least one property of the digital zoom operation in dependence on user input.
The method may comprise providing user selection means for selecting the portion of the image data.
The method may further comprise selecting a plurality of portions of the image data and for each selected portion performing a respective digital zoom operation.
The method may further comprise providing a stream of the image data, the stream comprising a plurality of portions, each portion being for display on a respective, different display or window. The method may comprise interleaving the portions of image data in the stream. Each portion of the image data may represent a respective different portion of the image.
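Purely as an example, the interleaving of portions into a single stream may be sketched as follows; the tagging of each chunk with a window identifier is an assumed framing scheme rather than a format defined by the method.

    def interleave_portions(portions):
        """Interleave several selected portions of image data into one stream.

        portions is assumed to be a list of per-window sequences of data
        chunks; each emitted item carries the identifier of the display or
        window that should render it."""
        stream = []
        for chunks in zip(*portions):          # one chunk from each portion per step
            for window_id, chunk in enumerate(chunks):
                stream.append((window_id, chunk))
        return stream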
The method may comprise, for each of the plurality of displays, displaying the image data on a display at a position on the display determined in accordance with the adjusted further mapping.
The method may further comprise for each of the selected portions of the image data determining a respective further mapping for mapping at least one position on the array of sensors to at least one position on the display, the further mapping being such that an image generated on the display from the selected portion of image data in accordance with the further mapping would be generated in a position on the display such as to at least partially compensate for the measured motion; determining a difference between the further mapping and the reference mapping; and adjusting the further mapping to reduce the difference between the further mapping and the reference mapping.
For each selected portion of image data the respective further mapping may be determined to be such as to generate the image on the display from the selected portion of image data to maintain a selected perspective, and optionally the selected perspective is independently selectable for each portion of image data.
The method may further comprise receiving user input and selecting the selected perspective in dependence on the user input.
For each selected portion of image data the adjusting may be performed in accordance with a respective algorithm comprising at least one respective parameter, and the value of the at least one respective parameter may be selected so that the adjusting at least partially cancels a drift effect of the at least one motion sensor.
The reference mapping and/or the algorithm and/or the or each value of the at least one parameter may be independently selectable for each selected portion of image data.
The method may further comprise receiving user input and selecting the reference mapping, and/or the algorithm and/or the or each value of the at least one parameter in dependence on the user input.
In another independent aspect of the invention there is provided an imaging system comprising:-an array of sensors for obtaining image data representative of an image formed on the array, the image data being for generation of the image on a display; a motion determining sub-system for determining motion of the array of sensors; and a processing resource configured to:-determine in dependence on the determined motion a motion-compensation mapping for mapping at least one position on the array of sensors to at least one position on the display, the motion-compensation mapping being such that an image generated on the display from the image data in accordance with the motion-compensation mapping would be generated in a position on the display such as to at least partially compensate for the measured motion; determine a difference between the motion-compensation mapping and a reference mapping; and adjust the motion-compensation mapping to reduce the difference between the motion-compensation mapping and the reference mapping.
The system may be embodied wholly or partially in a mobile imaging device. The processing resource may comprise a suitably programmed processor or a plurality of such processors. The processing resource may be provided within a single device or may comprise components distributed across a plurality of devices, for example a mobile imaging device and a terminal configured to communicate with the mobile imaging device. The motion determining sub-system may comprise a motion sensor.
The motion-determining sub-system may comprise at least one motion sensor and may be configured to determine the motion from at least one measurement by the motion sensor. The at least one motion sensor may comprise at least one gyroscope.
The processing resource may be configured to adjust the difference between the further mapping and the reference mapping such as to at least partially cancel a drift effect of the at least one motion sensor.
The processing resource may be configured to perform the adjusting in accordance with an algorithm comprising at least one parameter, and the value of the at least one parameter is selected so that the adjusting at least partially cancels a drift effect of the at least one motion sensor.
The processing resource may be configured to perform the adjusting by applying an algorithm that reduces the difference over time according to at least one pre-determined time constant. The algorithm may comprise a proportional-integral-differential (PID) algorithm.
The system may further comprise a display, and the processing resource may be configured to display the image data on the display at a position on the display determined in accordance with the adjusted mapping.
The reference mapping may be representative of a reference position of the image on the display, and adjusting the mapping may comprise adjusting the mapping so as to reduce the difference between the reference position of the image and the motion-compensated position of the image.
The image data may comprise a frame of image data representative of the image on the array of sensors at a particular time, and the system may be configured to:-repeatedly obtain a series of further frames of image data, each further frame of image data being representative of the image on the array of sensors at a respective further time; and, for each frame of image data, determine motion of the array of sensors at the respective further time, determine a respective further mapping, determine a difference between the further mapping for that frame and the reference mapping, and adjust the further mapping for that frame to reduce the difference between the further mapping and the reference mapping.
The processing resource may be configured to determine a difference between the mapping and the reference mapping by determining, for the or each reference position, a difference between the position relative to the display to which the reference position is mapped according to the reference mapping and the position relative to the display to which that position is mapped according to the further mapping.
The processing resource may be configured to determine the motion of the array independently of the content of the image.
The array of sensors may be installed on or in a vehicle, handheld device or other moveable object, for example a car, truck, tank, aeroplane, UAV, rocket, ship or buoy.
The processing resource may be configured to determine the further mapping to be such as to generate the image on the display to maintain a selected perspective.
The processing resource may be configured to select a portion of the image data, and to perform a digital zoom operation on the selected portion of the image data.
That feature is particularly important and so in another independent aspect of the invention there is provided an imaging system comprising an array of sensors for obtaining image data representative of an image formed on the array, the image data being for generation of the image on a display; a motion determining sub-system for determining motion of the array of sensors; and a processing resource configured to determine in dependence on the determined motion a mapping for mapping at least one position on the array of sensors to at least one position on a display, the mapping being such that an image generated on the display from the image data in accordance with the mapping would be generated in a position on the display such as to at least partially compensate for the measured motion; wherein the processing resource is configured to select a portion of the image data and perform a digital zoom operation on the selected portion of image data.
The processing resource may be configured to display the digitally zoomed selected portion of the image data on the display in accordance with the adjusted further mapping.
The processing resource may be configured to select the portion of the image data and/or determine at least one property of the digital zoom operation in dependence on user input.
The processing resource may be configured to select a plurality of portions of the image data and for each selected portion perform a respective digital zoom operation.
The system may further comprise means for providing a stream of the image data, the stream comprising a plurality of portions, each portion being for display on a respective, different display or window. The streaming means may be configured to interleave the portions of image data in the stream.
The system may comprise a plurality of displays, and the processing resource may be configured to, for each of the plurality of displays, display the image data on a display at a position on the display determined in accordance with the adjusted further mapping.
The processing resource may be configured to, for each of the selected portions of the image data, determine a respective mapping for mapping at least one position on the array of sensors to at least one position on the display, the mapping being such that an image generated on the display from the selected portion of image data in accordance with the mapping would be generated in a position on the display such as to at least partially compensate for the measured motion; determine a difference between the mapping and the reference mapping; and adjust the mapping to reduce the difference between the mapping and the reference mapping.
For each selected portion of image data the respective mapping may be determined to be such as to generate the image on the display from the selected portion of image data to maintain a selected perspective, and optionally the selected perspective is independently selectable for each portion of image data.
The system may further comprise user input means and the processing resource may be configured to select the perspective in dependence on the user input.
For each selected portion of image data the adjusting may be performed in accordance with a respective algorithm comprising at least one respective parameter, and the value of the at least one respective parameter may be selected so that the adjusting at least partially cancels a drift effect of the at least one motion sensor.
The reference mapping, and/or the algorithm and/or the or each value of the at least one parameter may be independently selectable for each selected portion of image data.
The processing resource may be configured to select the reference mapping, and/or the algorithm and/or the or each value of the at least one parameter in dependence on user input obtained via the user input means.
In another independent aspect of the invention there is provided a computer program product comprising computer readable instructions that are executable to perform a method as claimed or described herein.
There may also be provided an apparatus, system or method substantially as described herein with reference to the accompanying drawings.
Any feature in one aspect of the invention may be applied to other aspects of the invention, in any appropriate combination. For example, apparatus or system features may be applied to method features and vice versa.
Detailed description of embodiments
Embodiments of the invention are now described, by way of non-limiting example, and are illustrated in the following figures, in which:-
Figure 1 is a schematic illustration of an imaging device installed in a vehicle;
Figure 2 is a schematic diagram showing the imaging device in more detail;
Figures 3a and 3b are schematic diagrams of portions of a pixel sensor array and a display;
Figures 4a to 4f are schematic diagrams of an image sensor array and a display, illustrating the effect of movement of the image sensor array on the image displayed on the display;
Figures 5a to 5f are schematic diagrams of an image sensor array and a display, illustrating the effect of gyroscope drift on the image displayed on the display;
Figure 6 is a flowchart illustrating in overview one mode of operation of the imaging device of Figure 1;
Figure 7 is a schematic diagram of components of the imaging device in a side view;
Figures 8a to 8d are diagrams showing frames of pixel sensor signals overlaid on the display;
Figure 9 is a diagram of a PID control algorithm module;
Figure 10 is a representation of a PID control algorithm;
Figures 11a to 11c are diagrams of the imaging device viewing a scene;
Figures 12 and 13 are diagrams of an image sensor array and a display showing reference mappings between the array and the display; and
Figures 14 to 18 are diagrams showing the imaging device viewing scenes, and the displays that display images from the imaging device.
An imaging device 2 according to one embodiment installed in an unmanned aerial vehicle (UAV) is illustrated in Figure 1. In alternative embodiments the imaging device can be installed in any other type of vehicle, for example an aeroplane, helicopter, car, tank or other armoured vehicle. Alternatively the imaging device can be a handheld device for operation whilst being held by an operator, for example a handheld visible or infra-red camera, or can be a projectile imaging device for imaging scenes in-flight or after landing after having been thrown or otherwise projected.
The imaging device 2 is illustrated in more detail in Figure 2, and includes a camera having a lens that focuses light on a sensor array 12. In this embodiment the sensor array is a rectangular two dimensional array of 2048 x 2048 CMOS pixel sensors. In alternative embodiments, larger or smaller sized sensors that use CMOS or other technologies are used. Any suitable focal plane array image sensor that is sensitive to the electromagnetic radiation of interest, for example visible, infra-red or x-ray radiation, may be used.
The sensor array 12 is connected to processing circuitry 14 that in this embodiment comprises a core processor 15 and a memory 17, for example a Xilinx Virtex-6 field programmable gate array with DDR2 SDRAM memory. The processor 15 is programmed with, and/or is configured to execute, a motion sensing module for determining motion from measurements by the motion sensing system, a mapping module for mapping pixel signals received from the array 12, and a mapping adjustment module for adjusting mappings determined by the mapping module with respect to a reference mapping as described in more detail below.
The processor is configured to control operation of the sensor array and camera, including for example controlling frame rate, focussing, zoom functions and perspective functions. The processor is also configured to process pixel sensor signals received from the sensor array 12 to provide image data for display, and that processing includes performing image stabilisation algorithms to stabilise the images to compensate for motion of the imaging device 2, as discussed in more detail below.
A motion sensing system 16 is also connected to the processor and comprises three gyroscopes each arranged along a different orthogonal axis, an Analog Devices ADIS 16360 module containing three orthogonal gyroscopes in this embodiment. Any other suitable arrangement of gyroscopes or other motion sensing devices, for example accelerometers, can be provided in alternative embodiments. Measurement signals are provided from the motion sensing system 16 to the processor of the processing circuitry 14 and the processor is configured to determine the angular and/or translational motion of the imaging device 2 from the received measurement signals. The processor is able to determine the motion of the imaging device 2 independently of the imaging process, and for example without reference to the content of images obtained from the imaging device. The determined motion is subsequently used by the processor in processing the image data as described in more detail below.
In alternative embodiments, a motion sensing system is not provided as part of the imaging device 2 itself, and instead the processor is connected to, and determines motion from measurement signals received from, motion sensing apparatus (for example gyroscopes, accelerometers, a compass, a GPS device or a radar device) installed in the UAV 4 or other vehicle.
The processing circuitry 14 is also connected to wireless communication circuitry 18 comprising a wireless transceiver and antenna, that can be used to transmit image data from the processor to a remote server or terminal 200 for viewing on a display by an operator. The terminal 200 includes wireless communication circuitry 202 for communicating with the device 2. The terminal also includes an image processing module 204 for processing image data received from the device to display images on the display 90, and a control module 206 for controlling operation of imaging processes of the device 2, via the sending of control signals to the device via the wireless communication circuitry. The control module 206 and the image processing module can be implemented in any suitable hardware, software or combination of software and hardware. The terminal includes user input devices, for example a keyboard and mouse, and the control module 206 is operable to generate a user interface on the display 90 to enable a user to select various control parameters for controlling the image capture or display processes.
In other embodiments, other communication systems may be employed as well as or instead of the wireless communication circuitry of the device 2 or terminal 200, for example wired or fibre-optic communication systems.
In operation, an image is formed on the pixel sensor array, and the processor receives a respective pixel sensor signal from each of the pixel sensors. For each pixel sensor the pixel sensor signal is representative of the light level received by that pixel sensor. In the case of a colour sensor, sensor signals are usually received for each pixel sensor, representative of red, green and blue light levels. The pixel sensor signals together form a frame of image data. The pixel sensor signals are refreshed and are read periodically by the processor to obtain a succession of frames of image data, each representative of the image at a respective time.
The processor is configured to process the pixel sensor signals and to output corresponding display data that is readable by a display device to display an image on the display device. Each frame of display data comprises or is representative of a plurality of display pixel signals, and each display pixel signal is representative of the brightness (and colour in the case of a colour image) of the output to be provided by the display device at a corresponding pixel position on the display device.
There is a direct correspondence between the pixel sensors of the sensor array and the pixel signals of the display data, as illustrated schematically in Figures 3a and 3b.
Figure 3a is a schematic diagram of a portion 30 of the sensor array 12, which comprises a plurality of pixel sensors labelled a to l, and Figure 3b is a schematic diagram of a corresponding portion 32 of the display showing pixel signals of the display data that are displayed on the portion 32 of the display. It can be seen that in this embodiment there is a one-to-one mapping of each pixel sensor to a corresponding display pixel, and for each frame the value of a display pixel signal corresponds to the value of a pixel sensor signal obtained from a corresponding pixel sensor. In other embodiments, for example colour sensors using Bayer colour filters, interpolation of the image sensor pixels may be performed to generate colour pixels that map to corresponding display pixels.
It can be understood from Figures 3a and 3b that if the mapping between pixel sensors and display pixels is constant and does not change between frames, then any movement of an image on the sensor array (for example due to movement of the sensor) would be reflected in a corresponding movement of the display image represented by the display data, and in the image displayed to an operator on the display.
It is a feature of the embodiment of Figure 1 that the mapping between pixel sensors and display pixels is not constant but instead is performed in dependence on the motion of the imaging device 2 measured by the motion sensing system 16. Thus the mapping can vary from frame to frame in order to compensate for measured motion of the device 2 (a motion-compensated mapping), and to compensate for the error due to drift in the motion-measuring sensors (an adjusted mapping).
Figures 4a to 4f, which are schematic diagrams of an image sensor array (Figure 4a) and a corresponding display (Figure 4b), illustrate the effects of the motion-compensated mapping prior to adjustment. Figure 4c shows, for one frame of data, an image of an object (a cross-hatched rectangle) formed on the sensor array 30 and Figure 4d shows the corresponding image represented on the display.
Figure 4e shows, for the next frame of data, the image of the object formed on the sensor array and Figure 4f shows the corresponding image represented on the display as it would appear after applying the motion-compensated mapping without adjustment, and it can be seen that the image of the object has not moved on the display, the mapping correcting, in this case, for the rotation of the UAV 4 on which the imaging device 2 is mounted.
It can be understood that without any image stabilisation, any movement of the imaging device 2 can make viewing or analysis of the image by an operator or other user difficult.
The motion-dependent mapping applied by the processor according to the embodiment of Figure 1 is described in more detail below, and is performed in dependence on gyroscope measurement signals obtained from the gyroscopes of the motion sensing system 16. It has been found that in practice the gyroscope measurement signals can drift over time, both for MEMS gyroscopes and other types of gyroscopes, and thus indicate that there has been movement of the imaging device 2 even when there has not been any such movement, which in turn can affect the position and orientation of the image displayed on the display. That is illustrated in Figures 5a to 5f, which are schematic diagrams of an image sensor array (Figure 5a) and a corresponding display (Figure 5b). Figure 5c shows, for one frame of data, an image of an object (again a cross-hatched rectangle) formed on the sensor array and Figure 5d shows the corresponding image represented on the display.
Figure 5e shows, for a later frame of data, the image of the object formed on the sensor array and Figure 5f shows the corresponding image represented on the display. In this case it can be seen that the image of the object formed on the sensor array has not moved as there has been no movement of the imaging device 2.
However, due to drift in the gyroscope measurement signals, the motion compensation mapping causes the image formed on the display to be rotated to compensate for the apparent (although non-existent in this case) motion indicated by the gyroscope measurement signals. In Figure 5f, the imaging device has not moved but the displayed image has moved due to gyroscope drift.
Gyros are used to measure the motion of a body to which an image sensor is attached thus enabling the image to be stabilised by compensating for the body motion when displaying the image to a user or prior to further image processing.
Drift can become visible to a user rapidly; for example, an image can drift off a user display within minutes due to the effects of drift on a system using a motion-compensation algorithm. Thus drift effects can become significant over relatively small time intervals, in particular for vehicle-mounted applications.
In view of the issue of gyroscope drift, it is a further, important feature of the embodiment of Figure 1 that as well as applying a motion-dependent mapping the processor is also configured to apply an adjustment to the motion-dependent mapping between pixel sensors and display pixels, as described in more detail below. It has been found that the adjustment to the motion-dependent mapping can be effective in compensating for drift effects experienced by the motion sensing system 16 (for example experienced by the gyroscopes of the motion sensing system 16) and can also be effective in adjusting the mapping in the presence of movement of the imaging device over longer timescales.
Figure 6 is a flowchart showing in overview the processing and mapping of pixel sensor signals to pixel display signals, and the processing of gyro measurement signals, by the processor of the embodiment of Figure 1 on a frame-by-frame basis.
With regard to the processing of gyro measurement signals, at the first stage 50 the processor reads measurement signals from the gyroscopes of the motion sensing system 16. Each time the processor reads measurement signals from the gyroscopes it calculates the position of the imaging device 2 from the gyroscope measurement signals using known techniques 52. Usually gyroscope measurement signals are received more frequently than image frame measurements are read by the processor. The processor determines when it is time to obtain a frame measurement by reading sensor signals from the sensor array 12 and at the next stage of the process 54 stores the most recently determined position for use in the processing of the frame data. If the processor determines that it is not yet time to obtain a frame measurement then it repeats the reading of measurement signals from the gyroscopes at stage 50 and recalculates the position at stage 52.
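As a rough illustration only, the calculation of position from the gyroscope measurement signals may be sketched as an integration of angular rates into an orientation estimate that is stored whenever a frame is read; the small-angle integration below is a simplification of the known techniques referred to above, and the sample values and sample interval are assumptions.

    import numpy as np

    def integrate_gyro(orientation, rates, dt):
        """Update a (roll, pitch, yaw) estimate in radians from one set of
        three-axis gyroscope rates in rad/s sampled every dt seconds.
        Small-angle integration only; a full implementation would use
        quaternions or rotation matrices."""
        return orientation + np.asarray(rates, dtype=float) * dt

    # Gyroscope samples typically arrive more often than image frames; the
    # most recently computed orientation is stored when a frame is read.
    orientation = np.zeros(3)
    for rates in [(0.010, -0.002, 0.000), (0.011, -0.001, 0.000)]:  # assumed readings
        orientation = integrate_gyro(orientation, rates, dt=0.005)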
With regard to the imaging process, in the first stage of the process 60 the processor waits until it determines that it is time to take a measurement from the sensor. The processor then, at the next stage of the process 62, reads a pixel sensor signal from each pixel sensor of the sensor array 12.
Next, the processor maps 64 the received pixel sensor signals to other ones of the pixel sensors in dependence on the most recently determined position data, so as to compensate for the measured motion of the imaging device 2. Effectively, in the event of motion of the imaging device 2, for each pixel sensor the pixel sensor signal obtained in response to an image portion located at that pixel sensor is attributed to another pixel sensor that would have received that image portion if the imaging device 2 had not moved.
The mapping of pixel sensor signals to particular pixels in dependence on measured motion can be performed using any suitable technique. In the embodiment of Figure 1, the processor applies a mapping technique as described in WO 2008/090345, which is hereby incorporated by reference.
The processor maps each of the pixels of the image sensor array 12 onto an imaginary three dimensional sphere or hemisphere or a portion of a hemisphere, whose radius from the centre of the lens is chosen by giving each pixel an offset co-ordinate in three dimensional space which is determined relative to the positional centre of the imaging device 2. These offset co-ordinates onto the one or two hemispheres are hereafter referred to as x'n, y'n and z'n.
The mapping of pixel signals to offset co-ordinates is illustrated schematically in Figure 7 which shows, by way of example, light rays directed by the lens 84 of the camera onto two pixels 80, 82 of the image sensor 12.
The origin of each light ray is in one-to-one correspondence with a pixel of the image sensor 12. Each pixel is also in one-to-one correspondence with a point x'n, y'n, z'n, where n = 0, 1, 2..., on a notional hemisphere or sphere of centre x'0, y'0, z'0 around the lens and image sensor assembly, through which the rays of light from an imaged scene pass before arrival at the lens and image sensor 12, as illustrated in Figure 7.
The co-ordinates (x'n, y'n, z'n) are defined with respect to a nominal reference point x'0, y'0, z'0 within the volume of the device, for instance at the centre of the device.
The mapping of x'n,y'n,z'n co-ordinates to pixels of the image sensor may be determined by experimentation or by calibration or may be determined by calculation based on the physical relationship between, and performance of, the lens and image sensor.
The processor performs trigonometric calculations on the x'n, y'n and z'n co-ordinates of each pixel's projection onto a hemisphere or sphere in order to alter their x'n, y'n and z'n co-ordinate values to compensate for rotational and/or translational motion of the device 2 such that the centred perspective is maintained in a fixed orientation and the attitude of the display is stable. Thus, the measured change in angle obtained from the motion sensing system is used to determine the trigonometric correction to be applied to the x'n, y'n and z'n co-ordinates that correspond to each pixel of the image sensor, in order to stabilise the image from the sensor in direction and attitude.
The mapping of pixel signals described in relation to Figure 7 is performed in order to maintain the image stable on the display with respect to a desired perspective, defined in this case with respect to the nominal reference point x'0, y'0, z'0. It is a feature of the described embodiment that the reference point can be set to any desired position, and that the desired perspective can be selected by a user, for example by the sending of commands to the processor of the imaging device via the wireless communication circuitry from the terminal 200.
In the mode of operation described above, the pixel signals are represented by Cartesian co-ordinates (x, y, z) and the pixel signals are mapped to offset Cartesian co-ordinates (x', y' and z') in accordance with trigonometric calculations to take account of the motion of the device (which may also be referred to as correction of the pixel signals or image). The device is not limited to using the Cartesian co-ordinate system or to correction of the signals using trigonometric calculations. Any suitable co-ordinates may be used, for instance spherical co-ordinates, and any suitable mapping process for pixel signals to take account of the motion of the device may be used.
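For illustration only, the trigonometric correction of the x'n, y'n and z'n co-ordinates may be sketched as a rotation of the per-pixel offset co-ordinates by the measured change in orientation; the yaw-pitch-roll rotation order and the representation of the co-ordinates as an (N, 3) array are assumptions made for the sketch.

    import numpy as np

    def compensate_orientation(coords, yaw, pitch, roll):
        """Rotate the per-pixel offset co-ordinates, given as an (N, 3) array
        of (x'n, y'n, z'n) values on the notional sphere, by the measured
        change in orientation so that the displayed perspective stays fixed.
        Angles are in radians."""
        cz, sz = np.cos(yaw), np.sin(yaw)
        cy, sy = np.cos(pitch), np.sin(pitch)
        cx, sx = np.cos(roll), np.sin(roll)
        rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
        ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
        rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
        return coords @ (rz @ ry @ rx).T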
In the next stage of the process 66, each pixel sensor signal, following the mapping described in relation to Figure 7, is mapped to a corresponding display pixel. It can be understood that if the resolution of the pixel sensor array 12 and the resolution of the display are different then a plurality of pixel sensor signals can be mapped to a single display pixel (for example the values of the pixel sensor signals can be averaged to provide a value for the display pixel signal) or a single pixel sensor signal can be mapped to a plurality of display pixels.
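As an illustration only, the averaging of a plurality of pixel sensor signals into a single display pixel signal may be sketched as a block average; the assumption that the sensor resolution is an integer multiple of the display resolution is made purely to keep the sketch short.

    import numpy as np

    def average_to_display(sensor_frame, display_rows, display_cols):
        """Map a higher-resolution sensor frame onto a lower-resolution display
        by averaging each block of pixel sensor signals into one display pixel
        signal (integer resolution ratio assumed)."""
        fy = sensor_frame.shape[0] // display_rows
        fx = sensor_frame.shape[1] // display_cols
        trimmed = sensor_frame[:display_rows * fy, :display_cols * fx]
        return trimmed.reshape(display_rows, fy, display_cols, fx).mean(axis=(1, 3))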
In the mode of operation described in the preceding paragraph, the mapping to display pixels of pixel sensor signals attributed to particular pixel sensors is constant from frame to frame, but the attribution of particular sensor signals to particular pixel sensors varies based upon a mapping to compensate for measured motion. In an alternative mode of operation, the attribution of particular sensor signals to particular pixel sensors is fixed, but the mapping to display pixels of pixel sensor signals is varied from frame to frame to compensate for the measured motion. It will be understood that both modes of operation produce the same outcome, with pixel sensor signals being mapped to particular display pixels dependent on the measured motion in such a way as to compensate for the measured motion.
Returning to the imaging process described with reference to Figure 6, it is a feature of the embodiment of Figure 1 that the processor applies an adjustment to the motion-dependent mapping between pixel sensors and display pixels (such as that described with reference to Figure 7) before producing a display signal comprising display pixel signals for display of the image on a display.
In the mode of operation described in relation to Figure 6, the adjustment is based on a reference mapping between the pixel sensors of the sensor array and the display pixels. The reference mapping in this embodiment is such as to map the sensor pixel at the top left hand corner of the rectangular sensor array 12 to the display pixel at the top left hand corner of the display, and to map the sensor pixel at the bottom right hand corner of the rectangular sensor array 12 to the display pixel at the bottom right hand corner of the display. The top left hand and bottom right hand pixels are used as reference positions in determining the reference mapping.
By monitoring where the top left hand corner and bottom right hand corner pixels are mapped it is possible to keep the image data aligned with the display and so remove the drift from the display, as described in more detail below.
In this case, if an image were to be displayed on the display according to the reference mapping it would be aligned with the view and orientation of the sensor array. For example, if the imaging device was pointing in a forward direction of the vehicle, the image displayed on the display would be of a view in the forward direction of the vehicle, without rotation.
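By way of illustration only, the monitoring of the corner reference positions may be sketched as follows; the 2048 x 2048 array and display sizes and the callable motion_mapping are assumptions introduced for the sketch.

    import numpy as np

    # Assumed sizes: a 2048 x 2048 sensor array mapped onto a 2048 x 2048
    # display, with the top left and bottom right sensor pixels used as the
    # reference positions.
    reference_targets = np.array([[0.0, 0.0], [2047.0, 2047.0]])

    def corner_misalignment(motion_mapping):
        """Return, for each corner reference position, the offset between the
        display position given by the motion-compensation mapping and the
        display position given by the reference mapping. motion_mapping is a
        hypothetical callable taking a sensor (x, y) and returning a display
        (x, y)."""
        corners = [(0.0, 0.0), (2047.0, 2047.0)]
        mapped = np.array([motion_mapping(x, y) for x, y in corners])
        return mapped - reference_targets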
The control of the adjustment applied to the motion-compensation mapping, and the effects of the adjustment, are now described in more detail with reference to Figures 8a to 8d.
Figure 8a shows the display 90, and a frame 92 of pixel sensor signals overlaid on the display 90 to show the positions relative to the display to which the pixel sensor signals are assigned according to the motion-compensation mapping at stage 66. In this case, the frame of pixel sensor signals is aligned exactly with the display.
Figure 8a includes an image of a stationary cross 94 that forms part of the scene that is imaged by the sensor array and is displayed on the display 90 in the position shown, according to the mapping. It can be seen that the image of the cross is formed roughly at the centre of the display 90 in this case.
Figure 8b shows the display 90 together with the next frame 92 of pixel sensor signals obtained from the sensor array, overlaid on the display according to the motion-compensation mapping. There has been a movement of the sensor array since the frame of pixel sensor signals of Figure 8a was obtained, and the motion-compensation mapping has mapped the pixel sensor signals to the display in order to compensate for the movement.
It can be seen that in Figure 8b the image of the stationary cross 94 is represented by different pixel sensor signals than was the case for Figure 8a (the image of the cross is in a different position relative to the frame of pixel sensor signals) as the image is formed on a different part of the image sensor array due to the movement of the image sensor array. However, it can also be seen that the image of the cross is formed at substantially the same position on the display as was the case for the frame of Figure 8a, as the motion-compensation mapping is used to assign the pixel sensor signals to locations relative to the display to compensate for the measured motion. Thus, a user would see a stable image of the cross 94, or other part of the imaged scene, despite the movement of the image sensor array.
Figure 8b indicates the position of the top left hand display pixel (having co-ordinates xd0, yd0) and the bottom right hand display pixel (having co-ordinates xdn, ydn). The display positions to which the top left hand pixel sensor signal (having co-ordinates x0, y0) and the bottom right hand pixel sensor signal (having co-ordinates xn, yn) are mapped by the mapping are also shown in Figure 8c.
The mapping to compensate for measured motion means that some parts of the image obtained by the sensor array for a particular frame may not be displayed on the display for that frame (in the case of Figure 8b, for instance, the pixel sensor signal obtained from the top left hand pixel sensor (x0, y0) would not be displayed on the display according to the mapping obtained at stage 66).
As mentioned above, the motion compensation mapping is adjusted to reduce the difference between the motion compensation mapping and the reference mapping.
Thus, in the case of Figure 8b, the position of the cross 94 on the display would not be identical to the position of the cross 94 on the display in Figure 8a, due to the adjustment. However, the level of adjustment between successive frames is usually sufficiently small that the image formed on the display seems stable to the user.
The adjustment provided by the processor according to the mode of operation illustrated in Figure 6 is determined according to a control algorithm with a sufficiently long time constant, which moves the rotated image such that (x0, y0) gradually becomes the same as (xd0, yd0) and (xn, yn) gradually becomes the same as (xdn, ydn).
In the embodiment of Figure 1, the algorithms used to determine the adjustment to the motion compensation mapping are proportional-integral-derivative (PID) control algorithms.
The inputs to the PID control algorithms are illustrated graphically in Figure 8c, which shows the display 90 and the same frame 92 of pixel sensor signals obtained from the sensor array as shown in Figure 8b. In this case, the inputs are x_error (the difference between x0 and xd0 according to the motion-compensation mapping), y_error (the difference between y0 and yd0 according to the motion-compensation mapping), and z_error_indication (the difference between x0 and xdn). The values of x_error (= x0 - xd0) and y_error (= y0 - yd0) in this mode of operation represent the offset between the position assigned to the sensor pixel at the top left hand corner of the rectangular sensor array according to the reference mapping and according to the motion-compensation mapping. The value of z_error_indication gives an indication of how much the image needs to be rotated. The amount that the image needs to be rotated is determined from the difference between the value of z_error_indication and the value of (xn - x0). If the motion-compensation mapping rotates the image from alignment with the display in an anticlockwise direction (as shown in Figure 8b) then the value of z_error_indication is larger than the value of (xn - x0), and if the motion-compensation mapping rotates the image from alignment with the display in a clockwise direction then the value of z_error_indication is smaller than the value of (xn - x0).
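For illustration, the error inputs described above can be transcribed directly into code as follows; the function and variable names are assumptions introduced here, not part of the original disclosure.

```python
def controller_inputs(x0, y0, xn, xd0, yd0, xdn):
    """Compute the adjustment controller inputs defined in the text.

    (x0, y0) and (xn, ...) are the display positions to which the
    top-left and bottom-right sensor pixels are assigned by the
    motion-compensation mapping; (xd0, yd0) and (xdn, ...) are the
    display corner positions used by the reference mapping.
    """
    x_error = x0 - xd0                  # offset of the top-left corner in x
    y_error = y0 - yd0                  # offset of the top-left corner in y
    z_error_indication = xdn - x0       # "difference between x0 and xdn"

    # Rotation sense: compare with the mapped frame width (xn - x0).
    # Larger  -> frame rotated anticlockwise relative to the display.
    # Smaller -> frame rotated clockwise.
    rotation_indication = z_error_indication - (xn - x0)
    return x_error, y_error, rotation_indication
```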
The PID control algorithm modules used to determine the adjustment to the motion compensation mapping are illustrated graphically in Figure 9.
A separate PID control module 100, 102, 104 is provided by the processor to determine the adjustment to each of the x, y and z values to be applied to the motion-compensation mapping. Each of the PID control modules uses programmable coefficients Kp, Ki and Kd which determine the relative contributions of the proportional, integral and differential terms to the adjustment. The coefficients Kp, Ki and Kd used to determine the adjustment to each of the x, y and z values can be set independently, and are referred to as Kp_1, Ki_1 and Kd_1 (for the x adjustment), Kp_2, Ki_2 and Kd_2 (for the y adjustment) and Kp_3, Ki_3 and Kd_3 (for the z adjustment).
The algorithm applied by each of the PID control modules 100, 102, 104 (subject to each having different values for the coefficients, and different error inputs) is illustrated in Figure 10, and can be expressed mathematically as follows:

Adjustment = Kp x Error + Ki x ∫Error dt + Kd x d(Error)/dt    (1)

It can be understood that, as usual for PID-type control algorithms, the adjustment values are determined in dependence on the error measurements obtained for a series of preceding frames, not only the error measurement obtained for the latest frame. The PID control algorithm can, if the values of the coefficients are set appropriately, ensure that the motion-compensation mapping reverts to the reference mapping at a desired rate, in a stable fashion and without significant overshoot, whilst ensuring that a reasonable level of stabilisation of the images is obtained from frame to frame. The values of the coefficients can be tuned for a particular imaging device and for a particular installation and application of that imaging device.
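A minimal discrete-time implementation of equation (1) could take the following form (Python, with assumed names and example coefficient values); one such controller would be instantiated for each of the x, y and z adjustments, mirroring modules 100, 102 and 104.

```python
class PIDAdjuster:
    """Discrete-time form of equation (1):
    adjustment = Kp*error + Ki*integral(error) + Kd*d(error)/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        # Accumulate the integral term and estimate the derivative
        # from the previous frame's error.
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# One controller per axis, each with its own coefficients
# (Kp_1..Kd_1, Kp_2..Kd_2, Kp_3..Kd_3 in the text); dt is the frame period.
# The numeric values below are arbitrary illustrations, not tuned values.
pid_x = PIDAdjuster(kp=0.05, ki=0.001, kd=0.0, dt=1 / 25)
pid_y = PIDAdjuster(kp=0.05, ki=0.001, kd=0.0, dt=1 / 25)
pid_z = PIDAdjuster(kp=0.05, ki=0.001, kd=0.0, dt=1 / 25)
```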
The processor applies the PID control algorithms at stage 72 to obtain values of adjustments (x_adjustment value, y_adjustment value and z_adjustment value) that are used to adjust the x, y, and z co-ordinates of the motion-compensation mapping obtained at stage 66.
At stage 74 the processor applies the adjustments to the motion-compensation mapping and then applies the adjusted mapping to the pixel sensor signals to generate a display signal according to the adjusted mapping for display of an image on the display.
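Putting the stages together, a per-frame loop along the lines of stages 66 to 74 might be organised as in the sketch below, reusing the PIDAdjuster sketch above; every name, key and numeric conversion factor here is an assumption made for illustration.

```python
import numpy as np

def apply_mapping(frame, state):
    # Stand-in for the full mapping: an integer-pixel shift only; a
    # complete implementation would also rotate the frame by state['z'].
    return np.roll(frame,
                   (int(round(state['y'])), int(round(state['x']))),
                   axis=(0, 1))

def process_frame(frame, gyro_rates, pid_x, pid_y, pid_z, state, dt):
    # Stage 66: update the motion-compensation mapping from the
    # gyroscope rate measurements (rad/s converted to pixel offsets).
    state['x'] -= gyro_rates['yaw'] * dt * state['px_per_rad']
    state['y'] -= gyro_rates['pitch'] * dt * state['px_per_rad']
    state['z'] -= gyro_rates['roll'] * dt

    # Stage 72: PID adjustments pulling the mapping back towards the
    # reference mapping (zero offset, zero rotation).
    state['x'] -= pid_x.update(state['x'])
    state['y'] -= pid_y.update(state['y'])
    state['z'] -= pid_z.update(state['z'])

    # Stage 74: apply the adjusted mapping to the pixel sensor signals
    # to produce the display signal.
    return apply_mapping(frame, state)
```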
In the example of Figure 8b, if there were no subsequent movements of the sensor device 2 then the control algorithm applied by the processor would be such that after a sufficient number of subsequent frames the image obtained by the sensor array would be aligned exactly with the display 90. That is illustrated in Figure 8d, which shows the display 90 after a number of further frames (in this case 6000 frames, representing a time period of 5 minutes) in the absence of further motion of the sensor array. It can be seen that the frame of pixel sensor signals has become aligned with the display 90, due to the successive adjustments of the mapping for each successive frame. It can also be seen that the position of the image of the cross 94 on the display has changed, despite the fact that the cross 94 is stationary. That is because the image on the display has gradually become aligned with the new position and orientation of the sensor array following the movement of the sensor array, due to the gradual adjustment of the motion-compensation mapping.
In the example of Figures 8a to 8d, the level of adjustment is set such that the adjusted motion compensation mapping compensates for relatively rapid motion or short term changes in position or orientation of the sensor array (for example due to vibration or turbulence effects causing unwanted movement of the vehicle to which the sensor array is attached) so as to maintain a stable image in the presence of such relatively rapid or short term motion, but such that slower or longer term changes (for example banking or turning of the vehicle) are tracked such that the display is realigned to the new orientation of the sensor array during or following such slower or longer term changes.
The control algorithm can be such as to ensure that images captured by the sensor array are displayed in substantially the same orientation on the display from frame-to-frame, with adjustments to the mapping being done slowly (for example over a predetermined time period or number of frames) so that slow movements (such as a vehicle moving round a corner) are tracked (ie the image appears on the display in the same orientation and position as it appears on the sensor array) but fast movements (such as bumps in the road that are stabilised by the motion compensation/image stabilisation mapping, in the case of a land vehicle) are not.
For example, if the imaging device is arranged to look forward from a vehicle such as a car, tank or other motor vehicle, vibrations and high speed motion of the vehicle would result in a stable image with a fixed perspective, but if the vehicle were to drive round a corner, the image displayed would slowly track that movement such that the display would continue to show what would be seen directly in front of the vehicle.
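The separation between fast motion (stabilised) and slow motion (tracked) can be seen in a toy simulation such as the one below; the proportional-only recentring and all of the numerical values are assumptions chosen purely to illustrate the behaviour, not the device's actual parameters.

```python
import numpy as np

# Toy simulation: the mapping offset absorbs fast vibration, while a
# slow recentring term gradually re-aligns the display after a turn.
dt, frames = 1 / 25, 25 * 30            # 30 seconds of 25 fps video
kp = 0.02                               # per-frame recentring gain (assumed)
t = np.arange(frames) * dt

vibration = 5.0 * np.sin(2 * np.pi * 10 * t)     # +/- 5 px vibration at 10 Hz
turn = np.where(t > 15, 100.0, 0.0)              # 100 px heading change at t = 15 s
camera = vibration + turn                        # apparent scene position on the sensor

offset = np.zeros(frames)               # mapping offset relative to the reference
for k in range(1, frames):
    motion = camera[k] - camera[k - 1]  # measured frame-to-frame motion
    offset[k] = offset[k - 1] + motion  # compensate the motion...
    offset[k] -= kp * offset[k - 1]     # ...then pull slowly back to the reference

display = camera - offset               # position of the scene on the display
print("display jitter during vibration only:", display[:300].std().round(2), "px")
print("display position 15 s after the turn:", display[-1].round(1), "px")
```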
The level of adjustment of the motion-compensation algorithm may also compensate for drift in the gyroscope or other motion sensor measurements. Usually such drift occurs relatively slowly, and the gyroscope measurements are similar to those that would be obtained, in the absence of gyroscope drift, if the image sensor device 2 or the vehicle on which it is installed was slowly turning. The motion compensation mapping attempts to compensate for the gyroscope drift as, from the point of view of the processor, the gyroscope drift seems to represent motion of the device 2.
However, the adjustment to the motion compensation mapping reduces the effects of the motion compensation mapping, and if the level of adjustment is calibrated correctly it can cancel the effects of gyroscope drift. In that case, if there is no motion of the sensor array then the display would show an un-rotated image of a scene regardless of drift from the gyros, as the drift in measurements from the gyros occurs sufficiently slowly that the adjustments to the mapping from frame to frame negate the drift effects.
The PID adjustment algorithm can be considered to perform in a similar fashion to a filter on the gyroscope output, which allows the drift to be measured and compensated for. In contrast to a high pass filter, however, the drift is guaranteed to be removed, whereas with a high pass filter some drift passes through and accumulates.
More expensive, and larger, gyroscopes tend to have less drift than less expensive gyroscopes. In the described embodiments, the use of the mapping adjustment can provide for the use of cheaper, and smaller, gyroscopes, whilst maintaining a desired level of performance.
In order to ensure that the algorithm can cancel out gyroscope (or other motion sensor) drift effects, the various control coefficients for the PID control algorithm can be set to appropriate values, for example by calibration for a particular device.
The values of the various control coefficients can also be set in dependence on the likely levels (magnitude and/or frequency) of movements that may be experienced and the desired level of stability of an image. The coefficients can be modified to optimise the performance for a particular system. For example, if the coefficients are set so as to centre the image too quickly, image stabilisation may be poor, whereas if they are set to centre the image too slowly, the slow tracking may make it difficult for a user to correlate the image with the vehicle orientation.
For example, if the imaging device is to be mounted on a land vehicle for use off-road, relatively large and unpredictable motion of the device from frame-to-frame may be expected and the values of the coefficients needed to obtain a sufficiently stable image can be set accordingly. In contrast if the imaging device is to be mounted in an aeroplane for use in high altitude reconnaissance then movements of the device from frame-to-frame may in general be relatively small, although greater stability in the images may be required if it is desired to observe objects forming a relatively small part of a scene.
In the mode of operation described in relation to Figures 8a to 8d, magnification of the image (if any) is provided by the lens of the camera 10, and the reference mapping is such as to map all pixel sensor signals to the display. It is a feature of the embodiment of Figure 1 that, in alternative modes of operation, digital magnification of portions of the image on the display, also referred to as digital zooming, can be provided.
A digital zoom operation performed by the embodiment of Figure 1, installed in a UAV 4, is illustrated schematically in Figures 11a to 11c. The sensor array 12 of the imaging device 2 images the scene in its field of view 110, and pixel sensor signals from the sensor array representative of the image of the scene are obtained by the processor.
In this case, the imaging operation is controlled remotely by an operator via a wireless link from a remote user terminal 200 to the processor via the wireless communication circuitry 15. The user terminal 200 is associated with the display 90, which is also located remotely from the imaging device 2 and UAV 4 in this embodiment. Display signals obtained by the processor are transmitted via the wireless communication circuitry 18 for receipt at the terminal 200 and display on the display 90.
It is a feature of the mode of operation illustrated in Figures 11a to 11c that the operator can choose to digitally zoom in to a region 112 of the imaged scene by using an imaging control system installed at the user terminal. The processor of the imaging device 2 then generates display signals representative of a digitally magnified image of the selected region 112. Any suitable digital magnification or digital zooming technique for obtaining the digitally magnified image can be used, including the interpolation of measured pixel values.
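As one example of such a technique, bilinear interpolation of the measured pixel values over the selected region could be sketched as follows; this is an illustrative choice only, not necessarily the interpolation used by the device, and the function name and region convention are assumptions.

```python
import numpy as np

def digital_zoom(frame, region, out_shape):
    """Digitally magnify a selected region of the frame by bilinear
    interpolation of the measured pixel values.

    region = (y0, x0, y1, x1) in sensor co-ordinates; out_shape is the
    (height, width) of the magnified output.
    """
    y0, x0, y1, x1 = region
    oh, ow = out_shape

    # Sampling grid inside the selected region.
    ys = np.linspace(y0, y1, oh)
    xs = np.linspace(x0, x1, ow)
    yi = np.clip(ys.astype(int), 0, frame.shape[0] - 2)
    xi = np.clip(xs.astype(int), 0, frame.shape[1] - 2)
    fy = (ys - yi)[:, None]
    fx = (xs - xi)[None, :]

    # Bilinear blend of the four neighbouring sensor pixels.
    tl = frame[yi][:, xi]
    tr = frame[yi][:, xi + 1]
    bl = frame[yi + 1][:, xi]
    br = frame[yi + 1][:, xi + 1]
    top = tl * (1 - fx) + tr * fx
    bottom = bl * (1 - fx) + br * fx
    return top * (1 - fy) + bottom * fy
```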
The processor applies motion-compensation and adjustment algorithms to the sensor signals to provide motion-compensated display signals, adjusted to counteract drift effects, as already described. Thus, stabilisation and drift correction of the digitally magnified image can be obtained.
Figure 11b shows the UAV 4 of Figure 11a at a later time, after the aircraft has been moved by turbulence. The motion compensation algorithm ensures that the zoomed-in image remains stable on the display despite the movement of the aircraft caused by the turbulence.
Figure 11c shows the UAV 4 of Figure 11a at a still later time. Adjustments applied to the motion compensation mapping, as described above, ensure that the zoomed-in region 112 of the field of view 110 gradually tracks the flight of the aircraft.
The reference mapping used for the adjustment of the motion-compensated mapping in the mode of operation illustrated in Figures 8a to 8d was based on reference points at the top left hand and bottom right hand corners of the sensor array and the display. The same reference mapping can be used in generation of the digitally zoomed image, if desired. However, in the mode of operation illustrated in Figures 11a to 11c, a different reference mapping is used in the processing of the digitally zoomed image.
In this case, as illustrated schematically in Figure 12, reference points at the top left hand corner and bottom right hand corner of the selected region 112 of the field of view (xz0, yz0) and (xzn, yzn), and reference points at the top left hand corner and bottom right hand corner of the display (xd0, yd0) and (xdn, ydn), are used for the reference mapping. The reference mapping aligns the reference point at the top left hand corner of the selected region 112 of the field of view with the top left hand corner of the display 90, and aligns the reference point at the bottom right hand corner of the selected region 112 with the reference point at the bottom right hand corner of the display 90.
An alternative reference mapping is illustrated in Figure 13. In this case, the reference mapping is between the top left and bottom right hand corners of the sensor array (image sensor co-ordinates (x0, y0) and (xn, yn)) and the top left hand and bottom right hand corners (co-ordinates (xdz0, ydz0) and (xdzn, ydzn)) of a virtual display, which is a display to which the pixel sensor signals would be mapped if all of the pixel sensor signals were to be mapped to a display showing the whole image at the selected level of magnification.
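In the simplest case, either reference mapping reduces to a scale and offset between two pairs of corner points. The sketch below follows the Figure 12 variant, mapping the corners of the selected region 112 onto the corners of the display; the function name and the example numbers are assumptions made for illustration.

```python
def zoom_reference_mapping(region_tl, region_br, display_tl, display_br):
    """Reference mapping for the digitally zoomed view: a scale and
    offset taking the corners of the selected region onto the corners
    of the display.  Returns a function from sensor co-ordinates to
    display co-ordinates."""
    (xz0, yz0), (xzn, yzn) = region_tl, region_br
    (xd0, yd0), (xdn, ydn) = display_tl, display_br

    sx = (xdn - xd0) / (xzn - xz0)
    sy = (ydn - yd0) / (yzn - yz0)

    def to_display(x, y):
        return xd0 + (x - xz0) * sx, yd0 + (y - yz0) * sy

    return to_display

# Example: a 200 x 150 region starting at (400, 300) filling a 640 x 480 display.
mapping = zoom_reference_mapping((400, 300), (600, 450), (0, 0), (640, 480))
print(mapping(500, 375))   # centre of the region -> centre of the display
```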
Although one example of a selected region 112 of the field of view is shown in Figures 11a to 11c, a user is able to electronically pan and zoom anywhere within the field of view of the sensor array 12 by sending suitable instructions to the processor of the imaging device.
As well as being able to display a video sequence from within the field of view of the lens, the system can also show two videos at the same time by splitting the display, for example showing both a zoomed-in and zoomed-out video simultaneously, with the degree of zooming or panning being controlled by the user.
For example, it is a feature of the embodiment of Figure 1 that the control system provided at the user terminal can also be used to zoom into several regions of the sensor array field of view simultaneously, with different selected regions being displayed on different windows on the display. That feature is illustrated in overview in Figures 14 and 15.
Figure 14 again shows the imaging device 2 installed in a UAV 4. The sensor array 12 of the imaging device 2 images the scene in its field of view 110, and pixel sensor signals from the sensor array representative of the image of the scene are obtained by the processor.
In this example, five objects of interest, shown as circles 120, 122, 124, 126, 128, are located within the field of view 110 of the sensor array.
The user is able to view the whole field of view on the display 90, as illustrated schematically in Figure 15a, or zoom in to view a region 130 of the field of view as illustrated schematically in Figure 15b. The user is also able to view different views, optionally at different levels of magnification, in different windows, as illustrated in Figure 15c, which shows the whole field of view displayed in one window 132 on the display 90 and a magnified view of a region 130 of the field of view 110 displayed in another window 134 on the display 90. Any number of windows and any number of different views can be displayed on the display 90.
Two (or more) independent video streams can be transmitted from the imaging device at the same time and displayed on separate displays. As an example, they could be transmitted within the same PAL or NTSC signal at the full frame resolution but lower individual frame rate, thereby not increasing the transmitted video bandwidth. A mode of operation in which two displays 90, 91 are used by a single user to show zoomed-in and zoomed-out videos at the same time, with both displays sharing a single PAL or NTSC channel, is illustrated in Figures 16 and 17.
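A simple way of sharing a single channel between several views is to time-multiplex whole frames, as in the sketch below; the generator-based structure and the frame tagging are assumptions for illustration, not a description of the actual PAL/NTSC multiplexing used.

```python
def interleave_streams(streams):
    """Time-multiplex several full-resolution video streams into one
    channel: output frames are taken from the streams in turn, so with
    N streams each viewer receives 1/N of the channel frame rate.

    'streams' is a list of iterables of frames.
    """
    iterators = [iter(s) for s in streams]
    while True:
        for idx, it in enumerate(iterators):
            try:
                frame = next(it)
            except StopIteration:
                return
            # Tag each frame with its stream index so the receiving
            # terminal can route it to the right display or window.
            yield idx, frame

# Example: a zoomed-out stream for display 90 and a zoomed-in stream
# for display 91 sharing one channel at half frame rate each.
channel = interleave_streams([["out0", "out1"], ["in0", "in1"]])
print(list(channel))   # [(0, 'out0'), (1, 'in0'), (0, 'out1'), (1, 'in1')]
```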
As well as having the powerful capability of producing multiple images at the same time, as illustrated in Figures 14 and 15, the imaging system can also allow multiple users access to images from the imaging device 2 at the same time. For example, the display data from the imaging device 2 can be transmitted to multiple operator terminals, and each operator terminal can be used independently to view the scene, or to select, magnify and view regions within the scene.
A mode of operation in which multi-user capability is provided using a single PAL or NTSC channel from the device is illustrated in Figures 18 and 19. In this example four users are independently controlling their own views for their respective displays 140, 142, 144, 146 and receiving a frame rate ¼ of that available to a single user.
Similarly if there were two users, they would receive 1⁄2 the frame rate that is available to a single user. A single stream of image data is streamed from the device, with separate, interleaved portions of the stream comprising the image data for the different displays. In alternative embodiments, multiple full frame rate streams of image data are sent to multiple displays.
Each user selects the region of interest that they wish to view. The first user has chosen to display two windows on his or her display 140, with different levels of magnification.
In certain modes of operation the user, or users, are able to select different perspectives for the different views that they wish to view, for example by selecting different reference co-ordinates for use in the motion-compensation mapping process. The user or users are also able to select different properties of the adjustment of the mapping for the different views, for example different values of parameters or time constants for the PID or other algorithm, or different reference mappings, for each region that is viewed on a different window or display. The device 2 will then perform different mappings or adjustments for different selected portions of image data corresponding to the selected regions. The user or users are able to select different perspectives, options or values for the mapping and adjustment processes via the user interface at the terminal 200. The control module 206 sends corresponding control signals to the imaging device 2 in response to the user input.
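Conceptually, each selected region then carries its own mapping and adjustment settings, which might be represented along the following lines; all field names and numeric values are illustrative assumptions rather than part of the original disclosure.

```python
from dataclasses import dataclass

@dataclass
class ViewSettings:
    """Illustrative per-view configuration: each selected region can
    carry its own reference mapping and its own adjustment (PID)
    parameters, as described in the text."""
    region: tuple             # (y0, x0, y1, x1) of the selected portion
    reference_corners: tuple  # corner points defining the reference mapping
    kp: float = 0.05          # example adjustment coefficients
    ki: float = 0.001
    kd: float = 0.0

# Two users viewing different regions with different adjustment settings.
views = [
    ViewSettings(region=(0, 0, 480, 640),
                 reference_corners=((0, 0), (640, 480))),
    ViewSettings(region=(100, 200, 220, 360),
                 reference_corners=((0, 0), (640, 480)),
                 kp=0.2, ki=0.01),
]
```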
The operator terminal in the embodiments illustrated herein is a suitably programmed PC, but in alternative embodiments any suitable processing device, for example a PDA, tablet or laptop, that is suitably programmed with image processing or display software, or that includes suitable image processing or display circuitry, can be used, for example a ROVER 5 terminal (manufactured by L3 Communications).
The embodiment of Figure 1 can provide for a vehicle-mounted motion-compensated imaging system without requiring the use of gimbals or other similar mechanical components to provide the motion compensation (although such gimbals or other components can also be used if desired) which can provide for a light and mechanically simple, easy to maintain system. The device is responsive to turbulence, vibrations, collisions or other unwanted or unexpected motion without requiring mechanical servo loops such as those used in some gimballed systems.
The imaging system may be particularly suitable for installation in mini/micro UAVs, but can be used in any air, land or water-borne vehicle or platform, such as a buoy.
It will be understood that further embodiments are not limited to the components and arrangements of such components described in relation to the illustrated embodiments.
For example, in the embodiment of Figure 1, the mappings are performed by the processor included in the imaging device. In alternative embodiments, the pixel sensor signals are transmitted from the imaging device to a further device, for example the user terminal, and the mapping and other processing is performed at the further device.
A PID algorithm has been described as being used to determine the adjustment to the motion compensation mapping, but any suitable algorithm can be used, for example a fuzzy logic controller, a model predictive controller or a pseudo-derivative feedback controller. Similarly, one example of a motion compensation algorithm has been described in relation to the embodiment of Figure 1, but any suitable motion compensation algorithm can be used.
In the described embodiments, the positions of two selected pixel sensors at the corners of the array have been used as reference positions to determine the reference mapping. In alternative embodiments, any other suitable number and location of pixel sensors can be used, or indeed the reference mapping can be determined in any suitable fashion and without reference to individual pixel sensor positions.
Embodiments, or features of such, can be implemented as a computer program product for use with a computer system, the computer program product being, for example, a series of computer instructions stored on a tangible data recording medium, such as a diskette, CD-ROM, ROM, or fixed disk, or embodied in a computer data signal, the signal being transmitted over a tangible medium or a wireless medium, for example, microwave or infrared. The series of computer instructions can constitute all or part of the functionality described above, and can also be stored in any memory device, volatile or non-volatile, such as semiconductor, magnetic, optical or other memory device.
It will also be well understood by persons of ordinary skill in the art that whilst embodiments implement certain functionality by means of software, that functionality could be implemented solely in hardware (for example by means of one or more ASICs (application specific integrated circuit)) or by a mix of hardware and software.
As such, embodiments are not limited only to being implemented in software.
It will be understood that the present invention has been described above purely by way of example, and modifications of detail can be made within the scope of the invention.
Each feature disclosed in the description, and (where appropriate) the claims and drawings may be provided independently or in any appropriate combination.

Claims (33)

  1. 1. An imaging method comprising:- obtaining image data representative of an image formed on an array of sensors, the image data being for generation of the image on a display; obtaining a reference mapping for mapping at least one reference position on the array of sensors to at least one position on the display; determining motion of the array of sensors; determining in dependence on the determined motion a further mapping for mapping at least one position on the array of sensors to at least one position on the display, the further mapping being such that an image generated on the display from the image data in accordance with the further mapping would be generated in a position on the display such as to at least partially compensate for the measured motion; determining a difference between the further mapping and the reference mapping; and adjusting the further mapping to reduce the difference between the further mapping and the reference mapping.
  2. 2. A method according to Claim 1, wherein the determining of motion of the array comprises obtaining at least one measurement from at least one motion sensor and determining the motion from the at least one measurement.
  3. 3. A method according to Claim 2, wherein the at least one motion sensor comprises at least one gyroscope.
  4. 4. A method according to Claim 2 or 3, wherein the adjusting of the difference between the further mapping and the reference mapping is performed to at least partially cancel a drift effect of the at least one motion sensor.
  5. 5. A method according to any of Claims 2 to 4, wherein the adjusting is performed in accordance with an algorithm comprising at least one parameter, and the value of the at least one parameter is selected so that the adjusting at least partially cancels a drift effect of the at least one motion sensor.
  6. 6. A method according to any preceding claim, wherein adjusting the further mapping comprises applying an algorithm that reduces the difference over time according to at least one pre-determined time constant.
  7. 7. A method according to Claim 4 or 5, wherein the algorithm comprises a proportional-integral-differential (PID) algorithm.
  8. 8. A method according to any preceding claim, further comprising displaying the image data on a display at a position on the display determined in accordance with the adjusted further mapping.
  9. 9. A method according to any preceding claim, wherein the reference mapping is representative of a reference position of the image on the display, and adjusting the mapping comprises adjusting the mapping so as to reduce the difference between the reference position of the image and the motion-compensated position of the image.
  10. 10. A method according to any preceding claim, wherein the image data comprises a frame of image data representative of the image on the array of sensors at a particular time, and the method further comprises:- repeatedly obtaining a series of further frames of image data, each further frame of image data being representative of the image on the array of sensors at a respective further time; and for each frame of image data, determining motion of the array of sensors at the respective further time, determining a respective further mapping, determining a difference between the further mapping for that frame and the reference mapping, and adjusting the further mapping for that frame to reduce the difference between the further mapping and the reference mapping.
  11. 11. A method according to any preceding claim wherein the determining of a difference between the mapping and the reference mapping comprises determining, for the or each reference position, a difference between the position relative to the display to which the reference position is mapped according to the reference mapping and the position relative to the display to which that position is mapped according to the further mapping.
  12. 12. A method according to any preceding claim, wherein the at least one reference position comprises a plurality of pixel sensor positions, for example two or three pixel sensor positions.
  13. 13. A method according to any preceding claim, wherein the image data comprises a plurality of pixel signals each obtained from a respective position on the sensor array, and the further mapping comprises assigning each of the plurality of pixel signals to a respective position relative to the display.
  14. 14. A method according to any preceding claim, wherein the determining of the motion of the array is performed independently of the content of the image.
  15. 15. A method according to any preceding claim, wherein the array of sensors is installed on or in a vehicle, handheld device or other moveable object, for example a car, truck, tank, aeroplane, UAV, rocket, ship or buoy.
  16. 16. A method according to any preceding claim, wherein the further mapping is determined to be such as to generate the image on the display to maintain a selected perspective.
  17. 17. A method according to Claim 16, further comprising receiving user input and selecting the selected perspective in dependence on the user input.
  18. 18. A method according to any preceding claim, further comprising selecting a portion of the image data, and performing a digital zoom operation on the selected portion of the image data.
  19. 19. A method according to Claim 18, further comprising displaying the digitally zoomed selected portion of the image data in accordance with the adjusted further mapping.
  20. 20. A method according to Claim 18 or 19, further comprising selecting the portion of the image data and/or determining at least one property of the digital zoom operation in dependence on user input.
  21. 21. A method according to any of Claims 18 to 20, further comprising selecting a plurality of portions of the image data and for each selected portion performing a respective digital zoom operation.
  22. 22. A method according to any preceding claim, further comprising providing a stream of the image data, the stream comprising a plurality of portions, each portion being for display on a respective, different display or window.
  23. 23. A method according to Claim 22, comprising interleaving the portions of image data in the stream.
  24. 24. A method according to Claim 22 or 23, wherein each portion of the image data represents a respective different portion of the image.
  25. 25. A method according to any of Claims 22 to 24, wherein the method comprises, for each of the plurality of displays, displaying the image data on a display at a position on the display determined in accordance with the adjusted further mapping.
  26. 26. A method according to any of Claims 21 to 25, further comprising for each of the selected portions of the image data determining a respective further mapping for mapping at least one position on the array of sensors to at least one position on the display, the further mapping being such that an image generated on the display from the selected portion of image data in accordance with the further mapping would be generated in a position on the display such as to at least partially compensate for the measured motion; determining a difference between the further mapping and the reference mapping; and adjusting the further mapping to reduce the difference between the further mapping and the reference mapping.
  27. 27. A method according to Claim 26, wherein for each selected portion of image data the respective further mapping is determined to be such as to generate the image on the display from the selected portion of image data to maintain a selected perspective, and optionally the selected perspective is independently selectable for each portion of image data.
  28. 28. A method according to Claim 27, further comprising receiving user input and selecting the selected perspective in dependence on the user input.
  29. 29. A method according to any of Claims 21 to 28, wherein for each selected portion of image data the adjusting is performed in accordance with a respective algorithm comprising at least one respective parameter, and the value of the at least one respective parameter is selected so that the adjusting at least partially cancels a drift effect of the at least one motion sensor.
  30. 30. A method according to Claim 29, wherein the reference mapping, and/or the algorithm and/or the or each value of the at least one parameter is independently selectable for each selected portion of image data.
  31. 31. A method according to Claim 30, further comprising receiving user input and selecting the reference mapping, and/or the algorithm and/or the or each value of the at least one parameter in dependence on the user input.
  32. 32. An imaging system comprising:- an array of sensors for obtaining image data representative of an image formed on the array, the image data being for generation of the image on a display; a motion determining sub-system for determining motion of the array of sensors; and a processing resource configured to:- determine in dependence on the determined motion a motion-compensation mapping for mapping at least one position on the array of sensors to at least one position on the display, the motion-compensation mapping being such that an image generated on the display from the image data in accordance with the motion-compensation mapping would be generated in a position on the display such as to at least partially compensate for the measured motion; determine a difference between the further mapping and a reference mapping; and adjust the further mapping to reduce the difference between the further mapping and the reference mapping.
  33. 33. A computer program product comprising computer readable instructions that are executable to perform a method according to Claim 31.
GB1009586.7A 2010-06-08 2010-06-08 Image stabilising apparatus and method Withdrawn GB2481027A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1009586.7A GB2481027A (en) 2010-06-08 2010-06-08 Image stabilising apparatus and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1009586.7A GB2481027A (en) 2010-06-08 2010-06-08 Image stabilising apparatus and method

Publications (2)

Publication Number Publication Date
GB201009586D0 GB201009586D0 (en) 2010-07-21
GB2481027A true GB2481027A (en) 2011-12-14

Family

ID=42471333

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1009586.7A Withdrawn GB2481027A (en) 2010-06-08 2010-06-08 Image stabilising apparatus and method

Country Status (1)

Country Link
GB (1) GB2481027A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2952953A1 (en) * 2014-06-04 2015-12-09 MBDA Deutschland GmbH Method for aligning a camera system to a target object and camera system
WO2016009268A1 (en) * 2014-07-17 2016-01-21 Elbit Systems Ltd. Stabilization and display of remote images
US9330306B2 (en) 2014-06-11 2016-05-03 Panasonic Intellectual Property Management Co., Ltd. 3D gesture stabilization for robust input control in mobile environments
DE102014117277A1 (en) * 2014-11-25 2016-05-25 Airbus Ds Optronics Gmbh carrier system
WO2018053785A1 (en) * 2016-09-23 2018-03-29 Qualcomm Incorporated Image processing in an unmanned autonomous vehicle
EP3515825A4 (en) * 2016-09-23 2020-05-20 Qualcomm Incorporated Adaptive motion filtering in an unmanned autonomous vehicle
US11656081B2 (en) * 2019-10-18 2023-05-23 Anello Photonics, Inc. Integrated photonics optical gyroscopes optimized for autonomous terrestrial and aerial vehicles

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11205654A (en) * 1998-01-09 1999-07-30 Hitachi Ltd Image display device
US20090309984A1 (en) * 2006-06-29 2009-12-17 Thales Hybrid image stabilization for video camera

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11205654A (en) * 1998-01-09 1999-07-30 Hitachi Ltd Image display device
US20090309984A1 (en) * 2006-06-29 2009-12-17 Thales Hybrid image stabilization for video camera

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2952953A1 (en) * 2014-06-04 2015-12-09 MBDA Deutschland GmbH Method for aligning a camera system to a target object and camera system
US9330306B2 (en) 2014-06-11 2016-05-03 Panasonic Intellectual Property Management Co., Ltd. 3D gesture stabilization for robust input control in mobile environments
US10178316B2 (en) 2014-07-17 2019-01-08 Elbit Systems Ltd. Stabilization and display of remote images
KR20170047230A (en) * 2014-07-17 2017-05-04 엘비트 시스템스 엘티디. Stabilization and display of remote images
JP2017532798A (en) * 2014-07-17 2017-11-02 エルビット・システムズ・リミテッド Remote image stabilization and display
EP3170146A4 (en) * 2014-07-17 2018-01-10 Elbit Systems Ltd. Stabilization and display of remote images
WO2016009268A1 (en) * 2014-07-17 2016-01-21 Elbit Systems Ltd. Stabilization and display of remote images
KR102471347B1 (en) * 2014-07-17 2022-11-25 엘비트 시스템스 엘티디. Stabilization and display of remote images
DE102014117277A1 (en) * 2014-11-25 2016-05-25 Airbus Ds Optronics Gmbh carrier system
DE102014117277B4 (en) * 2014-11-25 2017-06-14 Airbus Ds Optronics Gmbh carrier system
WO2018053785A1 (en) * 2016-09-23 2018-03-29 Qualcomm Incorporated Image processing in an unmanned autonomous vehicle
EP3515825A4 (en) * 2016-09-23 2020-05-20 Qualcomm Incorporated Adaptive motion filtering in an unmanned autonomous vehicle
US10873702B2 (en) 2016-09-23 2020-12-22 Qualcomm Incorporated Adaptive motion filtering in an unmanned autonomous vehicle
US10917561B2 (en) 2016-09-23 2021-02-09 Qualcomm Incorporated Image processing in an unmanned autonomous vehicle
US11656081B2 (en) * 2019-10-18 2023-05-23 Anello Photonics, Inc. Integrated photonics optical gyroscopes optimized for autonomous terrestrial and aerial vehicles

Also Published As

Publication number Publication date
GB201009586D0 (en) 2010-07-21

Similar Documents

Publication Publication Date Title
US11263761B2 (en) Systems and methods for visual target tracking
US8896697B2 (en) Video motion compensation and stabilization gimbaled imaging system
US10771699B2 (en) Systems and methods for rolling shutter correction
GB2481027A (en) Image stabilising apparatus and method
US10337862B2 (en) Digital mapping system based on continuous scanning line of sight
EP2463622B1 (en) Cooperative nesting of mechanical and electronic stabilization for an airborne camera system
US11076082B2 (en) Systems and methods for digital video stabilization
EP1898230B1 (en) Tracking a moving object from a camera on a moving platform
US8780174B1 (en) Three-dimensional vision system for displaying images taken from a moving vehicle
US20180143636A1 (en) Autonomous system for shooting moving images from a drone, with target tracking and holding of the target shooting angle
US20110228047A1 (en) Method and apparatus for displaying stereographic images
GB2483224A (en) Imaging device with measurement and processing means compensating for device motion
CN111226154B (en) Autofocus camera and system
US11272105B2 (en) Image stabilization control method, photographing device and mobile platform
US20080118104A1 (en) High fidelity target identification and acquisition through image stabilization and image size regulation
Bereska et al. System for multi-axial mechanical stabilization of digital camera
Moore et al. UAV altitude and attitude stabilisation using a coaxial stereo vision system
Odelga et al. Efficient real-time video stabilization for UAVs using only IMU data
US11415990B2 (en) Optical object tracking on focal plane with dynamic focal length
CN115442510A (en) Video display method and system for view angle of unmanned aerial vehicle
WO2005048605A1 (en) Synthetic electronic imaging system
KR20190143172A (en) Pan-tilt-gimbal integrated system and control method thereof
CN114296479B (en) Image-based ground vehicle tracking method and system by unmanned aerial vehicle
KR20170083979A (en) System for controlling radio-controlled flight vehicle and its carmera gimbal for aerial tracking shot
Navayot et al. Real-Time Video Stabilization for Aerial MobileMultimedia Communication

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)