WO2021180294A1 - Imaging device and method for efficient capture of stationary objects


Info

Publication number
WO2021180294A1
WO2021180294A1 (PCT/EP2020/056146)
Authority
WO
WIPO (PCT)
Prior art keywords
imaging device
movement
image sensor
pixel data
scene
Prior art date
Application number
PCT/EP2020/056146
Other languages
French (fr)
Inventor
Samu Koskinen
Tomi Aarnio
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/EP2020/056146
Publication of WO2021180294A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6811Motion detection based on the image signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/685Vibration or motion blur correction performed by mechanical compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/685Vibration or motion blur correction performed by mechanical compensation
    • H04N23/687Vibration or motion blur correction performed by mechanical compensation by shifting the lens or sensor position
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation

Definitions

  • the present application generally relates to the field of imaging.
  • some example embodiments relate to capturing images with an event camera.
  • an imaging device may comprise an image sensor comprising a plurality of pixels.
  • the image sensor may be configured to provide pixel data based on the plurality of pixels, wherein the pixel data is indicative of a scene and is provided based on a change of at least one light property detected by at least one of the plurality of pixels.
  • the imaging device may further comprise an actuation unit configured to initiate the pixel data by changing the at least one light property of the at least one of the plurality of pixels. The initiation of the pixel data may be based on attributing an apparent movement to stationary objects of the scene relative to the image sensor.
  • the actuation unit may be configured to attribute the apparent movement by moving at least one component of the imaging device. This makes it possible to cause the apparent movement of the stationary objects with respect to the image sensor.
  • the at least one component may comprise at least one of: at least one lens, the image sensor, or a camera module comprising the image sensor and the at least one lens.
  • the image sensor may be configured to provide the pixel data in response to detecting that the change of the at least one light property exceeds a threshold. This solution reduces the amount of data provided by the image sensor.
  • the at least one light property may comprise at least one of: intensity of the light; brightness of the light; intensity of at least one color; or intensity of at least one wavelength range.
  • the movement of the at least one component may comprise at least one rotational movement of the camera module.
  • the rotational movement of the camera module may be with respect to at least one axis perpendicular to an optical axis of the camera module. This solution ensures sufficient apparent movement of the stationary objects of the scene.
  • the movement of the at least one component may be in an angle of 88-92 degrees to an optical axis of the imaging device. This solution ensures sufficient apparent movement of the stationary objects.
  • the movement of the at least one component may be smaller than a pixel size of the image sensor. This solution enables capturing the scene at sub-pixel accuracy.
  • the movement of the at least one component may comprise a plurality of sequential movements in a plurality of directions substantially perpendicular to the optical axis. This solution ensures sufficient apparent movement of the stationary objects.
  • the imaging device may comprise an image processor configured to receive the pixel data from the image sensor over a period of time, determine at least one shape in the pixel data, determine at least one vector indicating a location of the at least one shape in the pixel data, and store an image of the scene based on the at least one vector. This enables the image to be efficiently stored in a vector graphics format.
  • the period of time comprises the plurality of sequential movements. This solution ensures sufficient movement of the stationary objects in order to store the image in a vector graphic format.
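The shape-and-vector step above can be sketched as follows. This is a toy illustration only, not the patent's algorithm: the patent does not specify how shapes are detected, and the function name, event format, and use of a bounding box plus centroid are all assumptions.

```python
def locate_shape(events):
    """Reduce accumulated (x, y) event coordinates to one shape descriptor:
    a bounding box plus a location vector (here simply the centroid)."""
    xs = [x for x, _ in events]
    ys = [y for _, y in events]
    bounding_box = (min(xs), min(ys), max(xs), max(ys))
    location_vector = (sum(xs) / len(xs), sum(ys) / len(ys))
    return bounding_box, location_vector
```

A real implementation would cluster the events into multiple shapes first; here a single cluster is assumed for brevity.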
  • the actuation unit may be configured to determine the movement of the at least one component based on a stabilization signal and a destabilization signal.
  • the stabilization signal may be configured to compensate for movement of the imaging device.
  • the destabilization signal may be configured to cause the apparent movement of the stationary objects of the scene relative to the image sensor. This makes it possible to stabilize unintentional movements while causing the apparent movement of the stationary objects.
  • the movement of the at least one component may be determined based on a combination of the stabilization signal and the destabilization signal. This makes it possible to stabilize unintentional movements while causing the apparent movement of the stationary objects with a single component.
  • movement of a first component of the imaging device may be determined based on the stabilization signal. Movement of a second component of the imaging device may be determined based on the destabilization signal.
  • the first component may comprise the at least one lens and the second component may comprise the image sensor.
  • the actuation unit may be configured to determine a perpendicular movement of the at least one component with respect to an optical axis of the imaging device based on a stabilization signal configured to compensate for movement of the imaging device.
  • the actuation unit may be further configured to cause the apparent movement of the stationary objects of the scene relative to the image sensor based on a displacement of the perpendicular movement.
  • the displacement of the perpendicular movement may comprise a rotation of the perpendicular movement with respect to at least one axis perpendicular to the optical axis.
  • the method may comprise providing pixel data based on a plurality of pixels of an image sensor, wherein the pixel data is indicative of a scene and is provided based on a change of at least one light property detected by at least one of the plurality of pixels.
  • the method may further comprise initiating the pixel data by changing the at least one light property of the at least one of the plurality of pixels, wherein the initiation of the pixel data is based on attributing an apparent movement to stationary objects of the scene relative to the image sensor.
  • a computer program may comprise computer program code configured to cause performance of any implementation form of the method of the second aspect, when the computer program is executed on a computer.
  • a computer program product may comprise a computer readable storage medium storing program code thereon, the program code comprising instructions for performing the method according to any implementation form of the second aspect.
  • a device may comprise means for performing any implementation form of the method of the second aspect.
  • Implementation forms of the invention can thus provide a device, a method, a computer program, and a computer program product for capturing stationary objects of a scene with an event camera.
  • FIG. 1 illustrates an imaging device and a scene, according to an embodiment of the invention
  • FIG. 2 illustrates an example of apparent movement of stationary objects of a scene, according to an embodiment of the invention
  • FIG. 3 illustrates an example of a device configured to practice one or more embodiments of the invention
  • FIG. 4 illustrates an example of a method for destabilizing at least one component of an imaging device, according to an embodiment of the invention
  • FIG. 5 illustrates an example of an imaging device configured to destabilize a camera module of the imaging device, according to an embodiment of the invention
  • FIG. 6 illustrates an example of an imaging device configured to destabilize at least one lens and/or an image sensor, according to an embodiment of the invention
  • FIG. 7 illustrates an example of displacing a perpendicular movement of at least one component of an imaging device, according to an embodiment of the invention
  • FIG. 8 illustrates an example of a method for storing a vector representation of a captured image, according to an embodiment of the invention.
  • FIG. 9 illustrates an example of a method for capturing an image, according to an embodiment of the invention.
  • Event cameras may be generally exploited to capture moving objects of a scene.
  • pixels of an image sensor may be configured to provide output data, when a level of change in at least one quantity, for example brightness, exceeds a threshold.
  • the output pixel data may include a discrete packet of information comprising, for example, at least one of a pixel address, a timestamp, a polarity of the change (e.g. increase or decrease of brightness), or an instantaneous level of the quantity.
  • the image sensor may therefore be configured to output an asynchronous stream of data triggered by changes in the scene.
  • An advantage of an event camera is that undesirable effects, such as motion blur, underexposure, and overexposure, may be avoided. Furthermore, the amount of data is reduced, since pixel data is provided only for the changing pixels. Event cameras may be used, for example, in surveillance systems, for which capturing changes in the scene may be sufficient. However, for other image capture purposes it may also be desired to capture stationary objects. Therefore, example embodiments enable capturing both moving and stationary objects of the scene utilizing features of an event camera. This also enables the captured image to be efficiently stored in a vector graphics format.
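The event-generation behaviour described above can be sketched in Python. This is illustrative only and not part of the patent: the packet field names, the threshold value, and the frame-differencing formulation are assumptions (a real event sensor compares against the level at the last emitted event, per pixel and asynchronously).

```python
from dataclasses import dataclass

@dataclass
class PixelEvent:
    """One asynchronous event packet (hypothetical field names)."""
    x: int            # pixel address, column
    y: int            # pixel address, row
    timestamp: float  # time of the change
    polarity: int     # +1 for brightness increase, -1 for decrease
    level: float      # instantaneous brightness level

def detect_events(prev_frame, curr_frame, t, threshold=0.15):
    """Emit an event for each pixel whose brightness change exceeds the
    threshold; unchanged pixels produce no data at all."""
    events = []
    for y, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for x, (p, c) in enumerate(zip(prev_row, curr_row)):
            delta = c - p
            if abs(delta) > threshold:
                events.append(PixelEvent(x, y, t, 1 if delta > 0 else -1, c))
    return events
```

Note how a static scene yields an empty event list, which is exactly why an unmoved stationary object is invisible to such a sensor.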
  • an imaging device may comprise an image sensor configured to provide pixel data based on a plurality of pixels, wherein the pixel data is indicative of a scene and is provided based on a change of at least one light property detected by at least one of the plurality of pixels.
  • the imaging device may be further configured to initiate the pixel data by changing the at least one light property of the at least one of the plurality of pixels, wherein the initiation of the pixel data is based on attributing an apparent movement to stationary objects of the scene relative to the image sensor.
  • the apparent movement enables the stationary objects to be captured by an event-based image sensor, and thereby stationary objects of the scene may also be efficiently captured.
  • FIG. 1 illustrates an imaging device 100, according to an embodiment of the invention.
  • the imaging device 100 may comprise an image sensor 102.
  • the image sensor 102 may comprise a plurality of pixels, where a single pixel may be configured to capture a portion of a scene 120.
  • the imaging device 100 may comprise an event camera.
  • the image sensor 102 may be configured to operate as an image sensor of the event camera.
  • the image sensor 102 may be configured to provide pixel data based on the plurality of pixels, wherein the pixel data may be indicative of the scene 120 and be provided based on a change of at least one light property detected by at least one of the plurality of pixels.
  • the image sensor 102 may be configured to asynchronously provide the pixel data for the scene 120 based on a change of a light property of at least one pixel, or in general based on a change of at least one characteristic of at least one pixel.
  • the at least one light property may comprise at least one of an intensity of the light, brightness of the light, intensity of at least one color, or intensity of at least one wavelength range.
  • the imaging device 100 may further comprise at least one lens 104.
  • the at least one lens 104 may comprise a single lens, a plurality of lenses, a lens group, or a plurality of lens groups.
  • the at least one lens may be configured to guide light received from the scene 120 to the image sensor 102.
  • the at least one lens 104 may be configured to move along the optical axis 110 of the imaging device 100, for example, to enable focusing the imaging device 100 to the scene 120.
  • the at least one lens 104 may be configured to move perpendicular to the optical axis 110, for example, to reduce effects of unintentional movement of the imaging device, for example, by means of optical image stabilization (OIS).
  • the image sensor 102 and the at least one lens 104 may be comprised in a camera module 106.
  • the imaging device 100 may further comprise an actuation unit 108.
  • the actuation unit 108 may be configured to cause apparent movement of stationary objects of the scene 120 relative to the image sensor 102.
  • the actuation unit may be configured to initiate the pixel data by changing the at least one light property of the at least one of the plurality of pixels. The initiation of the pixel data may be based on attributing an apparent movement to stationary objects of the scene 120 relative to the image sensor 102.
  • the image sensor 102 may be configured to provide the pixel data in response to detecting that the change of the at least one light property exceeds a threshold.
  • the actuation unit 108 may be configured to attribute the apparent movement by moving at least one component of the imaging device 100.
  • the at least one component may comprise at least one of: the at least one lens 104, the image sensor 102, or the camera module 106.
  • the movement of the at least one component of the imaging device 100 may cause stationary objects of the scene 120, for example tree 114, to appear at a different position at the image sensor 102. This change may initiate provision of pixel data at the image sensor 102.
  • the actuation unit 108 may comprise circuitry, such as for example at least one processor and at least one memory, configured to determine movement of the at least one component of the imaging device 100.
  • the actuation unit 108 may further comprise any suitable mechanical elements, for example, one or more actuators, to move the component(s) of the imaging device 100.
  • FIG. 2 illustrates an example of apparent movement of stationary objects of a scene 120, according to an embodiment of the invention.
  • FIG. 2 illustrates a landscape scene 120 comprising a tree 114. Initially the scene 120 may appear at the image sensor 102 as illustrated by the solid lines. After movement of at least one component of the imaging device 100, the scene 120 may appear at the image sensor 102 as illustrated by the dotted lines. For example, tree 114 may have slightly moved in the direction indicated by arrow 202. The other objects of the scene 120 such as the clouds may be subject to similar movement. It is noted that the apparent movement of the stationary objects may be caused without moving the image sensor 102 with respect to the scene 120.
  • FIG. 3 illustrates an embodiment of an imaging device 300, for example, a camera, a mobile phone, or a tablet computer.
  • the device 300 may comprise at least one processor 302.
  • the at least one processor 302 may comprise, for example, one or more of various processing devices, for example, a co-processor, a microprocessor, a controller, a digital signal processor (DSP), processing circuitry with or without an accompanying DSP, or various other processing devices including integrated circuits such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like.
  • the device 300 may further comprise at least one memory 304.
  • the at least one memory 304 may be configured to store, for example, computer program code or the like, for example, operating system software and application software.
  • the at least one memory 304 may comprise one or more volatile memory devices, one or more non-volatile memory devices, and/or a combination thereof.
  • the at least one memory 304 may be embodied as magnetic storage devices (such as hard disk drives, floppy disks, magnetic tapes, etc.), magneto-optical storage devices, or semiconductor memories (such as mask ROM, PROM (programmable ROM), EPROM (erasable PROM), flash ROM, RAM (random access memory), etc.).
  • the device 300 may further comprise one or more sensors 308.
  • the sensors may be, for example, configured to detect and report movements of the device 300.
  • the sensors 308 may, for example, comprise at least one gyroscope or at least one accelerometer for measuring movement in one or more directions.
  • the sensors 308 may be, for example, used to detect unintentional camera shake in order to compensate for the unintentional movement by optical or mechanical image stabilization means.
  • the device 300 may further comprise a communication interface (not shown) configured to enable the device 300 to transmit and/or receive information, for example to transmit image data captured by device 300 or to receive configuration information for capturing images.
  • the communication interface may be configured to provide at least one wireless radio connection, such as a 3GPP mobile broadband connection (e.g. 3G, 4G, 5G).
  • the communication interface may be configured to provide one or more other types of connections, for example, a wireless local area network (WLAN) connection such as, for example, one standardized by the IEEE 802.11 series or the Wi-Fi Alliance; a short range wireless network connection, for example, a Bluetooth, NFC (near-field communication), or RFID connection; a wired connection, for example, a local area network (LAN) connection, a universal serial bus (USB) connection, a high-definition multimedia interface (HDMI) connection, or an optical network connection; or a wired Internet connection.
  • the communication interface may comprise, or be configured to be coupled to, at least one antenna to transmit and/or receive radio frequency signals.
  • One or more of the various types of connections may also be implemented as separate communication interfaces, which may be coupled to or configured to be coupled to a plurality of antennas.
  • the device 300 may further comprise a user interface 310 comprising or being configured to be coupled to an input device and/or an output device.
  • the input device may take various forms such as for example a touch screen and/or one or more embedded control buttons.
  • the output device may for example comprise at least one display, speaker, vibration motor, or the like.
  • the at least one processor and/or the memory may be configured to implement this functionality.
  • this functionality may be implemented using program code 306 comprised, for example, in the at least one memory 304.
  • the at least one processor 302 and/or the memory 304 may be configured to at least partially perform functionality of the actuation unit 108 and/or image processor 502 of FIG. 5.
  • the functionality described herein may be performed, at least in part, by one or more computer program product components such as software components.
  • the device 300 comprises a processor or processor circuitry, such as for example a microcontroller, configured by the program code, when executed, to execute the embodiments of the operations and functionality described herein.
  • the functionality described herein can be performed, at least in part, by one or more hardware logic components.
  • illustrative types of hardware logic components include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems-on-a-chip (SOCs), complex programmable logic devices (CPLDs), and graphics processing units (GPUs).
  • the device 300 comprises means for performing at least one method described herein.
  • the means comprises at least one processor and at least one memory including computer program code, the at least one memory and the computer code configured to, with the at least one processor, cause the device at least to perform the method.
  • FIG. 4 illustrates an example of a method 400 for destabilizing at least one component of an imaging device, according to an embodiment of the invention.
  • the method 400 may be initiated, for example, in response to receiving instructions to initiate image capture.
  • the instructions may be received, for example, via the user interface 310, an internal data interface, or an external data interface over a network connection.
  • the imaging device 100 may obtain a destabilization signal.
  • the destabilization signal may in general comprise any data suitable for representing movement of at least one component of the imaging device 100.
  • the destabilization signal may indicate a plurality of movements, for example, as a data structure that associates directions of movement with corresponding distances of movement.
  • the destabilization signal may further indicate an order of the movements.
  • the destabilization signal may further indicate a pattern of movements that is configured to be repeated.
  • the movement of the at least one component may comprise vibration like movement, for example, a pseudorandom sequence of movements.
  • the movement may be predefined and it may be configured to be performed during a time interval.
  • the destabilization signal may therefore comprise a static or semi- static sequence of movements that is defined before initiating the movement or the image capture.
  • the destabilization signal may be for example retrieved from a memory or received over a network connection.
  • a semi-static destabilization signal may be updated between image capturing events, for example based on information received over the network connection.
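One possible encoding of such a destabilization signal is sketched below: an ordered sequence of (direction, distance) movements, generated pseudorandomly from a fixed seed so the pattern is reproducible and can stay semi-static between capture events. All names, units, and the step size are hypothetical; the patent does not prescribe a format.

```python
import random

def make_destabilization_signal(steps=8, step_um=0.5, seed=42):
    """Build a pseudorandom ordered sequence of sub-pixel movements in the
    sensor plane, each a (direction, distance) pair; the fixed seed makes
    the pattern repeatable until it is deliberately updated."""
    rng = random.Random(seed)
    directions = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # in the plane perpendicular to the optical axis
    return [(rng.choice(directions), step_um) for _ in range(steps)]
```

Updating the signal between captures would then amount to supplying a new seed or a new explicit movement list.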
  • the imaging device 100 may receive sensor data, for example, from one or more sensors 308 of the imaging device 100.
  • the sensor data may comprise information about movement of the imaging device 100 during image capture.
  • the sensor data may comprise gyroscope and/or accelerometer data indicative of a current position and/or current movement of the imaging device 100.
  • the imaging device 100 may determine a stabilization signal.
  • the stabilization signal may be configured to compensate for movement of the imaging device 100.
  • the stabilization signal may be determined based on the received sensor data such that the stabilization signal indicates a movement opposite to the current movement of the imaging device 100.
  • the destabilization signal may indicate a plurality of movements for a time interval. During the time interval the sensor data may be used to determine stabilization signal(s) to compensate for unintentional movement of the imaging device 100.
  • the stabilization signal may comprise a dynamic signal configured to instantaneously compensate for the current movement of the imaging device 100, for example during the image capture.
  • the imaging device 100 may not be configured to receive sensor data and determine the stabilization signal. In a sufficiently stable capture environment, for example when using a tripod, the imaging device 100 may obtain the destabilization signal at operation 401, and omit operations 402 and/or 403.
  • the imaging device 100 may determine movement of at least one component of the imaging device based on the destabilization and/or the stabilization signal.
  • the movement of the at least one component may be substantially perpendicular to an optical axis 110 of the imaging device, for example in an angle of 88-92 degrees to the optical axis 110.
  • the movement may be perpendicular to the optical axis 110 within a tolerance of ±0.1 degrees, ±0.5 degrees, ±1 degree, ±1.5 degrees, or ±2 degrees, for example. However, larger deviations from the perpendicular direction may also be possible.
  • the at least one component may comprise the at least one lens 104, the image sensor 102, and/or the camera module 106.
  • the imaging device 100 may further cause movement of the at least one component based on the destabilization signal and optionally the stabilization signal.
  • the movement of the at least one component may be determined based on a combination of the stabilization signal and the destabilization signal.
  • the destabilization signal and stabilization signal may be combined, for example, by summing the signals.
  • the stabilization signal may be configured to compensate for movement of the imaging device.
  • the destabilization signal may be configured to cause the apparent movement of the stationary objects of the scene 120 relative to the image sensor 102. Therefore, the destabilization signal may effectively prevent the stabilization signal from fully stabilizing the image, thereby preserving some movement of the stationary objects.
  • the destabilization signal may be configured to cause relatively rapid movements, while the stabilization signal may be configured to compensate for relatively slow movements caused for example by the user of the imaging device 100.
  • the destabilization signal may indicate each movement of the at least one component to be performed within 1 ms.
  • the time resolution of the destabilization signal may be higher than that of the stabilization signal.
  • the destabilization signal and stabilization signal may be applied separately.
  • the actuation unit 108 may determine to cause movement of different components of the imaging device based on the stabilization and destabilization signals, as will be further described with reference to FIG. 6.
  • Using both the destabilization signal and the stabilization signal, either combined or separately, makes it possible to stabilize unintentional movement while ensuring sufficient apparent movement of the stationary objects, such that the stationary objects may also be captured by an event-based image sensor.
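The combination-by-summing option might look like the following in outline. This is a sketch under the assumption that both signals can be expressed as planar shifts, with hypothetical names; it is not the patent's implementation.

```python
def actuator_command(destab_shift, measured_motion):
    """Sum a destabilization shift with a stabilization shift that is the
    opposite of the measured device motion: unintentional movement is
    cancelled while the deliberate movement survives."""
    stab_shift = (-measured_motion[0], -measured_motion[1])  # oppose the sensed motion
    return (destab_shift[0] + stab_shift[0], destab_shift[1] + stab_shift[1])
```

In the separate-application variant described above, the two terms would instead drive two different components (for example, the lens and the image sensor).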
  • the method may move back to operation 402 to receive further sensor data.
  • Further sensor data may be received, for example, during image capture and movement of the at least one component defined by the destabilization signal.
  • the stabilization signal may be updated accordingly to compensate for changes in the unintentional movement of the imaging device 100.
  • Operations 403, 404, and 405 may be, for example, iterated during capture of a single image.
  • the destabilization signal may be updated.
  • different destabilization signals may be configured to be used for different applications.
  • the destabilization signal may be for example updated between two image capture events.
  • the destabilization signal may be semi-static.
  • the destabilization signal may be static during a capture of an image, but it may be possible to change the destabilization signal after the image has been captured.
  • Example embodiments of the method 400 will now be further discussed with reference to FIG. 5 to FIG. 8.
  • FIG. 5 illustrates an example of an imaging device 500 configured to destabilize a camera module 106 of the imaging device 500, according to an embodiment of the invention.
  • the imaging device 500 may generally comprise components similar to imaging device 100.
  • the camera module 106 may comprise at least one lens 104 and an image sensor 102 similar to imaging device 100.
  • the imaging device 500 may further comprise the actuation unit 108, the memory 304, the sensor(s) 308, and/or an image processor 502.
  • the actuation unit 108 and/or the image processor 502 may be embodied as separate hardware and/or software components or they may be at least partially implemented by means of the processor 302, memory 304, and program code 306.
  • the method 400 may be initiated at imaging device 500, for example in response to receiving instructions to initiate image capture at image processor 502 of the imaging device 500.
  • the image processor 502 may send instructions to initiate image destabilization to actuation unit 108, for example over an internal data interface such as for example a data bus.
  • the actuation unit 108 may receive instructions to initiate image destabilization, for example from image processor 502, or in general any component responsible for controlling image destabilization at the imaging device 500.
  • the actuation unit 108 may be configured to obtain the destabilization signal, for example similar to operation 401.
  • the destabilization signal may be for example retrieved from memory 304.
  • the destabilization signal may indicate at least one shift or rotation of the camera module 106.
  • the movement of the at least one component may therefore comprise at least one rotational movement of the camera module 106.
  • Rotational movement of the camera module 106 may comprise a plurality of sequential rotations in different angular directions at different angular distances.
  • the plurality of rotations may comprise a first rotation with respect to a first axis x and a second rotation with respect to a second axis y.
  • the first and second rotations may be associated with first and second angles that indicate direction and amount for each rotation, for example in degrees or radians.
  • the destabilization signal may indicate a plurality of rotations, wherein each rotation may be associated with an axis and an angle.
  • the movement of the at least one component may therefore comprise at least one shift of the camera module 106.
  • Shifting the camera module 106 may comprise a sequence of movements in a plurality of directions, for example in directions substantially perpendicular, for example within ±10 degrees, to the optical axis 110 of the imaging device 500.
  • the sequence of movements may comprise a first movement with respect to a first axis x and a second movement with respect to a second axis y.
  • the first and second movements may be associated with first and second distances, respectively.
  • the destabilization signal may comprise an indication of a plurality of movements, wherein each movement may be associated with a direction and a distance.
  • the actuation unit 108 may be further configured to receive sensor data, for example, similar to operation 402.
  • the sensor data may be received from sensor(s) 308 of the imaging device 500, as described above.
  • the actuation unit 108 may be further configured to determine a stabilization signal, for example similar to operation 403.
  • the stabilization signal may indicate movement opposite to the movement of the imaging device 500.
  • the stabilization signal may indicate at least one shift of the camera module 106, for example, with respect to either or both of axes x and y.
  • the stabilization signal may indicate rotation of the camera module 106, for example around either or both of axes x and y.
  • the actuation unit 108 may be further configured to determine movement of the camera module 106, for example in accordance with operation 404.
  • the actuation unit 108 may be further configured to cause movement of the camera module 106, for example based on providing a control signal to the camera module 106, or circuitry associated therewith.
  • the movement of the camera module 106 may comprise at least one shift or rotation of the camera module 106 based on at least the destabilization signal.
  • the at least one shift of the camera module 106 may be in at least one direction perpendicular to the optical axis 110 of the imaging device 500.
  • the at least one rotation of the camera module 106 may be with respect to at least one axis perpendicular to the optical axis 110 (z) of the imaging device 500.
  • the at least one shift or rotation may comprise a plurality of sequential shifts or rotations, where each shift or rotation may cause objects of the scene 120 to appear at different positions at the image sensor 102 of the camera module 106, thereby generating apparent movement of the objects with respect to the image sensor 102 of the camera module 106 and causing initiation of the pixel data at the image sensor 102.
  • the actuation unit 108 may be further configured to combine the stabilization and destabilization signals and control shift or rotation of the camera module 106 based on the combined signal.
  • actuation unit 108 may be configured to combine the destabilization and stabilization signals by summing the shift(s) or rotation(s) indicated by the stabilization signal and the corresponding sequential shift(s) or rotation(s) indicated by the destabilization signal. This makes it possible to simultaneously compensate for unintentional movement of the imaging device 500 and cause apparent movement of the stationary objects of the scene 120. This may be achieved by controlling movement of the camera module 106, which may be beneficial if it is not possible or desirable to interfere with the operation of internal components of the camera module 106.
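The summation described above can be sketched in a few lines of Python. This is a hedged illustration only: representing each shift as an (x, y) offset, the function name, and the numeric values are assumptions, not part of the patent.

```python
# Hedged sketch: combine a stabilization shift with each sequential
# destabilization shift by element-wise summation, as described above.
# Representing shifts as (x, y) offsets is an assumption for illustration.

def combine_signals(stabilization, destabilization_sequence):
    """Sum the stabilization shift with each sequential destabilization shift."""
    sx, sy = stabilization
    return [(sx + dx, sy + dy) for dx, dy in destabilization_sequence]

# Stabilization compensates a drift of (-0.5, 0.25) while the destabilization
# signal sweeps the camera module in four directions:
offsets = combine_signals((-0.5, 0.25), [(1, 0), (0, 1), (-1, 0), (0, -1)])
print(offsets)  # [(0.5, 0.25), (-0.5, 1.25), (-1.5, 0.25), (-0.5, -0.75)]
```

The same element-wise summation would apply to rotations, with each rotation expressed, for example, as an axis and an angle.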
  • the image processor 502 may be configured to read the image sensor 102, for example, based on receiving the asynchronously provided pixel data from the image sensor 102 of the camera module 106. Based on the received pixel data, the image processor 502 may construct an image.
  • the image may be stored, for example, in the memory 304, for example as an image file.
  • the image may be stored in any suitable format. Example embodiments for storing the image data in an efficient vector graphic format will be further described below.
  • FIG. 6 illustrates an example of an imaging device 600 configured to destabilize at least one lens 104 and/or an image sensor 102, according to an embodiment of the invention.
  • the imaging device 600 may generally comprise components similar to the imaging device 100 or 500.
  • the method 400 may be initiated at the imaging device
  • the actuation unit 108 may obtain the destabilization signal, for example, similar to operation 401. Obtaining the destabilization signal may be performed in response to receiving instructions to initiate image destabilization.
  • the destabilization signal may be for example retrieved from the memory 304.
  • the destabilization signal may indicate movement of at least one component of the imaging device 600. According to an example embodiment, the destabilization signal may indicate movement of the at least one lens 104 and/or movement of the image sensor 102.
  • the actuation unit 108 may be further configured to receive sensor data, for example similar to operation 402.
  • the sensor data may be received from the one or more sensors 308 of the imaging device 600, similar to what has been described for imaging device 500.
  • the actuation unit 108 may be further configured to determine the stabilization signal, for example similar to operation 403.
  • the stabilization signal may indicate movement of at least one component of the imaging device 600 opposite to the movement of the imaging device 600. According to an example embodiment, the stabilization signal may indicate movement of the at least one lens 104 or the image sensor 102.
  • the actuation unit 108 may be further configured to determine and cause movement of the at least one component of the imaging device 600, for example, similar to operation 404. Movement of the at least one component may be substantially perpendicular, for example within ±10 degrees, to the optical axis 110 of the imaging device 600. For example, the movement of the at least one component may comprise a plurality of sequential movements in a plurality of directions substantially perpendicular to the optical axis 110, for example at a plane defined by two axes (x, y) substantially perpendicular to the optical axis 110.
  • the actuation unit may be configured to cause movement of the at least one lens 104 or the image sensor 102.
  • the movement may be caused based on the destabilization signal.
  • the movement may comprise a sequence of movements.
  • the sequence of movements may comprise a plurality of linear movements, for example in eight different directions in the xy plane, as illustrated in FIG. 6 by arrows 602.
  • the sequence of movements may be repeated.
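The repeated eight-direction sequence above can be sketched as follows. This is a hypothetical illustration: the step size, the function name, and the assumption that the eight directions are evenly spaced are not taken from the patent.

```python
import math

# Hypothetical sketch of a repeatable destabilization sequence: small linear
# movements in eight evenly spaced directions in the plane perpendicular to
# the optical axis (cf. arrows 602 in FIG. 6). Step size is an assumption.

def destabilization_sequence(step=0.5, directions=8):
    """Return (dx, dy) offsets for evenly spaced directions in the sensor plane."""
    return [(step * math.cos(2 * math.pi * k / directions),
             step * math.sin(2 * math.pi * k / directions))
            for k in range(directions)]

moves = destabilization_sequence()
# The actuation unit would repeat this sequence until it receives a request
# to terminate image destabilization.
```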
  • the image processor 502 may send destabilization instructions to the actuation unit 108 upon initiation of image capture.
  • the image processor 502 may request termination of the destabilization upon termination of the image capture.
  • the actuation unit 108 may control movement of the at least one component, for example, the at least one lens 104 or the image sensor 102 accordingly.
  • the actuation unit 108 may be configured to repeat the sequence of movements and terminate destabilization in response to a request to terminate image destabilization.
  • movement of the at least one component may be smaller than a pixel size of the image sensor 102.
  • each movement of the sequence of movements, or a subset of movements may be smaller than the pixel size of the image sensor 102.
  • Pixel size may refer to a width or a height of the pixels of the image sensor 102. This enables the scene 120 to be recorded at sub-pixel accuracy, and therefore the resolution of the captured image may be increased. It is noted that even if each movement or a subset of movements were smaller than the pixel size, sequential movements of the at least one component may result in a total movement that exceeds the pixel size. However, the individual movements smaller than the pixel size enable capturing the scene 120 at sub-pixel accuracy.
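The sub-pixel constraint above can be made concrete with a small numeric sketch; the pixel pitch and step values below are arbitrary illustrative assumptions.

```python
# Sketch of the sub-pixel constraint described above: each individual
# movement stays below the pixel pitch, while the accumulated displacement
# may still exceed it. The pitch and step values are arbitrary assumptions.

PIXEL_PITCH = 1.0                      # pixel width/height, arbitrary units
steps = [0.4, 0.4, 0.4, 0.4]           # each movement smaller than one pixel

assert all(s < PIXEL_PITCH for s in steps)
total = sum(steps)                     # accumulated displacement
print(total > PIXEL_PITCH)             # True: total movement exceeds the pitch
```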
  • movement of a first component of the imaging device 600 may be determined based on the stabilization signal. Movement of a second component of the imaging device 600 may be determined based on the destabilization signal.
  • the first component may comprise the at least one lens 104.
  • the second component may comprise the image sensor 102.
  • the actuation unit 108 may control movement of the at least one lens 104 based on the stabilization signal and control movement of the image sensor 102 based on the destabilization signal. This makes it possible to simultaneously compensate for unintentional movement of the imaging device 600 while ensuring sufficient apparent movement of stationary objects of the scene 120 with respect to the image sensor 102, without combining the destabilization and stabilization signals at the actuation unit 108.
  • the image destabilization function could be provided independent of the stabilization function. However, the two functions would still contribute together to the position of the scene 120 at the image sensor 102.
  • the first component of the imaging device 600 may comprise the image sensor 102.
  • the second component of the imaging device 600 may comprise the at least one lens 104.
  • the actuation unit 108 may control movement of the image sensor 102 based on the stabilization signal and control movement of the at least one lens 104 based on the destabilization signal.
  • the actuation unit may combine the stabilization and destabilization signals and control movement of the at least one lens 104 or the image sensor 102 based on the combined signal.
  • the actuation unit 108 may combine the destabilization and stabilization signals by summing the movement indicated by the stabilization signal and the corresponding sequential movement(s) indicated by the destabilization signal.
  • the combined signal may be used to control movement of the at least one lens 104 or the image sensor 102.
  • circuitry for causing the movement needs to be included only for either the at least one lens 104 or the image sensor 102, which makes the solution less complex to implement.
  • the number of mechanical components may be reduced, and therefore it may be possible to implement the functionality in a smaller space.
  • FIG. 7 illustrates an example of displacing a perpendicular movement of at least one component 702 of the imaging device, according to an embodiment of the invention.
  • the imaging device 700 may comprise components similar to the imaging devices 100, 500, or 600.
  • the actuation unit 108 may be configured to determine a perpendicular movement of the at least one component 702 with respect to an optical axis 110 of the imaging device 700.
  • the perpendicular movement may be determined based on a stabilization signal configured to compensate for movement of the imaging device 700.
  • the apparent movement of the stationary objects of the scene 120 relative to the image sensor 102 may be caused based on a displacement of the determined perpendicular movement.
  • the displacement of the determined perpendicular movement may comprise a rotation of the perpendicular movement with respect to at least one axis perpendicular to the optical axis 110.
  • the perpendicular movement may be, for example, rotated by angle a with respect to the x-axis. Consequently, the at least one component 702 may move, for example, along a rotated linear trajectory 704 instead of moving along the x-axis.
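Reading the displacement of FIG. 7 as an in-plane rotation of the stabilization movement, a minimal sketch might look like this; the 2-D vector representation and function name are assumptions for illustration.

```python
import math

# Hedged sketch of FIG. 7: a stabilization movement determined along the
# x-axis is rotated by an angle a, so the component moves along a rotated
# linear trajectory (704) instead of the x-axis itself.

def rotate_movement(dx, dy, angle_rad):
    """Rotate a planar movement vector (dx, dy) by angle_rad."""
    return (dx * math.cos(angle_rad) - dy * math.sin(angle_rad),
            dx * math.sin(angle_rad) + dy * math.cos(angle_rad))

# A pure x-axis stabilization shift displaced by a = 30 degrees picks up a
# y-component, which produces apparent movement at the image sensor:
rx, ry = rotate_movement(1.0, 0.0, math.radians(30))
```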
  • the at least one component 702 may comprise the camera module 106, the at least one lens 104, or the image sensor 102. This makes it possible to simultaneously stabilize unintentional movement of the imaging device 700 and cause apparent movement of the scene 120 with respect to the image sensor 102.
  • destabilization may be provided without using a destabilization signal that indicates movements of the at least one component. This may be applied, for example, to compensate for the exact focal point varying slightly towards the corners, or if it is desired to record the image data temporally at different locations.
  • FIG. 8 illustrates an example of storing a vector representation of a captured image, according to an embodiment of the invention.
  • the method 800 may be for example implemented at image processor 502.
  • the image processor 502 may receive pixel data from the image sensor
  • the pixel data may be received over a period of time.
  • the image processor 502 may receive pixel data from a plurality of pixels of the image sensor 102 based on changes in the scene 120.
  • pixel data may be received multiple times from the same pixel.
  • the period of time comprises the plurality of sequential movements, for example the plurality of sequential movements of the at least one lens 104 or the image sensor 102.
  • Different movements may be determined in order to detect particular shapes in the scene 120, for example a vertical movement may be used for detecting horizontal lines of the scene 120.
  • the plurality of movements may be determined based on the shapes to be detected.
  • the shapes may comprise any suitable shapes for representing the scene 120 in a vector graphic format, such as for example lines, polylines, polygons, circles, ellipses, or the like.
  • the apparent movement of the scene 120 causes desired shapes to appear in the pixel data.
  • the image processor 502 may be configured to determine at least one shape in the pixel data.
  • the pixel data may be for example converted into any suitable image data format such that shapes may be recognized based on any suitable shape detection method. For example, edge detection methods including for example Canny, Sobel, or Prewitt filters may be applied.
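As one concrete instance of the edge detection methods just mentioned, here is a pure-Python Sobel gradient at a single pixel. This is a teaching sketch only: a real pipeline would use an image processing library, and the sample intensity grid is an assumption.

```python
# 3x3 Sobel kernels for horizontal (x) and vertical (y) intensity gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_at(img, y, x):
    """Return the (gx, gy) gradient at interior pixel (y, x)."""
    gx = gy = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            v = img[y + dy][x + dx]
            gx += SOBEL_X[dy + 1][dx + 1] * v
            gy += SOBEL_Y[dy + 1][dx + 1] * v
    return gx, gy

# A vertical edge: dark columns on the left, bright columns on the right.
img = [[0, 0, 10, 10]] * 4
print(sobel_at(img, 1, 1))  # (40, 0): strong x-gradient, no y-gradient
```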
  • Detecting a shape may comprise detecting a type of the shape, a location of the shape, and/or one or more characteristics of the shape.
  • the image processor 502 may determine at least one vector indicating a location of the at least one shape in the pixel data.
  • a vector may in general comprise an indication of a type of the shape and one or more shape dependent parameters for indicating location and/or characteristics of the shape. For example, if a line is detected in the pixel data, the vector may indicate that the shape is a line with certain width and indicate two endpoints for the line. If the shape is a circle, the vector may indicate that the shape is a circle with a certain width, center point, and radius.
  • the image processor 502 may store an image of the scene 120 based on the at least one vector.
  • the image may be stored for example at memory 304.
  • the image may be stored in a vector graphic format.
  • the vector graphic format may comprise the at least one vector. In general, the vector graphic format may comprise locations for a plurality of shapes. The shapes may be of different types.
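The vector representation described above might be sketched as follows, using SVG as one example vector graphic format. The class layout, attribute choices, and names are assumptions, not the patent's format.

```python
from dataclasses import dataclass

# Hedged sketch: each detected shape is stored as a type plus
# shape-dependent parameters, then serialized into a minimal SVG document.

@dataclass
class Line:
    # Endpoints and stroke width of a detected line.
    x1: float
    y1: float
    x2: float
    y2: float
    width: float = 1.0

    def to_svg(self):
        return (f'<line x1="{self.x1}" y1="{self.y1}" x2="{self.x2}" '
                f'y2="{self.y2}" stroke-width="{self.width}" stroke="black"/>')

@dataclass
class Circle:
    # Center point, radius, and stroke width of a detected circle.
    cx: float
    cy: float
    r: float
    width: float = 1.0

    def to_svg(self):
        return (f'<circle cx="{self.cx}" cy="{self.cy}" r="{self.r}" '
                f'stroke-width="{self.width}" stroke="black" fill="none"/>')

def store_image(shapes):
    """Serialize detected shapes into a minimal SVG document string."""
    body = "\n".join(s.to_svg() for s in shapes)
    return f'<svg xmlns="http://www.w3.org/2000/svg">\n{body}\n</svg>'

svg = store_image([Line(0, 0, 100, 0), Circle(50, 50, 10)])
```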
  • FIG. 9 illustrates an example of a method 900 for capturing an image, according to an embodiment of the invention.
  • the method may comprise providing pixel data based on a plurality of pixels of an image sensor, wherein the pixel data is indicative of a scene and is provided based on a change of at least one light property detected by at least one of the plurality of pixels.
  • the method may comprise initiating the pixel data by changing the at least one light property of the at least one of the plurality of pixels, wherein the initiation of the pixel data is based on attributing an apparent movement to stationary objects of the scene relative to the image sensor.
  • Various example embodiments disclose methods, computer programs, and devices for capturing images, including stationary objects of a scene, with an event camera.
  • Example embodiments improve the efficiency of image capture, for example by reducing the amount of pixel data provided by the image sensor and by enabling the captured image to be stored in a vector graphic format.
  • a device for example, a camera, a mobile phone, a tablet computer, or another imaging device, may be configured to perform or cause performance of any aspect of the method(s) described herein.
  • a computer program may comprise instructions for causing, when executed, a device to perform any aspect of the method(s) described herein.
  • a device may comprise means for performing any aspect of the method(s) described herein.
  • the means comprises at least one processor and memory including program code, the at least one processor and program code being configured to, when executed by the at least one processor, cause performance of any aspect of the method(s).

Abstract

Various example embodiments relate to capturing images with an event camera. An imaging device may comprise an image sensor configured to provide pixel data based on a plurality of pixels of the image sensor, wherein the pixel data is indicative of a scene and is provided based on a change of at least one light property detected by at least one of the plurality of pixels. The imaging device may be further configured to initiate the pixel data by changing the at least one light property of the at least one of the plurality of pixels, wherein the initiation of the pixel data is based on attributing an apparent movement to stationary objects of the scene relative to the image sensor. The apparent movement enables the stationary objects to be captured by an event based image sensor, such that also stationary objects of the scene may be captured efficiently. Devices, methods, and computer programs are disclosed.

Description

IMAGING DEVICE AND METHOD FOR EFFICIENT CAPTURE OF STATIONARY OBJECTS
TECHNICAL FIELD
[0001] The present application generally relates to the field of imaging. In particular, some example embodiments relate to capturing images with an event camera.
BACKGROUND
[0002] Conventional cameras may use a shutter to capture images by recording values for each pixel of an image sensor substantially simultaneously. By contrast, event cameras contain pixels that independently respond to changes in the scene. Each pixel may be configured to operate independently to asynchronously report changes in the scene as they occur, providing no output otherwise.
SUMMARY
[0003] This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
[0004] It is an object of the invention to enable capturing stationary objects with an event camera. The foregoing and other objects may be achieved by the features of the independent claims. Further implementation forms are apparent from the dependent claims, the description, and the figures.
[0005] According to a first aspect, an imaging device is provided. The imaging device may comprise an image sensor comprising a plurality of pixels. The image sensor may be configured to provide pixel data based on the plurality of pixels, wherein the pixel data is indicative of a scene and is provided based on a change of at least one light property detected by at least one of the plurality of pixels. The imaging device may further comprise an actuation unit configured to initiate the pixel data by changing the at least one light property of the at least one of the plurality of pixels. The initiation of the pixel data may be based on attributing an apparent movement to stationary objects of the scene relative to the image sensor. This solution enables capturing stationary objects of the scene with an event based image sensor.
[0006] In an implementation form of the first aspect, the actuation unit may be configured to attribute the apparent movement by moving at least one component of the imaging device. This solution enables causing the apparent movement of the stationary objects with respect to the image sensor.
[0007] In another implementation form of the first aspect, the at least one component may comprise at least one of: at least one lens, the image sensor, or a camera module comprising the image sensor and the at least one lens. This solution provides different options for causing the apparent movement of the objects.
[0008] In another implementation form of the first aspect, the image sensor may be configured to provide the pixel data in response to detecting the change of the at least one light property to exceed a threshold. This solution reduces the amount of data provided by the image sensor.
[0009] In another implementation form of the first aspect, the at least one light property may comprise at least one of: intensity of the light; brightness of the light; intensity of at least one color; or intensity of at least one wavelength range. This solution enables initiating provision of the pixel data based on different light properties.
[0010] In another implementation form of the first aspect, the movement of the at least one component may comprise at least one rotational movement of the camera module. This solution enables causing the apparent movement of the objects without interfering with the internal functionality of the camera module.
[0011] In another implementation form of the first aspect, the rotational movement of the camera module may be with respect to at least one axis perpendicular to an optical axis of the camera module. This solution ensures sufficient apparent movement of the stationary objects of the scene.
[0012] In another implementation form of the first aspect, the movement of the at least one component may be at an angle of 88-92 degrees to an optical axis of the imaging device. This solution ensures sufficient apparent movement of the stationary objects.
[0013] In another implementation form of the first aspect, the movement of the at least one component may be smaller than a pixel size of the image sensor. This solution enables capturing the scene at sub-pixel accuracy.
[0014] In another implementation form of the first aspect, the movement of the at least one component may comprise a plurality of sequential movements in a plurality of directions substantially perpendicular to the optical axis. This solution ensures sufficient apparent movement of the stationary objects.
[0015] In another implementation form of the first aspect, the imaging device may comprise an image processor configured to receive the pixel data from the image sensor over a period of time, determine at least one shape in the pixel data, determine at least one vector indicating a location of the at least one shape in the pixel data, and store an image of the scene based on the at least one vector. This enables efficiently storing an image in a vector graphic format.
[0016] In another implementation form of the first aspect, the period of time may comprise the plurality of sequential movements. This solution ensures sufficient movement of the stationary objects in order to store the image in a vector graphic format.
[0017] In another implementation form of the first aspect, the actuation unit may be configured to determine the movement of the at least one component based on a stabilization signal and a destabilization signal. The stabilization signal may be configured to compensate for movement of the imaging device. The destabilization signal may be configured to cause the apparent movement of the stationary objects of the scene relative to the image sensor. This solution enables stabilizing unintentional movements while causing the apparent movement of the stationary objects.
[0018] In another implementation form of the first aspect, the movement of the at least one component may be determined based on a combination of the stabilization signal and the destabilization signal. This solution enables stabilizing unintentional movements while causing the apparent movement of the stationary objects with a single component.
[0019] In another implementation form of the first aspect, movement of a first component of the imaging device may be determined based on the stabilization signal. Movement of a second component of the imaging device may be determined based on the destabilization signal. The first component may comprise the at least one lens and the second component may comprise the image sensor. This solution enables logically separating the stabilization and destabilization processes.
[0020] In another implementation form of the first aspect, the actuation unit may be configured to determine a perpendicular movement of the at least one component with respect to an optical axis of the imaging device based on a stabilization signal configured to compensate for movement of the imaging device. The actuation unit may be further configured to cause the apparent movement of the stationary objects of the scene relative to the image sensor based on a displacement of the perpendicular movement. The displacement of the perpendicular movement may comprise a rotation of the perpendicular movement with respect to at least one axis perpendicular to the optical axis. This solution enables providing image destabilization without a destabilization signal.
[0021] According to a second aspect, a method is provided for imaging. The method may comprise providing pixel data based on a plurality of pixels of an image sensor, wherein the pixel data is indicative of a scene and is provided based on a change of at least one light property detected by at least one of the plurality of pixels. The method may further comprise initiating the pixel data by changing the at least one light property of the at least one of the plurality of pixels, wherein the initiation of the pixel data is based on attributing an apparent movement to stationary objects of the scene relative to the image sensor.
[0022] In an implementation form of the second aspect, the method of the second aspect may be executed in an imaging device according to any implementation form of the first aspect.
[0023] According to a third aspect, a computer program may comprise computer program code configured to cause performance of any implementation form of the method of the second aspect, when the computer program is executed on a computer.
[0024] According to a fourth aspect a computer program product may comprise a computer readable storage medium storing program code thereon, the program code comprising instructions for performing the method according to any implementation form of the second aspect.
[0025] According to a fifth aspect, a device may comprise means for performing any implementation form of the method of the second aspect.
[0026] Implementation forms of the invention can thus provide a device, a method, a computer program, and a computer program product for capturing stationary objects of a scene with an event camera. These and other aspects of the invention will be apparent from the embodiment(s) described below.
DESCRIPTION OF THE DRAWINGS
[0027] The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and constitute a part of this specification, illustrate the embodiments and together with the description help to understand the embodiments. In the drawings:
[0028] FIG. 1 illustrates an imaging device and a scene, according to an embodiment of the invention;
[0029] FIG. 2 illustrates an example of apparent movement of stationary objects of a scene, according to an embodiment of the invention;
[0030] FIG. 3 illustrates an example of a device configured to practice one or more embodiments of the invention;
[0031] FIG. 4 illustrates an example of a method for destabilizing at least one component of an imaging device, according to an embodiment of the invention;
[0032] FIG. 5 illustrates an example of an imaging device configured to destabilize a camera module of the imaging device, according to an embodiment of the invention;
[0033] FIG. 6 illustrates an example of an imaging device configured to destabilize at least one lens and/or an image sensor, according to an embodiment of the invention;
[0034] FIG. 7 illustrates an example of displacing a perpendicular movement of at least one component of an imaging device, according to an embodiment of the invention;
[0035] FIG. 8 illustrates an example of a method for storing a vector representation of a captured image, according to an embodiment of the invention; and
[0036] FIG. 9 illustrates an example of a method for capturing an image, according to an embodiment of the invention.
[0037] Like references are used to designate like parts in the accompanying drawings.
DETAILED DESCRIPTION
[0038] Reference will now be made in detail to example embodiments, examples of which are illustrated in the accompanying drawings. The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present example may be constructed or utilized. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.
[0039] Event cameras may be generally exploited to capture moving objects of a scene.
For example, pixels of an image sensor may be configured to provide output data when a level of change in at least one quantity, for example brightness, exceeds a threshold. The output pixel data may include a discrete packet of information comprising, for example, at least one of a pixel address, a timestamp, a polarity of the change (e.g. increase or decrease of brightness), or an instantaneous level of the quantity. The image sensor may therefore be configured to output an asynchronous stream of data triggered by changes in the scene.
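The event-pixel behavior described above can be modeled for a single pixel as follows. This is a simplified sketch: the packet layout and the rule for updating the reference level are assumptions for illustration.

```python
# Simplified single-pixel event model: an event is emitted only when the
# brightness change since the last emitted event exceeds a threshold.
# The (timestamp, polarity, level) packet layout is an assumption.

def events_from_samples(samples, threshold=10):
    """Return event packets for one pixel given sampled brightness levels."""
    events = []
    last = samples[0]
    for t, level in enumerate(samples[1:], start=1):
        if abs(level - last) > threshold:
            events.append((t, 1 if level > last else -1, level))
            last = level
    return events

# A stationary scene (constant brightness) yields no events, which is why
# apparent movement must be introduced to capture stationary objects:
print(events_from_samples([100, 100, 100, 100]))  # []
print(events_from_samples([100, 100, 130, 115]))  # [(2, 1, 130), (3, -1, 115)]
```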
[0040] An advantage of an event camera is that undesirable effects, such as motion blur, underexposure, and overexposure may be avoided. Furthermore, the amount of data is reduced since pixel data is provided only for the changing pixels. Event cameras may be used, for example, in surveillance systems for which capturing changes in the scene may be sufficient. However, for other image capture purposes it may also be desired to capture stationary objects. Therefore, example embodiments enable capturing both moving and stationary objects of the scene utilizing features of an event camera. This also enables efficiently storing the captured image in a vector graphic format.
[0041] According to an example embodiment, an imaging device may comprise an image sensor configured to provide pixel data based on a plurality of pixels of the image sensor, wherein the pixel data is indicative of a scene and is provided based on a change of at least one light property detected by at least one of the plurality of pixels. The imaging device may be further configured to initiate the pixel data by changing the at least one light property of the at least one of the plurality of pixels, wherein the initiation of the pixel data is based on attributing an apparent movement to stationary objects of the scene relative to the image sensor. The apparent movement enables the stationary objects to also be captured by an event based image sensor, and thereby stationary objects of the scene may be captured efficiently.
[0042] FIG. 1 illustrates an imaging device 100, according to an embodiment of the invention. The imaging device 100 may comprise an image sensor 102. The image sensor 102 may comprise a plurality of pixels, where a single pixel may be configured to capture a portion of a scene 120. The imaging device 100 may comprise an event camera. The image sensor 102 may be configured to operate as an image sensor of the event camera. For example, the image sensor 102 may be configured to provide pixel data based on the plurality of pixels, wherein the pixel data may be indicative of the scene 120 and be provided based on a change of at least one light property detected by at least one of the plurality of pixels. Hence, the image sensor 102 may be configured to asynchronously provide the pixel data for the scene 120 based on a change of a light property of at least one pixel, or in general based on a change of at least one characteristic of at least one pixel. The at least one light property may comprise at least one of an intensity of the light, brightness of the light, intensity of at least one color, or intensity of at least one wavelength range.
[0043] The imaging device 100 may further comprise at least one lens 104. The at least one lens 104 may comprise a single lens, a plurality of lenses, a lens group, or a plurality of lens groups. The at least one lens may be configured to guide light received from the scene 120 to the image sensor 102. The at least one lens 104 may be configured to move along the optical axis 110 of the imaging device 100, for example, to enable focusing the imaging device 100 to the scene 120. Furthermore, the at least one lens 104 may be configured to move perpendicular to the optical axis 110, for example, to reduce effects of unintentional movement of the imaging device, for example, by means of optical image stabilization (OIS). The image sensor 102 and the at least one lens 104 may be comprised in a camera module 106.
[0044] The imaging device 100 may further comprise an actuation unit 108. The actuation unit 108 may be configured to cause apparent movement of stationary objects of the scene 120 relative to the image sensor 102. For example, the actuation unit may be configured to initiate the pixel data by changing the at least one light property of the at least one of the plurality of pixels. The initiation of the pixel data may be based on attributing an apparent movement to stationary objects of the scene 120 relative to the image sensor 102. The image sensor 102 may be configured to provide the pixel data in response to detecting the change of the at least one light property to exceed a threshold. For example, the actuation unit 108 may be configured to attribute the apparent movement by moving at least one component of the imaging device 100. The at least one component may comprise at least one of: the at least one lens 104, the image sensor 102, or the camera module 106. The movement of the at least one component of the imaging device 100 may cause stationary objects of the scene 120, for example tree 114, to appear at a different position at the image sensor 102. This change may initiate provision of pixel data at the image sensor 102.
[0045] The actuation unit 108 may comprise circuitry, such as for example at least one processor and at least one memory, configured to determine movement of the at least one component of the imaging device 100. The actuation unit 108 may further comprise any suitable mechanical elements, for example, one or more actuators, to move the component(s) of the imaging device 100.
[0046] FIG. 2 illustrates an example of apparent movement of stationary objects of a scene 120, according to an embodiment of the invention. FIG. 2 illustrates a landscape scene 120 comprising a tree 114. Initially the scene 120 may appear at the image sensor 102 as illustrated by the solid lines. After movement of at least one component of the imaging device 100, the scene 120 may appear at the image sensor 102 as illustrated by the dotted lines. For example, tree 114 may have slightly moved in the direction indicated by arrow 202. The other objects of the scene 120 such as the clouds may be subject to similar movement. It is noted that the apparent movement of the stationary objects may be caused without moving the image sensor 102 with respect to the scene 120. For example, the apparent movement may be caused by moving the at least one lens 104 such that light coming from the scene 120 is guided to different pixels of the image sensor 102, even if the image sensor 102 were at the same position with respect to the scene 120. [0047] FIG. 3 illustrates an embodiment of an imaging device 300, for example, a camera, a mobile phone, or a tablet computer. The device 300 may comprise at least one processor 302. The at least one processor 302 may comprise, for example, one or more of various processing devices, for example, a co-processor, a microprocessor, a controller, a digital signal processor (DSP), processing circuitry with or without an accompanying DSP, or various other processing devices including integrated circuits such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like.
[0048] The device 300 may further comprise at least one memory 304. The at least one memory 304 may be configured to store, for example, computer program code or the like, for example, operating system software and application software. The at least one memory 304 may comprise one or more volatile memory devices, one or more non-volatile memory devices, and/or a combination thereof. For example, the at least one memory 304 may be embodied as magnetic storage devices (such as hard disk drives, floppy disks, magnetic tapes, etc.), optical magnetic storage devices, or semiconductor memories (such as mask ROM, PROM (programmable ROM), EPROM (erasable PROM), flash ROM, RAM (random access memory), etc.).
[0049] The device 300 may further comprise one or more sensors 308. The sensors may be, for example, configured to detect and report movements of the device 300. The sensors 308 may, for example, comprise at least one gyroscope or at least one accelerometer for measuring movement in one or more directions. The sensors 308 may be, for example, used to detect unintentional camera shake in order to compensate for the unintentional movement by optical or mechanical image stabilization means.
[0050] The device 300 may further comprise a communication interface (not shown) configured to enable the device 300 to transmit and/or receive information, for example to transmit image data captured by device 300 or to receive configuration information for capturing images. The communication interface may be configured to provide at least one wireless radio connection, such as a 3GPP mobile broadband connection (e.g. 3G, 4G, 5G). Alternatively, or additionally, the communication interface may be configured to provide one or more other type of connections, for example, a wireless local area network (WLAN) connection such as for example standardized by IEEE 802.11 series or Wi-Fi alliance; a short range wireless network connection, for example, a Bluetooth, NFC (near-field communication), or RFID connection; a wired connection, for example, a local area network (LAN) connection, a universal serial bus (USB) connection, a high-definition multimedia interface (HDMI), or an optical network connection; or a wired Internet connection. The communication interface may comprise, or be configured to be coupled to, at least one antenna to transmit and/or receive radio frequency signals. One or more of the various types of connections may be also implemented as separate communication interfaces, which may be coupled to or configured to be coupled to a plurality of antennas.
[0051] The device 300 may further comprise a user interface 310 comprising or being configured to be coupled to an input device and/or an output device. The input device may take various forms such as for example a touch screen and/or one or more embedded control buttons. The output device may for example comprise at least one display, speaker, vibration motor, or the like.
[0052] When the device 300 is configured to implement some functionality, some component and/or components of the device 300, for example, the at least one processor and/or the memory, may be configured to implement this functionality. Furthermore, when the at least one processor is configured to implement some functionality, this functionality may be implemented using program code 306 comprised, for example, in the at least one memory 304. For example, the at least one processor 302 and/or the memory 304 may be configured to at least partially perform functionality of the actuation unit 108 and/or image processor 502 of FIG. 5.
[0053] The functionality described herein may be performed, at least in part, by one or more computer program product components such as software components. According to an embodiment, the device 300 comprises a processor or processor circuitry, such as for example a microcontroller, configured by the program code, when executed, to execute the embodiments of the operations and functionality described herein. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), and Graphics Processing Units (GPUs).
[0054] The device 300 comprises means for performing at least one method described herein. In one example, the means comprises at least one processor and at least one memory including computer program code, the at least one memory and the computer code configured to, with the at least one processor, cause the device at least to perform the method. [0055] Although the device 300 is illustrated as a single device it is appreciated that, wherever applicable, functions of the device 300 may be distributed to a plurality of devices. [0056] FIG. 4 illustrates an example of a method 400 for destabilizing at least one component of an imaging device, according to an embodiment of the invention. The method 400 may be initiated, for example, in response to receiving instructions to initiate image capture. The instructions may be received, for example, via the user interface 310, an internal data interface, or an external data interface over a network connection.
[0057] At 401, the imaging device 100 may obtain a destabilization signal. The destabilization signal may in general comprise any data suitable for representing movement of at least one component of the imaging device 100. The destabilization signal may indicate a plurality of movements, for example, as a data structure that associates directions of movement with corresponding distances of movement. The destabilization signal may further indicate an order of the movements. The destabilization signal may further indicate a pattern of movements that is configured to be repeated. The movement of the at least one component may comprise vibration-like movement, for example, a pseudorandom sequence of movements. The movement may be predefined and it may be configured to be performed during a time interval. The destabilization signal may therefore comprise a static or semi-static sequence of movements that is defined before initiating the movement or the image capture. The destabilization signal may be for example retrieved from a memory or received over a network connection. A semi-static destabilization signal may be updated between image capturing events, for example based on information received over the network connection.
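A destabilization signal of the kind described at operation 401 might, for illustration, be represented as an ordered list of (dx, dy) movements. The function name, the fixed seed, and the unit-distance steps are assumptions of this sketch:

```python
import math
import random

def make_destabilization_signal(steps=8, distance=1.0, seed=42):
    """Build an ordered, repeatable pseudorandom sequence of movements.

    Each element is a (dx, dy) displacement; a fixed seed makes the
    pattern static, so the same sequence can be replayed every capture.
    """
    rng = random.Random(seed)
    signal = []
    for _ in range(steps):
        angle = rng.uniform(0.0, 2.0 * math.pi)  # direction of this movement
        signal.append((distance * math.cos(angle), distance * math.sin(angle)))
    return signal
```

A semi-static signal could be modeled by replacing the seed or the list itself between capture events, for example with a pattern received over a network connection.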
[0058] At 402, the imaging device 100 may receive sensor data, for example, from one or more sensors 308 of the imaging device 100. The sensor data may comprise information about movement of the imaging device 100 during image capture. As discussed above, the sensor data may comprise gyroscope and/or accelerometer data indicative of a current position and/or current movement of the imaging device 100.
[0059] At 403, the imaging device 100 may determine a stabilization signal. The stabilization signal may be configured to compensate for movement of the imaging device 100. The stabilization signal may be determined based on the received sensor data such that the stabilization signal indicates a movement opposite to the current movement of the imaging device 100. As discussed above, the destabilization signal may indicate a plurality of movements for a time interval. During the time interval the sensor data may be used to determine stabilization signal(s) to compensate for unintentional movement of the imaging device 100. Hence, the stabilization signal may comprise a dynamic signal configured to instantaneously compensate for the current movement of the imaging device 100, for example during the image capture.
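The determination at operation 403 could be sketched as follows, under the simplifying assumption that the sensor data reduces to a two-axis motion rate and that compensation is simply a movement opposite to the measured drift. The function name and gain parameter are hypothetical:

```python
def stabilization_step(motion_rate, dt, gain=1.0):
    """Turn a measured two-axis motion rate (e.g. from a gyroscope) into a
    compensating movement over one time step: equal in magnitude to the
    measured drift, opposite in direction."""
    rx, ry = motion_rate
    return (-gain * rx * dt, -gain * ry * dt)
```

In practice the mapping from gyroscope and accelerometer readings to a compensating lens or sensor shift is considerably more involved; this sketch only captures the "opposite movement" relationship stated above.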
[0060] Even though operations 402 and 403 have been illustrated in FIG. 4, it is appreciated that in some applications the imaging device 100 may not be configured to receive sensor data and determine the stabilization signal. In a sufficiently stable capture environment, for example when using a tripod, the imaging device 100 may obtain the destabilization signal at operation 401, and omit operations 402 and/or 403.
[0061] At 404, the imaging device 100, for example, the actuation unit 108, may determine movement of at least one component of the imaging device based on the destabilization and/or the stabilization signal. The movement of the at least one component may be substantially perpendicular to an optical axis 110 of the imaging device, for example at an angle of 88–92 degrees to the optical axis 110. The movement may be perpendicular to the optical axis 110 within a tolerance of ±0.1 degrees, ±0.5 degrees, ±1 degree, ±1.5 degrees, or ±2 degrees, for example. However, larger deviations from the perpendicular direction may also be possible. The at least one component may comprise the at least one lens 104, the image sensor 102, and/or the camera module 106. The imaging device 100 may further cause movement of the at least one component based on the destabilization signal and optionally the stabilization signal.
[0062] According to an example embodiment, the movement of the at least one component may be determined based on a combination of the stabilization signal and the destabilization signal. The destabilization signal and stabilization signal may be combined, for example, by summing the signals. The stabilization signal may be configured to compensate for movement of the imaging device. The destabilization signal may be configured to cause the apparent movement of the stationary objects of the scene 120 relative to the image sensor 102. Therefore, the destabilization signal may effectively prevent the stabilization signal from fully stabilizing the image, thereby preserving some movement of the stationary objects.
[0063] The destabilization signal may be configured to cause relatively rapid movements, while the stabilization signal may be configured to compensate for relatively slow movements caused for example by the user of the imaging device 100. For example, the destabilization signal may indicate each movement of the at least one component to be performed within 1 ms. According to an embodiment, the time resolution of the destabilization signal may be higher than the time resolution of the stabilization signal. When combining the destabilization and stabilization signals, one stabilization signal sample may therefore be applied to a plurality of destabilization signal samples. This reduces the amount of memory needed for processing the stabilization signal.
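Combining signals of different time resolutions, as described above, might look like the following sketch, where one coarse stabilization sample is applied to `ratio` consecutive destabilization samples. The function name and data layout are illustrative:

```python
def combine_signals(destab, stab, ratio):
    """Sum destabilization and stabilization movements sample by sample.

    The stabilization signal has a coarser time resolution: one of its
    samples covers `ratio` consecutive destabilization samples.
    """
    combined = []
    for i, (dx, dy) in enumerate(destab):
        sx, sy = stab[i // ratio]  # pick the coarse sample covering index i
        combined.append((dx + sx, dy + sy))
    return combined
```

Because the stabilization list only needs one entry per `ratio` destabilization samples, it can be kept shorter, which matches the memory saving noted above.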
[0064] According to an example embodiment, the destabilization signal and stabilization signal may be applied separately. For example, the actuation unit 108 may determine to cause movement of different components of the imaging device based on the stabilization and destabilization signals, as will be further described with reference to FIG. 6. [0065] Using both the destabilization signal and the stabilization signal, either combined or separately, enables stabilizing unintentional movement while ensuring sufficient movement of stationary objects such that the stationary objects may also be captured by an event-based image sensor.
[0066] At 405, the method may move back to operation 402 to receive further sensor data. Further sensor data may be received for example during image capture and movement of the at least one component defined by the destabilization signal. The stabilization signal may be updated accordingly to compensate for changes in the unintentional movement of the imaging device 100. Operations 403, 404, and 405 may be, for example, iterated during capture of a single image.
[0067] At 406, the destabilization signal may be updated. For example, different destabilization signals may be configured to be used for different applications. The destabilization signal may be for example updated between two image capture events. As discussed above, the destabilization signal may be semi-static. For example, the destabilization signal may be static during a capture of an image, but it may be possible to change the destabilization signal after the image has been captured.
[0068] Example embodiments of the method 400 will now be further discussed with reference to FIG. 5 to FIG. 8.
[0069] FIG. 5 illustrates an example of an imaging device 500 configured to destabilize a camera module 106 of the imaging device 500, according to an embodiment of the invention. The imaging device 500 may generally comprise components similar to imaging device 100. For example, the camera module 106 may comprise at least one lens 104 and an image sensor 102 similar to imaging device 100. The imaging device 500 may further comprise the actuation unit 108, the memory 304, the sensor(s) 308, and/or an image processor 502. The actuation unit 108 and/or the image processor 502 may be embodied as separate hardware and/or software components or they may be at least partially implemented by means of the processor 302, memory 304, and program code 306. [0070] The method 400 may be initiated at imaging device 500, for example in response to receiving instructions to initiate image capture at the image processor 502 of the imaging device 500. The image processor 502 may send instructions to initiate image destabilization to the actuation unit 108, for example over an internal data interface such as for example a data bus. The actuation unit 108 may receive instructions to initiate image destabilization, for example from the image processor 502, or in general from any component responsible for controlling image destabilization at the imaging device 500.
[0071] The actuation unit 108 may be configured to obtain the destabilization signal, for example similar to operation 401. The destabilization signal may be for example retrieved from memory 304.
[0072] According to an example embodiment, the destabilization signal may indicate at least one shift or rotation of the camera module 106. The movement of the at least one component may therefore comprise at least one rotation of the camera module 106. Rotational movement of the camera module 106 may comprise a plurality of sequential rotations in different angular directions at different angular distances. For example, the plurality of rotations may comprise a first rotation with respect to a first axis x and a second rotation with respect to a second axis y. The first and second rotations may be associated with first and second angles that indicate direction and amount for each rotation, for example in degrees or radians. In general, the destabilization signal may indicate a plurality of rotations, wherein each rotation may be associated with an axis and an angle.
[0073] Alternatively, the movement of the at least one component may comprise at least one shift of the camera module 106. Shifting the camera module 106 may comprise a sequence of movements in a plurality of directions, for example in directions substantially perpendicular, for example, within ±10 degrees, to the optical axis 110 of the imaging device 500. The sequence of movements may comprise a first movement with respect to a first axis x and a second movement with respect to a second axis y. The first and second movements may be associated with first and second distances, respectively. In general, the destabilization signal may comprise an indication of a plurality of movements, wherein each movement may be associated with a direction and a distance.
[0074] The actuation unit 108 may be further configured to receive sensor data, for example, similar to operation 402. The sensor data may be received from sensor(s) 308 of the imaging device 500, as described above.
[0075] The actuation unit 108 may be further configured to determine a stabilization signal, for example similar to operation 403. The stabilization signal may indicate movement opposite to the movement of the imaging device 500. The stabilization signal may indicate at least one shift of the camera module 106, for example, with respect to either or both of axes x and y. Alternatively, or additionally, the stabilization signal may indicate rotation of the camera module 106, for example around either or both of axes x and y.
[0076] The actuation unit 108 may be further configured to determine movement of the camera module 106, for example in accordance with operation 404. The actuation unit 108 may be further configured to cause movement of the camera module 106, for example based on providing a control signal to the camera module 106, or circuitry associated therewith. The movement of the camera module 106 may comprise at least one shift or rotation of the camera module 106 based on at least the destabilization signal. The at least one shift of the camera module 106 may be in at least one direction perpendicular to the optical axis 110 of the imaging device 500. The at least one rotation of the camera module 106 may be with respect to at least one axis perpendicular to the optical axis 110 (z) of the imaging device 500. As discussed above, the at least one shift or rotation may comprise a plurality of sequential shifts or rotations, where each shift or rotation may cause objects of the scene 120 to appear at different positions at the image sensor 102 of the camera module 106, thereby generating apparent movement of the objects with respect to the image sensor 102 of the camera module 106 and causing initiation of the pixel data at the image sensor 102.
[0077] The actuation unit 108 may be further configured to combine the stabilization and destabilization signals and control shift or rotation of the camera module 106 based on the combined signal. For example, actuation unit 108 may be configured to combine the destabilization and stabilization signals by summing the shift(s) or rotation(s) indicated by the stabilization signal and the corresponding sequential shift(s) or rotation(s) indicated by the destabilization signal. This enables simultaneous compensation for unintentional movement of the imaging device 500 while causing apparent movement of the stationary objects of the scene 120. This may be achieved by controlling movement of the camera module 106, which may be beneficial if it is not possible or desirable to interfere with the operation of internal components of the camera module 106.
[0078] Furthermore, the image processor 502 may be configured to read the image sensor 102, for example, based on receiving the asynchronously provided pixel data from the image sensor 102 of the camera module 106. Based on the received pixel data, the image processor 502 may construct an image. The image may be stored, for example, in the memory 304, for example as an image file. The image may be stored in any suitable format. Example embodiments for storing the image data in an efficient vector graphic format will be further described below.
[0079] FIG. 6 illustrates an example of an imaging device 600 configured to destabilize at least one lens 104 and/or an image sensor 102, according to an embodiment of the invention. The imaging device 600 may generally comprise components similar to the imaging device 100 or 500.
[0080] With reference to FIG. 4, the method 400 may be initiated at the imaging device 600, for example, similar to the operations described for the imaging device 500.
[0081] The actuation unit 108 may obtain the destabilization signal, for example, similar to operation 401. Obtaining the destabilization signal may be performed in response to receiving instructions to initiate image destabilization. The destabilization signal may be for example retrieved from the memory 304. The destabilization signal may indicate movement of at least one component of the imaging device 600. According to an example embodiment, the destabilization signal may indicate movement of the at least one lens 104 and/or movement of the image sensor 102.
[0082] The actuation unit 108 may be further configured to receive sensor data, for example similar to operation 402. The sensor data may be received from the one or more sensors 308 of the imaging device 600, similar to what has been described for imaging device 500.
[0083] The actuation unit 108 may be further configured to determine the stabilization signal, for example similar to operation 403. The stabilization signal may indicate movement of at least one component of the imaging device 600 opposite to the movement of the imaging device 600. According to an example embodiment, the stabilization signal may indicate movement of the at least one lens 104 or the image sensor 102.
[0084] The actuation unit 108 may be further configured to determine and cause movement of the at least one component of the imaging device 600, for example, similar to operation 404. Movement of the at least one component may be substantially perpendicular, for example within ±10 degrees, to the optical axis 110 of the imaging device 600. For example, the movement of the at least one component may comprise a plurality of sequential movements in a plurality of directions substantially perpendicular to the optical axis 110, for example at a plane defined by two axes (x,y) substantially perpendicular to the optical axis 110.
[0085] According to an example embodiment, the actuation unit may be configured to cause movement of the at least one lens 104 or the image sensor 102. The movement may be caused based on the destabilization signal. The movement may comprise a sequence of movements. The sequence of movements may comprise a plurality of linear movements, for example in eight different directions in the x-y plane, as illustrated in FIG. 6 by arrows 602. [0086] The sequence of movements may be repeated. For example, the image processor
502 may send destabilization instructions to the actuation unit 108 upon initiation of image capture. The image processor 502 may request termination of the destabilization upon termination of the image capture. The actuation unit 108 may control movement of the at least one component, for example, the at least one lens 104 or the image sensor 102 accordingly. For example, the actuation unit 108 may be configured to repeat the sequence of movements and terminate destabilization in response to a request to terminate image destabilization.
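A repeatable sequence of linear movements such as the eight directions illustrated by arrows 602 could, for example, be generated as follows. The function name and step parameter are assumptions of this sketch:

```python
import math

def eight_direction_sequence(step):
    """One cycle of linear movements in eight directions, 45 degrees
    apart, in the sensor (x, y) plane. Repeating the cycle yields a
    continuous destabilization pattern."""
    return [(step * math.cos(k * math.pi / 4.0),
             step * math.sin(k * math.pi / 4.0)) for k in range(8)]
```

The actuation unit could replay this cycle until a request to terminate destabilization arrives.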
[0087] According to an example embodiment, movement of the at least one component, for example, the at least one lens 104 or the image sensor 102, may be smaller than a pixel size of the image sensor 102. For example, each movement of the sequence of movements, or a subset of movements, may be smaller than the pixel size of the image sensor 102. Pixel size may refer to a width or a height of the pixels of the image sensor 102. This enables the scene 120 to be recorded at sub-pixel accuracy. Therefore, resolution of the captured image may be increased. It is noted that even if each movement or a subset of movements were smaller than the pixel size, sequential movements of the at least one component may result in a total movement that exceeds the pixel size. However, the individual movements smaller than the pixel size enable capturing the scene 120 at sub-pixel accuracy.
[0088] According to an example embodiment, movement of a first component of the imaging device 600 may be determined based on the stabilization signal. Movement of a second component of the imaging device 600 may be determined based on the destabilization signal. The first component may comprise the at least one lens 104. The second component may comprise the image sensor 102. For example, the actuation unit 108 may control movement of the at least one lens 104 based on the stabilization signal and control movement of the image sensor 102 based on the destabilization signal. This enables simultaneous compensation for unintentional movement of the imaging device 600 while ensuring sufficient apparent movement of stationary objects of the scene 120 with respect to the image sensor 102, without combining the destabilization and stabilization signals at the actuation unit 108. This enables logically separating the stabilization and destabilization processes. For example, the image destabilization function could be provided independently of the stabilization function. However, the two functions would still contribute together to the position of the scene 120 at the image sensor 102. [0089] Alternatively, the first component of the imaging device 600 may comprise the image sensor 102. The second component of the imaging device 600 may comprise the at least one lens 104. For example, the actuation unit 108 may control movement of the image sensor 102 based on the stabilization signal and control movement of the at least one lens 104 based on the destabilization signal.
[0090] According to an example embodiment, the actuation unit may combine the stabilization and destabilization signals and control movement of the at least one lens 104 or the image sensor 102 based on the combined signal. For example, the actuation unit 108 may combine the destabilization and stabilization signals by summing the movement indicated by the stabilization signal and the corresponding sequential movement(s) indicated by the destabilization signal. The combined signal may be used to control movement of the at least one lens 104 or the image sensor 102. This solution enables simultaneous compensation for unintentional movement of the imaging device 600 while ensuring apparent movement of the stationary objects, by controlling a single component of the camera module 106. Therefore, circuitry for causing the movement needs to be included only for either the at least one lens 104 or the image sensor 102, which makes the solution less complex to implement. For example, the number of mechanical components may be reduced and therefore it may be possible to implement the functionality in a smaller space.
[0091] FIG. 7 illustrates an example of displacing a perpendicular movement of at least one component 702 of the imaging device, according to an embodiment of the invention. The imaging device 700 may comprise components similar to the imaging devices 100, 500, or 600. According to an example embodiment, the actuation unit 108 may be configured to determine a perpendicular movement of the at least one component 702 with respect to an optical axis 110 of the imaging device 700. The perpendicular movement may be determined based on a stabilization signal configured to compensate for movement of the imaging device 700. The apparent movement of the stationary objects of the scene 120 relative to the image sensor 102 may be caused based on a displacement of the determined perpendicular movement. The displacement of the determined perpendicular movement may comprise a rotation of the perpendicular movement with respect to at least one axis perpendicular to the optical axis 110. As illustrated in FIG. 7, the perpendicular movement may be, for example, rotated by angle α with respect to the x-axis. Consequently, the at least one component 702 may move, for example, along a rotated linear trajectory 704 instead of moving along the x-axis. The at least one component 702 may comprise the camera module 106, the at least one lens 104, or the image sensor 102. This enables simultaneous stabilization of unintentional movement of the imaging device 700 and apparent movement of the scene 120 with respect to the image sensor 102. In addition, destabilization may be provided without using a destabilization signal that indicates movements of the at least one component. This may be applied, for example, to compensate for the focal point varying slightly towards the corners, or when it is desired to record the image data temporally at different locations.
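The displacement described above, rotating a planned in-plane stabilization movement by an angle α, can be expressed as a standard 2-D rotation. The function below is an illustrative sketch, not an implementation of the actuation unit:

```python
import math

def displace_movement(dx, dy, alpha):
    """Rotate a planned in-plane movement (dx, dy) by angle alpha
    (radians), so the component follows a rotated trajectory instead of
    the purely compensating one."""
    return (dx * math.cos(alpha) - dy * math.sin(alpha),
            dx * math.sin(alpha) + dy * math.cos(alpha))
```

Because the rotated movement no longer exactly cancels the device motion, a small residual apparent movement of the scene remains, which triggers the event-based pixels.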
[0092] FIG. 8 illustrates an example of storing a vector representation of a captured image, according to an embodiment of the invention. The method 800 may be implemented, for example, at the image processor 502.
[0093] At 801, the image processor 502 may receive pixel data from the image sensor 102. The pixel data may be received over a period of time. For example, during the period of time the image processor 502 may receive pixel data from a plurality of pixels of the image sensor 102 based on changes in the scene 120. Furthermore, during the period of time pixel data may be received multiple times from the same pixel. According to an example embodiment, the period of time comprises the plurality of sequential movements, for example the plurality of sequential movements of the at least one lens 104 or the image sensor 102.

[0094] Different movements may be determined in order to detect particular shapes in the scene 120; for example, a vertical movement may be used for detecting horizontal lines of the scene 120. The plurality of movements may be determined based on the shapes to be detected. The shapes may comprise any shapes suitable for representing the scene 120 in a vector graphic format, such as, for example, lines, polylines, polygons, circles, ellipses, or the like. The apparent movement of the scene 120 causes the desired shapes to appear in the pixel data.

[0095] At 802, the image processor 502 may be configured to determine at least one shape in the pixel data. The pixel data may, for example, be converted into any suitable image data format such that shapes may be recognized based on any suitable shape detection method. For example, edge detection methods, including for example Canny, Sobel, or Prewitt filters, may be applied. Detecting a shape may comprise detecting a type of the shape, a location of the shape, and/or one or more characteristics of the shape.
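The way apparent movement makes shapes appear in change-driven pixel data can be sketched with a toy simulation: a pixel only produces data when its detected intensity changes beyond a threshold, so a vertical shift of the scene reveals horizontal edges. The scene values, the threshold, and the wrap-around shift are illustrative assumptions.

```python
# Illustrative sketch of paragraphs [0093]-[0094]: a change-driven
# (event-type) pixel reports data only when the detected light property
# changes beyond a threshold, so a stationary scene produces no pixel
# data until an apparent movement is caused. A vertical movement makes
# horizontal lines of the scene appear as events.

def events_from_shift(scene, dy, threshold):
    """Return (row, col) of pixels whose intensity changes by more than
    `threshold` when the scene shifts down by `dy` rows relative to the
    sensor. With dy == 0 a stationary scene yields no events."""
    h = len(scene)
    events = []
    for r in range(h):
        for c in range(len(scene[0])):
            before = scene[r][c]
            after = scene[(r - dy) % h][c]  # scene content shifted down
            if abs(after - before) > threshold:
                events.append((r, c))
    return events

# A bright horizontal line on row 2 of an otherwise dark 4x4 scene:
scene = [[0] * 4, [0] * 4, [255] * 4, [0] * 4]
evts = events_from_shift(scene, 1, 50)
```

Only the pixels crossing the horizontal line fire, which is why the choice of movement direction determines which shapes become detectable.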
[0096] At 803, the image processor 502 may determine at least one vector indicating a location of the at least one shape in the pixel data. A vector may in general comprise an indication of a type of the shape and one or more shape-dependent parameters for indicating the location and/or characteristics of the shape. For example, if a line is detected in the pixel data, the vector may indicate that the shape is a line with a certain width and indicate two endpoints for the line. If the shape is a circle, the vector may indicate that the shape is a circle with a certain width, center point, and radius.

[0097] At 804, the image processor 502 may store an image of the scene 120 based on the at least one vector. The image may be stored, for example, at memory 304. The image may be stored in a vector graphic format. The vector graphic format may comprise the at least one vector. In general, the vector graphic format may comprise locations for a plurality of shapes. The shapes may be of different types.
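The vector representation of steps 803–804 can be sketched as small records per shape serialized into an SVG-like document. The patent only requires that a type, a location, and characteristics are recorded per shape; the class names, field layout, and SVG-style serialization below are illustrative assumptions.

```python
# Illustrative sketch of paragraphs [0096]-[0097]: each detected shape
# is stored as a vector (type + shape-dependent parameters), and the
# image is stored as a vector-graphic document rather than a bitmap.
from dataclasses import dataclass


@dataclass
class LineVector:
    # A line is described by its two endpoints and a width.
    x1: float
    y1: float
    x2: float
    y2: float
    width: float = 1.0

    def to_svg(self):
        return (f'<line x1="{self.x1}" y1="{self.y1}" '
                f'x2="{self.x2}" y2="{self.y2}" stroke-width="{self.width}"/>')


@dataclass
class CircleVector:
    # A circle is described by its center point, radius, and width.
    cx: float
    cy: float
    r: float
    width: float = 1.0

    def to_svg(self):
        return (f'<circle cx="{self.cx}" cy="{self.cy}" '
                f'r="{self.r}" stroke-width="{self.width}"/>')


def store_image(shapes, w, h):
    """Serialize detected shapes as a minimal vector-graphic document."""
    body = "\n".join(s.to_svg() for s in shapes)
    return f'<svg width="{w}" height="{h}">\n{body}\n</svg>'


doc = store_image([LineVector(0, 0, 10, 0), CircleVector(5, 5, 2)], 16, 16)
```

Storing a handful of shape parameters instead of per-pixel values is what makes this representation compact for scenes dominated by stationary objects.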
[0098] FIG. 9 illustrates an example of a method 900 for capturing an image, according to an embodiment of the invention.
[0099] At 901, the method may comprise providing pixel data based on a plurality of pixels of an image sensor, wherein the pixel data is indicative of a scene and is provided based on a change of at least one light property detected by at least one of the plurality of pixels.

[00100] At 902, the method may comprise initiating the pixel data by changing the at least one light property of the at least one of the plurality of pixels, wherein the initiation of the pixel data is based on attributing an apparent movement to stationary objects of the scene relative to the image sensor.
[00101] Further features of the method directly result from the functionalities and parameters of the imaging device 100, 300, 500, 600, 700 as described in the appended claims and throughout the specification, and are therefore not repeated here.
[00102] Various example embodiments disclose methods, computer programs, and devices for efficient capture of stationary objects of a scene. Example embodiments enable, for example, capturing stationary objects with an image sensor that provides pixel data in response to changes of at least one light property, by attributing an apparent movement to the stationary objects relative to the image sensor.
[00103] A device, for example a camera, a mobile phone, a tablet computer, or another imaging device, may be configured to perform or cause performance of any aspect of the method(s) described herein. Further, a computer program may comprise instructions for causing, when executed, a device to perform any aspect of the method(s) described herein. Further, a device may comprise means for performing any aspect of the method(s) described herein. According to an example embodiment, the means comprises at least one processor and memory including program code, the program code configured to, when executed by the at least one processor, cause performance of any aspect of the method(s).

[00104] Any range or device value given herein may be extended or altered without losing the effect sought. Also, any embodiment may be combined with another embodiment unless explicitly disallowed.

[00105] Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims, and other equivalent features and acts are intended to be within the scope of the claims.
[00106] It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to 'an' item may refer to one or more of those items.
[00107] The steps or operations of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the scope of the subject matter described herein. Aspects of any of the embodiments described above may be combined with aspects of any of the other embodiments described to form further embodiments without losing the effect sought.
[00108] The term 'comprising' is used herein to mean including the method, blocks, or elements identified, but such blocks or elements do not comprise an exclusive list, and a method or device may contain additional blocks or elements. Although subjects may be referred to as 'first' or 'second' subjects, this does not necessarily indicate an order or importance of the subjects; instead, such attributes may be used solely for the purpose of distinguishing between subjects. Any reference numbers in the claims should not be construed as limiting the scope of the claims.
[00109] It will be understood that the above description is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples, and data provide a complete description of the structure and use of exemplary embodiments. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the scope of this specification.

Claims

1. An imaging device (100, 500, 600, 700) comprising: an image sensor (102) comprising a plurality of pixels, wherein the image sensor (102) is configured to provide pixel data based on the plurality of pixels, and wherein the pixel data is indicative of a scene (120) and is provided based on a change of at least one light property detected by at least one of the plurality of pixels; and an actuation unit (108) configured to initiate the pixel data by changing the at least one light property of the at least one of the plurality of pixels, wherein the initiation of the pixel data is based on attributing an apparent movement to stationary objects of the scene (120) relative to the image sensor (102).
2. The imaging device (100, 500, 600, 700) according to claim 1, wherein the actuation unit (108) is configured to attribute the apparent movement by moving at least one component of the imaging device (100, 500, 600, 700).
3. The imaging device (100, 500, 600, 700) according to claim 2, wherein the at least one component comprises at least one of: at least one lens (104); the image sensor (102); or a camera module (106) comprising the image sensor (102) and the at least one lens (104).
4. The imaging device (100, 500, 600, 700) according to any preceding claim, wherein the image sensor (102) is configured to provide the pixel data in response to detecting the change of the at least one light property to exceed a threshold.
5. The imaging device (100, 500, 600, 700) according to any preceding claim, wherein the at least one light property comprises at least one of: intensity of the light; brightness of the light; intensity of at least one color; or intensity of at least one wavelength range.
6. The imaging device (500) according to any of claims 2 - 5, wherein the movement of the at least one component comprises at least one rotational movement of the camera module (106).
7. The imaging device (500) according to claim 6, wherein the at least one rotational movement of the camera module (106) is with respect to at least one axis perpendicular to an optical axis (110) of the camera module (106).
8. The imaging device (100, 500, 600, 700) according to any of claims 2 - 5, wherein the movement of the at least one component is at an angle of 88-92 degrees to an optical axis (110) of the imaging device (100, 500, 600, 700), and/or wherein the movement of the at least one component is smaller than a pixel size of the image sensor (102), and/or wherein the movement of the at least one component comprises a plurality of sequential movements in a plurality of directions substantially perpendicular to the optical axis (110).
9. The imaging device (500, 600) according to any preceding claim, further comprising an image processor (502) configured to: receive the pixel data from the image sensor (102) over a period of time; determine at least one shape in the pixel data; determine at least one vector indicating a location of the at least one shape in the pixel data; and store an image of the scene (120) based on the at least one vector.
10. The imaging device (500, 600) according to claim 8 and claim 9, wherein the period of time comprises the plurality of sequential movements.
11. The imaging device (100, 500, 600, 700) according to any preceding claim, wherein the actuation unit (108) is configured to determine the movement of the at least one component based on a stabilization signal and a destabilization signal, wherein the stabilization signal is configured to compensate for movement of the imaging device, and wherein the destabilization signal is configured to cause the apparent movement of the stationary objects of the scene (120) relative to the image sensor.
12. The imaging device (600) according to claim 11, wherein the movement of the at least one component is determined based on a combination of the stabilization signal and the destabilization signal, or wherein movement of a first component of the imaging device (600) is determined based on the stabilization signal, and wherein movement of a second component of the imaging device (600) is determined based on the destabilization signal.
13. The imaging device according to claim 12, wherein the first component comprises the at least one lens (104), and wherein the second component comprises the image sensor (102).
14. The imaging device (700) according to any of claims 1 - 5, wherein the actuation unit (108) is configured to determine a perpendicular movement of the at least one component (702) with respect to an optical axis (110) of the imaging device (700) based on a stabilization signal configured to compensate for movement of the imaging device (700), and wherein the apparent movement of the stationary objects of the scene (120) relative to the image sensor (102) is caused based on a displacement of the perpendicular movement.
15. The imaging device according to claim 14, wherein the displacement of the perpendicular movement comprises a rotation of the perpendicular movement with respect to at least one axis perpendicular to the optical axis (110).
16. A method for imaging, the method comprising: providing pixel data based on a plurality of pixels of an image sensor (102), wherein the pixel data is indicative of a scene (120) and is provided based on a change of at least one light property detected by at least one of the plurality of pixels; and initiating the pixel data by changing the at least one light property of the at least one of the plurality of pixels, wherein the initiation of the pixel data is based on attributing an apparent movement to stationary objects of the scene (120) relative to the image sensor (102).
17. The method according to claim 16, which is executed in an imaging device (100, 500, 600, 700) according to any of claims 1 - 15.
18. A computer program comprising a program code configured to cause performance of the method according to claim 16 or claim 17, when the computer program is executed on a computer.

19. A computer program product comprising a computer readable storage medium storing program code thereon, the program code comprising instructions for performing the method according to claim 16 or claim 17.
PCT/EP2020/056146 2020-03-09 2020-03-09 Imaging device and method for efficient capture of stationary objects WO2021180294A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2020/056146 WO2021180294A1 (en) 2020-03-09 2020-03-09 Imaging device and method for efficient capture of stationary objects


Publications (1)

Publication Number Publication Date
WO2021180294A1

Family

ID=69784439



Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115242952A (en) * 2022-07-28 2022-10-25 联想(北京)有限公司 Image acquisition method and device
WO2023089321A1 (en) * 2021-11-17 2023-05-25 Cambridge Mechatronics Limited Camera apparatus and methods

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080158372A1 (en) * 2006-12-27 2008-07-03 Palum Russell J Anti-aliasing in an imaging device using an image stabilization system
US20130235220A1 (en) * 2012-03-12 2013-09-12 Raytheon Company Intra-frame optical-stabilization with intentional inter-frame scene motion
US20170132794A1 (en) * 2015-11-05 2017-05-11 Samsung Electronics Co., Ltd. Pose estimation method and apparatus
US20180316840A1 (en) * 2017-05-01 2018-11-01 Qualcomm Incorporated Optical image stabilization devices and methods for gyroscope alignment
US20190014258A1 (en) * 2017-07-05 2019-01-10 Intel Corporation Micro-saccadic actuation for an event camera
US20190356849A1 (en) * 2018-05-18 2019-11-21 Samsung Electronics Co., Ltd. Cmos-assisted inside-out dynamic vision sensor tracking for low power mobile platforms
WO2020033427A1 (en) * 2018-08-08 2020-02-13 Google Llc Optical image stabilization movement to create a super-resolution image of a scene




Legal Events

Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20710480; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 20710480; Country of ref document: EP; Kind code of ref document: A1)