WO2017161198A1 - Rotation-adaptive video analytics camera and method - Google Patents

Rotation-adaptive video analytics camera and method

Info

Publication number
WO2017161198A1
Authority
WO
WIPO (PCT)
Prior art keywords
surveillance camera
rotational orientation
fov
video
video image
Prior art date
Application number
PCT/US2017/022826
Other languages
English (en)
Inventor
Pieter Messely
Dwight T. DUMPERT
Original Assignee
Flir Systems, Inc.
Priority date
Filing date
Publication date
Priority claimed from US15/456,074 (US11030775B2)
Application filed by Flir Systems, Inc. filed Critical Flir Systems, Inc.
Publication of WO2017161198A1
Priority to US16/115,455 (US10735659B2)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/243Aligning, centring, orientation detection or correction of the image by compensating for image skew or non-uniform image deformations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661Transmitting camera control signals through networks, e.g. control via the Internet
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19617Surveillance camera constructional details
    • G08B13/1963Arrangements allowing camera rotation to change view, e.g. pivoting camera, pan-tilt and zoom [PTZ]
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19663Surveillance related processing done local to the camera
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming

Definitions

  • One or more embodiments of the invention relate generally to imaging devices and more particularly, for example, to image processing and video analytics techniques for surveillance cameras and related methods.
  • Typical surveillance cameras, when installed in their intended orientation, have a field-of-view (FOV) with a wider horizontal dimension than vertical dimension.
  • typical surveillance cameras capture and generate video image frames in an aspect ratio that matches the wide horizontal FOV, such as in 5:4, 4:3, 3:2, or 16:9 width-to-height aspect ratios or other aspect ratios having a larger width (horizontal dimension) than height (vertical dimension).
  • a FOV having a larger vertical dimension than the horizontal dimension may be beneficial, such as when monitoring foot and/or vehicle traffic up and down a deep corridor, a long sidewalk, or a long road.
  • Various embodiments of the methods and systems disclosed herein may be used to provide a surveillance camera that generates native video image frames in the appropriate FOV (orientation) that corresponds to the orientation in which the surveillance camera is installed when the video image frames are captured.
  • the surveillance cameras implemented in accordance with embodiments of the disclosure may facilitate installation that provides a desired FOV in a particular orientation, generate video image frames that natively correspond to the desired FOV, and allow user interaction and video analytics to be performed on the FOV- matched video image frames.
  • a surveillance camera may include: an imaging sensor configured to generate image signals representing a scene within a sensor field of view (FOV) of the imaging sensor, wherein the sensor FOV has a vertical dimension and a horizontal dimension that is wider than the vertical dimension; an adjustable mount configured to securely attach the surveillance camera to a structure and adjustable to rotate or pivot the surveillance camera about the optical axis direction; and a logic device communicatively coupled with the imaging sensor and configured to: determine a rotational orientation of the surveillance camera about the optical axis direction; generate, based on the image signals and the determined rotational orientation, video image frames having an output FOV with a vertical dimension that corresponds to the determined rotational orientation; and perform video analytics on the generated video image frames.
  • a method for providing rotation-adaptive video image frames may include the steps of: generating, by an imaging sensor of a surveillance camera, image signals representing a scene within a sensor field of view (FOV) of the imaging sensor, the sensor FOV having a vertical dimension and a horizontal dimension that is wider than the vertical dimension; determining, by a logic device of the surveillance camera, a rotational orientation of the surveillance camera about the optical axis direction of the surveillance camera; generating, by the logic device based on the image signals and the determined rotational orientation, video image frames having an output FOV with a vertical dimension that corresponds to the determined rotational orientation; and performing video analytics on the generated video image frames.
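Taken together, these steps form a short capture-adapt-analyze loop. A minimal Python sketch of that loop follows; `sensor`, `orientation_source`, and `analytics` are hypothetical stand-ins for the claimed components, not names from the disclosure:

```python
import numpy as np

def rotation_adaptive_loop(sensor, orientation_source, analytics):
    """Illustrative loop only; the callables are hypothetical stand-ins
    for the imaging sensor, the orientation sensing, and the video
    analytics described in the claims."""
    while True:
        roll_deg = orientation_source.read_roll()  # determine rotational orientation
        frame = sensor.capture()                   # image signals as an H x W array (W > H)
        k = round(roll_deg / 90) % 4               # snap roll to the nearest 90-degree step
        frame = np.rot90(frame, k)                 # output FOV matches the orientation
        analytics.process(frame)                   # perform video analytics on the frame
```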
  • Fig. 1 illustrates an operating environment in which a surveillance camera may operate in accordance with an embodiment of the disclosure.
  • Fig. 2 illustrates a block diagram of a surveillance camera in accordance with an embodiment of the disclosure.
  • Fig. 3 illustrates a flowchart of a process 300 to provide rotation-adaptive video analytics in a surveillance camera in accordance with an embodiment of the disclosure.
  • Figs. 4A and 4B illustrate a surveillance camera installed in its normal orientation and a corresponding example video image frame generated by the surveillance camera, in accordance with an embodiment of the disclosure.
  • Figs. 5A and 5B illustrate a surveillance camera installed at a 90-degree rotational orientation and a corresponding example video image frame generated by the surveillance camera to match a field-of-view corresponding to the rotational orientation, in accordance with an embodiment of the disclosure.
  • Fig. 6 illustrates an example of how video image frames having a FOV corresponding to the determined rotational orientation are generated in accordance with an embodiment of the disclosure.
  • Figs. 7A and 7B illustrate examples of how rotation-adaptive video analytics operations may be performed in accordance with an embodiment of the disclosure.
  • Fig. 8 illustrates an example calibration process performed by a surveillance camera in accordance with an embodiment of the disclosure.
  • Fig. 9 illustrates an example of adjusting the calibration configuration in response to a change of rotational orientation of the surveillance camera in accordance with an embodiment of the disclosure.
  • a surveillance camera in one or more embodiments of the disclosure may provide rotation-adaptive video analytics natively on the camera side, thereby advantageously allowing users to orient the surveillance camera to perform video analytics on video image frames having a desired vertical FOV.
  • FIG. 1 illustrates an environment 100 in which a surveillance camera 102 may be operated.
  • Surveillance camera 102 includes an imaging sensor 120 and optical elements 103.
  • Imaging sensor 120 is configured to generate image signals representing a scene 104 within a FOV associated with imaging sensor 120.
  • the FOV associated with imaging sensor 120 may be defined by the sensor dimension (e.g., the width and height of the sensor comprising sensor elements arranged in a two-dimensional array) and optical elements 103 that direct electromagnetic radiation (e.g., including visible light, near infrared (IR) radiation, thermal IR radiation, ultraviolet (UV) radiation) from scene 104 to imaging sensor 120.
  • the FOV associated with imaging sensor 120 includes a horizontal dimension and a vertical dimension, and imaging sensor 120 is positioned in surveillance camera 102 such that the horizontal dimension of the imaging sensor is wider than the vertical dimension of the imaging sensor when surveillance camera 102 is in its normal (e.g., upright) position.
  • the FOV associated with imaging sensor 120 may have an aspect ratio of 5:4, 4:3, 3:2, 16:9 (width-to-height), or other ratios in which the width is larger than the height when surveillance camera 102 is in its normal (upright) position.
  • Imaging sensor 120 may include a visible light (VL) imaging sensor which may be implemented, for example, with a charge-coupled device (CCD) sensor, a complementary metal-oxide semiconductor (CMOS) sensor, an electron multiplying CCD (EMCCD), a scientific CMOS (sCMOS) sensor and/or other appropriate image sensor to generate image signals of visible light received from the scene.
  • the VL imaging sensor may be configured to capture electromagnetic radiation in other wavelengths in addition to or instead of visible light.
  • Imaging sensor 120 may include an IR imaging sensor which may be implemented, for example, with a focal plane array (FPA) of bolometers, thermocouples, thermopiles, pyroelectric detectors, or other IR sensor elements responsive to IR radiation in various wavelengths such as for example, in the range between 1 micron and 14 microns.
  • imaging sensor 120 may be configured to capture images of near IR and/or short-wave IR radiation from the scene.
  • imaging sensor 120 may be a thermal IR sensor configured to capture images of IR radiation in the mid-wave (MWIR) or long-wave (LWIR) wavelength ranges.
  • imaging sensor 120 of surveillance camera 102 may include both a VL imaging sensor and an IR imaging sensor.
  • Surveillance camera 102 can be securely attached to a structure 108 (e.g., a wall, ceiling, pole, or other structure appropriate for installing surveillance camera 102 for surveillance purposes) via adjustable mount 106.
  • Adjustable mount 106 is adjustable to rotate or pivot the surveillance camera about the optical axis direction. That is, adjustable mount 106 allows a housing 101 of surveillance camera 102 to rotate or pivot 110 about an axis that is parallel or substantially parallel to the optical axis 130 of optical elements 103 and imaging sensor 120, such that the horizontal dimension of imaging sensor 120 spans vertically and the vertical dimension of imaging sensor 120 spans horizontally when rotated or pivoted 110 at a substantially 90-degree rotational orientation.
  • adjustable mount 106 allows users to rotate or pivot surveillance camera 102 conveniently to provide a larger vertical FOV when desired (e.g., when monitoring foot and/or vehicle traffic up and down a deep corridor, a long sidewalk, or a long road).
  • Adjustable mount 106 in some embodiments may be configured to rotate or pivot housing 101 of surveillance camera 102 to adjust additionally for yaw 112 (e.g., for panning) and/or pitch 114 (e.g., for tilting).
  • the additional ranges of adjustment by adjustable mount 106 may further facilitate installation of surveillance camera 102 on a variety of mounting points
  • adjustable mount 106 may include a rotatable joint 118 (e.g., a ball joint) that allows rotation or pivoting in directions 110, 112, and 114 (e.g., roll, yaw, and pitch, respectively).
  • System 200 comprises, according to one implementation, a processing component 210, a memory component 220, an imaging sensor 230, a video interface component 234, a control component 240, a display component 250, a sensing component 260, and a communication interface device 280.
  • Imaging sensor 230 of system 200 may be the same as imaging sensor 120 of surveillance camera 102 as described above.
  • Processing component 210, a logic device, may be implemented as any appropriate circuitry or device (e.g., a processor, microcontroller, application specific integrated circuit (ASIC), field-programmable gate array (FPGA), or other programmable or configurable logic device) that is configured (e.g., by hardware configuration, software instructions, or a combination of both) to perform various operations to provide rotation-adaptive video analytics.
  • processing component 210 may be communicatively coupled to (e.g., configured to communicate with) imaging sensor 230 and memory component 220, and configured to determine a rotational orientation of surveillance camera 102 about optical axis 130, generate, based on image signals received from imaging sensor 230 and the determined rotational orientation, video image frames having a FOV with a vertical dimension that corresponds to the determined rotational orientation of surveillance camera 102, and perform video analytics on the generated video image frames.
  • rotation adaption module 212 may, in some embodiments, be integrated in software and/or hardware as part of processing component 210, with code (e.g., software instructions and/or configuration data) for rotation adaption module 212 stored, for example, in memory component 220.
  • in some embodiments, the software instructions and/or configuration data may be stored on a separate machine-readable medium 221 (e.g., a memory, such as a hard drive, a compact disk, a digital video disk, or a flash memory) to be executed by a computer (e.g., a logic device or processor-based system).
  • machine-readable medium 221 may be portable and/or located separate from system 200, with the stored software instructions and/or data provided to system 200 by coupling the machine-readable medium to system 200 and/or by system 200 downloading (e.g., via a wired link and/or a wireless link) from machine-readable medium 221.
  • some or all of the operations to provide rotation-adaptive video analytics may be performed by processing component 210 and rotation adaption module 212.
  • processing component 210 may be communicatively coupled to (e.g., configured to communicate with) sensing component 260 and video interface 234, and configured to receive image signals from imaging sensor 230 via video interface 234, determine the rotational orientation of the surveillance camera, generate rotation- adaptive video image frames based on the image signals and the determined rotational orientation, and perform video analytics on the generated video image frames.
  • Memory component 220 comprises, in one embodiment, one or more memory devices configured to store data and information, including video image data and information.
  • Memory component 220 may comprise one or more various types of memory devices including volatile and non-volatile memory devices, such as RAM (Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically-Erasable Read-Only Memory), flash memory, hard disk drive, and/or other types of memory.
  • processing component 210 may be configured to execute software instructions stored in memory component 220 so as to perform method and process steps and/or operations described herein.
  • Processing component 210 and/or video interface 234 may be configured to store in memory component 220 video image frames or digital image data captured by the imaging sensor 230.
  • Video interface 234 may include, in some embodiments, appropriate input ports, connectors, switches, and/or circuitry configured to interface with imaging sensor 230 to receive image signals (e.g., digital image data).
  • the received videos or image data may be provided to processing component 210.
  • the received videos or image data may be converted into signals or data suitable for processing by processing component 210.
  • video interface 234 may be configured to receive analog video data and convert it into suitable digital data to be provided to processing component 210.
  • Control component 240 comprises, in one embodiment, a user input and/or interface device, such as a rotatable knob (e.g., potentiometer), push buttons, slide bar, keyboard, touch sensitive display devices, and/or other devices, that is adapted to generate a user input control signal.
  • Processing component 210 may be configured to sense control input signals from a user via control component 240 and respond to any sensed control input signals received therefrom. Processing component 210 may be configured to interpret such a control input signal as a value, as generally understood by one skilled in the art.
  • control component 240 may comprise a control unit (e.g., a wired or wireless handheld control unit) having push buttons adapted to interface with a user and receive user input control values.
  • the push buttons of the control unit may be used to control various functions of system 200, such as initiating a calibration, adjusting one or more parameters of video analytics, autofocus, menu enable and selection, field of view, brightness, contrast, noise filtering, image enhancement, and/or various other features of an imaging system or camera.
  • Display component 250 comprises, in one embodiment, an image display device (e.g., a liquid crystal display (LCD)) or various other types of generally known video displays or monitors.
  • Processing component 210 may be configured to display image data and information (e.g., video analytics information) on display component 250.
  • Processing component 210 may be configured to retrieve image data and information from memory component 220 and display any retrieved image data and information on display component 250.
  • Display component 250 may comprise display circuitry, which may be utilized by the processing component 210 to display image data and information.
  • Display component 250 may be adapted to receive image data and information directly from the imaging sensor 230, processing component 210, and/or video interface component 234, or the image data and information may be transferred from memory component 220 via processing component 210.
  • Sensing component 260 comprises, in one embodiment, one or more sensors of various types, including an orientation sensor implemented with a gyroscope, accelerometer, or other appropriate sensor that is disposed within or relative to housing 101 and configured to detect the rotational orientation of surveillance camera 102 about the optical axis direction.
  • processing component 210 may be configured to communicate with or otherwise utilize sensing component 260 to determine the rotational orientation.
  • adjustable mount 106 may include a position sensor which may be implemented, for example, using a potentiometer, optical sensor, or other sensor configured to detect a position of moveable joint 118.
  • Processing component 210 may be configured to communicate with or otherwise utilize sensing component 260 to determine the rotational orientation based on the position of moveable joint 118, for example.
  • Communication interface device 280 may include a network interface component (NIC) or a hardware module adapted for wired and/or wireless communication with a network and with other devices connected to the network. Through communication interface device 280, processing component 210 may transmit video image frames generated at surveillance camera 102 to external devices (e.g., remote device 282) over a network (e.g., network 290).
  • communication interface device 280 may include a wireless communication component, such as a wireless local area network (WLAN) component based on the IEEE 802.11 standards, a wireless broadband component, mobile cellular component, a wireless satellite component, or various other types of wireless communication components including radio frequency (RF), microwave frequency (MMF), and/or infrared frequency (IRF) components adapted for communication with a network.
  • communication interface device 280 may include an antenna coupled thereto for wireless communication purposes.
  • communication interface device 280 may be adapted to interface with a wired network via a wired communication component, such as a DSL (Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a cable modem, a power-line modem, etc. for interfacing with DSL, Ethernet, cable, optical-fiber, power-line and/or various other types of wired networks and for communication with other devices on the wired network.
  • processing component 210 may be combined with memory component 220, the imaging sensor 230, video interface component 234, display component 250, communication interface device 280, and/or sensing component 260 and implemented within the enclosure of surveillance camera 102.
  • processing component 210 may be combined with the imaging sensor 230, such that certain functions of processing component 210 are performed by circuitry (e.g., a processor, a microprocessor, a logic device, a microcontroller, etc.) within the imaging sensor 230.
  • system 200 may include communication interface device 280 configured to facilitate wired and/or wireless communication among various components of system 200 over network 290.
  • some of the components may be implemented in surveillance camera 102 while other components may be implemented in remote device 282.
  • components may also be replicated if desired for particular applications of system 200. That is, components configured for same or similar operations may be distributed over a network.
  • at least some of the components in system 200 may be implemented in both surveillance camera 102 and remote device 282.
  • any one of the various components may be implemented using appropriate components of a remote device 282 in communication with various components of system 200 via communication interface device 280 over network 290, if desired.
  • all or part of processing component 210, all or part of memory component 220, and/or all or part of display component 250 may be implemented or replicated at remote device 282, and configured to perform rotation-adaptive video analytics as further described herein.
  • all components in system 200 are implemented in surveillance camera 102, and remote device 282 is omitted from the surveillance camera system. It will be appreciated that many other combinations of distributed implementations of system 200 are possible, without departing from the scope and spirit of the disclosure.
  • processing component 210 configured as such can provide rotation-adaptive video analytics. These operations are discussed in more detail below.
  • Fig. 3 illustrates a process 300 for providing rotation-adaptive video analytics.
  • process 300 is performed by surveillance camera 102, such as by processing component 210 utilizing various components of surveillance camera 102, when surveillance camera 102 is initially installed at a location or when surveillance camera 102 has been moved to a different location or orientation. It should, however, be appreciated that any other suitable cameras, devices, systems, and components may perform all or part of process 300.
  • Process 300 begins by determining (at step 302) a rotational orientation of the surveillance camera.
  • the determination of a rotational orientation of surveillance camera 102 about optical axis 130 may be performed in various ways according to embodiments.
  • surveillance camera 102 may include sensing component 260 that includes an orientation sensor.
  • the orientation sensor may be implemented with a gyroscope, accelerometer, or other appropriate sensor that is disposed within or relative to housing 101 of surveillance camera 102.
  • the sensing component 260 is configured to detect the rotational orientation of surveillance camera 102 about optical axis 130.
  • rotation adaption module 212 may be configured to communicate with or otherwise utilize sensing component 260 to determine the rotational orientation of surveillance camera 102.
  • adjustable mount 106 may include a position sensor which may be implemented, for example, using a potentiometer, optical sensor, or other sensor configured to detect a position of moveable joint 118.
  • Processing component 210 may be configured to communicate with or otherwise utilize sensing component 260 to determine the rotational orientation based on the position of moveable joint 118, for example.
  • processing component 210 may be configured to determine the rotational orientation based on a user input received at control component 240 having one or more of a push button, slide bar, rotatable knob, touchpad, touchscreen, pointing device, keyboard, and/or other components that are actuatable by a user to provide an indication of whether surveillance camera 102 is to be installed in a normal (upright) orientation or rotated/pivoted to provide a FOV with a larger vertical dimension than a horizontal dimension.
  • processing component 210 may be configured to determine the rotational orientation by processing the image signals received from imaging sensor 120.
  • processing component 210 may be configured to operate surveillance camera 102 in a training or calibration mode that tracks persons or vehicles as they move up and down along a corridor, road, or sidewalk, and determines the rotational orientation from the perspective captured by imaging sensor 120 (e.g., based on the direction of movement and/or the change in sizes due to the perspective).
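Of these variants, the sensor-based determination is the most direct to illustrate: the roll about the optical axis can be read off the gravity vector. A minimal sketch, assuming a 3-axis accelerometer whose x and y axes lie in the image plane (an axis convention the disclosure does not specify):

```python
import math

def roll_from_accelerometer(ax, ay):
    """Estimate roll about the optical axis from gravity components.
    Assumed convention: an upright camera reads gravity along +y
    (roll 0), and a 90-degree pivot moves gravity onto the x axis."""
    return math.degrees(math.atan2(ax, ay))

print(roll_from_accelerometer(0.0, 1.0))  # 0.0  -> normal, upright orientation
print(roll_from_accelerometer(1.0, 0.0))  # 90.0 -> pivoted about the optical axis
```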
  • Process 300 then receives (at step 304) image signals from the imaging sensor. Based on the received image signals and the rotational orientation of surveillance camera 102 about optical axis 130 that is determined according to any of the embodiments discussed above, process 300 then generates (at step 306) video image frames with a FOV that corresponds to the determined rotational orientation.
  • rotation adaption module 212 is configured to generate video image frames having a FOV with a vertical dimension that corresponds to the determined rotational orientation of surveillance camera 102.
  • Fig. 4A is an image 402 of a surveillance camera (e.g., surveillance camera 102) installed in its intended, normal orientation (in an upright position).
  • Fig. 4B illustrates an example video image frame 404 of a scene that is captured and generated by processing component 210 of surveillance camera 102 that corresponds to the installation orientation as shown in Fig. 4A.
  • video image frame 404 has a FOV with a horizontal dimension 406 that is larger than its vertical dimension 408, which corresponds to the normal orientation of imaging sensor 120.
  • Fig. 5A is an image 502 of surveillance camera 102 installed at a 90-degree rotational orientation (e.g., by pivoting surveillance camera 102 by 90 degrees about optical axis 130).
  • Fig. 5B illustrates an example video image frame 504 of a scene that is captured and generated by processing component 210 of surveillance camera 102 that corresponds to the installation orientation as shown in Fig. 5A.
  • video image frame 504 has a FOV with a vertical dimension 508 that is larger than its horizontal dimension 506.
  • the adjusted native FOV as shown in image 504 allows surveillance camera 102 to properly perform calibration and other video analytics.
  • video image frame 504, generated by rotation adaption module 212 at surveillance camera 102, natively provides the appropriate FOV and image orientation that corresponds to the installation orientation of surveillance camera 102. Without this rotation-adaptive generation of video image frames, interpreting the video image frames would be difficult and performing video analytics on them would be difficult if not impossible, since the direction of movement, the orientations and positions of objects in the scene, and the FOV shown by the video image frames would not correspond to the actual scene captured by surveillance camera 102.
  • Fig. 6 illustrates an example implementation of how processing component 210 can be configured to generate video image frames having a FOV with a vertical dimension that corresponds to the determined rotational orientation in accordance with an embodiment of the disclosure.
  • video image frame 602 represents pixels as provided in the image signals from imaging sensor 120.
  • the image signals from imaging sensor 120 may be digital signals or data, or analog signals, which are indicative of the sensed intensity at each sensor element location in imaging sensor 120.
  • video image frame 602 as provided in the image signals comprises pixels arranged in a horizontally wide rectangle corresponding to the array of sensor elements in imaging sensor 120.
  • processing component 210 is configured to map or assign each of these pixels (e.g., only pixels 612, 614, 616, 618, and 620 are labeled for illustration) to its original position in the output FOV.
  • pixel 612 is assigned to position (0,0)
  • pixel 614 is assigned to position (m,0)
  • pixel 616 is assigned to position (0, n)
  • pixel 618 is assigned to position (m,n)
  • pixel 620 is assigned to position (a,b) in the output FOV.
  • processing component 210 is configured to remap or reassign the pixels to different positions in the output FOV.
  • Video image frame 604 represents a video image frame after rotation adaption module 212 has remapped/reassigned the pixels to different positions in the output FOV when it is detected that surveillance camera 102 has been installed at a substantially 90-degree rotational orientation (e.g., by pivoting surveillance camera 102 by 90 degrees about optical axis 130, as shown in Fig. 5A).
  • processing component 210 is configured to generate a new FOV having a vertical dimension that corresponds to the detected rotational orientation (e.g., having a vertical dimension that is larger than the horizontal dimension).
  • Rotation adaption module 212 is also configured to remap/reassign each pixel to a new position in the new FOV, for example, by applying a rotation transform that corresponds to the determined rotational orientation of surveillance camera 102 to natively generate video image frame in the new FOV.
  • pixel 612 is remapped/reassigned from position (0,0) to position 622 (m,0)
  • pixel 614 is remapped/reassigned from position (m,0) to position 624 (m,n)
  • pixel 616 is remapped/reassigned from position (0,n) to position 626 (0,0)
  • pixel 618 is remapped/reassigned from position (m,n) to position 628 (0,n)
  • pixel 620 is likewise remapped/reassigned from position (a,b) to a corresponding new position in the new FOV by the same rotation transform.
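The corner reassignments above are exactly a 90-degree rotation of the pixel grid. The sketch below shows one way to implement the remap; the coordinate convention (row y, column x) is an assumption, since the figure's labels are not reproduced here, and the explicit per-pixel loop is equivalent to NumPy's `rot90`:

```python
import numpy as np

def remap_90(frame):
    """Remap a wide H x W frame into a tall W x H frame (90-degree
    counterclockwise rotation): source pixel (x, y) lands at the
    remapped position (y, w - 1 - x) in the output FOV."""
    h, w = frame.shape[:2]
    out = np.empty((w, h) + frame.shape[2:], dtype=frame.dtype)
    for y in range(h):
        for x in range(w):
            out[w - 1 - x, y] = frame[y, x]      # per-pixel reassignment
    assert np.array_equal(out, np.rot90(frame))  # one-call equivalent
    return out
```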
  • process 300 performs (at step 308) video analytics on the video image frames (e.g., video image frame 604).
  • rotation adaption module 212 may be configured to change the orientation of the FOV upon detecting a rotational orientation change that exceeds a threshold.
  • rotation adaption module 212 of some embodiments may be configured to apply a 90-degree rotational transform to the pixels of the image signals when the determined rotational orientation is more than 45 degrees off the normal orientation.
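That threshold is equivalent to snapping the measured roll to the nearest multiple of 90 degrees, so small mounting tilts do not toggle the output FOV. A minimal sketch of the assumed behavior:

```python
def snap_rotation(roll_deg):
    """Snap a measured roll angle to the nearest 90-degree orientation;
    anything more than 45 degrees off normal selects the next step."""
    return (round(roll_deg / 90) * 90) % 360

print(snap_rotation(10))  # 0  -> keep the wide, normal-orientation FOV
print(snap_rotation(80))  # 90 -> generate the tall, rotated FOV
```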
  • the image signals from imaging sensor 120 may actually be converted and stored (e.g., buffered) as a video image frame (e.g., video image frame 602) in an image buffer (e.g., implemented in memory component 220 as a software data structure or in hardware), and processing component 210 may access the image buffer to remap the pixels as discussed above and store the resulting video image frame 604 in the image buffer.
  • the image signals from imaging sensor 120 may be received by processing component 210, which may remap and store video image frame 604 in the image buffer without creating an intermediate video image frame 602. In either case, the resulting video image frame 604 natively provides an output FOV that corresponds to the determined rotational orientation of surveillance camera 102.
  • the rotation-adaptive video image frames (e.g., video image frames 504, 604) generated natively at surveillance camera 102 advantageously permit calibration and/or video analytics to be performed correctly.
  • calibration and/or video analytics operations performed by processing component 210 are adaptive to the FOV captured in the video image frames so as to access, analyze, and/or process the pixels of the video image frames according to the rotational orientation of surveillance camera 102 when the video image frames were captured.
  • the video analytics operations may be configured to detect and process video image frames natively output at an aspect ratio of 4:3, 3:4, 5:4, 4:5, 16:9, and 9:16 (e.g., image dimensions of 640x480, 480x640, 640x512, 512x640, 1280x720, and 720x1280).
  • one or more video analytics may be performed on the video image frames that are natively generated by surveillance camera 102.
  • Different embodiments may perform different video analytics on the video image frames.
  • processing component 210 of some embodiments is configured to enable a user to provide video analytics markers for the scene captured by surveillance camera 102 and to perform object detection, identification, or tracking based on the user-provided video analytics markers.
  • Figs. 7A and 7B illustrate examples of how such rotation-adaptive video analytics operations, specifically video analytics markers, may be performed in accordance with an embodiment of the disclosure.
  • Fig. 7A illustrates a video image frame 700 generated by processing component 210 when surveillance camera 102 is installed in an intended, normal orientation.
  • video image frame 700 has a FOV with a shorter vertical dimension and a wider horizontal dimension corresponding to a normal (e.g., upright) installation orientation of surveillance camera 102.
  • Fig. 7B illustrates video image frame 710 generated by processing component 210 when surveillance camera 102 is installed/adjusted to a different rotational orientation such that the FOV has a larger vertical dimension than the horizontal dimension.
  • video image frame 710 has a taller vertical dimension and a narrower horizontal dimension corresponding to a 90-degree installation orientation.
  • video image frame 710 offers better coverage than video image frame 700, as video image frame 710 captures a portion 716 of the road not covered in video image frame 700.
  • users can place video analytics markers for detection, for example, in that extra portion 716 within the FOV provided by video image frame 710.
  • the video analytics operations can place video analytics markers (e.g., based on a user's input via control component 240) such as virtual tripwires 702 and 712 or detection zones 704 and 714 in the correct corresponding locations in video image frames 700 and 710, respectively, to perform detection or other video analytics operations.
  • processing component 210 is configured to provide a user interface that enables a user to provide indication for the video analytics markers (e.g., received via control component 240 and/or from an external device at a remote monitoring station 282) regardless of the FOV captured in the video image frames.
  • processing component 210 may begin performing video analytics on subsequent video image frames, for example, detecting objects or movement of objects within the video image frames based on the video analytics markers (when certain objects appear or move into an area in the scene represented by detection zones 704 and 714, when certain objects cross a line within the scene represented by virtual tripwires 702 and 712, etc.), identifying objects based on the video analytics markers, etc.
  • processing component 210 is configured to retain the video analytics configurations after the user has changed the rotational orientation of surveillance camera 102. For example, by analyzing pixel values and performing object recognition algorithms, processing component 210 of some embodiments can be configured to identify the locations of the video analytics markers in the new FOV after the rotational orientation of surveillance camera 102 has been changed.
  • in response to detecting a change of rotational orientation of surveillance camera 102, processing component 210 is configured not only to apply a rotational transform to the image signals and generate video image frames in a new FOV that corresponds to the new rotational orientation, but also to automatically adjust the video analytics configuration parameters (e.g., video analytics markers) based on the new FOV. In one embodiment, processing component 210 is configured to adjust the video analytics markers by determining the pixel positions corresponding to the video analytics markers and applying a rotational transform (e.g., the same rotational transform applied to the pixels) to the pixel positions of the markers.
  • processing component 210 is also configured to analyze pixel values from a video image frame captured before the change of rotational orientation and pixel values from a video image frame captured after the change of rotational orientation to determine a remapping/reassigning of pixel positions for the video analytics markers. This way, the user is not required to provide additional video analytics parameter input after adjusting the rotational orientation of surveillance camera 102.
  • suppose surveillance camera 102 determines video analytics markers (e.g., virtual tripwire 702 and detection zone 704 of Fig. 7A) after receiving user input while surveillance camera 102 is mounted in its intended, normal orientation. As such, the user input was made with respect to the FOV provided in video image frame 700.
  • processing component 210 is configured to automatically remap/reassign pixel positions for the video analytics markers to generate virtual tripwire 712 based on virtual tripwire 702, and detection zone 714 based on detection zone 704, so that the new video analytics markers correspond to the same area and location within the scene as the old video analytics markers.
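Since markers are stored as pixel coordinates, applying the same rotational transform used for the pixels (as the text describes) is a one-line remap. A sketch with illustrative values; the frame width and tripwire coordinates below are made up, not taken from Figs. 7A and 7B:

```python
def remap_marker_points(points, w):
    """Apply the 90-degree counterclockwise pixel remap to marker
    coordinates, so tripwires and detection zones keep covering the
    same area of the scene; `points` holds (x, y) positions in the
    original frame of width `w` pixels."""
    return [(y, w - 1 - x) for (x, y) in points]

# Hypothetical horizontal tripwire in a wide 640x480 frame...
tripwire = [(100, 300), (500, 300)]
# ...becomes a vertical tripwire in the tall 480x640 frame.
print(remap_marker_points(tripwire, w=640))  # [(300, 539), (300, 139)]
```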
  • calibration may be required on surveillance camera 102 before any video analytics can be performed.
  • processing component 210 is configured to perform calibration for camera 102.
  • processing component 210 of some embodiments is configured to perform the calibration operation for camera 102 by detecting/tracking an object moving around in a scene captured by imaging sensor 120, and determining correlations between different image locations on the video image frame and corresponding image sizes of the tracked object.
  • Fig. 8 illustrates an example calibration process performed by processing component 210. Specifically, Fig. 8 illustrates video image frames 802, 804, and 806 that imaging sensor 120 captured of a scene while a person moves around within the scene. As shown, the movement of the person around the scene causes the image of the person to move from image location 816 in video image frame 802 to image location 818 in video image frame 804, and then to image location 820 in video image frame 806. In addition, due to the perspective of surveillance camera 102, the image size of the person changes as the person moves in the scene.
  • the image 810 of the person in video image frame 802 appears larger than the image 812 of the person in video image frame 804, and the image 812 in video image frame 804 in turn appears larger than the image 814 of the person in video image frame 806.
  • the size of the image of person in video image frames 802, 804, and 806 changes depending on the person's locations in the scene and corresponding image locations 816-820 in video image frames 802, 804, and 806.
  • the varying image size of the person in video image frames 802, 804, and 806 should be known and accounted for.
  • processing component 210 of surveillance camera 102 is configured to detect and track the person moving about in the scene and determine a correlation between various image locations (e.g., including image locations 816-820) in video image frames 802, 804, and 806, and corresponding image sizes of the tracked object (e.g., images 810-814).
  • the determination of the correlation may in some embodiments include storing the association between the tracked image locations 816-820 and the corresponding imaged sizes of the object (e.g., image sizes of images 810, 812, and 814) as they appear in video image frames 802, 804, and 806.
  • the determination of the correlation may in some embodiments include interpolating and/or extrapolating, for example using a regression algorithm, the stored association between the tracked image locations and the corresponding imaged sizes to obtain estimated imaged size for image locations that have not been tracked. In this way, the imaged size need not be recorded for every possible image location, but rather the imaged size can be estimated with sufficient accuracy from a predetermined number of tracked image locations.
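One simple realization of that regression is a least-squares fit of imaged size against the vertical image coordinate, which under ordinary perspective varies roughly linearly with an object's distance along the ground. A sketch with made-up observations (the numbers are illustrative, not taken from Fig. 8):

```python
import numpy as np

# (row of tracked image location, imaged height in pixels); hypothetical
# values echoing Fig. 8, where the person shrinks toward the top of frame.
observed = [(400, 180), (300, 120), (220, 75)]

rows = np.array([r for r, _ in observed], dtype=float)
sizes = np.array([s for _, s in observed], dtype=float)
slope, intercept = np.polyfit(rows, sizes, deg=1)  # size ~ slope*row + intercept

def estimated_size(row):
    """Imaged size estimated at an untracked vertical image location."""
    return slope * row + intercept

print(estimated_size(260))  # falls between the tracked sizes 75 and 120,
                            # as described for image location 828
```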
  • Fig. 8 also shows an image frame 808 that displays various recorded and learned correlations between image sizes of the person and corresponding image locations.
  • processing component 210 may be configured to store the image sizes of the person and their corresponding image locations on image frame 808.
  • the recorded image sizes and corresponding image locations are represented on image frame 808 as solid rectangular boxes, such as image size 822 that corresponds to image location 816, image size 824 that corresponds to image location 818, and image size 826 that corresponds to image location 820.
  • processing component 210 may be configured to extrapolate/interpolate additional image sizes at other image locations on image frame 808.
  • processing component 210 may be configured to estimate image size of the person at image location 828 (displayed as broken rectangular box) based on the rate of change of image sizes 822, 824, and 826 (e.g., how fast the image sizes change/shrink) and the position of image location 828 relative to image locations 816, 818, and 820. Since image location 828 is between image locations 818 and 820, the estimated size at image location 828 by processing component 210 is larger than image size 826 but smaller than image size 824.
  • estimated image sizes may be determined to be similar for image locations that differ in their horizontal position in the scene (different locations across the image frame, parallel to the horizon of the scene) but have the same or adjacent vertical positions (locations along the direction perpendicular to the horizon of the scene).
  • when surveillance camera 102 is installed in an upright orientation (the intended, normal orientation) such that the horizontal dimension of imaging sensor 120 is parallel to the horizon of the scene (as is the case in the example illustrated here), the horizontal dimension (e.g., x-axis or width) of the video image frames is parallel to the horizon of the scene, and the vertical dimension (e.g., y-axis or height) of the video image frames is perpendicular to the horizon of the scene.
  • processing component 210 is configured to determine that the horizon of the scene is parallel to the width of the image frame. As such, processing component 210 is configured to estimate that image sizes of the person at various horizontal image locations should be the same. For example, processing component 210 may be configured to estimate the image sizes at image locations 832 and 834 to be the same as image size 822 (indicated as dotted rectangular boxes). Furthermore, processing component 210 may be configured to estimate image sizes at other locations (e.g., image location 838) using the techniques described above. In addition to determining the correlation between image sizes and image locations, processing component 210 of some embodiments is also configured to determine a correlation between the actual size of an object (e.g., the person) and the image sizes of the object at various image locations. The determined correlations may be used by processing component 210 to perform various video analytics (e.g., detecting/identifying objects based on video analytics markers, etc.).
  • the calibration operation was performed while the surveillance camera was mounted in its intended, normal (upright) orientation in this example. It can be appreciated that the same calibration operation may be performed on video image frames when surveillance camera 102 is installed at a substantially 90-degree rotational orientation such that the FOV provided has a larger vertical dimension than its horizontal dimension. Since rotation adaption module 212 is configured to produce video image frames having a FOV that is adaptive to the rotational orientation in which surveillance camera 102 is installed, processing component 210 may properly perform the calibration process using the natively generated video image frames, as the height, width, and/or direction of the object's movement can all be properly determined from the natively generated video image frames.
  • Fig. 9 illustrates a video image frame 902 captured by imaging sensor 120 after the user has pivoted surveillance camera 102 by 90 degrees about optical axis 130.
  • processing component 210 is configured to adaptively adjust the correlations determined during the calibration process to correspond to the new FOV such that it is not necessary to perform another calibration process after the rotational orientation of surveillance camera 102 is changed.
  • processing component 210 of some embodiments can be configured to identify the image locations in the new FOV, after the rotational orientation of surveillance camera 102 has been changed, that correspond to image locations in the old FOV before the rotational orientation was changed.
  • in response to detecting a change of rotational orientation of surveillance camera 102, processing component 210 is configured to automatically adjust the correlations of image sizes and image locations based on the new FOV.
  • processing component 210 is configured to analyze pixel values from a video image frame captured before the change of rotational orientation and pixel values from a video image frame captured after the change of rotational orientation to determine a remapping of image locations for the new FOV.
  • processing component 210 may determine that image location 922 in video image frame 902 corresponds to image location 822 in video image frame 808 by determining that they represent the same area in the scene based on an analysis of the pixel values.
  • processing component 210 may determine that image location 924 in video image frame 902 corresponds to image location 824 in video image frame 808, image location 926 in video image frame 902 corresponds to image location 826 in video image frame 808, image location 928 in video image frame 902 corresponds to image location 828 in video image frame 808, image location 932 in video image frame 902 corresponds to image location 832 in video image frame 808, image location 934 in video image frame 902 corresponds to image location 834 in video image frame 808, image location 938 in video image frame 902 corresponds to image location 838 in video image frame 808.
  • processing component 210 is configured to adjust the correlations between image sizes and image locations, and the correlations between the actual physical sizes of objects and image sizes at different image locations, for the new FOV automatically, without any user input and without requiring another calibration operation.
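As a concrete special case: if the optical axis is unchanged by the pivot, the registration described above reduces to a centered shift of the stored locations between the old wide frame and the new tall frame, with imaged sizes unchanged (the rotation-adapted output keeps objects upright). A hedged sketch of that simplification; the frame dimensions are illustrative:

```python
def remap_calibration(correlations, old_wh=(640, 480), new_wh=(480, 640)):
    """Carry learned location-to-size correlations into the new FOV.
    Simplifying assumption: the pivot leaves the optical axis (and the
    frame center) fixed, so old and new frames differ by a centered
    crop/extension; the disclosure instead registers pixel values
    before and after the change, which also handles off-center shifts."""
    (ow, oh), (nw, nh) = old_wh, new_wh
    dx, dy = (nw - ow) // 2, (nh - oh) // 2
    remapped = {}
    for (x, y), size in correlations.items():
        nx, ny = x + dx, y + dy
        if 0 <= nx < nw and 0 <= ny < nh:  # keep only locations still in view
            remapped[(nx, ny)] = size
    return remapped
```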
  • surveillance camera 102 may perform various video analytics (e.g., detection of objects within the scene, identification of objects within the scene, etc.) seamlessly even if the user decides to adjust the rotational orientation of the camera.
  • various embodiments provided by the present disclosure can be implemented using hardware, software, or combinations of hardware and software.
  • the various hardware components and/or software components set forth herein can be combined into composite components comprising software, hardware, and/or both without departing from the spirit of the present disclosure.
  • the various hardware components and/or software components set forth herein can be separated into sub-components comprising software, hardware, or both without departing from the spirit of the present disclosure.
  • software components can be implemented as hardware components, and vice-versa.
  • the ordering of various steps described herein can be changed, combined into composite steps, and/or separated into sub-steps to provide features described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

Various embodiments of the present invention relate to methods and systems that may be used to provide a surveillance camera that generates native video image frames in the appropriate FOV (orientation) corresponding to the orientation in which the surveillance camera is mounted when the video image frames are captured. Surveillance cameras implemented according to embodiments of the invention may facilitate a mounting that provides a desired FOV in a specific orientation, generate video image frames that natively correspond to the desired FOV, and allow user interaction and video analytics to be performed on the FOV-matched video image frames.
PCT/US2017/022826 2016-03-17 2017-03-16 Rotation-adaptive video analytics camera and method WO2017161198A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/115,455 US10735659B2 (en) 2016-03-17 2018-08-28 Rotation-adaptive video analytics camera and method

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201662309940P 2016-03-17 2016-03-17
US201662309956P 2016-03-17 2016-03-17
US62/309,956 2016-03-17
US62/309,940 2016-03-17
US15/456,074 US11030775B2 (en) 2016-03-17 2017-03-10 Minimal user input video analytics systems and methods
US15/456,074 2017-03-10

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/456,074 Continuation-In-Part US11030775B2 (en) 2016-03-17 2017-03-10 Minimal user input video analytics systems and methods

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/115,455 Continuation US10735659B2 (en) 2016-03-17 2018-08-28 Rotation-adaptive video analytics camera and method

Publications (1)

Publication Number Publication Date
WO2017161198A1 (fr) 2017-09-21

Family

ID=58461467

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/022826 WO2017161198A1 (fr) Rotation-adaptive video analytics camera and method

Country Status (1)

Country Link
WO (1) WO2017161198A1 (fr)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6292222B1 (en) * 1997-02-13 2001-09-18 Videor Technical Services Gmbh Protective housing for optical apparatus with a mounting body for attachment to a mounting surface
WO2005048605A1 (fr) * 2003-11-14 2005-05-26 The Commonwealth Of Australia Systeme d'imagerie electronique synthetique
US20070165137A1 (en) * 2006-01-17 2007-07-19 Lai Simon Y Supporting frame for CCTV camera enclosure
US20110228112A1 (en) * 2010-03-22 2011-09-22 Microsoft Corporation Using accelerometer information for determining orientation of pictures and video images
US20140152815A1 (en) * 2012-11-30 2014-06-05 Pelco, Inc. Window Blanking for Pan/Tilt/Zoom Camera
US20150172567A1 (en) * 2013-12-12 2015-06-18 Flir Systems Ab Orientation-adapted image remote inspection systems and methods
US20150181123A1 (en) * 2013-12-19 2015-06-25 Lyve Minds, Inc. Image orientation adjustment based on camera orientation

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111385525A (zh) * 2018-12-28 2020-07-07 Hangzhou Hikrobot Technology Co., Ltd. Video surveillance method, apparatus, terminal and system
CN111385525B (zh) * 2018-12-28 2021-08-17 Hangzhou Hikrobot Technology Co., Ltd. Video surveillance method, apparatus, terminal and system

Similar Documents

Publication Publication Date Title
US10735659B2 (en) Rotation-adaptive video analytics camera and method
US11030775B2 (en) Minimal user input video analytics systems and methods
US10970556B2 (en) Smart surveillance camera systems and methods
EP2764686B1 (fr) Smart surveillance camera systems and methods
US10033944B2 (en) Time spaced infrared image enhancement
US10425603B2 (en) Anomalous pixel detection
US10937140B2 (en) Fail-safe detection using thermal imaging analytics
US11010878B2 (en) Dynamic range compression for thermal video
US9819880B2 (en) Systems and methods of suppressing sky regions in images
US20140218520A1 (en) Smart surveillance camera systems and methods
US9635285B2 (en) Infrared imaging enhancement with fusion
EP2936799B1 (fr) Time-spaced infrared image enhancement
US10244190B2 (en) Compact multi-spectrum imaging with fusion
US10909364B2 (en) Uncooled gas imaging camera
JP2006523043A (ja) Method and system for performing surveillance
WO2014100741A2 (fr) Systems and methods for suppressing sky regions in images
WO2017161198A1 (fr) Rotation-adaptive video analytics camera and method
US11385105B2 (en) Techniques for determining emitted radiation intensity
CN114630060A (zh) Uncertainty measurement systems and methods related to infrared imaging
KR101738514B1 (ko) Surveillance system employing a fisheye thermal imaging camera and surveillance method using the same

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17714973

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17714973

Country of ref document: EP

Kind code of ref document: A1