US20180286075A1 - Setting Different Background Model Sensitivities by User Defined Regions and Background Filters - Google Patents

Setting Different Background Model Sensitivities by User Defined Regions and Background Filters

Info

Publication number
US20180286075A1
US20180286075A1 (Application No. US15/562,798)
Authority
US
United States
Prior art keywords
image
background
objects
video frame
filters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US15/562,798
Other versions
US10366509B2 (en
Inventor
Lawrence Richard JONES
Bryce Hayden LEMBKE
Sezai Sablak
Michael D. Dortch
Larry J. Price
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thermal Imaging Radar LLC
Original Assignee
Thermal Imaging Radar LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thermal Imaging Radar LLC filed Critical Thermal Imaging Radar LLC
Priority to US15/562,798 priority Critical patent/US10366509B2/en
Assigned to Thermal Imaging Radar, LLC reassignment Thermal Imaging Radar, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: JONES, Lawrence Richard, LEMBKE, BRYCE HAYDEN, DORTCH, MICHAEL D., PRICE, LARRY J., SABLAK, SEZAI
Publication of US20180286075A1 publication Critical patent/US20180286075A1/en
Application granted granted Critical
Publication of US10366509B2 publication Critical patent/US10366509B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N5/23238
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay

Definitions

  • Panoramic images can be created by an array of wide angle cameras that together create up to a 360 degree field of view or by one camera with a fish eye lens or other panoramic mirror that allows for a continuous “mirror ball” image that is later flattened out by computer.
  • a relatively new means of capturing panoramic images is by continuously spinning a thermal sensor or other high speed camera at less than 60 RPM and processing the images from the camera with a computer where they may be stitched together and analyzed.
  • a common first step for performing video analysis is to develop a background model from successive video frames and then to compare new frames against that background model to look for changes that could be foreground movement.
  • As some background objects (such as trees, banners, etc.) can have movement and change, a certain amount of tolerance should be built into the analysis so that these objects are treated as background and not foreground objects.
  • This tolerance is typically set for the entire video image and used for all changes regardless of where they are in the video frame.
  • object classification in computer vision requires identifying characteristics about a foreground object that make it a likely match to a real world object, such as a person, animal or vehicle. Calculations performed to identify these characteristics can be computationally expensive, limiting the amount of analysis that can be performed on embedded or lower power systems.
  • One embodiment illustrated herein includes a method of creating a background model for image processing to identify new foreground objects in successive video frames.
  • the method includes providing a background image in a user interface.
  • the method further includes receiving a first user input in the user interface.
  • the first user input comprises an identification of one or more different regions within the background image.
  • the method further includes receiving a second user input in the user interface.
  • the second user input comprises a selection of an image change tolerance for each of the one or more identified different regions.
  • the method further includes providing the background image, information identifying the one or more different regions, and the image change tolerances to an image processor.
  • the background image, the information identifying the one or more different regions, and the image change tolerances are used by the image processor to create a background model to thereby compare a successive image with the background model in order to identify foreground objects within the successive image.
  • a method of identifying a foreground object in a video frame by comparing the video frame to a background model includes obtaining a background model.
  • the background model comprises a background image and identifies one or more user defined regions. Each of the one or more user defined regions includes an image change tolerance.
  • the method further includes obtaining a video frame and evaluating the video frame against the background model such that a foreground object is identified in a region of the video frame when a score for the foreground object exceeds the image change tolerance for the region in which the foreground object is located.
  • Image change tolerance for each user defined region may be independently selected and adjusted by the user.
  • a method of identifying foreground objects of interest in a video frame is illustrated.
  • the method further includes obtaining a background model, wherein the background model comprises a background image.
  • the method further includes obtaining a video frame and evaluating the video frame against the background model to identify objects. This includes identifying differences between the video frame and the background image.
  • the method further includes applying one or more filters to the identified differences to identify one or more foreground objects of interest in the video frame.
  • FIG. 1 schematically illustrates an example computing system in which the principles described herein may operate
  • FIG. 2A schematically illustrates virtual camera positions, also referred to herein as stop positions, where the camera may rotate in a clockwise direction, with the camera pointed at stop position 1 ;
  • FIG. 2B schematically illustrates virtual camera positions, as in FIG. 2A , with the camera pointed at stop position 5 ;
  • FIG. 3 schematically illustrates an example implementation in which the video processor illustrated in FIG. 1 may operate
  • FIG. 4 illustrates a thermal imaging camera in an environment allowing the camera to capture a plurality of images at corresponding stop positions
  • FIG. 5 illustrates communication between a thermal imaging camera and an event dispatcher
  • FIG. 6 illustrates a method of creating a background model for image processing to identify new foreground objects in successive video frames
  • FIG. 7 illustrates a method of identifying a foreground object in a video frame by comparing the video frame to a background model
  • FIG. 8 illustrates a method of identifying foreground objects of interest in a video frame.
  • background objects (such as trees, banners, etc.) in successive images can have movement, which represents change between images. When objects in the background move, the pixels in successive images will be different, and thus pixels in one image are changed with respect to another image.
  • a certain amount of pixel change tolerance should therefore be built into the analysis so that these objects are treated as background and not foreground objects.
  • a similar scenario occurs where change occurs in the background (e.g., sunrise or sunset), which may appear to change objects (e.g., trees, power poles, utility trailers, stockyard equipment and buildings, and the like) in the foreground, as the effect of the background sun may alter their appearance, making it appear as if the foreground object has changed, when in fact it may have not.
  • this pixel change tolerance is typically set for the entire video image and used for all changes regardless of where they are in the video frame.
  • Some embodiments described herein allow for setting different background model sensitivities by user defined regions in an image. Such embodiments may be designed for embedded systems with limited resources (such as limited processing power and memory), and they rate the amount of variance between the background model and detected movement as a score for the candidate foreground object.
  • the user can set up any number of regions in the image with different scores.
  • the score of each change (i.e., the amount of pixel change) is compared against the image change tolerance of the region in which the movement is detected to determine whether the movement should be treated as a foreground change or simply background movement (e.g., due to sunrise or sunset). This gives the user greater flexibility in determining how to treat change in video frames.
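  • As a rough illustration of the per-region scoring just described, the following Python sketch compares the aggregate pixel-change score of a candidate against the tolerance of the region containing it. The region bounds, tolerance values, and function names are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

# Each user defined region carries its own image change tolerance.
# A high tolerance makes the region less sensitive (e.g., swaying vegetation);
# a low tolerance makes it more sensitive (e.g., a static mountain range).
REGIONS = [
    {"name": "vegetation", "bounds": (0, 0, 200, 150),   "tolerance": 4000},
    {"name": "ridge line", "bounds": (0, 300, 640, 512), "tolerance": 500},
]
DEFAULT_TOLERANCE = 800  # applied to areas the user did not define

def tolerance_at(x, y):
    """Return the image change tolerance of the region containing pixel (x, y)."""
    for region in REGIONS:
        x0, y0, x1, y1 = region["bounds"]
        if x0 <= x < x1 and y0 <= y < y1:
            return region["tolerance"]
    return DEFAULT_TOLERANCE

def is_foreground_change(background, frame, bbox):
    """Score a candidate change against the tolerance of its region."""
    x0, y0, x1, y1 = bbox
    diff = np.abs(frame[y0:y1, x0:x1].astype(np.int32) -
                  background[y0:y1, x0:x1].astype(np.int32))
    score = int(diff.sum())          # aggregate pixel change inside the box
    return score > tolerance_at(x0, y0)
```
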
  • object classification in computer vision requires identifying characteristics about a foreground object that make it a likely match to a real world object, such as a person, animal or vehicle. Calculations performed to identify these characteristics can be computationally expensive, limiting the amount of analysis that can be performed on embedded or lower power systems.
  • Embodiments may use a number of filters to more effectively reduce the number of objects that require CPU intensive calculation. These filters may include one or more of distance filters used to determine the relative size of the object, a cross correlation filter to determine if the object matches a background model, an edge detection filter, an aspect ratio filter, a thermal filter to filter objects based on thermal characteristics, and so forth. These filters can help to reduce false detection of foreground objects and facilitate computationally inexpensive algorithms for determining object classification.
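  • A minimal sketch of such a filter cascade: inexpensive checks run first and reject candidates early, so that only the survivors reach CPU-intensive classification. The individual filter functions below are hypothetical stand-ins for the distance/size, aspect-ratio, edge, correlation, and thermal filters mentioned above.

```python
def size_ok(candidate, model):
    # Stand-in for a distance/size filter: drop objects taller than ~2.5 m.
    return candidate.get("height_m", 0.0) <= 2.5

def aspect_ratio_ok(candidate, model):
    # Stand-in for an aspect-ratio filter: keep roughly upright shapes.
    return 0.2 <= candidate.get("aspect_ratio", 1.0) <= 1.2

FILTERS = [size_ok, aspect_ratio_ok]   # cheapest first; edge/correlation/thermal later

def passes_filters(candidate, model, filters=FILTERS):
    """Reject a candidate as soon as any filter fails."""
    return all(f(candidate, model) for f in filters)

# Example: a person-sized candidate survives the cheap filters.
print(passes_filters({"height_m": 1.8, "aspect_ratio": 0.4}, model=None))  # True
```
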
  • a background model is used to compare the new image for changes within regions of interest.
  • Each region of interest has a change tolerance used to determine if the change is of a great enough magnitude to be considered for later filters.
  • FIG. 1 illustrates a block diagram for camera system 100 .
  • Camera system 100 allows camera 150 (e.g., a thermal imaging camera) to rotate up to a full 360° around a fixed-axis. The full revolution comprises a number of positions corresponding to “stops” where it is desired that an image be captured. Because of requirements related to camera calibration, particularly with a thermal imaging camera, the camera can capture images at a constant frame rate.
  • the spectrum captured may be from about 8,000 nm to about 14,000 nm.
  • some of the images may correspond to positions between designated “stop” positions, while others will correspond to a “stop” position. As will be described in further detail hereafter, only those images corresponding to a “stop” position may be retained. The others may be discarded.
  • a stop position can be characterized as having an angle offset relative to a designated home position (i.e., “stop” 1 ) at which camera 150 captures an image.
  • the system may determine the home position by using a camera-mount with a hole, along with a transmissive optical encoder that can detect the home position when the hole in the camera-mount lines up with the encoder.
  • a transmissive optical encoder may be a 1-bit encoder.
  • a higher resolution encoder may be used to allow more granular feedback of actual camera position at any given time.
  • Camera system 100 can allow any number of stop positions per revolution.
  • the number of stop positions may be between 1 and 16.
  • the stops may be positioned at equally spaced intervals. For example, ten stop positions, or stops, per revolution would result in ten stops that are located 36° apart.
  • the camera system 100 may use any suitable motor mechanism for ensuring that the camera remains momentarily stationary at each stop position, so as to facilitate capture of a non-blurry image at each desired stop position.
  • a stepper motor may be employed to hold camera 150 stationary at each stop position for the appropriate amount of time to acquire an image before moving to the next stop position. Details of an exemplary stepper motor mechanism are disclosed in the inventors' PCT Patent Application Serial No. PCT/US/2014/033539 filed Apr. 9, 2014 titled STEPPER MOTOR CONTROL AND FIRE DETECTION SYSTEM, herein incorporated by reference in its entirety.
  • Another example of a motor mechanism that may be employed in rotating the camera through a plurality of stop positions, in which the camera remains momentarily stationary at each stop position, includes a mechanical cam system, e.g., as described in the inventors' U.S. Pat. No. 8,773,503 issued Jul. 8, 2014 and titled AUTOMATED PANORAMIC CAMERA AND SENSOR PLATFORM WITH COMPUTER AND OPTIONAL POWER SUPPLY, herein incorporated by reference in its entirety.
  • Motor mechanisms as described in the above patents and applications, as well as any other suitable design may be employed.
  • the system can utilize two processors, microprocessor 110 and video processor 140 .
  • Microprocessor 110 can manage position and timing of camera 150
  • video processor 140 can process video from camera 150 .
  • Utilizing two processors allows the real-time performance of the camera 150 and motor synchronization by the microprocessor 110 to be de-coupled from the high-throughput video processor 140 .
  • some implementations may use one processor to manage position and timing, as well as video of the camera 150 .
  • more than two processors could alternatively be employed.
  • data from the camera 150 passes through slip rings 160 , which allows the camera 150 to rotate as described above (continuous rotation with intermittent, very short stops). Because the periods where the camera is actually stationary are so short, the camera may appear to rotate continuously. In some cases, the camera frame rate may not exactly match the rotational speed and stop rate of the camera system 100 , thus creating fractional video frames.
  • Digital switch 130 may then be used to throw away any unwanted video frames. In other words, as described above, some captured frames may correspond to one of the stop positions, while other captured frames may be captured while the camera is rotating. Using the digital switch 130 allows the video processor 140 to sleep during those times at which unwanted video frames are discarded, thus creating better power efficiency. Of course, in other embodiments of the system, the video processor (e.g., ARM/DSP) may have little or no time to sleep.
  • each stop position can be represented as an individual virtual camera.
  • a virtual camera may act like a stationary camera pointed in a single direction.
  • the camera system 100 can support any number of stop positions and corresponding number of virtual cameras.
  • system 100 may include from 1 to 16 stop positions, and 1 to 16 virtual cameras, each virtual camera associated with a particular stop position.
  • FIG. 2A illustrates how each stop position may correlate to a physical space with the camera 150 facing stop position 1 (i.e., home position).
  • FIG. 2B shows the camera 150 having rotated so as to be facing stop position 5 .
  • a numbering system used for the stop positions may increase in a clockwise direction.
  • FIGS. 2A and 2B illustrate an exemplary configuration including 8 stop positions, although it will be understood that more or fewer stop positions may be provided.
  • the number of stop positions may be from 2 to about 30, from 4 to about 20, or from 6 to about 16.
  • each stop period (i.e., the dwell time) may be from about 30 ms to about 120 ms (e.g., about 60 ms).
  • Each image captured by camera 150 corresponding to a stop position may be multiplexed, or muxed, into a video stream and sent to video processor 140 .
  • Each image captured by camera 150 to be retained may be sent out of the camera over the same interface.
  • Microprocessor 110 can manage and track the angle offset (and corresponding stop position) of camera 150 when an image to be retained is captured (i.e., the image corresponds to a stop position). Even images that are to be discarded (i.e., the image does not correspond to a stop position), may be sent to digital switch 130 over the same interface as retained images. At digital switch 130 , those images that are to be discarded may be separated and discarded, rather than passed on to video processor 140 .
  • video demultiplexing driver 330 separates the video stream into individual images (i.e., frames) that each correspond to a particular stop position, or virtual camera.
  • the video stream referred to at this stage may include images to be retained that were captured at different stop positions.
  • Demultiplexing driver 330 may use position information tracked by microprocessor 110 to determine the corresponding virtual camera for each image of the video stream, allowing sorting of the images to their proper virtual camera devices.
  • each image can be sent to its corresponding virtual camera ( 301 - 308 ) for storage and future analytics (e.g., comparison of images taken adjacent in time from the same stop position), once that determination has been made.
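  • A simplified sketch of this routing step: retained frames are delivered to per-stop "virtual camera" buffers keyed by the stop position that microprocessor 110 tracked at capture time. The class and function names are assumptions for illustration only.

```python
from collections import defaultdict, deque

class VirtualCamera:
    """Holds the most recent frames captured at one stop position."""
    def __init__(self, max_frames=32):
        self.frames = deque(maxlen=max_frames)

    def add(self, frame):
        self.frames.append(frame)

virtual_cameras = defaultdict(VirtualCamera)   # keyed by stop position (1..N)

def demux(frame, stop_position):
    """Route a retained frame to the virtual camera for its stop position.

    Frames captured between stops are assumed to have already been
    discarded upstream by the digital switch.
    """
    virtual_cameras[stop_position].add(frame)
```
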
  • images from different stop positions may be stitched together to create a panoramic image (e.g., of up to 360°).
  • An advantage of the present embodiments is that any such stitching is optional, and is typically not carried out on-site (if at all). If done, stitching may be done off-site, allowing the total power requirements for on-site system 100 to be no more than about 10 watts, as stitching is power and computer processor intensive.
  • the video stream associated with a given virtual camera device may be analyzed to detect a change from a given image to a subsequent image.
  • the analytics carried out may involve comparison of an image with a subsequently captured image (or a previously captured image) from the same stop position (and thus the same virtual camera device) to detect any changes (e.g., a change in temperature of an object, movement of an object, etc.).
  • the analytics carried out may include fire detection, as discussed in the inventors' prior PCT Patent Application Serial No. PCT/US/2014/033539 filed Apr. 9, 2014 titled STEPPER MOTOR CONTROL AND FIRE DETECTION SYSTEM, and PCT Patent Application Serial No.
  • PCT/US/2014/033547 filed Apr. 9, 2014 titled FIRE DETECTION SYSTEM, each of which is herein incorporated by reference in its entirety.
  • Another example of use may include intruder detection, or perimeter monitoring to ensure security at a secure border, or the like.
  • Each virtual camera can have a video stream similar to that which would be obtained by a stationary camera placed at the corresponding stop position.
  • the system of virtual cameras may be on-site relative to thermal imaging camera 150 .
  • the frame rate of a virtual camera video stream is equal to the rotational speed of motor 170 , as camera 150 passes each stop once per revolution.
  • the muxed signal frame rate may be equal to a rotational speed of motor 170 multiplied by the number of stops per revolution.
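  • As a worked example of those two relationships, a camera spinning at 30 RPM with 8 stops per revolution yields a muxed stream of about 4 retained frames per second, while each virtual camera sees about 0.5 frames per second. The figures below are illustrative, not values taken from the patent.

```python
def frame_rates(rpm, stops_per_revolution):
    """Return (per_virtual_camera_fps, muxed_fps) for a spinning camera."""
    revolutions_per_second = rpm / 60.0
    per_virtual_camera_fps = revolutions_per_second        # one visit per stop per revolution
    muxed_fps = revolutions_per_second * stops_per_revolution
    return per_virtual_camera_fps, muxed_fps

print(frame_rates(30, 8))   # -> (0.5, 4.0)
```
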
  • Utilizing virtual cameras can allow analytics and any stitching to be decoupled from image acquisition. Because each virtual camera can have its own video stream, images associated with each virtual camera can be processed independently. System 100 may normally capture images in numerical (e.g., clockwise or counter-clockwise) order, but because the angle offset from home position (i.e., stop position 1 ) is known based on the stop number of the virtual camera, analytics or any stitching may update in any order desired.
  • Correlating geographical location data with the position of camera 150 can include, but is not limited to, correlating pixel positions of a captured image and determining depth values for pixel positions of individual thermal imaging camera images based on the geographical location of thermal imaging camera 150 .
  • the distance or depth value for each pixel of an image may be calculated using elevation data, for example, from the National Elevation Dataset.
  • the depth value calculation to be associated with a given pixel can be done in a series of steps for determining (e.g., calculating) how each pixel represents a ray projected from the camera across the landscape intersecting the ground. Generally, this may be achieved by using a projected camera view on a wireframe terrain model created using elevation data (e.g., from the National Elevation Dataset) to estimate where each rendered pixel of the camera view would intersect the wireframe to calculate the probable “z” depth value of the bottom of each image element or pixel. Such a process may employ a loop process carried out through increasing z-distances until the projected height intersects the elevation height at a distance.
  • This may be done by determining (e.g., calculating) if a ray having a length equal to the camera's height intersects the ground at the projected distance. This determination may be repeated by repeatedly increasing the ray length by a given amount (e.g., 1 decimeter) until the ground is reached (e.g., intersected) or the ray exceeds a given length (e.g., 30 kilometers). Such an excessive length may be used to help render the horizon. Data for latitude, longitude, elevation and distance of the intersection point may be stored, and the determination (e.g., calculation) may be repeated for the next pixel of a column.
  • the determination may move onto a new column. Such determinations or calculations may be based off variable Vertical Field of View, Horizontal Field of View, elevation and orientation.
  • the final data set may be used to render an image that depicts distance (e.g., in gray scale) with lines placed at a given distance (e.g., every 100 meters).
  • the determined or calculated image may be compared against an actual image for a final adjustment of the input variables. Once completed, the final result would provide a “z” depth value map that can be saved for future immediate analytics availability.
  • an image of 640×512 may require repetition of the described determinations approximately 250,000 times.
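  • A rough sketch of that per-pixel determination: step the ray outward in 1-decimeter increments until its projected height meets the terrain elevation, or a 30-kilometer cap (used to help render the horizon) is exceeded. The terrain lookup and pixel-to-angle conversion below are placeholders; a real implementation would derive them from National Elevation Dataset data and the camera's calibrated fields of view.

```python
import math

MAX_DISTANCE_M = 30_000     # beyond this, treat the pixel as horizon/sky
STEP_M = 0.1                # 1 decimeter increments

def pixel_depth(camera_height_m, pixel_pitch_rad, terrain_elevation):
    """Estimate the ground-intersection distance ("z" depth) for one pixel's ray.

    pixel_pitch_rad: downward angle of this pixel's ray (negative = below horizon).
    terrain_elevation(d): placeholder callback returning ground elevation, relative
                          to the camera base, at horizontal distance d.
    """
    d = STEP_M
    while d <= MAX_DISTANCE_M:
        ray_height = camera_height_m + d * math.tan(pixel_pitch_rad)
        if ray_height <= terrain_elevation(d):
            return d                       # the ray has intersected the ground
        d += STEP_M
    return MAX_DISTANCE_M

# Example on flat ground: a camera 3 m up, pixel looking 1 degree below horizontal.
print(round(pixel_depth(3.0, math.radians(-1.0), lambda d: 0.0), 1))  # ~171.9 m
```
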
  • depth values for pixel positions may allow determination of the size or movement speed of an object captured within an image. For example, it may provide information on the size or movement speed of a wildfire, person, animal, or any other object of interest.
  • Processing that correlates pixel positions of an image with location data and determination of a depth value associated with each pixel may be performed off-site at a remote user interface terminal. The actual depth values associated with given pixels of the images may be relayed to the camera system for storage and use on-site.
  • Relay of any data on-site to off-site or vice versa may be by any suitable mechanism, e.g., including, but not limited to satellite link or network, cellular link or network, WiFi link or network, radio transmission, hard wired link or network, etc.
  • FIG. 4 illustrates the camera 150 capturing a plurality of images at various stops and the images optionally being stitched into a panoramic image 402 . While FIG. 4 illustrates the panoramic image 402 , it should be appreciated that embodiments do not require the images to be stitched together, but rather, individual images can be processed by a virtual camera such as virtual cameras 301 - 308 illustrated in FIG. 3 .
  • Embodiments may be implemented where, at a virtual camera, foreground objects are scored based on the amount of change compared to a background model. For example, a score may be generated based on the amount of changes between pixels (or edges) in a background image and the corresponding pixels (or edges) in a subsequent image. Scoring may be based, for example, on an aggregation of changes over a number of different pixels (or edges). In particular, the cumulative changes over a selected region may represent the score for the region. For example, the number of pixels or edges associated with a given object (e.g., a power pole, a utility trailer, etc.) may be determined, for later reference.
  • the system may compare the number of pixels or edges of the object at a later time, to determine whether the object has changed (the number of pixels or edges has changed over a threshold value), or if the object is the same as before. This may be particularly useful during sunset and sunrise, when objects may appear to be changing, but upon counting the number of pixels or edges associated with such a stationary object, it will be apparent that the object has not changed, only the background.
  • the particular background model can include user defined regions within a video frame where each region is assigned a sensitivity score.
  • FIG. 4 illustrates five user defined regions 404 , 406 , 408 , 410 and 412 .
  • Regions 404 , 406 , 408 and 410 are selected to encompass various pieces of vegetation. As wind or other factors may cause movement in the vegetation, these regions may be configured to be less sensitive to movement than other regions are.
  • any comparison of the defined regions 404 , 406 , 408 , 410 and 412 in the background image to corresponding regions in subsequent video frames will be allowed a certain amount of variation without causing an alarm or other indication that there is a change in the object or background. Even though there may be a slight change due to swaying vegetation, such slight changes will not cause an event to be triggered indicating a change.
  • the region 412 may be defined such that little or no change is allowed.
  • the region 412 is set to define a region including a mountain range which should remain quite static. Such a region may similarly be defined around various pieces of equipment such as trailers, buildings, power poles, and the like that may generally remain stationary. Thus, this region 412 could be defined to be more sensitive to change such that even slight changes will cause an alarm or other indication of a change to be triggered.
  • embodiments may be implemented where foreground object scores are compared to the region of interest in which they were detected to determine if they are background or foreground objects.
  • a user may have control over sensitivity of movement detection over all coordinates of the video frame.
  • a user-defined region of interest may span across more than 1 virtual camera position (e.g., as region 412 does), even if no panoramic stitching is performed.
  • Embodiments can be implemented which do not require extra memory buffers for learning or learning time to allow for background motion to be detected. Rather, a user can manually define where background motion might occur.
  • the user can increase sensitivity by increasing the motion detection range in areas where there is little or no repetitive motion (or where none should be) while still allowing detection of motion in areas where there is a lot of repetitive motion by decreasing the sensitivity and the motion detection range for these areas. This gives a user more control over detection range and percentage of objects falsely detected as foreground objects of interest (e.g., an intruder).
  • there may be a default level of sensitivity for all regions of the video frame that are not user defined (i.e., not selected by the user and given a specific sensitivity). For example, all regions that are not user defined may be assumed to remain relatively static and thus have a relatively high sensitivity to any changes.
  • embodiments may first create a background model.
  • the background model may be based on a background image, such as the panoramic image 402 (or the individual images associated with stops S1-S8).
  • the panoramic image 402 may be provided to, or created on a user system 190 .
  • the user system 190 may be remote from the camera system 100 . Thus, images may be provided from the camera system 100 to be used in creating a background model.
  • the panoramic image 402 is provided as part of a user interface at the user system 190 .
  • the user system 190 may include a user interface that allows a user to select regions on the overall panoramic image 402 .
  • the user interface may include the ability to allow a user to drag and select a region using mouse graphical user interface interactions. While this would allow for selecting rectangular regions, it should be appreciated that other regions may also be selected. For example, a user could select a region bounded by an ellipse, a triangle, or some other complex shape. A user may also be able to select a region by drawing the region using any applicable graphical user interface interactions (e.g., a mouse, a finger or stylus when using a touchscreen device, and so forth). In some embodiments, any portions of the image not selected by the user may represent a region that can be associated with an image change tolerance, as set forth in more detail below.
  • the user can also indicate, using the user interface, a sensitivity for a selected region.
  • the selected regions can be converted to coordinates (such as Cartesian coordinates) which can then be provided back to the camera system 100 along with the sensitivities. This information can then be used by the camera system as part of the background model used to determine when indications of changes should be issued to a user.
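  • A minimal sketch of what might flow back to the camera system after the user draws regions: each selection reduced to Cartesian coordinates plus its sensitivity/tolerance. The JSON structure and field names are illustrative assumptions, not a documented format.

```python
import json

def serialize_regions(selections):
    """Convert UI selections into coordinates + sensitivities for the camera system.

    selections: list of (name, (x0, y0, x1, y1), tolerance) tuples produced
    by a drag-to-select interaction in the user interface.
    """
    payload = {
        "regions": [
            {"name": name,
             "coords": {"x0": x0, "y0": y0, "x1": x1, "y1": y1},
             "image_change_tolerance": tol}
            for name, (x0, y0, x1, y1), tol in selections
        ]
    }
    return json.dumps(payload)

# Example: a low-sensitivity vegetation region and a high-sensitivity ridge line.
print(serialize_regions([("vegetation", (10, 40, 210, 190), 4000),
                         ("ridge",      (0, 300, 640, 400), 500)]))
```
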
  • regions may be automatically selected and given a particular sensitivity by the system 100 . Such selections and sensitivities may be determined by system 100 based on historical data. For example, the system 100 may automatically determine that an area with numerous trees should be given a low sensitivity, while an area with little-to-no vegetation should be given a high sensitivity. Similarly, in an area where security is particularly important (e.g., of a stockyard, border area, building installation, or other facility or environment), the surrounding area may be given a higher sensitivity than other areas.
  • Embodiments may alternatively or additionally use one or more descriptive filters to distinguish between background changes and new foreground objects in an image.
  • Using filters can reduce the number of times or the complexity of computationally expensive algorithms that may need to be used for object classification. This can allow object classification to be, at least partially, performed at a low-power, limited resource (e.g. limited memory, limited processing power, etc.) system. Even on higher power, higher resource systems, such functionality can reduce power consumption.
  • an image may differ from a background image which may indicate a new foreground object in an image or simply a change to the background.
  • Various environmental inputs and/or various filters may be used to filter out background changes which appear as foreground objects.
  • new foreground objects can be filtered out when it can be determined that they are not foreground objects of interest.
  • one embodiment uses a distance map to determine distance and size of a foreground object, i.e. a detected change in an image.
  • This distance map may be generated, for example, using a method similar to that described above for calculating a probable “z” depth value.
  • a size filter may be used to filter out objects based on their size. For example, it may be determined that only animals and/or people within a field of view of the thermal camera are objects of interest. Thus, all objects detected that are bigger than an animal or person may be filtered out.
  • Another filter may be based on the aspect ratio, pixels, or number of edges of the candidate foreground object to determine a probability of it being a specific object type. For example, the aspect ratio of a car when viewed from the front is significantly different than the aspect ratio of a motorcycle when viewed from the front, thus filtering out cars, but not motorcycles, or vice versa, may be possible. Accordingly, a user may identify which types of objects are of interest to the user, then use their associated sizes to filter out objects that are not of interest.
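  • As a rough sketch of the size and aspect-ratio checks just described: the per-pixel depth map gives an approximate real-world size for a detected blob, which can then be tested against the size and shape ranges of object types the user cares about. The scale constant and thresholds below are illustrative assumptions, not calibrated values.

```python
def approximate_size_m(bbox, depth_m, meters_per_pixel_at_1m=0.002):
    """Very rough real-world width/height of a blob from its pixel size and depth."""
    x0, y0, x1, y1 = bbox
    width_m = (x1 - x0) * meters_per_pixel_at_1m * depth_m
    height_m = (y1 - y0) * meters_per_pixel_at_1m * depth_m
    return width_m, height_m

def passes_size_filter(bbox, depth_m, max_height_m=2.5):
    """Keep only objects no taller than a person or large animal (illustrative)."""
    _, height_m = approximate_size_m(bbox, depth_m)
    return height_m <= max_height_m

def passes_aspect_ratio_filter(bbox, min_ratio=0.25, max_ratio=1.0):
    """Keep blobs whose width/height ratio is plausible for a standing person or animal."""
    x0, y0, x1, y1 = bbox
    ratio = (x1 - x0) / max(1, (y1 - y0))
    return min_ratio <= ratio <= max_ratio
```
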
  • a correlation filter is used to determine if a potential foreground object matches an object in a background model.
  • the background model can be examined for a matching object in the vicinity of the potential foreground object. This can help to identify that perhaps a tree has swayed, a large rock has been moved, or some other movement has occurred that may not be of interest to a user rather than having detected a new foreground object. Accordingly, a false detection and subsequent false alarm to a user may be avoided by using a correlation filter.
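  • A minimal sketch of a correlation filter, assuming OpenCV is available: search a small neighborhood of the background image for a patch that closely matches the candidate, which would suggest an existing background object merely shifted (e.g., a swaying tree) rather than a new object appearing. The search margin and match threshold are illustrative.

```python
import cv2

def matches_background(background_gray, frame_gray, bbox, margin=20, threshold=0.9):
    """Return True if the candidate patch correlates strongly with nearby background."""
    x0, y0, x1, y1 = bbox
    patch = frame_gray[y0:y1, x0:x1]
    h, w = background_gray.shape[:2]
    sx0, sy0 = max(0, x0 - margin), max(0, y0 - margin)
    sx1, sy1 = min(w, x1 + margin), min(h, y1 + margin)
    search = background_gray[sy0:sy1, sx0:sx1]
    if search.shape[0] < patch.shape[0] or search.shape[1] < patch.shape[1]:
        return False
    result = cv2.matchTemplate(search, patch, cv2.TM_CCOEFF_NORMED)
    return float(result.max()) >= threshold   # strong match -> likely background motion
```
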
  • An edge detection filter may be used in a similar manner.
  • the edge detection filter may be based on the number of edges of a foreground object. For example, if animal movement were of interest, animals have a large number of outline edges compared to an automobile.
  • a filter may be configured to filter out objects with few edges while allowing the appearance of foreground objects with a larger number of edges to cause a user to be notified of changes to an environment.
  • a system could count the edges in the foreground candidate to see if it is complex enough to be an object of interest to a user.
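  • A sketch of such an edge-count check, again assuming OpenCV: compare the Canny edge count of the candidate region against the same region of the background model, and keep only candidates that add a substantial number of edges. The Canny thresholds and minimum edge delta are illustrative assumptions.

```python
import cv2

def edge_count(gray, bbox, low=50, high=150):
    """Count Canny edge pixels inside a bounding box of a grayscale image."""
    x0, y0, x1, y1 = bbox
    edges = cv2.Canny(gray[y0:y1, x0:x1], low, high)
    return int((edges > 0).sum())

def is_complex_new_object(background_gray, frame_gray, bbox, min_new_edges=200):
    """Flag candidates that add many edges relative to the background model."""
    delta = edge_count(frame_gray, bbox) - edge_count(background_gray, bbox)
    return delta >= min_new_edges
```
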
  • An edge filter may be particularly useful during the periods of time that the sun rises and sets because the sun itself may give the camera system a false positive. More specifically, while the sun is setting and rising, the sun may be in view of the camera, thus causing the camera to detect a new source of heat (i.e., an object) in the location of the setting or rising sun. As such, a false alarm or alert could be sent to a user that an unwanted object has entered the surveilled area.
  • To avoid such false positives, objects in the field of view (whether foreground or background objects, e.g., trees, telephone poles, cars, houses, buildings, trailers, and so forth) may be compared against a background model to ensure that the number of edges of all detected objects is the same currently as it is in the background model.
  • a particular area that is being surveilled by a thermal camera may include a building and telephone or power pole.
  • an edge detection filter may be applied in order to detect the number of edges of all the objects within the field of view of the camera.
  • the only detected objects would be the building and the pole, which would result in the same number of edges as the background model of the same area.
  • objects in the field of view may be compared against a background model to ensure that the number of edges of the detected objects is the same currently as it is in the background model, thus avoiding any false positives caused by the sun.
  • an edge filter may be automatically implemented each day during sunrise and sunset for that particular day.
  • a database containing information regarding times for sunrises and sunsets for each day of the year in particular locations may be accessed to determine the appropriate times to apply the edge detection filter.
  • an edge filter may be automatically implemented from 6:15 a.m. until 6:45 a.m. while the sun is rising and from 7:15 p.m. until 7:45 p.m. while the sun is setting on a particular day based on data contained within a particular database.
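  • The time-window behavior described above might be scheduled roughly as follows; the sunrise and sunset values would come from the per-location, per-day database mentioned, and the 15-minute half-windows mirror the 6:15-6:45 a.m. and 7:15-7:45 p.m. example.

```python
from datetime import datetime, timedelta

def edge_filter_active(now, sunrise, sunset, half_window_minutes=15):
    """Enable the edge filter only within a window around sunrise and sunset."""
    half = timedelta(minutes=half_window_minutes)
    return (sunrise - half <= now <= sunrise + half or
            sunset - half <= now <= sunset + half)

# Example with placeholder times (a real system would look these up per day/location).
day = datetime(2015, 3, 31)
sunrise = day.replace(hour=6, minute=30)
sunset = day.replace(hour=19, minute=30)
print(edge_filter_active(day.replace(hour=6, minute=20), sunrise, sunset))   # True
print(edge_filter_active(day.replace(hour=12, minute=0), sunrise, sunset))   # False
```
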
  • an edge filter may be implemented during periods of the day that may be much hotter than others (i.e., where objects may appear much hotter) and then the number of edges can tell the user that no intruder has entered the property.
  • a filter may be a thermal filter.
  • Thermal cameras capture differences in temperature. Wildlife may exhibit a different temperature difference, relative to ambient, than humans, automobiles, motorcycles, the setting sun, the rising sun, and so forth.
  • filters could be created to filter out humans and automobiles while allowing animals to be detected (or vice versa).
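  • A sketch of that thermal filter idea: compare an object's apparent temperature offset from ambient against bands that are plausible for the object classes of interest. The band values below are purely illustrative assumptions, not calibrated thermal data.

```python
def thermal_filter(object_temp_c, ambient_temp_c, bands=None):
    """Return the object classes whose temperature-above-ambient band matches.

    bands: mapping of class name -> (min_delta_c, max_delta_c) above ambient.
    """
    if bands is None:
        bands = {                     # illustrative, not calibrated values
            "person": (8.0, 20.0),
            "animal": (5.0, 25.0),
            "vehicle_engine": (20.0, 80.0),
        }
    delta = object_temp_c - ambient_temp_c
    return [name for name, (lo, hi) in bands.items() if lo <= delta <= hi]

print(thermal_filter(32.0, 18.0))   # e.g., ['person', 'animal']
```
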
  • confidence level filters can be applied.
  • a user can adjust the confidence level for any number of regions in the video frame to determine the required level of confidence to classify a foreground object. For instance, a user may assign a low confidence level to a certain region because there is a high quantity of vegetation (trees, bushes, and so forth) in the region, or for any other reason. Accordingly, in order to identify a foreground object in such a region, there must be a high level of movement detected. Filters can be applied in any order. Further, any number of filters can be included or excluded from execution. Additional filters, while not illustrated here, could nonetheless be applied to embodiments herein.
  • FIG. 5 illustrates transmission of data from the on-site camera system 150 through a wireless transmitter 504 to a wireless receiver 506 .
  • Wired transmission is of course also possible.
  • This data may include, for example, location of the object, size of the object, aspect-ratio of the object, temperature of the object, and so forth. Continuous monitoring of the object can provide additional data such as changes in size and speed.
  • alerts may also include a confidence level of the alert/object. For example, high confidence alerts may be of sufficient certainty to dispatch aircraft or other response units very early, potentially saving time, money, property, and so forth.
  • FIGS. 6-8 illustrate three different flowcharts for performing embodiments described herein. It should be noted that FIGS. 6-8 are described with frequent reference to FIG. 4 .
  • a method 600 is illustrated. The method includes acts for creating a background model for image processing to identify new foreground objects in successive video frames.
  • the method 600 includes providing a background image in a user interface (Act 610 ).
  • a background image taken at stop position S3 (illustrated in FIG. 4 ) includes at least a portion of a car and house.
  • the method further includes receiving a first user input in the user interface, wherein the first user input comprises identifying one or more different regions within the background image (Act 620 ).
  • the first user input may identify region 404 .
  • the user input may be of any suitable type. For instance, a user may identify a region using a click-and-drag rectangular selection tool using a mouse.
  • the method further includes receiving a second user input in the user interface, the second user input comprising selecting an image change tolerance for each of the one or more identified different regions (Act 630 ). For instance, the user may select an image change tolerance for region 404 that necessitates a relatively high amount of change before detecting a foreground object, because the bush in region 404 may consistently move because of wind.
  • the method 600 further includes providing the background image, information identifying the one or more different regions, and the image change tolerances to an image processor (Act 640 ).
  • the background image, the information identifying the one or more different regions, and the image change tolerances are used by the image processor to create a background model to thereby compare a successive image with the background model in order to identify one or more foreground objects within the successive image (Act 640 ).
  • a user may help create a contextually aware background model, against which successive images can be compared.
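  • Putting Acts 610-640 together, the background-model creation flow might look roughly like the sketch below; the class, function, and parameter names are assumptions used only to mirror the acts described above.

```python
class BackgroundModel:
    """Background image plus user defined regions and their change tolerances."""
    def __init__(self, background_image, regions):
        self.background_image = background_image   # Act 610: image shown to the user
        self.regions = regions                     # Acts 620/630: regions + tolerances

def create_background_model(background_image, user_regions, user_tolerances):
    """Bundle the user's selections into a model for the image processor (Act 640)."""
    regions = [
        {"coords": coords, "tolerance": tol}
        for coords, tol in zip(user_regions, user_tolerances)
    ]
    return BackgroundModel(background_image, regions)

# Example: two regions drawn in the UI with different sensitivities.
model = create_background_model(
    background_image=None,                      # stand-in for the panoramic image 402
    user_regions=[(10, 40, 210, 190), (0, 300, 640, 400)],
    user_tolerances=[4000, 500],
)
```
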
  • FIG. 7 illustrates a method of identifying a foreground object in a video frame by comparing the video frame to a background model.
  • the method 700 includes obtaining a background model (such as the background model described above in connection with FIG. 6 ), wherein the background model comprises a background image and identifies one or more user defined regions (Act 710 ).
  • Each of the one or more user defined regions can include an image change tolerance (Act 710 ), as described above.
  • the method further includes obtaining a video frame (Act 720 ) that is evaluated against the background model, such that a foreground object is identified in a region of the video frame when a score for the foreground object exceeds the image change tolerance for the region in which the foreground object is located (Act 720 ).
  • region 404 may have a low sensitivity to change because of the bush located within the region. Accordingly, in order for a foreground object to be identified within region 404 , the object must demonstrate more movement than simply the branches/leaves of the bush within region 404 swaying in the wind.
  • FIG. 8 illustrates a method, implemented at a computer system, of identifying foreground objects of interest in a video frame.
  • the method 800 includes obtaining a background model, wherein the background model comprises a background image (Act 810 ).
  • the method further includes obtaining a video frame (Act 820 ) and evaluating the video frame against the background model to identify objects, including identifying differences between the video frame and the background image (Act 830 ), as described above. Perhaps as part of the evaluation, or after performing an initial evaluation, one or more filters may be applied to the identified differences in order to identify one or more foreground objects of interest in the video frame (Act 840 ).
  • For example, the bush within region 404 may initially be identified as a potential object of interest because the wind caused it to move when comparing the background model to the video frame.
  • a size filter may be applied that specifies that objects smaller than a person are not of interest. Thus, the bush within region 404 would be filtered out rather than being reported to a user.
  • any of the filters described herein may be used.
  • a thermal camera system may allow users to identify particular regions within the field of view of the camera, and additionally, to select a particular image change tolerance for each specified region.
  • filters may be applied to any potential identified objects to filter out objects that are not of interest.
  • a user can customize the thermal imaging camera system to identify only objects of interest to the user and further alert the user only in instances where such an object has been identified.

Abstract

Creating a background model for image processing to identify new foreground objects in successive video frames. A method includes providing a background image in a user interface. The method further includes receiving a first user input in the user interface that comprises an identification of one or more different regions within the background image. The method further includes receiving a second user input in the user interface that comprises a selection of an image change tolerance for each of the identified different regions. The method further includes providing the background image, information identifying the different regions, and the image change tolerances to an image processor. The background image, the information identifying the different regions, and the image change tolerances are used by the image processor to create a background model to thereby compare a successive image with the background model in order to identify foreground objects within the successive image.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 62/140,942 filed Mar. 31, 2015, titled “SETTING DIFFERENT BACKGROUND MODEL SENSITIVITIES BY USER DEFINE REGIONS AND BACKGROUND FILTERS”, which is incorporated herein by reference in its entirety.
  • BACKGROUND
  • Panoramic images can be created by an array of wide angle cameras that together create up to a 360 degree field of view or by one camera with a fish eye lens or other panoramic mirror that allows for a continuous “mirror ball” image that is later flattened out by computer.
  • A relatively new means of capturing panoramic images is by continuously spinning a thermal sensor or other high speed camera at less than 60 RPM and processing the images from the camera with a computer where they may be stitched together and analyzed.
  • A common first step for performing video analysis is to develop a background model from successive video frames and then to compare new frames against that background model to look for changes that could be foreground movement. As some background objects (such as trees, banners, etc.) can have movement and change, a certain amount of tolerance should be built into the analysis so that these objects are treated as background and not foreground objects. This tolerance is typically set for the entire video image and used for all changes regardless of where they are in the video frame.
  • Relatedly, object classification in computer vision requires identifying characteristics about a foreground object that make it a likely match to a real world object, such as a person, animal or vehicle. Calculations performed to identify these characteristics can be computationally expensive, limiting the amount of analysis that can be performed on embedded or lower power systems.
  • The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one exemplary technology area where some embodiments described herein may be practiced.
  • BRIEF SUMMARY
  • One embodiment illustrated herein includes a method of creating a background model for image processing to identify new foreground objects in successive video frames. The method includes providing a background image in a user interface. The method further includes receiving a first user input in the user interface. The first user input comprises an identification of one or more different regions within the background image. The method further includes receiving a second user input in the user interface. The second user input comprises a selection of an image change tolerance for each of the one or more identified different regions. The method further includes providing the background image, information identifying the one or more different regions, and the image change tolerances to an image processor. The background image, the information identifying the one or more different regions, and the image change tolerances are used by the image processor to create a background model to thereby compare a successive image with the background model in order to identify foreground objects within the successive image.
  • In another embodiment, a method of identifying a foreground object in a video frame by comparing the video frame to a background model is illustrated. The method includes obtaining a background model. The background model comprises a background image and identifies one or more user defined regions. Each of the one or more user defined regions includes an image change tolerance. The method further includes obtaining a video frame and evaluating the video frame against the background model such that a foreground object is identified in a region of the video frame when a score for the foreground object exceeds the image change tolerance for the region in which the foreground object is located. Image change tolerance for each user defined region may be independently selected and adjusted by the user.
  • In yet another embodiment, a method of identifying foreground objects of interest in a video frame is illustrated. The method further includes obtaining a background model, wherein the background model comprises a background image. The method further includes obtaining a video frame and evaluating the video frame against the background model to identify objects. This includes identifying differences between the video frame and the background image. The method further includes applying one or more filters to the identified differences to identify one or more foreground objects of interest in the video frame.
  • This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
  • Additional features and advantages will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the teachings herein. Features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. Features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In order to describe the manner in which the above-recited and other advantages and features can be obtained, a more particular description of the subject matter briefly described above will be rendered by reference to specific embodiments which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments and are not therefore to be considered to be limiting in scope, embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
  • FIG. 1 schematically illustrates an example computing system in which the principles described herein may operate;
  • FIG. 2A schematically illustrates virtual camera positions, also referred to herein as stop positions, where the camera may rotate in a clockwise direction, with the camera pointed at stop position 1;
  • FIG. 2B schematically illustrates virtual camera positions, as in FIG. 2A, with the camera pointed at stop position 5;
  • FIG. 3 schematically illustrates an example implementation in which the video processor illustrated in FIG. 1 may operate;
  • FIG. 4 illustrates a thermal imaging camera in an environment allowing the camera to capture a plurality of images at corresponding stop positions;
  • FIG. 5 illustrates communication between a thermal imaging camera and an event dispatcher;
  • FIG. 6 illustrates a method of creating a background model for image processing to identify new foreground objects in successive video frames;
  • FIG. 7 illustrates a method of identifying a foreground object in a video frame by comparing the video frame to a background model; and
  • FIG. 8 illustrates a method of identifying foreground objects of interest in a video frame.
  • DETAILED DESCRIPTION
  • As noted above, background objects (such as trees, banners, etc.) in successive images can have movement, which represents change between images. In particular, when objects in the background move, the pixels in successive images will be different, and thus pixels in one image are changed with respect to another image. Thus, a certain amount of pixel change tolerance for such movement should be built into the analysis so that these objects are treated as background rather than foreground objects. A similar scenario occurs when the background itself changes (e.g., at sunrise or sunset), which may appear to change objects (e.g., trees, power poles, utility trailers, stockyard equipment and buildings, and the like) in the foreground, as the effect of the background sun may alter their appearance, making it appear as if a foreground object has changed when in fact it has not.
  • However, this pixel change tolerance is typically set for the entire video image and used for all changes regardless of where they occur in the video frame. Some embodiments described herein allow different background model sensitivities to be set for user defined regions in an image. Such embodiments may be designed for embedded systems with limited resources (such as limited processing power and memory), and rate the amount of variance between the background model and detected movement as a score for the candidate foreground object. The user can set up any number of regions in the image, each with a different sensitivity. The score of each change, i.e., the amount of pixel change, is compared to the tolerance of the region in which the movement is detected to determine whether the movement should be treated as a foreground change or simply as background movement (e.g., due to sunrise or sunset). This gives the user greater flexibility in determining how to treat change in video frames. This can reduce the frequency and number of false positive alerts (e.g., a conclusion that an intruder is present when none is), while also maintaining reliability, so that an alert is provided when it actually should be (e.g., when an intruder is in fact present).
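  • By way of a non-limiting illustration, the per-region comparison described above might be sketched as follows in Python. The region names, tolerance values, and the particular scoring formula (the fraction of pixels whose difference exceeds an assumed per-pixel threshold) are hypothetical; the embodiments are not limited to any particular scoring method.

        import numpy as np

        # Hypothetical per-region image change tolerances (higher = more change tolerated).
        REGION_TOLERANCES = {
            "vegetation_patch": 0.25,  # swaying bushes allowed
            "mountain_ridge": 0.02,    # should remain static
            "default": 0.05,           # regions not explicitly defined by the user
        }

        def change_score(background_region, frame_region):
            # Fraction of pixels whose absolute difference exceeds a small per-pixel threshold.
            diff = np.abs(frame_region.astype(np.int16) - background_region.astype(np.int16))
            return float(np.mean(diff > 10))  # "10 gray levels" is an assumed threshold

        def is_foreground_change(region_name, background_region, frame_region):
            tolerance = REGION_TOLERANCES.get(region_name, REGION_TOLERANCES["default"])
            return change_score(background_region, frame_region) > tolerance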
  • Also as discussed above, object classification in computer vision requires identifying characteristics about a foreground object that make it a likely match to a real world object, such as a person, animal or vehicle. Calculations performed to identify these characteristics can be computationally expensive, limiting the amount of analysis that can be performed on embedded or lower power systems. Embodiments may use a number of filters to more effectively reduce the number of objects that require CPU intensive calculation. These filters may include one or more of distance filters used to determine the relative size of the object, a cross correlation filter to determine if the object matches a background model, an edge detection filter, an aspect ratio filter, a thermal filter to filter objects based on thermal characteristics, and so forth. These filters can help to reduce false detection of foreground objects and facilitate computationally inexpensive algorithms for determining object classification.
  • In some embodiments, a background model is used to compare the new image for changes within regions of interest. Each region of interest has a change tolerance used to determine if the change is of a great enough magnitude to be considered for later filters. In this way, the two features described above can be combined. This may be particularly useful for object classification in low power systems or to otherwise conserve power in systems.
  • The following now illustrates a general environment where embodiments may be practiced. FIG. 1 illustrates a block diagram for camera system 100. Camera system 100 allows camera 150 (e.g., a thermal imaging camera) to rotate up to a full 360° around a fixed axis. The full revolution comprises a number of positions corresponding to "stops" where it is desired that an image be captured. Because of requirements related to camera calibration, particularly with a thermal imaging camera, the camera may capture images at a constant frame rate. The spectrum captured may be from about 8,000 nm to about 14,000 nm. Of course, it may be possible to employ concepts disclosed herein within systems configured to capture and use image data based on other spectrums (e.g., visible light, or higher or lower wavelengths). When capturing images at a constant frame rate, some of the images may correspond to positions between designated "stop" positions, while others will correspond to a "stop" position. As will be described in further detail hereafter, only those images corresponding to a "stop" position may be retained. The others may be discarded.
  • The positions where camera 150 captures images that will be retained are referred to herein as stop positions because camera 150 must be stationary, or momentarily stopped, in order for camera 150 to acquire a non-blurry image. A stop position can be characterized as having an angle offset relative to a designated home position (i.e., “stop” 1) at which camera 150 captures an image. In some implementations, the system may determine the home position by using a camera-mount with a hole, along with a transmissive optical encoder that can detect the home position when the hole in the camera-mount lines up with the encoder. Such a transmissive optical encoder may be a 1-bit encoder. In other implementations, a higher resolution encoder may be used to allow more granular feedback of actual camera position at any given time.
  • Camera system 100 can allow any number of stop positions per revolution. In an embodiment, the number of stop positions may be between 1 and 16. The stops may be positioned at equally spaced intervals. For example, ten stop positions, or stops, per revolution would result in ten stops that are located 36° apart. The camera system 100 may use any suitable motor mechanism for ensuring that the camera remains momentarily stationary at each stop position, so as to facilitate capture of a non-blurry image at each desired stop position. For example, a stepper motor may be employed to hold camera 150 stationary at each stop position for the appropriate amount of time to acquire an image before moving to the next stop position. Details of an exemplary stepper motor mechanism are disclosed in the inventors' PCT Patent Application Serial No. PCT/US/2014/033539, filed Apr. 9, 2014, titled STEPPER MOTOR CONTROL AND FIRE DETECTION SYSTEM, herein incorporated by reference in its entirety. Another example of a motor mechanism that may be employed in rotating the camera through a plurality of stop positions, in which the camera remains momentarily stationary at each stop position, is a mechanical cam system, e.g., as described in the inventors' U.S. Pat. No. 8,773,503, issued Jul. 8, 2014 and titled AUTOMATED PANORAMIC CAMERA AND SENSOR PLATFORM WITH COMPUTER AND OPTIONAL POWER SUPPLY, herein incorporated by reference in its entirety. Motor mechanisms as described in the above patents and applications, as well as any other suitable design, may be employed.
  • As depicted in FIG. 1, the system can utilize two processors, microprocessor 110 and video processor 140. Microprocessor 110 can manage position and timing of camera 150, while video processor 140 can process video from camera 150. Utilizing two processors allows the real-time performance of the camera 150 and motor synchronization by the microprocessor 110 to be de-coupled from the high-throughput video processor 140. Alternatively, some implementations may use one processor to manage position and timing, as well as video of the camera 150. Of course, more than two processors could alternatively be employed.
  • In an implementation, data from the camera 150 passes through slip rings 160, which allows the camera 150 to rotate as described above (continuous rotation with intermittent, very short stops). Because the periods where the camera is actually stationary are so short, the camera may appear to rotate continuously. In some cases, the camera frame rate may not exactly match the rotational speed and stop rate of the camera system 100, thus creating fractional video frames. Digital switch 130 may then be used to throw away any unwanted video frames. In other words, as described above, some captured frames may correspond to one of the stop positions, while other captured frames may be captured while the camera is rotating. Using the digital switch 130 allows the video processor 140 to sleep during those times at which unwanted video frames are discarded, thus creating better power efficiency. Of course, in other embodiments of the system, the video processor (e.g., ARM/DSP) may have little or no time to sleep.
  • Where an image is taken at each stop position, each stop position can be represented as an individual virtual camera. A virtual camera may act like a stationary camera pointed in a single direction. The camera system 100 can support any number of stop positions and corresponding number of virtual cameras. In an embodiment, system 100 may include from 1 to 16 stop positions, and 1 to 16 virtual cameras, each virtual camera associated with a particular stop position. FIG. 2A illustrates how each stop position may correlate to a physical space with the camera 150 facing stop position 1 (i.e., home position). FIG. 2B shows the camera 150 having rotated so as to be facing stop position 5. As depicted in FIGS. 2A and 2B, a numbering system used for the stop positions may increase in a clockwise direction. FIGS. 2A and 2B illustrate an exemplary configuration including 8 stop positions, although it will be understood that more or fewer stop positions may be provided. By way of example, the number of stop positions may be from 2 to about 30, from 4 to about 20, or from 6 to about 16.
  • The period in which the camera is momentarily stopped may be any suitable period of time (e.g., may depend on the characteristics of the image capture capabilities of the camera). In an embodiment, each stop period (i.e., the dwell time) may be from about 30 ms to about 120 ms (e.g., about 60 ms).
  • Each image captured by camera 150 corresponding to a stop position may be multiplexed, or muxed, into a video stream and sent to video processor 140. Each image captured by camera 150 to be retained may be sent out of the camera over the same interface. Microprocessor 110 can manage and track the angle offset (and corresponding stop position) of camera 150 when an image to be retained is captured (i.e., the image corresponds to a stop position). Even images that are to be discarded (i.e., the image does not correspond to a stop position), may be sent to digital switch 130 over the same interface as retained images. At digital switch 130, those images that are to be discarded may be separated and discarded, rather than passed on to video processor 140.
  • An exemplary implementation showing how video processor 140 may operate is described in more detail in FIG. 3. Referring to FIG. 3, video demultiplexing driver 330 separates the video stream into individual images (i.e., frames) that each correspond to a particular stop position, or virtual camera. For example, the video stream referred to at this stage may include images to be retained that were captured at different stop positions. Demultiplexing driver 330 may use position information tracked by microprocessor 110 to determine the corresponding virtual camera for each image of the video stream, allowing sorting of the images to their proper virtual camera devices. As illustrated by FIG. 3, each image can be sent to its corresponding virtual camera (301-308) for storage and future analytics (e.g., comparison of images taken adjacent in time from the same stop position), once that determination has been made.
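  • A minimal sketch of such demultiplexing, assuming the microprocessor tags each retained frame with its stop position (frames captured between stops carry no tag), might look like the following. The VirtualCamera class and the tagging scheme are illustrative assumptions, not a required implementation.

        class VirtualCamera:
            # Accumulates the frames captured at one stop position for later analytics.
            def __init__(self, stop_index):
                self.stop_index = stop_index
                self.frames = []

            def add_frame(self, frame):
                self.frames.append(frame)

        def demultiplex(tagged_frames, num_stops=8):
            # tagged_frames: iterable of (stop_index or None, frame); frames captured
            # while the camera is rotating carry stop_index None and are discarded.
            cameras = {i: VirtualCamera(i) for i in range(1, num_stops + 1)}
            for stop_index, frame in tagged_frames:
                if stop_index is None:
                    continue
                cameras[stop_index].add_frame(frame)
            return cameras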
  • If desired, images from different stop positions may be stitched together to create a panoramic image (e.g., of up to 360°). An advantage of the present embodiments is that any such stitching is optional, and is typically not carried out on-site (if at all). If done, stitching may be done off-site, allowing the total power requirements for on-site system 100 to be no more than about 10 watts, as stitching is power and computer processor intensive.
  • In some implementations, the video stream associated with a given virtual camera device may be analyzed to detect a change from a given image to a subsequent image. In other words, rather than stitching together images to create a panoramic image, the analytics carried out may involve comparison of an image with a subsequently captured image (or a previously captured image) from the same stop position (and thus the same virtual camera device) to detect any changes (e.g., a change in temperature of an object, movement of an object, etc.). For example, the analytics carried out may include fire detection, as discussed in the inventors' prior PCT Patent Application Serial No. PCT/US/2014/033539 filed Apr. 9, 2014 titled STEPPER MOTOR CONTROL AND FIRE DETECTION SYSTEM, and PCT Patent Application Serial No. PCT/US/2014/033547 filed Apr. 9, 2014 titled FIRE DETECTION SYSTEM, each of which is herein incorporated by reference in its entirety. Another example of use may include intruder detection, or perimeter monitoring to ensure security at a secure border, or the like.
  • Each virtual camera can have a video stream similar to that which would be obtained by a stationary camera placed at the corresponding stop position. The system of virtual cameras may be on-site relative to thermal imaging camera 150. In some implementations, the frame rate of a virtual camera video stream is equal to the rotational speed of motor 170, as camera 150 passes each stop once per revolution. The muxed signal frame rate may be equal to a rotational speed of motor 170 multiplied by the number of stops per revolution. For example, a system running at 30 RPM with 16 stops per revolution would have a muxed frame rate of 8 frames per second (FPS), and each virtual camera device would have a frame rate of 1/2 FPS, or 1 frame per 2 seconds (e.g., 30 RPM / 60 seconds per minute = 0.5 FPS). While rotation rates of less than 60 RPM may be typically employed, it will be appreciated that any suitable RPM may be used (e.g., either higher or lower).
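  • The frame-rate relationships above follow directly from the rotation rate and the number of stops, as this small calculation (illustrative only) shows:

        def frame_rates(rpm, stops_per_revolution):
            # Returns (muxed_fps, per_virtual_camera_fps).
            revolutions_per_second = rpm / 60.0
            muxed_fps = revolutions_per_second * stops_per_revolution
            per_camera_fps = revolutions_per_second  # each stop is visited once per revolution
            return muxed_fps, per_camera_fps

        print(frame_rates(30, 16))  # (8.0, 0.5), matching the example in the text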
  • Utilizing virtual cameras can allow analytics and any stitching to be decoupled from image acquisition. Because each virtual camera can have its own video stream, images associated with each virtual camera can be processed independently. System 100 may normally capture images in numerical (e.g., clockwise or counter-clockwise) order, but because the angle offset from home position (i.e., stop position 1) is known based on the stop number of the virtual camera, analytics or any stitching may update in any order desired.
  • Additional details of exemplary fire detection systems, including the ability to correlate geographical location data and determine depth values, are found in PCT Patent Application Serial No. PCT/US/2014/033547, filed Apr. 9, 2014, titled FIRE DETECTION SYSTEM, already incorporated by reference in its entirety.
  • Correlating geographical location data with the position of camera 150 can include, but is not limited to, correlating pixel positions of a captured image and determining depth values for pixel positions of individual thermal imaging camera images based on the geographical location of thermal imaging camera 150. Given the elevation and orientation of thermal imaging camera 150, the distance or depth value for each pixel of an image may be calculated using elevation data, for example, from the National Elevation Dataset.
  • The depth value to be associated with a given pixel can be calculated in a series of steps that determine how the ray represented by each pixel, projected from the camera across the landscape, intersects the ground. Generally, this may be achieved by using a projected camera view on a wireframe terrain model created using elevation data (e.g., from the National Elevation Dataset) to estimate where each rendered pixel of the camera view would intersect the wireframe, in order to calculate the probable "z" depth value of the bottom of each image element or pixel. Such a process may employ a loop carried out over increasing z-distances until the projected height intersects the elevation height at some distance.
  • This may be done by determining (e.g., calculating) if a ray having a length equal to the camera's height intersects the ground at the projected distance. This determination may be repeated by repeatedly increasing the ray length by a given amount (e.g., 1 decimeter) until the ground is reached (e.g., intersected) or the ray exceeds a given length (e.g., 30 kilometers). Such an excessive length may be used to help render the horizon. Data for latitude, longitude, elevation and distance of the intersection point may be stored, and the determination (e.g., calculation) may be repeated for the next pixel of a column. Progressing upwardly from the bottom, within the image, once a column of pixels reaches the horizon, the determination may move onto a new column. Such determinations or calculations may be based off variable Vertical Field of View, Horizontal Field of View, elevation and orientation. The final data set may be used to render an image that depicts distance (e.g., in gray scale) with lines placed at a given distance (e.g., every 100 meters). The determined or calculated image may be compared against an actual image for a final adjustment of the input variables. Once completed, the final result would provide a “z” depth value map that can be saved for future immediate analytics availability.
  • As an illustration of the steps described above, an image of 640×512 pixels may require the described determination to be repeated approximately 250,000 times.
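  • A simplified sketch of the ray-marching determination described above is given below (marching in horizontal distance rather than ray length, for simplicity). The elevation_at callback (terrain height sampled along the pixel's bearing, relative to the camera base) and the camera constants are assumptions for illustration; production code would sample the National Elevation Dataset and account for field of view and orientation as described.

        import math

        def pixel_depth(camera_height_m, pixel_ray_angle_rad, elevation_at,
                        step_m=0.1, max_range_m=30000.0):
            # March the pixel's ray outward in 1 decimeter steps until it meets the
            # terrain or exceeds 30 kilometers (treated as horizon).
            distance = step_m
            while distance <= max_range_m:
                # Height of the ray above the camera base at this horizontal distance;
                # angles below the horizontal are negative.
                ray_height = camera_height_m + distance * math.tan(pixel_ray_angle_rad)
                if ray_height <= elevation_at(distance):
                    return distance  # the ray has intersected the ground
                distance += step_m
            return None  # horizon / beyond maximum range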
  • Once depth values for pixel positions are determined, this may allow determination of the size or movement speed of an object captured within an image. For example, it may provide information on the size or movement speed of a wildfire, person, animal, or any other object of interest. Processing that correlates pixel positions of an image with location data and determination of a depth value associated with each pixel may be performed off-site at a remote user interface terminal. The actual depth values associated with given pixels of the images may be relayed to the camera system for storage and use on-site.
  • Relay of any data on-site to off-site or vice versa may be by any suitable mechanism, e.g., including, but not limited to satellite link or network, cellular link or network, WiFi link or network, radio transmission, hard wired link or network, etc.
  • With this background being laid, additional details are now illustrated.
  • Reference is now made to FIG. 4 which illustrates the camera 150 capturing a plurality of images at various stops and the images optionally being stitched into a panoramic image 402. While FIG. 4 illustrates the panoramic image 402, it should be appreciated that embodiments do not require the images to be stitched together, but rather, individual images can be processed by a virtual camera such as virtual cameras 301-308 illustrated in FIG. 3.
  • Embodiments may be implemented where, at a virtual camera, foreground objects are scored based on the amount of change compared to a background model. For example, a score may be generated based on the amount of changes between pixels (or edges) in a background image and the corresponding pixels (or edges) in a subsequent image. Scoring may be based, for example, on an aggregation of changes over a number of different pixels (or edges). In particular, the cumulative changes over a selected region may represent the score for the region. For example, the number of pixels or edges associated with a given object (e.g., a power pole, a utility trailer, etc.) may be determined, for later reference. The system may compare the number of pixels or edges of the object at a later time, to determine whether the object has changed (the number of pixels or edges has changed over a threshold value), or if the object is the same as before. This may be particularly useful during sunset and sunrise, when objects may appear to be changing, but upon counting the number of pixels or edges associated with such a stationary object, it will be apparent that the object has not changed, only the background.
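  • As one hypothetical way to make the pixel- or edge-count comparison concrete, a tracked object might be considered unchanged when its current count stays within a fractional tolerance of the count recorded in the background model; the 10% tolerance below is an assumed value.

        def object_unchanged(reference_count, current_count, tolerance=0.1):
            # reference_count: pixel or edge count stored in the background model.
            if reference_count == 0:
                return current_count == 0
            return abs(current_count - reference_count) / reference_count <= tolerance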
  • The particular background model can include user defined regions within a video frame where each region is assigned a sensitivity score. For example, FIG. 4 illustrates five user defined regions 404, 406, 408, 410 and 412. Regions 404, 406, 408 and 410 are selected to encompass various pieces of vegetation. As wind or other factors may cause movement in the vegetation, these regions may be configured to be less sensitive to movement than other regions are. In particular, when comparing video frames with a background image, any comparison of the defined regions 404, 406, 408, 410 and 412 in the background image to corresponding regions in subsequent video frames will be allowed a certain amount of variation without causing an alarm or other indication that there is a change in the object or background. Even though there may be a slight change due to swaying vegetation, such slight changes will not cause an event to be triggered indicating a change.
  • In contrast, the region 412 may be defined such that little or no change is allowed. The region 412 is set to define a region including a mountain range which should remain quite static. Such a region may similarly be defined around various pieces of equipment such as trailers, buildings, power poles, and the like that may generally remain stationary. Thus, this region 412 could be defined to be more sensitive to change such that even slight changes will cause an alarm or other indication of a change to be triggered.
  • Thus, embodiments may be implemented where foreground object scores are compared to the region of interest in which they were detected to determine if they are background or foreground objects. A user may have control over the sensitivity of movement detection over all coordinates of the video frame. As shown, a user-defined region of interest may span more than one virtual camera position (e.g., as region 412 does), even if no panoramic stitching is performed. Embodiments can be implemented which do not require extra memory buffers or learning time to allow background motion to be detected. Rather, a user can manually define where background motion might occur. The user can increase sensitivity by increasing the motion detection range in areas where there is little or no repetitive motion (or where none should be), while still allowing detection of motion in areas where there is a lot of repetitive motion by decreasing the sensitivity and the motion detection range for those areas. This gives a user more control over detection range and the percentage of objects falsely detected as foreground objects of interest (e.g., an intruder). In some embodiments, there may be a default level of sensitivity for all regions of the video frame that are not user defined (i.e., not selected by the user and given a specific sensitivity). For example, all regions that are not user defined may be assumed to remain relatively static and thus have a relatively high sensitivity to any changes.
  • To allow a user to define the various regions, embodiments may first create a background model. The background model may be based on a background image, such as the panoramic image 402 (or the individual images associated with stops S1-S8). The panoramic image 402 may be provided to, or created on, a user system 190. The user system 190 may be remote from the camera system 100. Thus, images may be provided from the camera system 100 to be used in creating a background model. In one embodiment, the panoramic image 402 is provided as part of a user interface at the user system 190. The user system 190 may include a user interface that allows a user to select regions on the overall panoramic image 402. For example, the user interface may include the ability to allow a user to drag and select a region using mouse graphical user interface interactions. While this would allow for selecting rectangular regions, it should be appreciated that other regions may also be selected. For example, a user could select a region bounded by an ellipse, a triangle, or some other complex shape. A user may also be able to select a region by drawing the region using any applicable graphical user interface interactions (e.g., a mouse, or a finger or stylus when using a touchscreen device, and so forth). In some embodiments, any portions of the image not selected by the user may represent a region that can be associated with an image change tolerance, as set forth in more detail below.
  • The user can also indicate, using the user interface, a sensitivity for a selected region. The selected regions can be converted to coordinates (such as Cartesian coordinates) which can then be provided back to the camera system 100 along with the sensitivities. This information can then be used by the camera system as part of the background model used to determine when indications of changes should be issued to a user.
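  • For illustration only, the user-selected regions might be serialized as Cartesian rectangles with their sensitivities and returned to the camera system in a form such as the following. The field names, example values, and JSON encoding are assumptions; the embodiments do not require any particular data format.

        from dataclasses import dataclass, asdict
        import json

        @dataclass
        class Region:
            # A user defined region in panorama pixel coordinates, with its sensitivity.
            name: str
            x: int       # top-left corner
            y: int
            width: int
            height: int
            sensitivity: float  # image change tolerance chosen in the user interface

        def export_regions(regions):
            # Serialize regions for transmission from the user system back to the camera system.
            return json.dumps([asdict(r) for r in regions])

        payload = export_regions([
            Region("bush_near_gate", 120, 340, 80, 60, sensitivity=0.25),
            Region("mountain_ridge", 0, 0, 1600, 120, sensitivity=0.02),
        ])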
  • When selecting regions, by using the stitched image, or other similar image or set of images, a user can select multiple regions in a given stop. Alternatively or additionally, a user can select a region that spans multiple stops. In some embodiments, regions may be automatically selected and given a particular sensitivity by the system 100. Such selections and sensitivities may be determined by system 100 based on historical data. For example, the system 100 may automatically determine that an area with numerous trees should be given a low sensitivity, while an area with little-to-no vegetation should be given a high sensitivity. Similarly, in an area where security is particularly important (e.g., of a stockyard, border area, building installation, or other facility or environment), the surrounding area may be given a higher sensitivity than other areas.
  • Embodiments may alternatively or additionally use one or more descriptive filters to distinguish between background changes and new foreground objects in an image. Using filters can reduce the number of times or the complexity of computationally expensive algorithms that may need to be used for object classification. This can allow object classification to be, at least partially, performed at a low-power, limited resource (e.g. limited memory, limited processing power, etc.) system. Even on higher power, higher resource systems, such functionality can reduce power consumption.
  • Illustrating now an example, an image may differ from a background image, which may indicate either a new foreground object in the image or simply a change to the background. Various environmental inputs and/or various filters may be used to filter out background changes which appear as foreground objects. Alternatively, new foreground objects can be filtered out when it can be determined that they are not foreground objects of interest.
  • Illustratively, one embodiment uses a distance map to determine distance and size of a foreground object, i.e. a detected change in an image. This distance map may be generated, for example, using a method similar to that described above for calculating a probable “z” depth value. Accordingly, a size filter may be used to filter out objects based on their size. For example, it may be determined that only animals and/or people within a field of view of the thermal camera are objects of interest. Thus, all objects detected that are bigger than an animal or person may be filtered out.
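  • One possible sketch of such a size filter, assuming each candidate carries a bounding-box height in pixels and a centroid that indexes into the precomputed depth map, is shown below. The small-angle size estimate, the camera constant, and the maximum size threshold are illustrative assumptions.

        def physical_height_m(pixel_height, distance_m, radians_per_pixel=0.0017):
            # Small-angle estimate: apparent height (pixels) * angular resolution * range.
            # radians_per_pixel is an assumed constant (vertical field of view / image rows).
            return pixel_height * radians_per_pixel * distance_m

        def size_filter(candidates, depth_map, max_height_m=2.5):
            # Keep only candidates no taller than a person or large animal.
            kept = []
            for obj in candidates:
                row, col = obj["centroid"]
                distance = float(depth_map[row][col])  # from the "z" depth value map
                if physical_height_m(obj["pixel_height"], distance) <= max_height_m:
                    kept.append(obj)
            return kept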
  • Another filter may be based on the aspect ratio, pixels, or number of edges of the candidate foreground object to determine a probability of it being a specific object type. For example, the aspect ratio of a car when viewed from the front is significantly different than the aspect ratio of a motorcycle when viewed from the front; thus, filtering out cars but not motorcycles, or vice versa, may be possible. Accordingly, a user may identify which types of objects are of interest to the user, and then use the associated aspect ratios or sizes to filter out objects that are not of interest.
  • In one embodiment a correlation filter is used to determine if a potential foreground object matches an object in a background model. In particular, the background model can be examined for a matching object in the vicinity of the potential foreground object. This can help to identify that perhaps a tree has swayed, a large rock has been moved, or some other movement has occurred that may not be of interest to a user rather than having detected a new foreground object. Accordingly, a false detection and subsequent false alarm to a user may be avoided by using a correlation filter.
  • An edge detection filter may be used in a similar manner. The edge detection filter may be based on the number of edges of a foreground object. For example, if animal movement were of interest, animals have a large number of outline edges when compared to an automobile. Thus, a filter may be configured to filter out objects with few edges while allowing the appearance of foreground objects with a larger number of edges to cause a user to be notified of changes to an environment. Thus, a system could count the edges in the foreground candidate to see if it is complex enough to be an object of interest to a user.
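  • A crude edge-complexity check along these lines might count gradient pixels in the candidate's image patch, as in the following sketch; the gradient threshold and minimum edge count are assumed values rather than part of the described embodiments.

        import numpy as np

        def edge_pixel_count(gray_patch, grad_threshold=30.0):
            # Count pixels whose gradient magnitude exceeds a threshold (a crude edge detector).
            gy, gx = np.gradient(gray_patch.astype(float))
            magnitude = np.hypot(gx, gy)
            return int(np.count_nonzero(magnitude > grad_threshold))

        def passes_edge_filter(candidate_patch, min_edges=150):
            # Keep only candidates with enough outline edges to be an object of interest.
            return edge_pixel_count(candidate_patch) >= min_edges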
  • An edge filter may be particularly useful during the periods of time that the sun rises and sets because the sun itself may give the camera system a false positive. More specifically, while the sun is setting and rising, the sun may be in view of the camera, thus causing the camera to detect a new source of heat (i.e., an object) in the location of the setting or rising sun. As such, a false alarm or alert could be sent to a user that an unwanted object has entered the surveilled area. Accordingly, by implementing an edge filter during those periods of time, objects (e.g., trees, telephone poles, cars, houses, buildings, trailers, and so forth) in the field of view (foreground or background objects) may be compared against a background model to ensure that the number of edges of all detected objects is the same currently as it is in the background model.
  • For example, a particular area that is being surveilled by a thermal camera may include a building and a telephone or power pole. When the sun sets in that particular area and comes within the field of view of the thermal camera, an edge detection filter may be applied in order to detect the number of edges of all the objects within the field of view of the camera. As such, the only detected objects would be the building and the pole, which would result in the same number of edges as the background model of the same area. Thus, it would be determined that no new objects had entered the area (as the number of edges is the same). Accordingly, any false positives occurring because of a sunrise or sunset may be mitigated or eliminated.
  • In some embodiments, an edge filter may be automatically implemented each day during sunrise and sunset for that particular day. In such an embodiment, a database containing information regarding times for sunrises and sunsets for each day of the year in particular locations may be accessed to determine the appropriate times to apply the edge detection filter. For example, an edge filter may be automatically implemented from 6:15 a.m. until 6:45 a.m. while the sun is rising and from 7:15 p.m. until 7:45 p.m. while the sun is setting on a particular day based on data contained within a particular database. Similarly, an edge filter may be implemented during periods of the day that may be much hotter than others (i.e., where objects may appear much hotter) and then the number of edges can tell the user that no intruder has entered the property.
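  • A minimal sketch of scheduling the edge filter around sunrise and sunset is shown below; the lookup table stands in for whatever sunrise/sunset database a deployment actually uses, and the 15-minute window is an assumed value.

        from datetime import datetime, timedelta, time

        # Assumed per-day sunrise/sunset lookup; in practice this would come from a
        # database keyed by date and site location.
        SUN_TABLE = {
            "2016-03-29": {"sunrise": time(6, 30), "sunset": time(19, 30)},
        }

        def edge_filter_active(now, window_minutes=15):
            # Apply the edge filter within +/- window_minutes of sunrise and sunset.
            entry = SUN_TABLE.get(now.strftime("%Y-%m-%d"))
            if entry is None:
                return False
            for event in ("sunrise", "sunset"):
                event_dt = datetime.combine(now.date(), entry[event])
                if abs(now - event_dt) <= timedelta(minutes=window_minutes):
                    return True
            return False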
  • Similar analysis and conclusions may be possible by counting the number of pixels associated with a given object, particularly where the object is expected to be stationary (a pole, a building, a trailer, or the like).
  • In another example, a filter may be a thermal filter. Thermal cameras capture differences in temperature. Wildlife may exhibit a different temperature difference relative to ambient temperature than humans, automobiles, motorcycles, the setting sun, the rising sun, and so forth. Thus, filters could be created to filter out humans and automobiles while allowing animals to be detected (or vice versa).
  • Alternatively or additionally, confidence level filters can be applied. A user can adjust the confidence level for any number of regions in the video frame to determine the level of confidence required to classify a foreground object. For instance, a user may assign a low confidence level to a certain region because there is a high quantity of vegetation (trees, bushes, and so forth) in the region, or for any other reason. Accordingly, in order to identify a foreground object in such a region, there must be a high level of movement detected. Filters can be applied in any order. Further, any number of filters can be included or excluded from execution. Additional filters, while not illustrated here, could nonetheless be applied to embodiments herein.
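  • Since the filters can be applied in any order and any subset may be used, a simple chain such as the following (illustrative only) captures the idea; cheap filters can run first so that expensive classification only sees the candidates that survive.

        def apply_filters(candidates, filters):
            # filters: ordered list of callables, each taking and returning a candidate list.
            for f in filters:
                if not candidates:
                    break  # nothing left to classify
                candidates = f(candidates)
            return candidates

        # Example with hypothetical filter callables:
        # survivors = apply_filters(detections, [size_filter_fn, edge_filter_fn, thermal_filter_fn])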
  • As illustrated in FIG. 5, whether or not a filter is applied, once one or more objects of interest are detected an alert or alarm (used interchangeably herein) may be sent to an alert dispatcher 502. More specifically, FIG. 5 illustrates transmission of data from the on-site camera system 150 through a wireless transmitter 504 to a wireless receiver 506. Wired transmission is of course also possible. This data may include, for example, location of the object, size of the object, aspect-ratio of the object, temperature of the object, and so forth. Continuous monitoring of the object can provide additional data such as changes in size and speed. These alerts may also include a confidence level of the alert/object. For example, high confidence alerts may be of sufficient certainty to dispatch aircraft or other response units very early, potentially saving time, money, property, and so forth.
  • FIGS. 6-8 illustrate three different flowcharts for performing embodiments described herein. It should be noted that FIGS. 6-8 are described with frequent reference to FIG. 4. Referring now to FIG. 6, a method 600 is illustrated. The method includes acts for creating a background model for image processing to identify new foreground objects in successive video frames. The method 600 includes providing a background image in a user interface (Act 610). For example, a background image taken at stop position S3 (illustrated in FIG. 4) includes at least a portion of a car and a house. The method further includes receiving a first user input in the user interface, wherein the first user input comprises identifying one or more different regions within the background image (Act 620). For example, the first user input may identify region 404. The user input may be of any suitable type. For instance, a user may identify a region using a click-and-drag rectangular selection tool with a mouse.
  • The method further includes receiving a second user input in the user interface, the second user input comprising selecting an image change tolerance for each of the one or more identified different regions (Act 630). For instance, the user may select an image change tolerance for region 404 that necessitates a relatively high amount of change before detecting a foreground object, because the bush in region 404 may consistently move because of wind. The method 600 further includes providing the background image, information identifying the one or more different regions, and the image change tolerances to an image processor (Act 640). The background image, the information identifying the one or more different regions, and the image change tolerances are used by the image processor to create a background model to thereby compare a successive image with the background model in order to identify one or more foreground objects within the successive image (Act 640). Thus, by selecting specific regions within a field of view of camera 150 and giving each specified region an image change tolerance, a user may help create a contextually aware background model against which successive images can be compared.
  • FIG. 7 illustrates a method of identifying a foreground object in a video frame by comparing the video frame to a background model. The method 700 includes obtaining a background model (such as the background model described above in connection with FIG. 6), wherein the background model comprises a background image and identifies one or more user defined regions (Act 710). Each of the one or more user defined regions can include an image change tolerance (Act 710), as described above.
  • The method further includes obtaining a video frame (Act 720) that is evaluated against the background model, such that a foreground object is identified in a region of the video frame when a score for the foreground object exceeds the image change tolerance for the region in which the foreground object is located (Act 720). Thus, referring back to the example of FIG. 6 and region 404, region 404 may have a low sensitivity to change because of the bush located within the region. Accordingly, in order for a foreground object to be identified within region 404, the object must demonstrate more movement than simply the branches/leaves of the bush within region 404 swaying in the wind.
  • FIG. 8 illustrates a method, implemented at a computer system, of identifying foreground objects of interest in a video frame. The method 800 includes obtaining a background model, wherein the background model comprises a background image (Act 810). The method further includes obtaining a video frame (Act 820) and evaluating the video frame against the background model to identify objects, including identifying differences between the video frame and the background image (Act 830), as described above. Perhaps as part of the evaluation, or after performing an initial evaluation, one or more filters may be applied to the identified differences in order to identify one or more foreground objects of interest in the video frame (Act 840). For instance, once an object has been identified during the evaluation of a video frame against the background model, a filter, such as a size filter, may be used to filter out objects that are not of interest. More specifically, the bush within region 404 may initially be identified as a potential object of interest because the wind caused it to move when comparing the background model to the video frame. However, a size filter may be applied that specifies that objects smaller than a person are not of interest. Thus, the bush within region 404 would be filtered out rather than being reported to a user. Similarly, any of the filters described herein may be used.
  • Accordingly, a thermal camera system may allow users to identify particular regions within the field of view of the camera, and additionally, to select a particular image change tolerance for each specified region. Notably, filters may be applied to any potential identified objects to filter out objects that are not of interest. In this way, a user can customize the thermal imaging camera system to identify only objects of interest to the user and further alert the user only in instances where such an object has been identified.
  • The present invention may be embodied in other specific forms without departing from its spirit or characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (20)

What is claimed is:
1. A method of creating a background model for image processing to identify new foreground objects in successive video frames, the method comprising:
providing a background image in a user interface;
receiving a first user input in the user interface, the first user input comprising identifying one or more different regions within the background image;
receiving a second user input in the user interface, the second user input comprising selecting an image change tolerance for each of the one or more identified different regions; and
providing the background image, information identifying the one or more different regions, and the image change tolerances to an image processor, wherein the background image, the information identifying the one or more different regions, and the image change tolerances are used by the image processor to create a background model to thereby compare a successive image with the background model in order to identify one or more foreground objects within the successive image.
2. The method of claim 1, wherein the background image is a 360 degree panorama image.
3. The method of claim 1, wherein receiving the first user input comprises receiving a drag and select user input that identifies a region boundary.
4. The method of claim 1, wherein the information identifying the one or more different regions comprises Cartesian coordinates derived from the user input identifying the one or more different regions within the background image.
5. The method of claim 1, wherein identifying one or more foreground objects within the successive image comprises determining that a change from the background image to the successive image in a region exceeds an image change tolerance for the region such that the foreground object is identified in the region when a score for the foreground object exceeds the image change tolerance for the region in which the foreground object is located.
6. The method of claim 1, wherein the image processor further applies one or more filters to the foreground object to filter out certain foreground objects to identify one or more foreground objects of interest in the successive video frame.
7. The method of claim 6, wherein one of the one or more filters comprises an edge detection filter.
8. The method of claim 6, wherein one of the one or more filters comprises a confidence level filter.
9. A method of identifying a foreground object in a video frame by comparing the video frame to a background model, the method comprising:
obtaining a background model, wherein the background model comprises a background image and identifies one or more user defined regions, each of the one or more user defined regions including an image change tolerance;
obtaining a video frame; and
evaluating the video frame against the background model such that a foreground object is identified in a region of the video frame when a score for the foreground object exceeds the image change tolerance for the region in which the foreground object is located.
10. The method of claim 9, wherein the background image is a 360 degree panorama image.
11. The method of claim 9, wherein the score comprises an amount of change between pixels when comparing the background model to the video frame containing the foreground object.
12. The method of claim 9, further comprising using one or more filters to identify foreground objects.
13. The method of claim 9, wherein one of the one or more filters comprises an edge detection filter.
14. A method, implemented at a computer system, of identifying foreground objects of interest in a video frame, the method comprising:
obtaining a background model, wherein the background model comprises a background image;
obtaining a video frame;
evaluating the video frame against the background model to identify objects, including identifying differences between the video frame and the background image; and
applying one or more filters to the identified differences to identify one or more foreground objects of interest in the video frame.
15. The method of claim 14, wherein the background image and video frame are 360 degree panoramic images.
16. The method of claim 14, wherein the one or more filters includes an aspect ratio filter that filters out objects based on their aspect ratios.
17. The method of claim 14, wherein the one or more filters includes a correlation filter that filters out an object based on the object's similarity to a background model object that is contained within the background image.
18. The method of claim 14, wherein the one or more filters includes an edge detection filter that filters out one or more objects based on a number of edges that the one or more objects have.
19. The method of claim 14, wherein the one or more filters includes a size filter that filters out one or more objects based on the sizes of the one or more objects.
20. The method of claim 14, wherein the one or more filters includes a thermal filter that filters out one or more objects based on their thermal characteristics.
US15/562,798 2015-03-31 2016-03-29 Setting different background model sensitivities by user defined regions and background filters Active US10366509B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/562,798 US10366509B2 (en) 2015-03-31 2016-03-29 Setting different background model sensitivities by user defined regions and background filters

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201562140942P 2015-03-31 2015-03-31
PCT/US2016/024694 WO2016160794A1 (en) 2015-03-31 2016-03-29 Setting different background model sensitivities by user defined regions and background filters
US15/562,798 US10366509B2 (en) 2015-03-31 2016-03-29 Setting different background model sensitivities by user defined regions and background filters

Publications (2)

Publication Number Publication Date
US20180286075A1 true US20180286075A1 (en) 2018-10-04
US10366509B2 US10366509B2 (en) 2019-07-30

Family

ID=57005335

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/562,798 Active US10366509B2 (en) 2015-03-31 2016-03-29 Setting different background model sensitivities by user defined regions and background filters

Country Status (3)

Country Link
US (1) US10366509B2 (en)
MX (1) MX368852B (en)
WO (1) WO2016160794A1 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190095737A1 (en) * 2017-09-28 2019-03-28 Ncr Corporation Self-service terminal (sst) facial authentication processing
US20190126941A1 (en) * 2017-10-31 2019-05-02 Wipro Limited Method and system of stitching frames to assist driver of a vehicle
US10282847B2 (en) * 2016-07-29 2019-05-07 Otis Elevator Company Monitoring system of a passenger conveyor and monitoring method thereof
CN109822564A (en) * 2019-01-14 2019-05-31 巨轮(广州)机器人与智能制造有限公司 A kind of construction method of vision system
US10366509B2 (en) * 2015-03-31 2019-07-30 Thermal Imaging Radar, LLC Setting different background model sensitivities by user defined regions and background filters
US10380805B2 (en) 2017-12-10 2019-08-13 International Business Machines Corporation Finding and depicting individuals on a portable device display
US10574886B2 (en) 2017-11-02 2020-02-25 Thermal Imaging Radar, LLC Generating panoramic video for video management systems
US10861163B2 (en) * 2018-01-17 2020-12-08 Sensormatic Electronics, LLC System and method for identification and suppression of time varying background objects
US10887610B2 (en) * 2016-09-23 2021-01-05 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Transform block coding
US20210256729A1 (en) * 2020-02-13 2021-08-19 Proprio, Inc. Methods and systems for determining calibration quality metrics for a multicamera imaging system
US11417080B2 (en) * 2017-10-06 2022-08-16 Nec Corporation Object detection apparatus, object detection method, and computer-readable recording medium
USD968499S1 (en) 2013-08-09 2022-11-01 Thermal Imaging Radar, LLC Camera lens cover
US11514582B2 (en) * 2019-10-01 2022-11-29 Axis Ab Method and device for image analysis
US11601605B2 (en) 2019-11-22 2023-03-07 Thermal Imaging Radar, LLC Thermal imaging camera device

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3565260A1 (en) * 2016-12-28 2019-11-06 Sony Corporation Generation device, identification information generation method, reproduction device, and image generation method
EP3564917B1 (en) * 2018-05-04 2020-07-01 Axis AB A method for detecting motion in a video sequence
CN108848306B (en) * 2018-06-25 2021-03-02 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
US11068718B2 (en) 2019-01-09 2021-07-20 International Business Machines Corporation Attribute classifiers for image classification

Family Cites Families (151)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3336810A (en) 1964-04-28 1967-08-22 Sperry Rand Corp Antifriction support means for gyroscopes
US3648384A (en) 1970-04-07 1972-03-14 Borg Warner Turntable drive release mechanism
US3769501A (en) 1973-01-02 1973-10-30 Gte Sylvania Inc Selective actuating mechanism for percussive photoflash lamp array
US4977323A (en) 1973-08-16 1990-12-11 The United States Of America As Represented By The Secretary Of The Navy 360 degree infrared surveillance with panoramic display
US3883788A (en) 1973-10-26 1975-05-13 Sperry Sun Well Surveying Co Gyroscope orientation controller
US4263513A (en) 1979-02-28 1981-04-21 Compagnie Generale De Radiologie Apparatus for panoramic radiography
GB2104321B (en) 1981-07-21 1985-07-10 Shinshu Seiki Kk Method of and apparatus for controlling a stepping motor
US4602857A (en) 1982-12-23 1986-07-29 James H. Carmel Panoramic motion picture camera and method
DE3706735A1 (en) 1986-03-03 1987-09-10 Canon Kk DEVICE FOR ADJUSTING THE OPTICAL SYSTEM OF A CAMERA
US4710691A (en) 1986-03-27 1987-12-01 Anacomp, Inc. Process and apparatus for characterizing and controlling a synchronous motor in microstepper mode
JPH02188196A (en) 1989-01-13 1990-07-24 Copal Co Ltd Controlling method for driving of stepper motor
US4922275A (en) 1989-09-25 1990-05-01 Burle Technologies, Inc. Automatic panoramic camera mount
US6088534A (en) 1990-09-19 2000-07-11 Minolta Co., Ltd. Camera having an aperture controllable shutter
US5652643A (en) 1992-03-17 1997-07-29 Sony Corporation Photographic and video image system
US5650813A (en) 1992-11-20 1997-07-22 Picker International, Inc. Panoramic time delay and integration video camera system
BR9301438A (en) 1993-04-05 1994-11-15 Petroleo Brasileiro Sa Process of preparation of spherical ziegler catalyst for polymerization of alpha-olefins, spherical catalyst, process of obtaining spherical polyethylene of very high molecular weight and spherical polyethylene of very high by molecular
US5453618A (en) 1994-01-31 1995-09-26 Litton Systems, Inc. Miniature infrared line-scanning imager
US5598207A (en) 1994-04-12 1997-01-28 Hughes Electronics Camera pointing mechanism using proximate magnetic sensing
US20070061735A1 (en) 1995-06-06 2007-03-15 Hoffberg Steven M Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US5752113A (en) 1995-12-22 1998-05-12 Borden; John Panoramic indexing camera mount
USD381997S (en) 1996-01-30 1997-08-05 Sony Corporation Video camera
US5790183A (en) 1996-04-05 1998-08-04 Kerbyson; Gerald M. High-resolution panoramic television surveillance system with synoptic wide-angle field of view
US6133943A (en) 1996-09-30 2000-10-17 Intel Corporation Method and apparatus for producing a composite image
JP4332231B2 (en) 1997-04-21 2009-09-16 ソニー株式会社 Imaging device controller and imaging system
US6229546B1 (en) 1997-09-09 2001-05-08 Geosoftware, Inc. Rapid terrain model generation with 3-D object features and user customization interface
US6034716A (en) 1997-12-18 2000-03-07 Whiting; Joshua B. Panoramic digital camera system
US6304284B1 (en) 1998-03-31 2001-10-16 Intel Corporation Method of and apparatus for creating panoramic or surround images using a motion sensor equipped camera
US20030025599A1 (en) 2001-05-11 2003-02-06 Monroe David A. Method and apparatus for collecting, sending, archiving and retrieving motion video and still images and notification of detected events
WO2000017318A1 (en) 1998-09-18 2000-03-30 Kerouac Paul E In vessel composting process and apparatus
US6023588A (en) 1998-09-28 2000-02-08 Eastman Kodak Company Method and apparatus for capturing panoramic images with range data
US6215115B1 (en) * 1998-11-12 2001-04-10 Raytheon Company Accurate target detection system for compensating detector background levels and changes in signal environments
JP2000156806A (en) 1998-11-20 2000-06-06 Sony Corp Video camera
JP2000243062A (en) 1999-02-17 2000-09-08 Sony Corp Device and method for video recording and centralized monitoring and recording system
JP3620327B2 (en) 1999-02-17 2005-02-16 富士通株式会社 Electronic equipment
US6539162B1 (en) 1999-03-30 2003-03-25 Eastman Kodak Company Photographing a panoramic image produced from a captured digital image
US6738073B2 (en) 1999-05-12 2004-05-18 Imove, Inc. Camera system with both a wide angle view and a high resolution view
US6327327B1 (en) 1999-09-27 2001-12-04 Picker International, Inc. Multi-channel segmented slip ring
JP4439045B2 (en) 1999-10-07 2010-03-24 日本電産コパル株式会社 Step motor drive device
US6678001B1 (en) 1999-11-01 2004-01-13 Elbex Video Ltd. Ball shaped camera housing with simplified positioning
JP2001154295A (en) 1999-11-30 2001-06-08 Matsushita Electric Ind Co Ltd Omniazimuth vision camera
US20010026684A1 (en) 2000-02-03 2001-10-04 Alst Technical Excellence Center Aid for panoramic image creation
US7525567B2 (en) 2000-02-16 2009-04-28 Immersive Media Company Recording a stereoscopic image of a wide field of view
AU2001264723A1 (en) 2000-05-18 2001-11-26 Imove Inc. Multiple camera video system which displays selected images
US6731799B1 (en) * 2000-06-01 2004-05-04 University Of Washington Object segmentation with background extraction and moving boundary techniques
USD435577S (en) 2000-07-27 2000-12-26 Mcbride Richard L Video camera housing
US6677982B1 (en) 2000-10-11 2004-01-13 Eastman Kodak Company Method for three dimensional spatial panorama formation
US20110058036A1 (en) 2000-11-17 2011-03-10 E-Watch, Inc. Bandwidth management and control
US6586898B2 (en) 2001-05-01 2003-07-01 Magnon Engineering, Inc. Systems and methods of electric motor control
US6782363B2 (en) * 2001-05-04 2004-08-24 Lucent Technologies Inc. Method and apparatus for performing real-time endpoint detection in automatic speech recognition
KR200252100Y1 (en) 2001-08-07 2001-11-23 정운기 Bullet camera for closed circuit television
US6948402B1 (en) 2001-09-12 2005-09-27 Centricity Corporation Rotary work table with cycloidal drive gear system
JP2003123157A (en) 2001-10-15 2003-04-25 Masahiro Aoki Building provided with environmental disaster preventing function and environment security system
US6788198B2 (en) 2002-03-12 2004-09-07 Bob F. Harshaw System for verifying detection of a fire event condition
US7259496B2 (en) 2002-04-08 2007-08-21 University Of North Carolina At Charlotte Tunable vibratory actuator
AU2002329039A1 (en) 2002-07-16 2004-02-02 Gs Gestione Sistemi S.R.L. System and method for territory thermal monitoring
JP4048907B2 (en) 2002-10-15 2008-02-20 セイコーエプソン株式会社 Panorama composition of multiple image data
US20040075741A1 (en) 2002-10-17 2004-04-22 Berkey Thomas F. Multiple camera image multiplexer
USD482712S1 (en) 2003-01-07 2003-11-25 Joe Hsu Digital camera
US7157879B2 (en) 2003-01-30 2007-01-02 Rensselaer Polytechnic Institute On-hardware optimization of stepper-motor system performance
US20040179098A1 (en) 2003-02-25 2004-09-16 Haehn Craig S. Image reversing for infrared camera
USD486847S1 (en) 2003-04-09 2004-02-17 Sony Corporation Video camera
US6809887B1 (en) 2003-06-13 2004-10-26 Vision Technologies, Inc Apparatus and method for acquiring uniform-resolution panoramic images
US8824730B2 (en) * 2004-01-09 2014-09-02 Hewlett-Packard Development Company, L.P. System and method for control of video bandwidth based on pose of a person
US7436438B2 (en) 2004-03-16 2008-10-14 Creative Technology Ltd. Digital still camera and method of forming a panoramic image
US7487029B2 (en) 2004-05-21 2009-02-03 Pratt & Whitney Canada Method of monitoring gas turbine engine operation
US6991384B1 (en) 2004-05-27 2006-01-31 Davis Robert C Camera tripod rotating head
JP2006013832A (en) 2004-06-25 2006-01-12 Nippon Hoso Kyokai (NHK) Video photographing apparatus and video photographing program
JP2006013923A (en) 2004-06-25 2006-01-12 Sony Corp Surveillance apparatus
JP2006033257A (en) 2004-07-14 2006-02-02 Fujitsu Ltd Image distribution apparatus
US7381952B2 (en) 2004-09-09 2008-06-03 Flir Systems, Inc. Multiple camera systems and methods
USD520548S1 (en) 2004-10-26 2006-05-09 Inventec Multimedia & Telecom Corporation Image recorder
US20060179463A1 (en) * 2005-02-07 2006-08-10 Chisholm Alpin C Remote surveillance
TWI427440B (en) * 2005-04-06 2014-02-21 Kodak Graphic Comm Canada Co Methods and apparatus for correcting banding of imaged regular patterns
JP2006333133A (en) 2005-05-26 2006-12-07 Sony Corp Imaging apparatus and method, program, program recording medium and imaging system
CN1971336A (en) 2005-11-22 2007-05-30 Samsung Electronics Co., Ltd. Light deflecting apparatus
JP4740723B2 (en) 2005-11-28 2011-08-03 Fujitsu Limited Image analysis program, recording medium storing the program, image analysis apparatus, and image analysis method
US7460773B2 (en) 2005-12-05 2008-12-02 Hewlett-Packard Development Company, L.P. Avoiding image artifacts caused by camera vibration
USD543644S1 (en) 2006-02-08 2007-05-29 Koninklijke Philips Electronics N.V. Master traffic lamp
JP2007271869A (en) 2006-03-31 2007-10-18 Digital Service International Co Ltd Map creation method using image processing, apparatus therefor, and computer program
US9369679B2 (en) 2006-11-07 2016-06-14 The Board Of Trustees Of The Leland Stanford Junior University System and process for projecting location-referenced panoramic images into a 3-D environment model and rendering panoramic images from arbitrary viewpoints within the 3-D environment model
US20080159732A1 (en) 2006-12-29 2008-07-03 Stuart Young Positioning device
US8203714B2 (en) 2007-03-13 2012-06-19 Thomas Merklein Method for the camera-assisted detection of the radiation intensity of a gaseous chemical reaction product and uses of said method and corresponding device
EP2562578B1 (en) 2007-03-16 2017-06-14 Kollmorgen Corporation System for panoramic image processing
JP2008236632A (en) 2007-03-23 2008-10-02 Hitachi Ltd Image processing apparatus and method thereof
US7949172B2 (en) 2007-04-27 2011-05-24 Siemens Medical Solutions Usa, Inc. Iterative image processing
US7863851B2 (en) 2007-08-22 2011-01-04 National Instruments Corporation Closed loop stepper motor control
US8041116B2 (en) * 2007-09-27 2011-10-18 Behavioral Recognition Systems, Inc. Identifying stale background pixels in a video analysis system
KR20090067762A (en) 2007-12-21 2009-06-25 Samsung Electro-Mechanics Co., Ltd. Camera module
US8355042B2 (en) 2008-10-16 2013-01-15 Spatial Cam Llc Controller in a camera for creating a panoramic image
US20100097444A1 (en) 2008-10-16 2010-04-22 Peter Lablans Camera System for Creating an Image From a Plurality of Images
TW201001338A (en) * 2008-06-16 2010-01-01 Huper Lab Co Ltd Method of detecting moving objects
US8223206B2 (en) 2008-10-15 2012-07-17 Flir Systems, Inc. Infrared camera filter wheel systems and methods
EP2359193B1 (en) 2008-12-05 2013-02-13 Micronic Mydata AB Rotating arm for writing an image on a workpiece
US7991575B2 (en) 2009-01-08 2011-08-02 Trimble Navigation Limited Method and system for measuring angles based on 360 degree images
KR101670282B1 (en) * 2009-02-10 2016-10-28 Thomson Licensing Video matting based on foreground-background constraint propagation
GB0908200D0 (en) 2009-05-13 2009-06-24 Red Cloud Media Ltd Method of simulation of a real physical environment
US8744121B2 (en) * 2009-05-29 2014-06-03 Microsoft Corporation Device for identifying and tracking multiple humans over time
CN101719299B (en) 2009-11-10 2012-03-28 Tianjin Puhai New Technology Co., Ltd. Alarm system and method for fire and combustible gas
KR20110052124A (en) 2009-11-12 2011-05-18 Samsung Electronics Co., Ltd. Method for generating and referencing panorama image and mobile terminal using the same
US8320621B2 (en) * 2009-12-21 2012-11-27 Microsoft Corporation Depth projector system with integrated VCSEL array
WO2011085535A1 (en) 2010-01-12 2011-07-21 Tsai Chwei-jei PET bottle lid and manufacturing method thereof
US8905311B2 (en) 2010-03-11 2014-12-09 Flir Systems, Inc. Infrared camera with infrared-transmissive dome systems and methods
JP2011199750A (en) 2010-03-23 2011-10-06 Olympus Corp Image capturing terminal, external terminal, image capturing system, and image capturing method
JP2011239195A (en) 2010-05-11 2011-11-24 Sanyo Electric Co Ltd Electronic apparatus
US8625897B2 (en) 2010-05-28 2014-01-07 Microsoft Corporation Foreground and background image segmentation
CN103250405B (en) 2010-07-16 2016-08-17 Dual Aperture International Co., Ltd. Flash system for multiple aperture imaging
USD640721S1 (en) 2010-08-09 2011-06-28 Scott Satine Cannon-shaped, low-light video camera for marine use
EP2612491A1 (en) 2010-08-31 2013-07-10 Raul Goldemann Rotary image generator
JP5645566B2 (en) 2010-09-17 2014-12-24 Canon Inc. Imaging apparatus, control method therefor, and program
JP5249297B2 (en) 2010-09-28 2013-07-31 Sharp Corporation Image editing device
US8681151B2 (en) 2010-11-24 2014-03-25 Google Inc. Rendering and navigating photographic panoramas with depth information in a geographic information system
US8823707B2 (en) 2010-11-24 2014-09-02 Google Inc. Guided navigation through geo-located panoramas
JP2012113204A (en) 2010-11-26 2012-06-14 Canon Inc Imaging device
US20120133639A1 (en) 2010-11-30 2012-05-31 Microsoft Corporation Strip panorama
CA140587S (en) 2010-12-03 2012-01-20 Riegl Laser Measurement Sys Camera
US8717381B2 (en) 2011-01-11 2014-05-06 Apple Inc. Gesture mapping for image filter input parameters
US20120194564A1 (en) 2011-01-31 2012-08-02 White Christopher J Display with secure decompression of image signals
GB201102794D0 (en) * 2011-02-17 2011-03-30 Metail Ltd Online retail system
CN102176270A (en) 2011-02-25 2011-09-07 Guangzhou SAT Electric Power Infrared Technology Co., Ltd. Safety monitoring and fire alarming integrated system and method
CN102971770B (en) * 2011-03-31 2016-02-10 Panasonic Corporation Omnidirectional image display device and image drawing method for rendering stereoscopic images
US8970665B2 (en) 2011-05-25 2015-03-03 Microsoft Corporation Orientation-based generation of panoramic fields
CN102280005B (en) 2011-06-09 2014-10-29 Guangzhou SAT Infrared Technology Co., Ltd. Forest fire early-warning system and method based on infrared thermal imaging technology
KR101073076B1 (en) 2011-06-10 2011-10-12 Changsung Ace Industry Co., Ltd. Fire monitoring system and method using compound camera
US8773501B2 (en) 2011-06-20 2014-07-08 Duco Technologies, Inc. Motorized camera with automated panoramic image capture sequences
JP5875265B2 (en) 2011-07-04 2016-03-02 Canon Inc. Stepping motor drive control device, drive control method, drive control system, and optical apparatus
US8718922B2 (en) 2011-07-28 2014-05-06 Navteq B.V. Variable density depthmap
US9000371B2 (en) 2011-08-26 2015-04-07 Custom Scene Technology, Inc. Camera, computer program and method for measuring thermal radiation and thermal rates of change
US9274595B2 (en) * 2011-08-26 2016-03-01 Reincloud Corporation Coherent presentation of multiple reality and interaction models
US20130249947A1 (en) * 2011-08-26 2013-09-26 Reincloud Corporation Communication using augmented reality
US9127597B2 (en) 2011-09-23 2015-09-08 The Boeing Company Sensor system
US9116011B2 (en) 2011-10-21 2015-08-25 Here Global B.V. Three dimensional routing
US9171384B2 (en) 2011-11-08 2015-10-27 Qualcomm Incorporated Hands-free augmented reality for wireless communication devices
US9111147B2 (en) * 2011-11-14 2015-08-18 Massachusetts Institute Of Technology Assisted video surveillance of persons-of-interest
AU2011265429B2 (en) 2011-12-21 2015-08-13 Canon Kabushiki Kaisha Method and system for robust scene modelling in an image sequence
US9024765B2 (en) 2012-01-11 2015-05-05 International Business Machines Corporation Managing environmental control system efficiency
US20140340427A1 (en) 2012-01-18 2014-11-20 Logos Technologies Llc Method, device, and system for computing a spherical projection image based on two-dimensional images
WO2013109976A1 (en) 2012-01-20 2013-07-25 Thermal Imaging Radar, LLC Automated panoramic camera and sensor platform with computer and optional power supply
USD695809S1 (en) 2012-06-01 2013-12-17 Panasonic Corporation Camera
US9361730B2 (en) * 2012-07-26 2016-06-07 Qualcomm Incorporated Interactions of tangible and augmented reality objects
CN104036226B (en) 2013-03-04 2017-06-27 Lenovo (Beijing) Co., Ltd. Object information acquisition method and electronic device
WO2014159726A1 (en) * 2013-03-13 2014-10-02 Mecommerce, Inc. Determining dimension of target object in an image using reference object
US20140282275A1 (en) * 2013-03-15 2014-09-18 Qualcomm Incorporated Detection of a zooming gesture
EP2984640B1 (en) 2013-04-09 2019-12-18 Thermal Imaging Radar, LLC Fire detection system
CN105122633B (en) 2013-04-09 2019-04-05 热成像雷达有限责任公司 Step motor control system and method
US9165444B2 (en) 2013-07-26 2015-10-20 SkyBell Technologies, Inc. Light socket cameras
JP6581086B2 (en) 2013-08-09 2019-09-25 Thermal Imaging Radar, LLC Method for analyzing thermal image data using multiple virtual devices and method for correlating depth values with image pixels
USD728655S1 (en) 2013-12-10 2015-05-05 Isaac S. Daniel Covert recording alarm device for vehicles
US9756234B2 (en) * 2014-07-18 2017-09-05 Intel Corporation Contrast detection autofocus using multi-filter processing and adaptive step size selection
WO2016103566A1 (en) * 2014-12-25 2016-06-30 日本電気株式会社 Image processing method and image processing device
US10366509B2 (en) * 2015-03-31 2019-07-30 Thermal Imaging Radar, LLC Setting different background model sensitivities by user defined regions and background filters
USD776181S1 (en) 2015-04-06 2017-01-10 Thermal Imaging Radar, LLC Camera

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD968499S1 (en) 2013-08-09 2022-11-01 Thermal Imaging Radar, LLC Camera lens cover
US10366509B2 (en) * 2015-03-31 2019-07-30 Thermal Imaging Radar, LLC Setting different background model sensitivities by user defined regions and background filters
US10282847B2 (en) * 2016-07-29 2019-05-07 Otis Elevator Company Monitoring system of a passenger conveyor and monitoring method thereof
US10887610B2 (en) * 2016-09-23 2021-01-05 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Transform block coding
US20190095737A1 (en) * 2017-09-28 2019-03-28 NCR Corporation Self-service terminal (SST) facial authentication processing
US10679082B2 (en) * 2017-09-28 2020-06-09 NCR Corporation Self-Service Terminal (SST) facial authentication processing
US11417080B2 (en) * 2017-10-06 2022-08-16 Nec Corporation Object detection apparatus, object detection method, and computer-readable recording medium
US20190126941A1 (en) * 2017-10-31 2019-05-02 Wipro Limited Method and system of stitching frames to assist driver of a vehicle
US11108954B2 (en) 2017-11-02 2021-08-31 Thermal Imaging Radar, LLC Generating panoramic video for video management systems
US10574886B2 (en) 2017-11-02 2020-02-25 Thermal Imaging Radar, LLC Generating panoramic video for video management systems
US10380805B2 (en) 2017-12-10 2019-08-13 International Business Machines Corporation Finding and depicting individuals on a portable device display
US10832489B2 (en) 2017-12-10 2020-11-10 International Business Machines Corporation Presenting location based icons on a device display
US10546432B2 (en) 2017-12-10 2020-01-28 International Business Machines Corporation Presenting location based icons on a device display
US10521961B2 (en) 2017-12-10 2019-12-31 International Business Machines Corporation Establishing a region of interest for a graphical user interface for finding and depicting individuals
US10861163B2 (en) * 2018-01-17 2020-12-08 Sensormatic Electronics, LLC System and method for identification and suppression of time varying background objects
CN109822564A (en) * 2019-01-14 2019-05-31 Greatoo (Guangzhou) Robot and Intelligent Manufacturing Co., Ltd. Method for constructing a vision system
US11514582B2 (en) * 2019-10-01 2022-11-29 Axis Ab Method and device for image analysis
TWI821597B (en) * 2019-10-01 2023-11-11 瑞典商安訊士有限公司 Method and device for image analysis
US11601605B2 (en) 2019-11-22 2023-03-07 Thermal Imaging Radar, LLC Thermal imaging camera device
US20210256729A1 (en) * 2020-02-13 2021-08-19 Proprio, Inc. Methods and systems for determining calibration quality metrics for a multicamera imaging system

Also Published As

Publication number Publication date
WO2016160794A1 (en) 2016-10-06
MX368852B (en) 2019-10-18
MX2017012505A (en) 2018-01-30
US10366509B2 (en) 2019-07-30

Similar Documents

Publication Publication Date Title
US10366509B2 (en) Setting different background model sensitivities by user defined regions and background filters
US10970859B2 (en) Monitoring method and device for mobile target, monitoring system and mobile robot
US10949995B2 (en) Image capture direction recognition method and server, surveillance method and system and image capture device
US9886776B2 (en) Methods for analyzing thermal image data using a plurality of virtual devices
US11232685B1 (en) Security system with dual-mode event video and still image recording
US20160042621A1 (en) Video Motion Detection Method and Alert Management
CN109816745B (en) Human body heat map display method and related products
US9390604B2 (en) Fire detection system
US20170032192A1 (en) Computer-vision based security system using a depth camera
US9582731B1 (en) Detecting spherical images
US20210241468A1 (en) Systems and methods for intelligent video surveillance
US20050073585A1 (en) Tracking systems and methods
JP2004534315A (en) Method and system for monitoring moving objects
US10769909B1 (en) Using sensor data to detect events
JP6182676B2 (en) Noise map production method and apparatus
WO2021120591A1 (en) Systems and methods for adjusting a monitoring device
CN112733690A (en) Method and device for detecting objects thrown from heights, and electronic equipment
US11302156B1 (en) User interfaces associated with device applications
CN102843551A (en) Motion detection method and system, and service server
CN114202646A (en) Infrared image smoking detection method and system based on deep learning
CN114140745A (en) Method, system, device and medium for detecting personnel attributes of construction site
CN111627049B (en) Method and device for determining objects thrown from heights, storage medium, and processor
CN111210464A (en) System and method for raising an alarm when a person falls into water, based on convolutional neural network and image fusion
CN108876824B (en) Target tracking method, device and system and dome camera
CN109841022A (en) Target motion trajectory detection and warning method, system, and storage medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: THERMAL IMAGING RADAR, LLC, UTAH

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JONES, LAWRENCE RICHARD;LEMBKE, BRYCE HAYDEN;SABLAK, SEZAI;AND OTHERS;SIGNING DATES FROM 20160401 TO 20160412;REEL/FRAME:043730/0099

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 4