WO2017132074A1 - System and method for targeted imaging from collection platforms - Google Patents

System and method for targeted imaging from collection platforms

Info

Publication number
WO2017132074A1
WO2017132074A1 (PCT/US2017/014472)
Authority
WO
WIPO (PCT)
Prior art keywords
system
image
platform
imager
imaging
Prior art date
Application number
PCT/US2017/014472
Other languages
French (fr)
Inventor
David Wayne RUSSELL
Original Assignee
Russell David Wayne
Priority date
Filing date
Publication date
Priority to US201662287276P
Priority to US62/287,276
Application filed by Russell David Wayne
Publication of WO2017132074A1

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING; COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL (root of all classifications below)
    • G06T 5/005 Retouching; Inpainting; Scratch removal (via G06T 5/00 Image enhancement or restoration › G06T 5/001 Image restoration)
    • G06T 2207/10021 Stereoscopic video; Stereoscopic image sequence (via G06T 2207/00 Indexing scheme for image analysis or image enhancement › G06T 2207/10 Image acquisition modality › G06T 2207/10016 Video; Image sequence)
    • G06T 2207/10028 Range image; Depth image; 3D point clouds (via G06T 2207/10 Image acquisition modality)
    • G06T 2207/30232 Surveillance (via G06T 2207/30 Subject of image; Context of image processing)
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle (via G06T 2207/30 Subject of image › G06T 2207/30248 Vehicle exterior or interior)

Abstract

In many cultures, the fear of unwarranted invasions of privacy, and the right to avoid them, are widespread. This invention discloses a system and methodology to specify a four-dimensional boundary within which judicial approval to image has been granted for a given period of time; only objects and persons within that boundary are imaged by the platform. Objects and persons outside the boundary are automatically blurred to preserve a reasonable expectation of privacy.

Description

SPECIFICATION

TITLE OF INVENTION

System and Method for Targeted Imaging from Collection Platforms

INVENTORS

David Wayne Russell, Winter Garden, Florida (USA)

CROSS-REFERENCE TO RELATED APPLICATIONS

US 62/287,276, filed 1/26/2016

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

Not Applicable

REFERENCE TO SEQUENCE LISTING, A TABLE, OR A COMPUTER PROGRAM LISTING COMPACT DISC APPENDIX

Not Applicable

FIELD

[0001] This invention relates generally to the field of photographic imaging and more specifically to targeted imaging from an autonomous platform.

BACKGROUND

[0002] In many systems, such as but not limited to autonomous vehicles, piloted drones, and other vehicles, government entities face strong resistance to unwarranted imaging of persons and property. Notable exceptions exist, such as but not limited to traffic cameras, metropolitan street cameras, and law enforcement officer body cameras.

[0003] For some time, television videos depicting persons who had not granted permission have been blurred to obscure their features. In autonomous systems, for example without limitation, a drone flying over a boat to check for smugglers or contraband may unwittingly image a person innocently sunbathing on deck.

[0004] While surveillance of a suspect residence is being carried out, the neighbors may also be imaged. In the State of Florida, legislation exists to the effect that no law enforcement agency may use an unmanned vehicle to gather any type of information. To protect civil liberties, a system and methodology is desired whereby those elements of an image or video stream not covered by judicial oversight and/or warrants may be automatically deleted or blurred in real time, before the images are presented to law enforcement officers.

BRIEF SUMMARY OF THE INVENTION

[0005] In order to meet these requirements, the system must first be able to automatically recognize the difference between what is allowed to be imaged and what is not. In some cases this may be as simple as a set of geographic coordinates that are the 3D equivalent of a street address. In other cases, a codified set of parameters that are the imaging-system equivalent of "probable cause" may be required.

[0006] Utilizing a 3D camera system, and/or a 3D model of the environment, and/or a parametric model of a recognition trigger, it is possible to distinguish elements within and outside of a boundary established by judicial oversight and a warrant; alternatively, a scene analysis system may be triggered by recognition of a probable cause scenario or parameter.

[0007] This processing may be done within the camera processing system itself or by post-processing before the image is stored or forwarded.

BRIEF DESCRIPTION OF DRAWINGS

[0008] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical components or features.

[0009] The novel features believed characteristic of the illustrative embodiments are set forth in the appended claims. The illustrative embodiments, however, as well as a preferred mode of use, further objectives, and features thereof, will best be understood by reference to the following detailed description of illustrative embodiments of the present disclosure when read in conjunction with the accompanying drawings, wherein:

[0010] FIG. 1 depicts a block diagram of a Targeted Image Collection Platform.

[0011] FIG. 2 depicts an example of the determination of a 3D image differentiating In-Box and Out-of-Box.

DETAILED DESCRIPTION OF INVENTION

[0012] The following detailed description illustrates embodiments of the invention by way of example and not by way of limitation. The description clearly enables one skilled in the art to make and use the disclosure, and describes several embodiments, adaptations, variations, alternatives, and uses of the disclosure, including what is currently believed to be the best mode of carrying out the disclosure. The disclosure is described as applied to an exemplary embodiment, namely systems and methods for targeted imaging. However, it is contemplated that this disclosure has general application to vehicle management systems in industrial, commercial, military, and residential applications.

[0013] As used herein, an element or step recited in the singular and preceded with the word "a" or "an" should be understood as not excluding plural elements or steps, unless such exclusion is explicitly recited. Furthermore, references to "one embodiment" of the present invention are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.

[0014] In one embodiment an Unmanned or Autonomous Aerial Vehicle (UAV or AAV) is selected as the camera platform. In other embodiments the platform could be a human, animal, land vehicle, or stationary camera mount of some structural type. The platform may be mobile and able to change the pan, tilt, and zoom aspect of the camera and/or the camera may have a pan, tilt, zoom mounting of its own independent of the platform. The imager itself may be a combination of spectral imaging sensors and may include without limitation a 3D imaging system, Forward Looking Infra-Red (FLIR), Ultraviolet (UV) and other sensors alone or in combination.

[0015] In one embodiment, the camera system is loaded with a 3-dimensional description of the area within which imaging is allowed. A time value (start, stop, or multiple combinations of time and/or location) may also be included as part of the system parameters. The description may also be specified as a base-plus-offset combination, with the base being a GPS or other coordinate-system point and the offset providing X, Y, Z, Cartesian, or other coordinate offsets from that point.
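As a concrete illustration of such a description, a minimal Python sketch follows. The class name, field layout, GPS-style base point, and helper method are illustrative assumptions, not part of the disclosure.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import List, Tuple

    @dataclass
    class AllowedVolume:
        """Hypothetical encoding of a warrant-defined imaging boundary:
        a base coordinate, offsets describing the allowed volume, and
        the time windows during which imaging is permitted."""
        base_lat: float                                # base point latitude (GPS or other system)
        base_lon: float                                # base point longitude
        base_alt_m: float                              # base point altitude, meters
        offsets_m: List[Tuple[float, float, float]]    # (x, y, z) vertex offsets from the base, meters
        time_windows: List[Tuple[datetime, datetime]]  # (start, stop) pairs when imaging is allowed

        def active_at(self, t: datetime) -> bool:
            """True if time t falls within any allowed window."""
            return any(start <= t <= stop for start, stop in self.time_windows)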

[0016] In the first pass, the imaging system begins with its own known coordinate location in space, to an accuracy within the agreed limits of the judicial warrant, legislative requirements, loaded parameters, or standard practices. From this point the local area is imaged and a comparison is made in three dimensions to determine which part, if any, of the acceptable imaging area is visible in the image. If no part of the acceptable area is visible, the camera and/or the platform may initiate movement of the platform, the imager, or both to acquire the appropriate location.

[0017] In one embodiment the camera may be active for some time, and potentially over significant distance, to reach the point of active imaging, or the platform may be in a patrol loop covering an area until a target is acquired for imaging.

[0018] In one embodiment the system is composed of a vehicle platform and the imaging system. The vehicle platform, while it may use cameras to determine its location in space and for obstacle avoidance, is independent of the mission imaging system. The platform system in this embodiment is not allowed to store any images in a manner that may be retrieved by outside entities, nor to transmit images without encryption or sufficient safeguards of the privacy of the persons and properties being imaged. One skilled in the art could see that different imaging systems, including non-imaging sensors, could be utilized for the platform independent of the mission imaging system without altering the intent or uniqueness of this invention.

[0019] The platform may be required to fly from a central depot or nearby position to the imaging area. In other embodiments the platform may be carried, driven, towed, or otherwise transported to the imaging area while activated.

[0020] As long as the mission imager is active, each frame is processed to determine whether any part of the allowed image area impinges on the frame. If it does not, the frame is simply dumped or not stored in memory. Once some part of the image is within the area, the 3D camera system is used to determine which areas of the image are within the allowed area for each frame. Pixels or 3D object models of elements outside the allowed area are then blurred to preserve their privacy.

[0021] Blurring may be accomplished by a number of methods, such as but not limited to Gaussian distribution, averaging, Huffman encoding, fractals, or Fourier Transform. The platform may continue to move around or within the allowed area from any angle, its internal position being known at each point, which allows the camera to continue functioning even within a structure or in areas with diverse topography, such as forests.
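As a concrete illustration of this per-frame filter, a minimal sketch follows. The allowed_mask input, a boolean image-sized array marking which pixels the 3D system has placed inside the allowed area, is an assumed helper whose computation is platform-specific; Gaussian blurring via OpenCV stands in for any of the blurring methods named above.

    from typing import Optional

    import cv2
    import numpy as np

    def process_frame(frame: np.ndarray,
                      allowed_mask: Optional[np.ndarray]) -> Optional[np.ndarray]:
        """Per-frame privacy filter (sketch). frame is a color image,
        allowed_mask a boolean H x W array of in-area pixels. Returns
        None when the frame should be dumped rather than stored."""
        if allowed_mask is None or not allowed_mask.any():
            return None  # no part of the allowed area in view: discard the frame
        # Blur the whole frame, then keep only the in-area pixels sharp.
        blurred = cv2.GaussianBlur(frame, (31, 31), 0)
        return np.where(allowed_mask[..., None], frame, blurred)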

[0022] Methods for the computation of the convergence of two images taken by different cameras are well described in the literature. One familiar with the art would understand that the particular sensor utilized for implementing the 3D imager would not alter the fundamental intent of this invention.

[0023] If a four-dimensional (temporal) requirement is added to the imager, all frames taken outside of an allowed time slot are discarded before being stored to memory, independent of the imaging system's determination of whether it is within the allowed area.
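Reusing the hypothetical AllowedVolume sketch from above, this temporal gate reduces to a few lines:

    from datetime import datetime
    from typing import Optional

    import numpy as np

    def temporal_filter(frame: np.ndarray, t: datetime,
                        volume: "AllowedVolume") -> Optional[np.ndarray]:
        """Discard frames captured outside every allowed time window,
        regardless of the spatial test (sketch)."""
        return frame if volume.active_at(t) else None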

[0024] Referring now to the invention in more detail, in Fig. 1 there is shown an overall diagram of a targeted imaging system. The different illustrative embodiments recognize and take into account a number of different considerations. "A number", as used herein with reference to items, means one or more items. For example, "a number of different considerations" means one or more different considerations. "Some number", as used herein with reference to items, may mean zero or more items.

[0025] FIG. 1 depicts a block diagram of one embodiment of a targeted imaging system. The data flow begins with a 3D imaging system 100, which may contain some combination of a 3D single-lens camera system, stereoscopic imager, LIDAR, infra-red or multi-spectral imager, and visible-light or RGB imaging system.

[0026] The imager's parameters such as but not limited to pan, tilt, zoom, spectral distribution, and frame rate are controlled by an image control system 110 which in turn receives information from the 3D scene analysis system 120. This forms a feedback control loop from the 3D imager, through the analytic system, to the imager control and back to the 3D imager.
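A hedged sketch of that loop, with the three subsystems reduced to assumed interfaces (the object and method names are illustrative, not a real API), might read:

    def feedback_loop(imager, analyzer, controller):
        """Feedback control loop of FIG. 1 (sketch): the 3D imager (100)
        feeds the scene analysis system (120), whose output drives the
        image control system (110), which in turn re-aims the imager."""
        while imager.is_active():
            frame3d = imager.capture()          # 3D imaging system 100
            scene = analyzer.analyze(frame3d)   # 3D scene analysis 120
            controller.apply(imager, scene)     # pan/tilt/zoom/frame-rate updates 110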

[0027] In one embodiment the imager control and analytics system may be implemented as separate subsystems consisting of some combination of hardware and software controllers, or some combination of the subsystem functions may be implemented within the 3D imager itself.

[0028] The defined image area and/or time is described and stored within the system memory in volatile or non-volatile fashion 130. This information, along with the current location of the imaging platform 140 and information about the imaging platform itself 150, is combined to calculate the final image, which is stored for retrieval in memory 160 and/or transmitted to some receiving system 170.

[0029] The accuracy of the location data may be determined by sensors or processes such as but not limited to Global Positioning Satellites (GPS), Enhanced GPS, RF signal triangulation, offsets from other known location beacons, IR or visible-light landmarks, or distance offsets from the objective itself or a known point in space (a base-plus-offset calculation). The platform sensor array is necessary because a flying platform, for example, may induce a tilt of up to 180 degrees in the imaging system in either direction, which affects the calculation of the scene analytics.

[0030] In FIG. 2, an example of the geometric relationships of the scene analytics is presented. The imager 200 is depicted in this embodiment as being pointed in the general direction of the target objective 205. One familiar with the art would recognize that this 2D depiction can be extrapolated to three or more dimensions without altering the fundamental nature of the invention.

[0031] The imager will have some angle of lens aperture such that a wider or narrower cone of view is expressed towards the target, shown in this embodiment as left viewing angle 215 and right viewing angle 220. Other aspects of the imager, such as but not limited to pan, tilt, zoom, and the relative angle of the platform, will alter the view of the target. As the 3D location and orientation of the platform are known, the adjustments of the image given the angles on the imager can be calculated by a number of different measures, such as but not limited to Euler angle rotations.
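As one illustration of that adjustment, the sketch below rotates a camera-frame view ray into the world frame using Z-Y-X Euler angles built from the platform's yaw, pitch, and roll; the particular angle convention is an assumption, and any consistent one would serve.

    import numpy as np

    def ray_to_world(ray_cam: np.ndarray, yaw: float,
                     pitch: float, roll: float) -> np.ndarray:
        """Rotate a view ray from the camera frame into the world frame
        using Z-Y-X Euler angles (radians). Sketch only; the convention
        must match however the platform reports its attitude."""
        cz, sz = np.cos(yaw), np.sin(yaw)
        cy, sy = np.cos(pitch), np.sin(pitch)
        cx, sx = np.cos(roll), np.sin(roll)
        Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])  # yaw about z
        Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])  # pitch about y
        Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])  # roll about x
        return Rz @ Ry @ Rx @ ray_cam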

[0032] The imager also requires a measurement of the distance to any imaged object, such as the objective target. In this embodiment the 3D distance to the object is a native function of the single, dual, or multiple lens array 3D camera system. This may be expressed as the distance to a given pixel in the image, or as a full 3D model of the viewed objective returned from the imager. Distance may also be measured by secondary techniques such as but not limited to LIDAR or RADAR, and a number of 3D image inference techniques such as but not limited to distance-by-focus, distance-by-defocus, spatial phase integration and IR interferometry may be employed to calculate the objective distance.

[0033] The allowed imaging area in this embodiment is defined by a circle 210 expressed mathematically, such as but not limited to a point with radius or diameter distances. In other embodiments the area may be defined by line segments, splines, interconnected points, or polygons, either convex or concave, and may be implicitly or explicitly limited in one or more dimensions as a default without specification.

[0034] Once the current location of the imager and its n-dimensional offset with respect to the objective and the allowed target zone are established, it becomes a relatively simple calculation to determine whether any object, such as Object A 235, is within the zone. A virtual line from the imager to the object must intersect the target area specification at one or more locations. The algorithm for determining whether a point is within a convex or concave polygon is well described in the literature, as are algorithms for standard or Platonic geometry and solids.
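For reference, a minimal 2D version of that well-known test, the even-odd ray-casting point-in-polygon check, is sketched below; in practice it would be applied per slice or extended to the 3D solids mentioned above.

    from typing import List, Tuple

    def point_in_polygon(pt: Tuple[float, float],
                         poly: List[Tuple[float, float]]) -> bool:
        """Even-odd ray casting: cast a horizontal ray from pt to the
        right and count how many polygon edges it crosses. Works for
        convex and concave polygons alike."""
        x, y = pt
        inside = False
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            if (y1 > y) != (y2 > y):  # edge straddles the ray's height
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside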

[0035] In this example the line for Object A intersects the target area at entry 225 and exit 230. Because part of the objective blocks part of Object A, only the part of Object A visible in the resulting image would be blurred. If a full 3D model of the area is returned, all of Object A would be obscured in the 3D model.

[0036] Similarly, Object B 240 partially obscures the objective along the line from Object B's corner 245 across the objective at 260, along line 250. Object B would then be obscured, partially blocking the view of the objective. In another embodiment, a line tangent to the circle, or known to be behind the target area at some given point, could be artificially generated as a plane of obscured data 255 in the image or model, obscuring all data behind the plane.

[0037] As the imager moves, the relationships between the elements visible to the imager are maintained, and similar calculations for each frame or time slice of the resultant model can deliver clear views of anything within the target area while obscuring anything outside of it.

[0038] In other embodiments, fixed offsets or offsets calculated from initial target frames may be utilized such that portions of the initial objective may be imaged and then, based on that image data, other areas may be imaged. For example, the system might be programmed to image only the driver of an automobile and obscure the passengers. If this first image establishes probable cause to image the entire vehicle, that imaging may occur in a second step.

[0039] The concept of probable cause imaging may be quite complex. As an example without limitation, consider a UAV flying over a forested area, imaging the forest for analysis. There may be no explicit violation of privacy unless the platform happens to fly over a person in the forest. The application-specific recognition parameters and conditions which qualify as probable cause could be reviewed and approved by judicial oversight, for example without limitation an observed human carrying a firearm. The scene analysis system is capable of recognizing the firearm within the image. If the time of the image is within hunting season, or no firearm is observed, the human is blurred out of the image. If a firearm is observed and it is not hunting season, the requirement for probable cause is fulfilled and the image is returned and possibly flagged for review.
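A hedged sketch of that rule logic follows; the firearm flag is assumed to be produced upstream by the scene analysis system, and the function name, return values, and season bounds are illustrative only.

    from datetime import date
    from typing import Tuple

    def release_decision(firearm_detected: bool, when: date,
                         hunting_season: Tuple[date, date]) -> str:
        """Judicially approved rule from the forest example (sketch):
        blur the person unless a firearm is seen outside hunting season."""
        season_start, season_end = hunting_season
        in_season = season_start <= when <= season_end
        if firearm_detected and not in_season:
            return "return-and-flag"  # probable cause met: preserve image, flag for review
        return "blur-person"          # no probable cause: blur the person from the image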

[0040] In another example of a maritime patrol, judicial oversight may define probable cause as roughly rectangular bundles observable on deck, the presence of firearms, or other criteria. The patrol UAV might begin imaging a boat, and if none of these criteria is met, the boat and its occupants are blurred from observation. The images may be stored and reviewed at a later time, or video or still frames may be transmitted, streamed, or otherwise transferred directly to humans or other systems tasked with observation of the mission imager.

[0041] In another embodiment the mission imager might detect an image parameter, with or without probable cause, which qualifies as exigent circumstances and this may override the programming of the blurring system and cause behavioral or parametric changes to the platform such as but not limited to going into a tracking mode, transmitting messages, or other combinations of behaviors. These types of incidents may also trigger additional capabilities such as but not limited to external control of the imaging system and new instructions for the platform.

[0042] For example without limitation, the platform may simply be recording a traffic stop when the scene analysis system detects an "officer down" condition, or a platform may be patrolling beaches and the scene analysis system detects a shark near a crowded beach.

[0043] Probable cause image recognition may be accomplished by a variety of methodologies and systems including without limitation any combination of deep learning systems, hardware accelerators, software, graphics processors, artificial intelligence, and/or neural networks.

[0044] In another embodiment any number of frames might be imaged while the platform is en route to the target area, and these frames, once it is determined that no part of the target area is imaged, can simply be discarded.

[0045] While the foregoing written description of the invention enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The invention should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope and spirit of the invention. Further, different illustrative embodiments may provide different benefits as compared to other illustrative embodiments. The embodiment or embodiments selected are chosen and described in order to best explain the principles of the embodiments, the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.

[0046] The flowcharts and block diagrams described herein illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various illustrative embodiments. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function or functions. It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, the functions of two blocks shown in succession may be executed substantially concurrently, or the functions of the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Claims

The invention claimed is:
1) A computation engine or system capable of automatically blurring or otherwise obscuring observable elements outside a target area while clearly imaging elements within.
2) The system of 1 where the target area is defined by geometric objects such as but not limited to circles, squares, ellipses, rectangles, triangles, spheres, cubes, dodecahedrons or other solids or polygons constructed from points, line segments, splines or other spatial mathematical descriptors or parameters.
3) The system of 1 where 3D imaging via single lens or multiple lens systems is implemented to determine distance to a point or object in the image.
4) The system of 1 where electromagnetic transmissive and reflective means are implemented to determine distance to a point or object in the image.
5) The system of 1 where the attitude and other known parameters of the imaging platform are utilized to determine the exact geometric relationship between the imager and the objective and/or the target area.
6) The system of 1 where the current location of the imager is utilized to determine the exact geometric relationship between the imager and the objective and/or the target area.
7) The system of 1 where the current location is derived from sources such as but not limited to GPS data, radio frequency triangulation, landmarks, inertial measurement, or known position offsets.
8) The system of 1 where objects outside of the target area are obscured by simple removal from the model or image and replaced with a nondescript exemplar such as a gray background, or replaced with stored images from public, private, or commercial records, not the currently imaged pixels.
9) The system of 1 where "probable cause" metrics within a scene analysis system are applied to determine whether or not an image should be preserved intact and/or the extent of the image applied to the blurring system.
10) The system of 1 where probable cause recognition is implemented by a variety of methodologies and systems including without limitation any combination of deep learning systems, hardware accelerators, software, graphics processors, artificial intelligence, and/or neural networks.
11) The system of 1 where some combination of location, time, and/or other parameters are utilized to alter the behavior of the mission imaging platform or define when and where a reasonable expectation of privacy exists and where it does not.
12) The system of 1 where exigent circumstances, either from external command or internal scene recognition, can override the previous programming of the blurring system and allow full surveillance to be performed.
13) The system of 1 where detection of probable cause can cause a change to the platform behavior such as but not limited to switching to tracking mode and/or transmitting notification to monitoring authorities.
14) The system of 1 where detection of specific conditions from the scene analysis system can trigger behaviors and parametric changes within the system such as but not limited to human control of the mission imager and/or new commands to the platform.
PCT/US2017/014472 2016-01-26 2017-01-22 System and method for targeted imaging from collection platforms WO2017132074A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US201662287276P 2016-01-26 2016-01-26
US62/287,276 2016-01-26

Publications (1)

Publication Number Publication Date
WO2017132074A1 (en) 2017-08-03

Family

ID=59398971

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/014472 WO2017132074A1 (en) 2016-01-26 2017-01-22 System and method for targeted imaging from collection platforms

Country Status (1)

Country Link
WO (1) WO2017132074A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080297587A1 (en) * 2007-05-31 2008-12-04 Kurtz Andrew F Multi-camera residential communication system
US20090138138A1 (en) * 2006-09-29 2009-05-28 Bran Ferren Imaging and display system to aid helicopter landings in brownout conditions
WO2013137534A1 (en) * 2012-03-12 2013-09-19 Samsung Techwin Co.,Ltd. System and method for processing image to protect privacy
US20150146043A1 (en) * 2004-12-29 2015-05-28 Nokia Corporation Electronic Device and Method In An Electronic Device For Processing Image Data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17744728

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10/12/2018)