WO2003030550A1 - Optimal multi-camera setup for computer-based visual surveillance - Google Patents

Optimal multi-camera setup for computer-based visual surveillance

Info

Publication number
WO2003030550A1
Authority
WO
WIPO (PCT)
Prior art keywords
deployment
measure
effectiveness
camera
computer
Prior art date
2001-09-27
Application number
PCT/IB2002/003717
Other languages
French (fr)
Inventor
Miroslav Trajkovic
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2001-09-27
Filing date
2002-09-11
Publication date
2003-04-10
Priority claimed from US10/165,089, published as US20030058342A1
Priority claimed from US10/189,272, published as US20030058111A1
Application filed by Koninklijke Philips Electronics N.V.
Priority to EP02765217A, published as EP1433326A1
Priority to JP2003533612A, published as JP2005505209A
Priority to KR10-2004-7004440A, published as KR20040037145A
Publication of WO2003030550A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/10: Image acquisition
    • G06V10/12: Details of acquisition arrangements; Constructional details thereof
    • G06V10/14: Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147: Details of sensors, e.g. sensor lenses
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00: Burglar, theft or intruder alarms
    • G08B13/18: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602: Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00: Burglar, theft or intruder alarms
    • G08B13/18: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19639: Details of the system layout
    • G08B13/19645: Multiple cameras, each having view on one of a plurality of scenes, e.g. multiple cameras for multi-room surveillance or for tracking an object by view hand-over
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00: Burglar, theft or intruder alarms
    • G08B13/18: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19678: User interface
    • G08B13/1968: Interfaces for setting up or customising the system
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02: Alarms for ensuring the safety of persons
    • G08B21/04: Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438: Sensor means for detecting
    • G08B21/0476: Cameras to detect unsafe condition, e.g. video cameras
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Vascular Medicine (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Gerontology & Geriatric Medicine (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Studio Devices (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A measure of effectiveness of a camera's deployment includes the camera's effectiveness in providing image information to computer-vision applications. In addition to, or in lieu of, measures based on the visual coverage provided by the deployment of multiple cameras, the effectiveness of the deployment includes measures based on the ability of one or more computer-vision applications to perform their intended functions using the image information provided by the deployed cameras. Of particular note, the deployment of the cameras includes consideration of the perspective information that is provided by the deployment.

Description

Optimal multi-camera setup for computer-based visual surveillance
This application claims the benefit of U.S. Provisional Application No. 60/325,399, filed 27 September 2001, Attorney Docket US010482P.
This invention relates to the field of security systems, and in particular to the placement of multiple cameras to facilitate computer-vision applications.
Cameras are often used in security systems and other visual monitoring applications. Computer programs and applications are continually being developed to process the image information obtained from a camera, or from multiple cameras. Face and figure recognition systems provide the capability of tracking identified persons or items as they move about a field of view, or among multiple fields of view.
US Patent 6,359,647 "Automated camera handoff system for figure tracking in a multiple camera system", issued 19 March 2002 to Soumitra Sengupta, Damian Lyons, Thomas Murphy, and Daniel Reese, discloses an automated tracking system that is configured to automatically direct cameras in a multi-camera environment to keep a target image within a field of view of at least one camera as the target moves from room-to-room, or region-to-region, in a secured building or area, and is incorporated by reference herein. Other multiple-camera image processing systems are common in the art.
In a multiple-camera system, the placement of each camera affects the performance and effectiveness of the image processing system. Typically, the determination of proper placement of each camera is a manual process, wherein a security professional assesses the area and places the cameras in locations that provide effective and efficient coverage. Effective coverage is commonly defined as a camera placement that minimizes "blind spots" within each camera's field of view. Efficient coverage is commonly defined as coverage using as few cameras as possible, to reduce cost and complexity.
Because of the likely intersections of camera fields of view in a multiple-camera deployment, and the different occluded views caused by obstructions relative to each camera location, the determination of an optimal placement of cameras is often not a trivial matter. Algorithms continue to be developed for optimizing the placement of cameras for effective and efficient coverage of a secured area. PCT Application PCT/US00/40011 "Method for optimization of video coverage", published as WO 00/56056 on 21 September 2000 for Moshe Levin and Ben Mordechai, and incorporated by reference herein, teaches a method for determining the position and angular orientation of multiple cameras for optimal coverage, using genetic algorithms and simulated annealing algorithms. Alternative potential placements are generated and evaluated until the algorithms converge on a solution that optimizes the coverage provided by the system.
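By way of illustration only, the following is a minimal simulated-annealing placement loop in the spirit of the cited approach; the `coverage` and `neighbor` callables, the cooling schedule, and all parameter values are assumptions of this sketch, not details taken from WO 00/56056.

```python
import math
import random

def anneal_placement(initial, coverage, neighbor,
                     t0=1.0, cooling=0.995, steps=5000):
    """Search camera positions/orientations for maximum coverage.

    initial  -- starting deployment (any representation)
    coverage -- callable: deployment -> score to maximize (assumed)
    neighbor -- callable: deployment -> slightly perturbed deployment (assumed)
    """
    current, best = initial, initial
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = coverage(candidate) - coverage(current)
        # Always keep improvements; occasionally keep regressions to
        # escape local optima, with probability shrinking as t cools.
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
            if coverage(current) > coverage(best):
                best = current
        t *= cooling
    return best
```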
In the conventional schemes that are used to optimally place multiple cameras about a secured area, whether a manual scheme or an automated scheme, or a combination of both, the objective of the placement is to maximize the visual coverage of the secured area using a minimum number of cameras. Achieving such an objective, however, is often neither effective nor efficient for computer-vision applications.
It is an object of this invention to provide a method and system for determining a placement of cameras in a multiple-camera environment that facilitates computer-vision applications. It is a further object of this invention to determine the placement of additional cameras in a conventional multiple-camera deployment to facilitate computer-vision applications. These objects and others are achieved by defining a measure of effectiveness of a camera's deployment that includes the camera's effectiveness in providing image information to computer-vision applications. In addition to, or in lieu of, measures based on the visual coverage provided by the deployment of multiple cameras, the effectiveness of the deployment includes measures based on the ability of one or more computer-vision applications to perform their intended functions using the image information provided by the deployed cameras. Of particular note, the deployment of the cameras includes consideration of the perspective information that is provided by the deployment.
The invention is explained in further detail, and by way of example, with reference to the accompanying drawings wherein:
Fig. 1 illustrates an example flow diagram of a multi-camera deployment system in accordance with this invention. Fig. 2 illustrates a second example flow diagram of a multi-camera deployment system in accordance with this invention.
Throughout the drawings, the same reference numerals indicate similar or corresponding features or functions.
This invention is premised on the observation that a camera deployment that provides effective visual coverage does not necessarily provide sufficient image information for effective computer-vision processing. Camera locations that provide a wide coverage area may not provide perspective information; camera locations that provide perspective discrimination may not provide discernible context information; and so on. In a typical 'optimal' camera deployment, for example, a regular-shaped room with no obstructions will be allocated a single camera, located at an upper corner of the room and aimed coincident with the diagonal of the room, slightly downward. Assuming that the field of view of the camera is wide enough to encompass the entire room, or adjustable to sweep the entire room, a single camera will be sufficient for visual coverage of the room. As illustrated in the referenced US Patent 6,359,647, a room or hallway rarely contains more than one camera, an additional camera being used only when an obstruction interferes with the camera's field of view. Computer-vision systems, however, often require more than one camera's view of a scene to identify the context of the view and to provide an interpretation of the scene based on the 3-dimensional location of objects within the scene. As such, a placement of cameras that merely provides visual coverage is often insufficient. Although algorithms are available for estimating 3-D dimensions from a single 2-D image, or from multiple 2-D images from a single camera with pan-tilt-zoom capability, such approaches are substantially less effective or less efficient than algorithms that use images of the same scene from different viewpoints.
Some 2-D images from a single camera do provide for excellent 3-D dimension determination, such as a top-down view from a ceiling-mounted camera, because the image identifies where in the room a target object is located, and the type of object identifies its approximate height. However, such images are notably poor for determining the context of a scene, and particularly poor for typical computer-vision applications, such as image or gesture recognition.
Fig. 1 illustrates an example flow diagram of a multi-camera deployment system that includes consideration of a deployment's computer-vision effectiveness in accordance with this invention. At 110, a proposed initial camera deployment is defined, for example, by identifying camera locations on a displayed floor plan of the area that is being secured. Optionally, at 120, the visual coverage provided by the deployment is assessed, using techniques common in the art. At 130, the "computer-vision effectiveness" of the deployment is determined, as discussed further below.
Each computer-vision application performs its function based on select parameters that are extracted from the image. The particular parameters, and the function's sensitivity to each, are identifiable. For example, a gesture-recognition function may be very sensitive to horizontal and vertical movements (waving arms, etc.), and somewhat insensitive to depth movements. Defining x, y, and z as the horizontal, vertical, and depth dimensions, respectively, the gesture-recognition function can be said to be sensitive to delta-x and delta-y detection. Therefore, in this example, determining the computer-vision effectiveness of the deployment for gesture-recognition will be based on how well the deployment provides delta-x and delta-y parameters from the image. Such a determination is made based on each camera's location and orientation relative to the secured area, using, for example, a geometric model and conventional differential mathematics. Heuristics and other simplifications may also be used. Obviously, for example, a downward-pointing camera will provide minimal, if any, delta-y information, and its measure of effectiveness for gesture-recognition will be poor. In lieu of a formal geometric model, a rating system may be used, wherein each camera is assigned a score based on its viewing angle relative to the horizontal.
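Such a rating system might be sketched as follows, assuming, purely for exposition, that delta-y sensitivity falls off as the cosine of the camera's tilt below the horizontal; the function names and the weighting are illustrative choices, not taken from this disclosure.

```python
import math

def directional_sensitivity(tilt_deg):
    """Rate a camera's ability to observe horizontal (delta-x) and
    vertical (delta-y) motion from its tilt below the horizontal.
    A level camera (tilt 0) sees vertical motion fully; a straight-down
    camera (tilt 90) sees almost none. Horizontal motion is assumed
    visible at any tilt."""
    return {"delta_x": 1.0,
            "delta_y": math.cos(math.radians(tilt_deg))}

def gesture_score(tilt_deg):
    # Gesture recognition, per the example, needs both delta-x and
    # delta-y, so the weaker sensitivity limits the score.
    return min(directional_sensitivity(tilt_deg).values())

print(gesture_score(10))  # near-level camera: ~0.98 (good)
print(gesture_score(85))  # nearly top-down:   ~0.09 (poor)
```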
In like manner, an image-recognition function may be sensitive to the resolution of the image in the x and y directions, and the measure of image-recognition effectiveness will be based on the achievable resolution throughout the area being covered. In this example, a camera on a wall of a room may provide good x and y resolution for objects near the wall, but poor x and y resolution for objects near a far-opposite wall. In such an example, placing an additional camera on the far-opposite wall will increase the available resolution throughout the room, but will be redundant relative to providing visual coverage of the room.
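The resolution criterion can be sketched with a simple pinhole model, in which a camera's horizontal pixels spread across a footprint that widens linearly with distance; the 60-degree field of view and 640-pixel image width below are assumed example values.

```python
import math

def pixels_per_meter(distance_m, hfov_deg=60.0, image_width_px=640):
    """Approximate achievable horizontal resolution at a given distance:
    the image width in pixels divided by the width of the scene
    footprint at that distance."""
    footprint_m = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
    return image_width_px / footprint_m

# A wall camera in a 10 m room: good resolution near the wall,
# poor resolution near the far-opposite wall.
print(round(pixels_per_meter(1.0)))   # ~554 px/m near the camera
print(round(pixels_per_meter(10.0)))  # ~55 px/m at the far wall
```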
A motion-estimation function that predicts a path of an intruder in a secured area, on the other hand, may be sensitive to horizontal and depth movements (delta-x and delta-z), but relatively insensitive to vertical movements (delta-y), in areas such as rooms that do not provide a vertical egress, and sensitive to vertical movements in areas such as stairways that provide vertical egress. In such an application, the measure of the computer-vision effectiveness will include a measure of the delta-x and delta-z sensitivity provided by the cameras in rooms and a measure of the delta-y sensitivity provided by the cameras in the stairways.
Note that the sensitivities of a computer-vision system need not be limited to the example x, y, and z parameters discussed above. A face-recognition system may be expected to recognize a person regardless of the direction that the person is facing. As such, in addition to x and y resolution, the system will be sensitive to the orientation of each camera's field of view, and the effectiveness of the deployment will be dependent upon having intersecting fields of view from a plurality of directions.
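One possible way to quantify this multi-direction requirement, offered here as an assumption of the sketch rather than as the disclosed method, is to score the fraction of facing directions from which some camera obtains a near-frontal view of the face.

```python
def facing_coverage(cam_bearings_deg, tolerance_deg=60.0):
    """Fraction of possible facing directions (0-360 deg) from which at
    least one camera sees a face within `tolerance_deg` of frontal view.
    Assumes a face is recognizable when it points toward some camera's
    bearing within the tolerance."""
    covered = 0
    for facing in range(360):
        for b in cam_bearings_deg:
            # smallest angular difference between facing and camera bearing
            diff = abs((facing - b + 180) % 360 - 180)
            if diff <= tolerance_deg:
                covered += 1
                break
    return covered / 360.0

print(facing_coverage([0, 120, 240]))  # three surrounding cameras: 1.0
print(facing_coverage([0]))            # a single camera: ~0.34
```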
The assessment of the deployment's effectiveness is typically a composite measure based on each camera's effectiveness, as well as the effectiveness of combinations of cameras. For example, if the computer-vision application is sensitive to delta-x, delta-y, and delta-z, the relationship of two cameras to each other and to the secured area may provide sufficient perspective information to determine delta-x, delta-y, and delta-z, even though neither of the two cameras provides all three parameters. In such a situation, the deployment system of this invention is configured to "ignore" the poor scores that may be determined for an individual camera when a higher score is determined for a combination of this camera with another camera.
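A minimal sketch of such a composite measure for one sensitivity parameter follows; the `pair_score` callable, which would encode how well two cameras jointly recover the parameter, is an assumed input.

```python
from itertools import combinations

def composite_effectiveness(camera_scores, pair_score):
    """Composite measure for one sensitivity parameter: take the best
    score achieved by any single camera OR any camera pair, so a poor
    individual score is "ignored" when some combination does better.

    camera_scores -- {camera_id: score in [0, 1]}
    pair_score    -- callable(cam_a, cam_b) -> combined score in [0, 1]
    """
    best = max(camera_scores.values(), default=0.0)
    for a, b in combinations(camera_scores, 2):
        best = max(best, pair_score(a, b))
    return best

# Hypothetical example: neither camera resolves depth alone, but the
# pair, viewing the scene from different directions, does.
scores = {"cam1": 0.2, "cam2": 0.3}
print(composite_effectiveness(scores, lambda a, b: 0.9))  # 0.9
```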
These and other methods of determining a deployment's computer-vision effectiveness will be evident to one of ordinary skill in the art in view of this disclosure and in view of the particular functions being performed by the computer-vision application. In a preferred embodiment, if the particular computer-vision application is unknown, the deployment system is configured to assume that the deployment must provide proper x, y, and z coordinates for objects in the secured area, and measures the computer-vision effectiveness in terms of the perspective information provided by the deployment. As noted above, this perspective measure is generally determined based on the location and orientation of two or more cameras with intersecting fields of view in the secured area.
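The default perspective measure might be sketched as below, scoring the best pairwise angle between the cameras' viewing rays at a point of interest; the 1 - |cos(angle)| scoring, which peaks for rays 90 degrees apart, is an illustrative choice rather than a formula from this disclosure.

```python
import numpy as np

def perspective_score(cam_positions, point):
    """Score the 3-D (perspective) information a set of cameras provides
    at a point. Rays near 90 degrees apart triangulate x, y, z best;
    near-parallel (or single-camera) views provide little depth."""
    rays = []
    for pos in cam_positions:
        v = np.asarray(point, float) - np.asarray(pos, float)
        rays.append(v / np.linalg.norm(v))
    best = 0.0
    for i in range(len(rays)):
        for j in range(i + 1, len(rays)):
            cos_a = abs(float(np.dot(rays[i], rays[j])))
            best = max(best, 1.0 - cos_a)  # 1.0 at 90 deg, 0.0 if parallel
    return best  # 0.0 when fewer than two cameras see the point

# Two wall cameras viewing a room-center point from perpendicular walls:
print(perspective_score([(0, 5, 2), (5, 0, 2)], (5, 5, 1)))  # ~0.96
```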
At 140, the acceptability of the deployment is assessed, based on the measure of computer- vision effectiveness, from 130, and optionally, the visual coverage provided by this deployment, from 120. If the deployment is unacceptable, it is modified, at 150, and the process 130-140 (optionally 120-130-140) is repeated until an acceptable deployment is found. The modification at 150 may include a relocation of existing camera placements, or the addition of new cameras to the deployment, or both.
The modification at 150 may be automated, or manual, or a combination of both. In a preferred embodiment, the deployment system highlights the area or areas having insufficient computer-vision effectiveness, and suggests a location for an additional camera. Because the initial deployment 110 will typically be designed to assure sufficient visual coverage, it is assumed that providing an additional camera is a preferred alternative to changing the initial camera locations, although the user is provided the option of changing these initial locations. Also, this deployment system is particularly well suited for enhancing existing multi-camera systems, and the addition of a camera is generally an easier task than moving a previously installed camera.
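Taken together, blocks 110-150 reduce to an assess-and-modify loop, sketched below; every callable (coverage test, effectiveness measure, modification step) is a placeholder assumption of this sketch.

```python
def deploy_cameras(initial_deployment, coverage_ok, cv_effectiveness,
                   threshold, suggest_modification, max_iterations=50):
    """Skeleton of the Fig. 1 flow (blocks 110-150): evaluate a proposed
    deployment and modify it (move or add cameras) until both visual
    coverage and computer-vision effectiveness are acceptable."""
    deployment = initial_deployment                    # block 110
    for _ in range(max_iterations):
        if (coverage_ok(deployment)                    # block 120 (optional)
                and cv_effectiveness(deployment) >= threshold):  # 130-140
            return deployment                          # acceptable: deploy
        deployment = suggest_modification(deployment)  # block 150
    raise RuntimeError("no acceptable deployment found")
```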
Fig. 2 illustrates a second example flow diagram of a multi-camera deployment system in accordance with this invention. In this embodiment, the camera locations are determined at 210 in order to provide sufficient visual coverage. This deployment at 210 may correspond to an existing deployment that had been installed to provide visual coverage, or it may correspond to a proposed deployment, such as provided by the techniques disclosed in the above referenced PCT Application PCT/US00/40011, or other automated deployment processes common in the art. The computer-vision effectiveness of the deployment is determined at 220, as discussed above with regard to block 130 of Fig. 1. At 230, the acceptability of the deployment is determined. In this embodiment, because the initial deployment is explicitly designed to provide sufficient visual coverage, at 210, the acceptability of the deployment at 230 is based solely on the determined computer-vision effectiveness from 220. At 240, a new camera is added to the deployment, and at 250, the location for each new camera is determined. In a preferred embodiment of this invention, the particular deficiency of the existing deployment is determined, relative to the aforementioned sensitivities of the particular computer-vision application. For example, if a delta-z sensitivity is not provided by the current deployment, a ceiling-mounted camera location is a likely solution. In a preferred embodiment, the user is provided the option of identifying areas within which new cameras may be added and/or identifying areas within which new cameras may not be added. For example, in an external area, the location of existing poles or other structures upon which a camera can be mounted will be identified.
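The Fig. 2 variant differs from the Fig. 1 loop in holding the coverage-driven deployment of 210 fixed and only adding cameras at user-allowed sites; again, every callable in this sketch is a placeholder assumption.

```python
def augment_deployment(coverage_deployment, cv_effectiveness, threshold,
                       candidate_sites, place_new_camera):
    """Skeleton of the Fig. 2 flow (blocks 210-250): keep the coverage-
    driven deployment fixed and greedily add cameras at allowed candidate
    sites until computer-vision effectiveness is acceptable."""
    deployment = list(coverage_deployment)             # block 210
    sites = list(candidate_sites)                      # user-allowed areas
    while cv_effectiveness(deployment) < threshold:    # blocks 220-230
        if not sites:
            raise RuntimeError("no remaining sites for new cameras")
        # place_new_camera returns the chosen camera and remaining sites
        new_cam, sites = place_new_camera(deployment, sites)  # 240-250
        deployment.append(new_cam)
    return deployment
```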
Note that, in a preferred embodiment of this invention, the process 250 is configured to re-determine the location of each of the added cameras, each time that a new camera is added. That is, as is known in the art, an optimal placement of one camera may not correspond to that camera's optimal placement if another camera is also available for placement. Similarly, if a third camera is added, the optimal locations of the first two cameras may change. In a preferred embodiment, to ease the processing task in a complex environment, the secured area is partitioned into sub-areas, wherein the deployment of cameras in one sub-area is virtually independent of the deployment in another sub-area. That is, for example, because the computer-vision effectiveness of cameras that are deployed in one room is likely to be independent of the computer-vision effectiveness of cameras that are deployed in another room that is substantially visually isolated from the first room, the deployment of cameras in each room is processed as an independent deployment process.
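Two details of this embodiment lend themselves to a short sketch: the joint re-optimization of all added cameras on each addition, and the independent processing of visually isolated sub-areas; `best_locations` and `deploy_one_area` are assumed solver callables.

```python
def add_camera_and_reoptimize(fixed_cams, added_cams, best_locations):
    """On each addition, re-solve the locations of ALL added cameras
    jointly (the original coverage cameras stay fixed), since one added
    camera's optimal spot shifts once another camera is available."""
    k = len(added_cams) + 1
    return best_locations(fixed_cams, k)  # k jointly optimal locations

def deploy_by_subarea(subareas, deploy_one_area):
    """Treat visually isolated sub-areas (e.g. separate rooms) as
    independent deployment problems, shrinking the search space."""
    return {area: deploy_one_area(area) for area in subareas}
```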
The foregoing merely illustrates the principles of the invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements which, although not explicitly described or shown herein, embody the principles of the invention and are thus within the spirit and scope of the following claims.

Claims

CLAIMS:
1. A method of deploying cameras in a multi-camera system, comprising:
- determining (130, 220) a measure of effectiveness based at least in part on a measure of expected computer-vision effectiveness provided by a deployment of the cameras at a plurality of camera locations, and
- determining (140, 230) whether the deployment is acceptable, based on the measure of effectiveness of the deployment.
2. The method of claim 1, further including:
- modifying (150, 240-250) one or more of the plurality of camera locations to provide an alternative deployment,
- determining (130, 220) a second measure of effectiveness, based at least in part on the alternative deployment, and
- determining (140, 230) whether the alternative deployment is acceptable, based on the second measure of effectiveness.
3. The method of claim 1, further including:
- modifying (240-250) the deployment by adding one or more camera locations to the plurality of camera locations to provide an alternative deployment,
- determining (130, 220) a second measure of effectiveness, based at least in part on the alternative deployment, and
- determining (140, 230) whether the alternative deployment is acceptable, based on the second measure of effectiveness.
4. The method of claim 1, wherein determining (130, 220) the measure of effectiveness is further based at least in part on a measure of expected visual coverage provided by the deployment of the cameras at the plurality of camera locations.
5. The method of claim 1, wherein the measure of computer-vision effectiveness is based on a measure of perspective provided by the deployment.
6. The method of claim 1, further including deploying (160, 260) the cameras at the plurality of camera locations.
7. A method of deploying cameras in a multi-camera system, comprising:
- determining (210) a first deployment of the cameras at a plurality of camera locations based on an expected visual coverage provided by the deployment,
- determining (220) a measure of expected computer-vision effectiveness provided by the first deployment of the cameras at the plurality of camera locations, and
- determining (250) a second deployment of cameras based on the first deployment and the measure of expected computer-vision effectiveness.
8. The method of claim 7, wherein the second deployment includes the plurality of camera locations of the first deployment and one or more additional camera locations that provide a higher measure of expected computer-vision effectiveness than the first deployment.
9. The method of claim 7, wherein the measure of expected computer-vision effectiveness includes a measure of perspective provided by the first deployment.
10. The method of claim 7, further including deploying (160, 260) the cameras according to the second deployment.
11. A computer program that, when operated on a computer system, causes the computer system to effect the following operations:
- determine (130, 220) a measure of effectiveness based at least in part on a measure of expected computer-vision effectiveness provided by a deployment of cameras at a plurality of camera locations, and
- determine (140, 230) whether the deployment is acceptable, based on the measure of effectiveness of the deployment.
12. The computer program of claim 11, wherein the computer program further causes the computer system to:
- modify (150) one or more of the plurality of camera locations to provide an alternative deployment,
- determine (130) a second measure of effectiveness, based at least in part on the alternative deployment, and
- determine (140) whether the alternative deployment is acceptable, based on the second measure of effectiveness.
13. The computer program of claim 11, wherein the computer program further causes the computer system to:
- modify (240-250) the deployment by adding one or more camera locations to the plurality of camera locations to provide an alternative deployment,
- determine (220) a second measure of effectiveness, based at least in part on the alternative deployment, and
- determine (230) whether the alternative deployment is acceptable, based on the second measure of effectiveness.
14. The computer program of claim 11, wherein the computer system further determines the measure of effectiveness based at least in part on a measure (120) of expected visual coverage provided by the deployment of the cameras at the plurality of camera locations.
15. The computer program of claim 11, wherein the measure of computer-vision effectiveness is based on a measure of perspective provided by the deployment.
16. A multi-camera deployment system comprising:
- a measurement unit being arranged to determine (130, 220) a measure of effectiveness based at least in part on a measure of expected computer-vision effectiveness provided by a deployment of cameras at a plurality of camera locations, and
- a test unit being arranged to determine (140, 230) whether the deployment is acceptable, based on the measure of effectiveness of the deployment.
PCT/IB2002/003717 2001-09-27 2002-09-11 Optimal multi-camera setup for computer-based visual surveillance WO2003030550A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP02765217A EP1433326A1 (en) 2001-09-27 2002-09-11 Optimal multi-camera setup for computer-based visual surveillance
JP2003533612A JP2005505209A (en) 2001-09-27 2002-09-11 Optimal multi-camera setup for computer-based visual surveillance
KR10-2004-7004440A KR20040037145A (en) 2001-09-27 2002-09-11 Optimal multi-camera setup for computer-based visual surveillance

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US32539901P 2001-09-27 2001-09-27
US60/325,399 2001-09-27
US10/165,089 US20030058342A1 (en) 2001-09-27 2002-06-07 Optimal multi-camera setup for computer-based visual surveillance
US10/165,089 2002-06-07
US10/189,272 2002-07-03
US10/189,272 US20030058111A1 (en) 2001-09-27 2002-07-03 Computer vision based elderly care monitoring system

Publications (1)

Publication Number Publication Date
WO2003030550A1 2003-04-10

Family

ID=27389101

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2002/003717 WO2003030550A1 (en) 2001-09-27 2002-09-11 Optimal multi-camera setup for computer-based visual surveillance

Country Status (5)

Country Link
EP (1) EP1433326A1 (en)
JP (1) JP2005505209A (en)
KR (1) KR20040037145A (en)
CN (1) CN1561640A (en)
WO (1) WO2003030550A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010125489A1 (en) * 2009-04-29 2010-11-04 Koninklijke Philips Electronics N.V. Method of selecting an optimal viewing angle position for a camera
CN101853399B (en) * 2010-05-11 2013-01-09 北京航空航天大学 Method for realizing blind road and pedestrian crossing real-time detection by utilizing computer vision technology
JP6218089B2 (en) * 2013-06-18 2017-10-25 パナソニックIpマネジメント株式会社 Imaging position determination device and imaging position determination method
US9955124B2 (en) * 2013-06-21 2018-04-24 Hitachi, Ltd. Sensor placement determination device and sensor placement determination method
EP2835792B1 (en) * 2013-08-07 2016-10-05 Axis AB Method and system for selecting position and orientation for a monitoring camera
CN106716447B (en) * 2015-08-10 2018-05-15 皇家飞利浦有限公司 Take detection
CN108234900B (en) * 2018-02-13 2020-11-20 深圳市瑞立视多媒体科技有限公司 Camera configuration method and device
CN108495057B (en) * 2018-02-13 2020-12-08 深圳市瑞立视多媒体科技有限公司 Camera configuration method and device
CN108471496B (en) * 2018-02-13 2020-11-03 深圳市瑞立视多媒体科技有限公司 Camera configuration method and device
CN108449551B (en) * 2018-02-13 2020-11-03 深圳市瑞立视多媒体科技有限公司 Camera configuration method and device
US20230288527A1 (en) * 2020-10-29 2023-09-14 Nec Corporation Allocation determination apparatus, allocation determination method, and computer-readable medium
CN114724323B (en) * 2022-06-09 2022-09-02 北京科技大学 Point distribution method of portable intelligent electronic fence for fire scene protection

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0529317A1 (en) * 1991-08-22 1993-03-03 Sensormatic Electronics Corporation Surveillance system with master camera control of slave cameras
US5331413A (en) * 1992-09-28 1994-07-19 The United States Of America As Represented By The United States National Aeronautics And Space Administration Adjustable control station with movable monitors and cameras for viewing systems in robotics and teleoperations
EP0714081A1 (en) * 1994-11-22 1996-05-29 Sensormatic Electronics Corporation Video surveillance system
US6215519B1 (en) * 1998-03-04 2001-04-10 The Trustees Of Columbia University In The City Of New York Combined wide angle and narrow angle imaging system and method for surveillance and monitoring

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008142504A1 (en) * 2007-05-19 2008-11-27 Videotec S.P.A. Method and system for monitoring an environment
EP2533535A1 (en) * 2007-05-19 2012-12-12 Videotec S.p.a. Method and system for monitoring an environment
US8350911B2 (en) 2007-05-19 2013-01-08 Videotec S.P.A. Method and system for monitoring an environment
RU2494567C2 (en) * 2007-05-19 2013-09-27 Видеотек С.П.А. Environment monitoring method and system
CN101572804B (en) * 2009-03-30 2012-03-21 浙江大学 Multi-camera intelligent control method and device
US8817102B2 (en) 2010-06-28 2014-08-26 Hitachi, Ltd. Camera layout determination support device
US9591272B2 (en) 2012-04-02 2017-03-07 Mcmaster University Optimal camera selection in array of monitoring cameras
JP2015517247A (en) * 2012-04-02 2015-06-18 マックマスター ユニバーシティー Optimal camera selection in an array of cameras for monitoring and surveillance applications
US9942468B2 (en) 2012-04-02 2018-04-10 Mcmaster University Optimal camera selection in array of monitoring cameras
US20140278281A1 (en) * 2013-03-15 2014-09-18 Adt Us Holdings, Inc. Security system using visual floor plan
US9898921B2 (en) 2013-03-15 2018-02-20 Adt Us Holdings, Inc. Security system installation
US10073929B2 (en) * 2013-03-15 2018-09-11 Adt Us Holdings, Inc. Security system using visual floor plan
WO2021035012A1 (en) * 2019-08-22 2021-02-25 Cubic Corporation Self-initializing machine vision sensors
US11380013B2 (en) 2019-08-22 2022-07-05 Cubic Corporation Self-initializing machine vision sensors
WO2022060442A1 (en) * 2020-09-18 2022-03-24 Microsoft Technology Licensing, Llc Camera placement guidance
US11496674B2 (en) 2020-09-18 2022-11-08 Microsoft Technology Licensing, Llc Camera placement guidance
CN112291526A (en) * 2020-10-30 2021-01-29 重庆紫光华山智安科技有限公司 Monitoring point determining method and device, electronic equipment and storage medium
CN112291526B (en) * 2020-10-30 2022-11-22 重庆紫光华山智安科技有限公司 Monitoring point determining method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN1561640A (en) 2005-01-05
EP1433326A1 (en) 2004-06-30
KR20040037145A (en) 2004-05-04
JP2005505209A (en) 2005-02-17

Similar Documents

Publication Publication Date Title
US20030058342A1 (en) Optimal multi-camera setup for computer-based visual surveillance
EP1433326A1 (en) Optimal multi-camera setup for computer-based visual surveillance
US7397929B2 (en) Method and apparatus for monitoring a passageway using 3D images
KR100660762B1 (en) Figure tracking in a multiple camera system
RU2251739C2 (en) Objects recognition and tracking system
US20020196330A1 (en) Security camera system for tracking moving objects in both forward and reverse directions
US20050134685A1 (en) Master-slave automated video-based surveillance system
JP5956248B2 (en) Image monitoring device
WO2005026907A9 (en) Method and apparatus for computerized image background analysis
WO1999045511A1 (en) A combined wide angle and narrow angle imaging system and method for surveillance and monitoring
WO2011054971A2 (en) Method and system for detecting the movement of objects
Snidaro et al. Automatic camera selection and fusion for outdoor surveillance under changing weather conditions
GB2368482A (en) Pose-dependent viewing system
US11227376B2 (en) Camera layout suitability evaluation apparatus, control method thereof, optimum camera layout calculation apparatus, and computer readable medium
US7355626B2 (en) Location of events in a three dimensional space under surveillance
Conci et al. Camera placement using particle swarm optimization in visual surveillance applications
CN113841180A (en) Method for capturing movement of an object and movement capturing system
KR102441436B1 (en) System and method for security
Jung et al. Tracking multiple moving targets using a camera and laser rangefinder
GB2352899A (en) Tracking moving objects
JP6548683B2 (en) Object image estimation device and object image determination device
JP6548682B2 (en) Object image judgment device
JP4448249B2 (en) Image recognition device
KR102672032B1 (en) System and method for determining the position of the camera image center point by the vanishing point position
WO2024135342A1 (en) Control system, control method, and program

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CN JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FR GB GR IE IT LU MC NL PT SE SK TR

121 EP: The EPO has been informed by WIPO that EP was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2002765217

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2003533612

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 20028190580

Country of ref document: CN

Ref document number: 1020047004440

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 2002765217

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 2002765217

Country of ref document: EP