US20160350921A1 - Automatic camera calibration - Google Patents
- Publication number
- US20160350921A1 (application US 15/169,035)
- Authority
- US
- United States
- Prior art keywords
- scene
- video
- video camera
- height
- vertical
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06T7/0018
- G06K9/00771
- G06T7/0085
- G06F16/51 — Indexing; Data structures therefor; Storage structures (still image retrieval)
- G06F18/2178 — Validation; Performance evaluation; Active pattern learning techniques based on feedback of a supervisor
- G06T15/205 — Image-based rendering (perspective computation)
- G06T19/006 — Mixed reality
- G06T7/0002 — Inspection of images, e.g. flaw detection
- G06T7/13 — Edge detection
- G06T7/292 — Multi-camera tracking
- G06T7/75 — Determining position or orientation of objects or cameras using feature-based methods involving models
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06V20/41 — Higher-level, semantic clustering, classification or understanding of video scenes
- G06V20/44 — Event detection
- G06V20/46 — Extracting features or characteristics from the video content
- G06V20/49 — Segmenting video sequences
- G06V20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53 — Recognition of crowd images, e.g. recognition of crowd congestion
- G06V40/166 — Face detection; localisation; normalisation using acquisition arrangements
- G06V40/173 — Face classification, e.g. face re-identification across different face tracks
- H04N7/18 — Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181 — CCTV systems for receiving images from a plurality of remote sources
- G06T2207/10016 — Video; Image sequence
- G06T2207/20061 — Hough transform
- G06T2207/30168 — Image quality inspection
- G06T2207/30196 — Human being; Person
- G06T2207/30201 — Face
- G06T2207/30232 — Surveillance
- G06T2207/30242 — Counting objects in image
- G08B13/19602 — Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19608 — Tracking movement of a target using target direction and/or velocity to predict its new position
Description
- This specification generally relates to methods, systems, devices, and other techniques for video monitoring, including techniques for calibrating cameras used in a video monitoring system.
- Video monitoring systems can provide one or more video cameras to monitor at least one location in view of the cameras. Some video monitoring systems are configured to transmit video signals from the one or more cameras to a central location for presentation on a limited set of monitors, and in certain instances, for recording and additional analysis.
- A video monitoring system may be adapted to capture and analyze video from various locations, including banks, casinos, airports, military installations, convenience stores, parking lots, or the like. Video information from the video cameras of a video monitoring system may be sent to and analyzed by a video analytics platform.
- A video monitoring system may include one or more computers that receive video content captured by one or more video cameras.
- The system may analyze the video content and perform various analytics processes to detect certain events and other features of interest. For example, the system may apply analytics processes to perform facial recognition, generate safety alerts, identify vehicle license plates, perform post-event analysis, count people or objects in a crowd, track objects across multiple cameras, perform incident detection, recognize objects, index video content, monitor pedestrian or vehicle traffic conditions, detect objects left at a scene, identify suspicious behavior, or perform a combination of these.
- Some video analytics processes rely on parameters associated with video cameras that captured video content that is the subject of analysis.
- For example, a vehicle detection process may identify the make and model of a vehicle based in part on an absolute dimension of the vehicle derived from the video content, such as its height or width in inches. But in order for the vehicle detection process to determine the absolute dimensions of a vehicle in view of a video camera, one or more parameters associated with the camera may be required (e.g., the physical location of the camera relative to a ground plane, a reference object, or both; the camera's resolution; the camera's perspective).
- A video monitoring system can analyze video content captured by its one or more cameras to perform automatic camera calibration and determine parameters that may be needed for one or more other video analytics processes.
- For example, the system may automatically learn a camera's height above the ground plane, the perspective of the camera, or a distance or location of the camera relative to one or more reference objects in a scene.
- One aspect includes a method comprising: receiving, by a computing system, video information comprising a video signal that shows a two-dimensional (2D) scene of an environment from a first field of view of a video camera that captured the video signal; identifying, by the computing system, (i) two or more vertical lines that the video signal shows in the 2D scene, (ii) two or more horizontal lines that the video signal shows in the 2D scene and that are orthogonal to the two or more vertical lines, and (iii) one or more objects that the video signal shows in the 2D scene; based on characteristics of the identified one or more objects, determining a height of a first vertical line in the 2D scene; and based on (i) the identified two or more vertical lines, (ii) the identified two or more horizontal lines, and (iii) the determined height of the first vertical line shown in the 2D scene, calibrating the video camera.
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
- A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination thereof installed on the system that in operation causes the system to perform the actions.
- One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus (e.g., one or more computers or computer processors), cause the apparatus to perform the actions.
- Determining the height of the first vertical line that the video signal shows in the 2D scene comprises referencing an external database to determine a height of one or more of the identified objects.
- Determining the height of the first vertical line that the video signal shows in the 2D scene comprises inferring a height of one or more of the identified objects based on video camera settings.
- The video camera settings comprise one or more of (i) video camera installation angle, (ii) video camera resolution, and (iii) video camera field of view.
- Determining the height of the first vertical line that the video signal shows in the 2D scene comprises referencing an external database to determine a probability distribution of an expected height of one or more of the identified objects.
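The expected-height lookup described above reduces to taking an expectation over a discrete height distribution. A minimal sketch follows; the function name and the sample distribution are illustrative, not from the patent:

```python
def expected_height(distribution):
    """Expected height from a discrete probability distribution of
    object heights, e.g. one retrieved from an external database of
    object dimensions.  `distribution` maps height (metres) to
    probability mass; the values below are illustrative only."""
    total = sum(distribution.values())          # normalise, in case the
    return sum(h * p for h, p in distribution.items()) / total  # masses don't sum to 1
```

A continuous distribution would be handled analogously by integrating height against its density.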
- The identified one or more objects that the video signal shows in the 2D scene comprise stationary objects.
- The identified one or more objects that the video signal shows in the 2D scene comprise moving objects.
- Determining the height of a first vertical line that the video signal shows in the 2D scene comprises determining a direction an object is moving relative to the video camera.
- Calibrating the video camera comprises determining intrinsic and extrinsic video camera parameters.
- The intrinsic camera parameters comprise (i) focal length, (ii) principal point, (iii) aspect ratio, and (iv) skew, and the extrinsic parameters comprise (i) camera height, (ii) pan, (iii) tilt, and (iv) roll.
- Determining the intrinsic and extrinsic video camera parameters comprises: based on (i) the identified vertical and horizontal lines and (ii) the dimensions of the 2D video camera field of view, calculating three vanishing points, wherein the three vanishing points comprise (i) a vertical vanishing point and (ii) two horizontal vanishing points; based on the two horizontal vanishing points, calculating a horizon line and roll angle; based on the determined height of the first vertical line that the video shows in the 2D scene, calculating a height of the video camera; based on the calculated vanishing points and roll angle, calculating a tilt and focal length; and based on the calculated height of the video camera and focal length, calculating vertical and horizontal angles of view.
- Each vanishing point comprises a point at which receding identified vertical or horizontal lines viewed in perspective appear to converge in the 2D scene.
- The skew intrinsic camera parameter is assumed to be zero. In some cases, the aspect ratio intrinsic camera parameter is assumed to be equal to one.
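The vanishing-point step above can be sketched as follows, under the stated assumptions of zero skew and unit aspect ratio, plus a principal point at the image centre. This is standard single-view calibration from three orthogonal vanishing points, not the patent's exact procedure, and all names are illustrative:

```python
import numpy as np

def calibrate_from_vanishing_points(vp1, vp2, vp_vert, principal_point):
    """Recover focal length, roll, and tilt from two horizontal vanishing
    points (vp1, vp2) and one vertical vanishing point (vp_vert), all in
    pixel coordinates, given a known (e.g. image-centre) principal point."""
    pp = np.asarray(principal_point, dtype=float)
    vp1, vp2, vp_vert = (np.asarray(v, dtype=float) for v in (vp1, vp2, vp_vert))
    # Orthogonality of the two horizontal directions gives the focal length:
    # (vp1 - pp) . (vp2 - pp) + f^2 = 0
    focal = np.sqrt(-np.dot(vp1 - pp, vp2 - pp))
    # Roll is the inclination of the horizon line joining the two
    # horizontal vanishing points.
    dx, dy = vp2 - vp1
    roll = np.arctan2(dy, dx)
    # The vertical vanishing point lies f / tan(tilt) from the principal
    # point, so tilt = arctan(f / distance).
    tilt = np.arctan2(focal, np.linalg.norm(vp_vert - pp))
    return focal, roll, tilt
```

With the focal length and angles in hand, the vertical and horizontal angles of view follow directly from the sensor dimensions, e.g. 2·arctan(width / (2f)).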
- Identifying two or more vertical lines and two or more horizontal lines that the video signal shows in the 2D scene comprises: applying a Canny edge detector to the 2D scene to detect one or more edges in the scene; and applying a Hough transform to select vertical and horizontal lines from the detected edges.
- Video monitoring systems and applications may be greatly aided if video cameras included in the video monitoring system are calibrated, e.g., such that the video camera's intrinsic parameters and the camera's position and orientation with respect to some reference point in a scene captured by the video camera are known.
- Manual camera calibration can be a time consuming and tedious process, particularly in large scale CCTV environments that may include tens or hundreds of individual video cameras.
- A system implementing automatic video camera calibration as described in this specification provides a practical and efficient way to calibrate cameras for large-scale video monitoring systems.
- The system is able to automatically calibrate video cameras without any human intervention.
- A system implementing automatic video camera calibration as described in this specification may also be more flexible than other video camera calibration methods, since the accuracy the system achieves varies with the input to the calibration method. For example, by identifying a larger number of parallel lines in a two-dimensional scene of an environment captured by a video camera, the system may achieve higher accuracy.
- FIG. 1 depicts an example image showing a transformation between a world coordinate system and a camera coordinate system.
- FIG. 2 depicts a conceptual block diagram of an example process for automatically calibrating a video camera.
- FIG. 3 depicts an example system for automatic video camera calibration.
- FIG. 4 is a flowchart of an example process for automatically calibrating a video camera.
- FIG. 5 is a flowchart of an example process for determining intrinsic and extrinsic video camera parameters.
- FIG. 6 depicts an example image illustrating how to calculate a height of the video camera.
- FIG. 7 depicts an example computing device that may be used to carry out the computer-implemented methods and other techniques described herein.
- A video analytics platform may automatically calibrate a ground plane, an angle, or a zoom for the video camera based on video information received from the video camera, such as the height of a person or the size of an object (e.g., a building, a vehicle, or a road sign).
- The video camera may automatically self-calibrate based on video information received by the video camera.
- The video analytics platform may provide, to the video camera, configuration information that enables the video camera to automatically calibrate itself based on video information it receives.
- FIG. 1 depicts an example image 100 showing a transformation between a world coordinate system and a camera coordinate system.
- The example image includes a person 106 of height h standing on a ground plane 110.
- The position of an object relative to the ground plane may be described as a point (x, y, z) in a world coordinate system (WCS).
- The position in which person 106 is standing may be described by the point (x_p, 0, z_p) in the WCS.
- The WCS includes an X axis, Y axis, and Z axis meeting at an origin 102.
- The example image 100 further includes a video camera 104 and a 2D image projection 108 representing the field of view of the video camera 104.
- The position of an object in the image projection 108 may be described as a point (u, v) in a video camera coordinate system (CCS).
- The relationship between a 3D point [x, y, z, 1]^T in the WCS and its 2D image projection [u, v, 1]^T in the CCS may be represented by a 3×4 projection matrix M, namely [u, v, 1]^T ∝ M·[x, y, z, 1]^T.
- M is determined by a set of intrinsic parameters, e.g., the video camera focal length f, principal point (u_p, v_p), aspect ratio a, and skew s, and a set of extrinsic parameters corresponding to the transformation between the world coordinate system (WCS) and the camera coordinate system (CCS).
- The transformation may be specified by first placing the origin of the CCS vertically above the WCS origin 102, e.g., along the Y axis, at the height H_C of the video camera.
- The transformation may be further specified by performing a rotation around the Y axis by a pan angle, a rotation around the X axis by a tilt angle, and a rotation around the Z axis by a roll angle.
- The video camera 104 may be calibrated by determining a set of intrinsic video camera parameters, e.g., focal length, principal point, aspect ratio, and skew, and extrinsic video camera parameters, e.g., video camera height, pan, tilt, and roll. Determining a set of intrinsic and extrinsic video camera parameters to automatically calibrate a video camera is described in more detail below with reference to FIGS. 2-6.
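The projection relation [u, v, 1]^T ∝ M·[x, y, z, 1]^T above can be made concrete with a short sketch. It builds M = K[R | t] for the simplified setup of FIG. 1: camera at height H_C above the WCS origin, tilted downward, with pan and roll set to zero, zero skew, and unit aspect ratio. The conventions (Y up in the WCS, image y down) are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def projection_matrix(focal, pp, cam_height, tilt):
    """3x4 projection M = K [R | t] for a camera at `cam_height` above
    the WCS origin, tilted down by `tilt` radians (pan = roll = 0)."""
    u0, v0 = pp
    K = np.array([[focal, 0.0, u0],
                  [0.0, focal, v0],
                  [0.0, 0.0, 1.0]])
    # World axes: X right, Y up, Z forward.  Camera axes: x right,
    # y down, z forward.  First flip Y-up to y-down, then tilt.
    flip = np.diag([1.0, -1.0, 1.0])
    c, s = np.cos(tilt), np.sin(tilt)
    tilt_rot = np.array([[1.0, 0.0, 0.0],
                         [0.0, c, -s],
                         [0.0, s, c]])       # rotation about the camera x axis
    R = tilt_rot @ flip
    C = np.array([0.0, cam_height, 0.0])     # camera centre in the WCS
    t = -R @ C
    return K @ np.hstack([R, t[:, None]])

def project(M, xyz):
    """Project a WCS point to pixel coordinates (u, v)."""
    u, v, w = M @ np.append(np.asarray(xyz, dtype=float), 1.0)
    return u / w, v / w
```

As a sanity check, the ground point straight along the tilted optical axis, at distance H_C / tan(tilt) from the camera's footprint, projects exactly to the principal point.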
- FIG. 2 depicts a conceptual block diagram of an example computing system performing a process for automatically calibrating a video camera.
- The system 200 can be enabled to receive data that represents a video signal from a video camera 230, where the video signal shows a two-dimensional scene of an environment that was captured by the video camera from a first field of view.
- The video signal may be analyzed to identify multiple real-world vertical and horizontal lines in the 2D scene, and one or more objects in the scene. Based on the identified vertical and horizontal lines and the one or more objects, a known height of a vertical line in the 2D scene is determined. The known height of the vertical line may be used, together with the identified vertical and horizontal lines, to determine calibration instructions including values for one or more video camera parameters.
- The system 200 may provide data representing the calibration instructions to the video camera 230 for processing.
- The system 200 can be implemented as a system of one or more computers having physical hardware like that described with respect to FIG. 7 below.
- The computing system may include one or more computers that operate in coordinated fashion across one or more locations.
- The system 200 includes a video analytics platform 210 and a video camera 230.
- The components of the system 200 can exchange electronic communications over one or more networks, or can exchange communications in another way, such as over one or more wired or wireless connections.
- The video analytics platform 210 receives data representing video information, including a video signal from a video camera.
- The video signal may be a signal showing a two-dimensional (2D) scene of an environment that was captured by the video camera from a first field of view, e.g., the 2D scene 108 of FIG. 1 above.
- The video analytics platform 210 can transmit data that represents the video information, including the video signal from the video camera, to a video analyzer component 280.
- The video analyzer component 280 can receive the data that represents the video information and analyze the video signal to identify multiple lines in the 2D scene shown in the video signal.
- The video analyzer may analyze the video signal to identify two or more vertical lines in the 2D scene, where a vertical line is a line that is vertical in real life but may not appear vertical in the 2D scene as captured by the video camera, such as the side of a building or a lamp post.
- The video analyzer may also analyze the video signal to identify two or more horizontal lines in the 2D scene that are orthogonal to the identified two or more vertical lines.
- The video analyzer may have one or more software applications installed thereon that are configured to or may be used to identify vertical or horizontal lines in a video signal, such as Canny edge detectors.
- Canny edge detectors may be used to identify edges in a 2D scene, such as the edge of a building or a window.
- The video analyzer may then apply transformation functions, such as a Hough transform, to select vertical and horizontal lines from the detected edges.
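The Hough voting step can be sketched in a few lines of dependency-light numpy; a production pipeline would typically use OpenCV's Canny and Hough implementations (e.g. cv2.Canny followed by cv2.HoughLinesP) instead. The function below returns the dominant line-normal angle of a binary edge map: near 0 for a vertical line, near π/2 for a horizontal one. The name and parameters are illustrative:

```python
import numpy as np

def dominant_line_angle(edges, n_angles=180):
    """Tiny Hough-transform sketch: each edge pixel votes for every
    line-normal angle theta at rho = x*cos(theta) + y*sin(theta); the
    theta of the most-voted (theta, rho) cell is returned (radians)."""
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    diag = int(np.ceil(np.hypot(*edges.shape)))          # max possible |rho|
    acc = np.zeros((n_angles, 2 * diag + 1), dtype=int)  # accumulator
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[np.arange(n_angles), rhos + diag] += 1       # one vote per theta
    return thetas[np.unravel_index(np.argmax(acc), acc.shape)[0]]
```

Thresholding the recovered angle against small windows around 0 and π/2 would then classify detected lines as the near-vertical and near-horizontal candidates the text describes.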
- The video analyzer 280 may further identify one or more objects in the 2D scene.
- The video analyzer 280 may identify stationary or moving objects in the 2D scene, such as cars, people, doorways, famous or well-known buildings, street signs, animals, lampposts, prams, or bicycles.
- The video analyzer 280 may be configured to identify particular objects belonging to larger classes of objects, such as a particular breed of dog or a particular make or model of car.
- The video analyzer 280 can transmit data that represents the identified vertical and horizontal lines and the identified one or more objects to the video analytics platform 210.
- The video analytics platform 210 can receive the data that represents the identified vertical and horizontal lines and the identified one or more objects, and use the identified one or more objects to determine a height of a vertical line in the 2D scene.
- In some cases, the determined height of the vertical line is the height of a vertical line that is different from the one or more identified vertical lines, as described in stage (B).
- In other cases, the determined height of the vertical line is the height of a vertical line that is among the one or more identified vertical lines.
- The video analytics platform 210 may access one or more external databases 260 to determine characteristics of the one or more objects identified by the video analyzer component 280, e.g., a height of one or more of those objects.
- For example, the video analyzer 280 may have identified a standardized street sign as one of the objects in the 2D scene, and the platform may reference an external database 260 that stores standardized dimensions of street objects to determine the height of the street sign, and in turn a known height of a vertical line in the 2D scene.
- As another example, the video analyzer 280 may have identified a dog as one of the objects in the 2D scene, and the platform may reference an external database 260 that stores statistical distributions for the heights of dogs.
- The video analytics platform may analyze the statistical distribution to determine an average height of a dog, and in turn a known height of a vertical line in the 2D scene.
- The video analyzer 280 may also have identified a person walking through the 2D scene, e.g., walking towards or away from the camera, as a moving object in the 2D scene.
- The video analytics platform 210 may reference an external database 260 that stores average heights of people to determine an average height of a person. Since the height of people can vary greatly, the video analytics platform 210 may use additional information, such as video camera settings (e.g., video camera installation angle, video camera resolution, or video camera field of view) or an estimated speed and direction in which the person is walking, to infer a more precise height of the person walking through the 2D scene.
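Once a reference height like the ones above is known, the camera height can be recovered from the horizon line via the standard single-view-metrology cross-ratio. The sketch below assumes no roll and a vertical reference object standing on the ground plane, with image rows v increasing downward; the names are illustrative, not from the patent:

```python
def camera_height(object_height, v_top, v_bottom, v_horizon):
    """Camera height Hc from one vertical reference object of known
    height, using the cross-ratio  Hc / h = (v_bottom - v_horizon) /
    (v_bottom - v_top).  v_top/v_bottom are the image rows of the
    object's top and base; v_horizon is the row of the horizon line."""
    return object_height * (v_bottom - v_horizon) / (v_bottom - v_top)
```

A quick consistency check: an object whose top touches the horizon line is exactly as tall as the camera is high, since the horizon sits at eye (camera) level.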
- The video analytics platform 210 can transmit data that represents the determined height of the vertical line in the 2D scene, together with data representing the identified vertical and horizontal lines received during stage (C), to a calibration instruction generator component 290.
- The calibration instruction generator 290 can receive this data and generate calibration instructions including determined values for intrinsic and extrinsic video camera parameters. Determining intrinsic and extrinsic video camera parameters using a known height of a vertical line in a 2D scene and the identified vertical and horizontal lines in the scene is described in more detail below with reference to FIG. 5.
- The calibration instruction generator 290 transmits data that represents the generated calibration instructions to the video camera 230.
- FIG. 3 depicts an example system 300 for automatic video camera calibration.
- A computer network 370, such as a local area network (LAN), wide area network (WAN), the Internet, or a combination thereof, connects the video analytics platform 310, video management system 320, multiple video cameras 330, user device 340, and databases 360.
- All or some of the video analytics platform 310, video management system 320, multiple video cameras 330, user device 340, and databases 360 can be implemented in a single computing system, and may communicate with none, one, or more of the other components over a network.
- Video analytics platform 310 may include one or more devices capable of receiving, generating, storing, processing, and/or providing information, such as information described herein.
- Video analytics platform 310 may include one or more computing devices, such as one or more server devices, desktop computers, workstation computers, virtual machines (VMs) provided in a cloud computing environment, or similar devices.
- Video analytics platform 310 may receive video information from video management systems 320 and/or video cameras 330, and may store the video information.
- Video analytics platform 310 may also receive video information and/or other information (e.g., fire alarms, weather alerts, or the like) from other devices and/or systems, such as social media systems, mobile devices, emergency service systems (e.g., police, fire department, weather agencies, or the like), building management systems, or the like.
- video analytics platform 310 may apply video analytics to automatically analyze the video information and to generate real-time safety information, security information, operations information, or marketing information.
- the safety information may include information associated with utilization of restricted or forbidden areas, fire and/or smoke detection, overcrowding and/or maximum occupancy detection, slip and/or fall detection, vehicle speed monitoring, or the like.
- the security information may include information associated with perimeter monitoring, access control, loitering and/or suspicious behavior, vandalism, abandoned and/or removed objects, person of interest tracking, or the like.
- the operations information may include information associated with service intervention tracking, package and/or vehicle count, mobile asset locations, operations layout optimization, resource monitoring and/or optimization, or the like.
- the marketing information may include information associated with footfall traffic, population density analysis, commercial space layout optimization, package demographics, or the like.
- the video analytics applied by video analytics platform 310 may include people recognition, safety alert generation, license plate recognition, augmented reality, post-event analysis, crowd counting, cross-camera tracking, incident detection, wide-spectrum imagery, object recognition, video indexing, traffic monitoring, footfall traffic determination, left object detection, suspicious behavior detection, or the like.
- video analytics platform 310 may generate a user interface that includes the real-time safety information, the security information, the operations information, or the marketing information, and may provide the user interface to user device 340 .
- User device 340 may display the user interface to a user of user device 340 .
- the video analytics platform 310 may communicate with databases 360 to obtain information stored by the databases 360 .
- the databases 360 may include one or more databases that store information about one or more objects or entities, such as information relating to attributes of objects including dimensions of objects.
- one or more of the databases 360 may be external to the system 300 .
- one or more of the databases 360 may be included in the system 300 .
- Video management system 320 may include one or more devices capable of receiving, generating, storing, processing, and/or providing information, such as information described herein.
- video management system 320 may include a computing device, such as a server, a desktop computer, a laptop computer, a tablet computer, a handheld computer, one or more VMs provided in a cloud computing environment, or a similar device.
- video management system 320 may be associated with a company that receives, stores, processes, manages, and/or collects information received by video cameras 330 .
- video management system 320 may communicate with video analytics platform 310 via network 370 .
- Video camera 330 may include a device capable of receiving, generating, storing, processing, and/or providing video information, audio information, and/or image information.
- video camera 330 may include a photographic camera, a video camera, a microphone, or a similar device.
- video camera 330 may include a PTZ video camera.
- video camera 330 may communicate with video analytics platform 310 via network 370 .
- User device 340 may include a device capable of receiving, generating, storing, processing, and/or providing information, such as information described herein.
- user device 340 may include a computing device, such as a desktop computer, a laptop computer, a tablet computer, a handheld computer, a smart phone, a radiotelephone, or a similar device.
- user device 340 may communicate with video analytics platform 310 via network 370 .
- Network 370 may include one or more wired and/or wireless networks.
- network 370 may include a cellular network, a public land mobile network (“PLMN”), a local area network (“LAN”), a wide area network (“WAN”), a metropolitan area network (“MAN”), a telephone network (e.g., the Public Switched Telephone Network (“PSTN”)), an ad hoc network, an intranet, the Internet, a fiber optic-based network, a cloud computing network, and/or a combination of these or other types of networks.
- FIG. 4 is a flowchart of an example process 400 for automatically calibrating a video camera.
- the process 400 may be carried out by the devices and systems described herein, including computing system 300 depicted in FIG. 3 .
- Although the flowchart depicts the various stages of the process 400 occurring in a particular order, certain stages may in some implementations be performed in parallel or in a different order than what is depicted in the example process 400 of FIG. 4 .
- the system receives video information including a video signal from a video camera.
- the video signal may be a signal showing a two-dimensional (2D) scene of an environment from a first field of view of a video camera that captured the video signal.
- the system identifies (i) two or more vertical lines that the video signal shows in the 2D scene, (ii) two or more horizontal lines that the video signal shows in the 2D scene and are orthogonal to the two or more vertical lines, and (iii) one or more objects that the video signal shows in the 2D scene.
- the system identifies the two or more vertical lines and two or more horizontal lines that the video signal shows in the 2D scene by applying a Canny edge detector to the 2D scene to detect one or more edges in the scene, and applying a Hough transform to select vertical and horizontal lines from the detected edges.
- a vertical line may not appear vertical in the 2D scene captured by the camera.
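As the preceding note suggests, lines that are vertical in the world are only approximately vertical in the 2D scene, so the selection step typically uses an angle tolerance. A minimal sketch of that selection stage, assuming line segments have already been produced by Canny edge detection followed by a probabilistic Hough transform (e.g., OpenCV's cv2.Canny and cv2.HoughLinesP); the function name and tolerance value here are illustrative, not from the patent:

```python
import math

def classify_segments(segments, angle_tol_deg=10.0):
    """Split detected line segments into near-vertical and near-horizontal
    groups based on their image-plane angle.

    segments: iterable of (x1, y1, x2, y2) tuples, as might be returned by a
    probabilistic Hough transform run on a Canny edge map.
    """
    vertical, horizontal = [], []
    for x1, y1, x2, y2 in segments:
        # Angle of the segment relative to the image x-axis, folded into [0, 180).
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
        if abs(angle - 90.0) <= angle_tol_deg:
            vertical.append((x1, y1, x2, y2))
        elif angle <= angle_tol_deg or angle >= 180.0 - angle_tol_deg:
            horizontal.append((x1, y1, x2, y2))
    return vertical, horizontal
```

Segments that fall outside both tolerance bands (e.g., diagonal edges) are simply discarded, which matches the description of selecting only vertical and horizontal lines from the detected edges.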
- the identified one or more objects may include stationary objects, such as parked cars, road signs, or a picnic bench.
- the identified one or more objects may also include one or more moving objects, such as people walking or moving cars.
- the system determines a height of a first vertical line in the 2D scene.
- the system determines the height of a first vertical line in the 2D scene based on characteristics of the identified one or more objects.
- the determined height of the vertical line includes a determined height of a vertical line that is different from the two or more identified vertical lines, as described with reference to step 404 .
- the determined height of the vertical line includes a determined height of a vertical line that is among the two or more identified vertical lines.
- the system may determine a height of a first vertical line in the 2D scene by referencing an external database to determine a height of one or more of the identified objects. For example, the system may identify a car of a particular make and model as one of the objects in the 2D scene, and may reference an external database that stores information relating to dimensions of vehicles to determine a height of the car, and in turn determine a height of a vertical line in the 2D scene. Other characteristics may include other dimensions of one or more of the identified objects.
- the system may determine a height of a vertical line in the 2D scene by referencing an external database to determine a probability distribution of an expected height of one or more of the identified objects. For example, the system may identify a person as one of the objects in the 2D scene, and may reference an external database that stores information relating to distributions of the heights of people to determine an expected height of the person, and in turn a height of a vertical line in the 2D scene.
- the system may determine a height of a vertical line in the 2D scene by inferring a height of one or more of the identified objects based on video camera settings, where video camera settings may include one or more of (i) video camera installation angle, (ii) video camera resolution, or (iii) video camera field of view.
- the system may determine a height of a vertical line in the 2D scene by determining a direction an object is moving relative to the video camera. For example, the system may identify a moving object as a person walking towards or away from the video camera, and may use known or accessed information about an expected height of a person together with the direction in which the person is moving to determine the height of one or more vertical lines in the 2D scene.
- the system calibrates the video camera.
- Calibrating the video camera includes determining intrinsic and extrinsic video camera parameters, where intrinsic camera parameters include (i) focal length, (ii) principal points, (iii) aspect ratio and (iv) skew, and extrinsic parameters include (i) camera height, (ii) pan, (iii) tilt and (iv) roll.
- the skew intrinsic camera parameter is assumed to be zero.
- the aspect ratio intrinsic camera parameter is assumed to be equal to one.
- the calibrated video camera may be used for a variety of tasks, including correcting optical distortion artifacts, estimating the distance of an object from the video camera, or measuring the size of objects in an image captured by the video camera. Such tasks may be used in applications such as machine vision to detect and measure objects, or in robotics, navigation systems or 3D scene reconstruction for augmented reality systems.
- the system determines the intrinsic and extrinsic video camera parameters to calibrate the video camera based on (i) the identified two or more vertical lines, (ii) the identified two or more horizontal lines, and (iii) the determined height of the vertical line in the 2D scene, as determined in steps 404 and 406 above. Determining intrinsic and extrinsic video camera parameters is described in more detail below with reference to FIG. 5 .
- FIG. 5 is a flowchart of an example process 500 for determining intrinsic and extrinsic video camera parameters.
- the process 500 may be carried out by the devices and systems described herein, including computing system 300 depicted in FIG. 3 .
- Although the flowchart depicts the various stages of the process 500 occurring in a particular order, certain stages may in some implementations be performed in parallel or in a different order than what is depicted in the example process 500 of FIG. 5 .
- the system calculates three vanishing points, including (i) a vertical vanishing point and (ii) two horizontal vanishing points.
- Each vanishing point is a point at which the receding identified vertical or horizontal lines, viewed in perspective, appear to converge in the 2D scene.
- the system calculates the three vanishing points based on (i) the identified vertical and horizontal lines, and (ii) dimensions of the 2D video camera field of view.
- the system may calculate a vertical vanishing point Vy using the two or more vertical lines L1, . . . , Ln described above with reference to step 404 of FIG. 4 .
- Each vertical line may be represented by two respective points A and B, e.g., L1_A and L1_B corresponding to line L1.
- the matrix A and vector b are given by equation (1) below.
- A is an N×2 coefficient matrix and b is an N×1 vector, where N is the number of vertical lines.
- the system may calculate the horizontal vanishing points Vx and Vz in a similar way.
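Equation (1) itself is not reproduced in this text, but one standard least-squares formulation consistent with the description (an N×2 matrix A and an N×1 vector b built from the two points of each line) can be sketched as follows; the function name is illustrative:

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of a family of image lines.

    lines: list of ((Ax, Ay), (Bx, By)) point pairs, one pair per line.
    A point (x, y) lies on the line through A and B when
        (Ay - By) * x + (Bx - Ax) * y = Ay*Bx - Ax*By,
    so each line contributes one row of the N x 2 system A_mat @ [x, y] = b,
    whose least-squares solution is the vanishing point.
    """
    A_mat, b = [], []
    for (ax, ay), (bx, by) in lines:
        A_mat.append([ay - by, bx - ax])
        b.append(ay * bx - ax * by)
    vp, *_ = np.linalg.lstsq(np.asarray(A_mat, float),
                             np.asarray(b, float), rcond=None)
    return vp  # (x, y) of the vanishing point
```

With exactly two lines this reduces to an ordinary intersection; with more lines the least-squares solution averages out noise in the detected segments, which is why the system uses two or more lines per vanishing point.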
- the system calculates a horizon line and roll angle.
- the system calculates the horizon line and roll angle based on the two horizontal vanishing points.
- the horizon line may be determined by the horizontal vanishing points Vx and Vz, where the roll angle is represented by the angle between the calculated horizon line and the image horizontal, as given by equation (2) below.
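Equation (2) is not reproduced in this text; the usual construction of the horizon line through the two horizontal vanishing points, and of the roll angle between that line and the image horizontal, can be sketched as follows (names are illustrative):

```python
import math

def horizon_and_roll(vx, vz):
    """Horizon line through the two horizontal vanishing points Vx and Vz,
    returned as coefficients (a, b, c) with a*x + b*y + c = 0, together with
    the roll angle between the horizon and the image horizontal.
    """
    (x1, y1), (x2, y2) = vx, vz
    roll = math.atan2(y2 - y1, x2 - x1)  # angle of the horizon line, radians
    a, b = y1 - y2, x2 - x1              # normal direction of the line
    c = -(a * x1 + b * y1)               # offset so that Vx lies on the line
    return (a, b, c), roll
```

A roll of zero means the horizon is parallel to the image rows, i.e., the camera is not rotated about its optical axis.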
- the system calculates a height of the video camera.
- the system calculates the height of the video camera H C based on the determined height of the vertical line in the 2D scene. For example, if the height h of a vertical line is known, the system may calculate the camera height H C using equation (3) below.
- d(B,Vy) represents a distance between the vanishing point Vy and the lower point of the known vertical line
- d(C,Vy) represents a distance between the vanishing point Vy and the upper point of the known vertical line
- d(C,D) represents a distance between the upper point of the known vertical line and the horizon line
- d(B,D) represents a distance between the lower point of the known vertical line and the horizon line.
- the points C and B may represent the top of the person and the bottom of the person, respectively, as shown in the 2D scene.
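Equation (3) is not reproduced in this text, but the four distances defined above match the standard single-view-metrology cross-ratio of B, C, the vertical vanishing point Vy, and the horizon intersection D, under which h/H_C = 1 − d(B,Vy)·d(C,D) / (d(C,Vy)·d(B,D)). A sketch under that assumption:

```python
def camera_height(h, d_B_Vy, d_C_Vy, d_C_D, d_B_D):
    """Camera height H_C from the known height h of a vertical line (e.g. a
    person standing on the ground plane), using the cross-ratio of the bottom
    point B, top point C, vertical vanishing point Vy and horizon point D.
    This is one standard formulation consistent with the distances listed in
    the text, not the literal equation (3) from the patent.
    """
    # Cross-ratio term; equals (H_C - h) / H_C in the world frame.
    ratio = (d_B_Vy * d_C_D) / (d_C_Vy * d_B_D)
    return h / (1.0 - ratio)
```

Intuitively, D is the image of the point at camera height on the vertical line through the person, and Vy is the image of the point at infinity, so the four image distances encode the ratio h/H_C projectively.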
- the system calculates a tilt and focal length.
- the system calculates the tilt and focal length based on the calculated vanishing points and roll angle. For example, the system may calculate the video camera focal length using equation (4) below.
- the system may calculate the video camera tilt using equation (5) below.
- P represents the principal point of the image shown by the video camera.
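Equations (4) and (5) are not reproduced in this text. For a roll-corrected pinhole camera, the vertical vanishing point and the horizon satisfy d(Vy,P) = f/tan(tilt) and d(P, horizon) = f·tan(tilt), which gives one plausible form of the focal length and tilt computation (names are illustrative):

```python
import math

def focal_and_tilt(d_Vy_P, d_P_horizon):
    """Focal length f (in pixels) and tilt angle from the distance between the
    vertical vanishing point Vy and the principal point P, and the distance
    between P and the horizon line. Assumes roll has already been corrected.
    """
    # d(Vy,P) * d(P,horizon) = (f/tan t) * (f*tan t) = f^2
    f = math.sqrt(d_Vy_P * d_P_horizon)
    # d(P,horizon) = f * tan(tilt)  =>  tilt = atan(d(P,horizon) / f)
    tilt = math.atan2(d_P_horizon, f)
    return f, tilt
```

This uses only quantities already computed in steps 502 and 504 (the vanishing points and the horizon line), which is consistent with the statement that tilt and focal length follow from the vanishing points and roll angle.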
- the system calculates vertical and horizontal angles of view.
- the system calculates the vertical and horizontal angles of view based on the calculated height of the video camera and focal length. For example, the system may determine the vertical angle of view using equation (6) below.
- the system may determine the horizontal angle of view using equation (7) below.
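Equations (6) and (7) are not reproduced in this text; the usual pinhole relation between the image dimensions, the focal length expressed in pixels, and the angles of view can be sketched as:

```python
import math

def angles_of_view(image_width_px, image_height_px, focal_px):
    """Vertical and horizontal angles of view (radians) for a pinhole camera
    with focal length focal_px expressed in pixels.
    """
    # Half the sensor extent subtends atan(extent / (2f)) at the pinhole.
    v_aov = 2.0 * math.atan2(image_height_px / 2.0, focal_px)
    h_aov = 2.0 * math.atan2(image_width_px / 2.0, focal_px)
    return v_aov, h_aov
```

For example, an image whose width equals twice the focal length in pixels has a 90-degree horizontal angle of view.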
- the system may use the determined intrinsic and extrinsic video camera parameters to automatically adjust physical settings of the video camera, including the pan-tilt-zoom settings of the video camera, the height of the video camera or the location of the video camera.
- the system may perform the process 500 to determine a current height of the video camera, as described above in step 506 . If the determined height of the video camera in relation to the ground plane is higher or lower than an expected calibrated height, the system may automatically adjust the height to the calibrated height or generate an alert that informs a user of the video camera that the video camera height needs adjusting.
- the system may perform the process 500 to determine a current focal length of the video camera, as described above in step 508 .
- the system may automatically adjust the focal length, e.g., by using a wider angle lens, to lower the focal length to the calibrated focal length or generate an alert that informs a user of the video camera that the focal length needs adjusting.
- FIG. 6 depicts an example image 600 illustrating how to calculate a height of the video camera, as described above with reference to step 506 of FIG. 5 .
- the image 600 includes a 2D scene 602 of an environment that was captured by the video camera from a first field of view. Included in the scene 602 is an image of a person 604 .
- the system may model the person 604 as a vertical line and determine the height h of the vertical line modeling the person 604 using one or more methods as described above with reference to FIG. 2 and FIG. 4 .
- the system may calculate the height of the video camera H C based on the determined height h. For example, the system may calculate the camera height H C using equation (3) above, which is repeated below for clarity.
- d(B,Vy) represents a distance between the vanishing point Vy and the position at which the person 604 is standing on the ground plane B.
- d(C,Vy) represents a distance between the vanishing point Vy and the position of the top of the person's head.
- d(C,D) represents a distance between the position of the top of the person's head and the point at which the dotted line meets the horizon line.
- d(B,D) represents a distance between the position at which the person 604 is standing on the ground plane B and the horizon line.
- FIG. 7 illustrates a schematic diagram of an exemplary generic computer system 700 .
- the system 700 can be used for the operations described in association with the processes 400 and 500 according to some implementations.
- the system 700 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, mobile devices and other appropriate computers.
- the components shown here, their connections and relationships, and their functions, are exemplary only, and do not limit implementations of the inventions described and/or claimed in this document.
- the system 700 includes a processor 710 , a memory 720 , a storage device 730 , and an input/output device 740 .
- Each of the components 710, 720, 730, and 740 is interconnected using a system bus 750.
- the processor 710 may be enabled for processing instructions for execution within the system 700 .
- the processor 710 is a single-threaded processor.
- the processor 710 is a multi-threaded processor.
- the processor 710 may be enabled for processing instructions stored in the memory 720 or on the storage device 730 to display graphical information for a user interface on the input/output device 740 .
- the memory 720 stores information within the system 700 .
- the memory 720 is a computer-readable medium.
- the memory 720 is a volatile memory unit.
- the memory 720 is a non-volatile memory unit.
- the storage device 730 may be enabled for providing mass storage for the system 700 .
- the storage device 730 is a computer-readable medium.
- the storage device 730 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device.
- the input/output device 740 provides input/output operations for the system 700 .
- the input/output device 740 includes a keyboard and/or pointing device.
- the input/output device 740 includes a display unit for displaying graphical user interfaces.
- Embodiments and all of the functional operations described in this specification may be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
- Embodiments may be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus.
- the computer readable medium may be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
- The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
- the apparatus may include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
- a propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus.
- a computer program (also known as a program, software, software application, script, or code) may be written in any form of programming language, including compiled or interpreted languages, and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
- a computer program does not necessarily correspond to a file in a file system.
- a program may be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
- a computer program may be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
- the processes and logic flows described in this specification may be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
- the processes and logic flows may also be performed by, and apparatus may also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
- processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
- a processor will receive instructions and data from a read only memory or a random access memory or both.
- the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
- a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
- a computer need not have such devices.
- a computer may be embedded in another device, e.g., a tablet computer, a mobile telephone, a personal digital assistant (PDA), a mobile audio player, a Global Positioning System (GPS) receiver, to name just a few.
- Computer readable media suitable for storing computer program instructions and data include all forms of non volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks.
- the processor and the memory may be supplemented by, or incorporated in, special purpose logic circuitry.
- embodiments may be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user may provide input to the computer.
- Other kinds of devices may be used to provide for interaction with a user as well; for example, feedback provided to the user may be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic, speech, or tactile input.
- Embodiments may be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user may interact with an implementation, or any combination of one or more such back end, middleware, or front end components.
- the components of the system may be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet.
- the computing system may include clients and servers.
- a client and server are generally remote from each other and typically interact through a communication network.
- the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
- In each instance where an HTML file is mentioned, other file types or formats may be substituted. For instance, an HTML file may be replaced by an XML, JSON, plain text, or other type of file. Moreover, where a table or hash table is mentioned, other data structures (such as spreadsheets, relational databases, or structured files) may be used.
Abstract
This document describes systems, methods, devices, and other techniques for automatically calibrating a video camera based on video information received from the video camera. In some implementations, a computing device receives video information comprising a video signal from a video camera, wherein the video signal shows a 2D scene of an environment captured by the video camera; identifies (i) two or more vertical lines shown in the 2D scene, (ii) two or more horizontal lines shown in the 2D scene, and (iii) one or more objects shown in the 2D scene; based on the identified one or more objects, determines a height of a vertical line in the 2D scene; and based on (i) the identified two or more vertical lines, (ii) the identified two or more horizontal lines, and (iii) the determined height of the vertical line in the 2D scene, calibrates the video camera.
Description
- This application claims the benefit of U.S. Provisional Application No. 62/167,930, filed May 29, 2015, and titled “Video Analytics of Video Information,” which is hereby incorporated by reference in its entirety.
- This specification generally relates to methods, systems, devices, and other techniques for video monitoring, including techniques for calibrating cameras used in a video monitoring system.
- Video monitoring systems (e.g., a closed-circuit television system) can provide one or more video cameras to monitor at least one location in view of the cameras. Some video monitoring systems are configured to transmit video signals from the one or more cameras to a central location for presentation on a limited set of monitors, and in certain instances, for recording and additional analysis. For example, a video monitoring system may be adapted to capture and analyze video from various locations including banks, casinos, airports, military installations, convenience stores, parking lots, or the like. Video information from video cameras of video monitoring systems may be sent to and analyzed by a video analytics platform.
- This document generally describes systems, methods, devices, and other techniques for calibrating cameras in a video monitoring system. A video monitoring system may include one or more computers that receive video content captured by one or more video cameras. The system may analyze the video content and perform various analytics processes to detect certain events and other features of interest. For example, the system may apply analytics processes to perform facial recognition, generate safety alerts, identify vehicle license plates, perform post-event analysis, count people or objects in a crowd, track objects across multiple cameras, perform incident detection, recognize objects, index video content, monitor pedestrian or vehicle traffic conditions, detect objects left at a scene, identify suspicious behavior, or perform a combination of multiple of these.
- Some video analytics processes rely on parameters associated with video cameras that captured video content that is the subject of analysis. For example, a vehicle detection process may identify the make and model of a vehicle based in part on an absolute dimension of the vehicle derived from the video content, such as its height or width in inches. But in order for the vehicle detection process to determine the absolute dimensions of a vehicle in view of a video camera, one or more parameters associated with the camera may be required (e.g., the physical location of the camera relative to a ground plane, reference object, or both; the camera's resolution; the camera's perspective). In some implementations according to the techniques described herein, a video monitoring system can analyze video content captured by the one or more cameras to perform automatic camera calibration and determine parameters that may be needed for one or more other video analytics processes. For example, by tracking changes in the dimensions of objects detected in a video scene as they move across the scene at different locations, speeds, and angles, the system may automatically learn a camera's height above ground plane, the perspective of the camera, or a distance or location of the camera relative to one or more reference objects in a scene.
- Innovative aspects of the subject matter described in this specification may be embodied in methods that include the actions of receiving, at a computing system, video information comprising a video signal that shows a two-dimensional (2D) scene of an environment from a first field of view of a video camera that captured the video signal; identifying, by the computing system, (i) two or more vertical lines that the video signal shows in the 2D scene, (ii) two or more horizontal lines that the video signal shows in the 2D scene and are orthogonal to the two or more vertical lines, and (iii) one or more objects that the video signal shows in the 2D scene; based on characteristics of the identified one or more objects, determining a height of a first vertical line in the 2D scene; and based on (i) the identified two or more vertical lines, (ii) the identified two or more horizontal lines, and (iii) the determined height of the first vertical line shown in the 2D scene, calibrating the video camera.
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination thereof installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus (e.g., one or more computers or computer processors), cause the apparatus to perform the actions.
- The foregoing and other embodiments can each optionally include one or more of the following features, alone or in combination. In some implementations determining the height of the first vertical line that the video signal shows in the 2D scene comprises referencing an external database to determine a height of one or more of the identified objects.
- In some implementations determining the height of the first vertical line that the video signal shows in the 2D scene comprises inferring a height of one or more of the identified objects based on video camera settings.
- In some cases video camera settings comprise one or more of (i) video camera installation angle, (ii) video camera resolution, or (iii) video camera field of view.
- In some implementations determining the height of the first vertical line that the video signal shows in the 2D scene comprises referencing an external database to determine a probability distribution of an expected height of one or more of the identified objects.
- In some cases the identified one or more objects that the video signal shows in the 2D scene comprise stationary objects.
- In some implementations the identified one or more objects that the video signal shows in the 2D scene comprise moving objects.
- In some cases determining the height of a first vertical line that the video signal shows in the 2D scene comprises determining a direction an object is moving relative to the video camera.
- In some implementations calibrating the video camera comprises determining intrinsic and extrinsic video camera parameters.
- In some implementations the intrinsic camera parameters comprise (i) focal length, (ii) principal points, (iii) aspect ratio and (iv) skew, and the extrinsic parameters comprise (i) camera height, (ii) pan, (iii) tilt and (iv) roll.
- In some cases determining the intrinsic and extrinsic video camera parameters comprises: based on (i) the identified vertical and horizontal lines, and (ii) dimensions of the 2D video camera field of view, calculating three vanishing points, wherein the three vanishing points comprise (i) a vertical vanishing point and (ii) two horizontal vanishing points; based on the two horizontal vanishing points, calculating a horizon line and roll angle; based on the determined height of the first vertical line that the video shows in the 2D scene, calculating a height of the video camera; based on the calculated vanishing points and roll angle, calculating a tilt and focal length; based on the calculated height of the video camera and focal length, calculating vertical and horizontal angles of view.
- In some implementations each vanishing point comprises a point at which receding identified vertical or horizontal lines viewed in perspective appear to converge in the 2D scene.
- In some cases the skew intrinsic camera parameter is assumed to be zero. In some cases the aspect ratio intrinsic camera parameter is assumed to be equal to one.
- In some implementations identifying two or more vertical lines and two or more horizontal lines that the video signal shows in the 2D scene comprises: applying a Canny edge detector operator to the 2D scene to detect one or more edges in the scene; and applying a Hough Transform to select vertical and horizontal lines from the detected edges.
- Some implementations of the subject matter described herein may realize, in certain instances, one or more of the following advantages.
- Video monitoring systems and applications may be greatly aided if video cameras included in the video monitoring system are calibrated, e.g., such that the video camera's intrinsic parameters and the camera's position and orientation with respect to some reference point in a scene captured by the video camera are known. Manual camera calibration can be a time consuming and tedious process, particularly in large scale CCTV environments that may include tens or hundreds of individual video cameras.
- A system implementing automatic video camera calibration as described in this specification provides a practical and efficient way to calibrate cameras for large scale video monitoring systems. The system is able to automatically calibrate video cameras without any human intervention.
- A system implementing automatic video camera calibration as described in this specification may be more flexible than other systems and methods for video camera calibration, since the accuracy the system achieves scales with the input to the calibration method. For example, by identifying a greater number of parallel lines in a two-dimensional scene of an environment captured by a video camera, the system described in this specification may achieve higher levels of accuracy.
- The details of one or more embodiments of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other potential features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
-
FIG. 1 depicts an example image showing a transformation between a world coordinate system and a camera coordinate system. -
FIG. 2 depicts a conceptual block diagram of an example process for automatically calibrating a video camera. -
FIG. 3 depicts an example system for automatic video camera calibration. -
FIG. 4 is a flowchart of an example process for automatically calibrating a video camera. -
FIG. 5 is a flowchart of an example process for determining intrinsic and extrinsic video camera parameters. -
FIG. 6 depicts an example image illustrating how to calculate a height of the video camera. -
FIG. 7 depicts an example computing device that may be used to carry out the computer-implemented methods and other techniques described herein. - Like reference symbols in the various drawings indicate like elements.
- This specification generally describes systems, methods, devices, and other techniques for automatically calibrating a video camera based on video information (e.g., video content) received from the video camera. For example, a video analytics platform may automatically calibrate a ground plane, an angle or a zoom for the video camera based on video information received from the video camera, such as a height of a person or a size of an object (e.g., a building, a vehicle or a road sign). In some implementations, the video camera may automatically self-calibrate based on video information received by the video camera. For example, the video analytics platform may provide, to the video camera, configuration information that enables the video camera to automatically calibrate itself based on video information received by the video camera.
-
FIG. 1 depicts an example image 100 showing a transformation between a world coordinate system and a camera coordinate system. The example image includes a person 106 of height h standing on a ground plane 110. The position of an object relative to the ground plane may be described as a point (x, y, z) in a World Coordinate System (WCS). For example, the position in which person 106 is standing may be described by a point (xp, 0, zp) in the WCS. - The WCS includes an X axis, Y axis and Z axis meeting at an
origin 102. The example image 100 further includes a video camera 104 and a 2D image projection 108 representing a field of view of the video camera 104. The position of an object in the image projection 108 may be described as a point (u, v) in a video camera coordinate system (CCS). - The relationship between a 3D point in a WCS [x, y, z, 1]^T and its 2D image projection in a CCS [u, v, 1]^T may be represented by a 3×4 projection matrix M, namely [u, v, 1]^T ~ M·[x, y, z, 1]^T. M may be determined by a set of intrinsic parameters, e.g., including video camera focal length f, principal point (up, vp), aspect ratio a, and skew s, and a set of extrinsic parameters corresponding to a transformation between the world coordinate system (WCS) and the camera coordinate system (CCS). The transformation may be specified by first placing the
origin 104 of the CCS vertically above the WCS origin 102, e.g., along the Y axis, at the height HC of the video camera. The transformation may be further specified by performing a rotation around the Y axis by an angle pan(α), a rotation around the X axis by an angle tilt(β), and a rotation around the Z axis by an angle roll(γ). - The
video camera 104 may be calibrated by determining a set of intrinsic video camera parameters, e.g., focal length, principal point, aspect ratio and skew, and extrinsic video camera parameters, e.g., video camera height, pan, tilt and roll. Determining a set of intrinsic and extrinsic video camera parameters to automatically calibrate a video camera is described in more detail below with reference to FIGS. 2-6. -
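The mapping from the WCS to the CCS described above can be sketched in a few lines of numpy. The parameter values, function names, and axis conventions below are illustrative assumptions rather than anything the specification prescribes; the sketch only shows how the intrinsic matrix, the pan/tilt/roll rotation, and the camera height combine into a projection:

```python
import numpy as np

def intrinsic_matrix(f, principal_point, aspect=1.0, skew=0.0):
    """Intrinsic matrix from focal length f, principal point (up, vp),
    aspect ratio and skew (often assumed to be 1 and 0, as in the text)."""
    up, vp = principal_point
    return np.array([[f, skew, up],
                     [0.0, aspect * f, vp],
                     [0.0, 0.0, 1.0]])

def rotation(pan, tilt, roll):
    """Extrinsic rotation: pan about Y, tilt about X, roll about Z."""
    ca, sa = np.cos(pan), np.sin(pan)
    cb, sb = np.cos(tilt), np.sin(tilt)
    cg, sg = np.cos(roll), np.sin(roll)
    r_pan = np.array([[ca, 0.0, sa], [0.0, 1.0, 0.0], [-sa, 0.0, ca]])
    r_tilt = np.array([[1.0, 0.0, 0.0], [0.0, cb, -sb], [0.0, sb, cb]])
    r_roll = np.array([[cg, -sg, 0.0], [sg, cg, 0.0], [0.0, 0.0, 1.0]])
    return r_roll @ r_tilt @ r_pan

def project(point_wcs, K, R, camera_height):
    """Project a WCS point [x, y, z] (Y up) to CCS pixel coordinates (u, v),
    with the CCS origin placed at camera_height above the WCS origin."""
    centre = np.array([0.0, camera_height, 0.0])
    p_cam = R @ (np.asarray(point_wcs, dtype=float) - centre)
    u, v, w = K @ p_cam
    return u / w, v / w

# A point straight ahead at the camera's own height lands on the principal point.
K = intrinsic_matrix(f=1000.0, principal_point=(960.0, 540.0))
R = rotation(pan=0.0, tilt=0.0, roll=0.0)
print(project([0.0, 3.0, 10.0], K, R, camera_height=3.0))  # → (960.0, 540.0)
```

With zero pan, tilt and roll the rotation is the identity, so the check at the end simply exercises the intrinsic matrix; non-zero angles would move the projected point off the principal point.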
FIG. 2 depicts a conceptual block diagram of an example computing system performing a process for automatically calibrating a video camera. The system 200 can be enabled to receive data that represents a video signal from a video camera 230, where the video signal shows a two-dimensional scene of an environment that was captured by the video camera from a first field of view. The video signal may be analyzed to identify multiple real-world vertical and horizontal lines in the 2D scene, and one or more objects in the scene. Based on the identified vertical and horizontal lines and one or more objects, a known height of a vertical line in the 2D scene is determined. The known height of the vertical line may be used, together with the identified vertical and horizontal lines, to determine calibration instructions including values for one or more video camera parameters. The system 200 may provide data representing the calibration instructions to the video camera 230 for processing. Generally, the system 200 can be implemented as a system of one or more computers having physical hardware like that described with respect to FIG. 7 below. The computing system may include one or more computers that operate in a coordinated fashion across one or more locations. - Briefly, the
system 200 includes a video analytics platform 210 and a video camera 230. The components of the system 200 can exchange electronic communications over one or more networks, or can exchange communications in another way, such as over one or more wired or wireless connections. - During stage (A) of the process for automatic video camera calibration, the
video analytics platform 210 receives data representing video information including a video signal from a video camera. The video signal may be a signal showing a two-dimensional (2D) scene of an environment that was captured by the video camera from a first field of view, e.g., 2D scene 108 of FIG. 1 above. - During stage (B), the
video analytics platform 210 can transmit data that represents the video information including a video signal from a video camera to a video analyzer component 280. The video analyzer component 280 can receive the data that represents the video information including a video signal from a video camera and analyze the video signal to identify multiple lines in the 2D scene shown in the video signal. For example, the video analyzer may analyze the video signal to identify two or more vertical lines in the 2D scene, where a vertical line is a line that is vertical in real life and may not appear vertical in the 2D scene as captured by the video camera, such as the side of a building or a lamp post. The video analyzer may also analyze the video signal to identify two or more horizontal lines in the 2D scene that are orthogonal to the identified two or more vertical lines. In some implementations the video analyzer may have one or more software applications installed thereon that are configured to or may be used to identify vertical or horizontal lines in a video signal, such as Canny edge detectors. Canny edge detectors may be used to identify edges in a 2D scene, such as the edge of a building or a window. The video analyzer may then apply transformation functions, such as a Hough transform, to select vertical and horizontal lines from the detected edges. - The
video analyzer 280 may further identify one or more objects in the 2D scene. For example, the video analyzer 280 may identify stationary or moving objects in the 2D scene, such as cars, people, doorways, famous or known buildings, street signs, animals, lampposts, prams, or bicycles. In some cases the video analyzer 280 may be configured to identify particular objects belonging to larger classes of objects, such as a particular breed of dog or a particular make or model of a car. - During stage (C), the
video analyzer 280 can transmit data that represents the identified vertical and horizontal lines, and the identified one or more objects to the video analytics platform 210. The video analytics platform 210 can receive the data that represents the identified vertical and horizontal lines, and the identified one or more objects and use the identified one or more objects to determine a height of a vertical line in the 2D scene. In some implementations the determined height of the vertical line includes a determined height of a vertical line that is different from the one or more identified vertical lines, as described in stage (B). In other implementations the determined height of the vertical line includes a determined height of a vertical line that is among the one or more identified vertical lines. - Optionally, during stage (D), the
video analytics platform 210 may access one or more external databases 260 to determine characteristics of the one or more objects identified by the video analyzer component, e.g., a height of one or more of the objects identified by the video analyzer component 280. For example, the video analyzer 280 may have identified a standardized street sign as one of the objects in the 2D scene, and may reference an external database 260 that stores information relating to standardized dimensions of street objects to determine a height of the street sign, and in turn a known height of a vertical line in the 2D scene. - As another example, the
video analyzer 280 may have identified a dog as one of the objects in the 2D scene, and may reference an external database 260 that stores information relating to statistical distributions for the heights of dogs. The video analytics platform may analyze the statistical distribution to determine an average height of a dog, and in turn a known height of a vertical line in the 2D scene. - As a further example, the
video analyzer 280 may have identified a person walking through the 2D scene, e.g., walking towards or away from the camera, as a moving object in the 2D scene. The video analytics platform 210 may reference an external database 260 that stores information relating to average heights of people to determine an average height of a person. Since the height of people can vary greatly, the video analytics platform 210 may use additional information such as video camera settings, e.g., video camera installation angle, video camera resolution, or video camera field of view, or an estimated speed and direction in which the person is walking to infer a more precise height of the person walking through the 2D scene. - During stage (E), the
video analytics platform 210 can transmit data that represents the determined height of the vertical line in the 2D scene and data representing the identified vertical and horizontal lines, as received during stage (C), to a calibration instruction generator component 290. The calibration instruction generator 290 can receive the data representing the determined height and the identified vertical and horizontal lines and generate calibration instructions including determined values for intrinsic and extrinsic video camera parameters. Determining intrinsic and extrinsic video camera parameters using a known height of a vertical line in a 2D scene and identified vertical and horizontal lines in the 2D scene is described in more detail below with reference to FIG. 5. - During stage (F), the
calibration instruction generator 290 transmits data that represents the generated calibration instructions for video camera calibration to the video camera 230. -
FIG. 3 depicts an example system 300 for automatic video camera calibration. In some implementations, a computer network 370, such as a local area network (LAN), wide area network (WAN), the Internet, or a combination thereof, connects video analytics platform 310, video management system 320, multiple video cameras 330, user device 340 and databases 360. In some implementations, all or some of the video analytics platform 310, video management system 320, multiple video cameras 330, user device 340 and databases 360 can be implemented in a single computing system, and may communicate with none, one, or more other components over a network. -
Video analytics platform 310 may include one or more devices capable of receiving, generating, storing, processing, and/or providing information, such as information described herein. For example, video analytics platform 310 may include one or more computing devices, such as one or more server devices, desktop computers, workstation computers, virtual machines (VMs) provided in a cloud computing environment, or similar devices. In some implementations, video analytics platform 310 may receive video information from video management systems 320 and/or video cameras 330, and may store the video information. In some implementations, video analytics platform 310 may receive video information and/or other information (e.g., fire alarms, weather alerts, or the like) from other devices and/or systems, such as, for example, social media systems, mobile devices, emergency service systems (e.g., police, fire department, weather agencies, or the like), building management systems, or the like.
video analytics platform 310 may apply video analytics to automatically analyze the video information and to generate real-time safety information, security information, operations information, or marketing information. The safety information may include information associated with utilization of restricted or forbidden areas, fire and/or smoke detection, overcrowding and/or maximum occupancy detection, slip and/or fall detection, vehicle speed monitoring, or the like. The security information may include information associated with perimeter monitoring, access control, loitering and/or suspicious behavior, vandalism, abandoned and/or removed objects, person of interest tracking, or the like. The operations information may include information associated with service intervention tracking, package and/or vehicle count, mobile asset locations, operations layout optimization, resource monitoring and/or optimization, or the like. The marketing information may include information associated with footfall traffic, population density analysis, commercial space layout optimization, package demographics, or the like. - In some implementations, the video analytics applied by
video analytics platform 310 may include people recognition, safety alert generation, license plate recognition, augmented reality, post-event analysis, crowd counting, cross-camera tracking, incident detection, wide-spectrum imagery, object recognition, video indexing, traffic monitoring, footfall traffic determination, left object detection, suspicious behavior detection, or the like. In some implementations,video analytics platform 310 may generate a user interface that includes the real-time safety information, the security information, the operations information, or the marketing information, and may provide the user interface to user device 340. User device 340 may display the user interface to a user of user device 340. - In some implementations, the
video analytics platform 310 may communicate withdatabases 360 to obtain information stored by thedatabases 360. For example, thedatabases 360 may include one or more databases that store information about one or more objects or entities, such as information relating to attributes of objects including dimensions of objects. In some cases one or more of thedatabases 360 may be external to thesystem 300. In other cases one or more of thedatabases 360 may be included in thesystem 300. -
Video management system 320 may include one or more devices capable of receiving, generating, storing, processing, and/or providing information, such as information described herein. For example, video management system 320 may include a computing device, such as a server, a desktop computer, a laptop computer, a tablet computer, a handheld computer, one or more VMs provided in a cloud computing environment, or a similar device. In some implementations, video management system 320 may be associated with a company that receives, stores, processes, manages, and/or collects information received by video cameras 330. In some implementations, video management systems 320 may communicate with video analytics platform 310 via network 370. -
Video camera 330 may include a device capable of receiving, generating, storing, processing, and/or providing video information, audio information, and/or image information. For example, video camera 330 may include a photographic camera, a video camera, a microphone, or a similar device. In some implementations, video camera 330 may include a PTZ video camera. In some implementations, video camera 330 may communicate with video analytics platform 310 via network 370. - User device 340 may include a device capable of receiving, generating, storing, processing, and/or providing information, such as information described herein. For example, user device 340 may include a computing device, such as a desktop computer, a laptop computer, a tablet computer, a handheld computer, a smart phone, a radiotelephone, or a similar device. In some implementations, user device 340 may communicate with
video analytics platform 310 via network 370. -
Network 370 may include one or more wired and/or wireless networks. For example, network 370 may include a cellular network, a public land mobile network (“PLMN”), a local area network (“LAN”), a wide area network (“WAN”), a metropolitan area network (“MAN”), a telephone network (e.g., the Public Switched Telephone Network (“PSTN”)), an ad hoc network, an intranet, the Internet, a fiber optic-based network, a cloud computing network, and/or a combination of these or other types of networks. -
FIG. 4 is a flowchart of an example process 400 for automatically calibrating a video camera. In some implementations, the process 400 may be carried out by the devices and systems described herein, including computing system 300 depicted in FIG. 3. Although the flowchart depicts the various stages of the process 400 occurring in a particular order, certain stages may in some implementations be performed in parallel or in a different order than what is depicted in the example process 400 of FIG. 4. - At step 402, the system receives video information including a video signal from a video camera. The video signal may be a signal showing a two-dimensional (2D) scene of an environment from a first field of view of a video camera that captured the video signal.
- At
step 404, the system identifies (i) two or more vertical lines that the video signal shows in the 2D scene, (ii) two or more horizontal lines that the video signal shows in the 2D scene and are orthogonal to the two or more vertical lines, and (iii) one or more objects that the video signal shows in the 2D scene. - In some implementations the system identifies the two or more vertical lines and two or more horizontal lines that the video signal shows in the 2D scene by applying a canny edge detector operator to the 2D scene to detect one or more edges in the scene, and applying a Hough Transform to select vertical and horizontal lines from the detected edges. In some cases a vertical line may not appear vertical in the 2D scene captured by the camera.
- In some implementations the identified one or more objects may include stationary objects, such as parked cars, road signs, or a picnic bench. The identified one or more objects may also include one or more moving objects, such as people walking or moving cars.
- At
step 406 the system determines a height of a first vertical line in the 2D scene. The system determines the height of a first vertical line in the 2D scene based on characteristics of the identified one or more objects. In some implementations the determined height of the vertical line includes a determined height of a vertical line that is different from the one or more identified vertical lines, as described with reference to step 404. In other implementations the determined height of the vertical line includes a determined height of a vertical line that is among the one or more identified vertical lines. - In some implementations the system may determine a height of a first vertical line in the 2D scene by referencing an external database to determine a height of one or more of the identified objects. For example, the system may identify a car of a particular make and model as one of the objects in the 2D scene, and may reference an external database that stores information relating to dimensions of vehicles to determine a height of the car, and in turn determine a height of a vertical line in the 2D scene. Other characteristics may include other dimensions of one or more of the identified objects.
- In other implementations the system may determine a height of a vertical line in the 2D scene by referencing an external database to determine a probability distribution of an expected height of one or more of the identified objects. For example, the system may identify a person as one of the objects in the 2D scene, and may reference an external database that stores information relating to distributions of the heights of people to determine an expected height of the person, and in turn a height of a vertical line in the 2D scene.
- In further implementations the system may determine a height of a vertical line in the 2D scene by inferring a height of one or more of the identified objects based on video camera settings, where video camera settings may include one or more of (i) video camera installation angle, (ii) video camera resolution, or (iii) video camera field of view.
- In further implementations the system may determine a height of a vertical line in the 2D scene by determining a direction an object is moving relative to the video camera. For example, the system may identify a moving object as a person walking towards or away from the video camera, and may use known or accessed information about an expected height of a person together with the direction in which the person is moving to determine the height of one or more vertical lines in the 2D scene.
- At
step 408, the system calibrates the video camera. Calibrating the video camera includes determining intrinsic and extrinsic video camera parameters, where intrinsic camera parameters include (i) focal length, (ii) principal points, (iii) aspect ratio and (iv) skew, and extrinsic parameters include (i) camera height, (ii) pan, (iii) tilt and (iv) roll. In some implementations the skew intrinsic camera parameter is assumed to be zero. In some implementations the aspect ratio intrinsic camera parameter is assumed to be equal to one. The calibrated video camera may be used for a variety of tasks, including correcting optical distortion artifacts, estimating the distance of an object from the video camera, or measuring the size of objects in an image captured by the video camera. Such tasks may be used in applications such as machine vision to detect and measure objects, or in robotics, navigation systems or 3D scene reconstruction for augmented reality systems. - The system determines the intrinsic and extrinsic video camera parameters to calibrate the video camera based on (i) the identified two or more vertical lines, (ii) the identified two or more horizontal lines, and (iii) the determined height of the vertical line in the 2D scene, as determined in
steps FIG. 5 . -
FIG. 5 is a flowchart of anexample process 500 for determining intrinsic and extrinsic video camera parameters. In some implementations, theprocess 500 may be carried out by the devices and systems described herein, includingcomputing system 300 depicted inFIG. 3 . Although the flowchart depicts the various stages of theprocess 500 occurring in a particular order, certain stages may in some implementations be performed in parallel or in a different order than what is depicted in theexample process 500 ofFIG. 5 . - At
step 502, the system calculates three vanishing points, including (i) a vertical vanishing point and (ii) two horizontal vanishing points. Each vanishing point includes a point at which receding identified vertical or horizontal lines viewed in perspective appear to converge in the 2D scene. The system calculates the three vanishing points based on (i) the identified vertical and horizontal lines, and (ii) dimensions of the 2D video camera field of view. - For example, the system may calculate a vertical vanishing point Vy using the two or more vertical lines L1, . . . , Ln described above with reference to step 404 of
FIG. 4. Each vertical line may be represented by two respective points A and B, e.g., L1_A and L1_B corresponding to line L1. The two or more vertical lines form an equation system Ax=b, which can be solved to determine the position x of the vanishing point Vy. The matrix A and vector b are given by equation (1) below. -
Ax = b, where row i of A is [v_iB - v_iA, u_iA - u_iB] and entry i of b is u_iA·v_iB - u_iB·v_iA, with (u_iA, v_iA) and (u_iB, v_iB) the image coordinates of the points Li_A and Li_B on vertical line Li  (1)
- The system may calculate the horizontal vanishing point VX in a similar way. In order to calculate the second horizontal vanishing point VZ, the system may use the dimensions of the 2D video camera field of view, e.g., the width of the image shown by the video camera and the height of the image shown by the video. For example, the system may determine that an orthocenter of a triangle with three orthogonal vanishing points as vertices is a principle point and, assuming that the principle point is the image center, e.g., [u=(image width)/2, v=(image height)/2], the system may derive the vanishing point VZ, from the principle point and the previously calculated vanishing points VX, Vy.
- At
step 504, the system calculates a horizon line and roll angle. The system calculates the horizon line and roll angle based on the two horizontal vanishing points. For example, the horizon line may be determined by the horizontal vanishing points VX, VZ, where the roll angle is represented by the angle between the calculated horizon line and the horizontal axis of the image, as given by equation (2) below. -
roll(γ) = arctan((v_VZ - v_VX) / (u_VZ - u_VX)), where (u_VX, v_VX) and (u_VZ, v_VZ) are the image coordinates of VX and VZ  (2)
- At
step 506, the system calculates a height of the video camera. The system calculates the height of the video camera HC based on the determined height of the vertical line in the 2D scene. For example, if the height h of a vertical line is known, the system may calculate the camera height HC using equation (3) below. -
HC = h / (1 - (d(C,D)·d(B,Vy)) / (d(B,D)·d(C,Vy)))  (3)
- In equation (3), d(B,Vy) represents a distance between the vanishing point Vy and the lower point of the known vertical line, d(C,Vy) represents a distance between the vanishing point Vy and the upper point of the known vertical line, d(C,D) represents a distance between the upper point of the known vertical line and the horizon line, and d(B,D) represents a distance between the lower point of the known vertical line and the horizon line. For example, as illustrated in
FIG. 6, if the known height of the vertical line is a known height h of a person shown in the 2D scene, the points C and B may represent the top of the person as shown in the 2D scene and the bottom of the person in the 2D scene. - At
step 508, the system calculates a tilt and focal length. The system calculates the tilt and focal length based on the calculated vanishing points and roll angle. For example, the system may calculate the video camera focal length using equation (4) below. -
- The system may calculate the video camera tilt using equation (5) below.
-
- In both equations (4) and (5), P represents the principal point of the image shown by the video camera.
- At step 510, the system calculates vertical and horizontal angles of view. The system calculates the vertical and horizontal angles of view based on the calculated height of the video camera and focal length. For example, the system may determine the vertical angle of view using equation (6) below. The system may determine the horizontal angle of view using equation (7) below. -
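For a pinhole model with the focal length expressed in pixels, the angles of view depend only on the focal length and the image dimensions; these are the standard relations the text attributes to equations (6) and (7), sketched here with illustrative names:

```python
import math

def angles_of_view(focal_px, width, height):
    """Vertical and horizontal angles of view (radians) for a pinhole
    camera with focal length focal_px (pixels) and the given image size."""
    vertical = 2.0 * math.atan2(height / 2.0, focal_px)
    horizontal = 2.0 * math.atan2(width / 2.0, focal_px)
    return vertical, horizontal
```

For example, a 640x480 image with a 320-pixel focal length gives a 90° horizontal angle of view.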
- The system may use the determined intrinsic and extrinsic video camera parameters to automatically adjust physical settings of the video camera, including the pan-tilt-zoom settings of the video camera, the height of the video camera, or the location of the video camera. For example, the system may perform the process 500 to determine a current height of the video camera, as described above in step 506. If the determined height of the video camera in relation to the ground plane is higher or lower than an expected calibrated height, the system may automatically adjust the height to the calibrated height or generate an alert that informs a user of the video camera that the video camera height needs adjusting. As another example, the system may perform the process 500 to determine a current focal length of the video camera, as described above in step 508. If the determined focal length of the video camera is higher than a calibrated focal length, the system may automatically adjust the focal length, e.g., by using a wider angle lens, to lower the focal length to the calibrated focal length or generate an alert that informs a user of the video camera that the focal length needs adjusting. -
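The adjust-or-alert logic above amounts to comparing each re-estimated parameter against its calibrated value with some tolerance. A minimal sketch, assuming a generic adjustment handler and a text alert (both the names and the alert mechanism are illustrative, not part of the patent text):

```python
def check_parameter(name, measured, calibrated, tolerance, adjust=None):
    """Compare a re-estimated camera parameter (e.g. height or focal
    length from process 500) against its calibrated value; auto-adjust
    when a handler is available, otherwise raise an alert string."""
    drift = measured - calibrated
    if abs(drift) <= tolerance:
        return "ok"
    if adjust is not None:
        adjust(calibrated)          # e.g. drive the mount or zoom back
        return "adjusted"
    return f"alert: {name} off by {drift:+.2f}; manual adjustment needed"
```

Running the calibration periodically and feeding each parameter through such a check catches cameras that have been bumped, re-mounted, or re-zoomed since installation.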
FIG. 6 depicts an example image 600 illustrating how to calculate a height of the video camera, as described above with reference to step 506 of FIG. 5. The image 600 includes a 2D scene 602 of an environment that was captured by the video camera from a first field of view. Included in the scene 602 is an image of a person 604. The system may model the person 604 as a vertical line and determine the height h of the vertical line modeling the person 604 using one or more methods as described above with reference to FIG. 2 and FIG. 4. The system may calculate the height of the video camera HC using the determined height h. For example, the system may calculate the camera height HC using equation (3) above, which is repeated below for clarity. -
- In equation (3), d(B,Vy) represents a distance between the vanishing point Vy and the position B at which the person 604 is standing on the ground plane. Similarly, d(C,Vy) represents a distance between the vanishing point Vy and the position C of the top of the person's head. Furthermore, d(C,D) represents a distance between the position of the top of the person's head and the point D at which the dotted line meets the horizon line. Similarly, d(B,D) represents a distance between the position at which the person 604 is standing on the ground plane and the point D on the horizon line. -
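The four distances above are exactly the ingredients of the single-view-metrology cross ratio: on the image line through the person, the points B, C, D, Vy correspond to world heights 0, h, HC, and infinity, so the cross ratio {B, C; D, Vy} equals HC/(HC − h). Since the patent's equation (3) is in a figure not reproduced here, the following stdlib-Python sketch implements that standard relation rather than the literal patented formula:

```python
import math

def camera_height(h, b, c, d, vy):
    """Camera height from a known vertical extent h (e.g. a person),
    given image points: b (bottom), c (top), d (intersection of the
    vertical line with the horizon), vy (vertical vanishing point),
    all lying on one image line."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    # Cross ratio {B, C; D, Vy} = HC / (HC - h), because D images the
    # point at camera height and Vy the point at infinity.
    cr = (dist(b, d) * dist(c, vy)) / (dist(b, vy) * dist(c, d))
    return h * cr / (cr - 1.0)
```

Solving HC/(HC − h) = cr for HC gives the return expression; the estimate is invariant to any 1D projective distortion of the line, which is what makes it usable on an uncalibrated image.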
FIG. 7 illustrates a schematic diagram of an exemplary generic computer system 700. The system 700 can be used for the operations described in association with the processes described above. The system 700 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, mobile devices and other appropriate computers. The components shown here, their connections and relationships, and their functions, are exemplary only, and do not limit implementations of the inventions described and/or claimed in this document. - The system 700 includes a processor 710, a memory 720, a storage device 730, and an input/output device 740. Each of the components is interconnected using a system bus 750. The processor 710 may be enabled to process instructions for execution within the system 700. In one implementation, the processor 710 is a single-threaded processor. In another implementation, the processor 710 is a multi-threaded processor. The processor 710 may be enabled to process instructions stored in the memory 720 or on the storage device 730 to display graphical information for a user interface on the input/output device 740. - The
memory 720 stores information within the system 700. In one implementation, the memory 720 is a computer-readable medium. In one implementation, the memory 720 is a volatile memory unit. In another implementation, the memory 720 is a non-volatile memory unit. - The storage device 730 may be enabled to provide mass storage for the system 700. In one implementation, the storage device 730 is a computer-readable medium. In various different implementations, the storage device 730 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device. - The input/
output device 740 provides input/output operations for the system 700. In one implementation, the input/output device 740 includes a keyboard and/or pointing device. In another implementation, the input/output device 740 includes a display unit for displaying graphical user interfaces. - Embodiments and all of the functional operations described in this specification may be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments may be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium may be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus may include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus.
- A computer program (also known as a program, software, software application, script, or code) may be written in any form of programming language, including compiled or interpreted languages, and it may be deployed in any form, including as a stand alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program may be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program may be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
- The processes and logic flows described in this specification may be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows may also be performed by, and apparatus may also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
- Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both.
- The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer may be embedded in another device, e.g., a tablet computer, a mobile telephone, a personal digital assistant (PDA), a mobile audio player, a Global Positioning System (GPS) receiver, to name just a few. Computer readable media suitable for storing computer program instructions and data include all forms of non volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, special purpose logic circuitry.
- To provide for interaction with a user, embodiments may be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user may provide input to the computer. Other kinds of devices may be used to provide for interaction with a user as well; for example, feedback provided to the user may be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic, speech, or tactile input.
- Embodiments may be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user may interact with an implementation, or any combination of one or more such back end, middleware, or front end components. The components of the system may be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet.
- The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
- While this specification contains many specifics, these should not be construed as limitations on the scope of the disclosure or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
- Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems may generally be integrated together in a single software product or packaged into multiple software products.
- In each instance where an HTML file is mentioned, other file types or formats may be substituted. For instance, an HTML file may be replaced by an XML, JSON, plain text, or other types of files. Moreover, where a table or hash table is mentioned, other data structures (such as spreadsheets, relational databases, or structured files) may be used.
- Thus, particular embodiments have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims may be performed in a different order and still achieve desirable results.
Claims (20)
1. A computer-implemented method comprising:
receiving, at a computing system, video information comprising a video signal that shows a two-dimensional (2D) scene of an environment from a first field of view of a video camera that captured the video signal;
identifying, by the computing system, (i) two or more vertical lines that the video signal shows in the 2D scene, (ii) two or more horizontal lines that the video signal shows in the 2D scene and are orthogonal to the two or more vertical lines, and (iii) one or more objects that the video signal shows in the 2D scene;
based on characteristics of the identified one or more objects, determining a height of a first vertical line in the 2D scene; and
based on (i) the identified two or more vertical lines, (ii) the identified two or more horizontal lines, and (iii) the determined height of the first vertical line shown in the 2D scene, calibrating the video camera.
2. The method of claim 1 , wherein determining the height of the first vertical line that the video signal shows in the 2D scene comprises referencing an external database to determine a height of one or more of the identified objects.
3. The method of claim 1 , wherein determining the height of the first vertical line that the video signal shows in the 2D scene comprises inferring a height of one or more of the identified objects based on video camera settings.
4. The method of claim 3 , wherein video camera settings comprise one or more of (i) video camera installation angle, (ii) video camera resolution, (iii) video camera field of view.
5. The method of claim 1 , wherein determining the height of the first vertical line that the video signal shows in the 2D scene comprises referencing an external database to determine a probability distribution of an expected height of one or more of the identified objects.
6. The method of claim 1 , wherein the identified one or more objects that the video signal shows in the 2D scene comprise stationary objects.
7. The method of claim 1 , wherein the identified one or more objects that the video signal shows in the 2D scene comprise moving objects.
8. The method of claim 7 , wherein determining the height of a first vertical line that the video signal shows in the 2D scene comprises determining a direction an object is moving relative to the video camera.
9. The method of claim 1 , wherein calibrating the video camera comprises determining intrinsic and extrinsic video camera parameters.
10. The method of claim 9 , wherein the intrinsic camera parameters comprise (i) focal length, (ii) principal points, (iii) aspect ratio and (iv) skew, and
wherein the extrinsic parameters comprise (i) camera height, (ii) pan, (iii) tilt and (iv) roll.
11. The method of claim 10 , wherein determining the intrinsic and extrinsic video camera parameters comprises:
based on (i) the identified vertical and horizontal lines, and (ii) dimensions of the 2D video camera field of view, calculating three vanishing points, wherein the three vanishing points comprise (i) a vertical vanishing point and (ii) two horizontal vanishing points;
based on the two horizontal vanishing points, calculating a horizon line and roll angle;
based on the determined height of the first vertical line that the video shows in the 2D scene, calculating a height of the video camera;
based on the calculated vanishing points and roll angle, calculating a tilt and focal length;
based on the calculated height of the video camera and focal length, calculating vertical and horizontal angles of view.
12. The method of claim 11 , wherein each vanishing point comprises a point at which receding identified vertical or horizontal lines viewed in perspective appear to converge in the 2D scene.
13. The method of claim 11 , wherein the skew intrinsic camera parameter is assumed to be zero.
14. The method of claim 11 , wherein the aspect ratio intrinsic camera parameter is assumed to be equal to one.
15. The method of claim 1 , wherein identifying two or more vertical lines and two or more horizontal lines that the video signal shows in the 2D scene comprises:
applying a canny edge detector operator to the 2D scene to detect one or more edges in the scene; and
applying a Hough Transform to select vertical and horizontal lines from the detected edges.
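The edge-then-Hough pipeline recited in claim 15 is typically built on a library such as OpenCV (cv2.Canny for the edge map, followed by cv2.HoughLinesP for line segments). The subsequent selection step, keeping only near-vertical and near-horizontal segments, can be sketched in plain Python; the names and the 10° tolerance are illustrative assumptions, not from the claim:

```python
import math

def select_axis_aligned(segments, tol_deg=10.0):
    """Split Hough line segments (x1, y1, x2, y2) into near-vertical
    and near-horizontal sets, discarding oblique segments."""
    vertical, horizontal = [], []
    for x1, y1, x2, y2 in segments:
        # Segment orientation folded into [0, 180) degrees.
        angle = abs(math.degrees(math.atan2(y2 - y1, x2 - x1))) % 180.0
        if angle <= tol_deg or angle >= 180.0 - tol_deg:
            horizontal.append((x1, y1, x2, y2))
        elif abs(angle - 90.0) <= tol_deg:
            vertical.append((x1, y1, x2, y2))
    return vertical, horizontal
```

The resulting vertical and horizontal sets feed the vanishing-point estimation of claim 11.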
16. A system comprising:
one or more computers; and
one or more computer-readable media coupled to the one or more computers having instructions stored thereon which, when executed by the one or more computers, cause the one or more computers to perform operations comprising:
receiving video information comprising a video signal that shows a two-dimensional (2D) scene of an environment from a first field of view of a video camera that captured the video signal;
identifying (i) two or more vertical lines that the video signal shows in the 2D scene, (ii) two or more horizontal lines that the video signal shows in the 2D scene and are orthogonal to the two or more vertical lines, and (iii) one or more objects that the video signal shows in the 2D scene;
based on characteristics of the identified one or more objects, determining a height of a first vertical line in the 2D scene; and
based on (i) the identified two or more vertical lines, (ii) the identified two or more horizontal lines, and (iii) the determined height of the first vertical line in the 2D scene, calibrating the video camera.
17. The system of claim 16 , wherein calibrating the video camera comprises determining intrinsic and extrinsic video camera parameters.
18. The system of claim 17 , wherein the intrinsic camera parameters comprise (i) focal length, (ii) principal points, (iii) aspect ratio and (iv) skew, and the extrinsic parameters comprise (i) camera height, (ii) pan, (iii) tilt and (iv) roll.
19. The system of claim 16 , wherein determining the intrinsic and extrinsic video camera parameters comprises:
based on (i) the identified vertical and horizontal lines, and (ii) dimensions of the 2D video camera field of view, calculating three vanishing points, wherein the three vanishing points comprise (i) a vertical vanishing point and (ii) two horizontal vanishing points;
based on the two horizontal vanishing points, calculating a horizon line and roll angle;
based on the determined height of the first vertical line in the 2D scene, calculating a height of the video camera;
based on the calculated vanishing points and roll angle, calculating a tilt and focal length;
based on the calculated height of the video camera and focal length, calculating vertical and horizontal angles of view.
20. One or more non-transitory computer storage media encoded with a computer program, the program comprising instructions that when executed by data processing apparatus cause the data processing apparatus to perform operations comprising:
receiving video information comprising a video signal that shows a two-dimensional (2D) scene of an environment from a first field of view of a video camera that captured the video signal;
identifying (i) two or more vertical lines that the video signal shows in the 2D scene, (ii) two or more horizontal lines that the video signal shows in the 2D scene and are orthogonal to the two or more vertical lines, and (iii) one or more objects that the video signal shows in the 2D scene;
based on characteristics of the identified one or more objects, determining a height of a first vertical line in the 2D scene; and
based on (i) the identified two or more vertical lines, (ii) the identified two or more horizontal lines, and (iii) the determined height of the first vertical line in the 2D scene, calibrating the video camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/169,035 US20160350921A1 (en) | 2015-05-29 | 2016-05-31 | Automatic camera calibration |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562167930P | 2015-05-29 | 2015-05-29 | |
US15/169,035 US20160350921A1 (en) | 2015-05-29 | 2016-05-31 | Automatic camera calibration |
Publications (1)
Publication Number | Publication Date |
---|---|
US20160350921A1 true US20160350921A1 (en) | 2016-12-01 |
Family
ID=56098032
Family Applications (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/167,701 Active 2036-08-13 US10007849B2 (en) | 2015-05-29 | 2016-05-27 | Predicting external events from digital video content |
US15/167,576 Active 2036-08-18 US9996749B2 (en) | 2015-05-29 | 2016-05-27 | Detecting contextual trends in digital video content |
US15/169,111 Active US10354144B2 (en) | 2015-05-29 | 2016-05-31 | Video camera scene translation |
US15/169,113 Active 2036-10-14 US10055646B2 (en) | 2015-05-29 | 2016-05-31 | Local caching for object recognition |
US15/169,035 Abandoned US20160350921A1 (en) | 2015-05-29 | 2016-05-31 | Automatic camera calibration |
US16/003,518 Active US10402659B2 (en) | 2015-05-29 | 2018-06-08 | Predicting external events from digital video content |
US16/058,089 Active US10402660B2 (en) | 2015-05-29 | 2018-08-08 | Local caching for object recognition |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/167,701 Active 2036-08-13 US10007849B2 (en) | 2015-05-29 | 2016-05-27 | Predicting external events from digital video content |
US15/167,576 Active 2036-08-18 US9996749B2 (en) | 2015-05-29 | 2016-05-27 | Detecting contextual trends in digital video content |
US15/169,111 Active US10354144B2 (en) | 2015-05-29 | 2016-05-31 | Video camera scene translation |
US15/169,113 Active 2036-10-14 US10055646B2 (en) | 2015-05-29 | 2016-05-31 | Local caching for object recognition |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/003,518 Active US10402659B2 (en) | 2015-05-29 | 2018-06-08 | Predicting external events from digital video content |
US16/058,089 Active US10402660B2 (en) | 2015-05-29 | 2018-08-08 | Local caching for object recognition |
Country Status (5)
Country | Link |
---|---|
US (7) | US10007849B2 (en) |
EP (3) | EP3098755B1 (en) |
AU (5) | AU2016203594A1 (en) |
CA (3) | CA2931748C (en) |
SG (3) | SG10201604367VA (en) |
Families Citing this family (68)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130339859A1 (en) | 2012-06-15 | 2013-12-19 | Muzik LLC | Interactive networked headphones |
US10558848B2 (en) | 2017-10-05 | 2020-02-11 | Duelight Llc | System, method, and computer program for capturing an image with correct skin tone exposure |
AU2015224526B2 (en) * | 2014-09-11 | 2020-04-30 | Iomniscient Pty Ltd | An image management system |
US10146797B2 (en) * | 2015-05-29 | 2018-12-04 | Accenture Global Services Limited | Face recognition image data cache |
US10007849B2 (en) | 2015-05-29 | 2018-06-26 | Accenture Global Solutions Limited | Predicting external events from digital video content |
US9672445B2 (en) * | 2015-08-03 | 2017-06-06 | Yahoo! Inc. | Computerized method and system for automated determination of high quality digital content |
CN105718887A (en) * | 2016-01-21 | 2016-06-29 | 惠州Tcl移动通信有限公司 | Shooting method and shooting system capable of realizing dynamic capturing of human faces based on mobile terminal |
EP3671633A1 (en) | 2016-02-26 | 2020-06-24 | NEC Corporation | Face recognition system, face recognition method, and storage medium |
US20170374395A1 (en) * | 2016-06-28 | 2017-12-28 | The United States Of America As Represented By The Secretary Of The Navy | Video management systems (vms) |
CN107666590B (en) * | 2016-07-29 | 2020-01-17 | 华为终端有限公司 | Target monitoring method, camera, controller and target monitoring system |
KR20180066722A (en) * | 2016-12-09 | 2018-06-19 | 한국전자통신연구원 | Mobile based personal content protection apparatus and method therefor |
CN108616718B (en) * | 2016-12-13 | 2021-02-26 | 杭州海康威视系统技术有限公司 | Monitoring display method, device and system |
WO2018110165A1 (en) * | 2016-12-15 | 2018-06-21 | 日本電気株式会社 | Information processing device, information processing method, and information processing program |
US10924670B2 (en) | 2017-04-14 | 2021-02-16 | Yang Liu | System and apparatus for co-registration and correlation between multi-modal imagery and method for same |
US10360481B2 (en) * | 2017-05-18 | 2019-07-23 | At&T Intellectual Property I, L.P. | Unconstrained event monitoring via a network of drones |
FR3067841B1 (en) * | 2017-06-14 | 2019-07-05 | Airbus Group Sas | SYSTEM AND METHOD FOR LOCATING IMAGE PROCESSING |
CN108875471A (en) * | 2017-06-19 | 2018-11-23 | 北京旷视科技有限公司 | The method, apparatus and computer storage medium of facial image bottom library registration |
JP6948175B2 (en) * | 2017-07-06 | 2021-10-13 | キヤノン株式会社 | Image processing device and its control method |
CN113205021A (en) * | 2017-07-10 | 2021-08-03 | 深圳市海清视讯科技有限公司 | Camera and face information collection method based on camera face recognition |
US10623680B1 (en) * | 2017-07-11 | 2020-04-14 | Equinix, Inc. | Data center viewing system |
US10726723B1 (en) | 2017-08-25 | 2020-07-28 | Objectvideo Labs, Llc | Parking lot use monitoring for small businesses |
US10839257B2 (en) | 2017-08-30 | 2020-11-17 | Qualcomm Incorporated | Prioritizing objects for object recognition |
US11087558B1 (en) | 2017-09-29 | 2021-08-10 | Apple Inc. | Managing augmented reality content associated with a physical location |
US20190122031A1 (en) * | 2017-10-25 | 2019-04-25 | Interdigital Ce Patent Holdings | Devices, systems and methods for privacy-preserving security |
US10732708B1 (en) * | 2017-11-21 | 2020-08-04 | Amazon Technologies, Inc. | Disambiguation of virtual reality information using multi-modal data including speech |
US11232645B1 (en) | 2017-11-21 | 2022-01-25 | Amazon Technologies, Inc. | Virtual spaces as a platform |
CN108875516B (en) * | 2017-12-12 | 2021-04-27 | 北京旷视科技有限公司 | Face recognition method, device, system, storage medium and electronic equipment |
US10970923B1 (en) * | 2018-03-13 | 2021-04-06 | State Farm Mutual Automobile Insurance Company | Method and system for virtual area visualization |
US10938649B2 (en) * | 2018-03-19 | 2021-03-02 | Arlo Technologies, Inc. | Adjusting parameters in a network-connected security system based on content analysis |
CN110324528A (en) * | 2018-03-28 | 2019-10-11 | 富泰华工业(深圳)有限公司 | Photographic device, image processing system and method |
US10713476B2 (en) * | 2018-05-03 | 2020-07-14 | Royal Caribbean Cruises Ltd. | High throughput passenger identification in portal monitoring |
FR3080938B1 (en) * | 2018-05-03 | 2021-05-07 | Royal Caribbean Cruises Ltd | HIGH-SPEED IDENTIFICATION OF PASSENGERS IN GATE MONITORING |
US11544965B1 (en) | 2018-05-10 | 2023-01-03 | Wicket, Llc | System and method for access control using a plurality of images |
CN108694385 (en) * | 2018-05-14 | 2018-10-23 | 深圳市科发智能技术有限公司 | High-speed face recognition method, system and device |
US11196669B2 (en) * | 2018-05-17 | 2021-12-07 | At&T Intellectual Property I, L.P. | Network routing of media streams based upon semantic contents |
US11157524B2 (en) | 2018-05-18 | 2021-10-26 | At&T Intellectual Property I, L.P. | Automated learning of anomalies in media streams with external feed labels |
CN109190466 (en) * | 2018-07-26 | 2019-01-11 | 高新兴科技集团股份有限公司 | Method and apparatus for real-time personnel positioning |
GB2577689B (en) | 2018-10-01 | 2023-03-22 | Digital Barriers Services Ltd | Video surveillance and object recognition |
US11217006B2 (en) * | 2018-10-29 | 2022-01-04 | Verizon Patent And Licensing Inc. | Methods and systems for performing 3D simulation based on a 2D video image |
US11312594B2 (en) | 2018-11-09 | 2022-04-26 | Otis Elevator Company | Conveyance system video analytics |
JP6781413B2 (en) * | 2018-11-21 | 2020-11-04 | 日本電気株式会社 | Information processing device |
JP6852141B2 (en) * | 2018-11-29 | 2021-03-31 | キヤノン株式会社 | Information processing device, imaging device, control method of information processing device, and program |
US20210303853A1 (en) * | 2018-12-18 | 2021-09-30 | Rovi Guides, Inc. | Systems and methods for automated tracking on a handheld device using a remote camera |
US20200218940A1 (en) * | 2019-01-08 | 2020-07-09 | International Business Machines Corporation | Creating and managing machine learning models in a shared network environment |
CN109978914B (en) * | 2019-03-07 | 2021-06-08 | 北京旷视科技有限公司 | Face tracking method and device |
CN109993171B (en) * | 2019-03-12 | 2022-05-03 | 电子科技大学 | License plate character segmentation method based on multiple templates and multiple proportions |
US10997414B2 (en) | 2019-03-29 | 2021-05-04 | Toshiba Global Commerce Solutions Holdings Corporation | Methods and systems providing actions related to recognized objects in video data to administrators of a retail information processing system and related articles of manufacture |
WO2020234737A1 (en) * | 2019-05-18 | 2020-11-26 | Looplearn Pty Ltd | Localised, loop-based self-learning for recognising individuals at locations |
US11048948B2 (en) * | 2019-06-10 | 2021-06-29 | City University Of Hong Kong | System and method for counting objects |
CN110321857B (en) * | 2019-07-08 | 2021-08-17 | 苏州万店掌网络科技有限公司 | Accurate customer group analysis method based on edge computing technology |
US11127023B2 (en) * | 2019-07-29 | 2021-09-21 | Capital One Services, LLC | System for predicting optimal operating hours for merchants |
US11900679B2 (en) | 2019-11-26 | 2024-02-13 | Objectvideo Labs, Llc | Image-based abnormal event detection |
US11580741B2 (en) * | 2019-12-30 | 2023-02-14 | Industry Academy Cooperation Foundation Of Sejong University | Method and apparatus for detecting abnormal objects in video |
US11024043B1 (en) | 2020-03-27 | 2021-06-01 | Abraham Othman | System and method for visually tracking persons and imputing demographic and sentiment data |
CN111554064A (en) * | 2020-03-31 | 2020-08-18 | 苏州科腾软件开发有限公司 | Remote household monitoring alarm system based on 5G network |
US11657614B2 (en) | 2020-06-03 | 2023-05-23 | Apple Inc. | Camera and visitor user interfaces |
US11589010B2 (en) | 2020-06-03 | 2023-02-21 | Apple Inc. | Camera and visitor user interfaces |
US11368991B2 (en) | 2020-06-16 | 2022-06-21 | At&T Intellectual Property I, L.P. | Facilitation of prioritization of accessibility of media |
US11233979B2 (en) | 2020-06-18 | 2022-01-25 | At&T Intellectual Property I, L.P. | Facilitation of collaborative monitoring of an event |
US11037443B1 (en) | 2020-06-26 | 2021-06-15 | At&T Intellectual Property I, L.P. | Facilitation of collaborative vehicle warnings |
US11184517B1 (en) | 2020-06-26 | 2021-11-23 | At&T Intellectual Property I, L.P. | Facilitation of collaborative camera field of view mapping |
US11411757B2 (en) | 2020-06-26 | 2022-08-09 | At&T Intellectual Property I, L.P. | Facilitation of predictive assisted access to content |
US11356349B2 (en) | 2020-07-17 | 2022-06-07 | At&T Intellectual Property I, L.P. | Adaptive resource allocation to facilitate device mobility and management of uncertainty in communications |
US11768082B2 (en) | 2020-07-20 | 2023-09-26 | At&T Intellectual Property I, L.P. | Facilitation of predictive simulation of planned environment |
CN114140838A (en) * | 2020-08-14 | 2022-03-04 | 华为技术有限公司 | Image management method, device, terminal equipment and system |
CN112541952A (en) * | 2020-12-08 | 2021-03-23 | 北京精英路通科技有限公司 | Parking scene camera calibration method and device, computer equipment and storage medium |
US11132552B1 (en) * | 2021-02-12 | 2021-09-28 | ShipIn Systems Inc. | System and method for bandwidth reduction and communication of visual events |
CN113473067A (en) * | 2021-06-11 | 2021-10-01 | 胡明刚 | Tracking type high-definition video conference terminal equipment |
Family Cites Families (75)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0785280B2 (en) | 1992-08-04 | 1995-09-13 | タカタ株式会社 | Collision prediction judgment system by neural network |
US7418431B1 (en) * | 1999-09-30 | 2008-08-26 | Fair Isaac Corporation | Webstation: configurable web-based workstation for reason driven data analysis |
US6658136B1 (en) * | 1999-12-06 | 2003-12-02 | Microsoft Corporation | System and process for locating and tracking a person or object in a scene using a series of range images |
US9892606B2 (en) * | 2001-11-15 | 2018-02-13 | Avigilon Fortress Corporation | Video surveillance system employing video primitives |
US8564661B2 (en) | 2000-10-24 | 2013-10-22 | Objectvideo, Inc. | Video analytic rule detection system and method |
US6970083B2 (en) * | 2001-10-09 | 2005-11-29 | Objectvideo, Inc. | Video tripwire |
US6696945B1 (en) * | 2001-10-09 | 2004-02-24 | Diamondback Vision, Inc. | Video tripwire |
JP2003141528A (en) * | 2001-10-29 | 2003-05-16 | Tokuwa System Kk | Figure information digitizing system |
EP1472870A4 (en) | 2002-02-06 | 2006-11-29 | Nice Systems Ltd | Method and apparatus for video frame sequence-based object tracking |
JP2003346149A (en) * | 2002-05-24 | 2003-12-05 | Omron Corp | Face collating device and bioinformation collating device |
US20050134685A1 (en) * | 2003-12-22 | 2005-06-23 | Objectvideo, Inc. | Master-slave automated video-based surveillance system |
US20040258281A1 (en) * | 2003-05-01 | 2004-12-23 | David Delgrosso | System and method for preventing identity fraud |
US20050063569A1 (en) | 2003-06-13 | 2005-03-24 | Charles Colbert | Method and apparatus for face recognition |
US7627171B2 (en) * | 2003-07-03 | 2009-12-01 | Videoiq, Inc. | Methods and systems for detecting objects of interest in spatio-temporal signals |
US20100013917A1 (en) * | 2003-08-12 | 2010-01-21 | Keith Hanna | Method and system for performing surveillance |
EP1566788A3 (en) | 2004-01-23 | 2017-11-22 | Sony United Kingdom Limited | Display |
CA2564035A1 (en) | 2004-04-30 | 2005-11-17 | Utc Fire & Security Corp. | Camera tamper detection |
US9526421B2 (en) * | 2005-03-11 | 2016-12-27 | Nrv-Wellness, Llc | Mobile wireless customizable health and condition monitor |
WO2006105655A1 (en) * | 2005-04-06 | 2006-10-12 | March Networks Corporation | Method and system for counting moving objects in a digital video stream |
US9240051B2 (en) | 2005-11-23 | 2016-01-19 | Avigilon Fortress Corporation | Object density estimation in video |
US7786874B2 (en) * | 2005-12-09 | 2010-08-31 | Samarion, Inc. | Methods for refining patient, staff and visitor profiles used in monitoring quality and performance at a healthcare facility |
US20070219654A1 (en) * | 2006-03-14 | 2007-09-20 | Viditotus Llc | Internet-based advertising via web camera search contests |
WO2007148219A2 (en) | 2006-06-23 | 2007-12-27 | Imax Corporation | Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition |
WO2008002630A2 (en) | 2006-06-26 | 2008-01-03 | University Of Southern California | Seamless image integration into 3d models |
US7667596B2 (en) | 2007-02-16 | 2010-02-23 | Panasonic Corporation | Method and system for scoring surveillance system footage |
US8619140B2 (en) * | 2007-07-30 | 2013-12-31 | International Business Machines Corporation | Automatic adjustment of area monitoring based on camera motion |
US20100299116A1 (en) | 2007-09-19 | 2010-11-25 | United Technologies Corporation | System and method for occupancy estimation |
DE102007053812A1 (en) * | 2007-11-12 | 2009-05-14 | Robert Bosch Gmbh | Video surveillance system configuration module, configuration module monitoring system, video surveillance system configuration process, and computer program |
US8005272B2 (en) * | 2008-01-03 | 2011-08-23 | International Business Machines Corporation | Digital life recorder implementing enhanced facial recognition subsystem for acquiring face glossary data |
US9584710B2 (en) * | 2008-02-28 | 2017-02-28 | Avigilon Analytics Corporation | Intelligent high resolution video system |
US8224029B2 (en) * | 2008-03-03 | 2012-07-17 | Videoiq, Inc. | Object matching for tracking, indexing, and search |
US8427552B2 (en) * | 2008-03-03 | 2013-04-23 | Videoiq, Inc. | Extending the operational lifetime of a hard-disk drive used in video data storage applications |
US20090296989A1 (en) | 2008-06-03 | 2009-12-03 | Siemens Corporate Research, Inc. | Method for Automatic Detection and Tracking of Multiple Objects |
US8959108B2 (en) | 2008-06-18 | 2015-02-17 | Zeitera, Llc | Distributed and tiered architecture for content search and content monitoring |
US20100329568A1 (en) | 2008-07-02 | 2010-12-30 | C-True Ltd. | Networked Face Recognition System |
EP2230629A3 (en) | 2008-07-16 | 2012-11-21 | Verint Systems Inc. | A system and method for capturing, storing, analyzing and displaying data relating to the movements of objects |
US8186393B2 (en) * | 2008-07-24 | 2012-05-29 | Deere & Company | Fluid coupler including valve arrangement for connecting intake conduit of sprayer to transfer conduit of nurse tank during refill operation |
US9520040B2 (en) * | 2008-11-21 | 2016-12-13 | Raytheon Company | System and method for real-time 3-D object tracking and alerting via networked sensors |
US8253564B2 (en) | 2009-02-19 | 2012-08-28 | Panasonic Corporation | Predicting a future location of a moving object observed by a surveillance device |
WO2010122128A1 (en) | 2009-04-22 | 2010-10-28 | Patrick Reilly | A cable system with selective device activation for a vehicle |
CN101577006B (en) | 2009-06-15 | 2015-03-04 | 北京中星微电子有限公司 | Loitering detection method and loitering detection system for video surveillance |
KR101665130B1 (en) | 2009-07-15 | 2016-10-25 | 삼성전자주식회사 | Apparatus and method for generating image including a plurality of persons |
US20120111911A1 (en) * | 2009-08-10 | 2012-05-10 | Bruce Rempe | Family Bicycle Rack for Pickup Truck |
WO2011028837A2 (en) | 2009-09-01 | 2011-03-10 | Prime Focus Vfx Services Ii Inc. | System and process for transforming two-dimensional images into three-dimensional images |
JP5427577B2 (en) * | 2009-12-04 | 2014-02-26 | パナソニック株式会社 | Display control apparatus and display image forming method |
US8442967B2 (en) * | 2010-02-04 | 2013-05-14 | Identix Incorporated | Operator-assisted iterative biometric search |
US20110205359A1 (en) * | 2010-02-19 | 2011-08-25 | Panasonic Corporation | Video surveillance system |
US9082278B2 (en) | 2010-03-19 | 2015-07-14 | University-Industry Cooperation Group Of Kyung Hee University | Surveillance system |
US10645344B2 (en) * | 2010-09-10 | 2020-05-05 | Avigilon Analytics Corporation | Video system with intelligent visual display |
US8890936B2 (en) * | 2010-10-12 | 2014-11-18 | Texas Instruments Incorporated | Utilizing depth information to create 3D tripwires in video |
US8953888B2 (en) * | 2011-02-10 | 2015-02-10 | Microsoft Corporation | Detecting and localizing multiple objects in images using probabilistic inference |
KR101066068B1 (en) * | 2011-03-22 | 2011-09-20 | (주)유디피 | Video surveillance apparatus using dual camera and method thereof |
US8760290B2 (en) * | 2011-04-08 | 2014-06-24 | Rave Wireless, Inc. | Public safety analysis system |
US8620026B2 (en) | 2011-04-13 | 2013-12-31 | International Business Machines Corporation | Video-based detection of multiple object types under varying poses |
US20130027561A1 (en) | 2011-07-29 | 2013-01-31 | Panasonic Corporation | System and method for improving site operations by detecting abnormalities |
US8504570B2 (en) * | 2011-08-25 | 2013-08-06 | Numenta, Inc. | Automated search for detecting patterns and sequences in data using a spatial and temporal memory system |
EP2786308A4 (en) | 2011-12-01 | 2016-03-30 | Ericsson Telefon Ab L M | Method for performing face recognition in a radio access network |
CA2804468C (en) * | 2012-01-30 | 2016-03-29 | Accenture Global Services Limited | System and method for face capture and matching |
EP2834776A4 (en) * | 2012-03-30 | 2016-10-05 | Intel Corp | Recognition-based security |
US9195883B2 (en) * | 2012-04-09 | 2015-11-24 | Avigilon Fortress Corporation | Object tracking and best shot detection system |
US9350944B2 (en) | 2012-08-24 | 2016-05-24 | Qualcomm Incorporated | Connecting to an onscreen entity |
BR112015008383A2 (en) * | 2012-10-18 | 2017-07-04 | Nec Corp | Information processing system, information processing method, and program |
US9197861B2 (en) * | 2012-11-15 | 2015-11-24 | Avo Usa Holding 2 Corporation | Multi-dimensional virtual beam detection for video analytics |
KR101776706B1 (en) * | 2012-11-30 | 2017-09-08 | 한화테크윈 주식회사 | Method and Apparatus for counting the number of person using a plurality of cameras |
KR101467663B1 (en) * | 2013-01-30 | 2014-12-01 | 주식회사 엘지씨엔에스 | Method and system of providing display in display monitoring system |
US9405978B2 (en) * | 2013-06-10 | 2016-08-02 | Globalfoundries Inc. | Prioritization of facial recognition matches based on likely route |
TWI507661B (en) * | 2013-07-03 | 2015-11-11 | Faraday Tech Corp | Image surveillance system and method thereof |
US20150025329A1 (en) * | 2013-07-18 | 2015-01-22 | Parkland Center For Clinical Innovation | Patient care surveillance system and method |
JP5500303B1 (en) | 2013-10-08 | 2014-05-21 | オムロン株式会社 | MONITORING SYSTEM, MONITORING METHOD, MONITORING PROGRAM, AND RECORDING MEDIUM CONTAINING THE PROGRAM |
US9478056B2 (en) | 2013-10-28 | 2016-10-25 | Google Inc. | Image cache for replacing portions of images |
WO2015132271A1 (en) * | 2014-03-03 | 2015-09-11 | Vsk Electronics Nv | Intrusion detection with directional sensing |
TWI526852B (en) | 2014-07-29 | 2016-03-21 | 國立交通大學 | Method for counting number of people based on appliance usages and monitoring system using the same |
US10110856B2 (en) * | 2014-12-05 | 2018-10-23 | Avigilon Fortress Corporation | Systems and methods for video analysis rules based on map data |
US9760792B2 (en) * | 2015-03-20 | 2017-09-12 | Netra, Inc. | Object detection and classification |
US10007849B2 (en) | 2015-05-29 | 2018-06-26 | Accenture Global Solutions Limited | Predicting external events from digital video content |
- 2016
- 2016-05-27 US US15/167,701 patent/US10007849B2/en active Active
- 2016-05-27 US US15/167,576 patent/US9996749B2/en active Active
- 2016-05-30 EP EP16171900.0A patent/EP3098755B1/en active Active
- 2016-05-30 CA CA2931748A patent/CA2931748C/en active Active
- 2016-05-30 AU AU2016203594A patent/AU2016203594A1/en not_active Abandoned
- 2016-05-30 SG SG10201604367VA patent/SG10201604367VA/en unknown
- 2016-05-30 EP EP16171899.4A patent/EP3098754A1/en not_active Ceased
- 2016-05-30 SG SG10201604368TA patent/SG10201604368TA/en unknown
- 2016-05-30 AU AU2016203579A patent/AU2016203579A1/en not_active Abandoned
- 2016-05-30 CA CA2931743A patent/CA2931743C/en active Active
- 2016-05-30 EP EP16171901.8A patent/EP3098756B1/en active Active
- 2016-05-30 CA CA2931713A patent/CA2931713C/en active Active
- 2016-05-30 SG SG10201604361SA patent/SG10201604361SA/en unknown
- 2016-05-30 AU AU2016203571A patent/AU2016203571B2/en active Active
- 2016-05-31 US US15/169,111 patent/US10354144B2/en active Active
- 2016-05-31 US US15/169,113 patent/US10055646B2/en active Active
- 2016-05-31 US US15/169,035 patent/US20160350921A1/en not_active Abandoned
- 2017
- 2017-06-20 AU AU2017204181A patent/AU2017204181B2/en active Active
- 2017-08-21 AU AU2017218930A patent/AU2017218930A1/en not_active Abandoned
- 2018
- 2018-06-08 US US16/003,518 patent/US10402659B2/en active Active
- 2018-08-08 US US16/058,089 patent/US10402660B2/en active Active
Cited By (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10719955B2 (en) * | 2015-04-23 | 2020-07-21 | Application Solutions (Electronics and Vision) Ltd. | Camera extrinsic parameters estimation from image lines |
US20170150035A1 (en) * | 2015-11-25 | 2017-05-25 | Olympus Corporation | Imaging apparatus, control method of imaging apparatus, and non-transitory recording medium |
US10432843B2 (en) * | 2015-11-25 | 2019-10-01 | Olympus Corporation | Imaging apparatus, control method of imaging apparatus, and non-transitory recording medium for judging an interval between judgement targets |
US11032451B2 (en) | 2016-10-14 | 2021-06-08 | MP High Tech Solutions Pty Ltd | Imaging apparatuses and enclosures |
US11991427B2 (en) | 2016-10-14 | 2024-05-21 | Calumino Pty Ltd. | Imaging apparatuses and enclosures |
US10582095B2 (en) | 2016-10-14 | 2020-03-03 | MP High Tech Solutions Pty Ltd | Imaging apparatuses and enclosures |
US11533414B2 (en) | 2016-10-14 | 2022-12-20 | Calumino Pty Ltd. | Imaging apparatuses and enclosures |
US20180189532A1 (en) * | 2016-12-30 | 2018-07-05 | Accenture Global Solutions Limited | Object Detection for Video Camera Self-Calibration |
US10366263B2 (en) * | 2016-12-30 | 2019-07-30 | Accenture Global Solutions Limited | Object detection for video camera self-calibration |
US11417098B1 (en) * | 2017-05-10 | 2022-08-16 | Waylens, Inc. | Determining location coordinates of a vehicle based on license plate metadata and video analytics |
US11765323B2 (en) * | 2017-05-26 | 2023-09-19 | Calumino Pty Ltd. | Apparatus and method of location determination in a thermal imaging system |
US20180341818A1 (en) * | 2017-05-26 | 2018-11-29 | MP High Tech Solutions Pty Ltd | Apparatus and Method of Location Determination in a Thermal Imaging System |
US20180341817A1 (en) * | 2017-05-26 | 2018-11-29 | MP High Tech Solutions Pty Ltd | Apparatus and Method of Location Determination in a Thermal Imaging System |
EP3632098A4 (en) * | 2017-05-26 | 2021-01-20 | MP High Tech Solutions Pty. Ltd. | Apparatus and method of location determination in a thermal imaging system |
US20180341816A1 (en) * | 2017-05-26 | 2018-11-29 | MP High Tech Solutions Pty Ltd | Apparatus and Method of Location Determination in a Thermal Imaging System |
US11410330B2 (en) * | 2017-05-30 | 2022-08-09 | Edx Technologies, Inc. | Methods, devices, and systems for determining field of view and producing augmented reality |
US20200218926A1 (en) * | 2017-07-06 | 2020-07-09 | Traworks | Camera-integrated laser detection device and operating method thereof |
US10824880B2 (en) * | 2017-08-25 | 2020-11-03 | Beijing Voyager Technology Co., Ltd. | Methods and systems for detecting environmental information of a vehicle |
US10440280B2 (en) * | 2017-09-21 | 2019-10-08 | Advanced Semiconductor Engineering, Inc. | Optical system and method for operating the same |
US10643078B2 (en) * | 2017-11-06 | 2020-05-05 | Sensormatic Electronics, LLC | Automatic camera ground plane calibration method and system |
CN107992822A (en) * | 2017-11-30 | 2018-05-04 | 广东欧珀移动通信有限公司 | Image processing method and device, computer equipment, computer-readable recording medium |
US10824901B2 (en) | 2017-11-30 | 2020-11-03 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing of face sets utilizing an image recognition method |
US10580164B2 (en) * | 2018-04-05 | 2020-03-03 | Microsoft Technology Licensing, Llc | Automatic camera calibration |
US20230245472A1 (en) * | 2018-09-26 | 2023-08-03 | Allstate Insurance Company | Dynamic driving metric output generation using computer vision methods |
US11430228B2 (en) * | 2018-09-26 | 2022-08-30 | Allstate Insurance Company | Dynamic driving metric output generation using computer vision methods |
CN110969055A (en) * | 2018-09-29 | 2020-04-07 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device and computer-readable storage medium for vehicle localization |
US11144770B2 (en) * | 2018-09-29 | 2021-10-12 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and device for positioning vehicle, device, and computer readable storage medium |
CN111294501A (en) * | 2018-12-06 | 2020-06-16 | 西安光启未来技术研究院 | Camera adjusting method |
US11810326B2 (en) | 2019-01-25 | 2023-11-07 | Adobe Inc. | Determining camera parameters from a single digital image |
US11094083B2 (en) * | 2019-01-25 | 2021-08-17 | Adobe Inc. | Utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image |
US11595638B2 (en) * | 2019-02-06 | 2023-02-28 | Robert Bosch Gmbh | Calibration unit for a monitoring device, monitoring device for man-overboard monitoring, and method for calibration |
US20220191467A1 (en) * | 2019-02-06 | 2022-06-16 | Robert Bosch Gmbh | Calibration unit for a monitoring device, monitoring device for man-overboard monitoring, and method for calibration |
US20210195097A1 (en) * | 2019-02-13 | 2021-06-24 | Intelligent Security Systems Corporation | Systems, devices, and methods for enabling camera adjustments |
US11863736B2 (en) * | 2019-02-13 | 2024-01-02 | Intelligent Security Systems Corporation | Systems, devices, and methods for enabling camera adjustments |
US11102410B2 (en) * | 2019-05-31 | 2021-08-24 | Panasonic I-Pro Sensing Solutions Co., Ltd. | Camera parameter setting system and camera parameter setting method |
US11216969B2 (en) * | 2019-07-23 | 2022-01-04 | Obayashi Corporation | System, method, and computer-readable medium for managing position of target |
US11328424B1 (en) | 2019-08-08 | 2022-05-10 | The Chamberlain Group Llc | Systems and methods for monitoring a movable barrier |
US11682119B1 (en) | 2019-08-08 | 2023-06-20 | The Chamberlain Group Llc | Systems and methods for monitoring a movable barrier |
CN110490930A (en) * | 2019-08-21 | 2019-11-22 | 谷元(上海)文化科技有限责任公司 | Camera position calibration method |
US20220053109A1 (en) * | 2020-02-20 | 2022-02-17 | MP High Tech Solutions Pty Ltd | Imaging apparatuses and enclosures configured for deployment in connection with ceilings and downlight cavities |
US11172102B2 (en) * | 2020-02-20 | 2021-11-09 | MP High Tech Solutions Pty Ltd | Imaging apparatuses and enclosures configured for deployment in connection with ceilings and downlight cavities |
US11831966B2 (en) * | 2020-02-20 | 2023-11-28 | Calumino Pty Ltd. | Imaging apparatuses and enclosures configured for deployment in connection with ceilings and downlight cavities |
CN112150559A (en) * | 2020-09-24 | 2020-12-29 | 深圳佑驾创新科技有限公司 | Calibration method of image acquisition device, computer equipment and storage medium |
CN113421307A (en) * | 2021-06-22 | 2021-09-21 | 恒睿(重庆)人工智能技术研究院有限公司 | Target positioning method and device, computer equipment and storage medium |
US20230046840A1 (en) * | 2021-07-28 | 2023-02-16 | Objectvideo Labs, Llc | Vehicular access control based on virtual inductive loop |
WO2023014246A1 (en) * | 2021-08-06 | 2023-02-09 | Общество с ограниченной ответственностью "ЭвоКарго" | Method of calibrating extrinsic video camera parameters |
RU2780717C1 (en) * | 2021-08-06 | 2022-09-29 | Общество с ограниченной ответственностью «ЭвоКарго» | Method for calibrating external parameters of video cameras |
WO2023180456A1 (en) * | 2022-03-23 | 2023-09-28 | Basf Se | Method and apparatus for monitoring an industrial plant |
CN115103121A (en) * | 2022-07-05 | 2022-09-23 | 长江三峡勘测研究院有限公司(武汉) | Slope oblique photography device, image data acquisition method and image data acquisition instrument |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20160350921A1 (en) | Automatic camera calibration | |
US10366263B2 (en) | Object detection for video camera self-calibration | |
US11941887B2 (en) | Scenario recreation through object detection and 3D visualization in a multi-sensor environment | |
US11514680B2 (en) | Methods and systems for traffic monitoring | |
US10210286B1 (en) | Systems and methods for detecting curbs in three-dimensional point clouds descriptive of streets | |
US9710924B2 (en) | Field of view determiner | |
US20160012606A1 (en) | Multi-cue object detection and analysis | |
US9818203B2 (en) | Methods and apparatuses for monitoring objects of interest in area with activity maps | |
US20230016896A1 (en) | System and method for free space estimation | |
US9436997B2 (en) | Estimating rainfall precipitation amounts by applying computer vision in cameras | |
US20180089515A1 (en) | Identification and classification of traffic conflicts using live video images | |
CN110472599B (en) | Object quantity determination method and device, storage medium and electronic equipment | |
US20230401748A1 (en) | Apparatus and methods to calibrate a stereo camera pair | |
Morris et al. | A Real-Time Truck Availability System for the State of Wisconsin | |
Shaaban et al. | Parking space detection system using video images | |
Sofwan et al. | Implementation of vehicle traffic analysis using background subtraction in the Internet of Things (IoT) architecture | |
Sofwan et al. | Design of smart open parking using background subtraction in the IoT architecture | |
KR102407202B1 (en) | Apparatus and method for intelligently analyzing video | |
KR102663282B1 (en) | Crowd density automatic measurement system | |
CN114979567B (en) | Object and region interaction method and system applied to video intelligent monitoring | |
US20230386309A1 (en) | System and method of visual detection of group multi-sensor gateway traversals using a stereo camera | |
Zong | AN APPLICATION OF UNMANNED AERIAL VEHICLES IN INTERSECTION TRAFFIC MONITORING | |
Saramas et al. | Human Detection and Social Distancing Measurement in a Video | |
Nadella | Extracting road traffic data through video analysis using automatic camera calibration and deep neural networks | |
Tan et al. | Towards Auto-Extracting Car Park Structures: Image Processing Approach on Low Powered Devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: ACCENTURE GLOBAL SOLUTIONS LIMITED, IRELAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BATALLER, CYRILLE;DANIEL, PHILIPPE;SIGNING DATES FROM 20160622 TO 20160623;REEL/FRAME:039370/0001 |
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |