US20220092822A1 - Smart Self Calibrating Camera System - Google Patents

Smart Self Calibrating Camera System

Info

Publication number
US20220092822A1
US20220092822A1 (application US17/503,362)
Authority
US
United States
Prior art keywords
cameras
feature points
calibrating
images
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/503,362
Inventor
Ying Zheng
Hector Sanchez
Steve Gu
Stuart Kyle Neubarth
Mahmoud Hassan
Juan Ramon Terven
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aifi Corp
Original Assignee
Aifi Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aifi Corp filed Critical Aifi Corp
Priority to US17/503,362 priority Critical patent/US20220092822A1/en
Publication of US20220092822A1 publication Critical patent/US20220092822A1/en
Abandoned legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G06T7/579 - Depth or shape recovery from multiple images from motion
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 - Diagnosis, testing or measuring for television systems or their details
    • H04N17/002 - Diagnosis, testing or measuring for television systems or their details for television cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30204 - Marker
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules


Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention describes a system for calibrating a plurality of cameras in an area. The system functions by using certain patterns with visible or invisible properties.
In addition, the system implements automatic re-calibration in a specific way to reduce human intervention, cost and time.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • The present application is a continuation of U.S. Non-Provisional patent application Ser. No. 17/028,388, entitled “SMART SELF CALIBRATING CAMERA SYSTEM” and filed on Sep. 22, 2020. The present application therefore claims the benefit of U.S. application Ser. No. 17/028,388, filed Sep. 22, 2020, which is incorporated by reference herein in its entirety.
  • BACKGROUND OF THE APPLICATION
  • This application relates to systems, methods, devices, and other techniques for video camera self-calibration based on video information received from more than one video camera.
  • Methods and apparatus for calibrating cameras in such areas are common. Many rely on reference objects and manual procedures to calibrate the cameras; however, these approaches require human intervention and cost time and money.
  • Therefore, it is desirable to have systems and methods that enable self-calibration of the cameras to save time and effort.
  • SUMMARY OF THE INVENTION
  • This application relates to systems, methods, devices, and other techniques for video camera self-calibration based on video information received from more than one video camera. In some embodiments, the system uses people as calibration markers. Instead of finding generic feature matches between cameras, the system matches one or more persons between cameras, identifies certain body key points of those persons, and then matches these key points. In addition, the system implements automatic re-calibration in a specific way to reduce human intervention, cost and time. In some embodiments, the system extracts detections from each camera, synchronizes frames using time stamps, and clusters one or more persons using re-identification (re-id) features. The system then aggregates key points from the one or more persons over time for each camera, matches same-time, same-person key points across camera pairs, and runs uncalibrated structure from motion on the key-point matches. Finally, the system aligns the reconstruction and recovers metric scale using the persons' head and feet key points or the known camera height.
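  • As a minimal illustration (not the patent's reference implementation), the people-based pairwise step can be sketched in Python with OpenCV. The `detect_people`, `reid_embedding`, and `body_keypoints` helpers are hypothetical stand-ins for whatever detector, re-id model, and pose estimator a deployment uses, and both cameras are assumed to share the intrinsic matrix `K` for simplicity:

```python
import numpy as np
import cv2

def pairwise_pose_from_people(paired_frames, K, detect_people, reid_embedding, body_keypoints):
    """Estimate the relative pose of camera B w.r.t. camera A (up to scale) from
    people observed in time-synchronized frame pairs [(img_a, img_b), ...]."""
    pts_a, pts_b = [], []
    for img_a, img_b in paired_frames:
        people_a = detect_people(img_a)                      # hypothetical person detector
        people_b = detect_people(img_b)
        for person_a in people_a:
            # cluster/match the same person across cameras with re-id features
            emb_a = reid_embedding(img_a, person_a)
            match = min(people_b, default=None,
                        key=lambda p: np.linalg.norm(emb_a - reid_embedding(img_b, p)))
            if match is None:
                continue
            kp_a = body_keypoints(img_a, person_a)           # dict: joint name -> (x, y)
            kp_b = body_keypoints(img_b, match)
            for joint in kp_a.keys() & kp_b.keys():          # same time, same person, same joint
                pts_a.append(kp_a[joint])
                pts_b.append(kp_b[joint])
    pts_a, pts_b = np.float64(pts_a), np.float64(pts_b)
    # structure-from-motion step on the accumulated key-point matches
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)
    return R, t   # ||t|| = 1; metric scale is recovered later from body size or camera height
```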
  • In some embodiments, the system implements a self-healing scheme to recalibrate after situations such as (but not limited to): accidental or intentional camera position changes, changes of focus or aspect ratio, or camera upgrades.
  • In some embodiments, when the system uses this self-healing scheme, it uses multi-camera tracking to match people and key points. The system then triangulates and re-projects key-point coordinates and monitors the accumulated error over time. If the accumulated error becomes large, re-calibration is needed, and the system runs the people-based calibration.
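  • A minimal sketch of such a drift monitor, assuming two already-calibrated cameras with 3x4 projection matrices and matched key points per frame; the pixel threshold and window length are illustrative values, not figures taken from this disclosure:

```python
import numpy as np
import cv2

def reprojection_drift(P_a, P_b, pts_a, pts_b):
    """Mean re-projection error (pixels) for key points matched between two cameras,
    triangulated and projected back with the current calibration (3x4 matrices P_a, P_b)."""
    pts_a = np.asarray(pts_a, np.float64)
    pts_b = np.asarray(pts_b, np.float64)
    X_h = cv2.triangulatePoints(P_a, P_b, pts_a.T, pts_b.T)      # 4xN homogeneous points
    X = np.vstack([X_h[:3] / X_h[3], np.ones(X_h.shape[1])])     # 4xN, normalized
    err = 0.0
    for P, pts in ((P_a, pts_a), (P_b, pts_b)):
        proj = P @ X                                              # 3xN
        proj = (proj[:2] / proj[2]).T                             # Nx2 pixel coordinates
        err += np.linalg.norm(proj - pts, axis=1).mean()
    return err / 2.0

class SelfHealingMonitor:
    """Accumulates drift over a sliding window; a large error triggers people-based re-calibration."""
    def __init__(self, threshold_px=5.0, window=100):
        self.errors, self.threshold_px, self.window = [], threshold_px, window

    def needs_recalibration(self, P_a, P_b, pts_a, pts_b):
        self.errors.append(reprojection_drift(P_a, P_b, pts_a, pts_b))
        self.errors = self.errors[-self.window:]
        return np.mean(self.errors) > self.threshold_px
```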
  • In some implementations, the method comprises synchronizing system time for the plurality of cameras; detecting at least one feature point of an object that is within range of the plurality of cameras, wherein the at least one feature point is set up in a pre-determined fashion, is configured to be within range of the plurality of cameras, and is configured to be detected by color or infrared means, wherein each of the at least one feature point is encoded with its location information, and wherein the location information of the at least one feature point is decoded and recorded during a duration of time; and then calibrating the plurality of cameras by using the location information of the at least one feature point during the duration of time.
  • In some embodiments, the at least one feature point is encoded with color or depth information.
  • In some embodiments, the method further comprises: capturing a first set of one or more images of the feature points by a first one of the plurality of cameras and a second set of one or more images of the feature points by a second one of the plurality of cameras; and calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching the same feature points between the first set of one or more images and the second set of one or more images, wherein the color or depth information is used for the matching of the same feature points, and wherein the first one and the second one of the plurality of cameras are configured to pan, tilt and zoom. In some embodiments, the method further comprises a step of recalibrating a subset of the plurality of cameras after a time period or when any of the re-projection errors exceeds a certain value.
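  • Where the feature points carry a color encoding, cross-camera matching can reduce to pairing detections that share the same code. A minimal sketch, assuming a hypothetical detector that returns `(color_id, x, y)` tuples per image; the matched pixel pairs then feed the calibration step:

```python
def match_by_color(markers_a, markers_b):
    """Pair feature points seen by two cameras using their color code.
    markers_*: iterable of (color_id, x, y) detections from one image."""
    lookup_b = {color_id: (x, y) for color_id, x, y in markers_b}
    pts_a, pts_b = [], []
    for color_id, x, y in markers_a:
        if color_id in lookup_b:              # same encoded point visible in both views
            pts_a.append((x, y))
            pts_b.append(lookup_b[color_id])
    return pts_a, pts_b
```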
  • In some embodiments, the at least one feature point is not visible to human eyes or RGB cameras but is visible to infrared cameras.
  • In some embodiments, the at least one feature point is a line, dot or polygon.
  • In some embodiments, a user can manually be involved in the calibrating. In some embodiments, the object is configured to move. In some embodiments, the plurality of cameras is configured to move. In some embodiments, a neural network is configured to match and identify the at least one feature point.
  • In some embodiments, the invention relates to a method for calibrating a plurality of cameras in an area, comprising: detecting feature points of a person, wherein the feature points are specific body areas of the person, wherein the feature points are within range of the plurality of cameras, and wherein dimensions of the specific body areas of the person are measured and recorded; capturing a first set of one or more images of the feature points by a first one of the plurality of cameras and a second set of one or more images of the feature points by a second one of the plurality of cameras; and calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching the same feature points between the first set of one or more images and the second set of one or more images.
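  • Because image-only calibration is defined only up to scale, a measured body dimension such as the person's head-to-feet height can promote it to metric units. A minimal sketch under that assumption, using the triangulated head and feet key points of the person:

```python
import numpy as np
import cv2

def metric_scale_from_person(P_a, P_b, head_a, feet_a, head_b, feet_b, person_height_m):
    """Scale factor converting an up-to-scale reconstruction to meters, using the
    triangulated head and feet key points of a person whose height was measured."""
    pts_a = np.float64([head_a, feet_a]).T        # 2x2, one (x, y) column per key point
    pts_b = np.float64([head_b, feet_b]).T
    X_h = cv2.triangulatePoints(P_a, P_b, pts_a, pts_b)
    X = (X_h[:3] / X_h[3]).T                      # rows: head_xyz, feet_xyz (arbitrary units)
    return person_height_m / np.linalg.norm(X[0] - X[1])
```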
  • In some embodiments, the feature points are not visible to human eyes or RGB cameras. In some embodiments, the feature points are visible to infrared cameras. In some embodiments, the feature points are lines, dots or polygons. In some embodiments, a user can manually be involved in the calibrating.
  • In some embodiments, the invention relates to a method for calibrating a plurality of cameras in an area, comprising: detecting patterns in the area, wherein the locations of the patterns are pre-determined, the shapes of the patterns are pre-determined, and the colors of the patterns are pre-determined, wherein the patterns are configured to contain encoded coordinate information, and wherein the patterns are configured to be detected by optical or infrared means; capturing a first set of one or more images of the patterns by a first one of the plurality of cameras and a second set of one or more images of the patterns by a second one of the plurality of cameras; decoding the encoded coordinate information; and calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching the same pattern between the first set of one or more images and the second set of one or more images and utilizing the decoded coordinate information.
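  • Since each pattern decodes to a known world coordinate, one camera's extrinsics can be solved directly from its own pattern detections with a perspective-n-point solve. A minimal sketch, assuming a hypothetical `decode_patterns` helper that yields `(world_xyz, pixel_xy)` pairs and known intrinsics `K` and distortion `dist` per camera:

```python
import numpy as np
import cv2

def extrinsics_from_patterns(image, K, dist, decode_patterns):
    """Camera rotation R and translation t in the world frame defined by the encoded patterns."""
    detections = decode_patterns(image)            # [(world_xyz, pixel_xy), ...]; >= 4 required
    obj_pts = np.float64([world for world, _ in detections])
    img_pts = np.float64([pixel for _, pixel in detections])
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    if not ok:
        raise RuntimeError("PnP failed: too few decoded patterns in view")
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```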
  • In some embodiments, the plurality of cameras are configured to move. In some embodiments, a neural network is configured to match and identify the at least one feature point. In some embodiments, translucent stickers covered with infrared ink are used to mark the patterns. In some embodiments, the method further comprises a step of recalibrating a subset of the plurality of cameras after a time period or when any of the re-projection errors exceeds a certain value.
  • These and other aspects, their implementations and other features are described in detail in the drawings, the description and the claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a method for self-calibrating a plurality of cameras in an area.
  • FIG. 2 shows another method for calibrating a plurality of cameras in an area.
  • FIG. 3 shows a third method for calibrating a plurality of cameras in an area.
  • DETAILED DESCRIPTION OF THE INVENTION
  • FIG. 1 shows a method 100 for self-calibrating a plurality of cameras in an area. In some implementations, the method comprises a step 105 of synchronizing system time for the plurality of cameras.
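  • One simple way to realize step 105 is to stamp every capture against a common clock and pair the closest stamps across cameras. A minimal sketch of that pairing, with an illustrative 20 ms skew tolerance (not a value specified by this disclosure):

```python
def pair_frames_by_timestamp(stream_a, stream_b, max_skew_s=0.020):
    """Pair frames from two cameras whose timestamps differ by at most max_skew_s seconds.
    stream_*: lists of (timestamp_seconds, frame) sorted by timestamp."""
    pairs, j = [], 0
    if not stream_b:
        return pairs
    for ts_a, frame_a in stream_a:
        # advance j to the frame of stream_b closest in time to ts_a
        while j + 1 < len(stream_b) and abs(stream_b[j + 1][0] - ts_a) <= abs(stream_b[j][0] - ts_a):
            j += 1
        ts_b, frame_b = stream_b[j]
        if abs(ts_b - ts_a) <= max_skew_s:
            pairs.append((frame_a, frame_b))
    return pairs
```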
  • In some embodiments, the method comprises a step 110 of detecting at least one feature point of an object that is within range of the plurality of cameras, wherein the at least one feature point is set up in a pre-determined fashion, is configured to be within range of the plurality of cameras, and is configured to be detected by color or infrared means, wherein each of the at least one feature point is encoded with its location information, and wherein the location information of the at least one feature point is decoded and recorded during a duration of time.
  • In some embodiments, the method comprises a step 115 of calibrating the plurality of cameras by using the location information of the at least one feature point during the duration of time.
  • In some embodiments, the method comprises a step 120 of capturing a first set of one or more images of the feature points of the object along its route by a first one of the plurality of cameras and a second set of one or more images of the feature points of the object along its route by a second one of the plurality of cameras, wherein a time stamp is recorded for each capture.
  • In some embodiments, the method comprises a step 125 of calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching the same feature points on the object between the first set of one or more images of the object and the second set of one or more images of the object at the same time stamp.
  • In some embodiments, the at least one feature point is encoded with color or depth information. In some embodiments, the method further comprises: capturing a first set of one or more images of the feature points by a first one of the plurality of cameras and a second set of one or more images of the feature points by a second one of the plurality of cameras; and calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching the same feature points between the first set of one or more images and the second set of one or more images, wherein the color or depth information is used for the matching of the same feature points, and wherein the first one and the second one of the plurality of cameras are configured to pan, tilt and zoom. In some embodiments, the method further comprises a step of recalibrating a subset of the plurality of cameras after a time period or when any of the re-projection errors exceeds a certain value.
  • In some embodiments, the at least one feature point is not visible to human eyes or RGB cameras but is visible to infrared cameras.
  • In some embodiments, the at least one feature point is a line, dot or polygon.
  • In some embodiments, a user can manually be involved in the calibrating. In some embodiments, the object is configured to move. In some embodiments, the plurality of cameras is configured to move. In some embodiments, a neural network is configured to match and identify the at least one feature point. In some embodiments, the method comprises a step of recalibrating a subset of the plurality of cameras after a time period or when any of the re-projection errors exceeds a certain value.
  • FIG. 2 shows a method 200 for self-calibrating a plurality of cameras in an area. In some implementations, the method comprises a step 205 of detecting feature points of a person, wherein the feature points are specific body areas of the person, wherein the feature points are within range of the plurality of cameras, and wherein dimensions of the specific body areas of the person are measured and recorded.
  • In some embodiments, the method comprises a step 210 of capturing a first set of one or more images of the feature points by a first one of the plurality of cameras and a second set of one or more images of the feature points by a second one of the plurality of cameras.
  • In some embodiments, the method comprises a step 215 of calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching the same feature points between the first set of one or more images and the second set of one or more images.
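  • If the intrinsics are not yet trusted, the matched body key points from the two image sets can still be related through a fundamental matrix, the uncalibrated counterpart of a relative-pose estimate. A minimal sketch of that estimation step:

```python
import numpy as np
import cv2

def fundamental_from_keypoint_matches(pts_cam1, pts_cam2):
    """Robustly estimate the fundamental matrix relating two cameras from matched
    person key points (Nx2 pixel coordinates per view); no intrinsics needed."""
    pts1 = np.float64(pts_cam1)
    pts2 = np.float64(pts_cam2)
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    if F is None:
        raise RuntimeError("not enough key-point matches for a fundamental matrix")
    return F, mask.ravel().astype(bool)
```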
  • In some embodiments, the feature points are not visible to human eyes or RGB cameras. In some embodiments, the feature points are visible to infrared cameras. In some embodiments, the feature points are lines, dots or polygons. In some embodiments, a user can manually be involved in the calibrating.
  • FIG. 3 shows another method 300 for calibrating a plurality of cameras in an area.
  • In some embodiments, the method comprises a step 305 of detecting patterns in the area, wherein the locations of the patterns are pre-determined, the shapes of the patterns are pre-determined, and the colors of the patterns are pre-determined, wherein the patterns are configured to contain encoded coordinate information, and wherein the patterns are configured to be detected by optical or infrared means.
  • In some embodiments, the method comprises a step 310 of capturing a first set of one or more images of the patterns by a first one of the plurality of cameras and a second set of one or more images of the patterns by a second one of the plurality of cameras.
  • In some embodiments, the method comprises a step 315 of decoding the encoded coordinate information.
  • In some embodiments, the method comprises a step 320 of calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching the same pattern between the first set of one or more images and the second set of one or more images and utilizing the decoded coordinate information.
  • In some embodiments, the method comprises a step 325 of recalibrating a subset of the plurality of cameras after a time period or when any of the re-projection errors exceeds a certain value.
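  • A minimal sketch of how step 325 might select only the subset of cameras to recalibrate, assuming per-camera re-projection error and last-calibration bookkeeping; the threshold and age values are illustrative:

```python
import time

def cameras_to_recalibrate(per_camera_error_px, last_calibrated_s,
                           max_error_px=5.0, max_age_s=7 * 24 * 3600):
    """Select the subset of cameras needing re-calibration: re-projection error over the
    threshold, or last calibration older than the allowed time period."""
    now = time.time()
    return [cam for cam, err in per_camera_error_px.items()
            if err > max_error_px
            or now - last_calibrated_s.get(cam, 0.0) > max_age_s]
```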
  • In some embodiments, the object is a person. In some embodiments, one of the feature points is the person's head. In some embodiments, the position information of the same feature points is X and Y coordinates within an image. In some embodiments, the object is configured to move freely.
  • In some embodiments, the plurality of cameras are configured to move. In some embodiments, a neural network is configured to match and identify the at least one feature point. In some embodiments, translucent stickers covered with infrared ink are used to mark the patterns. In some embodiments, the method further comprises a step of recalibrating a subset of the plurality of cameras after a time period or when any of the re-projection errors exceeds a certain value.

Claims (4)

1. A method for calibrating a plurality of cameras in an area, comprising:
Detecting feature points of a person, wherein the feature points are specific body areas of the person, wherein the feature points are within range of the plurality of cameras, wherein dimensions of the specific body areas of the person are measured and recorded;
Capturing a first set of one or more images of the feature points by a first one of the plurality of cameras and a second set of one or more images of the feature points by a second one of the plurality of cameras; and
Calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching same feature points between the first set of one or more images and the second set of one or more images.
2. The method for calibrating a plurality of cameras in an area of claim 1, wherein the feature points are not visible to human eyes and RGB cameras, and wherein the feature points are visible to infrared cameras.
3. The method for calibrating a plurality of cameras in an area of claim 1, wherein the feature points are lines, dots or polygons.
4. The method for calibrating a plurality of cameras in an area of claim 1, wherein a user can manually be involved in the calibrating.
US17/503,362 2020-09-22 2021-10-18 Smart Self Calibrating Camera System Abandoned US20220092822A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/503,362 US20220092822A1 (en) 2020-09-22 2021-10-18 Smart Self Calibrating Camera System

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/028,388 US11341683B2 (en) 2020-09-22 2020-09-22 Smart self calibrating camera system
US17/503,362 US20220092822A1 (en) 2020-09-22 2021-10-18 Smart Self Calibrating Camera System

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US17/028,388 Continuation US11341683B2 (en) 2020-09-22 2020-09-22 Smart self calibrating camera system

Publications (1)

Publication Number Publication Date
US20220092822A1 true US20220092822A1 (en) 2022-03-24

Family

ID=80740650

Family Applications (3)

Application Number Title Priority Date Filing Date
US17/028,388 Active US11341683B2 (en) 2020-09-22 2020-09-22 Smart self calibrating camera system
US17/503,362 Abandoned US20220092822A1 (en) 2020-09-22 2021-10-18 Smart Self Calibrating Camera System
US17/503,364 Abandoned US20220092823A1 (en) 2020-09-22 2021-10-18 Smart self calibrating camera system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US17/028,388 Active US11341683B2 (en) 2020-09-22 2020-09-22 Smart self calibrating camera system

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/503,364 Abandoned US20220092823A1 (en) 2020-09-22 2021-10-18 Smart self calibrating camera system

Country Status (1)

Country Link
US (3) US11341683B2 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9350923B2 (en) * 2010-08-31 2016-05-24 Cast Group Of Companies Inc. System and method for tracking
US10721418B2 (en) * 2017-05-10 2020-07-21 Grabango Co. Tilt-shift correction for camera arrays
US10827116B1 (en) * 2019-08-26 2020-11-03 Juan Ramon Terven Self calibration system for moving cameras
US11109006B2 (en) * 2017-09-14 2021-08-31 Sony Corporation Image processing apparatus and method
US11176707B2 (en) * 2018-05-23 2021-11-16 Panasonic Intellectual Property Management Co., Ltd. Calibration apparatus and calibration method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060125920A1 (en) * 2004-12-10 2006-06-15 Microsoft Corporation Matching un-synchronized image portions
US9091662B1 (en) * 2009-12-22 2015-07-28 Cognex Corporation System and method for automatic camera calibration and alignment determination
TWI554100B (en) * 2012-12-27 2016-10-11 Metal Ind Res &Development Ct Correction sheet design for correcting a plurality of image capturing apparatuses and correction methods of a plurality of image capturing apparatuses
US9230326B1 (en) * 2012-12-31 2016-01-05 Cognex Corporation System, method and calibration plate employing embedded 2D data codes as self-positioning fiducials
EP3316589B1 (en) * 2015-06-25 2024-02-28 Panasonic Intellectual Property Management Co., Ltd. Video synchronization device and video synchronization method
JP6780661B2 (en) * 2016-01-15 2020-11-04 ソニー株式会社 Image processing equipment and methods, programs, and image processing systems
JP6736414B2 (en) * 2016-08-10 2020-08-05 キヤノン株式会社 Image processing apparatus, image processing method and program
CN110570475A (en) * 2018-06-05 2019-12-13 上海商汤智能科技有限公司 vehicle-mounted camera self-calibration method and device and vehicle driving method and device
US10347009B1 (en) * 2018-11-21 2019-07-09 Juan Ramon Terven Self callbrating camera system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9350923B2 (en) * 2010-08-31 2016-05-24 Cast Group Of Companies Inc. System and method for tracking
US10721418B2 (en) * 2017-05-10 2020-07-21 Grabango Co. Tilt-shift correction for camera arrays
US11109006B2 (en) * 2017-09-14 2021-08-31 Sony Corporation Image processing apparatus and method
US11176707B2 (en) * 2018-05-23 2021-11-16 Panasonic Intellectual Property Management Co., Ltd. Calibration apparatus and calibration method
US10827116B1 (en) * 2019-08-26 2020-11-03 Juan Ramon Terven Self calibration system for moving cameras

Also Published As

Publication number Publication date
US20220092818A1 (en) 2022-03-24
US11341683B2 (en) 2022-05-24
US20220092823A1 (en) 2022-03-24

Similar Documents

Publication Publication Date Title
US10347009B1 (en) Self callbrating camera system
US10827116B1 (en) Self calibration system for moving cameras
US8228382B2 (en) System and method for counting people
US8836756B2 (en) Apparatus and method for acquiring 3D depth information
WO2016062076A1 (en) Camera-based positioning method, device, and positioning system
US20030076980A1 (en) Coded visual markers for tracking and camera calibration in mobile computing systems
CN107079093B (en) Calibration device
WO2007018523A2 (en) Method and apparatus for stereo, multi-camera tracking and rf and video track fusion
TW201727537A (en) Face recognition system and face recognition method
CN107509043B (en) Image processing method, image processing apparatus, electronic apparatus, and computer-readable storage medium
CN106546230B (en) Positioning point arrangement method and device, and method and equipment for measuring three-dimensional coordinates of positioning points
KR101946317B1 (en) User Identification and Tracking Method Using CCTV System
KR20110128574A (en) Method for recognizing human face and recognizing apparatus
CN115700757B (en) Control method and device for fire water monitor and electronic equipment
EP3030859A1 (en) 3d mapping device for modeling of imaged objects using camera position and pose to obtain accuracy with reduced processing requirements
CN109426786A (en) Number detection system and number detection method
CN112019826A (en) Projection method, system, device, electronic equipment and storage medium
TWM610371U (en) Action recognition system
US20220092822A1 (en) Smart Self Calibrating Camera System
US20140104431A1 (en) System and Method for Utilizing a Surface for Remote Collaboration
KR101138603B1 (en) Camera-based real-time location system and method of locating in real-time using the same system
JP6019114B2 (en) Pedestrian gait recognition method and device for portable terminal
JP5651659B2 (en) Object detection system and program
TWI755950B (en) Action recognition method and system thereof
KR100814289B1 (en) Real time motion recognition apparatus and method

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION