WO2017086771A1 - A visual surveillance system with target tracking or positioning capability - Google Patents

Info

Publication number
WO2017086771A1
WO2017086771A1 PCT/MY2015/050141 MY2015050141W WO2017086771A1 WO 2017086771 A1 WO2017086771 A1 WO 2017086771A1 MY 2015050141 W MY2015050141 W MY 2015050141W WO 2017086771 A1 WO2017086771 A1 WO 2017086771A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera assembly
platform
plane
controller unit
scene
Prior art date
Application number
PCT/MY2015/050141
Other languages
French (fr)
Inventor
Kong Wai MAH
Original Assignee
Ventionex Technologies Sdn. Bhd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ventionex Technologies Sdn. Bhd. filed Critical Ventionex Technologies Sdn. Bhd.
Priority to PCT/MY2015/050141 priority Critical patent/WO2017086771A1/en
Publication of WO2017086771A1 publication Critical patent/WO2017086771A1/en

Classifications

    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B17/00 Details of cameras or camera bodies; Accessories therefor
    • G03B17/56 Accessories
    • G03B17/561 Support related camera accessories
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F16 ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
    • F16M FRAMES, CASINGS OR BEDS OF ENGINES, MACHINES OR APPARATUS, NOT SPECIFIC TO ENGINES, MACHINES OR APPARATUS PROVIDED FOR ELSEWHERE; STANDS; SUPPORTS
    • F16M11/00 Stands or trestles as supports for apparatus or articles placed thereon; Stands for scientific apparatus such as gravitational force meters
    • F16M11/02 Heads
    • F16M11/04 Means for attachment of apparatus; Means allowing adjustment of the apparatus relatively to the stand
    • F16M11/06 Means for attachment of apparatus allowing pivoting
    • F16M11/10 Means for attachment of apparatus allowing pivoting around a horizontal axis
    • F16M11/18 Heads with mechanism for moving the apparatus relatively to the stand
    • F16M11/20 Undercarriages with or without wheels
    • F16M11/2007 Undercarriages comprising means allowing pivoting adjustment
    • F16M11/2014 Undercarriages comprising means allowing pivoting adjustment around a vertical axis
    • F16M13/00 Other supports for positioning apparatus or articles; Means for steadying hand-held apparatus or articles
    • F16M13/02 Other supports for supporting on, or attaching to, an object, e.g. tree, gate, window-frame, cycle
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation using passive radiation detection systems
    • G08B13/194 Actuation using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19617 Surveillance camera constructional details
    • G08B13/19632 Camera support structures, e.g. attachment means, poles

Definitions

  • One of the embodiments of the present invention is a visual surveillance system.
  • Several embodiments of the disclosed system comprise a base mountable to a surface; a platform extending away from the base; a first driving mechanism coupled to the platform to rotationally move the platform at a first plane in relation to the base; a camera assembly attached to the platform, the camera assembly having a field of view for constantly capturing visual information of a scene of an environment relating to a target; a second driving mechanism engaged to the camera assembly to rotatably move the camera assembly at a second plane perpendicular to the first plane; and a controller unit electrically communicating with the first and the second driving mechanisms to regulate and/or control rotational movement of the platform and camera assembly to manoeuver the field of view of the camera assembly, the controller unit receiving the captured visual information from the camera assembly.
  • The first driving mechanism preferably comprises a first optical encoder of N1 cycles/revolution and the second driving mechanism comprises a second optical encoder of N2 cycles/revolution, such that the first and the second optical encoders are able to output one or more signals respectively corresponding to the angular position of the rotationally moved platform at the first plane and of the camera assembly at the second plane, for the controller unit to manoeuver the field of view of the camera using the signals.
  • the first plane is a horizontal plane and the second plane is a vertical plane.
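  • As an illustration of how encoder counts translate into camera pose, the following is a minimal sketch (the names and CPR values are assumptions for illustration, not from the patent) mapping a discrete count on either encoder to a pan or tilt angle in degrees:

```python
# Hypothetical sketch: converting optical-encoder counts to pan/tilt angles.
# N1/N2 are the encoders' cycles per revolution; the values here are assumed.

N1 = 3600  # assumed pan (azimuth) encoder resolution, cycles/revolution
N2 = 3600  # assumed tilt (elevation) encoder resolution, cycles/revolution

def count_to_angle(count: int, cycles_per_rev: int) -> float:
    """Map a discrete encoder count to an angle in degrees."""
    return (count % cycles_per_rev) * 360.0 / cycles_per_rev

print(count_to_angle(900, N1))   # 90.0 degrees of pan at the first plane
print(count_to_angle(450, N2))   # 45.0 degrees of tilt at the second plane
```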
  • the disclosed system further comprises a tool driven by the controller unit to perform an action towards a location of the environment and the location substantially corresponds to a coordinate point.
  • A visual display unit may be included, remotely communicating with the controller unit to stream the captured visual information in real time and visually present the scene relating to the target.
  • Preferably, the visual display unit is tactile-sensitive and capable of generating the input by sensing a touch on the indexed coordinate points overlaid on the presented scene.
  • the controller unit overlays or indexes the scene presented on the visual display unit with the coordinate points.
  • The display unit is preferably tactile-sensitive and configured to receive an input from a user relating to one of the coordinate points associated with the presented scene, to drive the tool to perform the action at or adjacent to the corresponding location.
  • the disclosed system is incorporated with a distance measuring sensor, such as laser- and/or infrared-distance sensor to detect distance of the target from the tool and relay the detected distance to the controller unit.
  • In another aspect, the present disclosure provides a visual surveillance system that comprises a platform mountable to a surface; a first driving mechanism being coupled to the platform to rotationally move the platform at a first plane in relation to the base; a camera assembly attached to the platform, the camera assembly having a field of view for constantly capturing visual information of a scene of an environment relating to a target; a second driving mechanism being engaged to the camera assembly to rotatably move the camera assembly at a second plane perpendicular to the first plane; a controller unit electrically communicating with the first and the second driving mechanisms to regulate and/or control rotational movement of the platform and camera assembly to manoeuver the field of view of the camera assembly, the controller unit receiving the captured visual information from the camera assembly; and a visual display unit remotely communicating with the controller unit to real-time stream the captured visual information and visually present the scene relating to the target.
  • The first driving mechanism comprises a first optical encoder of N1 cycles/revolution and the second driving mechanism comprises a second optical encoder of N2 cycles/revolution, such that the first and the second optical encoders are able to output one or more signals respectively corresponding to the angular position of the rotationally moved platform at the first plane and of the camera assembly at the second plane, for the controller unit to manoeuver the field of view of the camera using the signals.
  • The scene presented on the display unit is overlaid with a plurality of coordinate points computed from the cycles/revolution of the first and second optical encoders, such that each coordinate point corresponds to a discrete point where counts of the first and second optical encoders intersect one another.
  • Figure 1 shows a perspective view of one embodiment of the present disclosure with two video camera assemblies, a left-positioned camera assembly and a right-positioned camera assembly;
  • Figure 2 shows an exploded view of the embodiment illustrated in Figure 1;
  • Figure 3 shows an exploded view of one embodiment of the base and the components placed thereon, including parts of the first driving mechanism;
  • Figure 4 shows an exploded view of one embodiment of the disclosed system around the platform main body and the components placed thereon, including parts of the second driving mechanism;
  • Figure 5 shows the top cap of the platform and the controller unit or module positioned underneath the cap;
  • Figure 6 shows an exploded view of the right-positioned camera assembly;
  • Figure 7 illustrates the grid or coordinate map of coordinate points mapped onto the display unit.
  • The present disclosure relates to a visual surveillance system 100 using one or more camera assemblies 310, 330 operable to capture video images of one or more objects within a defined environment.
  • The disclosed visual target tracking system 100 comprises a base 110 mountable to a surface; a platform 130 extending away from the base 110; a first driving mechanism 170 being coupled to the platform to rotationally move the platform at a first plane in relation to the base; a camera assembly 310/330 attached to the platform, the camera assembly 310/330 having a field of view for constantly capturing visual information of a scene of an environment relating to a target; a second driving mechanism 190 being engaged to the camera assembly 310/330 to rotatably move the camera assembly 310/330 at a second plane perpendicular to the first plane; and a controller unit 150 electrically communicating with the first 170 and the second driving mechanisms 190 to regulate and/or control rotational movement of the platform 130 and camera assembly 310/330 to manoeuver the field of view of the camera assembly, the controller unit 150 receiving the captured visual information from the camera assembly 310/330.
  • The system 100 illustrated in Figures 1 and 2 is one embodiment of the present disclosure, to be mounted on a pole-like structure or the surface of a pole-like structure for visual surveillance; the present disclosure shall not be limited solely to the embodiments described hereafter.
  • In the embodiments described hereinafter, the first plane is a horizontal plane and the second plane is a vertical plane, although the disclosure is not limited thereto.
  • the disclosed system 100 may be mounted onto a mobile unit that the captured image or video is transmitted wirelessly to a remotely located server or display.
  • the mobile unit can be a robotic construct, vehicle or even moving human troops.
  • One or more self-stabilization mechanisms can be incorporated therein to ensure that video captured from such a mobile unit attains an acceptable quality for subsequent analysis work.
  • the base 110 and platform 130 of the present system can be a hollow housing in which at least part of the controller unit 150, the first 170 and second driving mechanisms 190 will be stored and kept from being adversely affected by external environmental agent such as rain, heat, vapor and/or dust.
  • the base 110 is cylindrical in shape defining an internal hollow compartment which is accessible at least via an open bottom, a through hole 111 located at its top, and a side opening 112.
  • A side flange 113 extends radially from the bottom rim of the base 110, on which several threaded apertures 114 are located, thereby allowing securement of the base 110 onto a mountable surface by way of compatible bolt-and-nut or like fasteners.
  • the top through hole 111 and the bottom opening are arranged and aligned in a manner which facilitates wires, cables, wiring pipes or the like to run through without facing any substantial hassles.
  • part of the controller unit and/or communication module may be stored within the base 110.
  • the side opening 112 fabricated at the sidewall 115 of the base 110 grants access for configuring these stored parts.
  • a cover 116 is preferably used to seal off the side opening 112 when access to the interior compartment of the base 110 is not needed.
  • The top through hole 111 of the base is preferably defined around the center axis of the top 118 of the base 110.
  • The platform rises vertically or extends away from the base. More particularly, the platform is a cylindrical construct defining an interior compartment to house at least part of the first driving mechanism.
  • the platform generally comprises a hollow cylindrical main body having an open top and an open bottom concealable by a top cap and a bottom cap respectively.
  • The cylindrical main body of the platform is greater in length but smaller in diameter compared to the base, though not necessarily so in some other embodiments, and is preferably rested atop the base.
  • Both the bottom rim of the cylindrical main body and the bottom cap possess compatible flanges bearing a number of through holes; fasteners such as bolts and nuts can be engaged thereto for securing the bottom cap to the main body.
  • the bottom cap carries a through hole (not shown) around the central axis for part of the wiring, cables or the first driving mechanism to cross into the interior compartment of the platform.
  • the platform is rotatable or can be revolved around the central vertical axis in relation to the base to effectuate panning movement of the camera assembly at azimuth angle.
  • a pair of outlets are carved and arranged on the main body in an opposing fashion. Some embodiments may have one or more of such outlets on which the camera assembly rotatably mounts and to be turned to attain various elevation angles for visual surveillance purpose. Wiring work can be routed through the outlet to reach the camera assembly.
  • circular rim of the platform defining the open top is engraved with threaded tracks for coupling of the top cap carved with corresponding threaded tracks.
  • The top cap can be turned clockwise or anti-clockwise against the open top to respectively engage or disengage it. Removal of the top cap leads to instant access to the circuit boards and/or controller unit stored within the platform. More preferably, one or more storing racks are attached either to the inner surface of the main body or beneath the top cap to hold circuit boards of various modules, including the controller unit, in an orderly fashion, such that upgrade or replacement of the relevant modules can be conducted conveniently.
  • Alternatively, the base and the platform arising therefrom can be one integral structure instead of two separate components mechanically fixed together to form one functional unit.
  • The first driving mechanism 170 comprises a first motor 171, a first gear assembly 172 engaged to the platform 130 and configured to be driven by the first motor 171 to rotate the platform 130 around the central vertical axis in relation to the base 110, and a first optical encoder 174 coupled to the first gear assembly 172 to determine angular or azimuth movement of the platform 130, or of the attached camera assembly 310/330, with reference to one or more predetermined reference points at the horizontal plane.
  • the first optical encoder 174 preferably includes a first optical disc encoded with a number of codes or counts denoting or reflecting angular position of the attached camera assembly 310/330 and at least one optical sensor or reader capable of deciphering the code or counts into a machine-readable signal to be fed to the controller unit 150 for managing panning movement of the camera assembly 310/330 coupled thereto. More preferably, a pair of diagonally positioned optical sensors or readers (not shown) is employed in the first optical encoder 174 for interpolating absolute position of the camera assembly 310/330.
  • the first motor 171 can be a step motor regulated or managed by the controller unit 150.
  • Upon receiving input from the controller unit 150, the step motor drives the first gear assembly 172, which subsequently pulls a first driving belt 173 to move the first optical disc accordingly to determine the extent of azimuth movement.
  • Part of the first gear assembly is operably joined to a gear shaft 178, substantially aligned to the central vertical axis and located at the base 110, such that the driving force of the first motor 171 can be translated into azimuth movement of the platform 130, and thus of the attached camera assembly 310/330, around the vertical axis in relation to the base 110.
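  • The closed-loop interplay between the step motor, gear train and encoder described above can be pictured with a small sketch; `StepMotor` and `Encoder` below are hypothetical stand-ins (the patent publishes no code), and a 1:1 count-per-step gearing is assumed for simplicity:

```python
# Hypothetical sketch of the first driving mechanism's feedback loop:
# the controller steps the motor until the pan encoder reports the target count.

class Encoder:
    def __init__(self, cycles_per_rev: int):
        self.cycles_per_rev = cycles_per_rev
        self.count = 0  # current pan position in counts

class StepMotor:
    def __init__(self, encoder: Encoder):
        self.encoder = encoder
    def step(self, direction: int):
        # assumed: one encoder count per motor step (1:1 gearing via the belt)
        self.encoder.count += direction

def pan_to_count(motor: StepMotor, target: int):
    """Drive the platform until the encoder matches the target count."""
    while motor.encoder.count != target:
        motor.step(1 if motor.encoder.count < target else -1)

enc = Encoder(cycles_per_rev=3600)
pan_to_count(StepMotor(enc), 250)
print(enc.count)  # 250 counts = 25 degrees of azimuth at 3600 CPR
```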
  • The second driving mechanism 190 comprises a second motor 191, a second gear assembly 192 engaged to the camera assembly 310/330 and configured to be driven by the second motor 191 to rotate or elevate the camera assembly 310/330 around a horizontal axis in relation to the platform 130, and a second optical encoder 194 coupled to the second gear assembly 192 to determine angular or elevation movement of the attached camera assembly 310/330 with reference to one or more predetermined reference points at the vertical plane.
  • The whole setup of the second driving mechanism 190 can be similar to that of the first driving mechanism 170, but it is spatially organized above and transverse to the first driving mechanism 170.
  • the second optical encoder 194 preferably includes a second optical disc encoded with a number of codes or counts denoting or reflecting angular position of the attached camera assembly 310/330 along the altitude angle and at least one optical sensor or reader capable of deciphering the code or counts into a machine-readable signal to be fed to the controller unit 150 for managing elevation movement of the camera assembly 310/330 coupled thereto.
  • a pair of diagonally positioned optical sensor or reader can be employed in some embodiments for interpolating absolute position of the camera assembly 310/330 along the altitude.
  • the second motor 191 can be a step motor regulated or managed by the controller unit 150.
  • the step motor 191 drives the second gear assembly 192, which subsequently creates corresponding elevation movement on the second optical disc of the second optical encoder, via a second driving belt 193, according to input of the controller unit 150.
  • Part of the second gear assembly 192 is operably joined to a gear shaft 198 or other gear components placed inside the camera assembly, such that driving force of the second motor 191 can be translated into elevation movement of the camera assembly 310/330 in relation to the platform 130.
  • the controller unit 150 can be any known device or apparatus with acceptable computing power to carry out instructions, input and/or programs being configured to manage movement of the camera assembly in the disclosed system.
  • the controller unit 150 can be a micro-computer or programmable integrated-circuits (IC) chips. More importantly, the controller unit 150 may form a node of a wider computing network that the information or video captured by the present disclosed system 100 would be shared, stored or analyzed by other nodes in the network. As shown in Figure 5, the controller unit 150 is preferably housed within the platform 130 underneath of the top cap 132.
  • The disclosed system 100 may employ a pair of racks 144 to neatly arrange the IC boards 155 forming the controller unit 150.
  • The racks 144 can have lips 147 extending from the sidewalls 145 of the racks 144 to effectuate securement of the racks 144 within the interior compartment of the platform 130 via fasteners 149 such as screws or nuts.
  • the controller unit 150 may comprise more than one module in order to perform various functions.
  • the controller unit 150 may possess an integrated or separate communication module for data transfer or communication.
  • the communication module may contain buses for the controller unit to electrically communicate with the first 170 and the second driving mechanisms 190 to regulate and/or control rotational movement of the platform 130 and camera assembly 310/330 to manoeuver the field of view of the video camera.
  • the communication module also realizes receipt of the captured visual information by the controller unit 150 from the camera assembly 310/330 and transmits the visual information to be stored into a local or remote database.
  • Communication ports are preferably provided to establish wire or wireless communication with other remotely positioned module, parts or nodes.
  • The communication module may be configured to maintain constant communication with other systems to retrieve information from satellite, radar or weather prediction servers.
  • The disclosed system can further comprise a visual display unit 230 remotely communicating with the controller unit 150 to real-time stream the captured visual information and visually present the scene relating to the target.
  • the visual display unit 230 can be LED- or LCD-based display panel. More preferably, the visual display unit 230 is touch-enabled or tactile-enabled that it allows user to provide one or more user input or instructions to the controller unit 150 by way of typing, highlighting an area of interest, selecting a target for the camera assembly 310/330 to track, and/or touching a defined coordinate point for the system 100 to perform an action towards a location associated to the touched coordinate point through a user interface overlaid on top of or integrated into the captured scene consistently streamed and presented.
  • the visual display unit 230 can be touch-enabled screen of a smartphone, computing tablet, laptop or other the like computing devices.
  • The user interface, preferably in a virtual or digital form, for receiving user input can be called on or off upon activating a dedicated virtual button or area on the screen of the visual display unit 230.
  • A physical button on the smartphone, or a keyboard coupled to the disclosed system, can be used to call upon the virtual user interface to be suspended or overlaid on top of the captured information shown, preferably with good transparency so as not to impede the view of the present scene.
  • Alternatively, the virtual user interface is an additional window juxtaposed with the scene presented.
  • The user interface preferably provides a section where the user can manually key in a coordinate point, or a named location associated therewith, at which a planned action can be carried out with respect to a spot in the three-dimensional environment captured by the camera assembly.
  • camera assemblies 310/330 of some preferred embodiments are illustrated.
  • the camera assemblies 310/330 of the present disclosure may comprise a visible light camera 310 assembly and/or an infrared camera assembly 330.
  • the visible light camera assembly 310 has a tubular housing 311 with front 312 and back openings, a video camera sensor 313 and circuit stored inside the housing 311, a plurality of fastening components 314 to secure the camera sensor 313 and circuit within the housing 311, and front 315 and back end caps 316 to respectively shield the front 312 and back openings for substantially sealing interior of the housing off from the external environment.
  • the front end cap 315 has a glass or lens portion 317 to permit entrance of visible light for generating video or still image on the video camera sensor.
  • A wiper 309 can be coupled to the visible light camera assembly 310 to clean the external surface of the front end cap 315 from time to time, ensuring that the entry of visible light is not impeded by dust or dirt collected on the front end cap 315. An exploded view of the infrared camera assembly 330 is further shown in Figure 6.
  • The infrared camera assembly 330 comprises a tubular housing 331 with front 332 and back openings, an infrared camera sensor 333 and circuit stored inside the housing 331, a plurality of fastening components 334 to secure the infrared camera sensor 333 and circuit within the housing, front 335 and back end caps 336 to respectively shield the front 332 and back openings, and an array of infrared LEDs 339 positioned immediately behind the front end cap 335 to irradiate infrared radiation towards a target in conditions lacking visible light.
  • As mentioned, the first driving mechanism 170 comprises the first optical encoder 174 and the second driving mechanism 190 comprises the second optical encoder 194 to facilitate tracing and computation of the absolute position of the camera assembly 310/330 within the defined horizontal and vertical movement planes.
  • minimal moveable angular distance of the camera assembly 310/330 respectively at the horizontal and vertical planes is dictated or governed by the available counts on the optical discs.
  • Preferably, the first optical disc has N1 cycles/revolution and the second optical disc has N2 cycles/revolution. The number of cycles per revolution of an optical disc can be regarded as the resolution of the optical disc. Higher cycles/revolution generally results in finer attainable angular movement of the camera assembly 310/330.
  • Hence, the present disclosure allows finer and swifter tracking of a moving object shown on the screen of the display unit, either manually or by a preset algorithm. More preferably, N1 and/or N2 is 96 to 30000 or higher. With the aid of the correspondingly coupled optical readers, the disclosed system 100 is able to output or compute one or more signals which correspond to the angular position of the rotationally moved platform 130 at the first plane and of the camera assembly 310/330 at the second plane, for the controller unit 150 to manoeuver the field of view of the camera using the signals.
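  • The quoted CPR range translates directly into the smallest pan or tilt step the system can command; a quick check using assumed figures within the stated 96 to 30000 range:

```python
# Smallest attainable angular step for a few CPR values in the stated range.
for cpr in (96, 3600, 30000):
    print(f"{cpr:>5} CPR -> {360.0 / cpr:.4f} degrees per count")
# 96 CPR -> 3.7500, 3600 CPR -> 0.1000, 30000 CPR -> 0.0120 degrees per count
```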
  • The disclosed system 100 is further incorporated with a coordinate map 400 having N1 × N2 coordinate points 404.
  • Each subsequent count further away from the absolute reference point or count, N1X0, of the first optical disc will be respectively denoted as N1X1, N1X2, N1X3 and so on until N1Xn, where n equals the total count of the first optical disc.
  • The angular distance between one count and the next immediate count on the first optical disc is equally divided and predetermined. For example, the angular distances at azimuth from N1X1 to N1X2 and from N1X2 to N1X3 are equal.
  • The present disclosure uses the angular count, or the rotatable panning angle, of the first optical encoder 174 as a means to compute the azimuth coordinate, or X-coordinate, along the azimuth path of the camera assembly 310.
  • the controller unit 150 can compute or calculate position, more precisely angular position, of the camera assembly 310 with respect to a predetermined reference point.
  • Similarly, each subsequent count further away from the absolute reference point or count, N2Y0, of the second optical disc will be denoted as N2Y1, N2Y2, N2Y3 and so on until N2Yn, where n equals the total count of the second optical disc.
  • The angular distance between one count and the next immediate count on the second optical disc is equally divided and predetermined, preferably according to the cycles/revolution of the second optical disc. For example, the angular distances at altitude from N2Y1 to N2Y2 and from N2Y2 to N2Y3 are the same.
  • the angular count of the second optical encoder 194 is preferably utilized to generate vertical coordinate point, or Y-coordinate point, along the rotatable altitude path of the camera assembly 310. With the count read from the optical sensor or encoder 174/194, the controller unit 150 can compute or calculate position, more precisely angular position, of the camera assembly 310 with respect to a predetermined reference point along the altitude.
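  • The inverse mapping, from a requested azimuth/elevation back to the nearest discrete counts, is what lets the controller express any pose as a coordinate point (N1Xi, N2Yj); a minimal sketch under the same assumed CPR values:

```python
# Hypothetical sketch: nearest encoder counts for a requested pan/tilt angle.

def angle_to_count(angle_deg: float, cycles_per_rev: int) -> int:
    """Nearest discrete encoder count for a requested angle in degrees."""
    return round(angle_deg / 360.0 * cycles_per_rev) % cycles_per_rev

print(angle_to_count(123.4, 3600))  # 1234 -> the i of azimuth point N1Xi
print(angle_to_count(45.0, 3600))   # 450  -> the j of altitude point N2Yj
```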
  • The coordinate system contains a plurality of coordinate points 404, which are preferably computed based upon the cycles/revolution of the first and second optical discs or encoders, such that each coordinate point 404 corresponds to discrete joined counts composed of one count of the first optical disc or encoder 174 and one count of the second optical disc or encoder 194, as explained above.
  • Each coordinate point 404 represents a unique or discrete location on the substantially spherical coordinate system constructed from the combined angular counts of the first 174 and second 194 optical encoders. It is important to note that some of the counts on an optical disc or encoder may not be used owing to physical restraints of the environment where the camera assembly 310 is installed. For instance, the camera assembly 310 may only ascend or descend around 120 degrees in total of elevation angle along the altitude when positioned adjacent to the ceiling of a room; a sketch of such a restricted map follows.
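  • Below is a sketch of such a coordinate map, with the elevation travel limited to roughly 120 degrees as in the ceiling example; the limit values and the small CPRs are assumptions chosen to keep the example cheap:

```python
# Hypothetical sketch of coordinate map 400: every joined pair of counts
# (x, y), with physically unreachable tilt counts excluded.
from itertools import product

def build_coordinate_map(n1: int, n2: int, tilt_limits_deg=(0.0, 120.0)):
    lo, hi = tilt_limits_deg
    usable_y = [y for y in range(n2) if lo <= y * 360.0 / n2 <= hi]
    return set(product(range(n1), usable_y))

coord_map = build_coordinate_map(360, 360)  # deliberately small CPRs
print(len(coord_map))  # 360 x 121 = 43560 usable coordinate points
```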
  • the disclosed system 100 overlays, maps or indexes the scene captured by the camera assembly 310/330 with part of the corresponding coordinate points 404 of the coordinate map 400.
  • the scene or field of view present on the screen at one point of time only reveals part of the total environment monitored or visually accessible by the camera assembly 310/330.
  • the controller unit 150 can determine coordinate points 404 of the coordinate map 400 relating to the scene actively shown, such that the disclosed system 100 can overlay, map or index the scene using the computed coordinate points 404.
  • presence of touch- or tactile-enabled screen in the present disclosure permits receipt of user touch on the screen to toggle an input or feature thereof, preferably in relation to the mapped coordinate points 404.
  • the disclosed system 100 preferably maps or overlays the coordinate points orderly onto the scene shown in the screen according to a number of predetermined pixels, literally a group of closely located pixels, available on a defined area on the screen. More particularly, the disclosed system 100 links or associates a group of adjacently placed, located or positioned pixels on the screen to a coordinate point 404 of the coordinate map 400.
  • a touch, which is digitally registered by the controller unit 150, towards one or more pixels of a given group linked to a coordinate point 404 will be recognized by the disclosed system 100 as a user input to activate an event to the linked coordinate point 404; the recognized input may lead to an intended action directed to or at a location, in the three dimensional environment, corresponding to or associated with the recognized coordinate point 404.
  • Each coordinate point 404 overlaid on the screen will be associated with a discrete group of pixels in a similar manner, as sketched below.
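  • One simple realization of this pixel grouping (the screen size and group size are assumed figures, not from the patent) assigns each touched pixel to the coordinate point owning its group:

```python
# Hypothetical sketch: resolving a touched pixel to its coordinate point.
SCREEN_W, SCREEN_H = 1920, 1080  # assumed display resolution
GROUP = 120                      # assumed pixels per group side

def touch_to_coordinate_point(px: int, py: int):
    """Return (column, row) of the coordinate point owning the pixel group."""
    return (px // GROUP, py // GROUP)

print(touch_to_coordinate_point(1000, 500))  # -> (8, 4)
```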
  • By this arrangement, the present disclosure is able to employ high-CPR or high-resolution optical encoders 174/194 in deriving the plurality of coordinate points 404 and packing more derived coordinate points 404 into a displayed scene at any given time, rendering the tracking of a target or performance of a planned action much more refined.
  • The association of high counts in the optical encoders 174/194 with a high-definition display unit 230, resulting in improved tracking and targeting, was almost infeasible in the absence of any of these elements a decade ago.
  • scene or a tracked object displayed on the screen of the display unit 230 in some of the embodiments can be zoomed in or out, either by way of optical or digital magnification.
  • the coordinate points 404 and the linked pixel groups can adaptably change in a dynamic manner in connection to the zoom or magnification level of the scene or object monitored.
  • Preferably, the disclosed system 100, by way of software manipulation, reduces the number of coordinate points 404 allotted to the screen and associates more pixels to each coordinate point 404 when the scene or target object is zoomed in or magnified on screen.
  • For example, the number of coordinate points 404 may be 40, with 100 pixels associated to each coordinate point, in the zoom-free condition, but the number of coordinate points 404 will be reduced to 20, with 200 pixels associated to each remaining coordinate point 404, when the target or scene is subjected to 2x magnification.
  • Conversely, the disclosed system 100 packs more coordinate points into an overlaid scene, with fewer pixels associated to each coordinate point 404 compared to the zoom-free condition, upon activation of a zoom-out function.
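  • One scaling rule consistent with the 40-point/100-pixel example above is linear in the magnification factor; this rule is an assumption, as the text only gives the end figures:

```python
# Hypothetical sketch: reallocating coordinate points and pixels with zoom.
def overlay_allocation(points_at_1x: int, pixels_at_1x: int, zoom: float):
    return int(points_at_1x / zoom), int(pixels_at_1x * zoom)

print(overlay_allocation(40, 100, 1.0))  # (40, 100) in the zoom-free condition
print(overlay_allocation(40, 100, 2.0))  # (20, 200) at 2x magnification
```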
  • Further embodiments of the disclosed system may include an algorithm to compare the pixels digitally touched by the user when a registered tactile input lands on two adjacent groups of pixels belonging to two coordinate points 404 on the screen; the disclosed system preferably registers the coordinate point with the greater number of touched pixels as the valid input, as sketched below.
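  • A sketch of this majority tie-break, with a hypothetical `owner_of` callable mapping each pixel to its coordinate point (group geometry as assumed earlier):

```python
# Hypothetical sketch: a touch blob spanning two pixel groups is awarded to
# the coordinate point owning more of the touched pixels.
from collections import Counter

def resolve_touch(touched_pixels, owner_of):
    votes = Counter(owner_of(p) for p in touched_pixels)
    return votes.most_common(1)[0][0]

owner_of = lambda p: (p[0] // 120, p[1] // 120)  # assumed 120-pixel groups
blob = [(118, 10), (119, 10), (120, 10), (121, 10), (122, 10)]
print(resolve_touch(blob, owner_of))  # (1, 0): owns 3 of the 5 touched pixels
```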
  • More preferred embodiments of the present disclosure may further take the focal length of the camera assembly 310 into consideration to map or index the scene when zooming in or out.
  • The base points of the first optical encoder 174, a horizontal pan (X-axis) optical disc encoder, and of the second optical encoder 194, a vertical tilt (Y-axis) optical disc encoder, are calibrated continuously such that the entire panoramic area of the combined coverage of the pan and tilt operations, over 360 degrees of the X-axis and 360 degrees of the Y-axis, is indexed or computed to generate the coordinate points 404.
  • The disclosed system 100 uses the generated indices (I) or coordinate points 404 for subsequent mapping with the field of view (FOV) and/or pixels of the camera sensor at all focal lengths (f), and with the field of view and/or pixels of the visual display unit 230.
  • The selection of the cycles per revolution of the optical disc encoders 174/194 needs to meet the distance and accuracy requirements of the intended application: the longer the distance to the target and the smaller the size of the target, the higher the cycles per revolution required of the optical disc encoders 174/194.
  • the density of indices or coordinate points 404 shall be reduced proportionally with higher focal length (f) and vice versa.
  • The reduction or increment of focal length determines the number of screen pixels defining an index and the gap distance between immediately adjacent indices.
  • At higher focal length, the number of pixels on the screen defining or linked to an index increases and targeting a smaller object becomes harder, with a greater gap existing between indices, as sketched below.
  • the field of view and pixels of the camera assembly 310 are calibrated with the field of view and/or pixels of the display unit 230 to ensure that the available pixels on the display unit 230 can be correctly mapped using the generated indices and/or coordinate point.
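  • The proportional relationship between focal length and index density can be sketched as follows; the reference focal length and base density are assumed values, since the text states only the proportionality:

```python
# Hypothetical sketch: index density falls proportionally as focal length grows.
def indices_per_axis(base_density: int, f_ref: float, f: float) -> int:
    """Coordinate indices along one screen axis at focal length f."""
    return max(1, int(base_density * f_ref / f))

for f in (4.0, 8.0, 16.0):  # assumed focal lengths in mm
    print(f, indices_per_axis(40, 4.0, f))
# 4.0 -> 40, 8.0 -> 20, 16.0 -> 10: coarser indexing when zoomed in
```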
  • the present disclosure further comprises a tool driven or managed by the controller unit to perform an action towards a location in the surveyed environment.
  • the location substantially corresponds to a coordinate point 404 overlaid on the screen.
  • the controller unit 150 overlays or indexes the scene presented on the visual display unit 230 with computed coordinate points 404 from part of the coordinate map 400 as illustrated in Figure 7.
  • the display unit 230 is configured to receive an input, preferably as a touch input, on one of the coordinate points 404 indexed to the shown scene and relays the input to the controller unit 150 to drive the tool performing an action towards a corresponding location or spot in the surveyed environment.
  • the tool (not shown) is preferably capable of launching a substantially long range action towards a spot or location in the surveyed environment associated to the coordinate point 404 selected from the screen based on the user input.
  • the tool can be a light source, a sound source, or even projectile weapon to deter or stop the unlawful intrusion.
  • the tool in some embodiments, is a spotlight. It may be moveable to beam light on a given location or spot, according to a user input, around the intruder as a sign of warning.
  • the visual display unit 230 is tactile-sensitive and capable of generating the input by way of sensing a touch about the coordinate points 404 overlaid onto the presented scene.
  • the grids or coordinate map 400 of the coordinate points 404 may or may not be shown on the screen of the display unit.
  • Figure 7 illustrates one embodiment in which the grid 400 is mapped and shown on the visual display unit 230.
  • Preferably, the controller unit 150 is fashioned to take the coordinate point 404 closest to the tactile input sensed or detected on the screen as the point of input and to direct an action accordingly, as sketched below.
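  • A minimal sketch of this nearest-point snapping, assuming the overlaid points sit on a regular pixel grid of known pitch:

```python
# Hypothetical sketch: snap a tactile input to the closest overlaid point.
PITCH = 120  # assumed pixel spacing between overlaid coordinate points

def nearest_coordinate_point(px: int, py: int):
    """Closest grid point to the touch, in grid units."""
    return (round(px / PITCH), round(py / PITCH))

print(nearest_coordinate_point(250, 599))  # -> (2, 5)
```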
  • It shall be noted that the scene displayed on the screen, mapped or overlaid, visibly or invisibly, by the grid 400 of coordinate points 404, is a planar or two-dimensional view, while the coordinate points 404 are actually computed or generated based upon the angular position of the camera assembly 310/330 within a three-dimensional environment.
  • In other words, the scene or object shown on the display unit is a two-dimensional representation of the spatial arrangement of various elements in a three-dimensional environment. The accuracy or precision of such a representation degrades exponentially as the subject matter is located farther away from the camera assembly 310/330.
  • The planned action can be directed to a specific location in the three-dimensional environment corresponding to a coordinate point 404 selected on the screen. Nonetheless, the intended action can be carried out with acceptable accuracy without the need for calibration work, especially when the area monitored by the disclosed system 100 is a relatively small enclosed environment and/or the location or subject targeted by the disclosed system 100 is in close proximity to the camera assembly 310/330 or the driven tool.
  • the disclosed system 100 in some other embodiments, may only have several locations of interest on the surveyed scene calibrated to the corresponding coordinate locations on screen.
  • One or more embodiments of the present disclosure may be enabled to perform the intended action free from any manual input or human intervention relying upon algorithm or programmed instruction used in running the disclosed system.
  • The intended action will be conducted in an automated fashion, though it can be overridden by manual input if needed.
  • the disclosed system 100 may be incorporated with at least one distance measuring sensor to detect distance of the target or the location from the tool and relay the detected distance to the controller unit to calculate the ideal trajectory path of the light, sound or projectile from the tool.
  • the distance measuring sensors are infra-red or laser-based.
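  • Combining a selected coordinate point with the measured range fixes the target in three dimensions; the sketch below converts pan/tilt counts plus distance into Cartesian coordinates (the axis conventions and zero references are assumptions):

```python
# Hypothetical sketch: spherical-to-Cartesian conversion of a target fix.
import math

def target_position(x_count, y_count, n1, n2, distance_m):
    az = math.radians(x_count * 360.0 / n1)  # azimuth from the pan count
    el = math.radians(y_count * 360.0 / n2)  # elevation from the tilt count
    return (distance_m * math.cos(el) * math.cos(az),
            distance_m * math.cos(el) * math.sin(az),
            distance_m * math.sin(el))

print(target_position(900, 450, 3600, 3600, 10.0))  # 90 az, 45 el, 10 m away
```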
  • Multiple sensors, including video camera sensors, and/or intended action devices can be calibrated to handle various multiple axis distances and directions of target object in several preferred embodiments of the present disclosure.
  • The present disclosure also provides a visual surveillance and target tracking system 100 comprising a platform 130 mountable to a surface; a first driving mechanism 170 being coupled to the platform 130 to rotationally move the platform 130 at a first plane in relation to the base 110; a camera assembly 310/330 attached to the platform, the camera assembly 310 having a field of view for constantly capturing visual information of a scene of an environment relating to a target; a second driving mechanism 190 being engaged to the camera assembly 310 to rotatably move the camera assembly 310 at a second plane perpendicular to the first plane; a controller unit 150 electrically communicating with the first 170 and the second driving mechanisms 190 to regulate and/or control rotational movement of the platform 130 and camera assembly 310 to manoeuver the field of view of the camera assembly 310, the controller unit 150 receiving captured visual information from the camera assembly 310; and a visual display unit 230 remotely communicating with the controller unit 150 to real-time stream the captured visual information and visually present the scene relating to the target.
  • The first driving mechanism 170 comprises a first optical encoder 174 of N1 cycles/revolution and the second driving mechanism 190 comprises a second optical encoder 194 of N2 cycles/revolution, such that the first 174 and the second optical encoders 194 are able to output one or more signals respectively corresponding to the angular position of the rotationally moved platform 130 at the first plane and of the camera assembly 310 at the second plane, for the controller unit 150 to manoeuver the field of view of the camera assembly 310 using the signals.
  • The scene presented on the display unit 230 is overlaid with a plurality of coordinate points 404 computed from the cycles/revolution of the first 174 and the second optical encoders 194, such that each coordinate point 404 corresponds to a discrete point where counts of the first 174 and the second optical encoders 194 intersect one another.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure provides a visual surveillance system 100. The disclosed system 100 preferably comprises a base 110 mountable to a surface; a platform 130 extending away from the base 110; a first driving mechanism 170 being coupled to the platform 130 to rotationally move the platform 130 at a first plane in relation to the base 110; a camera assembly 310 attached to the platform 130, the camera assembly 310 having a field of view for constantly capturing visual information of a scene of an environment relating to a target; a second driving mechanism 190 being engaged to the camera assembly 310 to rotatably move the camera assembly 310 at a second plane perpendicular to the first plane; and a controller unit 150 electrically communicating with the first 170 and the second driving mechanisms 190 to regulate and/or control rotational movement of the platform 130 and camera assembly 310 to manoeuver the field of view of the camera assembly 310, the controller unit 150 receiving captured visual information from the camera assembly 310. The first driving mechanism 170 of the disclosed system 100 comprises a first optical encoder 174 of N1 cycles/revolution and the second driving mechanism 190 comprises a second optical encoder 194 of N2 cycles/revolution such that the first 174 and the second optical encoders 194 are able to output one or more signals respectively corresponding to the angular position of the rotationally moved platform 130 at the first plane and the camera assembly 310 at the second plane for the controller unit 150 to manoeuver the field of view of the camera assembly 310 using the signal.

Description

A Visual Surveillance System With Target Tracking Or Positioning Capability

Technical Field
The present invention relates to a visual target tracking or positioning system. More preferably, the invention is in the form of a visual surveillance system incorporated with the capabilities to constantly track position of a target in a substantially precise manner and, optionally, perform a corresponding desired action towards the target or a location around the target by sensing input from a user.
Background
Camera or visual surveillance systems are conventionally used in various environments for security reasons, such as guarding against unauthorized entry or discovering the occurrence of theft of valuable products. Pursuant to the advance of technologies, camera surveillance systems have been further improved and conferred with greater capabilities to achieve other functions in addition to their conventional roles. For example, some camera surveillance systems have been incorporated with facial recognition ability, assisting law enforcers to identify one or more suspects almost immediately, through the implementation of high-resolution cameras and the incorporation of facial recognition algorithms into the computing apparatus controlling these systems. Apart from that, employment of the surveillance system in object tracking or positioning, either triggered by an action of the object and/or physical parameters of the object, is another aspect which has gained strong interest. It allows the surveillance system to continuously follow or track the movement of an object of interest within the field of view of one or more cameras operated by the surveillance system. Such object tracking and/or positioning capabilities are deemed significantly effective in identifying the progress of an incident in real time, such that a corresponding intervening action can be performed, if needed, to end the proceeding incident before any undesired outcome is effectuated.
In order to successfully track the object of interest, an acceptable level of accuracy or precision has to be attained by the surveillance system in maneuvering the camera coupled thereto, to ensure that the object of interest is always locked within sight of the camera. Such precision in locating or tracking a target can be approached either from a software or hardware perspective, or a combination of both. For example, United States patent no. 6998987 offers a video tracking system integrated with RFID-based coordinate determinators; these coordinate determinators are RFID tags positioned at visually apparent locations and calibrated to allow the video surveillance system to determine the location. With the assistance of the RFID coordinate determinators, the described system can easily direct the camera to pinpoint the location anchored with the determinators. Nevertheless, implementation of RFID tags may become unfeasible when a large area is to be covered by the surveillance system and a great number of locations are to be installed with RFID tags. Eckert et al. have devised another surveillance system capable of tracking an object over a larger area, with the use of a motorized camera carriage suspended from and movable on an installed track, in United States patent publication no. 2013/0302024. However, establishment of the track can be difficult in an outdoor environment or an area having limited space.
Alternatively, Murakami, in United States patent publication no. 2005/0128291, discloses a surveillance system having cameras of both visible light and infrared, such that the latter camera type can be used to acquire a video feed in situations where visible light is substantially absent. The system of Murakami further comprises an object location calculator module and path analyzer module to predict the next position of the moving object by analyzing rotation parameters of the camera, including tilt and pan angles, and a predetermined coordinate map. Being enabled to predict the motion of the moving object, Murakami claims the developed system can trace and constantly monitor the movement of a targeted object. Yu et al. also disclose another surveillance system, in United States Patent application no. 8977001, for tracking an object by determining and selecting one of a plurality of predetermined models applicable for various situations and environments. The software or algorithms utilized in the aforesaid systems shall face no apparent problems in delivering the desired result with excellent reliability. Still, there are situations in which application of these pre-designed algorithms may not be suitable and human intervention is required to manually drive the camera to pursue the target, e.g. the presence of multiple, physically almost identical moving objects with one particular object targeted to be tracked.
To effectuate such tracking by human effort, the pan and tilt angles of the camera in the surveillance system have to be quantified and fine-tuned so that the camera can attain minimal movement at the smallest azimuthal and/or elevation angle possible, facilitating swift and continuous tracking movement. Kahn's invention, as described in United States patent no. 5802412, discloses a computer-controlled miniature pan/tilt tracking mount for small payloads such as cameras. The tracking mount of Kahn employs at least one pan position sensor and one tilt position sensor to determine the position of the mount and consequently allows precise control of the mount. A similar concept can be found in United States patent no. 6715940 for producing a dome camera assembly. The claimed dome assembly has the rim of the rotatable bearing race, on which the camera is mounted, carved with an optical code denoting discrete pan positions of the rotatable bearing race, and an adjacently installed optical reader to check the information about the rotational position. Still, these systems fail to substantially associate locations in the actual environment monitored by the surveillance system with corresponding predetermined coordinates mapped onto the scene shown in real time on the monitor of a display unit. Establishing such an association may render manual tracking of an object easier and/or allow effectuating an intended action towards an associated location by inputting at least one or more predetermined coordinates into the surveillance system. Therefore, a surveillance system bearing such features is highly desired.
Summary
The present disclosure aims to provide a surveillance system capable of acquiring and displaying a video feed through one or more camera assemblies to monitor and safeguard an area within the field of view of the camera assembly.
Another object of the present disclosure is to offer a video surveillance system featuring target positioning and/or tracking abilities. More particularly, the disclosed system is incorporated with sensors, such as mechanical encoders or optical encoders, to trace both the pan and tilt angles of the camera assembly of the surveillance system, so that the camera assembly can attain swift, continuous tracking movement in a more controlled and managed fashion.
A further object of the present disclosure is to provide a video surveillance system in which the planar view of the three-dimensional environment, captured by the camera at various tilt and pan angles and presented on a display unit, is substantially mapped, calibrated, indexed and/or coordinated to intersecting, paired, joined and/or combined discrete counts on each of the pan and tilt optical encoders, which respectively trace and determine the azimuthal panning angle and the altitude elevation angle.
Still another object of the present disclosure is to provide a video surveillance system, with the planar view of the videoed three-dimensional environment being calibrated or indexed to a set of predetermined coordinate points, coupled to a tool that can carry out one or more actions towards a location in the three-dimensional environment substantially corresponding to a coordinate point associated with the planar view shown on a display unit. These coordinate points are preferably generated or computed based upon the cycles per revolution, or resolution, of a pair of optical encoders which are respectively consulted by a controller unit of the present disclosure to determine and regulate the azimuth and elevation angles of the camera assembly. Each coordinate point corresponds to discrete joined counts combining one count of the first optical encoder and one count of the second optical encoder. The scene captured by the camera assembly is indexed or overlaid with coordinate points from part of the coordinate map. The computed coordinate points refer to non-repetitive coordinates on a substantially spherical coordinate system or map established based upon the cycles per revolution of the first and second optical encoders at azimuth and altitude respectively.
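Purely by way of non-limiting illustration, the computation of such a coordinate map can be sketched in a few lines of Python; the function name below is hypothetical, and the even spread of counts over a full revolution is an assumption of the sketch rather than a limitation of the disclosure.

```python
# Illustrative sketch: deriving the coordinate map from the two encoder
# resolutions. For very high cycles/revolution the points would in
# practice be computed on demand rather than materialized as a dict.

def build_coordinate_map(n1: int, n2: int):
    """Map each joined count (i, j) to (azimuth, elevation) in degrees,
    where i counts the first (pan) encoder and j the second (tilt) encoder."""
    az_step = 360.0 / n1   # angular gap between adjacent pan counts
    el_step = 360.0 / n2   # angular gap between adjacent tilt counts
    return {(i, j): (i * az_step, j * el_step)
            for i in range(n1) for j in range(n2)}

# A pair of 96-count encoders yields 96 * 96 = 9216 non-repetitive points.
coordinate_map = build_coordinate_map(96, 96)
print(len(coordinate_map))     # 9216
print(coordinate_map[(1, 0)])  # (3.75, 0.0)
```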
At least one of the preceding objects is met, in whole or in part, by the present invention, in which one of the embodiments of the present invention is a visual surveillance system. Particularly, several embodiments of the disclosed system comprise a base mountable to a surface; a platform extending away from the base; a first driving mechanism coupled to the platform to rotationally move the platform at a first plane in relation to the base; a camera assembly attached to the platform, the camera assembly having a field of view for constantly capturing visual information of a scene of an environment relating to a target; a second driving mechanism engaged to the camera assembly to rotatably move the camera assembly at a second plane perpendicular to the first plane; and a controller unit electrically communicating with the first and the second driving mechanisms to regulate and/or control rotational movement of the platform and camera assembly to manoeuver the field of view of the camera assembly, the controller unit receiving captured visual information from the camera assembly. The first driving mechanism preferably comprises a first optical encoder of N1 cycles/revolution and the second driving mechanism comprises a second optical encoder of N2 cycles/revolution, such that the first and the second optical encoders are able to output one or more signals respectively corresponding to the angular position of the rotationally moved platform at the first plane and of the camera assembly at the second plane, for the controller unit to manoeuver the field of view of the camera using the signals. Preferably, the first plane is a horizontal plane and the second plane is a vertical plane.
For some embodiments, the disclosed system further comprises a tool driven by the controller unit to perform an action towards a location in the environment, the location substantially corresponding to a coordinate point.
In more embodiments, a visual display unit may be included, remotely communicating with the controller unit to stream the captured visual information in real time and visually present the scene relating to the target.
In other preferred embodiments, the visual display unit is tactile-sensitive and capable of generating the input by sensing a touch towards the indexed coordinate points overlaid on the presented scene.
Still, in a number of embodiments, the controller unit overlays or indexes the scene presented on the visual display unit with the coordinate points. The display unit is preferably tactile-sensitive and configured to receive, from a user, an input relating to one of the coordinate points associated with the presented scene, to drive the tool to perform the action towards the corresponding location or adjacent to it.
For a number of embodiments, the disclosed system is incorporated with a distance measuring sensor, such as a laser- and/or infrared-based distance sensor, to detect the distance of the target from the tool and relay the detected distance to the controller unit. Another aspect of the present disclosure involves a visual surveillance system that comprises a platform mountable to a surface; a first driving mechanism coupled to the platform to rotationally move the platform at a first plane in relation to the base; a camera assembly attached to the platform, the camera assembly having a field of view for constantly capturing visual information of a scene of an environment relating to a target; a second driving mechanism engaged to the camera assembly to rotatably move the camera assembly at a second plane perpendicular to the first plane; a controller unit electrically communicating with the first and the second driving mechanisms to regulate and/or control rotational movement of the platform and camera assembly to manoeuver the field of view of the camera assembly, the controller unit receiving captured visual information from the camera assembly; and a visual display unit remotely communicating with the controller unit to real-time stream the captured visual information and visually present the scene relating to the target. The first driving mechanism comprises a first optical encoder of N1 cycles/revolution and the second driving mechanism comprises a second optical encoder of N2 cycles/revolution, such that the first and the second optical encoders are able to output one or more signals respectively corresponding to the angular position of the rotationally moved platform at the first plane and of the camera assembly at the second plane, for the controller unit to manoeuver the field of view of the camera using the signals. Moreover, the scene presented on the display unit is overlaid with a plurality of coordinate points computed from the cycles/revolution of the first and second optical encoders, such that each coordinate point corresponds to a discrete point where counts of the first and second optical encoders intersect one another.
Brief Description Of The Drawings
Figure 1 shows perspective view of one embodiment of the present disclosure with two video camera assemblies, a left-positioned camera assembly and a right-positioned camera assembly;
Figure 2 shows exploded view of the embodiment illustrated in Figure 1;
Figure 3 shows exploded view of one embodiment of the base and other components placed thereon, including parts of the first driving mechanism;
Figure 4 shows exploded view of one embodiment of the disclosed system around the platform main body and other components placed thereon, including parts of the second driving mechanism;
Figure 5 shows the top cap of the platform and the controller unit or module positioned underneath the cap;
Figure 6 shows exploded view of the right-positioned camera; and
Figure 7 illustrates the grid or coordinate map of coordinate points mapped onto the display unit.
Detailed Description
Hereinafter, the invention shall be described according to the preferred embodiments of the present invention and by referring to the accompanying description and drawings. However, it is to be understood that limiting the description to the preferred embodiments of the invention and to the drawings is merely to facilitate discussion of the present invention, and it is envisioned that those skilled in the art may devise various modifications without departing from the scope of the appended claims.
As illustrated in Figures 1 and 2, the present disclosure relates to a visual surveillance system 100 using one or more camera assemblies 310, 330 operable to capture video images of one or more objects within a defined environment. Generally, the disclosed visual target tracking system 100 comprises a base 110 mountable to a surface; a platform 130 extending away from the base 110; a first driving mechanism 170 coupled to the platform to rotationally move the platform at a first plane in relation to the base; a camera assembly 310/330 attached to the platform, the camera assembly 310/330 having a field of view for constantly capturing visual information of a scene of an environment relating to a target; a second driving mechanism 190 engaged to the camera assembly 310/330 to rotatably move the camera assembly 310/330 at a second plane perpendicular to the first plane; and a controller unit 150 electrically communicating with the first 170 and the second driving mechanisms 190 to regulate and/or control rotational movement of the platform 130 and camera assembly 310/330 to manoeuver the field of view of the camera assembly 310/330, the controller unit 150 receiving captured visual information from the camera assembly 310/330. It is important to note that the system 100 illustrated in Figures 1 and 2 is one embodiment of the present disclosure, intended to be mounted on a pole-like structure or the surface of a pole-like structure for visual surveillance, and the present disclosure shall not be limited solely to the embodiments described hereafter. Preferably, but not exclusively, the first plane is a horizontal plane and the second plane is a vertical plane, referring to the embodiments described hereinafter. For some other embodiments, the disclosed system 100 may be mounted onto a mobile unit such that the captured image or video is transmitted wirelessly to a remotely located server or display. The mobile unit can be a robotic construct, a vehicle or even moving human troops. One or more self-stabilization mechanisms can be incorporated to ensure that video captured from such a mobile unit attains a certain acceptable quality to be used for subsequent analysis work.
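For illustration only, the enumerated arrangement may be summarised as a simple object model; every class and field name below is hypothetical and merely mirrors the parts named above (base 110, platform 130, driving mechanisms 170/190, camera assemblies 310/330, controller unit 150).

```python
# Hypothetical object model mirroring the enumerated parts; not part of
# the original disclosure. The CPR value used below is an assumed example.
from dataclasses import dataclass

@dataclass(frozen=True)
class DrivingMechanism:
    plane: str        # "horizontal" for pan (170) or "vertical" for tilt (190)
    encoder_cpr: int  # cycles/revolution of the coupled optical encoder

@dataclass(frozen=True)
class VisualSurveillanceSystem:
    base: str
    platform: str
    pan_drive: DrivingMechanism
    tilt_drive: DrivingMechanism
    cameras: tuple
    controller: str

system = VisualSurveillanceSystem(
    base="base 110, mountable to a surface",
    platform="platform 130, extending away from the base",
    pan_drive=DrivingMechanism("horizontal", 9600),
    tilt_drive=DrivingMechanism("vertical", 9600),
    cameras=("visible-light assembly 310", "infrared assembly 330"),
    controller="controller unit 150",
)
print(system.pan_drive.plane)  # horizontal
```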
Preferably, the base 110 and platform 130 of the present system can each be a hollow housing in which at least part of the controller unit 150 and the first 170 and second driving mechanisms 190 are stored and kept from being adversely affected by external environmental agents such as rain, heat, vapor and/or dust. Preferably, the base 110 is cylindrical in shape, defining an internal hollow compartment which is accessible at least via an open bottom, a through hole 111 located at its top, and a side opening 112. A side flange 113 extends radially from the bottom rim of the base 110, on which several threaded apertures 114 are located, allowing securement of the base 110 onto a mountable surface by way of compatible bolt-and-nut or similar fasteners. The top through hole 111 and the bottom opening (not shown) are arranged and aligned in a manner which facilitates wires, cables, wiring pipes or the like to run through without any substantial hassle. In some embodiments, part of the controller unit and/or communication module may be stored within the base 110. The side opening 112 fabricated in the sidewall 115 of the base 110 grants access for configuring these stored parts. A cover 116 is preferably used to seal off the side opening 112 when access to the interior compartment of the base 110 is not needed. Further, the top through hole 111 of the base is preferably defined around the center axis of the top 118 of the base 110.
According to other embodiments, the platform rises vertically or extends away from the base. More particularly, the platform is a cylindrical construct defining an interior compartment to house at least part of the first driving mechanism. The platform generally comprises a hollow cylindrical main body having an open top and an open bottom, concealable by a top cap and a bottom cap respectively. The cylindrical main body of the platform is relatively greater in length but shorter in diameter compared to the base, though not necessarily so in some other embodiments, and preferably rests atop the base. Both the bottom rim of the cylindrical main body and the bottom cap possess compatible flanges bearing a number of through holes; fasteners such as bolts and nuts can be engaged thereto for securing the bottom cap to the main body. Correspondingly, the bottom cap carries a through hole (not shown) around the central axis for part of the wiring, cables or the first driving mechanism to cross into the interior compartment of the platform. Preferably, the platform is rotatable, or can be revolved, around the central vertical axis in relation to the base to effectuate panning movement of the camera assembly at the azimuth angle. A pair of outlets is carved and arranged on the main body in an opposing fashion. Some embodiments may have one or more of such outlets on which the camera assembly rotatably mounts and can be turned to attain various elevation angles for visual surveillance purposes. Wiring can be routed through the outlet to reach the camera assembly. Referring to Figure 4, the circular rim of the platform defining the open top is engraved with threaded tracks for coupling of the top cap, which is carved with corresponding threaded tracks. Particularly, the top cap can be turned clockwise or anti-clockwise against the open top to respectively engage or disengage the top cap. Removal of the top cap gives instant access to the circuit boards and/or controller unit stored within the platform. More preferably, one or more storing racks are attached either to the inner surface of the main body or beneath the top cap to hold the circuit boards of various modules, including the controller unit, in an orderly fashion, such that upgrade or replacement of the relevant modules can be conducted conveniently. In some embodiments, the base and the platform rising therefrom can be one integral structure instead of two separate components mechanically fixed together to form one functional unit. As shown in Figures 3 and 4, at least part of the first 170 and second driving mechanisms 190 reside within the platform 130, though such an arrangement can be configured differently according to various embodiments of the present disclosure. Particularly, the first driving mechanism 170 comprises a first motor 171, a set of first gear assembly 172 engaged to the platform 130 and configured to be driven by the first motor 171 to rotate the platform 130 around the central vertical axis in relation to the base 110, and a first optical encoder 174 coupled to the first gear assembly 172 to determine the angular or azimuth movement of the platform 130, or the attached camera assembly 310/330, with reference to one or more predetermined reference points in the horizontal plane.
The first optical encoder 174 preferably includes a first optical disc encoded with a number of codes or counts denoting or reflecting the angular position of the attached camera assembly 310/330, and at least one optical sensor or reader capable of deciphering the codes or counts into a machine-readable signal to be fed to the controller unit 150 for managing the panning movement of the camera assembly 310/330 coupled thereto. More preferably, a pair of diagonally positioned optical sensors or readers (not shown) is employed in the first optical encoder 174 for interpolating the absolute position of the camera assembly 310/330. The first motor 171 can be a step motor regulated or managed by the controller unit 150. The step motor drives the first gear assembly 172, which subsequently pulls a first driving belt 173 to move the first optical disc accordingly to determine the extent of azimuth movement, upon receiving input from the controller unit 150. Part of the first gear assembly is operably joined to a gear shaft 178, substantially aligned with the central vertical axis and located at the base 110, such that the driving force of the first motor 171 can be translated into azimuth movement of the platform 130, and thus of the attached camera assembly 310/330, around the vertical axis in relation to the base 110. Likewise, the second driving mechanism 190 comprises a second motor 191, a set of second gear assembly 192 engaged to the camera assembly 310/330 and configured to be driven by the second motor 191 to rotate or elevate the camera assembly 310/330 around a horizontal axis in relation to the platform 130, and a second optical encoder 194 coupled to the second gear assembly 192 to determine the angular or elevation movement of the attached camera assembly 310/330 with reference to one or more predetermined reference points in the vertical plane. More particularly, the whole setup of the second driving mechanism 190 can be similar to the first driving mechanism 170, but it is spatially organized above, and transverse to, the first driving mechanism 170. The second optical encoder 194 preferably includes a second optical disc encoded with a number of codes or counts denoting or reflecting the angular position of the attached camera assembly 310/330 along the altitude angle, and at least one optical sensor or reader capable of deciphering the codes or counts into a machine-readable signal to be fed to the controller unit 150 for managing the elevation movement of the camera assembly 310/330 coupled thereto. A pair of diagonally positioned optical sensors or readers can be employed in some embodiments for interpolating the absolute position of the camera assembly 310/330 along the altitude. The second motor 191 can be a step motor regulated or managed by the controller unit 150. The step motor 191 drives the second gear assembly 192, which subsequently creates a corresponding elevation movement on the second optical disc of the second optical encoder, via a second driving belt 193, according to input from the controller unit 150. Part of the second gear assembly 192 is operably joined to a gear shaft 198 or other gear components placed inside the camera assembly, such that the driving force of the second motor 191 can be translated into elevation movement of the camera assembly 310/330 in relation to the platform 130.
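The relation between encoder counts and angular position described above admits a brief sketch, assuming counts are spread evenly over a revolution; the shortest-path wrap-around rule for driving the step motor is an assumption of the sketch, as no particular control law is prescribed herein.

```python
# Illustrative only: converting encoder counts to angles and computing the
# signed count difference a step motor would traverse toward a target count.

def count_to_angle(count: int, cpr: int) -> float:
    """Angular position in degrees for a given encoder count, with the
    encoder's cycles/revolution spread evenly over 360 degrees."""
    return (count % cpr) * 360.0 / cpr

def counts_to_target(current: int, target: int, cpr: int) -> int:
    """Signed count difference taking the shorter way around the circle."""
    diff = (target - current) % cpr
    return diff - cpr if diff > cpr / 2 else diff

print(count_to_angle(2400, 9600))         # 90.0 degrees of pan
print(counts_to_target(9500, 100, 9600))  # 200: wraps forward through zero
```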
According to a number of preferred embodiments, the controller unit 150 can be any known device or apparatus with acceptable computing power to carry out the instructions, input and/or programs configured to manage the movement of the camera assembly in the disclosed system. The controller unit 150 can be a micro-computer or a programmable integrated-circuit (IC) chip. More importantly, the controller unit 150 may form a node of a wider computing network, such that the information or video captured by the presently disclosed system 100 can be shared, stored or analyzed by other nodes in the network. As shown in Figure 5, the controller unit 150 is preferably housed within the platform 130 underneath the top cap 132. The disclosed system 100 may employ a pair of racks 144 to neatly arrange the IC boards 155 forming the controller unit 150. The racks 144 can have lips 147 extending from the sidewalls 145 of the racks 144 to effectuate securement of the racks 144 within the interior compartment of the platform 130 via fasteners 149 such as screws or nuts. In some embodiments, the controller unit 150 may comprise more than one module in order to perform various functions. For example, the controller unit 150 may possess an integrated or separate communication module for data transfer or communication. The communication module may contain buses for the controller unit to electrically communicate with the first 170 and the second driving mechanisms 190 to regulate and/or control rotational movement of the platform 130 and camera assembly 310/330 to manoeuver the field of view of the video camera. The communication module also effectuates receipt of the captured visual information by the controller unit 150 from the camera assembly 310/330 and transmits the visual information to be stored in a local or remote database. Communication ports are preferably provided to establish wired or wireless communication with other remotely positioned modules, parts or nodes. For some preferred embodiments, the communication module may be configured to maintain constant communication with other systems to retrieve information from satellite, radar or weather prediction servers.
In accordance with a plurality of preferred embodiments, the disclosed system can further comprise a visual display unit 230 remotely communicating with the controller unit 150 to real-time stream the captured visual information and visually present the scene relating to the target. The visual display unit 230 can be an LED- or LCD-based display panel. More preferably, the visual display unit 230 is touch-enabled or tactile-enabled, allowing the user to provide one or more inputs or instructions to the controller unit 150 by typing, highlighting an area of interest, selecting a target for the camera assembly 310/330 to track, and/or touching a defined coordinate point for the system 100 to perform an action towards a location associated with the touched coordinate point, through a user interface overlaid on top of, or integrated into, the captured scene consistently streamed and presented. More importantly, the visual display unit 230 can be the touch-enabled screen of a smartphone, computing tablet, laptop or other such computing device. The user interface for receiving user input, preferably in a virtual or digital form, can be called on or off upon activating a dedicated virtual button or area on the screen of the visual display unit 230. A physical button on the smartphone, or a keyboard coupled to the disclosed system, can be used to call up the virtual user interface to be suspended or overlaid on top of the captured information shown, preferably with good transparency so as not to impede view of the presented scene. For a few embodiments, the virtual user interface is adaptably an additional window juxtaposed with the scene presented. The user interface preferably provides a section where the user can manually key in a coordinate point, or a named location associated therewith, at which a planned action can be carried out with respect to a spot in the three-dimensional environment captured by the camera assembly. Referring to Figures 1 and 6, camera assemblies 310/330 of some preferred embodiments are illustrated. The camera assemblies of the present disclosure, in some embodiments, may comprise a visible light camera assembly 310 and/or an infrared camera assembly 330. The visible light camera assembly 310 has a tubular housing 311 with front 312 and back openings, a video camera sensor 313 and circuit stored inside the housing 311, a plurality of fastening components 314 to secure the camera sensor 313 and circuit within the housing 311, and front 315 and back end caps 316 to respectively shield the front 312 and back openings, substantially sealing the interior of the housing off from the external environment. Preferably, the front end cap 315 has a glass or lens portion 317 to permit the entrance of visible light for generating video or still images on the video camera sensor. A wiper 309 can be coupled to the visible light camera assembly 310 to clean the external surface of the front end cap 315 from time to time, ensuring that the entrance of visible light is not impeded by dust or dirt collected on the front end cap 315. An exploded view of the infrared camera assembly 330 is further shown in Figure 6.
Like its visible light counterpart, the infrared camera assembly 330 comprises a tubular housing 331 with front 332 and back openings, an infrared camera sensor 333 and circuit stored inside the housing 331, a plurality of fastening components 334 to secure the infrared camera sensor 333 and circuit within the housing, front 335 and back end caps 336 to respectively shield the front 332 and back openings, and an array of infrared LEDs 339 positioned immediately behind the front end cap 335 to irradiate infrared radiation towards a target in conditions lacking visible light.
As described in the foregoing, the first driving mechanism 170 comprises the first optical encoder 174 and the second driving mechanism 190 comprises the second optical encoder 194 to facilitate tracing and computation of the absolute position of the camera assembly 310/330 within the defined horizontal and vertical movement planes. Preferably, the minimal movable angular distance of the camera assembly 310/330 at the horizontal and vertical planes respectively is dictated or governed by the available counts on the optical discs. More preferably, the first optical disc is of N1 cycles/revolution and the second optical disc has N2 cycles/revolution. The number of cycles per revolution of the optical disc can be regarded as the resolution of the optical disc. Higher cycles/revolution generally results in finer attainable angular movement of the camera assembly 310/330. Through the employment of optical discs of higher count or resolution in both the first 174 and second optical encoders 194, the present disclosure allows finer and swifter tracking of a moving object shown on the screen of the display unit, either manually or by preset algorithm. More preferably, N1 and/or N2 is 96 to 30000 or higher. With the aid of the correspondingly coupled optical readers, the disclosed system 100 is able to output or compute one or more signals which correspond to the angular position of the rotationally moved platform 130 at the first plane and of the camera assembly 310/330 at the second plane, for the controller unit 150 to manoeuver the field of view of the camera using the signals.
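As a worked check of the quoted resolution range, assuming counts are evenly spread over a full revolution:

```python
# Worked check of the quoted resolution range, assuming even count spacing.
for cpr in (96, 9600, 30000):
    print(cpr, "cycles/rev ->", round(360.0 / cpr, 4), "degrees per count")
# 96 -> 3.75; 9600 -> 0.0375; 30000 -> 0.012 degrees per count
```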
Pursuant to a number of preferred embodiments, the disclosed system 100 is further incorporated with a coordinate map 400 having N1 × N2 coordinate points 404. More specifically, each subsequent count further away from the absolute reference point or count, N1X0, of the first optical disc is respectively denoted N1X1, N1X2, N1X3 and so on up to N1Xn, where n equals the total counts of the first optical disc. The angular distance between a count and the immediately adjacent count on the first optical disc is equally divided and predetermined. For example, the angular distances at azimuth from N1X1 to N1X2 and from N1X2 to N1X3 are equal. The present disclosure uses the angular count, or the rotatable panning angle, of the first optical encoder 174 as a means to compute the azimuth coordinate point, or X-coordinate point, along the azimuth path of the camera assembly 310. With the count read from the first optical encoder 174, the controller unit 150 can compute or calculate the position, more precisely the angular position, of the camera assembly 310 with respect to a predetermined reference point. Similarly, each subsequent count further away from the absolute reference point or count, N2Y0, of the second optical disc is denoted N2Y1, N2Y2, N2Y3 and so on up to N2Yn, where n equals the total counts of the second optical disc. The angular distance between a count and the immediately adjacent count on the second optical disc is equally divided and predetermined, preferably according to the cycles/revolution of the second optical disc. For example, the angular distances at altitude from N2Y1 to N2Y2 and from N2Y2 to N2Y3 are the same. The angular count of the second optical encoder 194 is preferably utilized to generate the vertical coordinate point, or Y-coordinate point, along the rotatable altitude path of the camera assembly 310. With the counts read from the optical encoders 174/194, the controller unit 150 can compute or calculate the position, more precisely the angular position, of the camera assembly 310 with respect to a predetermined reference point along the altitude. Combining or plotting each count of the first optical disc against each count of the second optical disc, e.g. (N1X2, N2Y1), leads to the generation of a substantially spherical coordinate system for the camera assembly 310/330 and/or the field of view of the camera assembly 310/330. The coordinate system contains a plurality of coordinate points 404, which are preferably computed based upon the cycles/revolution of the first and second optical discs or encoders, such that each coordinate point 404 corresponds to discrete joined counts composed of one count of the first optical disc or encoder 174 and one count of the second optical disc or encoder 194, as explained above. Each coordinate point 404 represents a unique or discrete location on the substantially spherical coordinate system constructed from the combined angular counts of the first 174 and second optical encoders 194. It is important to note herein that some of the counts on the optical disc or optical encoder may not be used owing to physical restraints of the environment where the camera assembly 310 is installed. For instance, the camera assembly 310 may only ascend or descend through around 120 degrees in total of elevation angle along the altitude when the camera assembly 310 is positioned adjacent to the ceiling of a room.
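The note on unused counts may be illustrated as follows, assuming, purely for the sketch, a tilt travel window of 120 degrees below the horizontal for a ceiling-mounted camera; the window bounds are not taken from the disclosure.

```python
# Illustrative filter for usable tilt counts under a physical restraint;
# the -120..0 degree window is an assumption for the sketch.
def usable_tilt_counts(n2: int, lo_deg: float = -120.0, hi_deg: float = 0.0):
    step = 360.0 / n2
    def norm(angle):                  # fold an angle into [-180, 180)
        return ((angle + 180.0) % 360.0) - 180.0
    return [j for j in range(n2) if lo_deg <= norm(j * step) <= hi_deg]

print(len(usable_tilt_counts(9600)))  # 3201 of 9600 counts remain usable
```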
In accordance with more preferred embodiments, the disclosed system 100 overlays, maps or indexes the scene captured by the camera assembly 310/330 with the corresponding part of the coordinate points 404 of the coordinate map 400. More specifically, the scene or field of view presented on the screen at any one point in time only reveals part of the total environment monitored or visually accessible by the camera assembly 310/330. By referring to the counts on the first 174 and second optical encoders 194, the controller unit 150 can determine the coordinate points 404 of the coordinate map 400 relating to the scene actively shown, such that the disclosed system 100 can overlay, map or index the scene using the computed coordinate points 404. In addition, the presence of a touch- or tactile-enabled screen in the present disclosure permits receipt of a user touch on the screen to toggle an input or feature thereof, preferably in relation to the mapped coordinate points 404. From the software or user-interface aspect of some embodiments, the disclosed system 100 preferably maps or overlays the coordinate points orderly onto the scene shown on the screen according to a number of predetermined pixels, literally a group of closely located pixels, available in a defined area of the screen. More particularly, the disclosed system 100 links or associates a group of adjacently placed, located or positioned pixels on the screen to a coordinate point 404 of the coordinate map 400. A touch, digitally registered by the controller unit 150, towards one or more pixels of a given group linked to a coordinate point 404 will be recognized by the disclosed system 100 as a user input to activate an event at the linked coordinate point 404; the recognized input may lead to an intended action directed to or at a location, in the three-dimensional environment, corresponding to or associated with the recognized coordinate point 404. Each coordinate point 404 overlaid on the screen is associated with a discrete group of pixels in a similar manner. By capitalizing on the high pixel counts available on the relatively high-definition display units developed in recent years, the present disclosure is able to employ high-CPR or high-resolution optical encoders 174/194 in deriving the plurality of coordinate points 404 and packing more derived coordinate points 404 into a displayed scene at any given time, rendering tracking of a target or performing of a planned action in a much more refined fashion. The association of high counts in the optical encoders 174/194 with a high-definition display unit 230, resulting in improved tracking and targeting, would have been almost infeasible in the absence of any of these elements a decade ago. Moreover, the scene or a tracked object displayed on the screen of the display unit 230 in some of the embodiments can be zoomed in or out, either by way of optical or digital magnification. More preferably, the coordinate points 404 and the linked pixel groups can adapt dynamically to the zoom or magnification level of the scene or object monitored. The disclosed system 100, by way of software manipulation, reduces the number of coordinate points 404 allotted to the screen and associates more pixels to a coordinate point 404 when the scene or target object is zoomed in or magnified on screen.
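A minimal sketch of the pixel-group association described above, assuming a rectangular screen tiled into equal blocks of adjacent pixels; the block dimensions and the function name are assumptions of the sketch.

```python
def point_for_touch(x: int, y: int, block_w: int, block_h: int,
                    points_per_row: int) -> int:
    """Map a touched pixel (x, y) to the index of the coordinate point 404
    whose pixel group (a block_w x block_h block of pixels) contains it."""
    return (y // block_h) * points_per_row + (x // block_w)

# A touch at pixel (250, 130), with 100 x 100 pixel groups and 19 points
# per screen row, lands in the group of point index 1 * 19 + 2 = 21.
print(point_for_touch(250, 130, 100, 100, 19))  # 21
```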
For instance, the number of coordinate points 404 is 40, with 100 pixels associated to each coordinate point, in the zoom-free condition, but the number of coordinate points 404 will be reduced to 20, with 200 pixels associated to each remaining coordinate point 404, when the target or scene is subjected to 2× magnification. Conversely, the disclosed system 100 packs more coordinate points into an overlaid scene, with fewer pixels associated to each coordinate point 404 compared to the zoom-free condition, upon activation of a zoom-out function. Further embodiments of the disclosed system may include an algorithm to compare the pixels digitally touched by the user when a registered tactile input lands on two adjacent groups of pixels belonging to two coordinate points 404 on the screen; the disclosed system preferably registers the coordinate point with the greater number of touched pixels as the valid input.
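The 2× example above, and the tie-break between two touched pixel groups, can be sketched as follows; the linear scaling rule is inferred from the single worked example given and is an assumption beyond it.

```python
def points_and_pixels(base_points: int, base_pixels: int, zoom: float):
    """Coordinate points shown and pixels linked to each point at a given
    zoom factor, scaling linearly as in the 2x example above."""
    return max(1, int(base_points / zoom)), int(base_pixels * zoom)

print(points_and_pixels(40, 100, 2.0))  # (20, 200): matches the 2x example
print(points_and_pixels(40, 100, 0.5))  # (80, 50): zoom-out packs more points

def resolve_touch(touched_pixels_per_point: dict):
    """Tie-break: register the point whose group received more touched pixels."""
    return max(touched_pixels_per_point, key=touched_pixels_per_point.get)

print(resolve_touch({21: 34, 22: 12}))  # point 21 wins the tie-break
```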
More preferred embodiments of the present disclosure may further take the focal length of the camera assembly 310 into consideration when mapping or indexing the scene for zooming in or out. Particularly, the base points of the first optical encoder 174, a horizontal pan (X-axis) optical disc encoder, and the second optical encoder 194, a vertical tilt (Y-axis) optical disc encoder, are calibrated continuously such that the entire panoramic area of the combined coverage of the pan and tilt operations, over 360 degrees of the X-axis and 360 degrees of the Y-axis, is indexed or computed to generate coordinate points 404. The disclosed system 100 uses the generated indices (I), or coordinate points 404, for subsequent mapping with the field of view (FOV) and/or pixels of the camera sensor at every focal length (f), and with the field of view and/or pixels on the visual display unit 230. The selected cycles per revolution of the optical disc encoders 174/194 needs to meet the distance and accuracy requirements of the intended application: the longer the distance to the target and the smaller the size of the target, the higher the cycles per revolution required of the optical disc encoders 174/194. The density of indices or coordinate points 404 is reduced proportionally with higher focal length (f), and vice versa. The reduction or increment of focal length determines the number of screen pixels defining an index and the gap distance between immediate indices. At higher focal length, the number of pixels on the screen defining, or linked to, an index increases and targeting a smaller object becomes harder, with a greater gap existing between indices. More importantly, the field of view and pixels of the camera assembly 310 are calibrated with the field of view and/or pixels of the display unit 230 to ensure that the available pixels on the display unit 230 can be correctly mapped using the generated indices and/or coordinate points.
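Assuming a simple pinhole camera model, which the disclosure does not prescribe, the interplay of focal length, field of view and index density can be estimated as follows; the sensor width, focal lengths and CPR below are illustrative values only.

```python
import math

def indices_in_view(sensor_width_mm: float, focal_mm: float,
                    cpr: int, screen_px: int):
    """Estimate how many encoder-derived indices span the horizontal field
    of view, and how many screen pixels each index is then linked to."""
    fov_deg = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_mm)))
    gap_deg = 360.0 / cpr                  # angular gap between indices
    n_indices = max(1, int(fov_deg / gap_deg))
    return n_indices, screen_px // n_indices

print(indices_in_view(6.4, 8.0, 9600, 1920))   # short focal length: many indices
print(indices_in_view(6.4, 50.0, 9600, 1920))  # long focal length: fewer indices,
                                               # more pixels linked to each one
```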
For several preferred embodiments, the present disclosure further comprises a tool driven or managed by the controller unit to perform an action towards a location in the surveyed environment. The location substantially corresponds to a coordinate point 404 overlaid on the screen. More specifically, the controller unit 150 overlays or indexes the scene presented on the visual display unit 230 with computed coordinate points 404 from part of the coordinate map 400, as illustrated in Figure 7. The display unit 230 is configured to receive an input, preferably a touch input, on one of the coordinate points 404 indexed to the shown scene and relays the input to the controller unit 150 to drive the tool to perform an action towards the corresponding location or spot in the surveyed environment. The tool (not shown) is preferably capable of launching a substantially long-range action towards a spot or location in the surveyed environment associated with the coordinate point 404 selected from the screen based on the user input. The tool can be a light source, a sound source, or even a projectile weapon to deter or stop unlawful intrusion. The tool, in some embodiments, is a spotlight; it may be movable to beam light on a given location or spot, according to a user input, around an intruder as a sign of warning.
As set forth earlier, the visual display unit 230 is tactile-sensitive and capable of generating the input by sensing a touch about the coordinate points 404 overlaid onto the presented scene. Depending on the configuration of the given embodiments, the grid or coordinate map 400 of the coordinate points 404 may or may not be shown on the screen of the display unit. Figure 7 illustrates one embodiment in which the grid 400 is mapped and shown on the visual display unit 230. For embodiments in which the grid 400 is not visibly available, the controller unit 150 is fashioned to take the coordinate point 404 closest to the tactile input sensed or detected on the screen as the point of input and direct an action accordingly. It is crucial to note that the scene displayed on the screen, mapped or overlaid, visibly or invisibly, by the grid 400 of coordinate points 404, is a planar or two-dimensional view, while the coordinate points 404 are actually computed or generated based upon the angular position of the camera assembly 310/330 within a three-dimensional environment. More importantly, the scene or object shown on the display unit is a two-dimensional representation of a spatial arrangement of various elements in a three-dimensional environment. The accuracy or precision of such a representation degrades exponentially as the subject is located further away from the camera assembly 310/330. Therefore, one skilled in the art shall appreciate that calibration work, taking different environmental factors into account, has to be conducted to improve the accuracy of such a representation displayed on the screen, to the extent that the planned action can be directed to a specific location in the three-dimensional environment corresponding to a coordinate point 404 selected on the screen. Nonetheless, the intended action can be carried out with acceptable accuracy without the need for calibration work, especially when the area monitored by the disclosed system 100 is a relatively small enclosed environment and/or the location or subject targeted by the disclosed system 100 is in close proximity to the camera assembly 310/330 or the driven tool. Alternatively, the disclosed system 100, in some other embodiments, may only have several locations of interest in the surveyed scene calibrated to the corresponding coordinate locations on screen. One or more embodiments of the present disclosure may be enabled to perform the intended action free from any manual input or human intervention, relying upon algorithms or programmed instructions used in running the disclosed system. The intended action will then be conducted in an automated fashion, though it can be overridden by manual input if needed.
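The closest-point fallback for embodiments without a visible grid 400 may be sketched as follows; the on-screen positions of the overlaid points are assumed known from the overlay step, and all names are illustrative.

```python
def closest_point(touch_xy, point_positions):
    """Return the coordinate point whose on-screen position is nearest to
    the sensed touch (squared Euclidean distance on the screen plane)."""
    tx, ty = touch_xy
    return min(point_positions,
               key=lambda p: (point_positions[p][0] - tx) ** 2 +
                             (point_positions[p][1] - ty) ** 2)

# Two overlaid points and a touch sensed between them:
points = {(10, 5): (400, 300), (11, 5): (500, 300)}
print(closest_point((430, 310), points))  # (10, 5)
```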
To improve accuracy in positioning a target or a location adjacent to the subject, the disclosed system 100 may be incorporated with at least one distance measuring sensor to detect the distance of the target or the location from the tool and relay the detected distance to the controller unit to calculate the ideal trajectory path of the light, sound or projectile from the tool, as sketched after this paragraph. Preferably, the distance measuring sensors are infrared- or laser-based. Multiple sensors, including video camera sensors, and/or intended-action devices can be calibrated to handle multiple-axis distances and directions of the target object in several preferred embodiments of the present disclosure. In another aspect, the present disclosure also provides a visual surveillance and target tracking system 100 comprising a platform 130 mountable to a surface; a first driving mechanism 170 coupled to the platform 130 to rotationally move the platform 130 at a first plane in relation to the base 110; a camera assembly 310/330 attached to the platform, the camera assembly 310 having a field of view for constantly capturing visual information of a scene of an environment relating to a target; a second driving mechanism 190 engaged to the camera assembly 310 to rotatably move the camera assembly 310 at a second plane perpendicular to the first plane; a controller unit 150 electrically communicating with the first 170 and the second driving mechanisms 190 to regulate and/or control rotational movement of the platform 130 and camera assembly 310 to manoeuver the field of view of the camera assembly 310, the controller unit 150 receiving captured visual information from the camera assembly 310; and a visual display unit 230 remotely communicating with the controller unit 150 to real-time stream the captured visual information and visually present the scene relating to the target. Preferably, the first driving mechanism 170 comprises a first optical encoder 174 of N1 cycles/revolution and the second driving mechanism 190 comprises a second optical encoder 194 of N2 cycles/revolution, such that the first 174 and the second optical encoders 194 are able to output one or more signals respectively corresponding to the angular position of the rotationally moved platform 130 at the first plane and of the camera assembly 310 at the second plane, for the controller unit 150 to manoeuver the field of view of the camera assembly 310 using the signals. Further, the scene presented on the display unit 230 is overlaid with a plurality of coordinate points 404 computed from the cycles/revolution of the first 174 and the second optical encoders 194, such that each coordinate point 404 corresponds to a discrete point where counts of the first 174 and the second optical encoders 194 intersect one another.
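Combining the measured range with the pan and tilt angles locates the target in three dimensions, which the controller unit could then use in its trajectory calculation; the coordinate convention below is an assumption of the sketch, not specified by the disclosure.

```python
import math

def target_position(range_m: float, az_deg: float, el_deg: float):
    """Place the target in Cartesian coordinates from the measured range
    and the pan/tilt angles (x east, y north, z up; an assumed convention)."""
    az, el = math.radians(az_deg), math.radians(el_deg)
    return (range_m * math.cos(el) * math.cos(az),
            range_m * math.cos(el) * math.sin(az),
            range_m * math.sin(el))

print(target_position(25.0, 90.0, 10.0))  # a target 25 m away, 10 degrees up
```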
It is to be understood that the present invention may be embodied in other specific forms and is not limited to the sole embodiment described above. Modifications and equivalents of the disclosed concepts, such as those which may readily occur to one skilled in the art, are intended to be included within the scope of the claims appended hereto.

Claims
1. A visual surveillance system 100 comprising:
a base 110 mountable to a surface;
a platform 130 extending away from the base 110;
a first driving mechanism 170 being coupled to the platform 130 to rotationally move the platform 130 at a first plane in relation to the base 110;
a camera assembly 310 attached to the platform 130, the camera assembly 310 having a field of view for constantly capturing visual information of a scene of an environment relating to a target;
a second driving mechanism 190 being engaged to the camera assembly 310 to rotatably move the camera assembly 310 at a second plane perpendicular to the first plane; and
a controller unit 150 electrically communicating with the first 170 and the second driving mechanisms 190 to regulate and/or control rotational movement of the platform 130 and camera assembly 310 to manoeuver the field of view of the camera assembly 310, the controller unit 150 receiving captured visual information from the camera assembly 310;
wherein the first driving mechanism 170 comprises a first optical encoder 174 of N1 cycles/revolution and the second driving mechanism 190 comprises a second optical encoder 194 of N2 cycles/revolution that the first 174 and the second optical encoders 194 are able to output one or more signals respectively corresponding to angular position of the rotationally moved platform 130 at the first plane and the camera assembly 310 at the second plane for the controller unit 150 to manoeuver the field of view of the camera using the signal.
2. The system 100 of claim 1, further comprising a coordinate map 400 having N1 × N2 coordinate points 404 computed from cycles/revolution of the first 174 and second optical encoders 194 that each coordinate point 404 corresponds to discrete joined counts combining one count of the first optical encoder 174 and one count of the second optical encoder 194, wherein the scene captured by the camera assembly 310 is indexed or overlaid with coordinate points 404 from part of the coordinate map 400.
3. The system 100 of claim 2, further comprising a tool driven by the controller unit 150 to perform an action towards a location of the environment and the location substantially corresponds to a coordinate point 404.
4. The system 100 of claim 1 or 3 further comprising visual display unit 230 remotely communicating to the controller unit 150 to real-time stream the captured visual information and visually present the scene relating to the target.
5. The system 100 of claim 4, wherein the controller unit 150 overlays or indexes the scene presented on the visual display unit 230 with the coordinate points 404 and is configured to receive an input relating to one of coordinate points 404 associated to the scene presented from a user to drive the tool to perform the action towards the corresponding location or adjacent to the corresponding location.
6. The system 100 of claim 5, wherein the visual display unit 230 is tactile-sensitive and capable of generating the input by way of sensing a touch towards the indexed coordinate points 404 overlaid to the presented scene.
7. The system 100 of claim 3, further comprising a distance measuring sensor to detect distance of the target from the tool and relay the detected distance to the controller unit 150.
8. The system 100 of claim 1, wherein the first plane is a horizontal plane and the second plane is a vertical plane.
9. The system 100 of claim 1, wherein N1 and/or N2 is 96 to 30000.
10. A visual surveillance system 100 comprising:
a base 110 mountable to a surface;
a platform 130 extending away from the base;
a first driving mechanism 170 being coupled to the platform 130 to rotationally move the platform 130 at a first plane in relation to the base 110;
a camera assembly 310 attached to the platform 130, the camera assembly 310 having a field of view for constantly capturing visual information of a scene of an environment relating to a target;
a second driving mechanism 190 being engaged to the camera assembly 310 to rotatably move the camera assembly 310 at a second plane perpendicular to the first plane;
a controller unit 150 electrically communicating with the first 170 and the second driving mechanisms 190 to regulate and/or control rotational movement of the platform 130 and camera assembly 310 to manoeuver the field of view of the camera assembly 310, the controller unit 150 receiving captured visual information from the camera assembly 310; and
a visual display unit 230 remotely communicating to the controller unit 150 to real-time stream the captured visual information and visually present the scene relating to the target;
wherein the first driving mechanism 170 comprises a first optical encoder 174 of N1 cycles/revolution and the second driving mechanism 190 comprises a second optical encoder 194 of N2 cycles/revolution that the first 174 and the second optical encoders 194 are able to output one or more signals respectively corresponding to angular position of the rotationally moved platform 130 at the first plane and the camera assembly 310 at the second plane for the controller unit 150 to manoeuver the field of view of the camera assembly using the signal;
wherein the scene presented on the display unit 230 is overlaid with a plurality of coordinate points 404 computed from cycles/revolution of the first and second optical encoders that each coordinate point corresponds to a discrete point where counts of the first and second optical encoders intersect one another.
PCT/MY2015/050141 2015-11-18 2015-11-18 A visual surveillance system with target tracking or positioning capability WO2017086771A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/MY2015/050141 WO2017086771A1 (en) 2015-11-18 2015-11-18 A visual surveillance system with target tracking or positioning capability


Publications (1)

Publication Number Publication Date
WO2017086771A1 (en)

Family

ID=58717541

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2015/050141 WO2017086771A1 (en) 2015-11-18 2015-11-18 A visual surveillance system with target tracking or positioning capability

Country Status (1)

Country Link
WO (1) WO2017086771A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109753036A (en) * 2018-12-27 2019-05-14 四川艾格瑞特模具科技股份有限公司 A kind of precision machinery processing Schedule tracking method
CN110675589A (en) * 2019-10-14 2020-01-10 新昌县管富机械有限公司 Forest fire prevention wireless alarm device
US20210239447A1 (en) * 2018-05-01 2021-08-05 Red Tuna Optical vehicle diagnostic system
CN115234808A (en) * 2022-05-20 2022-10-25 温州医科大学 Micro-expression recording device for psychological survey and writing in psychological consultation


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7456847B2 (en) * 2004-08-12 2008-11-25 Russell Steven Krajec Video with map overlay
US20080181600A1 (en) * 2005-04-04 2008-07-31 Francois Martos Photographing Device in Particular For Video Surveillance and Working Methods of Same
US20080084473A1 (en) * 2006-10-06 2008-04-10 John Frederick Romanowich Methods and apparatus related to improved surveillance using a smart camera
US8882369B1 (en) * 2013-05-02 2014-11-11 Rosemount Aerospace Inc. Integrated gimbal assembly



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15908883; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 30/10/2018))
122 Ep: pct application non-entry in european phase (Ref document number: 15908883; Country of ref document: EP; Kind code of ref document: A1)