WO2021086884A1 - System, apparatus and method of provisioning allotments utilizing machine visioning - Google Patents

System, apparatus and method of provisioning allotments utilizing machine visioning Download PDF

Info

Publication number
WO2021086884A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
parking
data
camera
mode
Prior art date
Application number
PCT/US2020/057614
Other languages
French (fr)
Inventor
Charles Thomas LACEY, JR.
Naveenchandra Pooranchandra JOSHI
Original Assignee
Flownetworx, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Flownetworx, Inc.
Publication of WO2021086884A1

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/14Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G08G1/141Traffic control systems for road vehicles indicating individual free spaces in parking areas with means giving the indication of available parking spaces
    • G08G1/144Traffic control systems for road vehicles indicating individual free spaces in parking areas with means giving the indication of available parking spaces on portable or mobile units, e.g. personal digital assistant [PDA]

Definitions

  • a mobile device can be configured to receive information from local sensor nodes, such as parking sensor nodes, in the vicinity of the mobile device.
  • the mobile device located in a moving vehicle can be configured to locate available parking based upon the information received from the parking sensor nodes.
  • US 10198949 discloses a method for distributing parking availability data via blockchain, which includes: storing a blockchain comprised of a plurality of blocks, each block having a block header including a timestamp; receiving spot availability notifications including a common spot identifier and availability data; generating a transaction value including the common spot identifier and availability data; generating a new block header including i) a current timestamp, ii) a reference hash value generated via hashing of the block header included in a most recent block identified via the timestamp, and iii) a transaction hash value generated via hashing of the new transaction value; generating a new block comprised of the new block header and the new transaction value; and transmitting the generated new block.
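The block-generation steps recited above can be sketched as follows; the field names, the JSON serialization, and the choice of SHA-256 are illustrative assumptions, since US 10198949 specifies only the general structure (timestamp, reference hash of the most recent header, and a hash of the transaction value):

```python
import hashlib
import json
import time

def make_block(prev_header: dict, spot_id: str, available: bool) -> dict:
    """Assemble a new block carrying one spot-availability transaction.

    Hypothetical sketch: the cited patent does not prescribe this
    serialization or these field names.
    """
    tx = {"spot_id": spot_id, "available": available}
    tx_bytes = json.dumps(tx, sort_keys=True).encode()
    prev_bytes = json.dumps(prev_header, sort_keys=True).encode()
    header = {
        "timestamp": int(time.time()),                          # i) current timestamp
        "reference_hash": hashlib.sha256(prev_bytes).hexdigest(),  # ii) hash of most recent header
        "transaction_hash": hashlib.sha256(tx_bytes).hexdigest(),  # iii) hash of the transaction value
    }
    return {"header": header, "transaction": tx}

# A placeholder genesis header stands in for "the most recent block".
genesis = {"timestamp": 0, "reference_hash": "0" * 64, "transaction_hash": "0" * 64}
block = make_block(genesis, spot_id="lot-A/space-12", available=True)
```

The new block would then be transmitted to other nodes; transport is out of scope here.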
  • US 10255808 discusses a computer-implemented method that includes: receiving, by a computing device, images of adjacent vehicles parked directly adjacent to an open parking space; determining, by the computing device, visual factors and non-visual factors of the adjacent vehicles based on the images; determining, by the computing device, risk scores for each of the adjacent vehicles based on the visual factors and the non-visual factors; determining, by the computing device, a parking position within the open parking space; and outputting, by the computing device, information regarding the parking position.
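US 10255808 does not publish a concrete scoring formula, so the following is only a toy illustration of the idea; the factor names (`door_type`, `condition`, `accident_count`) and all weights are invented:

```python
def risk_score(visual: dict, nonvisual: dict) -> float:
    """Toy risk score for a vehicle adjacent to an open space.

    Hypothetical factors and weights; not the patented formula.
    """
    score = 0.0
    score += 2.0 if visual.get("door_type") == "non-sliding" else 0.0  # door may swing into the space
    score += 1.5 if visual.get("condition") == "poor" else 0.0
    score += 1.0 * nonvisual.get("accident_count", 0)
    return score

def parking_position(left_risk: float, right_risk: float, width: float) -> float:
    """Bias the vehicle centreline away from the riskier neighbour.

    Returns a lateral offset (from the left boundary) within the open space.
    """
    total = left_risk + right_risk
    if total == 0:
        return width / 2
    # shift toward the lower-risk side, proportionally to the imbalance
    return width / 2 + (left_risk - right_risk) / total * (width / 4)
```

For example, a riskier left neighbour yields an offset greater than half the space width, i.e. the vehicle parks farther from the left boundary.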
  • US 2017/0017848 discloses a parking assist system of a vehicle includes a camera that, when disposed at the vehicle, has a field of view exterior of the vehicle.
  • An image processor is operable to process image data captured by the camera to detect parking space markers indicative of a parking space and to identify empty or available parking spaces.
  • the image processor includes a parking space detection algorithm that detects parking space markers by (i) extracting low level features from captured image data, (ii) classifying pixels as being part of a parking space line or not part of a parking space line, (iii) performing spatial line fitting to find lines in the captured images and to apply parking space geometry constraints, and (iv) detecting and selecting rectangles in the captured images.
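A minimal stand-in for steps (ii) and (iii) of that pipeline might look like the following; a fixed brightness threshold replaces learned low-level features, and a simple least-squares fit replaces full spatial line fitting with parking space geometry constraints:

```python
def classify_line_pixels(gray, threshold=200):
    """Step (ii) stand-in: label bright pixels as candidate parking-line
    pixels. `gray` is a row-major grid of 0-255 intensities; a real
    system would classify on extracted low-level features instead of a
    fixed threshold."""
    return [(x, y) for y, row in enumerate(gray)
                   for x, v in enumerate(row) if v >= threshold]

def fit_line(points):
    """Step (iii) stand-in: least-squares fit y = a*x + b through the
    candidate pixels (assumes a non-vertical line)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Synthetic 5x5 frame with one bright diagonal stripe.
frame = [[255 if x == y else 0 for x in range(5)] for y in range(5)]
line = fit_line(classify_line_pixels(frame))
```

Steps (i) and (iv) — feature extraction and rectangle selection — would wrap around these two functions in a full implementation.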
  • US 8139115 discloses a computer implemented method, apparatus, and computer usable program code for tracking vehicles in a parking facility using optics.
  • the process receives a series of two-dimensional images of a vehicle in a parking facility from a camera.
  • the process generates an object representing the vehicle based on the series of two-dimensional images.
  • the object includes a set of parameters defining an outer edge frame for the vehicle.
  • the process determines a location of the vehicle in the parking garage based on the outer edge frame and positional pixel data for the parking facility.
  • US 8139115 recites that "During calibration, a test vehicle is driven around a predetermined course in the given parking area. The test vehicle drives into selected parking bays in a precise order.
  • the parking bays selected are usually the first parking bay and the last parking bay in a bank or row of parking bays.
  • the camera will follow the vehicle and notice when the vehicle stops in a parking bay. In other words, the camera records a set of camera images of the test vehicle as it drives on access road ways and pulls into one or more pre-selected parking bays. This allows the process controller to calculate the location of each parking bay, the scale of each parking bay in the pixel image, and the orientation of a vehicle in a parking bay.
  • the process controller will also calculate positional pixel data for each parking bay and each section of the access road.
  • Positional pixel data is data for associating a pixel with a real world location in the parking facility.
  • Positional pixel data includes an assignment of each pixel in the camera with a real world location, such as a parking bay or a section of the access road.”
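The calibration described in US 8139115 can be caricatured as follows; the two-fix linear map, the coordinates, and the assumption of a roughly fronto-parallel overhead camera are all simplifications for illustration (a deployed system would fit a full homography):

```python
def calibrate(p0, w0, p1, w1):
    """Build a per-axis linear map from pixel coordinates to real-world
    metres, using two calibration fixes -- e.g. the test vehicle parked
    in the first and the last bay of a row. All coordinates below are
    invented for illustration."""
    def axis_map(a0, b0, a1, b1):
        scale = (b1 - b0) / (a1 - a0)
        return lambda a: b0 + (a - a0) * scale
    fx = axis_map(p0[0], w0[0], p1[0], w1[0])
    fy = axis_map(p0[1], w0[1], p1[1], w1[1])
    return lambda p: (fx(p[0]), fy(p[1]))

# First bay seen at pixel (100, 50) = world (0 m, 0 m);
# last bay seen at pixel (700, 450) = world (30 m, 20 m).
pixel_to_world = calibrate((100, 50), (0.0, 0.0), (700, 450), (30.0, 20.0))
```

Once built, the map answers the question the quoted passage poses: which real-world location a given pixel corresponds to.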
  • US 2017/0124874 discusses a system for determining an available parking space.
  • a computer accesses a streaming video.
  • the computer identifies a vehicle and a corresponding identification characteristic within the accessed streaming video.
  • the computer retrieves historical data associated with the identified identification characteristic, wherein the historical data includes previous parking locations.
  • the computer determines a preferred parking space within the retrieved historical data associated with the identified identification characteristic.
  • the computer determines whether the identified preferred parking space is available based on a parking database.
  • US 2015/0086071 discloses a system and method for determining parking occupancy by constructing a parking area model based on a parking area, receiving image frames from at least one video camera, selecting at least one region of interest from the image frames, performing vehicle detection on the region(s) of interest, determining that there is a change in parking status for a parking space model associated with the region of interest, and updating parking status information for a parking space associated with the parking space model.
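The status-update step of such a system reduces to comparing fresh per-space detections against the stored parking area model; a minimal sketch, with invented space identifiers:

```python
def update_statuses(model, detections):
    """Compare fresh vehicle detections against the stored parking-area
    model and report spaces whose status changed.

    `model` maps space id -> "occupied"/"empty" (mutated in place);
    `detections` maps space id -> bool (vehicle detected in that
    region of interest). Returns only the changed entries.
    """
    changes = {}
    for space, occupied in detections.items():
        new = "occupied" if occupied else "empty"
        if model.get(space) != new:
            model[space] = new
            changes[space] = new
    return changes
```

In a full pipeline, `detections` would come from running vehicle detection on each region of interest of each incoming frame.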
  • the present invention relates to a system for provisioning allotments, utilizing machine visioning, the machine visioning including one or more mounted cameras above one or more available allotments.
  • the allotments are available parking spaces for a vehicle, as may be used to carry persons and/or goods on streets and other roadways.
  • the present invention relates to an apparatus including component parts thereof for provisioning allotments utilizing machine visioning, the machine visioning including one or more mounted cameras (preferably two or more mounted cameras) in the proximity of one or more available allotments.
  • the allotments are available parking spaces for one or more vehicles.
  • the present invention relates to a method for monitoring allotments utilizing machine visioning, the machine visioning including one or more mounted cameras in the proximity of one or more available allotments.
  • the allotments are available parking spaces for a vehicle.
  • in a yet further aspect of the invention there is provided an improved machine visioning process useful in establishing relative positions of allotments and their allotment status via the use of at least one, preferably two or more cameras which have overlapping fields of vision, which process provides for the dynamic collection of information parameters relating to one or more static vehicles and/or one or more moving vehicles, and their respective position to the one, two or more said cameras, as well as relative to allotments within the field of vision.
  • a still further aspect of the invention is a method for recording utilization of allotments, the method utilizing the improved machine visioning process useful in establishing relative positions of allotments and their allotment status via the use of at least one, preferably two or more cameras which have overlapping fields of vision, which process provides for the dynamic collection of information parameters relating to one or more static vehicles and/or one or more moving vehicles, and their respective position to the at least one, two or more said cameras, as well as to allotments within the field of vision of the one or more cameras.
  • a yet further aspect of the invention is a method of providing output information based on a method for recording utilization of allotments, the method utilizing the improved machine visioning process useful in establishing relative positions of allotments and their allotment status via the use of at least one, but preferably two or more cameras which have overlapping fields of vision, which process provides for the dynamic collection of information parameters relating to one or more static vehicles and/or one or more moving vehicles, and their respective position to the one or more said cameras, as well as to allotments within the field of vision.
  • a still further aspect of the invention is a method for collection and use of information parameters relating to specific vehicles utilizing an allotment, the method utilizing the improved machine visioning process useful in establishing relative positions of allotments and their allotment status via the use of at least one, preferably two or more cameras which have overlapping fields of vision, which process provides for the dynamic collection of information parameters relating to one or more static vehicles and/or one or more moving vehicles within the field of vision of the at least one, two or more cameras, which information parameters may be stored, and optionally communicated and interchanged with further systems external of the apparatus for provisioning allotments utilizing machine visioning.
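One way two cameras with overlapping fields of vision can establish a vehicle's position relative to the cameras is by intersecting bearing rays; the planar, two-camera setup below is a simplified sketch for illustration, not the method claimed in the application:

```python
import math

def triangulate(cam_a, bearing_a, cam_b, bearing_b):
    """Intersect two bearing rays (angles in radians, measured from the
    x-axis) cast from two camera positions with overlapping fields of
    vision, returning the estimated ground position of the target.

    Planar sketch; a deployed system would work in 3D with calibrated
    camera intrinsics and extrinsics.
    """
    ax, ay = cam_a
    bx, by = cam_b
    da = (math.cos(bearing_a), math.sin(bearing_a))
    db = (math.cos(bearing_b), math.sin(bearing_b))
    # Solve cam_a + t*da = cam_b + s*db for t (rays assumed non-parallel).
    denom = da[0] * db[1] - da[1] * db[0]
    t = ((bx - ax) * db[1] - (by - ay) * db[0]) / denom
    return (ax + t * da[0], ay + t * da[1])
```

With one camera at the origin sighting the vehicle at 45° and a second camera 10 m away sighting it at 135°, the rays intersect at (5, 5).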
  • a preferred aspect of the invention provides a system and apparatus operable to monitor one or more parking spaces and one or more vehicles wherein the one or more vehicles and one or more parking spaces are both within the field of view of at least one static, vertically mounted camera which provides video image data to a computer system/server and computer readable media or storage which is used in storing data derived from a video markup tool and from data derived from video image data received from the at least one camera, the computer system/server operable to execute program modules, to receive video image data from the at least one camera, to output a response subsequent to the execution of instructions of one or more program modules, and to communicate with one or more external devices; the system and apparatus comprising a video markup tool which establishes the positioning of one or more vehicles relative to one or more individual parking spaces separated and delineated by visible boundary markings in a parking lot and which generates a data map of the parking lot which is subsequently used by the computer system/server to determine the relative positioning of one or more vehicles present within the field of view of the at least one camera, and to determine the occurrence of an Event.
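The data map produced by such a video markup tool can be approximated as a set of labelled rectangles in image coordinates, against which a detected vehicle's bounding box is tested; the rectangle representation and the 0.5 overlap threshold are assumptions for illustration:

```python
def overlap_fraction(space, box):
    """Fraction of the vehicle bounding box that falls inside a parking
    space. Both are (x0, y0, x1, y1) rectangles in image coordinates."""
    ix0, iy0 = max(space[0], box[0]), max(space[1], box[1])
    ix1, iy1 = min(space[2], box[2]), min(space[3], box[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = (box[2] - box[0]) * (box[3] - box[1])
    return inter / area if area else 0.0

def locate_vehicle(data_map, box, threshold=0.5):
    """Return the id of the space the vehicle occupies, if any.

    `data_map` (space id -> rectangle) plays the role of the data map
    generated by the video markup tool.
    """
    for space_id, rect in data_map.items():
        if overlap_fraction(rect, box) >= threshold:
            return space_id
    return None
```

A vehicle box mostly inside a space resolves to that space; a box outside every marked rectangle resolves to no space, i.e. the vehicle is on an access way.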
  • a further aspect of the invention provides a process for initiating provision of a Service in response to an Event, the process comprising the steps of: operating a system and apparatus as described herein to determine the occurrence of an Event, and in response thereto to initiate provision of a Service selected from: Acquisition Mode, Tracking Mode, Parking Mode, Parked Mode, Unparking Mode, Observation Mode.
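The Event-to-Service dispatch can be modelled as a small state machine over the named modes; the transition table below is hypothetical, since the application names the modes but this particular mapping of events to transitions is invented for illustration:

```python
from enum import Enum

class Mode(Enum):
    ACQUISITION = "Acquisition Mode"
    TRACKING = "Tracking Mode"
    PARKING = "Parking Mode"
    PARKED = "Parked Mode"
    UNPARKING = "Unparking Mode"
    OBSERVATION = "Observation Mode"

# Hypothetical (event, current mode) -> next mode transition table.
TRANSITIONS = {
    ("vehicle_entered_view", Mode.OBSERVATION): Mode.ACQUISITION,
    ("vehicle_moving", Mode.ACQUISITION): Mode.TRACKING,
    ("vehicle_entering_space", Mode.TRACKING): Mode.PARKING,
    ("vehicle_stationary_in_space", Mode.PARKING): Mode.PARKED,
    ("vehicle_leaving_space", Mode.PARKED): Mode.UNPARKING,
    ("vehicle_left_view", Mode.UNPARKING): Mode.OBSERVATION,
}

def next_mode(event: str, current: Mode) -> Mode:
    """Initiate the Service for an Event; unrecognized events leave the
    current mode unchanged."""
    return TRANSITIONS.get((event, current), current)
```

Events without a matching transition are ignored, so spurious detections do not knock the system out of its current mode.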
  • an “allotment” is a physical location which may be used for temporary placement of an article, such as a vehicle.
  • Such an allotment is advantageously an area in which a vehicle may be temporarily placed, i.e., a parking space which may be bounded by marking lines on the generally planar surface of the area.
  • the vehicle may be any type of motorized or nonmotorized vehicle; with regard to the former, such specifically includes cars, trucks, and motorcycles, and with regard to the latter, nonmotorized vehicles may include detachable trailers, articles affixed upon detachable trailers, as well as movable articles which may be provided to the allotment utilizing a motorized vehicle. An allotment may be one or more parking spaces having defined boundaries in two dimensions, which are typically sized to receive a vehicle. The foregoing is however to be understood as a nonlimiting definition.
  • Allotment status is the state of whether the allotment is occupied by an article, i.e., a vehicle, or is empty.
  • the allotment status may be “available” or “empty” indicating that it may receive a vehicle, or may be “unavailable” or “occupied” and already having a vehicle present within the physical boundaries of the allotment, i.e., parking space.
  • “Information parameters” relating to a static vehicle or a moving vehicle are one or more datum which are collected by the cameras utilized in the improved machine visioning process of the invention. Such may include visual and nonvisual parameters. Visual parameters may include the vehicle's overall size, color, manufacturer, model, door size, door type (sliding or non-sliding), condition, degree of window tinting, relative distance to one or more of the monitoring cameras etc. Visual parameters may be data or datum, and may be ascertained utilizing the input of the one or more cameras.
  • Nonvisual parameters may be data or datum which are not visually ascertained utilizing the input of the one or more cameras, but are related to a specific vehicle identified by the apparatus and system of the invention; such may include a vehicle or driver accident history report, driver behavior and experience information, etc.
  • the present invention provides a system for provisioning allotments utilizing machine visioning, the machine visioning including one or more cameras mounted above the plane of the allotments, viz., a parking area (parking lot) having one or more available allotments, i.e., parking spots.
  • the allotments may be present in another location, other than a parking lot, such as a street having one or more defined allotments, i.e., parking spaces.
  • the allotments may be portions of a surface at any other location or of any other area as well, and is not limited to only parking spaces or parking spots within a bounded parking lot or street or roadway having parking spaces or parking spots.
  • the present invention may be a system of components, a method utilizing one or more of said components and/or a computer program product.
  • the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
  • the computer program product may interoperate with one or more devices of the system which provide machine visioning, i.e., one or more mounted cameras physically mounted in the proximity of one or more allotments in order to obtain data therefrom which are subsequently processed utilizing the computer program product.
  • the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
  • the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
  • a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
  • These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowcharts may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
  • Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service.
  • This cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
  • On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service provider.
  • Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).
  • Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
  • Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
  • Service models are as follows:
  • Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure.
  • the applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail).
  • the consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
  • Platform as a Service (PaaS)
  • the consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
  • Infrastructure as a Service (IaaS)
  • the consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
  • Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.
  • Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).
  • a cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability.
  • An infrastructure comprising a network of interconnected nodes.
  • Cloud computing node 10 is only one example of a suitable cloud computing node and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the invention described herein. Regardless, cloud computing node 10 is capable of being implemented and/or performing any of the functionality set forth hereinabove.
  • In cloud computing node 10 there is a computer system/server 12, which is operational with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server 12 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand- held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like.
  • Computer system/server 12 may be described in the general context of computer system executable instructions, such as program modules, being executed by a computer system.
  • program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types.
  • Computer system/server 12 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices. As shown in Fig. 1, computer system/server 12 in cloud computing node 10 is shown in the form of a general-purpose computing device.
  • the components of computer system/server 12 may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including system memory 28 to processor 16.
  • Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
  • bus architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus.
  • Computer system/server 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 12, and it includes both volatile and non-volatile media, removable and non-removable media.
  • System memory 28 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32.
  • Computer system/server 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media.
  • storage system 34 can be provided for reading from and writing to a nonremovable, non-volatile magnetic media (not shown and typically called a “hard drive”).
  • a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”).
  • an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media.
  • each can be connected to bus 18 by one or more data media interfaces.
  • memory 28 may include at least one program product having a set (at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
  • the storage system 34 may be used for a database for reading and/or writing data thereto; or alternatively such a database may be accessed via an external storage device or as a cloud service.
  • Program/utility 40 having a set (at least one) of program modules 42, may be stored in memory 28 by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment.
  • Program modules 42 generally carry out the functions and/or methodologies of embodiments of the invention as described herein.
  • Computer system/server 12 may also communicate with one or more external devices 14 such as a keyboard, a pointing device, a touch sensitive pad or surface, a screen or surface which is responsive to the location of an input means, i.e., a stylus, a finger, or other, a display 24, etc.; one or more devices that enable a user to interact with computer system/server 12; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server 12 to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 22.
  • computer system/server 12 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 20.
  • network adapter 20 communicates with the other components of computer system/server 12 via bus 18.
  • It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server 12. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems.
  • the invention provides a computer-implemented method of monitoring and controlling availability of resources, here one or more allotments, via a network.
  • a computer infrastructure, such as computer system 12, can be provided, and program code for performing the processes of the invention can be obtained (e.g., created, purchased, used, modified, etc.) and deployed to the computer infrastructure.
  • the deployment of a system can comprise one or more of: (1) installing program code on a computing device, such as computer system 12 (as shown in Fig. 1), from a computer-readable medium; (2) adding one or more computing devices to the computer infrastructure; and (3) incorporating and/or modifying one or more existing systems of the computer infrastructure to enable the computer infrastructure to perform the processes of the invention.
  • A first example of an implementation of a system and apparatus according to an aspect of the invention is depicted in Fig. 2.
  • Such includes one or more mounted cameras 14 in the proximity of one or more allotments 102.
  • the allotments are available parking spaces for vehicles, depicted as having visible boundary markings 106 defining a planar area dimensioned for containing a vehicle 120a - 120d.
  • Cameras 14 are mounted in the proximity of one or more allotments such that their individual visual fields of view impinge on one or more allotments 102; consequently one or more optical sensors within each camera receive image data, including machine vision data relating to the allotment status as well as to information parameters, and provide this data to the system/server 12.
  • the mounted cameras 14 are external devices 14 which may communicate by any means directly with the system/server 12 and may also communicate with one another; such means may be wired communication cables or wireless, i.e., networked, e.g., the system/server may be a cloud computing node 10.
  • the cameras 14 may be coupled to a processor executing object recognition software; the recognition software may be executed within a portion of the camera itself and be associated with one or more optical sensors forming part of camera, or the object recognition software may be executed upon the system/server 12, or both.
  • the camera may include a central processing unit, and memory storage unit as may be required in order to execute instructions stored in the memory storage unit in order to generate an output, which may be sent to the system/server 12, and/or one or more further external devices 14, such as a parking kiosk 15 in the proximity of one or more of the allotments.
  • the cameras 14 can be mounted to any available structure which provides support for a camera 14, in order to provide a suitable location whereby each camera may effectively receive within its field of vision image data, including machine vision data relating to the allotment status as well as information parameters relative to a vehicle.
  • Cameras 14 may be mounted upon static structures including posts as may be used for telecommunications, for lighting, or for signage as well as to parts of building structures.
  • One advantage of the present invention is that image data received from each of the cameras 14 is processed in a manner wherein the received image data of one or more cameras 14 is used to determine the status of an allotment from two different angles, as well as in the generation of a three-dimensional representation of a vehicle 120a - 120d within the field of view of one or more of the cameras 14, namely a ‘bounding box’ (or “bbox”), which in turn can be used to assess the availability of empty allotments within the fields of view of one or more cameras 14.
  • a further advantage of the present invention is that the mounting height of the one or more cameras 14 is effective even at relatively low elevations, that is to say the cameras 14 are effective even at heights which are not in excess of about 20 vertical feet above the level of an allotment, such that one or more “directly downward-looking” cameras are not required.
  • Fig. 2 illustrates a system according to the present invention configured and implemented to monitor and control the availability of allotments, here parking spaces 102, within a bounded parking lot having a delimited number of allotments or parking spaces.
  • the bounded parking lot 100 has a generally planar surface 101.
  • the bounded parking lot 100 includes an array of individual parking spaces 102 set forth in a series of four rows 104, each having a plurality of abutting parking spaces 102 separated and delineated by visible boundary markings 106 which indicate the surface area of each of the spaces 102.
  • the parking lot 100 in this non-limiting example includes a physical boundary, here a curb 110, which bounds all of the parking spaces and the roadway sections 113 which are present in order to allow for the travel of one or more vehicles between parking spaces 102 within the parking lot 100.
  • access to the parking lot 100 is available only via two entry/egress roadways 112, as the curb 110 is intended to block all other entry and exit points to the parking lot 100.
  • the parking lot 100 also includes a plurality of pole mounted cameras 14 each preferably being mounted at a height of approximately between 3 and 20 feet (1 and 6 meters) above the generally flat, planar surface 101 of the parking lot 100.
  • a parking kiosk 15 is a particular form of external device 14, to be discussed in more detail hereinafter.
  • a structure “CS” may contain all or part of the system/server 12, although such is clearly understood as being an optional placement of a part of the system/server 12, as it may alternately be absent and the system/server 12 instead placed at a remote location, such as a central station 200 which can be wholly geographically separated from the parking lot 100.
  • communications between the system/server 12 and further components, particularly the one or more cameras 14 may be communicated via a wired or wireless network; here a cloud based “C” network is depicted providing monodirectional and bidirectional communication of signals “s” between the system/server 12 and the one or more cameras 14.
  • all or part of the system/server 12 may be incorporated as part of the kiosk 15, where a kiosk is present.
  • the system/server 12 similarly can be located in a geographically separated location, such as at a central station 200, and information which would otherwise be available there can be provided to a portable mobile device such as a “smartphone” 210 having a suitable display which provides information to an end user thereof.
  • Such communications with such a mobile portable device 210 may also occur via signals “s” between an intermediate cloud C, or directly with the system/server 12.
  • the functions may also be implemented by a portable mobile device 210 which forms a part of a vehicle such as one or more of the vehicles 120a - 120d which may be present within the parking lot 100.
  • each of the pole mounted cameras 14 may unidirectionally or bidirectionally communicate with the system/server 12 (within the CS, or at a central station 200) and/or the kiosk 15 and/or smart phone/portable mobile device 210.
  • the system/server 12 (within the CS, or at a central station 200) and/or smart phone/portable mobile device 210 communicate in a similar manner, preferably wirelessly, i.e., via radio frequency, Bluetooth, or via a “cloud” based service.
  • Such devices may be part of a communications network, with each being a “node” thereon.
  • the system operates to monitor the availability of allotments, here parking spaces, within the bounded parking lot by capturing, via one or more optical sensors within each camera 14, image data within their separate angular fields of view 140, including machine vision data relating to the allotment status as well as to information parameters regarding vehicles 120a - 120d, and providing this data to the system/server 12. This data is used to determine the relative positions of allotments to one or more vehicles and/or their allotment status via obtention of information parameters at least partially determined by the one or more cameras 14, which provide for the dynamic collection of information parameters relating to a static vehicle or a moving vehicle within the field of view 140 of at least one of the cameras 14.
  • the system includes at least one, but preferably two or more of the mounted cameras 14 having individual angular or conical optical fields of view 140, and the mounted cameras 14 are positioned such that these fields of view 140 at least in part overlap for at least two of the cameras 14.
  • four mounted cameras 14 are present, each having an independent angular field-of-view 140; optionally but preferably, the positioning of the one or more optical sensors (not shown) within each of the cameras 14 is such that there is a degree of overlap of the fields of view 140 of two or more of the cameras 14.
  • Each of the optical sensors of each of the cameras provides useable data which may be processed by the computer system/server 12 regarding information parameters concerning any vehicles within the field of view of the camera 14, as well as the allotment status of allotments also within the field of view. Where there is an overlap in the fields of view of two cameras, this provides for the receipt of concurrent data regarding any object present within any of the fields of view 140, which data can be dynamically transmitted to the computer system/server 12.
  • This data may be used from a single camera 14 to provide a representation of a bounding box (bbox) of any one vehicle relative to any point on the parking lot 100 within the field of view of a camera 14.
  • This data may be used from a single camera 14 to provide a Service related to a vehicle within the field of view of a camera 14.
  • the data includes pixel images received from each of the optical sensors, each of the pixels representing a portion of the actual three-dimensional view from a particular camera 14 and the instantaneous status of any object within the field of view.
  • the fields of view 140 of each of the four depicted mounted cameras 14 overlap to a great degree towards the central portion of the parking lot 100, but no individual mounted camera 14 has a sufficiently wide field-of-view 140 which encompasses the entirety of the surface of the parking lot 100 and all vehicles 120a-120c present therein, or entering vehicles 120d which have not yet entered any of the parking spaces 102 or the intermediate roadway sections 113.
  • data received from one or more of the cameras 14 is necessary in order to provide output data representative of the instantaneous positioning of objects, i.e., bbox of a vehicle, within the vicinity of allotments, here parking spaces 102.
  • output data provides for the dynamic collection of information parameters relating to a static vehicle or a moving vehicle, and its respective position to the two or more said cameras, to any other object such as another static or moving vehicle, as well as to allotments, here parking spots, within the field of vision of any one camera 14.
  • This initial step of generation of an initial data map may be undertaken with or without the presence of any vehicles within the parking lot 100. Additionally, at any time the initial data map generated by use of the video markup process (also noted as a ‘Video markup tool’) may be modified or updated to reflect physical changes to the parking lot 100 or its parking spaces 102, as necessary.
  • Fig. 3 provides a conceptual representation of a video markup process used to generate an initial data map of a parking lot 100a (here of a different configuration than that of Fig. 2), and Figs. 3-1 to 3-3 provide further representations of the correlation of visible boundary markings 106 within a further parking lot 100b, as seen in the field of view of one camera 14 utilizing the video markup tool 130.
  • the video markup tool itself may be implemented within the computer system/server 12 and is operated by a user (i.e., a human user) who interfaces with the video markup tool 130 via suitable peripheral devices, such as a monitor, and input devices which operate with the computer system/server 12 to generate the initial data map from and of image data 14v in the field of view of a camera 14.
  • In Fig. 3-1 there is represented a visual 2-D map representation 150a of a part of a parking lot 100b surface, including parking spaces 102 and their visible boundary markings 106, which are to be correlated with points visible in the image data 14v of a camera viewing the parking lot 100b within its field of view.
  • the 2-D map is stored as a data structure, i.e., a data table within the computer system/server 12, and may include further data including, for example, the actual physical dimensions of the parking spaces 102, including width, length and area, as well as dimensions (i.e., length) of the various line segments typically used to provide the visible boundary markings 106.
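As an illustration only, the data table described above might be held in a structure along the following lines; the class name, field names, and dimension values here are assumptions for the sketch, not taken from the specification:

```python
from dataclasses import dataclass, field

@dataclass
class ParkingSpace:
    """One allotment record in the initial data map (hypothetical schema)."""
    space_id: int
    width_m: float                               # physical width of the space 102
    length_m: float                              # physical length of the space 102
    corners: list = field(default_factory=list)  # boundary-marking 106 endpoints

    @property
    def area_m2(self) -> float:
        # area stored alongside width and length, as the text suggests
        return self.width_m * self.length_m

# a minimal data table for part of a lot such as 100b
data_map = {s.space_id: s for s in (ParkingSpace(1, 2.5, 5.0),
                                    ParkingSpace(2, 2.5, 5.0))}
```

Keyed by space identifier, such a table can then be joined against per-camera observations without re-measuring the lot.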
  • the representation of 150a may be a simple form of an initial data map, but typically such is only a subset of a total 3-D map of a parking lot 100, 100a, 100b having more allotments, and which may also include data of three-dimensional objects of interest as well.
  • the interrelationship between the representation of 150a and the visual data 14v is shown more clearly in Figs. 3-2 and 3-3.
  • the field of vision of the single camera 14 and the image data 14v which it provides from its optical sensor, typically a 2-D array of pixel sensors, is seen by the human operator; in this image data 14v are visible points of interest, i.e., p1, p2, p3, p5, which are present in the representation 150a which forms at least a part of an initial data map.
  • These points of interest p1, p2, p3, p5 correspond to points of the visible boundary markings 106 within the camera’s field of view.
  • the video image of a parking lot 100, its visible parking spaces 102 and their visible boundary markings 106 within the camera’s field of view, as seen alongside the planar video representation 150a, is now visible.
  • the user of the video markup tool uses a suitable input device, i.e., a computer mouse, trackball, tablet and stylus or other suitable input device, to “map” points represented on the 2-D representation 150a to corresponding parts of the video image and thus image data 14v of a parking lot 100c, its visible parking spaces 102 and their visible boundary markings 106; such mapping may be by simply indicating and correlating a visible point on the video image data 14v with the corresponding point of interest present in the representation 150a.
  • the input device is thus used to identify correlating points, lines, etc. of the 2-D map representation 150a with correlating parts of the planar, 2-D video image of a camera 14.
  • Fig. 3-3 depicts the result of such a mapping process, wherein the location of visible boundary markings 106 within the 3-D video image data 14v is correlated with corresponding points and lines of the 2-D map representation 150a stored as a data structure within the computer system/server 12.
  • the position of any point within a field of view of any one of the cameras 14 can now be identified by the computer system and correlated to the initial data map stored as a data structure, as well as used further by the system and apparatus of the invention.
  • the outline 3D points of the ‘real world’ received by any of the cameras 14 may thus be correlated to 2D image points within the camera video frame, as received on its 2D image sensor.
  • This process is repeated for one or more further cameras 14 having fields of view of one or more point(s) of interest and/or lines of interest in a parking lot, which have their respective positional information stored.
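The stored output of this markup process for each camera can be imagined as a list of pairs, each tying a 2-D map point of interest to the pixel where it appears in that camera's image; the dictionary layout and all coordinate values below are invented for illustration:

```python
# Each markup action pairs a point of interest on the 2-D map representation
# 150a (in metres) with the pixel where it appears in that camera's image
# data 14v. All values here are illustrative only.
correspondences = {
    "camera_1": [
        ((0.0, 0.0), (312, 540)),   # p1
        ((2.5, 0.0), (401, 533)),   # p2
        ((0.0, 5.0), (298, 431)),   # p3
        ((2.5, 5.0), (389, 425)),   # p5
    ],
}

def pairs_for(camera_id):
    """Return the (map point, image point) pairs recorded for one camera."""
    return correspondences.get(camera_id, [])
```

Such 3D-to-2D pairs are exactly the input a pose-estimation routine like solvePnP consumes for each camera.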
  • a bounding box (or “bbox”) is generated from video image data 14v provided within a camera’s 14 field of view, as received on the camera’s image sensor.
  • the video image data 14v is used to generate data representing a virtual geometric volume of the vehicle 120, the bbox, which includes data regarding the physical size (in 3-D) of the vehicle 120 and the location of one or more points of the bbox, including the location of at least 3 corners of the bbox (rear-bottom-left “bb3”, front-bottom-left “bb2”, front-bottom-right “bb1”) derived from video image data 14v, which 3 corners may also be assigned (x, y, z) coordinates which are subsequently correlated to coordinates stored in the data map generated using the video markup tool 130, namely the (virtual) data map of a parking lot on which the vehicle 120 is present.
  • a camera captures an image of a vehicle 120 within its field of view and visual image data 14v received is provided to a neural network 12x which may be implemented within or using the computer system/server 12, which estimates 2-dimensional coordinates of the vehicle’s 3-dimensional bounding box vertices based on certain points derived from the visual image data 14v.
  • Such bounding box corners “bbc” relative to a vehicle are illustrated in Fig.
  • K is obtained from a known, conventional chessboard camera calibration procedure.
  • R*[I-C] is a model-view matrix which was established using the video markup tool 130 when points or lines of interest (i.e., boundary lines defining the position of an allotment, or lines 106 which indicate the individual boundaries of each of the spaces 102 in a parking lot 100, or lines which indicate the individual boundaries of parking spaces on a section of a roadway 113, or a point such as the location of a structure, such as structure CS, or kiosk 15) are indicated by a user or operator of the system.
  • the instantaneous location of a vehicle 120 to the surface 101 of a parking lot may be ascertained, and stored in a computer memory within the system/server 12 for immediate use by the system and/or recorded for later access and retrieval.
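The projection just described, x = K * R * [I | -C] * X, can be sketched numerically as follows. All values are assumptions for the sketch: K stands in for a chessboard calibration result, and the pose (R, C) for one established via the markup tool; a camera looking straight down is used purely to keep the arithmetic simple, even though the invention does not require downward-looking cameras:

```python
import numpy as np

# Illustrative calibration and pose values (not from the specification).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
C = np.array([0.0, 0.0, 4.0])             # camera centre, 4 m above surface 101
R = np.array([[1.0,  0.0,  0.0],          # world-to-camera rotation
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])

def project(world_pt):
    """Map a 3-D world point (metres) to a pixel (u, v) via x = K R [I | -C] X."""
    cam = R @ (np.asarray(world_pt, dtype=float) - C)   # R [I | -C] applied to X
    u, v, w = K @ cam
    return u / w, v / w

u, v = project([1.0, -1.0, 0.0])   # a point on the lot surface (z = 0)
```

With these numbers the point lands at pixel (840, 560); in the system, the same mapping places any stored map coordinate into a camera's image for comparison with observed bbox corners.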
  • the process may be used to concurrently track the location of one or more static and/or moving vehicles relative to the surface of the parking lot, with respect to other vehicles, and with respect to allotments present in the lot 100.
  • the process also provides for the dynamic collection of information parameters relating to a static vehicle or a moving vehicle, and its respective position relative to the two or more said cameras, to any other object such as a static or moving vehicle present within the field of view of the cameras 14 forming part of the system, as well as the instantaneous status of allotments, i.e., empty or occupied.
  • In the process of determining the location of points within a camera’s field of view, such may be calculated using a “solvePnP” function, which may be implemented within a part of the system.
  • the ‘solvePnP’ function itself is known (see “Appendix A”) and may be implemented using the system. From the results of the solvePnP function (or another suitable function), the relative position of one or more point(s) of interest and/or lines of interest in the real environment may be determined within the virtual map 155 stored within the system, having been previously generated by the video markup process and video markup tool 130.
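Once a camera pose is known (for instance, as recovered by solvePnP from the markup correspondences), a pixel such as a bottom corner of a vehicle bbox can be placed onto the lot surface by intersecting its viewing ray with the ground plane. The K, R, and C values below are assumed, illustrative numbers, not taken from the specification:

```python
import numpy as np

# Assumed calibration and pose (a downward-looking camera for simple arithmetic).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
C = np.array([0.0, 0.0, 4.0])
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])

def pixel_to_ground(u, v):
    """Intersect the viewing ray through pixel (u, v) with the lot surface z = 0."""
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction, camera frame
    d_world = R.T @ d_cam                             # ray direction, world frame
    t = -C[2] / d_world[2]                            # distance to the z = 0 plane
    return C + t * d_world

ground_pt = pixel_to_ground(840.0, 560.0)
```

This is the inverse of the projection formula: a bbox bottom corner observed at a pixel yields an (x, y, 0) position on the lot surface that can be compared against the stored map of parking spaces.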
  • the instantaneous relative position of a vehicle 120 relative to one or more of the point(s) of interest and/or lines of interest in the real environment, i.e., as present in the stored 3-D map, may be determined by the system dynamically, i.e., intermittently or continuously. Intermittent determination may occur as the data correlating to one or more video image frames (“image data”) is received by one or more of the cameras; this may correspond to the sampling rate of one or more cameras, viz., with each frame received by a camera, or some frames may be skipped and only intermittent frames received by the camera used in the determination. Ideally the sampling rate of frames received by a camera and used by the system is at an interval of between about 0.1 seconds and 10 seconds, preferably between about 0.2 seconds and 5 seconds. The determination is preferably made during the time that any motion within a field of view of one or more of the cameras is sensed; such sensing may be made by comparing one or more pixels or visual image areas (i.e., for brightness, or change of shape) between two or more successive frames of image data received by one or more of the cameras which form part of the system.
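The pixel-comparison motion sensing described above can be sketched as a simple frame difference; the threshold values are assumptions, since the specification names the comparison but not concrete numbers:

```python
import numpy as np

# Assumed thresholds (not from the specification).
MOTION_THRESHOLD = 10      # per-pixel brightness change counted as a change
MIN_CHANGED_PIXELS = 50    # pixels that must change before motion is sensed

def motion_detected(prev_frame, curr_frame):
    """Compare two successive frames of image data 14v pixel by pixel."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return int((diff > MOTION_THRESHOLD).sum()) >= MIN_CHANGED_PIXELS

prev = np.zeros((120, 160), dtype=np.uint8)   # empty scene
curr = prev.copy()
curr[40:60, 50:90] = 200                      # a bright region (a vehicle) appears
```

In practice the changed-pixel count would gate whether the heavier bbox estimation runs on a given frame.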
  • the system may then intercorrelate the real world position of point(s) of interest and/or lines of interest from the relative positional information from at least one camera 14, preferably at least two or more of the cameras 14, and any changes which occur within any of the fields of view, which may be correlated to the three-dimensional virtual map, from the vantage points of one or more of the cameras 14. From the three-dimensional virtual map of a parking lot 155, and the position of one or more pixels within the video image data 14v received by one or more of the cameras 14, where the pixels represent attributes of a vehicle within the field of view of one or more of the cameras, the status of the parking lot and vehicles present thereon can be continuously monitored for any changes.
  • Responsive to such changes (‘Events’) which occur within any of the fields of view of one or more of the cameras 14, the system and apparatus of the invention may undertake further steps and/or operations (‘Services’) in response to certain Events.
  • Services include one or more of: Acquisition Mode, Tracking Mode, Parking Mode, Parked Mode, Unparking Mode, Observation Mode.
  • the system and apparatus may have different modes of operation to both statically and dynamically determine the relative position of a vehicle, or of its 3-dimensional bounding box, relative to positional information forming part of the three-dimensional virtual map, and may dynamically update the relative position of a vehicle (i.e., from a reconstruction thereof, or a bounding box thereof, or from one or more points determined from a bounding box thereof) when it is moving; i.e., the system may have a more rapid or accelerated operating mode wherein an increased rate or frequency of sampling of image data from one or more, preferably two or more, of the cameras is received and operated upon by the system, and optionally one or more responses or outputs may be generated by the system and used for any of a number of purposes.
  • the system may operate in a static mode wherein the frequency of sampling of image data from one or more, preferably two or more, of the cameras received and operated upon by the system is at a longer interval as compared to the dynamic mode of operation described immediately above; again, optionally one or more responses or outputs may be generated by the system and used for any of a number of purposes.
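The two-mode sampling policy above reduces to choosing an interval based on whether motion is currently sensed. The interval bounds come from the description (about 0.1 s to 10 s overall, preferably 0.2 s to 5 s), but the exact per-mode values below are assumptions:

```python
# Assumed per-mode intervals, chosen from within the stated preferred range.
DYNAMIC_INTERVAL_S = 0.2   # accelerated mode: motion sensed in a field of view
STATIC_INTERVAL_S = 5.0    # static mode: no recent motion

def next_sampling_interval(motion_sensed: bool) -> float:
    """Choose how long to wait before sampling the next frame of image data."""
    return DYNAMIC_INTERVAL_S if motion_sensed else STATIC_INTERVAL_S
```

Keeping the static interval long conserves processing while the lot is idle; the dynamic interval approaches per-frame sampling while a vehicle moves.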
  • Fig. 5 illustrates a further schematic of an aspect of the invention, including the step of detecting and processing one or more images 14v of video data (i.e., a “video stream”) obtained from one or more cameras 14, including generation 111a of the position of the bbox of the visible vehicles 120 relative to further points of interest available from the virtual map 115 of the parking lot 100, i.e., boundary lines 106 and the relative position of vehicles 120 within a parking space 102, as well as the allotment status of parking spaces 102, and correlating 112a the bbox of visible vehicles with the further points of interest to compare such with prior correlations in order to determine if an Event has taken place.
  • An Event 112b can be any change in the position of or location of any vehicle, or its bbox data, which has occurred since a prior point in time.
  • the system and apparatus of the invention may sense the change in the video image data 14v received by any of the cameras 14 which may trigger an Event, such as movement of a vehicle 120, or a change in the allotment status of a parking space 102.
  • the system and apparatus of the invention may communicate this change and/or the incidence of an Event which may cause the initiation of a Service.
  • Figure 6 provides a flowchart including a sequence of steps which relate to a process of the present invention, including discrete logical steps under which the system may operate, including the determination of an Event.
  • the steps include the following:
  • conditional statement - is there a car “bbox” (3-dimensional bounding box, see Fig. 4.3) that doesn’t have a corresponding bbox (3-dimensional bounding box) on a previous frame (of video image data)?
  • 208 - detect the car’s license plate and notify the system.
  • 210 - has any car’s bbox intersected with a parking space rect? (parking space rectangle; lines defining a parking space; lines defining an allotment)
  • steps 200, 202, 204, 206, 210, 214, 218 may be used to determine the occurrence of an ‘Event’ during a time interval, or between time intervals, and may be used to issue a notification of such Event, and/or to initiate a ‘Service’.
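Two of the flowchart conditions above — a car bbox with no counterpart on the previous frame, and a bbox intersecting a parking space rect — can be sketched as follows. The event labels, identifiers, and coordinates are hypothetical, and bboxes are reduced to their ground footprints for the overlap test:

```python
def rects_intersect(a, b):
    """Axis-aligned overlap test; rects are (x_min, y_min, x_max, y_max)."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def detect_events(prev_bboxes, curr_bboxes, space_rects):
    """Flag newly appearing car bboxes and bbox/parking-space intersections."""
    events = []
    for car_id, bbox in curr_bboxes.items():
        if car_id not in prev_bboxes:            # no bbox on the previous frame
            events.append(("new_car", car_id))
        for space_id, rect in space_rects.items():
            if rects_intersect(bbox, rect):      # bbox meets a parking space rect
                events.append(("in_space", car_id, space_id))
    return events

events = detect_events(
    prev_bboxes={},                              # nothing on the prior frame
    curr_bboxes={"120d": (0.5, 0.5, 2.0, 4.5)},  # ground footprint of one bbox
    space_rects={102: (0.0, 0.0, 2.5, 5.0)},     # one parking space rect
)
```

Each returned tuple corresponds to an Event that could trigger a notification or a Service.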
  • the determination as to whether an Event has occurred is not limited to the flowchart of Fig. 6, as other conditions or parameters may be sufficient to signal, trigger or otherwise indicate the occurrence of an event.
  • Such expressly includes a difference between two different frames of image data 14v in the field of view of a camera 14, particularly any changes in the pixels or other visual aspects or features at two different times within the field of view of a camera 14.
  • An aspect of the invention is an acquisition mode, where the system of the present invention makes an initial visual observation of an object of interest, here a moving vehicle.
  • this vehicle is identified as 120d and is seen entering the region of the parking lot 100 via one of the two (ungated) entry/egress roadways 112. Concurrently, at least a part of the vehicle 120d enters into the field-of-view 140 of one or more of the mounted cameras 14.
  • An acquisition mode is initiated as data received from the two or more cameras indicates that the vehicle 120d has newly entered a field-of-view 140.
  • During the acquisition mode, the cameras collect information parameters selected from: the vehicle's overall size, color, manufacturer, model, door size, door type (sliding or non-sliding), condition, degree of window tinting, identification information such as a license plate, window stickers, direct relative distance to one or more of the mounted cameras 14, relative positioning of the vehicle to one or more positions, i.e., other vehicles already present within the parking lot 100, one or more of the parking spaces 102, one or more of the marked lines 106, or other visually discernible features of the vehicle 120d.
  • the information parameters are transmitted to the system/server 12 where they are processed to determine if an Event has occurred, i.e., according to the steps outlined in the flowchart of Fig. 6.
  • the resulting data extracted from the steps provides one or more information parameters which can then be checked against the records stored on a database interacting with the system/server or alternately which forms part of the system/server 12.
  • the system may generate a new record utilizing the newly acquired information parameters and store this in the database for subsequent use.
  • the system operates to initiate a higher density pixel recognition mode in one or more of the mounted cameras 14 in order to provide a higher quality video input and thereby improved data collection of the vehicle 120d and optionally, preferably, additional information parameters are collected for this vehicle 120d, and added to the new record, which now may assign a unique identifier to vehicle 120d.
  • the unique identifier may be used in subsequent operations to thereby provide improved indexing of the database in subsequent steps, or on other occasions wherein the vehicle 120d returns to the parking lot 100, after having exited it.
  • An aspect of the invention is a tracking mode where the system of the present invention makes a sequence of visual observations of a moving object of interest, here a moving vehicle.
  • this vehicle is identified as 120c and is seen in motion in the parking lot 100, specifically between rows of parking spaces 102, within the intermediate roadway sections 113 which are present in order to allow for the travel of one or more vehicles between parking spaces 102.
  • a time sequenced series of captured video images is concurrently received from one or more of the mounted cameras 14 and each frame of the images is processed to determine if an Event has occurred, i.e., according to the steps outlined in the flowchart of Fig. 6.
  • the resulting data extracted from the sequence of images captured by at least one camera 14 provides one or more information parameters relating to vehicle 120c, and may include not only one or more of the information parameters but expressly includes the relative position of vehicle 120c to other objects within the parking lot 100.
  • a further aspect of the invention is a parking mode wherein the system of the present invention makes a series of visual observations of a moving object of interest, here again a moving vehicle, again identified as 120c.
  • This mode becomes operational as a subset of the tracking mode when it is determined, from the time sequenced series of captured video images of the moving vehicle 120c, that it is in close proximity to one or more of the parking spaces 102 and that the vehicle 120c has changed in angular direction.
  • a time-sequenced series of captured video images is concurrently received from a mounted camera 14 and each frame of the images is processed according to the steps outlined in the flowchart of Fig. 6.
  • the resulting data extracted from the sequence of images captured by the mounted cameras 14 provides one or more information parameters relating to vehicle 120c during the parking mode and, in particular, its positioning relative to one or more of the parking spaces 102 and/or one or more of the marked lines 106 which indicate the individual boundaries of one or more of the parking spaces 102 proximate to vehicle 120c.
  • the parking mode may remain operational until it is sensed that, for a first timing interval (e.g., 10 seconds, 30 seconds, 60 seconds, or more), motion of the vehicle 120c has ceased, indicating a high likelihood that the vehicle has come to a stop within one or more of the parking spaces 102.
  • the data captured by the cameras 14 may be correlated to a time or time-stamp as a datum, which may be added to a database record associated with vehicle 120c, which datum may be the beginning of a parked time interval.
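The stop test described in the two items above can be sketched as follows: motion is deemed to have ceased when the vehicle's observed position stays within a small tolerance for a whole timing interval, and the first timestamp of that still period becomes the parked-time datum. The function name, tolerance, and data shapes are the editor's illustrative choices, not the patent's.

```python
# Hypothetical sketch of the parking-mode stop test: return the
# timestamp at which the vehicle came to rest (the beginning of the
# parked time interval), or None if it has not yet stopped for a
# whole first timing interval (e.g. 10 s).

def parked_since(observations, interval=10, tolerance=1.0):
    """`observations` - time-ordered list of (timestamp, (x, y)) pairs."""
    still_start = None
    anchor = None
    for timestamp, (x, y) in observations:
        if anchor is not None and abs(x - anchor[0]) <= tolerance \
                and abs(y - anchor[1]) <= tolerance:
            if timestamp - still_start >= interval:
                return still_start  # beginning of the parked interval
        else:
            # position changed: restart the stillness clock here
            still_start, anchor = timestamp, (x, y)
    return None

obs = [(0, (0, 0)), (2, (4, 0)), (4, (8, 0)), (6, (12, 0)),   # moving
       (8, (12.2, 0)), (14, (12.1, 0)), (20, (12.2, 0))]      # at rest
assert parked_since(obs) == 6
```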
  • the data captured may be transmitted to the kiosk 15, which includes a visual indicator providing data relating to one or more of: the parking lot 100; parking spaces 102 of the parking lot 100 (i.e., a map); the position of the now parked vehicle 120a within the parking lot 100; and an indication of the time or time-stamp datum.
  • further information may also be supplied, such as instructions relevant to the use of the parking lot 100, and/or payment options where the use of the parking lot 100 is not available free of cost.
  • the data captured, as well as further information, may also be made available to a handheld device 210, including any requirement for payment for the use of the parking lot and the fee required.
  • a further aspect of the invention is a Parked Mode which preferably, is entered into by the system subsequent to a Tracking Mode or a Parking Mode (preferably a Parking Mode).
  • the system of the present invention makes a series of visual observations of a static object of interest, here again a non-moving vehicle identified as 120a. While this mode may become operational at any time, preferably it becomes operational subsequent to a Parking Mode which may be used to determine the status of the vehicle 120a relative to a parking space 102 and/or one or more of marked lines 106.
  • the data capture of the mounted cameras 14 maintains visual observation of the non-moving vehicle 120a and remains in this state until movement in the immediate proximity of the vehicle 120a (such as the opening of a door, rear tailgate, or trunk) or physical movement of the vehicle 120a takes place and is sensed by two or more of the mounted cameras 14. At such a time, the data captured by the cameras 14 may be correlated to a time or time-stamp as a datum, which may be added to the database record associated with vehicle 120a, which datum may be the beginning of a parked time interval.
  • a further aspect of the invention is an Unparking Mode which preferably, is entered into by the system subsequent to a Parking Mode.
  • the system of the present invention makes a series of visual observations regarding a static object of interest, here again a non-moving vehicle identified as 120a. While this mode may become operational at any time, preferably it becomes operational subsequent to a Parked Mode, which may be used to determine the status of the vehicle 120a relative to a parking space 102 and/or one or more of marked lines 106.
  • the data capture of the mounted cameras 14 maintains visual observation of the non-moving vehicle 120a and remains in this state until movement in the immediate proximity of the vehicle 120a (such as the opening of a door, rear tailgate, or trunk) or physical movement of the vehicle 120a takes place and is sensed by two or more of the mounted cameras 14 for a second interval (e.g., 10 seconds, 30 seconds, 60 seconds, or more), and it is observed that the relative position of the vehicle 120a has now changed relative to its position in the parking lot 100 established during the prior Parked Mode.
  • the data captured by the cameras 14 may be correlated to a time or time-stamp as a datum, which may be added to the database record associated with vehicle 120a, which datum may be the end of the parked time interval.
  • further information may also be supplied to the contemporaneous record of the vehicle 120a in the database, such as whether a payment is to be charged to the registered owner of the vehicle 120a, or whether a payment is to be withdrawn from a bank or other financial account linked to the registered owner of the vehicle 120a or from another source of payment associated with the vehicle 120a in the database.
  • the system returns to and operates according to the Tracking Mode until the vehicle exits the field of view 140 of the mounted cameras 14.
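The mode transitions described in the foregoing items (Tracking to Parking to Parked to Unparking, and back to Tracking) can be sketched as a small state machine. The event names below are illustrative paraphrases of the triggering conditions, not the patent's terms.

```python
# Hypothetical sketch of the mode transitions: each (mode, event)
# pair maps to the next mode; any unrecognized event leaves the
# current mode unchanged.

TRANSITIONS = {
    ("Tracking",  "near_space_and_turning"): "Parking",
    ("Parking",   "motion_ceased"):          "Parked",
    ("Parked",    "movement_sensed"):        "Unparking",
    ("Unparking", "position_changed"):       "Tracking",
}

def next_mode(mode, event):
    """Return the next mode; unknown events leave the mode unchanged."""
    return TRANSITIONS.get((mode, event), mode)

# A full park/unpark cycle returns the system to Tracking Mode:
mode = "Tracking"
for event in ("near_space_and_turning", "motion_ceased",
              "movement_sensed", "position_changed"):
    mode = next_mode(mode, event)
assert mode == "Tracking"
```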
  • Observation Mode. Regardless of the motive or static status of any objects within the fields of view of two or more of the cameras 14 having fields of view 140 extending into the parking lot 100, i.e., of either moving or static vehicles, at any time the system may enter into an Observation Mode.
  • datum collected as information parameters relating to a static vehicle or a moving vehicle by the cameras utilized in the improved machine visioning process of the invention include one or more visual parameters which may include, but are not limited to, the vehicle's overall size, color, manufacturer, model, door size, door type (sliding or non-sliding), condition, degree of window tinting, and direct relative distance to one or more of the monitoring cameras. These one or more datum may be stored in a database record and indexed to the particular vehicle observed.
  • the database record of the vehicle preferably also includes registration information such as the state, license plate number, a Vehicle Identification Number (VIN), and status of current vehicle registration as may be evidenced by stickers placed upon portions of a license plate and/or upon portions of a windshield.
  • This information is visibly discernible, particularly where high-resolution image capturing is used by one or more of the mounted cameras 14; such data may be captured and stored in the database record, concurrently with a date stamp of the date wherein this datum was acquired.
  • Such provides a nonlimiting example of visual information parameters collectible by the system of the invention which can be correlated to one or more non-visual parameters associated with the current vehicle registration.
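The visual information parameters and registration fields enumerated in the foregoing items might be held together in a single per-vehicle database record; the following dataclass is an illustrative sketch, with field names chosen by the editor rather than taken from the patent's schema.

```python
# Hypothetical sketch of a per-vehicle database record combining the
# visual information parameters and registration information listed
# above. Field names are illustrative.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VehicleRecord:
    unique_id: str                       # assigned on acquisition
    color: Optional[str] = None          # visual parameters ...
    manufacturer: Optional[str] = None
    model: Optional[str] = None
    state: Optional[str] = None          # registration information ...
    license_plate: Optional[str] = None
    vin: Optional[str] = None
    observations: list = field(default_factory=list)  # time-stamped data

record = VehicleRecord(unique_id="120d", color="blue",
                       license_plate="ABC-1234")
record.observations.append({"time": 0, "event": "acquired"})
assert record.license_plate == "ABC-1234"
```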
  • the datum collected using the system of the present invention, and stored as one or more fields within a database record of a database which can be used by the system, can be compared against further external databases and/or datum, such as: databases or data relating to a vehicle history report; a vehicle registration report including past and present owners; a police report correlating the vehicle against any outstanding violations or reports of stolen vehicles; or a vehicle credit report including the status of any liens upon the vehicle.
  • Such information may be used to generate a further report or further output resulting from such a comparison from datum collected according to the present invention by the system, as compared against further external databases and/or datum.
  • Further datum collected using the system of the present invention may include the status of the physical condition of the vehicle, or changes in its physical condition, e.g., whether the vehicle has been repaired following an accident, whether the vehicle has been in an accident, whether the vehicle has been otherwise altered, e.g., as a result of modification, vandalism, or graffiti, and such datum can be stored in the system and/or compared to prior datum stored within the system.
  • datum collected as information parameters relating to a static vehicle or a moving vehicle by the cameras utilized in the improved machine visioning process of the invention may also be used to determine the proximity of the location of an object of interest, i.e., a vehicle, relative to a point of interest, via two or more of the cameras 14 used with the system.
  • the then-current location of the vehicle, and the fact of its corresponding near proximity to such a point of interest, can be transmitted to a portable mobile device such as a “smartphone” 210 having a suitable display which would provide information to an end user, or a similar such portable mobile device which may be installed as part of the vehicle.
  • Such may be achieved by comparing the database record of a vehicle, and any of the foregoing modes to one or more records of one or more external databases relating to relative locations of any objects of interest, and as a result of such comparison, initiating a communication which will provide the foregoing indication to an end user via a portable mobile device.
  • the foregoing is effective without reliance upon a Global Positioning Device or any related sensors, but is operational solely in accordance with the system provided as an aspect of the present invention.
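Because the vehicle's location is already known in the data map's own coordinate system (no GPS involved), proximity to a point of interest reduces to a distance test against the mapped points. The point names, coordinates, and radius below are illustrative assumptions.

```python
# Hypothetical sketch of GPS-free proximity detection: the vehicle's
# position in data-map coordinates is compared against each mapped
# point of interest; anything within the radius triggers a notice.
import math

POINTS_OF_INTEREST = {"kiosk 15": (40.0, 12.0)}  # illustrative coordinates

def nearby(vehicle_xy, radius=5.0):
    """Yield names of points of interest within `radius` map units."""
    for name, (px, py) in POINTS_OF_INTEREST.items():
        if math.dist(vehicle_xy, (px, py)) <= radius:
            yield name

assert list(nearby((42.0, 10.0))) == ["kiosk 15"]  # ~2.8 units away
assert list(nearby((0.0, 0.0))) == []              # far from everything
```

The names yielded here would feed the communication to the portable mobile device described above.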
  • FIG. 7 Therein is depicted a representation of a common roadway 170, viz. a “street” between parallel curbs 110 as is commonly encountered in North America, which includes portions delineated into parking spaces 102 adjacent to curbs 110 and delineated by a series of parallel marked lines 106, and an intermediate roadway section 113 which is present in order to allow for the travel of one or more vehicles between the parallel rows 104 of adjacent parking spaces 102.
  • Visible also in the figure are a representative series of building fronts 172 spaced away from the curb, one of which has mounted thereon a first camera 14 having an angular or conical field-of-view 140, and on the opposite side of the street a second, pole-mounted camera 14 having its own angular conical field-of-view 140. Adjacent thereto is a kiosk 15.
  • the invention receives input from one or more of the cameras 14 and processes the received data in accordance with the steps outlined previously, in order to derive datum collected as information parameters relating to any of the vehicles passing within the fields of view 140 of the cameras 14.
  • the “parked” vehicle 112e is currently subjected to a Parked Mode and may concurrently be subjected to an Observation Mode.
  • the “moving” vehicle 112f is currently subjected to an Acquisition Mode and/or a Tracking Mode and optionally an Observation Mode.
  • the further “parked” vehicle 112g is currently subjected to a Parked Mode and may concurrently be subjected to an Observation Mode, as well as a conditional mode, as the relative placement of the vehicle 112g indicates that it is not only extending into two adjoining parking spaces 102 but also is occupying a no-parking space 105 visibly marked by diagonal parallel lines.
  • the system of the invention may initiate an output which communicates with the local parking authority and/or the local police authority to indicate the parking violation/infraction of vehicle 112g.
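The violation test for a vehicle such as 112g can be sketched as an overlap check between the vehicle's bbox and each no-parking region in the data map. Rectangular regions and the coordinates below are illustrative simplifications of the mapped geometry.

```python
# Hypothetical sketch of the no-parking infraction check: any
# non-empty overlap between a vehicle bbox and a no-parking region
# from the data map flags a violation.

def overlaps(a, b):
    """Axis-aligned rectangle overlap; rectangles are (x1, y1, x2, y2)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

NO_PARKING = [(20, 0, 25, 10)]  # illustrative region 105 in map coords

def violation(bbox):
    return any(overlaps(bbox, zone) for zone in NO_PARKING)

assert violation((23, 2, 30, 8))      # straddles region 105 -> flag it
assert not violation((0, 0, 10, 10))  # wholly within legal spaces
```

A positive result here is what would trigger the output to the parking or police authority described above.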

Abstract

The present invention relates to an apparatus and system for provisioning allotments utilizing machine visioning, the machine visioning including one or more mounted cameras above one or more available allotments. In a preferred embodiment the allotments are available parking spaces for a vehicle, as may be used to carry persons and/or goods on streets and other roadways. The apparatus and system is used to monitor Events, and may initiate a Service in response to the occurrence of an Event.

Description

SYSTEM, APPARATUS AND METHOD OF PROVISIONING ALLOTMENTS UTILIZING
MACHINE VISIONING
Certain systems and methods of monitoring and controlling availability of resources, here one or more allotments used for the temporary placement of an article, such as a vehicle, are generally known to the art.
US 2017/0325082 describes apparatus and methods related to providing city services, such as parking. A mobile device can be configured to receive information from local sensor nodes, such as parking sensor nodes, in the vicinity of the mobile device. In a parking application, the mobile device located in a moving vehicle can be configured to locate available parking based upon the information received from the parking sensor nodes.
US 10198949 discloses a method for distributing parking availability data via blockchain includes: storing a blockchain comprised of a plurality of blocks, each block having a block header including a timestamp; receiving spot availability notifications including a common spot identifier and availability data; generating a transaction value including the common spot identifier and availability data; generating a new block header including i) a current timestamp, ii) a reference hash value generated via hashing of the block header included in a most recent block identified via the timestamp, and iii) a transaction hash value generated via hashing of the new transaction value; generating a new block comprised of the new block header and the new transaction value; and transmitting the generated new block.
US 10255808 discusses a computer-implemented method includes: receiving, by a computing device, images of adjacent vehicles parked directly adjacent to an open parking space; determining, by the computing device, visual factors and non-visual factors of the adjacent vehicles based on the images; determining, by the computing device, risk scores for each of the adjacent vehicles based on the visual factors and the non-visual factors; determining, by the computing device, a parking position within the open parking space; and outputting, by the computing device, information regarding the parking position.
US 2017/0017848 discloses a parking assist system of a vehicle includes a camera that, when disposed at the vehicle, has a field of view exterior of the vehicle. An image processor is operable to process image data captured by the camera to detect parking space markers indicative of a parking space and to identify empty or available parking spaces. The image processor includes a parking space detection algorithm that detects parking space markers by (i) extracting low level features from captured image data, (ii) classifying pixels as being part of a parking space line or not part of a parking space line, (iii) performing spatial line fitting to find lines in the captured images and to apply parking space geometry constraints, and (iv) detecting and selecting rectangles in the captured images.
US 8139115 discloses a computer implemented method, apparatus, and computer usable program code for tracking vehicles in a parking facility using optics. The process receives a series of two-dimensional images of a vehicle in a parking facility from a camera. The process generates an object representing the vehicle based on the series of two-dimensional images. The object includes a set of parameters defining an outer edge frame for the vehicle. The process determines a location of the vehicle in the parking garage based on the outer edge frame and positional pixel data for the parking facility. US 8139115 recites that "During calibration, a test vehicle is driven around a pre determined course in the given parking area. The test vehicle drives into selected parking bays in a precise order. The parking bays selected are usually the first parking bay and the last parking bay in a bank or row of parking bays. The camera will follow the vehicle and notice when the vehicle stops in a parking bay. In other words, the camera records a set of camera images of the test vehicle as it drives on access road ways and pulls into one or more pre-selected parking bays. This allows the process controller to calculate the location of each parking bay, the scale of each parking bay in the pixel image, and the orientation of a vehicle in a parking bay. The process controller will also calculate positional pixel data for each parking bay and each section of the access road. Positional pixel data is data for associating a pixel with a real world location in the parking facility. Positional pixel data includes an assignment of each pixel in the camera with a real world location road."
US 2017/0124874 discusses a system for determining an available parking space. A computer accesses a streaming video. The computer identifies a vehicle and a corresponding identification characteristic within the accessed streaming video. The computer retrieves historical data associated with the identified identification characteristic, wherein the historical data includes previous parking locations. The computer determines a preferred parking space within the retrieved historical data associated with the identified identification characteristic. The computer determines whether the identified preferred parking space is available based on a parking database.
US 2015/0086071 discloses a system and method for determining parking occupancy by constructing a parking area model based on a parking area, receiving image frames from at least one video camera, selecting at least one region of interest from the image frames, performing vehicle detection on the region(s) of interest, determining that there is a change in parking status for a parking space model associated with the region of interest, and updating parking status information for a parking space associated with the parking space model.
In a Master's Thesis titled “Vehicle Detection and Pose Estimation for Autonomous Driving” by Libor Novak (Czech Technical University in Prague; May, 2017), there is disclosed a method for determining a computer visioning and image processing system which generates, from two-dimensional (“2-D”, “2D”) images received by a video sensor, a representation of a three-dimensional (“3-D”, “3D”) bounding box representative of the approximate 3-D volume of a vehicle, in any rotational relationship position relative to the video sensor. The detection system uses a deep neural network (DNN) method for 3D bounding box estimation.
While the prior art discloses certain apparatus, systems and methods, there is nonetheless a real and urgent need in the art for further improvements in systems and methods of monitoring and controlling availability of resources, here one or more allotments used for the temporary placement of an article, such as a vehicle, which utilize machine visioning. In an aspect the present invention relates to a system for provisioning allotments, utilizing machine visioning, the machine visioning including one or more mounted cameras above one or more available allotments. In a preferred embodiment the allotments are available parking spaces for a vehicle, as may be used to carry persons and/or goods on streets and other roadways.
In a further aspect, the present invention relates to an apparatus including component parts thereof for provisioning allotments utilizing machine visioning, the machine visioning including one or more mounted cameras (preferably two or more mounted cameras) in the proximity of one or more available allotments. In a preferred embodiment the allotments are available parking spaces for one or more vehicles.
In a still further aspect, the present invention relates to a method for monitoring allotments utilizing machine visioning, the machine visioning including one or more mounted cameras in the proximity of one or more available allotments. In a preferred embodiment the allotments are available parking spaces for a vehicle.
In a yet further aspect of the invention there is provided an improved machine visioning process useful in establishing relative positions of allotments and their allotment status via the use of at least one, preferably two or more, cameras which have overlapping fields of vision, which process provides for the dynamic collection of information parameters relating to one or more static vehicles and/or one or more moving vehicles, and their respective position to the one, two or more said cameras, as well as relative to allotments within the field of vision.
A still further aspect of the invention is a method for recording utilization of allotments, the method utilizing the improved machine visioning processes useful in establishing relative positions of allotments and their allotment status via the use of at least one, preferably two or more, cameras which have overlapping fields of vision, which process provides for the dynamic collection of information parameters relating to one or more static vehicles and/or one or more moving vehicles, and their respective position to the at least one, two or more said cameras, as well as to allotments within the field of vision of the one or more cameras.
A yet further aspect of the invention is a method of providing output information based on a method for recording utilization of allotments, the method utilizing the improved machine visioning processes useful in establishing relative positions of allotments and their allotment status via the use of at least one, but preferably two or more, cameras which have overlapping fields of vision, which process provides for the dynamic collection of information parameters relating to one or more static vehicles and/or one or more moving vehicles, and their respective position to the one or more said cameras, as well as to allotments within the field of vision.
A still further aspect of the invention is a method for collection and use of information parameters relating to specific vehicles utilizing an allotment, the method utilizing the improved machine visioning processes useful in establishing relative positions of allotments and their allotment status via the use of at least one, preferably two or more, cameras which have overlapping fields of vision, which process provides for the dynamic collection of information parameters relating to one or more static vehicles and/or one or more moving vehicles within the field of vision of the at least one, two or more cameras, which information parameters may be stored, and optionally communicated and interchanged with further systems external of the apparatus for provisioning allotments utilizing machine visioning.
A preferred aspect of the invention provides a system and apparatus operable to monitor one or more parking spaces and one or more vehicles, wherein the one or more vehicles and one or more parking spaces are both within the field of view of at least one static, vertically mounted camera which provides video image data to a computer system/server and computer readable media or storage which is used in storing data derived from a video markup tool and from data derived from video image data received from the at least one camera, the computer system/server operable to execute program modules, to receive video image data from the at least one camera, to output a response subsequent to the execution of instructions of one or more program modules, and to communicate with one or more external devices; the system and apparatus comprising a video markup tool which establishes the positioning of one or more vehicles relative to one or more individual parking spaces separated and delineated by visible boundary markings in a parking lot, and which generates a data map of the parking lot which is subsequently used by the computer system/server to determine the relative positioning of one or more vehicles present within the field of view of the at least one camera, and to determine the occurrence of any Events occurring periodically within the field of view; the computer system/server operable to continuously monitor video image data received from the at least one camera and determine the relative positioning of the one or more vehicles present within the field of view by monitoring video image data received from the at least one camera, generate a bbox corresponding to a vehicle within the field of view, correlate the position of the bbox with the data map of the parking lot to determine the physical location of the vehicle relative to one or more points of interest represented in the data map, and, responsive to the occurrence of an Event, undertake further steps, operations, or provide a Service.
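The bbox-to-data-map correlation described in the preceding paragraph can be sketched as a lookup of which mapped parking space, if any, contains the vehicle's bbox centre. The data-map structure and space identifiers below are illustrative simplifications, not the patent's actual data map.

```python
# Hypothetical sketch of correlating a detected bbox with the data
# map: the bbox centre is tested against each parking-space region
# (simplified here to axis-aligned rectangles in map coordinates).

DATA_MAP = {  # space id -> (x1, y1, x2, y2) in map coordinates
    "102-1": (0, 0, 10, 20),
    "102-2": (10, 0, 20, 20),
}

def occupied_space(bbox):
    """Return the id of the space containing the bbox centre, or None."""
    cx = (bbox[0] + bbox[2]) / 2
    cy = (bbox[1] + bbox[3]) / 2
    for space_id, (x1, y1, x2, y2) in DATA_MAP.items():
        if x1 <= cx < x2 and y1 <= cy < y2:
            return space_id
    return None

assert occupied_space((11, 2, 19, 18)) == "102-2"  # centre at (15, 10)
assert occupied_space((40, 40, 50, 50)) is None    # off the mapped lot
```

In the full system, a real data map would carry arbitrary polygons produced by the video markup tool rather than rectangles, but the correlation step has the same shape.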
A further aspect of the invention provides a process for initiating provision of a Service in response to an Event, the process comprising the steps of: operating a system and apparatus as described herein to determine the occurrence of an Event, and in response thereto to initiate provision of a Service selected from: Acquisition Mode, Tracking Mode, Parking Mode, Parked Mode, Unparking Mode, Observation Mode.
These and further aspects of the invention are disclosed hereinafter in the following specification, which necessarily includes the accompanying drawings. In this application an “allotment” is a physical location which may be used for the temporary placement of an article, such as a vehicle. Such an allotment is advantageously an area in which a vehicle may be temporarily placed, i.e., a parking space which may be bounded by marking lines on the generally planar surface of the area. The vehicle may be any type of motorized or nonmotorized vehicle; with regard to the former, such specifically includes cars, trucks and motorcycles, and with regard to the latter, nonmotorized vehicles may include detachable trailers, articles affixed upon detachable trailers, as well as movable articles which may be provided to the allotment utilizing a motorized vehicle. An allotment may be one or more parking spaces having defined boundaries in two dimensions, which are typically sized to receive a vehicle. The foregoing is, however, to be understood as a nonlimiting definition.
“Allotment status” is the state of whether the allotment is occupied by an article, i.e., a vehicle, or is empty. The allotment status may be “available” or “empty”, indicating that it may receive a vehicle, or may be “unavailable” or “occupied”, already having a vehicle present within the physical boundaries of the allotment, i.e., parking space.
“Information parameters” relating to a static vehicle or a moving vehicle are one or more datum which are collected by the cameras utilized in the improved machine visioning process of the invention. Such may include visual and nonvisual parameters. Visual parameters may include the vehicle's overall size, color, manufacturer, model, door size, door type (sliding or non-sliding), condition, degree of window tinting, relative distance to one or more of the monitoring cameras etc. Visual parameters may be data or datum, and may be ascertained utilizing the input of the one or more cameras. Nonvisual parameters may be data or datum which are not visually ascertained utilizing the input of the one or more cameras, but are related to a specific vehicle identified by the apparatus and system of the invention; such may include a vehicle or driver accident history report, driver behavior and experience information, etc. The foregoing is however to be understood as a nonlimiting definition.
The present invention provides a system for provisioning allotments utilizing machine visioning, the machine visioning including one or more cameras mounted above the plane of the allotments, viz., a parking area (parking lot) having one or more available allotments, i.e., parking spots. The allotments may be present in another location other than a parking lot, such as a street having one or more defined allotments, i.e., parking spaces. The allotments may be portions of a surface at any other location or of any other area as well, and are not limited to only parking spaces or parking spots within a bounded parking lot or street or roadway having parking spaces or parking spots. In certain aspects the present invention may be a system of components, a method utilizing one or more of said components and/or a computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. Further, the computer program product may interoperate with one or more devices of the system which provide machine visioning, i.e., one or more mounted cameras physically mounted in the proximity of one or more allotments, in order to obtain data therefrom which are subsequently processed utilizing the computer program product.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, or other illustrations, can be implemented by computer readable program instructions, which may be written in any of a number of computer languages or may be implemented using low-level coding. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer or a programmable data processing apparatus to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams and other illustrations in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
It is understood in advance that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
Characteristics are as follows:
On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider.
Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).

Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

Service Models are as follows:
Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
Deployment Models are as follows:

Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.
Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.

Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).
A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes.
Referring now to Fig. 1, a schematic of an example of a cloud computing node is shown. Cloud computing node 10 is only one example of a suitable cloud computing node and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the invention described herein. Regardless, cloud computing node 10 is capable of being implemented and/or performing any of the functionality set forth hereinabove. In cloud computing node 10 there is a computer system/server 12, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server 12 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like.
Computer system/server 12 may be described in the general context of computer system executable instructions, such as program modules, being executed by a computer system.
Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system/server 12 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices. As shown in Fig. 1, computer system/server 12 in cloud computing node 10 is shown in the form of a general-purpose computing device. The components of computer system/server 12 may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including system memory 28 to processor 16. Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus.
Computer system/server 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 12, and it includes both volatile and non-volatile media, removable and non-removable media.
System memory 28 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. Computer system/server 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 34 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus 18 by one or more data media interfaces. As will be further depicted and described below, memory 28 may include at least one program product having a set (at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
The storage system 34 may be used for a database for reading and/or writing data thereto; or alternatively such a database may be accessed via an external storage device or as a cloud service.
Program/utility 40, having a set (at least one) of program modules 42, may be stored in memory 28 by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules 42 generally carry out the functions and/or methodologies of embodiments of the invention as described herein.
Computer system/server 12 may also communicate with one or more external devices 14 such as a keyboard, a pointing device, a touch sensitive pad or surface, a screen or surface which is responsive to the location of an input means (i.e., a stylus, a finger, or other), a display 24, etc.; one or more devices that enable a user to interact with computer system/server 12; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server 12 to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 22. Still yet, computer system/server 12 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 20. As depicted, network adapter 20 communicates with the other components of computer system/server 12 via bus 18. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server 12. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc.
In an embodiment, the invention provides a computer-implemented method of monitoring and controlling availability of resources, here one or more allotments, via a network. In this case, a computer infrastructure, such as computer system 12, can be provided, and one or more systems for performing the processes of the invention can be obtained (e.g., created, purchased, used, modified, etc.) and deployed to the computer infrastructure. To this extent, the deployment of a system can comprise one or more of: (1) installing program code on a computing device, such as computer system 12 (as shown in Fig. 1), from a computer-readable medium; (2) adding one or more computing devices to the computer infrastructure; and (3) incorporating and/or modifying one or more existing systems of the computer infrastructure to enable the computer infrastructure to perform the processes of the invention.
A first example of an implementation of a system and apparatus according to an aspect of the invention is depicted in Fig. 2. Such includes one or more mounted cameras 14 in the proximity of one or more allotments 102. In a preferred embodiment the allotments are available parking spaces for vehicles, as depicted having visible boundary markings 106 defining a planar area dimensioned for containing a vehicle 120a - 120d. Cameras 14 are mounted in the proximity of one or more allotments such that their individual visual fields of view impinge on one or more allotments 102; consequently one or more optical sensors within each camera receive image data, including machine vision data relating to the allotment status as well as information parameters, and provide this data to the system/server 12. The mounted cameras 14 are external devices 14 which may communicate by any means either directly with the system/server 12 and may also communicate with one another; such means may be by wired communication cables or wirelessly, i.e., networked, e.g., the system/server may be a cloud computing node 10. The cameras 14 may be coupled to a processor executing object recognition software; the recognition software may be executed within a portion of the camera itself and be associated with one or more optical sensors forming part of the camera, or the object recognition software may be executed upon the system/server 12, or both. Where processing of the visual image data received by the one or more optical sensors is performed by a camera 14 itself, the camera may include a central processing unit and a memory storage unit, as may be required to execute instructions stored in the memory storage unit and to generate an output, which may be sent to the system/server 12 and/or one or more further external devices 14, such as a parking kiosk 15 in the proximity of one or more of the allotments.
The cameras 14 can be mounted to any available structure which provides support for a camera 14 in order to provide a suitable location whereby each camera may effectively receive within its field of vision image data, including machine vision data relating to the allotment status as well as information parameters relative to a vehicle. Cameras 14 may be mounted upon static structures, including posts as may be used for telecommunications, for lighting, or for signage, as well as to parts of building structures. One advantage of the present invention is that image data received from each of the cameras 14 is processed in a manner wherein the received image data of one or more cameras 14 is used to determine the status of an allotment from two different angles, as well as in the generation of a three-dimensional representation of a vehicle 120a - 120d within the field of view of one or more of the cameras 14, a ‘bounding box’ (or “bbox”), which in turn can be used to assess the availability of empty allotments within the fields of view of one or more cameras 14. A further advantage of the present invention is that the mounting height of the one or more cameras 14 is effective even at relatively low elevations; that is to say, the cameras 14 are effective even at heights which are not in excess of about 20 vertical feet above the level of an allotment, such that one or more “directly downward-looking” cameras are not required.
The embodiment of the system and apparatus depicted in Fig. 2 illustrates a system according to the present invention configured and implemented to monitor and control the availability of allotments, here parking spaces 102, within a bounded parking lot having a delimited number of allotments or parking spaces. Notably, the bounded parking lot 100 has a generally planar surface 101 which is essentially two-dimensional, such that any point on the surface 101 may be expressed in a 3-D Cartesian coordinate system as (x, y, z = 0). Nonetheless the bounded parking lot 100 of course occupies in real life a three-dimensional space, of which the surface 101 is present where z = 0. This relationship is used when determining the relative position of any bbox with relation to any point on the surface 101, as will be described later.
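To make the (x, y, z = 0) surface convention concrete, the following Python sketch represents a parking space 102 by the ground-plane coordinates of its boundary markings 106 and tests whether a ground point lies within it. It is purely illustrative: the class name, field names, and the 9 ft × 18 ft example space are assumptions for exposition, not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point2D = Tuple[float, float]  # (x, y) on the lot surface 101, where z = 0

@dataclass
class ParkingSpace:
    """A parking space 102 modeled as a convex polygon on the z = 0 plane."""
    space_id: str
    corners: List[Point2D]  # boundary markings 106, listed in order

    def contains(self, p: Point2D) -> bool:
        """True if ground point p lies inside the space (ray-casting test)."""
        x, y = p
        inside = False
        n = len(self.corners)
        for i in range(n):
            x1, y1 = self.corners[i]
            x2, y2 = self.corners[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                # x-coordinate where this edge crosses the horizontal ray at y
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

# A hypothetical 9 ft x 18 ft space with one corner at the lot origin:
space = ParkingSpace("A1", [(0.0, 0.0), (9.0, 0.0), (9.0, 18.0), (0.0, 18.0)])
```

A bbox footprint projected onto the surface can then be related to a space with the same containment test, which is the sense in which any surface point carries (x, y, z = 0) coordinates.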
Turning to Fig. 2, the bounded parking lot 100 includes an array of individual parking spaces 102 set forth in a series of four rows 104, each having a plurality of abutting parking spaces 102 separated and delineated by visible boundary markings 106 which indicate the surface area of each of the spaces 102. The parking lot 100 in this non-limiting example includes a physical boundary, here a curb 110, which bounds all of the parking spaces 102 and the roadway sections 113 which are present in order to allow for the travel of one or more vehicles between parking spaces 102 within the parking lot 100. In this depicted embodiment, access to the parking lot 100 is available only via two entry/egress roadways 112, as the curb 110 is intended to block all other entry and exit points to the parking lot 100. Notably, in Figure 2 there are no physical gates, barriers, or other types of external devices 14 which would monitor or meter the entrance and egress of vehicles into and out of the bounded parking lot 100; such are not essential to the system and methods according to the present invention, and in preferred aspects are purposely omitted. Of course, if desired, one or more gates, barriers, or other types of external devices 14 which could be used to control entry/egress may be provided and utilized to determine the number and/or population of vehicles entering or exiting the parking lot 100.
As is further visible, the parking lot 100 also includes a plurality of pole mounted cameras 14, each preferably being mounted at a height of between approximately 3 and 20 feet (1 and 6 meters) above the generally flat, planar surface 101 of the parking lot 100. Optionally, and in some embodiments preferably, there is further provided a parking kiosk 15, which is a particular form of an external device 14 to be discussed in more detail shortly hereinafter. Also depicted in Figure 2 is a structure “CS” containing all or part of the system/server 12, although such is clearly understood to be an optional placement of a part of the system/server 12, as it may alternately be absent and its placement may instead be at a remote location such as a central station 200, which can be wholly geographically separated from the parking lot 100. In such an instance communications between the system/server 12 and further components, particularly the one or more cameras 14, may occur via a wired or wireless network; here a cloud based “C” network is depicted providing monodirectional and bidirectional communication of signals “s” between the system/server 12 and the one or more cameras 14. Alternately, all or part of the system/server 12 may be incorporated as part of the kiosk 15, where a kiosk is present. The system/server 12 similarly can be located in a geographically separated location, such as at a central station 200, with information which would otherwise be available there provided instead at a portable mobile device such as a “smartphone” 210 having a suitable display which provides information to an end user thereof. Such communications with such a portable mobile device 210 may also occur via signals “s” through an intermediate cloud C, or directly with the system/server 12.
Of course, it is to be understood that the functions may also be implemented by a portable mobile device 210 which forms a part of a vehicle, such as one or more of the vehicles 120a - 120d which may be present within the parking lot 100. It is to be understood that each of the pole mounted cameras 14 may unidirectionally or bidirectionally communicate with the system/server 12 (within the CS, or at a central station 200) and/or the kiosk 15 and/or the smart phone/portable mobile device 210. It is also to be understood that the kiosk 15, the system/server 12 (within the CS, or at a central station 200) and/or the smart phone/portable mobile device 210 communicate in a similar manner, preferably wirelessly, i.e., via radio frequency, Bluetooth, or via a “cloud” based service. Such devices may be part of a communications network, with each being a “node” thereon.
In an aspect of the invention, the system operates to monitor the availability of allotments, here parking spaces, within the bounded parking lot by capturing image data via one or more optical sensors within each camera 14, which receive image data within their separate angular fields of view 140, including machine vision data relating to the allotment status as well as information parameters regarding vehicles 120a - 120d, and by providing this data to the system/server 12. This data is used to determine the relative positions of allotments to one or more vehicles and/or their allotment status via the obtention of information parameters at least partially determined by the one or more cameras 14, which provide for the dynamic collection of information parameters relating to a static vehicle or a moving vehicle within the field of view 140 of at least one of the cameras 14.
In operation, the system includes at least one, but preferably two or more, of the mounted cameras 14 having individual angular or conical optical fields of view 140, and the mounted cameras 14 are positioned such that these fields of view 140 at least in part overlap for at least two of the cameras 14. In the embodiment of Figure 2, four mounted cameras 14 are present, each having an independent angular field of view 140 determined in part by the positioning of the one or more optical sensors (not shown) within each of the cameras 14; preferably there is a degree of overlap of the fields of view 140 of two or more of the cameras 14. Each of the optical sensors of each of the cameras provides useable data which may be processed by the computer system/server 12 regarding information parameters concerning any vehicles within the field of view of the camera 14, as well as the allotment status of allotments also within the field of view. Where there is an overlap in the fields of view of two cameras 14, this provides for the receipt of concurrent data regarding any object present within any of the fields of view 140, which data can be dynamically transmitted to the computer system/server 12. This data may be used from a single camera 14 to provide a representation of a bounding box (bbox) of any one vehicle relative to any point on the parking lot 100 within the field of view of a camera 14. This data may also be used from a single camera 14 to provide a service related to a vehicle within the field of view of a camera 14. The data includes pixel images received from each of the optical sensors, each of the pixels representing a portion of the actual three-dimensional view from a particular camera 14 and the instantaneous status of any object within the field of view.
For example, with reference to Figure 2, it can be seen that the fields of view 140 of each of the four depicted mounted cameras 14 overlap to a great degree towards the central portion of the parking lot 100, but no individual mounted camera 14 has a sufficiently wide field of view 140 which encompasses the entirety of the surface of the parking lot 100 and all vehicles 120a - 120c which are present therein, or entering vehicles 120d which have not yet entered any of the parking spaces 102 or the intermediate roadway sections 113. Thus, data received from one or more of the cameras 14 is necessary in order to provide output data representative of the instantaneous positioning of objects, i.e., the bbox of a vehicle, within the vicinity of allotments, here parking spaces 102. Such output data provides for the dynamic collection of information parameters relating to a static vehicle or a moving vehicle, and its respective position relative to the two or more said cameras, to any other object such as another static or moving vehicle, as well as to allotments, here parking spots, within the field of vision of any one camera 14.
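The geometry described above — several cameras 14 whose angular fields of view 140 overlap toward the central portion of the lot while no single camera covers the whole surface — can be sketched with a simple membership test. Every value below (camera identifiers, positions, headings, field-of-view angles, and ranges) is an illustrative assumption, not a parameter from the disclosure:

```python
import math
from typing import List, Tuple

Camera = Tuple[str, Tuple[float, float], float, float, float]

def cameras_seeing(point: Tuple[float, float], cameras: List[Camera]) -> List[str]:
    """Return ids of cameras whose horizontal angular field of view 140
    contains the given ground point (x, y) on the surface 101.

    Each camera is (id, (x, y) position, heading_deg, fov_deg, range_ft).
    """
    px, py = point
    seen = []
    for cam_id, (cx, cy), heading, fov, reach in cameras:
        dx, dy = px - cx, py - cy
        dist = math.hypot(dx, dy)
        if dist == 0 or dist > reach:
            continue
        bearing = math.degrees(math.atan2(dy, dx))
        # smallest signed angle between the bearing to the point and the heading
        off = (bearing - heading + 180.0) % 360.0 - 180.0
        if abs(off) <= fov / 2.0:
            seen.append(cam_id)
    return seen

cams: List[Camera] = [
    ("cam_NW", (0.0, 100.0), -45.0, 90.0, 150.0),
    ("cam_SE", (100.0, 0.0), 135.0, 90.0, 150.0),
]
# A point near the lot centre falls in both fields of view at once:
center_view = cameras_seeing((50.0, 50.0), cams)
```

A point seen by two or more cameras is one for which concurrent data can be received, as the passage above describes; a point seen by only one camera still yields a usable single-camera observation.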
In an alternative embodiment, it is required that data received from two or more of the cameras 14 be co-processed in accordance with the invention in order to provide output data representative of the instantaneous positioning of objects within the vicinity of allotments, here parking spaces 102. Such output data provides for the dynamic collection of information parameters relating to a static vehicle or a moving vehicle, and its respective position relative to the one or more said cameras, to any other object such as another static or moving vehicle, as well as to allotments, here parking spots, within the field of vision of the one or more cameras. An overall description of the system is discussed with regard to the following figures, which disclose parts of the method (process) used in generating data and a three-dimensional model representative of the instantaneous positioning of objects, here vehicles, within the vicinity of allotments, here parking spaces 102.
Prior to use of the apparatus and system of the invention in determining the positioning of one or more vehicles 120a - 120d relative to one or more allotments, namely individual parking spaces 102 separated and delineated by visible boundary markings 106 within a parking lot 100, it is required to generate an initial data map of the parking lot 100 and the parking spaces 102 present therein by operation of a video markup process, wherein the three-dimensional coordinates (x, y, z) of the location of the visible boundary markings 106 upon the surface of a parking lot 100, visible within the field of view of the one or more cameras 14, are individually mapped and correlated to a data model representative of the 2-D (x, y, z = 0) actual location (and dimensions) of the surface 101 of the parking lot 100, including the parking spaces 102 within the field of view of a camera 14. This initial step of generation of an initial data map may be undertaken with or without the presence of any vehicles within the parking lot 100. Additionally, at any time the initial data map generated by use of the video markup process (also noted as a “video markup tool”) may be modified or updated to reflect physical changes to the parking lot 100 or its parking spaces 102, as necessary.
Reference is made to Fig. 3, which provides a conceptual representation of a video markup process used to generate an initial data map of a parking lot 100a (here of a different configuration than that of Fig. 2), and Figs. 3-1 to 3-3 provide further representations of the correlation of visible boundary markings 106 within a further parking lot 100b as seen in the field of view of one camera 14 utilizing the video markup tool 130. The video markup tool itself may be implemented within the computer system/server 12 and is operated by a user (i.e., a human user) who interfaces with the video markup tool 130 via suitable peripheral devices, such as a monitor, and input devices which operate with the computer system/server 12 to generate the initial data map from and of image data 14v in the field of view of a camera 14. Turning first to Fig. 3-1, there is represented a visual 2-D map representation 150a of a part of a parking lot 100b surface, including parking spaces 102 and their visible boundary markings 106, which are to be correlated with points visible in the image data 14v of a camera having the parking lot 100b within its field of view. In this representation 150a, the 2-D coordinates of points of interest are assigned (x, y, z) coordinate values, but in the case where the points of interest are parking spaces 102, one or more points present in their visible boundary markings 106 are assigned (x, y, z = 0) coordinates relative to the 2-D map. The 2-D map is stored as a data structure, i.e., a data table, within the computer system/server 12 and may include further data including, for example, the actual physical dimensions of the parking spaces 102, including width, length and area, as well as dimensions (i.e., length) of the various line segments typically used to provide the visible boundary markings 106.
The representation 150a may be a simple form of an initial data map, but typically such is only a subset of a total 3-D map of a parking lot 100, 100a, 100b having more allotments, and which may also include data of three-dimensional objects of interest as well. The interrelationship between the representation 150a and the visual data 14v is shown more clearly in Figs. 3-2 and 3-3. In each of these, the field of vision of the single camera 14 and the image data 14v which it provides from its optical sensor, typically a 2-D array of pixel sensors, is seen by the human operator; visible in this image data 14v are points of interest, i.e., p1, p2, p3, p5, which are present in the representation 150a which forms at least a part of an initial data map. These points of interest p1, p2, p3, p5 correspond to points of the visible boundary markings 106 within the camera’s field of view. Therein the video image of a parking lot 100, with visible parking spaces 102 and their visible boundary markings 106 within the camera’s field of view, as also seen in the planar video representation 150a, is now seen. Next, the user of the video markup tool uses a suitable input device, i.e., a computer mouse, trackball, tablet and stylus, or other suitable input device, to “map” points represented on the 2-D representation 150a with corresponding parts of the video image and thus the image data 14v of a parking lot 100c, visible parking spaces 102 and their visible boundary markings 106; such mapping may be by simply indicating and correlating a visible point on the video image data 14v which corresponds to the corresponding point of interest present in the representation 150a. The input device is thus used to identify correlating points, lines, etc. of the 2-D map representation 150a with correlating parts of the planar, 2-D video image of a camera 14.
As the camera 14 remains in its static location once mounted, the relationship between the mapped points of interest p1, p2, p3, p5 and the corresponding points or parts of the image data 14v is also static and does not change. Fig. 3-3 depicts the result of such a mapping process, wherein the location of visible boundary markings 106 within the video image data 14v is correlated to corresponding points and lines of the 2-D map representation 150a stored as a data structure within the computer system/server 12. With this data, including the correlation of the 'real world' 3-D location of points, lines of points and lines within a parking lot with corresponding points as seen from one or more of the cameras 14 and their respective fields of view, the position of any point within a field of view of any one of the cameras 14 can now be identified by the computer system, correlated to the initial data map stored as a data structure, and used further by the system and apparatus of the invention. In such a way the correspondence between the outline 3D points of the 'real world' received by any of the cameras 14 may be correlated to points within the camera video frame 2D image points, as received on its 2D image sensor. The outline 3D data may also include other points of interest in the image wherein the (x, y, z) coordinates are correlated to the camera video frame 2D image points, as may be the case where objects other than the location of visible boundary markings 106 (which have x, y, z = 0 coordinates) are of interest and are also stored within and correlated to the map stored as a data structure, i.e., the location of a kiosk 15, roadways 112, the curb 110 or other objects.
This process is repeated for one or more further cameras 14 having fields of view of one or more point(s) of interest and/or lines of interest in a parking lot, which have their respective positional information stored.
The determination of the dimensions of one or more of the vehicles within the field of view of any one of the cameras 14 by the system and apparatus of the invention is a further operation and is undertaken after the aforesaid mapping process is used to generate an initial data map 155 of a parking lot. Reference is now made to Figs. 4A and 4B. Video image data 14v received from a camera 14 is used by the system to generate a three-dimensional representation of a vehicle's 120 'bounding box' (or "bbox") from video image data 14v provided within a camera's 14 field of view as received on the camera's image sensor. The video image data 14v is used to generate data representing a virtual geometric volume of the vehicle 120 bbox, which includes data regarding the physical size (in 3-D) of the vehicle 120 and the location of one or more points of the bbox, including the location of at least 3 corners of the bbox (rear-bottom-left, "bb3", front-bottom-left, "bb2", front-bottom-right, "bb1") derived from video image data 14v, which 3 corners may also be assigned (x, y, z) coordinates which are subsequently correlated to coordinates stored in the data map generated using the video markup tool 130, namely the (virtual) data map of a parking lot on which the vehicle 120 is present. Such a method is generally described in "Vehicle Detection and Pose Estimation for Autonomous Driving" by Libor Novak, as well as in the following. Reference is made to Fig. 4A: in a first step, via use of one or more of the cameras 14, a camera captures an image of a vehicle 120 within its field of view, and the visual image data 14v received is provided to a neural network 12x, which may be implemented within or using the computer system/server 12, and which estimates 2-dimensional coordinates of the vehicle's 3-dimensional bounding box vertices based on certain points derived from the visual image data 14v. As shown in Fig. 4B, the system and apparatus returns 7 image coordinates "IC" (based on a three-dimensional Cartesian coordinate system having "x", "y" and "z" axes, wherein z is the height axis relative to a planar surface, i.e., a street surface, ground, or a parking lot surface 101) of projected bounding box corners for a vehicle 120: rear-bottom-left, front-bottom-left, front-bottom-right, and the y-coordinate "bb4" of the front-top-left corner (2+2+2+1 = 7 coordinates). Such bounding box corners "bbc" relative to a vehicle are illustrated in Fig. 4B. Notably, at least bb1, bb2 and bb3 have separate 3-D coordinates having values of (x, y, z = 0), which may be used to establish a correlation of these three points to corresponding points of the virtual map of a parking lot, such that the position of the vehicle 120 relative to any point on the surface 101 of the parking lot can be established using only the virtual map and visual image data 14v obtained from one of the cameras 14.
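Because bb1, bb2 and bb3 all lie on the lot surface (z = 0), the vehicle's footprint dimensions follow directly from the distances between them. A minimal sketch, with hypothetical corner values for illustration:

```python
import math

def bbox_footprint(bb1, bb2, bb3):
    """Width and length of the vehicle footprint from the three ground
    corners: front-bottom-right bb1, front-bottom-left bb2 and
    rear-bottom-left bb3, each an (x, y, z = 0) point on surface 101."""
    width = math.dist(bb1[:2], bb2[:2])   # front edge: right corner to left corner
    length = math.dist(bb2[:2], bb3[:2])  # left edge: front corner to rear corner
    return width, length

# Illustrative corners for a roughly 1.8 m x 4.5 m vehicle 120.
width_m, length_m = bbox_footprint((1.8, 4.5, 0.0), (0.0, 4.5, 0.0), (0.0, 0.0, 0.0))
```

The same three points, once expressed in the map's coordinate frame, locate the vehicle relative to any parking space 102 stored in the data map.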
More particularly, from the 7 image coordinates IC obtained, the ground plane equation (where the coordinate axis "z" has a 'zero' value, 0) and the camera projection matrix P=K*R*[I-C], the system may reconstruct the dimensions of the actual 3-dimensional bounding box bbox of a vehicle, where K is the calibration matrix, R is rotation and C is shift. K is obtained from a known conventional calibration procedure using a chessboard camera calibration procedure, and R*[I-C] is a model-view matrix which was established using the video markup tool 130, i.e., when the IC of a vehicle and/or points or lines of interest (i.e., boundary lines defining the position of an allotment, or lines 106 which indicate the individual boundaries of each of the spaces 102 in a parking lot 100, or lines which indicate the individual boundaries of parking spaces on a section of a roadway 113, or a point such as the location of a structure, such as structure CS, or kiosk 15) are indicated by a user or operator of the system. With the above process, the instantaneous location of a vehicle 120 relative to the surface 101 of a parking lot may be ascertained and stored in a computer memory within the system/server 12 for immediate use by the system and/or recorded for later access and retrieval. The process may be used to concurrently track the location of one or more static and/or moving vehicles relative to the surface of the parking lot, with respect to other vehicles, and with respect to allotments present in the lot 100. The process also provides for the dynamic collection of information parameters relating to a static vehicle or a moving vehicle, and its respective position relative to the two or more said cameras, to any other object such as a static or moving vehicle present within the field of view of the cameras 14 forming part of the system, as well as the instantaneous status of allotments, i.e., empty or occupied.
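The composition P=K*R*[I-C] can be sketched in a few lines. The numeric values of K, R and C below are illustrative stand-ins; in the system, K comes from the chessboard calibration and R, C from the markup established with the video markup tool 130.

```python
# Minimal sketch of the projection matrix P = K * R * [I | -C] and of
# projecting a world point to pixel coordinates.  All numeric values are
# illustrative, not calibrated data.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def projection_matrix(K, R, C):
    # [I | -C] is the 3x4 matrix: identity columns followed by -C.
    I_C = [[1.0, 0.0, 0.0, -C[0]],
           [0.0, 1.0, 0.0, -C[1]],
           [0.0, 0.0, 1.0, -C[2]]]
    return matmul(matmul(K, R), I_C)

def project(P, X):
    """Project a world point (x, y, z) to pixel coordinates (u, v)."""
    x, y, z = X
    u, v, w = (sum(P[i][j] * h for j, h in enumerate((x, y, z, 1.0)))
               for i in range(3))
    return u / w, v / w

K = [[800.0, 0.0, 320.0],     # focal length and principal point (assumed)
     [0.0, 800.0, 240.0],
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # no rotation
C = (0.0, 0.0, -10.0)         # camera centre 10 units behind the origin

P = projection_matrix(K, R, C)
uv = project(P, (0.0, 0.0, 0.0))   # world origin projects to the principal point
```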
In the process of determining the location of points within a camera's field of view, such may be calculated using a "solvePnP" function, which may be implemented within a part of the system. The solvePnP function itself is known (see "Appendix A") and may be implemented using the system. From the results of the solvePnP function (or another suitable function), the relative position of one or more point(s) of interest and/or lines of interest in the real environment may be determined within the virtual map 155 stored within the system, having been previously generated by the video markup process and video markup tool 130.
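Once the pose is known, locating a ground point seen by a camera does not require the full PnP machinery: for points on the lot surface (z = 0) the 3x4 projection reduces to a 3x3 homography H, and a pixel can be back-projected to its ground position by inverting H. The sketch below is a hedged alternative illustration of that step; the matrix H is an assumed example, not calibrated data from the specification.

```python
# Back-project a pixel (u, v) to its (x, y) position on the z = 0 lot
# surface via the inverse of the ground-plane homography H (the columns of
# P = K*R*[I|-C] corresponding to x, y and the translation).

def invert3(H):
    """Adjugate-over-determinant inverse of a 3x3 matrix."""
    a, b, c = H[0]
    d, e, f = H[1]
    g, h, i = H[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

def pixel_to_ground(Hinv, u, v):
    """Map a pixel to homogeneous ground coordinates and dehomogenize."""
    x, y, w = (Hinv[i][0] * u + Hinv[i][1] * v + Hinv[i][2] for i in range(3))
    return x / w, y / w

# Illustrative ground homography (camera looking straight down the z axis).
H = [[800.0, 0.0, 3200.0],
     [0.0, 800.0, 2400.0],
     [0.0, 0.0, 10.0]]

ground_xy = pixel_to_ground(invert3(H), 400.0, 240.0)
```

In practice an off-the-shelf solver (e.g. OpenCV's `solvePnP`) returns the rotation and translation from the very 2D-3D correspondences the markup tool records, from which H follows.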
Using the virtual three-dimensional map 155 stored within the system, the instantaneous relative position of a vehicle 120 may be determined relative to one or more of the point(s) of interest and/or lines of interest in the real environment, i.e., as present in the stored 3-D map. Notably, this determination may be dynamically implemented by the system, i.e., intermittently (such as when the data correlating to one or more video image frames ("image data") is received by one or more of the cameras, which may correspond to the sampling rate of one or more cameras, viz., with each frame received by a camera, or where some of the frames are skipped and only intermittent frames received by the camera are used in the determination; ideally the sampling rate of frames received by a camera and used by the system is at an interval of between about 0.1 seconds and 10 seconds, preferably between about 0.2 seconds and 5 seconds) or continuously, and the determination is preferably made during the time that any motion within a field of view of one or more of the cameras is sensed; such sensing may be made by comparing one or more pixels or visual image areas (i.e., for brightness, change of shape) between two or more successive frames of image data received by one or more of the cameras which form part of the system. The system may then intercorrelate the real world position of point(s) of interest and/or lines of interest from the relative positional information from at least one camera 14, preferably at least two or more of the cameras 14, and any changes which occur within any of the fields of view, which may be correlated to the three-dimensional virtual map, from the vantage points of one or more of the cameras 14.
From the three-dimensional virtual map 155 of a parking lot, and the position of one or more pixels within the video image data 14v received by one or more of the cameras 14, where the pixels represent attributes of a vehicle within the field of view of one or more of the cameras, the status of the parking lot and vehicles present thereon can be continuously monitored for any changes. Responsive to such changes ('Events') which occur within any of the fields of view of one or more of the cameras 14 present within the system, the system and apparatus of the invention may undertake further steps and/or undertake operations ('Services') which may occur in response to certain Events. Non-limiting examples of such Services include one or more of: Acquisition Mode, Tracking Mode, Parking Mode, Parked Mode, Unparking Mode, Observation Mode.
The system and apparatus may have different modes of operation to both statically and dynamically determine the relative position of a vehicle, or its 3-dimensional bounding box, relative to positional information forming part of the three-dimensional virtual map, and may dynamically update the relative position of a vehicle (i.e., from a reconstruction thereof, or a bounding box thereof, or from one or more points determined from a bounding box thereof) when it is moving; i.e., the system may have a more rapid or accelerated operating mode wherein an increased rate or frequency of sampling of image data from one or more, preferably two or more, of the cameras is received and operated upon by the system, and optionally one or more responses or outputs may be generated by the system and used for any of a number of purposes. When no motion is sensed within the field of view of one or more of the cameras, the system may operate in a static mode wherein the frequency of sampling of image data from one or more, preferably two or more, of the cameras received and operated upon by the system is at a longer interval as compared to the dynamic mode of operation of the system described immediately above; again, optionally one or more responses or outputs may be generated by the system and used for any of a number of purposes.
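The mode selection above amounts to choosing a sampling interval based on whether motion is sensed. A minimal sketch, with the interval values taken from the preferred range stated earlier (about 0.2 s to 5 s); the exact figures are illustrative:

```python
# Choose the image-data sampling interval: a short interval in the dynamic
# (accelerated) mode when motion is sensed, a longer one in the static mode.

def sampling_interval_s(motion_sensed):
    DYNAMIC_INTERVAL_S = 0.2   # accelerated mode: motion within a field of view
    STATIC_INTERVAL_S = 5.0    # static mode: no motion sensed
    return DYNAMIC_INTERVAL_S if motion_sensed else STATIC_INTERVAL_S
```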
Fig. 5 illustrates a further schematic of an aspect of the invention, including the step of detecting and processing one or more images 14v of video data (i.e., a "video stream") obtained from one or more cameras 14, including generation 111a of the position of the bbox of the visible vehicles 120 relative to further points of interest available from the virtual map 155 of the parking lot 100, i.e., boundary lines 106 and the relative position of vehicles 120 within a parking space 102, as well as the allotment status of parking spaces 102, and correlating 112a the bbox of visible vehicles with the further points of interest and comparing such with prior correlations in order to determine if an Event has taken place. An Event 112b can be any change in the position or location of any vehicle, or its bbox data, which has occurred since a prior point in time. The system and apparatus of the invention may sense a change in the video image data 14v received by any of the cameras 14 which may trigger an Event, such as movement of a vehicle 120, or a change in the allotment status of a parking space 102. The system and apparatus of the invention may communicate this change and/or the incidence of an Event, which may cause the initiation of a Service.
Figure 6 provides a flowchart including a sequence of steps which relate to a process of the present invention, including discrete logical steps under which the system may operate, including the determination of an Event. With reference to Fig. 6, the steps include the following:
200: receive one frame of video image data from one or more cameras (14);
202: detect the three-dimensional (3D) positions of all cars using the three-dimensional (3D) detector;
204: find corresponding cars from the previous frame by searching for the closest rects (rectangles) on new frames;
206: conditional statement - is there a car "bbox" (3-dimensional bounding box, see Fig. 4B) that doesn't have a corresponding bbox (3-dimensional bounding box) on a previous frame (of video image data)?
208: detect the car's license plate and notify the system (or Service) of the newly detected car;
210: has any car's bbox intersected with a parking space rect (parking space rectangle; lines defining a parking space; lines defining an allotment)?
212: notify the system (or Service) that a car took a parking space;
214: has any car's bbox (3-dimensional bounding box) crossed the parking lot entrance?
216: notify the system (or Service) that a car entered the parking lot;
218: continue to the next batch processing (next step).
The foregoing steps 200, 202, 204, 206, 210, 214, 218 may be used to determine the occurrence of an ‘Event’ during a time interval, or between time intervals, and may be used to issue a notification of such Event, and/or to initiate a ‘Service’.
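The intersection tests of steps 210 and 212 can be sketched as an axis-aligned overlap check between a car's bbox footprint and each parking space rect, followed by a notification. The identifiers and the `notify` callback are illustrative assumptions:

```python
# Sketch of steps 210/212: test each car bbox footprint against each
# parking-space rect and notify the system (or Service) on intersection.
# Rects are (x_min, y_min, x_max, y_max) tuples in map coordinates.

def rects_intersect(a, b):
    """Axis-aligned overlap test between two rects."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def check_parking_events(car_bboxes, space_rects, notify):
    """Notify for each (car, space) pair whose rects intersect."""
    for car_id, car in car_bboxes.items():
        for space_id, space in space_rects.items():
            if rects_intersect(car, space):
                notify(car_id, space_id)

events = []
check_parking_events(
    {"car-1": (0.2, 0.5, 2.0, 4.8)},                          # a parked car's footprint
    {"102-1": (0.0, 0.0, 2.5, 5.0), "102-2": (2.5, 0.0, 5.0, 5.0)},
    lambda car, space: events.append((car, space)))
```

Step 214 (entrance crossing) is the same test with the entrance region in place of a parking space rect.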
Notwithstanding the foregoing, the determination as to whether an Event has occurred is not limited to the flowchart of Fig. 6, as other conditions or parameters may be sufficient to signal, trigger or otherwise indicate the occurrence of an event. Such expressly includes a difference between two different frames of image data 14v in the field of view of a camera 14, particularly any changes in the pixels or other visual aspects or features at two different times within the field of view of a camera 14.
Returning now to Fig. 2, several aspects and features of the system of the present invention will be described relative to several modes of operation of the system and apparatus. These “Modes” may be examples of a ‘Service’ which may be provided by the system and apparatus of the invention.
Acquisition Mode: An aspect of the invention is an acquisition mode wherein the system of the present invention makes an initial visual observation of an object of interest, here a moving vehicle. In the figure, this vehicle is identified as 120d, which is seen entering the region of the parking lot 100 via one of the two (ungated) entry/egress roadways 112. Concurrently, at least a part of the vehicle 120d enters into the field-of-view 140 of one or more of the mounted cameras 14. An acquisition mode is initiated as data received from the two or more cameras indicates that the vehicle 120d has newly entered a field-of-view 140. In the acquisition mode, the system collects information parameters selected from: the vehicle's overall size, color, manufacturer, model, door size, door type (sliding or non-sliding), condition, degree of window tinting, identification information such as a license plate, window stickers, direct relative distance to one or more of the mounted cameras 14, relative positioning of the vehicle to one or more positions, i.e., other vehicles already present within the parking lot 100, one or more of the parking spaces 102, one or more of the marked lines 106, or other visually discernible features of the vehicle 120d.
The information parameters are transmitted to the system/server 12, where they are processed to determine if an Event has occurred, i.e., according to the steps outlined in the flowchart of Fig. 6. However, the determination as to whether an Event has occurred is not limited to the flowchart of Fig. 6, as other conditions or parameters may be sufficient to signal, trigger or otherwise indicate the occurrence of an event; such expressly includes a difference between two different frames of image data 14v in the field of view of a camera 14, particularly any changes in the pixels or other visual aspects or features at two different times within the field of view of a camera 14. The resulting data extracted from the steps provides one or more information parameters which can then be checked against the records stored on a database interacting with the system/server, or which alternately forms part of the system/server 12. Where it is determined that the information parameters of vehicle 120d do not correspond to an already existing record of a vehicle stored in the database, the system may generate a new record utilizing the newly acquired information parameters and store this in the database for subsequent use. Optionally but preferably, the system operates to initiate a higher density pixel recognition mode in one or more of the mounted cameras 14 in order to provide a higher quality video input and thereby improved data collection of the vehicle 120d; optionally and preferably, additional information parameters are collected for this vehicle 120d and added to the new record, which now may assign a unique identifier to vehicle 120d. The unique identifier may be used in subsequent operations to thereby provide improved indexing of the database in subsequent steps, or on other occasions wherein the vehicle 120d returns to the parking lot 100 after having exited it.
Tracking Mode: An aspect of the invention is a tracking mode wherein the system of the present invention makes a sequence of visual observations of a moving object of interest, here a moving vehicle. In the figure, this vehicle is identified as 120c, which is seen in motion in the parking lot 100, specifically between rows of parking spaces 102, within the intermediate roadway sections 113 which are present in order to allow for the travel of one or more vehicles between parking spaces 102. In operation, in the tracking mode, a time sequenced series of captured video images is concurrently received from one or more of the mounted cameras 14, and each frame of the images is processed to determine if an Event has occurred, i.e., according to the steps outlined in the flowchart of Fig. 6. The resulting data extracted from the sequence of images captured by at least one camera 14 provides one or more information parameters relating to vehicle 120c, and may include not only one or more of the information parameters but expressly includes the relative position of vehicle 120c to other objects within the parking lot 100.
Parking Mode: A further aspect of the invention is a parking mode wherein the system of the present invention makes a series of visual observations of a moving object of interest, here again a moving vehicle, again identified as 120c. This mode becomes operational as a subset of the tracking mode where it is determined from the series of time sequenced captured video images of the moving vehicle 120c that it is in close proximity to one or more of the parking spaces 102 and that the direction of travel of the vehicle 120c has changed in angular direction. In operation, in the parking mode, a time sequenced series of captured video images is concurrently received from a mounted camera 14, and each frame of the images is processed according to the steps outlined in the flowchart of Fig. 6. The resulting data extracted from the sequence of images captured by the mounted cameras 14 provides one or more information parameters relating to vehicle 120c during the parking mode and, in particular, its positioning relative to one or more of the parking spaces 102 and/or one or more of the marked lines 106 which indicate the individual boundaries of one or more of the parking spaces 102 proximate to vehicle 120c. The parking mode may remain operational until it is sensed that, for a first timing interval, motion of the vehicle 120c has ceased (i.e., for 10 seconds, 30 seconds, 60 seconds, or more), indicating a high likelihood that the vehicle has come to a stop within one or more of the parking spaces 102. Optionally but preferably, during this first timing interval or shortly thereafter, movement in the immediate proximity of the vehicle 120c (such as the opening of a door, rear tailgate, trunk) or physical movement of the vehicle 120c takes place, and is sensed by two or more of the mounted cameras 14.
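The stop test that ends the parking mode can be sketched as a comparison of elapsed time against the first timing interval. The 30-second default is one of the example values given in the text; the function name is an assumption:

```python
# Sketch of the Parking Mode stop test: the vehicle is treated as parked
# once no motion has been sensed for a full timing interval (here 30 s;
# the text allows 10 s, 30 s, 60 s, or more).

def is_parked(last_motion_time_s, now_s, interval_s=30.0):
    """True when motion ceased for at least one full timing interval."""
    return (now_s - last_motion_time_s) >= interval_s
```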
At such a time, the data captured by the cameras 14 may be correlated to a time or time-stamp as a datum, which may be added to a database record associated with vehicle 120c, which datum may be the beginning of a parked time interval. At such time, data captured may be transmitted to the kiosk 15, which includes a visual indicator providing data relating to one or more of: the parking lot 100; parking spaces 102 on the parking lot 100 (i.e., a map); the position of the now parked vehicle 120c within the parking lot 100; and an indication of the time or time-stamp datum. Optional further information may also be supplied, such as instructions relevant to the use of the parking lot 100, and/or payment options where the use of the parking lot 100 is not available free of cost. The data captured, as well as further information, may also be made available to a handheld device 210, including the requirement for payment for the use of the parking lot, including the fee required.
Parked Mode: A further aspect of the invention is a Parked Mode which, preferably, is entered into by the system subsequent to a Tracking Mode or a Parking Mode (preferably a Parking Mode). Here, the system of the present invention makes a series of visual observations of a static object of interest, here a non-moving vehicle identified as 120a. While this mode may become operational at any time, preferably it becomes operational subsequent to a Parking Mode, which may be used to determine the status of the vehicle 120a relative to a parking space 102 and/or one or more of the marked lines 106. In this mode, the data capture of the mounted cameras 14 maintains visual observation of the non-moving vehicle 120a and remains in this state until movement in the immediate proximity of the vehicle 120a (such as the opening of a door, rear tailgate, trunk) or physical movement of the vehicle 120a takes place and is sensed by two or more of the mounted cameras 14. Thereafter, for a short time interval, the data captured by the cameras 14 may be correlated to a time or time-stamp as a datum, which may be added to a database record associated with vehicle 120a.
Unparking Mode: A further aspect of the invention is an Unparking Mode which, preferably, is entered into by the system subsequent to a Parked Mode. Here, the system of the present invention makes a series of visual observations regarding a static object of interest, here again a non-moving vehicle identified as 120a. While this mode may become operational at any time, preferably it becomes operational subsequent to a Parked Mode, which may be used to determine the status of the vehicle 120a relative to a parking space 102 and/or one or more of the marked lines 106. In this mode, the data capture of the mounted cameras 14 maintains visual observation of the non-moving vehicle 120a and remains in this state until movement in the immediate proximity of the vehicle 120a (such as the opening of a door, rear tailgate, trunk) or physical movement of the vehicle 120a takes place and is sensed by two or more of the mounted cameras 14 for a second interval (i.e., 10 seconds, 30 seconds, 60 seconds, or more), and it is observed that the relative position of the vehicle 120a is now changed relative to its position in the parking lot 100 established during the prior Parked Mode. At such a time, the data captured by the cameras 14 may be correlated to a time or time-stamp as a datum, which may be added to a database record associated with vehicle 120a, which datum may mark the end of the parked time interval. Where the use of the parking lot 100 is not available free of cost, optional further information may also be supplied to the contemporaneous record of the vehicle 120a in the database, such as whether a payment is to be charged to the registered owner of the vehicle 120a, or whether a payment is to be withdrawn from a bank or other financial account linked to the registered owner of the vehicle 120a, or from another source of payment associated with the vehicle 120a in the database.
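Where payment applies, the two time-stamp data recorded at the start and end of the parked time interval suffice to compute a fee. The rate and the charge-per-started-hour policy below are assumptions for illustration, not terms taken from the specification:

```python
import math

# Hypothetical fee computation from the parked-interval time stamps recorded
# during the Parking/Parked and Unparking Modes.  Rate and rounding policy
# are illustrative assumptions.

def parking_fee(parked_at_s, unparked_at_s, rate_per_hour=2.50):
    """Charge per started hour between the two time-stamp data."""
    hours = math.ceil((unparked_at_s - parked_at_s) / 3600.0)
    return max(hours, 1) * rate_per_hour
```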
Subsequent to an Unparking Mode, with regard to the vehicle 120a, the system returns to and operates according to the Tracking Mode until the vehicle exits the field of view 140 of the mounted cameras 14.
Observation Mode: Regardless of the moving or static status of any objects within the fields of view of two or more of the cameras 14 having fields of view 140 extending into the parking lot 100, i.e., of either moving or static vehicles, at any time the system may enter into an Observation Mode wherein data concerning any such objects may be collected, processed by the computer system/server 12 and used to generate new records within the database and/or be compared against existing records within the database, and as a result, initiate a further action upon satisfaction of certain conditionals. According to a first conditional, datum collected as information parameters relating to a static vehicle or a moving vehicle by the cameras utilized in the improved machine visioning process of the invention includes one or more visual parameters which may include, but are not limited to, the vehicle's overall size, color, manufacturer, model, door size, door type (sliding or non-sliding), condition, degree of window tinting, and direct relative distance to one or more of the monitoring cameras. These one or more datum may be stored in a database record, and indexed to the particular vehicle observed. The database record of the vehicle preferably also includes registration information such as the state, license plate number, a Vehicle Identification Number (VIN), and status of current vehicle registration as may be evidenced by stickers placed upon portions of a license plate and/or upon portions of a windshield. This information is visibly discernible, particularly where high-resolution image capturing is used by one or more of the mounted cameras 14; such data may be captured and stored in the database record, concurrently with a date stamp of the date wherein this datum was acquired. Such provides a nonlimiting example of visual information parameters collectible by the system of the invention which can be correlated to one or more non-visual parameters associated with the current vehicle registration.
The datum collected using the system of the present invention, and stored as one or more fields within a database record within a database which forms part of the system of the invention, can be compared against further external databases and/or datum, such as: databases or data relating to a vehicle history report; a vehicle registration report including past and present owners; a police report correlating the vehicle against any outstanding violations or reports of stolen vehicles; or a vehicle credit report including the status of any liens upon the vehicle. Such information may be used to generate a further report or further output resulting from such a comparison of datum collected according to the present invention by the system against further external databases and/or datum.
Further datum collected using the system of the present invention may include the status of the physical condition of the vehicle, or changes in its physical condition, i.e., whether the vehicle has been repaired following an accident, whether the vehicle had been in an accident, or whether the vehicle has been otherwise altered, i.e., as a result of modification, vandalism or graffiti, and such datum can be stored in the system and/or compared to prior datum stored within the system. According to a second conditional, datum collected as information parameters relating to a static vehicle or a moving vehicle by the cameras utilized in the improved machine visioning process of the invention may also be used to determine the proximity of the location of an object of interest, i.e., a vehicle relative to two or more of the cameras 14 used with the system, to a point of interest. For example, where the location of a vehicle is ascertained to be in the near proximity of a point of interest, such as a point of interest which may provide a useful Service with respect to the continued operation of the vehicle, or which may provide a useful product or service to one or more persons within the vehicle, the then-current location of the vehicle and the fact of its corresponding near proximity to such a point of interest (which may be, for example, indicated by a distance, or estimated minutes of travel) can be transmitted to a portable mobile device such as a "smartphone" 210 having a suitable display which would provide information to an end user, or a similar such portable mobile device which may be installed as part of the vehicle.
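The second conditional can be sketched as a distance check between the vehicle's map position and a point of interest, producing the distance and an estimated travel time for transmission to a device 210. The radius, travel speed and field names are illustrative assumptions:

```python
import math

# Sketch of the proximity conditional: flag a vehicle whose map position is
# within a radius of a point of interest and report distance plus an
# estimated travel time (here on foot).  All parameters are assumed values.

def proximity_notice(vehicle_xy, poi_xy, radius_m=200.0, speed_mps=1.4):
    """Return a notice dict when the vehicle is near the point of interest,
    otherwise None."""
    d = math.dist(vehicle_xy, poi_xy)
    if d > radius_m:
        return None
    return {"distance_m": round(d, 1),
            "eta_min": round(d / speed_mps / 60.0, 1)}
```

Note that, as the text observes, this works from camera-derived map positions alone, without any GPS sensor on the vehicle.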
Such may be achieved by comparing the database record of a vehicle, and any of the foregoing modes to one or more records of one or more external databases relating to relative locations of any objects of interest, and as a result of such comparison, initiating a communication which will provide the foregoing indication to an end user via a portable mobile device. Notably the foregoing is effective without reliance upon a Global Positioning Device or any related sensors, but is operational solely in accordance with the system provided as an aspect of the present invention.
Further aspects of the invention are described with reference to Fig. 7. Therein is depicted a representation of a common roadway 170, viz. a "street" between parallel curbs 110 as is commonly encountered in North America, which includes portions delineated into parking spaces 102 adjacent to curbs 110 and delineated by series of parallel marked lines 106, and an intermediate roadway section 113 which is present in order to allow for the travel of one or more vehicles between the parallel rows 104 of adjacent parking spaces 102. Visible also in the figure is a representative series of building fronts 172 spaced away from the curb, one of which has mounted thereon a first camera 14 having an angular or conical field-of-view 140, and on the opposite side of the street a second, pole-mounted camera 14 having its own angular conical field-of-view 140. Adjacent thereto is a kiosk 15. As is visible from the drawings, and as is to be understood from the general principles discussed with reference to Fig. 2, this pair of cameras 14 operates in a manner very similar to that described with reference to Fig. 2. The system of the invention receives input from one or more of the cameras 14 and processes the received data in accordance with the steps outlined previously, in order to derive datum collected as information parameters relating to any of the vehicles passing within the fields of view 140 of the cameras 14. Such includes the determination of whether an 'Event' has occurred. In the depiction of Fig. 7, the "parked" vehicle 112e is currently subjected to a Parked Mode and may concurrently be subjected to an Observation Mode. The "moving" vehicle 112f is currently subjected to an Acquisition Mode and/or a Tracking Mode and optionally an Observation Mode.
The further "parked" vehicle 112g is currently subjected to a Parked Mode and may concurrently be subjected to an Observation Mode, as well as a conditional, as the relative placement of the vehicle 112g indicates that it is not only extending into two adjoining parking spaces 102, but also is occupying a no-parking space 105 visibly marked by diagonal parallel lines. Where the system of the invention operates according to the conditional, the system of the invention may initiate an output which communicates with the local parking authority and/or the local police authority to indicate the parking violation/infraction of vehicle 112g.
The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Appendix A

Claims

1. A system and apparatus operable to monitor one or more parking spaces and one or more vehicles, wherein the one or more vehicles and the one or more parking spaces are both within the field of view of at least one static, vertically mounted camera which provides video image data to a computer system/server and computer readable media or storage which is used in storing data derived from a video markup tool and from data derived from video image data received from the at least one camera, the computer system/server operable to execute program modules, to receive video image data from the at least one camera, to output a response subsequent to the execution of instructions of one or more program modules, and to communicate with one or more external devices; the system and apparatus comprising a video markup tool which operates in determining the positioning of one or more vehicles relative to one or more individual parking spaces separated and delineated by visible boundary markings in a parking lot and which generates a data map of the parking lot which is subsequently used by the computer system/server to determine the relative positioning of one or more vehicles present within the field of view of the at least one camera, and to determine the occurrence of any Events occurring periodically within the field of view; the computer system/server operable to continuously monitor video image data received from the at least one camera and determine the relative positioning of the one or more vehicles present within the field of view by monitoring video image data received from the at least one camera, generate a bbox corresponding to a vehicle within the field of view, correlate the position of the bbox with the data map of the parking lot to determine the physical location of the vehicle relative to one or more points of interest represented in the data map, and, responsive to the occurrence of an Event, undertake further steps, operations, or provide a Service.
2. The system and apparatus of claim 1, wherein the video markup tool is implemented within the computer system/server and is operated by a user who interfaces with the video markup tool to generate the initial data map from and of image data in the field of view of a camera, wherein the user correlates points of interest visible within the field of view of a camera, whose image data is also visible on a peripheral device, with a 2-D map to thereby assign coordinate values of the points of interest in the field of view to the 2-D map representation, which data is stored within a data structure as part of a data map of the parking lot.
3. The system and apparatus of claim 2, wherein the coordinate values of points of interest stored as part of the data map include both 2-D and 3-D coordinates of the points of interest.
4. The system and apparatus of any one of claims 1-3, wherein the computer system/server generates the bbox of a vehicle including data regarding the physical size (in 3-D) of the vehicle, wherein the bbox is generated using at least 3 corners, rear-bottom-left “bb3”, front-bottom-left “bb2”, and front-bottom-right “bb1”, derived from video image data from a camera.
5. The system and apparatus of claim 4, wherein the locations of one or more points of the bbox are correlated by the computer system/server to the coordinates stored in the data map generated using the markup tool.
6. The system and apparatus of any preceding claim, wherein the system and apparatus comprises two or more static, vertically mounted cameras.
7. The system and apparatus of any preceding claim, wherein responsive to an Event, the system and apparatus initiates a Service selected from: Acquisition Mode, Tracking Mode, Parking Mode, Parked Mode, Unparking Mode, Observation Mode.
8. A process for initiating provision of a Service in response to an Event, the process comprising the steps of: operating a system and apparatus according to any preceding claim to determine the occurrence of an Event, and in response thereto initiating provision of a Service selected from: Acquisition Mode, Tracking Mode, Parking Mode, Parked Mode, Unparking Mode, Observation Mode.
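As a sketch of the bbox-to-data-map correlation recited in the claims: claim 4 names three bottom corners, rear-bottom-left “bb3”, front-bottom-left “bb2”, and front-bottom-right “bb1”, from which a ground footprint can be completed, and claim 5 correlates those points to data-map coordinates. The use of a 3x3 pixel-to-map homography below is an assumption for illustration; the claims do not specify the mapping stored by the markup tool.

```python
# Illustrative only: footprint completion and pixel-to-map projection.
# The homography representation is an assumption, not claim language.

def complete_footprint(bb1, bb2, bb3):
    """bb2 (front-bottom-left) is adjacent to both bb1 (front-bottom-right)
    and bb3 (rear-bottom-left); the rear-bottom-right corner follows by
    completing the parallelogram."""
    bb4 = (bb1[0] + bb3[0] - bb2[0], bb1[1] + bb3[1] - bb2[1])
    return [bb1, bb2, bb3, bb4]

def pixel_to_map(H, point):
    """Project a pixel (x, y) through a 3x3 homography H (nested lists)
    into the 2-D data-map plane."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

The projected corners could then be compared against the stored points of interest to locate the vehicle relative to marked spaces, as claim 1 recites.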
PCT/US2020/057614 2019-10-29 2020-10-28 System, apparatus and method of provisioning allotments utilizing machine visioning WO2021086884A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201962927448P 2019-10-29 2019-10-29
US62/927,448 2019-10-29
US201962952834P 2019-12-23 2019-12-23
US62/952,834 2019-12-23

Publications (1)

Publication Number Publication Date
WO2021086884A1 (en) 2021-05-06

Family

ID=75715569

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/057614 WO2021086884A1 (en) 2019-10-29 2020-10-28 System, apparatus and method of provisioning allotments utilizing machine visioning

Country Status (1)

Country Link
WO (1) WO2021086884A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113327453A (en) * 2021-05-27 2021-08-31 山东巍然智能科技有限公司 Parking lot vacancy guiding system based on high-point video analysis
CN114141041A (en) * 2021-10-20 2022-03-04 江铃汽车股份有限公司 Remote parking control method, system, storage medium and device for automobile

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080101656A1 (en) * 2006-10-30 2008-05-01 Thomas Henry Barnes Method and apparatus for managing parking lots
US20140200970A1 (en) * 2012-08-06 2014-07-17 Cloudparc, Inc. Controlling Use of Parking Spaces Using a Smart Sensor Network
US20140214500A1 (en) * 2013-01-25 2014-07-31 Municipal Parking Services Inc. Parking lot monitoring system
US20150086071A1 (en) * 2013-09-20 2015-03-26 Xerox Corporation Methods and systems for efficiently monitoring parking occupancy
US20170124874A1 (en) * 2015-10-30 2017-05-04 International Business Machines Corporation Real-time indoor parking advisor
US9773413B1 (en) * 2014-09-16 2017-09-26 Knighscope, Inc. Autonomous parking monitor
US20170325082A1 (en) * 2009-12-11 2017-11-09 Mentis Services France Providing city services using mobile devices and a sensor network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YUSNITA ET AL.: "Intelligent Parking Space Detection System Based on Image Processing", INTERNATIONAL JOURNAL OF INNOVATION, MANAGEMENT AND TECHNOLOGY, vol. 3, no. 3, 3 June 2012 (2012-06-03), XP055602852, Retrieved from the Internet <URL:http://www.ijimt.org/papers/228-G0038.pdf> [retrieved on 20201229] *


Similar Documents

Publication Publication Date Title
US11879977B2 (en) System of vehicles equipped with imaging equipment for high-definition near real-time map generation
US20230016568A1 (en) Scenario recreation through object detection and 3d visualization in a multi-sensor environment
EP3759562B1 (en) Camera based localization for autonomous vehicles
EP3581890B1 (en) Method and device for positioning
US10077054B2 (en) Tracking objects within a dynamic environment for improved localization
CN110287276A (en) High-precision map updating method, device and storage medium
Thornton et al. Automated parking surveys from a LIDAR equipped vehicle
US9483944B2 (en) Prediction of free parking spaces in a parking area
US20180301031A1 (en) A method and system for automatically detecting and mapping points-of-interest and real-time navigation using the same
US9083856B2 (en) Vehicle speed measurement method and system utilizing a single image capturing unit
EP3769507A1 (en) Traffic boundary mapping
CN111276007A (en) Method for positioning and navigating automobile in parking lot through camera
US11625706B2 (en) System and method for location-based passive payments
US20220137630A1 (en) Motion-Plan Validator for Autonomous Vehicle
WO2021086884A1 (en) System, apparatus and method of provisioning allotments utilizing machine visioning
CN114399916A (en) Virtual traffic light control reminding method for digital twin smart city traffic
CN115236694A (en) Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium
KR20200013156A (en) Method and system for improving signage detection performance
WO2020210960A1 (en) Method and system for reconstructing digital panorama of traffic route
CN115131986A (en) Intelligent management method and system for closed parking lot
US20210097587A1 (en) Managing self-driving vehicles with parking support
US11132900B2 (en) Vehicular parking location identification
Sukhinskiy et al. Developing a parking monitoring system based on the analysis of images from an outdoor surveillance camera
KR102317311B1 (en) System for analyzing information using video, and method thereof
US20210405641A1 (en) Detecting positioning of a sensor system associated with a vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20881151

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20881151

Country of ref document: EP

Kind code of ref document: A1