AU2020101919A4 - Technology configured to provide real-time practical interpretation of parking signage - Google Patents


Info

Publication number
AU2020101919A4
AU2020101919A4 (application AU2020101919A)
Authority
AU
Australia
Prior art keywords
parking
image
data
sign
computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2020101919A
Inventor
Sam Pinner
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Can-I-Park Pty Ltd
Original Assignee
Can I Park Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2019903020A external-priority patent/AU2019903020A0/en
Application filed by Can I Park Pty Ltd filed Critical Can I Park Pty Ltd
Application granted granted Critical
Publication of AU2020101919A4 publication Critical patent/AU2020101919A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/14: Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G08G1/145: Traffic control systems for road vehicles indicating individual free spaces in parking areas where the indication depends on the parking areas
    • G08G1/146: Traffic control systems for road vehicles indicating individual free spaces in parking areas where the indication depends on the parking areas where the parking area is a limited parking space, e.g. parking garage, restricted space

Abstract

The present invention relates, in various embodiments, to the provision of real-time practical interpretation of parking signage. Embodiments of the invention are primarily directed to technology that supports extraction of information from parking sign arrays comprised of one or more parking sign panels, with some embodiments providing a user interface in the form of a mobile device software application which assists a user in making informed decisions regarding parking rules, and some embodiments used for the purposes of data collection and mapping. In some embodiments the technology allows for payment of parking fees.

Description

TECHNOLOGY CONFIGURED TO PROVIDE REAL-TIME PRACTICAL INTERPRETATION OF PARKING SIGNAGE

FIELD OF THE INVENTION
[0001] The present invention relates, in various embodiments, to the provision of real-time practical interpretation of parking signage. Embodiments of the invention are primarily directed to technology that supports extraction of information from parking sign arrays comprised of one or more parking sign panels, with some embodiments providing a user interface in the form of a mobile device software application which assists a user in making informed decisions regarding parking rules, and some embodiments used for the purposes of data collection and mapping. In some embodiments the technology allows for payment of parking fees. While some embodiments will be described herein with particular reference to those applications, it will be appreciated that the invention is not limited to such a field of use, and is applicable in broader contexts.
BACKGROUND
[0002] Any discussion of the background art throughout the specification should in no way be considered as an admission that such art is widely known or forms part of common general knowledge in the field.
[0003] Parking signs are used to inform people in relation to rules and regulations regarding parking (for example on-street parking) in specified locations. These have in recent years become increasingly complex, as the rules and regulations may vary significantly between times of day, days of the week, and other factors. This has made understanding of signage challenging for many people, often resulting in poor decision making.
[0004] A partial solution has been proposed by mapping an area based on identification of parking rules and regulations at each location in the area, and allowing a user to obtain location-specific parking information via GPS tracking. That is, a user's location is identified relative to a map, and the map is associated with location specific parking information. This is an imperfect solution for a few reasons. Firstly, there are challenges associated with changing signage (for example temporary signage), as changes would need to be incorporated into back-end databases that maintain geospatial parking regulation data.
Secondly, potential for GPS inaccuracies (especially in areas with high-rise buildings and the like) can result in incorrect information being delivered to a user.
SUMMARY OF THE INVENTION
[0005] It is an object of the present invention to overcome or ameliorate at least one of the disadvantages of the prior art, or to provide a useful alternative.
[0006] Example embodiments are described below in the section entitled "claims".
[0007] Reference throughout this specification to "one embodiment", "some embodiments" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment", "in some embodiments" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment, but may be. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more embodiments.
[0008] As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
[0009] In the claims below and the description herein, any one of the terms comprising, comprised of or which comprises is an open term that means including at least the elements/features that follow, but not excluding others. Thus, the term comprising, when used in the claims, should not be interpreted as being limitative to the means or elements or steps listed thereafter. For example, the scope of the expression a device comprising A and B should not be limited to devices consisting only of elements A and B. Any one of the terms including or which includes or that includes as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, including is synonymous with and means comprising.
[0010] As used herein, the term "exemplary" is used in the sense of providing examples, as opposed to indicating quality. That is, an "exemplary embodiment" is an embodiment provided as an example, as opposed to necessarily being an embodiment of exemplary quality.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:
[0012] FIG. 1A provides a representation of a mobile device based implementation of technology described herein according to one embodiment.
[0013] FIG. 1B provides a representation of a mobile device based implementation of technology described herein according to one embodiment.
[0014] FIG. 2 illustrates a system according to one embodiment.
[0015] FIG. 3A to FIG. 3C illustrate example sign arrays.
[0016] FIG. 4 illustrates a method according to one embodiment.
[0017] FIG. 5A illustrates a method according to one embodiment.
[0018] FIG. 5B illustrates a method according to one embodiment.
[0019] FIG. 6 illustrates an example computer system.
[0020] FIG. 7A and FIG. 7B provide example illustrative app screenshots.
[0021] FIG. 8 illustrates a process flow according to one embodiment.
DETAILED DESCRIPTION
[0022] The present invention relates, in various embodiments, to the provision of real-time practical interpretation of parking signage. Embodiments of the invention are primarily directed to technology that supports extraction of information from parking sign arrays comprised of one or more parking sign panels, with some embodiments providing a user interface in the form of a mobile device software application which assists a user in making informed decisions regarding parking rules, and some embodiments used for the purposes of data collection and mapping. In some embodiments the technology allows for payment of parking fees. While some embodiments will be described herein with particular reference to those applications, it will be appreciated that the invention is not limited to such a field of use, and is applicable in broader contexts.
Mobile Device Embodiments - Overview
[0023] In overview, some embodiments described herein relate to application of computer technology, including image analysis technology, to provide on-location point-in-time practical interpretation of parking regulation signage with low latency, allowing for substantially real-time feedback to a user of a mobile device. For example, a user of the mobile device interacts with an app that is configured to capture image data of a parking sign, and display a simplified point-in-time interpretation of current parking opportunities (for example "you can park here for the next 30 minutes"). FIG. 7A and FIG. 7B provide illustrative examples of app screenshots, the former showing image capture of a street sign and the latter showing a simplified point-in-time interpretation of current parking opportunities. This optionally leverages artificial intelligence image similarity processing technologies and/or augmented reality image overlay technology.
[0024] In other embodiments, image processing technologies described below are used for alternate purposes in addition or alternately to delivering information to a user of a mobile device. For example, in some embodiments, image collection and analysis thereby to interpret the content of parking signs is used for the purposes of collecting information, with GPS data associated with image files being used thereby to enable mapping of parking rules (for example thereby to enable overlay of such rules on a map). In some embodiments the determined parking rules are overlaid on a map in either or both of the following formats: time agnostic parking rules, which represent the content of signage and rules covering various possible times; and time specific rules, which represent rules active at a current point in time (to the exclusion of other points in time).
[0025] Examples are described below primarily by reference to a scenario where a mobile software application (i.e. a collection of computer executable code stored on a memory device) is executed via one or more processors of a mobile device (for example a smartphone or tablet device having a camera module). The mobile device may be, for example, a device with an Android or iOS type operating system. The mobile device is connected to a network (for example a WiFi network or cellular telecommunications network), which allows the mobile software application to communicate with a server device that is configured to perform cloud-based processing steps to support the operation of the mobile application. It will be appreciated that various steps described herein may be shifted between local and cloud-based processing in further embodiments.
[0026] According to one embodiment, the software application is configured to enable execution of a computer implemented method configured to provide a practical interpretation of parking regulations. The term "parking regulations" refers to predefined rules regarding positioning of a vehicle in a defined area, as stipulated by signage. Parking regulations are described as including:
• A regulation condition, which may be "no parking", "no stopping", "time-limited parking allowed", "metered parking allowed", "two hour parking", and so on. Each regulation condition has a defined set of practical regulations, which may vary between jurisdictions.
• Regulation operative times associated with each regulation condition, which define time periods (including times of day, days of the week, and so on), which define when that regulation condition is operative.
• A directional condition (for example defined by an arrow or other marker), indicating whether a regulation applies in a left direction, a right direction, or in both directions.
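The three-part regulation structure above can be sketched as a simple data model. This is an illustrative sketch only; the class and field names (`Regulation`, `OperativeTime`, and so on) are assumptions, not terms from the specification:

```python
from dataclasses import dataclass
from datetime import datetime, time
from enum import Enum
from typing import List


class Direction(Enum):
    LEFT = "left"
    RIGHT = "right"
    BOTH = "both"


@dataclass
class OperativeTime:
    """A window during which a regulation condition is operative."""
    days: List[int]  # 0 = Monday ... 6 = Sunday
    start: time
    end: time

    def covers(self, when: datetime) -> bool:
        return when.weekday() in self.days and self.start <= when.time() < self.end


@dataclass
class Regulation:
    """One regulation condition, e.g. '2P' or 'NO STOPPING'."""
    condition: str
    operative_times: List[OperativeTime]
    direction: Direction = Direction.BOTH

    def is_operative(self, when: datetime) -> bool:
        return any(t.covers(when) for t in self.operative_times)
```

A sign array would then map to a list of such records, one per regulation condition, which downstream steps can query for a given point in time.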
[0027] It is common for complex cascading rules to be present. Furthermore, in some cases rules are too complex to be defined on a single sign, and a parking instruction sign array is present, being in essence a tiled array of multiple signs which in combination describe parking regulations for a particular area. Examples are provided in FIG. 3A to FIG. 3C, which illustrate, respectively: a single-sign array, a two-sign array, and a three-sign array.
[0028] The method includes operating an image input module configured to enable user selection of an input image available in memory of a mobile device, the input image containing at least one parking instruction sign array. There are various technological approaches by which this is optionally achieved, and some or all of these may be present in a given embodiment. Two examples include:
• Selection of a pre-existing image stored in device memory.
• Accessing an image capture device of the mobile device, thereby to enable capture of a new image (which is then stored in memory of the mobile device). The capture may be user triggered (for example via a button press) or automated (for example an image-based object identification algorithm is configured to identify a predicted image area of a sign array, and trigger image capture in response).
[0029] In the case of the latter, in one embodiment a graphical artefact identification algorithm is executed in respect of a stream of input image data provided by the image capture device, thereby to identify in the image capture device a region predicted to contain the parking instruction sign array. This algorithm, for example, searches for substantially rectilinear shapes having defined attributes. Such an algorithm may be trained via an AI and/or machine learning process specifically for identification of sign arrays.
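As an illustrative sketch of the "substantially rectilinear" test such an algorithm might apply once candidate corner points have been found (the function name and the 15-degree tolerance are assumptions, not from the specification):

```python
import math


def is_substantially_rectilinear(corners, angle_tolerance_deg=15.0):
    """Check whether four corner points form a near-rectangular quadrilateral.

    `corners` is a list of four (x, y) points in order around the shape.
    Each interior angle must be within `angle_tolerance_deg` of 90 degrees.
    """
    if len(corners) != 4:
        return False
    for i in range(4):
        ax, ay = corners[i - 1]            # previous corner
        bx, by = corners[i]                # corner at which the angle is measured
        cx, cy = corners[(i + 1) % 4]      # next corner
        v1 = (ax - bx, ay - by)
        v2 = (cx - bx, cy - by)
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0:
            return False
        cos_angle = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
        if abs(angle - 90.0) > angle_tolerance_deg:
            return False
    return True
```

In practice a trained detector would propose the candidate regions; a geometric check of this kind is one plausible "defined attribute" filter.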
[0030] In some embodiments, an augmented reality process is used, in which case a live display of image capture remains rendered on-screen of a mobile device, with one or more automatically selected and captured frames from that live stream being stored in memory and used for the purposes of image processing as described below.
[0031] The method of the mobile device software application includes triggering an image analysis process. For example, this triggering occurs following capture of an image that is to be analysed. The image analysis process is configured to:
(i) Identify in the input image a region of the image that includes the parking instruction sign array. This optionally includes a pre-processing step, thereby to enable removal of irrelevant background information (e.g. via cropping) from the image prior to subsequent analysis steps.
(ii) Analyse the identified region of the image thereby to determine graphical attributes of the parking instruction sign array. This may include a range of techniques, as discussed in more detail further below.
(iii) Perform a parking regulation extraction process thereby to determine a current parking regulation data set for a point-in-time associated with the triggering of the image analysis process. This is also discussed in more detail further below.
[0032] The method then includes causing delivery of data representative of the determined current parking regulation data set via an output device of the mobile device. For example, this may include one or more of: textual information; graphics overlaid on a static image; or graphics overlaid on a dynamic image (e.g. via augmented reality technology).
Example Mobile Device Application
[0033] The example of FIG. 1A illustrates a mobile device 120 having a display screen 121 on which a user interface 123 is rendered. An example sign array 180 is positioned within a field of view of an image capture device of mobile device 120, and a rendered digital image 124 (for example within a stream of digital images) captured by the image capture device is displayed by user interface 123. In the example of FIG. 1B, a boundary graphic 129 is superimposed via the user interface to visually demonstrate to a user that a sign array has been identified, and indicate that a sign interpretation process is underway. In the example of FIG. 1C, the sign interpretation process has been completed, and a left-side graphical overlay 121 and right side graphical overlay 122 are displayed by the user interface thereby to provide a practical interpretation of current point-in-time parking regulations. This process is described in more detail further below.
[0034] The example of FIG. 1A to FIG. 1C may represent a static captured image, or a live-capture display. In the case of the former, application of overlays 121 and 122 includes applying overlays to a static image. In the case of the latter, application of overlays 121 and 122 includes operating an augmented reality module (for example based on Vuforia, ARKit or ARCore) thereby to track the location of the displayed sign array and position renderings of AR overlays relative to that sign array. It will be appreciated that where AR technologies are used more complex overlays are possible, for example highlighting areas of a street via AR overlay technologies thereby to indicate parking rules.
[0035] FIG. 2 illustrates a mobile device system 120 and server system 130 by reference to various modules. The term "module" refers to a software component that is logically separable (a computer program), or a hardware component. The module of the embodiment refers to not only a module in the computer program but also a module in a hardware configuration. The discussion of the embodiment also serves as the discussion of computer programs for causing the modules to function (including a program that causes a computer to execute each step, a program that causes the computer to function as means, and a program that causes the computer to implement each function), and as the discussion of a system and a method. For convenience of explanation, the phrases "stores information," "causes information to be stored," and other phrases equivalent thereto are used. If the embodiment is a computer program, these phrases are intended to express "causes a memory device to store information" or "controls a memory device to cause the memory device to store information." The modules may correspond to the functions in a one-to-one correspondence. In a software implementation, one module may form one program or multiple modules may form one program. One module may form multiple programs. Multiple modules may be executed by a single computer. A single module may be executed by multiple computers in a distributed environment or a parallel environment. One module may include another module. In the discussion that follows, the term "connection" refers to not only a physical connection but also a logical connection (such as an exchange of data, instructions, and data reference relationship). The term "predetermined" means that something is decided in advance of a process of interest. The term "predetermined" is thus intended to refer to something that is decided in advance of a process of interest in the embodiment. 
Even after a process in the embodiment has started, the term "predetermined" refers to something that is decided in advance of a process of interest depending on a condition or a status of the embodiment at the present point of time or depending on a condition or status heretofore continuing down to the present point of time. If "predetermined values" are plural, the predetermined values may be different from each other, or two or more of the predetermined values (including all the values) may be equal to each other. A statement that "if A, B is to be performed" is intended to mean "that it is determined whether something is A, and that if something is determined as A, an action B is to be carried out". The statement becomes meaningless if the determination as to whether something is A is not performed.
[0036] The term "system" refers to an arrangement where multiple computers, hardware configurations, and devices are interconnected via a communication network (including a one-to-one communication connection). The term "system", and the term "device", also refer to an arrangement that includes a single computer, a hardware configuration, and a device. The system does not include a social system that is a social "arrangement" formulated by humans.
[0037] At each process performed by a module, or at one of the processes performed by a module, information as a process target is read from a memory device, the information is then processed, and the process results are written onto the memory device. A description related to the reading of the information from the memory device prior to the process and the writing of the processed information onto the memory device subsequent to the process may be omitted as appropriate. The memory devices may include a hard disk, a random-access memory (RAM), an external storage medium, a memory device connected via a communication network, and a register within a CPU (Central Processing Unit).
[0038] A mobile app module 130 which executes on mobile device system 120 includes an image selection module 134, which performs selection of an input image for example as described above. An image pre-processing module is configured to apply one or more pre-processing algorithms to the input image, for example cropping, colour removal, resolution reduction, compression, and the like. These pre-processing algorithms are preferably selected thereby to reduce an amount of image data for transmission to a server for analysis.
[0039] A request data generation module 135 is configured to generate a request data packet for transmission to server system 140 via a request/response management module 133, thereby to enable cloud-based elements of the image analysis process. The request includes data including or representative of: the image data (following pre-processing); a point-in-time (for example a timestamp or a user-specified time) including time and date information; and data representative of the transmitting app (for example a UID and/or addressing information for enabling addressing of a response to the request).
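A minimal sketch of such a request data packet, assuming a JSON encoding; the field names (`image`, `point_in_time`, `app_uid`) and the base64 image encoding are illustrative assumptions, not from the specification:

```python
import base64
import json
from datetime import datetime, timezone


def build_request_packet(image_bytes: bytes, app_uid: str) -> str:
    """Assemble the request described above: pre-processed image data,
    point-in-time information, and an identifier for the requesting app."""
    packet = {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "point_in_time": datetime.now(timezone.utc).isoformat(),
        "app_uid": app_uid,
    }
    return json.dumps(packet)
```

A user-specified time could replace the `datetime.now()` call, matching the specification's note that the point-in-time may be either a timestamp or user-specified.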
[0040] In the example of FIG. 2, module 130 includes a graphical output generation module 131. Module 131 is responsive to a response received from server system 140 via module 133 thereby to generate graphical information for rendering via user interface 123 on display screen 121 of device 120. This may include:
• A textual display indicating plain language point-in-time rules for either or both of a left side and a right side of the sign.
• Overlay graphics (which may include textual information indicating plain language point-in-time rules for either or both of a left side and a right side of the sign) for overlay on a static image or a dynamic display (with assistance from an AR rendering module 138).
• A custom graphic, which includes a graphical representation of a sign (optionally a different image to the input image, for example an image from a categorised image database of server 140) along with textual information indicating plain language point-in-time rules for either or both of a left side and a right side of the sign. A potential example is shown at https://nikkisylianteng.com/parking.html.
[0041] Other approaches may also be used, for example including utilisation of maps. In some embodiments parking rules (for example current point-in-time parking rules in a simplified format) are provided as an overlay on a map interface.
[0042] FIG. 4 illustrates an example method 400 performed by a mobile device 120 via execution of a software application such as that of module 130.
[0043] Block 401 represents triggering of an image capture process. For example, a user launches the app, and the image capture process triggers automatically by activating the device camera module and displaying a live stream of captured images on screen. Block 402 represents execution of an object identification algorithm which analyses selected frames of image data thereby to predictively identify a sign array. The array is preferably graphically identified on screen, for example as shown in FIG. 1B. This triggers generation of a request, including image data (with optional pre-processing) and other data (for example a time stamp, GPS coordinates, and the like), which is sent to a server system for image processing and determination of output data (block 500). The response data is received at 404, and this enables generation of graphical artefacts for output at 405 and rendering of those on-screen at 406 (for example rendering of plain language point-in-time parking rules as an overlay on a static or dynamic image).
[0044] In the present embodiment, data relating to each sign interpretation event is stored locally and/or in a cloud repository (for example a cloud hosted records database 148), thereby to allow for auditing. In this manner, a user retains evidence of a location, a time, and a sign that was assessed. This can assist in resolving disputes in the context of parking violation enforcement and the like.
The example of a mobile app interface is non-limiting, for example in the sense that in some embodiments, data from sign interpretation events is normalised and used to update a central repository (for example cloud hosted records database 148), thereby to allow for generation of geographic/mapping data for parking rules in an area.
Example Server System
[0045] Referring again to FIG. 2, mobile device system 120 communicates with a server system 140. Server system 140 may be defined by multiple servers (for example some modules may be provided by third party systems that are accessed via API or the like).
[0046] An app data handling module 141 is configured to handle the receiving of requests and delivery of responses to a plurality of client devices including device 120.
[0047] An image pre-processing module 144 is optionally provided, for example to simplify and/or optimise received image data for the purposes of downstream analysis. This may include cropping, and in some cases includes transformation of the region of the input image containing the parking sign array to perform planar normalisation (for example to provide the sign in a normalised two-dimensional rectilinear position, by identifying sign edges and transforming them into a rectangle of predefined dimensions).
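Planar normalisation of this kind is conventionally performed with a library routine such as OpenCV's `getPerspectiveTransform` followed by `warpPerspective`; the following pure-Python sketch (all names assumed) shows the underlying computation, solving for the projective transform that maps detected sign corners onto a normalised rectangle:

```python
def homography_from_corners(src, dst):
    """Solve for the 3x3 projective transform mapping four source corners
    (detected sign edges) onto four destination corners (the normalised
    rectangle). Returns the matrix as nested lists; h33 is fixed to 1."""
    # Standard 8x8 linear system A*h = b for the 8 unknown entries.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    # Gaussian elimination with partial pivoting.
    n = 8
    M = [row + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return [h[0:3], h[3:6], [h[6], h[7], 1.0]]


def apply_homography(H, point):
    """Map a point through the projective transform (with perspective divide)."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

A production system would apply the resulting transform to every pixel (the warp step); the sketch stops at the matrix itself, which is the part specific to planar normalisation.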
[0048] In some embodiments an image similarity processing module 142 (for example an AI-based image analysis system such as DeepAI or VisionAI) is used to analyse the current pre-processed input image based on images in a pre-categorised image database 144, thereby to (if possible) identify an image with a threshold level of similarity. Each image in the image database is associated with a set of parking regulations having associated point-in-time applicability data (which may be location dependent). Database 144 is populated with multiple images of each of a plurality of known parking signs and arrays, optionally including images taken from multiple aspects. It will be appreciated that this facilitates training of the image similarity AI algorithms.
[0049] In some embodiments image similarity processing is performed based on segmentation of the source image, for example by first breaking down the image into segments which are optionally defined based on geometric constraints and/or identification of constituent individual signs in an array. This again allows ultimately for identification of an image in database 144 which is associated with a set of parking regulations having associated point-in-time applicability data.
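As a hedged stand-in for the AI similarity systems named above, the following sketch illustrates the general shape of matching a query image against a categorised database, here using a simple difference hash rather than a learned embedding (all names and the distance threshold are assumptions):

```python
def dhash(pixels):
    """Difference hash of a greyscale image given as a 2D list of brightness
    values. Real systems would first resize the image to a small fixed grid;
    here the input is assumed already small."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return bits


def hamming_distance(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))


def most_similar(query_hash, database, max_distance=10):
    """Return the (label, hash) entry in `database` closest to the query,
    or None if nothing is within the similarity threshold."""
    best = min(database,
               key=lambda entry: hamming_distance(query_hash, entry[1]),
               default=None)
    if best is None or hamming_distance(query_hash, best[1]) > max_distance:
        return None
    return best
```

Returning `None` when no entry is within threshold corresponds to the fall-through case in which the extraction-based method described below would be invoked instead.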
[0050] Where an image of threshold similarity is identified, an algorithm is used to process the associated set of parking regulations having associated point-in-time applicability data. Where those are location dependent, GPS data associated with the request is processed thereby to allow identification of a correct location-specific set. Then, an algorithm is used to determine a practical parking rule based on the request's point-in-time data. For example, the associated set of parking regulations includes a plurality of regulation conditions with associated operative times, and one of these is selected based on the point-in-time data. That is used by a response data generation module 147 thereby to define response data for transmission by module 141 to module 133.
[0051] In some embodiments an artefact extraction module 146 is provided, this module being configured to perform a parking regulation extraction process thereby to determine a current parking regulation data set for a point-in-time associated with the triggering of the image analysis process. This in one embodiment includes:
• Identifying a boundary for an individual parking instruction sign in the input image;
• Performing extraction of: (i) textual artefacts; and (ii) at least one graphical artefact including a directional marker; and
• Applying a rules process thereby to define a set of parking regulations having associated point-in-time applicability data for left-side and/or right-side directions based on the extracted (i) textual artefacts; and (ii) at least one graphical artefact including a directional marker.
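The rules process in the final step above can be sketched as follows. The panel text format ("2P 8AM-6PM MON-FRI") and all names are illustrative assumptions; real signage grammars vary by jurisdiction:

```python
import re

# Assumed day-range vocabulary; 0 = Monday ... 6 = Sunday.
DAY_RANGES = {"MON-FRI": {0, 1, 2, 3, 4}, "SAT": {5}, "SUN": {6}, "SAT-SUN": {5, 6}}


def parse_panel_text(text, direction="both"):
    """Parse a simplified panel like '2P 8AM-6PM MON-FRI' (plus the extracted
    directional marker) into a regulation record. Returns None when the text
    does not match the expected pattern, e.g. for a damaged sign."""
    m = re.match(r"(\d+)P\s+(\d+)(AM|PM)-(\d+)(AM|PM)\s+([A-Z-]+)", text.upper())
    if not m:
        return None
    hours, h1, ap1, h2, ap2, days = m.groups()

    def to24(h, ap):
        return (int(h) % 12) + (12 if ap == "PM" else 0)

    return {
        "condition": f"{hours}-hour parking",
        "start_hour": to24(h1, ap1),
        "end_hour": to24(h2, ap2),
        "days": sorted(DAY_RANGES.get(days, set())),
        "direction": direction,
    }
```

A full implementation would accept the many other condition types the specification lists ("no stopping", metered parking, and so on), and cascade records across the panels of an array.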
[0052] In some embodiments such an extraction-based method is used in combination with an image similarity method, and invoked only when an image with threshold similarity is not identified. In some such embodiments, following the extraction process a new categorised image is able to be defined, and this is added to database 144.
[0053] In the illustrated embodiment, system 140 additionally includes an image-based damage identification module 149, which is configured to enable identification of damaged (e.g. vandalised) signs. Such identification may be achieved using AI methods (for example by training an AI algorithm with images showing damaged signs), and/or based on exception results from artefact extraction (for example where unexpected text is extracted, or requisite text for matching is unable to be extracted). An analytics/reporting module 150 is configured to provide output to a defined address where suspected damage is identified, and to collate overall analytics on the usage of the system (for example where users are checking parking regulations, when, and so on).
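The exception-based branch of damage identification can be sketched as a simple heuristic (the function name and field names are assumptions):

```python
def flag_possible_damage(extraction_result, required_fields=("condition",)):
    """Exception-based damage heuristic: a sign whose artefact extraction
    yields no result, or is missing requisite regulation fields, is flagged
    for downstream review/reporting."""
    if extraction_result is None:
        return True
    return any(not extraction_result.get(f) for f in required_fields)
```

In the system described, a `True` result would be routed to the analytics/reporting module for delivery to the defined address.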
[0054] In a further embodiment, software executed on the server includes the following components:
• data pre-processing software
• a sign reading subsystem
• a sign damage classifier (for example using a commercially available neural network)
• deployment software (for example Flask or a similar cloud server may be used).
[0055] Some preliminary processes are performed for the purposes of enabling initial configuration, for example training of Al algorithms. This includes:
• Data collection of reasonably well-framed and high-resolution images of a variety of parking signs. The number of images varies between embodiments, but it will be appreciated that even small vision datasets for training neural networks are usually composed of thousands of images.
• Data labelling for the above images, including marking panel corners on images, as well as a well-formatted text description of the contents of the panels. In some embodiments a crowd-sourcing platform can be used for this purpose.
[0056] In one example, raw images of signs are first collected and labelled. Depending on the quantity, quality and initial results, this collection might be supplemented by images freely available on the internet (subject to licensing). For the purpose of data labelling, some embodiments use software marketed under the name LABELME.
[0057] For neural network training, images need to be arranged into a dataset. Data pre-processing software is responsible for converting raw images from users into a format that is used for training neural networks; in one embodiment this is a set of scripts written in Python. During training, random data augmentation (e.g. a slight colour shift) is performed thereby to increase final neural network performance.
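The colour-shift augmentation mentioned above can be sketched as below. This is an illustrative stand-in using plain Python structures; a real pipeline would operate on image arrays, and the shift range of ±10 is an assumed parameter, not one taken from the specification.

```python
import random

def colour_shift_augment(image, rng, max_shift=10):
    """Apply one small random per-channel colour shift to a whole image.

    `image` is a list of rows of (r, g, b) tuples; the same shift is
    applied to every pixel, with channel values clamped to [0, 255].
    """
    shift = [rng.randint(-max_shift, max_shift) for _ in range(3)]
    return [
        [tuple(max(0, min(255, c + s)) for c, s in zip(px, shift))
         for px in row]
        for row in image
    ]

rng = random.Random(0)                      # seeded for reproducibility
img = [[(128, 128, 128)] * 4 for _ in range(2)]
aug = colour_shift_augment(img, rng)        # a slightly recoloured copy
```

Applying a fresh random shift each epoch gives the network many colour variants of each labelled sign at no extra labelling cost.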
[0058] The sign reading subsystem is configured to recognise and read information from signs in images (for example images captured by end users). This subsystem is in some embodiments composed of multiple (two or three) chained neural networks. A visualisation is shown in FIG. 8, which illustrates a multiple phase process:
• A) Sign detection - find the sign in the image
• B) Panel detection - split the sign into small panels (one panel is e.g. "1P7:30-10:30 Mon-Fri")
• C) Information extraction - read the actual information from each panel
[0059] In some embodiments sign and panel detection (A and B) are combined into a single neural network.
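The chaining of phases A, B and C can be sketched as below. The three callables stand in for the trained neural networks; their interfaces are hypothetical and chosen only to show the data flow from raw image to per-panel readings.

```python
def sign_reading_pipeline(image, detect_sign, detect_panels, read_panel):
    """Chain the three phases of the sign reading subsystem."""
    sign_crop = detect_sign(image)          # A) find the sign in the image
    panels = detect_panels(sign_crop)       # B) split the sign into panels
    return [read_panel(p) for p in panels]  # C) read each panel

# Stub stages illustrating the flow (real stages would be neural networks):
result = sign_reading_pipeline(
    "raw-image",
    detect_sign=lambda img: "sign-crop",
    detect_panels=lambda crop: ["panel-1", "panel-2"],
    read_panel=lambda panel: {"text": f"read({panel})"},
)
```

Combining A and B, as some embodiments do, simply means `detect_sign` and `detect_panels` collapse into one network that maps the raw image directly to panel crops.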
[0060] Similar technology has been used by the Google Street View House Numbers project from the Google Brain research group in 2013. For the technical aspects of the Google project see Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks (Goodfellow et al., 2013). Localisation of objects in images is a well-researched area of computer vision. Reading textual information is also a common problem. Both have been shown to work in academic literature as well as in industrial applications.
[0061] In some embodiments the software also includes a Sign Damage Classifier. The damage classifier architecture includes:
• Sign detection (as described above)
• A damage classifier, which is configured to categorise the image into one of a plurality of predefined possible categories.
[0062] In one embodiment the damage subsystem will classify user images either as "OK" or "damaged". The "damaged" images can further be broken down into classes like "rust", "graffiti" and so on based on identification of visual artefacts (training of neural networks for this process involves labelling of training images with damage).
[0063] FIG. 5A illustrates a method 500 (corresponding to block 500 of FIG. 4) according to one embodiment, being a computer implemented method for processing image data thereby to deliver data representative of the determined current parking regulation data set via an output device of the mobile device. It will be appreciated that not all illustrated processes of the method of FIG. 5A are present in further embodiments, with various processes being optional, and that steps may be added, removed and/or substituted.
[0064] Block 501 represents a process by which an image is received from an instance of the mobile app executing on a user's mobile device. The image is anticipated to include a street sign array comprised of one or more panels.
[0065] Block 502 represents a panel identification process, by which a neural network identifies one or more sign panels (optionally with a preliminary step of identifying the array). This allows content extracted from each panel to be treated separately. A next panel for processing is selected at 503. Whilst this flowchart shows sequential processing of panels, it will be appreciated that parallel processing may be used.
[0066] For the selected panel, block 504 represents an optional similarity-based processing phase, whereby an image processing algorithm determines whether the panel is "identical" (i.e. meets threshold similarity requirements) to a panel that has been previously processed and which already has associated structured text. If this is successful, the relevant structured text is added to a final set of structured text at 507. If there is no similarity (or if the process of block 504 is excluded), the method progresses to block 505.
[0067] Block 505 represents artefact extraction. This includes extracting known parking icons (i.e. non-alphanumeric artefacts, including arrows) and alphanumeric artefacts in an ordered manner. Then, at the process of block 506, these are converted into structured text. The structured text conversion process includes defining values for a set of required fields including: direction (e.g. left/right); parking rule (e.g. no-stopping); and a time value (which may be defined by a set of times and dates, optionally defined using a database which defines special dates such as school/public holidays and the like). The resulting structured text is added to the final set at 507, and based on the decision at 508 as to whether all identified panels have been processed, the method either loops to 503 or continues to 509, at which point the final set of structured text is defined.
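The structured text conversion of block 506 can be sketched as below. The icon token names (`<arrow-left>` etc.) and the recognised rule patterns are assumptions introduced for illustration; the specification leaves the exact encoding open.

```python
import re

def to_structured_text(artefacts):
    """Convert ordered panel artefacts into the required fields.

    `artefacts` is an ordered mix of icon tokens and text fragments,
    e.g. ["<arrow-left>", "1P", "7:30-10:30", "Mon-Fri"].
    """
    record = {"direction": None, "rule": None, "time": None}
    times = []
    for a in artefacts:
        if a == "<arrow-left>":
            record["direction"] = "left"
        elif a == "<arrow-right>":
            record["direction"] = "right"
        elif re.fullmatch(r"\d+P|no-stopping|no-parking", a):
            record["rule"] = a
        else:
            times.append(a)           # time-of-day and day-range fragments
    record["time"] = " ".join(times)
    return record

rec = to_structured_text(["<arrow-left>", "1P", "7:30-10:30", "Mon-Fri"])
```

A production version would additionally resolve date fragments against the special-dates database (school/public holidays) mentioned above.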
[0068] Block 510 represents application of a decisioning process to the final set of structured text based on a current date and time. This allows determination of a parking rule for that date and time. In some embodiments the absence of a rule governing a specified date/time results in a default "free parking" decision. Current rules for left and right are thereby defined for the purposes of block 511.
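The decisioning process of block 510, including the default "free parking" outcome, can be sketched as below. The rule record layout (`days` as weekday numbers, `hours` as an hour range) is an assumed simplification of the structured text.

```python
from datetime import datetime

def decide(structured_rules, when: datetime):
    """Pick the in-force parking rule for `when`.

    Defaults to "free parking" when no rule governs the given
    date/time, as in some embodiments described above.
    """
    for rule in structured_rules:
        days = rule["days"]            # e.g. {0, 1, 2, 3, 4} = Mon-Fri
        start, end = rule["hours"]     # e.g. (7, 18) = 7am-6pm
        if when.weekday() in days and start <= when.hour < end:
            return rule["rule"]
    return "free parking"

rules = [{"rule": "2P", "days": {0, 1, 2, 3, 4}, "hours": (7, 18)}]
weekday_decision = decide(rules, datetime(2020, 8, 19, 9, 0))  # a Wednesday
sunday_decision = decide(rules, datetime(2020, 8, 23, 9, 0))   # a Sunday
```

Running this once for the left-direction rules and once for the right-direction rules yields the per-side current rules used at block 511.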
[0069] Block 512 represents causing output of the determined current rules in a simplified form. For example, this may include causing delivery to the relevant instance of the mobile app to display a particular GUI output, for example as shown in FIG. 7B, thereby to indicate current parking rules.
[0070] FIG. 5B illustrates a method 500' according to a further embodiment. In this method, the image received at 501 is captured for purposes including data collection (in addition or as an alternative to obtaining real-time interpretation at a mobile device). In some embodiments the image received at 501 for the purpose of method 500' is extracted from an existing data repository, for example an image captured using Street View from Google Maps. Following the process of block 509, processes are performed thereby to enable application of the parking rules data in a geospatial context, for example by allowing overlay of parking rules on a map interface (for example Google Maps or the like).
[0071] Block 520 represents a process including extracting positioning data associated with the image. This preferably includes GPS coordinate data associated with the image. In some embodiments a positional correction process is performed thereby to translate the GPS coordinates to the position of the sign. For example, this is based on a distance calculation utilising a number of pixels occupied by a sign panel (having a known size) and a direction based on an orientation of the sign panel relative to the mobile device (for example length differentiation between a left and right edge). This allows for the approximate GPS coordinates of a sign to be inferred from GPS coordinates of an image.
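The positional correction described above can be sketched with a pinhole-camera distance estimate and a flat-earth coordinate shift. The focal length, panel height and bearing values are illustrative assumptions; the specification describes the principle (known panel size, edge-length comparison for orientation) without fixing these numbers.

```python
import math

def estimate_sign_distance(panel_pixel_height, panel_real_height_m,
                           focal_length_px):
    """Pinhole-camera estimate: a panel of known real height appearing
    `panel_pixel_height` pixels tall lies at approximately
    focal_length * real_height / pixel_height metres from the camera."""
    return focal_length_px * panel_real_height_m / panel_pixel_height

def corrected_position(lat, lon, distance_m, bearing_deg):
    """Shift GPS coordinates by `distance_m` along `bearing_deg`
    (flat-earth approximation; adequate over tens of metres)."""
    dlat = distance_m * math.cos(math.radians(bearing_deg)) / 111_320
    dlon = (distance_m * math.sin(math.radians(bearing_deg))
            / (111_320 * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon

# A 0.45 m panel appearing 200 px tall with an assumed 3000 px focal length:
d = estimate_sign_distance(200, 0.45, 3000)
lat, lon = corrected_position(-37.8136, 144.9631, d, 90)  # due east
```

The bearing itself would come from the orientation cue the specification mentions (length difference between the panel's left and right edges), combined with the device compass.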
[0072] Block 521 represents a process whereby the final set of structured text is converted into a set of data configured to be associated with mapping data. This may include modifying the structured text into a predefined format (for example with particular fields defined, a CSV format, or the like), and may include association with the positioning data extracted at 520. Then, at block 522 a map overlay database is updated with the new map association data, thereby to enable generation of map overlay data representative of parking rules. The map overlay data in some embodiments enables either or both of the following:
• Time-agnostic parking rules, thereby to allow a user to view complete rules for given locations.
• Time specific parking rules, thereby to allow a user to view in-force rules for given locations for a specific date/time.
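The conversion at block 521 into a predefined format for map association can be sketched as a CSV serialisation. The field names are hypothetical; the specification requires only that structured text be placed in a predefined format and associated with the positioning data.

```python
import csv
import io

def overlay_records(entries):
    """Serialise structured parking rules plus positions into a
    predefined CSV format suitable for a map overlay database."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["lat", "lon", "direction", "rule", "time"])
    writer.writeheader()
    for e in entries:
        writer.writerow(e)
    return buf.getvalue()

csv_text = overlay_records([{
    "lat": -37.8136, "lon": 144.9631,
    "direction": "left", "rule": "2P", "time": "7:30-18:00 Mon-Fri",
}])
```

A map layer (time-agnostic or time-specific) can then be generated by filtering these records before rendering coloured overlays.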
[0073] In a preferred embodiment, parking rules are colour coded to provide overlays on a map, and a user is able to interact with a time-modification controller thereby to enable visualisation of changes in parking rules over time, as coloured overlays change based on time-specific parking rules.
[0074] In some embodiments the technology is integrated with a payment system that allows for provision of payments in respect of paid/metered parking, thereby allowing a user to pay for parking via camera-based identification of the relevant parking regulations sign. This leverages user information (for example a vehicle license plate), which may be stored by the app, along with GPS data and payment information.
[0075] As noted above, embodiments of the technology include a step of performing a parking regulation extraction process, thereby to determine a current parking regulation data set for a point-in-time associated with the triggering of the image analysis process. In some embodiments, this current parking regulation data set includes (or is associated with) data which defines payment rules. This allows a user's software application to automatically calculate a fee amount payable for parking (for example in response to a user selection of a time period). The fee amount is then passed to a transaction module which allows payment via the software application. In some instances, the payment rules are able to be determined based on data resulting from the image analysis process alone. In other instances, additional data (for example GPS data, date/time data, and the like) is additionally leveraged thereby to determine current payment rules and associated fees.
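The fee calculation step can be sketched as below. The rate and cap values, and the payment-rule field names, are illustrative assumptions rather than values from the specification.

```python
def parking_fee(payment_rules, minutes):
    """Calculate the fee payable for a selected parking period from
    payment rules associated with the extracted regulation data set."""
    rate = payment_rules["rate_per_hour"]
    cap = payment_rules.get("daily_cap")   # optional daily maximum
    fee = rate * minutes / 60
    return min(fee, cap) if cap is not None else fee

rules = {"rate_per_hour": 4.50, "daily_cap": 18.00}
fee_90min = parking_fee(rules, 90)      # 1.5 h at the hourly rate
fee_all_day = parking_fee(rules, 600)   # clamped to the daily cap
```

The resulting amount would then be handed to the transaction module together with stored user information such as the vehicle licence plate.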
[0076] It will be appreciated that the above disclosure provides improved technology for allowing a user to make informed decisions regarding parking via technology-supported interpretation of parking regulation signage.
[0077] FIG. 6 illustrates an example computer or processing system that may implement any portion of the systems, methods, and computer program products described herein in one embodiment of the present disclosure. The computer system is only one example of a suitable processing system and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the methodology described herein. The processing system shown may be operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the processing system may include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like.
[0078] The computer system may be described in the general context of computer system executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. The computer system may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
[0079] The components of the computer system may include, but are not limited to, one or more processors or processing units 601, a system memory 603, and a bus 605 that couples various system components including system memory 603 to processor 601. The processor 601 may include a software module 602 that performs the methods described herein. The module 602 may be programmed into the integrated circuits of the processor 601, or loaded from memory 603, storage device 604, or network 607, or combinations thereof.
[0080] Bus 605 may represent one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus.
[0081] Computer system may include a variety of computer system readable media. Such media may be any available media that is accessible by computer system, and it may include both volatile and non-volatile media, removable and non-removable media.
[0082] System memory 603 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) and/or cache memory or others. The computer system may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage device 604 can be provided for reading from and writing to a non-removable, non-volatile magnetic medium (e.g., a "hard drive"). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus 605 by one or more data media interfaces.
[0083] The computer system may also communicate with one or more external devices 608 such as a keyboard, a pointing device, a display 609, etc.; one or more devices that enable a user to interact with the computer system; and/or any devices (e.g., network card, modem, etc.) that enable the computer system to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 609.
[0084] Still yet, computer system can communicate with one or more networks 607 such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 606. As depicted, network adapter 606 communicates with the other components of computer system via bus 605. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc.
[0085] As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
[0086] Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
[0087] A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
[0088] Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
[0089] Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages, a scripting language such as Perl, VBS or similar languages, and/or functional languages such as Lisp and ML and logic-oriented languages such as Prolog. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
[0090] Aspects of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0091] These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
[0092] The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0093] The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
[0094] The computer program product may comprise all the respective features enabling the implementation of the methodology described herein, and which, when loaded in a computer system, is able to carry out the methods. Computer program, software program, program, or software, in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: (a) conversion to another language, code or notation; and/or (b) reproduction in a different material form.
[0095] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0096] The corresponding structures, materials, acts, and equivalents of all means or step plus function elements, if any, in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
[0097] Various aspects of the present disclosure may be embodied as a program, software, or computer instructions embodied in a computer or machine usable or readable medium, which causes the computer or machine to perform the steps of the method when executed on the computer, processor, and/or machine. A program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform various functionalities and methods described in the present disclosure is also provided.
[0098] The system and method of the present disclosure may be implemented and run on a general-purpose computer or special-purpose computer system. The terms "computer system" and "computer network" as may be used in the present application may include a variety of combinations of fixed and/or portable computer hardware, software, peripherals, and storage devices. The computer system may include a plurality of individual components that are networked or otherwise linked to perform collaboratively, or may include one or more stand-alone components. The hardware and software components of the computer system of the present application may include and may be included within fixed and portable devices such as desktops, laptops, and/or servers. A module may be a component of a device, software, program, or system that implements some "functionality", which can be embodied as software, hardware, firmware, electronic circuitry, etc.
[0099] Although specific embodiments of the present invention have been described, it will be understood by those of skill in the art that there are other embodiments that are equivalent to the described embodiments. Accordingly, it is to be understood that the invention is not to be limited by the specific illustrated embodiments, but only by the scope of the appended claims.
[00100] It should be appreciated that in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, FIG., or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of this invention.
[00101] Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination.
[00102] Furthermore, some of the embodiments are described herein as a method or combination of elements of a method that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor with the necessary instructions for carrying out such a method or element of a method forms a means for carrying out the method or element of a method. Furthermore, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention.
[00103] In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
[00104] Similarly, it is to be noticed that the term coupled, when used in the claims, should not be interpreted as being limited to direct connections only. The terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Thus, the scope of the expression a device A coupled to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means. "Coupled" may mean that two or more elements are either in direct physical or electrical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other.
Thus, while there has been described what are believed to be the preferred embodiments of the invention, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as falling within the scope of the invention. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added or deleted to methods described within the scope of the present invention.

Claims (5)

1. A computer implemented method configured to provide a practical interpretation of parking regulations, the method including:
operating an image input module configured to enable user selection of an input image available in memory of a mobile device, the input image containing at least one parking instruction sign array;
triggering an image analysis process, wherein the image analysis process is configured to: (i) identify in the input image a region of the image that includes the parking instruction sign array; (ii) analyse the identified region of the image thereby to determine graphical attributes of the parking instruction sign array; and (iii) perform a parking regulation extraction process thereby to determine a current parking regulation data set for a point-in-time associated with the triggering of the image analysis process; and
causing delivery of data representative of the determined current parking regulation data set via an output device of the mobile device.
2. A method according to claim 1 including determining current parking fee data based on the determined parking regulation data set.
3. A method according to claim 2 including providing a payment module which allows a user to provide a payment for a parking fee, wherein the parking fee is calculated based on the determined current parking fee data.
4. A computer implemented method configured to provide a practical interpretation of parking regulations, the method including:
accessing an image;
identifying in the image one or more parking sign panels;
for each identified panel, extracting graphical artefacts from the panel, and processing those graphical artefacts thereby to define structured text representative of parking rules; and
combining the structured text for each of the one or more parking sign panels into a common data set.
5. A computer implemented method configured to provide a practical interpretation of parking regulations, the method including:
accessing an image;
operating a first neural network thereby to identify in the image one or more parking sign panels;
operating a second neural network thereby to, for each identified panel, extract graphical artefacts from the panel; and
processing those graphical artefacts thereby to define structured text representative of parking rules for the panel.
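The two-stage pipeline of claims 4 and 5 can be sketched as below. The two neural networks are replaced here by deterministic stubs so the data flow (panel detection, artefact extraction, structured-text generation, combination into a common data set) is visible; all names, the fake image representation, and the artefact token vocabulary are illustrative assumptions, not from the specification.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Panel:
    bbox: Tuple[int, int, int, int]              # image region holding one sign panel
    artefacts: List[str] = field(default_factory=list)

def detect_panels(image: Dict) -> List[Panel]:
    """First network (stubbed): identify the parking sign panels in the image."""
    return [Panel(bbox=region) for region in image["panel_regions"]]

def extract_artefacts(image: Dict, panel: Panel) -> List[str]:
    """Second network (stubbed): extract graphical artefacts from one panel."""
    return image["artefacts"][panel.bbox]

def artefacts_to_rules(artefacts: List[str]) -> Dict[str, str]:
    """Process artefact tokens into structured text representative of parking rules."""
    rules: Dict[str, str] = {}
    for token in artefacts:
        if token.endswith("P") and token[:-1].isdigit():
            rules["max_stay"] = token            # e.g. "2P" = two-hour limit
        elif "AM" in token or "PM" in token:
            rules["hours"] = token               # e.g. "8AM-6PM"
        elif "-" in token or token in ("SAT", "SUN"):
            rules["days"] = token                # e.g. "MON-FRI"
    return rules

def interpret(image: Dict) -> List[Dict[str, str]]:
    """Combine the structured text for every panel into a common data set."""
    combined = []
    for panel in detect_panels(image):
        panel.artefacts = extract_artefacts(image, panel)
        combined.append(artefacts_to_rules(panel.artefacts))
    return combined

# A fake "image": two panels with pre-tokenised artefacts.
image = {
    "panel_regions": [(0, 0, 100, 50), (0, 50, 100, 100)],
    "artefacts": {
        (0, 0, 100, 50): ["2P", "8AM-6PM", "MON-FRI"],
        (0, 50, 100, 100): ["4P", "8AM-1PM", "SAT"],
    },
}
print(interpret(image))
```

In a real system the two stubs would be trained models (for example, an object detector for panels and a recognition network for artefacts); the point of the sketch is the per-panel structure of the data flow.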
AU2020101919A 2019-08-20 2020-08-20 Technology configured to provide real-time practical interpretation of parking signage Ceased AU2020101919A4 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2019903020 2019-08-20
AU2019903020A AU2019903020A0 (en) 2019-08-20 Technology configured to provide real-time practical interpretation of parking signage

Publications (1)

Publication Number Publication Date
AU2020101919A4 true AU2020101919A4 (en) 2020-10-01

Family

ID=72608222

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2020101919A Ceased AU2020101919A4 (en) 2019-08-20 2020-08-20 Technology configured to provide real-time practical interpretation of parking signage

Country Status (1)

Country Link
AU (1) AU2020101919A4 (en)


Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry