AU2021106921A4 - Systems and Methods for vehicle occupant compliance monitoring - Google Patents
- Publication number
- AU2021106921A4
- Authority
- AU
- Australia
- Prior art keywords
- image
- camera
- target vehicle
- imaging configuration
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
- G06N3/02—Neural networks
- G06N3/045—Combinations of networks
- G06T2207/10048—Infrared image
- G06T5/90—Dynamic range modification of images or parts thereof
Abstract
Embodiments generally relate to a computer-implemented method for vehicle occupant
surveillance. The method comprises capturing a first image of a target vehicle using a
camera configured with a first imaging configuration, configuring the camera with a
second imaging configuration, capturing a second image of the target vehicle using the
camera configured with the second imaging configuration, making the first and second
image available to at least one processor, processing the first image by the at least one
processor to determine a licence plate number of the target vehicle, and processing the
second image by the at least one processor to determine a non-compliance event. The
first imaging configuration allows the camera to legibly capture the licence plate of the
target vehicle and the second imaging configuration allows the camera to capture an
interior of the target vehicle with sufficient detail to determine the non-compliance
event.
[Figure 1: block diagram of the system 100. A Remote Server 130 comprises a Processor 132, Memory 134 and Storage 136 holding a Non-Compliance Event Record 138, an Object Detection Module 172, a Compliance Determination Module 175 and a License Plate Determination Module 176. The Remote Server 130 communicates over a Network 140 with a Computing Device 110, which comprises a Processor 112, Memory 114, Storage 116 and a Communication Interface 118, and hosts an Object Detection Module 192, a Camera Control Module, an Illumination Source Control Module 194, a Non-Compliance Determination Module 195 and a License Plate Determination Module 196, together with a Non-Compliance Event Record and Overview Footage 184. The Computing Device 110 is connected to a Camera 120 (with Controller 122, Processor 124, Sensor 126 and Memory 128), an Overview Camera 150, an Illumination Source 160, a Daylight Sensor 170 and a Power Source 190.]
Figure 1
Description
Systems and methods for vehicle occupant compliance monitoring
Technical Field
[0001] Embodiments relate to methods and systems for vehicle occupant compliance monitoring. In particular, embodiments relate to vehicle occupant surveillance using image processing techniques.
Background
[0002] Occupant or driver behaviour in self-driven automotive vehicles has a substantial impact on the ability of the driver to drive safely and to comply with regulations regarding safe driving. Most jurisdictions impose specific restrictions on vehicle occupant behaviour. The restrictions may comprise restrictions on the use of a mobile phone by the driver, a requirement to properly fasten seatbelts, restrictions on the consumption of food or beverages while driving, or restrictions on unrestrained animals in vehicles, for example. With the exponential growth in the use of mobile phones, the growing tendency of mobile phone users to continue to use their phones while driving poses serious safety hazards: using a mobile phone while driving can cause drivers to take their eyes off the road, their hands off the steering wheel, and their attention away from the road and the surrounding traffic.
[0003] There are technical difficulties inherent in monitoring vehicle driver behaviour. For example, it is generally not practical for local law enforcement personnel to monitor driver behaviour frequently enough to act as a suitable deterrent. Further, for stationary camera-based systems, it can be challenging to accurately identify whether a driver is compliant or non-compliant with local rules. Evidence of non-compliance with vehicle occupant safety regulations is necessary to implement an enforcement regime and impose fines or other penalties for non-compliance. The more efficient and widespread the gathering of evidence of non-compliance, the easier it becomes to enforce vehicle occupant behaviour safety regulations, leading to better overall safety outcomes.
[0004] It is desired to address or ameliorate one or more shortcomings or disadvantages of prior techniques for driver behaviour monitoring, or to at least provide a useful alternative thereto.
[0005] Any discussion of documents, acts, materials, devices, articles or the like which has been included in the present specification is not to be taken as an admission that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present disclosure as it existed before the priority date of each claim of this application.
[0006] Throughout this specification the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
Summary
[0007] Some embodiments relate to a computer-implemented method for vehicle occupant surveillance, the method comprising:
capturing a first image of a target vehicle using a camera configured with a first imaging configuration,
configuring the camera with a second imaging configuration,
capturing a second image of the target vehicle using the camera configured with the second imaging configuration,
making the first and second images available to at least one processor,
processing the first image by the at least one processor to determine a licence plate number of the target vehicle,
processing the second image by the at least one processor to determine a non-compliance event,
wherein the first imaging configuration allows the camera to legibly capture the licence plate of the target vehicle and the second imaging configuration allows the camera to capture an interior of the target vehicle with sufficient detail to determine the non-compliance event.
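The two-configuration capture sequence above can be sketched as follows. This is a minimal illustration only: the `Camera` object, its `configure`/`capture` methods, and the parameter values are hypothetical stand-ins for a real camera SDK, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImagingConfiguration:
    # Illustrative parameters; a real configuration may include more fields.
    shutter_us: int   # exposure time in microseconds
    gain_db: float    # sensor gain in decibels

# First configuration: tuned so the licence plate is legibly captured.
PLATE_CONFIG = ImagingConfiguration(shutter_us=100, gain_db=2.0)
# Second configuration: tuned to capture the vehicle interior in detail.
CABIN_CONFIG = ImagingConfiguration(shutter_us=1000, gain_db=24.0)

def capture_pair(camera):
    """Capture a plate image, reconfigure the camera, then capture a cabin image."""
    camera.configure(PLATE_CONFIG)
    first_image = camera.capture()    # later processed for the licence plate
    camera.configure(CABIN_CONFIG)
    second_image = camera.capture()   # later processed for non-compliance events
    return first_image, second_image
```

Both images are then made available to the processor(s) for licence plate determination and non-compliance detection respectively.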
[0008] The method of some embodiments further comprises triggering, by the at least one processor, illumination of the interior of the target vehicle using an infrared illumination source for capturing the second image.
[0009] In some embodiments, the triggering is scheduled based on the camera processing the first image and detecting the target vehicle in the first image.
[0010] In some embodiments, the triggering is scheduled based on a lateral speed of the target vehicle determined based on a plurality of images captured before the second image.
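One way the trigger of [0010] might be scheduled is to estimate the vehicle's speed from its positions in successive earlier frames and extrapolate the time at which it will cross the capture point. The function below is an illustrative sketch; the position units, capture-line location and frame interval are assumptions, not values from the specification.

```python
def schedule_trigger(positions_m, frame_interval_s, capture_line_m):
    """Estimate seconds until the vehicle reaches the capture line.

    positions_m: vehicle positions (metres across the camera's field of
    view) in successive frames, most recent last.
    frame_interval_s: time between successive frames.
    capture_line_m: position at which the second image should be captured.
    """
    if len(positions_m) < 2:
        raise ValueError("need at least two positions to estimate speed")
    # Average speed over the observed frames (metres per second).
    speed = (positions_m[-1] - positions_m[0]) / (
        (len(positions_m) - 1) * frame_interval_s)
    remaining = capture_line_m - positions_m[-1]
    return remaining / speed
```

The returned delay could then be used to fire the illumination source and second exposure as the vehicle enters the illuminated zone.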
[0011] In some embodiments, the infrared illumination source generates infrared light with a wavelength from 730 nm to 930 nm.
[0012] In some embodiments, the time elapsed between capture of the first image and the second image is from 1 ms to 40 ms.
[0013] In some embodiments, the second imaging configuration comprises a high gain configuration of the camera.
[0014] In some embodiments, the high gain configuration comprises a gain configuration in the range of 2 dB to 48 dB.
[0015] In some embodiments, the at least one processor processes the second image using a trained machine learning model to determine the non-compliance event.
[0016] In some embodiments, the trained machine learning model comprises at least one object detection neural network.
[0017] In some embodiments, the non-compliance event comprises one or more of: use of phone by driver, failure to use seat belt, failure to place hands on the steering wheel, excess number of occupants in target vehicle and unrestrained animals in target vehicle.
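A simple way the output of an object detection network could be mapped to the non-compliance events of [0017] is a rule over detected labels. The label names, confidence threshold and detection format below are illustrative assumptions, not the trained model's actual vocabulary.

```python
# Each detection is a (label, confidence) pair, as a trained object
# detection network might report. Labels and threshold are hypothetical.
NON_COMPLIANCE_LABELS = {
    "phone_in_hand": "use of phone by driver",
    "unbuckled_seatbelt": "failure to use seat belt",
    "unrestrained_animal": "unrestrained animal in target vehicle",
}

def non_compliance_events(detections, threshold=0.8):
    """Return non-compliance events supported by confident detections."""
    return sorted({
        NON_COMPLIANCE_LABELS[label]
        for label, confidence in detections
        if confidence >= threshold and label in NON_COMPLIANCE_LABELS
    })
```

Detections of ordinary objects (vehicles, steering wheels) simply contribute no event, so an empty result indicates apparent compliance.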
[0018] The method of some embodiments further comprises:
obtaining a daylight intensity indication from a daylight sensor,
modifying the first imaging configuration and the second imaging configuration based on the daylight intensity indication.
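The daylight-based modification of [0018] might look like the sketch below, where an imaging configuration is represented as a plain dictionary. The lux breakpoints and adjustment factors are purely illustrative assumptions.

```python
def adjust_for_daylight(config, daylight_lux):
    """Return a copy of an imaging configuration adjusted for ambient light.

    config: dict with "shutter_us" and "gain_db" keys (illustrative schema).
    daylight_lux: intensity indication from the daylight sensor.
    """
    adjusted = dict(config)
    if daylight_lux < 50:          # night: longer exposure, more gain
        adjusted["shutter_us"] *= 4
        adjusted["gain_db"] += 12.0
    elif daylight_lux > 10_000:    # bright daylight: shorten exposure
        adjusted["shutter_us"] //= 2
    return adjusted
```

Both the first and second imaging configurations would be passed through such an adjustment before the capture sequence runs.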
[0019] In some embodiments, the first image or the second image is an infrared image, or both the first image and the second image are infrared images.
[0020] Some embodiments relate to a system for determining vehicle occupant non-compliance, the system including:
a computing device positioned near a road, the computing device including at least one processor and memory storing executable program code to process images, the computing device being in communication with a remote server over a network;
a camera in communication with the computing device and positioned to capture infrared images of vehicles on the road in a field of view of the camera, wherein the camera is responsive to commands from the computing device to capture successive first and second infrared images of a vehicle, wherein the camera is configured to cooperate with a light source to illuminate the vehicle with infrared light of a first infrared light intensity immediately prior to capturing the first infrared image and to illuminate the vehicle with infrared light of a second infrared light intensity immediately prior to capturing the second infrared image, wherein the first infrared light intensity is different from the second infrared light intensity;
wherein the computing device is configured to: receive and process the first and second infrared images from the camera; determine, based on at least one of the processed first and second images, whether a driver of the vehicle is non-compliant with a rule; and transmit non-compliance information to the remote server when the at least one of the processed first and second images indicates that the driver of the vehicle is non-compliant with the rule.
[0021] Some embodiments relate to a method for determining vehicle occupant non-compliance, the method comprising:
capturing successive first and second infrared images of a vehicle on a road by a camera responsive to commands from a computing device configured to communicate with the camera;
receiving by the computing device the first and second infrared images;
processing by the computing device the first and second infrared images to determine based on at least one of the processed first and second images whether a driver of the vehicle is non-compliant with a rule; and
transmitting by the computing device non-compliance information to a remote server when the at least one of the processed first and second images indicates that a driver of the vehicle is non-compliant with the rule;
wherein the camera is configured to cooperate with a light source to illuminate the vehicle with infrared light of a first infrared light intensity immediately prior to capturing the first infrared image and to illuminate the vehicle with a second infrared light intensity immediately prior to capturing the second infrared image, wherein the first infrared light intensity is different from the second infrared light intensity.
[0022] Some embodiments relate to a system for determining vehicle occupant non-compliance, the system comprising:
a camera configured to capture images of vehicles on the road,
a computing device in communication with the camera,
the computing device comprising at least one processor and a memory in communication with the processor, the memory comprising program code executable by the at least one processor to configure the at least one processor to:
receive a first image of a target vehicle captured using the camera configured with a first imaging configuration,
configure the camera with a second imaging configuration,
receive a second image of the target vehicle captured using the camera configured with the second imaging configuration,
process the first image to determine a licence plate number of the target vehicle,
process the second image to determine a non-compliance event,
wherein the first imaging configuration allows the camera to legibly capture the licence plate of the target vehicle and the second imaging configuration allows the camera to capture an interior of the target vehicle with sufficient detail to determine the non-compliance event.
[0023] The system of some embodiments further comprises an infrared illumination source operable by the computing device, and
wherein the memory comprises program code executable by the at least one processor to further configure the at least one processor to trigger the infrared illumination source to illuminate the interior of the target vehicle before the second image is captured.
[0024] In some embodiments, the system further comprises an infrared illumination source operable by the camera, and
wherein the memory comprises program code executable by the at least one processor to further configure the camera to trigger the infrared illumination source to illuminate the interior of the target vehicle before the second image is captured.
[0025] Some embodiments relate to a system for vehicle occupant surveillance, the system comprising:
a dynamically configurable camera in communication with a computing device,
the computing device comprising a memory in communication with at least one processor, wherein the memory comprises program code which when executed by the at least one processor configures the at least one processor to:
receive a first image of a target vehicle captured using the camera configured with a first imaging configuration,
configure the camera with a second imaging configuration,
receive a second image of the target vehicle captured using the camera configured with the second imaging configuration,
process the first image to determine a licence plate number of the target vehicle,
process the second image to determine a non-compliance event,
wherein the first imaging configuration allows the camera to legibly capture the licence plate of the target vehicle and the second imaging configuration allows the camera to capture an interior of the target vehicle with sufficient detail to determine the non-compliance event.
[0026] The system of some embodiments further comprises an infrared illumination source positioned to illuminate a field of view of the dynamically configurable camera, and
the at least one processor is further configured to trigger illumination of the interior of the target vehicle using the infrared illumination source for capturing the second image.
[0027] In some embodiments, the triggering is scheduled based on the dynamically configurable camera processing the first image and detecting the target vehicle in the first image.
[0028] In some embodiments, the triggering is scheduled based on a lateral speed of the target vehicle determined based on a plurality of images captured before the second image.
[0029] In some embodiments, the infrared illumination source generates infrared light with a wavelength from 730 nm to 930 nm.
[0030] In some embodiments, the time elapsed between capture of the first image and the second image is from 1 ms to 40 ms.
[0031] In some embodiments, the second imaging configuration comprises a high gain configuration of the camera.
[0032] In some embodiments, the high gain configuration comprises a gain configuration in the range of 2 dB to 48 dB.
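Sensor gain in decibels corresponds to a linear amplification factor via gain = 10^(dB/20), so the 2 dB to 48 dB range above spans roughly a 1.26x to 251x amplification of the sensor signal. A quick check:

```python
def db_to_linear(gain_db: float) -> float:
    """Convert sensor gain in decibels to a linear amplification factor."""
    return 10 ** (gain_db / 20)

# The range recited above, 2 dB to 48 dB:
low_gain = db_to_linear(2)    # roughly 1.26x
high_gain = db_to_linear(48)  # roughly 251x
```

Higher gain brightens the dim vehicle interior at the cost of amplified sensor noise, which is why the high gain configuration is reserved for the second (interior) image.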
[0033] In some embodiments, the memory comprises a trained machine learning model and the at least one processor is further configured to process the second image using the trained machine learning model to determine the non-compliance event.
[0034] In some embodiments, the trained machine learning model comprises at least one object detection neural network.
[0035] In some embodiments, the non-compliance event comprises one or more of: use of phone by driver, failure to use seat belt, failure to place hands on the steering wheel, excess number of occupants in target vehicle and unrestrained animals in target vehicle.
[0036] The at least one processor of some embodiments is further configured to:
receive a daylight intensity indication from a daylight sensor in communication with the computing device,
update the first imaging configuration and the second imaging configuration based on the daylight intensity indication.
[0037] In some embodiments, the first image or the second image is an infrared image, or both the first image and the second image are infrared images.
[0038] Some embodiments relate to a computer-implemented method for vehicle occupant surveillance, the method comprising: capturing a first image of a target vehicle using a camera configured with a first imaging configuration; making the first image available to at least one processor; processing the first image by the at least one processor to determine an image clarity metric; determining by the at least one processor a second imaging configuration based on the image clarity metric; configuring the camera with the second imaging configuration; capturing a second image of the target vehicle using the camera configured with the second imaging configuration; making the second image available to the at least one processor; and processing the second image by the at least one processor to determine a licence plate number of the target vehicle or a non-compliance event.
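The specification does not define the image clarity metric of [0038]; one common proxy for sharpness is the variance of a Laplacian filter response, sketched below in pure Python on a grayscale image represented as a list of rows. Both the metric and the decision threshold are illustrative assumptions.

```python
def clarity_metric(image):
    """Variance of a 3x3 Laplacian response over a grayscale image.

    image: list of rows of pixel intensities. Higher values indicate a
    sharper image; this is one common clarity proxy, not the patent's.
    """
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Discrete Laplacian: sum of 4-neighbours minus 4x centre.
            lap = (image[y - 1][x] + image[y + 1][x] +
                   image[y][x - 1] + image[y][x + 1] -
                   4 * image[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def needs_longer_exposure(image, threshold=50.0):
    """Decide whether the second configuration should boost exposure/gain."""
    return clarity_metric(image) < threshold
```

A low metric on the first image would steer the second imaging configuration toward longer exposure or higher gain before the second capture.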
[0039] In some embodiments, the first imaging configuration is different from the second imaging configuration.
[0040] The method of some embodiments further comprises:
configuring the camera with a third imaging configuration; and
capturing a third image of the target vehicle using the camera configured with the third imaging configuration.
[0041] The method of some embodiments further comprises determining the third imaging configuration based on the image clarity metric.
[0042] In some embodiments, the third imaging configuration is different from the second imaging configuration.
Brief Description of Drawings
[0043] Figure 1 is a block diagram of a system for vehicle occupant surveillance according to some embodiments;
[0044] Figures 2A and 2B are schematic diagrams of a system for vehicle occupant surveillance according to some embodiments;
[0045] Figure 3 is an image of part of the system deployed on a gantry according to some embodiments;
[0046] Figure 4 is an example image captured by a camera configured with a first imaging configuration;
[0047] Figure 5 is an example image captured by a camera configured with a second imaging configuration;
[0048] Figure 6 shows a series of images showing part of a process of scheduling the capture of an image by the camera according to a second imaging configuration;
[0049] Figure 7 is a flowchart of a method for vehicle occupant surveillance according to some embodiments;
[0050] Figure 8 is a flowchart of a method for vehicle occupant surveillance according to some embodiments; and
[0051] Figure 9 is a flowchart of a method for vehicle occupant surveillance according to some embodiments.
Detailed Description
[0052] Embodiments relate to computer-implemented methods and systems for surveillance of vehicle occupant behaviour. The embodiments incorporate a dynamically configurable camera, such as a thermal camera or an infrared camera, to capture images of vehicle occupants and vehicles. The embodiments also incorporate image processing program code to process the images of vehicle occupants and vehicles to determine the licence plate numbers of vehicles and identify any occupant behaviour that could be considered non-compliant with regulations. The non-compliant behaviour may include: use of a mobile phone while driving, consumption of food or beverages while driving, not fastening seatbelts, an excess number of occupants in a vehicle, unrestrained animals in vehicles, or children not in car seats, for example.
[0053] The system of some embodiments may be deployed on a gantry or a post with a line of sight over one or more lanes of traffic travelling on a road, such as a highway, a street or other thoroughfare. The camera of the embodiments may be directed to capture images of an interior of vehicles as they pass through the vicinity of the camera. The camera of the embodiments may also be directed to capture images of the exterior of vehicles to capture details of the vehicle including make and model of the vehicle and a licence plate of the vehicle.
[0054] The system of some embodiments also comprises one or more additional illumination sources to illuminate an interior region of the vehicles. The illumination sources may comprise an infrared light source. The infrared light source may be configured to transmit infrared light within a specific frequency range that results in improved quality of images with sufficient detail to determine non-compliant behaviour of occupants within vehicles by processing images captured by the dynamically configurable camera of the embodiments.
[0055] The dynamically configurable camera of the embodiments may allow the dynamic configuration of the camera according to one or more imaging configurations. Each imaging configuration may comprise specific parameters for the various configurable components of the dynamically configurable camera. The configurable components of the dynamically configurable camera may include shutter speed of the camera, wavelength range captured by the camera, zoom configuration of the camera including digital zoom configuration, and/or sensor gain configuration of the camera, for example.
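The configurable parameters named in [0055] could be grouped into a configuration object along the following lines; the field names and values are illustrative only, not parameters from the specification.

```python
# One possible representation of an imaging configuration covering the
# parameter groups named above: shutter speed, captured wavelength range,
# (digital) zoom and sensor gain. Values are hypothetical.
FIRST_IMAGING_CONFIGURATION = {
    "shutter_speed_us": 120,            # short exposure to freeze the plate
    "wavelength_range_nm": (730, 930),  # near-infrared band
    "digital_zoom": 1.0,
    "sensor_gain_db": 2.0,
}

# The second configuration differs only where needed, here in sensor gain,
# to reveal the darker vehicle interior.
SECOND_IMAGING_CONFIGURATION = {
    **FIRST_IMAGING_CONFIGURATION,
    "sensor_gain_db": 36.0,
}
```

Representing configurations as data makes switching between them a single call to the camera's configuration interface between the two exposures.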
[0056] Figure 1 is a block diagram of a system 100 for vehicle occupant compliance monitoring according to some embodiments. The system 100 comprises a computing device 110 in communication with a camera 120. The computing device 110 may also be in communication with an illumination source 160 and a daylight sensor 170. The camera 120 is positioned to capture images of the exterior and interior of vehicles as they pass through a stretch of road. The camera 120 may be a dynamically configurable camera that allows dynamic control and configuration of its exposure and image capture parameters, so that successive images can be captured using different exposure or image capture parameters.
[0057] The illumination source 160 may provide additional illumination when necessary to capture more detailed images of the exterior or interior of vehicles. The illumination source 160 may generate infrared illumination that does not distract drivers of vehicles while enabling the capture of more detailed images by the camera 120. In some embodiments, the illumination source 160 may form a part or subcomponent of the camera 120. Some embodiments may incorporate a FLIR™ Blackfly S camera as camera 120, for example. In some embodiments, more than one illumination source 160 may be incorporated in the system 100. For example, the camera 120 may include one illumination source 160, and at least one additional external illumination source 160 may be positioned in the immediate vicinity of camera 120 to direct illumination in the same direction as the field of view of the camera 120.
[0058] In some embodiments, the illumination source 160 may be dynamically configurable to control the range of wavelengths of IR light emitted by the illumination source 160. In some embodiments, the intensity of the IR light emitted by the illumination source 160 may be dynamically configurable.
[0059] In some embodiments, camera 120 may comprise a global electronic shutter. An image sensor 126 in camera 120 comprises photodiodes in which incident light displaces electrons in the semiconductor lattice, and the corresponding charge carriers are swept across a diode junction to generate a photocurrent. The photocurrent is accumulated on a capacitor, aggregating a charge proportional to the number of photons received over the exposure (i.e. the integrated irradiance). After the charge has been accumulated for a pixel over a configured period of time, it is typically read out of the capacitor using an in-pixel amplifier or, in charge-coupled devices, a bucket-brigade readout. A 2D spatial array of photodiodes is arranged in a row-column format in the image sensor 126. Camera 120 of some embodiments may use a global shutter to expose the entire image sensor at the same time, which may require in-pixel memories to store the charge while readout occurs. Alternatively, specialised circuits may be provided in the image sensor 126 to perform a full array readout simultaneously. Using a global shutter allows camera 120 to capture images with minimal or no motion blur. The significant reduction in motion blur allows improved image processing to identify non-compliant behaviours and to identify vehicles and licence plates.
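To first order, the charge described above is the photocurrent integrated over the exposure time. A rough back-of-envelope model, with assumed (not specified) photon flux and quantum efficiency values:

```python
def accumulated_electrons(photon_flux_per_s, quantum_efficiency, exposure_s):
    """Approximate electrons collected by one pixel during an exposure.

    photon_flux_per_s: photons arriving at the pixel per second (assumed);
    quantum_efficiency: fraction of photons converted to charge carriers;
    exposure_s: global-shutter exposure time in seconds.
    """
    return photon_flux_per_s * quantum_efficiency * exposure_s

# Example: 1e9 photons/s on a pixel, QE of 0.6, 1 ms exposure.
electrons = accumulated_electrons(1e9, 0.6, 1e-3)  # on the order of 6e5
```

Shortening the exposure (as in the plate-capture configuration) proportionally reduces the collected charge, which is what the gain configuration then compensates for.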
[0060] Significant variability in the attributes of windscreens of different vehicles may be observed. Different vehicles have different angles or orientations of their respective windscreens. For example, the windscreen of a truck may be nearly perpendicular to the surface of the road, while the windscreen of a race car may be positioned at a sharp acute angle to the surface of the road. Windscreens of cars may also have films or other features capable of filtering certain wavelengths of light. The illumination source 160 and the camera 120 are configured to obtain a clear view of an interior of vehicles despite the variability in the angle at which the windscreen may be positioned or the variability in the ranges of wavelengths of light filtered by windscreens. In some embodiments, the intensity and wavelength of light produced by the illumination source 160 may be controlled by the computing device 110 according to variations in natural lighting conditions. Some embodiments may comprise a daylight sensor 170 configured to communicate daylight intensity data to computing device 110 to allow the computing device to configure the intensity and wavelength of the light emitted by the illumination source 160 for capturing images of the interior and exterior of vehicles. In some embodiments, the computing device 110 may configure the illumination source to generate light with a wavelength between 730 nm and 930 nm. Light in this wavelength range provides improved illumination of the interior of vehicles and allows the capture of images with sufficient detail (i.e. image resolution) to allow accurate image processing to identify actual or apparent non-compliant behaviour by an occupant of the vehicle. Images of non-compliant behaviour also need to meet the requisite legal standard to be used as evidence before administrative authorities or courts to sustain legal action or fines in response to the non-compliant behaviour.
[0061] Some embodiments also include an additional overview camera 150. The overview camera 150 captures continuous video footage of the area under surveillance to provide an alternative view. Video footage generated by the overview camera may be stored in a storage 116 of the computing device 110 as overview footage 184. In some embodiments, the overview footage 184 may be transmitted by the computing device 110 to a remote server 130 over a network 140. The remote server 130 may have a scalable storage 136 that allows archival of overview footage over longer periods than the storage 116 of the computing device 110 may accommodate. The video recorded by the overview camera 150 may not be used for identifying non-compliant behaviour of occupants of vehicles. Instead, the images recorded by overview camera 150 may be used for auditing the position and integrity of the camera 120 at times that camera 120 captures images of non-compliant vehicle occupant behaviour.
[0062] Computing device 110 may be a computing device configured with hardware and software to perform image processing operations to identify non-compliant behaviour in images captured by the camera 120. The computing device 110 comprises at least one processor 112 to perform computation. The at least one processor 112 may comprise one or more of a microprocessor, a graphics processing unit (GPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA), for example. In some embodiments, the at least one processor is capable of multithreading or parallel processing to perform image processing operations in parallel. The system 100 may be used to perform surveillance over a stretch of road including multiple lanes, with potentially multiple vehicles and potentially multiple occupants in each vehicle. Multithreaded or parallel processing design of program code in memory 114 of the computing device 110 allows the execution of parallel image processing operations to identify non-compliant behaviour across multiple cars in multiple lanes.
[0063] Some embodiments comprise a remote server 130 that is not positioned in the vicinity of the area under surveillance but is configured to communicate with the computing device 110 over a network 140. In some embodiments, a part or the entirety of the image processing operations for identifying non-compliant behaviour may be performed on the remote server 130. The computing device 110, by virtue of being deployed in the field, may have limited access to computational capability, memory, thermal dissipation or power. Remote server 130, being deployed remotely, may be configured to have access to scalable computational capability and memory, allowing the image processing operations according to the embodiments to be performed on the remote server 130. The remote server 130 may be implemented in a cloud computing environment such as Amazon AWS™ or Microsoft Azure™.
[0064] The network 140 may include, for example, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, some combination thereof, or so forth. The network 140 may include, for example, one or more of: a wireless network, a wired network, an internet, an intranet, a public network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a public-switched telephone network (PSTN), a cable network, a cellular network, a satellite network, a fibre-optic network, some combination thereof, or so forth.
[0065] Memory 114 of the computing device 110 comprises program code to implement image processing techniques to process images captured by the camera 120 and determine non-compliance events or non-compliant behaviour. Memory 114 also comprises program code to dynamically control the imaging configuration of the camera 120 to obtain images of the exterior and interior of vehicles that are suitable for identifying non-compliance events in the interior of the vehicle and details of the vehicle including, for example, a make, model or licence plate number.
[0066] Memory 114 comprises program code executable by the at least one processor 112 to implement an object detection module 192. The object detection module 192 is capable of processing images or image data received by the computing device 110 to identify objects or actions or behaviours. For example, object detection module 192 may be capable of processing an image or image data depicting an interior of a vehicle to identify mobile phone usage by an occupant, improper fastening of seatbelts, or consumption of food or beverages by a driver, for example. In some embodiments, the object detection module 192 may be capable of processing an image or image data depicting an interior of a vehicle to identify faces of occupants and crop faces of occupants from images. In some embodiments, the object detection module 192 may be capable of processing an image or image data depicting an interior of a vehicle to identify hands of occupants of the vehicle, including hands of a person in the driver's seat, and to identify a position of the person's hands. In some embodiments, the object detection module 192 may be capable of processing an image or image data depicting an interior of a vehicle to identify one or more of a mobile phone, a steering wheel or a fastened seatbelt, for example, or to determine the number of occupants in the vehicle.
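The detections described above can be sketched as simple records pairing a class indicator with a bounding box and a detector confidence. The following is a minimal illustrative sketch only; the `Detection` record, the class names and the occupant-counting helper are hypothetical constructs, not part of the embodiments.

```python
# Illustrative sketch of the kind of output the object detection module 192
# might produce; names and values here are hypothetical.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    object_class: str                        # e.g. "face", "mobile_phone", "seatbelt"
    bounding_box: Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) in pixels
    confidence: float                        # detector score in [0, 1]

def count_occupants(detections: List[Detection], threshold: float = 0.5) -> int:
    """Estimate the number of occupants from detected faces above a score threshold."""
    return sum(1 for d in detections
               if d.object_class == "face" and d.confidence >= threshold)

detections = [
    Detection("face", (420, 310, 520, 430), 0.94),
    Detection("face", (640, 300, 730, 415), 0.88),
    Detection("mobile_phone", (455, 500, 510, 560), 0.81),
]
print(count_occupants(detections))  # prints 2
```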
[0067] Each category of object being detected may be identified by a class indicator of the detected object. Information recorded for each object detected in an image may also include pose information associated with the detected object. The pose information may include coordinates defining a bounding box around the detected object in the image, for example. Coordinates defining the bounding box around a detected object allow the cropping of the detected object from an image wherein the object was detected. In some embodiments, the pose information may include specific details regarding the detected object. For example, pose information in relation to seatbelts may comprise information regarding whether the detected seatbelt appears to have been fastened. As another example, pose information associated with detected driver's hands may include information regarding whether the driver's hands are positioned on the steering wheel.
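The cropping enabled by bounding box coordinates can be sketched as follows. This is an assumption-laden illustration: the image is modelled as a plain list of pixel rows, whereas a production system would operate on image-library arrays.

```python
# Hedged sketch: cropping a detected object out of an image using the
# (x_min, y_min, x_max, y_max) bounding box from the pose information.
def crop(image, bounding_box):
    """Return the sub-image inside the bounding box (max coordinates exclusive)."""
    x_min, y_min, x_max, y_max = bounding_box
    return [row[x_min:x_max] for row in image[y_min:y_max]]

# Dummy 8x6 "image" whose pixel at (x, y) is the tuple (x, y).
image = [[(x, y) for x in range(8)] for y in range(6)]
face = crop(image, (2, 1, 5, 4))
print(len(face), len(face[0]))  # prints 3 3
```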
[0068] The object detection module 192 may comprise or apply one or more models for an object class constructed from a set of training example images. Multiple training example images may be required to capture various aspects of class variability for each class of detected object. The models comprised in the object detection module 192 may incorporate principles of Artificial Intelligence (AI) or Machine Learning (ML) to perform the various image processing tasks to detect objects in images.
[0069] In some embodiments, the object detection module 192 may comprise one or more neural networks configured or trained to perform object detection. The one or more neural networks may embody feature representations, including low-level object features to high-level object features. An annotated training dataset or training images may be used to train the one or more neural networks to perform object detection. The training images may comprise bounding boxes around relevant classes of objects to be detected by object detection module 192 and an identifier associated with a class of object present in each bounding box. During the training process, the one or more neural networks may be trained to embody feature representations, including low-level object features to high-level object features of each class of object to be detected by the object detection module 192.
[0070] The neural networks comprised in the object detection module 192 comprise a hierarchy of layers of neurons indexed by grids of decreasing resolution. An input layer is used to provide the raw pixel data of the images captured by camera 120. Each subsequent layer computes a vector output at each grid point using a list of local filters applied to the data in the preceding layer. The linear operation of one or more initial layers of neurons may be followed by nonlinear operations applied to each coordinate or region of the image. In some embodiments, at certain layers, the grid resolution may be reduced by subsampling following a local max or averaging operation by the neural network. The neural network may terminate in one or more output layers that make predictions according to the design of the model neural network including object classes and/or pose information for each identified object class.
[0071] In some embodiments, the neural network of the object detection module 192 may be trained using stochastic gradient descent on a loss function defined in terms of the output layers and the labels in a training dataset. The neural network may be jointly trained to learn linear classifiers and a complex hierarchy of nonlinear features that yield the final feature representation.
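The stochastic gradient descent principle described above can be illustrated in miniature. The sketch below trains a single linear neuron on a squared loss rather than a full object detection network; the dataset, learning rate and epoch count are illustrative assumptions only.

```python
# Minimal sketch of stochastic gradient descent on a loss function, shown for
# one linear neuron (pred = w*x + b) with squared loss 0.5*(pred - label)^2.
import random

def sgd_train(samples, epochs=200, lr=0.1, seed=0):
    random.seed(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(samples)      # "stochastic": visit samples in random order
        for x, label in samples:
            pred = w * x + b
            err = pred - label       # d(loss)/d(pred)
            w -= lr * err * x        # step down the loss gradient w.r.t. w
            b -= lr * err            # step down the loss gradient w.r.t. b
    return w, b

samples = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]  # underlying rule: label = 2*x
w, b = sgd_train(samples)
print(round(w, 2))  # w converges toward 2.0
```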
[0072] In some embodiments, the object detection module 192 may comprise two convolutional neural networks (CNNs) that may process image data in parallel. Corresponding to each location or sub-region in an image captured by camera 120, a first CNN may be configured to act as a classifier CNN identifying a class associated with an object in the location or sub-region within the image. A second CNN may be configured to act as a pose estimator CNN. The second CNN may be configured as a pose estimator CNN to estimate pose information associated with each object detected by the first classifier CNN.
[0073] In some embodiments, the object detection module 192 may comprise multiple pairs of such first and second CNN layers, with each pair configured to identify objects in a detection window of a specific size and aspect ratio within an image captured by camera 120. In some embodiments, the pose estimator CNNs may be configured to predict a residual error between the quantized windows and the ground-truth object bounding boxes to more accurately determine a bounding box around identified objects.
[0074] In some embodiments, the object detection module 192 may implement a single-shot multibox detector neural network structure. In the single-shot multibox detector neural network, a sliding window classifier neural network may make a multi-class prediction over the set of all object categories potentially present in an image. The predictions of the sliding window classifier neural network, together with refined windows generated by a pose estimator neural network, may comprise the output of the object detection module. The YOLO (you only look once) framework is an example of a single-shot multibox detector neural network structure incorporated by the object detection module 192 of some embodiments.
[0075] In some embodiments, the object detection module may implement a two-stage neural network structure also incorporating a sliding-window based image processing approach. In the two-stage neural network structure, a sliding-window classifier may perform a two-class classification between an object (of any category or class) and a background in the image being processed. Windows identified by the sliding-window classifier may be referred to as regions of interest. Each identified region of interest may be individually processed by one or more region classification neural networks to extract features within each region of interest from a feature map and classify each region as belonging to a specific class of defined objects or the background. The feature extraction process may involve quantizing the region of interest coordinates and using max-pooling, or bilinear interpolation of the feature map without performing quantization. The object detection module 192 of some embodiments may incorporate R-CNN, Fast R-CNN or Faster R-CNN based neural networks to implement the two-stage neural network structure.
[0076] Memory 114 may also comprise program code to implement a camera control module 193. The camera control module 193 may comprise software drivers and device communication protocols to allow the computing device 110 to send instructions to the camera 120. In some embodiments, the computing device 110 may comprise a dedicated interface card for interfacing communication between camera 120 and computing device 110. Camera 120 and computing device 110 may communicate over a wired connection, such as gigabit Ethernet or a Universal Serial Bus (USB) communication interface, including USB 3.0. In some embodiments, the camera 120 and computing device 110 may communicate over a wireless network, such as a WiFi network, for example. In some embodiments, the camera control module 193 may implement a communication standard to communicate with camera 120. The implemented communication standard may include the GigE Vision standard, the USB3 Vision standard, or the GenICam standard, for example. In some embodiments, the camera control module 193 may send instructions or signals to the camera 120 to operate with a specific imaging configuration for capturing images. The imaging configurations may be suitable for capturing adequate details of an interior of a vehicle or an exterior of a vehicle during varying lighting conditions. Each imaging configuration may comprise one or more parameters defining a shutter speed, a range of wavelength of light to be captured, a degree of digital zoom to be applied to a specific region of a captured image, or a degree of sensor gain to be implemented by the camera 120.
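The imaging configuration parameters listed above can be sketched as a simple record with one preset per configuration. The field names and all numeric values below are assumptions chosen for illustration (the 730-930 nm band is taken from the embodiments; the remaining values are hypothetical).

```python
# Illustrative sketch of an imaging configuration record; values are assumed.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ImagingConfiguration:
    shutter_speed_s: float                # exposure time interval in seconds
    wavelength_range_nm: Tuple[int, int]  # (min, max) wavelength captured
    digital_zoom: float                   # zoom factor for a region of interest
    sensor_gain_db: float                 # sensor gain in decibels

# First imaging configuration: exterior/licence plate capture (low or no gain).
exterior_config = ImagingConfiguration(1 / 2000, (400, 700), 1.0, 0.0)
# Second imaging configuration: interior capture (high gain, near-infrared band).
interior_config = ImagingConfiguration(1 / 500, (730, 930), 2.0, 18.0)

print(interior_config.wavelength_range_nm)  # prints (730, 930)
```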
[0077] The camera control module 193 also controls the scheduling of capture of images using a specific imaging configuration. The scheduling of capture of images may comprise identifying when a potential object of interest will be at a preferred location in the field of view of the camera 120 to obtain an image with sufficient detail of one or more objects of interest. Scheduling of capture of an image using a specific imaging configuration is described in greater detail with reference to Figure 8.
[0078] Memory 114 may also comprise program code to implement an illumination control module 194. The illumination control module 194 controls the illumination source 160. The imaging configurations implemented by the camera control module 193 may also comprise a configuration associated with an intensity and a range of wavelength of the illumination generated by the illumination source 160. Scheduling information determined by the camera control module 193 may also be made available to the illumination control module 194 to coordinate the illumination of the objects in the field of view of camera 120 with the capture of images according to a specific imaging configuration.
[0079] Memory 114 may also comprise program code to implement a non-compliance determination module 195. The non-compliance determination module 195 receives object detection outputs of the object detection module 192 and processes the object detection outputs to determine whether a non-compliance event is observable based on the detected objects or the pose information of the detected objects. For example, the object detection module 192 may detect a steering wheel and a driver's hands in a captured image and may determine coordinates or bounding box pose information for the steering wheel and the driver's hands. The non-compliance determination module 195 may consider proximity or overlap between the bounding boxes around the steering wheel and the driver's hands to assess whether the driver's hands appear to be on or off the steering wheel. If the non-compliance determination module 195 determines that the driver's hands are not on the steering wheel, then the non-compliance determination module 195 may create a non-compliance event record 182 in the storage 116. The non-compliance determination module 195 may also extract additional data associated with the non-compliance event record, for example, additional images of the vehicle associated with the non-compliance event captured by the camera, a part of a video captured by the overview camera 150 including the footage depicting the non-compliance event, or a portion of the image corresponding to the face of the driver associated with the non-compliance event. The non-compliance determination module 195 may gather and store the necessary images and information in the non-compliance event record 182 to form part of the evidence documentation to support legal or administrative enforcement actions.
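The proximity or overlap assessment described above can be sketched as an intersection-over-union (IoU) test between the steering wheel and hand bounding boxes. This is one possible realisation, not the method of the embodiments; the IoU threshold and the box coordinates are illustrative assumptions.

```python
# Hedged sketch: hands-off-wheel assessment via bounding box overlap (IoU).
def iou(box_a, box_b):
    """Intersection over union of two (x_min, y_min, x_max, y_max) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def hands_off_wheel(wheel_box, hand_boxes, threshold=0.05):
    """True when no detected hand overlaps the steering wheel sufficiently."""
    return all(iou(wheel_box, hand) < threshold for hand in hand_boxes)

wheel = (100, 200, 300, 400)
print(hands_off_wheel(wheel, [(310, 200, 360, 260)]))  # disjoint hand -> True
print(hands_off_wheel(wheel, [(200, 300, 330, 420)]))  # overlapping hand -> False
```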
[0080] In some embodiments, the memory 114 may comprise program code to implement a licence plate determination module 196. The licence plate determination module 196 may comprise program code similar to the object detection module 192 to identify one or more regions in images captured by the camera 120 corresponding to licence plates. The licence plate determination module 196 also processes regions of images corresponding to a licence plate to perform character recognition and determine the licence plate number associated with a vehicle. The licence plate determination module 196 may provide the determined licence plate number information to the non-compliance determination module 195 to register the licence plate number associated with a non-compliance event identified by the non-compliance determination module 195. The determined licence plate number may be stored in the non-compliance event record 182 with the rest of the images associated with the non-compliance event.
[0081] In some embodiments, part or all of the image processing operations necessary for identifying non-compliance events may be performed by the remote server 130 based on image data transmitted by the computing device 110. The remote server 130 comprises a memory 134. The memory 134 may comprise program code to implement image processing modules including an object detection module 172, a non-compliance determination module 175, and a licence plate determination module 176. The image processing modules of the remote server 130 may implement substantially similar principles and logic as the object detection module 192, non-compliance determination module 195, and licence plate determination module 196 implemented on the computing device 110. The remote server 130 may be configured to receive image data from a plurality of computing devices 110 deployed to monitor non-compliance in distinct regions. The remote server 130 may allow the centralisation of parts or all of the image processing operations to determine non-compliance events. In embodiments where the remote server 130 performs a part or all of the image processing operations, the computing capability of the computing device 110 may be primarily used for image capturing purposes by sending control signals to the camera 120 and the illumination source 160. The remote server 130 may also comprise or have access to storage 136 to store non-compliance event records 138. In some embodiments, the remote server 130 may be in communication with a database server to store information regarding non-compliance event records.
[0082] The various modules implemented in the memory 114 of the computing device 110 and the memory 134 of the remote server 130 comprise program code implementing the computing or data processing capability of the respective modules. In addition, the various modules may also comprise or may have access to software packages, software libraries, configurations or configuration files, and/or Application Programming Interfaces (APIs) to perform the computing or data processing functions of the respective modules. In some embodiments, the various modules when executed by a processor may execute as one or more processes coordinated by an operating system executing on the computing device 110 or the remote server 130. The various modules when executed as a process may comprise one or more threads executing concurrently to perform the data processing or image processing operations described herein.
[0083] The system 100 also comprises a power source 190 provided to power the computing device 110, the camera 120, the overview camera 150, the illumination source 160 and the daylight sensor 170. The power source 190 may be a mains power source or a power source based on a combination of solar cells and batteries making the system 100 suitable for deployment in remote locations.
[0084] Figures 2A and 2B are schematic diagrams of a system for vehicle occupant surveillance according to some embodiments. Imaging conditions required to capture a picture depicting an exterior of a vehicle are significantly different from the imaging conditions required for capturing a picture depicting the interior of a vehicle. Imaging conditions may include the various configurable aspects of camera 120 including digital zoom configuration, sensor gain configuration, shutter speed (i.e. image sensor capture or exposure time interval), illumination intensity and range of wavelength captured by camera 120.
[0085] Figure 2A is a schematic diagram 200 illustrating a configuration of system 100 suitable for capturing an exterior of a vehicle 220. The camera 120, the illumination source 160 and a housing 210 may be mounted on a post or gantry 205. The housing 210 may house the computing device 110 and any other components associated with the system 100 deployed to perform surveillance. The vehicle 220 has a licence plate 240 and an occupant 250. In the schematic 200, the computing device 110 configures camera 120 with a first imaging configuration adapted to capture clearly or distinctly an exterior of the vehicle 220, including the licence plate 240 of vehicle 220. Image 400 of Figure 4 is an example of an image captured using the first imaging configuration illustrated in schematic 200. In some embodiments, the first imaging configuration may comprise a low or no gain configuration. For camera 120, a gain configuration is associated with the sensitivity of the image sensor 126 to light. Gain corresponds to the relationship between the number of electrons acquired by the sensor 126 and the analog-to-digital units generated in response by the image sensor 126, representing the image signal, and may be expressed in decibels. In the first imaging configuration, the camera 120 may be configured to operate with a low gain configuration.
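The decibel expression of gain mentioned above can be sketched numerically. This is an illustrative sketch only, assuming the common amplitude convention of 20·log10 of the linear amplification factor; real cameras expose gain through their driver or the GenICam feature set.

```python
# Sketch of the decibel representation of sensor gain; values are illustrative.
import math

def gain_db(amplification):
    """Express a linear signal amplification factor in decibels (20*log10)."""
    return 20 * math.log10(amplification)

print(round(gain_db(1.0), 1))   # unity gain (low/no-gain first configuration) -> 0.0 dB
print(round(gain_db(10.0), 1))  # 10x amplification (higher-gain configuration) -> 20.0 dB
```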
[0086] Figure 2B is a schematic diagram 201 illustrating a configuration of system 100 suitable for capturing details on an interior of the vehicle 220. In the schematic 201, the computing device 110 configures camera 120 with a second imaging configuration adapted to capture clearly or distinctly an interior of the vehicle 220, including one or more objects within the vehicle or one or more body parts of occupant 250 to determine compliance of the occupants in vehicle 220. Image 500 of Figure 5 is an example of an image captured using the second imaging configuration illustrated in schematic 201. In the second imaging configuration, the camera 120 may be configured to operate with a high gain configuration.
[0087] Figure 3 is a photograph of part of the system 100 deployed on a gantry 310 according to some embodiments. Photograph 300 illustrates the camera 120 and the illumination source 160 angled downwards towards a road under surveillance. A lateral distance between the camera 120 and the illumination source 160 may be in the range of 50 cm to 80 cm, for example. Camera 120 and illumination source 160 may be powered by power cable 320. In some embodiments, the camera 120 and illumination source 160 may be housed in a common housing to make the entire system more compact. The housing may be physically secured to a permanent fixture such as a gantry. The housing may be weatherproof to make the system as deployed durable through changing weather conditions.
[0088] Figure 4 illustrates the image 400 captured by camera 120 configured by computing device 110 with a first imaging configuration to capture an exterior of the vehicle 220. Configured with the first imaging configuration, the camera 120 captures the licence plate region 440 with sufficient clarity to decipher the licence plate number of vehicle 220.
[0089] Figure 5 illustrates the image 500 captured by camera 120 configured by computing device 110 with a second imaging configuration to capture the interior of the vehicle 220. The image 500 may be processed by the object detection module 192 and the non-compliance determination module 195 to determine one or more non-compliance events. In image 500, a first non-compliance event identified by bounding box 510 relates to the use of a mobile phone. In image 500, a second non-compliance event identified by bounding box 520 relates to the absence of at least one hand positioned on the steering wheel. The computing device 110 or the remote server 130 may process the images 400 and 500 to identify the details of the vehicle 220 including the licence plate number of the vehicle 220, and one or more non-compliance events as illustrated in Figure 5. On identifying the non-compliance events, the computing device 110 or the remote server 130 may create a non-compliance event record comprising the captured images, one or more categories of non-compliance events identified and the details of the vehicle including the licence plate number of the vehicle. The non-compliance event record may be used by administrative or law enforcement authorities to take steps to enforce compliance or issue fines in relation to the identified non-compliance event.
[0090] In some embodiments, images captured according to the second imaging configuration may not clearly or legibly depict the licence plate number of the vehicle. As exemplified in image 500, the region 440 corresponding to the licence plate is not legible. This may be caused by the high sensor gain setting associated with the second imaging configuration or the use of the illumination source 160 leading to overexposure or washing out of the region 440 corresponding to the licence plate. Capturing images using both the first and second imaging configurations allows the capture of details of both the exterior and the interior of a vehicle.
[0091] Figure 6 illustrates a series of images 600 showing a part of a process of scheduling the capture of an image by camera 120 according to a second imaging configuration depicting an interior of a vehicle. In some embodiments, the camera 120 may be configured to capture images according to a first imaging configuration that allows the capture of an exterior of the vehicle including the licence plate details. On identifying a vehicle or a licence plate in an image, and tracking the movement of the vehicle or licence plate in the subsequent images captured by camera 120, the illumination control module 194 and the camera control module 193, operating in cooperation with the object detection module 192, may determine a schedule for switching the camera to the second imaging configuration and/or for turning on the illumination source 160 to capture an image that depicts an interior of the vehicle. Scheduling the capture of an image using a second imaging configuration and/or turning on the illumination source 160 allows the capture of an image with sufficient details of an interior of the vehicle to perform the various object detection operations by the object detection module 192 and allow the determination of non-compliance events by the non-compliance determination module 195.
[0092] In image 610, a first image of the series of images 600, a licence plate 670 is detected by the licence plate determination module 196. In image 620, a second image of the series of images 600, the same licence plate 670 is detected by the licence plate determination module 196. As the vehicle in the image 620 has progressed closer to the camera 120, the licence plate 670 has moved downward. In images 630 and 640 captured after image 620, the licence plate 670 has moved further downward. Segment 654 relates the position of the licence plate 670 in image 640 with a corresponding position in image 610. Segment 650 indicates a downward movement of the licence plate 670 across the images 610, 620, 630 and 640, corresponding with the movement of the vehicle towards camera 120. The rate of downward movement of the licence plate 670 across the images 610, 620, 630 and 640 may provide a proxy or an indication of the speed of the vehicle to allow scheduling of the capture of an image of the interior of the vehicle, and of the illumination of the interior of the vehicle using the illumination source 160. Using a timing detail of the capture of each of the images 610, 620, 630 and 640, and the details of the segment 650, the camera control module 193 and/or the illumination control module 194 may determine a most suitable time or schedule that allows a capture of the interior of the vehicle with sufficient detail to identify non-compliance events. Scheduling the capture of the interior of the vehicle too early or too late may not allow the capture of sufficient details to determine non-compliance by occupants of the vehicle.
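The scheduling computation suggested by Figure 6 can be sketched as a linear extrapolation of the plate's downward pixel rate. This is a hedged illustration under an assumed constant-rate model; the trigger row, frame timestamps and pixel positions are hypothetical values, not data from the embodiments.

```python
# Hedged sketch: predict when the licence plate will reach a preferred trigger
# row, using the downward pixel rate across timestamped frames as a speed proxy.
def schedule_capture(observations, trigger_row):
    """observations: list of (timestamp_s, plate_y_pixels), earliest first.
    Returns the predicted capture time, assuming a roughly constant pixel rate."""
    (t0, y0), (t1, y1) = observations[0], observations[-1]
    rate = (y1 - y0) / (t1 - t0)             # pixels per second (speed indication)
    return t1 + (trigger_row - y1) / rate    # extrapolate to the trigger row

# Four frames 50 ms apart; the plate descends 40 pixels per frame.
frames = [(0.00, 120), (0.05, 160), (0.10, 200), (0.15, 240)]
print(round(schedule_capture(frames, trigger_row=400), 3))  # prints 0.35
```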
Once the schedule for capturing the interior of the vehicle is determined, the camera 120 may be configured by the computing device 110 to switch to a second imaging configuration and the illumination source 160 may be triggered or switched on by the computing device 110 according to the schedule. In some embodiments, multiple images using the second imaging configuration may be captured by the camera 120 to provide some redundancy for performing the various object detection and image processing operations. In some embodiments, the camera may be configured to switch from a first imaging configuration to the second imaging configuration multiple times on a frame-by-frame basis. Switching between the first and second imaging configurations on a frame-by-frame basis allows the capture of images of both the interior and exterior of the vehicle in near-identical physical locations. Since images captured using the second imaging configuration may not depict the licence plate details (as illustrated in Figure 5), switching between the first and second imaging configurations on a frame-by-frame basis to capture images of a vehicle at nearly identical physical locations provides greater evidential certainty in the non-compliance events detected by the non-compliance determination module.
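The frame-by-frame alternation described above amounts to assigning configurations in an alternating sequence. The sketch below is illustrative only; the configuration labels are hypothetical placeholders for the first and second imaging configurations.

```python
# Illustrative sketch of frame-by-frame alternation between the first
# (exterior) and second (interior) imaging configurations.
from itertools import cycle, islice

def configuration_sequence(n_frames):
    """Return the imaging configuration label to use for each successive frame."""
    return list(islice(cycle(["first_imaging_config", "second_imaging_config"]),
                       n_frames))

print(configuration_sequence(4))
# prints ['first_imaging_config', 'second_imaging_config',
#         'first_imaging_config', 'second_imaging_config']
```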
[0093] Figure 7 illustrates a method for vehicle occupant surveillance 700 according to some embodiments implemented by the computing device 110. At 705, the camera control module 193 configures the camera 120 to capture images using a first imaging configuration. The first imaging configuration may allow the capture of images to easily identify an exterior of vehicles including the licence plate numbers. At 710, the computing device 110 receives from camera 120 an image of a target vehicle captured using the first imaging configuration. Image 400 of Figure 4 is an example of an image captured at step 710. The captured image is made available to the computing device 110 by the camera 120. At 720, the computing device 110 configures the camera 120 to capture images using a second imaging configuration. The second imaging configuration may allow the capture of images of an interior of the target vehicle that may be suitable for identifying non-compliance by one or more occupants in the vehicle. At 730, the computing device 110 receives from camera 120, a second image captured using the second imaging configuration. Image 500 of Figure 5 is an example of a second image.
[0094] At 740, the licence plate determination module 196 processes the first image to determine a licence plate number associated with the target vehicle. At 750, the object detection module 192 and the non-compliance determination module 195 process the second image to determine one or more non-compliance events. Information regarding the determined non-compliance events including the first and second images, the class or category of non-compliance events, a timestamp associated with the non-compliance event and the licence plate number of the target vehicle may be stored as a non-compliance event record 182 in storage 116. At 760, the computing device 110 may transmit a part or the entirety of the non-compliance event record 182 to the remote server 130.
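The non-compliance event record assembled at steps 740 to 760 might take a shape like the following. The field names and the example values are illustrative assumptions; the patent does not prescribe a record layout.

```python
import time
from dataclasses import dataclass, field

@dataclass
class NonComplianceEventRecord:
    """Hypothetical record combining the evidence listed in paragraph
    [0094]: both images, the event class(es), a timestamp and the plate."""
    licence_plate: str
    event_classes: list       # e.g. ["phone_use", "no_seatbelt"]
    first_image: bytes        # exterior image (first imaging configuration)
    second_image: bytes       # interior image (second imaging configuration)
    timestamp: float = field(default_factory=time.time)

# Example assembly before storage or transmission to the remote server.
record = NonComplianceEventRecord(
    licence_plate="ABC123",
    event_classes=["phone_use"],
    first_image=b"<jpeg bytes>",
    second_image=b"<jpeg bytes>",
)
```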
[0095] In some embodiments, some parts of method 700 may be performed by the remote server 130 based on image data transmitted by the computing device 110 to the remote server 130. For example, steps 740 and 750 may be performed by the remote computing device 130.
[0096] Figure 8 is a flowchart for a method of vehicle occupant surveillance 800 according to some embodiments implemented by the computing device 110. The camera 120 in some embodiments may be configured to continuously capture images of vehicles passing through a stretch of road using a first imaging configuration by default. At step 810, the computing device 110 receives from camera 120 a first plurality of images captured using the first imaging configuration. At 811, the licence plate determination module 196 processes the first plurality of images to determine a licence plate in each or a subset of the first plurality of images. Figure 6 illustrates an example of the determination of licence plates in the first plurality of images captured by camera 120 using a first imaging configuration.
[0097] At 812, the camera control module 193 processes the determined licence plates in the first plurality of images to determine a speed indication of the target vehicle in the first plurality of images. Based on the speed indication, the camera control module 193 may determine a schedule to switch the camera 120 to capture images using a second imaging configuration. The schedule may be determined to capture an image of an interior of the vehicle that has sufficient detail to determine non-compliance events. At 814, the camera control module 193 configures the camera 120 to capture images using the second imaging configuration. Step 814 may be executed at a time based on the schedule determined at step 812.
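One way to derive such a schedule from plate detections is sketched below. This is not the patented implementation: the pixel coordinates, frame interval, and trigger position are assumed values chosen for illustration.

```python
def estimate_speed(plate_x_positions, frame_interval_s):
    """Estimate lateral speed (pixels/second) of a vehicle from the
    x-coordinates of its licence plate in successive frames."""
    if len(plate_x_positions) < 2:
        raise ValueError("need at least two plate observations")
    deltas = [b - a for a, b in zip(plate_x_positions, plate_x_positions[1:])]
    mean_delta = sum(deltas) / len(deltas)
    return mean_delta / frame_interval_s

def schedule_switch(current_x, target_x, speed_px_per_s):
    """Return seconds until the vehicle reaches the position (target_x)
    where the interior-capture configuration should be active."""
    if speed_px_per_s <= 0:
        raise ValueError("vehicle is not approaching the capture zone")
    return (target_x - current_x) / speed_px_per_s
```

For example, a plate observed at x = 100, 150 and 200 pixels across frames 40 ms apart moves at 1250 px/s, so a capture zone at x = 450 would be reached 0.2 s later; the configuration switch (and any illumination trigger) would be scheduled accordingly.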
[0098] The schedule determined at step 812 may also comprise a schedule to trigger illumination by the illumination source 160. At 816, the illumination source control module 194 may transmit a command or signal to the illumination source 160 to trigger illumination of an area in the field of view of camera 120 based on the schedule determined at 812. At 818, the computing device 110 may receive one or more second images captured by camera 120 using the second imaging configuration.
[0099] At 822, the object detection module 192 and the non-compliance determination module 195 process the one or more second images to determine one or more non-compliance events. Information regarding the determined non-compliance events, including the first plurality of images and the one or more second images, the class or category of non-compliance events, a timestamp associated with the non-compliance event and the licence plate number of the target vehicle, may be stored as a non-compliance event record 182 in storage 116. At 824, the computing device 110 may transmit a part or the entirety of the non-compliance event record 182 to the remote server 130.
[0100] In some embodiments, some parts of method 800 may be performed by the remote server 130 based on image data transmitted by the computing device 110 to the remote server 130. For example, step 822 may be performed by the remote computing device 130.
[0101] Figure 9 is a flowchart for a method 900 of vehicle occupant surveillance according to some embodiments implemented by the system 100. Method 900 is broadly consistent and compatible with methods 700 and 800 but emphasises different aspects of the techniques described here. Step 910 of method 900 comprises capturing first infrared images of a vehicle on the road by the camera 120 responsive to commands from the computing device 110. In some embodiments, step 910 may comprise illumination of the vehicle with infrared light of a first infrared light intensity immediately prior to capturing the first infrared image. The illumination may occur through an illumination source configured to cooperate with the camera 120 or the computing device 110. Step 920 comprises capturing second infrared images of the vehicle on the road by the camera 120 responsive to commands from the computing device 110. In some embodiments, step 920 may comprise illumination of the vehicle with infrared light of a second infrared light intensity immediately prior to capturing the second infrared image. The second light intensity may be higher than the first light intensity so that the camera 120 can pick up an image of the vehicle interior that may not be possible with light transmitted at the first intensity. Step 930 comprises receiving by the computing device 110 the first and second infrared images. Step 940 comprises processing by the computing device the first and second infrared images to determine, based on at least one of the processed first and second images, whether a driver of the vehicle is non-compliant with a rule. A rule may comprise a specific government or regulatory or legal mandate or requirement associated with the compliant and lawful operation of vehicles and behaviour within vehicles.
Step 950 comprises transmitting by the computing device 110 non-compliance information to the remote server 130 when the at least one of the processed first and second images indicate that a driver of the vehicle is non-compliant with the rule.
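The two-intensity capture sequence of steps 910 and 920 can be sketched as below. The `camera`/`illuminator` interfaces and the intensity values are assumptions for this illustration, not part of the claimed system.

```python
def capture_pair(camera, illuminator, low_intensity=0.3, high_intensity=1.0):
    """Capture a first image under low-intensity IR illumination and a
    second under high-intensity IR, mirroring steps 910 and 920.
    The higher second intensity is intended to reveal the interior."""
    illuminator.set_intensity(low_intensity)
    first = camera.capture()
    illuminator.set_intensity(high_intensity)
    second = camera.capture()
    return first, second
```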
[0102] In some embodiments, the processor 112 may be configured to process the first image and determine an image clarity metric. The image clarity metric may be a numeric representation of the quality or clarity of features captured in the first image. The image clarity metric may also be indicative of the suitability of the first image for object detection image processing operations by the object detection module 192 or licence plate determination by the licence plate determination module 196. The image clarity metric may be in the form of a number, or a multi-dimensional vector, for example. In some embodiments, the image clarity metric may be based on an information-entropy measurement of the first image. Based on the image clarity metric, the processor 112 may dynamically determine the second imaging configuration. The second imaging configuration may comprise camera configuration parameters to configure camera 120 to capture images that have improved clarity and are more suitable for object detection operations by the object detection module 192 or licence plate determination by the licence plate determination module 196, for example. In some embodiments, the second imaging configuration may allow improved determination of a licence plate number in the second image captured using the second imaging configuration. In some embodiments, the second imaging configuration may allow improved determination of a non-compliance event in the target vehicle in the second image captured using the second imaging configuration.
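A minimal sketch of the information-entropy clarity measurement mentioned above, using only the standard library, follows; a real system would more likely compute this on camera frames via an imaging library, and the function name is an assumption.

```python
import math
from collections import Counter

def entropy_clarity(pixels):
    """Shannon entropy (in bits) of the pixel-intensity histogram.
    Higher entropy suggests more captured detail; a flat (e.g. saturated
    or severely under-exposed) image scores near zero, indicating a
    different imaging configuration may be needed."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A saturated frame (all pixels identical) yields 0 bits, while a frame with four equally frequent intensity levels yields 2 bits; thresholds on this metric could drive the choice of the second imaging configuration.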
[0103] In some embodiments, the processor 112 may configure camera 120 with a third imaging configuration for capture of a third image. For example, the first imaging configuration may be used to determine the image clarity metric and to determine the second and third imaging configurations for capture of respective second and third images that are optimised to allow determination of the licence plate number (in one of the second or third image) and allow determination of a non-compliance event (in the other one of the second or third image).
[0104] It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the above-described embodiments, without departing from the broad general scope of the present disclosure. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.
Claims (37)
1. A computer-implemented method for vehicle occupant surveillance, the method comprising:
capturing a first image of a target vehicle using a camera configured with a first imaging configuration,
configuring the camera with a second imaging configuration,
capturing a second image of the target vehicle using the camera configured with the second imaging configuration,
making the first and second image available to at least one processor,
processing the first image by the at least one processor to determine a licence plate number of the target vehicle, and
processing the second image by the at least one processor to determine a non-compliance event,
wherein the first imaging configuration allows the camera to legibly capture the licence plate of the target vehicle and the second imaging configuration allows the camera to capture an interior of the target vehicle with sufficient detail to determine the non-compliance event.
2. The method of claim 1, further comprising triggering, by the at least one processor, illumination of the interior of the target vehicle using an infrared illumination source for capturing the second image.
3. The method of claim 2, wherein the triggering is scheduled based on the camera processing the first image and detecting the target vehicle in the first image.
4. The method of claim 2, wherein the triggering is scheduled based on a lateral speed of the target vehicle determined based on a plurality of images captured before the second image.
5. The method of claim 2, wherein the infrared illumination source generates infrared light with a wavelength between 730nm and 930nm.
6. The method of any one of claims 1 to 5, wherein the time elapsed between capture of the first image and the second image is between 1ms and 40ms.
7. The method of any one of claims 1 to 6, wherein the second imaging configuration comprises a high gain configuration of the camera.
8. The method of claim 7, wherein the high gain configuration comprises a gain configuration in the range of 2dB to 48dB.
9. The method of any one of claims 1 to 8, wherein the at least one processor processes the second image using a trained machine learning model to determine the non-compliance event.
10. The method of claim 9, wherein the trained machine learning model comprises at least one object detection neural network.
11. The method of any one of claims 1 to 10 wherein the non-compliance event comprises one or more of: use of phone by driver, failure to use seat belt, failure to place hands on the steering wheel, excess number of occupants in target vehicle and unrestrained animals in target vehicle.
12. The method of any one of claims 1 to 11, further comprising:
obtaining a daylight intensity indication from a daylight sensor, and modifying the first imaging configuration and the second imaging configuration based on the daylight intensity indication.
13. The method of any one of claims 1 to 12, wherein the first image or the second image are infrared images, or wherein both the first image and the second image are infrared images.
14. A system for determining vehicle occupant non-compliance, the system including:
a computing device positioned near a road, the computing device including at least one processor and a memory storing executable program code to process images, the computing device being in communication with a remote server over a network; and
a camera in communication with the computing device and positioned to capture infrared images of vehicles on the road in a field of view of the infrared camera, wherein the camera is responsive to commands from the computing device to capture successive first and second infrared images of a vehicle, wherein the camera is configured to cooperate with a light source to illuminate the vehicle with infrared light of a first infrared light intensity immediately prior to capturing the first infrared image and to illuminate the vehicle with a second infrared light intensity immediately prior to capturing the second infrared image, wherein the first infrared light intensity is different from the second infrared light intensity;
wherein the computing device is configured to: receive and process the first and second infrared images from the infrared camera; to determine based on at least one of the processed first and second images whether a driver of the vehicle is non compliant with a rule; and to transmit non-compliance information to the remote server when the at least one of the processed first and second images indicate that a driver of the vehicle is non-compliant with the rule.
15. A method for determining vehicle occupant non-compliance, the method comprising:
capturing successive first and second infrared images of a vehicle on a road by a camera responsive to commands from a computing device configured to communicate with the camera;
receiving by the computing device the first and second infrared images;
processing by the computing device the first and second infrared images to determine based on at least one of the processed first and second images whether a driver of the vehicle is non-compliant with a rule; and
transmitting by the computing device non-compliance information to a remote server when the at least one of the processed first and second images indicate that a driver of the vehicle is non-compliant with the rule;
wherein the camera is configured to cooperate with a light source to illuminate the vehicle with infrared light of a first infrared light intensity immediately prior to capturing the first infrared image and to illuminate the vehicle with a second infrared light intensity immediately prior to capturing the second infrared image, wherein the first infrared light intensity is different from the second infrared light intensity.
16. A system for determining vehicle occupant non-compliance, the system comprising:
a camera configured to capture images of vehicles on the road, and
a computing device in communication with the camera,
the computing device comprising at least one processor and a memory in communication with the processor, the memory comprising program code executable by the at least one processor to configure the at least one processor to:
receive a first image of a target vehicle captured using the camera configured with a first imaging configuration,
configure the camera with a second imaging configuration,
receive a second image of the target vehicle captured using the camera configured with the second imaging configuration,
process the first image to determine a licence plate number of the target vehicle, and
process the second image to determine a non-compliance event,
wherein the first imaging configuration allows the camera to legibly capture the licence plate of the target vehicle and the second imaging configuration allows the camera to capture an interior of the target vehicle with sufficient detail to determine the non-compliance event.
17. The system of claim 16, wherein the system further comprises an infrared illumination source operable by the computing device, and
wherein the memory comprises program code executable by the at least one processor to further configure the at least one processor to trigger the infrared illumination source to illuminate the interior of the target vehicle before the second image is captured.
18. The system of claim 16, wherein the system further comprises an infrared illumination source operable by the camera, and
wherein the memory comprises program code executable by the at least one processor to further configure the camera to trigger the infrared illumination source to illuminate the interior of the target vehicle before the second image is captured.
19. A system for vehicle occupant surveillance, the system comprising:
a dynamically configurable camera in communication with a computing device,
the computing device comprising a memory in communication with at least one processor, wherein the memory comprises program code which when executed by the at least one processor configures the at least one processor to:
receive a first image of a target vehicle captured using the camera configured with a first imaging configuration,
configure the camera with a second imaging configuration,
receive a second image of the target vehicle captured using the camera configured with the second imaging configuration,
process the first image to determine a licence plate number of the target vehicle, and
process the second image to determine a non-compliance event,
wherein the first imaging configuration allows the camera to legibly capture the licence plate of the target vehicle and the second imaging configuration allows the camera to capture an interior of the target vehicle with sufficient detail to determine the non-compliance event.
20. The system of claim 19, wherein the system further comprises an infrared illumination source positioned to illuminate a field of view of the dynamically configurable camera, and the at least one processor is further configured to trigger illumination of the interior of the target vehicle using an infrared illumination source for capturing the second image.
21. The system of claim 20, wherein the triggering is scheduled based on the dynamically configurable camera processing the first image and detecting the target vehicle in the first image.
22. The system of claim 20, wherein the triggering is scheduled based on a lateral speed of the target vehicle determined based on a plurality of images captured before the second image.
23. The system of claim 20, wherein the infrared illumination source generates infrared light with a wavelength between 730nm and 930nm.
24. The system of any one of claims 19 to 23, wherein the time elapsed between capture of the first image and the second image is between 1ms and 40ms.
25. The system of any one of claims 19 to 24, wherein the second imaging configuration comprises a high gain configuration of the camera.
26. The system of claim 25, wherein the high gain configuration comprises a gain configuration in the range of 2dB to 48dB.
27. The system of any one of claims 19 to 26, wherein the memory comprises a trained machine learning model and the at least one processor is further configured to process the second image using the trained machine learning model to determine the non-compliance event.
28. The system of claim 27, wherein the trained machine learning model comprises at least one object detection neural network.
29. The system of any one of claims 19 to 28 wherein the non-compliance event comprises one or more of: use of phone by driver, failure to use seat belt, failure to place hands on the steering wheel, excess number of occupants in target vehicle and unrestrained animals in target vehicle.
30. The system of any one of claims 19 to 29, wherein the at least one processor is further configured to:
receive a daylight intensity indication from a daylight sensor in communication with the computing device, and
update the first imaging configuration and the second imaging configuration based on the daylight intensity indication.
31. The system of any one of claims 19 to 30, wherein the first image or the second image are infrared images, or wherein both the first image and the second image are infrared images.
32. A computer-implemented method for vehicle occupant surveillance, the method comprising:
capturing a first image of a target vehicle using a camera configured with a first imaging configuration;
making the first image available to at least one processor;
processing the first image by at least one processor to determine an image clarity metric;
determining by the at least one processor a second imaging configuration based on the image clarity metric;
configuring the camera with a second imaging configuration;
capturing a second image of the target vehicle using the camera configured with the second imaging configuration;
making the second image available to at least one processor; and
processing the second image by the at least one processor to determine a licence plate number of the target vehicle or a non-compliance event.
33. The method of claim 32, wherein the first imaging configuration is different from the second imaging configuration.
34. The method of claim 32 or claim 33, further comprising:
configuring the camera with a third imaging configuration; and
capturing a third image of the target vehicle using the camera configured with the third imaging configuration.
35. The method of claim 34, further comprising determining the third imaging configuration based on the image quality metric.
36. The method of claim 34 or claim 35, wherein the third imaging configuration is different from the second imaging configuration.
37. A system comprising means to perform the method of any one of claims 1 to 13, 15 and 32 to 36.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/AU2021/051136 WO2022178568A1 (en) | 2021-02-26 | 2021-09-30 | Systems and methods for vehicle occupant compliance monitoring |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2021900527 | 2021-02-26 | ||
AU2021900527A AU2021900527A0 (en) | 2021-02-26 | Systems and Methods for vehicle occupant compliance monitoring |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2021106921A4 true AU2021106921A4 (en) | 2021-11-25 |
Family
ID=78610658
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2021106921A Active AU2021106921A4 (en) | 2021-02-26 | 2021-08-24 | Systems and Methods for vehicle occupant compliance monitoring |
Country Status (2)
Country | Link |
---|---|
AU (1) | AU2021106921A4 (en) |
WO (1) | WO2022178568A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4439491A1 (en) * | 2023-03-30 | 2024-10-02 | Aptiv Technologies AG | Visual detection of hands on steering wheel |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11610481B2 (en) * | 2018-07-19 | 2023-03-21 | Acusensus Ip Pty Ltd | Infringement detection method, device and system |
JP2020056839A (en) * | 2018-09-28 | 2020-04-09 | パナソニック株式会社 | Imaging apparatus |
JP2020057869A (en) * | 2018-09-28 | 2020-04-09 | パナソニックi−PROセンシングソリューションズ株式会社 | Imaging apparatus |
JP7444423B2 (en) * | 2019-05-20 | 2024-03-06 | i-PRO株式会社 | Vehicle monitoring system and vehicle monitoring method |
Also Published As
Publication number | Publication date |
---|---|
WO2022178568A1 (en) | 2022-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20190289282A1 (en) | Exposure coordination for multiple cameras | |
US8379924B2 (en) | Real time environment model generation system | |
US11651594B2 (en) | Systems and methods of legibly capturing vehicle markings | |
EP3825732A1 (en) | Methods and systems for computer-based determining of presence of objects | |
AU2021100546A4 (en) | Infringement detection method, device and system | |
TW202101965A (en) | Sensor device and signal processing method | |
US20130162834A1 (en) | Integrated video quantization | |
US20220067394A1 (en) | Systems and Methods for Rapid License Plate Reading | |
EP4181083A1 (en) | Stopped vehicle detection and validation systems and methods | |
AU2021106921A4 (en) | Systems and Methods for vehicle occupant compliance monitoring | |
US11917308B2 (en) | Imaging device, image recording device, and imaging method for capturing a predetermined event | |
WO2021029262A1 (en) | Device, measurement device, distance measurement system and method | |
KR20190136515A (en) | Vehicle recognition device | |
WO2020153272A1 (en) | Measuring device, ranging device, and method of measurement | |
Shahrear et al. | An automatic traffic rules violation detection and number plate recognition system for Bangladesh | |
US20220268890A1 (en) | Measuring device and distance measuring device | |
JP7103324B2 (en) | Anomaly detection device for object recognition and anomaly detection program for object recognition | |
Shreyas et al. | IOT Based Smart Signal | |
CN109076168B (en) | Control device, control method, and computer-readable medium | |
Tan et al. | Thermal Infrared Technology-Based Traffic Target Detection in Inclement Weather | |
US11625924B2 (en) | Vehicle parking monitoring systems and methods | |
WO2023178510A1 (en) | Image processing method, device, and system and movable platform | |
WO2024018909A1 (en) | State estimation device, state estimation method, and state estimation program | |
WO2023286375A1 (en) | Information processing system, information processing device, and information processing method | |
KR102145409B1 (en) | System for visibility measurement with vehicle speed measurement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGI | Letters patent sealed or granted (innovation patent) |