GB2539646A - Image capture device and associated method - Google Patents

Image capture device and associated method

Info

Publication number
GB2539646A
GB2539646A
Authority
GB
United Kingdom
Prior art keywords
frames
capture device
image capture
incident
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1510644.6A
Other versions
GB2539646B (en)
GB201510644D0 (en)
Inventor
Gilbert Wayne
Cowper Steve
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
RISK TELEMATICS UK Ltd
Original Assignee
RISK TELEMATICS UK Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by RISK TELEMATICS UK Ltd filed Critical RISK TELEMATICS UK Ltd
Priority to GB1510644.6A priority Critical patent/GB2539646B/en
Publication of GB201510644D0 publication Critical patent/GB201510644D0/en
Publication of GB2539646A publication Critical patent/GB2539646A/en
Application granted granted Critical
Publication of GB2539646B publication Critical patent/GB2539646B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0841 Registering performance data
    • G07C5/085 Registering performance data using electronic data carriers
    • G07C5/0866 Registering performance data using electronic data carriers the electronic data carrier being a digital video recorder in combination with video camera
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B62 LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62D MOTOR VEHICLES; TRAILERS
    • B62D41/00 Fittings for identifying vehicles in case of collision; Fittings for marking or recording collision areas

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)
  • Studio Devices (AREA)

Abstract

An image capture device for a road vehicle comprises a plenoptic camera and is configured to capture successive plenoptic frames from the camera. A buffer stores the frames received and replaces older frames with newer frames. In response to an event, such as an accident, the frames prior to the event are captured and stored.

Description

Image Capture Device and Associated Method The invention relates to an image capture device for a road vehicle and an associated method of image analysis.
Dashboard- or helmet-mounted video cameras are used by motorists and cyclists in order to record their driving and capture evidence related to road traffic incidents. Image analysis is used in vehicle impact testing and real-life crash investigation in order to better understand how an impact occurred and, in the case of real crash data, to determine whether a particular party is at fault.
A problem encountered with conventional solutions is that it can be difficult to determine some types of information regarding an incident using conventional video footage.
According to a first aspect of the invention there is provided an image capture device for a road vehicle, comprising: a plenoptic camera configured to capture a succession of plenoptic frames and a buffer configured to: store the succession of frames received from the plenoptic camera, replace older frames stored in the buffer with newer frames from the plenoptic camera, and in response to an event, capture a set of frames preceding the event.
The use of a plenoptic camera enables further detail regarding an incident scene to be captured. Light field images generated by a plenoptic camera can be interrogated after capture in order to change the perspective of the view, for example. It has been found that such flexibility in image processing is of particular use in the analysis of road vehicle incidents. A difficulty with using a plenoptic camera to acquire high-resolution images is that the camera may generate large quantities of data related to a scene. This difficulty may be addressed by the buffer storing a recent sequence of frames on an on-going basis and capturing the most recent set of frames in response to an event, such as a road traffic incident occurring. The use of the buffer also reduces the storage requirements of the device irrespective of the resolution of the light field camera.
The image capture device may comprise one or more sensors for providing data related to a motion or a position of the road vehicle. The one or more sensors may comprise an accelerometer for determining an acceleration of the road vehicle. The one or more sensors may comprise a position determining unit configured to determine a geographic position of the road vehicle. The image capture device may comprise an event detection unit. The event detection unit may be configured to determine whether an event has occurred in accordance with the sensor data and to command the buffer to capture the set of frames in response to the occurrence of the event. The event detection unit may be configured to determine whether the sensor data is indicative of a road vehicle incident. The event detection unit may be configured to command the buffer to capture the set of frames in response to the occurrence of the incident. The event detection unit may be configured to determine whether the sensor data is indicative of the road vehicle entering a particular geographic area. The event detection unit may be configured to command the buffer to capture the set of frames in response to entering the particular geographic area.
The image capture device may be configured to be installed in a vehicle. The buffer may store and/or discard the frames on a first-in-first-out basis. The plenoptic camera may be a plenoptic video camera.
The buffer may be configured to store new frames from the camera for a fixed period of time after the occurrence of the incident.
According to a further aspect of the invention there is provided a road vehicle comprising the image capture device.
According to a further aspect of the invention there is provided an incident detection unit configured to receive motion sensor data from the image capture device, determine if an incident has occurred in accordance with the motion sensor data and command the buffer of the image capture device to suspend storage or replacement of frames in response to the occurrence of the incident.
According to a further aspect of the invention there is provided an automated method of analysing a road vehicle incident comprising: receiving one or more frames captured by a vehicle-mounted plenoptic camera; generating one or more images associated with each frame; and extracting one or more features from the one or more images in order to determine information regarding the incident.
The method may comprise generating a plurality of images associated with each frame. The method may comprise selecting one or more images by identifying a standardized feature of a vehicle and determining a level of focus for each image of the frame based on the appearance of the standardized feature.
The method may comprise determining a distance associated with an image in accordance with the appearance of the standardized feature in the image. The standardized feature may be a licence plate or a character of a licence plate. The extracted feature may include one or more of: a focal distance associated with the one or more images, a human face, one or more licence plate characters, a sign indicative of the vehicle make or model. The method may comprise comparing the one or more licence plate characters to a database of licence plate details in order to obtain information regarding a vehicle associated with the licence plate.
The method may comprise determining a closing rate between the plenoptic camera and an object based on a comparison of a first distance associated with a selected image of the object in a first frame and a second distance associated with a selected image of the object in a second frame. The method may comprise providing a composite closing rate based on the determined closing rate and a closing rate calculated using accelerometer data. The method may comprise determining a collision energy based on the determined closing rate.
There may be provided a computer program, which when run on a computer, causes the computer to configure any apparatus, including a circuit, controller or device disclosed herein or perform any method disclosed herein. The computer program may be a software implementation, and the computer may be considered as any appropriate hardware, including a digital signal processor, a microcontroller, and an implementation in read only memory (ROM), erasable programmable read only memory (EPROM) or electronically erasable programmable read only memory (EEPROM), as non-limiting examples. The software may be an assembly program.
The computer program may be provided on a computer readable medium, which may be a physical computer readable medium such as a disc or a memory device, or may be embodied as a transient signal. Such a transient signal may be a network download, including an internet download.
Embodiments of the invention will now be described, by way of example, with reference to the following figures, in which: Figure 1a illustrates a road vehicle comprising an image capture device; Figure 1b illustrates an image capture device with components distributed on a road vehicle and remotely from the road vehicle; and Figure 2 illustrates a method of analysing a road vehicle incident using data captured by an image capture device such as that described with reference to figure 1.
Figure 1 illustrates a road vehicle 102 comprising an image capture device 100. In this example, the road vehicle 102 is a van. Other examples of road vehicles 102 include motor vehicles such as cars or trucks, for example.
The image capture device 100 comprises a plenoptic camera 104 and a buffer 106. In this example, the entirety of the image capture device 100, including the plenoptic camera 104 and the buffer 106, is installed in the road vehicle 102. Alternatively, the image capture device 100 may be a portable device for placing in the road vehicle 102. The image capture device 100 is distributed around the vehicle in this example. Alternatively, the image capture device 100 may be provided in a single unit.
The plenoptic camera 104 in figure 1a is integrated with a front-facing portion of the vehicle 102, such as a bumper assembly, in order to record a scene ahead of the vehicle. Alternatively, the plenoptic camera could be rear- or side-facing. A plurality of plenoptic cameras 104 may be provided in order to acquire different views from the vehicle 102.
Plenoptic cameras are also known as light field cameras. The plenoptic camera 104 may use a microlens array to capture 4D (three spatial dimensions and a time dimension) light field information of a scene. Light field images generated by a plenoptic camera can be interrogated after capture in order to change the perspective of the view, for example. It has been found that such flexibility in image processing is of particular use in the analysis of road vehicle incidents.
The plenoptic camera 104 is configured to capture a succession of plenoptic frames. The plenoptic camera 104 may record frames of plenoptic video footage. Video footage may be considered to have a minimum frame rate of 4 frames per second. The plenoptic camera 104 may have a frame rate of at least 4 frames per second. In general, the present disclosure is not limited to a particular type of plenoptic camera.
Plenoptic video allows a frame sequence to be viewed as a function of time in the same way as conventional video, but with the option to also move to a variety of depth planes within the frame in order to maximise the focus on the object of interest. For example, images from a plenoptic camera may be used to track an object within a field of view, such as a number plate. The plenoptic camera provides an equivalent of an M × N matrix, where M represents the number of frames and N represents the depth component, which may be displayed as separate images associated with a frame.
Simple example of tracing an object through 8 frames of plenoptic video with the depth plane being varied to keep the object in focus.

The buffer 106 is configured to (i) store the succession of frames received from the plenoptic camera 104 and (ii) replace older frames stored in the buffer 106 with newer frames from the plenoptic camera 104 on a continuous basis. Each frame corresponds to a particular instant in time.
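The rolling-buffer behaviour described above can be sketched as follows. This is a minimal illustration only: the `FrameBuffer` name, the three-frame capacity and the integer stand-ins for frames are assumptions for the example, not details taken from the patent.

```python
from collections import deque

class FrameBuffer:
    """Rolling buffer: keeps only the most recent `capacity` frames,
    discarding the oldest on a first-in-first-out basis."""
    def __init__(self, capacity):
        # deque with maxlen drops the oldest entry automatically
        self._frames = deque(maxlen=capacity)

    def store(self, frame):
        self._frames.append(frame)

    def capture(self):
        """On an event, take a snapshot of the frames preceding it."""
        return list(self._frames)

buf = FrameBuffer(capacity=3)
for frame_id in range(5):        # frames 0..4 arrive in succession
    buf.store(frame_id)
event_frames = buf.capture()     # only the 3 newest remain
print(event_frames)              # [2, 3, 4]
```

A real implementation would size the buffer by recording duration and frame rate rather than by frame count, but the first-in-first-out replacement is the same.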
The image capture device 100 has a motion sensor 108 for providing data related to a motion of the vehicle and a position determining unit 110 for providing data related to a geographic position of the road vehicle 102. The motion sensor may be a velocity or direction heading sensor or an accelerometer for determining an acceleration of the road vehicle 102, such as a 1-, 2-or 3-dimensional accelerometer, for example. The position determining unit 110 may be a ground unit for a satellite navigation system, such as GPS, GLONASS or Galileo for example.
An event detection unit 112 may be configured to (i) determine whether an event has occurred in accordance with the data from the sensors or in response to user input and (ii) command the buffer 106 to capture a set of frames in response to the occurrence of the event. The captured set of frames may relate to a period of time that precedes or follows the event or a period of time that encompasses the event. The use of the buffer can therefore reduce the storage requirements of the device whilst ensuring that information regarding an event of interest is captured.
Capturing an event may comprise suspending storage or replacement of frames in the buffer 106. In that case, a set of frames is held in the buffer 106. The captured set of frames that is held in the buffer 106 may not be replaced by newer frames from the plenoptic camera 104 until the event has been transferred to another storage medium or user input is received indicating that the event should not be recorded. Alternatively, capturing an event may comprise moving a set of frames from the buffer 106 to another storage medium, such as a permanent storage medium.

An optional telemetry unit 114 may be provided in order to transfer frames stored in the buffer and/or sensor data to a location that is remote from the vehicle.
In a first example, the event detection unit 112 may be configured to determine whether the sensor data is indicative of a road vehicle incident and to command the buffer 106 to capture the set of frames in response to the occurrence of the incident. A collision between the road vehicle 102 and an object is an example of a road vehicle 102 incident, which may be an accident. The event detection unit 112 may determine whether a road vehicle 102 incident has occurred based on the magnitude of a sensed acceleration from the motion sensor 108 reaching or exceeding a predetermined threshold.
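The acceleration-threshold test in the first example can be sketched as below. The 4 g threshold is purely an illustrative value chosen for the example; the patent does not specify a figure.

```python
import math

G = 9.81                 # standard gravity, m/s^2
THRESHOLD = 4 * G        # illustrative incident threshold, not from the patent

def is_incident(ax, ay, az):
    """Return True when the magnitude of the sensed 3-axis acceleration
    (m/s^2) reaches or exceeds the predetermined threshold."""
    return math.sqrt(ax**2 + ay**2 + az**2) >= THRESHOLD

print(is_incident(1.0, 0.5, 9.8))    # normal driving: False
print(is_incident(50.0, 10.0, 9.8))  # hard impact: True
```

With a 1- or 2-axis accelerometer the same comparison applies, with the missing components set to zero.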
In a second example, the event detection unit 112 may be configured to determine whether the sensor data from the positioning device 110 is indicative of the road vehicle 102 entering a particular geographic area and to command the buffer 106 to capture the set of frames in response to entering the particular geographic area.
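The geographic test in the second example amounts to a geofence check. A sketch using a rectangular bounding box is shown below; the coordinates are hypothetical, and a production system might use a polygon or radius test instead.

```python
def in_geofence(lat, lon, area):
    """Return True when (lat, lon) lies inside a rectangular area
    given as (lat_min, lat_max, lon_min, lon_max)."""
    lat_min, lat_max, lon_min, lon_max = area
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

# Hypothetical 'particular geographic area' (illustrative coordinates)
AREA = (52.94, 52.97, -1.17, -1.13)
print(in_geofence(52.95, -1.15, AREA))  # inside the area: True
print(in_geofence(51.50, -0.12, AREA))  # well outside:    False
```

The event detection unit would run this check against each position fix from the position determining unit and command a capture on the transition from outside to inside.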
In a fourth example, the event detection unit 112 may be configured to receive user input that indicates that an event has taken place. The first, second and third examples relate to automatic detection using the event detection unit 112, whereas the fourth example relates to manual detection. The incident detection unit 112 may further perform automated analysis such as that described with reference to the method of figure 2 below.
Figure 1b illustrates an image capture device with components distributed on a road vehicle and remotely from the road vehicle. In this example, the buffer 106 and incident detection unit 112 are separate from the vehicle and may be provided remotely.
The remaining components of the image capture device 100 are installed in the vehicle.
That is, the plenoptic camera 104, motion sensor 108, position determining unit 110 and telemetry unit 114 are installed in the vehicle.
In this case the telemetry unit 114 may be configured to continuously transmit data from the plenoptic camera 104, motion sensor 108 and position determining unit 110 while the vehicle 102 is in use. The transmitter may start or stop transmitting in response to the data it receives from the other components of the image capture device 100.
The buffer 106 and incident detection unit 112 are configured to receive the data from the transmitter via a wireless link, which may be provided using a mobile telephony standard such as 3G or 4G, for example. As such, the buffer 106 and incident detection unit 112 may be provided as applications on a server. A single server may provide buffering and incident detection functionality for a plurality of vehicles.
Figure 2 illustrates an automated method 200 of analysing a road vehicle incident. The method comprises receiving 202 one or more frames that have been captured by a vehicle-mounted plenoptic camera. One or more images are generated 204 for each frame. The images within a frame may each be at a different depth of focus or correspond to different views of the scene.
In the case where a plurality of images associated with each frame is generated, the method 200 may comprise selecting 206 one of the images by identifying a standardized feature of a vehicle involved in the road vehicle incident and determining a level of focus for each image of the frame based on the appearance of the standardized feature. A standardised feature relates to an object with a uniform shape or appearance dictated by a legal or commercial standard. A road sign, licence plate or a character of a licence plate have each been found to be convenient standardised features for road vehicle event analysis applications. A licence plate may also be referred to as a registration plate. The term "licence plate" may refer to any sign that can be used to uniquely identify a particular vehicle. Existing algorithms for locating the position of an object such as a number plate within a video may be adapted for use in this application by sequentially applying the algorithm to each image for each frame.
For example, in order to identify a feature within the scene, an OCR algorithm may be applied to each of the following: Frame 1, Depth 1; Frame 1, Depth 2; Frame 1, Depth 3; Frame 1, Depth 4; Frame 2, Depth 1; Frame 2, Depth 2; ... etc. Well-established image processing techniques such as, but not limited to, edge detection or SURF (Speeded-Up Robust Features) algorithms may be applied to any feature in the image, and not only standard features, in order to determine a level of focus. An image of a frame that is most in-focus may be selected. Alternatively, all images that have an acceptable level of focus may be selected. As a further alternative, all images of a frame may be selected.
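Selecting the most in-focus depth plane of a frame can be sketched with a simple gradient-energy sharpness measure, a crude stand-in for the edge-detection techniques mentioned above. The tiny 3×3 greyscale images and the `focus_score` function are invented for the example.

```python
def focus_score(image):
    """Crude sharpness measure: sum of squared horizontal and vertical
    pixel differences. In-focus images have stronger edges, so a higher score."""
    h, w = len(image), len(image[0])
    score = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                score += (image[y][x + 1] - image[y][x]) ** 2
            if y + 1 < h:
                score += (image[y + 1][x] - image[y][x]) ** 2
    return score

# Two depth-plane images of the same frame: one blurred, one sharp
blurred = [[10, 12, 14], [10, 12, 14], [10, 12, 14]]
sharp   = [[0, 255, 0], [0, 255, 0], [0, 255, 0]]
depth_planes = {1: blurred, 2: sharp}
best = max(depth_planes, key=lambda d: focus_score(depth_planes[d]))
print(best)  # 2: the sharp depth plane is selected
```

In practice the score would be computed only over the region containing the standardised feature (e.g. the licence plate), so that the depth plane is chosen to bring that feature, rather than the whole scene, into focus.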
Algorithms such as SPASH (Sparse Shape Reconstruction) use a known dictionary of possible target shapes (such as the 35 alphanumeric characters available on standard UK licence plates) to identify characters in situations where the image is noisy or missing pixels (such as in low light conditions) or where the image is partially occluded.
One or more features from the one or more selected images are extracted 208 in order to determine information regarding the incident. The extracted feature may include one or more of: a focal distance associated with the one or more images, a human face, one or more licence plate characters, a sign indicative of the vehicle make or model. A badge is an example of a sign indicative of a vehicle make or model. Using the properties of the depth field within the plenoptic video, a mesh of points representing a particular vehicle or other objects can be identified.
A number of processes may be applied 210 to the extracted feature in order to provide information regarding the incident.
One or more licence plate characters may be compared to a database of licence plate details in order to obtain information regarding a vehicle associated with the licence plate, such as a make, model, colour or other attribute of the vehicle. The information regarding the vehicle may comprise information regarding an owner or user of the vehicle, or a report of a previous incident involving the vehicle. This information may then be stored as metadata associated with the event or with the buffered frames associated with the event.
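The database comparison can be sketched as a simple keyed lookup. The in-memory dictionary, plate string and vehicle details below are all invented for the example; a real system would query a vehicle-registration service.

```python
# Hypothetical stand-in for a licence plate database
PLATE_DB = {
    "AB12CDE": {"make": "Ford", "model": "Transit", "colour": "white"},
}

def lookup_plate(characters):
    """Normalise recognised plate characters and return vehicle
    details, or None if the plate is not in the database."""
    return PLATE_DB.get(characters.replace(" ", "").upper())

details = lookup_plate("ab12 cde")  # OCR output may vary in case/spacing
print(details["make"])              # Ford
print(lookup_plate("XX99XXX"))      # unknown plate: None
```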
Using a combination of established point tracking algorithms and edge/corner detection techniques, measurements of data such as distances, approach velocities and delta-V (a standard automotive measure of collision severity) can then be derived from the depth information using basic mathematical techniques to provide additional data.
A distance associated with an image may be determined in accordance with the appearance of the standardized feature in the image. A closing rate between the plenoptic camera and an object may be determined based on a comparison of a first distance associated with a selected image of the object in a first frame and a second distance associated with a selected image of the object in a second frame. The collision energy may be determined using known physical models based on the closing rate determined from such analysis.
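The closing rate and a simple kinetic-energy estimate can be sketched as follows. The 0.25 s frame interval matches the minimum 4 frames per second mentioned earlier; the distances and the 1500 kg vehicle mass are illustrative values, and E = ½mv² is a deliberately simple physical model.

```python
def closing_rate(d1, d2, frame_interval):
    """Closing rate (m/s) from the object's distance in two frames:
    d1 in the earlier frame, d2 in the later frame (metres)."""
    return (d1 - d2) / frame_interval

def collision_energy(mass, rate):
    """Kinetic energy of impact (J) under the simple model E = 1/2 m v^2."""
    return 0.5 * mass * rate ** 2

# Object at 20 m in one frame and 18 m in the next, 0.25 s later
rate = closing_rate(20.0, 18.0, 0.25)   # 8.0 m/s
print(rate)
print(collision_energy(1500.0, rate))   # 48000.0 J
```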
Data from the camera system could also be combined with the acceleration data as a secondary verification or calibration technique. For example, a composite closing rate may be calculated based on the closing rate and a closing rate calculated using accelerometer data. The use of a composite closing rate may provide a more accurate estimate of the actual closing rate and so enable a more accurate determination of the collision energy of an incident, for example.
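One simple way to form such a composite is a weighted average of the two estimates. The equal default weighting below is an assumption for the sketch; in practice the weights would reflect the relative confidence in each sensor.

```python
def composite_closing_rate(camera_rate, accel_rate, camera_weight=0.5):
    """Blend camera-derived and accelerometer-derived closing rates (m/s).
    camera_weight is the fraction of trust placed in the camera estimate;
    the 0.5 default (equal trust) is an illustrative assumption."""
    return camera_weight * camera_rate + (1 - camera_weight) * accel_rate

print(composite_closing_rate(20.0, 22.0))  # 21.0
```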
This video derived data allows information to be gained on third party vehicles in addition to the vehicle in which the plenoptic camera is placed. This provides a major advantage over the use of purely acceleration-based crash recording systems within just a single vehicle.
In the event of a collision which is not directly visible to the image capture device, such as in a vehicle equipped with a single forward-facing image capture device where a collision occurs on a part of the vehicle which is out of the field of view, the data recorded by the image capture device still enables the rate of change of position of fixed items within the depth field frame (such as lamp posts, trees and other structures) to be measured, and so provides an indication of the motions or energy involved in the collision.

Claims (25)

  1. An image capture device for a road vehicle, comprising: a plenoptic camera configured to capture a succession of plenoptic frames; and a buffer configured to: store the succession of frames received from the plenoptic camera, replace older frames stored in the buffer with newer frames from the plenoptic camera, and in response to an event, capture a set of frames preceding the event.
  2. The image capture device of claim 1 comprising one or more sensors for providing data related to a motion or a position of the road vehicle.
  3. The image capture device of claim 2 wherein the one or more sensors comprise an accelerometer for determining an acceleration of the road vehicle.
  4. The image capture device of claim 2 wherein the one or more sensors comprise a position determining unit configured to determine a geographic position of the road vehicle.
  5. The image capture device of claim 2 comprising an event detection unit, wherein the event detection unit is configured to determine whether an event has occurred in accordance with the sensor data and to command the buffer to capture the set of frames in response to the occurrence of the event.
  6. The image capture device of claim 5 wherein the event detection unit is configured to determine whether the sensor data is indicative of a road vehicle incident and to command the buffer to capture the set of frames in response to the occurrence of the incident.
  7. The image capture device of claim 5 wherein the event detection unit is configured to determine whether the sensor data is indicative of the road vehicle entering a particular geographic area and to command the buffer to capture the set of frames in response to entering the particular geographic area.
  8. The image capture device of claim 1 configured to be installed in a vehicle.
  9. The image capture device of claim 1 wherein the buffer stores and discards the frames on a first-in-first-out basis.
  10. The image capture device of claim 1 wherein the plenoptic camera is a plenoptic video camera.
  11. The image capture device of claim 1 wherein the buffer is configured to store new frames from the camera for a fixed period of time after the occurrence of the incident.
  12. A road vehicle comprising the image capture device of claim 1.
  13. An incident detection unit configured to receive motion sensor data from the image capture device of claim 1, determine if an incident has occurred in accordance with the motion sensor data and command the buffer of the image capture device to suspend storage or replacement of frames in response to the occurrence of the incident.
  14. An automated method of analysing a road vehicle incident comprising: receiving one or more [pre-recorded] frames captured by a vehicle-mounted plenoptic camera; generating one or more images associated with each frame; and extracting one or more features from the one or more images in order to determine information regarding the incident.
  15. The method of claim 14 comprising: generating a plurality of images associated with each frame; and selecting one or more images by identifying a standardized feature of a vehicle and determining a level of focus for each image of the frame based on the appearance of the standardized feature.
  16. The method of claim 15 comprising determining a distance associated with an image in accordance with the appearance of the standardized feature in the image.
  17. The method of claim 15 wherein the standardized feature is a licence plate or a character of a licence plate.
  18. The method of claim 14 wherein the extracted feature includes one or more of: a focal distance associated with the one or more images, a human face, one or more licence plate characters, a sign indicative of the vehicle make or model.
  19. The method of claim 18 comprising comparing the one or more licence plate characters to a database of licence plate details in order to obtain information regarding a vehicle associated with the licence plate.
  20. The method of claim 16 or 18 comprising determining a closing rate between the plenoptic camera and an object based on a comparison of a first distance associated with a selected image of the object in a first frame and a second distance associated with a selected image of the object in a second frame.
  21. The method of claim 20 comprising providing a composite closing rate based on the determined closing rate and a closing rate calculated using accelerometer data.
  22. The method of claim 20 comprising determining a collision energy based on the determined closing rate.
  23. A computer or computer program configured to perform the method of claim 14.
  24. An image capture device or incident detection unit substantially as described herein with reference to the accompanying drawings.
  25. A method substantially as described herein with reference to the accompanying drawings.
GB1510644.6A 2015-06-17 2015-06-17 Image capture device and associated method Expired - Fee Related GB2539646B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1510644.6A GB2539646B (en) 2015-06-17 2015-06-17 Image capture device and associated method


Publications (3)

Publication Number Publication Date
GB201510644D0 GB201510644D0 (en) 2015-07-29
GB2539646A true GB2539646A (en) 2016-12-28
GB2539646B GB2539646B (en) 2018-04-18

Family

ID=53784886

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1510644.6A Expired - Fee Related GB2539646B (en) 2015-06-17 2015-06-17 Image capture device and associated method

Country Status (1)

Country Link
GB (1) GB2539646B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11388338B2 (en) 2020-04-24 2022-07-12 Dr. Ing. H.C. F. Porsche Aktiengesellschaft Video processing for vehicle ride
US11396299B2 (en) 2020-04-24 2022-07-26 Dr. Ing. H.C. F. Porsche Aktiengesellschaft Video processing for vehicle ride incorporating biometric data

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872416B (en) * 2019-03-27 2021-07-27 Oppo广东移动通信有限公司 Image transmission interaction method, system and related device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2432737A (en) * 2005-11-28 2007-05-30 Stephen Guy Lacey Jackson Continuous image recording
US20100100276A1 (en) * 2005-05-09 2010-04-22 Nikon Corporation Imaging apparatus and drive recorder system



Also Published As

Publication number Publication date
GB2539646B (en) 2018-04-18
GB201510644D0 (en) 2015-07-29

Similar Documents

Publication Publication Date Title
US9019380B2 (en) Detection of traffic violations
TWI469886B (en) Cooperative event data record system and method
EP2107503A1 (en) Method and device for generating a real time environment model for vehicles
JP7355151B2 (en) Information processing device, information processing method, program
US20180359445A1 (en) Method for Recording Vehicle Driving Information and Creating Vehicle Record by Utilizing Digital Video Shooting
US20110215915A1 (en) Detection system and detecting method for car
JP6950832B2 (en) Position coordinate estimation device, position coordinate estimation method and program
KR20160062880A (en) road traffic information management system for g using camera and radar
EP2107504A1 (en) Method and device for generating a real time environment model for vehicles
US11479260B1 (en) Systems and methods for proximate event capture
WO2016185373A1 (en) Detection and documentation of tailgating and speeding violations
CN109671006A (en) Traffic accident treatment method, apparatus and storage medium
US10860866B2 (en) Systems and methods of legibly capturing vehicle markings
GB2539646A (en) Image capture device and associated method
CN109145888A (en) Demographic method, device, system, electronic equipment, storage medium
KR101760261B1 (en) Around view monitoring system having function of black box and operating method
CN109409173B (en) Driver state monitoring method, system, medium and equipment based on deep learning
US20240013476A1 (en) Traffic event reproduction system, server, traffic event reproduction method, and non-transitory computer readable medium
US11676397B2 (en) System and method for detecting an object collision
FR3018416A1 (en) METHOD AND SYSTEM FOR SUPERVISION, PARTICULARLY APPLIED TO VIDEO SURVEILLANCE
KR101527003B1 (en) Big data system for blackbox
US10906494B1 (en) Systems and methods for predicting occupant location based on vehicular collision
CN108664695B (en) System for simulating vehicle accident and application thereof
JP2022056153A (en) Temporary stop detection device, temporary stop detection system, and temporary stop detection program
CN113220805A (en) Map generation device, recording medium, and map generation method

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20190617