GB2615766A - A collision avoidance system for a vehicle - Google Patents

A collision avoidance system for a vehicle

Info

Publication number
GB2615766A
GB2615766A (application number GB2202091.1A)
Authority
GB
United Kingdom
Prior art keywords
size
target object
risk level
threshold value
determined
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2202091.1A
Other versions
GB202202091D0 (en)
Inventor
Mcguckin Jonathan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ocular Ltd
Original Assignee
Ocular Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ocular Ltd filed Critical Ocular Ltd
Priority to GB2202091.1A priority Critical patent/GB2615766A/en
Publication of GB202202091D0 publication Critical patent/GB202202091D0/en
Publication of GB2615766A publication Critical patent/GB2615766A/en
Pending legal-status Critical Current

Classifications

    • B60W30/095 Predicting travel path or likelihood of collision
    • B60W30/0956 Predicting travel path or likelihood of collision, the prediction being responsive to traffic or environmental parameters
    • B60Q1/525 Optical signalling devices automatically indicating risk of collision between vehicles in traffic or with pedestrians, e.g. after risk assessment using the vehicle sensor data
    • B60Q5/006 Acoustic signal devices automatically actuated, indicating risk of collision between vehicles or with pedestrians
    • B60W30/09 Taking automatic action to avoid collision, e.g. braking and steering
    • B60W60/0017 Planning or execution of driving tasks specially adapted for safety of other traffic participants
    • B60W60/0027 Planning or execution of driving tasks using trajectory prediction for other traffic participants
    • B60Q2400/50 Projected symbol or information, e.g. onto the road or car body
    • B60Q2800/20 Utility vehicles, e.g. for agriculture, construction work
    • B60W2300/121 Fork lift trucks, Clarks
    • B60W2554/4029 Dynamic objects: pedestrians
    • B60W2554/404 Dynamic objects: characteristics
    • B60W2554/4041 Dynamic objects: position
    • B60W2554/4044 Dynamic objects: direction of movement, e.g. backwards
    • B60W2554/80 Spatial relation or speed relative to objects
    • B60W2720/10 Longitudinal speed
    • B60W2754/10 Spatial relation or speed relative to objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

A method of detecting obstacles surrounding the vehicle and determining a probability of collision depending on the size, or the change in size, of the detected objects 30. A nearer object B may be perceived as larger than an object A further away, and an approaching object A’ may appear to be increasing in size. The size of an object may be deduced from the size of the region it occupies in the image frame captured by camera 14. Ultrasonic distance sensors 18 may detect the distance to objects 30. A higher risk level may be designated if the size exceeds a size threshold or the change in size exceeds a size change threshold. A wireless signal (e.g. Bluetooth, Wi-Fi, RFID) may also be detected and used to trigger further actions. Detected objects may be classified into types, including pedestrians, other vehicles, signs, or wireless signal beacons, by machine learning using computer vision 14. Warning outputs may alert the operator at different risk levels, including an audio warning 20 or visual light warning, a projector 22 projecting an image IA, IB, IC, ID onto the ground surface, or performing an emergency manoeuvre.

Description

A Collision Avoidance System for a Vehicle
Field of the Invention
This invention relates to collision avoidance systems for vehicles. The invention relates particularly, but not exclusively, to collision avoidance systems for workplace vehicles such as forklifts.
Background to the Invention
Conventionally, workplace vehicles such as forklifts use passive safety systems, which typically involve the activation of lights or alarms when the vehicle is reversing. As such, the onus is on pedestrians to notice the vehicle and to avoid it. Responsibility also lies with the vehicle operator to look behind when reversing, and attempts can be made to segregate pedestrian areas from vehicle areas. However, these measures are considered to be unreliable. Alternative safety systems exist, including ultrasonic sensor systems and RFID systems. However, ultrasonic systems cannot discriminate between pedestrians and other obstacles, while RFID systems require all pedestrians to wear an RFID tag, which cannot be relied upon and can be difficult to manage at large scale. 3D depth cameras can be used, but such systems are expensive and can have a response time that is too slow to avoid collisions.
It would be desirable to provide improved means for reducing the incidence of vehicle related accidents.
Summary of the Invention
From a first aspect, the invention provides a collision avoidance system for a vehicle, the system comprising: an object detection system configured to detect at least one target object in a field of detection; analysing means for determining a collision risk level; and output means for implementing at least one output action depending on the determined collision risk level, wherein said analysing means is configured to determine said risk level depending on a determined size of the at least one detected target object and/or on a determined change in size of the at least one detected target object.
Preferably, said analysing means is configured to determine said risk level by determining if said determined size exceeds one or more size threshold value, and/or by determining if the determined size of the target object has increased over time.
Preferably, the system supports the adoption of at least two collision risk levels, including a lowest risk level and a highest risk level, and optionally one or more intermediate risk levels between the lowest and highest risk levels.
Preferably, the system is configured to adopt the lowest risk level when the analysing means determines that there are no detected target objects having a determined size that exceeds one or more size threshold value, and/or that there are no detected target objects with a size that has increased over time, optionally by an amount that exceeds one or more size change threshold value.
Preferably, the system is configured to adopt a risk level other than the lowest risk level if the analysing means determines that one or more detected target objects has a determined size that exceeds one or more size threshold value, and/or that one or more detected target objects has a size that has increased over time, optionally by an amount that exceeds the one or more size change threshold value.
Preferably, the system is configured to adopt an intermediate risk level if the analysing means determines that the determined size of a detected target object exceeds a first size threshold value but does not exceed a second size threshold value, the second size threshold value being indicative of a larger determined size than the first size threshold value.
Preferably, the system is configured to adopt an intermediate risk level if the analysing means determines that the determined size of a detected target object exceeds a first size threshold value but does not exceed a second size threshold value, the second size threshold value being indicative of a larger determined size than the first size threshold value, and if the determined size of the object is determined not to be increasing over time, or to be increasing by an amount that is less than the relevant size change threshold value.
Preferably, the system is configured to adopt the highest risk level if the analysing means determines that the determined size of one or more detected target object exceeds the second size threshold value.
Preferably, the system is configured to adopt the highest risk level if the analysing means determines that the determined size of one or more detected target object is increasing over time, for example by an amount that exceeds the relevant size change threshold value.
Optionally, the system includes at least one distance sensor, for example at least one ultrasonic distance sensor, configured to detect objects in said field of detection.
Optionally, the system is configured to adopt the highest risk level if an object detected by said at least one distance sensor is determined to be closer to said vehicle than a threshold distance value.
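By way of a hedged illustration only (the patent does not specify an implementation), the distance-sensor override described above might be sketched as follows; the function name, the metre units and the threshold value are assumptions made for the example.

```python
# Hypothetical sketch: readings from the, or each, ultrasonic distance
# sensor force the highest risk level whenever any object is detected
# closer than a threshold distance, regardless of the camera-based result.

HIGHEST_RISK = 2  # illustrative numeric encoding of the highest risk level

def apply_distance_override(camera_risk_level, distances_m, threshold_m=1.5):
    """Return the final risk level after the distance-sensor check.

    `distances_m` holds the latest sensor readings in metres;
    `threshold_m` is an assumed threshold distance value.
    """
    if any(d < threshold_m for d in distances_m):
        return HIGHEST_RISK
    return camera_risk_level
```

The override sits naturally after the camera-based analysis, so the distance sensors act as an independent safety net rather than a replacement for the vision system.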
Said object detection system is optionally configured to detect one or more types of target object, for example pedestrians and/or one or more type of vehicle and/or one or more types of sign. Typically, one or more respective size threshold value and/or one or more respective size change threshold value is associated with each type of target object.
Optionally, said object detection system is configured to detect at least one type of sign, the system being configured to adopt the highest risk level in response to detection of an instance of said at least one type of sign, preferably in response to detection of said instance of said at least one type of sign with a size above a respective size threshold value.
In preferred embodiments, said object detection system is a computer vision object detection system and said field of detection is a field of vision.
In preferred embodiments, said object detection system comprises at least one digital camera, preferably at least one digital video camera, for capturing digital images and/or digital video of the at least one target object in the field of vision, said object detection system being configured to detect said at least one target object in said digital images and/or digital video.
Preferably, said analysing means is configured to determine the size of the at least one detected target object in the captured digital video or digital image, and/or to determine the change in size of the at least one detected target object in the captured digital video or image.
Preferably, said analysing means is configured to determine the size of the region of the captured digital video or digital image that represents the at least one detected target object, and/or to determine the change in the size of the region of the captured digital video or digital image that represents the at least one detected target object, and wherein determining the size of the region of the captured digital video may involve determining the size of the region of one or more frame of the captured digital video that represents the at least one detected target object.
Typically, the object detection system is configured to detect said at least one target object by detecting objects that belong to a respective class that is associated with said at least one target object. The object detection system may be configured to detect objects that belong to the respective class using supervised machine learning.
In preferred embodiments said output means comprises any one or more of: audio output means; visual output means; means for controlling the operation of the vehicle. Optionally, said output means comprises at least one light projector that is operable to project one or more image onto a ground surface. The output means may be configured to implement one or more different output action depending on the determined risk level.
From another aspect, the invention provides a vehicle comprising a collision avoidance system as claimed in any one of claims 1 to 23, wherein said object detection system is configured so that said field of detection is in a direction of movement of said vehicle.
From another aspect, the invention provides a collision avoidance method for a vehicle, the method comprising: detecting at least one target object in a field of detection; determining a collision risk level depending on a determined size of the at least one detected target object and/or on a determined change in size of the at least one detected target object; and implementing at least one output action depending on the determined collision risk level.
The method may include determining the size of the at least one detected target object in a captured digital video or digital image, and/or determining the change in size of the at least one detected target object in the captured digital video or image, and wherein, preferably, said determining comprises determining the size of the region of the captured digital video or digital image that represents the at least one detected target object, and/or determining the change in the size of the region of the captured digital video or digital image that represents the at least one detected target object, and wherein determining the size of the region of the captured digital video may involve determining the size of the region of one or more frame of the captured digital video that represents the at least one detected target object.
In preferred embodiments, the collision avoidance system comprises one or more camera and an image or video processing system configured to process the image or video feed received from the camera(s). Advantageously, the processing system is trained for detection of specified objects (e.g. pedestrians), and a safety algorithm is implemented which categorises the risk in the environment based on the detected objects. Depending on which risk level is assigned, different alert(s) are used to aid the vehicle operator and/or the pedestrian. For example, in accordance with preferred embodiments of the invention, the feed from the cameras may be processed by an NVidia Jetson (trade mark) system in conjunction with an object detection module.
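As an illustrative sketch of the pipeline just described, the following shows one frame passing from detector to safety algorithm to alert; `detector`, the size threshold and the alert callbacks are hypothetical stand-ins, not interfaces defined by the patent.

```python
# Minimal per-frame pipeline sketch: camera frame -> trained detector ->
# safety categorisation -> risk-appropriate alert. The detector would in
# practice be, e.g., a model running on an NVidia Jetson; here it is any
# callable returning detections with a determined size.

def process_frame(frame, detector, size_threshold, alerts):
    """One pipeline step; returns True if any detection exceeded the threshold."""
    detections = detector(frame)          # e.g. pedestrian detections
    at_risk = any(d["size"] > size_threshold for d in detections)
    if at_risk:
        alerts["warn"]()                  # aid the operator and/or pedestrian
    else:
        alerts["ok"]()
    return at_risk
```

In a real deployment this step would run continuously on the live video feed, with the alert callbacks wired to the audio, light and projection outputs described elsewhere in the document.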
Further advantageous aspects of the invention will be apparent to those ordinarily skilled in the art upon review of the following description of a specific embodiment and with reference to the accompanying drawings.
Brief Description of the Drawings
Embodiments of the invention are now described by way of example and with reference to the accompanying drawings in which:
Figure 1 is a schematic diagram of a collision avoidance system embodying one aspect of the invention, the system being installed on a vehicle in the preferred form of a forklift;
Figure 2 is a perspective view of the forklift and a pedestrian;
Figure 3 is a schematic diagram illustrating a preferred example of how the system determines a risk level and takes corresponding action;
Figure 4A is a block diagram illustrating an exemplary first risk level;
Figure 4B is a block diagram illustrating an exemplary second risk level;
Figure 4C is a block diagram illustrating an exemplary third risk level; and
Figure 4D is a block diagram illustrating an alternative third risk level.
Detailed Description of the Drawings
Referring in particular to Figure 1 of the drawings there is shown, generally indicated as 10, a collision avoidance system embodying one aspect of the invention. The system 10 is installed on a vehicle 12 which, in preferred embodiments, is a forklift vehicle (commonly referred to as a forklift truck or just forklift). It will be understood that systems embodying the invention may be used with other vehicles, e.g. buggies, loaders, tractors and so on, particularly vehicles for use in workplaces or other shared spaces in which the vehicle does not necessarily travel on a dedicated road or path, e.g. in a warehouse or on a work site or other workplace or environment.
The system 10 comprises an object detection system configured to detect target objects 30 in a field of detection. In preferred embodiments, the object detection system is a computer vision object detection system, and the field of detection is the field of vision (FOV) of the computer vision system.
Preferably, the object detection system comprises at least one digital camera 14. In the illustrated embodiment, there is one camera 14 although in alternative embodiments one or more additional camera 14 may be provided. The or each camera 14 may be configured to capture digital video and/or digital images of the FOV. In preferred embodiments the camera 14 captures digital video. Preferably, the camera 14 is configured such that the FOV has a 180° detection angle.
In use, the camera 14 captures digital video (and/or digital images, as applicable) of target objects 30 in the FOV. The object detection system is configured to detect target objects in the digital video and/or digital images (as applicable). The object detection system may comprise one or more processor 16 configured to detect target objects in digital video and/or digital images. The, or each, processor 16 may take any conventional form, e.g. comprising hardware, software and/or firmware as is convenient. For example, the processor(s) 16 may comprise any one or more of: a video processor; an image processor; and/or a computer running one or more computer program, any one or more of which may be configured to detect target objects in the digital video or digital images captured by the camera 14. Detection of the target objects in the digital video or digital images captured by the camera 14 may be performed in any conventional manner. In preferred embodiments, the object detection system is configured to detect target objects by detecting instances of objects that are deemed to belong to one or more class in the captured digital video and/or images, wherein each type of target object that the system is configured to detect belongs to a respective class. Preferably, the object detection is performed using machine learning, preferably supervised machine learning, to detect objects belonging to the relevant class. The object detection system may be trained using conventional machine learning techniques to facilitate detection of objects in the, or each, class associated with the, or each, type of target object that the system is intended to detect. For example, the object detection system may comprise the NVidia Jetson (trade mark) system with an object detection module.
In preferred embodiments, the object detection system is configured to detect people, in particular pedestrians. As such, the object detection system may be said to be configured to perform pedestrian detection.
Alternatively, or in addition, the object detection system may be configured to detect other types of target object, e.g. vehicles or particular type(s) of vehicle. Optionally, the object detection system is configured to detect target objects in the form of a sign. The sign may have any shape, the object detection system being configured to detect the sign by its shape. The object detection system may be configured to detect more than one type of sign, each type having a different shape that is detectable by the object detection system. The sign, or each type of sign, may be used as a danger sign. The, or each, sign may be provided in the vehicle's environment at one or more location associated with a relatively high risk of collision (e.g. at a blind spot, a corner or a junction). Optionally, the object detection system is configured to detect more than one type of object (e.g. pedestrians and one or more type of sign), and the system 10 may be configured to take the same or different action depending on which type of object is detected.
The system 10 further includes analysing means for determining a collision risk level that is indicative of the risk of a collision with a target object 30 (e.g. in the case where the target object is a pedestrian or other vehicle) or in the vicinity of the target object (e.g. in the case where the target object is a sign). In preferred embodiments, the analysing means is configured to determine the risk level depending on a determined size of the, or each, detected target object and/or on a determined change in size of the, or each, detected target object in the captured video or image. The analysing means may take any convenient form, for example comprising one or more processor 16 configured to analyse detected target objects in the captured digital video and/or digital images, and to determine the risk level depending on the analysis, wherein the analysis preferably comprises determining the size of the detected target object. The, or each, processor 16 may take any conventional form, e.g. comprising hardware, software and/or firmware as is convenient. For example, the processor(s) 16 may comprise any one or more of: a video processor; an image processor; and/or a computer running one or more computer program, any one or more of which may be configured to perform the analysis. The analysing means may be implemented using the same, or different, processor(s) 16 as the object detection system, as is convenient.
In preferred embodiments, determining the size of a detected target object 30 comprises determining the size (e.g. the area, height, and/or width) of the region of the captured video or image that represents the detected target object. The digital video is typically captured in video frames, and determining the size of the detected target object may comprise determining the size (e.g. the area, height, and/or width) of the region of one or more frame of the captured video that represents the detected target object. Typically, each video frame or image comprises an array of pixels, and determining the size of the detected target object may comprise determining the size (e.g. the area, height, and/or width) of the region of one or more frame of the captured video that represents the detected target object by determining the number of pixels that define the size (e.g. the area, height, and/or width) of the region. Alternatively, the object detection system may identify a detected target object by defining a corresponding boundary region of the captured image or video frame(s), and determining the size of the detected target object may comprise determining the size (e.g. the area, height, and/or width) of the boundary region.
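The size measures described above (area, height and/or width of the region, in pixels) might be computed as in the following sketch; the (x, y, w, h) box format and the binary-mask representation are assumptions for illustration, since the patent does not fix a representation for the boundary region.

```python
# Two hypothetical ways of determining the size of a detected target
# object's region of the frame, as described in the text: from an
# axis-aligned boundary region, or by counting pixels in a binary mask.

def region_size(box):
    """Width, height and pixel area of an assumed (x, y, w, h) boundary region."""
    x, y, w, h = box
    return {"width": w, "height": h, "area": w * h}

def mask_size(mask):
    """Pixel count of a binary mask (rows of 0/1) covering the object region."""
    return sum(sum(row) for row in mask)
```

The boundary-region variant is cheaper and matches the output of typical object detectors; the mask variant follows the object's outline more closely at higher computational cost.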
In preferred embodiments, the analysing means is configured to determine the risk level by determining if the determined size of the target object exceeds one or more size threshold value. Alternatively, or in addition, the analysing means is configured to determine the risk level by determining if the determined size of the target object has increased over time, e.g. with respect to two or more successive captured images or video frames. Optionally, one or more size change threshold value may be used to assess how the determined size of the detected object changes over time.
It will be apparent that the determined size of the detected target object 30 is indicative of how close the target object is to the system 10 (or more particularly to the camera 14) or vehicle 12. For example, in Figure 1 target objects A and B may be the same size in the real world, but when detected by the system 10, the determined size of target B is larger than the determined size of target A since target B is closer to the vehicle than target A. Accordingly, at least one size threshold value may be set to correspond to the target object being close enough to the vehicle 12 to be a collision risk (or otherwise that there is a collision risk, e.g. in the case where the target object is a sign rather than a pedestrian). Determining that the size of a detected target object is increasing over time is an indication that the target object is moving towards the system 10 (or more particularly to the camera 14) or vehicle 12, or vice versa (it is noted that some types of target object (e.g. pedestrians and vehicles) can move, while other types (e.g. signs) may be static). For example, in Figure 1 target object A' represents target object A after relative movement towards the vehicle.
Even though A and A' are of the same size in the real world, the determined size of A' is larger than the determined size of A since A' is closer to the vehicle than A. Accordingly, at least one size change threshold value may be set to correspond to the target object moving towards the vehicle 12, and/or the vehicle moving towards the target object, i.e. relative movement between the target object and vehicle that results in the target object being closer to the vehicle, at a rate that is a collision risk.
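The size-change test just described reduces to a comparison between the determined sizes of the same object in two successive frames. A minimal sketch (the function name and the treatment of the threshold are illustrative assumptions):

```python
def closing_on_vehicle(prev_area, curr_area, size_change_threshold):
    """Return True if the object's determined size has grown between two
    successive frames by more than the size change threshold, which the
    system treats as relative movement towards the vehicle (the A -> A'
    situation of Figure 1)."""
    return (curr_area - prev_area) > size_change_threshold
```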
Advantageously, the value of the, or each, size threshold value depends on, or is specific to, the type, or class, of the detected target object. Similarly, the value of the, or each, size change threshold value depends on, or is specific to, the type, or class, of the detected target object. The system may therefore use a respective set of threshold values for each type, or class, of detected target object.
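Per-class threshold sets of the kind described might be held in a simple lookup table. The classes and pixel values below are placeholders invented for illustration, not values from the patent:

```python
# Hypothetical per-class threshold sets (areas in pixels); each detected
# object class carries its own size and size-change thresholds.
CLASS_THRESHOLDS = {
    "pedestrian": {"size_1": 2000, "size_2": 8000, "size_change": 500},
    "vehicle":    {"size_1": 4000, "size_2": 16000, "size_change": 900},
}

def thresholds_for(object_class):
    """Look up the threshold set associated with a detected object's class."""
    return CLASS_THRESHOLDS[object_class]
```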
In preferred embodiments, the system 10 supports the adoption of at least two collision risk levels, including a lowest risk level and a highest risk level, and optionally one or more intermediate risk levels between the lowest and highest risk levels. Preferably, the lowest risk level is adopted when the analysing means determines that there are no detected target objects having a determined size that exceeds the relevant size threshold value(s), and/or that there are no detected target objects with a size that has increased over time (e.g. increased over time by an amount that exceeds the relevant size change threshold value if applicable). A higher risk level, i.e. a risk level other than the lowest risk level, is adopted if the analysing means determines that one or more detected target objects has a determined size that exceeds the relevant size threshold value(s), and/or that one or more detected target objects has a size that has increased over time (e.g. increased over time by an amount that exceeds the relevant size change threshold value if applicable).
In preferred embodiments, said analysing means is configured to adopt an intermediate risk level if the determined size of a detected target object exceeds a first size threshold value but does not exceed a second size threshold value, the second size threshold value being indicative of a larger determined size than the first size threshold value.
Optionally, the analysing means is configured to adopt the intermediate risk level if the determined size of a detected target object exceeds a first size threshold value but does not exceed a second size threshold value, the second size threshold value being indicative of a larger determined size than the first size threshold value, and if the determined size of the object is determined not to be increasing over time, or to be increasing by an amount that is less than the relevant size change threshold value.
Preferably, the analysing means is configured to adopt the highest risk level if the determined size of one or more detected target object exceeds the second size threshold value.
Preferably, the analysing means is configured to adopt the highest risk level if the determined size of one or more detected target object is determined to be increasing over time, for example by an amount that exceeds the relevant size change threshold value.
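The risk-level scheme set out in the preceding paragraphs can be summarised in one decision function. This is a sketch under the assumption of two size thresholds and one size change threshold; the names are illustrative:

```python
LOWEST, INTERMEDIATE, HIGHEST = 1, 2, 3

def object_risk_level(area, growth, size_1, size_2, size_change):
    """Risk level for a single detected object: highest if the second
    size threshold is exceeded or the object is growing faster than the
    size change threshold; intermediate if only the first size threshold
    is exceeded; lowest otherwise. Thresholds are per-class values."""
    if area > size_2 or growth > size_change:
        return HIGHEST
    if area > size_1:
        return INTERMEDIATE
    return LOWEST
```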
The adoption of risk levels by the analysing means may depend on the type of target object that is detected. For example, the risk level adoption process described above may be implemented in respect of pedestrians or vehicles. An alternative risk level adoption process may be implemented in respect of signs (in particular, objects that are detectable visually using the computer vision object detection system) and/or in respect of objects, such as beacons, that are detectable by other means, e.g. using wireless technology such as Bluetooth TM, WiFi TM or RFID. For example, the analysing means may be configured to adopt the highest risk level upon detection of a sign, or one or more particular types of sign (e.g. that indicate danger). In preferred embodiments, the analysing means is configured to determine the risk level depending on a determined size of a detected sign, as is described above in relation to target objects generally. Accordingly, the analysing means may be configured to adopt the highest risk level upon detection of a sign with a size above the respective size threshold level. Alternatively, the analysing means may be configured to adopt the highest risk level (or other relevant risk level) upon detection of a sign irrespective of its size. Optionally, the system 10 may be configured to detect wireless beacons (e.g. BLE Bluetooth beacons, WiFi beacons, RFID beacons and/or other RF beacons). The analysing means may be configured to adopt the highest risk level upon detection of a beacon. The system 10 may be provided with any suitable conventional wireless receiver or detector (not shown) for detecting wireless beacons or the like. Such signs or beacons may be located at or adjacent danger zones (e.g. at a blind spot or junction), or other areas where it is desirable for the vehicle to stop and/or take other action(s) associated with the highest risk level.
Optionally, the system 10 includes at least one distance sensor 18, for example at least one ultrasonic distance sensor, configured to detect objects in the field of detection, or field of vision. The analysing means may be configured to adopt the highest risk level if an object detected by the distance sensor(s) 18 is determined to be closer to the vehicle 12 than a threshold distance value.
The system 10 further includes output means for implementing at least one output action depending on the determined risk level. The output means typically comprises any one or more of: audio output means; visual output means; and/or means for controlling the operation of the vehicle 12.
Accordingly, the output action(s) may comprise causing the audio output means to render an audio signal, causing the visual output means to illuminate and/or to control the operation of the vehicle 12 as applicable, or otherwise activating the relevant output means. For example, audio output means comprising one or more audio output device 20 (e.g. comprising a buzzer, siren or loudspeaker) may be provided. Visual output means comprising one or more visual output device 22 (e.g. a lamp, beacon or projector) may be provided. Means for controlling the operation of the vehicle 12 may comprise one or more processor 16 configured to communicate with a control system of the vehicle 12 (e.g. comprising a vehicle control unit (VCU), an electronic control unit (ECU) and/or CAN bus) in order to control the operation of the vehicle 12 (e.g. to control the vehicle's speed and/or to stop the vehicle). The, or each, processor 16 may take any conventional form, e.g. comprising hardware, software and/or firmware as is convenient. For example, the processor(s) 16 may comprise a computer running one or more computer program. The means for controlling the operation of the vehicle may be implemented using the same, or different, processor(s) 16 as the object detection system and/or the analysing means, as is convenient.
The output means may conveniently be controlled by the analysing means. Alternatively, one or more suitably configured processor 16 may be provided for this purpose. The, or each, processor 16 may take any conventional form, e.g. comprising hardware, software and/or firmware as is convenient. For example, the processor(s) 16 may comprise a computer running one or more computer program. The means for controlling the output means may be implemented using the same, or different, processor(s) 16 as the object detection system and/or the analysing means, as is convenient.
In preferred embodiments, at least one visual output device 22 comprising a light projector, e.g. an LED projector, is provided. The, or each, projector 22 is preferably configured to project an image onto a ground surface around the vehicle 12. For example, projector 22A is preferably provided and is operable to project an image IA of an arrow onto the ground surface, the arrow indicating the direction of travel of the vehicle 12 (see Figure 2). Optionally, one or more projectors 22B, 22C, 22D are provided and are configured to project a respective image IB, IC, ID of a boundary line onto the ground surface adjacent the vehicle 12 to define a boundary at least partly around the vehicle. In the illustrated example, projector 22B projects a boundary line at the rear of the vehicle and projectors 22C, 22D project boundary lines at a respective side of the vehicle 12.
In preferred embodiments, operation of the output means is controlled depending on the determined risk level. For example, the output means may be controlled such that a respective different set of output action(s) are implemented depending on the determined risk level.
The components of the system 10 may be installed at any suitable location of the vehicle 12. For example, the processor(s) 16 may be integrated with the vehicle's control systems, or may be provided in a separate control unit (not shown) that may be installed in the vehicle, e.g. in the cabin. The camera(s) 14 may be mounted on the vehicle 12 at any suitable location, e.g. on the roof or body of the vehicle 12, inside or outside of the vehicle 12. In typical embodiments, the camera(s) 14 is located and positioned such that the FOV extends rearwardly of the vehicle 12, i.e. in a direction that is opposite to the vehicle's normal forward direction of travel. As such, the system 10 may be used when the vehicle 12 is reversing. Alternatively, or in addition, one or more camera 14 may be located and positioned such that the FOV extends forwardly of the vehicle 12, i.e. in the direction of normal forward travel. Each visual output device 22 may be mounted on the vehicle 12 at any suitable location, e.g. on the roof or body of the vehicle 12, typically outside of the vehicle 12. In cases where the visual output device 22 is a projector, it may be located at a side, front or rear of the vehicle 12 as required, and be positioned to project its image onto the ground surface. Optionally, at least one visual output device 22 is provided inside the vehicle cabin for alerting the driver to collision danger. One or more audio output device 20 may be provided at any suitable location inside and/or outside of the vehicle 12 for alerting pedestrians and/or the driver as required. The components of the system 10 may communicate with each other, and with the VCU or other control system as applicable, using any conventional wired or wireless communication link.
A preferred embodiment of the collision avoidance system 10 is now described by way of example. In this example, the system 10 supports first and second size thresholds and a size change threshold: Size Threshold 1-defining a relatively small threshold area size for a detected object, the threshold being specific to the object class.
Size Threshold 2-defining a larger (i.e. larger than Size Threshold 1) threshold area size for the detected object, the threshold being specific to the object class.
Size Change Threshold-defining an amount of a change in the size of the detected object above which the detected object is deemed to be getting closer to the vehicle. The system 10 may determine the size of each detected target object in successive frames or images, determine the change of size over time, and determine that the detected object is getting closer if the change in size exceeds the Size Change Threshold (for example, at a capture rate of approximately 20 frames per second, this analysis can be performed with high accuracy).
In this example, the system 10 supports first, second and third collision risk levels: Risk Level 1, Risk Level 2 and Risk Level 3, wherein Risk Level 1 is the lowest risk level, Risk Level 3 is the highest risk level, and Risk Level 2 is an intermediate risk level. Referring in particular to Figure 3, at block 301 the system 10 monitors the environment around the vehicle 12 in order to detect target object(s) of one or more class or type in the, or each, FOV. Depending on whether or not one or more target object(s) are detected, and on whether or not one or more of the supported thresholds are exceeded, the system 10 adopts one or other of the supported risk levels (blocks 302, 303, 304). Depending on which risk level is adopted, the respective corresponding output action(s) are taken (blocks 305, 306, 307). In this example, the output action for Risk Level 1 is to activate projector 22A to project the arrow image IA onto the ground surface (behind the vehicle 12 in this example); the output actions for Risk Level 2 include the action for Risk Level 1 plus activation of projectors 22B, 22C, 22D to project a boundary line IB, IC, ID around the end and sides of the vehicle 12, activation of a lamp or audio device inside the vehicle to alert the driver, and limiting the speed of the vehicle 12; the output actions for Risk Level 3 include the actions of Risk Level 2 plus activation of an external audio device, further reducing the speed of the vehicle and optionally stopping the vehicle 12. More generally, it will be understood that the specific output actions for any risk level may vary as suits the application; any output action, or combination of output actions, may be implemented as desired. Optionally, for one or more risk levels, e.g. the lowest risk level, no output actions may be taken. Typically, the output actions escalate as the risk level increases.
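The escalating output actions of this example could be tabulated as follows. The action names are hypothetical identifiers invented for illustration; each level includes the actions of the level below plus its own additions:

```python
# Output actions per risk level in the worked example above.
OUTPUT_ACTIONS = {
    1: ["project_direction_arrow"],
    2: ["project_direction_arrow", "project_boundary_lines",
        "alert_driver_in_cabin", "limit_vehicle_speed"],
    3: ["project_direction_arrow", "project_boundary_lines",
        "alert_driver_in_cabin", "limit_vehicle_speed",
        "sound_external_audio_device", "reduce_speed_or_stop"],
}
```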
With reference to Figure 4A, in this example the system 10 may be configured to adopt Risk Level 1 if it is determined that there are no target objects in the field of view, or that there are no target objects with a size that exceeds Size Threshold 1 and optionally that there are no target objects that exceed the Size Change Threshold. With reference to Figure 4B, in this example the system 10 may be configured to adopt Risk Level 2 if, in respect of a detected target object, the determined size exceeds Size Threshold 1 (but not Size Threshold 2) and the Size Change Threshold is not exceeded. With reference to Figure 4C, in this example the system 10 may be configured to adopt Risk Level 3 if, in respect of a detected target object, the determined size exceeds Size Threshold 2 or the Size Change Threshold is exceeded. Alternatively, with reference to Figure 4D, in embodiments where the distance sensor(s) 18 are present, the system 10 may be configured to adopt Risk Level 3 if, in respect of a detected target object, the determined size exceeds Size Threshold 2 or the Size Change Threshold is exceeded or the distance threshold is breached.
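Including the optional distance-sensor override of Figure 4D, the decision for one detected object might be sketched as follows (parameter names are illustrative assumptions, and the thresholds placeholders):

```python
def risk_level_with_distance(area, growth, size_1, size_2, size_change,
                             distance=None, distance_threshold=None):
    """Figure 4A-4D style decision for one detected target object.
    If a distance sensor reading is available and breaches the threshold
    distance, Risk Level 3 is adopted regardless of the size tests."""
    if (distance is not None and distance_threshold is not None
            and distance < distance_threshold):
        return 3
    if area > size_2 or growth > size_change:
        return 3
    if area > size_1:
        return 2
    return 1
```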
The example described above is particularly suited in cases where the target objects are pedestrians, but may also be used with other types of target object. Alternatively, or in addition, with reference again to Figure 4D, in embodiments where the system 10 is configured to detect signs and/or beacons, the system 10 may be configured to adopt Risk Level 3 if, in respect of a detected target object, the determined size exceeds Size Threshold 2 or the Size Change Threshold is exceeded or a sign or beacon is detected (or, optionally, if the distance threshold is breached if this feature is supported by the system).
It will be understood that in alternative embodiments, the system 10 may be configured to support more than three or fewer than three risk levels, and that the conditions that must be met to adopt one risk level or another may vary to suit the application. In preferred embodiments, when more than one target object 30 is detected, the system 10 is configured to ascertain the respective risk level for each target object independently, and preferably to adopt the highest ascertained risk level associated with the detected target objects.
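The multi-object behaviour just described, ascertaining a level per object independently and adopting the highest, reduces to a maximum over the per-object levels. A minimal sketch (assuming, as an illustration, that the lowest level is adopted when nothing is detected):

```python
def adopted_risk_level(per_object_levels):
    """Adopt the highest risk level ascertained across all detected
    target objects; with no detections, fall back to the lowest level."""
    return max(per_object_levels, default=1)
```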
The invention is not limited to the embodiment(s) described herein but can be amended or modified without departing from the scope of the present invention.

Claims (25)

CLAIMS:
1. A collision avoidance system for a vehicle, the system comprising: an object detection system configured to detect at least one target object in a field of detection; analysing means for determining a collision risk level; and output means for implementing at least one output action depending on the determined collision risk level, wherein said analysing means is configured to determine said risk level depending on a determined size of the at least one detected target object and/or on a determined change in size of the at least one detected target object.
2. The system of claim 1, wherein said analysing means is configured to determine said risk level by determining if said determined size exceeds one or more size threshold value, and/or by determining if the determined size of the target object has increased over time.
3. The system of claim 1 or 2, wherein the system supports the adoption of at least two collision risk levels, including a lowest risk level and a highest risk level, and optionally one or more intermediate risk levels between the lowest and highest risk levels.
4. The system of claim 3 when dependent on claim 2, configured to adopt the lowest risk level when the analysing means determines that there are no detected target objects having a determined size that exceeds one or more size threshold value, and/or that there are no detected target objects with a size that has increased over time, optionally by an amount that exceeds one or more size change threshold value.
5. The system of claim 3 when dependent on claim 2, or on claim 4, configured to adopt a risk level other than the lowest risk level if the analysing means determines that one or more detected target objects has a determined size that exceeds one or more size threshold value, and/or that one or more detected target objects has a size that has increased over time, optionally by an amount that exceeds the one or more size change threshold value.
6. The system of any one of claims 3 to 5, configured to adopt an intermediate risk level if the analysing means determines that the determined size of a detected target object exceeds a first size threshold value but does not exceed a second size threshold value, the second size threshold value being indicative of a larger determined size than the first size threshold value.
7. The system of any one of claims 3 to 5, configured to adopt an intermediate risk level if the analysing means determines that the determined size of a detected target object exceeds a first size threshold value but does not exceed a second size threshold value, the second size threshold value being indicative of a larger determined size than the first size threshold value, and if the determined size of the object is determined not to be increasing over time, or to be increasing by an amount that is less than the relevant size change threshold value.
8. The system of any one of claims 3 to 7, configured to adopt the highest risk level if the analysing means determines that the determined size of one or more detected target object exceeds the second size threshold value.
9. The system of any one of claims 3 to 8, configured to adopt the highest risk level if the analysing means determines that the determined size of one or more detected target object is increasing over time, for example by an amount that exceeds the relevant size change threshold value.
10. The system of any preceding claim, further including at least one distance sensor, for example at least one ultrasonic distance sensor, configured to detect objects in said field of detection, and/or means for detecting wireless beacons.
11. The system of claim 10 when dependent on any one of claims 3 to 9, configured to adopt the highest risk level if an object detected by said at least one distance sensor is determined to be closer to said vehicle than a threshold distance value.
12. The system of any preceding claim, wherein said object detection system is configured to detect one or more types of target object, for example pedestrians and/or one or more type of vehicle and/or one or more types of sign and/or one or more type of beacon.
13. The system of claim 12 when dependent on any one of claims 2 to 11, wherein one or more respective size threshold value and/or one or more respective size change threshold value is associated with each type of target object.
14. The system of claim 12 or 13 when dependent on claim 3, wherein said object detection system is configured to detect at least one type of sign, the system being configured to adopt the highest risk level in response to detection of an instance of said at least one type of sign, preferably in response to detection of said instance of said at least one type of sign with a size above a respective size threshold value, and/or wherein the system is configured to adopt the highest risk level in response to detection of an instance of said at least one type of beacon.
15. The system of any preceding claim, wherein said object detection system is a computer vision object detection system and said field of detection is a field of vision.
16. The system of any preceding claim, wherein said object detection system comprises at least one digital camera, preferably at least one digital video camera, for capturing digital images and/or digital video of the at least one target object in the field of vision, said object detection system being configured to detect said at least one target object in said digital images and/or digital video.
17. The system of claim 16, wherein said analysing means is configured to determine the size of the at least one detected target object in the captured digital video or digital image, and/or to determine the change in size of the at least one detected target object in the captured digital video or image.
18. The system of claim 17, wherein said analysing means is configured to determine the size of the region of the captured digital video or digital image that represents the at least one detected target object, and/or to determine the change in the size of the region of the captured digital video or digital image that represents the at least one detected target object, and wherein determining the size of the region of the captured digital video may involve determining the size of the region of one or more frame of the captured digital video that represents the at least one detected target object.
19. The system of any preceding claim, wherein the object detection system is configured to detect said at least one target object by detecting objects that belong to a respective class that is associated with said at least one target object.
20. The system of claim 19, wherein the object detection system is configured to detect objects that belong to the respective class using supervised machine learning.
21. The system of any preceding claim, wherein said output means comprises any one or more of: audio output means; visual output means; means for controlling the operation of the vehicle, wherein optionally said output means comprises at least one light projector that is operable to project one or more image onto a ground surface.
22. The system of claim 21, wherein the output means is configured to implement one or more different output action depending on the determined risk level.
23. A vehicle comprising a collision avoidance system as claimed in any one of claims 1 to 22, wherein said object detection system is configured so that said field of detection is in a direction of movement of said vehicle.
24. A collision avoidance method for a vehicle, the method comprising: detecting at least one target object in a field of detection; determining a collision risk level depending on a determined size of the at least one detected target object and/or on a determined change in size of the at least one detected target object; and implementing at least one output action depending on the determined collision risk level.
25. The method of claim 24, including determining the size of the at least one detected target object in a captured digital video or digital image, and/or determining the change in size of the at least one detected target object in the captured digital video or image, and wherein, preferably, said determining comprises determining the size of the region of the captured digital video or digital image that represents the at least one detected target object, and/or determining the change in the size of the region of the captured digital video or digital image that represents the at least one detected target object, and wherein determining the size of the region of the captured digital video may involve determining the size of the region of one or more frame of the captured digital video that represents the at least one detected target object.
GB2202091.1A 2022-02-17 2022-02-17 A collision avoidance system for a vehicle Pending GB2615766A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2202091.1A GB2615766A (en) 2022-02-17 2022-02-17 A collision avoidance system for a vehicle


Publications (2)

Publication Number Publication Date
GB202202091D0 GB202202091D0 (en) 2022-04-06
GB2615766A true GB2615766A (en) 2023-08-23

Family

ID=80934551



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150019080A1 (en) * 2013-07-09 2015-01-15 GM Global Technology Operations LLC Driver assistance system for a motor vehicle
EP2947638A1 (en) * 2014-05-19 2015-11-25 Honeywell International Inc. Airport surface collision zone display for an aircraft
US20160150070A1 (en) * 2013-07-18 2016-05-26 Secure4Drive Communication Ltd. Method and device for assisting in safe driving of a vehicle
EP3343533A1 (en) * 2016-12-27 2018-07-04 Panasonic Intellectual Property Corporation of America Information processing apparatus, information processing method, and program
EP3348446A1 (en) * 2016-12-30 2018-07-18 Hyundai Motor Company Posture information based pedestrian detection and pedestrian collision prevention apparatus and method
EP3533680A1 (en) * 2016-10-28 2019-09-04 LG Electronics Inc. -1- Autonomous vehicle and operating method for autonomous vehicle


