WO2024171146A1 - Method and apparatus for obtaining collision data between an object and a vehicle - Google Patents
Method and apparatus for obtaining collision data between an object and a vehicle
- Publication number
- WO2024171146A1 (PCT/IB2024/051508)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- vehicle
- lidar
- computing unit
- alert data
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 249
- 230000008569 process Effects 0.000 claims description 81
- 238000004891 communication Methods 0.000 claims description 22
- 238000003384 imaging method Methods 0.000 claims description 12
- 238000004458 analytical method Methods 0.000 claims description 10
- 238000012549 training Methods 0.000 claims description 10
- 238000012937 correction Methods 0.000 claims description 3
- 230000004913 activation Effects 0.000 claims description 2
- 238000001514 detection method Methods 0.000 abstract description 21
- 238000010801 machine learning Methods 0.000 description 23
- 230000015654 memory Effects 0.000 description 21
- 238000003066 decision tree Methods 0.000 description 12
- 238000010586 diagram Methods 0.000 description 10
- 230000001133 acceleration Effects 0.000 description 6
- 230000006870 function Effects 0.000 description 6
- 238000013528 artificial neural network Methods 0.000 description 5
- 238000004422 calculation algorithm Methods 0.000 description 5
- 238000004590 computer program Methods 0.000 description 5
- 238000005516 engineering process Methods 0.000 description 5
- 239000011159 matrix material Substances 0.000 description 5
- 238000012706 support-vector machine Methods 0.000 description 4
- 230000008859 change Effects 0.000 description 3
- 238000013527 convolutional neural network Methods 0.000 description 3
- 230000003287 optical effect Effects 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 230000000007 visual effect Effects 0.000 description 3
- 229920001621 AMOLED Polymers 0.000 description 2
- 238000003491 array Methods 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 2
- 238000013501 data transformation Methods 0.000 description 2
- 238000013135 deep learning Methods 0.000 description 2
- 238000011156 evaluation Methods 0.000 description 2
- 239000000284 extract Substances 0.000 description 2
- 230000010354 integration Effects 0.000 description 2
- 239000004973 liquid crystal related substance Substances 0.000 description 2
- 238000007477 logistic regression Methods 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 238000000513 principal component analysis Methods 0.000 description 2
- 238000007637 random forest analysis Methods 0.000 description 2
- 230000009467 reduction Effects 0.000 description 2
- 238000010408 sweeping Methods 0.000 description 2
- 230000009466 transformation Effects 0.000 description 2
- 241001465754 Metazoa Species 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 125000004122 cyclic group Chemical group 0.000 description 1
- 238000007405 data analysis Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 239000000446 fuel Substances 0.000 description 1
- 230000002068 genetic effect Effects 0.000 description 1
- 230000001939 inductive effect Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000012886 linear function Methods 0.000 description 1
- 230000004807 localization Effects 0.000 description 1
- 238000007620 mathematical function Methods 0.000 description 1
- 230000000116 mitigating effect Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 239000002245 particle Substances 0.000 description 1
- 238000013138 pruning Methods 0.000 description 1
- 238000013139 quantization Methods 0.000 description 1
- 239000002096 quantum dot Substances 0.000 description 1
- 230000005855 radiation Effects 0.000 description 1
- 230000002787 reinforcement Effects 0.000 description 1
- 238000002922 simulated annealing Methods 0.000 description 1
- 238000007619 statistical method Methods 0.000 description 1
- 238000013179 statistical model Methods 0.000 description 1
- 238000001931 thermography Methods 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 238000000844 transformation Methods 0.000 description 1
- 238000002604 ultrasonography Methods 0.000 description 1
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/579—Depth or shape recovery from multiple images from motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
Definitions
- the present disclosure relates to computer-implemented methods and apparatus for object detection.
- it relates to methods and apparatus employing LIDAR technology for object detection, and to methods and apparatus for detecting objects in vehicle blind spots.
- US 7,409,295 B2 discloses a process for determining an imminent collision between a vehicle and an object.
- the vehicle has a detection system for obtaining data on the position of the observed object within a detection field.
- the process of said document includes a step a) of obtaining position data of an object within said detection field and a step b) of determining whether said object is likely to be between a first and a second collision course when the following two conditions are met, (1) a value of the probability density function that said object is on said first collision course exceeds a value of the probability density function that said object is on a first avoidance course, and (2) a value of the probability density function that said object is on said second collision course exceeds a value of the probability density function that said object is on a second avoidance course.
- Said first avoidance course is on one side of said first and second collision paths, and said second avoidance course is on the other side of said first and second collision paths.
- the method disclosed by said document also has a step c) of providing information to facilitate the deployment of collision mitigation measures if said object is determined to be between said collision courses.
- the image data may contain status information (i.e. position, velocity, etc.) about one or more objects observed at a given time.
- the detection system may be based on, for example, electromagnetic echo radiation (e.g. laser, radar), acoustics (e.g. sonar, ultrasound), and thermal imaging (e.g. infrared).
- the detection system is based on electromagnetic echo, in particular radar.
- document DE102011010864A1 discloses a method for predicting collisions between a motor vehicle and objects, in particular near a blind spot area of the motor vehicle.
- the method includes a step of detecting at least one object in the environment of the motor vehicle by means of at least one first sensor, and a step of calculating a stochastic accessibility quantity (M) of the motor vehicle by means of an evaluation unit interacting with the at least one sensor.
- M stochastic accessibility quantity
- said document mentions a step of calculating a stochastic accessibility quantity (M 1 ) of the at least one object and a step of calculating the collision probability of a collision between the motor vehicle and the at least one object by forming the intersection of the two stochastic accessibility quantities (M, M') by means of the evaluation unit.
- the document mentions that when the probability of collision exceeds a predefined threshold, the motor vehicle is braked and a visual warning and/or an acoustic warning is issued to a driver of the motor vehicle.
- the object is preferably detected also with a second sensor.
- the first sensor and the second sensor are each assigned to an area in the vicinity of the motor vehicle, and these two areas in particular do not overlap or only partially overlap.
- the corresponding data from the two sensors are preferably merged to detect the at least one object.
- the document states that the first sensor is preferably designed as a camera covering a front area of the motor vehicle, and the second sensor is preferably designed as a radar or LIDAR sensor for monitoring a side blind spot zone of the motor vehicle.
- the method may include a step of transmitting the imminent collision condition to a central receiving facility or to emergency personnel along with position information (e.g., from a GPS) to alert others of the situation so that help can be sent.
- position information e.g., from a GPS
- the present disclosure describes embodiments of a computer-implemented method of obtaining collision probability data between an object and a vehicle.
- the method disclosed herein comprises executing in a computing unit a step a) of receiving a plurality of LIDAR images, where the LIDAR images include time data and distance values between the vehicle and an object, and where the LIDAR images are acquired by a LIDAR sensor configured to be arranged in a blind spot of the vehicle.
- the method has a step b) of obtaining an alert data by means of a machine vision method that takes as input at least a first LIDAR image taken at a first instant, and a second LIDAR image taken at a second instant after the first instant.
- the machine vision method generates the alert data if the value of the distance data between the vehicle and the object in the second LIDAR image is less than the value of the distance data between the vehicle and the object in the first LIDAR image.
- the method has a step c) of obtaining a georeferenced alert data through a classification process that takes as input the alert data and at least one georeferencing data obtained by a GPS module, and has a step d) of obtaining a collision probability data through a classification process that takes as input the plurality of LIDAR images and a plurality of georeferenced alert data obtained during a travel path of the vehicle.
- the input data that is stored in the plurality of fields of the database record can be selected from the group comprising the georeferenced alert data, the georeferencing data, a minimum distance data detected between an object and the vehicle, the object class data, a vehicle speed data, a vehicle acceleration data, a vehicle direction data, a vehicle deviation data with respect to a road lane, travel route data, and combinations thereof.
- the present disclosure describes embodiments of a computer-implemented method for obtaining collision probability data on a travel path of a vehicle, which comprises executing on a remote server a step A) of receiving from a computing unit arranged in the vehicle a plurality of georeferenced alert data, where each georeferenced alert data is associated with an alert data.
- the computing unit executes an artificial vision method that generates the alert data if a value of a distance data between the vehicle and an object identified in a second LIDAR image taken at a second instant is less than a value of a distance data between the vehicle and the object of a first LIDAR image taken at a first instant prior to the second instant.
- the computing unit obtains the georeferenced alert data through a classification process that takes as input the alert data and a georeferencing data obtained by a GPS module.
- the method also has a step B) of storing each georeferenced alert data in a record of a database accessed by the remote server; and a step C) of obtaining collision probability data through a classification process that takes as input the records in which the plurality of georeferenced alert data obtained during a vehicle's travel route are stored.
- the present disclosure describes embodiments of an apparatus for obtaining collision probability data between an object and a vehicle, comprising a LIDAR sensor configured to obtain a plurality of LIDAR images, where the LIDAR sensor is configured to be arranged in a blind spot of the vehicle.
- the apparatus further comprises a GPS module connected to the computing unit and configured to obtain georeferencing data of the vehicle, and a communications module.
- the apparatus includes a computing unit connected to the LIDAR sensor, the GPS module, and the communications module and configured to be arranged in the vehicle, where the computing unit is configured to execute any of the embodiments of the computer-implemented methods disclosed herein.
- the present disclosure describes embodiments of an apparatus for obtaining collision probability data on a travel path of a vehicle, comprising a remote server configured to connect to at least one computing unit and configured to execute any of the embodiments of the computer-implemented methods disclosed herein in which a remote server is employed.
- any of the embodiments of the methods and apparatus disclosed herein allow obtaining collision probability data for a vehicle, which is preferably part of a group of vehicles that have an embodiment of the apparatus disclosed herein. Additionally, said vehicles will allow collecting data that trains the processes and methods executed in the steps of the embodiments of the methods disclosed herein, for example, to train the classification process of step d). In this way, the method will be able to provide alerts to drivers of vehicles in areas with a greater number of events with collision probability data values greater than a predetermined value. This in turn allows obtaining information on travel routes, for example, when the vehicle is providing a public service (e.g., municipal bus, intercity bus) or traveling a work route (e.g., delivery trucks).
- a public service e.g., municipal bus, intercity bus
- work route e.g., delivery trucks.
- FIG. 3 is a flow chart of one embodiment of the method disclosed herein including steps a), b), c) and d).
- FIG. 4 is a flow chart of one embodiment of the method disclosed herein including steps a), b), c) and d). In addition, this embodiment includes sub-steps b1, b2, b3 and b4 of step b).
- FIG. 5 shows a block diagram of one embodiment of the method and apparatus disclosed herein, wherein it is identified that the machine vision method further takes as input a criticality value that is obtained with a criticality analysis process, which in turn takes as input an object class data.
- the object class data is obtained with an object classification method that processes the LIDAR images.
- FIG. 6 is a flow chart of one embodiment of the method disclosed herein including steps a), b), c) and d). In addition, this embodiment includes a sub-step of step c).
- FIG. 7 shows a block diagram of an embodiment of the method and apparatus disclosed herein similar to the embodiment of FIG. 1, where the computing unit accesses a database in which records containing alert data, georeferenced alert data, georeferencing data, and other data are stored.
- FIG. 9 shows a block diagram of an embodiment of the method and apparatus disclosed herein similar to the embodiment of FIG. 7, where the computing unit sends a data packet to a remote server.
- FIG. 10 shows a block diagram of one embodiment of the method and apparatus disclosed herein, where the computing unit communicates with a remote server by sending LIDAR images and georeferenced alert data.
- the remote server manages the database in which input data such as alert data, georeferenced alert data, georeferencing data, and other data are stored.
- the remote server obtains the collision probability data.
- FIG. 11 is a flow chart of one embodiment of the method disclosed herein including steps a), b), c), and d). In addition, this embodiment includes sub-steps d1, d2, d3, and d4 of step d).
- FIG. 12 is a flow diagram of one embodiment of the method disclosed herein, wherein the method is executed on a remote server that receives from a computing unit disposed in the vehicle a plurality of georeferenced alert data.
- FIG. 13 is a block diagram of one embodiment of the apparatus and method disclosed herein, wherein the apparatus includes a communications module that establishes communication with a remote server.
- FIG. 14 is a block diagram of an embodiment of the apparatus disclosed herein, illustrating a detail view of an embodiment of the LIDAR sensor including a servo motor, a laser generating device, a laser receiver, and a LIDAR imaging device. Furthermore, this figure illustrates that the object detected in the blind spot of the vehicle is an automobile, and a laser incident on the automobile and bounce signals detected by the laser receiver are shown.
- FIG. 15 is a block diagram of one embodiment of the apparatus disclosed herein, illustrating a detail view of one embodiment of a display-type user interface device that is connected to the computing unit, and is located on the dashboard of the vehicle.
- FIG. 16 is a flow chart of one embodiment of the method disclosed herein including steps a), b), c), and d). Furthermore, this embodiment includes a step a0) prior to step a), where step a0) includes three sub-steps. Also, step d) is illustrated as including two sub-steps.
- the present disclosure describes embodiments of a computer-implemented method of obtaining collision probability data between an object (1) and a vehicle (2).
- the method disclosed herein comprises executing in a computing unit (3) a step a) of receiving a plurality of LIDAR images (4), where the LIDAR images (4) include time data and distance values between the vehicle (2) and an object (1), and where the LIDAR images (4) are acquired by a LIDAR sensor (15) configured to be arranged in a blind spot (16) of the vehicle (2).
- the method has a step b) of obtaining an alert data (5) using an artificial vision method (6) that takes as input at least a first LIDAR image (7) taken at a first instant, and a second LIDAR image (8) taken at a second instant after the first instant.
- the artificial vision method (6) generates the alert data (5) if the value of the distance data between the vehicle (2) and the object (1) of the second LIDAR image (8) is less than the value of the distance data between the vehicle (2) and the object (1) of the first LIDAR image (7).
- the method has a step c) of obtaining a georeferenced alert data (12) through a classification process (13) that takes as input the alert data (5) and at least one georeferencing data (14) obtained by a GPS module (17), and has a step d) of obtaining a collision probability data (9) through a classification process (10) that takes as input the plurality of LIDAR images (4) and/or a plurality of georeferenced alert data (12) obtained during a travel path of the vehicle (2).
- the method allows obtaining collision probability data (9) for a vehicle (2) while traveling a travel route, where the vehicle (2) preferably forms part of a group of vehicles (2). Additionally, said vehicles (2) will allow collecting data that train the processes and methods executed in the steps of the embodiments of the methods disclosed herein, for example, to train the classification process (10) of step d) or the artificial vision method (6) of step b). In this way, the method allows providing alerts to drivers of vehicles (2) in areas of a travel route with a greater number of events in which alert data (5) is generated with values of the collision probability data (9) greater than a predetermined value.
- This information about travel routes makes it possible to obtain metrics and data related to the driving of the driver of the vehicle (2), and to determine which areas of the travel route are potentially dangerous.
- data will be understood as a symbolic representation that can be numeric, alphabetic, algorithmic, logical, and/or vector that encodes information.
- a data can have a structure or frame composed of blocks of characters or bits that represent different types of information. Each block consists of character strings, numbers, logical symbols, among others.
- data can be made up of bits only (binary strings), made up of characters formed one by one by a combination of bits, made up of fields, records or tables made up of fields and records, or made up of data exchange files (formats such as csv, json, xls, among others).
- data can be a matrix of n rows by m columns. In turn, data can contain several pieces of data.
- when data has a frame structure, the frame can have a block of identification characters, generally known as a header, which contains information related to a computing device or processor that sends the data, and may contain information related to a computing device or processor that receives the data.
- a block of identification characters generally known as a header
- if data has a frame format, the frame contains blocks related to layers according to the OSI reference model.
- the frame may also have a tail block of characters (or simply tail) that allows a computing unit or server to identify that it is the end of the data, i.e., that after that block there is no more information contained in the data previously identified by the computing unit or server with the "header".
- the data has, between the "header" block and the "tail" block, one or more character blocks that represent statistics, numbers, descriptors, words, letters, logical values (e.g. booleans) and combinations of these.
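The following is a minimal, purely illustrative sketch of the header/body/tail frame layout described above; the field names, separator and end-of-data marker are assumptions, not the patent's actual format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    """Illustrative header/body/tail data frame (field names are assumptions)."""
    header: dict           # identifies the sending computing unit and, optionally, the receiver
    blocks: List[bytes]    # character blocks: statistics, numbers, descriptors, boolean values, ...
    tail: bytes = b"\x04"  # end-of-data marker so the receiver knows no more information follows

def serialize(frame: Frame) -> bytes:
    """Lay the frame out as header, then the character blocks, then the tail."""
    header_bytes = repr(frame.header).encode()
    return header_bytes + b"|" + b"|".join(frame.blocks) + b"|" + frame.tail

f = Frame(header={"src": "computing_unit", "dst": "remote_server"},
          blocks=[b"alert=1", b"min_distance_m=0.8"])
print(serialize(f))
```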
- the method in step a) receives the LIDAR images (4) from a LIDAR sensor (15) arranged in the blind spot (16) of the vehicle (2).
- the LIDAR sensor (15) is configured to emit a light or laser beam (33) between a first angle (41) and a second angle (42).
- the LIDAR sensor (15) sweeps the laser (33) so that it scans the object (1) when said object (1) approaches the blind spot (16) of the vehicle (2).
- the LIDAR sensor (15) can emit the laser (33) by means of a laser generating device (31).
- the laser (33) when it hits the object (1), bounces off forming rebound signals (37), which can be detected by a laser receiver (32).
- the LIDAR sensor (15) may include a LIDAR imaging device (34) that takes as input the bounce signals (37) and lasers (33) emitted by the laser generating device (31). In this way, the LIDAR imaging device (34) may obtain one or more data matrices or tensors that include variables related to the trajectory, speed and relative acceleration measured between the object (1) and the vehicle (2).
- the LIDAR sensor (15) may generate a LIDAR image (4) according to a predetermined time interval.
- the LIDAR imaging device (34) may record a plurality of bounce signals (37), which, when stored in a data structure, such as a tensor or matrix, allow an image, i.e. the LIDAR image (4), to be generated.
- the LIDAR images (4) may be data or data arrays that further include values of a time data and values of distance between the vehicle (2) and an object (1).
- a first LIDAR image (7) may be data that includes the data matrix representing the LIDAR image itself, and may include another data structure with information such as, time stamp data, or minimum distance data between the vehicle (2) and the object (1).
- the minimum distance data may be determined by the LIDAR sensor (15) from the bounce signals (37) and the emitted lasers (33).
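As a hedged illustration of such a LIDAR image data structure, the sketch below pairs a NumPy distance matrix with time-stamp and minimum-distance metadata; the class and attribute names are assumptions, not the patent's implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LidarImage:
    """One LIDAR image (4): a distance matrix plus time metadata (illustrative only)."""
    distances: np.ndarray  # matrix/tensor of ranges built from the bounce signals (37)
    timestamp: float       # time-stamp data assigned by the LIDAR imaging device (34)

    @property
    def min_distance(self) -> float:
        # minimum vehicle-to-object distance detected in this frame
        return float(self.distances.min())

# Example: a 4x4 range matrix captured at t = 0.125 s
img = LidarImage(distances=np.random.uniform(0.5, 10.0, size=(4, 4)), timestamp=0.125)
print(img.min_distance)
```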
- the method disclosed herein obtains the alert data (5) by means of the artificial vision method (6).
- the artificial vision method (6) takes as input the LIDAR images (4) and preferably processes them in pairs.
- the artificial vision method (6) may take a first LIDAR image (7) and a second LIDAR image (8) taken at consecutive instants.
- the second LIDAR image (8) may correspond to a LIDAR image (4) obtained by the LIDAR sensor (15) a few milliseconds after the first LIDAR image (7) is obtained.
- the artificial vision method (6) can determine the change in variables related to the relative movement of the object (1) and the vehicle (2).
- the machine vision method (6) can determine from the first LIDAR image (7) and the second LIDAR image (8) whether the object (1) is approaching or moving away from the vehicle (2). Also, the machine vision method (6) can determine whether the object (1) is accelerating or decelerating relative to the vehicle (2). Likewise, with two or more LIDAR images (4), the artificial vision method (6) can determine a trajectory vector, which can be a collision trajectory vector or an evasion trajectory vector.
- the machine vision method (6) may be based on machine learning and/or computer vision techniques.
- the machine vision method (6) may include a deep learning-based image classification process.
- the machine vision method (6) may generate the alert data (5) if the value of the distance data between the vehicle (2) and the object (1) in the second LIDAR image (8) is smaller than the value of the distance data between the vehicle (2) and the object (1) in the first LIDAR image (7). In this way, the machine vision method (6) determines that the object (1) is approaching the vehicle (2) and a collision path exists.
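A simplified sketch of the distance comparison just described is shown below; the real artificial vision method (6) may rely on machine-learning models, so this only illustrates the approach/recede test and a crude closing-speed estimate, with hypothetical function and parameter names.

```python
def generates_alert(d_first: float, d_second: float, dt: float):
    """Alert test over two consecutive LIDAR images: alert if d_second < d_first.

    d_first, d_second: minimum vehicle-object distance (m) in the first (7) and
    second (8) LIDAR image; dt: time elapsed between the two images (s).
    """
    approaching = d_second < d_first           # condition that generates the alert data (5)
    closing_speed = (d_first - d_second) / dt  # > 0: approaching, < 0: moving away
    return approaching, closing_speed

# Example: object at 2.4 m, then at 2.1 m 50 ms later -> alert, ~6 m/s closing speed
print(generates_alert(2.4, 2.1, 0.05))
```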
- step b) may include a sub-step bl) of obtaining an object class data (18) by means of an object classification method (19) taking as input the LIDAR images (4), where the object class data (18) assigns a class value to the object (1) identified in the LIDAR images (4), where the identified object (1) is selected from a vehicle, an obstacle and a pedestrian.
- step b) may include a sub-step b2) of obtaining a criticality value (20) by means of a criticality analysis process (38) taking as input the object class data (18), and may include a sub-step b3) of obtaining with the artificial vision method (6) the value of the alert data (5) taking as input the criticality value (20) and the LIDAR images (4).
- the artificial vision method (6) may apply a correction factor to the value of the alert data (5) depending on the criticality value (20).
- step b) may include a sub-step b4) of moving to step c) if the value of the alert data (5) exceeds a trigger value, otherwise, discarding the alert data and repeating step a).
- the method allows determining whether the detection of an object (1) is more or less relevant. For example, if the object classification method (19) assigns the object (1) an object class data (18) indicating that the object (1) is a pedestrian, cyclist or motorcyclist, a higher criticality value (20) can be assigned in sub-step b2) than if the object (1) is a car or truck. This is because, in the event of an accident, the consequences of a collision with a pedestrian, cyclist or motorcyclist could be much more serious than those of a collision with the same characteristics with a car or truck. Likewise, a pedestrian or cyclist has less capacity to react (change direction, brake, accelerate) than a vehicle.
- the criticality value (20) can have a minimum value that penalizes the alert data value (5). This is useful, for example, in cases where the vehicle (2) is parking, or is on narrow roads where the poles are very close to the road lane.
- the criticality analysis process (38) may be a hierarchical classification process that, based on the object class data (18), assigns a criticality value (20) according to a predetermined scale or hierarchy.
- the object class data (18) may be data that stores one or more categorical, boolean or numerical variables that allow determining the membership of an object (1) to a class or group of objects with predetermined characteristics.
- the criticality analysis process (38) may also take as input dynamic variables of the vehicle (2), such as its speed, its relative speed with respect to the object (1), acceleration, deceleration, or distance to the object (1), in combination with the object class data (18) to obtain the criticality value (20).
- for example, when the vehicle (2) is moving below a predetermined speed (e.g., 5 km/h, 2 km/h), a low criticality value (20) may be assigned for inert objects (1), such as posts, walls or fences, while a significant criticality value (20) that allows the alert data (5) to be generated may be assigned if the object class data (18) indicates that the object (1) is a living being (e.g., a human, or an animal).
- the magnitude of the criticality value (20) may increase more sharply (e.g., exponentially) as the distance between the object (1) and the vehicle (2) decreases when the object (1) is a living being.
- the artificial vision method (6) obtains an alert data (5) that takes into consideration both the physical and movement variables detected and/or calculated from the LIDAR images (4), as well as the hierarchy or criticality scale associated with the type of object (1) detected.
- in sub-step b4) a conditional is executed in which it is evaluated whether the value of the alert data (5) exceeds a trigger value; otherwise, the alert data is discarded and step a) is repeated.
- alert data (5) with negligible values can be discarded, for example, alert data (5) associated with events involving objects (1) whose object class data (18) corresponds to immobile and inert objects. This in turn lightens the computational load of the computing unit (3) and its associated modules, and reduces the amount of data stored in memory (e.g., in memory modules or databases connected to the computing unit (3)).
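The sketch below illustrates one possible reading of sub-steps b2) to b4): a hierarchical criticality lookup weighted by distance, applied as a correction factor to the alert value, followed by a trigger threshold that discards negligible alerts. The class names, scale values and trigger value are assumptions and do not come from the patent.

```python
import math

# Hypothetical criticality hierarchy (higher = more critical)
CRITICALITY_SCALE = {"pedestrian": 1.0, "cyclist": 1.0, "motorcyclist": 0.9,
                     "car": 0.5, "truck": 0.5, "pole": 0.1, "wall": 0.1}

def criticality_value(object_class: str, distance_m: float) -> float:
    """Sub-step b2): class-based value, growing sharply as a living being gets closer."""
    base = CRITICALITY_SCALE.get(object_class, 0.3)
    if object_class in ("pedestrian", "cyclist", "motorcyclist"):
        base *= math.exp(1.0 / max(distance_m, 0.1))  # grows as distance decreases
    return base

def alert_value(raw_alert: float, object_class: str, distance_m: float) -> float:
    """Sub-step b3): apply the criticality value as a correction factor."""
    return raw_alert * criticality_value(object_class, distance_m)

TRIGGER = 0.6  # assumed trigger value for sub-step b4)

def passes_trigger(value: float) -> bool:
    """Sub-step b4): only alerts above the trigger value move on to step c)."""
    return value > TRIGGER

# A pedestrian at 1.5 m is kept; a pole at the same distance is discarded
print(passes_trigger(alert_value(0.5, "pedestrian", 1.5)))  # True
print(passes_trigger(alert_value(0.5, "pole", 1.5)))        # False
```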
- said machine vision method (6) may include one or more machine vision processes or methods, for example, principal component analysis (PCA), kernel methods, statistical methods, scale invariant feature transformation (SIFT), feature extraction, Accelerated Robust Features (SURF), Oriented Gradient Localization Histogram (GLOH), DAISY, Oriented FAST and Rotated BRIEF (ORB), Scalable Robust Binary Keypoints (BRISK), Histogram of Oriented Gradients (HOG), and any other machine vision method known to a person of ordinary skill in the art.
- PCA principal component analysis
- SIFT scale invariant feature transformation
- SURF Accelerated Robust Features
- GLOH Oriented Gradient Localization Histogram
- ORB Oriented FAST and Rotated BRIEF
- BRISK Scalable Robust Binary Keypoints
- HOG Histogram of Oriented Gradients
- the artificial vision method (6) may include preprocessing processes or methods configured to clean, filter, normalize the LIDAR images (4), or to extract features, perform dimensional reduction of variables, and other methods or processes that allow lightening the computational load and memory consumption of the computing unit (3) or the remote server (26) that processes the LIDAR images (4).
- An example of a preprocessing method may be a method executed by the computing unit (3) configured to remove information considered noise, for example, the background, vehicles or objects (1) detected behind the nearest object (1), and any other noise information known to a person of ordinary skill in the art.
- the classification method (10), the classification process (13), the object classification method (19) and/or the artificial vision method (6) may include one or more stages, methods, processes, or computational subprocesses that allow data to be classified into two or more classes.
- the classification method (10), the classification process (13), the object classification method (19), and/or the computer vision method (6) may include any other process that allows grouping data into classes, including artificial intelligence and machine learning processes, such as linear classification processes (e.g. logistic regression, Naive Bayes classification, Fisher linear discriminant), support vector machines, least squares support vector machines, quadratic classification processes, kernel estimation, k-nearest neighbors, decision trees, alternating decision trees, ID3 algorithms, C4.5 algorithms, Chi-square automatic interaction detection (CHAID) algorithms, decision stumps, fast-and-frugal trees, simple decision trees, linear decision trees, deterministic decision trees, randomized decision trees, non-deterministic decision trees, quantum decision trees, pruned decision trees (decision tree pruning), random forests, neural networks (e.g. supervised, backpropagation, forward propagation), learning vector quantization, and other machine learning techniques familiar to a person of ordinary skill in the art.
- linear classification processes e.g. logistic regression, Naive Bayes classification, Fisher linear discriminant
- Machine learning can refer to algorithms and statistical models that computer systems (e.g., the computing unit (3), the remote server (26)) can use to perform a specific task without using explicit instructions, relying instead on models and inferences.
- in machine learning, instead of a rule-based data transformation, a data transformation inferred from an analysis of historical data and training data can be used. For example, image content can be analyzed using a machine learning model or a machine learning algorithm.
- Machine learning may involve performing a plurality of machine learning tasks by machine learning systems, such as supervised learning.
- Supervised learning may include presenting a set of example inputs and desired outputs to machine learning systems. For example, historical records of groups of LIDAR images (4) previously labeled by an expert may be used to train the machine learning processes, methods, and models of the classification method (10), the classification process (13), the object classification method (19), and/or the computer vision method (6).
- machine learning may include a plurality of other tasks based on an output of the machine learning system. Such tasks may also be classified as machine learning problems, such as classification, regression, clustering, density estimation, dimensionality reduction, anomaly detection, and the like. Machine learning may include a plurality of mathematical and statistical techniques.
- Learning processes may include decision tree based learning, association rule learning, deep learning, artificial neural networks, genetic learning processes, inductive logic programming, support vector machines (SVM), Bayesian networks, reinforcement learning, representation learning, rule based machine learning, sparse dictionary learning, similarity and metric learning, learning classification systems (LCS), logistic regression, random forest, K-means, gradient boosting and adaboost, K-nearest neighbors (KNN), a priori processes, artificial neural networks (ANN), convolutional neural networks (CNN), recurrent neural networks (RNN), ant colony processes, simulated annealing processes, particle swarm processes, and any other machine learning method or process known to a person of ordinary skill in the art.
- SVM support vector machines
- LCS learning classification systems
- ANN artificial neural networks
- after step c), the method may further include a step e) of recording each alert data (5) in a record (21) of a database (22), where the record (21) includes a field (23) in which the alert data (5) is stored, and a plurality of fields (24) configured to store input data (25) associated with travel characteristics of the vehicle (2).
- the input data (25) stored in the plurality of fields (24) of the record (21) of the database (22) can be selected from the group comprising the georeferenced alert data (12), the georeferencing data (14), a minimum distance data detected between an object (1) and the vehicle (2), the object class data (18), a speed data of the vehicle (2), an acceleration data of the vehicle (2), a direction data of the vehicle (2), a deviation data of the vehicle (2) with respect to a road lane, travel route data, and combinations thereof.
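Purely as an illustration of how such a record (21) might be laid out, the sqlite3 sketch below defines one column per input data item; all table and column names are assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE alert_record (             -- record (21) of the database (22)
    id INTEGER PRIMARY KEY,
    alert_data REAL,                    -- field (23): the alert data (5)
    georeferenced_alert TEXT,           -- georeferenced alert data (12)
    latitude REAL, longitude REAL,      -- georeferencing data (14)
    min_distance_m REAL,                -- minimum vehicle-object distance detected
    object_class TEXT,                  -- object class data (18)
    speed_kmh REAL, acceleration REAL,  -- vehicle dynamics
    heading_deg REAL, lane_deviation_m REAL,
    route_id TEXT                       -- travel route data
)""")
conn.execute("INSERT INTO alert_record (alert_data, object_class, min_distance_m, latitude, longitude) "
             "VALUES (0.97, 'pedestrian', 1.5, 4.6097, -74.0817)")
print(conn.execute("SELECT object_class, min_distance_m FROM alert_record").fetchall())
```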
- Such input data (25) can be taken as input to data analysis processes and classification processes based on machine learning that allow obtaining data, metrics and/or values that qualify the driver of the vehicle (2) and/or the travel route.
- these input data (25) allow generating reports that preventively warn other drivers who travel a predetermined travel route, for example, that in certain areas of the route there is a greater flow of pedestrians, or that it coincides with a bicycle route where more care must be taken to avoid accidents.
- the records (21) of the database (22) would store detailed information on the moments prior to the collision of the object (1) with the vehicle (2). This information allows determining the causes of the accident related to the kinematic variables and relative movement between the object (1) and the vehicle (2). Also, the record (21) can store data values related to the internal behavior of the vehicle (2), for example, tire pressure, fuel level, acceleration percentage, speed, gear of the vehicle, relative distance to a vehicle located in front of the vehicle (2), and any other data or variable that allows the causes and/or variables associated with a collision to be analyzed subsequently.
- a database is a set of data stored in a memory register systematically for later use.
- Databases can be selected from hierarchical databases, network databases, transactional databases, relational databases, multidimensional databases, object-oriented databases, document databases, deductive databases and other databases known to a person with average knowledge of the subject.
- the method may further include a step e) of transmitting a data packet (40) including records (21) from the database (22) to a remote server (26) connected to the computing unit (3) arranged in the vehicle (2).
- the remote server (26) can execute the calculations that require greater use of memory resources and computational load, and leaves the computing unit (3) to execute steps a) to d).
- the classification process (10) of step d) may be a supervised training classification method.
- the remote server (26) may periodically execute a retraining process configured to obtain a file or computer program, which when sent from the remote server (26) to the computing unit (3), and subsequently read by the computing unit (3), causes the computing unit (3) to update the methods or processes based on machine learning and/or rules.
- the driver of the vehicle (2) may enter an update command into an input/output device or user interface device (35), where the update command causes the computing unit (3) to link to the remote server (26) and download the most updated version of the collision probability data acquisition method.
- the classification process (10) may be trained with a training database (27) that includes a plurality of records (28) associated with travel paths of a plurality of vehicles (2).
- the training database (27) may be a backup of the database (22).
- the remote server (26) may run an ETL (Extract, Transform, Load) process in which it extracts records (28) from records (21) in the database (22) and performs transformations, groupings, or intermediate processing of the variables and data stored in the database (22). This allows for extracting the variables and data necessary to train the classification process (10), and discarding data that is irrelevant to this purpose.
- ETL Extract, Transform, Load
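A very small sketch of the kind of ETL pass mentioned above: extract the stored records, keep only the variables useful for retraining the classification process (10), and load them as training rows. The field names follow the hypothetical record layout sketched earlier and are equally illustrative.

```python
def etl(raw_records):
    """Extract -> Transform -> Load: keep only training-relevant variables."""
    training_rows = []
    for rec in raw_records:                  # Extract: records (21) read from the database (22)
        if rec.get("object_class") is None:  # Transform: discard incomplete or irrelevant rows
            continue
        training_rows.append({               # keep only the features used for retraining
            "min_distance_m": rec["min_distance_m"],
            "object_class": rec["object_class"],
            "speed_kmh": rec.get("speed_kmh", 0.0),
        })
    return training_rows                     # Load: rows for the training database (27)

print(etl([{"object_class": "pedestrian", "min_distance_m": 1.5, "speed_kmh": 22.0},
           {"object_class": None, "min_distance_m": 7.9}]))
```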
- step d) may include a sub-step di) of transmitting the georeferenced alert data (12) and/or the LIDAR images (4) from the computing unit (3) to a remote server (26), and a sub-step d2) of storing the georeferenced alert data (12) and/or the LIDAR images (4) in a record (21) of a database (22) accessed by the remote server (26).
- step d) may include a sub-step d3) of obtaining the collision probability data (9) by executing the classification process (10) on the remote server (26), where the classification process (10) takes as input the record (21) of sub-step d2).
- step d) may include a sub-step d4) of retraining the classification process (10) taking as input the training database (27), where the training database (27) is periodically updated to include new records (21) entered into the database (22).
- the collision probability data (9) is obtained on the remote server (26) and not on the computing unit (3).
- This allows the computational load of the computing unit (3) to be lightened.
- the computing unit (3) can also obtain the alert data (5) locally, and generate an alert message (36) that is emitted on a user interface device (35) accessed by the driver of the vehicle (2). This also allows the driver of the vehicle (2) to be alerted in cases where an object (1) is detected in the blind spot (16) and is classified as a dangerous event.
- the computing unit (3) may transmit a data packet (40) including georeferenced alert data (12), georeferencing data (14) and/or LIDAR images (4), together with other data, variables and data arrays related to the vehicle (2) during a travel route.
- the data packet (40) may group sets of records (21) storing said data.
- the sets of records (21) included in each data packet (40) may correspond to a predetermined time interval, or to a predetermined distance interval.
- several data packets (40) can be stored in a memory module of the computing unit (3) to then send them at the time the communications module (29) is connected to the Internet, or any other wireless communications protocol (e.g., radio frequency, satellite, 3G, 4G, 5G, Bluetooth, ZigBee, among others).
- any other wireless communications protocol e.g., radio frequency, satellite, 3G, 4G, 5G, Bluetooth, ZigBee, among others.
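A hedged sketch of the buffering behaviour described above: data packets (40) accumulate in a local queue and are flushed only when the communications module reports connectivity. The connectivity flag and the send callback are placeholders, not the patent's interfaces.

```python
from collections import deque

class PacketBuffer:
    """Store data packets (40) locally until the communications module (29) is online."""

    def __init__(self):
        self._queue = deque()

    def add(self, packet: dict):
        # each packet groups records (21) for a predetermined time or distance interval
        self._queue.append(packet)

    def flush(self, is_connected: bool, send):
        """Send every buffered packet once a connection is available."""
        while is_connected and self._queue:
            send(self._queue.popleft())

buf = PacketBuffer()
buf.add({"records": ["..."], "interval_s": 60})
buf.flush(is_connected=True, send=print)  # 'send' would normally upload to the remote server (26)
```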
- step a0) includes a sub-step (100) of scanning a detection zone by sweeping with the laser (33) of the LIDAR sensor (15) an angular path between the first angle (41) and the second angle (42), where the detection zone is divided into semicircular segments divisible into successive triangles.
- step a0) may include a sub-step (101) of detecting a plurality of bounce signals (37) with the laser receiver (32) of the LIDAR sensor (15), where the bounce signals (37) are generated when the laser (33) bounces off an object (1) detected near the blind spot (16) of the vehicle (2), and where each bounce signal (37) is related to a semicircular segment of the detection zone.
- step a0) may include a sub-step (102) of obtaining the LIDAR images (4), where at least a first LIDAR image (7) and a second LIDAR image (8) are obtained.
- the LIDAR images (4) are obtained from the bounce signals (37), by means of the LIDAR image generation device (34).
- the first LIDAR image (7) includes a first reference point, and the second LIDAR image (8) includes a second reference point.
- Each reference point is a coordinate of a coordinate system.
- the coordinate system includes an X coordinate axis parallel to a surface of the vehicle (2).
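As a small illustration of that coordinate system, the sketch below converts one bounce signal (angle swept by the laser, measured range) into a reference point, with the X axis parallel to the vehicle surface and the origin at the LIDAR sensor; the angles and ranges in the example are invented.

```python
import math

def reference_point(angle_deg: float, range_m: float):
    """Convert one bounce signal (37) to (x, y): X parallel to the vehicle, origin at the sensor."""
    theta = math.radians(angle_deg)  # angle swept between the first angle (41) and second angle (42)
    return range_m * math.cos(theta), range_m * math.sin(theta)

# One reference point per LIDAR image
p1 = reference_point(40.0, 2.4)  # e.g. from the first LIDAR image (7)
p2 = reference_point(55.0, 2.1)  # e.g. from the second LIDAR image (8)
print(p1, p2)
```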
- the LIDAR sensor (15) may include a servo motor (30) with a shaft on which a laser generating device (31) is mounted that generates the laser (33).
- the LIDAR sensor (15) is technically and electronically simpler compared to conventional and commercially available LIDAR modules, for example those used in Apple® products such as iPhone® and iPad® (e.g., Pro, Pro Max versions) of generation 12 and higher.
- Such embodiments including step a0) may also have a step d) including a sub-step (103) of obtaining a functional form f(x) from the first reference point of the first LIDAR image (7) and the second reference point of the second LIDAR image (8), and a sub-step (104) of obtaining the collision probability data (9) by integrating the functional form f(x), where the integration limits are defined based on the coordinates of the first reference point and the second reference point.
- these embodiments allow determining the value of the collision probability data (9) based on two reference points, which reduces the consumption of computational resources and the memory usage of the computing unit (3).
- the functional form f(x) is a mathematical function obtained from the reference points of the LIDAR images (4), where at least two reference points are used (i.e., those of the first LIDAR image (7) and the second LIDAR image (8)).
- at least two reference points i.e., those of the first LIDAR image (7) and the second LIDAR image (8).
- three or more LIDAR images (4) can be used, each of the LIDAR images (4) having an associated reference point with respect to the coordinate system.
- the functional form f(x) can be linear, a spline, a polynomial function of third or higher degree, or adopt any other form of integrable function.
- by integrating the functional form f(x), the area under the curve with respect to the X coordinate axis, which is parallel to a surface of the vehicle (2), is obtained.
- the value of said area under the curve corresponds to the value of the collision probability data (9).
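A sketch of sub-steps (103) and (104) under the simplest assumption, a linear functional form f(x) through the two reference points, is given below; the area under f(x) between the two X coordinates is taken as the collision probability value, and any scaling or normalisation would be an additional assumption.

```python
def linear_f(p1, p2):
    """Functional form f(x): a straight line through the two reference points."""
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) / (x2 - x1)
    return lambda x: y1 + slope * (x - x1)

def collision_probability(p1, p2, steps: int = 100):
    """Integrate f(x) between the X coordinates of the reference points (trapezoidal rule)."""
    f = linear_f(p1, p2)
    x1, x2 = p1[0], p2[0]
    h = (x2 - x1) / steps
    return sum((f(x1 + i * h) + f(x1 + (i + 1) * h)) * h / 2 for i in range(steps))

# Two illustrative reference points on the X axis parallel to the vehicle surface
print(collision_probability((0.0, 1.5), (0.8, 1.2)))  # area under the curve, here ~1.08
```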
- the present disclosure describes embodiments of a computer-implemented method for obtaining collision probability data on a travel path of a vehicle (2).
- the method comprises executing on a remote server (26) a step A) of receiving from a computing unit (3) arranged in the vehicle (2) a plurality of georeferenced alert data (12), where each georeferenced alert data (12) is associated with an alert data (5).
- the method further has a step B) of storing each georeferenced alert data (12) in a record (21) of a database (22) accessed by the remote server (26); and a step C) of obtaining collision probability data (9) by means of a classification process (10) that takes as input the records (21) in which the plurality of georeferenced alert data (12) obtained during a travel path of the vehicle (2) are stored.
- the method embodiments including steps A), B) and C) allow a remote server (26) to connect to one or more computing units (3) and process alert data (5) and/or georeferenced alert data (12) of a plurality of vehicles (2).
- the remote server (26) can generate maps of the travel routes of the vehicles (2) and classify segments of each travel route according to the collision probability data (9) obtained in step C) for said plurality of vehicles (2).
- the remote server (26) can be connected to the computing units (3) of the plurality of vehicles (2) by means of a client-server type communications protocol.
- the computing units (3) are configured as terminals that communicate with the remote server (26) through the Internet, using a data transmission protocol such as API, API-REST, HTTP, HTTPS, and the like known to a person of average skill in the art.
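A minimal sketch of the terminal side of such a client-server exchange, assuming an HTTPS REST endpoint exposed by the remote server (26); the URL and the payload fields are hypothetical.

```python
import json
import urllib.request

def send_georeferenced_alert(alert: dict, url: str = "https://example.invalid/api/alerts"):
    """POST one georeferenced alert data (12) from the computing unit (3) to the remote server (26)."""
    body = json.dumps(alert).encode("utf-8")
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req, timeout=5) as resp:  # raises if the server is unreachable
        return resp.status

# Example payload (field names are assumptions)
alert = {"alert": 0.97, "lat": 4.6097, "lon": -74.0817, "timestamp": 1718000000}
# send_georeferenced_alert(alert)  # commented out: requires a reachable server
```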
- the computing unit (3) executes an artificial vision method (6) that generates the alert data (5) if a value of a distance data between the vehicle (2) and an object (1) identified in a second LIDAR image (8) taken at a second instant, is less than a value of a distance data between the vehicle (2) and the object (1) of a first LIDAR image (7) taken at a first instant prior to the second instant.
- the computing unit (3) obtains the georeferenced alert data (12) through a classification process (13) that takes as input the alert data (5) and a georeferencing data (14) obtained by a GPS module (17).
- the embodiments of the method that include steps A), B) and C) allow the alert data (5) to be obtained locally in the computing unit (3), so that alert messages (36) can be generated and communicated locally to the driver through the user interface device (35).
- the present disclosure describes embodiments of an apparatus for obtaining collision probability data between an object (1) and a vehicle (2).
- the apparatus comprises a LIDAR sensor (15) configured to obtain a plurality of LIDAR images (4), where the LIDAR sensor (15) is configured to be arranged in a blind spot (16) of the vehicle (2).
- the apparatus further comprises a GPS module (17) connected to the computing unit (3) and configured to obtain georeferencing data (14) of the vehicle (2), and a communications module (29).
- the apparatus includes a computing unit (3) connected to the LIDAR sensor (15), the GPS module (17), and the communications module (29) and configured to be arranged in the vehicle (2), where the computing unit (3) is configured to execute any of the embodiments of the computer-implemented methods disclosed herein.
- the apparatus includes a plurality of LIDAR sensors (15), each LIDAR sensor (15) disposed at a blind spot (16) of the vehicle (2).
- the plurality of LIDAR sensors (15) are connected in parallel to the computing unit (3). In this manner, the computing unit (3) can process data corresponding to two or more blind spots (16) of the vehicle (2).
- the LIDAR sensor (15) is a commercially available sensor, for example, LIDAR modules available in iPhone® or iPad® devices of the 12 series or higher that feature this technology.
- the LIDAR sensor (15) may include a servo motor (30) having an axis configured to execute an alternating movement between a first angle (41) and a second angle (42).
- the LIDAR sensor (15) may further include a laser generating device (31) disposed on the axis of the servo motor (30) and a laser receiver (32) configured to be disposed in the blind spot (16) of the vehicle (2) and to detect a plurality of bounce signals (37) that are generated when a laser (33) generated by the laser generating device (31) bounces off an object (1) positioned near the blind spot (16).
- the LIDAR sensor (15) may also include a LIDAR imaging device (34) connected to the laser receiver (32) and the laser generating device (31), and configured to obtain the LIDAR images (4) from the plurality of bounce signals (37).
- the first angle (41) and a second angle (42) can generate between them an angular path between 300° and 45°.
- the first angle (41) and a second angle (42) define an angular path of 180°, also called a field of view or detection zone, and form an X axis parallel to a surface of the vehicle (2).
- if the LIDAR sensor (15) is in a blind spot (16) arranged in the middle side zone of the vehicle (2), the X axis would be parallel to the length of the vehicle, with its origin at the position where the LIDAR sensor (15) is installed.
- the LIDAR sensor (15) is configured to scan with the laser (33) the field of view defined by the first angle (41) and the second angle (42) and detect the object (1) when it is positioned in the blind spot (16).
- a plurality of bounce signals (37) are generated, which are subsequently detected by the laser receiver (32).
- the bounce signals (37) are processed by the LIDAR image generating device (34), which may be a computing unit similar to the computing unit (3).
- the LIDAR imaging device (34) may be configured to determine blind spot areas (16) circumscribed in right triangles. When an object (1) is detected, the LIDAR imaging device (34) obtains from the LIDAR images (4) at least the distance values and location coordinates of the object (1) relative to a coordinate axis system including the aforementioned X axis.
- the LIDAR imaging device (34) may obtain at least two points in the coordinate axis system, one for a first LIDAR image (7) and one for a second LIDAR image (8).
- the LIDAR imaging device (34) may assign to each LIDAR image (4) a time data or time stamp data that allows the computing unit (3) to identify the instant in time at which said LIDAR image (4) was obtained.
- the LIDAR imaging device (34) preferably generates a data tensor that includes a plurality of points corresponding to the surfaces of the object (1) on which the laser (33) strikes. In this way, a plurality of distances between the object (1) and the vehicle (2) can be determined, and LIDAR images (4) can be formed that allow the shape of the object (1) to be visualized, and that subsequently allow an object class data (18) associated with said object (1) to be obtained.
- the computing unit (3) is a device that processes data.
- the computing unit (3) may be selected from the group comprising microcontrollers, microprocessors, DSCs (Digital Signal Controller), FPGAs (Field Programmable Gate Array), CPLDs (Complex Programmable Logic Device), ASICs (Application Specific Integrated Circuit), SoCs (System on Chip), PSoCs (Programmable System on Chip), computers, servers, tablets, cell phones, smart phones, signal generators and computing units (3) similar or equivalent known to a person with average knowledge of the subject and combinations of these.
- the computing unit (3) may also include a memory module having at least one memory selected from the group comprising RAM (cache memory, SRAM, DRAM, DDR), ROM memory (Flash, Cache, hard drives, SSD, EPROM, EEPROM, removable ROM memories (e.g. SD (miniSD, microSD, etc.), MMC (MultiMedia Card), Compact Flash, SMC (Smart Media Card), SDC (Secure Digital Card), MS (Memory Stick), among others)), CD-ROM, digital versatile discs (DVD for Digital Versatile Disc) or other optical storage, magnetic cassettes, magnetic tapes, storage media or any other medium that can be used to store information and which can be accessed by a computing unit (3) that are known to a person with average skill in the art.
- the computing unit (3) may further include a storage device, display device, and/or a Human Interface Device (HID).
- the HID device may be selected, without limitation, from a keyboard, mouse, trackball, touchpad, pointing device, joystick, touch screen, among other devices capable of allowing a user to input data into the computing unit of the device, and combinations thereof.
- the apparatus may include a user interface device (35) connected to the computing unit (3) and configured to be disposed on a dashboard of the vehicle (2) to which a driver of the vehicle (2) has access, where the computing unit (3) is configured to obtain an alert message (36) that is displayed on the user interface device (35) each time an alert data (5) is obtained.
- the user interface device (35) may be a tablet, smartphone or touch screen integrated into the vehicle dashboard (2).
- the display device can be any device that can be connected to a computing unit and display its output, and can be selected from CRT (Cathode Ray Tube) monitors, flat panel displays, LCD (Liquid Crystal Display) displays, active matrix LCD displays, passive matrix LCD displays, LED displays, screen projectors, TVs (8KTV, 4KTV, HDTV, plasma TV, Smart TV), OLED (Organic Light Emitting Diode) displays, AMOLED (Active Matrix Organic Light Emitting Diode) displays, QD (Quantum Dot) displays, segment displays, among other devices capable of displaying data to a user, known to a person with average knowledge of the subject, and combinations of these.
- the communications module (29) can use a wireless communication technology selected from the group consisting of Bluetooth, WiFi, RFID (Radio Frequency Identification), UWB (Ultra Wide Band), GPRS, Konnex (KNX), DMX (Digital Multiplex), WiMax, equivalent wireless communication technologies known to a person with average knowledge of the subject, and combinations of the above.
- the present disclosure describes embodiments of an apparatus for obtaining collision probability data on a travel path of a vehicle (2), comprising a remote server (26) configured to connect to at least one computing unit (3) and to execute any of the embodiments of the methods disclosed herein in which a remote server (26) is employed.
- the remote server (26) may be part of a server system or a communications network configured to establish a web service consumed by the computing units (3).
- the remote server (26) may be any server, computer or computing device that includes a processing unit configured to execute a series of instructions corresponding to stages or steps of methods, routines or processes.
- the server may install and/or execute a computer program that may be written in Java, JavaScript, Perl, PHP, C++, C#, C, Python, SQL, Swift, Ruby, Delphi, Visual Basic, D, HTML, HTML5, CSS, and other programming languages known to a person of ordinary skill in the art.
- the remote server (26) has a communications module that allows establishing a connection with other servers or computing devices, such as the computing unit (3).
- servers may connect to each other, and to other computing devices through web services architectures and communicate over communications protocols such as SOAP, REST, HTTP/HTML/TEXT, HMAC, HTTP/S, RPC, SP, and other communications protocols known to a person of ordinary skill in the art.
- the servers mentioned in the Descriptive Chapter of this disclosure may be interconnected through networks such as the Internet, VPN networks, LAN networks, WAN networks, other equivalent or similar networks known to a person with average knowledge of the subject and combinations thereof. These same networks may connect one or more computing units (3) to one or more servers.
- the remote server (26) may be selected from virtual servers or web servers. Any of the servers described herein may include a memory module configured to store instructions that when executed by the server execute a portion, or all of one or more steps of any of the methods disclosed herein.
- Web services allow servers and computing units (3) to exchange data and communicate with each other regardless of architectural differences in software or hardware.
- Web services generally exchange data in Extensible Markup Language ("XML") or similar markup languages used in the transmission of information over computer networks, such as the Internet.
- Web services operate over communications protocols, for example, the Simple Object Access Protocol (SOAP) or its later versions, the Representational State Transfer ("REST") protocol over HTTP/HTML, and XML Remote Procedure Call ("XML-RPC"); a minimal example of such an exchange is sketched after the following list.
- SOAP: Simple Object Access Protocol
- REST: Representational State Transfer
- HTTP/HTML: HyperText Transfer Protocol / HyperText Markup Language
- XML-RPC: XML Remote Procedure Call
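- as a minimal illustrative sketch of the web-service exchange described above, a computing unit (3) could send a georeferenced alert data (12) to the remote server (26) over a REST-style endpoint; the URL, payload fields and use of the Python requests package are assumptions, not part of the disclosure:

```python
import requests

# Hypothetical payload: the field names are illustrative assumptions.
georeferenced_alert = {
    "vehicle_id": "bus-001",
    "timestamp": "2024-02-16T10:15:30Z",
    "latitude": 4.6097,
    "longitude": -74.0817,
    "alert_value": 0.82,
}

# Hypothetical REST endpoint exposed by the remote server (26).
response = requests.post(
    "https://example.com/api/v1/georeferenced-alerts",
    json=georeferenced_alert,
    timeout=5,
)
response.raise_for_status()  # fail loudly if the server rejects the alert
print(response.status_code)
```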
- the present disclosure also relates to a computer program comprising instructions, which when the program is executed on a device according to any of the embodiments described above causes said device to carry out the steps of a method according to any of the embodiments of the methods described above in this disclosure.
- the present disclosure also relates to a computer-readable medium comprising instructions, which when executed by a system according to any of the above-described embodiments cause said system to carry out the steps of a method according to any of the embodiments of the methods described above in this disclosure.
- the computer readable medium can be selected from executable files, installable files, compact discs, RAM (cache memory, SRAM, DRAM, DDR), ROM memory (Flash, Cache, hard drives, SSD, EPROM, EEPROM, removable ROM memories (e.g. SD (miniSD, microSD, etc.), MMC (MultiMedia Card), Compact Flash, SMC (Smart Media Card), SDC (Secure Digital Card), MS (Memory Stick), among others)), CD-ROM, digital versatile discs (DVD, Digital Versatile Disc) or other optical storage, magnetic cassettes, magnetic tapes, storage media or any other medium that can be used to store information and which can be accessed by a processing unit, computing unit (3) or server (e.g., the remote server (26)).
- the apparatus is installed in a 30-passenger bus-type vehicle (2).
- the device includes six LIDAR sensors (15), each arranged in a blind spot (16) of the vehicle (2).
- Each LIDAR sensor (15) is a commercially available sensor, such as those used in Apple® devices (iPhone 12 Pro Max and later), and is connected by a circuit to the computing unit (3).
- the device has a computing unit (3) which is an Arduino UNO, and has a GPS module (17) compatible with Arduino.
- the computing unit (3) also has a RAM memory module and a MicroSD memory module.
- the device has a communications module (29) compatible with 3G, 4G and 5G technology that connects to the Internet through a 3G, 4G or 5G cellular telephone network.
- Each LIDAR sensor (15) of Example 2 includes a servo motor (30) with a shaft that executes a 180° movement.
- a laser generating device (31) is mounted on the shaft.
- each LIDAR sensor (15) has a laser receiver (32), which is an optical sensor configured for laser detection.
- the LIDAR sensor (15) has a LIDAR imaging device (34), which is an Arduino UNO-type computing unit programmed to process bounce signals (37) detected by the laser receiver (32).
- Example 3: A first example of the method disclosed herein includes a step a) of receiving a plurality of LIDAR images (4) from the LIDAR sensors (15) of the device of example 1.
- the LIDAR images (4) include time data and distance values between the vehicle (2) and an object (1).
- This example of the method also has a step b) of obtaining an alert data (5) using an artificial vision method (6) that takes as input at least a first LIDAR image (7) taken at a first instant, and a second LIDAR image (8) taken at a second instant after the first instant.
- the artificial vision method (6) generates the alert data (5) if the value of the distance data between the vehicle (2) and the object (1) in the second LIDAR image (8) is less than the value of the distance data between the vehicle (2) and the object (1) in the first LIDAR image (7).
- the artificial vision method (6) of this example includes an object recognition-based process that determines an object class data (18) associated with a class to which the object (1) belongs.
- the artificial vision method (6) further determines a reference point of the object (1) with minimum distance to the vehicle (2) in the first LIDAR image (7) and in the second LIDAR image (8), and determines whether the object (1) is approaching or moving away from the vehicle (2).
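- as a minimal illustrative sketch (assuming each LIDAR image (4) is available as an array of (x, y) points and using hypothetical function names), the approach/retreat comparison between the first LIDAR image (7) and the second LIDAR image (8) could be expressed as:

```python
import numpy as np

def min_distance(points):
    """Distance from the vehicle (origin of the coordinate system) to the
    closest detected point of the object."""
    return float(np.min(np.linalg.norm(points, axis=1)))

def get_alert(first_image_points, second_image_points):
    """Return alert data when the object is closer in the second image than
    in the first image (i.e. the object is approaching the vehicle)."""
    d1 = min_distance(first_image_points)
    d2 = min_distance(second_image_points)
    return {"alert": d2 < d1, "d_first": d1, "d_second": d2}

# Example with point arrays from two consecutive LIDAR images
first = np.array([[1.5, 2.0], [1.8, 2.2]])
second = np.array([[1.2, 1.6], [1.5, 1.9]])
print(get_alert(first, second))  # object approaching -> alert is True
```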
- the method of this example also includes a step c) of obtaining a georeferenced alert data (12) through a classification process (13) that takes as input the alert data (5) and at least one georeferencing data (14) obtained by the GPS module (17) of the device.
- the classification process (13) is a machine learning process based on decision trees.
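- as a minimal illustrative sketch of a decision-tree classification process (13) using scikit-learn; the feature layout (distance values plus latitude/longitude) and the tiny training set are assumptions for illustration only:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training rows: [d_first, d_second, latitude, longitude]
X_train = [
    [2.5, 1.2, 4.6097, -74.0817],
    [2.5, 2.4, 4.6098, -74.0815],
    [1.9, 0.8, 4.6500, -74.1000],
    [3.0, 2.9, 4.6501, -74.1002],
]
y_train = [1, 0, 1, 0]  # 1 = georeferenced alert confirmed, 0 = discarded

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

# Classify a new alert data (5) together with its georeferencing data (14)
print(clf.predict([[2.2, 1.0, 4.6099, -74.0816]]))
```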
- the method of this example has a step d) of obtaining a collision probability data (9) by means of a classification process (10) that takes as input the plurality of LIDAR images (4) and a plurality of georeferenced alert data (12) obtained during a travel path of the vehicle (2).
- the classification process (10) is a machine learning process based on probabilistic analysis that takes as input at least the distance data obtained from the LIDAR images (4).
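- as a minimal illustrative sketch of a probabilistic aggregation over a travel path; the distance threshold and the weighting by the number of georeferenced alert data (12) are assumptions, not the analysis specified by the disclosure:

```python
import numpy as np

def collision_probability_along_path(min_distances_m, n_georeferenced_alerts,
                                     d_threshold_m=1.5):
    """Fraction of LIDAR images in which the object came closer than
    d_threshold_m, weighted by how many georeferenced alerts were raised
    along the travel path (threshold and weighting are assumptions)."""
    d = np.asarray(min_distances_m, dtype=float)
    close_fraction = float(np.mean(d < d_threshold_m)) if d.size else 0.0
    alert_weight = 1.0 - 1.0 / (1.0 + n_georeferenced_alerts)  # 0 when no alerts
    return close_fraction * alert_weight

print(collision_probability_along_path([2.0, 1.2, 0.9, 2.4], n_georeferenced_alerts=3))
```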
- Example 4: the method of example 3 was modified so that the apparatus of example 2 is used, and so that the method of the present example also includes a step a0), prior to step a), of obtaining the LIDAR images (4).
- Step a0) includes a sub-step (100) of scanning a detection area by sweeping an angular path of 180° with a laser (33) of the LIDAR sensor (15).
- the detection area is divided into semicircular segments divisible into successive triangles.
- step a0) has a sub-step (101) of detecting a plurality of bounce signals (37) with a laser receiver (32) of the LIDAR sensor (15), where each bounce signal (37) is related to a semicircular segment of the detection area, and a sub-step (102) of obtaining at least a first LIDAR image (7) and a second LIDAR image (8) from the bounce signals (37), by means of a LIDAR image generating device (34) of the LIDAR sensor (15) (a sketch illustrating this segment division is given after the coordinate-system description below).
- the first LIDAR image (7) includes a first reference point.
- the second LIDAR image (8) includes a second reference point.
- Each reference point is a coordinate in a coordinate system, where the coordinate system includes an X coordinate axis parallel to a surface of the vehicle (2).
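- as a minimal illustrative sketch of the segment division mentioned above, the following snippet relates a bounce signal's sweep angle to a semicircular segment and approximates the area of one successive triangle; the number of segments is an assumption:

```python
import math

def segment_index(angle_deg, n_segments=12, field_of_view_deg=180.0):
    """Relate a bounce signal's sweep angle to one of the semicircular
    segments into which the 180 degree detection area is divided."""
    width = field_of_view_deg / n_segments
    return min(int(angle_deg // width), n_segments - 1)

def triangle_area(range_m, angle_step_deg):
    """Area of one successive triangle swept between two bounces at roughly
    the same range: 1/2 * r^2 * sin(delta_theta)."""
    return 0.5 * range_m ** 2 * math.sin(math.radians(angle_step_deg))

print(segment_index(95.0))       # 95 degrees falls in segment 6 of 12
print(triangle_area(2.0, 15.0))  # about 0.52 m^2 for a 15 degree step at 2 m
```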
- the method has a step d) with a sub-step (103) of obtaining a functional form f(x) from the first reference point of the first LIDAR image (7) and the second reference point of the second LIDAR image (8).
- the functional form f(x) is a linear function.
- step d) also has a sub-step (104) of obtaining the collision probability data (9) by integrating the functional form f(x), where the integration limits are defined based on the coordinates of the first reference point and the second reference point.
- the calculated areas correspond to the collision probabilities and, by complement, to the non-collision probabilities.
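- as a minimal illustrative sketch of sub-steps (103) and (104): fit the linear functional form f(x) through the two reference points and integrate it between their x coordinates; the band height y_max used to normalize the area into a probability is an assumption:

```python
def linear_form(p1, p2):
    """Line f(x) through the first and second reference points p = (x, y);
    assumes the two x coordinates differ."""
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) / (x2 - x1)
    return lambda x: y1 + slope * (x - x1)

def collision_probability(p1, p2, y_max):
    """Integrate f(x) between the x coordinates of the two reference points
    (exact for a line: trapezoid area) and normalize against the band of
    height y_max; y_max and the complement rule are assumptions."""
    (x1, y1), (x2, y2) = p1, p2
    width = abs(x2 - x1)
    area_under_f = 0.5 * (y1 + y2) * width
    p_no_collision = min(area_under_f / (y_max * width), 1.0)
    return 1.0 - p_no_collision  # smaller remaining lateral area -> higher probability

f = linear_form((0.0, 2.0), (1.0, 1.2))
print(f(0.5))                                              # 1.6 m predicted at x = 0.5
print(collision_probability((0.0, 2.0), (1.0, 1.2), 3.0))  # approximately 0.47
```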
- Example 5: In this example of the method, step b) includes a sub-step b1) in which the object class data (18) is obtained.
- the object class data (18) assigns a class value to the object (1) identified in the LIDAR images (4), and where the identified object (1) is selected from a vehicle, an obstacle and a pedestrian.
- the object class data (18) includes subclass values for the type of vehicle detected (bus, truck, car, van, motorcycle, bicycle, motor-car, etc.).
- the object classification method (19) is a pre-trained convolutional neural network.
- This embodiment of the method also has a sub-step b2) of obtaining a criticality value (20) through a criticality analysis process (38), taking as input the object class data (18).
- the criticality analysis process (38) is a supervised rule-based classification process.
- this embodiment of the method has a sub-step b3) of obtaining with the artificial vision method (6) the value of the alert data (5) taking as input the criticality value (20) and the LIDAR images (4), where the artificial vision method (6) applies a correction factor to the value of the alert data (5) depending on the criticality value (20). Also, the method of this example includes a sub-step b4) of moving on to step c) if the value of the alert data (5) exceeds an activation value, otherwise, discarding the alert data and repeating step a).
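- as a minimal illustrative sketch of sub-steps b3) and b4); the particular correction-factor mapping and the activation value are assumptions:

```python
def corrected_alert_value(base_alert_value, criticality_value, activation_value=0.5):
    """Apply a criticality-dependent correction factor to the alert data value
    and decide whether to move on to step c) or discard the alert and repeat
    step a). The linear mapping and the threshold are illustrative assumptions."""
    correction_factor = 1.0 + criticality_value
    alert_value = base_alert_value * correction_factor
    if alert_value > activation_value:
        return alert_value, "proceed_to_step_c"
    return alert_value, "discard_and_repeat_step_a"

print(corrected_alert_value(0.3, criticality_value=0.8))  # approximately (0.54, 'proceed_to_step_c')
```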
- the alert message (36) may be, for example, a flashing image or an audible signal.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Electromagnetism (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Automation & Control Theory (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Traffic Control Systems (AREA)
Abstract
The present disclosure describes embodiments of computer-implemented methods and apparatuses for obtaining collision probability data between an object and a vehicle. The disclosed method comprises a step a) of receiving a plurality of LIDAR images, the LIDAR images comprising time data and distance values between the vehicle and an object. The method further comprises a step b) of obtaining an alert data by means of an artificial vision method that takes LIDAR images as input data. The artificial vision method generates the alert data if the value of the distance data between the vehicle and the object in the second LIDAR image is less than the value of the distance data between the vehicle and the object in the first LIDAR image.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CONC2023/0001801 | 2023-02-17 | ||
CONC2023/0001801A CO2023001801A1 (es) | 2023-02-17 | 2023-02-17 | Método y aparato de obtención de datos de colisión entre un objeto y un vehículo |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024171146A1 true WO2024171146A1 (fr) | 2024-08-22 |
Family
ID=92300004
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2024/051508 WO2024171146A1 (fr) | 2023-02-17 | 2024-02-16 | Procédé et appareil d'obtention de données de collision entre un objet et un véhicule |
Country Status (2)
Country | Link |
---|---|
CO (1) | CO2023001801A1 (fr) |
WO (1) | WO2024171146A1 (fr) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8762043B2 (en) * | 2008-01-29 | 2014-06-24 | Volvo Car Corporation | Method and system for collision course prediction and collision avoidance and mitigation |
US20190265714A1 (en) * | 2018-02-26 | 2019-08-29 | Fedex Corporate Services, Inc. | Systems and methods for enhanced collision avoidance on logistics ground support equipment using multi-sensor detection fusion |
US10699565B2 (en) * | 2018-04-04 | 2020-06-30 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods for inferring lane obstructions |
US20200272164A1 (en) * | 2017-06-23 | 2020-08-27 | Uatc, Llc | Collision-Avoidance System for Autonomous-Capable Vehicles |
US10915765B2 (en) * | 2017-01-03 | 2021-02-09 | Innoviz Technologies Ltd. | Classifying objects with additional measurements |
US20220128701A1 (en) * | 2020-10-23 | 2022-04-28 | Argo AI, LLC | Systems and methods for camera-lidar fused object detection with lidar-to-image detection matching |
US11391840B2 (en) * | 2018-06-25 | 2022-07-19 | Ricoh Company, Ltd. | Distance-measuring apparatus, mobile object, distance-measuring method, and distance measuring system |
US20220268933A1 (en) * | 2007-11-07 | 2022-08-25 | Magna Electronics Inc. | Object detection system |
US20230033470A1 (en) * | 2021-08-02 | 2023-02-02 | Nvidia Corporation | Belief propagation for range image mapping in autonomous machine applications |
2023
- 2023-02-17 CO CONC2023/0001801A patent/CO2023001801A1/es unknown
2024
- 2024-02-16 WO PCT/IB2024/051508 patent/WO2024171146A1/fr unknown
Also Published As
Publication number | Publication date |
---|---|
CO2023001801A1 (es) | 2024-08-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Gupta et al. | Deep learning for object detection and scene perception in self-driving cars: Survey, challenges, and open issues | |
US10769456B2 (en) | Systems and methods for near-crash determination | |
US11449073B2 (en) | Shared vehicle obstacle data | |
US12118470B2 (en) | System for predicting aggressive driving | |
US10816993B1 (en) | Smart vehicle | |
US11450117B2 (en) | Hierarchical machine-learning network architecture | |
US20210271258A1 (en) | Smart vehicle | |
US11048253B2 (en) | Agent prioritization for autonomous vehicles | |
CN113439247B (zh) | 自主载具的智能体优先级划分 | |
WO2020264010A1 (fr) | Détection de région à faible variance pour une détection améliorée | |
US11977382B2 (en) | Ranking agents near autonomous vehicles by mutual importance | |
US11537819B1 (en) | Learned state covariances | |
CN114929543A (zh) | 预测周围因素的加塞概率 | |
US11648962B1 (en) | Safety metric prediction | |
US12097845B2 (en) | Systems and methods for identifying high-risk driving situations from driving data | |
Kheder et al. | Iot-based vision techniques in autonomous driving: A review | |
WO2024171146A1 (fr) | Procédé et appareil d'obtention de données de collision entre un objet et un véhicule | |
Khairdoost | Driver Behavior Analysis Based on Real On-Road Driving Data in the Design of Advanced Driving Assistance Systems | |
US12039008B1 (en) | Data generation and storage system | |
Nasr Azadani | Driving Behavior Analysis and Prediction for Safe Autonomous Vehicles | |
Alatabani et al. | XAI applications in autonomous vehicles | |
Li | Safe training of traffic assistants for detection of dangerous accidents | |
Sinha | Machine Learning for Autonomous Vehicle Accident Prediction and Prevention |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24756455; Country of ref document: EP; Kind code of ref document: A1 |