US20220303738A1 - On-board machine vision device for activating vehicular messages from traffic signs - Google Patents

On-board machine vision device for activating vehicular messages from traffic signs

Info

Publication number
US20220303738A1
US20220303738A1 (Application US17/830,428)
Authority
US
United States
Prior art keywords
vehicle
message
optical code
canceled
sign
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/830,428
Inventor
Enes Karaaslan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Connected Wise LLC
Original Assignee
Connected Wise LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Connected Wise LLC filed Critical Connected Wise LLC
Priority to US17/830,428
Assigned to Connected Wise LLC. Assignors: KARAASLAN, ENES (assignment of assignors interest; see document for details)
Publication of US20220303738A1

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/09623Systems involving the acquisition of information from passive traffic signs by means mounted on the vehicle
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60TVEHICLE BRAKE CONTROL SYSTEMS OR PARTS THEREOF; BRAKE CONTROL SYSTEMS OR PARTS THEREOF, IN GENERAL; ARRANGEMENT OF BRAKING ELEMENTS ON VEHICLES IN GENERAL; PORTABLE DEVICES FOR PREVENTING UNWANTED MOVEMENT OF VEHICLES; VEHICLE MODIFICATIONS TO FACILITATE COOLING OF BRAKES
    • B60T7/00Brake-action initiating means
    • B60T7/12Brake-action initiating means for automatic initiation; for initiation not subject to will of driver or passenger
    • B60T7/16Brake-action initiating means for automatic initiation; for initiation not subject to will of driver or passenger operated by remote control, i.e. initiating means not mounted on vehicle
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60TVEHICLE BRAKE CONTROL SYSTEMS OR PARTS THEREOF; BRAKE CONTROL SYSTEMS OR PARTS THEREOF, IN GENERAL; ARRANGEMENT OF BRAKING ELEMENTS ON VEHICLES IN GENERAL; PORTABLE DEVICES FOR PREVENTING UNWANTED MOVEMENT OF VEHICLES; VEHICLE MODIFICATIONS TO FACILITATE COOLING OF BRAKES
    • B60T7/00Brake-action initiating means
    • B60T7/12Brake-action initiating means for automatic initiation; for initiation not subject to will of driver or passenger
    • B60T7/16Brake-action initiating means for automatic initiation; for initiation not subject to will of driver or passenger operated by remote control, i.e. initiating means not mounted on vehicle
    • B60T7/18Brake-action initiating means for automatic initiation; for initiation not subject to will of driver or passenger operated by remote control, i.e. initiating means not mounted on vehicle operated by wayside apparatus
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60TVEHICLE BRAKE CONTROL SYSTEMS OR PARTS THEREOF; BRAKE CONTROL SYSTEMS OR PARTS THEREOF, IN GENERAL; ARRANGEMENT OF BRAKING ELEMENTS ON VEHICLES IN GENERAL; PORTABLE DEVICES FOR PREVENTING UNWANTED MOVEMENT OF VEHICLES; VEHICLE MODIFICATIONS TO FACILITATE COOLING OF BRAKES
    • B60T8/00Arrangements for adjusting wheel-braking force to meet varying vehicular or ground-surface conditions, e.g. limiting or varying distribution of braking force
    • B60T8/17Using electrical or electronic regulation means to control braking
    • B60T8/1701Braking or traction control means specially adapted for particular types of vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/01Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0011Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots associated with a remote control arrangement
    • G05D1/0022Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots associated with a remote control arrangement characterised by the communication link
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143Sensing or illuminating at different wavelengths
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • H04W4/40Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W4/44Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60TVEHICLE BRAKE CONTROL SYSTEMS OR PARTS THEREOF; BRAKE CONTROL SYSTEMS OR PARTS THEREOF, IN GENERAL; ARRANGEMENT OF BRAKING ELEMENTS ON VEHICLES IN GENERAL; PORTABLE DEVICES FOR PREVENTING UNWANTED MOVEMENT OF VEHICLES; VEHICLE MODIFICATIONS TO FACILITATE COOLING OF BRAKES
    • B60T2210/00Detection or estimation of road or environment conditions; Detection or estimation of road shapes
    • B60T2210/30Environment conditions or position therewithin
    • B60T2210/36Global Positioning System [GPS]
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/01Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/13Receivers
    • G01S19/14Receivers specially adapted for specific applications

Definitions

  • the present invention is directed to infrastructure to vehicle communications (“I2V”), and more particularly, I2V communication without the need for traditional communication infrastructure such as wireless or fiber optic communication networks.
  • I2V infrastructure to vehicle communications
  • CAV connected automated vehicle
  • V2I vehicle to infrastructure
  • V2V vehicle to vehicle
  • the present invention overcomes the shortcomings of the prior art by supporting this particular type of communication in areas where there is no proper infrastructure.
  • the present invention generates the same vehicular messages that are typically sent in networked environments, without the need for expensive, complex communications infrastructure.
  • An infrastructure to vehicle communications system includes a full spectrum camera disposed on a vehicle.
  • a sign disposed away from the vehicle, includes a message thereon.
  • the sign reflecting visible light, and at least a portion of the message being an optical code; the optical code not reflecting visible light and reflecting invisible light.
  • the full spectrum camera captures the image of the sign including the optical code.
  • An on board unit receives the image and determines the existence of the sign from the reflected visible light and the optical code thereon from the reflected invisible light.
  • a database stores one or more optical codes and one or more messages associated with a respective optical code, the on board unit communicating with the database, retrieving the message as a function of the optical code, and displaying the message at a display in the vehicle.
  • an on board full spectrum camera captures images from the vehicle.
  • An onboard unit receives the images and determines the identity of the captured image as a function of reflected visible light and invisible light. Upon recognition of a sign, the image is searched in the invisible light spectrum for an optical code, disposed on the sign.
  • the onboard unit recognizes the presence of an optical code, and when present, the optical code is compared with an onboard database of optical codes.
  • the onboard database mapping a respective optical code to a message, the onboard unit retrieving the message associated with the recognized optical code from the database and displaying the message at a screen on board the vehicle.
  • the invisible light is at least one of ultraviolet light and infrared light.
  • FIG. 1 is an operational diagram showing a system and use in an environment in accordance with the invention as compared to a prior art system;
  • FIG. 2 is a diagram of a sign constructed in accordance with the invention.
  • FIG. 3 is a depiction of different optical codes created in accordance with the invention.
  • FIG. 4 is a block diagram of the onboard components of a system for operating in accordance with the invention.
  • FIG. 5 is a comparison of descriptors for code matching at different distances
  • FIG. 6 is an analysis of environmental factors affecting the system's performance
  • FIG. 7 shows example results with estimated parameters of image filtering functions.
  • the present invention provides a solution that enables vehicular messages (i.e. vehicle-to-infrastructure (V2I), infrastructure-to-vehicle (I2V), and vehicle-to-vehicle (V2V)) communication without relying on a wireless communication infrastructure.
  • vehicular messages i.e. vehicle-to-infrastructure (V2I), infrastructure-to-vehicle (I2V), and vehicle-to-vehicle (V2V)
  • V2I vehicle-to-infrastructure
  • I2V infrastructure-to-vehicle
  • V2V vehicle-to-vehicle
  • An artificial intelligence (AI) embedded smart machine-vision on-board device is disposed on a vehicle. Uniquely generated vehicular optical codes are placed on road signs.
  • the on-board device is equipped with a standard-resolution full spectrum camera.
  • the on board device may also include a geolocation sensor and wireless transmitter. The on board device will convey important roadway information, such as operational messages, basic safety messages, warning messages or roadway geometry messages, to the driver when travelling outside of wireless communication infrastructure range.
  • the inventive system and method replicate some of the vehicular messages that are typically sent from road side units (RSUs) without the dedicated communications infrastructure by detecting roadway entities and recognizing the message from unique identifier signs using machine vision techniques.
  • the system interprets messages/optical codes disposed on the signs and transmits them to a vehicle's on-board unit (OBU).
  • OBU on-board unit
  • the system can work as a supplement to the prior art communication systems: when the vehicle comes into range of a wireless infrastructure, the vehicle's OBU will start communicating directly with the RSU and the smart on-board vision device will stop transmitting messages.
  • the device can nevertheless request certain system updates to its libraries when the OBU shows availability of a secure connection to a central information and communication (ICT) network location.
  • ICT central information and communication
  • ICT 10 includes an integrated communications network, including a number of towers 30 having antennas 32 for transmitting and receiving signals with a vehicle 20 having a prior art system 22 thereon.
  • a number of buildings 40 may also have antennas 42 for receiving and transmitting signals with system 22 of vehicle 20 and vehicle 20 will receive information as known in the art.
  • a car 200 having a system 300 constructed in accordance with the invention, communicates with road signs 100 to receive messages 102 , 104 , as will be described in detail below.
  • System 300 includes a full spectrum camera 302 for receiving an image of items outside of vehicle 200 which are encountered by vehicle 200 as vehicle 200 travels.
  • System 300 is capable of identifying the presence of a sign 100 captured by camera 302 as well as the information disposed thereon.
  • the sign 100 will have visible information 106 and an invisible optical code 400 .
  • the optical code 400 indicates to the operator of the vehicle 200 information related to operation of vehicle 200, such as pedestrian traffic in the area (warning 104) or road damage ahead (warning 102).
  • full-spectrum camera 302 captures images and system 300 determines what is in the image such as sign 100 , or an actual pedestrian, or even the condition of a traffic light (red, green or yellow).
  • system 300 in a preferred nonlimiting embodiment is incorporated as a complement to prior art system 22 so that system 300 takes over when out of range of an ICT 10, but operates in a manner in accordance with the prior art when vehicle 200 is in range of an ICT 10 and RSU signals are detected.
  • QR codes are decoded by the mobile apps using computer vision methods instead of using a scanner that measures the reflectivity of a light beam.
  • QR codes are innately designed for a scanner's working principle and therefore may work inefficiently with cameras.
  • Using these types of matrix codes for static road signs brings about challenges stemming from ambient illumination, sign deterioration, obstruction by an object such as a tree etc. When the machine code is not complete, the information/message can become invalid and may even lead to safety issues for connected vehicles.
  • the required format for a map type of message consists of relatively large character sets, which will be impractical to fit in one 2D matrix code.
  • these codes can easily be modified by a third party if applied to traffic signs. This security problem will also lead to further vehicle safety issues.
  • a special type of unique image identifier is used as the optical code 400 instead of 2D matrix codes.
  • the unique image identifiers are not encoded messages but rather visual representations of encrypted hash values. Unlike 2D matrix codes, the sign message cannot be altered by a third party: once the image identifier is changed, it will not match the corresponding identifier in the library. These image identifiers were designed to optimize camera recognition performance at high vehicle speed and in challenging environmental conditions such as illumination problems and heavy climatic conditions (i.e. fog, rainstorm, and snow). As seen in FIGS. 2 and 3, the image of optical code 400 includes a pattern of blocks 402 a, 402 b, 402 c, 402 n.
  • the block sizes and the color contrast values were chosen optimally for standard resolution cameras (video resolution of 720p and 1080p) to maximize the number of matching image features.
  • the image recognition algorithm looks for color differentiation and high visual transition contrast between adjacent blocks 402 to facilitate recognition. Additionally the optical code 400 includes redundancy to enable recognition when part of the code is blocked, covered, or occluded.
  • image identifiers are associated with messages located in a database, mapping optical codes to associated messages for use by the vehicle operator. If a modification is needed in the message (i.e. lane geometry change, completed road construction etc.), the message database is updated when the connected vehicle is in range of an RSU.
  • unique optical codes 400 are generated for each vehicular message using hash visualization techniques. These uniquely generated optical codes 400 serve as image identifiers and can be robustly recognized by image recognition algorithms. Generating the optical codes 400 with more distinct features (more corners, edges, intensity changes etc.) allows even more robust recognition.
  • Visual hashing is a commonly used method to identify an internet IP address with visual representation while protecting users' privacy (Haack, 2007). Visual hashing can also be used for image recognition and retrieval by converting a hash value to an image using an encryption method (Gangone, Whelan, Janoyan, & Jha, 2008). To provide safe and secure recognition of images and transmission of vehicular messages, a custom visual hashing algorithm is developed to optimize image recognition performance and to add a visually appealing look to the vehicular message signs.
  • API application-programming interface
  • BSD License 2.0 which is a part of a family of permissive free software licenses that impose minimal restrictions on the use and distribution of covered software, and will be open to all developers and agencies who intend to use it for sign identification or other similar image recognition purposes.
  • the visual hashing algorithm uses the SHA-512 algorithm, a very secure one-way hash function, for the generation of optical codes 400. Hence, a third party will not be able to recover the original message by analyzing the identifier image. If a third party replaces the sign with a different identifier, the smart vision system will simply not recognize the message.
  • the generator API for vehicular message identifiers is designed specifically for CAV's camera recognition performance. This improved design of hash visualization entails increased robustness against occlusion (e.g. symmetrical design) and illumination changes (e.g. higher contrast difference).
  • system 300 can construct the database by region so that specific optical codes 400 not only correspond to a specific message, but, by knowing the physical location of vehicle 200 and the anticipated messages in the region, different messages can be associated with a single optical code and displayed as a function of location of vehicle 200.
  • the customized API also incorporates a geofencing feature, and the occurrence of a mismatch is extremely unlikely for nearby signs.
  • a secure web-based tool will be developed in order to provide a convenient process to deploy message signs for vehicular communication applications of highway agencies, cities/counties and expressway authorities.
  • a highway maintenance agency, for instance, can deploy its work zone message sign by uploading the TIM message content and associating it with an automatically generated vehicular message sign.
  • System 300 includes a full spectrum camera 302 capable of capturing images in both the visible light spectrum and the invisible light spectrum.
  • the invisible light spectrum includes at least the ultraviolet wavelengths and infrared wavelengths incapable of being seen by the human eye.
  • Camera 302 is preferably a multi-sensor camera module for RGB, invisible light and depth sensing.
  • the depth-sensing camera 302 provides the object distances at the standard resolution in order to replicate the necessary signal phase and timing (SPaT) information for signals in the vehicular messages without compromising the overall affordability of the system.
  • System 300 can provide 25 fps (frames per second) processing speed with the current stereo vision system.
  • a smart vision device 304 is coupled to camera 302 and receives the images captured by camera 302 .
  • Smart vision device 304 analyzes the images for the existence of objects of interest; those that need to be acted upon for proper operation of the vehicle 200 . These objects may be signs, traffic lights, even pedestrians.
  • Smart vision device 304 identifies these things to an on board unit (OBU) 306 .
  • OBU on board unit
  • OBU 306 includes an optical code database containing two or more distinct optical codes 400 mapped to a respective message.
  • the messages are items of information necessary for, or of interest for, operation of vehicle 200 .
  • the messages may include driving directions (i.e. Merge, Detour, or the like), road condition (Construction, Uneven Roadway, Unpaved or the like), or warnings (pedestrian crosswalk, animal crossing, or the like).
  • OBU 306 receives the identified optical code 400 and retrieves the corresponding message from the database. The message may then be displayed at a screen 310 within vehicle 200, or even at a heads-up display on a vehicle windshield. In an alternative embodiment the message may be output as an audio message to the operator of vehicle 200.
  • the database is part of OBU 306, but it is well within the scope of the invention to provide a standalone database in communication with OBU 306 and/or smart vision device 304.
  • smart vision device 304 recognizes real world objects such as the sign 100 itself, pedestrians in the roadway, traffic lights and the like. This identification is also input to OBU 306 for processing.
  • the OBU 306 also communicates with the operational control system of the vehicle 200, such as the braking system now available in some cars having autonomous braking, or vehicle operation as known from current self-driving vehicles. Therefore, OBU 306 communicates with a brake control 312, by way of example, for operating the brake autonomously. Thus, if smart vision device 304 recognizes a pedestrian in the path of vehicle 200 and transmits that image to OBU 306, and OBU 306 has not detected braking of the vehicle 200, OBU 306 will send a command to brake control 312 to brake vehicle 200.
  • System 300 may also include an interface 314 for connecting OBU 306 to other data sources and devices. In this way, OBU 306 may be updated to change the messages associated with specific optical codes 400 , or to add or delete optical codes 400 over time. Additionally, in a preferred embodiment, OBU 306 keeps a log of activity for each symbol utilized and message retrieved which may be downloaded for monitoring purposes through interface 314 .
  • System 300 includes a GPS receiver 308 for determining location of vehicle 200 having system 300 thereon. In this way the database can be accessed as a function of location as well as received message.
  • system 300 may also include a dedicated short range communication antenna 316 for communicating with an antenna 32, at a tower 30 of an RSU, by way of example, to communicate with ICT 10.
  • GPS receiver 308 may also provide an output to OBU 306 , and if OBU 306 determines that it is within an ICT 10 , OBU 306 operates on information received from antenna 316 , rather than optical codes 400 received from camera 302 .
  • antenna 316 receives signals from ICT 10, and upon receipt of such a signal system 300 operates in accordance with the prior art. Antenna 316 acts as a de facto switch.
  • a fast and robust image matching algorithm is required to compare the features of the detected sign image and optical code 400 with the features of each message identifier located in the OBU 306 database (the highest matching score is returned for the best match, and the corresponding vehicular message is found).
  • AI artificial intelligence
  • the AI framework utilized by smart vision device 304 can also make predictions for other vehicular message types (shown in Table 1) that are not based upon recognition of optical code 400 , such as identifying an animal or pedestrian on the road. Using an AI architecture, it is possible to reinforce the performance of the code matching of the optical code 400 and detect other roadway entities such as an animal on the road or a cyclist suddenly entering an intersection as well.
  • the AI-embedded system of smart vision device 304 analyzes the road environment and detects the roadway entities (e.g. traffic signs, vehicles, pedestrians, cyclists and traffic lights) with real-time tracking information.
  • Table 1 summarizes the vehicular message types and the extent to which the smart vision device supports them:
      • MapData Message (MAP), SAE J2735: MAP messages are the most convenient message types when vehicular message signs are used to relay messages to the smart vision device. Since navigation apps typically have issues providing the most up-to-date road geometry in rural areas, it is important to send this information to CAVs.
      • Traveler Information Message (TIM), SAE J2735: relays information such as temporary work zone messages.
      • Personal Safety Message (PSM), SAE J2735: supports warnings about vulnerable road users on or near the road.
      • Road Side Alert (RSA), SAE J2735: RSA messages can be partially supported by the smart vision on-board device by relaying information about ice on the road or suboptimal road segment condition. More dynamic messages, such as ambulances operating in the area or train arrival, are not suitable. The vehicular message signs should be placed at locations before bridge entrances or suboptimal road segments to activate RSA messages.
      • Dilemma Zone Protection (SA1), ISO TS 19091: the smart vision device can detect a yellow traffic light and estimate whether it is challenging for the vehicle either to stop or to continue before the signal turns red, based on the current speed of the vehicle. This vehicular communication application will not require use of vehicular message signs.
      • Red Light Violation Warning (SA2), ISO TS 19091: this warning requires SPaT and MAP messages in real time to notify the driver to avoid a potential red-light violation. Unlike MAP messages, SPaT messages cannot be generated directly from the vehicular message signs; however, the smart vision device can replicate the SPaT by estimating the time of arrival at the traffic light in real time, as sketched below.
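  • The SA2 time-of-arrival estimate lends itself to a short illustration. The following Python sketch is not the patented implementation: the yellow-phase duration and deceleration rate are assumed constants, with the distance assumed to come from the on-board depth sensor and the speed from the vehicle.

        # Hedged sketch: replicating a SPaT-style timing estimate from
        # on-board measurements alone (assumed constants, not patent values).
        def time_to_signal(distance_m: float, speed_mps: float) -> float:
            """Estimated arrival time at the detected traffic light, in seconds."""
            if speed_mps <= 0.0:
                return float("inf")
            return distance_m / speed_mps

        def in_dilemma_zone(distance_m: float, speed_mps: float,
                            yellow_s: float = 4.0,      # assumed yellow duration
                            decel_mps2: float = 3.4) -> bool:
            """True when the vehicle can neither comfortably stop nor clear
            the signal before it turns red."""
            can_clear = time_to_signal(distance_m, speed_mps) <= yellow_s
            stopping_distance = speed_mps ** 2 / (2.0 * decel_mps2)
            can_stop = stopping_distance <= distance_m
            return not (can_clear or can_stop)

        # Example: 60 m from a yellow light at 20 m/s (~45 mph)
        print(in_dilemma_zone(60.0, 20.0))   # False: the vehicle can clear in time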
  • Deep learning is a multi-layer artificial neural network (ANN) framework that is based on learning more complex concepts from simpler concepts.
  • ANN artificial neural networks
  • the learning occurs in deep learning models when the data acquired through experience is used to manipulate uncertainty about predictions of unavailable data (Ghahramani 2015).
  • CNN Convolutional Neural Network
  • CNNs have proven to be very effective in image classification and object localization tasks (Karaaslan et al. 2019). These models have reached a wide range of real-world applicability, from self-driving vehicles to facial recognition in surveillance systems.
  • SSD Single Shot MultiBox Detector
  • SSDlite is a mobile-optimized version of SSD models that replaces all the regular convolutions with separable convolutions (Sandler et al. 2018). SSDlite is used for object localization and classification and becomes more memory efficient when used with a MobileNet V2 classifier.
  • a relatively light-weight classifier for traffic light color classification is also added to the AI framework in order to enable vehicular communication applications at signalized intersections (e.g. red-light violation warning).
  • This classifier model was trained on an open source traffic light color classification dataset (Kato et al., 2018).
  • the supervised training of the CNN models is performed on the Newton visualization supercomputer.
  • the supercomputer includes ten compute nodes with thirty-two cores and 192 GB of memory in each node; two Nvidia V100 GPUs are available in each compute node, totaling three hundred twenty cores and twenty GPUs.
  • One single training takes about six hours on the GPU cluster computer for a total of two hundred thousand steps.
  • a TensorFlow v2.11 machine learning framework is used for model creation and performing the model trainings (Abadi et al. 2016).
  • Overfitting usually occurs when the data is too small compared to the size of the deep learning architecture. In that case, either the number of convolutional layers should be reduced, or dropout layers should be added at the end of each convolutional block (Karaaslan et al. 2018).
  • Another cause of overfitting is the wrong selection of training hyper-parameters.
  • the learning rate is often chosen to be too small in order to reduce training loss; the validation loss then becomes much larger. This indicates that the model is fitting over a branch of the loss function and not converging around the local minima. Increasing the learning rate, on the other hand, yields a non-converging loss function.
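  • As a concrete illustration of these remedies, the Keras sketch below appends a dropout layer to each convolutional block and makes the learning rate an explicit choice. The layer sizes and rates are illustrative assumptions, not the values used in the patent's models.

        from tensorflow import keras
        from tensorflow.keras import layers

        def conv_block(filters, dropout_rate=0.25):
            return [
                layers.Conv2D(filters, 3, padding="same", activation="relu"),
                layers.MaxPooling2D(),
                layers.Dropout(dropout_rate),   # remedy 1: dropout per conv block
            ]

        model = keras.Sequential(
            [keras.Input(shape=(64, 64, 3))]
            + conv_block(32) + conv_block(64)
            + [layers.Flatten(), layers.Dense(4, activation="softmax")]
        )

        # Remedy 2: tune the learning rate rather than defaulting to a tiny value
        model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])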
  • SIFT Scale Invariant Feature Transform
  • a fast and robust image matching algorithm is used to compare the features of the detected optical code 400 with the features of each code image located in the database of system 300, and more preferably OBU 306. Then, the highest matching score is returned for the best code match and the corresponding V2I message is found.
  • Robust matching is very critical since a mismatch will lead to retrieval of an incorrect V2I message from the database.
  • the matching speed is also very important, since the code matching system on the connected vehicle will have very little time to operate, especially at high driving speeds. Therefore, various descriptors using different matching algorithms were tested on an exemplary optical code database and their performances compared, as shown in Table 4.
  • ORB descriptors with the Brute Force method showed a significant increase in speed, but it sacrificed some accuracy especially when the cropped image of the detected optical code 400 was very small.
  • SURF descriptors also performed reasonably well; however, their scale invariance was not as good as SIFT's. The other descriptors, on the other hand, did not show comparable performance.
  • a distance test is conducted to measure the scale invariability of the descriptors.
  • the matching results of the SIFT, SURF, ORB, AKAZE and BRISK descriptors are compared at varying distances of 5 ft, 10 ft and 15 ft. In the comparison, good matching features above default threshold values were drawn in green color on the compared images. The homography transformation is also highlighted on the detected sign images.
  • ORB performs the fastest; however, SURF detects many more features in the sign images. As shown in Table 5, neither descriptor did as well as SIFT with very small images. Therefore, a dynamic descriptor selection is implemented in the code recognition system: ORB is chosen as the main descriptor, and when the number of compared features is under a certain threshold, the system switches to SURF. If ORB does not produce a match at all, the system switches to SIFT. In this way the speed is optimized and the robustness of the system is maximized. The system creates a vehicular message signal when the sign image is recognized and matched with a code image in the database.
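  • A minimal OpenCV sketch of this dynamic descriptor selection follows. The middle tier (SURF) is omitted because SURF ships only in non-free OpenCV contrib builds, so this sketch falls straight from ORB back to SIFT; the switching threshold and ratio-test value are assumptions, not the patent's calibrated settings.

        import cv2

        MIN_GOOD_MATCHES = 15   # assumed switching threshold

        def count_good_matches(detector, norm, img_a, img_b, ratio=0.75):
            _, des_a = detector.detectAndCompute(img_a, None)
            _, des_b = detector.detectAndCompute(img_b, None)
            if des_a is None or des_b is None:
                return 0
            pairs = cv2.BFMatcher(norm).knnMatch(des_a, des_b, k=2)
            # Lowe's ratio test keeps only distinctive matches
            return sum(1 for p in pairs
                       if len(p) == 2 and p[0].distance < ratio * p[1].distance)

        def match_score(detected_code, library_code):
            """ORB first for speed; fall back to SIFT when too few features match."""
            score = count_good_matches(cv2.ORB_create(), cv2.NORM_HAMMING,
                                       detected_code, library_code)
            if score < MIN_GOOD_MATCHES:
                score = count_good_matches(cv2.SIFT_create(), cv2.NORM_L2,
                                           detected_code, library_code)
            return score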
  • the traffic sign detections, including optical code 400, are extracted by smart vision device 304 and analyzed by OBU 306 using the image recognition algorithm in order to match the detection with a V2I message identifier from the database.
  • the bounding regions of the traffic sign detections are utilized as initial region of interest for the recognition of the V2I identifiers.
  • the optical code 400 is transmitted to the OBU 306 .
  • real-time image enhancement is applied to the device's camera feed as the vehicular message sign recognition system needs to tackle abrupt brightness changes.
  • poor brightness in the detected optical code 400 hides useful image features (i.e. edges and corners) and potentially causes misdetections. It is indispensable for the system to function robustly under a variety of conditions, including different climates, ambient brightness, sign occlusion, and different vehicle speeds. Therefore, real-life tests are applied.
  • FIG. 6 summarizes the effect of environment factors and the corresponding remedies.
  • Each filtering operation requires input parameters to calibrate the filtering level; these parameters include gamma, tile, window, and density.
  • the parameter estimation is conducted using a multivariate nonlinear multiple regression analysis that predicts parameter equations based on the image variables of the detected sign. These image variables are brightness (b), contrast (c) and resolution (r).
  • the regression equations were predicted from this analysis.
  • Predicting multi-variate non-linear equations with multiple dependent variables is a complex statistical problem. Therefore, regression functions that predict the gamma and tile parameters are assumed linear. Furthermore, the window and density parameters are assumed categorical variables. The required sample data are collected from the field tests that are conducted under varied climate and ambient light conditions. FIG. 7 shows example results from the regression functions.
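  • A sketch of this enhancement step, under stated assumptions, is given below: the regression coefficients are placeholders (the fitted equations are not reproduced in the text), gamma is assumed linear in brightness, and the CLAHE tile size is treated as a categorical function of resolution, mirroring the linear/categorical split described above.

        import cv2
        import numpy as np

        def estimate_params(brightness, resolution):
            gamma = 1.8 - 0.004 * brightness           # placeholder coefficients
            tile = 4 if resolution < 720 else 8        # categorical, as described
            return min(max(gamma, 0.4), 2.5), tile

        def enhance(gray, brightness, resolution):
            gamma, tile = estimate_params(brightness, resolution)
            lut = np.array([255 * (i / 255.0) ** (1.0 / gamma)
                            for i in range(256)], dtype=np.uint8)
            corrected = cv2.LUT(gray, lut)                      # gamma correction
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(tile, tile))
            return clahe.apply(corrected)                       # local contrast boost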
  • the feasibility of the present invention at different vehicle speeds is ensured through the investigation of minimum size requirements for vehicular message signs.
  • the suggested dimensions of the message signs 100 for speed limits of 30 mph, 40 mph and 50 mph are determined. It must be noted that these requirements are for the default camera settings, i.e. real-time video at 1080p resolution and a 25 fps frame rate.
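  • A back-of-the-envelope sketch of how such minimum dimensions can be derived from pinhole-camera geometry follows; the focal length, required pixel span, and recognition window below are assumptions for illustration, not the values behind the patent's suggested dimensions.

        FOCAL_PX = 1000.0        # assumed focal length of a 1080p camera, in pixels
        MIN_CODE_PX = 60.0       # assumed pixel span needed for reliable matching
        RECOGNITION_S = 1.0      # assumed time window the matcher needs

        def min_sign_width_m(speed_mph: float, last_readable_m: float = 10.0) -> float:
            """Sign width such that the code still spans MIN_CODE_PX pixels at the
            distance where recognition must begin for the given speed."""
            speed_mps = speed_mph * 0.44704
            start_distance_m = last_readable_m + speed_mps * RECOGNITION_S
            return MIN_CODE_PX * start_distance_m / FOCAL_PX

        for mph in (30, 40, 50):
            print(mph, "mph:", round(min_sign_width_m(mph), 2), "m")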
  • ROS (Robot Operating System) is a flexible robotics framework that can perform different parallel operations without compromising the system's efficiency.
  • ROS environment is a preferred platform especially in self-driving vehicle technologies (e.g. BMW, Bosch, Google, Baidu, Toyota, GE, Tesla, Ford, Uber and Volvo).
  • self-driving vehicle technologies e.g. BMW, Bosch, Google, Baidu, Toyota, GE, Tesla, Ford, Uber and Volvo.
  • ROS will also allow compatibility with a wide range of automated driving systems, in addition to ensuring reliable operation of the system on an actively supported platform.
  • the present invention collects important traffic safety data from the roadway including deterioration of road infrastructure (sign visibility and pavement damage etc.) and conflict point data (near crashes, dilemma zones, traffic congestions etc.).
  • the present invention can determine potentially useful information types (e.g. visibility, deterioration, obstruction etc.) that can be obtained from the detected optical codes 400 of road signs 100 and also the conflict point data that is generated from the records of all vehicular message warnings in a transportation network.
  • This conflict point data will be used to determine road-safety profiles in rural environments and will help the transportation agencies prioritize the deployment of connected infrastructure in these locations.
  • The operating system ensures fast, reliable, and safe operation of the present invention by building a platform environment that facilitates an application interface, a testing/troubleshooting interface, and a data collection/management interface.
  • the platform is built upon ROS environment for easy integration to automated vehicle platforms. This interface will allow the system developer to perform testing and troubleshooting by visualizing all input and output data (e.g. detections of road objects, encoded messages, feature matching of the V2I signs, and memory/GPU utilization control).
  • unavailable roadway geometry information makes automated driving extremely difficult for CAVs at roadway intersections.
  • for SAE J2735 (March 2016), by way of example, a practical online tool, the ISD Message Creator, is used (USDOT, 2018). This tool allows users to define the lanes and approaches of an intersection using a graphical interface.
  • the user can encode an ISD, MAP, or SPaT message as an ASN.1 UPER Hex string.
  • the surveying information is entered for the verified point into the MAP message.
  • the generated MAP message is encoded in tight UPER HEX format.
  • the encoded message structure is also validated using a Connected Vehicle Message Validator tool (USDOT 2017). This validation ensures that the message can be recognized by an OBU 306 that complies with SAE J2735 standard.
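  • The encoding step can be sketched in Python with the asn1tools package, which supports the UPER codec. The schema filename and the simplified message dictionary below are assumptions; a real MapData payload must carry every mandatory field of the J2735 schema.

        import asn1tools

        # Assumes a local copy of the SAE J2735 ASN.1 schema.
        j2735 = asn1tools.compile_files("J2735.asn", codec="uper")

        map_message = {
            "msgIssueRevision": 1,
            # ... intersection geometry, lanes and approaches go here ...
        }

        encoded = j2735.encode("MapData", map_message)   # bytes, UPER-packed
        print(encoded.hex().upper())                     # the UPER hex string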
  • the next step is to generate an image identifier that associates with this message in the device database.
  • the vehicular message sign 100 is created to incorporate the features described above and placed at the predefined location near the intersection.
  • the present invention estimates the real-time location of roadway objects and generates dynamic message data for vehicular communication applications such as personal safety messages (PSM), red-light violation warning, and vulnerable road user avoidance. Therefore, a stereo vision system was integrated in the on-board system 300 for accurate depth estimation.
  • the object distances up to 15 ft are estimated within 1-inch accuracy, up to 50 ft within 5 inches and up to 100 ft within 1 ft.
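  • A minimal stereo-depth sketch follows, with assumed calibration values: depth is recovered from disparity as Z = f · B / d, which is how a stereo rig can deliver distance estimates of the kind quoted above.

        import cv2
        import numpy as np

        FOCAL_PX = 1000.0     # assumed focal length, in pixels
        BASELINE_M = 0.12     # assumed spacing between the two cameras

        def depth_map(left_gray, right_gray):
            matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                            blockSize=7)
            disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
            disparity[disparity <= 0] = np.nan         # no match / invalid pixel
            return FOCAL_PX * BASELINE_M / disparity   # depth in metres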
  • once the V2I identifier (optical code 400) is recognized, the associated MAP message is decoded and displayed on the human machine interface (HMI), including the lane geometries, signage and pedestrian crossings.
  • HMI human machine interface
  • a relatively small, custom designed architecture is developed using Keras wrapper library with Tensorflow backend.
  • the classifier architecture has four convolutional layers and over six million trainable parameters.
  • the model is trained on an open source traffic light color classification dataset for autonomous vehicles (Kato et al., 2018).
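  • A sketch of a comparably small classifier is shown below: four convolutional layers built with the Keras API on a TensorFlow backend. Only the depth (four convolutional layers) and the framework follow the text; the filter counts, input size, and class count are assumptions.

        from tensorflow import keras
        from tensorflow.keras import layers

        def build_light_classifier(num_colors: int = 3):   # red / yellow / green
            model = keras.Sequential([
                keras.Input(shape=(64, 64, 3)),
                layers.Conv2D(32, 3, activation="relu", padding="same"),
                layers.MaxPooling2D(),
                layers.Conv2D(64, 3, activation="relu", padding="same"),
                layers.MaxPooling2D(),
                layers.Conv2D(128, 3, activation="relu", padding="same"),
                layers.MaxPooling2D(),
                layers.Conv2D(128, 3, activation="relu", padding="same"),
                layers.Flatten(),
                layers.Dense(256, activation="relu"),
                layers.Dense(num_colors, activation="softmax"),
            ])
            model.compile(optimizer="adam",
                          loss="sparse_categorical_crossentropy",
                          metrics=["accuracy"])
            return model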

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

An infrastructure to vehicle communications system includes a full spectrum camera disposed on a vehicle. A sign, disposed away from the vehicle, includes a message thereon. The sign reflects visible light, and at least a portion of the message is an optical code; the optical code does not reflect visible light but reflects invisible light. The full spectrum camera captures the image of the sign, including the optical code. A smart vision device receives the image and determines the existence of the sign from the reflected visible light and the optical code thereon from the reflected invisible light. A database stores one or more optical codes and one or more messages associated with a respective optical code; an on board unit communicates with the database, retrieves the message as a function of the optical code, and displays the message at a display in the vehicle.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • This application claims priority to U.S. Provisional Application No. 62/852,252 filed May 23, 2019, the contents of which are herein incorporated.
  • BACKGROUND OF THE INVENTION
  • The present invention is directed to infrastructure to vehicle communications (“I2V”), and more particularly, I2V communication without the need for traditional communication infrastructure such as wireless or fiber optic communication networks.
  • To aid vehicular traffic and operation, it is known in the art to utilize connected automated vehicle ("CAV") technology to improve traffic safety and advance the art of transportation systems. CAV systems rely on vehicular communication. However, as known in the art, assisting these vehicles through vehicular communication (I2V, vehicle to infrastructure ("V2I"), and vehicle to vehicle ("V2V")) requires substantial investment in wireless communication networks and infrastructure, especially in low population density areas. Access to power and fiber-optic infrastructure becomes a necessity to provide such communication. Therefore, the prior art suffers from the disadvantage that the technology necessary for such communication is limited to those areas having proper infrastructure: high population density or wealthy areas.
  • The present invention overcomes the shortcomings of the prior art by supporting this particular type of communication in areas where there is no proper infrastructure. The present invention generates the same vehicular messages that are typically sent in networked environments, without the need for expensive, complex communications infrastructure.
  • SUMMARY OF THE INVENTION
  • An infrastructure to vehicle communications system includes a full spectrum camera disposed on a vehicle. A sign, disposed away from the vehicle, includes a message thereon. The sign reflects visible light, and at least a portion of the message is an optical code; the optical code does not reflect visible light but reflects invisible light. The full spectrum camera captures the image of the sign, including the optical code. An on board unit receives the image and determines the existence of the sign from the reflected visible light and the optical code thereon from the reflected invisible light. A database stores one or more optical codes and one or more messages associated with a respective optical code; the on board unit communicates with the database, retrieves the message as a function of the optical code, and displays the message at a display in the vehicle.
  • During operation an on board full spectrum camera captures images from the vehicle. An onboard unit receives the images and determines the identity of the captured image as a function of reflected visible light and invisible light. Upon recognition of a sign, the image is searched in the invisible light spectrum for an optical code disposed on the sign. The onboard unit recognizes the presence of an optical code, and when present, the optical code is compared with an onboard database of optical codes, the database mapping a respective optical code to a message. The onboard unit retrieves the message associated with the recognized optical code from the database and displays the message at a screen on board the vehicle.
  • In one embodiment, the invisible light is at least one of ultraviolet light and infrared light.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The features and advantages of the present invention will become more readily apparent from the following detailed description of the invention in which like elements are labeled similarly and in which:
  • FIG. 1 is an operational diagram showing a system and use in an environment in accordance with the invention as compared to a prior art system;
  • FIG. 2 is a diagram of a sign constructed in accordance with the invention;
  • FIG. 3 is a depiction of different optical codes created in accordance with the invention; and
  • FIG. 4 is a block diagram of the onboard components of a system for operating in accordance with the invention;
  • FIG. 5 is a comparison of descriptors for code matching at different distances;
  • FIG. 6 is an analysis of environmental factors affecting the system's performance; and
  • FIG. 7 shows example results with estimated parameters of image filtering functions.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Generally the present invention provides a solution that enables vehicular messages (i.e. vehicle-to-infrastructure (V2I), infrastructure-to-vehicle (I2V), and vehicle-to-vehicle (V2V)) communication without relying on a wireless communication infrastructure. Using the power of image recognition, the present invention provides vehicular communication by activating certain vehicular messages associated with images (optical codes) that are uniquely generated using a hash visualization technique. These optical codes serve as image identifiers and can be robustly recognized by image recognition algorithms.
  • An artificial intelligence (AI) embedded smart machine-vision on-board device is disposed on a vehicle. Uniquely generated vehicular optical codes are placed on road signs. In this solution, the on-board device is equipped with a standard-resolution full spectrum camera. In some embodiments the on board device may also include a geolocation sensor and wireless transmitter. The on board device will convey important roadway information, such as operational messages, basic safety messages, warning messages or roadway geometry messages, to the driver when travelling outside of wireless communication infrastructure range.
  • The inventive system and method replicate some of the vehicular messages that are typically sent from road side units (RSUs), without the dedicated communications infrastructure, by detecting roadway entities and recognizing the message from unique identifier signs using machine vision techniques. The system then interprets the messages/optical codes disposed on the signs and transmits them to a vehicle's on-board unit (OBU).
  • However, the system can work as a supplement to the prior art communication systems: when the vehicle comes into range of a wireless infrastructure, the vehicle's OBU will start communicating directly with the RSU, and the smart on-board vision device will stop transmitting messages. The device can nevertheless request certain system updates to its libraries when the OBU shows availability of a secure connection to a central information and communication (ICT) network location.
  • Reference is now made to FIG. 1, in which a schematic representation of the prior art environment and the operation of the invention outside of the prior art environment is provided. As discussed above, the prior art environment includes an information and communication network (ICT) 10. ICT 10 includes an integrated communications network, including a number of towers 30 having antennas 32 for transmitting and receiving signals with a vehicle 20 having a prior art system 22 thereon. Similarly, in densely populated areas a number of buildings 40 may also have antennas 42 for receiving and transmitting signals with system 22 of vehicle 20, and vehicle 20 will receive information as known in the art.
  • However, as discussed above outside of the ICT enabled environment, wireless communication is unavailable and the prior art system 22 cannot operate. Therefore, a car 200 having a system 300, constructed in accordance with the invention, communicates with road signs 100 to receive messages 102, 104, as will be described in detail below.
  • System 300 includes a full spectrum camera 302 for receiving an image of items outside of vehicle 200 which are encountered by vehicle 200 as vehicle 200 travels. System 300 is capable of identifying the presence of a sign 100 captured by camera 302 as well as the information disposed thereon. In a preferred embodiment of the invention, the sign 100 will have visible information 106 and an invisible optical code 400.
  • As will be described in detail below, the optical code 400 indicates to the operator of the vehicle 200 information related to operation of vehicle 200, such as pedestrian traffic in the area (warning 104) or road damage ahead (warning 102). It should be noted that in one embodiment of the invention, full-spectrum camera 302 captures images and system 300 determines what is in the image, such as sign 100, an actual pedestrian, or even the condition of a traffic light (red, green or yellow).
  • Furthermore, system 300, in a preferred nonlimiting embodiment, is incorporated as a complement to prior art system 22 so that system 300 takes over when out of range of an ICT 10, but operates in a manner in accordance with the prior art when vehicle 200 is in range of an ICT 10 and RSU signals are detected.
  • Generally, machine readable optical codes are helpful in conveying small amounts of information in a visual form. This information can be a URL or product identification. In the last decade, the use of 2D matrix codes has gained popularity with the use of QR codes in mobile applications. QR codes are decoded by mobile apps using computer vision methods instead of a scanner that measures the reflectivity of a light beam. However, these codes are innately designed for a scanner's working principle and therefore may work inefficiently with cameras. Using these types of matrix codes for static road signs brings about challenges stemming from ambient illumination, sign deterioration, obstruction by an object such as a tree, etc. When the machine code is not complete, the information/message can become invalid and may even lead to safety issues for connected vehicles.
  • Furthermore, the required format for a map type of message consists of relatively large character sets, which are impractical to fit in one 2D matrix code. Lastly, these codes can easily be modified by a third party if applied to traffic signs. This security problem will also lead to further vehicle safety issues. In order to prevent all of these problems, a special type of unique image identifier is used as the optical code 400 instead of 2D matrix codes.
  • The unique image identifiers are not encoded messages but rather visual representations of encrypted hash values. Unlike 2D matrix codes, the sign message cannot be altered by a third party: once the image identifier is changed, it will not match the corresponding identifier in the library. These image identifiers were designed to optimize camera recognition performance at high vehicle speed and in challenging environmental conditions such as illumination problems and heavy climatic conditions (i.e. fog, rainstorm, and snow). As seen in FIGS. 2 and 3, the image of optical code 400 includes a pattern of blocks 402 a, 402 b, 402 c, 402 n. The block sizes and the color contrast values were chosen optimally for standard resolution cameras (video resolution of 720p and 1080p) to maximize the number of matching image features. The image recognition algorithm looks for color differentiation and high visual transition contrast between adjacent blocks 402 to facilitate recognition. Additionally, the optical code 400 includes redundancy to enable recognition when part of the code is blocked, covered, or occluded.
  • Furthermore, rather than relaying an entire vehicular message on the static sign, the image identifiers are associated with messages located in a database, mapping optical codes to associated messages for use by the vehicle operator. If a modification is needed in the message (i.e. lane geometry change, completed road construction etc.), the message database is updated when the connected vehicle is in range of an RSU. This image recognition approach has the following advantages:
      • Designed specifically for machine vision systems for operational feasibility. The system uses identifier images that have distinct patterns and features making it more ideal for computer vision-based approaches.
      • Very robust to environmental conditions such as glare, low illumination, deterioration, fog and obstruction by a tree or snow.
      • No limit for message data capacity since the message is located in the device storage.
      • Secure against destruction or modification by third parties since the message is only visible to the system and if the identifier image is not recognized for a reason, the message will not be activated.
      • The relayed message can be modified easily without the replacement of the identifier image on the sign, therefore cost effective.
      • Easily detectable by low resolution cameras; the identifier image can even be very small.
      • Visually appealing designs; color spectrum, pattern and features can be customized.
  • To utilize the power of image recognition, unique optical codes 400 are generated for each vehicular message using hash visualization techniques. These uniquely generated optical codes 400 serve as image identifiers and can be robustly recognized by image recognition algorithms. Generating the optical codes 400 with more distinct features (more corners, edges, intensity changes etc.) allows even more robust recognition.
  • Visual hashing is a commonly used method to identify an internet IP address with visual representation while protecting users' privacy (Haack, 2007). Visual hashing can also be used for image recognition and retrieval by converting a hash value to an image using an encryption method (Gangone, Whelan, Janoyan, & Jha, 2008). To provide safe and secure recognition of images and transmission of vehicular messages, a custom visual hashing algorithm is developed to optimize image recognition performance and to add a visually appealing look to the vehicular message signs.
  • An application-programming interface (API) that uses the visual hashing algorithm to generate vehicular message sign identifiers with a user-friendly interface is established. This API is published under BSD License 2.0, which is part of a family of permissive free software licenses that impose minimal restrictions on the use and distribution of covered software, and will be open to all developers and agencies who intend to use it for sign identification or other similar image recognition purposes. The visual hashing algorithm uses the SHA-512 algorithm, a very secure one-way hash function, for the generation of optical codes 400. Hence, a third party will not be able to recover the original message by analyzing the identifier image. If a third party replaces the sign with a different identifier, the smart vision system will simply not recognize the message.
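  • The idea can be illustrated with a short Python sketch. This is conceptual only; the patent's generator API and exact block layout are not reproduced here. It derives a deterministic, high-contrast, mirrored block pattern from the SHA-512 digest of a message, so the sign carries a one-way identifier rather than the message itself.

        import hashlib
        import numpy as np

        def visual_hash(message: str, grid: int = 8) -> np.ndarray:
            digest = hashlib.sha512(message.encode("utf-8")).digest()
            bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
            half = bits[: grid * (grid // 2)].reshape(grid, grid // 2)
            pattern = np.hstack([half, half[:, ::-1]])   # mirrored: occlusion-robust
            return pattern * 255                         # high-contrast blocks

        code = visual_hash("TIM: work zone ahead, right lane closed")
        print(code.shape)   # (8, 8) block pattern, to be scaled onto a sign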
  • The generator API for vehicular message identifiers is designed specifically for a CAV's camera recognition performance. This improved design of hash visualization entails increased robustness against occlusion (e.g. symmetrical design) and illumination changes (e.g. higher contrast difference). By geofencing, system 300 can construct the database by region so that specific optical codes 400 not only correspond to a specific message, but, by knowing the physical location of vehicle 200 and the anticipated messages in the region, different messages can be associated with a single optical code and displayed as a function of location of vehicle 200. The customized API also incorporates a geofencing feature, and the occurrence of a mismatch is extremely unlikely for nearby signs.
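  • The geofenced lookup can be sketched as below, under assumed names: the database maps (region, code identifier) pairs to messages, and the vehicle's GPS fix selects the region, so one optical code can carry different messages in different places.

        from math import floor

        def region_of(lat: float, lon: float, cell_deg: float = 0.5) -> tuple:
            """Coarse geofence cell derived from a GPS fix (assumed scheme)."""
            return (floor(lat / cell_deg), floor(lon / cell_deg))

        MESSAGE_DB = {   # hypothetical entries
            ((57, -163), "code-7f3a"): "Detour: use County Road 12",
            ((56, -163), "code-7f3a"): "Construction ahead, reduce speed",
        }

        def lookup(code_id: str, lat: float, lon: float):
            return MESSAGE_DB.get((region_of(lat, lon), code_id))

        print(lookup("code-7f3a", 28.6, -81.2))   # "Detour: use County Road 12"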
• A secure web-based tool will be developed to provide a convenient process for highway agencies, cities/counties, and expressway authorities to deploy message signs for vehicular communication applications. Using this tool, a highway maintenance agency, for instance, can deploy its work zone message sign by uploading the TIM message content and associating it with an automatically generated vehicular message sign identifier.
• Reference is now made to FIG. 4, where a block diagram of system 300 constructed in accordance with the invention is provided. System 300 includes a full spectrum camera 302 capable of capturing images in both the visible light spectrum and the invisible light spectrum. The invisible light spectrum includes at least the ultraviolet and infrared wavelengths incapable of being seen by the human eye. Camera 302 is preferably a multi-sensor camera module for RGB, invisible light, and depth sensing. The depth-sensing camera 302 provides object distances at the standard resolution in order to replicate the necessary signal phase and timing (SPaT) information for signals in the vehicular messages without compromising the overall affordability of the system. System 300 can provide 25 fps (frames per second) processing speed with the current stereo vision system.
• A smart vision device 304 is coupled to camera 302 and receives the images captured by camera 302. Smart vision device 304 analyzes the images for the existence of objects of interest: those that need to be acted upon for proper operation of vehicle 200. These objects may be signs, traffic lights, or even pedestrians. Smart vision device 304 identifies these objects to an on-board unit (OBU) 306. When a sign is recognized, smart vision device 304 then determines whether an optical code 400 is present on the sign. If so, optical code 400 is transmitted to OBU 306.
• OBU 306 includes an optical code database containing two or more distinct optical codes 400, each mapped to a respective message. The messages are items of information necessary for, or of interest for, operation of vehicle 200. The messages may include driving directions (e.g., Merge, Detour, or the like), road conditions (Construction, Uneven Roadway, Unpaved, or the like), or warnings (pedestrian crosswalk, animal crossing, or the like). OBU 306 receives the identified optical code 400 and retrieves the corresponding message from the database. The message may then be displayed at a screen 310 within vehicle 200, or even at a heads-up display on the vehicle windshield. In an alternative embodiment the message may be output as an audio message to the operator of vehicle 200. In the current embodiment the database is part of OBU 306, but it is well within the scope of the invention to provide a standalone database in communication with OBU 306 and/or smart vision device 304.
• As discussed above, smart vision device 304 recognizes real world objects such as the sign 100 itself, pedestrians in the roadway, traffic lights, and the like. This identification is also input to OBU 306 for processing. OBU 306 also communicates with the operational control system of vehicle 200, such as the braking system now available in some cars having autonomous braking, or vehicle operation as known from current self-driving vehicles. Therefore, OBU 306 communicates with a brake control 312, by way of example, for operating the brakes autonomously. Thus, if smart vision device 304 recognizes a pedestrian in the path of vehicle 200 and transmits that image to OBU 306, and OBU 306 has not detected braking of vehicle 200, OBU 306 will send a command to brake control 312 to brake vehicle 200.
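The fallback-braking rule can be pictured with a few lines of illustrative code; the OBU and brake-control interfaces below are assumptions, since the actual vehicle integration is not specified in this description.

```python
# Illustrative sketch of the fallback-braking rule (interfaces assumed).
from dataclasses import dataclass

@dataclass
class Detection:
    label: str             # e.g. "pedestrian", "sign", "traffic_light"
    in_vehicle_path: bool  # True if the object lies in the planned path

class OnBoardUnit:
    def __init__(self, brake_cmd):
        self._brake_cmd = brake_cmd   # callable wired to brake control 312
        self.driver_braking = False   # would be updated from the CAN bus

    def handle(self, det: Detection):
        # Brake only when a pedestrian is in the path and the driver
        # has not already begun braking.
        if det.label == "pedestrian" and det.in_vehicle_path and not self.driver_braking:
            self._brake_cmd()

obu = OnBoardUnit(brake_cmd=lambda: print("brake command sent"))
obu.handle(Detection("pedestrian", True))  # -> brake command sent
```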
• System 300 may also include an interface 314 for connecting OBU 306 to other data sources and devices. In this way, OBU 306 may be updated to change the messages associated with specific optical codes 400, or to add or delete optical codes 400 over time. Additionally, in a preferred embodiment, OBU 306 keeps a log of activity for each symbol utilized and message retrieved, which may be downloaded for monitoring purposes through interface 314. System 300 includes a GPS receiver 308 for determining the location of vehicle 200 having system 300 thereon. In this way the database can be accessed as a function of location as well as of the received message.
• In a preferred embodiment, the described structure and operation of system 300 is a complement to a system capable of making use of ICT 10. Therefore, system 300 may also include a dedicated short range communication antenna 316 for communicating with an antenna 32, at a tower 30 of an RSU, by way of example, to communicate with ICT 10. To that end, GPS receiver 308 may also provide an output to OBU 306, and if OBU 306 determines that it is within an ICT 10, OBU 306 operates on information received from antenna 316, rather than on optical codes 400 received from camera 302. Additionally, antenna 316 receives signals from ICT 10, and upon receipt of such a signal system 300 operates in accordance with the prior art. Antenna 316 acts as a de facto switch.
• For recognition of vehicular messages, a fast and robust image matching algorithm is required to compare the features of the detected sign image and optical code 400 with the features of each message identifier located in the OBU 306 database (the highest matching score is returned for the best match, and the corresponding vehicular message is then found).
• Recognition of roadway entities using artificial intelligence (AI) brings noteworthy capabilities to machine vision system 300, equipped with advanced camera 302 and sensor technologies, designed for vehicular communication. The AI framework localizes the optical code 400 so that robust code matching can be performed. When smart vision device 304 crops optical code 400 out of the camera scene, code matching by OBU 306 is performed significantly faster and more robustly.
• The AI framework utilized by smart vision device 304 can also make predictions for other vehicular message types (shown in Table 1) that are not based upon recognition of optical code 400, such as identifying an animal or pedestrian on the road. Using an AI architecture, it is possible to reinforce the performance of the code matching of the optical code 400 and also to detect other roadway entities, such as an animal on the road or a cyclist suddenly entering an intersection. The AI-embedded system of smart vision device 304 analyzes the road environment and detects roadway entities (e.g. traffic signs, vehicles, pedestrians, cyclists and traffic lights) with real-time tracking information.
• TABLE 1
Applicable vehicular communication types for the on-board smart vision support system

Vehicular Message Type | Standard | Deployment Description
MapData Message (MAP) | SAE J2735 | MAP messages are the most convenient message types when vehicular message signs are used to relay messages to the smart vision support device. Since navigation apps typically have issues providing the most up-to-date road geometry in rural areas, it is important to send this information to CAVs. The road geometry messages can be easily encoded and relayed by the V2I signs. Unsignalized road intersection geometry messages, road lane reductions, and geometry changes due to road construction are some of the important applications.
Traveler Information Message (TIM) | SAE J2735 | These messages can be configured by following a similar approach as for MAP messages. Since temporary work zones are typically described in TIM, the geometry of the detour lanes can be sent to CAVs using vehicular message signs.
Personal Safety Message (PSM) | SAE J2735 | Vulnerable road users such as pedestrians, cyclists, or road workers can be effectively detected and tracked in real time by the AI system through the depth camera equipped in the smart vision device. Without the use of any vehicular message sign, the position, speed, and heading information of the vulnerable road users can be generated within an acceptable accuracy.
Road Side Alert (RSA) | SAE J2735 | RSA messages can be partially supported by the smart vision on-board device by relaying information about ice on the road or suboptimal road segment condition. However, more dynamic messages such as ambulances operating in the area or train arrival are not suitable. The vehicular message signs should be placed at locations before bridge entrances or suboptimal road segments to activate RSA messages.
Dilemma Zone Protection (SA1) | ISO TS 19091 | The smart vision device can detect a yellow traffic light and estimate whether it is challenging for the vehicle either to stop or to continue before the signal turns red, based on the current speed of the vehicle. This vehicular communication application will not require the use of vehicular message signs.
Red Light Violation Warning (SA2) | ISO TS 19091 | This warning requires SPaT and MAP messages in real time to notify the driver to avoid a potential red-light violation. Unlike MAP messages, SPaT messages cannot be generated directly from the vehicular message signs; however, the smart vision device can replicate the SPaT by estimating the time of arrival at the traffic light in real time.
Turning Assistant - Vulnerable Road User Avoidance (SA5) | ISO TS 19091 | This message is very similar to PSM messages; the application follows a similar procedure except that a vehicular message sign is used to send the intersection geometry MAP message. A warning signal is then generated if the smart vision device estimates a potential conflict.
Non-signalized Crossing Traffic Warning (SA6) | ISO TS 19091 | The vehicular message sign is used to generate the intersection geometry message by following the same procedure as for SAE J2735 MAP messages.
• The most advanced AI models follow deep learning approaches. Deep learning is a multi-layer artificial neural network (ANN) framework based on learning more complex concepts from simpler concepts. Learning occurs in deep learning models when the data acquired through experience is used to manipulate uncertainty about predictions of unavailable data (Ghahramani 2015). One of the most commonly used deep learning models is the Convolutional Neural Network (CNN). CNNs have proven to be very effective in image classification and object localization tasks (Karaaslan et al. 2019). These models have reached a wide range of real-world applicability, from self-driving vehicles to facial recognition in surveillance systems.
• For real-time detection and recognition of roadway entities, a lightweight architecture that can run fast on mobile devices was selected. The Single Shot MultiBox Detector (SSD) is a relatively new, fast pipeline developed by Liu et al. (2016). SSD uses multiple boxes in multiple layers of the convolutional network; therefore, it produces accurate region proposals without requiring many extra feature layers. SSD predicts the location and classification of an object very quickly while sacrificing very little accuracy, as opposed to other models where significantly increased speed comes only at the cost of significantly decreased detection accuracy (Sandler et al. 2018).
• The network architecture of the original SSD model is known in the art. Even though VGG-16 as a base architecture has become a widely adopted classifier, newer classifiers such as MobileNetV2 offer much faster prediction speeds at similar accuracy in a network fifteen times smaller (Sandler et al. 2018).
• In a preferred nonlimiting embodiment, SSDlite, a mobile-optimized version of the SSD model that replaces all the regular convolutions with separable convolutions (Sandler et al. 2018), is used for object localization and classification; it becomes more memory efficient when used with a MobileNetV2 classifier.
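The efficiency gain from separable convolutions can be illustrated with a short Keras sketch; the layer sizes below are illustrative only and are not the published SSDlite architecture.

```python
# Sketch of the SSDlite idea: depthwise-separable convolutions in place of
# regular ones. Layer widths here are assumptions for illustration.
import tensorflow as tf

def regular_block(x, filters):
    return tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def separable_block(x, filters):
    # Depthwise 3x3 followed by pointwise 1x1: far fewer parameters and MACs.
    return tf.keras.layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)

inputs = tf.keras.Input(shape=(320, 320, 3))
x = separable_block(inputs, 64)
x = separable_block(x, 128)
model = tf.keras.Model(inputs, x)
model.summary()  # compare parameter counts against a regular_block version
```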
  • In addition to a road object detection model, a relatively light-weight classifier for traffic light color classification is also added to the AI framework in order to enable vehicular communication applications at signalized intersections (e.g. red-light violation warning). This classifier model was trained on an open source traffic light color classification dataset (Kato et al., 2018).
• In deep learning models, the availability of training data is the most critical aspect of developing a reliable system with good recognition accuracy. Therefore, an extensive training dataset was created by combining publicly available datasets from various sources, including academic institutions, transportation agencies, and industrial partners. Some of the datasets are not annotated, whereas others are annotated in varying formats. Therefore, a unified annotation style was chosen, and all datasets were converted to Pascal VOC 2012 format. Basic data augmentation was also applied to the datasets to further increase training accuracy. The data augmentation included rotation, scaling, translation, and Gaussian noise (a sketch of these operations follows Table 2). The combined training datasets are summarized below.
• TABLE 2
Summary of the training datasets.

Dataset Name | Number of Classes | Dataset Size | Annotation | Reference
Microsoft COCO: Common Objects in Contexts | 171 (5 classes are transferred) | 123,287 images (25,000 are in transferred classes) | Object Detection | Lin et al. (2014)
Berkeley Deep Drive (BDD100k) | 40 (10 classes are transferred) | 100,000 images | Object Detection | Yu et al. (2018)
Road Damage Dataset | 8 (2 classes are transferred) | 9,053 images (>2,000 are in transferred classes) | Object Detection | Maeda, Sekimoto, Seto, Kashiyama, & Omata (2018)
Animals with Attributes | 20 (all merged in 1 class) | 37,322 images | Segmentation | Xian, Lampert, Schiele, & Akata (2017)
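As a hedged illustration of the augmentation step listed above, the sketch below applies rotation, scaling, translation, and Gaussian noise with OpenCV and NumPy; the parameter ranges are assumptions, since the document does not state them, and bounding-box annotations would need the same affine transform applied.

```python
# Illustrative augmentation pipeline (parameter ranges assumed).
import cv2
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    h, w = img.shape[:2]
    angle = rng.uniform(-10, 10)            # rotation, degrees
    scale = rng.uniform(0.9, 1.1)           # scaling factor
    tx, ty = rng.uniform(-0.05, 0.05, 2) * (w, h)  # translation, pixels
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    M[:, 2] += (tx, ty)                     # fold translation into the affine map
    out = cv2.warpAffine(img, M, (w, h), borderMode=cv2.BORDER_REFLECT)
    noise = rng.normal(0, 8, out.shape)     # Gaussian noise, sigma in gray levels
    return np.clip(out.astype(np.float32) + noise, 0, 255).astype(np.uint8)
```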
• The supervised training of the CNN models is performed on the Newton visualization supercomputer cluster. The supercomputer includes ten compute nodes with thirty-two cores and 192 GB of memory in each node; two Nvidia V100 GPUs are available in each compute node, totaling three hundred twenty cores and twenty GPUs. A single training run takes about six hours on the GPU cluster for a total of two hundred thousand steps. The TensorFlow v2.11 machine learning framework is used for model creation and for performing the model trainings (Abadi et al. 2016).
• As known in the art, there are major challenges in effectively training a deep learning model. One major challenge is overfitting. Overfitting usually occurs when the dataset is too small compared to the size of the deep learning architecture. In that case, either the number of convolutional layers should be reduced, or dropout layers should be added at the end of each convolutional block (Karaaslan et al. 2018). Another cause of overfitting is the wrong selection of training hyper-parameters. The learning rate is often chosen to be too small in order to reduce training loss; however, validation loss then becomes much larger, indicating that the model is fitting over a branch of the loss function and not converging around the local minima. Increasing the learning rate, on the other hand, yields a non-converging loss function. To overcome these challenges, learning rate scheduling and an early stopping method are used during training (a sketch of these callbacks follows Table 3). The optimal hyper-parameters found for the CNN models are shown in the table below.
• TABLE 3
Optimal hyper-parameters for CNN models

Hyper-Parameter | SSD MobileNet V1 and V2 | SSDlite MobileNet V2 | SSD ResNet50 FPN | FasterRCNN Inception V2
Initial Learning Rate | 0.004 | 0.004 | 0.04 | 0.002
Model Optimizer | RMS Prop Optimizer | RMS Prop Optimizer | Momentum Optimizer | Momentum Optimizer
Batch Size | 24 | 24 | 32 | 12
Training Steps | 200,000 | 200,000 | 50,000 | 200,000
Box Predictor | Kernel Size = 1 | Kernel Size = 4 | Kernel Size = 3 | Kernel Size = 2
Activation | RELU | RELU | RELU | SIGMOID
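Learning rate scheduling and early stopping of the kind described above can be expressed with standard Keras callbacks; the decay factor and patience values below are assumptions, not the values used in the reported trainings.

```python
# Sketch of the anti-overfitting callbacks (decay and patience assumed).
import tensorflow as tf

def schedule(epoch, lr):
    # Start from the Table 3 initial rate and decay by 5% each epoch.
    return lr if epoch == 0 else lr * 0.95

callbacks = [
    tf.keras.callbacks.LearningRateScheduler(schedule),
    tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=5, restore_best_weights=True),
]
# model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=callbacks)
```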
• Local features of images (i.e., descriptors of corners, edges and abrupt intensity changes) are widely utilized in computer vision applications such as image classification, image retrieval, robust matching, and object localization (Li and Allinson 2008). For image matching operations, early approaches to feature detection failed to deal with illumination and scale changes while also being sensitive to noise. A breakthrough came when Scale Invariant Feature Transform (SIFT) descriptors were introduced (Lowe 2004). Later, many other feature detectors/descriptors similar to SIFT were developed, including SURF (Bay et al. 2008), BRISK (Leutenegger et al. 2011), ORB (Rublee et al. 2011) and AKAZE (Min et al. 2017). When coupled with decision trees and homography transformation, these SIFT-like descriptors can be effectively used in image matching operations (Muja and Lowe 2012).
• For recognition of image signs 100, a fast and robust image matching algorithm—the Brute Force algorithm—is used to compare the features of the detected optical code 400 with the features of each code image located in the database of system 300, and more preferably of OBU 306. The highest matching score is then returned for the best code match and the corresponding V2I message is found. Robust matching is very critical, since a mismatch will lead to retrieval of an incorrect V2I message from the database. The matching speed is also very important, since the code matching system on the connected vehicle has very little time to operate, especially at high driving speeds. Therefore, various descriptors using different matching algorithms were tested on an exemplary optical code database and their performances are compared as shown in Table 4.
• TABLE 4
Performance test on different image matching algorithms for recognition of image signs

Feature Descriptor(1) | Matching Algorithm(2) | Method(3) | Speed(4) | Feature Density(5) | Average Precision(6)
SIFT | Brute Force | Euclidean | 0.284 s | 0.405 | 96.3%
SURF | Brute Force | Euclidean | 0.242 s | 0.872 | 84.9%
ORB | Brute Force | Hamming | 0.121 s | 0.306 | 84.0%
SIFT-FLANN | Decision Trees | Euclidean | 0.657 s | 0.404 | 97.5%
SURF-FLANN | Decision Trees | Euclidean | 0.553 s | 0.896 | 85.3%
BRISK-FLANN | LSH | Hamming | 0.345 s | 0.082 | 66.9%
ORB-FLANN | LSH | Hamming | 0.272 s | 0.427 | 84.5%
AKAZE-FLANN | LSH | Hamming | 0.287 s | 0.180 | 73.1%

(1) Descriptors with FLANN use the Fast Library for Approximate Nearest Neighbors.
(2) When the number of detected features is small, the Brute Force method is faster than Decision Trees and LSH due to the overhead of the FLANN library; however, the FLANN algorithms operate much faster than Brute Force when the image sizes are large.
(3) Euclidean distance is only applicable to SIFT/SURF; Hamming distance is the alternative for the other descriptors.
(4) Matching speed is the average time to match a sign image against a database of 300 code images.
(5) Feature density is the number of good matching features normalized by the image resolution.
(6) Average precision is calculated based on 126 detected sign images being matched against a database of 300 code images.
• The optimal performance was attained from SIFT descriptors using the Brute Force algorithm, considering the overall speed and precision performance. Typically, FLANN is known to increase image matching speed by finding approximate nearest neighbors instead of trying every possibility as in the Brute Force method (Muja and Lowe 2012). However, the performance tests showed that it actually slows the overall matching speed when many small images are matched iteratively, mainly due to the burden of loading its library at every iteration.
• Using ORB descriptors with the Brute Force method showed a significant increase in speed, but sacrificed some accuracy, especially when the cropped image of the detected optical code 400 was very small. SURF descriptors also performed reasonably well; however, their scale invariance was not as good as SIFT's. The remaining descriptors did not show comparable performance.
  • A distance test is conducted to measure the scale invariability of the descriptors. The matching results of the SIFT, SURF, ORB, AKAZE and BRISK descriptors are compared at varying distances of 5 ft, 10 ft and 15 ft. In the comparison, good matching features above default threshold values were drawn in green color on the compared images. The homography transformation is also highlighted on the detected sign images.
• As seen in FIG. 5, ORB performs the fastest; however, SURF detects many more features in the sign images. As shown in Table 5, neither descriptor did as well as SIFT with very small images. Therefore, a dynamic descriptor selection is implemented in the code recognition system: ORB is chosen as the main descriptor, and when the number of compared features is under a certain threshold, the system switches to SURF. If ORB does not produce a match at all, the system switches to SIFT. In this way, speed is optimized and the robustness of the system is maximized. The system creates a vehicular message signal when the sign image is recognized and matched with a code image in the database. A minimal sketch of this fallback logic follows.
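The sketch below uses OpenCV; the thresholds and the use of brute-force matching throughout are assumptions, and SURF additionally requires an opencv-contrib build.

```python
# Sketch of the ORB -> SURF -> SIFT fallback (thresholds assumed).
import cv2

def good_matches(detector, norm, img_a, img_b, ratio=0.75):
    """Count ratio-test matches between two grayscale images."""
    _, des1 = detector.detectAndCompute(img_a, None)
    _, des2 = detector.detectAndCompute(img_b, None)
    if des1 is None or des2 is None or len(des1) < 2 or len(des2) < 2:
        return 0
    pairs = cv2.BFMatcher(norm).knnMatch(des1, des2, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)

def recognize(code_img, db_img, min_orb_matches=20):
    """ORB first for speed; fall back to SURF, then SIFT, for robustness."""
    n = good_matches(cv2.ORB_create(), cv2.NORM_HAMMING, code_img, db_img)
    if n >= min_orb_matches:
        return "ORB", n
    try:
        surf = cv2.xfeatures2d.SURF_create()  # needs an opencv-contrib build
        n = good_matches(surf, cv2.NORM_L2, code_img, db_img)
        if n > 0:
            return "SURF", n
    except (AttributeError, cv2.error):
        pass
    return "SIFT", good_matches(cv2.SIFT_create(), cv2.NORM_L2, code_img, db_img)
```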
• During the real-time detection of roadway objects, the traffic sign detections, including optical code 400, are extracted by smart vision device 304 and analyzed by OBU 306 using the image recognition algorithm in order to match the detection with a V2I message identifier from the database. In other words, the bounding regions of the traffic sign detections are utilized as the initial regions of interest for the recognition of the V2I identifiers. After the optical code 400 is recognized, the optical code 400 is transmitted to OBU 306.
• To tackle illumination-related problems more effectively, real-time image enhancement is applied to the device's camera feed, since the vehicular message sign recognition system must handle abrupt brightness changes. Under backlighting (e.g., driving toward the sun), poor brightness in the detected optical code 400 hides useful image features (i.e. edges and corners) and potentially causes misdetections. It is indispensable for the system to function robustly under a variety of conditions, including different climates, ambient brightness, sign occlusion, and different vehicle speeds. Therefore, real-life tests are applied.
• The purpose of these tests is to observe how different environmental factors affect the detection and recognition performance of the machine vision system. These observed factors are analyzed carefully to develop an adaptive remedy for each condition. First, the problem associated with each environmental factor is classified as an image processing problem such as haze, noise, low/high gamma, or blurriness. Then, as seen in FIG. 7, the problems are treated with effective combinations of image filtering operations: a dehazing filter, denoising, gamma correction, and contrast enhancement. FIG. 6 summarizes the effect of the environmental factors and the corresponding remedies.
  • In real world conditions, different environmental factors can simultaneously occur at varied levels of intensity. Therefore, effective combinations of all three filtering operations are required in order to develop an adaptive remedy strategy. Each filtering operation requires input parameters to calibrate the filtering level. These input parameters are defined as below:

Gamma Correction → f1{gamma}
Contrast Enhancement → f2{clip | tile}
Denoising → f3{density | window}
• Due to the heavy computational burden of the dehazing filter, it is activated only in the presence of dense fog. However, the other filtering functions are adaptively applied in any environmental condition by estimating the optimal input parameters. The parameter estimation is conducted using a multivariate nonlinear regression analysis that predicts parameter equations based on the image variables of the detected sign. These image variables are brightness (b), contrast (c), and resolution (r). The regression equations were predicted as below:
gamma = -0.008(b) + 2.12    (1)
tile = -0.025(r) + 1.15    (2)
window = 1 if 0 < r < 70; 3 if 70 ≤ r < 130; 5 if r ≥ 130    (3)
clip = 0.2379(c)^2 - 0.000283(b)^2 + 0.00554(b)(r) + 0.04938(b) - 5.32(c) + 22.63    (4)
density = 5 if clip < 5; 10 if clip ≥ 5    (5)
• Predicting multivariate non-linear equations with multiple dependent variables is a complex statistical problem. Therefore, the regression functions that predict the gamma and tile parameters are assumed linear. Furthermore, the window and density parameters are assumed categorical variables. The required sample data were collected from field tests conducted under varied climate and ambient light conditions. FIG. 7 shows example results from the regression functions.
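The sketch below wires equations (1)-(5) to OpenCV filtering primitives; the equations come from the text, but the mapping of b, c, and r to concrete image statistics is an assumption, since the document does not define their exact units.

```python
# Sketch of the adaptive enhancement chain (units of b, c, r assumed).
import cv2
import numpy as np

def enhance(img_bgr):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    b = float(gray.mean())                 # brightness
    c = float(gray.std()) / 255.0          # contrast (normalized; assumed)
    r = min(img_bgr.shape[:2]) / 10.0      # resolution proxy (scaling assumed)

    gamma = max(0.1, -0.008 * b + 2.12)                            # Eq. (1)
    tile = max(1, round(-0.025 * r + 1.15))                        # Eq. (2)
    window = 1 if r < 70 else (3 if r < 130 else 5)                # Eq. (3)
    clip = (0.2379 * c**2 - 0.000283 * b**2 + 0.00554 * b * r
            + 0.04938 * b - 5.32 * c + 22.63)                      # Eq. (4)
    density = 5 if clip < 5 else 10                                # Eq. (5)

    # Gamma correction via lookup table.
    lut = np.clip(((np.arange(256) / 255.0) ** (1.0 / gamma)) * 255,
                  0, 255).astype(np.uint8)
    out = cv2.LUT(img_bgr, lut)
    # Contrast enhancement: CLAHE on the luminance channel.
    lab = cv2.cvtColor(out, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=max(1.0, clip), tileGridSize=(tile, tile))
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    out = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    # Denoising with the estimated strength and (odd) template window.
    return cv2.fastNlMeansDenoisingColored(out, None, density, density, window, 21)
```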
• The feasibility of the present invention at different vehicle speeds is ensured through the investigation of minimum size requirements for vehicular message signs. The suggested dimensions of the message signs 100 for speed limits of 30 mph, 40 mph, and 50 mph were determined. It must be noted that these requirements are for the default camera settings, i.e. real-time video at 1080p resolution and a 25 fps rate.
• The present invention can be integrated in ROS to obtain the required stability, flexibility, and computational and power efficiency. ROS is a flexible robotics framework that can perform different parallel operations without compromising the system's efficiency. The ROS environment is a preferred platform especially in self-driving vehicle technologies (e.g. BMW, Bosch, Google, Baidu, Toyota, GE, Tesla, Ford, Uber and Volvo). Thus, ROS will also allow compatibility with a wide range of automated driving systems, in addition to ensuring reliable operation of the system on an actively supported platform.
• The present invention collects important traffic safety data from the roadway, including deterioration of road infrastructure (sign visibility, pavement damage, etc.) and conflict point data (near crashes, dilemma zones, traffic congestion, etc.). The present invention can determine potentially useful information types (e.g. visibility, deterioration, obstruction) that can be obtained from the detected optical codes 400 of road signs 100, and also the conflict point data that is generated from the records of all vehicular message warnings in a transportation network. This conflict point data will be used to determine road-safety profiles in rural environments and will help transportation agencies prioritize the deployment of connected infrastructure in these locations.
• The operating system ensures fast, reliable, and safe operation of the present invention by building a platform environment that facilitates an application interface, a testing/troubleshooting interface, and a data collection/management interface. The platform is built upon the ROS environment for easy integration with automated vehicle platforms. This interface allows the system developer to perform testing and troubleshooting by visualizing all input and output data (e.g. detections of road objects, encoded messages, feature matching of the V2I signs, and memory/GPU utilization control).
• The unavailability of roadway geometry information makes automated driving extremely difficult for CAVs at roadway intersections. To generate a MAP message in compliance with standards, SAE J2735 March 2016 by way of example, a practical online tool, the ISD Message Creator, is used (USDOT, 2018). This tool allows users to define the lanes and approaches of an intersection using a graphical interface.
• Once the intersection is designed, the user can encode an ISD, MAP, or SPaT message as an ASN.1 UPER hex string. After the intersection geometry is defined with the necessary lane attributes, the surveying information for the verified point is entered into the MAP message. The generated MAP message is encoded in compact UPER hex format. The encoded message structure is also validated using a Connected Vehicle Message Validator tool (USDOT 2017). This validation ensures that the message can be recognized by an OBU 306 that complies with the SAE J2735 standard. After completion of the MAP message data, the next step is to generate an image identifier associated with this message in the device database. The vehicular message sign 100 is created to incorporate the features described above and is placed at the predefined location near the intersection.
• The present invention estimates the real-time location of roadway objects and generates dynamic message data for vehicular communication applications such as personal safety messages (PSM), red-light violation warnings, and vulnerable road user avoidance. Therefore, a stereo vision system was integrated into the on-board system 300 for accurate depth estimation. Object distances up to 15 ft are estimated within 1-inch accuracy, up to 50 ft within 5 inches, and up to 100 ft within 1 ft.
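Depth from a stereo pair follows the standard relation Z = f·B/d (focal length times baseline over disparity). The sketch below uses OpenCV's semi-global block matcher with placeholder calibration values; the focal length and baseline are assumptions, not the device calibration.

```python
# Sketch of stereo depth for a detected object (calibration values assumed).
import cv2
import numpy as np

FOCAL_PX = 700.0    # focal length in pixels (assumed calibration)
BASELINE_M = 0.12   # stereo baseline in meters (assumed)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)

def object_distance_m(left_gray, right_gray, bbox):
    """Median depth inside a detection bounding box (x, y, w, h), in meters."""
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    x, y, w, h = bbox
    valid = disp[y:y + h, x:x + w]
    valid = valid[valid > 0]            # ignore invalid disparities
    if valid.size == 0:
        return None
    return FOCAL_PX * BASELINE_M / float(np.median(valid))  # Z = f * B / d
```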
  • When the V2I identifier (optical code 400) is recognized, the associated MAP message is decoded and displayed on the human machine interface (HMI) including the lane geometries, signage and pedestrian crossings.
• For red-light violation warning in rural-area vehicular communication applications of the present invention, a relatively small, custom-designed architecture is developed using the Keras wrapper library with a TensorFlow backend. The classifier architecture has four convolutional layers and over six million trainable parameters. The model is trained on an open source traffic light color classification dataset for autonomous vehicles (Kato et al., 2018).
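A plausible Keras rendering of such a four-convolutional-layer color classifier is sketched below; the filter counts, input size, and dense head are assumptions chosen only to land near the stated parameter count, not the actual architecture.

```python
# Illustrative four-conv-layer traffic light color classifier (sizes assumed).
import tensorflow as tf

def build_light_classifier(num_classes=3):  # red / yellow / green
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu"),
        tf.keras.layers.Conv2D(128, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, activation="relu"),  # dominates the ~6.7M parameters
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_light_classifier()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```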
  • It will thus be seen that the objects set forth above, among those made apparent from the preceding description, are efficiently attained and, since certain changes may be made in carrying out the above method and in the construction set forth without departing from the spirit and scope of the invention, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
  • It is also to be understood that the following claims are intended to cover all of the generic and specific features of the invention herein described, and all statements of the scope of the invention which, as a matter of language, might be said to fall there between.

Claims (38)

1. (canceled)
2. (canceled)
3. (canceled)
4. (canceled)
5. (canceled)
6. (canceled)
7. (canceled)
8. (canceled)
9. (canceled)
10. (canceled)
11. (canceled)
12. (canceled)
13. (canceled)
14. (canceled)
15. (canceled)
16. (canceled)
17. (canceled)
18. (canceled)
19. (canceled)
20. An infrastructure to vehicle communications system comprising:
a full spectrum camera disposed on a vehicle;
a sign, disposed away from the vehicle, including a message thereon; the sign reflecting visible light, and at least a portion of the message being a sign based optical code; the sign based optical code not reflecting visible light and reflecting invisible light; the full spectrum camera capturing an image of the sign including the sign based optical code as a captured image;
a smart vision device receiving the image and determining the existence of the sign from the reflected visible light and the sign based optical code thereon from the reflected invisible light;
an on board unit having a data base, the data base storing one or more optical codes and one or more messages associated with a respective optical code, the on board unit communicating with the data base, comparing the captured image of the sign based optical code with the one or more optical codes stored in the data base, retrieving a message associated with the optical code as a function of an optical code stored in the data base matching the captured image of the sign based optical code, and displaying the retrieved message at a display in the vehicle.
21. The infrastructure to vehicle communications system of claim 20, further comprising an antenna for communication with an information and communication network; the information and communication network transmitting the message to be received at the antenna when the vehicle is in communication range of the information and communication network, the on board unit displaying the message received from the information and communication network at the screen.
22. The infrastructure to vehicle system of claim 20, further comprising a GPS receiver on board the vehicle for determining the location of the vehicle, the GPS receiver being in communication with the on board unit and sending a location signal indicating that the vehicle is in communication range of the information and communication network, and the on board unit stopping the processing of optical codes in response to the location signal.
23. The infrastructure to vehicle system of claim 20, wherein the invisible light is in the infrared spectrum.
24. The infrastructure to vehicle system of claim 20, wherein the retrieved message is one of map directions, a road condition, or a warning.
25. The infrastructure to vehicle system of claim 20, further comprising a vehicle control, the on board unit receiving the image and providing an output in response thereto to operate the vehicle control.
26. The infrastructure to vehicle system of claim 25, wherein the vehicle control is a brake control.
27. An infrastructure to vehicle communications system comprising:
a full spectrum camera disposed on a vehicle, the full spectrum camera capturing reflected visible light and reflected invisible light reflected from an object outside the vehicle to create an image of the object;
a smart vision device receiving the image and determining the existence of the sign from the reflected visible light and an optical code thereon from the reflected invisible light as a captured image; and
an on board unit having a data base, the data base storing one or more optical codes and one or more messages associated with a respective optical code, the on board unit communicating with the data base, comparing the captured image of the optical code with the one or more optical codes stored in the data base, retrieving a message associated with the optical code as a function of an optical code stored in the data base matching the captured image, and displaying the retrieved message at a display in the vehicle.
28. The infrastructure to vehicle communications system of claim 27, further comprising an antenna for communication with an information and communication network; the information and communication network transmitting the message to be received at the antenna when the vehicle is in communication range of the information and communication network, the on board unit displaying the message received from the information and communication network at the screen.
29. The infrastructure to vehicle system of claim 28, further comprising a GPS receiver on board the vehicle for determining the location of the vehicle, the GPS receiver being in communication with the on board unit and sending a location signal indicating that the vehicle is in communication range of the information and communication network, and the on board unit stopping the processing of optical codes in response to the location signal.
30. The infrastructure to vehicle system of claim 27, wherein the invisible light is in the infrared spectrum.
31. The infrastructure to vehicle system of claim 27, wherein the retrieved message is one of map directions, a road condition, or a warning.
32. The infrastructure to vehicle system of claim 27, further comprising a vehicle control, the on board unit receiving the image and providing an output in response thereto to operate the vehicle control.
33. The infrastructure to vehicle system of claim 32, wherein the vehicle control is a brake control.
34. A method for providing an onboard message to a vehicle from traffic signs having invisible messages thereon comprising the steps of:
utilizing a full image camera to capture a reflected visible light and a reflected invisible light reflected from an object outside the vehicle to create an image of the object;
determining the existence of a sign from the reflected visible light;
determining the existence of an optical code thereon from the reflected invisible light when the sign is considered to exist in the image;
providing a data base, the data base storing one or more optical codes and one or more messages associated with a respective optical code;
comparing the captured image of the sign based optical code with the one or more optical codes stored in the database;
retrieving a message associated with the optical code as a function of an optical code stored in the database matching the captured image of the sign based optical code; and
displaying the message at a display in the vehicle.
35. The method for providing an onboard message to a vehicle of claim 34, wherein the invisible light is in the infrared spectrum.
36. The method for providing an onboard message to a vehicle of claim 34, wherein the retrieved message is one of map directions, a road condition, or a warning.
37. The method for providing an onboard message to a vehicle of claim 34, further comprising a vehicle control in response to the retrieved message.
38. The method for providing an onboard message to a vehicle of claim 37, wherein the vehicle control is braking the vehicle.
US17/830,428 2019-05-23 2022-06-02 On-board machine vision device for activating vehicular messages from traffic signs Abandoned US20220303738A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/830,428 US20220303738A1 (en) 2019-05-23 2022-06-02 On-board machine vision device for activating vehicular messages from traffic signs

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201962852252P 2019-05-23 2019-05-23
US16/882,881 US11405761B2 (en) 2019-05-23 2020-05-26 On-board machine vision device for activating vehicular messages from traffic signs
US17/830,428 US20220303738A1 (en) 2019-05-23 2022-06-02 On-board machine vision device for activating vehicular messages from traffic signs

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/882,881 Continuation US11405761B2 (en) 2019-05-23 2020-05-26 On-board machine vision device for activating vehicular messages from traffic signs

Publications (1)

Publication Number Publication Date
US20220303738A1 true US20220303738A1 (en) 2022-09-22

Family

ID=73456553

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/882,881 Active US11405761B2 (en) 2019-05-23 2020-05-26 On-board machine vision device for activating vehicular messages from traffic signs
US17/830,428 Abandoned US20220303738A1 (en) 2019-05-23 2022-06-02 On-board machine vision device for activating vehicular messages from traffic signs

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/882,881 Active US11405761B2 (en) 2019-05-23 2020-05-26 On-board machine vision device for activating vehicular messages from traffic signs

Country Status (1)

Country Link
US (2) US11405761B2 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2602630A (en) * 2021-01-05 2022-07-13 Nissan Motor Mfg Uk Limited Traffic light detection
JP2024068553A (en) * 2022-11-08 2024-05-20 キヤノン株式会社 Data creating apparatus, control method, and program
JP2024175351A (en) * 2023-06-06 2024-12-18 トヨタ自動車株式会社 vehicle
CN116824524B (en) * 2023-07-17 2024-12-24 池州市谦跃信息技术有限公司 Big data flow supervision system and method based on machine vision
CN117475207B (en) * 2023-10-27 2024-10-15 江苏星慎科技集团有限公司 3D-based bionic visual target detection and identification method


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170032402A1 (en) * 2014-04-14 2017-02-02 Sirus XM Radio Inc. Systems, methods and applications for using and enhancing vehicle to vehicle communications, including synergies and interoperation with satellite radio
US20200042849A1 (en) * 2016-09-28 2020-02-06 3M Innovative Properties Company Multi-dimensional optical code with static data and dynamic lookup data optical element sets
US20210264186A1 (en) * 2018-10-04 2021-08-26 3M Innovative Properties Company Hyperspectral optical patterns on retroreflective articles

Also Published As

Publication number Publication date
US11405761B2 (en) 2022-08-02
US20200374674A1 (en) 2020-11-26

Similar Documents

Publication Publication Date Title
US11405761B2 (en) On-board machine vision device for activating vehicular messages from traffic signs
JP7460044B2 (en) Autonomous vehicle, and apparatus, program, and computer-readable medium relating to an autonomous vehicle system
Nidamanuri et al. A progressive review: Emerging technologies for ADAS driven solutions
CN118470974B (en) Driving safety early warning system and method based on Internet of vehicles
US10223910B2 (en) Method and apparatus for collecting traffic information from big data of outside image of vehicle
CN108388834A (en) The object detection mapped using Recognition with Recurrent Neural Network and cascade nature
CN112949578B (en) Vehicle lamp state identification method, device, equipment and storage medium
EP4341913A2 (en) System for detection and management of uncertainty in perception systems, for new object detection and for situation anticipation
Huu et al. Proposing Lane and Obstacle Detection Algorithm Using YOLO to Control Self‐Driving Cars on Advanced Networks
Karpagalakshmi et al. Protecting Vulnerable Road users using IoT-CNN for Safety Measures
US12211379B2 (en) Transportation environment data service
CN118963353A (en) Road driving environment intelligent perception method, system, terminal and storage medium
Thevendran et al. Deep Learning & Computer Vision for IoT based Intelligent Driver Assistant System
Kahlon et al. An intelligent framework to detect and generate alert while cattle lying on road in dangerous states using surveillance videos
US20220004777A1 (en) Information processing apparatus, information processing system, information processing method, and program
Abaddi Q-Omni: a quantum computing and GPT-4o solution for Camel-Vehicle collisions
Ibrahim et al. AI-powered parking management systems: a review of applications and challenges
Byzkrovnyi et al. Comparison of object detection algorithms for the task of person detection on jetson tx2 nx platform
US20250206354A1 (en) Safe and scalable model for culturally sensitive driving by automated vehicles using a probabilistic architecture
Venkatesh et al. An intelligent traffic management system based on the Internet of Things for detecting rule violations
Thakare et al. Advanced Traffic Clearance System for Emergency Vehicles
Iparraguirre-Gil Computer vision and deep learning based road monitoring towards a connected, cooperative and automated mobility
Aeri et al. Unveiling Effectiveness: Advanced Vehicle Tracking and Detection Systems in Action
US20240391482A1 (en) Augmented reality projection of predicted high-risk movements
US12217511B1 (en) Real time management of detected issues

Legal Events

Date Code Title Description
AS Assignment

Owner name: CONNECTED WISE LLC, FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KARAASLAN, ENES;REEL/FRAME:060717/0050

Effective date: 20220726

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION