WO2016103258A1 - System and method for preventing accidents - Google Patents

System and method for preventing accidents

Info

Publication number
WO2016103258A1
WO2016103258A1 (PCT application PCT/IL2015/051240)
Authority
WO
WIPO (PCT)
Prior art keywords
sub
vehicle
road
database
signal
Prior art date
Application number
PCT/IL2015/051240
Other languages
English (en)
Inventor
Timor RAIMAN
Original Assignee
Raiman Timor
Priority date
Filing date
Publication date
Application filed by Raiman Timor filed Critical Raiman Timor
Publication of WO2016103258A1

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/09623Systems involving the acquisition of information from passive traffic signs by means mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs

Definitions

  • the present invention relates generally to systems and methods for preventing accidents, to systems and methods for vehicle navigation, to traffic control systems and methods and systems and methods for warning drivers to prevent road accidents.
  • One type of driver error is where the driver fails to react to a traffic control signal such as a stop sign, a give way sign or a traffic light. This may be due to misinterpretation of the traffic control signal or to simply ignoring it, such as by driving through a red light, driving beyond the speed limit, or driving against the direction of traffic.
  • Another type of driver error is where the driver neglects to take account of road conditions or miscalculates the braking distance to an intersection.
  • A red light may be a traffic signal, the rear light of a vehicle or a braking light. Even where a red light within the field of view of an on-board camera is indeed a traffic signal, and is correctly identified as being a stop sign, without much more information it is not clear whether this is an instruction to the vehicle with the on-board camera (host vehicle) or whether it relates to vehicles in other traffic lanes. In some instances such lights are intended for traffic coming into a junction from a different direction and the host vehicle has right of way and should not be stopping.
  • the Global Positioning System is a satellite navigation system that provides location information anywhere on or near the Earth's surface. It comprises a number of satellites in orbit above Earth. Each satellite continually transmits messages that include the time the message was transmitted, and the satellite position. On the ground the GPS unit receives these messages and, by comparing the time at which the message was received (on its internal clock) against the time which the message was transmitted, it works out how far away it is from each satellite.
  • a good aerial is required in order to detect the message signals coming from the GPS satellites.
  • the strength of a GPS signal is often expressed in decibels referenced to one milliwatt (dBm).
  • dBm decibels referenced to one milliwatt
  • the signal is typically as weak as -125 dBm to -130 dBm, even in clear open sky.
  • the signal can drop to as low as -150dBm (the larger the negative value, the weaker the signal).
  • some GPS devices struggle to acquire a signal (but may be able to continue tracking if a signal was first acquired in the open air).
  • a good high sensitivity GPS receiver can acquire signals down to -155 dBm and tracking can be continued down to levels approaching -165 dBm.
  • To calculate the distance between the GPS receiver and each satellite, the receiver first calculates the time that a signal has taken to arrive. It does this by taking the difference between the time at which the signal was transmitted, which is included in the signal message, and the time at which the signal was received, using an internal clock. As the signals travel at the speed of light, even a 0.001 second error equates to a 300 km inaccuracy in the calculated distance.
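  • As a rough numerical check of the figures above (this sketch is illustrative only and not part of the disclosure; the travel time used is invented), the distance implied by a signal's travel time and the effect of a one-millisecond clock error can be computed as follows:

      # Illustrative sketch: pseudorange from signal travel time.
      SPEED_OF_LIGHT_KM_S = 299_792.458  # kilometres per second

      def pseudorange_km(t_transmitted_s: float, t_received_s: float) -> float:
          """Distance implied by the time a GPS message took to arrive."""
          return (t_received_s - t_transmitted_s) * SPEED_OF_LIGHT_KM_S

      true_range = pseudorange_km(0.0, 0.070)            # ~70 ms travel time -> ~21,000 km
      biased_range = pseudorange_km(0.0, 0.070 + 0.001)  # 1 ms receiver clock error
      print(round(true_range), round(biased_range - true_range))  # the error is ~300 km
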
  • Embodiments of the present invention identify the precise location of the vehicle, locate traffic lights and identify their status.
  • the traffic light itself does not require new hardware.
  • different vehicles communicate so that knowledge regarding the traffic lights is relayed to a vehicle of interest from other vehicles far earlier than the vehicle of interest is able to image the traffic light directly, such as when it is obscured for example.
  • a first aspect is directed to a system for improved road safety, the system comprising a sub-system for detecting a traffic signal directed to a vehicle, the sub-system comprising: a general positioning system, an output, and an outward looking camera, all mounted within the vehicle and in data communication with a common processor having image analysis functionality that is coupled to a database;
  • the general positioning system for providing a general position of the vehicle
  • the database comprising data regarding the appearance and relative positions of a list of road-signs and their exact locations, such that a road sign can be identified as being one of its occurrences within the area obtained by compounding the uncertainty of the general positioning system with the analyzed portion of the imaged area within that general position; and the outward looking camera for capturing an image of a field of view
  • the processor with image analysis functionality for identifying objects within the field of view and for locating said objects by comparison with information in the database, thereby identifying the exact position of the vehicle; and the output for outputting a driver warning.
  • the system further comprises at least one installation of a plurality of similar road-signs at distances exceeding an uncertainty span of the general positioning system compounded with an analyzed sub-region of the imaged area of the outward looking camera; thereby, enabling the sub-system to un-ambiguously identify an imaged road-sign from the plurality of similar road signs.
  • the general positioning system may be selected from a global positioning system using geostationary satellites and a positioning system using land based antennae.
  • the camera identifies at least one stationary road sign and the output includes data regarding the at least one road sign.
  • the at least one road sign is a traffic light.
  • a traffic light is a red light, an amber light or a green light and said output data includes color of said traffic light.
  • the output is an alert to a driver of the vehicle.
  • the alert comprises at least one of a haptic signal, an audible signal and a visual signal to the driver.
  • the output directly controls the vehicle, bypassing a driver.
  • a second aspect is directed to a system comprising a plurality of the subsystems described above, wherein each sub-system comprises a receiver and the output of each subsystem is a data signal detectable by receivers of other sub-systems.
  • the output data includes information about GPS attenuations.
  • the output data includes information about traffic signals.
  • An aspect of the invention is directed to a system comprising a plurality of the sub-systems as above, wherein a common computer processor and database are provided with a receiver and a transmitter, the receiver for receiving signals from the outputs of the subsystems and each subsystem comprising a receiver for receiving transmissions from the transmitter.
  • the transmitter coupled to the common computer processor and database transmits data calculated from outputs of each subsystem.
  • comparing distortion of an image of a road sign in the image stream of the camera to the road sign data in the database provides absolute distance and directional information of the road sign from the vehicle to the sub-system.
  • an uncertainty in identification of the road sign is a function of its occurrences within an uncertainty area of the general positioning system compounded with an uncertainty in the absolute distance of the road sign from the vehicle.
  • dedicated road signs comprise a painted road sign applied to one of the group of road surfaces, tunnel walls, overhead signs and roadside signs.
  • the painted road sign comprises concentric markings.
  • At least one installation of a plurality of similar road signs at distances exceeding the diameter of an uncertainty area of the general positioning system by an error margin of the road- sign to vehicle distance calculation enables the sub-system to un-ambiguously identify an imaged road-sign from the plurality of similar road signs.
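  • This uniqueness condition can be expressed as a simple candidate test: an imaged road sign is unambiguous when exactly one installation of that sign type lies within the compounded uncertainty radius around the general position fix. The following sketch is illustrative only (the flat coordinates and field names are assumptions, not the claimed method):

      import math

      # Hypothetical candidate filter for disambiguating an imaged road sign.
      def candidates(sign_type, gps_fix, installations, gps_error_m, range_error_m):
          cx, cy = gps_fix
          radius = gps_error_m + range_error_m  # compounded uncertainty span
          return [s for s in installations
                  if s["type"] == sign_type
                  and math.hypot(s["x"] - cx, s["y"] - cy) <= radius]

      installations = [
          {"type": "A1", "x": 0.0,   "y": 0.0},
          {"type": "A1", "x": 500.0, "y": 0.0},  # the same sign re-used 500 m away
      ]
      matches = candidates("A1", (20.0, 5.0), installations, gps_error_m=30.0, range_error_m=10.0)
      assert len(matches) == 1  # spacing exceeds the uncertainty span, so the match is unique
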
  • a further aspect of the invention is directed to a method for detecting an absolute position of a vehicle comprising:
  • a general positioning system, an output and an outward looking camera, all mounted within the vehicle, and in data communication with a common processor having image analysis functionality and coupled to a database;
  • the method may further comprise identifying at least one object in a field of view of the camera by comparing candidate data regarding objects within a general vicinity of the vehicle that are listed in a database, with objects in the field of view of the camera.
  • the database may comprise information regarding several traffic signs within the area corresponding to the analyzed portion of the field of view of the outward looking camera, as determined by the general positioning system.
  • At least one object may be uniquely identified by comparing an analyzed portion of the field of view of the outward looking camera with the position of the vehicle as determined by the general positioning system.
  • at least one object comprises a stationary traffic light.
  • the warning comprises at least one of a haptic signal, an audible signal and a visual signal to the driver.
  • the method may further comprise outputting a signal to directly control the vehicle, bypassing a driver.
  • data from a plurality of sub-systems is received by a computer processor which transmits information to a sub- system of a vehicle of interest.
  • a base station comprising a common computer processor and database, a receiver and transmitter, and the base station receives signals from each sub-system and transmits information to each sub-system.
  • a territory is divided into a tessellation of areas such that each area is larger than the uncertainty in position resulting from the general positioning system by at least the uncertainty in the distance of the recognized traffic sign from the vehicle, and the database associates each of the several traffic signs with the area in which it is installed; thereby, enabling candidates for the recognized traffic sign to be located in the database by association with one area and all its neighboring areas.
  • the areas are assigned a coloring so that each area has at most one neighbor of a given color and at least one traffic signal is installed only in areas assigned a particular color; thereby, the at least one traffic signal occurring no more than once in an area and all its neighboring areas and hence being uniquely identifiable to the subsystem.
  • At least one traffic sign is installed at distances exceeding the span of the analyzed portion of the field of view of the outward looking camera by the uncertainty in position resulting from the general positioning system; thereby, being uniquely identifiable to the subsystem.
  • the image-analysis functionality is split into phases, with an initial phase recognizing a constant shape shared by painted road signs and subsequent phases being foregone when the constant shape is not found in the image stream, thereby conserving processor power usage.
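  • The phased analysis described in the preceding item amounts to an early-exit pipeline: a cheap first pass looks only for the constant shape shared by the painted road signs, and the more expensive passes run only when that pass succeeds. The sketch below illustrates the control flow only; the detector functions are hypothetical placeholders, not the patented recognition algorithm:

      # Illustrative early-exit pipeline; the detector functions are placeholders.
      def find_constant_shape(frame):
          """Cheap first phase: look for the invariant element shared by all painted signs.
          Returns a bounding box or None."""
          return frame.get("constant_shape_box")  # placeholder detection result

      def classify_variable_element(frame, box):
          """More expensive phase: identify which specific sign / tag lexeme is present."""
          return frame.get("lexeme_id")

      def analyze_frame(frame):
          box = find_constant_shape(frame)
          if box is None:
              return None  # subsequent phases are foregone, conserving processor power
          return classify_variable_element(frame, box)

      print(analyze_frame({"constant_shape_box": None}))                                  # None
      print(analyze_frame({"constant_shape_box": (10, 20, 40, 40), "lexeme_id": "A1"}))   # A1
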
  • Fig. 1 is a simplified pictorial illustration showing a system for preventing accidents, in accordance with an embodiment of the present invention
  • Fig. 2 is a simplified schematic illustration of an on-board device for preventing accidents, in accordance with an embodiment of the present invention
  • Fig. 3 is a simplified schematic flowchart of a method for issuing driver warnings, in accordance with an embodiment of the present invention
  • Fig. 4 is a simplified schematic flowchart of a method for determining a "unique location" presently "viewed" by a camera, in accordance with an embodiment of the present invention
  • Fig. 5 is a simplified schematic flowchart of a method for assigning "tag lexemes" to locations of interest, in accordance with an embodiment of the present invention
  • Fig. 6 is a simplified schematic illustration of a sample surface grid coloring and "tag lexeme" placement, in accordance with an embodiment of the present invention
  • Fig. 7 is a simplified schematic illustration of sample "tag lexeme” shapes, in accordance with an embodiment of the present invention.
  • Fig. 8 is a simplified schematic illustration of one kind of recognizable "tag lexeme” shape in accordance with an embodiment of the present invention
  • Fig. 9 is a simplified flowchart of a method for recognizing a "tag lexeme” shape, in accordance with an embodiment of the present invention
  • Fig. 10 is a simplified schematic flowchart of a method for determining the "viewing angles" of a camera, in accordance with an embodiment of the present invention
  • Fig. 11 is a simplified schematic flowchart of a method for looking up "recognizable objects" for “viewing angles” determination, in accordance with an embodiment of the present invention
  • Fig. 12 is a simplified schematic flowchart of a method for determining a "travel vector", in accordance with an embodiment of the present invention.
  • Fig. 13 is a simplified schematic flowchart of a method for looking up "recognizable objects" for "travel vector” determination, in accordance with an embodiment of the present invention
  • Fig. 14 is a simplified schematic flowchart of a method for determining the states of "traffic control signals", in accordance with an embodiment of the present invention.
  • Fig. 15 is a simplified schematic flowchart of a method for looking up "signal recognizable objects" for "traffic control signal” state determination, in accordance with an embodiment of the present invention
  • Fig. 16 is a simplified schematic of a "unique location" entity relation model, in accordance with an embodiment of the present invention.
  • Fig. 17 is a simplified schematic of a "(signal) recognizable object" entity relation model, in accordance with an embodiment of the present invention.
  • Fig. 18 is a simplified pictorial illustration of a mounting fixture, in accordance with an embodiment of the present invention.
  • Fig. 19 is another simplified pictorial illustration of a mounting fixture, in accordance with an embodiment of the present invention.
  • vehicle is used in a broad sense to denote any transportation apparatus, whether operated by a person or otherwise.
  • driver is used to denote whatever entity operates the vehicle and the terms “warning” and “alert” are used to denote an instruction relevant to the vehicle operation in the immediate, medium or long term.
  • road is used to denote the ground over which the vehicle moves. While the embodiments described refer primarily to road vehicles such as cars, "road" should be understood as including other ground surfaces.
  • traffic signal is used to designate visual signals, particularly traffic lights. However, in general, it is not required that a "traffic signal” in and of itself bear an instruction to a "driver”.
  • traffic light refers primarily to what are called traffic robots in the United States, i.e. red and green lights, and typically red, amber and green arrays that indicate that a vehicle should stop or may continue its journey.
  • recognizable object is used to denote any entity, whether natural or man-made, or a collection of such entities, which form some visually recognizable pattern or patterns not necessarily unique. This includes buildings.
  • object recognition algorithm or “object recognition method” refer to a means of searching for or confirming the presence of a "recognizable object” or a class of similar "recognizable objects” in an image
  • recognition signature of a "recognizable object” refers to the data needed to be made available to the "object recognition algorithm” in order for it to search for or confirm the presence of said "recognizable object” in a given image.
  • An area of the image where an "object recognition algorithm" should search for a "recognizable object", or where one was found, is referred to throughout the disclosure as a "bounding box".
  • object recognition method may be similar to or based upon one or more existing prior work object recognition algorithms, such as SURF.
  • recognizable signal object and "signal recognizable object" both refer to a "recognizable object" which possesses one or more additional visually recognizable patterns ("signal recognition" patterns), possibly equivalent to the visually recognizable pattern of the "signal recognizable object" itself.
  • the presence of one or more such "signal recognition" patterns is indicative of a particular "traffic control signal" actively or passively conveying a particular instruction to a "driver" or "drivers" - for example, a three-colored traffic light displaying a red signal.
  • a "recognizable signal object” is in a particular "state” when it is displaying none, one or more of its “signal recognition” patterns.
  • recognizable signal objects whose "signal recognition" patterns are equivalent to the "signal recognizable object's" own visually recognizable pattern are simply those "signal recognizable objects" whose very presence in a scene is indicative of a particular "traffic control signal" actively or passively conveying a particular instruction to a "driver" or "drivers" - for example, a lowered boom gate.
  • surface grid pattern and “tessellation of areas” are used interchangeably to denote a segmentation of the transportation medium into adjacent or overlapping segments.
  • a "space grid element” is one such segment, and equivalently a “(tessellation) area”.
  • unique location is used to denote a particular coordinate in the transportation medium which is chosen to have certain data about it recorded in the system of the invention, as will become clear.
  • tag lexeme is used throughout the disclosure to denote a visual marking or pattern which can be applied to the transportation medium or to other entities at or near a “unique location”, or otherwise be made visible or imaged by a camera located at or near a “unique location”.
  • viewing, when applied to a camera, means capturing and streaming a view of a location, so that some part of that location appears in the captured image stream.
  • viewing angles is used to denote the camera orientation angles when it captures an image while “viewing” a particular unique location.
  • viewing angles refers to the camera roll, pitch and azimuth.
  • the reference axes for quantifying the actual angles of roll, pitch and azimuth can be thought of as the line of horizon for roll, the plane formed by the line of horizon and the camera for pitch and a specific arbitrary vector in this plane for azimuth - for example, the direction of normal "vehicle” traffic transition past the presently "viewed” unique location.
  • the reference vector for the azimuth component of the "viewing angles” is likely to be different whenever a camera "views" a different "unique location”.
  • traveling vector refers to the vector in the transportation medium from the coordinates of a particular "unique location” to some other coordinates - in particular, the coordinates where a camera was situated when it captured a particular image.
  • a “precise location” is the combination of a “unique location” and a “travel vector”.
  • the term "height” is used throughout the disclosure to denote a distance in a vector perpendicular to the plane formed by the line of horizon and a camera, usually with reference to the transportation medium surface.
  • the "height” can be computed by a method similar to the method for determining the "travel vector”, described below. This should not limit the scope of the invention to transportation media where the concept of "height” has little significance with respect to operation of the "vehicles” of that transportation medium.
  • the term "movement function” is used throughout the disclosure to refer to a transformation, usually associated with a particular "recognizable object” recorded in the system of the invention.
  • the "movement function” provides as output the "bounding box” where the recognizable object is expected to appear in an image captured by a camera possessing the given attributes situated at the given "height” at the "precise location” formed by the given "travel vector” in the presently viewed “unique location” and oriented at the given "viewing angles".
  • embodiments of the present invention may employ a different "movement function" for various combinations of camera attributes. This is considered an implementation detail and is therefore omitted from further discussion in the body of the disclosure and the claims.
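  • As an illustration of the inputs and output just described, a "movement function" can be sketched as a projection from object coordinates, travel vector, height and viewing angles to a predicted bounding box. The simple pinhole model and parameter names below are assumptions made for illustration; they are not the transformation recorded in the system of the invention:

      import math

      def movement_function(obj_xyz, travel_vector, viewing_angles, height_m,
                            focal_px=1000.0, image_size=(1920, 1080), box_px=60):
          """Predict the pixel bounding box where an object at obj_xyz (metres, relative to
          the "unique location") should appear for a camera displaced by travel_vector,
          raised by height_m and oriented at viewing_angles (azimuth, pitch, roll in radians).
          A plain pinhole projection stands in for the real transformation."""
          ax, ay, az = obj_xyz
          tx, ty = travel_vector
          x, y, z = ax - tx, ay - ty, az - height_m      # camera-centred, before rotation
          azimuth, pitch, _roll = viewing_angles
          xr = x * math.cos(-azimuth) - y * math.sin(-azimuth)
          yr = x * math.sin(-azimuth) + y * math.cos(-azimuth)
          depth = yr * math.cos(-pitch) - z * math.sin(-pitch)   # along the optical axis
          up = yr * math.sin(-pitch) + z * math.cos(-pitch)
          if depth <= 0:
              return None  # behind the camera: no bounding box predicted
          u = image_size[0] / 2 + focal_px * xr / depth
          v = image_size[1] / 2 - focal_px * up / depth
          return (u - box_px / 2, v - box_px / 2, box_px, box_px)

      # Example: an object 30 m ahead and 2 m up, camera 5 m past the unique location.
      print(movement_function((0.0, 30.0, 2.0), (0.0, 5.0), (0.0, 0.0, 0.0), 1.2))
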
  • Fig. 1 is a simplified pictorial illustration showing a system 100 for preventing accidents, in accordance with an embodiment of the present invention.
  • the system comprises an on-board subsystem 101 for detecting a traffic signal directed to a vehicle 102, the subsystem comprising:
  • a GPS system, an output and an outward looking (in this case forward mounted) camera, all mounted within the vehicle 102 and in data communication with a common processor having image analysis functionality that is coupled to a database.
  • the GPS for providing a general position of the vehicle; the database for providing data regarding objects and their relative positions within that general position; the forward mounted camera for capturing an image of a field of view; the processor with image analysis functionality for identifying objects within the field of view and for locating said objects by comparison with information in the database, thereby identifying said traffic signal unambiguously; and the output for outputting data that includes the identity of said traffic signal and its location.
  • the sub-system 101 is configured and constructed to broadcast and receive signals via a local broadcast network (peer to peer communication network) 103.
  • the sub-system may be integrated into a single unitary device and may comprise a modern smart phone that uses a GPS navigation application, includes a camera application and has telecommunication capability with a transmitter and receiver for transmitting and receiving data.
  • the sub-system 101 is placed inside a vehicle 102, such as a car for example.
  • the sub-system 101 is mounted in a mounting apparatus 104 or cradle, which enables the camera of the sub-system 101 to capture and/or view images of the road ahead of the vehicle.
  • Tag lexemes 105 are applied to the road surface or road signs (not shown) and can be viewed by the camera of the on-board sub-system 101.
  • the subsystem 101 is configured and constructed to broadcast and receive signals via a centralized network connection 106 to a backbone (centralized) network 107, such as the Internet, for example.
  • a server 108 and database 109 are preferably provided. These are constructed and configured to broadcast and receive signals to and from the backbone network 107.
  • processing is divided between the on-board sub-system 101 and one or more servers 108. Similarly, some information will be stored in a memory such as a flash memory of the on-board sub-system, whether a dedicated navigation-safety system or a smart-phone, both of which have flash memories. Other information may be held in a server 108. Information may be transmitted from the on-board sub-systems 101 in each vehicle 102 to a central server 108 and location specific data may be transmitted from the central server 108 to the on-board sub-systems 101 in each vehicle 102 to provide location relevant information.
  • An existing road surface marking 110 might be imaged/viewed by the device 101 and might be utilized by the methods of the invention as a "recognizable object" along with other visually recognizable entities (not shown) potentially also visible to the device 101.
  • On-board device 200 may be a specific implementation of subsystem 101 shown in Fig. 1.
  • On-board device 200 is connectable / connected to a backbone network 201 and a local broadcast network 202.
  • the device 200 is able to receive and send data/signals from/to these networks 201, 202.
  • the on-board device 200 comprises a processor 211 such as a central processing unit (CPU) programmed with appropriate software and further comprises a global positioning system (GPS) sensor 203, and a forward-facing camera 204.
  • CPU central processing unit
  • GPS global positioning system
  • the on-board device 200 may also include other subsystems and functionality, such as a cabin (inside vehicle) camera 205, a cabin ambient microphone 206, orientation sensors 207, a video screen 208, an audio output device 209, a haptic device 210 for creating vibratory stimulation, and an optional local memory 212.
  • in sub-systems 101 that are not unitary, some of these functions may be provided by separate units, such as a GPS unit, a camera unit and an on-board computer. These may exchange data via wired or wireless connections, such as Bluetooth™, for example.
  • the sub-system 101 includes an output.
  • this is an audible alert and may be abstract or may be words generated by a speech synthesizer or pre-recorded messages such as "approaching traffic light”, “approaching red traffic light”, “stop sign ahead” and the like.
  • the alert may however, be a visual output such as a flashing light or written instruction and may be projected onto the windscreen, onto the glasses of the driver, or even directly onto the retina of the driver.
  • the alert may be a haptic signal, such as a vibration transmitted to the driver's body through the seat, through the pedals or through the seatbelt, for example. Theoretically, though uncommon, the sense of smell or taste could be alerted. Two or more senses may be stimulated with alerts at the same time.
  • FIG. 3 is a simplified schematic flowchart of a method 300 for issuing driver alerts, in accordance with an embodiment of the present invention.
  • Forward facing Camera 204 and GPS system 203 of the sub-system 101 or Device 200 are operative to capture camera images 310 and obtain GPS readings 312.
  • the camera images 310 and GPS readings 312 may be obtained and transmitted to the processor of the sub-system 101 or Device 200 continuously, semi- continuously or intermittently.
  • the images and/or data/signals associated therewith 310 along with the GPS readings 312 are fed into a unique location algorithm 316 discussed in detail below with respect to the flowchart shown in Fig. 4.
  • once the unique location algorithm 316 recognizes the "unique location" which is presently "viewed" by the device's camera, this "unique location" is fed into the instantaneous viewing angles algorithm 318, along with the images captured in step 310.
  • the output of the instantaneous viewing angle algorithm 318 is then fed into the instantaneous travel vector algorithm 320, along with the images captured in step 310.
  • the instantaneous travel vector algorithm 320 is discussed in detail below with reference to the flowchart shown in Fig. 12.
  • the output of the instantaneous travel vector algorithm 320 together with the output of the unique location algorithm 316 allows determining the instant precise location of the vehicle 322.
  • device 200 or sub-system 101 is operative to obtain crowd GPS assist data 314 via the local broadcast network 202 or the backbone network 201 from other nearby devices.
  • device 200 or sub-system 101 may obtain crowd GPS assist data 314 and other data from the central server 108.
  • the GPS assist data 314 may increase the accuracy of the GPS 203 reading enough to conclude the current "precise location" of the vehicle 102 in step 322 even when this would not be possible via steps 316, 318 and 320 alone, for example when no "tag lexeme” or "recognizable objects” can be imaged by the device 200, as will become clear.
  • once the current "precise location" of the vehicle is concluded in step 322, it is fed along with the current GPS reading 312 into step 324, whereby new crowd GPS assist data 314 is calculated and broadcast to the local broadcast network and/or to the central server 108.
  • the current "precise location" as concluded in step 322, along with the output of the unique location algorithm 316 and the instantaneous viewing angle algorithm 318, is fed into a signal state recognition algorithm in step 328, which is discussed in detail below with respect to the flowchart shown in Fig. 14.
  • any signal states not recognized directly may be obtained via the local broadcast network 202 or the backbone network 201 from other nearby devices or from the central server 108.
  • in step 326, the output of step 328 is considered along with data on signal state recognitions by other nearby devices (not shown), to reduce false recognitions and to safely conclude the current signal states of all or some traffic control signals relevant to the presently "viewed" "unique location".
  • data on signal state recognition by the device 200 is broadcast via the local broadcast network and/or the backbone network to other devices and/or to the central server in step 330.
  • the "precise location" determined in step 322 is used to identify the relevant "traffic control signals” in step 332.
  • the relevant "traffic control signals” are those “traffic control signals” whose state dictates a particular way of operating the "vehicle” in the immediate or long term; particularly but not exclusively traffic lights.
  • in step 334, the system of the invention is then operative to consider location-related and signal-related timing factors relating to the "precise location" from step 322, the relevant "traffic control signals" from step 332 and the states of the latter from step 326. Thereafter, in a decision step 336, the system is operative to decide if the "driver" should be warned about a signal state. If the output is YES, a driver alert or warning is issued in an issue driver warning step 338 via, for example, the audio, video or haptic outputs of the on-board device 200 (Fig. 2) or sub-system 101.
  • a query step 340 is performed to query a central database for location-specific warnings relevant to the "precise location" from step 322.
  • distractions such as noise inside cabin, driver eye movement, etc., if detected and monitored, and driving conditions factors such as time of day, speed, weather, road bends, relevant statistics, etc., may be considered - step 342.
  • the system is operative to consider whether the driver should be warned about current location conditions and / or suitability of his / her driving in step 344. If yes, a driver warning is issued - step 346.
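  • The timing considerations of steps 334-338 can be illustrated with a simple stopping-distance check: warn when the distance remaining to a stop line governed by a red signal no longer covers the stopping distance plus a margin. The formula is ordinary kinematics, and the reaction-time, deceleration and margin values are illustrative assumptions, not values from the disclosure:

      def should_warn(distance_to_stop_line_m: float, speed_mps: float, signal_state: str,
                      reaction_time_s: float = 1.5, decel_mps2: float = 4.5,
                      margin_m: float = 5.0) -> bool:
          """Illustrative warning decision: warn if the stopping distance plus a safety
          margin exceeds the distance remaining to a stop line showing a red signal."""
          if signal_state != "red":
              return False
          stopping_distance = speed_mps * reaction_time_s + speed_mps ** 2 / (2 * decel_mps2)
          return stopping_distance + margin_m >= distance_to_stop_line_m

      print(should_warn(80.0, 14.0, "red"))  # ~50 km/h with 80 m to go: no warning yet
      print(should_warn(40.0, 14.0, "red"))  # 40 m to go: warn the driver
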
  • Fig. 4 is a simplified schematic flowchart of a method for determining a "unique location” presently "viewed” by a camera, in accordance with an embodiment of the present invention.
  • a method 400 for providing the unique location that is one embodiment of the unique location algorithm 316 of Fig. 3 is detailed.
  • Firstly camera images are obtained in a camera image obtaining step 402, by camera 204 (Fig. 2).
  • viewing angles are obtained using orientation sensors 207 and/or one or more of cameras 204 (and 205, where provided); see Fig. 2.
  • the outputs of steps 402, 404 are fed into a searching step 408, for searching for a "tag lexeme" in the image, for example, as is described hereinbelow in Figs. 7-9.
  • in step 410, the system is operative to determine if a tag lexeme has been recognized in the previous step 408. If the response is negative, i.e. NO, more images and viewing angles are obtained in steps 402, 404, 406 respectively and the process is repeated. If the response is YES, then step 416, detailed herein below, is performed.
  • GPS readings are obtained in step 406 using GPS sensor 203 (Fig. 2). Then, in step 412, the outputs of step 406 are used to determine the identification or ID of the space grid corresponding to the GPS reading obtained in step 406. Thereafter, all GPS grids neighboring the grid from step 412 are determined in step 414.
  • in step 416, the identified tag lexeme from step 410 is looked up in the database 109 or local storage 212 among tag lexemes in place in the identified grids from steps 412 and 414, where, due to the provisions of the method for tag lexeme assignment described herein below in Fig. 5, the identified tag lexeme is guaranteed to appear at most one time; thus, the space grid where the recognized tag lexeme is in use is uniquely identified.
  • the space grid identified in step 416, along with the recognized tag lexeme from step 410, uniquely identifies the specific appearance of the identified tag lexeme - a "unique location" "viewed" by the camera at the time that the imaging step 402 was performed.
  • this "unique location" is concluded or deduced, concluding method 400.
  • Fig. 5 is a simplified schematic flowchart of a method for assigning "tag lexemes" to locations of interest, in accordance with one embodiment of the present invention.
  • tag lexemes are placed so that, when an image captured from a particular coordinate is subject to image analysis, at most one instance of the same tag lexeme has the potential to be present in the analysed portion of the image; or, when the chosen image analysis yields the camera-to-tag-lexeme distance, in any distance-uncertainty sub-region of the analysed portion of the image.
  • tag lexemes reappear no closer than twice the maximum GPS error, and usually further still, as necessitated by the implementation details and uncertainty margins. While there are other methods of satisfying this aspect of the invention, in Fig. 5 one such method 420 is shown by way of an example.
  • the method 420 involves defining a tessellating array of areas that cover the territory.
  • the method 420 consists of the following steps:
  • Define the surface grid pattern, for example a uniform hexagonal grid, in a surface grid pattern definition step 424;
  • Identify locations of interest for tag application (for example by criticality) in a tag location identification step 428; For instance, such locations may be specific lanes in a controlled intersection;
  • One way of performing step 432 is exemplified in 434 as follows:
  • in a grid annotation step 440, annotate each grid element with colors 1..z, such that no grid elements neighboring or overlapping one grid element are annotated with the same color;
  • in a "tag lexeme" allocation step 442, allocate "tag lexemes" to locations of interest such that if a location resides in a grid element annotated with color i, then a tag is allocated only from the subset Ti of T.
  • FIG. 6 is a simplified schematic illustration of a sample surface grid coloring consisting of hexagonal cells, and "tag lexeme” placement 460, in accordance with an embodiment of the present invention.
  • a uniform hexagonal surface grid pattern is applied to a surface, producing uniform hexagonal surface grid elements. All grid elements, like grid element 462, bear edges longer than worst case GPS accuracy 468.
  • surface grid element 462 is tagged with the letter A.
  • tag lexeme 464 is denoted A1.
  • tag lexemes denoted Ai, Bi and Ci each belong to a different non intersecting subset of the set of tag lexemes.
  • tag lexemes denoted Ai are present in surface grid elements tagged with the color A, while tag lexemes denoted Bi and Ci are present in surface grid elements tagged with the colors B and C, respectively.
  • Fig. 7 shows a simplified schematic illustration of sample "tag lexeme” shapes
  • the "tag lexeme” shapes shown in 482 are substantially concentric images that may be characterized by complete omni-directional symmetry which can contribute to recognizability by a computationally inexpensive recognition algorithm (not shown).
  • the "tag lexeme” shapes shown in 484 are constructed by composing arbitrary elements onto an omni-directionally symmetrical sub-pattern / element.
  • the "tax lexeme” shapes shown in 486 are constructed by alteration of straight lines of two different lengths while incorporating an omni-directionally symmetrical element.
  • Fig. 8 is a simplified schematic illustration of one kind of recognizable "tag lexeme” shape 490, in accordance with an embodiment of the present invention.
  • this type of recognizable tag lexeme shape 490 comprises at least one invariant element 492, an invariant element for suspect confirmation 494, an invariant element with at most one line of symmetry for orientation disambiguation 496 and a variable component 498 unique to each "tag lexeme" in the defined finite set of "tag lexemes".
  • element 492 is omni-directionally symmetrical (i.e. concentric) and/or element 494 has at least two lines of symmetry.
  • all of the above-mentioned sub-patterns / elements are symmetrical with respect to a common line of symmetry 499a while sub-patterns / elements 494 and 492 are also symmetrical to a second line of symmetry 499b.
  • Fig. 9 is a simplified schematic flowchart of a method 491 for recognizing a tag lexeme shape, similar to 490 (Fig. 8), in accordance with an embodiment of the present invention.
  • the method 491 is one possible implementation of step 408 of method 400 (Fig. 4).
  • the method 491 comprises the following steps:
  • the method is then operative to search the pavement area of the image for angle- and distance-modified likenesses of sub-pattern / element 492 (Fig. 8) using a computationally inexpensive recognition algorithm;
  • in a first checking step 491f, it is determined whether sub-pattern / element 492 is recognized;
  • in a second checking step 491h, it is determined whether sub-pattern / element 494 is identified;
  • in a matching step 491j, an attempt is made to match the variable sub-pattern / element 498 against known tag lexeme shapes in the finite set of "tag lexemes", by, for example, iteratively attempting to recognize the variant element of each "tag lexeme" from the set until a recognition is found or the set is exhausted (not shown);
  • in a conclusion step 491k, the method is operative to conclude whether or not a tag lexeme was recognized and, if one was recognized, which one.
  • FIG. 10 is a simplified schematic flowchart of a method 500 for determining the "viewing angles" (azimuth, pitch and roll) of a camera similar to 204 (Fig. 2), in accordance with an embodiment of the present invention. This method provides more details of the instantaneous viewing angles algorithm 318 (Fig. 3).
  • a camera image is obtained by forward facing camera 204 (Fig. 2).
  • the system of the invention is operative to look up appropriate recognizable objects in local storage memory 212 or in a server-accessible database 109, their object recognition methods, object recognition signatures and movement functions, and to compute predicted bounding boxes in a lookup step 502 using a method such as 550 (Fig. 11) discussed below.
  • in a recognizing step 503, each recognizable object's object recognition method is consulted to generate a list of match options for that recognizable object in the image from step 501, within the predicted bounding boxes from step 502.
  • in a viewing angle computation step 504, the recognizable object's movement function is examined for a combination of inputs which produces an output requiring the recognizable object to be present in the image at coordinates sufficiently close to those at which it was found / matched in at least one of the match options produced in the recognizing step 503.
  • this consideration takes into account either a minimal or the last known travel vector according to a recent iteration of 322 (Fig. 3), since the inputs required by the movement function include both the travel vector and the viewing angles.
  • the calculation step 504 consists of solving a system of simultaneous equations.
  • in a decision step 505, it is determined whether recognizable objects were recognized in step 503 and whether step 504 yielded exactly one viewing angle which would satisfy at least one match option of each recognized recognizable object.
  • if YES, a concluding step 507 concludes the viewing angles corresponding to the image captured in step 501.
  • if NO, the system is operative to enlarge search bounding boxes for all recognizable objects in a bounding box enlarging step 506 and repeat steps 502 to 505.
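  • The "system of simultaneous equations" of step 504 can also be solved numerically: search the viewing-angle space for the angles whose movement-function prediction lands closest to where each recognized object was actually matched. The sketch below reduces the problem to a single azimuth angle with a toy pinhole model and a brute-force search, purely for illustration; it is not the solver of the disclosure:

      import math

      F_PX, CX = 1000.0, 960.0  # assumed focal length (pixels) and image centre column

      def predict_u(bearing: float, azimuth: float) -> float:
          """Toy movement function: horizontal pixel of an object at a known bearing
          (radians from the unique location's reference vector) for a given camera azimuth."""
          return CX + F_PX * math.tan(bearing - azimuth)

      def solve_azimuth(matches, lo=-math.pi / 4, hi=math.pi / 4, steps=2000):
          """matches: (object bearing, pixel column where it was matched). Returns the
          azimuth minimizing the squared pixel error - a stand-in for step 504."""
          best_a, best_err = None, float("inf")
          for i in range(steps + 1):
              a = lo + (hi - lo) * i / steps
              err = sum((predict_u(b, a) - u) ** 2 for b, u in matches)
              if err < best_err:
                  best_a, best_err = a, err
          return best_a

      # Two recognized objects at known bearings, matched at these pixel columns:
      matches = [(0.10, predict_u(0.10, 0.05)), (-0.20, predict_u(-0.20, 0.05))]
      print(round(solve_azimuth(matches), 3))  # ~0.05 rad, the camera azimuth
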
  • FIG. 11 there is seen a simplified schematic flowchart of a method 550 for looking up recognizable objects in step 502 (Fig 10) of the viewing angle algorithm 500, in accordance with one embodiment of the present invention.
  • a most recent recognized unique location ID 551 (316 of Fig. 3) is fed into a computing procedure step 552 that looks up recognizable objects associated with the unique location ID with which it is provided. (See figures 16 and 17).
  • a last known travel vector 320 (Fig. 3) is considered by the device 211 (Fig. 2).
  • the last known viewing angles 318 (Fig. 3) are considered by the device 211 (Fig. 2).
  • orientation / rotation data may be obtained from orientation sensors 207 (Fig. 2) or by analysis of data from cameras 206 and / or 205 (Fig 2).
  • predicted bounding boxes for each selected recognizable object may be formulated based on the aforementioned last known travel vector (step 553), last known viewing angle (step 554) and the previously considered orientation / rotation data (step 555), by supplying the travel vector and viewing angle, corrected by the orientation / rotation data, as input to the movement function associated with each recognizable object (see Fig. 17).
  • the predicted bounding boxes may be enlarged in a bounding box enlargement step 557 proportionally to the present search iteration number, in accordance with an iteration count of step 506 (Fig. 10) of the viewing angle algorithm 500 (Fig. 10).
  • recognizable objects whose bounding boxes are outside image boundaries are dropped.
  • the system may be queried for recent statistics regarding lighting conditions observed by other devices in the same general area or when viewing the unique location viewed herein in step 551.
  • the current image, time of day, weather forecast and crowd data from the previous step 559 are used in a lighting conditions computation step 560.
  • the results of step 560 are used to filter recognizable objects by lighting conditions in a second filtering step 561, where recognizable objects not relevant for current lighting conditions are dropped.
  • (for example, some recognizable objects are only relevant for daytime, while others are exclusively relevant for night time, while still others possess different recognition signatures during dusk and are thus recorded in the system of the invention as separate recognizable objects for dusk and for daylight viewing).
  • a lookup recent recognizability statistics from crowd data step 562 is performed, where the system of the invention is queried for recent statistics regarding successful recognition of the recognizable objects of the presently viewed unique location by other devices.
  • a third filtering step 563 is performed, where recognizable objects which, according to the output of the previous step 562, recently tended to fail to be recognized by devices viewing the present unique location - and thus fall below a specified recognizability threshold - are dropped.
  • an ordering step 564 is performed, whereby the set of recognizable objects is ordered sequentially in accordance with least effect of travel vector - in one embodiment, this could be the average absolute value of the recognizable objects' movement function applied to a number of predefined travel vectors.
  • finally, n, a predetermined number, of recognizable objects are picked from the top of the output of the ordering step 564, thus concluding the method 550.
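  • Method 550 is essentially a sequence of filters and an ordering over the recognizable objects recorded for the presently viewed unique location. The sketch below mirrors that pipeline; the record fields (lighting tags, recent recognition rates, travel-vector sensitivity) are hypothetical stand-ins for the real database attributes:

      def lookup_recognizable_objects(records, lighting, recognizability_threshold=0.5,
                                      image_box=(0, 0, 1920, 1080), n=5):
          """Illustrative pipeline: drop objects outside the image, filter by lighting
          conditions (steps 559-561) and recent crowd recognizability (steps 562-563),
          order by least effect of the travel vector (step 564), then pick n."""
          def inside(box):
              x, y, w, h = box
              return x >= 0 and y >= 0 and x + w <= image_box[2] and y + h <= image_box[3]

          kept = [r for r in records if inside(r["predicted_box"])]
          kept = [r for r in kept if lighting in r["lighting_conditions"]]
          kept = [r for r in kept if r["recent_recognition_rate"] >= recognizability_threshold]
          kept.sort(key=lambda r: r["travel_vector_sensitivity"])
          return kept[:n]

      records = [
          {"id": "horizon-1", "predicted_box": (100, 50, 300, 40),
           "lighting_conditions": {"day", "night"}, "recent_recognition_rate": 0.9,
           "travel_vector_sensitivity": 0.1},
          {"id": "graffiti-7", "predicted_box": (1800, 900, 300, 300),  # outside the image
           "lighting_conditions": {"day"}, "recent_recognition_rate": 0.8,
           "travel_vector_sensitivity": 0.6},
          {"id": "wall-3", "predicted_box": (600, 400, 80, 120),        # poor recognizability
           "lighting_conditions": {"day"}, "recent_recognition_rate": 0.2,
           "travel_vector_sensitivity": 0.3},
      ]
      print([r["id"] for r in lookup_recognizable_objects(records, "day")])  # ['horizon-1']
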
  • Fig. 12 is a simplified schematic flowchart of a method 600 for determining a "travel vector" as defined herein above, of a camera similar to 204 (Fig. 2), in accordance with an embodiment of the present invention.
  • Method 600 provides more details of the instantaneous travel vector algorithm 320 referred to in Fig. 3.
  • a camera image (step 601) is obtained by camera 204 of Fig. 2.
  • system 100 is operative to lookup appropriate recognizable objects, their object recognition methods, object recognition signatures and movement functions and to compute predicted bounding boxes in a lookup step 602, using a method such as 650 of Fig. 13, for example, as discussed below.
  • each recognizable object's object recognition method is consulted in a recognizing step 603 to yield all match options for that recognizable object in the image from step 601 within the predicted bounding boxes from step 602.
  • in a travel vector computation step 604, the recognizable object's movement function is examined for a combination of inputs which would produce an output requiring the recognizable object to be present in the image at coordinates sufficiently close to those at which it was found / matched in at least one of the match options produced in the recognizing step 603.
  • this consideration takes into account the present viewing angles computed in 318 (Fig. 3), since the movement function takes as input both the travel vector and the viewing angles.
  • this step 604 amounts to solving a system of linear equations.
  • in a decision step 605, it is determined whether recognizable objects were recognized in step 603 and whether step 604 yielded exactly one travel vector which would satisfy at least one match option of each recognized recognizable object. If YES, then the present iteration of the method is concluded and the system of the invention, in a concluding step 607, concludes the travel vector corresponding to the image captured in step 601. If NO, then the system is operative to enlarge search bounding boxes for all recognizable objects in a bounding box enlarging step 606 and repeat steps 602-605.
  • Fig. 13 is a simplified schematic flowchart of a method for looking up "recognizable objects" for "travel vector" determination, in accordance with an embodiment of the present invention.
  • a method 650 for looking up recognizable objects in step 602 (Fig. 12) of the travel vector algorithm 600, in accordance with one embodiment of the present invention, is shown.
  • in step 651, a most recent output of 316 (Fig. 3) is fed into step 652.
  • Step 652 is operative to query the system of the invention and thus lookup recognizable objects associated with the unique location ID it was provided (see figures 16 and 17).
  • a last known travel vector 320 (Fig. 3) is potentially considered by the device 211 (Fig. 2).
  • the output of method 500 (Fig 10), with captured image in step 501 being the same as the presently captured image in step 601, is consulted.
  • predicted bounding boxes for each selected recognizable object are formulated based on the afore-recollected last known travel vector (step 653) and viewing angles (step 654), by supplying said travel vector and viewing angles as input to the movement function associated with each recognizable object (see Fig. 17).
  • the predicted bounding boxes are potentially enlarged in a bounding box enlargement step 657 proportionally to the present search iteration number, in accordance with an iteration count of step 606 (Fig. 12) of the travel vector algorithm 600 (Fig. 12).
  • recognizable objects whose bounding boxes are outside image boundaries are dropped.
  • in step 659, the system of the invention is queried for recent statistics regarding lighting conditions observed by other devices in the same general area or when viewing the unique location viewed herein in step 651.
  • the results of step 660 are used to filter recognizable objects by lighting conditions in a second filtering step 661, where recognizable objects not relevant for current lighting conditions are dropped.
  • (for example, some recognizable objects are only relevant for daytime, while others are exclusively relevant for night time, while still others possess different recognition signatures during dusk and are thus recorded in the system of the invention as separate recognizable objects for dusk and for daytime).
  • a lookup recent recognizability statistics from crowd data step 662 is performed, where the system of the invention is queried for recent statistics regarding successful recognition of the recognizable objects of the presently viewed unique location by other devices.
  • a third filtering step 663 is performed, where recognizable objects which, according to the output of the previous step 662, recently tended to fail to be recognized by devices viewing the present unique location - and thus fall below a specified recognizability threshold - are dropped.
  • an ordering step 664 is performed, whereby the set of recognizable objects is ordered sequentially in accordance with most effect of travel vector - in one embodiment, this could be the average absolute value of the recognizable objects' movement function applied to a number of predefined travel vectors.
  • finally, n, a predefined number, of recognizable objects are picked from the top of the output of the ordering step 664, thus concluding the method 650.
  • Fig. 14 is a simplified schematic flowchart of a method for determining the states of "traffic control signals", in accordance with an embodiment of the present invention.
  • a method 700 for recognizing the signal states of traffic control signals in accordance with an embodiment of the present invention is shown. This method provides more details of the signal state algorithm 328 (Fig. 3).
  • a camera image is obtained by camera 204 (Fig. 2).
  • system 100 is operative to lookup relevant "signal recognizable objects", their object recognition methods and signatures, signal recognition methods and signatures and to compute predicted bounding boxes in a lookup step 702, using a method such as 750 (Fig. 15) discussed below.
  • in a confirmation step 703, the correct object recognition methods provided by step 702 are consulted to confirm recognition of each signal recognizable object at the expected bounding boxes.
  • the confirmation possibly yields minor adjustments required for the correct recognition of the signal state in the next step 704.
  • signals which are not confirmed in this step 703 are excluded from signal state recognition in the next step 704. This allows minimizing the frequency of false state recognitions, for instance if a traffic control signal is temporarily obscured by an object which might otherwise confuse the signal recognition algorithm employed in the next step 704.
  • a state recognition step 704 is performed, whereby an attempt is made to determine the state of each of the signal recognizable objects confirmed in step 703, using the corresponding signal recognition method given by step 702.
  • in an obtaining crowd-cast signal states step 705, the system of the invention is queried for signal state recognitions made available in 330 (Fig. 3) by other devices viewing the present unique location.
  • in this step 705, either or both of the networks 202 (Fig. 2) and 201 (Fig. 2) may be instrumental.
  • the crowd-cast signal states obtained here are further marked in this step 705 as the present states of those traffic control signals for which no associated signal recognizable objects were confirmed in step 703, or those for which the state of associated signal recognizable objects was not conclusively recognized in the recognition step 704 by the appropriate recognition method.
  • the method 700 is concluded in a concluding step 707, where the system of the invention uses the outputs of steps 704 and 705 to conclude the states of the traffic control signals associated (see Fig 17) with the signal recognizable objects of the presently viewed unique location.
  • the system of the invention is operative to consider various parameters such as recognition confidence level and inter-signal rules. (For example, if, when traffic light A is green, a conflicting traffic light B is always red, and A was recognized as green with 95% confidence while B was recognized as yellow with 15% confidence, B will be reported as red, contrary to its recognition.)
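  • The reconciliation just described can be sketched as a small rule pass over per-signal recognitions: confidently recognized states are kept, and an inter-signal rule overrides any conflicting low-confidence recognition. The rule representation and the confidence threshold below are illustrative assumptions, not the logic of the disclosure:

      def conclude_states(recognitions, rules, confidence_threshold=0.5):
          """recognitions: {signal: (state, confidence)}.
          rules: (signal, state, implied_signal, implied_state) tuples meaning
          "whenever signal is in state, implied_signal must be in implied_state"."""
          concluded = {s: state for s, (state, conf) in recognitions.items()
                       if conf >= confidence_threshold}
          for signal, state, implied_signal, implied_state in rules:
              if concluded.get(signal) == state:
                  _, conf = recognitions.get(implied_signal, (None, 0.0))
                  if conf < confidence_threshold:
                      concluded[implied_signal] = implied_state  # rule overrides weak recognition
          return concluded

      recognitions = {"A": ("green", 0.95), "B": ("yellow", 0.15)}
      rules = [("A", "green", "B", "red")]
      print(conclude_states(recognitions, rules))  # {'A': 'green', 'B': 'red'}
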
  • in Fig. 15, a simplified schematic flowchart of a method 750 for looking up "signal recognizable objects" in step 702 (Fig. 14) of the signal state recognition algorithm 700, in accordance with one embodiment of the present invention, is now described.
  • in step 751, a recent output of 316 (Fig. 3) is fed into step 752.
  • Step 752 is operative to query the system of the invention and thus lookup signal recognizable objects associated with the unique location ID it was provided (see figures 16 and 17).
  • predicted bounding boxes for each selected signal recognizable object are formulated based on the afore-recollected last known travel vector (step 753) and viewing angles (step 754), by supplying the said travel vector and viewing angles as input to the movement function associated with each signal recognizable object (see Fig. 17).
  • in a filtering step 756, signal recognizable objects whose bounding boxes are outside image boundaries are dropped.
  • in step 757, the system of the invention is queried for recent statistics regarding lighting conditions observed by other devices in the same general area or when viewing the unique location viewed herein in step 751.
  • the current image, time of day, weather forecast and crowd data from previous step 757 are used in lighting conditions computation step 758.
  • the results of step 758 are used to filter signal recognizable objects by lighting conditions in a second filtering step 759, where signal recognizable objects that cannot be decisively confirmed / recognized using their object recognition method under current lighting conditions are dropped.
  • (for example, a traffic light cannot be recognized / confirmed based on its external outlines at night, but the same traffic light can be recognized at night by the hue of its halo, which is not appropriate at day time).
  • the results of step 759 are used to filter signal recognizable objects once again in a third filtering step 760, where signal recognizable objects whose state cannot be determined using their signal state recognition method under current lighting conditions are dropped.
  • (for example, the state of the traffic light discussed above cannot be determined at night based on the distance of the lit light from the external outlines of the signal; rather, color-based recognition methods can be applied, which might be less trustworthy at daytime).
  • Fig. 16 is a simplified schematic illustration of a "unique location" entity relation model 800, in accordance with an embodiment of the present invention.
  • the "unique location” entity relation model 800 is centered around the "unique location” entity 840.
  • Figures 4, 5 and 6 make use of a notion of a space grid, which corresponds in the domain of the invention to a defined region of the transportation medium.
  • the system of the invention 100 (Fig. 1) is operative to uniquely identify each space grid entity 820 by a space grid ID 822 and to non-uniquely label each space grid entity with a color 821, as specified in step 440 (Fig. 5).
  • the system of the invention 100 (Fig. 1) is also operative to associate each space grid entity 820 with 6 other such space grid entities 820 which are adjacent to it in the domain of the invention, in the 6 to 6 relation 810 "Is Adjacent To" (assuming a hexagonal space grid, as shown in 460 (Fig. 6)). This relation is instrumental in step 414 (Fig. 4).
  • Figures 4, 5, 6, 10, 11, 12, 13, 14 and 15 make use of a notion of a "unique location", which, as described and defined herein above, can be thought of as a particular coordinate in the transportation medium which is chosen to have certain data about it recorded in the system of the invention.
  • the system of the invention 100 (Fig. 1) is operative to associate with each space grid entity 820 zero or more such unique location entities 840 by the 1 to any "Harbors" relation 830. In this case, it can be said that the associated "space grid” 820 harbors the associated "unique location” 840.
  • the system of the invention is further operative to uniquely identify each "unique location" 840 by a unique location ID 842.
  • the system of the invention is also operative to non-uniquely label each "unique location" 840 with a "recognition lexeme" 841, representing one non-unique "tag lexeme" shape as exemplified in figures 7 and 8.
  • This labeling is taken advantage of in step 416 (Fig. 4).
  • the assignment of this labeling is the subject of method 420 (Fig. 5).
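  • For illustration, the entities and relations of model 800 could be represented by data structures such as the following; the field names are assumptions of this sketch, not the patent's own schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UniqueLocation:               # entity 840
    unique_location_id: int         # 842
    recognition_lexeme: str         # 841, a non-unique "tag lexeme" shape

@dataclass
class SpaceGrid:                    # entity 820
    space_grid_id: int              # 822
    color: int                      # 821, drawn from a small reusable set
    neighbors: List["SpaceGrid"] = field(default_factory=list)            # relation 810, "Is Adjacent To"
    unique_locations: List[UniqueLocation] = field(default_factory=list)  # relation 830, "Harbors"
```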
  • Fig. 17 is a simplified schematic of a "(signal) recognizable object" entity relation model 900, centered around the "(signal) recognizable object" entity 906.
  • Fig. 17 is provided in order to simplify the understanding of the interrelationship of some of the entities used elsewhere in this detailed description of preferred embodiments.
  • Unique location entities 902 pictured in Fig. 17 may be similar or identical to entities 840 (Fig. 16).
  • Entities 906 pictured in Fig. 17 depict both signal recognizable objects and regular recognizable objects; hence, entities 906 are titled "(signal) recognizable objects”. Following this syntax, items only relevant to signal recognizable objects appear in Fig. 17 in brackets, while items without brackets are relevant equally to regular "recognizable objects" and "signal recognizable objects”.
  • recognizable object entities 906 need not be real objects in the domain of the invention, but rather any entity forming some visually recognizable pattern or patterns in the domain of the invention. Such patterns need not be unique. Examples of such recognizable objects in the domain of the invention may include a contour, a horizon line, a textured wall, a graffiti segment, an outline of a street light, a bright light source etc.
  • the system of the invention is operative to associate with each "(signal) recognizable object” entity 906, its recognition signature (910), its object recognition method (908), its movement function (912) and its object recognition required lighting conditions (914).
  • where entity 906 is a signal recognizable object entity, the system of the invention is operative to also associate it with its signal recognition method (922), its signal recognition signature (924) and its signal recognition required lighting conditions (926).
  • where a camera located within the vicinity of a particular unique location 902 can image certain recognizable objects 906, the system of the invention is operative to associate with the said unique location entity 902 the said recognizable object entities 906 by the one to any relation "Has a View of" 904.
  • This relation is instrumental in steps 552 (Fig. 11), 652 (Fig. 13) and 752 (Fig 15).
  • the camera is said to be "viewing" the said unique location 902.
  • entity 920 corresponds to a traffic control signal in the domain of the invention, as defined herein above.
  • where a recognized or an unrecognized signal state of some signal recognizable object entities 906 is indicative of the presence or absence of one or more states of a traffic control signal 920 in the domain of the invention, the system of the invention is operative to associate the said signal recognizable object entities 906 with the said traffic control signal entity 920 via the any to one relation "Indicates State of" 918.
  • multiple signal recognizable objects 906 may indicate the presence or absence of various states of a single traffic control signal 920.
  • the relation 918 is used in step 326 (Fig. 3).
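  • For illustration, the entities and relations of model 900 could be represented as follows; the field names are assumptions of this sketch, and the optional fields correspond to the bracketed, signal-only items of Fig. 17.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class TrafficControlSignal:          # entity 920
    signal_id: str

@dataclass
class RecognizableObject:            # entity 906
    object_recognition_method: str                 # 908
    recognition_signature: bytes                   # 910
    movement_function: Callable                    # 912
    object_recognition_lighting: List[str]         # 914
    # Populated only for *signal* recognizable objects:
    signal_recognition_method: Optional[str] = None              # (922)
    signal_recognition_signature: Optional[bytes] = None         # (924)
    signal_recognition_lighting: Optional[List[str]] = None      # (926)
    indicates_state_of: Optional[TrafficControlSignal] = None    # relation 918, "Indicates State of"
```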
  • Fig. 18 is a simplified pictorial illustration of a mounting fixture 1100, in accordance with an embodiment of the present invention.
  • Mounting fixture with offset arm and optional adjustable mirrors 1100 is configured to lower the effective vantage point of device camera 204 (Fig. 2) below an attachment element 1106.
  • an attachment element, such as a suction cup 1106, is attached to an offset arm 1108, wherein the offset shape of the arm allows the arm not to block the said device camera 204.
  • the fixture further comprises connecting stalk 1110, bearing joints 1112 and 1116.
  • the joint 1112 attaches the connecting stalk 1110 to the offset arm 1108 and permits rotation of the connecting stalk 1110 around multiple axes with respect to the offset arm 1108.
  • the fixture further optionally comprises a pair of adjustable mirrors 1114, which by their configuration allow displacement of device camera view point below the attachment element 1106 and angle adjustment.
  • the joints 1116 attach the adjustable mirrors 1114 to the connecting stalk 1110 and permit rotation of the adjustable mirrors 1114 around multiple axes with respect to the connecting stalk 1110.
  • the presence of multiple receptacles for the said joints 1116 exemplifies how in the present embodiment the joint position is allowed to vary or transit along the connecting stalk 1110, while one mirror is placed opposite the camera 204 (Fig 2) and the other mirror is placed at the desired position below the attachment element 1106, so that said attachment element does not obscure the view of the camera.
  • the rotation of joints 1112 and 1116 is achieved by implementing said joints 1112 and 1116 as ball joints.
  • the mounting fixture 1100 further comprises an adjustable holder 1118 for retaining the on board device within the mounting fixture.
  • the adjustable holder 1118 is attached to the connecting stalk 1110 by virtue of being an extrusion thereof.
  • the adjustable holder is characterized by its ability to retain the onboard device 211 (Fig. 2) or its camera 204 (Fig. 2), for example through a clamp-like mechanism, as pictured.
  • Fig. 19 is another simplified pictorial illustration of a mounting fixture 1200, similar to the described mounting fixture 1100 (Fig. 18), in accordance with an embodiment of the present invention.
  • the above description is thus a preferred embodiment.
  • the general approach of combining a GPS system with a camera to generally and then precisely locate a vehicle and to identify traffic control signals and, where these are traffic lights, to identify the state of the traffic light, i.e. whether it is red, amber or green, is a new approach designed to generate highly accurate and unambiguous warnings and alerts in real time, despite the enormous number of roads and junctions, approach angles and distances.
  • embodiments consist of a sub-system 101 which may be a unitary device 200 or a collection of interconnecting elements.
  • the sub-system 101 detects traffic signals directed to a vehicle 102.
  • the sub-system includes a GPS 203 and a forward mounted camera 204, both mounted within the vehicle 102 and in data communication with a common processor 211 having image analysis functionality that is coupled to a database, which may reside partially or completely within an onboard memory 212 or in an external database 109 in data communication with the common processor 211 via a server 108, for example.
  • the GPS 203 provides a general position (GPS reading 312) of the vehicle 102, and the database 109, 212 provides data regarding objects and their relative positions within that general position of the vehicle 102.
  • the forward mounted camera 204 captures an image of a field of view 310, and the processor with image analysis functionality may apply a unique location algorithm 316, an instantaneous viewing algorithm 318 and an instantaneous travel vector algorithm to determine the exact location of the vehicle 102.
  • a plurality of sub-systems 101 described above interact, either directly via a telecommunication link 103, or indirectly via a common computer processor such as a server 108, and access a common database 109.
  • the sub-systems include receivers and transmitters for receiving signals from the transmitters of other sub-systems and the server.
  • the system identifies candidate objects within the field of view and locates and identifies these objects by comparison with information in the database 109, 212, thereby identifying objects such as traffic signals unambiguously and outputting data that includes the identity of the said traffic signal and its location.
  • the sub-system 101 (200) may be used to identify traffic signals such as the lights of a traffic light, generating data that includes the color of the traffic light.
  • the sub-system 101 is configured to alert a driver of the vehicle 102 via at least one of a haptic signal, an audible signal and a visual signal to the driver.
  • the sub-system 101 (200) may be configured such that the output thereof directly controls the vehicle, bypassing a driver.
  • the system may use existing hardware such as off the shelf GPS units and cameras and may use an appropriately mounted and positioned smart phone. It may be implemented as a new software program which may include additional functionality such as navigation software, or may be implemented as an add-on or retrofit to existing systems or as a series of procedures within available systems such as
  • a plurality of the sub-systems each with a transceiver may generate positioning signals that are detectable by receivers of other sub-systems.
  • the GPS can only provide a very general location due to limitations of satellite positioning, including the effects of overhang, adverse weather conditions and the like. It is a particular feature of some preferred embodiments that the territory is divided into a tessellation of areas where, in one embodiment, each area is larger than the uncertainty in position resulting from the GPS and, in another embodiment, each area is larger than the worst case accuracy of the GPS (effectively half of the former). It will be appreciated that the two embodiments apply the same concept.
  • the uncertainty in position of a vehicle is thus no more than within one area and all its neighboring areas; in embodiments where the tessellation areas are larger than the uncertainty in position resulting from the GPS, within one of four areas; and in such embodiments where the territory is also divided into a tessellating array of identical hexagons, within one of three areas.
  • the database 109 or locally stored data in local storage 212 typically includes information regarding all traffic signals (including road markings, such as tag lexemes 105) within each area, such that a vehicle 102 knows that any traffic signal within the field of view must lie within one of the areas identified in accordance with the accuracy of the GPS, the image processing and the construction of the tessellation of areas.
  • a limited number of labels may thus uniquely label all items of interest in the road environment, such as road signs and markings, since three, four or, as in the tessellation 462, seven sets of labels may be used over and over, such that no two tessellation areas potentially intersected by the uncertainty in position of a vehicle use the same labels. A sketch of such a hexagonal tessellation and its coloring follows this item.
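  • The sketch below illustrates, under the assumption of an axial-coordinate hexagonal grid, how such a seven-color labeling can be computed and how a GPS fix need only be resolved among a cell and its six neighbors; it is one possible construction, not the patent's own.

```python
HEX_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_color(q, r):
    """Seven-coloring of an axial hex grid: a cell and its 6 neighbors all differ."""
    return (q + 3 * r) % 7

def candidate_cells(q, r):
    """The cell containing the GPS reading plus its neighbors (worst case search set)."""
    return [(q, r)] + [(q + dq, r + dr) for dq, dr in HEX_DIRECTIONS]

# Sanity check: no neighbor of cell (4, -2) shares its color, so labels assigned
# per color can never collide within the uncertainty neighborhood.
center = hex_color(4, -2)
assert all(hex_color(q, r) != center for q, r in candidate_cells(4, -2)[1:])
```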
  • the detected objects include road-signs.
  • the road signs include painted road signs, which may be painted on a road surface and which may contain concentric markings. Details of the painted road-signs are included in the database 109. These may be used to provide an exact location, the distortion of the image of the painted road-sign in the image stream of the camera 204 providing absolute distance and directional information to the on-board sub-system 101.
  • a method for detecting traffic signals directed to a vehicle 102 consists of:
  • a subsystem 101, 200 that includes at least a GPS 203, an output (e.g. 208, 209, 210) and a forward mounted camera 204 all mounted within the vehicle, 102 and in data communication with a common processor 211 having image analysis functionality and coupled to a database which may be in a local storage memory 212, an accessible central database 109 accessible via a server 108, for example, or distributed within both, and possibly other locations as well.
  • the general geostationary location of the vehicle is determined in absolute coordinates using the GPS 203 (step 312 of Fig. 3 also step 406 of Fig. 4).
  • capturing an image stream with a field of view with the forward mounted camera (this is step 310 of Fig. 3 and step 402 of Fig. 4), comparing the list of objects and their relative positions from the database with the image stream from the forward mounted camera 204 and determining the actual position of the vehicle 102 (this is step 418 of Fig. 4, which goes into more detail of a specific embodiment), and identifying at least one visual traffic signal (step 332) by transforming and aligning the geostationary position of the visual traffic signal with a candidate location within a relative polar coordinate system of the vehicle by mapping onto the field of view (steps 316, 318, 320). A minimal sketch of this coordinate mapping follows this item.
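  • A minimal sketch of this mapping, under a simplifying flat local east/north approximation that is not part of the claimed method, converts the absolute position of a candidate signal into range and bearing relative to the vehicle heading and tests whether it falls inside the camera's horizontal field of view.

```python
import math

def to_relative_polar(vehicle_xy, heading_deg, signal_xy):
    """Range (m) and bearing (deg, relative to the vehicle heading) of a signal."""
    dx = signal_xy[0] - vehicle_xy[0]   # east offset, metres
    dy = signal_xy[1] - vehicle_xy[1]   # north offset, metres
    rng = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy))           # 0 deg = north, clockwise positive
    rel_bearing = (bearing - heading_deg + 180) % 360 - 180
    return rng, rel_bearing

def in_field_of_view(rel_bearing_deg, fov_deg=60.0):
    return abs(rel_bearing_deg) <= fov_deg / 2

rng, brg = to_relative_polar((0.0, 0.0), 30.0, (40.0, 60.0))
print(round(rng, 1), round(brg, 1), in_field_of_view(brg))   # 72.1 3.7 True
```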
  • the traffic signal comprises a traffic light, or tag lexeme 105.
  • an alert (338, 346) is output to a driver of the vehicle 102.
  • the alert may comprise at least one of a haptic signal 210, an audible signal 209 and a visual signal 208 to the driver.
  • data from a plurality of sub-systems 101 may be received by a computer processor 211, 108, which corrects information provided to a sub-system 101 of a vehicle 102 of interest.
  • crowd GPS is shown as step 314 of Fig. 3.
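  • The patent does not spell out the crowd-GPS computation, so the following is a hedged illustration of one plausible form: sub-systems that have already fixed their exact position report the offset between their raw GPS reading and that fixed position, and the median offset for the area is used to correct other readings.

```python
from statistics import median

def crowd_correction(reports):
    """reports: list of (gps_east, gps_north, true_east, true_north) in metres."""
    if not reports:
        return 0.0, 0.0
    east_err = median(t_e - g_e for g_e, _g_n, t_e, _t_n in reports)
    north_err = median(t_n - g_n for _g_e, g_n, _t_e, t_n in reports)
    return east_err, north_err

def apply_correction(gps_east, gps_north, correction):
    return gps_east + correction[0], gps_north + correction[1]

reports = [(10.0, 5.0, 12.5, 4.0), (30.0, -2.0, 32.0, -3.2), (7.0, 9.0, 9.6, 7.9)]
print(apply_correction(100.0, 50.0, crowd_correction(reports)))   # approximately (102.5, 48.9)
```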
  • a base station comprising a common computer processor or server 108 and database 109, and a receiver and transmitter (or transceiver) for coupling to a network 107, such that the base station receives signals from, and transmits signals to, each on-board sub-system 101.
  • the territory is divided into a tessellation of areas 424 such that each area is larger than the worst case position accuracy resulting from the GPS 422, such that the uncertainty in position of a vehicle is no more than within one area or any of its neighboring areas, and the processor compares data from the database that relates to those areas 426.
  • a tessellation grid is colored in preferred embodiments of the invention so that each area has at most one neighbor of a given color and the tags are segmented for reuse by colors assigned to areas where these tags are allowed to be applied.
  • This is one implementation of the general principle of tag reuse limitation whereby in any uncertainty area resulting from GPS and image processing, a given tag (or traffic signal) appears at most once.
  • the database 109 comprises information regarding all traffic signals, A1-A9, B1-B7, C1-C3 of Fig. 5 (the latter specifically being tag lexeme road markings), for example, within each area, such that a vehicle knows that any such traffic signal within the field of view must lie within the area containing the GPS reading or one of its neighboring areas.
  • GPS accuracy varies with weather. Warning systems need to be robust and trustworthy; they should issue warnings in close to 100% of cases regardless of weather.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The invention concerns a system for improving road safety comprising a sub-system for detecting a traffic signal directed at a vehicle, the sub-system comprising: a general positioning system, an output and an outward-facing camera, all mounted within the vehicle and in data communication with a processor having image analysis functionality coupled to a database. The general positioning system is for providing a general position of the vehicle; the database comprises data regarding the appearance and relative positions of traffic signs and locations, such that a traffic sign can be identified within an area by combining the uncertainty of the general positioning system with the imaged area; the outward-facing camera is for capturing a field-of-view image; the processor with image analysis functionality is for identifying objects in the field of view in order to locate objects by comparison with information in the database, thereby identifying an exact position of the vehicle; and the output is for outputting a driver warning.
PCT/IL2015/051240 2014-12-24 2015-12-22 Système et procédé permettant d'empêcher les accidents WO2016103258A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462096778P 2014-12-24 2014-12-24
US62/096,778 2014-12-24

Publications (1)

Publication Number Publication Date
WO2016103258A1 true WO2016103258A1 (fr) 2016-06-30

Family

ID=56149382

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2015/051240 WO2016103258A1 (fr) 2014-12-24 2015-12-22 Système et procédé permettant d'empêcher les accidents

Country Status (1)

Country Link
WO (1) WO2016103258A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018065325A1 (fr) * 2016-10-05 2018-04-12 Bayerische Motoren Werke Aktiengesellschaft Détermination de caractéristiques d'identification d'une victime accidentée lors d'un accident de la victime accidentée avec un véhicule automobile
US10410074B2 (en) 2016-10-25 2019-09-10 Ford Global Technologies, Llc Systems and methods for locating target vehicles
CN110531752A (zh) * 2018-05-23 2019-12-03 通用汽车环球科技运作有限责任公司 用于自主车辆地图维护的众包施工区域检测

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1383098A1 (fr) * 2002-07-09 2004-01-21 Accenture Global Services GmbH Dispositif de détection automatique de panneaux de signalisation routière
US20140362221A1 (en) * 2004-04-15 2014-12-11 Magna Electronics Inc. Vision system for vehicle
US20060034484A1 (en) * 2004-08-16 2006-02-16 Claus Bahlmann Method for traffic sign detection
EP2383679A1 (fr) * 2006-12-06 2011-11-02 Mobileye Technologies Limited Détection et reconnaissance des signes de la route
US20100103040A1 (en) * 2008-10-26 2010-04-29 Matt Broadbent Method of using road signs to augment Global Positioning System (GPS) coordinate data for calculating a current position of a personal navigation device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018065325A1 (fr) * 2016-10-05 2018-04-12 Bayerische Motoren Werke Aktiengesellschaft Détermination de caractéristiques d'identification d'une victime accidentée lors d'un accident de la victime accidentée avec un véhicule automobile
CN109690644A (zh) * 2016-10-05 2019-04-26 宝马股份公司 用于在事故参与者与机动车发生事故时确定事故参与者的识别特征的方法
US10625698B2 (en) 2016-10-05 2020-04-21 Bayerische Motoren Werke Aktiengesellschaft Determination of identifying characteristics of an accident participant in the event of an accident involving the accident participant and a motor vehicle
US10410074B2 (en) 2016-10-25 2019-09-10 Ford Global Technologies, Llc Systems and methods for locating target vehicles
US11093765B2 (en) 2016-10-25 2021-08-17 Ford Global Technologies, Llc Systems and methods for locating target vehicles
CN110531752A (zh) * 2018-05-23 2019-12-03 通用汽车环球科技运作有限责任公司 用于自主车辆地图维护的众包施工区域检测
CN110531752B (zh) * 2018-05-23 2022-10-21 通用汽车环球科技运作有限责任公司 用于自主车辆地图维护的众包施工区域检测

Similar Documents

Publication Publication Date Title
US11983894B2 (en) Determining road location of a target vehicle based on tracked trajectory
US20210063162A1 (en) Systems and methods for vehicle navigation
CN106767853B (zh) 一种基于多信息融合的无人驾驶车辆高精度定位方法
US10677597B2 (en) Method and system for creating a digital map
JP6241422B2 (ja) 運転支援装置、運転支援方法、および運転支援プログラムを記憶する記録媒体
JP2020115136A (ja) 自律車両ナビゲーションのための疎な地図
CN106463051B (zh) 信号机识别装置以及信号机识别方法
KR102166512B1 (ko) 주변 환경에서 자동차를 정밀 위치 추적하기 위한 방법, 디바이스, 맵 관리 장치 및 시스템
WO2020242945A1 (fr) Systèmes et procédés de navigation de véhicule sur la base d'une analyse d'image
CN110491156A (zh) 一种感知方法、装置及系统
JP2007010335A (ja) 車両位置検出装置及びシステム
KR20150064909A (ko) Ais 표적정보를 이용하여 선박을 추적하는 장치 및 그 방법
US20220355818A1 (en) Method for a scene interpretation of an environment of a vehicle
WO2016103258A1 (fr) Système et procédé permettant d'empêcher les accidents
Choi et al. In‐Lane Localization and Ego‐Lane Identification Method Based on Highway Lane Endpoints
CN110717007A (zh) 应用路侧特征辨识的图资定位系统及方法
CN113269977A (zh) 地图生成用数据收集装置以及地图生成用数据收集方法
JP7468075B2 (ja) 管制制御システム
KR102603877B1 (ko) 차량의 정밀 측위 방법 및 장치
CN113390422B (zh) 汽车的定位方法、装置及计算机存储介质
US20240125604A1 (en) Method for operating a sensor circuit in a motor vehicle, a sensor circuit, and a motor vehicle with the sensor circuit
Kojima et al. High accuracy local map generation method based on precise trajectory from GPS Doppler
KR101738351B1 (ko) 차량용 항법 장치 및 방법
JP2023135409A (ja) 車両制御装置、車両制御方法及び車両制御用コンピュータプログラム
JP2023169631A (ja) 車両制御装置、車両制御方法及び車両制御用コンピュータプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15872099

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15872099

Country of ref document: EP

Kind code of ref document: A1