WO2022104294A1 - Advanced driver assistance system (adas) with camera on windshield and mobile device - Google Patents

Advanced driver assistance system (adas) with camera on windshield and mobile device

Info

Publication number
WO2022104294A1
Authority
WO
WIPO (PCT)
Prior art keywords
mobile device
camera
images
vehicle
computing system
Prior art date
Application number
PCT/US2021/065467
Other languages
French (fr)
Inventor
Xihua DONG
Zhebin ZHANG
Hongyu Sun
Jian Sun
Original Assignee
Innopeak Technology, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Innopeak Technology, Inc. filed Critical Innopeak Technology, Inc.
Priority to PCT/US2021/065467 priority Critical patent/WO2022104294A1/en
Publication of WO2022104294A1 publication Critical patent/WO2022104294A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095Predicting travel path or likelihood of collision
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention

Definitions

  • the present disclosure relates, in general, to methods, systems, and apparatuses for implementing advanced driver assistance system ("ADAS"), and, more particularly, to methods, systems, and apparatuses for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device (e.g., a smartphone, a mobile phone, a tablet computer, etc.).
  • ADAS features include keeping a vehicle centered in its lane, bringing a vehicle to a complete stop in an emergency, identifying approaching vehicles or pedestrians, and much more.
  • ADAS implementations fall under one of two major categories: (a) designated or dedicated ADAS hardware/software platforms; or (b) ADAS implementation using only a cellphone.
  • The first category has been adopted by mainstream automakers (e.g., GM, BMW, and Tesla) as well as startup companies (e.g., Waymo and Cruise, etc.).
  • a typical implementation of ADAS may involve various system components (e.g., radar, lidar, sensors, cameras, and CPU, etc.).
  • Such a complex solution makes it difficult or even impossible to implement on existing vehicles without pre-installed hardware.
  • ADAS implemented with only a cellphone may perform well under good conditions, but is ill-suited for less-than-ideal conditions.
  • implementation with only a cellphone may require mounting the cell phone on the windshield or some other place that may block the view of the driver, which may be a violation of law in some locations (e.g., in California, etc.).
  • A careful calibration of the camera is generally required for proper operation of an ADAS system whenever the relative position of the camera and the vehicle is changed. This requirement makes an implementation with only a cellphone almost impractical, since a cellphone is supposed to be "mobile."
  • the techniques of this disclosure generally relate to tools and techniques for implementing advanced driver assistance system (“ADAS”), and, more particularly, to methods, systems, and apparatuses for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device.
  • a method may comprise receiving, using a computing system on a mobile device, one or more first images from a first camera that is mounted to a first fixed position on a windshield of a first vehicle; analyzing, using the computing system on the mobile device, the received one or more first images to identify and highlight one or more first objects captured by the first camera; generating, using the computing system on the mobile device, one or more first fused images by fusing the identified and highlighted one or more first objects with the one or more first images; analyzing, using the computing system on the mobile device, the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle; and based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, performing, using the computing system on the mobile device, one or more driver assistance tasks.
  • a mobile device might comprise a computing system and a non-transitory computer readable medium communicatively coupled to the computing system.
  • the non-transitory computer readable medium might have stored thereon computer software comprising a set of instructions that, when executed by the computing system, causes the mobile device to: receive one or more first images from a first camera that is mounted to a first fixed position on a windshield of a first vehicle; analyze the received one or more first images to identify and highlight one or more first objects captured by the first camera; generate one or more first fused images by fusing the identified and highlighted one or more first objects with the one or more first images; analyze the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle; and based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, perform one or more driver assistance tasks.
  • a system might comprise a first camera mounted to a first fixed position on a windshield of a first vehicle and a mobile device.
  • the mobile device may comprise a computing system and a first non-transitory computer readable medium communicatively coupled to the computing system.
  • the first non-transitory computer readable medium might have stored thereon computer software comprising a first set of instructions that, when executed by the computing system, causes the mobile device to: receive one or more first images from the first camera; analyze the received one or more first images to identify and highlight one or more first objects captured by the first camera; generate one or more first fused images by fusing the identified and highlighted one or more first objects with the one or more first images; analyze the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle; and based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, perform one or more driver assistance tasks.
  • Fig. 1 is a schematic diagram illustrating a system for implementing advanced driver assistance system ("ADAS") with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.
  • FIG. 2 is a schematic block flow diagram illustrating a non-limiting example of a process for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.
  • Fig. 3A is an image illustrating a non-limiting example of the use of a windshield camera in conjunction with a mobile device during implementation of ADAS, in accordance with various embodiments.
  • Fig. 3B is an image illustrating a non-limiting example of a fused image that is generated during implementation of ADAS with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.
  • FIGs. 4A-4G are flow diagrams illustrating a method for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.
  • FIG. 5 is a block diagram illustrating an example of computer or system hardware architecture, in accordance with various embodiments.
  • Fig. 6 is a block diagram illustrating a networked system of computers, computing systems, or system hardware architecture, which can be used in accordance with various embodiments.
  • Various embodiments provide tools and techniques for implementing advanced driver assistance system (“ADAS”), and, more particularly, to methods, systems, and apparatuses for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device (e.g., a smartphone, a mobile phone, a tablet computer, etc.).
  • a computing system on a mobile device may receive one or more first images from a first camera that is mounted to a first fixed position on a windshield of a first vehicle.
  • the computing system may analyze the received one or more first images to identify and highlight one or more first objects captured by the first camera.
  • the computing system may generate one or more first fused images by fusing the identified and highlighted one or more first objects with the one or more first images, and may analyze the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle. Based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, the computing system may perform one or more driver assistance tasks.
  • the computing system may comprise at least one of a driver assistance system, an object detection system, an object detection and ranging system, a positioning and mapping system, an image processing system, an image data fusing system, a graphics engine, a processor on the mobile device, at least one central processing unit (“CPU”) on the mobile device, at least one graphics processing unit (“GPU”) on the mobile device, a machine learning system, an artificial intelligence (“AI”) system, a deep learning system, a neural network, a convolutional neural network (“CNN”), a deep neural network (“DNN”), or a fully convolutional network (“FCN”), and/or the like.
  • the mobile device may comprise at least one of a smartphone, a tablet computer, a display device, an augmented reality (“AR”) device, a virtual reality (“VR”) device, or a mixed reality (“MR”) device, and/or the like.
  • receiving the one or more first images from the first camera may comprise one of: receiving the one or more first images from the first camera via a wireless communication link between the first camera and the mobile device; or receiving the one or more first images from the first camera via a wired cable communication link between the first camera and the mobile device.
  • receiving the one or more first images from the first camera may comprise receiving one or more first video data from the first camera, and the computing system may further determine an estimated speed of the first vehicle; and adjust at least one of frame rate or resolution of transmission of the one or more first video data from the first camera as a function of the estimated speed of the first vehicle.
  • the frame rate may be adjusted in a manner proportional to the estimated speed of the first vehicle and the resolution may be adjusted in a manner inversely proportional to the estimated speed of the first vehicle.
  • determining the estimated speed of the first vehicle may comprise determining an estimated speed of the first vehicle based on at least one of global positioning system (“GPS”) data, global navigation satellite system (“GNSS”) data, changes in image recognition-based landmark identification system data, changes in telecommunications signal triangulation-based location identification system data, changes in radar-based location identification system data, changes in lidar-based location identification system data, or speed data obtained from a vehicle computing system of the first vehicle via a communications link between the vehicle computing system and the computing system on the mobile device, and/or the like.
  • the computing system may pre-process the received one or more first images using one or more image processing operations to prepare the received one or more first images for analysis, wherein the one or more image processing operations may comprise at least one of pre-whitening, resizing, aligning, cropping, or formatting, and/or the like.
  • analyzing the received one or more first images to identify and highlight the one or more first objects captured by the first camera may comprise at least one of: identifying and highlighting one or more lanes of a roadway using a lane detection system; identifying and highlighting one or more landmarks along the roadway using a landmark detection system; or identifying and highlighting one or more objects on or near the roadway using an object detection system, the one or more objects comprising at least one of one or more people, one or more animals, one or more second vehicles, one or more traffic signs, one or more traffic lights, one or more roadway obstructions, or one or more other objects, and/or the like; and/or the like.
  • generating the one or more first fused images may comprise generating one or more image overlays based at least in part on analysis of the one or more first images, the one or more image overlays comprising at least one of text-based data, image-based data, or graphics-based data associated with information regarding at least one object among the identified one or more first objects, and/or the like; and fusing the one or more image overlays with the identified and highlighted one or more first objects and the one or more first images.
  • the at least one first alert condition may each comprise at least one of driving on a lane marker along a roadway along which the first vehicle is travelling, drifting toward an adjacent lane on the roadway, driving between lanes on the roadway, drifting toward a shoulder of the roadway, driving on the shoulder of the roadway, driving toward a median along the roadway, traffic congestion detected ahead along the roadway, a traffic accident detected ahead along the roadway, a construction site detected ahead along the roadway, one or more people detected on or near the roadway, one or more animals detected on or near the roadway, one or more objects detected on or near the roadway, a tracked weather event detected along or near the roadway, a natural hazard detected ahead, a manmade hazard detected ahead, one or more people potentially intercepting the first vehicle along the roadway, one or more animals potentially intercepting the first vehicle along the roadway, one or more objects potentially intercepting the first vehicle along the roadway, or one or more third vehicles potentially intercepting the first vehicle along the roadway, and/or the like.
  • performing the one or more driver assistance tasks may comprise at least one of: presenting the one or more first fused images on a display device on the mobile device; generating a graphical display depicting one or more of the at least one first alert condition or the one or more first fused images, and presenting the generated graphical display on the display device; generating a text-based message describing one or more of the at least one first alert condition or the one or more first fused images, and presenting the text-based message on the display device; or generating at least one audio message regarding one or more of the at least one first alert condition or the one or more first fused images, and presenting the at least one audio message on at least one audio speaker on the mobile device; and/or the like.
  • the computing system may receive one or more second images from at least one second camera, the at least one second camera comprising at least one of a third camera that is mounted to a second fixed position on the windshield of the first vehicle, a fourth camera that is integrated with the mobile device with the mobile device mounted to a third position on the windshield of the first vehicle and with the fourth camera pointed in front of the first vehicle, or a fifth camera that is mounted to a fourth fixed position on a rear window of the first vehicle, and/or the like; and may analyze the one or more second images, wherein the one or more second images from one of the third camera or the fourth camera may be analyzed to determine differences with the one or more first images from the first camera and to obtain stereoscopic vision or three-dimensional ("3D") data based on the determined differences, and wherein the one or more second images from the fifth camera may be analyzed to obtain rearview data based on detection of objects behind the first vehicle.
  • generating the one or more first fused images may comprise generating one or more second fused images by fusing at least one of the identified and highlighted one or more first objects, the 3D data, or the rearview data with the one or more first images, and/or the like.
  • the computing system may receive one or more object detection signal data from at least one of one or more radar sensors or one or more lidar sensors that are mounted on the first vehicle and that are communicatively coupled to the mobile device; and may analyze the received one or more object detection signal data to identify and highlight one or more second objects and to determine whether the one or more second objects correspond to the one or more first objects, wherein any of the one or more second objects that are determined to correspond to any of the one or more first objects may be merged with said one or more first objects.
  • generating the one or more first fused images may comprise generating one or more third fused images by fusing at least one of the identified and highlighted one or more first objects or the identified and highlighted one or more second objects with the one or more first images.
  • a system and method are provided for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device (e.g., a smartphone, a mobile phone, a tablet computer, etc.).
  • This allows for improvements over conventional ADAS systems that fall under the two categories of: (a) designated or dedicated ADAS hardware/software platforms; and (b) ADAS implementation using only a cellphone.
  • These improvements are in terms of availability, cost, user experience, and performance.
  • the combination windshield camera and mobile device ADAS platform according to the various embodiments may be implemented on any existing vehicle even without designated or dedicated ADAS hardware.
  • the combination windshield camera and mobile device ADAS platform according to the various embodiments is a low-cost implementation because windshield cameras are inexpensive, and widely available dashcams can be modified to work as the required windshield camera for implementation according to the various embodiments.
  • the combination windshield camera and mobile device ADAS platform according to the various embodiments brings better user experience since users can put their cell phone at any convenient place, and the system is easy to use while providing the desired performance (as discussed below).
  • the combination windshield camera and mobile device ADAS platform may improve performance over cellphone-only implementations without the exorbitant costs of designated or dedicated ADAS systems in terms of the following points: (i) commonly available night vision functionalities of windshield-mounted cameras allow for video data suitable for ADAS processing even under night and/or severe conditions; (ii) fixed-mounted windshield cameras make camera calibration a one-time task; and (iii) optional views from a second camera (e.g., another windshield camera or the phone's camera) allow for stereoscopic or 3D vision functionalities; and/or the like. Further, the various embodiments provide a low-latency communication scheme between the windshield camera and cell phone that enhances ADAS implementation.
  • some embodiments can improve the functioning of user equipment or systems themselves (e.g., object detection systems, camera- mobile device video communication systems, driver assistance systems, etc.), for example, by receiving, using a computing system on a mobile device, one or more first images from a first camera that is mounted to a first fixed position on a windshield of a first vehicle; analyzing, using the computing system on the mobile device, the received one or more first images to identify and highlight one or more first objects captured by the first camera; generating, using the computing system on the mobile device, one or more first fused images by fusing the identified and highlighted one or more first objects with the one or more first images; analyzing, using the computing system on the mobile device, the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle; and based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, performing, using the computing system on the mobile device, one or more driver assistance tasks; and/or the like.
  • Figs. 1-6 illustrate some of the features of the method, system, and apparatus for implementing advanced driver assistance system ("ADAS"), and, more particularly, to methods, systems, and apparatuses for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device, as referred to above.
  • the methods, systems, and apparatuses illustrated by Figs. 1-6 refer to examples of different embodiments that include various components and steps, which can be considered alternatives or which can be used in conjunction with one another in the various embodiments.
  • the description of the illustrated methods, systems, and apparatuses shown in Figs. 1-6 is provided for purposes of illustration and should not be considered to limit the scope of the different embodiments.
  • Fig. 1 is a schematic diagram illustrating a system 100 for implementing advanced driver assistance system ("ADAS") with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.
  • system 100 may comprise a vehicle 105 and a mobile device 110 removably located therein.
  • the mobile device 110 may include, but is not limited to, computing system 115, communications system 120, one or more cameras 125, a display screen 130, and/or an audio speaker(s) 135 (optional), and/or the like.
  • the computing system 115 may include, without limitation, at least one of a driver assistance system (e.g., driver assistance system 115a, or the like), an object detection system or an object detection and ranging system (e.g., object detection system 115b, or the like), a positioning and mapping system, an image processing system, an image data fusing system, a graphics engine, a processor on the mobile device (e.g., one or more processors 115c, including, but not limited to, one or more central processing units (“CPUs"), graphics processing units (“GPUs”), and/or one or more other processors, and/or the like), a machine learning system (e.g., machine learning system 115d, including, but not limited to, at least one of an artificial intelligence (“AI”) system, a machine learning system, a deep learning system, a neural network, a convolutional neural network (“CNN”), a deep neural network (“DNN”), or a fully convolutional network (“FCN”), and/or the like), and/or the like.
  • the mobile device 110 may include, but is not limited to, at least one of a smartphone, a tablet computer, a display device, an augmented reality (“AR”) device, a virtual reality (“VR”) device, or a mixed reality (“MR”) device, and/or the like.
  • System 100 may further comprise one or more cameras 140a-140c that may be mounted in respective fixed positions on the windshield 145a at the front of the vehicle 105 or on the rear window 145b at the rear of the vehicle 105 (e.g., first camera 140a and/or second camera 140b (optional) may be mounted at respective first fixed position and second fixed position on the windshield 145a, while third camera 140c (optional) may be mounted at a third fixed position on the rear window 145b).
  • Cameras 140a, 140b, and/or 125 may capture images or videos in front of the vehicle 105, including images or videos of one or more objects 150a-150n and/or one or more landmarks 155 that may be in front of vehicle 105, as well as lanes on a roadway on which the vehicle 105 may be travelling.
  • Camera 140c may capture images or videos behind the vehicle 105, including images or videos of one or more objects 160a-160n that may be behind vehicle 105.
  • system 100 may further comprise a location determination system 165, which may communicate with a remote location signal source(s) 170 over network(s) 175.
  • location determination system 165 (and corresponding remote location signal source(s) 170) may utilize location determination data including, but not limited to, at least one of global positioning system (“GPS”) data, global navigation satellite system (“GNSS”) data, changes in image recognition-based landmark identification system data, changes in telecommunications signal triangulation-based location identification system data, and/or the like.
  • location determination system 165 may be used in conjunction with one or more radar sensors 180 (optional) and/or one or more lidar sensors 185 (optional) on vehicle 105, by using location determination data including, but not limited to, at least one of changes in radar-based location identification system data, changes in lidar-based location identification system data, and/or the like.
  • system 100 may further comprise an on-board diagnostics ("OBD2") scanner/transceiver 190 (optional) that may be used to access status data of various vehicle sub-systems, in some cases, via vehicle computing system 195, or the like.
  • communications system 120 may communicatively couple with one or more of first camera 140a or second camera 140b via wired cable connection (such as depicted in Fig. 1 by connector lines between communications system 120 and each of first camera 140a and second camera 140b, or the like) or via wireless communication link (such as depicted in Fig. 1 by lightning bolt symbols between communications system 120 and each of first camera 140a and second camera 140b, or the like).
  • communications system 120 may also communicatively couple with one or more of third camera 140c, location determination system 165, network(s) 175, the one or more radar sensors 180, the one or more lidar sensors 185, and/or the OBD2 scanner/transceiver 190 via wireless communication link(s) (such as depicted in Fig. 1 by lightning bolt symbols between communications system 120 and each of these components, or the like).
  • the wireless communications may include wireless communications using protocols including, but not limited to, at least one of Bluetooth™ communications protocol, WiFi communications protocol, or other 802.11 suite of communications protocols, ZigBee communications protocol, Z-wave communications protocol, or other 802.15.4 suite of communications protocols, cellular communications protocol (e.g., 3G, 4G, 4G LTE, 5G, etc.), or other suitable communications protocols, and/or the like.
  • the network(s) 175 may each include a local area network (“LAN”), including, without limitation, a fiber network, an Ethernet network, a Token-Ring™ network, and/or the like; a wide-area network (“WAN”); a wireless wide area network (“WWAN”); a virtual network, such as a virtual private network (“VPN”); the Internet; an intranet; an extranet; a public switched telephone network (“PSTN”); an infra-red network; a wireless network, including, without limitation, a network operating under any of the IEEE 802.11 suite of protocols, the Bluetooth™ protocol known in the art, and/or any other wireless protocol; and/or any combination of these and/or other networks.
  • the network(s) 175 might include an access network of the service provider (e.g., an Internet service provider ("ISP")). In another embodiment, the network(s) 175 may include a core network of the service provider, and/or the Internet.
  • computing system 115 may receive one or more first images from a first camera (e.g., first camera 140a, or the like) that is mounted to a first fixed position on a windshield (e.g., windshield 145a, or the like) of a first vehicle (e.g., vehicle 105, or the like).
  • the computing system may analyze the received one or more first images to identify and highlight one or more first objects captured by the first camera (e.g., objects 150a-150n, landmark(s) 155, and/or the like).
  • the computing system may generate one or more first fused images by fusing the identified and highlighted one or more first objects with the one or more first images, and may analyze the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle. Based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, the computing system may perform one or more driver assistance tasks.
  • the at least one first alert condition may each include, but is not limited to, at least one of driving on a lane marker along a roadway along which the first vehicle is travelling, drifting toward an adjacent lane on the roadway, driving between lanes on the roadway, drifting toward a shoulder of the roadway, driving on the shoulder of the roadway, driving toward a median along the roadway, traffic congestion detected ahead along the roadway, a traffic accident detected ahead along the roadway, a construction site detected ahead along the roadway, one or more people detected on or near the roadway, one or more animals detected on or near the roadway, one or more objects detected on or near the roadway, a tracked weather event detected along or near the roadway, a natural hazard detected ahead, a manmade hazard detected ahead, one or more people potentially intercepting the first vehicle along the roadway, one or more animals potentially intercepting the first vehicle along the roadway, one or more objects potentially intercepting the first vehicle along the roadway, or one or more third vehicles potentially intercepting the first vehicle along the roadway, and/or the like.
  • performing the one or more driver assistance tasks may include, without limitation, at least one of: presenting the one or more first fused images on a display device on the mobile device; generating a graphical display depicting one or more of the at least one first alert condition or the one or more first fused images, and presenting the generated graphical display on the display device; generating a text-based message describing one or more of the at least one first alert condition or the one or more first fused images, and presenting the text-based message on the display device; or generating at least one audio message regarding one or more of the at least one first alert condition or the one or more first fused images, and presenting the at least one audio message on at least one audio speaker on the mobile device; and/or the like.
  • receiving the one or more first images from the first camera may comprise receiving one or more first video data from the first camera.
  • the computing system may determine an estimated speed of the first vehicle, and may adjust at least one of frame rate or resolution of transmission of the one or more first video data from the first camera as a function of the estimated speed of the first vehicle.
  • the frame rate may be adjusted in a manner proportional to the estimated speed of the first vehicle, while the resolution may be adjusted in a manner inversely proportional to the estimated speed of the first vehicle (i.e., f ∝ v and p ∝ 1/v, where f denotes the frame rate, p denotes the resolution, and v denotes the estimated speed of the first vehicle).
  • any suitable values for f and p may be used, including, but not limited to, frame rates in the ranges between 15 and 30 fps, between 24 and 30 fps, between 10 and 30 fps, or between 15 and 120 fps, or the like, and progressive scan resolution values including, but not limited to, 480, 576, 640, 720, 1080, or 2160, or the like.
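  • As an illustrative sketch of this proportional/inverse-proportional adjustment, the following Python snippet maps an estimated speed to a requested frame rate and vertical resolution. The helper name, the clamping logic, and the maximum-speed normalization are assumptions added here for illustration; only the direction of the relationships and the example value ranges come from the disclosure above.
```python
def adapt_stream_settings(speed_kmh: float,
                          min_fps: float = 15.0,
                          max_fps: float = 30.0,
                          resolutions=(2160, 1080, 720, 576, 480),
                          max_speed_kmh: float = 120.0):
    """Pick a frame rate (proportional to speed) and a resolution
    (inversely proportional to speed) for the camera-to-phone stream."""
    ratio = max(0.0, min(speed_kmh / max_speed_kmh, 1.0))
    # Frame rate grows with speed (f ~ v), clamped to [min_fps, max_fps].
    fps = min_fps + ratio * (max_fps - min_fps)
    # Resolution shrinks with speed (p ~ 1/v), chosen from the listed set.
    idx = min(int(ratio * len(resolutions)), len(resolutions) - 1)
    return fps, resolutions[idx]

# Example: at 100 km/h the stream is requested at ~27.5 fps and 480p.
print(adapt_stream_settings(100.0))
```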
  • the wireless communication link may include, without limitation, a WiFi communication link, or the like.
  • the built-in WiFi of the first camera (or of the at least one second camera) (if available) may be used, with the mobile device being set as a client.
  • the built-in WiFi of the mobile device (if available) may be used, with the first camera (or the at least one second camera) being set as a client. In this manner, a reliable and high-speed connection may be provided to enable real-time (or near-real-time) video transmission between the first camera (or the at least one second camera) and the mobile device.
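  • A minimal sketch of the mobile device acting as a WiFi client of the windshield camera is shown below. The framing convention (a 4-byte length prefix before each encoded frame), the port number, and the helper names are assumptions made for illustration; an actual camera might instead expose a standard streaming protocol such as RTSP.
```python
import socket
import struct


def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket (or fewer if the peer closes)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            break
        buf += chunk
    return buf


def receive_encoded_frames(camera_ip: str, port: int = 8554):
    """Yield encoded video frames sent by the windshield camera over WiFi,
    assuming a simple length-prefixed framing (4-byte big-endian size)."""
    with socket.create_connection((camera_ip, port)) as sock:
        while True:
            header = _recv_exact(sock, 4)
            if len(header) < 4:
                break                              # camera closed the link
            (length,) = struct.unpack(">I", header)
            frame = _recv_exact(sock, length)
            if len(frame) < length:
                break
            yield frame                            # hand off to the video decoder
```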
  • determining the estimated speed of the first vehicle may include, without limitation, determining an estimated speed of the first vehicle based on at least one of GPS data, GNSS data, changes in image recognition-based landmark identification system data, changes in telecommunications signal triangulation-based location identification system data, changes in radar-based location identification system data, changes in lidar-based location identification system data, or speed data obtained from a vehicle computing system of the first vehicle via a communications link between the vehicle computing system and the computing system on the mobile device, and/or the like.
  • At least one of GPS data, GNSS data, image recognition-based landmark identification system data or changes therein, and/or telecommunications signal triangulation-based location identification system data or changes therein, or the like may be received from location signal source(s) 170 via network(s) 175, location determination system 165, and communications system 120, or the like.
  • At least one of radar-based location identification system data or changes therein, lidar-based location identification system data or changes therein, or speed data obtained from the vehicle computing system (e.g., vehicle computing system 195) of the first vehicle, or the like may be received from the one or more radar sensors 180, the one or more lidar sensors 185, and the OBD2 scanner/transceiver 190, respectively, via wireless communications links (denoted in Fig. 1 by lightning bolt symbols) with communications system 120, or the like.
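  • As one hedged example of estimating vehicle speed from location data, the sketch below computes speed from two successive GPS/GNSS fixes using the haversine distance. The fix format (latitude, longitude, timestamp in seconds) is an assumed convention; speed could equally be read from the vehicle via the OBD2 scanner/transceiver 190 as described above.
```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def estimate_speed_kmh(fix_a, fix_b):
    """Estimate speed in km/h from two (lat, lon, timestamp_seconds) fixes."""
    distance_m = haversine_m(fix_a[0], fix_a[1], fix_b[0], fix_b[1])
    dt_s = fix_b[2] - fix_a[2]
    return 3.6 * distance_m / dt_s if dt_s > 0 else 0.0


# Example: two fixes taken one second apart.
print(estimate_speed_kmh((37.7749, -122.4194, 0.0), (37.7751, -122.4194, 1.0)))
```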
  • the computing system may pre-process the received one or more first images using one or more image processing operations to prepare the received one or more first images for analysis.
  • the one or more image processing operations may include, without limitation, at least one of pre-whitening, resizing, aligning, cropping, or formatting, and/or the like.
  • analyzing the received one or more first images to identify and highlight the one or more first objects captured by the first camera may comprise at least one of: identifying and highlighting one or more lanes of a roadway using a lane detection system; identifying and highlighting one or more landmarks (e.g., landmark(s) 155, or the like) along the roadway using a landmark detection system; or identifying and highlighting one or more objects (e.g., objects 150a-150n and/or 160a-160n, or the like) on or near the roadway using an object detection system, the one or more objects including, without limitation, at least one of one or more people, one or more animals, one or more second vehicles, one or more traffic signs, one or more traffic lights, one or more roadway obstructions, or one or more other objects, and/or the like; and/or the like.
  • generating the one or more first fused images may comprise generating one or more image overlays based at least in part on analysis of the one or more first images, the one or more image overlays including, but not limited to, at least one of text-based data, image-based data, or graphics-based data associated with information regarding at least one object among the identified one or more first objects, and/or the like; and fusing the one or more image overlays with the identified and highlighted one or more first objects and the one or more first images.
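  • A minimal sketch of generating such a fused image follows, assuming OpenCV is available on the mobile device and that detections are represented as dicts with "box", "label", and "score" keys; this data structure is an illustrative assumption, not one prescribed by the disclosure.
```python
import cv2


def fuse_detections(frame, detections):
    """Return a copy of the frame with highlight boxes and text/graphics
    overlays drawn for each identified object."""
    fused = frame.copy()
    for det in detections:
        x1, y1, x2, y2 = det["box"]
        # Highlight the detected object.
        cv2.rectangle(fused, (x1, y1), (x2, y2), (0, 255, 0), 2)
        # Overlay text-based information about the object.
        label = "{} {:.2f}".format(det["label"], det["score"])
        cv2.putText(fused, label, (x1, max(y1 - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return fused
```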
  • the computing system may receive one or more second images from at least one second camera (e.g., second camera 140b, camera(s) 125, or third camera 140c, or the like).
  • the at least one second camera may include, without limitation, at least one of a third camera (e.g., second camera 140b, or the like) that is mounted to a second fixed position on the windshield (e.g., windshield 145a) of the first vehicle, a fourth camera (e.g., camera(s) 125, or the like) that is integrated with the mobile device (e.g., mobile device 110, or the like) with the mobile device mounted to a third position on the windshield of the first vehicle and with the fourth camera pointed in front of the first vehicle, or a fifth camera (e.g., third camera 140c, or the like) that is mounted to a fourth fixed position on a rear window (e.g., rear window 145b, or the like) of the first vehicle, and/or the like.
  • the computing system may analyze the one or more second images.
  • the one or more second images from one of the third camera (e.g., second camera 140b, or the like) or the fourth camera (e.g., camera(s) 125, or the like) may be analyzed to determine differences with the one or more first images from the first camera (e.g., first camera 140a, or the like) and to obtain stereoscopic vision or three-dimensional ("3D") vision data based on the determined differences.
  • the one or more second images from the fifth camera (e.g., third camera 140c, or the like) may be analyzed to obtain rearview data based on detection of objects behind the first vehicle.
  • generating the one or more first fused images may comprise generating one or more second fused images by fusing at least one of the identified and highlighted one or more first objects, the 3D data, or the rearview data with the one or more first images, and/or the like.
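  • The following sketch illustrates one conventional way to obtain depth (3D) data from two forward-facing views using OpenCV block matching, assuming the two frames are already rectified grayscale images and that the focal length and camera baseline are known from calibration; the disclosure does not mandate this particular algorithm.
```python
import cv2
import numpy as np


def depth_from_stereo(left_gray, right_gray, focal_px, baseline_m):
    """Compute a per-pixel depth map (metres) from two rectified grayscale
    views using block matching and the pinhole relation depth = f * B / d."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth_m = np.zeros_like(disparity)
    valid = disparity > 0
    depth_m[valid] = focal_px * baseline_m / disparity[valid]
    return depth_m
```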
  • the computing system may receive one or more object detection signal data from at least one of one or more radar sensors (e.g., radar sensor(s) 180, or the like) or one or more lidar sensors (e.g., lidar sensor(s) 185, or the like) that may be mounted on the first vehicle and that may be communicatively coupled to the mobile device.
  • the computing system may analyze the received one or more object detection signal data to identify and highlight one or more second objects and to determine whether the one or more second objects correspond to the one or more first objects.
  • any of the one or more second objects that are determined to correspond to any of the one or more first objects may be merged with said one or more first objects.
  • generating the one or more first fused images may comprise generating one or more third fused images by fusing at least one of the identified and highlighted one or more first objects or the identified and highlighted one or more second objects with the one or more first images.
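  • A hedged sketch of merging radar/lidar-derived objects with camera-derived objects follows, using a simple intersection-over-union match in image coordinates. The dict layout, the assumption that sensor detections are already projected into image coordinates, and the 0.5 threshold are illustrative choices, not values from the disclosure.
```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0


def merge_sensor_objects(camera_objs, sensor_objs, iou_threshold=0.5):
    """Merge radar/lidar detections into matching camera detections;
    unmatched sensor detections are kept as additional objects."""
    merged = [dict(obj) for obj in camera_objs]
    for s in sensor_objs:
        best = max(merged, key=lambda c: iou(c["box"], s["box"]), default=None)
        if best is not None and iou(best["box"], s["box"]) >= iou_threshold:
            best["range_m"] = s.get("range_m")     # attach ranging information
        else:
            merged.append(dict(s))                 # object seen only by radar/lidar
    return merged
```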
  • ADAS with a camera(s) on a windshield of a vehicle and a mobile device allows for improvements over conventional ADAS systems that fall under the two categories of: (a) designated or dedicated ADAS hardware/software platforms; and (b) ADAS implementation using only a cellphone. These improvements are in terms of availability, cost, user experience, and performance.
  • the combination windshield camera and mobile device ADAS platform according to the various embodiments may be implemented on any existing vehicle even without designated or dedicated ADAS hardware.
  • the combination windshield camera and mobile device ADAS platform according to the various embodiments is a low-cost implementation because windshield cameras are inexpensive, and widely available dashcams can be modified to work as the required windshield camera for implementation according to the various embodiments.
  • the combination windshield camera and mobile device ADAS platform according to the various embodiments brings better user experience since users can put their cell phone at any convenient place, and the system is easy to use while providing the desired performance (as discussed below).
  • the combination windshield camera and mobile device ADAS platform may improve performance over cellphone-only implementations without the exorbitant costs of designated or dedicated ADAS systems in terms of the following points: (i) commonly available night vision functionalities of windshield-mounted cameras allow for video data suitable for ADAS processing even under night and/or severe conditions; (ii) fixed-mounted windshield cameras make camera calibration a one-time task; and (iii) optional views from a second camera (e.g., another windshield camera or the phone's camera) allow for stereoscopic or 3D vision functionalities; and/or the like. Further, the various embodiments provide a low-latency communication scheme between the windshield camera and cell phone that enhances ADAS implementation.
  • FIG. 2 is a schematic block flow diagram illustrating a non-limiting example 200 of a process for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.
  • a windshield camera 140 and a mobile device 110 may be used within a vehicle 105 to provide driver assistance (including, but not limited to, ADAS functionalities, or the like).
  • windshield camera 140 may include, without limitation, at least one of a first camera 140a, a video encoder 205, or a transmitter 210, and/or the like.
  • Mobile device 110 may include, but is not limited to, at least one of computing system 115, camera(s) 125, receiver 215, video decoder 220, display screen 130, or audio speaker(s) 135, and/or the like.
  • vehicle 105, mobile device 110, computing system 115, camera(s) 125, display screen 130, audio speaker(s) 135, and first camera 140a in Fig. 2 may be similar, if not identical, to corresponding vehicle 105, mobile device 110, computing system 115, camera(s) 125, display screen 130, audio speaker(s) 135, and first camera 140a in Fig. 1, and the descriptions of these components in Fig. 1 may be applicable to the descriptions of the corresponding components in Fig. 2.
  • the first camera 140a may capture one or more images or videos of objects and/or landmarks in front of vehicle 105.
  • Video encoder 205 may encode the video data from the first camera 140a, and transmitter 210 may transmit the encoded video data to receiver 215 in mobile device 110.
  • Video decoder 220 decodes the encoded video data received by receiver 215.
  • if video is instead captured by the camera(s) 125 of the mobile device 110, the video decoder 220 may also be used to decode such video if it is encoded, or may pass such video through if it is not encoded.
  • camera 140b or camera 140c in Fig. 1 may each also be embodied in a similar windshield camera as windshield camera 140, and the processes would be identical to those for camera 140a and windshield camera 140, as described above.
  • camera calibration may be performed.
  • camera calibration may refer to the process of estimating intrinsic and/or extrinsic parameters.
  • Intrinsic parameters deal with the camera's internal characteristics (including, but not limited to, its focal length, skew, distortion, and/or image center, and/or the like).
  • For ADAS, calibration is also required to determine the relative position between the vehicle and the cameras (e.g., cameras 140a, 140b, and/or 140c in Fig. 1), which is one of the extrinsic parameters.
  • Because the cameras are mounted in fixed positions, camera calibration need only be performed each time after the cameras have been mounted or remounted; the process would otherwise bypass camera calibration (denoted as being optional by the short-dash lined arrows between video decoder 220 and camera calibration 225 and between camera calibration 225 and pre-processing 230, or the like, with a direct connection between video decoder 220 and pre-processing 230).
  • In contrast, in implementations where the camera is not fixed relative to the vehicle (e.g., a cellphone-only implementation), camera calibration (at block 225) must be performed often, prior to pre-processing (at block 230).
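  • For illustration, the one-time intrinsic calibration of the fixed windshield camera could be performed with a checkerboard target and OpenCV, as sketched below; the board size and square size are assumed values, and the extrinsic camera-to-vehicle alignment is not shown.
```python
import cv2
import numpy as np


def calibrate_from_checkerboard(gray_images, board_size=(9, 6), square_m=0.025):
    """Estimate the camera matrix and distortion coefficients from
    checkerboard photos taken with the mounted windshield camera."""
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_m
    obj_points, img_points = [], []
    for gray in gray_images:
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
    # Intrinsics: camera matrix (focal length, image centre) and distortion.
    _, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, gray_images[0].shape[::-1], None, None)
    return camera_matrix, dist_coeffs
```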
  • pre-processing may be performed, in which some image processing operations are performed to make the images/video ready for object detection.
  • the image processing operations may include, without limitation, at least one of prewhitening, resizing, aligning, cropping, or formatting, and/or the like.
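  • A minimal pre-processing sketch combining several of the listed operations (resizing, format conversion, and pre-whitening) is shown below; the target size is an illustrative choice, not a value taken from the disclosure.
```python
import cv2
import numpy as np


def preprocess(frame_bgr, target_size=(640, 384)):
    """Resize, convert to RGB, and pre-whiten a frame so it is ready
    for the lane/landmark/object detection networks."""
    resized = cv2.resize(frame_bgr, target_size)
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB).astype(np.float32)
    # Pre-whitening: zero mean, unit variance.
    return (rgb - rgb.mean()) / max(float(rgb.std()), 1e-6)
```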
  • Object detection may then be performed at blocks 235-245, at which modelling algorithms (including, but not limited to, DNN algorithms or other AI, machine learning, or neural network algorithms or systems) may be implemented for identifying and highlighting one or more lanes of a roadway using lane detection (at block 235); identifying and highlighting one or more landmarks along the roadway using landmark detection (at block 240); and/or identifying and highlighting one or more objects on or near the roadway using object detection (at block 245); and/or the like.
  • the one or more objects may include, without limitation, at least one of one or more people, one or more animals, one or more second vehicles, one or more traffic signs, one or more traffic lights, one or more roadway obstructions, or one or more other objects, and/or the like.
  • information from one or more of lane detection (at block 235), landmark detection (at block 240), and/or object detection (at block 245) may be fused. Decision-making may then occur based on the fused information (at block 255), resulting in initiation of action(s) (at block 260). In this manner, based on the object detection results (at blocks 235-245) and/or based on the fused information, the system may determine the relative position of vehicle 105 and the lanes of the roadway and other vehicles, as well as determining the drivable areas, etc., and appropriate action(s) may be taken.
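  • As a toy example of the decision-making step, the sketch below flags a lane-drift alert when the detected vehicle centre is offset from the detected lane centre by more than a fraction of the lane width; the rule and the threshold are illustrative assumptions, not values from the disclosure.
```python
def check_lane_drift(lane_centre_x, vehicle_centre_x, lane_width_px, drift_ratio=0.25):
    """Return a list of alert conditions based on the relative position of
    the vehicle and the detected lane."""
    alerts = []
    offset = abs(vehicle_centre_x - lane_centre_x)
    if offset > drift_ratio * lane_width_px:
        alerts.append("drifting toward an adjacent lane")
    return alerts


# Example: vehicle centre 120 px away from lane centre in a 350 px wide lane.
print(check_lane_drift(640, 760, 350))
```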
  • initiating actions may include, without limitation, at least one of: presenting the one or more first fused images on a display device (e.g., display screen 130, or the like) on the mobile device; generating a graphical display depicting one or more of the at least one first alert condition or the one or more first fused images, and presenting the generated graphical display on the display device (e.g., display screen 130, or the like); generating a text-based message describing one or more of the at least one first alert condition or the one or more first fused images, and presenting the text-based message on the display device (e.g., display screen 130, or the like); or generating at least one audio message regarding one or more of the at least one first alert condition or the one or more first fused images, and presenting the at least one audio message on at least one audio speaker (e.g., audio speaker(s) 135, or the like) on the mobile device; and/or the like.
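  • A dispatch sketch for the action step follows; the three callbacks stand in for the mobile device's platform-specific display and text-to-speech APIs, which are not specified here and are therefore left abstract.
```python
def perform_driver_assistance_tasks(alerts, fused_image, show_image, show_text, speak):
    """Dispatch the detected alert conditions to the display and speaker
    of the mobile device via caller-supplied callbacks."""
    if not alerts:
        return
    show_image(fused_image)                  # present the fused image on screen
    for alert in alerts:
        show_text("Warning: " + alert)       # text-based message on the display
        speak("Caution. " + alert + ".")     # audio message via the speaker


# Example with console stand-ins for the real display and audio APIs.
perform_driver_assistance_tasks(
    ["drifting toward an adjacent lane"], None,
    show_image=lambda img: None, show_text=print, speak=print)
```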
  • Fig. 3A is an image illustrating a non-limiting example 300 of the use of a windshield camera in conjunction with a mobile device during implementation of ADAS, in accordance with various embodiments.
  • Fig. 3B is an image illustrating a non-limiting example 300' of a fused image that is generated during implementation of ADAS with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.
  • implementation of the various embodiments of the driver assistance system includes a windshield camera (e.g., windshield camera 140a, or the like) and a cell phone (e.g., mobile device 110, or the like).
  • the windshield camera, when properly mounted on the windshield, can have a good view of the environment in front of the vehicle, while the cell phone can be placed at any convenient place for the user(s).
  • an ADAS software package may be installed on the cell phone.
  • Real-time video captured by the windshield camera may be encoded and sent to the cell phone via WiFi or USB cable.
  • the cell phone serves as the computation unit, where the incoming videos may be decoded and analyzed. Based on the understanding of the scene, a decision will also be made by the cell phone. Subsequently, one or more corresponding actions (including, but not limited to, alarm(s), reminder(s), etc.) may be taken.
  • An example of fusion of road detection results is illustrated in Fig. 3B.
  • a set of DNN algorithms was applied for road detection. These algorithms were mainly based on YOLOv5 or similar algorithms that are designed to be lightweight, fast, and suitable for real-time processing on a cell phone. These algorithms are fine-tuned with local data.
  • lane detection results are highlighted as well as object detection results (in some cases, with probability of correct identification of objects, including, but not limited to, cars, traffic lights, etc.).
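  • As a stand-in for the fine-tuned networks described above (which are not publicly available), the sketch below runs the publicly released YOLOv5s model via torch.hub and converts its output into the box/label/score form used in the earlier sketches; the model choice and confidence threshold are illustrative assumptions.
```python
import torch

# Publicly released small YOLOv5 model, loaded via torch.hub
# (downloads weights on first use; requires internet access).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)


def detect_objects(frame_rgb, conf_threshold=0.4):
    """Run the detector on an RGB frame and return box/label/score dicts."""
    results = model(frame_rgb)
    detections = []
    for *box, conf, cls in results.xyxy[0].tolist():
        if conf >= conf_threshold:
            detections.append({"box": [int(v) for v in box],
                               "score": float(conf),
                               "label": model.names[int(cls)]})
    return detections
```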
  • FIGs. 4A-4G are flow diagrams illustrating a method 400 for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.
  • Method 400 of Fig. 4E returns to Fig. 4A following the circular marker denoted, "A.”
  • The method 400 illustrated by Figs. 4A-4G may be implemented by or with the systems, examples, or embodiments 100, 200, 300, and 300' of Figs. 1, 2, 3A, and 3B, respectively (or components thereof), e.g., by executing instructions embodied on a computer readable medium, while the systems, examples, or embodiments 100, 200, 300, and 300' of Figs. 1, 2, 3A, and 3B can each also operate according to other modes of operation and/or perform other suitable procedures.
  • method 400 at block 405, may comprise receiving, using a computing system on a mobile device, one or more first images from a first camera that is mounted to a first fixed position on a windshield of a first vehicle.
  • the computing system may include, without limitation, at least one of a driver assistance system, an object detection system, an object detection and ranging system, a positioning and mapping system, an image processing system, an image data fusing system, a graphics engine, a processor on the mobile device, at least one central processing unit (“CPU”) on the mobile device, at least one graphics processing unit (“GPU”) on the mobile device, a machine learning system, an artificial intelligence (“AI”) system, a deep learning system, a neural network, a convolutional neural network (“CNN”), a deep neural network (“DNN”), or a fully convolutional network (“FCN”), and/or the like.
  • the mobile device may include, but is not limited to, at least one of a smartphone, a tablet computer, a display device, an augmented reality (“AR”) device, a virtual reality (“VR”) device, or a mixed reality (“MR”) device, and/or the like.
  • method 400 may comprise pre-processing, using the computing system on the mobile device, the received one or more first images using one or more image processing operations to prepare the received one or more first images for analysis.
  • the one or more image processing operations may include, without limitation, at least one of pre-whitening, resizing, aligning, cropping, or formatting, and/or the like.
  • Method 400 may further comprise, at block 415, analyzing, using the computing system on the mobile device, the received one or more first images to identify and highlight one or more first objects captured by the first camera.
  • method 400 may comprise receiving, using the computing system on the mobile device, one or more second images from at least one second camera.
  • the at least one second camera may include, but is not limited to, at least one of a third camera that is mounted to a second fixed position on the windshield of the first vehicle, a fourth camera that is integrated with the mobile device with the mobile device mounted to a third position on the windshield of the first vehicle and with the fourth camera pointed in front of the first vehicle, or a fifth camera that is mounted to a fourth fixed position on a rear window of the first vehicle, and/or the like.
  • Method 400 may comprise pre-processing, using the computing system on the mobile device, the received one or more second images using the one or more image processing operations to prepare the received one or more second images for analysis.
  • Method 400 may further comprise, at optional block 430, analyzing, using the computing system on the mobile device, the one or more second images.
  • the one or more second images from one of the third camera or the fourth camera may be analyzed to determine differences with the one or more first images from the first camera and to obtain stereoscopic vision or three-dimensional ("3D") data based on the determined differences.
  • the one or more second images from the fifth camera may be analyzed to obtain rearview data based on detection of objects behind the first vehicle.
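  • As one hedged illustration of deriving stereoscopic or 3D cues from two forward-facing views, a simple block-matching disparity computation is sketched below (it assumes the two views have already been rectified; the matcher parameters and file names are assumptions):

# Illustrative sketch: coarse disparity (inverse-depth) map from two forward views.
import cv2

left = cv2.imread('first_camera.jpg', cv2.IMREAD_GRAYSCALE)    # first windshield camera
right = cv2.imread('second_camera.jpg', cv2.IMREAD_GRAYSCALE)  # second forward camera

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)   # assumed parameters
disparity = stereo.compute(left, right)                         # larger value ~ closer object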
  • method 400 may comprise generating, using the computing system on the mobile device, one or more first fused images.
  • Method 400 may further comprise analyzing, using the computing system on the mobile device, the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle (block 440); and based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, performing, using the computing system on the mobile device, one or more driver assistance tasks (block 445).
  • the at least one first alert condition may each include, but is not limited to, at least one of driving on a lane marker along a roadway along which the first vehicle is travelling, drifting toward an adjacent lane on the roadway, driving between lanes on the roadway, drifting toward a shoulder of the roadway, driving on the shoulder of the roadway, driving toward a median along the roadway, traffic congestion detected ahead along the roadway, a traffic accident detected ahead along the roadway, a construction site detected ahead along the roadway, one or more people detected on or near the roadway, one or more animals detected on or near the roadway, one or more objects detected on or near the roadway, a tracked weather event detected along or near the roadway, a natural hazard detected ahead, a manmade hazard detected ahead, one or more people potentially intercepting the first vehicle along the roadway, one or more animals potentially intercepting the first vehicle along the roadway, one or more objects potentially intercepting the first vehicle along the roadway, or one or more third vehicles potentially intercepting the first vehicle along the roadway, and/or the like.
  • receiving the one or more first images from the first camera (at block 405) or receiving the one or more second images from the at least one second camera (at optional block 420) may comprise one of: receiving, using the computing system on the mobile device, the one or more first images from the first camera or the one or more second images from the at least one second camera via a wireless communication link between the first camera and the mobile device or between the at least one second camera and the mobile device (block 450a); or receiving, using the computing system on the mobile device, the one or more first images from the first camera or the one or more second images from the at least one second camera via a wired cable communication link between the first camera and the mobile device or between the at least one second camera and the mobile device (block 450b).
  • the wireless communication link may include, without limitation, a WiFi communication link, or the like.
  • the built-in WiFi of the first camera (or of the at least one second camera) (if available) may be used, with the mobile device being set as a client.
  • Alternatively, the built-in WiFi of the mobile device (if available) may be used, with the first camera (or the at least one second camera) being set as a client. In this manner, a reliable and high-speed connection may be provided to enable real-time (or near-real-time) video transmission between the first camera (or the at least one second camera) and the mobile device.
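  • Purely as an illustration of receiving such a video stream on the mobile device, a generic stream-reading loop is sketched below; the stream URL, protocol, and frame callback are assumptions, since actual camera interfaces vary:

# Illustrative sketch: read frames from a networked camera stream and hand each one off.
import cv2

def read_stream(url, on_frame):
    cap = cv2.VideoCapture(url)          # e.g., an RTSP or HTTP stream exposed by the camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break                        # stream ended or dropped
        on_frame(frame)                  # pass the frame to the ADAS processing pipeline
    cap.release()

# Example usage (address and callback are assumed): read_stream('rtsp://192.168.1.254/stream', print)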
  • receiving the one or more first images from the first camera may comprise receiving one or more first video data from the first camera (this also applies to the one or more second images as one or more second video data from the at least one second camera).
  • Low-latency transmission from the windshield camera to the cell phone is critical to successful implementation of ADAS.
  • an adaptive rate control approach has been designed to reduce the latency and communication load based on the following observations: (a) if the surrounding environment is complex, then the vehicle speed is generally slower, thus larger latency may be tolerated but more details about the environment may be required; or (b) if the surrounding environment is simple, then the vehicle speed could be faster, thus lower latency is required but fewer details about the environment may be required.
  • method 400 may further comprise determining, using the computing system on the mobile device, an estimated speed of the first vehicle (block 455); and adjusting, using the computing system on the mobile device, at least one of frame rate or resolution of transmission of the one or more first video data from the first camera as a function of the estimated speed of the first vehicle (block 460a) or adjusting, using the computing system on the mobile device, at least one of frame rate or resolution of transmission of the one or more second video data from the second camera as a function of the estimated speed of the first vehicle (block 460b).
  • the frame rate may be adjusted in a manner proportional to the estimated speed of the first vehicle, while the resolution may be adjusted in a manner inversely proportional to the estimated speed of the first vehicle, as follows: f = α·s and p = β/s (Eqns. 1 & 2), where s denotes the speed of the first vehicle, f denotes the frame rate, p denotes the resolution of the video (e.g., progressive scan), and α and β denote two constants.
  • Merely by way of example, frame rates may be in ranges between 15 and 30 fps, between 24 and 30 fps, between 10 and 30 fps, or between 15 and 120 fps, or the like, while progressive scan resolution values may include, but are not limited to, 480, 576, 640, 720, 1080, or 2160, or the like.
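  • A minimal sketch of this adaptive rate control is shown below; the constants α and β and the supported frame-rate/resolution sets are purely illustrative assumptions:

# Illustrative sketch of Eqns. 1 & 2: frame rate grows with speed (f = alpha * s),
# resolution shrinks with speed (p = beta / s), then both snap to supported values.
ALPHA = 0.5                                  # assumed fps per km/h
BETA = 43200.0                               # assumed resolution-speed product
FRAME_RATES = [15, 24, 30, 60, 120]          # fps values supported by the camera (assumed)
RESOLUTIONS = [480, 576, 720, 1080, 2160]    # progressive-scan line counts (assumed)

def pick_rate_and_resolution(speed_kmh):
    f_target = ALPHA * speed_kmh
    p_target = BETA / max(speed_kmh, 1.0)    # avoid division by zero at standstill
    f = min(FRAME_RATES, key=lambda x: abs(x - f_target))
    p = min(RESOLUTIONS, key=lambda x: abs(x - p_target))
    return f, p

# e.g., pick_rate_and_resolution(30.0) -> (15, 1080); pick_rate_and_resolution(120.0) -> (60, 480)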
  • determining the estimated speed of the first vehicle may include, without limitation, determining, using the computing system on the mobile device, an estimated speed of the first vehicle based on at least one of global positioning system (“GPS”) data, global navigation satellite system (“GNSS”) data, changes in image recognition-based landmark identification system data, changes in telecommunications signal triangulation-based location identification system data, changes in radar-based location identification system data, changes in lidar-based location identification system data, or speed data obtained from a vehicle computing system of the first vehicle via a communications link between the vehicle computing system and the computing system on the mobile device, and/or the like.
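  • By way of example only, estimating the vehicle speed from two successive GPS or GNSS fixes might be sketched as follows (the fix format and units are assumptions for illustration):

# Illustrative sketch: speed in km/h from two (latitude, longitude, timestamp_seconds) fixes.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two latitude/longitude points.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def estimated_speed_kmh(fix_prev, fix_curr):
    dist_m = haversine_m(fix_prev[0], fix_prev[1], fix_curr[0], fix_curr[1])
    dt = max(fix_curr[2] - fix_prev[2], 1e-3)   # guard against identical timestamps
    return dist_m / dt * 3.6                    # m/s -> km/h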
  • analyzing the received one or more first images to identify and highlight the one or more first objects captured by the first camera (at block 415) may comprise at least one of: identifying and highlighting one or more lanes of a roadway using a lane detection system (block 415a); identifying and highlighting one or more landmarks along the roadway using a landmark detection system (block 415b); or identifying and highlighting one or more objects on or near the roadway using an object detection system (block 415c); and/or the like.
  • the one or more lanes may include, without limitation, at least one of single-lane roads, bridges, or paths; two-lane roads with one or more of no-passing lane markers in one or more first stretches of the roadway, one-way passing-permitted lane markers in one or more second stretches of the roadway, two-way passing-permitted lane markers in one or more third stretches of the roadway, and/or the like; three-lane roads with a reversible lane in the middle (and corresponding lane markers and overhead traffic lights or traffic flow directions, or the like) allowing traffic to travel in either direction depending on traffic conditions; four-lane roadways; multi-lane highways (with five or more lanes with similar lane markings as described above with respect to the smaller numbered lane roadways); turn lanes; highway merge lanes; highway exit lanes; and so on.
  • the one or more landmarks may include, but are not limited to, natural formations, manmade structures (e.g., buildings, bridges, or other public works structures, or the like), signage for any such landmarks, and/or the like.
  • the one or more objects may include, without limitation, at least one of one or more people, one or more animals, one or more second vehicles, one or more traffic signs, one or more traffic lights, one or more roadway obstructions, or one or more other objects, and/or the like.
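  • Purely to make the "identify and highlight" step concrete, a classical lane-marking highlighter is sketched below; the embodiments above may instead (or additionally) use DNN-based lane detection, and all thresholds here are assumptions:

# Illustrative sketch: highlight candidate lane markings with Canny edges + Hough lines.
import cv2
import numpy as np

def highlight_lanes(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                                  # assumed thresholds
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=20)
    out = frame_bgr.copy()
    for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
        cv2.line(out, (x1, y1), (x2, y2), (0, 0, 255), 3)             # highlight the marking
    return out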
  • method 400 may further comprise receiving, using the computing system on the mobile device, one or more object detection signal data from at least one of one or more radar sensors or one or more lidar sensors that may be mounted on the first vehicle and that may be communicatively coupled to the mobile device (at optional block 465); analyzing, using the computing system on the mobile device, the received one or more object detection signal data to identify and highlight one or more second objects and to determine whether the one or more second objects correspond to the one or more first objects (at optional block 470); and merging, using the computing system on the mobile device, any of the one or more second objects that are determined to correspond to any of the one or more first objects with said one or more first objects (at optional block 475); and/or the like.
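  • One hedged sketch of the merge step at optional block 475, which associates radar/lidar detections with camera detections by nearest-neighbour distance in the image plane, is shown below (the projection of sensor returns into image coordinates and the field names are assumptions):

# Illustrative sketch: attach ranging data from radar/lidar detections to matching camera objects.
def merge_detections(camera_objs, sensor_objs, max_dist_px=50.0):
    # Each object is a dict with image-plane centre 'cx', 'cy'; sensor objects also carry 'range_m'.
    merged = [dict(o) for o in camera_objs]
    for s in sensor_objs:
        best, best_d = None, max_dist_px
        for m in merged:
            d = ((m['cx'] - s['cx']) ** 2 + (m['cy'] - s['cy']) ** 2) ** 0.5
            if d < best_d:
                best, best_d = m, d
        if best is not None:
            best['range_m'] = s.get('range_m')   # corresponding object: merge ranging data
        else:
            merged.append(dict(s))               # sensor-only detection kept as a new object
    return merged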
  • Method 400 may return to the process at block 435 in Fig. 4A following the circular marker denoted "A."
  • generating the one or more first fused images may comprise one of: [in embodiments following, e.g., the processes at blocks 405-415 in Fig. 4A, or the like] fusing the identified and highlighted one or more first objects with the one or more first images (block 480); [in embodiments following, e.g., the processes at blocks 405-415 and optional blocks 420-430 in Fig. 4A, or the like] generating one or more second fused images by fusing at least one of the identified and highlighted one or more first objects, the 3D data, or the rearview data with the one or more first images; or [in embodiments following, e.g., the processes at blocks 405-415 in Fig. 4A and optional blocks 465-475 in Fig. 4E, or the like] generating one or more third fused images by fusing at least one of the identified and highlighted one or more first objects or the identified and highlighted one or more second objects with the one or more first images; and/or the like.
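  • As a simple illustration of fusing highlighted objects with an image (block 480), the sketch below draws boxes and captions onto a copy of the frame; the object field names are assumptions for illustration:

# Illustrative sketch: overlay highlighted detections onto the frame to form a fused image.
import cv2

def fuse_image(frame_bgr, objects):
    out = frame_bgr.copy()
    for obj in objects:
        x1, y1, x2, y2 = obj['box']                                   # pixel coordinates
        cv2.rectangle(out, (x1, y1), (x2, y2), (0, 255, 0), 2)        # highlight the object
        caption = f"{obj['label']} {obj.get('prob', 0.0):.0%}"        # e.g., "car 92%"
        cv2.putText(out, caption, (x1, max(y1 - 5, 10)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return out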
  • performing the one or more driver assistance tasks may comprise at least one of: presenting the one or more first fused images on a display device on the mobile device (block 445a); generating a graphical display depicting one or more of the at least one first alert condition or the one or more first fused images, and presenting the generated graphical display on the display device (block 445b); generating a text-based message describing one or more of the at least one first alert condition or the one or more first fused images, and presenting the text-based message on the display device (block 445c); or generating at least one audio message regarding one or more of the at least one first alert condition or the one or more first fused images, and presenting the at least one audio message on at least one audio speaker on the mobile device (block 445d); and/or the like.
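  • A minimal sketch of dispatching these driver-assistance outputs is shown below; the display and speaker objects stand in for platform-specific APIs and are assumptions:

# Illustrative sketch: present the fused image, then show and speak each alert condition.
def perform_driver_assistance(alerts, fused_image, display, speaker):
    display.show_image(fused_image)       # cf. block 445a: fused image on the display
    for alert in alerts:
        msg = f"ALERT: {alert}"
        display.show_text(msg)            # cf. block 445c: text-based message
        speaker.speak(msg)                # cf. block 445d: audio message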
  • Fig. 5 is a block diagram illustrating an example of computer or system hardware architecture, in accordance with various embodiments.
  • Fig. 5 provides a schematic illustration of one embodiment of a computer system 500 of the service provider system hardware that can perform the methods provided by various other embodiments, as described herein, and/or can perform the functions of computer or hardware system (i.e., mobile device 110, computing system(s) 115, location determination system 165, location signal source(s) 170, and vehicle computing system 195, etc.), as described above.
  • Fig. 5 is meant only to provide a generalized illustration of various components, of which one or more (or none) of each may be utilized as appropriate.
  • Fig. 5, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner.
  • the computer or hardware system 500 - which might represent an embodiment of the computer or hardware system (i.e., mobile device 110, computing system(s) 115, location determination system 165, location signal source(s) 170, and vehicle computing system 195, etc.), described above with respect to Figs. 1-4 - is shown comprising hardware elements that can be electrically coupled via a bus 505 (or may otherwise be in communication, as appropriate).
  • the hardware elements may include one or more processors 510, including, without limitation, one or more general-purpose processors and/or one or more special-purpose processors (such as microprocessors, digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 515, which can include, without limitation, a mouse, a keyboard, and/or the like; and one or more output devices 520, which can include, without limitation, a display device, a printer, and/or the like.
  • the computer or hardware system 500 may further include (and/or be in communication with) one or more storage devices 525, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device such as a random access memory (“RAM”) and/or a read-only memory (“ROM”), which can be programmable, flash-updateable, and/or the like.
  • Such storage devices may be configured to implement any appropriate data stores, including, without limitation, various file systems, database structures, and/or the like.
  • the computer or hardware system 500 might also include a communications subsystem 530, which can include, without limitation, a modem, a network card (wireless or wired), an infra-red communication device, a wireless communication device and/or chipset (such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a WWAN device, cellular communication facilities, etc.), and/or the like.
  • the communications subsystem 530 may permit data to be exchanged with a network (such as the network described below, to name one example), with other computer or hardware systems, and/or with any other devices described herein.
  • the computer or hardware system 500 will further comprise a working memory 535, which can include a RAM or ROM device, as described above.
  • the computer or hardware system 500 also may comprise software elements, shown as being currently located within the working memory 535, including an operating system 540, device drivers, executable libraries, and/or other code, such as one or more application programs 545, which may comprise computer programs provided by various embodiments (including, without limitation, hypervisors, VMs, and the like), and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein.
  • one or more procedures described with respect to the method(s) discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods.
  • a set of these instructions and/or code might be encoded and/or stored on a non-transitory computer readable storage medium, such as the storage device(s) 525 described above. In some cases, the storage medium might be incorporated within a computer system, such as the system 500.
  • the storage medium might be separate from a computer system (i.e., a removable medium, such as a compact disc, etc.), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon.
  • These instructions might take the form of executable code, which is executable by the computer or hardware system 500 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer or hardware system 500 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.) then takes the form of executable code.
  • some embodiments may employ a computer or hardware system (such as the computer or hardware system 500) to perform methods in accordance with various embodiments of the invention.
  • some or all of the procedures of such methods are performed by the computer or hardware system 500 in response to processor 510 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 540 and/or other code, such as an application program 545) contained in the working memory 535.
  • Such instructions may be read into the working memory 535 from another computer readable medium, such as one or more of the storage device(s) 525.
  • execution of the sequences of instructions contained in the working memory 535 might cause the processor(s) 510 to perform one or more procedures of the methods described herein.
  • The terms “machine readable medium” and “computer readable medium,” as used herein, refer to any medium that participates in providing data that causes a machine to operate in some fashion.
  • various computer readable media might be involved in providing instructions/code to processor(s) 510 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals).
  • a computer readable medium is a non-transitory, physical, and/or tangible storage medium.
  • a computer readable medium may take many forms, including, but not limited to, non-volatile media, volatile media, or the like.
  • Non-volatile media includes, for example, optical and/or magnetic disks, such as the storage device(s) 525.
  • Volatile media includes, without limitation, dynamic memory, such as the working memory 535.
  • a computer readable medium may take the form of transmission media, which includes, without limitation, coaxial cables, copper wire, and fiber optics, including the wires that comprise the bus 505, as well as the various components of the communication subsystem 530 (and/or the media by which the communications subsystem 530 provides communication with other devices).
  • transmission media can also take the form of waves (including without limitation radio, acoustic, and/or light waves, such as those generated during radio-wave and infra-red data communications).
  • Common forms of physical and/or tangible computer readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.
  • Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 510 for execution.
  • the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer.
  • a remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer or hardware system 500.
  • These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals, and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention.
  • the communications subsystem 530 (and/or components thereof) generally will receive the signals, and the bus 505 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 535, from which the processor(s) 510 retrieves and executes the instructions.
  • the instructions received by the working memory 535 may optionally be stored on a storage device 525 either before or after execution by the processor(s) 510.

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

Novel tools and techniques are provided for implementing advanced driver assistance system ("ADAS") with a camera(s) on a windshield of a vehicle and a mobile device. In various embodiments, a computing system on a mobile device may receive one or more images from a camera that is mounted to a fixed position on a windshield of a vehicle, may analyze the received one or more images to identify and highlight one or more objects captured by the camera, may generate one or more fused images by fusing the identified and highlighted one or more objects with the one or more images, and may analyze the one or more fused images to identify one or more alert conditions associated with operation of the vehicle. Based on a determination that at least one first alert condition has been identified, the computing system may perform one or more driver assistance tasks.

Description

ADVANCED DRIVER ASSISTANCE SYSTEM (ADAS) WITH CAMERA ON
WINDSHIELD AND MOBILE DEVICE
COPYRIGHT STATEMENT
[0001] A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
FIELD
[0002] The present disclosure relates, in general, to methods, systems, and apparatuses for implementing advanced driver assistance system ("ADAS"), and, more particularly, to methods, systems, and apparatuses for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device (e.g., a smartphone, a mobile phone, a tablet computer, etc.).
BACKGROUND
[0003] Advanced driver assistance systems ("ADAS") are technological features that are designed to increase the safety and to improve the user experience of driving a vehicle. Popular ADAS features include keeping a vehicle centered in its lane, bringing a vehicle to a complete stop in an emergency, and identifying other vehicles or pedestrians approaching, and much more.
[0004] Conventional ADAS implementations fall under one of two major categories: (a) designated or dedicated ADAS hardware/software platforms; or (b) ADAS implementation using only a cellphone. Regarding (a), over the past few decades, mainstream automakers (e.g., GM, BMW, and Tesla), as well as startup companies (e.g., Waymo and Cruise), have invested a great deal of effort into ADAS. A typical implementation of ADAS may involve various system components (e.g., radar, lidar, sensors, cameras, and a CPU, etc.). Such a complex solution makes it difficult or even impossible to implement on existing vehicles without pre-installed hardware. Thus, though many new vehicles today are shipped with some ADAS features, only about 10 percent of global vehicles adopt ADAS. In particular, designated or dedicated ADAS platforms are mainly available to new vehicles, and are very difficult to install on existing vehicles. Each such ADAS package also incurs extremely high costs, from thousands of dollars to tens of thousands of dollars.
[0005] Regarding (b), some companies have recently proposed to implement ADAS using a cellphone, so that existing vehicles could also benefit from fast-growing ADAS technology at a low cost. However, these cellphone-only solutions suffer from severe performance degradation and an unsatisfactory user experience. In particular, regarding performance, cameras on cell phones are generally not designed to operate under severe conditions (e.g., low illumination, motion blur, etc.). Thus, ADAS with a cellphone may only perform well under good conditions, and is ill-suited for less-than-ideal conditions. Regarding user experience, in order to get a good view for ADAS, implementation with only a cellphone may require mounting the cell phone on the windshield or some other place that may block the view of the driver, which may be a violation of law in some locations (e.g., in California, etc.). Regarding calibration, a careful calibration of the camera is generally required for proper operation of an ADAS system whenever the relative position of the camera and the vehicle is changed. This requirement makes a cellphone-only implementation almost impractical, since a cellphone is supposed to be "mobile."
[0006] Hence, there is a need for more robust and scalable solutions for implementing advanced driver assistance system ("ADAS").
SUMMARY
[0007] The techniques of this disclosure generally relate to tools and techniques for implementing advanced driver assistance system ("ADAS"), and, more particularly, to methods, systems, and apparatuses for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device.
[0008] In an aspect, a method may comprise receiving, using a computing system on a mobile device, one or more first images from a first camera that is mounted to a first fixed position on a windshield of a first vehicle; analyzing, using the computing system on the mobile device, the received one or more first images to identify and highlight one or more first objects captured by the first camera; generating, using the computing system on the mobile device, one or more first fused images by fusing the identified and highlighted one or more first objects with the one or more first images; analyzing, using the computing system on the mobile device, the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle; and based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, performing, using the computing system on the mobile device, one or more driver assistance tasks.
[0009] In another aspect, a mobile device might comprise a computing system and a non-transitory computer readable medium communicatively coupled to the computing system. The non-transitory computer readable medium might have stored thereon computer software comprising a set of instructions that, when executed by the computing system, causes the mobile device to: receive one or more first images from a first camera that is mounted to a first fixed position on a windshield of a first vehicle; analyze the received one or more first images to identify and highlight one or more first objects captured by the first camera; generate one or more first fused images by fusing the identified and highlighted one or more first objects with the one or more first images; analyze the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle; and based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, perform one or more driver assistance tasks.
[0010] In yet another aspect, a system might comprise a first camera mounted to a first fixed position on a windshield of a first vehicle and a mobile device. The mobile device may comprise a computing system and a first non-transitory computer readable medium communicatively coupled to the computing system. The first non-transitory computer readable medium might have stored thereon computer software comprising a first set of instructions that, when executed by the computing system, causes the mobile device to: receive one or more first images from the first camera; analyze the received one or more first images to identify and highlight one or more first objects captured by the first camera; generate one or more first fused images by fusing the identified and highlighted one or more first objects with the one or more first images; analyze the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle; and based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, perform one or more driver assistance tasks.
[0011] Various modifications and additions can be made to the embodiments discussed without departing from the scope of the invention. For example, while the embodiments described above refer to particular features, the scope of this invention also includes embodiments having different combinations of features and embodiments that do not include all of the above-described features.
[0012] The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques described in this disclosure will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] A further understanding of the nature and advantages of particular embodiments may be realized by reference to the remaining portions of the specification and the drawings, in which like reference numerals are used to refer to similar components. In some instances, a sub-label is associated with a reference numeral to denote one of multiple similar components. When reference is made to a reference numeral without specification to an existing sub-label, it is intended to refer to all such multiple similar components.
[0014] Fig. 1 is a schematic diagram illustrating a system for implementing advanced driver assistance system ("ADAS") with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.
[0015] Fig. 2 is a schematic block flow diagram illustrating a non-limiting example of a process for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.
[0016] Fig. 3A is an image illustrating a non-limiting example of the use of a windshield camera in conjunction with a mobile device during implementation of ADAS, in accordance with various embodiments.
[0017] Fig. 3B is an image illustrating a non-limiting example of a fused image that is generated during implementation of ADAS with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.
[0018] Figs. 4A-4G are flow diagrams illustrating a method for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.
[0019] Fig. 5 is a block diagram illustrating an example of computer or system hardware architecture, in accordance with various embodiments.
[0020] Fig. 6 is a block diagram illustrating a networked system of computers, computing systems, or system hardware architecture, which can be used in accordance with various embodiments.
DETAILED DESCRIPTION
[0021] Overview
[0022] Various embodiments provide tools and techniques for implementing advanced driver assistance system ("ADAS"), and, more particularly, methods, systems, and apparatuses for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device (e.g., a smartphone, a mobile phone, a tablet computer, etc.).
[0023] In various embodiments, a computing system on a mobile device may receive one or more first images from a first camera that is mounted to a first fixed position on a windshield of a first vehicle. The computing system may analyze the received one or more first images to identify and highlight one or more first objects captured by the first camera. The computing system may generate one or more first fused images by fusing the identified and highlighted one or more first objects with the one or more first images, and may analyze the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle. Based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, the computing system may perform one or more driver assistance tasks.
[0024] In some embodiments, the computing system may comprise at least one of a driver assistance system, an object detection system, an object detection and ranging system, a positioning and mapping system, an image processing system, an image data fusing system, a graphics engine, a processor on the mobile device, at least one central processing unit ("CPU") on the mobile device, at least one graphics processing unit ("GPU") on the mobile device, a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a convolutional neural network ("CNN"), a deep neural network ("DNN"), or a fully convolutional network ("FCN"), and/or the like. In some instances, the mobile device may comprise at least one of a smartphone, a tablet computer, a display device, an augmented reality ("AR") device, a virtual reality ("VR") device, or a mixed reality ("MR") device, and/or the like.
[0025] According to some embodiments, receiving the one or more first images from the first camera may comprise one of: receiving the one or more first images from the first camera via a wireless communication link between the first camera and the mobile device; or receiving the one or more first images from the first camera via a wired cable communication link between the first camera and the mobile device.
[0026] In some embodiments, receiving the one or more first images from the first camera may comprise receiving one or more first video data from the first camera, and the computing system may further determine an estimated speed of the first vehicle; and adjust at least one of frame rate or resolution of transmission of the one or more first video data from the first camera as a function of the estimated speed of the first vehicle. In some cases, the frame rate may be adjusted in a manner proportional to the estimated speed of the first vehicle and the resolution may be adjusted in a manner inversely proportional to the estimated speed of the first vehicle. In some cases, determining the estimated speed of the first vehicle may comprise determining an estimated speed of the first vehicle based on at least one of global positioning system ("GPS") data, global navigation satellite system ("GNSS") data, changes in image recognition-based landmark identification system data, changes in telecommunications signal triangulation-based location identification system data, changes in radar-based location identification system data, changes in lidar-based location identification system data, or speed data obtained from a vehicle computing system of the first vehicle via a communications link between the vehicle computing system and the computing system on the mobile device, and/or the like.
[0027] According to some embodiments, prior to analysis of the received one or more first images, the computing system may pre-process the received one or more first images using one or more image processing operations to prepare the received one or more first images for analysis, wherein the one or more image processing operations may comprise at least one of pre-whitening, resizing, aligning, cropping, or formatting, and/or the like.
[0028] In some embodiments, analyzing the received one or more first images to identify and highlight the one or more first objects captured by the first camera may comprise at least one of: identifying and highlighting one or more lanes of a roadway using a lane detection system; identifying and highlighting one or more landmarks along the roadway using a landmark detection system; or identifying and highlighting one or more objects on or near the roadway using an object detection system, the one or more objects comprising at least one of one or more people, one or more animals, one or more second vehicles, one or more traffic signs, one or more traffic lights, one or more roadway obstructions, or one or more other objects, and/or the like; and/or the like.
[0029] According to some embodiments, generating the one or more first fused images may comprise generating one or more image overlays based at least in part on analysis of the one or more first images, the one or more image overlays comprising at least one of text-based data, image-based data, or graphics-based data associated with information regarding at least one object among the identified one or more first objects, and/or the like; and fusing the one or more image overlays with the identified and highlighted one or more first objects and the one or more first images.
[0030] Merely by way of example, in some cases, the at least one first alert condition may each comprise at least one of driving on a lane marker along a roadway along which the first vehicle is travelling, drifting toward an adjacent lane on the roadway, driving between lanes on the roadway, drifting toward a shoulder of the roadway, driving on the shoulder of the roadway, driving toward a median along the roadway, traffic congestion detected ahead along the roadway, a traffic accident detected ahead along the roadway, a construction site detected ahead along the roadway, one or more people detected on or near the roadway, one or more animals detected on or near the roadway, one or more objects detected on or near the roadway, a tracked weather event detected along or near the roadway, a natural hazard detected ahead, a manmade hazard detected ahead, one or more people potentially intercepting the first vehicle along the roadway, one or more animals potentially intercepting the first vehicle along the roadway, one or more objects potentially intercepting the first vehicle along the roadway, or one or more third vehicles potentially intercepting the first vehicle along the roadway, and/or the like.
[0031] In some embodiments, performing the one or more driver assistance tasks may comprise at least one of: presenting the one or more first fused images on a display device on the mobile device; generating a graphical display depicting one or more of the at least one first alert condition or the one or more first fused images, and presenting the generated graphical display on the display device; generating a text-based message describing one or more of the at least one first alert condition or the one or more first fused images, and presenting the text-based message on the display device; or generating at least one audio message regarding one or more of the at least one first alert condition or the one or more first fused images, and presenting the at least one audio message on at least one audio speaker on the mobile device; and/or the like.
[0032] According to some embodiments, the computing system may receive one or more second images from at least one second camera, the at least one second camera comprising at least one of a third camera that is mounted to a second fixed position on the windshield of the first vehicle, a fourth camera that is integrated with the mobile device with the mobile device mounted to a third position on the windshield of the first vehicle and with the fourth camera pointed in front of the first vehicle, or a fifth camera that is mounted to a fourth fixed position on a rear window of the first vehicle, and/or the like; and may analyze the one or more second images, wherein the one or more second images from one of the third camera or the fourth camera may be analyzed to determine differences with the one or more first images from the first camera and to obtain stereoscopic vision or three-dimensional ("3D") data based on the determined differences, and wherein the one or more second images from the fifth camera may be analyzed to obtain rearview data based on detection of objects behind the first vehicle. In such cases, generating the one or more first fused images may comprise generating one or more second fused images by fusing at least one of the identified and highlighted one or more first objects, the 3D data, or the rearview data with the one or more first images, and/or the like.
[0033] In some embodiments, the computing system may receive one or more object detection signal data from at least one of one or more radar sensors or one or more lidar sensors that are mounted on the first vehicle and that are communicatively coupled to the mobile device; and may analyze the received one or more object detection signal data to identify and highlight one or more second objects and to determine whether the one or more second objects correspond to the one or more first objects, wherein any of the one or more second objects that are determined to correspond to any of the one or more first objects may be merged with said one or more first objects. In such cases, generating the one or more first fused images may comprise generating one or more third fused images by fusing at least one of the identified and highlighted one or more first objects or the identified and highlighted one or more second objects with the one or more first images.
[0034] In the various aspects described herein, a system and method are provided for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device (e.g., a smartphone, a mobile phone, a tablet computer, etc.). This allows for improvements over conventional ADAS systems that fall under the two categories of: (a) designated or dedicated ADAS hardware/software platforms; and (b) ADAS implementation using only a cellphone. These improvements are in terms of availability, cost, user experience, and performance. Regarding availability, the combination windshield camera and mobile device ADAS platform according to the various embodiments may be implemented on any existing vehicle, even without designated or dedicated ADAS hardware. Regarding cost, the combination windshield camera and mobile device ADAS platform according to the various embodiments is a low-cost implementation because windshield cameras are inexpensive, and widely available dashcams can be further modified to work as the required windshield camera for implementation according to the various embodiments. Regarding user experience, the combination windshield camera and mobile device ADAS platform according to the various embodiments brings a better user experience since users can put their cell phone at any convenient place, and the system is easy to use while providing the desired performance (as discussed below). Regarding performance, the combination windshield camera and mobile device ADAS platform according to the various embodiments may improve performance over cellphone-only implementations without the exorbitant costs of designated or dedicated ADAS systems in terms of the following points: (i) commonly available night vision functionalities of windshield mounted cameras allow for video data suitable for ADAS processing even under night and/or severe conditions; (ii) fixed-mounted windshield cameras make camera calibration a one-time task; and (iii) optional views from a second camera (e.g., another windshield camera or the phone's camera) allow for stereoscopic or 3D vision functionalities; and/or the like. Further, the various embodiments provide a low-latency communication scheme between the windshield camera and cell phone that enhances ADAS implementation.
[0035] These and other aspects of the system and method for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device are described in greater detail with respect to the figures.
[0036] The following detailed description illustrates a few embodiments in further detail to enable one of skill in the art to practice such embodiments. The described examples are provided for illustrative purposes and are not intended to limit the scope of the invention.
[0037] In the following description, for the purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the described embodiments. It will be apparent to one skilled in the art, however, that other embodiments of the present invention may be practiced without some of these details. In other instances, some structures and devices are shown in block diagram form. Several embodiments are described herein, and while various features are ascribed to different embodiments, it should be appreciated that the features described with respect to one embodiment may be incorporated with other embodiments as well. By the same token, however, no single feature or features of any described embodiment should be considered essential to every embodiment of the invention, as other embodiments of the invention may omit such features.
[0038] Unless otherwise indicated, all numbers used herein to express quantities, dimensions, and so forth used should be understood as being modified in all instances by the term "about." In this application, the use of the singular includes the plural unless specifically stated otherwise, and use of the terms "and" and "or" means "and/or" unless otherwise indicated. Moreover, the use of the term "including," as well as other forms, such as "includes" and "included," should be considered non-exclusive. Also, terms such as "element" or "component" encompass both elements and components comprising one unit and elements and components that comprise more than one unit, unless specifically stated otherwise.
[0039] Various embodiments as described herein - while embodying (in some cases) software products, computer-performed methods, and/or computer systems - represent tangible, concrete improvements to existing technological areas, including, without limitation, object detection technology, camera-mobile device video communication technology, driver assistance technology, and/or the like. In other aspects, some embodiments can improve the functioning of user equipment or systems themselves (e.g., object detection systems, camera- mobile device video communication systems, driver assistance systems, etc.), for example, by receiving, using a computing system on a mobile device, one or more first images from a first camera that is mounted to a first fixed position on a windshield of a first vehicle; analyzing, using the computing system on the mobile device, the received one or more first images to identify and highlight one or more first objects captured by the first camera; generating, using the computing system on the mobile device, one or more first fused images by fusing the identified and highlighted one or more first objects with the one or more first images; analyzing, using the computing system on the mobile device, the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle; and based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, performing, using the computing system on the mobile device, one or more driver assistance tasks; and/or the like.
[0040] In particular, to the extent any abstract concepts are present in the various embodiments, those concepts can be implemented as described herein by devices, software, systems, and methods that involve novel functionality (e.g., steps or operations), such as, implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device (e.g., a smartphone, a mobile phone, a tablet computer, etc.), and/or the like, to name a few examples, that extend beyond mere conventional computer processing operations. These functionalities can produce tangible results outside of the implementing computer system, including, merely by way of example, providing a low cost and easy to use ADAS system that may be used on any existing vehicle (even vehicles without designated or dedicated ADAS hardware) and provides a high performance to cost implementation (relying on functionalities (e.g., night vision, etc.) of windshield cameras, or the like) coupled with the low latency video transmission (as described herein) between the windshield camera and the mobile device (which serves as the computing core for the ADAS functionality), at least some of which may be observed or measured by users (e.g., drivers, ADAS technicians, etc.), developers, and/or object detection system or other ADAS manufacturers.
[0041] Some Embodiments
[0042] We now turn to the embodiments as illustrated by the drawings. Figs. 1-6 illustrate some of the features of the method, system, and apparatus for implementing advanced driver assistance system ("ADAS"), and, more particularly, to methods, systems, and apparatuses for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device, as referred to above. The methods, systems, and apparatuses illustrated by Figs. 1-6 refer to examples of different embodiments that include various components and steps, which can be considered alternatives or which can be used in conjunction with one another in the various embodiments. The description of the illustrated methods, systems, and apparatuses shown in Figs. 1-6 is provided for purposes of illustration and should not be considered to limit the scope of the different embodiments.
[0043] With reference to the figures, Fig. 1 is a schematic diagram illustrating a system 100 for implementing advanced driver assistance system ("ADAS") with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.
[0044] In the non-limiting embodiment of Fig. 1, system 100 may comprise a vehicle 105 and a mobile device 110 removably located therein. In some embodiments, the mobile device 110 may include, but is not limited to, computing system 115, communications system 120, one or more cameras 125, a display screen 130, and/or an audio speaker(s) 135 (optional), and/or the like. In some embodiments, the computing system 115 may include, without limitation, at least one of a driver assistance system (e.g., driver assistance system 115a, or the like), an object detection system or an object detection and ranging system (e.g., object detection system 115b, or the like), a positioning and mapping system, an image processing system, an image data fusing system, a graphics engine, a processor on the mobile device (e.g., one or more processors 115c, including, but not limited to, one or more central processing units ("CPUs"), graphics processing units ("GPUs"), and/or one or more other processors, and/or the like), a machine learning system (e.g., machine learning system 115d, including, but not limited to, at least one of an artificial intelligence ("AI") system, a machine learning system, a deep learning system, a neural network, a convolutional neural network ("CNN"), a deep neural network ("DNN"), or a fully convolutional network ("FCN"), and/or the like), and/or the like. In some instances, the mobile device 110 may include, but is not limited to, at least one of a smartphone, a tablet computer, a display device, an augmented reality ("AR") device, a virtual reality ("VR") device, or a mixed reality ("MR") device, and/or the like.
[0045] System 100 may further comprise one or more cameras 140a-140c that may be mounted in respective fixed positions on the windshield 145a at the front of the vehicle 105 or on the rear window 145b at the rear of the vehicle 105 (e.g., first camera 140a and/or second camera 140b (optional) may be mounted at respective first fixed position and second fixed position on the windshield 145a, while third camera 140c (optional) may be mounted at a third fixed position on the rear window 145b). Cameras 140a, 140b, and/or 125 may capture images or videos in front of the vehicle 105, including images or videos of one or more objects 150a-150n and/or one or more landmarks 155 that may be in front of vehicle 105, as well as detecting lanes on a roadway on which the vehicle 105 may be travelling. Camera 140c may capture images or videos behind the vehicle 105, including images or videos of one or more objects 160a-160n that may be behind vehicle 105.
[0046] In some embodiments, system 100 may further comprise a location determination system 165, which may communicate with a remote location signal source(s) 170 over network(s) 175. In some cases, location determination system 165 (and corresponding remote location signal source(s) 170) may utilize location determination data including, but not limited to, at least one of global positioning system ("GPS") data, global navigation satellite system ("GNSS") data, changes in image recognition-based landmark identification system data, changes in telecommunications signal triangulation-based location identification system data, and/or the like. Alternatively, or additionally, location determination system 165 may be used in conjunction with one or more radar sensors 180 (optional) and/or one or more lidar sensors 185 (optional) on vehicle 105, by using location determination data including, but not limited to, at least one of changes in radar-based location identification system data, changes in lidar-based location identification system data, and/or the like. In some cases, system 100 may further comprise an on-board diagnostics ("OBD2") scanner/transceiver 190 (optional) that may be used to access status data of various vehicle sub-systems, in some cases, via vehicle computing system 195, or the like.
[0047] According to some embodiments, communications system 120 may communicatively couple with one or more of first camera 140a or second camera 140b via wired cable connection (such as depicted in Fig. 1 by connector lines between communications system 120 and each of first camera 140a and second camera 140b, or the like) or via wireless communication link (such as depicted in Fig. 1 by lightning bolt symbols between communications system 120 and each of first camera 140a and second camera 140b, or the like). In some cases, communications system 120 may also communicatively couple with one or more of third camera 140c, location determination system 165, network(s) 175, the one or more radar sensors 180, the one or more lidar sensors 185, and/or the OBD2 scanner/transceiver 190 via wireless communication link(s) (such as depicted in Fig. 1 by lightning bolt symbols between communications system 120 and each of these components, or the like). In some embodiments, the wireless communications may include wireless communications using protocols including, but not limited to, at least one of Bluetooth™ communications protocol, WiFi communications protocol, or other 802.11 suite of communications protocols, ZigBee communications protocol, Z-wave communications protocol, or other 802.15.4 suite of communications protocols, cellular communications protocol (e.g., 3G, 4G, 4G LTE, 5G, etc.), or other suitable communications protocols, and/or the like.
[0048] In some cases, the network(s) 175 may each include a local area network ("LAN"), including, without limitation, a fiber network, an Ethernet network, a Token-Ring™ network, and/or the like; a wide-area network ("WAN"); a wireless wide area network ("WWAN"); a virtual network, such as a virtual private network ("VPN"); the Internet; an intranet; an extranet; a public switched telephone network ("PSTN"); an infra-red network; a wireless network, including, without limitation, a network operating under any of the IEEE 802.11 suite of protocols, the Bluetooth™ protocol known in the art, and/or any other wireless protocol; and/or any combination of these and/or other networks. In a particular embodiment, the network(s) 175 might include an access network of the service provider (e.g., an Internet service provider ("ISP")). In another embodiment, the network(s) 175 may include a core network of the service provider, and/or the Internet.
[0049] In operation, computing system 115 (herein, simply referred to as "computing system" or the like) may receive one or more first images from a first camera (e.g., first camera 140a, or the like) that is mounted to a first fixed position on a windshield (e.g., windshield 145a, or the like) of a first vehicle (e.g., vehicle 105, or the like). The computing system may analyze the received one or more first images to identify and highlight one or more first objects captured by the first camera (e.g., objects 150a-150n, landmark(s) 155, and/or the like). The computing system may generate one or more first fused images by fusing the identified and highlighted one or more first objects with the one or more first images, and may analyze the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle. Based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, the computing system may perform one or more driver assistance tasks.
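Merely by way of a non-limiting illustration, the operational flow described above can be summarized as a short processing loop. The following Python sketch is illustrative only; the helper functions detect_objects, fuse, check_alerts, and issue_alerts are hypothetical placeholders standing in for the detection, fusion, alert identification, and driver assistance steps, and are not part of the disclosure.

```python
# Minimal sketch of the described processing loop on the mobile device.
# detect_objects, fuse, check_alerts, and issue_alerts are hypothetical
# callables supplied by the detection/fusion/alerting stages described above.

def process_frame(frame, detect_objects, fuse, check_alerts, issue_alerts):
    """Run one windshield-camera frame through the sketched ADAS loop."""
    detections = detect_objects(frame)         # identify and highlight first objects
    fused = fuse(frame, detections)            # generate a first fused image
    alerts = check_alerts(fused, detections)   # identify alert conditions
    if alerts:                                 # at least one first alert condition
        issue_alerts(alerts, fused)            # perform driver assistance task(s)
    return fused, alerts
```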
[0050] Merely by way of example, in some cases, the at least one first alert condition may each include, but is not limited to, at least one of driving on a lane marker along a roadway along which the first vehicle is travelling, drifting toward an adjacent lane on the roadway, driving between lanes on the roadway, drifting toward a shoulder of the roadway, driving on the shoulder of the roadway, driving toward a median along the roadway, traffic congestion detected ahead along the roadway, a traffic accident detected ahead along the roadway, a construction site detected ahead along the roadway, one or more people detected on or near the roadway, one or more animals detected on or near the roadway, one or more objects detected on or near the roadway, a tracked weather event detected along or near the roadway, a natural hazard detected ahead, a manmade hazard detected ahead, one or more people potentially intercepting the first vehicle along the roadway, one or more animals potentially intercepting the first vehicle along the roadway, one or more objects potentially intercepting the first vehicle along the roadway, or one or more third vehicles potentially intercepting the first vehicle along the roadway, and/or the like.
[0051] In some embodiments, performing the one or more driver assistance tasks may include, without limitation, at least one of: presenting the one or more first fused images on a display device on the mobile device; generating a graphical display depicting one or more of the at least one first alert condition or the one or more first fused images, and presenting the generated graphical display on the display device; generating a text-based message describing one or more of the at least one first alert condition or the one or more first fused images, and presenting the text-based message on the display device; or generating at least one audio message regarding one or more of the at least one first alert condition or the one or more first fused images, and presenting the at least one audio message on at least one audio speaker on the mobile device; and/or the like.
[0052] In some instances, receiving the one or more first images from the first camera may comprise receiving one or more first video data from the first camera. In such cases, the computing system may determine an estimated speed of the first vehicle, and may adjust at least one of frame rate or resolution of transmission of the one or more first video data from the first camera as a function of the estimated speed of the first vehicle. The frame rate may be adjusted in a manner proportional to the estimated speed of the first vehicle, while the resolution may be adjusted in a manner inversely proportional to the estimated speed of the first vehicle, as follows:

f = α·s    (Eqn. 1)

p = β/s    (Eqn. 2)

where s denotes the speed of the first vehicle, f denotes the frame rate, p denotes the resolution of the video (e.g., progressive scan), and α and β denote two constants. In some embodiments, the following values may be used: Highway: f = 30 fps, p = 640; Local: f = 15 fps, p = 1080; or the like. Although specific values for f and p are described with respect to two conditions, the various embodiments are not so limited, and any suitable values for f and p may be used, including, but not limited to, frame rates in the ranges between 15 and 30 fps, between 24 and 30 fps, between 10 and 30 fps, or between 15 and 120 fps, or the like, and progressive scan resolution values including, but not limited to, 480, 576, 640, 720, 1080, or 2160, or the like.
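By way of a non-limiting illustration, the adaptive rate control of Eqns. 1 and 2 may be sketched as follows; the constants alpha and beta and the clamping bounds used here are illustrative assumptions chosen to roughly reproduce the example Highway/Local values above, not values prescribed by the embodiments.

```python
def adapt_stream(speed_mps, alpha=1.0, beta=19200.0,
                 f_bounds=(10, 120), p_bounds=(480, 2160)):
    """Sketch of Eqns. 1 and 2: frame rate f = alpha * s (proportional to speed),
    resolution p = beta / s (inversely proportional to speed), clamped to
    illustrative bounds. alpha, beta, and the bounds are assumptions only."""
    s = max(speed_mps, 1.0)  # guard against division by zero when stopped
    f = min(max(alpha * s, f_bounds[0]), f_bounds[1])
    p = min(max(beta / s, p_bounds[0]), p_bounds[1])
    return round(f), int(p)

# Illustrative behaviour: ~30 m/s (highway-like) gives roughly 30 fps at p = 640,
# while ~15 m/s (local-like) gives roughly 15 fps at p = 1280.
```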
[0053] In some embodiments, the wireless communication link may include, without limitation, a WiFi communication link, or the like. In some cases, the built-in WiFi of the first camera (or of the at least one second camera) (if available) may be used, with the mobile device being set as a client. Alternatively, the built-in WiFi of the mobile device (if available) may be used, with the first camera (or the at least one second camera) being set as a client. In this manner, a reliable and high-speed connection may be provided to enable real-time (or near-real-time) video transmission between the first camera (or the at least one second camera) and the mobile device.
[0054] In some cases, determining the estimated speed of the first vehicle may include, without limitation, determining an estimated speed of the first vehicle based on at least one of GPS data, GNSS data, changes in image recognition-based landmark identification system data, changes in telecommunications signal triangulation-based location identification system data, changes in radar-based location identification system data, changes in lidar-based location identification system data, or speed data obtained from a vehicle computing system of the first vehicle via a communications link between the vehicle computing system and the computing system on the mobile device, and/or the like.
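As a non-limiting illustration of one of the speed sources listed above, vehicle speed can be roughly estimated from two timestamped GPS fixes; the sketch below uses the haversine great-circle distance and applies no filtering or outlier rejection.

```python
import math

def gps_speed(lat1, lon1, t1, lat2, lon2, t2):
    """Estimate speed (m/s) from two timestamped GPS fixes using the haversine
    great-circle distance; no filtering or outlier rejection is applied."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    dist = 2 * r * math.asin(math.sqrt(a))
    dt = t2 - t1
    return dist / dt if dt > 0 else 0.0
```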
[0055] In some instances, at least one of GPS data, GNSS data, image recognition-based landmark identification system data or changes therein, and/or telecommunications signal triangulation-based location identification system data or changes therein, or the like may be received from location signal source(s) 170 via network(s) 175, location determination system 165, and communications system 120, or the like. In some cases, at least one of radar-based location identification system data or changes therein, lidar-based location identification system data or changes therein, or speed data obtained from the vehicle computing system (e.g., vehicle computing system 195) of the first vehicle, or the like may be received from the one or more radar sensors 180, the one or more lidar sensors 185, and the OBD2 scanner/transceiver 190, respectively, via wireless communications links (denoted in Fig. 1 by lightning bolt symbols) with communications system 120, or the like.

[0056] According to some embodiments, prior to analysis of the received one or more first images, the computing system may pre-process the received one or more first images using one or more image processing operations to prepare the received one or more first images for analysis. In some cases, the one or more image processing operations may include, without limitation, at least one of pre-whitening, resizing, aligning, cropping, or formatting, and/or the like.
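By way of a non-limiting illustration, the pre-processing operations mentioned above might be sketched as follows; the target size, crop region, and whitening scheme are illustrative assumptions rather than parameters specified by the embodiments.

```python
import cv2
import numpy as np

def preprocess(frame, size=(640, 384)):
    """Sketch of the pre-processing step: resize, crop to the region ahead,
    and pre-whiten (zero mean, unit variance) before detection. The target
    size and crop are illustrative assumptions."""
    resized = cv2.resize(frame, size)
    h = resized.shape[0]
    cropped = resized[h // 4:, :]                            # drop the sky region above the road
    x = cropped.astype(np.float32)
    whitened = (x - x.mean()) / max(float(x.std()), 1e-6)    # pre-whitening
    return whitened
```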
[0057] In some embodiments, analyzing the received one or more first images to identify and highlight the one or more first objects captured by the first camera may comprise at least one of: identifying and highlighting one or more lanes of a roadway using a lane detection system; identifying and highlighting one or more landmarks (e.g., landmark(s) 155, or the like) along the roadway using a landmark detection system; or identifying and highlighting one or more objects (e.g., objects 150a-150n and/or 160a-160n, or the like) on or near the roadway using an object detection system, the one or more objects including, without limitation, at least one of one or more people, one or more animals, one or more second vehicles, one or more traffic signs, one or more traffic lights, one or more roadway obstructions, or one or more other objects, and/or the like; and/or the like.
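Merely as a non-limiting illustration of lane detection, a simple classical sketch (edge detection plus a probabilistic Hough transform) is shown below; the embodiments contemplate DNN-based detection systems, so this stand-in only conveys the kind of output (highlightable lane segments) such a system produces, and its thresholds and region of interest are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_lane_segments(bgr_frame):
    """Classical lane-marker stand-in (Canny edges + probabilistic Hough transform).
    Returns line segments (x1, y1, x2, y2) in image coordinates that a fused image
    could highlight; thresholds and the triangular region of interest are illustrative."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w // 2, h // 2), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)                      # keep only the road region ahead
    segments = cv2.HoughLinesP(edges & mask, 1, np.pi / 180, 50,
                               minLineLength=40, maxLineGap=20)
    return [] if segments is None else [tuple(s[0]) for s in segments]
```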
[0058] According to some embodiments, generating the one or more first fused images may comprise generating one or more image overlays based at least in part on analysis of the one or more first images, the one or more image overlays including, but not limited to, at least one of text-based data, image-based data, or graphics-based data associated with information regarding at least one object among the identified one or more first objects, and/or the like; and fusing the one or more image overlays with the identified and highlighted one or more first objects and the one or more first images.
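By way of a non-limiting illustration, fusing identified and highlighted objects (and their overlays) with a source image can be sketched as drawing labeled boxes onto the frame; the detection tuple format assumed below is hypothetical.

```python
import cv2

def fuse_overlays(frame, detections):
    """Fuse identified objects with the source image by drawing highlight boxes and
    text overlays; `detections` is assumed to be a list of
    (label, confidence, (x1, y1, x2, y2)) tuples produced by the detection step."""
    fused = frame.copy()
    for label, conf, (x1, y1, x2, y2) in detections:
        cv2.rectangle(fused, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(fused, f"{label} {conf:.2f}", (x1, max(y1 - 5, 12)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return fused
```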
[0059] According to some embodiments, the computing system may receive one or more second images from at least one second camera (e.g., second camera 140b, camera(s) 125, or third camera 140c, or the like). The at least one second camera may include, without limitation, at least one of a third camera (e.g., second camera 140b, or the like) that is mounted to a second fixed position on the windshield (e.g., windshield 145a) of the first vehicle, a fourth camera (e.g., camera(s) 125, or the like) that is integrated with the mobile device (e.g., mobile device 110, or the like) with the mobile device mounted to a third position on the windshield of the first vehicle and with the fourth camera pointed in front of the first vehicle, or a fifth camera (e.g., third camera 140c, or the like) that is mounted to a fourth fixed position on a rear window (e.g., rear window 145b, or the like) of the first vehicle, and/or the like. The computing system may analyze the one or more second images. For example, the one or more second images from one of the third camera (e.g., second camera 140b, or the like) or the fourth camera (e.g., camera(s) 125, or the like) may be analyzed to determine differences with the one or more first images from the first camera (e.g., first camera 140a, or the like) and to obtain stereoscopic vision or three-dimensional ("3D") vision data based on the determined differences. The one or more second images from the fifth camera (e.g., third camera 140c, or the like) may be analyzed to obtain rearview data based on detection of objects (e.g., objects 160a-160n, or the like) behind the first vehicle. In such cases, generating the one or more first fused images may comprise generating one or more second fused images by fusing at least one of the identified and highlighted one or more first objects, the 3D data, or the rearview data with the one or more first images, and/or the like.
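As a non-limiting illustration of obtaining stereoscopic or 3D data from two forward-facing views, a coarse disparity map can be computed with block matching; the sketch below assumes the two images are already rectified and uses illustrative matcher parameters.

```python
import cv2

def disparity_map(left_gray, right_gray):
    """Compute a coarse disparity (stereo/3D) map from two forward-facing views.
    Assumes the two 8-bit grayscale images are already rectified and of equal size;
    the block-matching parameters are illustrative."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray)   # fixed-point, scaled by 16
    return disparity.astype("float32") / 16.0
```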
[0060] In some embodiments, the computing system may receive one or more object detection signal data from at least one of one or more radar sensors (e.g., radar sensor(s) 180, or the like) or one or more lidar sensors (e.g., lidar sensor(s) 185, or the like) that may be mounted on the first vehicle and that may be communicatively coupled to the mobile device. The computing system may analyze the received one or more object detection signal data to identify and highlight one or more second objects and to determine whether the one or more second objects correspond to the one or more first objects. In some cases, any of the one or more second objects that are determined to correspond to any of the one or more first objects may be merged with said one or more first objects. In such cases, generating the one or more first fused images may comprise generating one or more third fused images by fusing at least one of the identified and highlighted one or more first objects or the identified and highlighted one or more second objects with the one or more first images.
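Merely by way of a non-limiting illustration, merging radar- or lidar-derived second objects with camera-derived first objects can be sketched as an overlap test between boxes; the dictionary structure, the 'box' and 'range_m' fields, and the IoU threshold below are illustrative assumptions, and projection of sensor returns into the image plane is assumed to be handled upstream.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def merge_sensor_objects(camera_objs, sensor_objs, threshold=0.5):
    """Merge radar/lidar-derived objects into camera-derived objects when their
    image-plane boxes overlap sufficiently; unmatched sensor objects are returned
    separately. Each object is assumed to be a dict with a 'box' key and an
    optional 'range_m' key (hypothetical fields)."""
    merged, unmatched = list(camera_objs), []
    for s in sensor_objs:
        match = next((c for c in merged if iou(c["box"], s["box"]) >= threshold), None)
        if match is not None:
            match.setdefault("range_m", s.get("range_m"))  # enrich with sensor range
        else:
            unmatched.append(s)
    return merged, unmatched
```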
[0061] In the various aspects, ADAS with a camera(s) on a windshield of a vehicle and a mobile device (e.g., a smartphone, a mobile phone, a tablet computer, etc.) allows for improvements over conventional ADAS systems that fall under the two categories of: (a) designated or dedicated ADAS hardware/software platforms; and (b) ADAS implementation using only a cellphone. These improvements are in terms of availability, cost, user experience, and performance. Regarding availability, the combination windshield camera and mobile device ADAS platform according to the various embodiments may be implemented on any existing vehicle, even without designated or dedicated ADAS hardware. Regarding cost, the combination windshield camera and mobile device ADAS platform according to the various embodiments is a low-cost implementation because windshield cameras are inexpensive, and widely available dashcams can be further modified to work as the required windshield camera for implementation according to the various embodiments. Regarding user experience, the combination windshield camera and mobile device ADAS platform according to the various embodiments brings a better user experience since users can put their cell phone at any convenient place, and the system is easy to use while providing the desired performance (as discussed below). Regarding performance, the combination windshield camera and mobile device ADAS platform according to the various embodiments may improve performance over cellphone-only implementations, without the exorbitant costs of designated or dedicated ADAS systems, in terms of the following points: (i) commonly available night vision functionalities of windshield-mounted cameras allow for video data suitable for ADAS processing even under night and/or severe conditions; (ii) fixed-mounted windshield cameras make camera calibration a one-time task; and (iii) optional views from a second camera (e.g., another windshield camera or the phone's camera) allow for stereoscopic or 3D vision functionalities; and/or the like. Further, the various embodiments provide a low-latency communication scheme between the windshield camera and cell phone that enhances ADAS implementation.
[0062] These and other functions of the system 100 (and its components) are described in greater detail below with respect to Figs. 2-4.
[0063] Fig. 2 is a schematic block flow diagram illustrating a non-limiting example 200 of a process for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.
[0064] With reference to the non-limiting example 200 of Fig. 2, a windshield camera 140 and a mobile device 110 may be used within a vehicle 105 to provide driver assistance (including, but not limited to, ADAS functionalities, or the like). As shown in Fig. 2, windshield camera 140 may include, without limitation, at least one of a first camera 140a, a video encoder 205, or a transmitter 210, and/or the like. Mobile device 110 may include, but is not limited to, at least one of computing system 115, camera(s) 125, receiver 215, video decoder 220, display screen 130, or audio speaker(s) 135, and/or the like. In some instances, vehicle 105, mobile device 110, computing system 115, camera(s) 125, display screen 130, audio speaker(s) 135, and first camera 140a in Fig. 2 may be similar, if not identical, to corresponding vehicle 105, mobile device 110, computing system 115, camera(s) 125, display screen 130, audio speaker(s) 135, and first camera 140a in Fig. 1, and the descriptions of these components in Fig. 1 may be applicable to the descriptions of the corresponding components in Fig. 2.

[0065] In operation, the first camera 140a may capture one or more images or videos of objects and/or landmarks in front of vehicle 105. Video encoder 205 may encode the video data from the first camera 140a, and transmitter 210 may transmit the encoded video data to receiver 215 in mobile device 110. Video decoder 220 decodes the encoded video data received by receiver 215. In the case that camera(s) 125 of the mobile device 110 is also used to capture the one or more images or videos of objects and/or landmarks in front of vehicle 105 (denoted as being optional by the long-dash lined arrow between camera(s) 125 and video decoder 220, or the like), the video decoder 220 may also be used to decode such video, or, if not encoded, may pass such video through from camera(s) 125. Although not shown in Fig. 2, camera 140b or camera 140c in Fig. 1 may each also be embodied in a similar windshield camera as windshield camera 140, and the processes would be identical to those for camera 140a and windshield camera 140, as described above.
[0066] At block 225, camera calibration may be performed. Herein, camera calibration may refer to the process of estimating intrinsic and/or extrinsic parameters. Intrinsic parameters deal with the camera's internal characteristics (including, but not limited to, its focal length, skew, distortion, and/or image center, and/or the like). In ADAS, calibration is also required to determine the relative position between the vehicle and cameras (which is one of the extrinsic parameters). For cameras (e.g., cameras 140a, 140b, and/or 140c in Fig. 1) that are mounted in fixed locations on a windshield or rear window of a vehicle, camera calibration need only be performed each time after the cameras have been mounted or remounted; otherwise, camera calibration may be bypassed (denoted as being optional by the short-dash lined arrows between video decoder 220 and camera calibration 225 and between camera calibration 225 and pre-processing 230, or the like, with direct connection between video decoder 220 and pre-processing 230). For cameras (e.g., camera(s) 125) that are not mounted in a fixed position (even if the mounting apparatus does not move relative to the windshield), the camera's intrinsic and/or extrinsic characteristics may change due to even minor shifts in the position of the mobile device within its mounting apparatus on the windshield (either during mounting/remounting or even during normal operation). Accordingly, camera calibration 225 must be performed often, prior to pre-processing (at block 230).
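By way of a non-limiting illustration, the one-time intrinsic calibration for a fixed-mounted camera could be performed with standard chessboard views; the pattern size and square size below are illustrative assumptions, and the separate extrinsic (camera-to-vehicle) calibration is not shown.

```python
import cv2
import numpy as np

def calibrate_from_chessboard(images_gray, pattern_size=(9, 6), square_size_m=0.025):
    """One-time intrinsic calibration sketch from chessboard views, as might be done
    after a fixed windshield camera is mounted or remounted. Pattern and square size
    are illustrative; extrinsic (camera-to-vehicle) calibration is not shown."""
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
    objp *= square_size_m
    obj_points, img_points = [], []
    for gray in images_gray:
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
    if not obj_points:
        raise ValueError("no chessboard detected in any calibration image")
    _, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, images_gray[0].shape[::-1], None, None)
    return camera_matrix, dist_coeffs
```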
[0067] At block 230, pre-processing may be performed, in which some image processing operations are performed to make the images/video ready for object detection. In some instances, the image processing operations may include, without limitation, at least one of pre-whitening, resizing, aligning, cropping, or formatting, and/or the like.

[0068] Object detection may then be performed at blocks 235-245, at which modelling algorithms (including, but not limited to, DNN algorithms or other AI, machine learning, or neural network algorithms or systems) may be implemented for identifying and highlighting one or more lanes of a roadway using lane detection (at block 235); identifying and highlighting one or more landmarks along the roadway using landmark detection (at block 240); and/or identifying and highlighting one or more objects on or near the roadway using object detection (at block 245); and/or the like. In some cases, the one or more objects may include, without limitation, at least one of one or more people, one or more animals, one or more second vehicles, one or more traffic signs, one or more traffic lights, one or more roadway obstructions, or one or more other objects, and/or the like.
[0069] At block 250, information from one or more of lane detection (at block 235), landmark detection (at block 240), and/or object detection (at block 245) may be fused. Decision-making may then occur based on the fused information (at block 255), resulting in initiation of action(s) (at block 260). In this manner, based on the object detection results (at blocks 235-245) and/or based on the fused information, the system may determine the relative position of vehicle 105 and the lanes of the roadway and other vehicles, as well as determining the drivable areas, etc., and appropriate action(s) may be taken.
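Merely as a non-limiting illustration of decision-making over fused information, a toy lane-departure rule is sketched below; the assumption that the image centre approximates the vehicle centreline and the margin fraction are illustrative only.

```python
def lane_departure_alert(lane_x_left, lane_x_right, frame_width, margin_frac=0.08):
    """Toy decision rule over fused lane information: flag a possible lane-departure
    alert condition when the image centre (assumed to approximate the vehicle
    centreline for a roughly centred windshield camera) nears a lane boundary."""
    centre = frame_width / 2.0
    margin = margin_frac * frame_width
    if centre - lane_x_left < margin:
        return "drifting toward left lane boundary"
    if lane_x_right - centre < margin:
        return "drifting toward right lane boundary"
    return None
```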
[0070] In some embodiments, initiating actions (at block 260) may include, without limitation, at least one of: presenting the one or more first fused images on a display device (e.g., display screen 130, or the like) on the mobile device; generating a graphical display depicting one or more of the at least one first alert condition or the one or more first fused images, and presenting the generated graphical display on the display device (e.g., display screen 130, or the like); generating a text-based message describing one or more of the at least one first alert condition or the one or more first fused images, and presenting the text-based message on the display device (e.g., display screen 130, or the like); or generating at least one audio message regarding one or more of the at least one first alert condition or the one or more first fused images, and presenting the at least one audio message on at least one audio speaker (e.g., audio speaker(s) 135, or the like) on the mobile device; and/or the like.
[0071] These and other functions of the example 200 (and its components) are described in greater detail below with respect to Figs. 1, 3, and 4.
[0072] Fig. 3A is an image illustrating a non-limiting example 300 of the use of a windshield camera in conjunction with a mobile device during implementation of ADAS, in accordance with various embodiments. Fig. 3B is an image illustrating a non-limiting example 300' of a fused image that is generated during implementation of ADAS with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.
[0073] As illustrated in the non-limiting example 300 of Fig. 3A, implementation of the various embodiments of the driver assistance system includes a windshield camera (e.g., windshield camera 140a, or the like) and a cell phone (e.g., mobile device 110, or the like). The windshield camera, when properly mounted on the windshield, can have a good view of the environment in front of the vehicle, while the cell phone can be placed at any convenient place for the user(s). In some embodiments, an ADAS software package may be installed on the cell phone.
[0074] Real-time video captured by the windshield camera may be encoded and sent to the cell phone via WiFi or USB cable. The cell phone serves as the computation unit, where the incoming videos may be decoded and analyzed. Based on the understanding of the scene, a decision will also be made by the cell phone. Subsequently, one or more corresponding actions (including, but not limited to, alarm(s), reminder(s), etc.) may be taken.
[0075] An example of fusion of road detection results is illustrated in Fig. 3B. Here, a set of DNN algorithms was applied for road detection. These algorithms were mainly based on Yolov5 or similar algorithms that are designed to be light-weight, fast, and suitable for real-time processing in a cell phone. These algorithms are fine-tuned with local data. As shown in Fig. 3B, lane detection results are highlighted as well as object detection results (in some cases, with probability of correct identification of objects, including, but not limited to, cars, traffic lights, etc.).
[0076] Figs. 4A-4G (collectively, "Fig. 4") are flow diagrams illustrating a method 400 for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments. Method 400 of Fig. 4E returns to Fig. 4A following the circular marker denoted, "A."
[0077] While the techniques and procedures are depicted and/or described in a certain order for purposes of illustration, it should be appreciated that certain procedures may be reordered and/or omitted within the scope of various embodiments. Moreover, while the method 400 illustrated by Fig. 4 can be implemented by or with (and, in some cases, are described below with respect to) the systems, examples, or embodiments 100, 200, 300, and 300' of Figs. 1, 2, 3A, and 3B, respectively (or components thereof), such methods may also be implemented using any suitable hardware (or software) implementation. Similarly, while each of the systems, examples, or embodiments 100, 200, 300, and 300' of Figs. 1, 2, 3A, and 3B, respectively (or components thereof), can operate according to the method 400 illustrated by Fig. 4 (e.g., by executing instructions embodied on a computer readable medium), the systems, examples, or embodiments 100, 200, 300, and 300' of Figs. 1, 2, 3A, and 3B can each also operate according to other modes of operation and/or perform other suitable procedures.
[0078] In the non-limiting embodiment of Fig. 4A, method 400, at block 405, may comprise receiving, using a computing system on a mobile device, one or more first images from a first camera that is mounted to a first fixed position on a windshield of a first vehicle. In some embodiments, the computing system may include, without limitation, at least one of a driver assistance system, an object detection system, an object detection and ranging system, a positioning and mapping system, an image processing system, an image data fusing system, a graphics engine, a processor on the mobile device, at least one central processing unit ("CPU") on the mobile device, at least one graphics processing unit ("GPU") on the mobile device, a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a convolutional neural network ("CNN"), a deep neural network ("DNN"), or a fully convolutional network ("FCN"), and/or the like. In some instances, the mobile device may include, but is not limited to, at least one of a smartphone, a tablet computer, a display device, an augmented reality ("AR") device, a virtual reality ("VR") device, or a mixed reality ("MR") device, and/or the like.
[0079] At block 410, method 400 may comprise pre-processing, using the computing system on the mobile device, the received one or more first images using one or more image processing operations to prepare the received one or more first images for analysis. In some cases, the one or more image processing operations may include, without limitation, at least one of pre-whitening, resizing, aligning, cropping, or formatting, and/or the like.
[0080] Method 400 may further comprise, at block 415, analyzing, using the computing system on the mobile device, the received one or more first images to identify and highlight one or more first objects captured by the first camera.
[0081] At optional block 420, method 400 may comprise receiving, using the computing system on the mobile device, one or more second images from at least one second camera. According to some embodiments, the at least one second camera may include, but is not limited to, at least one of a third camera that is mounted to a second fixed position on the windshield of the first vehicle, a fourth camera that is integrated with the mobile device with the mobile device mounted to a third position on the windshield of the first vehicle and with the fourth camera pointed in front of the first vehicle, or a fifth camera that is mounted to a fourth fixed position on a rear window of the first vehicle, and/or the like.

[0082] Method 400, at optional block 425, may comprise pre-processing, using the computing system on the mobile device, the received one or more second images using the one or more image processing operations to prepare the received one or more second images for analysis. Method 400 may further comprise, at optional block 430, analyzing, using the computing system on the mobile device, the one or more second images. In some cases, the one or more second images from one of the third camera or the fourth camera may be analyzed to determine differences with the one or more first images from the first camera and to obtain stereoscopic vision or three-dimensional ("3D") data based on the determined differences. In some instances, the one or more second images from the fifth camera may be analyzed to obtain rearview data based on detection of objects behind the first vehicle.
[0083] At block 435, method 400 may comprise generating, using the computing system on the mobile device, one or more first fused images. Method 400 may further comprise analyzing, using the computing system on the mobile device, the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle (block 440); and based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, performing, using the computing system on the mobile device, one or more driver assistance tasks (block 445).
[0084] Merely by way of example, in some cases, the at least one first alert condition may each include, but is not limited to, at least one of driving on a lane marker along a roadway along which the first vehicle is travelling, drifting toward an adjacent lane on the roadway, driving between lanes on the roadway, drifting toward a shoulder of the roadway, driving on the shoulder of the roadway, driving toward a median along the roadway, traffic congestion detected ahead along the roadway, a traffic accident detected ahead along the roadway, a construction site detected ahead along the roadway, one or more people detected on or near the roadway, one or more animals detected on or near the roadway, one or more objects detected on or near the roadway, a tracked weather event detected along or near the roadway, a natural hazard detected ahead, a manmade hazard detected ahead, one or more people potentially intercepting the first vehicle along the roadway, one or more animals potentially intercepting the first vehicle along the roadway, one or more objects potentially intercepting the first vehicle along the roadway, or one or more third vehicles potentially intercepting the first vehicle along the roadway, and/or the like.
[0085] With reference to the non-limiting example of Fig. 4B, receiving the one or more first images from the first camera (at block 405) or receiving the one or more second images from the at least one second camera (at optional block 420) may comprise one of: receiving, using the computing system on the mobile device, the one or more first images from the first camera or the one or more second images from the at least one second camera via a wireless communication link between the first camera and the mobile device or between the at least one second camera and the mobile device (block 450a); or receiving, using the computing system on the mobile device, the one or more first images from the first camera or the one or more second images from the at least one second camera via a wired cable communication link between the first camera and the mobile device or between the at least one second camera and the mobile device (block 450b).
[0086] In some embodiments, the wireless communication link may include, without limitation, a WiFi communication link, or the like. In some cases, the built-in WiFi of the first camera (or of the at least one second camera) (if available) may be used, with the mobile device being set as a client. Alternatively, the built-in WiFi of the mobile device (if available) may be used, with the first camera (or the at least one second camera) being set as a client. In this manner, a reliable and high-speed connection may be provided to enable real-time (or near-real-time) video transmission between the first camera (or the at least one second camera) and the mobile device.
[0087] In some embodiments, receiving the one or more first images from the first camera may comprise receiving one or more first video data from the first camera (this also applies to the one or more second images as one or more second video data from the at least one second camera). In such embodiments, low-latency transmission from the windshield camera to the cell phone is critical to successful implementation of ADAS. Herein, an adaptive rate control approach has been designed to reduce the latency and communication load based on the following observations: (a) if the surrounding environment is complex, then the vehicle speed is generally slower, and thus larger latency may be tolerated but more details about the environment may be required; or (b) if the surrounding environment is simple, then the vehicle speed could be faster, and thus lower latency is required but fewer details about the environment may be required.
[0088] Turning to the non-limiting example of Fig. 4C, method 400 may further comprise determining, using the computing system on the mobile device, an estimated speed of the first vehicle (block 455); and adjusting, using the computing system on the mobile device, at least one of frame rate or resolution of transmission of the one or more first video data from the first camera as a function of the estimated speed of the first vehicle (block 460a) or adjusting, using the computing system on the mobile device, at least one of frame rate or resolution of transmission of the one or more second video data from the second camera as a function of the estimated speed of the first vehicle (block 460b). The frame rate may be adjusted in a manner proportional to the estimated speed of the first vehicle, while the resolution may be adjusted in a manner inversely proportional to the estimated speed of the first vehicle, as follows:

f = α·s    (Eqn. 1)

p = β/s    (Eqn. 2)

where s denotes the speed of the first vehicle, f denotes the frame rate, p denotes the resolution of the video (e.g., progressive scan), and α and β denote two constants. In some embodiments, the following values may be used: Highway: f = 30 fps, p = 640; Local: f = 15 fps, p = 1080; or the like. Although specific values for f and p are described with respect to two conditions, the various embodiments are not so limited, and any suitable values for f and p may be used, including, but not limited to, frame rates in the ranges between 15 and 30 fps, between 24 and 30 fps, between 10 and 30 fps, or between 15 and 120 fps, or the like, and progressive scan resolution values including, but not limited to, 480, 576, 640, 720, 1080, or 2160, or the like.
[0089] In some cases, determining the estimated speed of the first vehicle may include, without limitation, determining, using the computing system on the mobile device, an estimated speed of the first vehicle based on at least one of global positioning system ("GPS") data, global navigation satellite system ("GNSS") data, changes in image recognition-based landmark identification system data, changes in telecommunications signal triangulation-based location identification system data, changes in radar-based location identification system data, changes in lidar-based location identification system data, or speed data obtained from a vehicle computing system of the first vehicle via a communications link between the vehicle computing system and the computing system on the mobile device, and/or the like.

[0090] With reference to the non-limiting example of Fig. 4D, analyzing the received one or more first images to identify and highlight the one or more first objects captured by the first camera (at block 415) may comprise at least one of: identifying and highlighting one or more lanes of a roadway using a lane detection system (block 415a); identifying and highlighting one or more landmarks along the roadway using a landmark detection system (block 415b); or identifying and highlighting one or more objects on or near the roadway using an object detection system (block 415c); and/or the like.
[0091] In some cases, the one or more lanes may include, without limitation, at least one of single-lane roads, bridges, or paths; two-lane roads with one or more of no-passing lane markers in one or more first stretches of the roadway, one-way passing-permitted lane markers in one or more second stretches of the roadway, two-way passing-permitted lane markers in one or more third stretches of the roadway, and/or the like; three-lane roads with a reversible lane in the middle (and corresponding lane markers and overhead traffic lights or traffic flow directions, or the like) allowing traffic to travel in either direction depending on traffic conditions; four-lane roadways; multi-lane highways (with five or more lanes, with lane markings similar to those described above with respect to roadways having fewer lanes); turn lanes; highway merge lanes; highway exit lanes; and so on. In some instances, the one or more landmarks may include, but are not limited to, natural formations, manmade structures (e.g., buildings, bridges, or other public works structures, or the like), signage for any such landmarks, and/or the like. In some cases, the one or more objects may include, without limitation, at least one of one or more people, one or more animals, one or more second vehicles, one or more traffic signs, one or more traffic lights, one or more roadway obstructions, or one or more other objects, and/or the like.
[0092] Referring to the non-limiting example of Fig. 4E, method 400 may further comprise receiving, using the computing system on the mobile device, one or more object detection signal data from at least one of one or more radar sensors or one or more lidar sensors that may be mounted on the first vehicle and that may be communicatively coupled to the mobile device (at optional block 465); analyzing, using the computing system on the mobile device, the received one or more object detection signal data to identify and highlight one or more second objects and to determine whether the one or more second objects correspond to the one or more first objects (at optional block 470); and merging, using the computing system on the mobile device, any of the one or more second objects that are determined to correspond to any of the one or more first objects with said one or more first objects (at optional block 475); and/or the like.
[0093] Method 400 may return to the process at block 435 in Fig. 4A following the circular marker denoted, "A."
[0094] In the non-limiting example of Fig. 4F, generating the one or more first fused images (at block 435) may comprise one of: [in embodiments following, e.g., the processes at blocks 405-415 in Fig. 4A, or the like] fusing the identified and highlighted one or more first objects with the one or more first images (block 480); [in embodiments following, e.g., the processes at blocks 405-415 in Fig. 4A, or the like] generating, using the computing system on the mobile device, one or more image overlays based at least in part on analysis of the one or more first images, the one or more image overlays comprising at least one of text-based data, image-based data, or graphics-based data associated with information regarding at least one object among the identified one or more first objects, and/or the like (block 485a), and fusing, using the computing system on the mobile device, the one or more image overlays with the identified and highlighted one or more first objects and the one or more first images (block 485b); [in embodiments following, e.g., the processes at optional blocks 420-430 in Fig. 4A, or the like] generating, using the computing system on the mobile device, one or more second fused images by fusing at least one of the identified and highlighted one or more first objects, the 3D data, or the rearview data with the one or more first images, and/or the like (block 490); [in embodiments following, e.g., the processes at optional blocks 465-475 in Fig. 4E, or the like] generating, using the computing system on the mobile device, one or more third fused images by fusing at least one of the identified and highlighted one or more first objects or the identified and highlighted one or more second objects with the one or more first images (block 495); and/or the like.
[0095] Turning to the non-limiting example of Fig. 4G, performing the one or more driver assistance tasks (at block 445) may comprise at least one of: presenting the one or more first fused images on a display device on the mobile device (block 445a); generating a graphical display depicting one or more of the at least one first alert condition or the one or more first fused images, and presenting the generated graphical display on the display device (block 445b); generating a text-based message describing one or more of the at least one first alert condition or the one or more first fused images, and presenting the text-based message on the display device (block 445c); or generating at least one audio message regarding one or more of the at least one first alert condition or the one or more first fused images, and presenting the at least one audio message on at least one audio speaker on the mobile device (block 445d); and/or the like.
[0096] Examples of System and Hardware Implementation
[0097] Fig. 5 is a block diagram illustrating an example of computer or system hardware architecture, in accordance with various embodiments. Fig. 5 provides a schematic illustration of one embodiment of a computer system 500 of the service provider system hardware that can perform the methods provided by various other embodiments, as described herein, and/or can perform the functions of computer or hardware system (i.e., mobile device 110, computing system(s) 115, location determination system 165, location signal source(s) 170, and vehicle computing system 195, etc.), as described above. It should be noted that Fig. 5 is meant only to provide a generalized illustration of various components, of which one or more (or none) of each may be utilized as appropriate. Fig. 5, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner.
[0098] The computer or hardware system 500 - which might represent an embodiment of the computer or hardware system (i.e., mobile device 110, computing system(s) 115, location determination system 165, location signal source(s) 170, and vehicle computing system 195, etc.), described above with respect to Figs. 1-4 - is shown comprising hardware elements that can be electrically coupled via a bus 505 (or may otherwise be in communication, as appropriate). The hardware elements may include one or more processors 510, including, without limitation, one or more general-purpose processors and/or one or more special-purpose processors (such as microprocessors, digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 515, which can include, without limitation, a mouse, a keyboard, and/or the like; and one or more output devices 520, which can include, without limitation, a display device, a printer, and/or the like.
[0099] The computer or hardware system 500 may further include (and/or be in communication with) one or more storage devices 525, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device such as a random access memory ("RAM") and/or a read-only memory ("ROM"), which can be programmable, flash-updateable, and/or the like. Such storage devices may be configured to implement any appropriate data stores, including, without limitation, various file systems, database structures, and/or the like.
[0100] The computer or hardware system 500 might also include a communications subsystem 530, which can include, without limitation, a modem, a network card (wireless or wired), an infra-red communication device, a wireless communication device and/or chipset (such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a WWAN device, cellular communication facilities, etc.), and/or the like. The communications subsystem 530 may permit data to be exchanged with a network (such as the network described below, to name one example), with other computer or hardware systems, and/or with any other devices described herein. In many embodiments, the computer or hardware system 500 will further comprise a working memory 535, which can include a RAM or ROM device, as described above.
[0101] The computer or hardware system 500 also may comprise software elements, shown as being currently located within the working memory 535, including an operating system 540, device drivers, executable libraries, and/or other code, such as one or more application programs 545, which may comprise computer programs provided by various embodiments (including, without limitation, hypervisors, VMs, and the like), and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, one or more procedures described with respect to the method(s) discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods.

[0102] A set of these instructions and/or code might be encoded and/or stored on a non-transitory computer readable storage medium, such as the storage device(s) 525 described above. In some cases, the storage medium might be incorporated within a computer system, such as the system 500. In other embodiments, the storage medium might be separate from a computer system (i.e., a removable medium, such as a compact disc, etc.), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon. These instructions might take the form of executable code, which is executable by the computer or hardware system 500 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer or hardware system 500 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.) then takes the form of executable code.
[0103] It will be apparent to those skilled in the art that substantial variations may be made in accordance with particular requirements. For example, customized hardware (such as programmable logic controllers, field-programmable gate arrays, application-specific integrated circuits, and/or the like) might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed.
[0104] As mentioned above, in one aspect, some embodiments may employ a computer or hardware system (such as the computer or hardware system 500) to perform methods in accordance with various embodiments of the invention. According to a set of embodiments, some or all of the procedures of such methods are performed by the computer or hardware system 500 in response to processor 510 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 540 and/or other code, such as an application program 545) contained in the working memory 535. Such instructions may be read into the working memory 535 from another computer readable medium, such as one or more of the storage device(s) 525. Merely by way of example, execution of the sequences of instructions contained in the working memory 535 might cause the processor(s) 510 to perform one or more procedures of the methods described herein.
[0105] The terms "machine readable medium" and "computer readable medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in some fashion. In an embodiment implemented using the computer or hardware system 500, various computer readable media might be involved in providing instructions/code to processor(s) 510 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals). In many implementations, a computer readable medium is a non-transitory, physical, and/or tangible storage medium. In some embodiments, a computer readable medium may take many forms, including, but not limited to, non-volatile media, volatile media, or the like. Non-volatile media includes, for example, optical and/or magnetic disks, such as the storage device(s) 525. Volatile media includes, without limitation, dynamic memory, such as the working memory 535. In some alternative embodiments, a computer readable medium may take the form of transmission media, which includes, without limitation, coaxial cables, copper wire, and fiber optics, including the wires that comprise the bus 505, as well as the various components of the communication subsystem 530 (and/or the media by which the communications subsystem 530 provides communication with other devices). In an alternative set of embodiments, transmission media can also take the form of waves (including without limitation radio, acoustic, and/or light waves, such as those generated during radio-wave and infra-red data communications).
[0106] Common forms of physical and/or tangible computer readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.
[0107] Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 510 for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer or hardware system 500. These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals, and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention.
[0108] The communications subsystem 530 (and/or components thereof) generally will receive the signals, and the bus 505 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 535, from which the processor(s) 510 retrieves and executes the instructions. The instructions received by the working memory 535 may optionally be stored on a storage device 525 either before or after execution by the processor(s) 510.
[0109] While particular features and aspects have been described with respect to some embodiments, one skilled in the art will recognize that numerous modifications are possible. For example, the methods and processes described herein may be implemented using hardware components, software components, and/or any combination thereof. Further, while various methods and processes described herein may be described with respect to particular structural and/or functional components for ease of description, methods provided by various embodiments are not limited to any particular structural and/or functional architecture but instead can be implemented on any suitable hardware, firmware and/or software configuration. Similarly, while particular functionality is ascribed to particular system components, unless the context dictates otherwise, this functionality need not be limited to such and can be distributed among various other system components in accordance with the several embodiments.
[0110] Moreover, while the procedures of the methods and processes described herein are described in a particular order for ease of description, unless the context dictates otherwise, various procedures may be reordered, added, and/or omitted in accordance with various embodiments. Moreover, the procedures described with respect to one method or process may be incorporated within other described methods or processes; likewise, system components described according to a particular structural architecture and/or with respect to one system may be organized in alternative structural architectures and/or incorporated within other described systems. Hence, while various embodiments are described with — or without — particular features for ease of description and to illustrate some aspects of those embodiments, the various components and/or features described herein with respect to a particular embodiment can be substituted, added and/or subtracted from among other described embodiments, unless the context dictates otherwise. Consequently, although several embodiments are described above, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims.

Claims

WHAT IS CLAIMED IS:
1. A method, comprising:
receiving, using a computing system on a mobile device, one or more first images from a first camera that is mounted to a first fixed position on a windshield of a first vehicle;
analyzing, using the computing system on the mobile device, the received one or more first images to identify and highlight one or more first objects captured by the first camera;
generating, using the computing system on the mobile device, one or more first fused images by fusing the identified and highlighted one or more first objects with the one or more first images;
analyzing, using the computing system on the mobile device, the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle; and
based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, performing, using the computing system on the mobile device, one or more driver assistance tasks.
2. The method of claim 1, wherein the computing system comprises at least one of a driver assistance system, an object detection system, an object detection and ranging system, a positioning and mapping system, an image processing system, an image data fusing system, a graphics engine, a processor on the mobile device, at least one central processing unit ("CPU") on the mobile device, at least one graphics processing unit ("GPU") on the mobile device, a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a convolutional neural network ("CNN"), a deep neural network ("DNN"), or a fully convolutional network ("FCN").
3. The method of claim 1 or 2, wherein the mobile device comprises at least one of a smartphone, a tablet computer, a display device, an augmented reality ("AR") device, a virtual reality ("VR") device, or a mixed reality ("MR") device.
4. The method of any of claims 1-3, wherein receiving the one or more first images from the first camera comprises one of:
receiving, using the computing system on the mobile device, the one or more first images from the first camera via a wireless communication link between the first camera and the mobile device; or
receiving, using the computing system on the mobile device, the one or more first images from the first camera via a wired cable communication link between the first camera and the mobile device.
5. The method of any of claims 1-4, wherein receiving the one or more first images from the first camera comprises receiving one or more first video data from the first camera, wherein the method further comprises:
determining, using the computing system on the mobile device, an estimated speed of the first vehicle; and
adjusting, using the computing system on the mobile device, at least one of frame rate or resolution of transmission of the one or more first video data from the first camera as a function of the estimated speed of the first vehicle, wherein the frame rate is adjusted in a manner proportional to the estimated speed of the first vehicle and the resolution is adjusted in a manner inversely proportional to the estimated speed of the first vehicle.
6. The method of claim 5, wherein determining the estimated speed of the first vehicle comprises determining, using the computing system on the mobile device, an estimated speed of the first vehicle based on at least one of global positioning system ("GPS") data, global navigation satellite system ("GNSS") data, changes in image recognition-based landmark identification system data, changes in telecommunications signal triangulation-based location identification system data, changes in radar-based location identification system data, changes in lidar-based location identification system data, or speed data obtained from a vehicle computing system of the first vehicle via a communications link between the vehicle computing system and the computing system on the mobile device.
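One of the listed speed sources, GPS data, can be illustrated with a short sketch that estimates speed from two successive position fixes; the haversine-based helper below is an assumption made for illustration, not an algorithm taken from the disclosure.

```python
# Hypothetical GPS-based speed estimate for claim 6: distance between two
# successive fixes divided by the elapsed time.
import math

def haversine_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two fixes, in meters."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def estimated_speed_mps(fix_a, fix_b) -> float:
    """fix = (lat, lon, unix_time); average speed over the interval between fixes."""
    (lat1, lon1, t1), (lat2, lon2, t2) = fix_a, fix_b
    dt = max(t2 - t1, 1e-3)        # avoid division by zero on duplicate fixes
    return haversine_m(lat1, lon1, lat2, lon2) / dt
```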
7. The method of any of claims 1-6, further comprising: prior to analysis of the received one or more first images, pre-processing, using the computing system on the mobile device, the received one or more first images using one or more image processing operations to prepare the received one or more first images for analysis, wherein the one or more image processing operations comprise at least one of prewhitening, resizing, aligning, cropping, or formatting.
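As a hedged illustration of two of the listed pre-processing operations, prewhitening and cropping, the NumPy-based helpers below show one straightforward realization; NumPy availability on the mobile device is assumed only for the sake of this sketch.

```python
# Hypothetical pre-processing helpers for claim 7 (prewhitening and cropping).
import numpy as np

def prewhiten(img: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance normalization of a frame before analysis."""
    img = img.astype(np.float32)
    return (img - img.mean()) / max(float(img.std()), 1e-6)

def center_crop(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Crop the central out_h x out_w region of the frame."""
    h, w = img.shape[:2]
    top, left = (h - out_h) // 2, (w - out_w) // 2
    return img[top:top + out_h, left:left + out_w]
```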
8. The method of any of claims 1-7, wherein analyzing the received one or more first images to identify and highlight the one or more first objects captured by the first camera comprises at least one of: identifying and highlighting one or more lanes of a roadway using a lane detection system; identifying and highlighting one or more landmarks along the roadway using a landmark detection system; or identifying and highlighting one or more objects on or near the roadway using an object detection system, the one or more objects comprising at least one of one or more people, one or more animals, one or more second vehicles, one or more traffic signs, one or more traffic lights, one or more roadway obstructions, or one or more other objects.
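Purely for illustration, a classical Hough-transform approach to the lane detection step is sketched below, assuming OpenCV is available; a deployed system could equally use the CNN- or FCN-based detectors recited in claim 2.

```python
# Non-learned lane-marking sketch for the lane detection step of claim 8.
import cv2
import numpy as np

def detect_lane_segments(bgr_frame: np.ndarray):
    """Return line segments likely to be lane markings in the lower half of the frame.

    Segment coordinates are relative to the lower-half crop used for edge detection.
    """
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    h = gray.shape[0]
    edges = cv2.Canny(gray[h // 2:, :], 60, 180)          # edges in the road region
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                               minLineLength=40, maxLineGap=20)
    return [] if segments is None else segments[:, 0].tolist()
```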
9. The method of any of claims 1-8, wherein generating the one or more first fused images comprises: generating, using the computing system on the mobile device, one or more image overlays based at least in part on analysis of the one or more first images, the one or more image overlays comprising at least one of text-based data, image-based data, or graphics-based data associated with information regarding at least one object among the identified one or more first objects; and fusing, using the computing system on the mobile device, the one or more image overlays with the identified and highlighted one or more first objects and the one or more first images.
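A possible rendering of the overlay-and-fuse step, again assuming OpenCV, is sketched below; the drawing colors and the detections format are illustrative assumptions rather than disclosed choices.

```python
# Hypothetical overlay/fusion step for claim 9: draw boxes and text labels for
# each identified object onto a copy of the original frame.
import cv2

def fuse_overlays(frame, detections):
    """detections: iterable of (label_text, (x, y, w, h)) tuples in pixel coordinates."""
    fused = frame.copy()
    for label, (x, y, w, h) in detections:
        cv2.rectangle(fused, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(fused, label, (x, max(y - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return fused
```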
10. The method of any of claims 1-9, wherein each of the at least one first alert condition comprises at least one of driving on a lane marker along a roadway along which the first vehicle is travelling, drifting toward an adjacent lane on the roadway, driving between lanes on the roadway, drifting toward a shoulder of the roadway, driving on the shoulder of the roadway, driving toward a median along the roadway, traffic congestion detected ahead along the roadway, a traffic accident detected ahead along the roadway, a construction site detected ahead along the roadway, one or more people detected on or near the roadway, one or more animals detected on or near the roadway, one or more objects detected on or near the roadway, a tracked weather event detected along or near the roadway, a natural hazard detected ahead, a manmade hazard detected ahead, one or more people potentially intercepting the first vehicle along the roadway, one or more animals potentially intercepting the first vehicle along the roadway, one or more objects potentially intercepting the first vehicle along the roadway, or one or more third vehicles potentially intercepting the first vehicle along the roadway.
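As a hedged example of how one of these conditions, drifting toward an adjacent lane, might be expressed as a rule, the sketch below compares an externally estimated lane-center offset with a threshold; both the offset estimator and the 0.5 m threshold are assumptions for illustration only.

```python
# Hypothetical alert rule for one claim 10 condition (lane drift).
from typing import Optional

def lane_drift_alert(lane_center_offset_m: float,
                     threshold_m: float = 0.5) -> Optional[str]:
    """Alert when the vehicle sits off lane center beyond the threshold.

    lane_center_offset_m: signed offset estimated from detected lane markings,
    positive toward the right lane boundary (estimator assumed elsewhere).
    """
    if abs(lane_center_offset_m) >= threshold_m:
        side = "right" if lane_center_offset_m > 0 else "left"
        return f"Drifting toward the adjacent lane on the {side}"
    return None
```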
11. The method of any of claims 1-10, wherein performing the one or more driver assistance tasks comprises at least one of: presenting the one or more first fused images on a display device on the mobile device; generating a graphical display depicting one or more of the at least one first alert condition or the one or more first fused images, and presenting the generated graphical display on the display device; generating a text-based message describing one or more of the at least one first alert condition or the one or more first fused images, and presenting the text-based message on the display device; or generating at least one audio message regarding one or more of the at least one first alert condition or the one or more first fused images, and presenting the at least one audio message on at least one audio speaker on the mobile device.
12. The method of any of claims 1-11, further comprising: receiving, using the computing system on the mobile device, one or more second images from at least one second camera, the at least one second camera comprising at least one of a third camera that is mounted to a second fixed position on the windshield of the first vehicle, a fourth camera that is integrated with the mobile device with the mobile device mounted to a third position on the windshield of the first vehicle and with the fourth camera pointed in front of the first vehicle, or a fifth camera that is mounted to a fourth fixed position on a rear window of the first vehicle; and analyzing, using the computing system on the mobile device, the one or more second images, wherein the one or more second images from one of the third camera or the fourth camera are analyzed to determine differences with the one or more first images from the first camera and to obtain stereoscopic vision or three-dimensional ("3D") data based on the determined differences, and wherein the one or more second images from the fifth camera are analyzed to obtain rearview data based on detection of objects behind the first vehicle; wherein generating the one or more first fused images comprises generating, using the computing system on the mobile device, one or more second fused images by fusing at least one of the identified and highlighted one or more first objects, the 3D data, or the rearview data with the one or more first images.
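The stereoscopic-vision aspect can be illustrated with a block-matching disparity sketch, assuming OpenCV and pre-rectified grayscale frames from the two forward-facing cameras; a real system would also need calibration and rectification, which are omitted here.

```python
# Hypothetical disparity step for the stereoscopic vision / 3D data of claim 12.
import cv2

def disparity_map(left_gray, right_gray):
    """Coarse disparity (larger value means closer object) from two rectified 8-bit frames."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    return matcher.compute(left_gray, right_gray)
```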
13. The method of any of claims 1-12, further comprising: receiving, using the computing system on the mobile device, one or more object detection signal data from at least one of one or more radar sensors or one or more lidar sensors that are mounted on the first vehicle and that are communicatively coupled to the mobile device; and analyzing, using the computing system on the mobile device, the received one or more object detection signal data to identify and highlight one or more second objects and to determine whether the one or more second objects correspond to the one or more first objects, wherein any of the one or more second objects that are determined to correspond to any of the one or more first objects are merged with said one or more first objects; wherein generating the one or more first fused images comprises generating, using the computing system on the mobile device, one or more third fused images by fusing at least one of the identified and highlighted one or more first objects or the identified and highlighted one or more second objects with the one or more first images.
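One hypothetical way to decide whether a radar or lidar detection corresponds to a camera-detected object is an image-plane overlap test, sketched below; projecting sensor returns into image coordinates is assumed to happen elsewhere, and the 0.3 IoU threshold is illustrative.

```python
# Hypothetical association step for claim 13: a radar/lidar detection is merged
# with a camera detection when their image-plane boxes overlap enough (IoU).
def iou(a, b):
    """a, b: boxes as (x, y, w, h) in image coordinates."""
    ax2, ay2, bx2, by2 = a[0] + a[2], a[1] + a[3], b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def merge_detections(camera_dets, sensor_dets, iou_threshold=0.3):
    """Return camera detections, flagging those confirmed by a radar/lidar detection."""
    merged = []
    for label, cam_box in camera_dets:
        confirmed = any(iou(cam_box, s_box) >= iou_threshold
                        for _, s_box in sensor_dets)
        merged.append((label, cam_box, confirmed))
    return merged
```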
14. A mobile device, comprising: a computing system; and a non-transitory computer readable medium communicatively coupled to the computing system, the non-transitory computer readable medium having stored thereon computer software comprising a set of instructions that, when executed by the computing system, causes the mobile device to: receive one or more first images from a first camera that is mounted to a first fixed position on a windshield of a first vehicle; analyze the received one or more first images to identify and highlight one or more first objects captured by the first camera;
generate one or more first fused images by fusing the identified and highlighted one or more first objects with the one or more first images; analyze the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle; and based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, perform one or more driver assistance tasks.
15. A system, comprising: a first camera mounted to a first fixed position on a windshield of a first vehicle; and a mobile device, comprising: a computing system; and a first non-transitory computer readable medium communicatively coupled to the computing system, the first non-transitory computer readable medium having stored thereon computer software comprising a first set of instructions that, when executed by the computing system, causes the mobile device to: receive one or more first images from the first camera; analyze the received one or more first images to identify and highlight one or more first objects captured by the first camera; generate one or more first fused images by fusing the identified and highlighted one or more first objects with the one or more first images; analyze the one or more first fused images to identify one or more alert conditions associated with operation of the first vehicle; and based on a determination that at least one first alert condition associated with operation of the first vehicle has been identified, perform one or more driver assistance tasks.
16. The system of claim 15, wherein the computing system comprises at least one of a driver assistance system, an object detection system, an object detection and ranging system, a positioning and mapping system, an image processing system,
an image data fusing system, a graphics engine, a processor on the mobile device, at least one central processing unit ("CPU") on the mobile device, at least one graphics processing unit ("GPU") on the mobile device, a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a convolutional neural network ("CNN"), a deep neural network ("DNN"), or a fully convolutional network ("FCN").
17. The system of claim 15 or 16, wherein the mobile device comprises at least one of a smartphone, a tablet computer, a display device, an augmented reality ("AR") device, a virtual reality ("VR") device, or a mixed reality ("MR") device.
PCT/US2021/065467 2021-12-29 2021-12-29 Advanced driver assistance system (adas) with camera on windshield and mobile device WO2022104294A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2021/065467 WO2022104294A1 (en) 2021-12-29 2021-12-29 Advanced driver assistance system (adas) with camera on windshield and mobile device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2021/065467 WO2022104294A1 (en) 2021-12-29 2021-12-29 Advanced driver assistance system (adas) with camera on windshield and mobile device

Publications (1)

Publication Number Publication Date
WO2022104294A1 true WO2022104294A1 (en) 2022-05-19

Family

ID=81601827

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/065467 WO2022104294A1 (en) 2021-12-29 2021-12-29 Advanced driver assistance system (adas) with camera on windshield and mobile device

Country Status (1)

Country Link
WO (1) WO2022104294A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170113664A1 (en) * 2015-10-23 2017-04-27 Harman International Industries, Incorporated Systems and methods for detecting surprising events in vehicles
US20200057488A1 (en) * 2017-04-28 2020-02-20 FLIR Belgium BVBA Video and image chart fusion systems and methods

Similar Documents

Publication Publication Date Title
KR102618662B1 (en) 3D feature prediction for autonomous driving
KR102605807B1 (en) Generating ground truth for machine learning from time series elements
US10878696B2 (en) Monitoring and reporting traffic information
US20180288320A1 (en) Camera Fields of View for Object Detection
US9832241B1 (en) Broadcasting telematics data to nearby mobile devices, vehicles, and infrastructure
US11592570B2 (en) Automated labeling system for autonomous driving vehicle lidar data
US20200223454A1 (en) Enhanced social media experience for autonomous vehicle users
JP2019096072A (en) Object detection device, object detection method and program
CN111999752A (en) Method, apparatus and computer storage medium for determining road information data
CN114740839A (en) Roadside system and method for cooperative automatic driving of vehicle and road
EP3700198A1 (en) Imaging device, image processing apparatus, and image processing method
US20220139090A1 (en) Systems and methods for object monitoring
US11551373B2 (en) System and method for determining distance to object on road
WO2022104296A1 (en) Camera radar fusion for advanced driver assistance system (adas) with radar and mobile phone
CN112204975A (en) Time stamp and metadata processing for video compression in autonomous vehicles
KR20240019763A (en) Object detection using image and message information
CN112166618B (en) Autonomous driving system, sensor unit of autonomous driving system, computer-implemented method for operating autonomous driving vehicle
US10691958B1 (en) Per-lane traffic data collection and/or navigation
US11182623B2 (en) Flexible hardware design for camera calibration and image pre-procesing in autonomous driving vehicles
WO2022104294A1 (en) Advanced driver assistance system (adas) with camera on windshield and mobile device
CN112585657A (en) Safe driving monitoring method and device
US11410432B2 (en) Methods and systems for displaying animal encounter warnings in vehicles
CN115421122A (en) Target object detection method and device, electronic equipment and readable storage medium
CN115063969A (en) Data processing method, device, medium, roadside cooperative device and system
JP2019101806A (en) Running field survey support system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21893044

Country of ref document: EP

Kind code of ref document: A1