WO2019213177A1 - Vehicle telematic assistive apparatus and system - Google Patents


Info

Publication number
WO2019213177A1
WO2019213177A1 (PCT/US2019/030071)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
speech
user
voice
communication
Prior art date
Application number
PCT/US2019/030071
Other languages
French (fr)
Inventor
Jonathan E. Ramaci
Original Assignee
Ramaci Jonathan E
Priority date
Filing date
Publication date
Application filed by Ramaci Jonathan E filed Critical Ramaci Jonathan E
Publication of WO2019213177A1 publication Critical patent/WO2019213177A1/en

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 - Speech synthesis; Text to speech systems
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/26 - Speech to text systems

Definitions

  • the present invention relates to personal digital assistance, and specifically to vehicle telematics.
  • the disclosure relates to a system, apparatus, and methods for assisting an automobile driver with the use of mobile computing devices in conjunction with a voice-controlled input-output user interface and cloud computing services.
  • Driving is a motor task that requires significant visual guidance and attention. Any additional complex visual-motor secondary task performed while driving, such as dialing a phone, texting/browsing the Internet, or reaching for an object, impedes the ability of a driver to successfully complete the task of driving. Studies have shown that secondary tasks involving manual typing, texting, dialing, reaching for an object, or reading are dangerous (Dingus TA. Estimates of the Prevalence and Risk Associated with Inattention and Distraction based Upon In Situ Naturalistic Data. Engaged Driving Symposium/Annals of Advances in Automotive Medicine. March 31, 2014).
  • the use of a cell phone is associated with a quadrupling of the risk of injury and property damage.
  • Cellular telephone use while driving is a risk factor, but the magnitude of risk is unknown, especially with the use of smartphones that are essentially hand-held Internet-accessible computers, allowing access, for example, to social media platforms such as Facebook and Twitter.
  • Vehicle telematics is the integration of wireless communications, monitoring systems, and location devices to provide real-time spatial and performance data of a vehicle or a fleet of vehicles.
  • An emerging market application lies in, for example, car insurance telematics, whereby insurers have been interested in monitoring driving activities in order to provide fair insurance premiums to customers.
  • Auto insurance actuarial models are traditionally based on static factors such as a driver's socio-demographic information (e.g., age, gender, marital status, etc.), the type of vehicle (e.g., year, manufacturer, model, etc.), and historical driving records (e.g., violations, at-fault accidents, etc.).
  • Insurance telematics relies on an insurance premium that is based not only on static measures like the driver's age, occupation or place of residence, car model and configuration, or expected mileage over the policy period, but also on dynamic measures like actual mileage, time spent on the road or the time of day when the trip is being made, location, and the driver's actual style of driving.
  • These insurance schemes are known as Usage-based Insurance (UBI) and include pay-as-you-drive (PAYD), pay-how-you-drive (PHYD), and manage-how-you-drive (MHYD) models.
  • UBI allows insurers to create risk profiles of customers based on real-world driving behavior, create automated risk-monitoring and risk-transfer systems, providing the input required, for example, to measure the quality and risks of individual drivers.
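As a minimal illustration of how such dynamic measures might feed a usage-based premium, the Python sketch below adjusts a static base premium with mileage, night-driving, and harsh-braking factors. The factor names and weights are illustrative assumptions only; they are not taken from the disclosure or from any insurer's actual actuarial model.

```python
# Hypothetical UBI premium adjustment: weights and factor names are assumptions
# chosen for illustration, not values from the disclosure or a real insurer.

def ubi_premium(base_premium: float,
                annual_miles: float,
                night_trip_ratio: float,
                harsh_brakes_per_100mi: float) -> float:
    """Scale a static base premium by dynamic driving measures."""
    mileage_factor = min(annual_miles / 12000.0, 1.5)        # capped mileage exposure
    night_factor = 1.0 + 0.2 * night_trip_ratio              # surcharge for night driving
    behavior_factor = 1.0 + 0.05 * harsh_brakes_per_100mi    # surcharge for harsh braking
    return round(base_premium * mileage_factor * night_factor * behavior_factor, 2)


if __name__ == "__main__":
    # A low-mileage driver with modest night driving and two harsh brakes per 100 miles.
    print(ubi_premium(1000.0, annual_miles=8000, night_trip_ratio=0.1,
                      harsh_brakes_per_100mi=2.0))
```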
  • a critical process of fleet management is the collection and control of vehicle data related to use, costs, and condition diagnosis.
  • Fleet data collection imposes significant challenges to fleet managers to effectively and efficiently collect, store, and process vehicle performance data in a timely manner. Individual car owners also face similar challenges with vehicle performance and maintenance.
  • the implementation of telematics has the potential to increase operational efficiency and improve driver safety in many ways, e.g., by tracking a vehicle's location, mileage, and speed with GPS technology. For example, fleet managers can use this information to optimize routes and scheduling efficiency. In addition, a driver's actions can be monitored with accelerometers that measure changes in speed and direction. This information can then be used to improve driver performance through a one-on-one or in-vehicle coaching program.
  • Telematics provides a large amount of data that, for example, reports an individual vehicle's location and performance. However, such data does not directly provide useful information about the vehicle's operational condition or efficiency.
  • Fleet managers lack efficient computational hardware and algorithms to transform large amounts of telematics data into more useful predictive information on the condition of a fleet, an individual vehicle, or vehicle components (e.g., engine, tire).
  • Telematics technology providers do not provide a clear methodology for integrating the collected data and information into typical fleet managerial tasks, such as vehicle health condition assessment, maintenance, repair, and replacement of parts or the whole vehicle.
  • fleet managers and drivers during operation often lack an efficient way to schedule an appointment in advance with a service provider or identify and locate a repair location to effectively manage their transportation equipment maintenance program.
  • fleet managers and drivers lack an efficient, automated, and predictive system to manage a vehicle maintenance program.
  • Fleet managers and drivers also need a safer, more efficient, and more effective communication system for vehicle health monitoring and maintenance, one that places low cognitive demands on drivers and minimizes visual-manual interaction with in-vehicle displays.
  • the use of smartphones for collecting driving data has been identified as a promising alternative, due to the high penetration of smartphones among end-users, sensing capabilities (e.g., accelerometers, magnetometers, GPS), and the efficiency of wireless data transfer.
  • Smartphones provide the feasibility of collecting individualized, high-fidelity and high-resolution driving data from users, revealing smartphone users’ temporal- spatial travel patterns.
  • The telematics industry's focus has also begun to shift from hardware devices to integrated software.
  • Telematics data is continuously sent to the World-Wide-Web systems of Original Equipment Manufacturers (OEM) that can store, organize, and present these data to the fleet managers using visual and user-friendly interfaces.
  • older equipment that does not have an OEM telematics system can be equipped with Telematics Service Provider (TSP) units that are connected to the equipment's mechanical and electrical subsystems to obtain telematics data.
  • the two primary information sources in smartphone-based insurance telematics are the global navigation satellite system (GNSS) receiver and the inertial measurement unit (IMU).
  • the telematics-based insurance ecosystem is as heterogeneous as the mobile ecosystem, and the combination increases overall complexity. A lack of standardization in data and auto platforms makes it challenging for insurers to integrate mobile devices into their IT infrastructure.
  • a voice-control system (VCS) that enables drivers to access rich information sources without distraction still faces several challenges: complicated navigation entry; cognitively demanding voice interaction; accurate voice recognition; and challenging computational requirements for natural language interactions to mitigate errors and task complexity.
  • Speech recognition performance is critical because failures to understand drivers’ commands increase the distraction potential of the system.
  • a high error rate could lead drivers to visually verify commands and revert to visual-manual interaction.
  • Conversational interaction with a VCS could reduce several of the challenges posed by task complexity and recognition errors (Jenness, J. W. (2016, October)).
  • vehicle health monitoring (VHM) is a concept of collecting vital equipment performance parameters to continuously assess the condition of a vehicle and detect signs of possible failure.
  • VHM provides a proactive approach to vehicle asset maintenance by fixing vehicle equipment before an occurrence of a severe failure event.
  • VHM is an essential facilitator of a predictive maintenance program, in which maintenance tasks are scheduled just before failures are expected to occur based on the monitored performance of the vehicle.
  • Previous research on equipment and machine health monitoring focused primarily on monitoring the condition (e.g., vibrations) of stationary mechanical machines.
  • Such a system should incorporate an automated voice-controlled AI digital assistant as a simple, user-friendly, natural, and low cognitive demand user-interface within a fleet management communication network ecosystem for safe, secured, and efficient operation.
  • an assistive technology platform comprising an apparatus, methods, and system is implemented for driver assistance with in-vehicle communication using speech-to-text (STT) and text-to-speech (TTS) input-output (I/O) and Artificial Intelligence (AI) digital assistance remote cloud services.
  • the platform incorporates at least one portable, wearable, or attachable device, providing one or more user functions including, but not limited to, voice, data, SMS, alerts, location via SMS, GPS location/navigation, roadside assistance services, and voice-controlled audio I/O.
  • a vehicle telematics device comprising: at least one processor contained in a housing, a portion of the housing comprising a voice-controlled user interface; and a memory in communication with the at least one processor, the memory storing executable instructions for causing the at least one processor to provide at least one selected from the group consisting of automated voice recognition response, natural language understanding, speech-to-text processing and text-to-speech processing, wherein the vehicle telematics device is an electronic wireless communication device.
  • the vehicle telematics device further comprises an audio coder-decoder (CODEC) in communication with the processor; and a speaker in communication with the audio CODEC, wherein the memory further stores executable instructions for causing the at least one processor to broadcast feedback communication for a user and one or more network participants through the speaker.
  • the device may further comprise a microphone in communication with the audio CODEC, the microphone configured to receive a voice command from the user, convert the voice command to a voice signal, and send the voice signal to the audio CODEC, wherein the audio CODEC is configured to encode the voice signal to produce an encoded voice signal, and to transmit the encoded voice signal to the at least one processor; wherein the memory further stores executable instructions for causing the at least one processor to transmit the encoded voice signal to at least one voice translation service in communication with the at least one database server; and wherein the memory further stores executable instructions for causing the at least one processor to receive, from the voice translation service, a verbal response to the user’s voice command, and to broadcast the response through the speaker.
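A minimal sketch of that microphone-to-speaker round trip is shown below. The service URL, JSON fields, and helper names are hypothetical placeholders, since the disclosure does not name a specific voice translation API.

```python
# Sketch of the device-side loop: captured voice signal -> CODEC encoding ->
# cloud voice translation service -> synthesized reply broadcast via the speaker.
# The endpoint and payload fields are hypothetical placeholders.

import base64
import json
from urllib import request

def handle_voice_command(pcm_samples: bytes,
                         service_url: str = "https://voice.example.invalid/translate") -> bytes:
    """Send an encoded voice signal to a cloud service and return reply audio."""
    payload = json.dumps({
        "audio": base64.b64encode(pcm_samples).decode("ascii"),
        "format": "pcm16",
        "sample_rate_hz": 16000,
    }).encode("utf-8")
    req = request.Request(service_url, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:                 # STT + NLU + TTS in the cloud
        reply = json.loads(resp.read())
    return base64.b64decode(reply["tts_audio"])        # audio bytes for the speaker/CODEC
```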
  • a vehicle telematics system comprising a wireless communication device, the wireless communication device comprising: at least one processor; a voice-controlled user interface; and a memory in communication with the at least one processor, the memory storing executable instructions; and one or more remote cloud-based servers configured to perform speech-to-text (STT) and text-to-speech (TTS) conversion services, wherein the wireless communication device and the conversion services located on the one or more remote cloud-based servers together serve as an Artificial Intelligence (AI) assistant, the AI assistant providing conversational interactions with a user utilizing automated voice recognition-response, natural language processing, and predictive algorithms, and wherein information generated from interactions of the user with the AI assistant is stored in application software on the one or more remote cloud-based servers so as to be accessible to multiple users of the system.
  • the device is configured to operate with a vehicle on-board communication system.
  • the AI digital assistant provides a risk factor status to the user based on vehicle status, vehicle environment, or driver behavior.
  • the one or more remote cloud-based servers comprise one or more application software modules selected from the group consisting of an I/O processing module, a speech-to-text (STT) processing module, a phonetic alphabet conversion module, a user database, a vocabulary database, a service processing module, a task flow processing module, and a speech synthesis module.
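One way to picture how those server-side modules could be chained is sketched below; the function bodies are stand-ins that only illustrate the data flow (audio in, transcription, intent handling, synthesized audio out), not the actual server software.

```python
# Illustrative chaining of the cloud application modules listed above.
# Each function is a placeholder that shows the shape of the pipeline only.

def stt_module(audio: bytes) -> str:                  # speech-to-text processing module
    return "check tire pressure"                       # placeholder transcription

def io_processing(text: str, user_db: dict) -> dict:  # I/O processing + user database
    return {"utterance": text, "user": user_db.get("id", "driver-1")}

def task_flow(intent: dict) -> str:                    # task flow / service processing
    if "tire pressure" in intent["utterance"]:
        return "Front left tire is at 32 psi."
    return "Sorry, I did not catch that."

def speech_synthesis(text: str) -> bytes:              # speech synthesis (TTS) module
    return text.encode("utf-8")                        # stand-in for synthesized audio

def handle_request(audio: bytes, user_db: dict) -> bytes:
    return speech_synthesis(task_flow(io_processing(stt_module(audio), user_db)))
```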
  • the device preferably incorporates one or more of a microprocessor, microcontroller, micro GSM/GPRS chipset, micro SIM module, read-write memory device, read-only memory device (ROM), random access memory (RAM), flash memory, memory storage device, memory I/O, I/O devices, buttons, display, LED, user interface, rechargeable battery, microphone, speaker, wireless transceiver (e.g., RF, WiFi, Bluetooth, IoT), RF electronic circuits, WiFi electronic circuits, Bluetooth electronic circuits, transceivers (e.g., RF, WiFi, Bluetooth, IoT, etc.), audio CODEC, cellular antenna, GPS antenna, WiFi antenna, Bluetooth antenna, IoT antenna, and vibrating motor (output), preferably configured in combination, to function as an electronic device.
  • the device can execute one or more executable codes, algorithms, methods, and/or software instructions for automated voice recognition-response, natural language understanding and processing, speech-to-text (STT) and text-to-speech (TTS) processing/services, and wireless mobile cellular communication.
  • the said electronic device may function in combination with an application software platform accessible to multiple clients (users) executable on one or more remote servers, to preferably establish a transportation communication ecosystem.
  • the ecosystem enables communication for a driver and one or more network participants (e.g., family member, worker, employer, etc.).
  • the device may function in combination with one or more remote servers, cloud control services, to perform natural language or speech-based interactions with the user, to perform/process STT and or TTS functions/services, preferably through a voice-controlled speech user interface.
  • the device and technology of the invention assist a driver with in-vehicle communication activities, particularly using speech-to-text (STT) and text-to-speech (TTS) input-output (I/O) technology and cloud computing services.
  • the system incorporates a voice-controlled AI digital assistant as a simple, user-friendly, natural, and low cognitive demand user interface or human surrogate within a communication ecosystem.
  • the device and ecosystem together address the shortcomings of conventional VCSs and reduce the amount of visual-manual interaction between a driver, a mobile computing device, and/or a vehicle on-board communication system for safe and secure driving.
  • the voice-controlled speech user interface of said device detects or monitors audio input/output and interacts with a user to determine a user intent based on natural language understanding of the user's speech.
  • the voice-controlled speech user interface is configured to capture user utterances and provide them to a cloud control service.
  • the combination of the speech interface device and one or more applications executed by the control service serves as an Artificial Intelligence (AI) digital assistant.
  • the AI digital assistant provides conversational interactions, utilizing automated voice recognition-response, natural language processing, predictive algorithms, and the like, to interact with the user and fulfill user requests, preferably providing speech-to-text (STT) and/or text-to-speech (TTS) services, including but not limited to audio I/O of an outgoing (sent) or incoming (received) text message, email, voicemail, text document, social media notification, social media stream (e.g., Facebook postings, Twitter feed, etc.), video, video stream, podcast, webpage contents, GPS navigation information, or the like.
  • the device is portable, wearable, or attachable to a dashboard, windshield, or the like, for use by a driver or an occupant of a vehicle.
  • the device may pair to operate with a vehicle on-board communication system via Bluetooth.
  • the device and functions may be incorporated into the vehicle on-board communication system.
  • the device is operable within or in the proximity of a vehicle including but not limited to a car, electric vehicle, SUV, truck, van, bus, motorcycle, bicycle, plane, spaceship, or the like.
  • the said device provides the vehicle occupant access to one or more functions including, but not limited to, voice, data, SMS, alerts, location via SMS, GPS location/navigation, navigation guidance, motion detection, roadside assistance services, and audio I/O.
  • the said device enables the user to access and interact with the said AI digital assistant and transportation communication ecosystem for safe and secure driving.
  • the invention relates to a vehicle telematics platform and fleet management system comprising an apparatus, methods, and system incorporating a voice-controlled user interface and Artificial Intelligence (AI) digital assistance remote cloud services.
  • the platform comprises at least one wireless telematics communication device, providing one or more user functions including, but not limited to, voice, data, SMS, alerts, location via SMS, GPS location/navigation, motion detection, and voice-controlled audio I/O.
  • the said device preferably incorporates one or more of a microprocessor, microcontroller, micro GSM/GPRS chipset, micro SIM module, read-write memory device, read-only memory device (ROM), random access memory (RAM), flash memory, memory storage device, memory I/O, I/O devices, buttons, display, LED, user interface, rechargeable battery, microphone, speaker, wireless transceiver (e.g., RF, WiFi, Bluetooth, IoT), RF electronic circuits, UART, OBDII connectors, WiFi electronic circuits, Bluetooth electronic circuits, transceivers (e.g., RF, WiFi, Bluetooth, IoT, etc.), audio CODEC, cellular antenna, GPS antenna, WiFi antenna, Bluetooth antenna, IoT antenna, and vibrating motor (output), preferably configured in combination, to function as an electronic device.
  • the device can execute, from a tangible, non-transitory computer-readable medium (memory), one or more executable codes, algorithms, methods, and/or software instructions for data transmission, automated voice recognition-response, natural language understanding and processing, speech-to-text (STT) and text-to-speech (TTS) processing/services, and wireless mobile cellular telematics communication.
  • the electronic device may function in combination with an application software platform accessible to multiple clients (users) executable on one or more remote servers, to preferably establish a vehicle telematics and fleet operation management ecosystem.
  • the ecosystem enables operational and or feedback communication for a driver and one or more network participants (e.g., fleet manager, dispatcher, parents etc.).
  • the device may function in combination with one or more remote servers, cloud control services, to perform natural language or speech-based interactions with the user, to perform/process STT and or TTS functions/services, perform predictive fault analyses, whereby access to said servers is performed preferably through a voice-controlled speech user interface.
  • the said electronic device may function in combination with an application software platform accessible to multiple clients (users) executable on one or more remote servers, to preferably establish a fleet vehicle management system for, including but not limited to, vehicle component data collection, transmission, analysis, assessment, fault hazard prediction, or the like.
  • the application collects, aggregates, and processes telematics data, single or compound variables, to generate analytical and or predictive information including but not limited to a pre-set, threshold, captured, or monitored vehicle component or parts parameters.
  • the application software platform dynamically receives, captures, and monitors the statuses of vehicle components.
  • the application software may be configured to dynamically provide updates to a driver, preferably through an audio output via the voice- controlled user interface.
  • the ecosystem is accessible using a software application embedded within a mobile computing device (e.g., smartphone, etc.).
  • the said device is portable, wearable, attachable to the vehicle interior (e.g., dashboard, holder, windshield), or embedded within an electronic control unit (ECU), or the like, for use by a driver or an occupant of a vehicle.
  • the device may pair to operate with a vehicle on-board communication system (e.g., OBDII, OBD dongle) via WiFi or Bluetooth.
  • the device and functions may be incorporated into the vehicle on-board communication system or OBD system.
  • the device may access the Controller Area Network (CAN) bus of the vehicle.
  • the device may access the Tire Pressure Management System (TPMS) of the vehicle.
  • the device in conjunction with the vehicle on-board communication system or OBD system preferably captures vehicle operational and component parameters of the motor vehicle during operation.
  • the said device in conjunction with the vehicle on-board communication system or OBD system function as an integrated system providing all proprioceptive sensors (e.g. accelerometer, magnetometer, etc.) and/or measuring devices for sensing the operating parameters of the motor vehicle and/or exteroceptive sensors (e.g., camera, IR sensors, ultrasonic, proximity sensor, etc.) and/or measuring devices for sensing the vehicle’s components/parts parameters (e.g., tire pressure, tire temperature) during operation of the motor vehicle.
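For example, the kind of operating parameters named above can be polled over an OBD-II link; the sketch below assumes the third-party python-obd package and a paired OBD-II adapter, neither of which is mandated by the disclosure.

```python
# Sketch of sampling operating parameters over OBD-II, assuming the third-party
# python-obd package and a paired OBD-II adapter (an illustrative assumption).

import obd

def sample_vehicle_parameters() -> dict:
    connection = obd.OBD()                            # auto-detects the adapter port
    wanted = {"rpm": obd.commands.RPM,
              "speed": obd.commands.SPEED,
              "coolant_temp": obd.commands.COOLANT_TEMP}
    readings = {}
    for name, cmd in wanted.items():
        response = connection.query(cmd)
        if not response.is_null():
            readings[name] = response.value           # unit-aware value, e.g. 850 rpm
    return readings
```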
  • the device is operable within or in the proximity of a vehicle including but not limited to a car, electric vehicle, SUV, truck, van, bus, a motorcycle, train, tram, or the like.
  • the device provides the vehicle occupant access to one or more functions including, but not limited to, voice, data, SMS, alerts, location via SMS, GPS location/navigation, navigation guidance, motion detection, and audio I/O.
  • the device is configured to enable the user to access and interact with the AI digital assistant and personal car or a vehicle fleet management ecosystem for safe and secured driving.
  • the voice-controlled speech user interface of the inventive device detects or monitors audio input/output and interacts with a user to determine a user intent based on natural language understanding of the user's speech.
  • the voice-controlled speech user interface is configured to capture user utterances and provide them to a cloud control service.
  • the combination of the speech interface device and one or more applications executed by the control service serves as an Artificial Intelligence (AI) digital assistant.
  • the AI digital assistant provides user voice authentication/identification and conversational interactions, utilizing automated voice recognition-response, natural language processing, predictive algorithms, or the like, to interact with the user and fulfill user requests, preferably providing speech-to-text (STT) and/or text-to-speech (TTS) services, including but not limited to audio I/O of an outgoing (sent) or incoming (received) text message, email, voicemail, text document, social media notification, social media stream (e.g., Facebook postings, Twitter feed, etc.), video, video stream, podcast, webpage contents, GPS navigation information, information relating to vehicle function, alerts about the vehicle and driving environment, alerts about driver behaviors, or the like.
  • the vehicle and driving environment includes, but is not limited to, time-dependent vehicle position/location, distance, speed, acceleration, mileage, time of day, road and terrain type, topology, weather/driving conditions, location, temperature, throttle position, fuel consumption, VIN (vehicle identification number), tachometer value/reading (RPM), G forces, brake pedal position, blind spot, sun angle and sun information, local high occupancy vehicle (HOV) conditions, visibility, lighting condition, seatbelt status, rush hour, CAN vehicle bus parameters including fuel level, distance from other vehicles, distance from obstacles, driver alertness, activation/usage of automated features, activation/usage of Advanced Driver Assistance Systems, traction control data, usage of headlights and other lights, usage of blinkers, vehicle weight, number of vehicle passengers, traffic sign information, junctions crossed, running of yellow and red traffic lights, railroad crossings, alcohol/drug level detection devices, lane position, cars passed, or the like.
  • driver behaviors include, but are not limited to, hard braking, acceleration, cornering, driving distance, mobile phone usage (while driving), seatbelt status, signs of fatigue, driver confidence, lane changes, lane choice, driver alertness, driver distraction, driver aggressiveness, driver mental, mood, and emotional condition, or the like.
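A compact way to carry such parameters through the platform is a single telematics sample record; the field selection below is an illustrative assumption drawn from the lists above, not a prescribed schema.

```python
# Illustrative telematics sample combining vehicle, environment, and
# driver-behavior fields from the lists above; the schema is an assumption.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class TelematicsSample:
    timestamp: datetime
    latitude: float
    longitude: float
    speed_kph: float
    rpm: int
    throttle_pct: float
    fuel_level_pct: float
    tire_pressure_psi: float
    seatbelt_fastened: bool
    hard_brake: bool           # derived driver-behavior flag
    phone_in_use: bool         # derived driver-behavior flag
```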
  • the AI digital assistant provides a vehicle component status to the driver or fleet manager. In an embodiment, the AI digital assistant provides in-vehicle coaching based on real-time driver behavior. The AI digital assistant preferably conducts said tasks in conjunction with one or more cloud servers and or cloud computing services.
  • the device, methods, and system provide analytics for fault prediction and/or failure hazard estimation based on real-time data and measurements of vehicle components.
  • the vehicle telematics system captures vehicle component/part data; analyzes, assesses, and synthesizes one or more predicted occurrences of automotive fault codes; and the resulting prediction is communicated to a driver, preferably through an audio-visual output via the AI digital assistant during operation of the motor vehicle.
  • the analysis predicts the occurrence of a fault or no fault.
  • the predictive occurrence generator measures and/or generates a single or compound of component variables or fault parameters profiling the condition of a vehicle component during operation of the motor vehicle, based on, but not limited to, a preset, threshold, triggered, captured, or one or more said monitored vehicle component operating functions or parameters.
  • the analysis predicts the type of fault and identifies possible contributing factors.
  • the analysis correlates the fault type to a system-level failure log and correlates the extracted fault type with time-series analytics to predict, preferably in real time, system-level occurrences.
  • a risk score is generated and communicated to the fleet manager and or driver which may include a measured maintenance (e.g., maintenance delinquency) and surveillance factor extracted from the automotive data associated with the motor vehicle or the use of active safety features.
  • machine learning algorithms are used for analyses and predictions, including but not limited to, Support Vector Machine (SVM), Artificial Neural Network, Logistic Regression, Decision Tree, Random Forest, or the like.
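As one hedged example of the algorithm families named above, a random-forest fault/no-fault classifier could be trained on aggregated component features; the synthetic data and scikit-learn usage below are illustrative only.

```python
# Sketch of a fault/no-fault classifier using one of the algorithm families
# named above (Random Forest). The features and labels are synthetic.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: mean coolant temp, oil pressure, tire pressure delta, vibration RMS
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```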
  • failure hazard estimation models are used to make predictions, including but not limited to Cox's proportional hazards, Kaplan-Meier, or the like.
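Likewise, the failure-hazard models named above can be fitted to component-life data; the sketch below assumes the third-party lifelines package and uses synthetic service-life records for illustration.

```python
# Kaplan-Meier and Cox proportional-hazards fits on synthetic component-life
# data, assuming the third-party lifelines package (an illustrative choice).

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "days_in_service": [120, 340, 90, 400, 250, 180, 365, 60],
    "failed":          [1,   0,   1,  0,   1,   0,   1,   1],   # 0 = still in service (censored)
    "avg_engine_temp": [95,  92,  99, 88,  89,  94,  96,  101],
})

kmf = KaplanMeierFitter().fit(df["days_in_service"], df["failed"])
print(kmf.median_survival_time_)           # time by which half the components failed

cph = CoxPHFitter().fit(df, duration_col="days_in_service", event_col="failed")
cph.print_summary()                        # hazard ratio associated with engine temperature
```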
  • the telematics-based AI digital assistant feedback means of the system may, for example, comprise a dynamic alert feed via a data link to the motor vehicle's automotive control circuit, wherein the AI digital assistant alerts drivers immediately to one or more measures of vehicle performance or a status of vehicle components including, but not limited to, engine, coolant temperature, engine oil pressure, engine oil temperature, tire pressure, tire temperature, or the like.
  • the telematic system also enables real-time dynamic alerts, maintenance alerts, driver adaption and improvement, providing instant feedback to drivers, training aids, behavior modification techniques, or the like, to ensure safe and secured driving.
  • the device, methods, and software system of the invention can connect to, interact with, or exchange information with the enterprise resource planning (ERP) software platform and/or services of, including but not limited to, a vehicle manufacturer, vehicle dealership/service department, a service station, a fleet company's service station, an OEM, or the like.
  • the software application (via APIs) enables a driver and/or fleet manager to locate, identify, search, obtain a quote, and schedule and/or purchase a replacement part and/or maintenance/repair service based on the relevant telematics data, geo-location, or predictive analysis afforded by the said platform.
  • a vehicle has many consumable items (e.g., tires, brake pads, oil, and filters), and failure to maintain or replace them in time can lead to breakdowns or safety incidents.
  • the system enables the avoidance of such incidents by allowing the control, maintenance, scheduling, and replacement of consumables based on predictive analytics and subsequent maintenance transactions (e.g., ordering or purchasing a replacement) using the software application in conjunction with said ERP software platform and/or services.
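A hypothetical example of such an ERP-side maintenance transaction is sketched below; the endpoint, fields, and payload are placeholders, since no specific ERP API is identified in the disclosure.

```python
# Hypothetical ERP order call triggered by a predicted consumable failure.
# The endpoint and payload fields are placeholders, not a real ERP API.

import json
from urllib import request

ERP_BASE = "https://erp.example.invalid/api"            # placeholder ERP endpoint

def order_replacement_part(vin: str, part_number: str, service_location: str) -> dict:
    payload = json.dumps({
        "vin": vin,
        "part_number": part_number,
        "deliver_to": service_location,
        "reason": "predicted_failure",
    }).encode("utf-8")
    req = request.Request(f"{ERP_BASE}/orders", data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())                   # quote / order confirmation
```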
  • the device and technology platform establish a vehicle telematics fleet management ecosystem, particularly using speech-to-text (STT) and text-to-speech (TTS) input-output (I/O) technology and cloud computing services.
  • the system incorporates a voice-controlled AI digital assistant as a simple, user-friendly, natural, and low cognitive demand user-interface or within a communication ecosystem for safe and secured operation.
  • the device and ecosystem together address the shortcomings of current and future static, labor-intensive vehicle fleet management systems.
  • the invention provides a dynamic, efficient, automated, AI assistive, analytical, and predictive vehicle fault/maintenance system for vehicle fleet or personal car health management.
  • the invention provides a vehicle telematics platform and insurance risk management system comprising an apparatus, methods, and system incorporating a voice-controlled user interface and Artificial Intelligence (AI) digital assistance remote cloud services.
  • the platform incorporates at least one wireless communication device, providing one or more user functions including, but not limited to, voice, data, SMS, alerts, location via SMS, GPS location/navigation, motion detection, and voice-controlled audio I/O.
  • the said device preferably incorporates one or more of a microprocessor, microcontroller, micro GSM/GPRS chipset, micro SIM module, read-write memory device, read-only memory device (ROM), random access memory (RAM), flash memory, memory storage device, memory I/O, I/O devices, buttons, display, LED, user interface, rechargeable battery, microphone, speaker, wireless transceiver (e.g., RF, WiFi, Bluetooth, IoT), RF electronic circuits, WiFi electronic circuits, Bluetooth electronic circuits, transceivers (e.g., RF, WiFi, Bluetooth, IoT, etc.), audio CODEC, cellular antenna, GPS antenna, WiFi antenna, Bluetooth antenna, IoT antenna, and vibrating motor (output), preferably configured in combination, to function as an electronic device.
  • the device can execute, from a tangible, non-transitory computer-readable medium (memory), one or more executable codes, algorithms, methods, and/or software instructions for automated voice recognition-response, natural language understanding and processing, speech-to-text (STT) and text-to-speech (TTS) processing/services, and wireless mobile cellular communication.
  • the electronic device is configured to function in combination with an application software platform accessible to multiple clients (users) executable on one or more remote servers, to preferably establish a vehicle telematics and insurance risk management ecosystem.
  • the ecosystem enables feedback communication for a driver and one or more network participants (e.g., family member, worker, employer, insurer, etc.).
  • the device may function in combination with one or more remote servers, cloud control services, to perform natural language or speech- based interactions with the user, to perform/process STT and or TTS functions/services, preferably through a voice-controlled speech user interface.
  • the said electronic device may function in combination with an application software platform accessible to multiple clients (users) executable on one or more remote servers, to preferably establish an insurance telematics risk measurement, analysis, assessment, and management system.
  • the application collects, aggregates, and processes telematics data, single or compound variables, to generate analytical information including but not limited to a score and score parameters profiling the driver and or environmental driving condition during operation of the motor vehicle based on a pre-set, threshold, captured, or monitored operating or environmental parameters.
  • the application software platform dynamically captures and categorizes behavioral and risk profiles of the drivers.
  • the application software may be configured to dynamically provide updates, preferably audio output via the voice-controlled user interface to a driver.
  • the device is portable, wearable, or attachable to the vehicle interior (e.g., holder, windshield), or the like, for use by a driver or an occupant of a vehicle.
  • the device may pair to operate with a vehicle on-board communication system (e.g., OBD, OBD dongle) via WiFi or Bluetooth.
  • the device and functions may be incorporated into the vehicle on-board communication system or OBD system.
  • the device may access the Controller Area Network (CAN) bus of the vehicle.
  • the device in conjunction with the vehicle on-board communication system or OBD system preferably captures contextual/environmental or operational parameters of the motor vehicle during operation.
  • the said device in conjunction with the vehicle on-board communication system or OBD system, function as an integrated system providing all proprioceptive sensors (e.g. accelerometer, magnetometer, etc.) and/or measuring devices for sensing the operating parameters of the motor vehicle and/or exteroceptive sensors (e.g., camera, IR sensors, ultrasonic, proximity sensor, etc.) and/or measuring devices for sensing the environmental parameters during operation of the motor vehicle.
  • the device is operable within or in the proximity of a vehicle including but not limited to a car, electric vehicle, SUV, truck, van, bus, motorcycle, bicycle, plane, spaceship, or the like.
  • the said device provides the vehicle occupant access to one or more functions including, but not limited to, voice, data, SMS, alerts, location via SMS, GPS location/navigation, navigation guidance, motion detection, and audio I/O.
  • the said device enables the user to access and interact with the said AI digital assistant and vehicle telematics ecosystem and/or insurance telematics system for safe and secure driving.
  • the voice-controlled speech user interface of the device detects or monitors audio input/output and interacts with a user to determine a user intent based on natural language understanding of the user's speech.
  • the voice-controlled speech user interface is configured to capture user utterances and provide them to a cloud control service.
  • the combination of the speech interface device and one or more applications executed by the control service serves as an Artificial Intelligence (AI) digital assistant.
  • the AI digital assistant provides user voice authentication/identification and conversational interactions, utilizing automated voice recognition-response, natural language processing, predictive algorithms, and the like, to interact with the user and fulfill user requests, preferably providing speech-to-text (STT) and/or text-to-speech (TTS) services, including but not limited to audio I/O of an outgoing (sent) or incoming (received) text message, email, voicemail, text document, social media notification, social media stream (e.g., Facebook postings, Twitter feed, etc.), video, video stream, podcast, webpage contents, GPS navigation information, information relating to vehicle function, alerts about the vehicle and driving environment, alerts about driver behaviors, or the like.
  • the vehicle and driving environment includes, but is not limited to, time-dependent vehicle position/location, distance, speed, acceleration, mileage, time of day, road and terrain type, topology, weather/driving conditions, location, temperature, throttle position, fuel consumption, VIN (vehicle identification number), tachometer value/reading (RPM), G forces, brake pedal position, blind spot, sun angle and sun information, local high occupancy vehicle (HOV) conditions, visibility, lighting condition, seatbelt status, rush hour, CAN vehicle bus parameters including fuel level, distance from other vehicles, distance from obstacles, driver alertness, activation/usage of automated features, activation/usage of Advanced Driver Assistance Systems, traction control data, usage of headlights and other lights, usage of blinkers, vehicle weight, number of vehicle passengers, traffic sign information, junctions crossed, running of orange and red traffic lights, railroad crossings, alcohol/drug level detection devices, lane position, cars passed, or the like.
  • driver behaviors include, but are not limited to, hard braking, acceleration, cornering, driving distance, mobile phone usage (while driving), seatbelt status, signs of fatigue, driver confidence, lane changes, lane choice, driver alertness, driver distraction, driver aggressiveness, driver mental, mood, and emotional condition, or the like.
  • the AI digital assistant provides a risk factor status to the driver, a parent, or an insurer, based on said vehicle status, vehicle environment, or driver behavior.
  • the AI digital assistant offers a reward (e.g. insurance discount) for good driving behavior.
  • the AI digital assistant warns the risk of an increase in insurance premium based on real-time driver behavior.
  • the AI digital assistant provides counseling or advice based on real-time driver behavior.
  • the AI digital assistant preferably conducts said tasks in conjunction with one or more cloud servers and or cloud computing services.
  • the device, methods, and system provide a dynamic driver-behavior risk-profiling and scoring system based on real-time scoring and measurements.
  • the vehicle telematics system captures driver behavior data; analyzes, assesses, and synthesizes one or more risk profiles; and the resulting profile is preferably provided as audio-visual output via the AI digital assistant to a driver, a family member, or an insurer, using proprioceptive sensors of the said device for sensing operating parameters of the motor vehicle and/or exteroceptive sensors for sensing environmental parameters during operation of the motor vehicle.
  • the score generator measures and/or generates single or compound variable scoring parameters profiling the use, style, and/or environmental condition of driving during operation of the motor vehicle, based on a preset, threshold, triggered, captured, or one or more said monitored operating parameters or environmental parameters.
  • the variable driving score generated can include for example, but not limited to, speed and/or acceleration and/or braking and/or cornering and/or jerking, and/or a measure of distraction parameters comprising mobile phone usage while driving and/or a measure of fatigue parameter.
  • variable contextual/environmental score can include for example, but not limited to, road condition, road topology, traffic, road type and/or number of intersection and/or tunnels and/or elevation, and/or measured time of travel parameters, and/or measured weather parameters and/or measured location parameters, and or measured distance driven parameters, and or neighborhood parameters.
  • the risk scores may include a measured maintenance (e.g., maintenance delinquency) and surveillance factor extracted from the automotive data associated with the motor vehicle or the use of active safety features.
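To make the structure of such a score generator concrete, the sketch below combines driving-style, contextual, and maintenance sub-scores into a single 0-100 risk score; the weights and field names are illustrative assumptions, not the scoring model of the disclosure.

```python
# Illustrative composite risk score (0-100, higher = riskier). Weights and
# field names are assumptions chosen only to show the structure of the score.

def composite_risk_score(driving: dict, context: dict,
                         maintenance_delinquency: float = 0.0) -> float:
    style = (0.3 * driving.get("harsh_brakes_per_100km", 0.0)
             + 3.0 * driving.get("speeding_ratio", 0.0)
             + 0.2 * driving.get("phone_use_min_per_hour", 0.0)
             + 0.2 * driving.get("fatigue_events", 0.0))
    environment = (5.0 * context.get("night_ratio", 0.0)
                   + 3.0 * context.get("bad_weather_ratio", 0.0)
                   + 0.2 * context.get("intersections_per_km", 0.0))
    raw = 0.6 * style + 0.3 * environment + 0.1 * maintenance_delinquency
    return round(min(100.0, 10.0 * raw), 1)


print(composite_risk_score({"harsh_brakes_per_100km": 3, "speeding_ratio": 0.1},
                           {"night_ratio": 0.2, "bad_weather_ratio": 0.05},
                           maintenance_delinquency=1.0))
```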
  • the telematics-based AI digital assistant feedback means of the system may, for example, comprise a dynamic alert feed via a data link to the motor vehicle's automotive control circuit, wherein the AI digital assistant alerts drivers immediately to one or more performance measures including, but not limited to, tachometer reading (e.g. high RPM), unsteady driving condition, excessive engine power, harsh acceleration, road anticipation, and/or ECO drive system.
  • the telematic system enables real-time dynamic driver adaption and improvement, providing instant feedback to drivers, training aids, behavior modification techniques, or the like, to ensure safe and secured driving.
  • the device, methods, and system of the invention enable insurers to provide one or more insurance quotes based on the score and other relevant telematics data (e.g., automatic capture and analysis of risk scores and reaction to data) afforded by the said platform.
  • the information generated by the platform is an integral solution to UBI insurance schemes including, but not limited to, pay-as-you-drive (PAYD), pay-how-you-drive (PHYD), manage-how-you-drive (MHYD), or the like.
  • the risk management/profiling system may allow an insurer to offer a discount based on driving behavior.
  • the risk-management/profiling system may, for example, offer a discount based on mileage (how much a person drives) rather than where or how the person drives.
  • the information generated by the system may be combined with additional information, including but not limited to: insurance policy data and information, individual driving data, crash forensics, credit scores, driving statistics, historic claims, market databases, driving license points, claims statistics, rewards, discounts, and contextual data for weather, driving conditions, road type, environment, or the like.
  • the platform provides an insurer with a comprehensive risk-transfer structure comprising device and vehicle sensors collection and or combined with ADAS (advanced driver assistance systems) data for accurate risk analysis and incorporation within an automated risk-transfer system/coverage, claims notification, and value-added services (e.g., crash reporting, post-accident services, Emergency-Call/Breakdown-Call, vehicle theft, driver coaching/scoring, reward, driver safety training, etc.).
  • FIGS. 1A and 1B are schematic diagrams illustrating the components of the cellular communication device for assisting drivers with in-vehicle communication activities.
  • FIG. 2 is an illustration of a simple user interface according to an embodiment of the invention.
  • FIG. 3 is an illustration depicting the electronic device within a communication ecosystem.
  • FIG. 4 is an illustration depicting remote servers and application modules within a server for processing speech-to-text and text-to-speech services according to an embodiment of the invention.
  • FIG. 5 is a schematic illustration of an architecture for an embodiment of a dynamically triggered vehicle telematics risk prediction-risk scoring system.
  • FIG. 6 is a schematic illustration of an architecture for an embodiment of a dynamically triggered vehicle telematics vehicle health and maintenance predictive system.
  • FIG. 7 illustrates a method to obtain the most fitted survival function derived from different groups of vehicle health parameters.
  • FIG. 8 is a schematic illustration of an architecture for an embodiment of a dynamically triggered vehicle telematics process for determining driver behavior.
  • FIG. 9A is a schematic illustration of an architecture for an embodiment of a dynamically triggered vehicle telematics process for determining driver behavior.
  • FIG. 9B is an illustration of a process that occurs when a customer requests a part or services from a vendor through an ERP system.
  • spatially relative terms such as "above," "below," "upper," "lower," "top," "bottom," and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "below" other elements or features would then be oriented "above" the other elements or features. Thus, the term "below" can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. Well-known functions or constructions may not be described in detail for brevity and/or clarity.
  • Numerous alternative embodiments of a vehicle telematics device, methods, and technology platform (system) are described herein. Such devices, methods, and systems assist drivers with in-vehicle mobile communication activities and deter risky behavior for safety purposes.
  • An object of the invention is the use of a technology platform to facilitate mobile communication between a driver and his or her network participants (e.g., parents, insurer, etc.).
  • the system leverages a low cognitive demand, voice-controlled AI digital assistant for accessing a variety of remote cloud computing services including but not limited to, automated voice recognition-response, natural language understanding-processing, speech-to-text (STT) and text-to- speech (TTS) processing/services.
  • the platform or system comprises a combination of at least one of the following components: cellular communication device; computing device; communication network; remote server; cloud server; cloud application software.
  • the cloud server and service are commonly referred to as “cloud computing”, “on-demand computing”, “software as a service (SaaS)”, “platform computing”, “network-accessible platform”, “cloud services”, “data centers,” and the like.
  • cloud can include "a collection of hardware and software that forms a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services, etc.), which can be suitably provisioned to provide on-demand self-service, network access, resource pooling, elasticity and measured service, among other features."
  • Cloud may be deployed as a private cloud (e.g., infrastructure operated by a single enterprise/organization), community cloud (e.g., infrastructure shared by several organizations to support a specific community that has shared concerns), public cloud (e.g., infrastructure made available to the general public, such as the Internet), or a suitable combination of two or more disparate types of clouds.
  • a cloud computing model enables some of those responsibilities which previously may have been provided by an organization's own information technology department, to instead be delivered as service layers within a cloud environment, for use by consumers (either within or external to the organization, according to the cloud's public/private nature).
  • a cloud computing model can take the form of various service models such as, for example, Software as a Service ("SaaS"), "in which consumers use software applications that are running upon a cloud infrastructure, while a SaaS provider manages or controls the underlying cloud infrastructure and applications," and Platform as a Service ("PaaS"), in which consumers can use software programming languages and development tools supported by a PaaS provider to develop and deploy their own applications while the provider manages the underlying cloud infrastructure.
  • the driver communication assistive system comprises a combination of at least one of the following: voice-controlled speech user interface; computing device; communication network; remote server; cloud server; server system; and cloud application software.
  • server system can also employ various virtual devices and/or services of third-party service providers (e.g., third-party cloud service providers) to provide the underlying computing resources and/or infrastructure resources of server system. These components are configured to function together to enable a user to interact with a resulting AI digital assistant.
  • Non limiting examples of an AI digital assistant include the ALEXA software and services from Amazon of Seattle, WA, CORTANA software and services from Microsoft Corporation of Redmond, Wash., the GOOGLE NOW software and services from Google Inc. of Mountain View, Calif., and the SIRI software and services from Apple Inc. of Cupertino, Calif.
  • an application software accessible by the user and others, using one or more remote computing devices, provides transportation network support system for the driver.
  • the terms " AI digital assistant,” “virtual assistant,” “intelligent automated assistant,” or “automatic digital assistant” can refer to any information processing system that interprets natural language input in spoken and/or textual form to infer user intent, and performs actions based on the inferred user intent.
  • the electronic device of the invention is a fully functional wireless mobile communication device that is wearable, or attachable to a dashboard, windshield, or the like, for use by a driver or an occupant of a vehicle.
  • the device may pair to operate with a vehicle on-board communication system via Bluetooth or Bluetooth Low Energy (BTLE).
  • the device and functions may be incorporated into the vehicle on-board communication system.
  • the device is operable within or in the proximity of a vehicle including but not limited to a car, electric vehicle, SUV, truck, van, bus, motorcycle, bicycle, plane, spaceship, or the like.
  • the device provides a user-interface that allows a user to access features that include smart and secure location-based services, mobile phone module, voice and data, advanced battery system and power management, direct 911 emergency service access, and motion detection via an accelerometer sensor. Additional functions may include one or more measurements of linear acceleration for motion detection.
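A small sketch of how the accelerometer's linear-acceleration measurements could support motion detection is given below; the 0.4 g harsh-braking threshold and the axis convention are assumptions for illustration.

```python
# Motion detection from linear-acceleration samples: flag harsh braking when the
# acceleration magnitude exceeds an assumed 0.4 g threshold while decelerating.

import math

G = 9.81  # m/s^2

def detect_harsh_brakes(samples, threshold_g: float = 0.4):
    """samples: iterable of (ax, ay, az) in m/s^2 with gravity removed; the x axis
    is assumed to point along the direction of travel. Returns flagged indices."""
    events = []
    for i, (ax, ay, az) in enumerate(samples):
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude / G >= threshold_g and ax < 0:     # decelerating along travel axis
            events.append(i)
    return events


print(detect_harsh_brakes([(-1.0, 0.1, 0.0), (-4.5, 0.2, 0.1), (0.3, 0.0, 0.0)]))
```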
  • the said device may contain one or more of a microprocessor, microcontroller, micro GSM/GPRS chipset, micro SIM module, read-write memory device, read-only memory device (ROM), random access memory (RAM), flash memory, memory storage device, memory I/O, I/O devices, buttons, display, LED, user interface, rechargeable battery, microphone, speaker, wireless transceiver (e.g., RF, WiFi, Bluetooth, Bluetooth Low Energy (BTLE), IoT), RF electronic circuits, WiFi electronic circuits, Bluetooth electronic circuits, transceivers (e.g., RF, WiFi, Bluetooth, IoT, etc.), audio CODEC, cellular antenna, GPS antenna, WiFi antenna, Bluetooth antenna, IoT antenna, vibrating motor (output), power gauge monitor, wireless battery charger, wireless transceiver, and the like, to function fully as a portable mobile communication device.
  • schematic diagram 100 illustrates the components that may be incorporated within the electronic device according to the invention.
  • device 101 may contain a radio module 102 configured to function as a stand-alone wireless cellular communication apparatus (e.g., without Bluetooth or Bluetooth Low Energy (BTLE)) via connection to cellular antenna 103 and GPS antenna 104.
  • the communication antennas may be incorporated within device 101 or located externally to device 101.
  • the device is powered by a rechargeable battery 105 which can be re-energized by charger 106 in conjunction with an external docking station accessible through direct contact or wireless connection 107.
  • a fuel/power gauge 108 allows the device to monitor energy consumption and life of the battery 105.
  • a switched-mode power supply unit 109 and low dropout regulator (LDO) 110 may be incorporated to convert electrical power efficiently, eliminate switching noise, and provide simplicity in design.
  • Device 101 also contains an audio CODEC 111 functioning together with an audio input-output (I/O) 112. Additional I/O devices may include a connection to one or more I/O buttons 113, output light-emitting diodes (LEDs) via LED driver 114 through connection 115, and vibrational motor 116.
  • device 101 may contain an accelerometer unit 117 as well as a subscriber identification module (SIM) 118.
  • the device 101 may be paired to operate with a vehicle on-board communication system via Bluetooth or Bluetooth Low Energy (BTLE).
  • the communication device, hardware, and internal components can be integrated within a variety of form factors.
  • the physical form factors may include, but are not limited to, an apparatus of the invention, a self-contained compact box, a miniature device, a device resembling a portable mobile unit, a cellular phone, a mobile phone, a tablet, or the like.
  • the wireless communication may include a cellular communication that uses at least one of long-term evolution (LTE), LTE advanced (LTE-A), code division multiple access (CDMA), wideband CDMA (WCDMA), universal mobile telecommunication system (UMTS), wireless broadband (WiBro), and global system for mobile communication (GSM).
  • the wireless communication may include at least one of wireless fidelity (WIFI), Bluetooth, Bluetooth low energy (BLE), ZigBee, near field communication (NFC), magnetic secure transmission, radio frequency (RF), or body area network (BAN).
  • the wireless communication may include a global positioning system (GPS), global navigation satellite system (GNSS), the Beidou navigation satellite system (Beidou), or Galileo (the European global satellite-based navigation system).
  • GPS may interchangeably be referred to as "GNSS." Additional bands and equivalent terminologies include Third Generation (3G), Fourth Generation (4G), Fifth Generation (5G), future generations, and the like.
  • the wireless transceivers may be configured to communicate according to an IEEE 802.11 standard, a cellular (e.g., 2G, 3G, 4G/LTE, 5G) standard, a GPS standard, or other standards.
  • such a wireless communication can be implemented in accordance with one or more radio technology protocols, for example, such as NFC, 3GPP LTE, LTE-A, 3G, 4G, 5G, WiMax, Wi-Fi, Bluetooth, ZigBee, IoT, or the like.
  • schematic diagram 100 illustrates the components that may be incorporated, but not limited, within the vehicle telematics device according to the invention.
  • device 101 may contain a radio module 102 configured to function as a stand-alone wireless cellular communication apparatus via connection to cellular antenna 103 and GPS antenna 104.
  • Device 101 can also incorporate a Bluetooth antenna 125.
  • the communication antennas may be incorporated within device 101 or located externally to device 101.
  • the device is optionally powered by a rechargeable battery 105 which can be re-energized by charger 106 in conjunction with an external docking station accessible through direct contact or wireless connection 107.
  • a fuel/power gauge 108 allows the device to monitor energy consumption and life of battery 105.
  • rechargeable battery 105 and charger 106 may be omitted, and device 101 may be powered directly from a vehicle’s power system.
  • a switched-mode power supply unit 109 and low dropout regulator (LDO) 110 may be incorporated to convert electrical power efficiently, eliminate switching noise, and provide simplicity in design.
  • Device 101 also contains an audio CODEC 111 functioning together with an audio input-output (I/O) 112. Additional I/O devices may include a connection to one or more I/O buttons 113, output light-emitting diodes (LEDs) via LED driver 114 through connection 115, and vibrational motor 116.
  • device 101 may contain an accelerometer unit 117 as well as a subscriber identification module (SIM) 118.
  • the device 101 may incorporate an OBDII interpreter integrated circuit (IC) 120, for example and without limitation an ELM327 (Elm Electronics Inc., Ontario, CA), to communicate with a vehicle OBDII 121 via RS232 and automatically interpret all OBD II signal protocols.
  • component 120 may comprise, for example and without limitation, an ET7190 (Haikou Xingong Electronic Co, Ltd, Haikou City, China) OBDII protocol chip, connected to one or more microcontrollers of device 101, to communicate and automatically interpret all OBD II signal protocols.
  • the device 101 may pair to operate with a vehicle on-board communication system, CAN bus, or OBDII over WiFi, Bluetooth, or Bluetooth Low Energy (BTLE), via, for example, an ELM327 integrated component (Elm Electronics Inc., Ontario, CA) 122.
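  • By way of non-limiting illustration, device 101 might poll the vehicle through an ELM327-style OBDII interpreter over a serial link as sketched below; the serial port name and baud rate are assumptions made for illustration only.

```python
# Minimal sketch of polling a vehicle's OBD-II port through an ELM327-style
# interpreter over a serial link, as one possible realization of component 120.
import serial  # pyserial

def elm_command(link, cmd):
    """Send one command to the ELM327 and return its raw reply (up to the '>' prompt)."""
    link.write((cmd + "\r").encode("ascii"))
    return link.read_until(b">").decode("ascii", errors="ignore")

def read_rpm(link):
    """Request OBD-II PID 0x0C (engine RPM) and decode the two data bytes."""
    reply = elm_command(link, "010C")           # mode 01, PID 0C
    data = reply.replace("\r", " ").split()
    if "41" in data:                            # positive response to mode 01
        i = data.index("41")
        a, b = int(data[i + 2], 16), int(data[i + 3], 16)
        return ((a << 8) + b) / 4.0             # standard RPM scaling
    return None

if __name__ == "__main__":
    # "/dev/ttyUSB0" is a placeholder; the actual port depends on the hardware.
    with serial.Serial("/dev/ttyUSB0", 38400, timeout=2) as link:
        elm_command(link, "ATZ")   # reset the interpreter
        elm_command(link, "ATE0")  # echo off
        print("Engine RPM:", read_rpm(link))
```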
  • accessing OBDII and/or a vehicle’s CAN bus and/or an ECU and/or the central vehicle computer enables device 101 to receive data from a vehicle’s Tire Pressure Management System (TPMS).
  • TPMSs continuously measure air pressure inside all tires of passenger cars, trucks, and multipurpose passenger vehicles, and alert drivers if any tire is significantly underinflated or overinflated.
  • Most automobiles are equipped with direct TPMSs, relying on battery-powered pressure sensors inside each tire to measure tire pressure and communicate their data via a radio frequency (RF) transmitter.
  • the receiving tire pressure control unit analyzes the data and can send results or commands to the central car computer over the Controller Area Network (CAN), e.g., to trigger a warning message on the vehicle dashboard.
  • radio module 102 may comprise an RF transceiver to directly receive tire sensor data (e.g., pressure, temperature).
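  • As one non-limiting illustrative sketch, the received tire sensor data may be checked against inflation thresholds as shown below; the 4-byte frame layout and threshold values are hypothetical and chosen for illustration only, as real TPMS encodings are vendor-specific.

```python
# Illustrative sketch of interpreting tire-pressure data received by device 101,
# whether over the CAN bus or directly from RF tire sensors. The 4-byte payload
# (one pressure byte per tire, in units of 4 kPa) is a hypothetical encoding.
UNDER_KPA = 180   # assumed under-inflation alert threshold
OVER_KPA = 320    # assumed over-inflation alert threshold

def decode_tire_pressures(frame: bytes) -> dict:
    """Map a hypothetical 4-byte TPMS payload to per-tire pressures in kPa."""
    tires = ("front_left", "front_right", "rear_left", "rear_right")
    return {tire: frame[i] * 4 for i, tire in enumerate(tires)}

def tpms_alerts(frame: bytes) -> list:
    """Return human-readable alerts for any significantly mis-inflated tire."""
    alerts = []
    for tire, kpa in decode_tire_pressures(frame).items():
        if kpa < UNDER_KPA:
            alerts.append(f"{tire} under-inflated ({kpa} kPa)")
        elif kpa > OVER_KPA:
            alerts.append(f"{tire} over-inflated ({kpa} kPa)")
    return alerts

# Example: rear_left reporting 42 * 4 = 168 kPa triggers an under-inflation alert.
print(tpms_alerts(bytes([60, 58, 42, 61])))
```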
  • communication (voice, data, SMS, etc.), OBDII, CAN bus, TPMS, or tire sensor data are secured using security encryption and transmission protocols known in the art.
  • the vehicle’s position estimates may be derived from a combination of cellular radios with an accuracy of only a few kilometers or several hundred meters, shorter-range radios such as 802.11 (WiFi) with an accuracy of a few tens to a few hundreds of meters, or by sampling a GPS receiver on the device. It is understood that GPS may be inaccurate or unavailable in certain geographic locations (e.g., urban canyons). The use of short-range radios, where networks are accessible, may be desirable to conserve battery energy.
  • the accelerometer data generated from the device may be used principally, instead of GPS exclusively, to infer vehicular acceleration (longitudinal, lateral, and vertical).
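  • As a non-limiting illustrative sketch of this approach, a slow moving average of the raw accelerometer samples can estimate the gravity component per axis, and subtracting it leaves the dynamic (linear) acceleration; the axis-to-vehicle alignment assumed below is illustrative only.

```python
# Sketch of inferring vehicle acceleration from the device accelerometer instead
# of differentiating GPS speed. A slow moving average estimates the gravity
# component per axis; subtracting it leaves the dynamic (linear) acceleration.
from collections import deque

class LinearAccelerationEstimator:
    def __init__(self, window=50):
        self.samples = {axis: deque(maxlen=window) for axis in "xyz"}

    def update(self, ax, ay, az):
        """Feed one raw accelerometer sample (m/s^2); return gravity-compensated values."""
        raw = dict(zip("xyz", (ax, ay, az)))
        linear = {}
        for axis, value in raw.items():
            self.samples[axis].append(value)
            gravity = sum(self.samples[axis]) / len(self.samples[axis])
            linear[axis] = value - gravity
        # x: longitudinal, y: lateral, z: vertical (assumed mounting orientation)
        return linear["x"], linear["y"], linear["z"]

est = LinearAccelerationEstimator()
for sample in [(0.1, 0.0, 9.8), (0.3, 0.1, 9.8), (2.6, 0.2, 9.9)]:
    print(est.update(*sample))  # last sample reveals a longitudinal acceleration event
```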
  • the communication device, hardware, and internal components can be integrated within a variety of form factors.
  • the physical form factors may include, but are not limited to, an apparatus of the invention, a self-contained compact box, a miniature device, a device resembling a portable mobile unit, a cellular phone, a mobile phone, a tablet, or the like.
  • the wireless communication may include a cellular communication that uses at least one of long-term evolution (LTE), LTE advanced (LTE-A), code division multiple access (CDMA), wideband CDMA (WCDMA), universal mobile telecommunication system (UMTS), wireless broadband (WiBro), and global system for mobile communication (GSM).
  • the wireless communication may include at least one of wireless fidelity (WIFI), BLUETOOTH, BLUETOOTH low energy (BLE), ZIGBEE, near field communication (NFC), magnetic secure transmission, radio frequency (RF), or body area network (BAN).
  • the wireless communication may include a global positioning system (GPS), global navigation satellite system (GNSS), the Beidou navigation satellite system (Beidou), or Galileo (the European global satellite-based navigation system).
  • GPS may interchangeably be referred to as "GNSS." Additional bands and equivalent terminologies include Third Generation (3G), Fourth Generation (4G), Fifth Generation (5G), future generations, and the like.
  • the wireless transceivers may be configured to communicate according to an IEEE 802.11 standard, a cellular (e.g., 2G, 3G, 4G/LTE, 5G) standard, a GPS standard, or other standards.
  • such a wireless communication can be implemented in accordance with one or more radio technology protocols, for example, such as NFC, 3GPP LTE, LTE-A, 3G, 4G, 5G, WiMax, Wi-Fi, Bluetooth, ZigBee, IoT, or the like.
  • the device 101 of FIG. 1 operates in conjunction with one or more integrated, external-facing, simple user I/O interfaces, including but not limited to a microphone, speaker, button, LED, E-ink display, touch screen, or the like, for user interaction that is user-friendly, natural, and of low cognitive and low visual demand.
  • FIG. 2 is an illustration 200 of a preferred simple user interface 201 of said device 101 of FIG. 1.
  • the simple user interface 201 may incorporate an easily recognizable on-off button 202 and an LED ring 203 that is lit (i.e., emitting light) when the device is active, allowing a user to know that the device is on or in active mode.
  • the simple user interface 201 may also incorporate a microphone 204 for audio reception, a speaker 205 for audio output, and a touch screen display 207 for accessing device functions and visual outputs. Audio reception, which may include spoken words from a user, is captured by microphone 204 and processed by audio CODEC 111 of FIG. 1. It is understood that one or more alternative I/O devices may be incorporated for use within said device 101 of FIG. 1.
  • the input devices may include a keyboard, a mouse device, an additional microphone, a voice-controlled speech interface, a CCD detector, sensors, and the like.
  • the output devices may include displays, touch screen, audio output devices (e.g. speaker, vibrating motor), LED, LED display, E-Ink display, or other output devices.
  • the input/output (I/O) interfaces permit communication of information between device 101 of FIG. 1 and an external device, such as another computing device, e.g., a network element or an end-user device. Such communication can include direct communication or indirect communication, such as exchange of information between the electronic device 101 and the external device via a network or elements thereof.
  • the I/O interfaces can include one or more of network adapter(s), peripheral adapter(s), and rendering unit(s).
  • the peripheral adapter(s) can include a group of ports, which can include at least one of parallel ports, serial ports, Ethernet ports, V.35 ports, or X.21 ports.
  • the parallel ports can include General Purpose Interface Bus (GPIB) and IEEE-1284, while the serial ports can include Recommended Standard (RS)-232, V.11, Universal Serial Bus (USB), and FireWire (IEEE-1394).
  • Network protocols for communication among two or more electronic devices may be implemented using well-known network communication protocols such as Hypertext Transfer Protocol (HTTP), Secure Hypertext Transfer Protocol (SHTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Ethernet, FireWire, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, whether currently existing or yet to be developed.
  • the driver assistive technology system utilizes an application software platform to create an ecosystem for communication and networking between a driver and network participants (e.g., parents, co-workers, employer, etc.).
  • illustration 300 describes the elements of said ecosystem.
  • One or more users can access the system using a portable computing device 302 or stationary computing device 303.
  • Computing device 302 may be a laptop used by a family member.
  • Stationary computing device 303 may reside at a company facility.
  • One or more users may access the system using other portable computing devices such as a smart phone, a smart appliance, a smart TV, AI digital assistance-enabled devices, a PDA, or the like.
  • Device 301 corresponds to device 101 of FIG. 1.
  • the application software platform can be stored in the one or more servers 308, 309.
  • the software environment allows for, but is not limited to, daily tracking of driver location, sending-receiving instant text messages, push notification, sending-receiving voice messages, sending-receiving audio streams, sending-receiving videos, or the like.
  • transmitted and/or received instant messages may include graphics, photos, audio files, video files and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS).
  • instant messaging refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
  • the message contents may be transmitted to one or more remote servers providing services. These may include, but are not limited to, voice recognition and response, natural language processing, and speech-to-text (STT) and/or text-to-speech (TTS) services.
  • the device 301 enables communication with one or more remote servers, for example server 308, configured for providing cloud-based voice-control service, to perform natural language or speech-based interaction with the user.
  • Communication with server 308 may provide access to other servers and services, preferably access to one or more servers providing STT and/or TTS conversion services.
  • the device 301 detects audio inputs, listens, and interacts with a user to determine a user intent based on natural language understanding of the user's speech.
  • the device 301 is configured to capture user utterances and provide them to the voice-control service located on server 308.
  • the control service performs speech recognition and natural language understanding/processing on the utterances to determine the intents expressed by the utterances.
  • the control service causes a corresponding action to be performed.
  • An action may be performed at the control service or by instructing the said device 301 to perform a function.
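  • As a non-limiting illustrative sketch, the device-side interaction with the control service may resemble the following; the endpoint URL, the JSON response shape, and the helper functions are assumptions made for illustration, and a production implementation would use the service vendor's SDK and authentication.

```python
# Hedged sketch of the device-side interaction with a cloud voice-control
# service such as server 308: capture an utterance, upload it, and act on the
# intent returned. The URL and response fields are placeholders, not a real API.
import requests

VOICE_SERVICE_URL = "https://voice-service.example.com/v1/utterance"  # placeholder

def handle_utterance(audio_bytes: bytes) -> None:
    response = requests.post(
        VOICE_SERVICE_URL,
        files={"audio": ("utterance.wav", audio_bytes, "audio/wav")},
        timeout=10,
    )
    result = response.json()           # e.g. {"intent": "SendMessage", "slots": {...}}
    intent = result.get("intent")
    if intent == "SendMessage":
        send_text_message(result["slots"]["recipient"], result["slots"]["body"])
    elif intent == "ReadMessages":
        play_tts(result.get("speech", "You have no new messages."))
    else:
        play_tts("Sorry, I did not understand that.")

def send_text_message(recipient, body):   # device function invoked by the service
    print(f"Sending SMS to {recipient}: {body}")

def play_tts(text):                        # audio output via speaker 205
    print(f"[speaker] {text}")
```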
  • the combination of the said device 301 and the control service located on remote server 308 serves as an AI digital assistant.
  • the said assistant provides conversational interactions, utilizing automated voice recognition and response, natural language processing, predictive algorithms, and the like, to perform functions, interact with the user, and fulfill user requests.
  • the device 301 enables the driver to access and interact with the assistant for assisting with in-vehicle communication activities.
  • the information generated from the interaction of the user and others can be captured and stored in a remote server, for example remote server 309.
  • This information may be incorporated into the application software, making it accessible to multiple users (e.g., a family member, co-worker, or employer) of the transportation communication ecosystem of this invention.
  • the application software residing in remote server 309 may also be accessible using a multimedia device.
  • Non-limiting exemplary devices include smart TV, smart appliance, FireTV, Fire HD8 Tablet, Echo Show; products available from Amazon.com (Seattle, WA), Nucleus (Nucleuslife.com), Triby (Invoxia.com), TCL Xcess, or the like.
  • said voice-control service server 308 may provide speech services implementing an automated speech recognition (ASR) function, a natural language understanding (NLU) function, an intent router/controller, and one or more applications providing commands back to the voice-controlled access device 101 of FIG. 1.
  • the ASR function can recognize human speech in an audio signal transmitted by the voice-controlled speech interface device and received from a built-in microphone, for example, microphone 204 of FIG. 2.
  • the NLU function can determine a user intent based on user speech that is recognized by the ASR components.
  • the speech services may also include speech generation functionality that synthesizes speech audio.
  • the control service may also provide a dialog management component configured to coordinate speech dialogs or interactions with the user in conjunction with the speech services.
  • Speech dialogs may be used to determine the user intents using speech prompts.
  • One or more applications can serve as a command interpreter that determines functions or commands corresponding to intents expressed by user speech.
  • commands may correspond to functions that are to be performed by the voice-controlled speech user interface embedded within device 101 and the command interpreter may in those cases provide device commands or instructions to the voice-controlled speech user interface for implementing such functions.
  • the command interpreter can implement "built-in" capabilities that are used in conjunction with the voice-controlled speech user interface.
  • the control service may be configured to use a library of installable applications including one or more software applications or skill applications of this invention.
  • the control service may interact with other network-based services (e.g., ALEXA from Amazon of Seattle, WA, CORTANA from Microsoft Corporation of Redmond, Wash., the GOOGLE NOW from Google Inc. of Mountain View, Calif., and the SIRI from Apple Inc. of Cupertino, Calif.) to obtain information, access additional database, application, or services on behalf of the user.
  • a dialog management component is configured to coordinate dialogs or interactions with the user based on speech as recognized by the ASR component and or understood by the NLU component.
  • the control service may also have a TTS component responsive to the dialog management component to generate speech for playback on the voice-controlled speech user interface.
  • the control service may also have an STT component responsive to the dialog management component to convert speech to text for sending text-based messages.
  • STT components may function based on models or rules, which may include acoustic models, specified grammars, lexicons, phrases, responses, and the like, created through various training techniques.
  • the dialog management component may utilize dialog models that specify logic for conducting dialogs with users.
  • a dialog comprises an alternating sequence of natural language statements or utterances by the user and system generated speech or textual responses.
  • the dialog models embody logic for creating responses based on received user statements to prompt the user for more detailed information of the intents or to obtain other information from the user.
  • An application selection component or intent router identifies, selects, and/or invokes installed device applications and/or installed server applications in response to user intents identified by the NLU component.
  • the intent router can identify one of the installed applications capable of servicing the user intent.
  • the application can be called or invoked to satisfy the user intent or to conduct further dialog with the user to further refine the user intent.
  • Each of the installed applications may have an intent specification that defines the serviceable intent.
  • the control service uses the intent specifications to detect user utterances, expressions, or intents that correspond to the applications.
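  • By way of non-limiting illustration, an intent router may match a recognized intent against the intent specifications of installed applications and dispatch to the first application able to service it; the application and intent names below are illustrative only.

```python
# Minimal sketch of an application-selection component (intent router): each
# installed application registers an intent specification, and the router
# dispatches a recognized user intent to an application that can service it.
class InstalledApplication:
    def __init__(self, name, serviceable_intents, handler):
        self.name = name
        self.serviceable_intents = set(serviceable_intents)  # the intent specification
        self.handler = handler

class IntentRouter:
    def __init__(self):
        self.applications = []

    def register(self, application):
        self.applications.append(application)

    def route(self, intent, slots):
        for application in self.applications:
            if intent in application.serviceable_intents:
                return application.handler(intent, slots)
        return "No installed application can service this intent."

router = IntentRouter()
router.register(InstalledApplication(
    "messaging", {"SendMessage", "ReadMessages"},
    lambda intent, slots: f"messaging app handling {intent}"))
print(router.route("SendMessage", {"recipient": "Alex"}))
```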
  • An application intent specification may include NLU models for use by the natural language understanding component.
  • one or more installed applications may contain specified dialog models that create and coordinate speech interactions with the user.
  • these dialog models may be used by the dialog management component to create and coordinate dialogs with the user and to determine user intent either before or during operation of the installed applications.
  • the NLU component and the dialog management component may be configured to use the intent specifications of installed applications, in conjunction with the NLU models and dialog models, to conduct dialogs, to identify expressed user intents, to determine when a user has expressed an intent that can be serviced by an application, and to conduct one or more dialogs with the user.
  • the control service may refer to the intent specifications of multiple applications, including both device applications and server applications, for example, to identify a "Drivesafe" intent.
  • the service may then invoke the corresponding application or "skill" containing instructions to process the intent.
  • the application may receive an indication of the determined intent and may conduct or coordinate further dialogs with the user to elicit further intent details.
  • the application may perform its designed functionality in fulfillment of the intent.
  • said "skill" may be developed using application tools from vendors providing cloud control services (e.g., ALEXA from Amazon of Seattle, WA, CORTANA from Microsoft Corporation of Redmond, Wash., GOOGLE NOW from Google Inc. of Mountain View, Calif., and SIRI from Apple Inc. of Cupertino, Calif.).
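  • As a non-limiting illustrative sketch, a cloud-hosted "Drivesafe" skill handler built with such vendor tools may follow the common request/response pattern below; the intent name and response wording are assumptions for illustration.

```python
# Sketch of an Alexa-style skill handler for a hypothetical "DrivesafeIntent".
# The JSON envelope follows the common custom-skill request/response pattern.
def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "DrivesafeIntent":
        speech = "Drive safe mode is on. I will read incoming messages aloud."
    else:
        speech = "Say, turn on drive safe, to begin."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": False,
        },
    }

# Minimal local test with a hand-built request envelope.
print(lambda_handler({"request": {"type": "IntentRequest",
                                  "intent": {"name": "DrivesafeIntent"}}}, None))
```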
  • device 101 of FIG. 1 is operable in conjunction with a voice-control service server 401, additional resident software applications, or another remote server 402, for executing AI digital assistance functions to perform STT/TTS conversion and audio I/O functions.
  • FIG. 4 is an illustration 400 of the components of remote server 402 providing STT/TTS conversion and delivery services.
  • the remote servers preferably comprise application software modules that include one or more of an I/O processing module 403, a speech-to-text (STT) processing module 404, a phonetic alphabet conversion module 405, a user database 406, a vocabulary database 407, a service processing module 408, a task flow processing module 409, a text-to-speech (TTS) processing module 410, and a speech synthesis module 411.
  • Each of these modules can access one or more of the following systems, data, and models, or a subset thereof: ontology, vocabulary index, user data, task flow models, service models, and ASR systems of service 401.
  • the AI digital assistant can perform the following, without limitation: converting speech input into text; converting text output into speech; identifying a user's intent expressed in a natural language input received from the user; actively eliciting and obtaining information needed to fully infer the user's intent; determining the task flow for fulfilling the inferred intent; and executing the task flow to fulfill the inferred intent.
  • the I/O processing module can forward the speech input to the STT processing module (or a speech recognizer) for speech-to-text conversion.
  • I/O processing module 403 can forward the text input to TTS processing module 404 for text-to- speech conversions.
  • the speech synthesis module 411 can be configured to synthesize speech outputs for presentation to the user. Speech synthesis module 411 synthesizes speech outputs based on text provided by the digital assistant services of server 401, which may be in the form of out-going (sending) or in-coming (received) text message, email, voicemail, text document, social media notification, social media stream (e.g., Facebook postings, Twitter feed, etc.), video, video stream, podcast, webpage contents, GPS navigation information, or the like.
  • transmitted and/or received instant messages may include graphics, photos, audio files, video files and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS).
  • the generated dialogue response can be in the form of a text string that speech synthesis module 411 can convert to an audible speech output.
  • a text string may require one or more processing steps that include, but are not limited to: text pre-processing (e.g., spell-check), text processing (e.g., standardization and normalization, grapheme-to-phoneme conversion, dictionary-based conversion), and speech synthesis (e.g., waveform generation and output).
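  • As a non-limiting illustrative sketch of these steps, a text string may be normalized and converted to phonemes with a dictionary-based lookup; the tiny lexicon below is a stand-in for a real pronunciation dictionary, and waveform generation is omitted.

```python
# Illustrative sketch of simple text normalization followed by a
# dictionary-based grapheme-to-phoneme lookup prior to synthesis.
import re

LEXICON = {  # hypothetical phoneme entries
    "drive": "D R AY V",
    "safe": "S EY F",
    "mode": "M OW D",
    "on": "AA N",
}

def normalize(text: str) -> list:
    """Lowercase, expand a few symbols, and split into words."""
    text = text.lower().replace("&", " and ")
    text = re.sub(r"[^a-z0-9' ]+", " ", text)
    return text.split()

def to_phonemes(text: str) -> list:
    """Dictionary-based grapheme-to-phoneme conversion with a spelled-out fallback."""
    return [LEXICON.get(word, " ".join(word.upper())) for word in normalize(text)]

print(to_phonemes("Drive-safe mode: ON"))
# ['D R AY V', 'S EY F', 'M OW D', 'AA N']
```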
  • Speech synthesis module 411 can use any appropriate speech synthesis technique to generate speech outputs from text, including, but not limited to, concatenative synthesis, unit selection synthesis, diphone synthesis, domain-specific synthesis, formant synthesis, articulatory synthesis, Hidden Markov Model (HMM) based synthesis, and sinewave synthesis.
  • speech synthesis module 411 can be configured to synthesize individual words based on phonemic strings corresponding to the words. For example, a phonemic string can be associated with a word in the generated dialogue response. The phonemic string can be stored in metadata associated with the word. The speech synthesis module can be configured to directly process the phonemic string in the metadata to synthesize the word in speech form.
  • the speech synthesis can be performed on one or more remote servers with high processing power or resources, preferably to obtain higher-quality and faster speech outputs, to be sent to electronic device 101 of FIG. 1, addressing the shortcomings of a conventional VCS.
  • FIG. 5 schematically illustrates an architecture 500 for an embodiment of a dynamically triggered vehicle telematics risk prediction/risk scoring system, in particular providing a dynamic, telematics-based connection to cloud computing server 308 or 309 of FIG. 3 and a telematics data aggregator/analytics module by means of a mobile telematics cellular communication device 101 of FIG. 1 executing the mobile telematics software applications described herein.
  • Telematics data aggregator/analytics score generating module comprises an event detection component 501 and a scoring function component 502.
  • Event detection component 501 receives from the vehicle telematics device 101, through the communication network as described in FIG. 3, various operational data inputs including IMU data 503, GPS data 504, and driver data 505.
  • the identified event 506 is combined with environmental contextual data 507 (e.g., weather, terrain, etc.) and, for example but not limited to, other data 508, and is fed into the scoring function component 502 for processing.
  • the output of the telematics data aggregator/analytics module is a score element 509.
  • the score element 509 may be, but is not limited to, a driver score, a driver behavior score, a vehicle operating/status score, a safety score, a driving condition score, a road condition score, a weather-related score, a risk score, a distraction score, or a contextual or environment score.
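  • As a non-limiting illustrative sketch, scoring function component 502 may combine detected events with contextual data to produce such a score element; the weights, event severities, and context multipliers below are illustrative values only.

```python
# Hedged sketch of a scoring function: an identified event is combined with
# contextual data to produce a score element. All constants are illustrative.
EVENT_SEVERITY = {"harsh_braking": 4.0, "harsh_acceleration": 3.0,
                  "speeding": 5.0, "phone_use": 6.0}
CONTEXT_MULTIPLIER = {"rain": 1.3, "night": 1.2, "heavy_traffic": 1.15}

def driver_score(events, context, base=100.0):
    """Start from a perfect score and deduct weighted penalties per detected event."""
    multiplier = 1.0
    for condition in context:
        multiplier *= CONTEXT_MULTIPLIER.get(condition, 1.0)
    penalty = sum(EVENT_SEVERITY.get(event, 1.0) for event in events) * multiplier
    return max(0.0, round(base - penalty, 1))

# A trip with two harsh-braking events and phone use, at night in the rain.
print(driver_score(["harsh_braking", "harsh_braking", "phone_use"], ["rain", "night"]))
```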
  • the vehicle telematics communication device and cloud computing server applications react in real time, dynamically, to captured operational or contextual parameters, particularly vehicle parameters monitored and captured during operation.
  • the present invention also provides telematics-based automated risk profiles, alerts, and real-time notifications.
  • the inventive system provides a structure for the use of telematics together with real-time risk monitor, assessment, analysis, and management insurance system.
  • the vehicle telematics system captures driver behavior data; analyzes, assesses, and synthesizes one or more risk profiles based on one or more score 509 outputs; and the resulting profile is preferably provided for audio-visual output via the AI digital assistant to a driver, a family member, or an insurer, using proprioceptive sensors of the device for sensing operating parameters of the motor vehicle and/or exteroceptive sensors for sensing environmental parameters during operation of the motor vehicle.
  • the score generator module measures and/or generates a single or compound set of variable scoring parameters profiling the use and/or style and/or contextual condition/data 507 of driving during operation of the motor vehicle, based on a preset, threshold, triggered, captured, or one or more monitored operating parameters or environmental parameters described herein.
  • the vehicle telematics system captures driver behavior data; analyzes, assesses, and synthesizes one or more risk profiles based on one or more score 509 outputs; and the resulting profile is preferably provided for audio-visual output via the AI digital assistant to a driver, a fleet manager, or an insurer, using proprioceptive sensors of the said device for sensing operating parameters of the motor vehicle and/or exteroceptive sensors for sensing environmental parameters during operation of the motor vehicle.
  • the score generator module measures and/or generates a single or compound set of variable scoring parameters profiling the use and/or style and/or contextual condition/data 507 of driving during operation of the motor vehicle, based on a preset, threshold, triggered, captured, or one or more monitored operating parameters or environmental parameters described herein.
  • the AI digital assistant may be used to remind, teach, or guide a driver, including but not limited to, company driving policy, maintenance policy, driving guidelines, regulations, training instructions, or the like.
  • FIG. 6 schematically illustrates an architecture 600 for an embodiment of a dynamically triggered vehicle telematics vehicle health and maintenance predictive system, in particular providing a dynamic, telematics-based connection to cloud computing server 308 or 309 of FIG. 3 and a telematics data analytics module by means of a mobile telematics cellular communication device 101 of FIG. 1 executing the mobile telematics software applications described herein.
  • Telematics data analytics generating module comprises an event detection component 601 and a fault hazard function estimation component 602.
  • Event detection component 601 receives from the vehicle telematics device 101, through the communication network as described in FIG. 3, various operational data inputs including IMU data 603, GPS data 604, and vehicle operation data 605.
  • the identified event 606 is combined with environmental contextual data 607 (e.g., weather, terrain, etc.) and, for example but not limited to, other data 608, and is fed into the fault hazard estimator 602 for processing.
  • the fault estimator 602 preferably incorporates the method 700 of FIG. 7 (discussed below) to determine one or more survival functions for the vehicle or a vehicle component.
  • the output of the telematics data analytics module is a score element 609.
  • the score element 609 may be, but not limited to, a vehicle operating/status score, a vehicle maintenance score, a vehicle health score, a vehicle component’s status, or the like.
  • the score may be stored, transmitted, or sent to one or more of said remote servers and stored within one or more databases.
  • Exemplary databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle or Sybase databases.
  • databases may be implemented using standardized data structures, such as an array, hash, linked list, structured text file (e.g., XML), table, or as object-oriented databases (e.g., ObjectStore, Poet, Zope, etc.).
  • the scores may be compared against the logic of a pre-set, threshold, captured, or monitored vehicle component, vehicle health, or status of vehicle parts or parameters to generate an alert or a maintenance alert, such alerts preferably being communicated to the driver via the AI digital assistant described herein.
  • the vehicle telematics communication device and cloud computing server applications react in real time, dynamically, to captured operational or contextual parameters, particularly vehicle parameters or health monitored and captured during operation.
  • the vehicle health and maintenance predictive system measures and/or generates a single or compound set of component variables or fault parameters profiling the condition of a vehicle component during operation of the motor vehicle, based on, but not limited to, a preset, threshold, triggered, captured, or one or more said monitored vehicle component operating functions or parameters.
  • the analysis predicts the type of fault and identifies possible contributing factors.
  • the analysis correlates the fault type to a system-level failure log, correlates extracted fault type from time-series analytics to predict, preferably in real-time, system levels of occurrences.
  • a risk score is generated and communicated to the fleet manager and/or driver, which may include a measured maintenance (e.g., maintenance delinquency) and surveillance factor extracted from the automotive data associated with the motor vehicle or the use of active safety features.
  • machine learning algorithms are used for analyses and predictions, including but not limited to, Support Vector Machine (SVM), Artificial Neural Network, Logistic Regression, Decision Tree, Random Forest, or the like.
  • failure hazard estimation models are used to make predictions, including but not limited to Cox’s proportional hazards model, the Kaplan-Meier estimator, or the like.
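  • By way of non-limiting illustration, one of the named machine learning approaches (a random forest) may be trained on telematics features to predict a component fault; the feature names and synthetic training data below are illustrative, a real model being trained on historical fleet data.

```python
# Sketch of a random-forest fault predictor trained on synthetic telematics
# features. Features, labels, and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# features: [tire_pressure_kPa, tire_temp_C, vehicle_speed_kmh, operating_hours]
X = rng.uniform([150, 10, 0, 0], [320, 90, 130, 5000], size=(500, 4))
# synthetic label: faults more likely when pressure is low and hours are high
y = ((X[:, 0] < 190) & (X[:, 3] > 3000)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("fault probability:", model.predict_proba([[170, 60, 80, 4200]])[0, 1])
```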
  • FIG. 7 illustrates a method 700 of the invention for obtaining the most fitted survival function derived from different groups of vehicle health parameters.
  • Health parameters may relate to components such as an engine, a tire, a brake, etc.
  • In step 701, the telematics data time series of one or more vehicle components is broken down into successive survival lives that are divided by the component’s failure events (e.g., a blown tire).
  • the telematics data received from the TPMS can be used to detect a severe failure event. It is anticipated that a last, open-ended time interval of component survival may remain, representing a recovery from a failure with no observed failure event during the analysis period; this is known as right censoring of the data.
  • the fleet manager defines the time-varying periods of the survival function to represent the dynamically changing component’s failure hazard over time, preferably the periods are optimally set to provide a statistically sufficient/significant dynamic and accurate survival function.
  • the failure hazard h(t) for each telematics entry at time t experienced by the vehicle component over a survival life period is quantified and sampled as the ratio between (1) the net operating total hours consumed from the survival life start time to the corresponding time t, and (2) the survival life length L, the net tire operating total hours experienced during the survival period.
  • the outliers in the telematics data entries are identified and eliminated if the absolute studentized residual of any of their fields is greater than 3.0.
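  • As a non-limiting illustrative sketch of this hazard sampling and outlier screening, each entry's hazard may be computed as the ratio of hours consumed to survival life length, and entries with large (approximate) studentized residuals discarded; a full implementation would compute residuals from the fitted regression, and the values below are illustrative.

```python
# Sketch of failure-hazard sampling and a simple outlier screen. The residual
# here is a z-score approximation of the studentized residual described above.
import numpy as np

def failure_hazard(hours_since_life_start, life_length_hours):
    """Quantify the sampled failure hazard for one telematics entry."""
    return hours_since_life_start / life_length_hours

def drop_outliers(values, threshold=3.0):
    """Remove entries whose (approximate) studentized residual exceeds the threshold."""
    values = np.asarray(values, dtype=float)
    residuals = values - values.mean()
    studentized = residuals / residuals.std(ddof=1)
    return values[np.abs(studentized) <= threshold]

hazards = [failure_hazard(h, 1200.0) for h in range(50, 1200, 60)]
print(drop_outliers(hazards + [25.0]))  # the spurious 25.0 entry is screened out
```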
  • In step 705, the telematics entries are distributed between the time-varying periods of the survival function, based on their time (t) values.
  • In step 706, the telematics entries of each time-varying period are randomly divided into two equal groups: (1) an estimation group that is used to estimate the hazard function regression coefficients (see step 707); and (2) a prediction group that is used to validate the hazard function and its estimated coefficients (see step 708).
  • In step 707, the survival function baseline failure rate h0(t) and the coefficient vector b are estimated by applying the data linearization regression technique to the estimation group of telematics entries.
  • the regression population includes all the telematics data estimation group within the corresponding time-varying survival function period.
  • the fitness of the generated survival function and its coefficients can be evaluated using (1) the p-value for the constant and each covariate coefficient, as generated by the regression analysis, which is used to test the hypothesis of survival function dependency on each of the covariates; (2) the coefficients of determination (R-square, multiple R-square, and adjusted R-square) to test the fit of the resulting survival function to the observed data; and (3) the analysis of variance (ANOVA) significance level F, which quantifies the probability that the proposed function does not explain the variation in the equipment hazard.
  • In step 708, the survival function coefficients are validated by using the survival function to calculate the failure hazard estimate values for every telematics entry in the prediction group and comparing the estimates with the observed values.
  • the validity is assessed using the Pearson coefficient of correlation (Rcorr) and the Student's t-test to examine the hypothesis that no relation exists between the observed and estimated hazard values.
  • the variance between estimated and observed failure hazard values is quantified using the root mean square error (RMSE), where smaller values reflect higher prediction accuracy.
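  • As a non-limiting illustrative sketch of steps 707 and 708 under simplifying assumptions, a linearized hazard model may be fitted on the estimation group and validated on the prediction group using the Pearson correlation (whose p-value tests the no-relation hypothesis) and the RMSE; the synthetic covariates below are illustrative.

```python
# Sketch of fitting hazard-model coefficients on an estimation group and
# validating them on a prediction group. Covariates and data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
temp = rng.uniform(20, 80, 200)            # covariate 1: tire temperature
speed = rng.uniform(20, 120, 200)          # covariate 2: vehicle speed
hazard = 0.002 * temp + 0.001 * speed + rng.normal(0, 0.01, 200)

X = np.column_stack([np.ones(200), temp, speed])
est, pred = slice(0, 100), slice(100, 200)                    # estimation / prediction groups

coef, *_ = np.linalg.lstsq(X[est], hazard[est], rcond=None)   # step 707: fit coefficients
predicted = X[pred] @ coef                                    # step 708: predict held-out hazards

r_corr, p_value = stats.pearsonr(hazard[pred], predicted)
rmse = np.sqrt(np.mean((hazard[pred] - predicted) ** 2))
print(f"coefficients={coef.round(4)}, R_corr={r_corr:.3f}, p={p_value:.2e}, RMSE={rmse:.4f}")
```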
  • In step 709, steps 707 and 708 are repeated to experiment with different combinations of the proposed covariates and to find the survival function coefficients that provide the maximal fit to the estimation telematics data group (p-value, R-square, F) and the minimal variance with the prediction data group (i.e., RMSE), with validated correlation (Rcorr and t-test).
  • To calculate the survival function, the method can use, for example: (1) tire pressure warning lights to model failure events and survival period boundaries; (2) the tire’s total run hours to calculate the failure hazard; and (3) additional parameters as the covariates of the survival function, which can include, but are not limited to, tire temperature, vehicle speed, vehicle operation hours, external environmental temperature, travel terrain, vehicle idling hours, and vehicle odometer.
  • the estimated survival function enables a proactive maintenance tool to estimate vehicle and components failure probability.
  • FIG. 8 schematically illustrates an architecture 800 for an embodiment of a dynamically triggered vehicle telematics process for determining driver behavior.
  • a vehicle motion and orientation rate 801 is determined with inputs from IMU sensors 802.
  • a vehicle heading and speed variation 803 is determined from GPS data 804.
  • Elements 801 and 803 are combined, via linear and/or non-linear methods, by sensor aggregator 805, and the data is transmitted to analytics engine 806, which may reside on one or more remote cloud servers.
  • The analytics engine may access one or more databases 808, residing on the same server or another server, containing analytics logic, rules, algorithms, or the like to process, analyze, assess, or determine driver behavior 809, and subsequently a driving score 509 of FIG. 5.
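  • As a non-limiting illustrative sketch, sensor aggregator 805 may fuse the GPS-derived speed variation with IMU longitudinal acceleration to flag harsh-braking events for the analytics engine; the threshold and the simple averaging fusion below are assumptions for illustration.

```python
# Sketch of fusing 1 Hz GPS speed with IMU longitudinal acceleration to flag
# harsh-braking events. Threshold and fusion weights are illustrative.
HARSH_BRAKE_MS2 = -3.0   # assumed deceleration threshold (m/s^2)

def detect_harsh_braking(gps_speed_ms, imu_accel_ms2):
    """Return indices (seconds) where the fused deceleration crosses the threshold."""
    events = []
    for i in range(1, len(gps_speed_ms)):
        gps_accel = gps_speed_ms[i] - gps_speed_ms[i - 1]      # 1 Hz numerical derivative
        fused = 0.5 * gps_accel + 0.5 * imu_accel_ms2[i]        # simple linear fusion
        if fused <= HARSH_BRAKE_MS2:
            events.append(i)
    return events

speed = [20, 20, 19, 14, 9, 8, 8]            # m/s, sampled once per second
accel = [0, 0, -1, -5, -4.5, -1, 0]          # m/s^2 from the IMU
print(detect_harsh_braking(speed, accel))    # -> [3, 4]
```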
  • variable driving score generated can include for example, but not limited to, speed and/or acceleration and/or braking and/or cornering and/or jerking, and/or a measure of distraction parameters comprising mobile phone usage while driving and/or a measure of fatigue parameter.
  • the variable contextual/environmental score can include, for example but not limited to, road condition, road topology, traffic, road type and/or number of intersections and/or tunnels and/or elevation, and/or measured time of travel parameters, and/or measured weather parameters, and/or measured location parameters, and/or measured distance driven parameters, and/or neighborhood parameters.
  • the risk scores may include a measured maintenance (e.g., maintenance delinquency) and surveillance factor extracted from the automotive data associated with the motor vehicle or the use of active safety features.
  • the telematics-based AI digital assistant feedback means of the system may, for example, comprise a dynamic alert feed via a data link to the motor vehicle's automotive control circuit, wherein the AI digital assistant alerts drivers immediately to one or more performance measures including, but not limited to, tachometer reading (e.g. high RPM), unsteady driving condition, excessive engine power, harsh acceleration, road anticipation, and/or ECO drive system.
  • the score generator may incorporate additional information to determine a score including a vehicle safety score, a cyber risk score, a software certification/testing risk score, an NHTSA level risk score, or the like.
  • the telematic system enables real-time dynamic driver adaption and improvement, providing instant feedback to drivers, training aids, behavior modification techniques, or the like, to ensure safe and secured driving.
  • the device, methods, and system can communicate with one or more servers external to the ecosystem described herein.
  • the one or more servers may be remote servers accessible through a cellular network, a communication network, or the Internet.
  • the said server may comprise an enterprise server of a manufacturer, a vehicle service provider, a vehicle parts vendor, an OEM telematics service provider, or the like.
  • the enterprise server may contain one or more ERP planning platforms accessible through one or more front-end APIs.
  • the ERP platform and its function are accessible using the voice-controlled AI digital assistant described herein.
  • one or more users of said fleet management ecosystem described herein may be a manufacturer, a vendor, or a service provider enabled to receive an order from a driver or a fleet manager and to submit one or more quotes based on receiving relevant telematics data (e.g., tire type, size, oil, coolant, etc.) afforded by the said platform.
  • FIG. 8 schematically illustrates an architecture 800 for an embodiment of a dynamically triggered vehicle telematics process for determining driver behavior.
  • a vehicle motion and orientation rate 801 is determined with inputs from IMU sensors 802.
  • a vehicle heading and speed variation 803 is determined from GPS data 804.
  • Elements 801 and 803 are combined, via linear and/or non-linear methods, by sensor aggregator 805, and the data is transmitted to analytics engine 806, which may reside on one or more remote cloud servers.
  • The analytics engine may access one or more databases 808, residing on the same server or another server, containing analytics logic, rules, algorithms, or the like to process, analyze, assess, or determine driver behavior 809, and subsequently a driving score 509 of FIG. 5.
  • variable driving score generated can include for example, but not limited to, speed and/or acceleration and/or braking and/or cornering and/or jerking, and/or a measure of distraction parameters comprising mobile phone usage while driving and/or a measure of fatigue parameter.
  • the variable contextual/environmental score can include, for example but not limited to, road condition, road topology, traffic, road type and/or number of intersections and/or tunnels and/or elevation, and/or measured time of travel parameters, and/or measured weather parameters, and/or measured location parameters, and/or measured distance driven parameters, and/or neighborhood parameters.
  • the risk scores may include a measured maintenance (e.g., maintenance delinquency) and surveillance factor extracted from the automotive data associated with the motor vehicle or the use of active safety features.
  • the telematics-based AI digital assistant feedback means of the system may, for example, comprise a dynamic alert feed via a data link to the motor vehicle's automotive control circuit, wherein the AI digital assistant alerts drivers immediately to one or more performance measures including, but not limited to, tachometer reading (e.g. high RPM), unsteady driving condition, excessive engine power, harsh acceleration, road anticipation, and/or ECO drive system.
  • the score generator may incorporate additional information to determine a score including a vehicle safety score, a cyber risk score, a software certification/testing risk score, an NHTSA level risk score, or the like.
  • the telematic system enables real-time dynamic driver adaption and improvement, providing instant feedback to drivers, training aids, behavior modification techniques, or the like, to ensure safe and secured driving.
  • the invention provides for a device, methods, and a system that enable insurers to provide one or more insurance quotes based on the score and other relevant telematics data (e.g., automatic capture and analysis of risk scores and reaction to data) afforded by the said platform.
  • FIG. 9A is an illustration 900 of a process that occurs when a customer requests a product (e.g., an underwriting and/or insurance product) from an underwriter, customer service representative, distributor, underwriting system, insurer, insurance agent, or the like.
  • the method may be illustrative of a process of self-service underwriting product pricing (such as the customer pricing an insurance policy online).
  • the method may comprise initiating the quote process, at 901.
  • An underwriter and/or customer may, for example, utilize an interface, such as an interface provided by client terminals 302 or 303 of FIG. 3 to search for, identify, and/or otherwise open or determine an existing account.
  • an account search may comprise an account login and/or associated credential check (e.g., password- protected account login).
  • An account search may be based, in some embodiments, on a customer name, business name, account number, and/or other identification information that is or becomes known or practicable.
  • a computerized processing device such as a PC or computer server described herein, and/or a software program described herein and/or interface may conduct the search and/or may receive information descriptive of the search and/or one or more indications thereof.
  • the method may comprise a determination, at 902, as to whether vehicle telematics described herein will be utilized in association with the desired policy/product.
  • An agent, CSR, and/or underwriter may inquire, for example, as to whether a customer desires (and/or will allow) the use of vehicle telematic data (e.g., personal) in association with the desired policy/product.
  • information related to and/or descriptive of vehicle telematics may be received at 902.
  • Such information may include, for example, but not limited to, information descriptive of a quantity and/or type of vehicle and/or information descriptive of vehicle.
  • the method may proceed directly to determine whether to accept and/or modify the application/request, at 903.
  • the method may continue to product pricing, quote, and sale at 904.
  • the product pricing may, according to some embodiments, comprise policy creation that may, for example, be based on policy type selection, customer detail entry, and/or account searching and/or data (e.g., a number and/or percentage of vehicles utilizing vehicle telematic devices).
  • An underwriting program and/or associated device and/or interface may create a policy number, session, and/or account identifier, log, and/or other record of policy type selection, for example, in reference to the customer and/or underwriter desiring to price the policy or product.
  • the product pricing may comprise coverage selection and/or determination.
  • the customer and/or underwriter may select various available coverage levels and/or types for the policy.
  • interface options may allow various available coverage parameters to be selected and/or input.
  • a computerized processing device such as a PC or computer server and/or a software program and/or interface described herein may receive the coverage selection and/or one or more indications thereof.
  • the underwriter may provide a quote at 904 for any number of underwriting products such as a quote for each of a plurality of insurance product types and/or tiers.
  • the underwriter may determine, define, generate, and/or otherwise identify the quote at 904. The quote may then, for example, be provided, transmitted, displayed, and/or otherwise output to the customer via any methodology that is or becomes desirable or practicable.
  • the quote provided may comprise (but is not limited to) one or more of the following: premium/price (which may include a high-risk price and/or a low-risk price), insurance and/or surety capacity (e.g., an aggregate line of credit), collateral requirements, indemnity requirements, international bond restrictions, surety product type restrictions, other risk restrictions/exclusions, and/or financial reporting requirements.
  • the method may proceed to the risk management system 905.
  • Various methodologies may be utilized, for example, to determine a level of risk associated with the customer (e.g., based on vehicle telematics and/or an extent and/or type of utilization thereof).
  • the risk management system 905 may comprise a risk control inspection interview, at 906.
  • Risk control personnel and/or electronic monitoring may, for example, review and inspect the types, quantities, and/or configurations of vehicle telematic devices, and/or review safety program procedures and/or personnel.
  • results of the risk control inspection may be analyzed and/or processed during a risk control evaluation, at 906.
  • Results of the risk control evaluation may then be utilized during the determination of whether to accept and/or modify the application/ policy, at 907, and/or during the product pricing at 904.
  • Less desirable and/or effective (actual or predicted) safety programs utilizing vehicle telematics may, for example, result in higher perceived risk and accordingly warrant higher pricing/premiums for the desired product.
  • an application and/or policy may be declined at 908.
  • the risk management system may also or alternatively comprise a risk control interview at 906.
  • the customer and/or a representative of the customer may, for example, be interviewed (in person and/or via telephone or online via the software application described herein) to gather data regarding the customer's safety program and/or vehicle telematics usage.
  • the information gathered during the interview may then be utilized, for example to inform and/or influence the risk control evaluation.
  • the risk management system 905 may also or alternatively comprise receiving vehicle telematic data, at 909.
  • Telematic data from one or more vehicle sensors or electronic device 101 of FIG. 1 and/or data obtained from a vehicle telematics service/data provider may, for example, be received by an insurance underwriter and/or risk control engineer.
  • the received data may be analyzed at 910.
  • Vehicle telematic data may be processed, for example, to determine an expected level of risk, a risk profile, a driver score, and past, present, and predicted driver behavior associated with the customer, using the methods described herein (e.g., the methods described in FIG. 5 and FIG. 6).
  • the results of the analyzing of the vehicle telematic data at 910 may be provided to and/or utilized in the risk control evaluation at 906.
  • the results of the analyzing of the vehicle telematic data at 910 may be provided directly to and/or may directly influence the determination of whether to accept and/or modify the product pricing at 904.
  • Effective and/or diligent implementation of a safety program may, for example, allow the customer to earn a discount in premiums (examples further described below); such information may be conveyed in real time to a customer/driver via the AI digital assistant described herein.
  • the information generated by the risk management system 905 is an integral solution for UBI insurance schemes including, but not limited to, pay-as-you-drive (PAYD), pay-how-you-drive (PHYD), manage-how-you-drive (MHYD), and the like.
  • the risk management system may allow an insurer to offer a discount based on driving behavior.
  • the risk management system may for example offer a discount based on mileage (how much a person drives) and not where or how.
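  • As a non-limiting illustrative sketch of such a mileage-only discount, the premium adjustment may depend solely on how much a person drives; the base premium, mileage bands, and discount rates below are invented for illustration.

```python
# Simple illustrative sketch of a mileage-only (pay-as-you-drive style) discount:
# the adjustment depends on how much a person drives, not where or how.
def payd_premium(base_premium, annual_miles):
    if annual_miles < 5000:
        discount = 0.20
    elif annual_miles < 10000:
        discount = 0.10
    else:
        discount = 0.0
    return round(base_premium * (1 - discount), 2)

print(payd_premium(1200.00, 4300))   # low-mileage driver pays 960.00
```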
  • the information generated by the system may be combined with additional information, including but not limited to: data and information on insurance policies, individual driving, crash forensics, credit scores, driving statistics, historic claims, market databases, driving license points, claims statistics, rewards, discounts, and contextual data for weather, driving conditions, road type, environment, or the like.
  • the platform provides an insurer with a comprehensive risk-transfer structure comprising device and vehicle sensor data collection, alone or combined with ADAS (advanced driver assistance systems) data, for accurate risk analysis and incorporation within an automated risk-transfer system/coverage, claims notification, and value-added services (e.g., crash reporting, post-accident services, Emergency-Call/Breakdown-Call, vehicle theft, driver coaching/scoring, rewards, driver safety training, etc.).
  • the device, methods, and system of the invention can communicate with one or more servers external to the ecosystem described herein.
  • the one or more servers may be remote servers accessible through a cellular network, a communication network, or the Internet.
  • the said server may comprise an enterprise server of a manufacturer, a vehicle service provider, a vehicle parts vendor, an OEM telematics service provider, or the like.
  • the enterprise server may contain one or more ERP planning platforms accessible through one or more front-end APIs.
  • the ERP platform and its function are accessible using the voice-controlled AI digital assistant described herein.
  • one or more users of said fleet management ecosystem described herein may be a manufacturer, a vendor, or a service provider enabled to receive an order from a driver or a fleet manager and to submit one or more quotes based on receiving relevant telematics data (e.g., tire type, size, oil, coolant, etc.) afforded by the platform.
  • FIG. 9B is an illustration 910 of an alternate embodiment of the invention directed to a process that occurs when a customer requests a part or services from a vendor, a manufacturer, a vehicle service provider, or the like.
  • the method may be illustrative of a process of self-service product pricing and purchase (such as the customer pricing a vehicle part online).
  • the method may comprise initiating the quote process, at the front end 921.
  • a fleet manager may, for example, utilize an application interface, such as an interface provided by client terminals 302 or 303 of FIG. 3 to search for, identify, and/or otherwise open or determine an existing account.
  • an account search may comprise an account login and/or associated credential check (e.g., password-protected account login).
  • An account search may be based, in some embodiments, on a customer name, business name, account number, voice-signature, and/or other identification information that is or becomes known or practicable.
  • a computerized processing device such as a PC or computer server described herein, and/or a software program described herein and/or interface may conduct the search and/or may receive information descriptive of the search and/or one or more indications thereof.
  • the method may comprise a determination, at 922, as to whether vehicle telematics described herein will be utilized in association with the desired product, part, or service request.
  • a vendor representative may inquire, for example, as to whether a customer desires (and/or will allow) the use of vehicle telematic data (e.g., personal data) in association with the desired product, vehicle replacement parts, or the like.
  • information related to and/or descriptive of vehicle telematics may be received at 922. Such information may include, for example, but is not limited to, information descriptive of a quantity and/or type of vehicle and/or information descriptive of the vehicle, vehicle health, vehicle components, and vehicle supplies in need of replacement and/or service.
  • the method may proceed directly to determine whether to accept and/or modify the application/request, at 923.
  • the method may continue to product pricing, quote, and sale at 924.
  • the product pricing may, according to some embodiments, comprise the creation of a customer profile or product profile that may, for example, be based on product type selection, customer detail entry, account searching and/or account data (e.g., previous product purchases), or a maintenance program.
  • the product pricing may comprise warranty selection and/or determination.
  • the customer may select various available warranty levels and/or maintenance schedules for the purchase.
  • interface options may allow various available warranty options to be selected and/or input.
  • a computerized processing device such as a PC or computer server and/or a software program and/or interface described herein may receive the warranty selection and/or one or more indications thereof.
  • the vendor may provide a quote at 924 for any number of vehicle products, parts, or services, such as a quote for each of a plurality of available product types and/or tiers. According to some embodiments, the vendor may determine, define, generate, and/or otherwise identify the quote at 924.
  • the quote may then, for example, be provided, transmitted, displayed, and/or otherwise output to the customer via any methodology that is or becomes desirable or practicable.
  • the method may proceed to an ERP management system 925.
  • the ERP system 925 may comprise a product purchase request evaluation, at 926. Types, quantities, and/or configurations of vehicle components or parts available may be reviewed or automatically identified within a database 921. Results of the evaluation may then be utilized during the determination of whether to accept and/or modify, for example, a service request, at 927, and/or during the product pricing at 924. In some embodiments, a purchase request may be declined at 928, for various reasons (e.g., parts not available).
  • the ERP system may also or alternatively comprise a customer interview at 926.
  • the customer and/or a representative of the customer, such as a fleet manager may, for example, be interviewed (in person and/or via telephone or online via the software application, by the AI digital assistant, described herein) to gather data regarding the customer's needs for parts or services.
  • the information gathered during the interview may then be utilized, for example to inform and/or influence the product pricing or quote 924.
  • the ERP system 925 may also or alternatively comprise receiving vehicle telematic data, at 929. Telematic data from one or more vehicle sensors or the electronic device 101 described herein, or from a vehicle telematics service/data provider, may, for example, be received by a vendor representative.
  • the received data may be analyzed at 930.
  • Vehicle telematic data may be processed, for example, to determine in advance an expected type or level of service required stemming from one or more Diagnostic Trouble Codes (DTCs).
  • the results of analyzing the vehicle telematic data at 930 may be provided to and/or utilized in the request evaluation at 926.
  • the AI digital assistant uses a control service (e.g., Amazon Lex) available from Amazon.com (Seattle, WA). Access to skills requires the use of a device wake word (“Alexa”) as well as an invocation phrase (“Drivesafe”) for skills specifically developed for the said device that embodies one or more components of the present invention (a minimal handler sketch follows this list).
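By way of non-limiting illustration of the wake-word and invocation-phrase flow described above, the following minimal sketch shows an AWS Lambda-style handler for a hypothetical custom Alexa skill invoked as “Drivesafe”; the intent names and spoken responses are illustrative assumptions and are not part of the disclosed system.

```python
# Minimal sketch of an AWS Lambda handler for a hypothetical "Drivesafe" custom
# Alexa skill. Intent names and response text are illustrative assumptions.

def build_speech_response(text, end_session=False):
    """Wrap plain text in the JSON envelope the Alexa service expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        # Spoken after "Alexa, open Drivesafe"
        return build_speech_response("Drivesafe is ready. How can I help?")
    if request.get("type") == "IntentRequest":
        intent = request["intent"]["name"]
        if intent == "ReadMessagesIntent":       # hypothetical intent
            return build_speech_response("You have two new text messages.")
        if intent == "VehicleStatusIntent":      # hypothetical intent
            return build_speech_response("Tire pressure and oil level are normal.")
    return build_speech_response("Sorry, I did not understand that.", end_session=True)
```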

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

A vehicle telematics assistive device, system, and methods provide digital assistance to an automobile driver through the use of mobile computing devices in conjunction with a voice-controlled input-output user interface and a cloud computing platform. The device is a fully functional wireless mobile communication device that is wearable, attachable to the interior of a vehicle, and usable by a driver or an occupant of a vehicle, or it may be directly connected to operate with the on-board communication system of the vehicle.

Description

VEHICLE TELEMATIC ASSISTIVE APPARATUS AND SYSTEM
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority benefit of provisional applications 62/664,812 filed on April 30, 2018, 62/664,816 filed on April 30, 2018, and 62/664,824 filed on April 30, 2018, the contents of each of which are incorporated herein in their entirety.
TECHNICAL FIELD
The present invention relates to personal digital assistance, and specifically to vehicle telematics. In particular, the disclosure relates to a system, apparatus, and methods for assisting an automobile driver with the use of mobile computing devices in conjunction with a voice- controlled input-output user interface and cloud computing services.
BACKGROUND
People are increasingly carrying and using mobile devices, especially cell phones, in various places and also while driving a vehicle (e.g. car, truck, etc.). Cell phone activities have changed in recent years, with reading, writing, texting, online browsing, and use of GPS navigation and other applications being common. Text messaging has become a popular way to communicate so naturally more drivers are sending and reading text messages (e.g., short message system (SMS) messages, multimedia messaging service (MMS) messages, etc.) while driving. According to the Cellular Telecommunications Industry Association, the average number of text messages sent in the U.S. per day exploded from 31 million in 2002 to 6.1 billion in 2012 (Yager C. An Evaluation of the Effectiveness of Voice-to-Text Programs at Reducing Incidences of Distracted Driving. Final Report SWUTC/13/600451-00011-1. Texas A&M Transportation Institute. April 2013). In 2015, the percentage of passenger vehicle drivers text-messaging or visibly manipulating handheld devices remained at 2.2 percent (Driver Electronic Device Use in 2015. Traffic Safety Facts Research Note. NHTSA September 2016). One of the most commonly recognized driving distractions is texting or talking on a cell phone (Amy Schick. Distraction by Cell Phones and Texting. NTSA. November 2014). In a large and representative sample (N=1211) of U.S. drivers, nearly 60% of drivers admitted to at least one cell-phone related distraction while driving in the past 30 days (Glicklich et al. Texting while driving: A study of 1211 U.S. adults with the Distracted Driving Survey. Preventive Medicine Reports 4 (2016) 486— 489) . Another study of 18 to 24 years old drivers (N=228), reading text messages was very common, with 71.5 % of participants saying they read text messages while driving in the past 30 days (Bergmark et al. Texting while driving: the development and validation of the distracted driving survey and risk score among young adults. Injury Epidemiology (2016) 3:7 DOI 10.1186/S40621-016-0073-8).
Driving is a motor task that requires significant visual guidance and attention. Any additional complex visual-motor secondary tasks performed while driving, such as dialing a phone, texting/browsing the Internet, or reaching for an object, impede the ability of a driver to successfully complete the task of driving. Studies have shown show that secondary tasks involving manual typing, texting, dialing, reaching for an object, or reading are dangerous (Dingus TA. Estimates of the Prevalence and Risk Associated with Inattention and Distraction based Upon In Situ Naturalistic Data. Engaged Driving Symposium/ Annals of Advances Automotive Medicine. March 31, 2014). The use of a cell phone is associated with a quadrupling of the risk of injury and property damage. Cellular telephone use while driving is a risk factor, but the magnitude of risk is unknown, especially with the use smartphones that are essentially hand-held Internet-accessible computers, allowing access, for example, to social media platforms such as Facebook and Twitter.
Laboratory-based studies and studies of controlled on-road driving have consistently demonstrated that talking on or manipulating a cell phone while driving adversely affects certain aspects of driving performance. Cell phone use was found to be associated with increased steering variance and increased glance concentration on the forward roadway. Dialing a hand-held cell phone was found to significantly increase the likelihood of being in a crash or near-crash, whereas having a verbal conversation on a hand-held cell phone did not significantly increase this risk. Hand-held cell phone use and visual-manual hand-held cell phone tasks were associated with increased risk of safety critical events; however, simply talking on a hand-held phone or using a hands-free cell phone absent of any visual-manual tasks were not. A study also found statistically significant odds ratios (ORs) for crash involvement associated with overall hand-held cell phone use (OR = 3.6), phone browsing (OR = 2.7), dialing (OR = 12.2), reaching for a hand-held cellular phone (OR = 4.8), hand-held texting (OR = 6.1), and hand-held talking (OR = 2.2). Another study found that visual-manual engagement with portable electronic devices resulted in a significantly increased OR of 2.8, and texting resulted in a significantly increased OR of 5.6. A comparison of driving performance when texting using either a touch-screen or keyboard interface found higher lane-position variability and workload for the touch interface. Speech-based and hand-held text messaging using an Android-based smartphone can increase lane-position and speed variability relative to baseline driving for both input modalities. A comparison of manual and voice-controlled entry of navigation destinations on a smartphone in a driving simulator found advantages for the voice-controlled system in a variety of driving performance metrics and in subjective workload (Owens JM et al. Crash Risk of Cell Phone Use While Driving: A Case-Crossover Analysis of Naturalistic Driving Data. Foundation for Traffic Safety. January 2018).
The monitoring and behavioral profiling of drivers have an increasing relevance in the application of vehicle telematics. Vehicle telematics is the integration of wireless communications, monitoring systems, and location devices to provide real-time spatial and performance data of a vehicle or a fleet of vehicles. An emerging market application lies in, for example, car insurance telematics, whereby insurers have been interested in monitoring driving activities in order to provide fair insurance premiums to customers. Auto insurance actuarial models are traditionally based on static factors such as a driver’ s socio-demographic information (e.g., age, gender, marital status, etc.), the type of vehicle (e.g., year, manufacturer, model, etc.), and with historical driving records (e.g., violations, at fault accident, etc.). Insurance telematics rely on an insurance premium that is based not only on static measures like the drivers age, occupation or place of residence, car model and configuration, or expected mileage over the policy period, but also on dynamic measures like actual mileage, time spent on the road or the time of day when the trip is being made, location, and the driver’s actual style of driving. These insurance schemes, known as Usage-based Insurance (UBI), are often labeled as pay-as-you-drive (PA YD), pay-how-you drive (PHYD), manage-how-you-drive (MHYD), and the like. UBI allows insurers to create risk profiles of customers based on real-world driving behavior, create automated risk-monitoring and risk-transfer systems, providing the input required, for example, to measure the quality and risks of individual drivers.
The commercial expansion of the UBI industry is currently hampered by the process of acquiring data, which involves costly installation, maintenance, and logistics. Automobile manufacturers are increasingly equipping new vehicles with telematics capabilities that do not require additional hardware (e.g., installed devices, OBD (on-board diagnostic)-dongles). The OBD system is an in-vehicle sensor system designed to monitor and report on the performance of many vehicle components. As of 2001, OBD is mandatory for all passenger cars sold in Europe or in Northern America. OBD data can be sent from an OBD dongle to a smartphone using either WiFi or Bluetooth.
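As a purely illustrative sketch of the kind of OBD data transfer described above (and not part of the disclosure), the third-party python-OBD library can query a paired ELM327-style Bluetooth or WiFi dongle for a few standard parameters:

```python
# Illustrative only: reading a few OBD-II parameters over a Bluetooth/WiFi ELM327
# dongle using the third-party python-OBD library.
import obd

connection = obd.OBD()  # auto-detects the serial port exposed by the paired dongle

for command in (obd.commands.RPM, obd.commands.SPEED, obd.commands.COOLANT_TEMP):
    response = connection.query(command)
    if not response.is_null():
        print(command.name, response.value)
```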
As another example of a highly applicable industry - fleet management is a function that allows companies relying on transportation to minimize the risks associated with vehicle investment, improving efficiency, improving productivity, reducing overall costs, and complying with government legislations. These functions are performed by either an in-house fleet management department or an outsourced fleet management service provider. A critical process of fleet management is the collection and control of vehicle data related to use, costs, and condition diagnosis. Fleet data collection imposes significant challenges to fleet managers to effectively and efficiently collect, store, and process vehicle performance data in a timely manner. Individual car owners also face similar challenges with vehicle performance and maintenance.
The implementation of telematics has the potential to increase operational efficiency and improve driver safety in many ways, e.g., by tracking a vehicle’s location, mileage, and speed with GPS technology. For example, fleet managers can use this information to optimize routes and scheduling efficiency. In addition, a driver’s actions can be monitored with accelerometers that measure changes in speed and direction. This information can then be used to improve driver performance through a one-on-one or in-vehicle coaching program.
A central challenge of using telematics in fleet management is the transformation of data into actionable information. Telematics provides a large amount of data that, for example, reports an individual vehicle’s location and performance. However, it does not directly provide useful information about the vehicle’s operational condition or efficiency. Fleet managers lack efficient computational hardware and algorithms to transform large amounts of telematics data into more useful predictive information on the condition of a fleet, an individual vehicle, or vehicle components (e.g., engine, tires). Telematics technology providers do not provide a clear methodology for integrating the collected data and information into typical fleet managerial tasks, such as vehicle health condition assessment, maintenance, repair, and replacement of parts or the whole vehicle. As a result, for example, fleet managers and drivers often lack, during operation, an efficient way to schedule an appointment in advance with a service provider or to identify and locate a repair location so as to effectively manage their transportation equipment maintenance program. Overall, fleet managers and drivers lack an efficiently automated and predictive system to manage a vehicle maintenance program. Fleet managers and drivers also need a safer, more efficient, and more effective communication system for vehicle health monitoring and maintenance, one that places low cognitive demands on drivers and minimizes visual-manual interactions with in-vehicle displays. The use of smartphones for collecting driving data has been identified as a promising alternative, due to the high penetration of smartphones among end-users, their sensing capabilities (e.g., accelerometers, magnetometers, GPS), and the efficiency of wireless data transfer. Smartphones make it feasible to collect individualized, high-fidelity, and high-resolution driving data from users, revealing smartphone users’ temporal-spatial travel patterns.
Given the proliferation of smart phones acting as powerful customer-interface and data-collection devices, the telematics industry focus has also begun to change from a hardware device focus to an integrated software focus. As a result, there is an emphasis on developing enterprise systems that collect, analyze, and present the information in the most cost-effective and efficient manner. Telematics data is continuously sent to the World-Wide-Web systems of Original Equipment Manufacturers (OEM) that can store, organize, and present these data to the fleet managers using visual and user-friendly interfaces. In contrast, older equipment tiers that do not have an OEM telematics system can be equipped with Telematic service Provider (TSP)-rigged units that are connected to the equipment’s mechanical and electrical subsystems to obtain telematic data.
The two primary information sources in smartphone-based insurance telematics are the global navigation satellite system (GNSS) receiver and the inertial measurement unit (IMU). The telematics-based insurance ecosystem is as heterogeneous as the mobile ecosystem, and the combination increases overall complexity. A lack of standardization in data and auto platforms makes it challenging for insurers to integrate mobile devices into their IT infrastructure.
Since 2012, nearly all new vehicles made in the United States have some voice interface capabilities to minimize distraction. But speech-recognition errors, complex interactions, and response delays might all draw drivers’ attention away from the road. Voice-control systems (VCS) have the potential to reduce visual-manual distraction by offering drivers more complex features than may be safely possible with a visual-manual interface. However, there are still significant shortcomings.
The US Department of Transportation, National Highway Traffic Safety Administration recently published the performance metrics for evaluating VCS used in a variety of vehicles from three studies: an on-road contextual interview, a driving simulator, and a laboratory-based collision detection task study. The results from these studies identified several observed short-comings: human speech interaction errors were clearly associated with increased system interaction times; frequent system time-outs; system pairing problems; lack of a human defined surrogate role (e.g. helpful personal assistant); and lack of a user interface possessing consistency of command terms (Jenness, J. W., Boyle, L. N., Lee, J. D., Chang, C-C., Venkatraman, V., Gibson, M., ... & Kellman, D. (2016, October). In-vehicle voice control interface performance evaluation (Final report. Report No. DOT HS 812 314). Washington, DC: National Highway Traffic Safety Administration).
In practice, most VCS have significant drawbacks, requiring push-button activation, additional button presses and driver glances toward the in-vehicle display. Interactions with VCS may also place unnecessary demands on drivers’ attention that can compromise performance and safety. Specifically, a VCS that enables drivers to access rich information sources without distraction still faces several challenges: complicated navigation entry; cognitively demanding voice interaction; accurate voice recognition; and challenging computational requirements for natural language interactions to mitigate errors and task complexity. Speech recognition performance is critical because failures to understand drivers’ commands increase the distraction potential of the system. A high error rate could lead drivers to visually verify commands and revert to visual-manual interaction. Conversational interaction with a VCS could reduce several of the challenges posed by task complexity and recognition errors (Jenness, J. W. (2016, October)).
Vehicle health monitoring (VHM) is a concept of collecting vital equipment performance parameters to continuously assess the condition of a vehicle and detect signs of possible failure. VHM provides a proactive approach to vehicle asset maintenance by fixing vehicle equipment before an occurrence of a severe failure event. VHM is an essential facilitator of a predictive maintenance program, in which maintenance tasks are scheduled just before failures are expected to occur based on the monitored performance of the vehicle. Previous research on equipment and machine health monitoring focused primarily on monitoring the condition (e.g., vibrations) of stationary mechanical machines. The need exists for the use of telematics data in a predictive-diagnostics VHM maintenance and fleet management system, which also has certain utility for personal car owners. Such a system should incorporate an automated voice-controlled AI digital assistant as a simple, user-friendly, natural, and low-cognitive-demand user interface within a fleet management communication network ecosystem for safe, secured, and efficient operation.
SUMMARY
According to the principle of the invention, an assistive technology platform comprising an apparatus, methods, and system is implemented for driver assistance with in-vehicle communication using speech-to-text (STT) and text-to-speech (TTS) input-output (I/O) and Artificial Intelligence (AI) digital assistance remote cloud services. In the broadest terms, the platform incorporates at least one portable, wearable, or attachable device, providing one or more user functions including, but not limited to, voice, data, SMS, alerts, location via SMS, GPS location/navigation, roadside assistance services, and voice-controlled audio I/O.
Provided is a vehicle telematics device, comprising: at least one processor contained in a housing, a portion of the housing comprising a voice-controlled user interface; and a memory in communication with the at least one processor, the memory storing executable instructions for causing the at least one processor to provide at least one selected from the group consisting of automated voice recognition response, natural language understanding, speech-to-text processing and text- to- speech processing, wherein the vehicle telematics device is an electronic wireless communication device. In embodiments, the vehicle telematics device further comprises an audio coder-decoder (CODEC) in communication with the processor; and a speaker in communication with the audio CODEC, wherein the memory further stores executable instructions for causing the at least one processor to broadcast feedback communication for a user and one or more network participants through the speaker. The device may further comprise a microphone in communication with the audio CODEC, the microphone configured to receive a voice command from the user, convert the voice command to a voice signal, and send the voice signal to the audio CODEC, wherein the audio CODEC is configured to encode the voice signal to produce an encoded voice signal, and to transmit the encoded voice signal to the at least one processor; wherein the memory further stores executable instructions for causing the at least one processor to transmit the encoded voice signal to at least one voice translation service in communication with the at least one database server; and wherein the memory further stores executable instructions for causing the at least one processor to receive, from the voice translation service, a verbal response to the user’s voice command, and to broadcast the response through the speaker.
Also provided is a vehicle telematics system comprising a wireless communication device, the wireless communication device comprising: at least one processor; a voice-controlled user interface; and a memory in communication with the at least one processor, the memory storing executable instructions; and one or more remote cloud-based servers configured to perform speech-to-text (STT) and text-to-speech (TTS) conversion services, wherein the wireless communication device and the conversion services located on the one or more remote cloud-based servers serve as an Artificial Intelligence (AI) assistant, the AI assistant providing conversational interactions with a user utilizing automated voice recognition-response, natural language processing, and predictive algorithms, and wherein information generated from interactions of the user with the AI assistant is stored in application software on the one or more remote cloud-based servers so as to be accessible to multiple users of the system. In embodiments, the device is configured to operate with a vehicle on-board communication system. In some embodiments, the AI digital assistant provides a risk factor status to the user based on vehicle status, vehicle environment, or driver behavior. In embodiments, the one or more remote cloud-based servers comprise one or more application software modules selected from the group consisting of an I/O processing module, a speech-to-text (STT) processing module, a phonetic alphabet conversion module, a user database, a vocabulary database, a service processing module, a task flow processing module, a speech-to-text (STT) processing module, and a speech synthesis module.
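The following is a minimal, hypothetical sketch of one cloud-assisted conversational turn of the kind described above, assuming Amazon Lex (named elsewhere in this disclosure as an example control service) for natural language understanding and Amazon Polly for speech synthesis via the boto3 SDK; the bot name, alias, and voice are placeholder assumptions.

```python
# Sketch of one conversational turn: transcribed driver speech goes to a cloud
# control service and the reply is synthesized to audio for the device speaker.
import boto3

lex = boto3.client("lex-runtime")
polly = boto3.client("polly")

def assistant_turn(user_id: str, transcribed_text: str) -> bytes:
    # Natural language understanding / dialog management in the cloud
    reply = lex.post_text(
        botName="DrivesafeBot",   # hypothetical bot
        botAlias="prod",          # hypothetical alias
        userId=user_id,
        inputText=transcribed_text,
    )
    # Text-to-speech for audio output through the device speaker
    speech = polly.synthesize_speech(
        Text=reply.get("message", "Sorry, I did not catch that."),
        OutputFormat="mp3",
        VoiceId="Joanna",
    )
    return speech["AudioStream"].read()
```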
In embodiments, the device preferably incorporates one or more of a microprocessor, microcontroller, micro GSM/GPRS chipset, micro SIM module, read-write memory device, read-only memory device (ROM), random access memory (RAM), flash memory, memory storage device, memory I/O, I/O devices, buttons, display, LED, user interface, rechargeable battery, microphone, speaker, wireless transceiver (e.g., RF, WiFi, Bluetooth, IoT), RF electronic circuits, WiFi electronic circuits, Bluetooth electronic circuits, transceivers (e.g., RF, WiFi, Bluetooth, IoT, etc.), audio CODEC, cellular antenna, GPS antenna, WiFi antenna, Bluetooth antenna, IoT antenna, and vibrating motor (output), preferably configured in combination, to function as an electronic device. The device can perform one or more executable codes, algorithms, methods, and/or software instructions for automated voice recognition-response, natural language understanding/processing, speech-to-text (STT) and text-to-speech (TTS) processing/services, and wireless mobile cellular communication.
According to the principle of the invention, the said electronic device may function in combination with an application software platform accessible to multiple clients (users) executable on one or more remote servers, to preferably establish a transportation communication ecosystem. The ecosystem enables communication for a driver and one or more network participants (e.g., family member, worker, employer, etc.). Furthermore, the device may function in combination with one or more remote servers, cloud control services, to perform natural language or speech-based interactions with the user, to perform/process STT and or TTS functions/services, preferably through a voice-controlled speech user interface.
In summary, the device and technology of the invention assist a driver with in-vehicle communication activities, particularly using speech-to-text (STT) and text-to-speech (TTS) input-output (I/O) technology and cloud computing services. The system incorporates a voice-controlled AI digital assistant as a simple, user-friendly, natural, and low-cognitive-demand user interface or human surrogate within a communication ecosystem. The device and ecosystem together address the shortcomings of conventional VCSs and reduce the amount of visual-manual interaction between a driver, a mobile computing device, and/or a vehicle on-board communication system for safe and secured driving.
According to embodiments of the invention, the voice-controlled speech user interface of said device detects or monitors audio input/output and interacts with a user to determine a user intent based on natural language understanding of the user's speech. The voice-controlled speech user interface is configured to capture user utterances and provide them to a cloud control service. The combination of the speech interface device and one or more applications executed by the control service serves as an Artificial Intelligent (AI) digital assistant. The AI digital assistant provides conversational interactions, utilizing automated voice recognition- response, natural language processing, predictive algorithms, and the like, interact with the user, fulfill user requests, preferably providing speech-to-text (STT) and or text-to-speech (TTS) services, including but not limited to, audio I/O of an out-going (sending) or in-coming (received) text message, email, voicemail, text document, social media notification, social media stream (e.g., Facebook postings, Twitter feed, etc.), video, video stream, podcast, webpage contents, GPS navigation information, or the like.
In embodiments of the invention, the device is portable, wearable, or attachable to a dashboard, windshield, or the like, for use by a driver or an occupant of a vehicle. In an embodiment, the device may pair to operate with a vehicle on-board communication system via Bluetooth. In an alternative embodiment, the device and functions may be incorporated into the vehicle on-board communication system. In another embodiment, the device is operable within or in the proximity of a vehicle including, but not limited to, a car, electric vehicle, SUV, truck, van, bus, motorcycle, bicycle, plane, spaceship, or the like. The said device provides the vehicle occupant with access to one or more functions including, but not limited to, voice, data, SMS, alerts, location via SMS, GPS location/navigation, navigation guidance, motion detection, roadside assistance services, and audio I/O. In addition, the said device enables the user to access and interact with the said AI digital assistant and transportation communication ecosystem for safe and secured driving.
In further embodiments, the invention relates to a vehicle telematics platform and fleet management system comprising an apparatus, methods, and system incorporating a voice-controlled user interface and Artificial Intelligence (AI) digital assistance remote cloud services. In the broadest terms, the platform comprises at least one wireless telematics communication device, providing one or more user functions including, but not limited to, voice, data, SMS, alerts, location via SMS, GPS location/navigation, motion detection, voice- controlled audio I/O. The said device preferably incorporates, one or more microprocessor, microcontroller, micro GSM/GPRS chipset, micro SIM module, read-write memory device, read-only memory device (ROM), random access memory (RAM), flash memory, memory storage device, memory I-O, 1-0 devices, buttons, display, LED, user interface, rechargeable battery, microphone, speaker, wireless transceiver (e.g., RF, WiFi, Bluetooth, IoT), RF electronic circuits, UART, OBDII connectors, WiFi electronic circuits, Bluetooth electronic circuits, transceivers (e.g., RF, WiFi, Bluetooth, IoT, etc.), audio CODEC, cellular antenna, GPS antenna, WiFi antenna, Bluetooth antenna, IoT antenna, vibrating motor(output), preferably configured in combination, to function as an electronic device.
The device according to embodiments of the invention can execute, from a tangible, non-transitory computer-readable medium (memory), one or more executable codes, algorithms, methods, and/or software instructions for data transmission, automated voice recognition-response, natural language understanding/processing, speech-to-text (STT) and text-to-speech (TTS) processing/services, and wireless mobile cellular telematics communication.
In embodiments, the electronic device may function in combination with an application software platform accessible to multiple clients (users) executable on one or more remote servers, to preferably establish a vehicle telematics and fleet operation management ecosystem. The ecosystem enables operational and or feedback communication for a driver and one or more network participants (e.g., fleet manager, dispatcher, parents etc.). Furthermore, the device may function in combination with one or more remote servers, cloud control services, to perform natural language or speech-based interactions with the user, to perform/process STT and or TTS functions/services, perform predictive fault analyses, whereby access to said servers is performed preferably through a voice-controlled speech user interface. In an alternative, the said electronic device may function in combination with an application software platform accessible to multiple clients (users) executable on one or more remote servers, to preferably establish a vehicle fleet vehicle management system for, including but not limited to, vehicle component data collection, transmission, analysis, assessment, fault hazard prediction, or the like. The application collects, aggregates, and processes telematics data, single or compound variables, to generate analytical and or predictive information including but not limited to a pre-set, threshold, captured, or monitored vehicle component or parts parameters. The application software platform dynamically receives, captures, and monitors the statuses of vehicle components. The application software may be configured to dynamically provide updates to a driver, preferably through an audio output via the voice- controlled user interface. The ecosystem is accessible using a software application embedded within a mobile computing device (e.g., smartphone, etc.).
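A minimal sketch of the threshold-based component monitoring described above follows; the field names, limits, and units are illustrative assumptions rather than parameters defined by the disclosure.

```python
# Sketch of threshold monitoring: telematics readings are compared against preset
# limits and any breach is turned into a short alert string for TTS output.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ComponentReading:
    name: str
    value: float
    unit: str

THRESHOLDS = {                        # hypothetical preset limits
    "coolant_temp": (None, 110.0),    # max degrees C
    "oil_pressure": (25.0, None),     # min psi
    "tire_pressure": (30.0, 40.0),    # psi range
}

def check_reading(reading: ComponentReading) -> Optional[str]:
    low, high = THRESHOLDS.get(reading.name, (None, None))
    label = reading.name.replace("_", " ")
    if low is not None and reading.value < low:
        return f"Warning: {label} is low at {reading.value} {reading.unit}."
    if high is not None and reading.value > high:
        return f"Warning: {label} is high at {reading.value} {reading.unit}."
    return None

alert = check_reading(ComponentReading("tire_pressure", 26.5, "psi"))
if alert:
    print(alert)  # would be routed to the TTS/audio output in the ecosystem
```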
According to the principle of the invention, the said device is portable, wearable, attachable to the vehicle interior (e.g., dashboard, holder, windshield), or embedded within an electronic control unit (ECU), or the like, for use by a driver or an occupant of a vehicle. In an embodiment, the device may pair to operate with a vehicle on-board communication system (e.g., OBDII, OBD dongle) via WiFi or Bluetooth. In an alternative embodiment, the device and functions may be incorporated into the vehicle on-board communication system or OBD system. In one embodiment, the device may access the Controller Area Network (CAN) bus of the vehicle. In one embodiment, the device may access the Tire Pressure Management System (TPMS) of the vehicle. In another embodiment, the device in conjunction with the vehicle on-board communication system or OBD system, preferably captures vehicle operational and components parameters of the motor vehicle during operation. In an alternative embodiment, the said device in conjunction with the vehicle on-board communication system or OBD system, function as an integrated system providing all proprioceptive sensors (e.g. accelerometer, magnetometer, etc.) and/or measuring devices for sensing the operating parameters of the motor vehicle and/or exteroceptive sensors (e.g., camera, IR sensors, ultrasonic, proximity sensor, etc.) and/or measuring devices for sensing the vehicle’s components/parts parameters (e.g., tire pressure, tire temperature) during operation of the motor vehicle. In another embodiment, the device is operable within or in the proximity of a vehicle including but not limited to a car, electric vehicle, SUV, truck, van, bus, a motorcycle, train, tram, or the like.
According to embodiments of the invention, the device provides access to the vehicle occupant one or more functions including, but not limited to, voice, data, SMS, alerts, location via SMS, GPS location/navigation, navigation guidance, motion detection, audio I/O. In addition, the device is configured to enable the user to access and interact with the AI digital assistant and personal car or a vehicle fleet management ecosystem for safe and secured driving.
In still further embodiments, the voice-controlled speech user interface of the inventive device detects or monitors audio input/output and interacts with a user to determine a user intent based on natural language understanding of the user's speech. The voice-controlled speech user interface is configured to capture user utterances and provide them to a cloud control service. The combination of the speech interface device and one or more applications executed by the control service serves as an Artificial Intelligent (AI) digital assistant. The AI digital assistant provides user voice authentication/identification, conversational interactions, utilizing automated voice recognition-response, natural language processing, predictive algorithms, or the like, interact with the user, fulfill user requests, preferably providing speech- to-text (STT) and or text-to-speech (TTS) services, including but not limited to, audio I/O of an out-going (sending) or in-coming (received) text message, email, voicemail, text document, social media notification, social media stream (e.g., Facebook postings, Twitter feed, etc.), video, video stream, podcast, webpage contents, GPS navigation information, information relating to vehicle function, alerting driver of vehicle and driving environment, alerting driver behaviors, or the like. In an embodiment, the vehicle and driving environment includes but limited to, time-dependent vehicle position/location, distance, speed, acceleration, mileage, time of day, road and terrain type, topology, weather/driving conditions, location, temperature, throttle position, fuel consumption, VIN (vehicle identification number), tachometer value/reading (RPM), G forces, brake pedal position, blind spot, sun angle and sun information, local high occupancy vehicle (HOV) conditions, visibility, lighting condition, seatbelt status, rush hour, CAN vehicle bus parameters including fuel level, distance from other vehicles, distance from obstacles, driver alertness, activation/usage of automated features, activation/ usage of Advanced Driver Assistance Systems, traction control data, usage of headlights and other lights, usage of blinkers, vehicle weight, number of vehicle passengers, traffic sign information, junctions crossed, running of yellow and red traffic lights, railroad crossing, alcohol/drug level detection devices, lane position, car passed, or the like. In an embodiment, driver behaviors include but not limited to, hard breaking, acceleration, cornering, driving distance, mobile phone usage (while driving), seatbelt status, sign of fatigue, driver confidence, lane changes, lane choice, driver alertness, driver distraction, driver aggressiveness, driver mental, mood, and emotional condition, or the like.
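For illustration only, the vehicle, environment, and driver-behavior parameters enumerated above could be carried in simple records such as the following sketch; the fields chosen are assumptions and represent only a small subset of the listed parameters.

```python
# Sketch of per-sample telemetry and aggregated driver-behavior records; fields
# are illustrative and cover only a subset of the parameters listed above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TelemetrySample:
    timestamp: float          # seconds since epoch
    latitude: float
    longitude: float
    speed_kph: float
    rpm: int
    throttle_pct: float
    brake_pedal_pct: float
    fuel_level_pct: float
    seatbelt_fastened: bool
    headlights_on: bool

@dataclass
class DriverBehaviorEvents:
    hard_braking: int = 0
    harsh_acceleration: int = 0
    harsh_cornering: int = 0
    phone_use_seconds: float = 0.0
    samples: List[TelemetrySample] = field(default_factory=list)
```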
In an embodiment, the AI digital assistant provides a vehicle component status to the driver or fleet manager. In an embodiment, the AI digital assistant provides in-vehicle coaching based on real-time driver behavior. The AI digital assistant preferably conducts said tasks in conjunction with one or more cloud servers and or cloud computing services.
In some embodiments, the device, methods, and system provide analytics for fault prediction and/or failure hazard estimation based on real-time data and measurements of vehicle components. In a preferred embodiment, the vehicle telematics system captures vehicle component/part data and analyzes, assesses, and synthesizes one or more predicted occurrences of automotive fault codes, and the resulting prediction is communicated to a driver, preferably through an audio-visual output via the AI digital assistant during operation of the motor vehicle. In an embodiment, the analysis predicts the occurrence of a fault or no fault. The predictive occurrence generator measures and/or generates a single or compound set of component variables or fault parameters profiling the condition of a vehicle component during operation of the motor vehicle, based on, but not limited to, a preset, threshold, triggered, captured, or one or more of the said monitored vehicle component operating functions or parameters. In one embodiment, the analysis predicts the type of fault and identifies possible contributing factors.
In an alternative embodiment, the analysis correlates the fault type to a system- level failure log, correlates extracted fault type from time-series analytics to predict, preferably in real-time, system levels of occurrences. In an alternative embodiment, a risk score is generated and communicated to the fleet manager and or driver which may include a measured maintenance (e.g., maintenance delinquency) and surveillance factor extracted from the automotive data associated with the motor vehicle or the use of active safety features.
In various embodiments, machine learning algorithms are used for analyses and predictions, including, but not limited to, Support Vector Machines (SVM), Artificial Neural Networks, Logistic Regression, Decision Trees, Random Forests, or the like. In other embodiments, failure hazard estimation models are used to make predictions, including, but not limited to, Cox’s proportional hazards, Kaplan-Meier, or the like. The telematics-based AI digital assistant feedback means of the system may, for example, comprise a dynamic alert feed via a data link to the motor vehicle's automotive control circuit, wherein the AI digital assistant alerts drivers immediately to one or more measures of vehicle performance or a status of vehicle components including, but not limited to, engine, coolant temperature, engine oil pressure, engine oil temperature, tire pressure, tire temperature, or the like. The telematic system also enables real-time dynamic alerts, maintenance alerts, and driver adaptation and improvement, providing instant feedback to drivers, training aids, behavior modification techniques, or the like, to ensure safe and secured driving.
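As a hedged illustration of the modelling families named above (not the disclosed implementation), the sketch below pairs a random-forest fault classifier (scikit-learn) with a Kaplan-Meier estimate of time to component failure (lifelines); the features and data are synthetic placeholders.

```python
# Illustrative fault-prediction and survival-analysis sketch with synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lifelines import KaplanMeierFitter

# Toy feature matrix: [engine_hours, coolant_temp, oil_pressure, vibration_rms]
X = np.array([[1200, 92, 40, 0.2], [3400, 105, 28, 0.7],
              [800, 88, 45, 0.1], [5000, 110, 22, 0.9]])
y = np.array([0, 1, 0, 1])            # 1 = fault code observed

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("Predicted fault probability:", clf.predict_proba([[2600, 101, 30, 0.5]])[0][1])

# Survival view: operating hours until failure (or censoring) per component
durations = [1500, 3200, 4100, 2700]  # hours in service
observed = [1, 0, 1, 1]               # 1 = failure observed, 0 = censored
kmf = KaplanMeierFitter().fit(durations, event_observed=observed)
print(kmf.survival_function_.tail())
```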
The device, methods, and software system of the invention can be connected to, or can interact or exchange information with, the enterprise resource planning (ERP) software platform and/or services of, including but not limited to, a vehicle manufacturer, a vehicle dealership/service department, a service station, a fleet company’s service station, an OEM, or the like. The software application (via APIs) enables a driver and/or fleet manager to locate, identify, search for, obtain a quote for, and schedule and/or purchase a replacement part and/or maintenance/repair service based on the relevant telematics data, geo-location, or predictive analysis afforded by the said platform. A vehicle has many consumable items (e.g., engine oil, brake fluid, spark plugs, belts, air and oil filters, etc.) which require replacement at regular intervals or vehicle performance can degrade, leading to excessive fuel consumption and even complete breakdown. The system enables the avoidance of such incidents by allowing the control, maintenance, scheduling, and replacement of consumables based on predictive analytics and subsequent maintenance transactions (e.g., ordering and purchasing of replacements) using the software application in conjunction with the said ERP software platform and/or services.
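A hypothetical sketch of how the software application might request a replacement-part quote from a vendor’s ERP front-end API is shown below; the endpoint, payload fields, and authentication scheme are assumptions, not a documented interface.

```python
# Hypothetical ERP quote request; the /quotes endpoint and fields are assumed.
import requests

def request_part_quote(api_base: str, token: str, vin: str, dtc_code: str, part: str):
    payload = {
        "vin": vin,                  # vehicle identification number
        "diagnostic_code": dtc_code, # DTC driving the predicted service need
        "part": part,
        "preferred_service_window": "next_7_days",
    }
    resp = requests.post(
        f"{api_base}/quotes",        # hypothetical ERP front-end endpoint
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()               # e.g. {"quote_id": ..., "price": ..., "eta": ...}
```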
The device and technology platform according to embodiments of the invention establish a vehicle telematics fleet management ecosystem, particularly using speech-to-text (STT) and text-to-speech (TTS) input-output (I/O) technology and cloud computing services. The system incorporates a voice-controlled AI digital assistant as a simple, user-friendly, natural, and low-cognitive-demand user interface within a communication ecosystem for safe and secured operation. The device and ecosystem together address the shortcomings of current and future static, labor-intensive vehicle fleet management systems. The invention provides a dynamic, efficient, automated, AI-assistive, analytical, and predictive vehicle fault/maintenance system for vehicle fleet or personal car health management.
In still other embodiments, the invention provides a vehicle telematics platform and insurance risk management system comprising an apparatus, methods, and system incorporating a voice-controlled user interface and Artificial Intelligence (AI) digital assistance remote cloud services. In the broadest terms, the platform incorporates at least one wireless communication device, providing one or more user functions including, but not limited to, voice, data, SMS, alerts, location via SMS, GPS location/navigation, motion detection, and voice-controlled audio I/O. The said device preferably incorporates one or more of a microprocessor, microcontroller, micro GSM/GPRS chipset, micro SIM module, read-write memory device, read-only memory device (ROM), random access memory (RAM), flash memory, memory storage device, memory I/O, I/O devices, buttons, display, LED, user interface, rechargeable battery, microphone, speaker, wireless transceiver (e.g., RF, WiFi, Bluetooth, IoT), RF electronic circuits, WiFi electronic circuits, Bluetooth electronic circuits, transceivers (e.g., RF, WiFi, Bluetooth, IoT, etc.), audio CODEC, cellular antenna, GPS antenna, WiFi antenna, Bluetooth antenna, IoT antenna, and vibrating motor (output), preferably configured in combination, to function as an electronic device. The device can execute, from a tangible, non-transitory computer-readable medium (memory), one or more executable codes, algorithms, methods, and/or software instructions for automated voice recognition-response, natural language understanding/processing, speech-to-text (STT) and text-to-speech (TTS) processing/services, and wireless mobile cellular communication.
According to embodiments of the invention, the electronic device is configured to function in combination with an application software platform accessible to multiple clients (users) executable on one or more remote servers, to preferably establish a vehicle telematics and insurance risk management ecosystem. The ecosystem enables feedback communication for a driver and one or more network participants (e.g., family member, worker, employer, insurer, etc.). Furthermore, the device may function in combination with one or more remote servers, cloud control services, to perform natural language or speech- based interactions with the user, to perform/process STT and or TTS functions/services, preferably through a voice-controlled speech user interface. In an alternative, the said electronic device may function in combination with an application software platform accessible to multiple clients (users) executable on one or more remote servers, to preferably establish an insurance telematics risk measurement, analysis, assessment, and management system. The application collects, aggregates, and processes telematics data, single or compound variables, to generate analytical information including but not limited to a score and score parameters profiling the driver and or environmental driving condition during operation of the motor vehicle based on a pre-set, threshold, captured, or monitored operating or environmental parameters. The application software platform dynamically captures and categorizes behavioral and risk profiles of the drivers. The application software may be configured to dynamically provide updates, preferably audio output via the voice-controlled user interface to a driver.
The device is portable, wearable, or attachable to the vehicle interior (e.g., holder, windshield), or the like, for use by a driver or an occupant of a vehicle. In an embodiment, the device may pair to operate with a vehicle on-board communication system (e.g., OBD, OBD dongle) via WiFi or Bluetooth. In an alternative embodiment, the device and functions may be incorporated into the vehicle on-board communication system or OBD system. In one embodiment, the device may access the Controller Area Network (CAN) bus of the vehicle. In another embodiment, the device, in conjunction with the vehicle on-board communication system or OBD system, preferably captures contextual/environmental or operational parameters of the motor vehicle during operation. In an alternative embodiment, the said device, in conjunction with the vehicle on-board communication system or OBD system, functions as an integrated system providing all proprioceptive sensors (e.g., accelerometer, magnetometer, etc.) and/or measuring devices for sensing the operating parameters of the motor vehicle and/or exteroceptive sensors (e.g., camera, IR sensors, ultrasonic, proximity sensor, etc.) and/or measuring devices for sensing the environmental parameters during operation of the motor vehicle. In another embodiment, the device is operable within or in the proximity of a vehicle including, but not limited to, a car, electric vehicle, SUV, truck, van, bus, motorcycle, bicycle, plane, spaceship, or the like. The said device provides the vehicle occupant with access to one or more functions including, but not limited to, voice, data, SMS, alerts, location via SMS, GPS location/navigation, navigation guidance, motion detection, and audio I/O. In addition, the said device enables the user to access and interact with the said AI digital assistant and vehicle telematics ecosystem and/or insurance telematics system for safe and secured driving.
In embodiments, the voice-controlled speech user interface of the device detects or monitors audio input/output and interacts with a user to determine a user intent based on natural language understanding of the user's speech. The voice-controlled speech user interface is configured to capture user utterances and provide them to a cloud control service. The combination of the speech interface device and one or more applications executed by the control service serves as an Artificial Intelligent (AI) digital assistant. The AI digital assistant provides user voice authentication/identification, conversational interactions, utilizing automated voice recognition-response, natural language processing, predictive algorithms, and the like, interact with the user, fulfill user requests, preferably providing speech-to-text (STT) and or text-to-speech (TTS) services, including but not limited to, audio I/O of an out-going (sending) or in-coming (received) text message, email, voicemail, text document, social media notification, social media stream (e.g., Facebook postings, Twitter feed, etc.), video, video stream, podcast, webpage contents, GPS navigation information, information relating to vehicle function, alerting driver of vehicle and driving environment, alerting driver behaviors, or the like. In an embodiment, the vehicle and driving environment includes but limited to, time-dependent vehicle position/location, distance, speed, acceleration, mileage, time of day, road and terrain type, topology, weather/driving conditions, location, temperature, throttle position, fuel consumption, VIN (vehicle identification number), tachometer value/reading (RPM), G forces, brake pedal position, blind spot, sun angle and sun information, local high occupancy vehicle (HOV) conditions, visibility, lighting condition, seatbelt status, rush hour, CAN vehicle bus parameters including fuel level, distance from other vehicles, distance from obstacles, driver alertness, activation/usage of automated features, activation/ usage of Advanced Driver Assistance Systems, traction control data, usage of headlights and other lights, usage of blinkers, vehicle weight, number of vehicle passengers, traffic sign information, junctions crossed, running of orange and red traffic lights, railroad crossing, alcohol/drug level detection devices, lane position, car passed, or the like. In an embodiment, driver behaviors include but not limited to, hard breaking, acceleration, cornering, driving distance, mobile phone usage (while driving), seatbelt status, sign of fatigue, driver confidence, lane changes, lane choice, driver alertness, driver distraction, driver aggressiveness, driver mental, mood, and emotional condition, or the like. In an embodiment, the AI digital assistant provides a risk factor status to the driver, a parent, or an insurer, based on said vehicle status, vehicle environment, or driver behavior. In an alternative the AI digital assistant offers a reward (e.g. insurance discount) for good driving behavior. In yet another embodiment, the AI digital assistant warns the risk of an increase in insurance premium based on real-time driver behavior. In yet another embodiment, the AI digital assistant provides counseling or advice based on real-time driver behavior. The AI digital assistant preferably conducts said tasks in conjunction with one or more cloud servers and or cloud computing services.
According to certain embodiments, the device, methods, and system provides a dynamic, driver behavior, risk profile, scoring system based on real-time scoring and measurements. In a preferred embodiment, the vehicle telematics system captures driver behavior data, analyze, assess, and synthesizes one or more risk profiles, and wherein the resulting profile is, preferably provided for audio-visual output via the AI digital assistant to a driver, a family member, or an insurer using proprioceptive sensors of the said device for sensing operating parameters of the motor vehicle and/or exteroceptive sensors for sensing environmental parameters during operation of the motor vehicle. The score generator measuring and/or generating a single or compound of variable scoring parameters profiling the use and/or style and/or environmental condition of driving during operation of the motor vehicle is based on a preset, threshold, triggered, captured, or one or more said monitored operating parameters or environmental parameters. The variable driving score generated can include for example, but not limited to, speed and/or acceleration and/or braking and/or cornering and/or jerking, and/or a measure of distraction parameters comprising mobile phone usage while driving and/or a measure of fatigue parameter. The variable contextual/environmental score can include for example, but not limited to, road condition, road topology, traffic, road type and/or number of intersection and/or tunnels and/or elevation, and/or measured time of travel parameters, and/or measured weather parameters and/or measured location parameters, and or measured distance driven parameters, and or neighborhood parameters. In an alternative embodiment, the risk scores may include a measured maintenance (e.g., maintenance delinquency) and surveillance factor extracted from the automotive data associated with the motor vehicle or the use of active safety features. The telematics-based AI digital assistant feedback means of the system may, for example, comprise a dynamic alert feed via a data link to the motor vehicle's automotive control circuit, wherein the AI digital assistant alerts drivers immediately to one or more performance measures including, but not limited to, tachometer reading (e.g. high RPM), unsteady driving condition, excessive engine power, harsh acceleration, road anticipation, and/or ECO drive system. The telematic system enables real-time dynamic driver adaption and improvement, providing instant feedback to drivers, training aids, behavior modification techniques, or the like, to ensure safe and secured driving.
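The following is a minimal sketch of a compound driving-risk score of the kind described above, combining behavior events normalized per distance with simple contextual weights; the weights, normalization, and 0-100 scale are illustrative assumptions.

```python
# Illustrative compound driving-risk score; weights and scaling are assumptions.
def driving_risk_score(events, distance_km, night_fraction, bad_weather_fraction):
    def per_100km(count):
        return 100.0 * count / max(distance_km, 1.0)

    behavior = (2.0 * per_100km(events["hard_braking"])
                + 2.0 * per_100km(events["harsh_acceleration"])
                + 1.5 * per_100km(events["harsh_cornering"])
                + 3.0 * per_100km(events["phone_use_minutes"]))
    context = 10.0 * night_fraction + 15.0 * bad_weather_fraction
    # Higher is riskier; cap to a 0-100 scale for reporting via the AI assistant
    return min(100.0, behavior + context)

score = driving_risk_score(
    {"hard_braking": 4, "harsh_acceleration": 2, "harsh_cornering": 1, "phone_use_minutes": 3},
    distance_km=180, night_fraction=0.25, bad_weather_fraction=0.1)
print(round(score, 1))
```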
The device, methods, and system of the invention enable insurers to provide one or more insurance quotes based on the score and other relevant telematics data (e.g., automatic capture and analysis of risk scores and reaction to data) afforded by the said platform. In an embodiment, the information generated by the platform is an integral solution to UBI insurance schemes including, but not limited to, pay-as-you-drive (PAYD), pay-how-you-drive (PHYD), manage-how-you-drive (MHYD), or the like. In PHYD, the risk management/profiling system may allow an insurer to offer a discount based on driving behavior. In PAYD, the risk management/profiling system may, for example, offer a discount based on mileage (how much a person drives) rather than on where or how the person drives. In an alternative embodiment, the information generated by the system may be combined with additional information, including, but not limited to, insurance policy data and information, individual driving data, crash forensics, credit scores, driving statistics, historic claims, market databases, driving license points, claims statistics, rewards, discounts, and contextual data for weather, driving conditions, road type, environment, or the like. In yet another embodiment, the platform provides an insurer with a comprehensive risk-transfer structure comprising device and vehicle sensor data collection, alone or combined with ADAS (advanced driver assistance systems) data, for accurate risk analysis and incorporation within an automated risk-transfer system/coverage, claims notification, and value-added services (e.g., crash reporting, post-accident services, Emergency-Call/Breakdown-Call, vehicle theft, driver coaching/scoring, rewards, driver safety training, etc.).
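As a final illustration (with assumed base premium, per-kilometre rate, and discount bands), a UBI premium could combine a PAYD mileage component with a PHYD behavior-based factor along the lines sketched below.

```python
# Illustrative UBI premium adjustment; all rates and bands are assumptions.
def ubi_premium(base_premium, annual_km, risk_score):
    payd = 0.02 * annual_km          # pay-as-you-drive: per-kilometre charge
    if risk_score < 20:              # pay-how-you-drive: behavior-based factor
        phyd_factor = 0.85           # 15% discount for low-risk driving
    elif risk_score < 50:
        phyd_factor = 1.00
    else:
        phyd_factor = 1.20           # surcharge for high-risk driving
    return (base_premium + payd) * phyd_factor

print(ubi_premium(base_premium=600.0, annual_km=12000, risk_score=18))
```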
BRIEF DESCRIPTION OF THE DRAWINGS The accompanying drawings are included to provide a further understanding of the invention in reference to exemplified embodiments illustrated in the below figures.
FIGS. 1A and 1B are schematic diagrams illustrating the components of the cellular communication device for assisting drivers with in-vehicle communication activities.
FIG. 2 is an illustration of a simple user interface according to an embodiment of the invention.
FIG. 3 is an illustration depicting the electronic device within a communication ecosystem.
FIG. 4 is an illustration depicting remote servers and application modules within a server for processing speech-to-text and text-to-speech services according to an embodiment of the invention.
FIG. 5 is a schematic illustration of an architecture for an embodiment of a dynamically triggered vehicle telematics risk prediction-risk scoring system.
FIG. 6 is a schematic illustration of an architecture for an embodiment of a dynamically triggered vehicle telematics vehicle health and maintenance predictive system.
FIG. 7 illustrates a method to obtain the most fitted survival function derived from different groups of vehicle health parameters.
FIG. 8 is a schematic illustration of an architecture for an embodiment of a dynamically triggered vehicle telematics process for determining driver behavior.
FIG. 9A is a schematic illustration of an architecture for an embodiment of a dynamically triggered vehicle telematics process for determining driver behavior.
FIG. 9B is an illustration of a process that occurs when a customer requests a part or services from a vendor through an ERP system.
DETAILED DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention will now be described more fully with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. Indeed, the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Where possible, any terms expressed in the singular form herein are meant to also include the plural form and vice versa, unless explicitly stated otherwise. Also, as used herein, the term "a" and/or "an" shall mean "one or more," even though the phrase "one or more" is also used herein. Furthermore, when it is said herein that something is "based on" something else, it may be based on one or more other things as well. In other words, unless expressly indicated otherwise, as used herein "based on" means "based at least in part on" or "based at least partially on." Like numbers refer to like elements throughout.
The terminology used herein is for describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," and variants thereof, when used herein, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof.
It will be understood that when an element is referred to as being "coupled," "connected," or "responsive" to another element, it can be directly coupled, connected, or responsive to the other element, or intervening elements may also be present. In contrast, when an element is referred to as being "directly coupled," "directly connected," or "directly responsive" to another element, there are no intervening elements present. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Spatially relative terms, such as "above," "below," "upper," "lower," "top," "bottom," and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "below" other elements or features would then be oriented "above" the other elements or features. Thus, the term "below" can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. Well-known functions or constructions may not be described in detail for brevity and/or clarity.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. Thus, a first element could be termed a second element without departing from the teachings of the present embodiments.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which these embodiments belong. It will be further understood that terms, such as those defined in commonly-used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Numerous alternative embodiments of a vehicle telematics device, methods, and technology platform (system) are described herein. Such device, methods and system assist drivers with in-vehicle mobile communication activities and deter risky behavior for safety purposes.
An object of the invention is the use of a technology platform to facilitate mobile communication between a driver and his or her network participants (e.g., parents, insurer, etc.). The system leverages a low cognitive demand, voice-controlled AI digital assistant for accessing a variety of remote cloud computing services including, but not limited to, automated voice recognition-response, natural language understanding-processing, and speech-to-text (STT) and text-to-speech (TTS) processing/services. The platform enables a driver and his or her network participants or peers to monitor the driver's status, well-being, and driving behavior, as well as the status of the vehicle.
In one embodiment, the platform or system comprises a combination of at least one of the following components: cellular communication device; computing device; communication network; remote server; cloud server; cloud application software. The cloud server and service are commonly referred to as "cloud computing", "on-demand computing", "software as a service (SaaS)", "platform computing", "network-accessible platform", "cloud services", "data centers," and the like.
As explained with regard to cloud computing generally in U.S. Patent Application Publication No. 2014/0379910 to Saxena et al., a cloud can include "a collection of hardware and software that forms a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services, etc.), which can be suitably provisioned to provide on-demand self-service, network access, resource pooling, elasticity and measured service, among other features." A cloud may be deployed as a private cloud (e.g., infrastructure operated by a single enterprise/organization), community cloud (e.g., infrastructure shared by several organizations to support a specific community that has shared concerns), public cloud (e.g., infrastructure made available to the general public, such as the Internet), or a suitable combination of two or more disparate types of clouds. In this description, "cloud computing" is defined as a model for enabling on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services). As stated in U.S. Patent Application Publication No. 2014/0075431 to Kumar et al.: "Generally, a cloud computing model enables some of those responsibilities which previously may have been provided by an organization's own information technology department, to instead be delivered as service layers within a cloud environment, for use by consumers (either within or external to the organization, according to the cloud's public/private nature)." As further explained in the aforementioned publication, a cloud computing model can take the form of various service models such as, for example, Software as a Service ("SaaS"), "in which consumers use software applications that are running upon a cloud infrastructure, while a SaaS provider manages or controls the underlying cloud infrastructure and applications," and Platform as a Service ("PaaS"), "in which consumers can use software programming languages and development tools supported by a PaaS provider to develop, deploy, and otherwise control their own applications, while the PaaS provider manages or controls other aspects of the cloud environment (i.e., everything below the run-time execution environment)." The definition of "cloud computing" is not limited to any of the other numerous advantages that can be obtained from such models when properly deployed.
In an alternative embodiment, the driver communication assistive system comprises a combination of at least one of the following: voice-controlled speech user interface; computing device; communication network; remote server; cloud server; server system; cloud application software. One skilled in the art will recognize, if appropriate, that the present invention may be implemented using a distributed software architecture. In some examples, the server system can also employ various virtual devices and/or services of third-party service providers (e.g., third-party cloud service providers) to provide the underlying computing resources and/or infrastructure resources of the server system. These components are configured to function together to enable a user to interact with a resulting AI digital assistant. Non-limiting examples of an AI digital assistant include the ALEXA software and services from Amazon of Seattle, WA, the CORTANA software and services from Microsoft Corporation of Redmond, Wash., the GOOGLE NOW software and services from Google Inc. of Mountain View, Calif., and the SIRI software and services from Apple Inc. of Cupertino, Calif. In addition, application software, accessible by the user and others using one or more remote computing devices, provides a transportation network support system for the driver. The terms "AI digital assistant," "virtual assistant," "intelligent automated assistant," or "automatic digital assistant" can refer to any information processing system that interprets natural language input in spoken and/or textual form to infer user intent, and performs actions based on the inferred user intent. The electronic device of the invention is a fully functional wireless mobile communication device that is wearable, or attachable to a dashboard, windshield, or the like, for use by a driver or an occupant of a vehicle. In an embodiment, the device may pair to operate with a vehicle on-board communication system via Bluetooth or Bluetooth Low Energy (BTLE). In an alternative embodiment, the device and its functions may be incorporated into the vehicle on-board communication system. In another embodiment, the device is operable within or in the proximity of a vehicle including, but not limited to, a car, electric vehicle, SUV, truck, van, bus, motorcycle, bicycle, plane, spaceship, or the like. The device provides a user interface that allows a user to access features that include smart and secure location-based services, a mobile phone module, voice and data, an advanced battery system and power management, direct 911 emergency service access, and motion detection via an accelerometer sensor. Additional functions may include one or more measurements of linear acceleration for motion detection. The said device may contain one or more of a microprocessor, microcontroller, micro GSM/GPRS chipset, micro SIM module, read-write memory device, read-only memory device (ROM), random access memory (RAM), flash memory, memory storage device, memory I/O, I/O devices, buttons, display, LED, user interface, rechargeable battery, microphone, speaker, wireless transceiver (e.g., RF, WiFi, Bluetooth, Bluetooth Low Energy (BTLE), IoT), RF electronic circuits, WiFi electronic circuits, Bluetooth electronic circuits, transceivers (e.g., RF, WiFi, Bluetooth, IoT, etc.), audio CODEC, cellular antenna, GPS antenna, WiFi antenna, Bluetooth antenna, IoT antenna, vibrating motor (output), power gauge monitor, wireless battery charger, wireless transceiver, and the like, to function fully as a portable mobile communication device.
Referring to FIG. 1, schematic diagram 100 illustrates the components that may be incorporated within the electronic device according to the invention. As shown in FIG. 1A, device 101 may contain a radio module 102 configured to function as a stand-alone wireless cellular communication (e.g., sans Bluetooth or Bluetooth Low Energy (BTLE)) apparatus via connection to cellular antenna 103 and GPS antenna 104. The communication antennas may be incorporated within device 101 or located externally to device 101. The device is powered by a rechargeable battery 105 which can be re-energized by charger 106 in conjunction with an external docking station accessible through direct contact or wireless connection 107. A fuel/power gauge 108 allows the device to monitor energy consumption and the life of the battery 105. A switched-mode power supply unit 109 and low dropout regulator (LDO) 110 may be incorporated to convert electrical power efficiently, eliminate switching noise, and provide simplicity in design. Device 101 also contains an audio CODEC 111 functioning together with an audio input-output (I/O) 112. Additional I/O devices may include a connection to one or more I/O buttons 113, output light-emitting diodes (LEDs) via LED driver 114 through connection 115, and vibrational motor 116. In addition, device 101 may contain an accelerometer unit 117 as well as a subscriber identification module (SIM) 118. In an alternative embodiment, the device 101 may pair to operate with a vehicle on-board communication system via Bluetooth or Bluetooth Low Energy (BTLE). Device 101 and one or more of its internal components, connected to external components, preferably when attached, operate together to assist a driver with mobile communication activities. It is understood that the communication device, hardware, and internal components can be integrated within a variety of form factors. For illustration purposes, the physical form factors may include, but are not limited to, an apparatus of the invention, a self-contained compact box, a miniature device, a device resembling a portable mobile unit, a cellular phone, a mobile phone, a tablet, or the like.
The wireless communication may include cellular communication that uses at least one of long-term evolution (LTE), LTE advanced (LTE-A), code division multiple access (CDMA), wideband CDMA (WCDMA), universal mobile telecommunication system (UMTS), wireless broadband (WiBro), and global system for mobile communication (GSM). The wireless communication may include at least one of wireless fidelity (WiFi), Bluetooth™, Bluetooth low energy (BLE), ZigBee™, near field communication (NFC), magnetic secure transmission, radio frequency (RF), or body area network (BAN). The wireless communication may include a global positioning system (GPS), global navigation satellite system (GNSS), Beidou navigation satellite system (Beidou), Galileo, and the European global satellite-based navigation system. Herein, "GPS" may be referred to interchangeably as "GNSS." Additional bands and equivalent terminologies include Third Generation (3G), Fourth Generation (4G), Fifth Generation (5G), future generations, and the like. The wireless transceivers may be configured to communicate according to an IEEE 802.11 standard, a cellular (e.g., 2G/3G/4G/LTE/5G) standard, a GPS standard, or other standards. In addition, such wireless communication can be implemented in accordance with one or more radio technology protocols, for example, such as NFC, 3GPP LTE, LTE-A, 3G, 4G, 5G, WiMax, Wi-Fi, Bluetooth, ZigBee, IoT, or the like.
Referring to FIG. 1B, schematic diagram 100 illustrates, without limitation, the components that may be incorporated within the vehicle telematics device according to the invention. In one embodiment, device 101 may contain a radio module 102 that is configured so as to be able to function as a stand-alone wireless cellular communication apparatus via connection to cellular antenna 103 and GPS antenna 104. Device 101 can also incorporate a Bluetooth antenna 125. The said communication antennas may be incorporated within device 101 or located external to device 101. The device is optionally powered by a rechargeable battery 105 which can be re-energized by charger 106 in conjunction with an external docking station accessible through direct contact or wireless connection 107. A fuel/power gauge 108 allows the device to monitor energy consumption and the life of battery 105. In an alternative embodiment, rechargeable battery 105 and charger 106 may be disconnected and device 101 is powered directly from a vehicle's power system. A switched-mode power supply unit 109 and low dropout regulator (LDO) 110 may be incorporated to convert electrical power efficiently, eliminate switching noise, and provide simplicity in design. Device 101 also contains an audio CODEC 111 functioning together with an audio input-output (I/O) 112. Additional I/O devices may include a connection to one or more I/O buttons 113, output light-emitting diodes (LEDs) via LED driver 114 through connection 115, and vibrational motor 116. In addition, device 101 may contain an accelerometer unit 117 as well as a subscriber identification module (SIM) 118. In one embodiment, the device 101 may incorporate an OBDII interpreter integrated circuit (IC) 120, for example and without limitation an ELM327 (Elm Electronics Inc., Ontario, CA), to communicate with a vehicle OBDII 121 and automatically interpret all OBD II signal protocols via RS232. In another embodiment, component 120 may comprise, for example and without limitation, an ET7190 (Haikou Xingong Electronic Co., Ltd., Haikou City, China) OBDII protocol chip, connected to one or more microcontrollers of device 101, to communicate and automatically interpret all OBD II signal protocols. In an alternative embodiment, the device 101 may pair to operate with a vehicle on-board communication system, CAN bus, or OBDII over WiFi or Bluetooth, via, for example, an ELM327 integrated component (Elm Electronics Inc., Ontario, CA) 122, or Bluetooth Low Energy (BTLE). The use of specifically identified components and source manufacturers merely serves as an example and should not be construed as a limitation of the present disclosure. In one embodiment, accessing the OBDII and/or a vehicle's CAN bus, and/or an ECU, and/or the central vehicle computer, enables device 101 to receive data from a vehicle's Tire Pressure Monitoring System (TPMS). As known in the art, TPMSs continuously measure air pressure inside all tires of passenger cars, trucks, and multipurpose passenger vehicles, and alert drivers if any tire is significantly underinflated or overinflated. Most automobiles are equipped with direct TPMSs, relying on battery-powered pressure sensors inside each tire to measure tire pressure and communicate their data via a radio frequency (RF) transmitter. The receiving tire pressure control unit, in turn, analyzes the data and can send results or commands to the central car computer over the Controller Area Network (CAN), e.g., to trigger a warning message on the vehicle dashboard.
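As a non-limiting illustration of how a host processor might talk to an ELM327-class interpreter, the Python sketch below (using the pyserial library) issues the standard OBD-II mode 01/PID 0C request for engine RPM. The serial port name, baud rate, and the simplified response parsing are assumptions for the example; a production implementation would handle prompts, multi-line replies, and protocol negotiation.

```python
import serial  # pyserial

def read_engine_rpm(port="/dev/ttyUSB0", baud=38400):
    # Open the serial link to an ELM327-style OBD-II interpreter.
    with serial.Serial(port, baud, timeout=1) as ser:
        def cmd(text):
            ser.write((text + "\r").encode("ascii"))
            return ser.read(128).decode("ascii", errors="ignore")

        cmd("ATZ")           # reset the interpreter
        cmd("ATE0")          # turn command echo off
        reply = cmd("010C")  # mode 01, PID 0C: engine RPM
        # A typical reply looks like "41 0C 1A F8"; RPM = ((A * 256) + B) / 4.
        parts = reply.split()
        if len(parts) >= 4 and parts[0] == "41" and parts[1] == "0C":
            a, b = int(parts[2], 16), int(parts[3], 16)
            return (a * 256 + b) / 4.0
        return None

print(read_engine_rpm())
```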
In an alternative embodiment, radio module 102 may comprise an RF transceiver to directly receive tire sensor data (e.g., pressure, temperature). In an embodiment, communications (voice, data, SMS, etc.) between device 101 and the OBDII, CAN bus, TPMS, or tire sensors are secured using security encryption and transmission protocols known in the art.
According to an embodiment of the invention, device 101 and one or more of its internal components, connected to external components, preferably when attached, operate together as a vehicle telematics system. In an embodiment, the vehicle's position estimates may be derived from a combination of cellular radios with an accuracy of only a few kilometers or several hundred meters, shorter-range radios like 802.11 (WiFi) with an accuracy of a few tens to a few hundreds of meters, or by sampling of a GPS receiver on the device. It is understood that GPS may be inaccurate or unavailable in certain geographic locations (e.g., urban canyons). The use of short-range radios, where networks are accessible, may be desirable to conserve battery energy. As such, the accelerometer data generated from the device may be used principally, instead of GPS exclusively, to infer vehicular acceleration (longitudinal, lateral, and vertical). It is understood that the communication device, hardware, and internal components can be integrated within a variety of form factors. For illustration purposes, the physical form factors may include, but are not limited to, an apparatus of the invention, a self-contained compact box, a miniature device, a device resembling a portable mobile unit, a cellular phone, a mobile phone, a tablet, or the like.
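The following minimal Python sketch illustrates one way accelerometer samples alone could be used to flag harsh maneuvers when GPS is unavailable. The gravity-subtraction shortcut and the 3 m/s² threshold are assumptions for the example, not parameters disclosed herein; a practical system would also account for device orientation and sensor noise.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def dynamic_acceleration(ax, ay, az):
    # Crude proxy for vehicle acceleration when device orientation is unknown:
    # magnitude of the measured acceleration vector minus gravity.
    return abs(math.sqrt(ax * ax + ay * ay + az * az) - G)

def harsh_events(samples, threshold=3.0):
    # samples: iterable of (ax, ay, az) tuples in m/s^2, one per sensor reading.
    return [i for i, s in enumerate(samples) if dynamic_acceleration(*s) > threshold]

readings = [(0.1, 0.2, 9.8), (2.9, 0.4, 10.5), (6.0, 2.0, 11.5), (0.3, 0.1, 9.7)]
print(harsh_events(readings))  # indices of readings that exceed the threshold
```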
The wireless communication may include cellular communication that uses at least one of long-term evolution (LTE), LTE advanced (LTE-A), code division multiple access (CDMA), wideband CDMA (WCDMA), universal mobile telecommunication system (UMTS), wireless broadband (WiBro), and global system for mobile communication (GSM). The wireless communication may include at least one of wireless fidelity (WiFi), Bluetooth, Bluetooth low energy (BLE), ZigBee, near field communication (NFC), magnetic secure transmission, radio frequency (RF), or body area network (BAN). The wireless communication may include a global positioning system (GPS), global navigation satellite system (GNSS), Beidou navigation satellite system (Beidou), Galileo, and the European global satellite-based navigation system. Herein, "GPS" may be referred to interchangeably as "GNSS." Additional bands and equivalent terminologies include Third Generation (3G), Fourth Generation (4G), Fifth Generation (5G), future generations, and the like. The wireless transceivers may be configured to communicate according to an IEEE 802.11 standard, a cellular (e.g., 2G/3G/4G/LTE/5G) standard, a GPS standard, or other standards. In addition, such wireless communication can be implemented in accordance with one or more radio technology protocols, for example, such as NFC, 3GPP LTE, LTE-A, 3G, 4G, 5G, WiMax, Wi-Fi, Bluetooth, ZigBee, IoT, or the like.
In a preferred embodiment, the device 101 of FIG. 1 operates in conjunction with one or more integrated, external-facing simple user I/O interfaces, including, but not limited to, a microphone, speaker, button, LED, E-ink display, display, touch screen, or the like, for user interaction that is user-friendly, natural, and of low cognitive and low visual demand.
FIG. 2 is an illustration 200 of a preferred simple user interface 201 of said device 101 of FIG. 1. The simple user interface 201 may incorporate an easily recognizable on-off button 202 and an LED ring 203 that is on/lit (i.e., emitting light) when the device is active, allowing a user to know that the device is on or in active mode. The simple user interface 201 may also incorporate a microphone 204 for audio reception, a speaker 205 for audio output, and a touch screen display 207 for accessing device functions and visual outputs. Audio reception, which may include spoken words from a user, is captured by microphone 204 and processed by audio CODEC 111 of FIG. 1. It is understood that one or more alternative I/O devices may be incorporated for use within said device 101 of FIG. 1. The input devices may include a keyboard, a mouse device, an additional microphone, a voice-controlled speech interface, sensors, a CCD detector, and the like. The output devices may include displays, a touch screen, audio output devices (e.g., speaker, vibrating motor), an LED, an LED display, an E-Ink display, or other output devices. The input/output (I/O) interfaces permit communication of information between device 101 of FIG. 1 and an external device, such as another computing device, e.g., a network element or an end-user device. Such communication can include direct communication or indirect communication, such as exchange of information between the electronic device 101 and the external device via a network or elements thereof. The I/O interfaces can include one or more of network adapter(s), peripheral adapter(s), and rendering unit(s). For example, the peripheral adapter(s) can include a group of ports, which can include at least one of parallel ports, serial ports, Ethernet ports, V.35 ports, or X.21 ports. In certain embodiments, the parallel ports can include General Purpose Interface Bus (GPIB) and IEEE-1284, while the serial ports can include Recommended Standard (RS)-232, V.11, Universal Serial Bus (USB), FireWire, or IEEE-1394. Network protocols for communication among two or more electronic devices may be implemented using well-known network communication protocols such as Hypertext Transfer Protocol (HTTP), Secure Hypertext Transfer Protocol (SHTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Ethernet, FIREWIRE, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable current or yet-to-be-developed communication protocol.
The driver assistive technology system according to embodiments of the invention utilizes an application software platform to create an ecosystem for communication and networking between a driver and network participants (e.g., parents, co-workers, employer, etc.). Referring to FIG. 3, illustration 300 describes the elements of said ecosystem. One or more users can access the system using a portable computing device 302 or stationary computing device 303. Computing device 302 may be a laptop used by a family member. Stationary computing device 303 may reside at a company facility. One or more users may access the system using other portable computing devices such as a smart phone, a smart appliance, a smart TV, AI digital assistance-enabled devices, a PDA, or the like. Device 301, corresponding to device 101 of FIG. 1, communicates with the system via communication means 304 to one or more cellular communication networks 305 which can connect device 301 via communication means 306 to the Internet 307. Devices 301, 302, and 303 can access one or more remote servers 308, 309, capable of providing voice-controlled services, via the Internet 307 through communication means 310 and 311 depending on the server. Devices 302 and 303 can access one or more servers through communication means 312 and 313. The application software platform can be stored in the one or more servers 308, 309. The software environment allows for, but is not limited to, daily tracking of driver location, sending-receiving instant text messages, push notifications, sending-receiving voice messages, sending-receiving audio streams, sending-receiving videos, or the like. In some embodiments, transmitted and/or received instant messages may include graphics, photos, audio files, video files, and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS). As used herein, "instant messaging" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS). In another embodiment, the message contents may be transmitted to one or more remote servers providing services. These may include, but are not limited to, voice recognition-response, natural language processing, and speech-to-text (STT) and/or text-to-speech (TTS) services.
In a preferred embodiment, the device 301, corresponding to device 101 of FIG. 1, enables communication with one or more remote servers, for example server 308, configured for providing a cloud-based voice-control service, to perform natural language or speech-based interaction with the user. Communication with server 308 may provide access to other servers and services, preferably access to one or more servers providing STT and/or TTS conversion services. The device 301 detects audio input, listens, and interacts with a user to determine a user intent based on natural language understanding of the user's speech. The device 301 is configured to capture user utterances and provide them to the voice-control service located on server 308. The control service performs speech recognition-response and natural language understanding-processing on the utterances to determine intents expressed by the utterances. In response to an identified intent, the control service causes a corresponding action to be performed. An action may be performed at the control service or by instructing the said device 301 to perform a function. The combination of the said device 301 and the control service located on remote server 308 serves as an AI digital assistant. The said assistant provides conversational interactions, utilizing automated voice recognition-response, natural language processing, predictive algorithms, and the like, to perform functions, interact with the user, and fulfill user requests. Ultimately the device 301 enables the driver to access and interact with the assistant for assistance with in-vehicle communication activities. The information generated from the interaction of the user and others can be captured and stored in a remote server, for example remote server 309. This information may be incorporated into the application software, making it accessible to multiple users (e.g., family member, co-worker, employer) of the transportation communication ecosystem of this invention. The application software residing in remote server 309 may also be accessible using a multimedia device. Non-limiting exemplary devices include a smart TV, a smart appliance, FireTV, Fire HD8 Tablet, Echo Show (products available from Amazon.com, Seattle, WA), Nucleus (Nucleuslife.com), Triby (Invoxia.com), TCL Xcess, or the like.
According to a principle of the invention, without being bound to a specific configuration, said voice-control service server 308 may provide speech services implementing an automated speech recognition (ASR) function, a natural language understanding (NLU) function, an intent router/controller, and one or more applications providing commands back to the voice-controlled access device 101 of FIG. 1. The ASR function can recognize human speech in an audio signal transmitted by the voice-controlled speech interface device, received from a built-in microphone, for example microphone 204 of FIG. 2. The NLU function can determine a user intent based on user speech that is recognized by the ASR components. The speech services may also include speech generation functionality that synthesizes speech audio.
The control service may also provide a dialog management component configured to coordinate speech dialogs or interactions with the user in conjunction with the speech services. Speech dialogs may be used to determine the user intents using speech prompts. One or more applications can serve as a command interpreter that determines functions or commands corresponding to intents expressed by user speech. In certain instances, commands may correspond to functions that are to be performed by the voice-controlled speech user interface embedded within device 101, and the command interpreter may in those cases provide device commands or instructions to the voice-controlled speech user interface for implementing such functions. The command interpreter can implement "built-in" capabilities that are used in conjunction with the voice-controlled speech user interface. The control service may be configured to use a library of installable applications, including one or more software applications or skill applications of this invention. The control service may interact with other network-based services (e.g., ALEXA from Amazon of Seattle, WA, CORTANA from Microsoft Corporation of Redmond, Wash., GOOGLE NOW from Google Inc. of Mountain View, Calif., and SIRI from Apple Inc. of Cupertino, Calif.) to obtain information or to access additional databases, applications, or services on behalf of the user. A dialog management component is configured to coordinate dialogs or interactions with the user based on speech as recognized by the ASR component and/or understood by the NLU component. The control service may also have a TTS component responsive to the dialog management component to generate speech for playback on the voice-controlled speech user interface. Vice versa, the control service may also have an STT component responsive to the dialog management component to convert speech to text for sending text-based messages. These components may function based on models or rules, which may include acoustic models, specified grammars, lexicons, phrases, responses, and the like, created through various training techniques. The dialog management component may utilize dialog models that specify logic for conducting dialogs with users. A dialog comprises an alternating sequence of natural language statements or utterances by the user and system-generated speech or textual responses. The dialog models embody logic for creating responses based on received user statements in order to prompt the user for more detailed information regarding the intents or to obtain other information from the user. An application selection component or intent router identifies, selects, and/or invokes installed device applications and/or installed server applications in response to user intents identified by the NLU component. In response to a determined user intent, the intent router can identify one of the installed applications capable of servicing the user intent. The application can be called or invoked to satisfy the user intent or to conduct further dialog with the user to further refine the user intent. Each of the installed applications may have an intent specification that defines the serviceable intent. The control service uses the intent specifications to detect user utterances, expressions, or intents that correspond to the applications. An application intent specification may include NLU models for use by the natural language understanding component.
In addition, one or more installed applications may contain specified dialog models that create and coordinate speech interactions with the user. These dialog models may be used by the dialog management component to create and coordinate dialogs with the user and to determine user intent either before or during operation of the installed applications. The NLU component and the dialog management component may be configured to use the intent specifications of the applications to conduct dialogs, to identify expressed intents of users, to identify and use the intent specifications of installed applications, in conjunction with the NLU models and dialog models, to determine when a user has expressed an intent that can be serviced by the application, and to conduct one or more dialogs with the user.
As an example, in response to a user utterance, the control service may refer to the intent specifications of multiple applications, including both device applications and server applications, for example, to identify a "Drivesafe" intent. The service may then invoke the corresponding application or "skill" containing instructions to process the intent. Upon invocation, the application may receive an indication of the determined intent and may conduct or coordinate further dialogs with the user to elicit further intent details. Upon determining sufficient details regarding the user intent, the application may perform its designed functionality in fulfillment of the intent. In one embodiment, said "skill" may be developed using application tools from vendors (e.g., ALEXA from Amazon of Seattle, WA, CORTANA from Microsoft Corporation of Redmond, Wash., GOOGLE NOW from Google Inc. of Mountain View, Calif., and SIRI from Apple Inc. of Cupertino, Calif.) providing cloud control services.
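As a non-limiting sketch of how such a skill back end might be structured, the Python handler below follows the general shape of an ALEXA-style request/response exchange for a hypothetical "DrivesafeIntent." The intent name, the spoken responses, and the hosting arrangement (e.g., an AWS Lambda function) are assumptions for illustration and are not part of this disclosure.

```python
def lambda_handler(event, context):
    # Minimal ALEXA-style request handler for a hypothetical "DrivesafeIntent".
    request = event.get("request", {})
    intent_name = request.get("intent", {}).get("name")

    if request.get("type") == "IntentRequest" and intent_name == "DrivesafeIntent":
        speech = "Drive safe mode is on. I will read incoming messages aloud."
    else:
        speech = "Say 'enable drive safe mode' to begin."

    # Response envelope in the general format expected by the voice service.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```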
According to a principle of the invention, device 101 of FIG. 1 is operable in conjunction with a voice-control service server 401, additional resident software applications, or another remote server 402, for executing AI digital assistance functions to perform STT/TTS conversion and audio I/O functions. FIG. 4 is an illustration 400 of the components of remote server 402 providing STT/TTS conversion and delivery services. The remote servers preferably comprise application software modules that include one or more of: an I/O processing module 403, a text-to-speech (TTS) processing module 404, a phonetic alphabet conversion module 405, a user database 406, a vocabulary database 407, a service processing module 408, a task flow processing module 409, a speech-to-text (STT) processing module 410, and a speech synthesis module 411. Each of these modules can access one or more of the following systems or data and models, or a subset thereof: ontology, vocabulary index, user data, task flow models, service models, and ASR systems of service 401.
In an embodiment, using the processing modules, data, and models implemented by the remote servers, the AI digital assistant can perform the following, without limitation: converting speech input into text; converting text output into speech; identifying a user's intent expressed in a natural language input received from the user; actively eliciting and obtaining information needed to fully infer the user's intent; determining the task flow for fulfilling the inferred intent; and executing the task flow to fulfill the inferred intent. When a user request received by the I/O processing module includes speech input, the I/O processing module can forward the speech input to the STT processing module (or a speech recognizer) for speech-to-text conversion. Vice versa, when a user request received by the I/O processing module includes text input, I/O processing module 403 can forward the text input to TTS processing module 404 for text-to-speech conversion. The speech synthesis module 411 can be configured to synthesize speech outputs for presentation to the user. Speech synthesis module 411 synthesizes speech outputs based on text provided by the digital assistant services of server 401, which may be in the form of outgoing (sent) or incoming (received) text messages, email, voicemail, text documents, social media notifications, social media streams (e.g., Facebook postings, Twitter feed, etc.), video, video streams, podcasts, webpage contents, GPS navigation information, or the like. In some embodiments, transmitted and/or received instant messages may include graphics, photos, audio files, video files, and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS). For example, the generated dialogue response can be in the form of a text string that speech synthesis module 411 can convert to an audible speech output. A text string may require one or more processing steps that include, but are not limited to: text pre-processing (e.g., spell-check), text processing (e.g., standardization and normalization, grapheme-to-phoneme conversion, dictionary-based lookup), and speech synthesis (e.g., waveform generation and output). Speech synthesis module 411 can use any appropriate speech synthesis technique to generate speech outputs from text, including, but not limited to, concatenative synthesis, unit selection synthesis, diphone synthesis, domain-specific synthesis, formant synthesis, articulatory synthesis, Hidden Markov Model (HMM) based synthesis, and sinewave synthesis. In some examples, speech synthesis module 411 can be configured to synthesize individual words based on phonemic strings corresponding to the words. For example, a phonemic string can be associated with a word in the generated dialogue response. The phonemic string can be stored in metadata associated with the word. The speech synthesis module can be configured to directly process the phonemic string in the metadata to synthesize the word in speech form. In a preferred embodiment, the speech synthesis can be performed on one or more remote servers with high processing power or resources, preferably to obtain higher quality and faster speech outputs, to be sent to electronic device 101 of FIG. 1, to address the shortcomings of the conventional VCS.
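A minimal Python sketch of the text-processing stages named above is shown below. The toy lexicon, the abbreviation table, and the helper function names are assumptions for illustration only; the waveform-generation stage performed by the speech synthesis module is omitted.

```python
import re

def preprocess(text):
    # Pre-processing stage (here limited to whitespace cleanup).
    return re.sub(r"\s+", " ", text).strip()

def normalize(text):
    # Standardization/normalization stage; a real normalizer also expands
    # numbers, dates, units, and so on.
    replacements = {"St.": "Street", "Rd.": "Road", "mins": "minutes"}
    for abbrev, full in replacements.items():
        text = text.replace(abbrev, full)
    return text

def grapheme_to_phoneme(word, lexicon):
    # Dictionary-based G2P lookup with a naive letter-level fallback.
    return lexicon.get(word.lower(), list(word.lower()))

def phoneme_sequence(text, lexicon):
    # Produces the phoneme sequence a waveform generator would consume.
    text = normalize(preprocess(text))
    return [grapheme_to_phoneme(w, lexicon) for w in text.split()]

lexicon = {"turn": ["T", "ER1", "N"], "left": ["L", "EH1", "F", "T"]}
print(phoneme_sequence("Turn  left on Main St.", lexicon))
```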
FIG. 5 schematically illustrates an architecture 500 for an embodiment of a dynamically triggered vehicle telematics risk prediction/risk scoring system, in particular providing a dynamic, telematics-based connection to cloud computing server 308 or 309 of FIG. 3 and a telematics data aggregator/analytics module by means of a mobile telematics cellular communication device 101 of FIG. 1 executing the mobile telematics software applications described herein.
The telematics data aggregator/analytics score generating module comprises an event detection component 501 and a scoring function component 502. Event detection component 501 receives from the vehicle telematics device 101, through the communication network as described in FIG. 3, various operational data inputs including IMU data 503, GPS data 504, and driver data 505. The identified event 506 is combined with environmental contextual data 507 (e.g., weather, terrain, etc.) and, for example but not limited to, other data 508, and is fed into the scoring function component 502 for processing. The output of the telematics data aggregator/analytics module is a score element 509. The score element 509 may be, but is not limited to, a driver score, a driver behavior score, a vehicle operating/status score, a safety score, a driving condition score, a road condition score, a weather-related score, a risk score, a distraction score, or a contextual or environment score. The vehicle telematics communication device and cloud computing server applications react in real time, dynamically, to captured operational or contextual parameters, particularly monitored and captured vehicle parameters during operation. In some embodiments, the present invention also provides telematics-based automated risk profiles, alerts, and real-time notifications. The inventive system provides a structure for the use of telematics together with a real-time risk monitoring, assessment, analysis, and management insurance system. In a preferred embodiment, the vehicle telematics system captures driver behavior data; analyzes, assesses, and synthesizes one or more risk profiles based on one or more score 509 outputs; and the resulting profile is preferably provided for audio-visual output via the AI digital assistant to a driver, a family member, or an insurer, using proprioceptive sensors of the device for sensing operating parameters of the motor vehicle and/or exteroceptive sensors for sensing environmental parameters during operation of the motor vehicle. The score generator module measures and/or generates a single or compound set of variable scoring parameters profiling the use and/or style and/or contextual condition/data 507 of driving during operation of the motor vehicle, based on preset, threshold, triggered, captured, or otherwise monitored operating parameters or environmental parameters.
In a preferred embodiment, the vehicle telematics system captures driver behavior data; analyzes, assesses, and synthesizes one or more risk profiles based on one or more score 509 outputs; and the resulting profile is preferably provided for audio-visual output via the AI digital assistant to a driver, a fleet manager, or an insurer, using proprioceptive sensors of the said device for sensing operating parameters of the motor vehicle and/or exteroceptive sensors for sensing environmental parameters during operation of the motor vehicle. The score generator module measures and/or generates a single or compound set of variable scoring parameters profiling the use and/or style and/or contextual condition/data 507 of driving during operation of the motor vehicle, based on preset, threshold, triggered, captured, or otherwise monitored operating parameters or environmental parameters. In one embodiment, the AI digital assistant may be used to remind, teach, or guide a driver regarding, but not limited to, company driving policy, maintenance policy, driving guidelines, regulations, training instructions, or the like.
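By way of a non-limiting illustration of the scoring function component 502, the Python sketch below maps detected event counts and contextual multipliers onto a single 0-100 score element. The event weights, context multipliers, and scale are assumptions chosen for the example rather than values disclosed herein.

```python
def trip_score(events, context, miles):
    # events: counts of detected events over a trip,
    #   e.g. {"harsh_brake": 3, "harsh_accel": 1, "phone_use": 2}
    # context: contextual multipliers, e.g. {"rain": 1.2, "night": 1.1}
    weights = {"harsh_brake": 4.0, "harsh_accel": 3.0, "phone_use": 6.0}

    penalty = sum(weights.get(name, 1.0) * count for name, count in events.items())
    penalty = penalty / max(miles, 1.0) * 100.0   # normalize per 100 miles driven
    for multiplier in context.values():
        penalty *= multiplier                     # adverse conditions raise the penalty

    # Map the penalty onto a 0-100 score element, 100 being the safest trip.
    return max(0.0, 100.0 - penalty)

print(trip_score({"harsh_brake": 3, "phone_use": 2}, {"rain": 1.2}, miles=120))
```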
FIG. 6 schematically illustrates an architecture 600 for an embodiment of a dynamically triggered vehicle telematics vehicle health and maintenance predictive system, in particular providing a dynamic, telematics-based connection to cloud computing server 308 or 309 of FIG. 3 and a telematics data analytics module by means of a mobile telematics cellular communication device 101 of FIG. 1 executing the mobile telematics software applications described herein. The telematics data analytics generating module comprises an event detection component 601 and a fault hazard function estimation component 602. Event detection component 601 receives from the vehicle telematics device 101, through the communication network as described in FIG. 3, various operational data inputs including IMU data 603, GPS data 604, and vehicle operation data 605. The identified event 606 is combined with environmental contextual data 607 (e.g., weather, terrain, etc.) and, for example but not limited to, other data 608, and is fed into the fault hazard estimator 602 for processing. The fault estimator 602 preferably incorporates the method 700 of FIG. 7 (discussed below) to determine one or more survival functions for the vehicle or a vehicle component. The output of the telematics data analytics module is a score element 609. The score element 609 may be, but is not limited to, a vehicle operating/status score, a vehicle maintenance score, a vehicle health score, a vehicle component's status, or the like. The score may be stored, transmitted, or sent to one or more of the said remote servers and stored within one or more databases.
Exemplary databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle or Sybase databases. Alternatively, such databases may be implemented using standardized data structures, such as an array, hash, linked list, structured text file (e.g., XML), or table, or as object-oriented databases (e.g., ObjectStore, Poet, Zope, etc.). The scores may be compared against logic defined by pre-set, threshold, captured, or monitored vehicle component, vehicle health, or vehicle part status parameters to generate an alert, such as a maintenance alert, with such alerts preferably communicated to the driver via the AI digital assistant described herein. The vehicle telematics communication device and cloud computing server applications react in real time, dynamically, to captured operational or contextual parameters, particularly monitored and captured vehicle parameters or health during operation.
In another embodiment, the vehicle health and maintenance predictive system measures and/or generates a single or compound set of component variables or fault parameters profiling the condition of a vehicle component during operation of the motor vehicle, based on, but not limited to, preset, threshold, triggered, captured, or otherwise monitored vehicle component operating functions or parameters. In one embodiment, the analysis predicts the type of fault and identifies possible contributing factors. In yet another embodiment, the analysis correlates the fault type to a system-level failure log and correlates the fault type extracted from time-series analytics to predict, preferably in real time, system-level occurrences. In an alternative embodiment, a risk score is generated and communicated to the fleet manager and/or driver, which may include a measured maintenance factor (e.g., maintenance delinquency) and a surveillance factor extracted from the automotive data associated with the motor vehicle or the use of active safety features. In various embodiments, machine learning algorithms are used for analyses and predictions, including but not limited to, Support Vector Machine (SVM), Artificial Neural Network, Logistic Regression, Decision Tree, Random Forest, or the like. In other embodiments, failure hazard estimation models are used to make predictions, including but not limited to, Cox's proportional hazards, Kaplan-Meier, or the like.
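As a non-limiting sketch of fault-type prediction with one of the algorithms named above, the following Python example trains a scikit-learn Random Forest on a handful of hypothetical telematics readings. The feature set, the labels, and the values are invented for illustration; a deployed model would train on large volumes of captured vehicle data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training rows: [coolant_temp_C, tire_pressure_psi, engine_rpm, odometer_km]
X = np.array([
    [92, 32.5, 2100, 45000],
    [118, 31.8, 3400, 88000],
    [95, 22.0, 2300, 60500],
    [90, 33.1, 1900, 12000],
    [121, 32.0, 3600, 95000],
    [94, 21.5, 2500, 70000],
])
# Hypothetical fault type observed after each reading.
y = np.array(["none", "cooling", "tire", "none", "cooling", "tire"])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[117, 31.9, 3300, 90000]]))  # expected to resemble the "cooling" cases
```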
FIG. 7 illustrates a method 700 of the invention for obtaining the most fitted survival function derived from different groups of vehicle health parameters. Health parameters may relate to an engine, a tire, a brake, etc. In step 701, the telematics data time series of one or more vehicle components is broken down into successive survival lives that are divided by the component's failure events (e.g., a blown tire). The telematics data received from the TPMS can be used to detect a severe failure event. It is anticipated that the last, open-ended time interval of component survival, representing recovery from a failure with no observed failure event during the analysis period, is treated as right-censored data. In step 702, the fleet manager defines the time-varying periods of the survival function to represent the dynamically changing component failure hazard over time; preferably the periods are optimally set to provide a statistically sufficient/significant, dynamic, and accurate survival function. In step 703, the failure hazard h(t) for each telematics entry at time t experienced by the vehicle component over a survival life period is quantified and sampled as the ratio between (1) the net operating total hours consumed from the survival life start time to the corresponding time t, and (2) the survival life length L, taken as the difference in net tire operating total hours experienced during the survival period. In step 704, outliers in the telematics data entries are identified and eliminated if the absolute studentized residual of any of their fields is greater than 3.0.
In step 705, the telematics entries are distributed between the time-varying periods of the survival function, based on their time (t) values.
In step 706, the telematics entries of each time-varying period are randomly divided into two equal groups: (1) an estimation group that is used to estimate the hazard function regression coefficients (see step 707); and (2) a prediction group that is used to validate the hazard function and its estimated coefficients (see step 708).
In step 707, for each time period, the survival function baseline failure rate h0(t) and coefficient vector b are estimated by applying the data linearization regression technique to the estimation group of telematics entries. The regression population includes all of the telematics data estimation group within the corresponding time-varying survival function period. The fitness of the generated survival function and its coefficients can be evaluated using (1) the p-value for the constant and each covariate coefficient as generated by the regression analysis, which is used to test the hypothesis of survival function dependency on each of the covariates; (2) coefficients of determination (R-square, Multiple R-square, and adjusted R-square) to test the fit of the resulting survival function to the observed data; and (3) the analysis of variance (ANOVA) significance level F, which quantifies the probability that the proposed function does not explain the variation in the equipment hazard.
In step 708, for each time period, the survival function coefficients are validated by using the survival function to calculate the failure hazard estimate values for every telematics entry in the prediction group and comparing the estimates with the observed values. The validity is assessed using the Pearson coefficient of correlation (Rcorr) and Student's t-test to examine the hypothesis that no relation exists between the observed and estimated hazard values. In addition, the variance between estimated and observed failure hazard values is quantified using the root mean square error (RMSE), where smaller values reflect higher prediction accuracy.
In step 709, steps 707 and 708 are repeated to experiment with different combinations of the proposed covariates and to find the survival function coefficients that provide the maximal fit to the estimation telematics data group (p-value, R-square, F) and the minimal variance with the prediction data group (i.e., RMSE) with validated correlation (Rcorr and t-test).
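A minimal Python sketch of the validation statistics used in steps 708-709 is shown below; the observed and estimated hazard values are invented for the example. SciPy's Pearson routine also returns the p-value of the associated t-test on the no-correlation hypothesis.

```python
import numpy as np
from scipy.stats import pearsonr

def validate_hazard_predictions(observed, estimated):
    observed = np.asarray(observed, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    r_corr, p_value = pearsonr(observed, estimated)        # Rcorr and its t-test p-value
    rmse = np.sqrt(np.mean((observed - estimated) ** 2))   # smaller RMSE = better fit
    return {"Rcorr": r_corr, "p_value": p_value, "RMSE": rmse}

print(validate_hazard_predictions([0.12, 0.30, 0.55, 0.71],
                                  [0.10, 0.34, 0.50, 0.69]))
```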
In one embodiment, the method can use, for example, to calculate the survival function: (1) tire pressure warning lights to model failure events and survival period boundaries; (2) the tire's total run hours to calculate the failure hazard; and (3) additional parameters as the covariates of the survival function, which can include, but are not limited to, tire temperature, vehicle speed, vehicle operation hours, external environmental temperature, travel terrain, vehicle idling hours, and vehicle odometer. In a preferred embodiment, the estimated survival function enables a proactive maintenance tool to estimate vehicle and component failure probability.
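The sketch below illustrates, with invented data, how a proportional-hazards model of the kind named above could be fitted to tire survival lives with the Python lifelines library; right-censored lives are marked with a 0 in the event column. The columns, values, and covariate choice are assumptions for illustration only.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical records: one row per tire survival life (hours of operation until
# failure, or until the end of the analysis period for right-censored lives).
df = pd.DataFrame({
    "hours":         [310, 520, 150, 700, 480, 260, 610, 340],
    "failed":        [1,   1,   1,   0,   1,   0,   1,   1],   # 0 = right-censored
    "avg_tire_temp": [48,  55,  62,  45,  58,  50,  53,  60],  # covariate, deg C
    "avg_speed":     [62,  70,  80,  55,  75,  60,  68,  78],  # covariate, km/h
})

cph = CoxPHFitter()
cph.fit(df, duration_col="hours", event_col="failed")
cph.print_summary()                                # coefficients, p-values, fit statistics
print(cph.predict_partial_hazard(df.iloc[[0]]))    # relative failure hazard for one life
```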
FIG. 8 schematically illustrates an architecture 800 for an embodiment of a dynamically triggered vehicle telematics process for determining driver behavior. In a non-limiting embodiment, a vehicle motion and orientation rate 801 is determined with inputs from IMU sensors 802. Similarly, a vehicle heading and speed variation 803 is determined from GPS data 804. Elements 801 and 803 are combined, via linear and/or non-linear methods, by sensor aggregator 805, and the data is transmitted to analytics engine 806, which may reside on one or more remote cloud servers. The analytics engine may access one or more databases 808, residing on the same server or another server, containing analytics logic, rules, algorithms, or the like to process, analyze, assess, or determine driver behavior 809, and subsequently a driving score 509 of FIG. 5.
In an embodiment, the variable driving score generated can include, for example but not limited to, speed and/or acceleration and/or braking and/or cornering and/or jerking, and/or a measure of distraction parameters comprising mobile phone usage while driving, and/or a measure of fatigue parameters. The variable contextual/environmental score can include, for example but not limited to, road condition, road topology, traffic, road type and/or number of intersections and/or tunnels and/or elevation, and/or measured time-of-travel parameters, and/or measured weather parameters, and/or measured location parameters, and/or measured distance-driven parameters, and/or neighborhood parameters. In an alternative embodiment, the risk scores may include a measured maintenance factor (e.g., maintenance delinquency) and a surveillance factor extracted from the automotive data associated with the motor vehicle or the use of active safety features. The telematics-based AI digital assistant feedback means of the system may, for example, comprise a dynamic alert feed via a data link to the motor vehicle's automotive control circuit, wherein the AI digital assistant alerts drivers immediately to one or more performance measures including, but not limited to, tachometer reading (e.g., high RPM), unsteady driving condition, excessive engine power, harsh acceleration, road anticipation, and/or ECO drive system. The score generator may incorporate additional information to determine a score, including a vehicle safety score, a cyber risk score, a software certification/testing risk score, an NHTSA level risk score, or the like. The telematics system enables real-time dynamic driver adaptation and improvement, providing instant feedback to drivers, training aids, behavior modification techniques, or the like, to ensure safe and secure driving.
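As a non-limiting sketch of the IMU/GPS aggregation feeding the analytics engine, the Python example below derives a rough speed and heading from consecutive GPS fixes and combines them with an IMU-derived acceleration magnitude into a simple feature vector and behavior label. The thresholds and the flat-earth distance approximation are assumptions for illustration; a production aggregator might instead fuse the streams with a Kalman filter.

```python
import math

EARTH_RADIUS_M = 6371000

def gps_speed_heading(fix1, fix2, dt_s):
    # Rough speed (m/s) and heading (degrees from north) from two (lat, lon)
    # fixes taken dt_s seconds apart, using a flat-earth approximation.
    lat1, lon1 = map(math.radians, fix1)
    lat2, lon2 = map(math.radians, fix2)
    dx = (lon2 - lon1) * math.cos((lat1 + lat2) / 2) * EARTH_RADIUS_M  # east
    dy = (lat2 - lat1) * EARTH_RADIUS_M                                # north
    speed = math.hypot(dx, dy) / dt_s
    heading = math.degrees(math.atan2(dx, dy)) % 360
    return speed, heading

def aggregate(imu_accel_mag, speed, heading_change_deg_s):
    # Stand-in for sensor aggregator 805: bundles features for the analytics
    # engine and attaches a naive behavior label.
    features = {"accel": imu_accel_mag, "speed": speed, "turn_rate": heading_change_deg_s}
    harsh_turn = features["turn_rate"] > 15 and features["speed"] > 8
    return features, ("harsh_cornering" if harsh_turn else "normal")

speed, heading = gps_speed_heading((32.78, -79.93), (32.7803, -79.9296), dt_s=5)
print(aggregate(imu_accel_mag=2.4, speed=speed, heading_change_deg_s=18))
```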
In another embodiment, the invention provides for a device, methods, and a system that enable insurers to provide one or more insurance quotes based on the score and other relevant telematics data (e.g., automatic capture and analysis of risk scores and reaction to data) afforded by the platform. FIG. 9A is an illustration 900 of a process that occurs when a customer requests a product (e.g., an underwriting and/or insurance product) from an underwriter, customer service representative (CSR), distributor, underwriting system, insurer, insurance agent, or the like. According to some embodiments, the method may be illustrative of a process of self-service underwriting product pricing (such as the customer pricing an insurance policy online).
In some embodiments, the method may comprise initiating the quote process, at 901. An underwriter and/or customer may, for example, utilize an interface, such as an interface provided by client terminals 302 or 303 of FIG. 3, to search for, identify, and/or otherwise open or determine an existing account. In some embodiments, an account search may comprise an account login and/or associated credential check (e.g., password-protected account login). An account search may be based, in some embodiments, on a customer name, business name, account number, and/or other identification information that is or becomes known or practicable. In some embodiments, a computerized processing device such as a PC or computer server described herein, and/or a software program described herein and/or interface, may conduct the search and/or may receive information descriptive of the search and/or one or more indications thereof. According to some embodiments, the method may comprise a determination, at 902, as to whether vehicle telematics described herein will be utilized in association with the desired policy/product. An agent, CSR, and/or underwriter may inquire, for example, as to whether a customer desires (and/or will allow) the use of vehicle telematic data (e.g., personal data) in association with the desired policy/product. In some embodiments, information related to and/or descriptive of vehicle telematics may be received at 902. Such information may include, for example but not limited to, information descriptive of a quantity and/or type of vehicle and/or information descriptive of the vehicle. In the case that it is determined at 902 that vehicle telematics will not be utilized, the method may proceed directly to determine whether to accept and/or modify the application/request, at 903. According to some embodiments, such as in the case that it is determined that the application should and/or will be accepted and/or modified (at 903), the method may continue to product pricing, quote, and sale at 904. The product pricing may, according to some embodiments, comprise policy creation that may, for example, be based on policy type selection, customer detail entry, and/or account searching and/or data (e.g., a number and/or percentage of vehicles utilizing vehicle telematic devices). An underwriting program and/or associated device and/or interface may create a policy number, session, and/or account identifier, log, and/or other record of policy type selection, for example, in reference to the customer and/or underwriter desiring to price the policy or product. In some embodiments, the product pricing may comprise coverage selection and/or determination. The customer and/or underwriter may select various available coverage levels and/or types for the policy.
According to some embodiments, interface options may allow various available coverage parameters to be selected and/or input. In some embodiments, a computerized processing device such as a PC or computer server and/or a software program and/or interface described herein may receive the coverage selection and/or one or more indications thereof. In some embodiments, the underwriter may provide a quote at 904 for any number of underwriting products such as a quote for each of a plurality of insurance product types and/or tiers. According to some embodiments, the underwriter may determine, define, generate, and/or otherwise identify the quote at 904. The quote may then, for example, be provided, transmitted, displayed, and/or otherwise output to the customer via any methodology that is or becomes desirable or practicable. The quote provided (e.g., by the underwriting entity) may comprise (but is not limited to) one or more of the following: premium/price (which may include a high-risk price and/or a low-risk price), insurance and/or surety capacity (e.g., an aggregate line of credit), collateral requirements, indemnity requirements, international bond restrictions, surety product type restrictions, other risk restrictions/exclusions, and/or financial reporting requirements.
In some embodiments, such as in the case that it is determined at 902 that vehicle telematics will be utilized (or are desired to be utilized), the method may proceed to the risk management system 905. Various methodologies may be utilized, for example, to determine a level of risk associated with the customer (e.g., based on vehicle telematics and/or an extent and/or type of utilization thereof). In some embodiments, the risk management system 905 may comprise a risk control inspection interview, at 906. Risk control personnel (and/or electronic monitoring) may be utilized, for example, to inspect the customer's current or proposed use of vehicle telematics. Types, quantities, and/or configurations of vehicle telematic devices may be reviewed and inspected, for example, and/or safety program procedures and/or personnel may be reviewed. In some embodiments, results of the risk control inspection may be analyzed and/or processed during a risk control evaluation, at 906. Results of the risk control evaluation may then be utilized during the determination of whether to accept and/or modify the application/policy, at 907, and/or during the product pricing at 904. Less desirable and/or effective (actual or predicted) safety programs utilizing vehicle telematics may, for example, result in higher perceived risk and accordingly warrant higher pricing/premiums for the desired product. In some embodiments, an application and/or policy may be declined at 908. According to some embodiments, the risk management system may also or alternatively comprise a risk control interview at 906. The customer and/or a representative of the customer, such as a safety program manager, may, for example, be interviewed (in person and/or via telephone or online via the software application described herein) to gather data regarding the customer's safety program and/or vehicle telematics usage. The information gathered during the interview may then be utilized, for example, to inform and/or influence the risk control evaluation. In some embodiments, the risk management system 905 may also or alternatively comprise receiving vehicle telematic data, at 909.
Telematic data from one or more vehicle sensors or electronic device 101 of FIG. 1 and/or data obtained from a vehicle telematics service/data provider (and/or from the customer themselves) may, for example, be received by an insurance underwriter and/or risk control engineer.
In some embodiments, the received data may be analyzed at 910. Vehicle telematic data may be processed, for example, to determine an expected level of risk, risk profile, driver score, or driver behavior (past, present, or predicted) associated with the customer using the methods described herein (e.g., methods described in FIG. 5 and FIG. 6). In some embodiments, the results of the analyzing of the vehicle telematic data at 908 may be provided to and/or utilized in the risk control evaluation at 909. In some embodiments, such as in the case that an insurance product utilizing telematic data is already in place with the customer, the results of the analyzing of the vehicle telematic data at 908 may be provided directly to and/or may directly influence the determination of whether to accept and/or modify the product pricing at 904. Effective and/or diligent implementation of a safety program may, for example, allow the customer to earn a discount in premiums (examples further described below); such information may be conveyed in real time to a customer/driver via the AI digital assistant described herein. According to some embodiments, the information generated by the risk management system 905 is an integral solution to usage-based insurance (UBI) schemes including, but not limited to, pay-as-you-drive (PAYD), pay-how-you-drive (PHYD), manage-how-you-drive (MHYD), or the like. In PHYD, the risk management system may allow an insurer to offer a discount based on driving behavior. In PAYD, the risk management system may, for example, offer a discount based on mileage (how much a person drives) and not where or how. In an alternative embodiment, the information generated by the system may be combined with additional information, including, but not limited to, data and information on the insurance policy, individual driving, crash forensics, credit scores, driving statistics, historic claims, market databases, driving license points, claims statistics, rewards, discounts, and contextual data for weather, driving conditions, road type, environment, or the like. In yet another embodiment, the platform provides an insurer with a comprehensive risk-transfer structure comprising device and vehicle sensor data collection, and/or combination with ADAS (advanced driver assistance systems) data, for accurate risk analysis and incorporation within an automated risk-transfer system/coverage, claims notification, and value-added services (e.g., crash reporting, post-accident services, Emergency-Call/Breakdown-Call, vehicle theft, driver coaching/scoring, rewards, driver safety training, etc.).
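As a purely illustrative sketch of how a PHYD or PAYD scheme might consume such a score, the snippet below applies assumed score thresholds, mileage bands, and discount rates; actual UBI pricing rules are set by the insurer and are not part of the disclosure.

```python
# Illustrative UBI premium adjustment (assumed thresholds and rates, not the
# disclosed pricing model). PHYD: discount scales with the driving score;
# PAYD: discount depends only on annual mileage.
def phyd_premium(base_premium: float, driving_score: float) -> float:
    if driving_score >= 90:
        discount = 0.20
    elif driving_score >= 75:
        discount = 0.10
    elif driving_score >= 60:
        discount = 0.05
    else:
        discount = 0.0        # or a surcharge, depending on the insurer's rules
    return round(base_premium * (1.0 - discount), 2)

def payd_premium(base_premium: float, annual_km: float) -> float:
    # Mileage-only adjustment: low-mileage drivers pay less, regardless of style.
    discount = 0.15 if annual_km < 8000 else 0.05 if annual_km < 15000 else 0.0
    return round(base_premium * (1.0 - discount), 2)

print(phyd_premium(1200.0, driving_score=82.0))  # 1080.0
print(payd_premium(1200.0, annual_km=7200))      # 1020.0
```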
In still additional embodiments, the device, methods, and system of the invention can communicate with one or more servers external to the ecosystem described herein. The one or more servers may be remote servers accessible through a cellular network, a communication network, or through the Internet. The server may comprise an enterprise server of a manufacturer, a vehicle service provider, a vehicle parts vendor, an OEM telematics service provider, or the like. The enterprise server may contain one or more enterprise resource planning (ERP) platforms accessible through one or more front-end APIs. In one embodiment, the ERP platform and its functions are accessible using the voice-controlled AI digital assistant described herein. In a preferred embodiment, one or more users of the fleet management ecosystem described herein, such as a manufacturer, a vendor, or a service provider, are enabled to receive an order from a driver or a fleet manager and to submit one or more quotes based on relevant telematics data (e.g., tire type, size, oil, coolant, etc.) afforded by the platform.
FIG. 9B is an illustration 910 of an alternate embodiment of the invention directed to a process that occurs when a customer requests a part or service from a vendor, a manufacturer, a vehicle service provider, or the like. According to some embodiments, the method may be illustrative of a process of self-service product pricing and purchase (such as the customer pricing a vehicle part online). In some embodiments, the method may comprise initiating the quote process, at the front end 921. A fleet manager may, for example, utilize an application interface, such as an interface provided by client terminals 302 or 303 of FIG. 3, to search for, identify, and/or otherwise open or determine an existing account. In some embodiments, an account search may comprise an account login and/or associated credential check (e.g., password-protected account login). An account search may be based, in some embodiments, on a customer name, business name, account number, voice-signature, and/or other identification information that is or becomes known or practicable. In some embodiments, a computerized processing device such as a PC or computer server described herein, and/or a software program described herein and/or interface, may conduct the search and/or may receive information descriptive of the search and/or one or more indications thereof.
According to some embodiments, the method may comprise a determination, at 922, as to whether vehicle telematics described herein will be utilized in association with the desired product, part, or service request. A vendor representative may inquire, for example, as to whether a customer desires (and/or will allow) the use of vehicle telematic data (e.g., personal data) in association with the desired product, vehicle replacement parts, or the like. In some embodiments, information related to and/or descriptive of vehicle telematics may be received at 922. Such information may include, for example but not limited to, information descriptive of a quantity and/or type of vehicle and/or information descriptive of the vehicle, vehicle health, vehicle components, and vehicle supplies in need of replacement and/or service. In the case that it is determined at 922 that vehicle telematics will not be utilized, the method may proceed directly to determine whether to accept and/or modify the application/request, at 923. According to some embodiments, such as in the case that it is determined that the application should and/or will be accepted and/or modified (at 923), the method may continue to product pricing, quote, and sale at 924. The product pricing may, according to some embodiments, comprise the creation of a customer profile or product profile that may, for example, be based on product type selection, customer detail entry, and/or account searching and/or data (e.g., previous product purchases) or a maintenance program. In some embodiments, the product pricing may comprise warranty selection and/or determination. The customer may select various available warranty levels and/or maintenance schedules for the purchase. According to some embodiments, interface options may allow various available warranty options to be selected and/or input. In some embodiments, a computerized processing device such as a PC or computer server and/or a software program and/or interface described herein may receive the warranty selection and/or one or more indications thereof. In some embodiments, the vendor may provide a quote at 924 for any number of vehicle products, parts, or services, such as a quote for each of a plurality of available product types and/or tiers. According to some embodiments, the vendor may determine, define, generate, and/or otherwise identify the quote at 924. The quote may then, for example, be provided, transmitted, displayed, and/or otherwise output to the customer via any methodology that is or becomes desirable or practicable. The quote provided (e.g., by a vendor, manufacturer, or service provider) may comprise (but is not limited to) one or more of the following: price, warranty, parts replacement offer, installation offer, repair offer, maintenance schedule, discounts, coupons, or the like.
In some embodiments, such as in the case that it is determined at 922 that vehicle telematics will be utilized (or are desired to be utilized), the method may proceed to an ERP management system 925. Various methodologies may be utilized, for example, to determine a level of loyalty associated with the customer. In some embodiments, the ERP system 925 may comprise a product purchase request evaluation, at 926. Types, quantities, and/or configurations of vehicle components or parts available may be reviewed or automatically identified within a database 921. Results of the evaluation may then be utilized during the determination of whether to accept and/or modify, for example, a service request, at 927, and/or during the product pricing at 924. In some embodiments, a purchase request may be declined at 928, for various reasons (e.g., parts not available). According to some embodiments, the ERP system may also or alternatively comprise a customer interview at 926. The customer and/or a representative of the customer, such as a fleet manager, may, for example, be interviewed (in person, via telephone, or online via the software application or the AI digital assistant described herein) to gather data regarding the customer's needs for parts or services. The information gathered during the interview may then be utilized, for example, to inform and/or influence the product pricing or quote 924. In some embodiments, the ERP system 925 may also or alternatively comprise receiving vehicle telematic data, at 929. Telematic data from one or more vehicle sensors or electronic device 101 of FIG. 1 and/or data obtained from a vehicle telematics service/data provider (and/or from the customer themselves) may, for example, be received by a vendor representative. In some embodiments, the received data may be analyzed at 930. Vehicle telematic data may be processed, for example, to determine in advance an expected type or level of service required stemming from one or more Diagnostic Trouble Codes (DTCs). In some embodiments, the results of the analyzing of the vehicle telematic data at 928 may be provided to and/or utilized in the request evaluation at 926. In some embodiments, the results of the analyzing of the vehicle telematic data at 928 may be provided directly to and/or may directly influence the determination of whether to accept and/or modify the product pricing at 924. Effective and/or diligent implementation of a vehicle maintenance program may, for example, allow the customer to earn a discount; such information may be conveyed in real time to a customer/driver via the AI digital assistant described herein.
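A minimal sketch of anticipating a service need from reported DTCs is shown below. The code meanings are standard OBD-II definitions, while the mapping to service items is an assumed, illustrative vendor policy rather than the disclosed ERP logic.

```python
# Minimal sketch of anticipating a service need from reported Diagnostic Trouble
# Codes (DTCs). The code meanings are standard OBD-II definitions; the mapping to
# service items is an assumed, illustrative vendor policy.
SERVICE_CATALOG = {
    "P0301": "Cylinder 1 misfire detected - inspect ignition coil / spark plug",
    "P0420": "Catalyst efficiency below threshold - inspect catalytic converter",
    "P0171": "System too lean (bank 1) - check for vacuum leak / fuel trim",
}

def service_recommendations(dtcs: list[str]) -> list[str]:
    known = [SERVICE_CATALOG[c] for c in dtcs if c in SERVICE_CATALOG]
    unknown = [c for c in dtcs if c not in SERVICE_CATALOG]
    if unknown:
        known.append(f"Further diagnosis required for: {', '.join(unknown)}")
    return known

# Example: a telematics payload received at 929 carrying two active codes,
# one of which is not in the illustrative catalog.
for item in service_recommendations(["P0301", "P1XXX"]):
    print(item)
```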
Example 1
This example is intended to serve as a demonstration of the possible voice interactions between an AI digital assistant and a driver of a vehicle. The AI digital assistant uses a control service (e.g., Amazon Lex) available from Amazon.com (Seattle, WA). Access to skills requires the use of a device wake word ("Alexa") as well as an invocation phrase ("Drivesafe") for skills specifically developed for the device that embodies one or more components of the present invention. The following highlights one or more contemplated capabilities and uses of the invention:
[Table of example voice interactions between the driver and the AI digital assistant; rendered as an image in the source document and not reproduced here.]
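A minimal sketch of how one such spoken request might be serviced is shown below. It assumes a hypothetical "GetDrivingScore" intent handled by a bare AWS Lambda function returning the standard Alexa response envelope; the intent name, backend lookup, and wording are illustrative assumptions, not the skill code of the present invention.

```python
# Illustrative sketch only: a bare AWS Lambda handler for a hypothetical
# "GetDrivingScore" intent invoked as "Alexa, ask Drivesafe what is my driving
# score". Uses the standard Alexa response JSON envelope; the skill logic,
# intent name, and score lookup are assumptions, not the disclosed implementation.
def lambda_handler(event, context):
    request = event.get("request", {})
    intent = request.get("intent", {}).get("name") if request.get("type") == "IntentRequest" else None

    if intent == "GetDrivingScore":
        score = fetch_driving_score(event["session"]["user"]["userId"])  # hypothetical backend call
        speech = f"Your current driving score is {score} out of one hundred."
    else:
        speech = "Welcome to Drivesafe. You can ask for your driving score or vehicle health."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": intent == "GetDrivingScore",
        },
    }

def fetch_driving_score(user_id: str) -> int:
    # Placeholder: a real skill would query the telematics platform's score service.
    return 82
```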
Many different embodiments have been disclosed regarding the above descriptions and the drawings. It will be understood that it would be unduly repetitious to literally describe and illustrate every combination and sub-combination of these embodiments. Accordingly, the present specification, including the drawings, shall be construed to constitute a complete written description of all combinations and sub-combinations of the embodiments described herein, and of the manner and process of making and using them, and shall support claims to any such combination or sub-combination. In the drawings and specification, there have been disclosed various embodiments and, although specific terms are employed, they are used in a generic and descriptive sense only and not for purposes of limitation. Therefore, it will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments without departing from the scope or spirit of the invention. In view of the foregoing, it is intended that the invention covers modifications and variations of this disclosure within the scope of the following claims and their equivalents.

Claims

What is claimed is:
1. A vehicle telematics device, comprising:
at least one processor contained in a housing, a portion of the housing comprising a voice-controlled user interface; and
a memory in communication with the at least one processor, the memory storing executable instructions for causing the at least one processor to provide at least one selected from the group consisting of automated voice recognition response, natural language understanding, speech-to-text processing and text-to-speech processing,
wherein the vehicle telematics device is an electronic wireless communication device.
2. The vehicle telematics device of claim 1, further comprising:
an audio coder-decoder (CODEC) in communication with the processor; and
a speaker in communication with the audio CODEC,
wherein the memory further stores executable instructions for causing the at least one processor to broadcast feedback communication for a user and one or more network participants through the speaker.
3. The vehicle telematics device of claim 2, further comprising a microphone in communication with the audio CODEC, the microphone configured to receive a voice command from the user, convert the voice command to a voice signal, and send the voice signal to the audio CODEC,
wherein the audio CODEC is configured to encode the voice signal to produce an encoded voice signal, and to transmit the encoded voice signal to the at least one processor;
wherein the memory further stores executable instructions for causing the at least one processor to transmit the encoded voice signal to at least one voice translation service in communication with the at least one database server; and
wherein the memory further stores executable instructions for causing the at least one processor to receive, from the voice translation service, a verbal response to the user’s voice command, and to broadcast the response through the speaker.
4. The vehicle telematics device of claim 1, wherein the device is portable and attachable to an interior portion of a vehicle.
5. The vehicle telematics device of claim 1, wherein the device is configured to operate with a vehicle on-board communication system.
6. The vehicle telematics device of claim 5, further comprising:
one or more measuring devices for sensing GPS navigation information, operating parameters of the vehicle, and a driving environment of the vehicle.
7. A vehicle telematics system, comprising:
a wireless communication device, the wireless communication device comprising:
at least one processor;
a voice-controlled user interface; and
a memory in communication with the at least one processor, the memory storing executable instructions; and
one or more remote cloud-based servers configured to perform speech-to-text (STT) and text-to-speech (TTS) conversion services,
wherein the wireless communication device and the conversion services located on the one or more remote cloud-based servers serve as an Artificial Intelligence (AI) assistant, the AI assistant providing conversational interactions with a user utilizing automated voice recognition-response, natural language processing, and predictive algorithms, and
wherein information generated from interactions of the user with the AI assistant is stored in application software on the one or more remote cloud-based servers so as to be accessible to multiple users of the system.
8. The vehicle telematics system of claim 7, wherein the device is configured to operate with a vehicle on-board communication system.
9. The vehicle telematics system of claim 8, wherein the AI digital assistant provides a risk factor status to the user based on vehicle status, vehicle environment, or driver behavior.
10. The vehicle telematics system of claim 7, wherein the one or more remote cloud-based servers comprise one or more application software modules selected from the group consisting of an I/O processing module, a speech-to-text (STT) processing module, a phonetic alphabet conversion module, a user database, a vocabulary database, a service processing module, a task flow processing module, and a speech synthesis module.
PCT/US2019/030071 2018-04-30 2019-04-30 Vehicle telematic assistive apparatus and system WO2019213177A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201862664816P 2018-04-30 2018-04-30
US201862664812P 2018-04-30 2018-04-30
US201862664824P 2018-04-30 2018-04-30
US62/664,816 2018-04-30
US62/664,824 2018-04-30
US62/664,812 2018-04-30

Publications (1)

Publication Number Publication Date
WO2019213177A1 true WO2019213177A1 (en) 2019-11-07

Family

ID=68386649

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/030071 WO2019213177A1 (en) 2018-04-30 2019-04-30 Vehicle telematic assistive apparatus and system

Country Status (1)

Country Link
WO (1) WO2019213177A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080255722A1 (en) * 2006-05-22 2008-10-16 Mcclellan Scott System and Method for Evaluating Driver Behavior
US20080103781A1 (en) * 2006-10-28 2008-05-01 General Motors Corporation Automatically adapting user guidance in automated speech recognition
US20140270108A1 (en) * 2013-03-15 2014-09-18 Genesys Telecommunications Laboratories, Inc. Intelligent automated agent and interactive voice response for a contact center

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11037378B2 (en) * 2019-04-18 2021-06-15 IGEN Networks Corp. Method and system for creating driver telematic signatures
WO2021130678A1 (en) * 2019-12-23 2021-07-01 MDGo Ltd. Crash analysis device and methods of use
CN111142136A (en) * 2020-01-20 2020-05-12 苏州星恒通导航技术有限公司 Auxiliary driving device for engineering machinery
CN111768756A (en) * 2020-06-24 2020-10-13 华人运通(上海)云计算科技有限公司 Information processing method, information processing apparatus, vehicle, and computer storage medium
CN111768756B (en) * 2020-06-24 2023-10-20 华人运通(上海)云计算科技有限公司 Information processing method, information processing device, vehicle and computer storage medium
EP3944162A1 (en) * 2020-07-23 2022-01-26 Denso Corporation Method and system of managing a vehicle abnormality of a fleet vehicle
WO2022029808A1 (en) * 2020-08-07 2022-02-10 Ai Parts S.R.L. Automated predictive maintenance method of vehicles
IT202000019648A1 (en) * 2020-08-07 2022-02-07 Autolinee Nole’ S R L INTEGRATED ON-BOARD DEVICE AND ASSOCIATED KIT WITH IT FOR MONITORING AND LOCATION OF RENTAL VEHICLES, SUITABLE FOR MANAGING A FLEET OF VEHICLES
WO2022032237A1 (en) * 2020-08-07 2022-02-10 Talkgo, Inc. Voice-enabled external smart processing system with display
IT202000019822A1 (en) * 2020-08-07 2022-02-07 Ai Parts S R L AUTOMATED PREDICTIVE MAINTENANCE METHOD OF VEHICLES
CN112349337B (en) * 2020-11-03 2023-06-30 中科创达软件股份有限公司 Vehicle-mounted device detection method, system, electronic equipment and storage medium
CN112349337A (en) * 2020-11-03 2021-02-09 中科创达软件股份有限公司 Vehicle-mounted machine detection method, system, electronic equipment and storage medium
WO2022140178A1 (en) * 2020-12-21 2022-06-30 Cerence Operating Company Routing of user commands across disparate ecosystems
US11887411B2 (en) 2021-01-27 2024-01-30 Amazon Technologies, Inc. Vehicle data extraction service
EP4148702A1 (en) * 2021-09-13 2023-03-15 AEON MOTOR Co., Ltd. Portable device for vehicle-information integration and warning
US20230079801A1 (en) * 2021-09-13 2023-03-16 Aeon Motor Co., Ltd. Portable device for vehicle-information integration and warning
WO2023081628A1 (en) * 2021-11-02 2023-05-11 Caterpillar Inc. Systems and methods for determining machine usage severity
US11886179B2 (en) 2021-11-02 2024-01-30 Caterpillar Inc. Systems and methods for determining machine usage severity
US11902374B2 (en) 2021-11-29 2024-02-13 Amazon Technologies, Inc. Dynamic vehicle data extraction service
CN114157982A (en) * 2021-12-03 2022-03-08 智道网联科技(北京)有限公司 High-precision positioning method, device, equipment and storage medium
CN114157982B (en) * 2021-12-03 2024-03-22 智道网联科技(北京)有限公司 High-precision positioning method, device, equipment and storage medium
WO2023113717A1 (en) * 2021-12-13 2023-06-22 Di̇zaynvi̇p Teknoloji̇ Bi̇li̇şi̇m Ve Otomoti̇v Sanayi̇ Anoni̇m Şi̇rketi̇ Smart vehicle assistant with artificial intelligence
WO2024022860A1 (en) * 2022-07-28 2024-02-01 Itt Italia S.R.L. Method and kit for replacement of a brake pad or brake jaw installed in a vehicle brake system

Similar Documents

Publication Publication Date Title
WO2019213177A1 (en) Vehicle telematic assistive apparatus and system
US11375338B2 (en) Method for smartphone-based accident detection
JP6962316B2 (en) Information processing equipment, information processing methods, programs, and systems
US10078871B2 (en) Systems and methods to identify and profile a vehicle operator
US11436683B1 (en) System and method for incentivizing driving characteristics by monitoring operational data and providing feedback
US11562435B2 (en) Apparatus for a dynamic, score-based, telematics connection search engine and aggregator and corresponding method thereof
US10346925B2 (en) Telematics system with vehicle embedded telematics devices (OEM line fitted) for score-driven, automated risk-transfer and corresponding method thereof
US9667742B2 (en) System and method of conversational assistance in an interactive information system
US20170132016A1 (en) System and method for adapting the user-interface to the user attention and driving conditions
US20210225094A1 (en) Method and system for vehicular collision reconstruction
US11692836B2 (en) Vehicle safely calculator
US11734968B2 (en) Actionable event determination based on vehicle diagnostic data
Lin et al. Adasa: A conversational in-vehicle digital assistant for advanced driver assistance features
EP3570276A1 (en) Dialogue system, and dialogue processing method
KR20200006739A (en) Dialogue processing apparatus, vehicle having the same and dialogue processing method
US11449950B2 (en) Data processing systems with machine learning engines for dynamically generating risk index dashboards
WO2020072501A1 (en) Roadside assistance system
WO2021138341A1 (en) Pattern-based adaptation model for detecting contact information requests in a vehicle
CN110637327A (en) Method and apparatus for content push
KR20190011458A (en) Vehicle, mobile for communicate with the vehicle and method for controlling the vehicle
CN104769918A (en) Augmenting handset sensors with car sensors
KR20180041457A (en) Legal and insurance advice service system and method in the accident occurrence
JP7448350B2 (en) Agent device, agent system, and agent program
Qi et al. Autonomous Vehicles’ Car-Following Drivability Evaluation Based on Driving Behavior Spectrum Reference Model
US20240157934A1 (en) Systems and methods for generating vehicle safety scores and predicting vehicle collision probabilities

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19796585

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19796585

Country of ref document: EP

Kind code of ref document: A1