US20120215491A1 - Diagnostic Baselining - Google Patents

Diagnostic Baselining

Info

Publication number
US20120215491A1
Authority
US
United States
Prior art keywords
data
dus
aggregated
test
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/031,565
Other languages
English (en)
Inventor
Mark Theriot
Patrick S. Merg
Steve Brozovich
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Snap On Inc
Original Assignee
Snap On Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Snap On Inc filed Critical Snap On Inc
Priority to US13/031,565 (US20120215491A1)
Assigned to SNAP-ON INCORPORATED (assignment of assignors interest; see document for details). Assignors: THERIOT, Mark; BROZOVICH, STEVE; MERG, Patrick S.
Priority to CA3171201A (CA3171201A1)
Priority to CA2827893A (CA2827893C)
Priority to CN201280019046.7A (CN103477366B)
Priority to BR112013020413-3A (BR112013020413B1)
Priority to EP12716731.0A (EP2678832B1)
Priority to PCT/US2012/025802 (WO2012115899A2)
Publication of US20120215491A1
Priority to US14/260,929 (US11048604B2)
Priority to US17/325,184 (US20210279155A1)
Assigned to SNAP-ON INCORPORATED (assignment of assignors interest; see document for details). Assignors: BROZOVICH, ROY STEVEN
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/30: Monitoring
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 5/00: Registering or indicating the working of vehicles
    • G07C 5/08: Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C 5/0808: Diagnosing performance data
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01M: TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M 17/00: Testing of vehicles
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 23/00: Testing or monitoring of control systems or parts thereof
    • G05B 23/02: Electric testing or monitoring
    • G05B 23/0205: Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B 23/0259: Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults, characterized by the response to fault detection
    • G05B 23/0275: Fault isolation and identification, e.g. classify fault; estimate cause or root of failure
    • G05B 23/0281: Quantitative, e.g. mathematical distance; Clustering; Neural networks; Statistical analysis

Definitions

  • Vehicles such as automobiles, light-duty trucks, and heavy-duty trucks, play an important role in the lives of many people. To keep vehicles operational, some of those people rely on vehicle technicians to diagnose and repair their vehicle.
  • Vehicle technicians use a variety of tools in order to diagnose and/or repair vehicles.
  • Those tools may include common hand tools, such as wrenches, hammers, pliers, screwdrivers and socket sets, or more vehicle-specific tools, such as cylinder hones, piston-ring compressors, and vehicle brake tools.
  • the tools used by vehicle technicians may also include electronic tools, such as a vehicle scan tool or a digital voltage-ohm meter (DVOM), for use in diagnosing and/or repairing a vehicle.
  • the vehicle scan tool and/or DVOM can be linked via wired and/or wireless link(s) to other devices, perhaps to communicate data about the vehicle.
  • the vehicle scan tool and/or DVOM can provide a significant amount of data to aid diagnosis and repair of the vehicle.
  • typically, however, the data does not include contextual data, such as historical information.
  • the data is typically formatted such that data interpretation by skilled personnel, such as a vehicle technician, is required before a problem with the vehicle can be identified, diagnosed, and/or repaired.
  • an example embodiment can take the form of a method.
  • DUS-related data for a device-under-service (DUS) is received at a server device.
  • a determination is made at the server device that the DUS-related data is to be aggregated into aggregated data.
  • the determination that the DUS-related data is to be aggregated is based on a classification of the DUS-related data.
  • An aggregated-data comparison of the DUS-related data and the aggregated data is generated at the server device.
  • a DUS report based on the aggregated-data comparison is then generated at the server device.
  • the DUS report includes one or more sub-strategies. At least one of the one or more sub-strategies includes a sub-strategy-success estimate.
  • the DUS report is then sent from the server device.
  • an example embodiment can take the form of a client device that includes a memory, a processor, and instructions.
  • the instructions are stored in the memory.
  • the instructions cause the client device to perform functions.
  • the functions can include: (a) receiving a diagnostic request for a DUS, (b) sending, to the DUS, a DUS-test request to perform a test related to the diagnostic request, (c) receiving, from the DUS, DUS-related data based on the test, (d) sending the DUS-related data, (e) receiving a DUS report based on the DUS-related data, and (f) generating a DUS-report display of the DUS report.
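  • As an illustration only, the sequence of functions (a)-(f) above might be sketched in Python as follows; the Link class, message fields, and function names are hypothetical stand-ins, not an interface defined by the patent.

        # Hypothetical sketch of client-device functions (a)-(f); illustrative only.
        class Link:
            """Trivial in-memory stand-in for a wired/wireless link."""
            def __init__(self, responses):
                self.responses = list(responses)
                self.sent = []
            def send(self, msg):
                self.sent.append(msg)
            def receive(self):
                return self.responses.pop(0)

        def service_request(dus_link, server_link, diagnostic_request):      # (a) receive a diagnostic request
            dus_link.send({"dus_test_request": diagnostic_request["test"]})  # (b) send a DUS-test request
            dus_related_data = dus_link.receive()                            # (c) receive DUS-related data
            server_link.send(dus_related_data)                               # (d) send the DUS-related data
            dus_report = server_link.receive()                               # (e) receive a DUS report
            return "DUS REPORT\n" + "\n".join(dus_report["sub_strategies"])  # (f) generate a DUS-report display

        dus = Link([{"coolant_temp_f": 232}])
        server = Link([{"sub_strategies": ["1. Inspect coolant system for leaks (90%)"]}])
        print(service_request(dus, server, {"test": "idle coolant check"}))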
  • an example embodiment can take the form of a method.
  • a device receives a diagnostic request to diagnose a DUS.
  • a test based on the diagnostic request is determined at the device.
  • the test is related to a first operating state of the DUS.
  • the device requests performance of the test at the first operating state of the DUS.
  • First-operating-state data for the DUS is received at the device.
  • the first-operating-state data is based on the test.
  • Performance of the test at a second operating state of the DUS can also be requested by the device.
  • the device verifies that the first-operating-state data is or is not related to the first operating state.
  • In response to verifying that the first-operating-state data is related to the first operating state, the device (a) generates a differential analysis based on the first-operating-state data, (b) generates a DUS-report display based on the differential analysis, and (c) sends the DUS-report display.
  • FIG. 1 is a block diagram of an example system
  • FIG. 2 is a block diagram of an example computing device
  • FIG. 3 is a block diagram of an example client device
  • FIG. 4 is a block diagram of an example server device
  • FIG. 5 depicts an example data collection display
  • FIG. 6A shows an example scenario for processing a diagnostic request, responsively generating a DUS-report display, and receiving success-related data
  • FIG. 6B shows an example scenario for processing DUS-related data and responsively generating a diagnostic request
  • FIG. 6C shows an example scenario for processing DUS-related data and responsively generating a DUS-report
  • FIG. 6D shows another example scenario for processing DUS-related data and responsively generating a DUS-report
  • FIG. 7A shows an example scenario for processing a diagnostic request, responsively generating a DUS-report display, and receiving success-related data
  • FIG. 7B shows an example scenario for processing a diagnostic request and responsively generating a DUS-test request
  • FIG. 7C shows an example scenario for processing DUS-related data and responsively generating a DUS-report display
  • FIG. 8A depicts an example flow chart that illustrates functions for generating a differential analysis
  • FIG. 8B shows an example grid with a grid cell corresponding to a first operating state and a grid cell corresponding to a second operating state
  • FIG. 9 depicts an example flow chart that illustrates functions that can be carried out in accordance with an example embodiment
  • FIG. 10 is another flow chart depicting functions that can be carried out in accordance with an example embodiment.
  • FIG. 11 is yet another flow chart depicting functions that can be carried out in accordance with an example embodiment.
  • Each device of a described system is operable independently (e.g., as a stand-alone device) as well as in combination with other devices of the system.
  • Each device of a described system can be referred to as an apparatus.
  • Each device of a described system is operable to carry out functions for servicing a device-under-service (DUS).
  • the DUS can comprise a vehicle, a refrigeration unit, a personal computer, or some other serviceable device. Additionally or alternatively, the DUS can comprise a system such as a heating, ventilation, and air conditioning (HVAC) system, a security system, a computer system (e.g., a network), or some other serviceable system.
  • the functions for servicing the DUS can include but are not limited to diagnostic functions, measurement functions, and scanning functions.
  • each device of a described system is configured to communicate with another device via a communications network.
  • the communications network can comprise a wireless network, a wired network, or both a wireless network and a wired network. Data obtained by a device from the DUS or data otherwise contained in that device can be transmitted to another device via the communications network between those devices.
  • Wired and wireless connections can utilize one or more communication protocols arranged according to one or more standards, such as an SAE International, International Organization for Standardization (ISO), or Institute of Electrical and Electronics Engineers (IEEE) 802 standard.
  • the wired connection can be established using one or more wired communication protocols, such as the On-Board Diagnostic II (“OBD-II”) series of protocols (e.g., SAE J1850, SAE J2284, ISO 9141-2, ISO 14230, ISO 15765), IEEE 802.3 (“Ethernet”), or IEEE 802.5 (“Token Ring”).
  • the wireless connection can be established using one or more wireless communication protocols, such as Bluetooth, IEEE 802.11 (“Wi-Fi”), or IEEE 802.16 (“WiMax”).
  • a client device of the described system is configured to communicate directly with the DUS, in part by sending test requests for diagnostic information and receiving test-related data in response.
  • the test requests and/or test-related data are formatted according to an OBD-II protocol.
  • the client device can prompt operation of the DUS at a variety of operating conditions to collect the test-related data.
  • the conditions and amount of data collected can be tailored based on a type of user (e.g., service technician, layperson, engineer, etc.) using the client device.
  • the client device can also collect a “complaint” about the DUS, which can include text about the operating condition of the DUS.
  • a complaint can be specified as a “complaint code”, such as an alphanumeric code.
  • a server device of the described system is configured to receive the test-related data and perhaps “device-related data” for the DUS and responsively generate a “DUS report.”
  • the DUS report can include a “strategy” for diagnosing and/or repairing the DUS to address a complaint.
  • the strategy may include a “statistical analysis” of the received test-related data and/or one or more “sub-strategies” (e.g., recommendations, directions, proposed actions, and/or additional tests).
  • the device-related data can include, but is not limited to data for device make, device manufacturer identity, device model, device time of manufacture (e.g., model year), mileage, device control unit information (e.g., for a vehicle, engine control unit (ECU) type and release information), time-of-operation data, device-identity information, device-owner-identity information, service provider, service location, service technician, and/or device location.
  • the client device can generate the DUS report.
  • the server device and/or the client device can create a “profile” for the DUS.
  • the profile can be configured to store the device-related data related to the DUS, complaints, DUS reports, and/or test-related data taken at various times during the life of the DUS.
  • “reference points” or “reference data” is stored, perhaps with the profile.
  • the reference data can be taken during an interval of complaint-free operation of the DUS.
  • Reference data can include data provided by an original equipment manufacturer (OEM) regarding ideal/recommended operating conditions.
  • a “data logger” can be installed on the DUS to collect the reference data.
  • the reference data can be communicated from the data logger to a “service provider” responsible for diagnosis, maintenance, and/or repair of the DUS.
  • the service provider collects the baseline data at a service facility.
  • the reference data can be compared with test-related data taken in response to a complaint about the DUS. Further, reference data and/or test-related data from a number of devices-under-service can be combined and/or aggregated into a set of “aggregated data.” The aggregation process can include determining a classification and/or reliability for the aggregated data.
  • Test-related data can be “classified” before being added to the aggregated data.
  • Example classifications of aggregated data can include a reference data classification and one or more diagnostic classifications. For example, if the DUS is operating without complaint, test-related data obtained from the DUS can be classified as reference data upon aggregation into the aggregated data. However, if the DUS is operating with a fault in the braking system, test-related data from the DUS can be classified as “faulty brake” data upon aggregation into the aggregated data. Many other types of classifications are possible as well.
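  • A minimal sketch of such complaint-based classification before aggregation, assuming hypothetical record fields and a made-up complaint-to-classification mapping:

        # Hypothetical sketch: classify test-related data before aggregation.
        AGGREGATED = {}  # classification -> list of test-related data records

        def classify(record):
            """Assign a classification based on the complaint accompanying the data."""
            complaint = record.get("complaint", "")
            if not complaint:
                return "reference"            # complaint-free operation
            if "brake" in complaint.lower():
                return "faulty brake"
            return "unclassified"

        def aggregate(record):
            AGGREGATED.setdefault(classify(record), []).append(record)

        aggregate({"coolant_temp_f": 200})                        # -> "reference"
        aggregate({"complaint": "Brakes grind", "pad_mm": 1.5})   # -> "faulty brake"
        print(sorted(AGGREGATED))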
  • Reference data for a DUS, and perhaps other data, can be used in generation of “baseline data” for the DUS.
  • the baseline data can include a statistical summary over data taken for devices that share “core-device information” (e.g., year, model, make, and/or ECU type/release information).
  • the baseline data can get aggregated and/or updated over time. For example, as more test-related data is aggregated for devices under service that share core-device information, the baseline data can have higher confidence values and/or intervals for aggregated baseline data over time.
  • faulty brake data can get aggregated and, as more faulty brake data is aggregated over time, the faulty brake data can have increasingly higher confidence values and/or intervals for aggregated faulty-brake data over time. Data aggregation for other classifications is possible as well.
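  • As one hedged illustration of how confidence can grow with aggregation, the running-statistics sketch below uses Welford's online algorithm and a normal-approximation confidence interval; the patent does not prescribe any particular statistical method.

        import math

        class BaselineAggregate:
            """Hypothetical running baseline for one measurement (e.g., coolant temp).

            As more values are aggregated, the confidence interval around the
            mean tightens.
            """
            def __init__(self):
                self.n, self.mean, self.m2 = 0, 0.0, 0.0

            def add(self, x):
                # Welford's online update of count, mean, and sum of squared deviations.
                self.n += 1
                delta = x - self.mean
                self.mean += delta / self.n
                self.m2 += delta * (x - self.mean)

            def interval95(self):
                """Approximate 95% confidence interval for the mean (normal approx.)."""
                if self.n < 2:
                    return None
                std = math.sqrt(self.m2 / (self.n - 1))
                half = 1.96 * std / math.sqrt(self.n)
                return (self.mean - half, self.mean + half)

        b = BaselineAggregate()
        for temp in (198, 201, 200, 199, 202):
            b.add(temp)
        print(b.interval95())  # narrows as more data is aggregated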
  • the DUS report can be generated based on a comparison of the test-related data and aggregated data.
  • the server device can store aggregated data, perhaps including core-device information and/or baseline data for the DUS.
  • the server device can determine a subset of the aggregated data based on the device-related data for the DUS. Then, the test-related data and the determined subset of aggregated data can be compared to determine the statistical analysis, including a number of statistics for the test-related data, for the DUS report.
  • the statistical analysis can be generated based on a “differential analysis” or comparison of the DUS operated in one or more “operating states.”
  • Example operating states for an engine of a vehicle include no-load/lightly-loaded operating states (e.g., an “idle” operating state), various operating states under normal loads (e.g., a “cruising” operating state, a “cranking” operating state), and operating states at or near maximum load (e.g., a “high-speed” operating state). Other operating states are possible as well.
  • the statistical analysis can be determined based on differences in test-related data as the DUS operated in the operating state at two or more different times and/or between two or more different measurements.
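  • A minimal sketch of such a differential analysis, assuming hypothetical parameter names and two samples taken in the same operating state at different times (or from baseline data versus test-related data):

        # Hypothetical sketch: per-parameter differences between two samples
        # taken in the same operating state (e.g., idle) at different times.
        def differential_analysis(earlier, later):
            """Return differences for parameters present in both samples."""
            return {key: later[key] - earlier[key]
                    for key in earlier.keys() & later.keys()}

        idle_baseline = {"rpm": 625, "coolant_temp_f": 200, "fuel_trim_pct": 1.00}
        idle_now      = {"rpm": 710, "coolant_temp_f": 232, "fuel_trim_pct": 1.25}
        print(differential_analysis(idle_baseline, idle_now))
        # e.g., {'rpm': 85, 'coolant_temp_f': 32, 'fuel_trim_pct': 0.25} (key order may vary)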
  • a “rules engine” can take the statistical analysis and information about the complaint, evaluate the statistical analysis in light of one or more rules about the DUS and the complaint data, and provide a strategy to investigate the complaint.
  • the rules engine can include an inference engine, such as an expert system or problem-solving system, with a knowledge base related at least to evaluation, diagnosis, operation, and/or repair of the DUS.
  • the rules engine can generate the DUS report, perhaps by combining the statistical analysis and the strategy for addressing the complaint.
  • the client device and/or the server device can include the rules engine.
  • the rules engine of the client device differs from the rules engine of the server device.
  • the DUS report can be displayed, perhaps on the client device, perhaps to permit carrying out a strategy of the DUS report. Feedback regarding sub-strategies of the strategy can be provided and used to adjust a “sub-strategy-success estimate” or likelihood that the sub-strategy can address a problem mentioned in the complaint data. Such sub-strategy-success estimates can be provided with the DUS report.
  • this system can evaluate the test-related data in the context of a larger population of data. By comparing the test-related data with classified aggregated data and/or baseline data, any discrepancies between the classified aggregated data and/or baseline data and the test-related data as shown in the statistical analysis can be more readily identified and thus speed diagnosis and repair of the DUS.
  • by using differential analyses with a testing procedure of merely operating a DUS in an operating state and/or comparing data from two (or more) different times/sources (e.g., aggregated/baseline data and test-related data), initial diagnostic procedures can be simplified.
  • a strategy provided with the DUS report can include sub-strategies for diagnosis and/or repair of the DUS, further decreasing time to repair.
  • the device-under-repair report greatly reduces, if not eliminates, guesswork about the test-related data, and reduces down time for the device-under-repair.
  • FIG. 1 is a block diagram of an example system 100 .
  • System 100 comprises device-under-service (DUS) 102 and devices 104 and 106 .
  • device 104 is referred to as a client device
  • device 106 is referred to as a server device.
  • FIG. 1 and other diagrams and flow charts accompanying this description are provided merely as examples and are not intended to be limiting. Many of the elements illustrated in the figures and/or described herein are functional elements that may be implemented as discrete or distributed components or in conjunction with other components, and in any suitable combination and location. Those skilled in the art will appreciate that other arrangements and elements (for example, machines, interfaces, functions, orders, and groupings of functions, etc.) can be used instead. In particular, some or all of the functionality described herein as client device functionality (and/or functionality of components of a client device) may be implemented by a server device, and some or all of the functionality described herein as server device functionality (and/or functionality of components of a server device) may be implemented by a client device.
  • DUS 102 can comprise a vehicle, such as an automobile, a motorcycle, a semi-tractor, farm machinery, or some other vehicle.
  • System 100 is operable to carry out a variety of functions, including functions for servicing DUS 102 .
  • the example embodiments can include or be utilized with any appropriate voltage or current source, such as a battery, an alternator, a fuel cell, and the like, providing any appropriate current and/or voltage, such as about 12 volts, about 42 volts, and the like.
  • the example embodiments can be used with any desired system or engine.
  • Those systems or engines can comprise items utilizing fossil fuels, such as gasoline, natural gas, propane, and the like, electricity, such as that generated by battery, magneto, fuel cell, solar cell and the like, wind and hybrids or combinations thereof.
  • Those systems or engines can be incorporated into other systems, such as an automobile, a truck, a boat or ship, a motorcycle, a generator, an airplane and the like.
  • Client device 104 and/or server device 106 can be computing devices, such as example computing device 200 described below in the context of FIG. 2 .
  • client device 104 comprises a digital volt meter (DVM), a digital volt ohm meter (DVOM), and/or some other type of measurement device.
  • Network 110 can be established to communicatively link devices 104 and 106 . Any one of these devices can communicate via network 110 once the device establishes a connection or link with network 110 .
  • FIG. 1 shows network 110 connected to client device 104 via link 114 and to server device 106 via link 116 .
  • DUS 102 can be connected to network 110 as well.
  • Network 110 can include and/or connect to a data network, such as a wide area network (WAN), a local area network (LAN), one or more public communication networks, such as the Internet, one or more private communication networks, or any combination of such networks.
  • Network 110 can include wired and/or wireless links and/or devices utilizing one or more communication protocols arranged according to one or more standards, such as an SAE International, International Organization for Standardization (ISO), or Institute of Electrical and Electronics Engineers (IEEE) standard.
  • Network 110 can be arranged to carry out communications according to a respective air-interface protocol.
  • Each air-interface protocol can be arranged according to an industry standard, such as an Institute of Electrical and Electronics Engineers (IEEE) 802 standard.
  • the IEEE 802 standard can comprise an IEEE 802.11 standard for Wireless Local Area Networks (e.g., IEEE 802.11 a, b, g, or n), an IEEE 802.15 standard for Wireless Personal Area Networks, an IEEE 802.15.1 standard for Wireless Personal Area Networks—Task Group 1, an IEEE 802.15.4 standard for Wireless Personal Area Networks—Task Group 4, an IEEE 802.16 standard for Broadband Wireless Metropolitan Area Networks, or some other IEEE 802 standard.
  • a wireless network (or link) arranged to carry out communications according to an IEEE 802.11 standard is referred to as a Wi-Fi network (or link)
  • a wireless network (or link) arranged to carry out communications according to an IEEE 802.15.1 standard is referred to as a Bluetooth network (or link)
  • a wireless network (or link) arranged to carry out communications according to an IEEE 802.15.4 standard is referred to as a Zigbee network (or link)
  • a wireless network (or link) arranged to carry out communications according to an IEEE 802.16 standard is referred to as a Wi-Max network (or link).
  • Network 110 can be arranged to carry out communications according to a wired communication protocol.
  • Each wired communication protocol can be arranged according to an industry standard, such as IEEE 802.3 (“Ethernet”) or IEEE 802.5 (“Token Ring”).
  • a wired network (or link) arranged to carry out communications according to an OBD-II protocol is referred to as an OBD-II network (or link)
  • a wired network (or link) arranged to carry out communications according to an IEEE 802.3 standard is referred to as an Ethernet network (or link)
  • a wired network (or link) arranged to carry out communications according to an IEEE 802.5 standard is referred to as a Token Ring network (or link).
  • wireless links to network 110 can be established using one or more wireless air interface communication protocols, such as but not limited to, Bluetooth, Wi-Fi, Zigbee, and/or WiMax.
  • wired links to network 110 can be established using one or more wired communication protocols, such as but not limited to, Ethernet and/or Token Ring.
  • links 114 and 116 can be wired and/or wireless links to network 110 . Additional wired and/or wireless links and/or protocols now known or later developed can be used in network 110 as well.
  • point-to-point wired and/or wireless links can be established between client device 104 and server device 106 .
  • herein-described functionality of network 110 can be performed by these point-to-point links.
  • additional devices not shown in FIG. 1 (e.g., a computing device, smartphone, personal digital assistant, or telephone) can be connected to network 110 as well.
  • Client device 104 and/or server device 106 can operate to communicate herein-described data, reports, requests, queries, profiles, displays, analyses, and/or other data (e.g., automobile repair data and/or instruction data) to one or more of these additional devices not shown in FIG. 1 .
  • Client device 104 can connect to DUS 102 via link 112 .
  • link 112 is a wired connection to DUS 102 , perhaps an OBD-II link or Ethernet link.
  • link 112 is a wireless link.
  • the wireless link is configured to convey at least data formatted in accordance with an OBD-II protocol.
  • an OBD-II scanner can be utilized to convey OBD-II data via a wireless link.
  • the OBD-II scanner is a device with a wired OBD-II link to DUS 102 and a wireless transmitter.
  • the OBD-II scanner can retrieve data formatted in accordance with an OBD-II protocol from DUS 102 and transmit the OBD-II formatted data via a wireless link (e.g., a Bluetooth, Wi-Fi, or Zigbee link) established using the wireless transmitter.
  • An example OBD-II scanner is the VERDICT S3 Wireless Scanner Module manufactured by Snap-on Incorporated of Kenosha, Wis.
  • protocols other than an OBD-II protocol, now known or later developed, can specify data formats and/or transmission.
  • a data logger (not shown in FIG. 1 ) can be used to collect data from DUS 102 while in operation. Once link 112 to DUS 102 is connected, the data logger can communicate the collected data to client device 104 and perhaps server device 106 .
  • FIG. 2 is a block diagram of an example computing device 200 .
  • computing device 200 includes a user interface 210 , network-communication interface 212 , a processor 214 , and data storage 216 , all of which may be linked together via a system bus, network, or other connection mechanism 220 .
  • User interface 210 is operable to present data to and/or receive data from a user of computing device 200 .
  • user interface 210 can include input unit 230 and/or output unit 232 .
  • Input unit 230 can receive input, perhaps from a user of the computing device 200 .
  • Input unit 230 can comprise a keyboard, a keypad, a touch screen, a computer mouse, a track ball, a joystick, and/or other similar devices, now known or later developed, capable of receiving input at computing device 200 .
  • Output unit 232 can provide output, perhaps to a user of the computing device 200 .
  • Output unit 232 can comprise a visible output device for generating visual output(s), such as one or more cathode ray tubes (CRT), liquid crystal displays (LCD), light emitting diodes (LEDs), displays using digital light processing (DLP) technology, printers, light bulbs, and/or other similar devices, now known or later developed, capable of displaying graphical, textual, and/or numerical information.
  • Output unit 232 can alternately or additionally comprise one or more aural output devices for generating audible output(s), such as a speaker, speaker jack, audio output port, audio output device, earphones, and/or other similar devices, now known or later developed, capable of conveying sound and/or audible information.
  • Network-communication interface 212 can include wireless interface 240 and/or wired interface 242 , perhaps for communicating via network 110 and/or via point-to-point link(s).
  • Wireless interface 240 can include a Bluetooth transceiver, a Zigbee transceiver, a Wi-Fi transceiver, a WiMAX transceiver, and/or some other type of wireless transceiver.
  • Wireless interface 240 can carry out communications with devices 102 , 104 , 106 , network 110 , and/or other device(s) configured to communicate wirelessly.
  • Wired interface 242 can be configured to communicate according to a wired communication protocol (e.g., Ethernet, OBD-II, Token Ring) with devices 102 , 104 , and/or 106 , network 110 , and/or other device(s) configured to communicate via wired links.
  • Wired interface 242 can comprise a port, a wire, a cable, a fiber-optic link or a similar physical connection to devices 102 , 104 , 106 , network 110 , and/or other device(s) configured to communicate via wire.
  • wired interface 242 comprises a Universal Serial Bus (USB) port.
  • the USB port can communicatively connect to a first end of a USB cable, while a second end of the USB cable can communicatively connect to a USB port of another device connected to network 110 or some other device.
  • wired interface 242 comprises an Ethernet port.
  • the Ethernet port can communicatively connect to a first end of an Ethernet cable, while a second end of the Ethernet cable can communicatively connect to an Ethernet port of another device connected to network 110 or some other device.
  • network-communication interface 212 can provide reliable, secured, and/or authenticated communications.
  • for example, information for ensuring reliable communications (i.e., guaranteed message delivery) can be provided, perhaps as part of a message header and/or footer (e.g., packet/message sequencing information, encapsulation header(s) and/or footer(s), size/time information, and transmission verification information such as cyclic redundancy check (CRC) and/or parity check values).
  • Communications can be made secure (e.g., be encoded or encrypted) and/or decrypted/decoded using one or more cryptographic protocols and/or algorithms, such as, but not limited to, DES, AES, RSA, Diffie-Hellman, and/or DSA.
  • Other cryptographic protocols and/or algorithms may be used as well or in addition to those listed herein to secure (and then decrypt/decode) communications.
  • Processor 214 may comprise one or more general purpose processors (e.g., microprocessors manufactured by Intel or Advanced Micro Devices) and/or one or more special purpose processors (e.g., digital signal processors). Processor 214 may execute computer-readable program instructions 250 that are contained in data storage 216 and/or other instructions as described herein.
  • Data storage 216 can comprise one or more computer-readable storage media readable by at least processor 214 .
  • the one or more computer-readable storage media can comprise volatile and/or non-volatile storage components, such as optical, magnetic, organic or other memory or disc storage, which can be integrated in whole or in part with processor 214 .
  • data storage 216 is implemented using one physical device (e.g., one optical, magnetic, organic or other memory or disc storage unit), while in other embodiments data storage 216 is implemented using two or more physical devices.
  • Data storage 216 can include computer-readable program instructions 250 and perhaps data.
  • Computer-readable program instructions 250 can include instructions executable by processor 214 and any storage required, respectively, to perform at least part of the herein-described techniques and/or at least part of the functionality of the herein-described devices and networks.
  • FIG. 3 is a block diagram of an example client device 104 .
  • Client device 104 can include communications interface 310 , user interface 320 , feedback collector 330 , rules engine 340 , data collector 350 , text analyzer 360 , data analyzer 370 , and data storage interface 380 , all connected via interconnect 390 .
  • The arrangement of components of client device 104 shown in FIG. 3 is an example arrangement. In other embodiments, client device 104 can utilize more or fewer components than shown in FIG. 3 to perform the herein-described functionality of client device 104 .
  • Communications interface 310 is configured to enable communications between client device 104 and other devices, perhaps including enabling reliable, secured, and/or authenticated communications.
  • An example communication interface 310 is network-communication interface 212 , described above in the context of FIG. 2 .
  • User interface 320 is configured to enable communications between client device 104 and a user of client device 104 , including but not limited to, communicating reports (including data analysis, strategies, and/or sub-strategies), requests, messages, feedback, information and/or instructions described herein.
  • An example user interface 320 is user interface 210 , described above in the context of FIG. 2 .
  • Feedback collector 330 is configured to request, receive, store, and/or retrieve “feedback” or input on reports, strategies, and/or sub-strategies as described herein.
  • Rules engine 340 is configured to receive data, analyze the received data, and generate corresponding DUS reports.
  • the received data can include, but is not limited to, the complaint data, unprocessed test-related data, processed test-related data (e.g., a statistical or differential analysis of test-related data), reference data, classifications/reliability determinations of received data, predetermined data values (e.g., hard-coded data), and/or other data.
  • the received data can be analyzed by matching the received data with one or more rules and/or an existing population of data.
  • the one or more rules can be stored in a diagnostic rules and strategy data base.
  • a query to the rules and strategy data base regarding received data (e.g., a complaint) can return one or more related rules.
  • Each of the related one or more rules can “fire” (e.g., become applicable) based on the received data and/or existing population of data.
  • a query to the rules and strategy data base can include a complaint regarding “rough engine performance.”
  • the returned rules can include a first rule related to fuel flow that can be fired based on received fuel flow data and/or additional received data related to DUS performance related to fuel flow.
  • a second rule related to fuel flow can be fired upon a comparison of received fuel flow data with an “aggregated” (e.g., previously stored) population of fuel flow data.
  • rules engine 340 can determine which rule(s) should fire and determine one or more responses associated with the fired rules.
  • One possible response includes a set of one or more sub-strategies for diagnosis and/or repair of DUS 102 .
  • Example sub-strategies include recommendations to “replace fuel flow sensor” or “inspect fuel filter.”
  • Other example sub-strategies include request(s) that one or more additional tests be performed on DUS 102 ; e.g., “test battery voltage” or “operate engine at 2500-3000 revolutions per minute (RPMs).”
  • the request(s) for additional test(s) can include instructions (i.e., instructions to a technician, testing parameters, and/or computer-language instructions/commands) for performing the additional test(s) on the DUS.
  • the one or more responses can include diagnostic data, such as, but not limited to, one or more values of received data, aggregated data, and/or baseline data, one or more comparisons between received data, aggregated data, and/or baseline data, and one or more values and/or comparisons of similar data values in previously-received data, aggregated data, and/or baseline data.
  • diagnostic data such as, but not limited to, one or more values of received data, aggregated data, and/or baseline data, one or more comparisons between received data, aggregated data, and/or baseline data, and one or more values and/or comparisons of similar data values in previously-received data, aggregated data, and/or baseline data.
  • Other responses, sub-strategies, diagnostic data, and/or examples are possible as well.
  • a sub-strategy can be associated with a sub-strategy-success estimate expressed as a percentage (e.g., “Recommendation 2 is 28% likely to succeed”), as a ranked list of sub-strategies (e.g., a higher-ranked sub-strategy would have a better sub-strategy-success estimate than a lower-ranked sub-strategy), as a numerical and/or textual value, (e.g., “Action 1 has a grade of 95, which is an ‘A’ sub-strategy”), and/or by some other type of expression.
  • the sub-strategy-success estimate for a given sub-strategy can be adjusted based on feedback, such as the feedback collected by feedback collector 330 , for the given sub-strategy.
  • For example, suppose a DUS report included the sub-strategies SS1, SS2, and SS3. Further suppose that feedback was received that SS1 was unsuccessful, SS2 was successful, and SS3 was not utilized. Then, the sub-strategy-success estimate for SS1 can be reduced (i.e., treated as unsuccessful), the sub-strategy-success estimate for SS2 can be increased (i.e., treated as successful), and the sub-strategy-success estimate for SS3 can be maintained.
  • Other adjustments based on feedback are possible as well.
  • sub-strategy-failure estimates can be determined, stored, and/or adjusted instead of (or along with) sub-strategy-success estimates; for example, in these embodiments, sub-strategy-failure estimates can be adjusted downward when corresponding sub-strategies are successfully utilized, and adjusted upward when corresponding sub-strategies are unsuccessfully utilized.
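  • As an illustration only, the feedback-driven adjustment described above might look like the following sketch; the exponential-smoothing update rule and the ALPHA weight are assumptions, not a method specified by the patent.

        # Hypothetical sketch: adjust sub-strategy-success estimates from feedback,
        # as in the SS1/SS2/SS3 example above.
        estimates = {"SS1": 0.50, "SS2": 0.50, "SS3": 0.50}
        feedback  = {"SS1": False, "SS2": True}   # SS3: no feedback, left unchanged

        ALPHA = 0.2  # weight given to each new piece of feedback (illustrative)
        for name, succeeded in feedback.items():
            outcome = 1.0 if succeeded else 0.0
            estimates[name] += ALPHA * (outcome - estimates[name])

        print(estimates)  # SS1 reduced, SS2 increased, SS3 maintained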
  • one or more given sub-strategies in the set of one or more sub-strategies can be excluded from a DUS report.
  • a maximum number of sub-strategies MaxSS can be provided and sub-strategies beyond the maximum number of sub-strategies MaxSS could be excluded.
  • the sub-strategies could then be selected based on various criteria; e.g., the first (or last) MaxSS sub-strategies generated by rules engine 340 , random selection of MaxSS sub-strategies, based on sub-strategy-success estimates, and/or based on sub-strategy-failure estimates.
  • sub-strategies whose sub-strategy-success estimate did not exceed a threshold-success-estimate value (or failed to exceed a threshold-failure-estimate value) can be excluded. Combinations of these criteria can be utilized; e.g., select the first MaxSS sub-strategies that exceed the threshold-success-estimate value or select the MaxSS sub-strategies that exceed the threshold-success-estimate value and have the highest sub-strategy-success estimates out of all sub-strategies.
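  • A minimal sketch of the combined selection criterion described above (keep up to MaxSS sub-strategies that exceed a threshold-success-estimate value, highest estimates first); the data values are illustrative:

        # Hypothetical sketch of sub-strategy selection for a DUS report.
        def select_substrategies(subs, max_ss, threshold):
            # Keep only sub-strategies exceeding the threshold-success-estimate value.
            eligible = [s for s in subs if s["success_estimate"] > threshold]
            # Take the MaxSS sub-strategies with the highest success estimates.
            eligible.sort(key=lambda s: s["success_estimate"], reverse=True)
            return eligible[:max_ss]

        subs = [
            {"name": "Inspect coolant system for leaks", "success_estimate": 0.90},
            {"name": "Replace radiator and hoses",       "success_estimate": 0.40},
            {"name": "Inspect water pump",               "success_estimate": 0.20},
            {"name": "Replace thermostat",               "success_estimate": 0.10},
        ]
        print(select_substrategies(subs, max_ss=3, threshold=0.15))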
  • the DUS report can include part or all of the statistical analysis, diagnostic data, and some or all of the sub-strategies of DUS 102 .
  • Other information such as information related to DUS 102 and/or complaint data can be provided with the DUS report.
  • An example DUS report follows: The engine coolant for this make, model, and year of vehicle should be in the range of 190-220 degrees F. after operating for 100 seconds at 600-650 RPMs. Also, during baseline operation of this vehicle as performed on Aug. 1, 2008, the engine coolant was at 200 degrees F. after operating for 100 seconds at 600-650 RPMs.
    Diagnostic Strategy (3 sub-strategies most likely to be successful shown):
    1. Add coolant and operate vehicle at idle. Inspect coolant system (radiator, hoses, etc.) for coolant drips/leaks during idle operation. Repair leaking components. Success likelihood: 90%
    2. Drain coolant and replace radiator and hoses. Success likelihood: 40%
    3. Inspect water pump for damage. If damaged, replace water pump. Success likelihood: 20%
    Additional sub-strategies available.
  • while this example DUS report is shown as a text-only report, additional types of data can be used in the reports described herein (including but not limited to DUS reports), such as, but not limited to, visual/graphical/image data, video data, audio data, links and/or other address information (e.g., Uniform Resource Locators (URLs), Uniform Resource Identifiers (URIs), Internet Protocol (IP) addresses, Media Access Control (MAC) addresses, and/or other address information), and/or computer instructions (e.g., HyperText Markup Language (HTML), eXtensible Markup Language (XML), Flash®, Java™, JavaScript™, and/or other computer-language instructions).
  • Data collector 350 can coordinate testing activities for one or more tests run during a “data collection session” of testing of DUS 102 .
  • data collector 350 can issue “DUS-test requests” or requests for data related to DUS 102 and receive “DUS-test-related data” in response.
  • a DUS-test request can be related to one or more “tests” for DUS 102 .
  • a test of DUS 102 can include performing one or more activities (e.g., repair, diagnostics) at DUS 102 , collecting data from DUS 102 (e.g., obtaining data from one or more sensors of DUS 102 ), and/or receiving device-related data for DUS 102 (i.e., via a user interface or via a network-communication interface).
  • the DUS-test-related data for each test run during the data collection session can be combined into “DUS-related data” collected during the entire data collection session.
  • DUS-related data can include data obtained via a data logger operating to collect data during operation of DUS 102 .
  • Some DUS-test requests can be made in accordance with an OBD-II protocol, perhaps via communications using an OBD-II message format.
  • An OBD-II message format can include: start-of-frame and end-of-frame data, a message identifier, an identifier related to remote messaging, an acknowledgment flag, cyclic redundancy check (CRC) data, and OBD-II payload data.
  • the OBD-II payload data can include a control field indicating a number of bytes in an OBD-II payload field, and the OBD-II payload field.
  • the OBD-II payload field can specify an OBD-II mode, an OBD-II parameter ID (PID), and additional payload data.
  • Example OBD-II modes include, but are not limited to, modes to: show current data, show freeze frame data, show one or more frames of previously-recorded data (e.g., movies of OBD-II data), show stored Diagnostic Trouble Codes (DTCs), clear DTCs and stored values, test results for oxygen sensor monitoring, test results for other components, show DTCs detected during current or last driving cycle, control operation of on-board component/system, request vehicle information mode, and a permanent/cleared DTC mode.
  • Example OBD-II PIDs include, but are not limited to, freeze DTC, fuel system status, engine coolant temperature, fuel trim, fuel pressure, engine revolutions/minute (RPMs), vehicle speed, timing advance, and intake air temperature. Many other OBD-II modes and OBD-II PIDs are possible as well.
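  • As a hedged illustration, the sketch below builds a mode-0x01 ("show current data") request for the engine-coolant-temperature PID (0x05) as it might be carried in the 8-byte data field of a CAN frame under ISO 15765-4; the padding byte and helper names are assumptions.

        # Hypothetical sketch of an OBD-II "show current data" request payload.
        def obd2_current_data_request(pid):
            payload = bytes([0x02,   # control field: number of meaningful bytes that follow
                             0x01,   # OBD-II mode 0x01: show current data
                             pid])   # OBD-II parameter ID
            return payload + bytes([0x55] * (8 - len(payload)))  # pad to 8 bytes

        def decode_coolant_temp(response_data_byte):
            """Standard scaling for PID 0x05: temperature in deg C = A - 40."""
            return response_data_byte - 40

        print(obd2_current_data_request(0x05).hex())  # '020105' followed by padding
        print(decode_coolant_temp(0xB4))              # 140 deg C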
  • data collector 350 can update a data collection display to show progress of data collection and testing.
  • An example data collection display is described in more detail with respect to FIG. 5 below. Completion of data collection can be determined by a rules engine (e.g., rules engine 340 and/or rules engine 440 ).
  • Data collector 350 can receive and store test-related data, perhaps in a “DUS profile” associated with DUS 102 .
  • the DUS profile can be created, updated, and/or deleted by data collector 350 .
  • the DUS profile can be created and/or updated to store device-related data, complaint data, DUS reports, and/or test-related data taken at various times during the life of the DUS (e.g., baseline data).
  • the data stored in the DUS profile can serve as a service history of DUS 102 .
  • data collector 350 can generate a DUS-profile report of the service history.
  • the DUS-profile report can include a reference to test-related data obtained at various times related to DUS 102 . In other embodiments, some or all of the obtained test-related data can be included directly in the DUS-profile report. In still other embodiments, the DUS-profile report does not include references to the test-related data.
  • Text analyzer 360 can perform a “textual analysis” of the complaint data; that is, text analyzer 360 can parse, or otherwise examine, the complaint data to find key words and/or phrases related to service (i.e., testing, diagnosis, and/or repair) of DUS 102 .
  • the complaint data can be parsed for key words/phrases such as “running roughly,” “idle”, “runs smoothly” and “cruising” to request one or more tests of overall engine performance (e.g., based on the terms “running roughly” and “runs smoothly”) at both an “idle” condition and a “cruising” condition.
  • Other key words/phrases, complaint data, parsing/examination of complaint data, and/or test requests are possible as well.
  • the complaint can be specified using a “complaint code.”
  • a complaint can be specified as an alphanumeric code; e.g., code E0001 represents a general engine failure, code E0002 represents a rough idling engine, etc.
  • the complaint data can include the complaint code.
  • text analyzer 360 can generate one or more complaint codes as a result of textual analysis of the complaint.
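  • A minimal sketch of such keyword-based textual analysis; the keyword table and the mapping to complaint code E0002 are illustrative assumptions built from the examples in the text:

        # Hypothetical sketch: parse complaint text for key words/phrases and map
        # them to tests and complaint codes.
        KEYWORDS = {
            "running roughly": ["engine performance @ idle", "engine performance @ cruising"],
            "runs smoothly":   ["engine performance @ idle", "engine performance @ cruising"],
            "idle":            ["engine performance @ idle"],
            "cruising":        ["engine performance @ cruising"],
        }
        COMPLAINT_CODES = {"running roughly": "E0002"}  # E0002: rough idling engine

        def analyze_complaint(text):
            text = text.lower()
            tests, codes = set(), set()
            for phrase, phrase_tests in KEYWORDS.items():
                if phrase in text:
                    tests.update(phrase_tests)
                    if phrase in COMPLAINT_CODES:
                        codes.add(COMPLAINT_CODES[phrase])
            return sorted(tests), sorted(codes)

        print(analyze_complaint("Engine is running roughly at idle but runs smoothly when cruising"))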
  • Data analyzer 370 can analyze data related to DUS 102 .
  • data analyzer 370 can generate a “statistical analysis” comparing received data related to DUS 102 and an existing population of data.
  • the existing population of data can include, but is not limited to, aggregated data, reference data, and/or stored data related to DUS 102 .
  • Reference data can include data from a manufacturer, component supplier, and/or other sources indicating expected values of data for DUS 102 when DUS 102 is operating normally.
  • Stored data related to DUS 102 can include data for DUS 102 captured and stored at time(s) prior to receiving the received data. This stored data can include baseline data for DUS 102 .
  • aggregated data can include some or all of the reference data and stored data related to DUS 102 . As such, the aggregated data can be treated as the existing population of data.
  • the statistical analysis can include matching received data with a subset of the existing population of data, such as by matching received data for a given DUS with an existing population of data for device(s) sharing the same core-device information (e.g., year, model, make, ECU information) with the given DUS.
  • core-device information e.g., year, model, make, ECU information
  • Many other types of subset matching of the existing population of data are possible as well, such as use of other information than the core-device information, narrowing a subset of data, and/or expanding a subset of data.
  • An example of narrowing the subset of data includes filtering the subset of data for a particular release of the ECU.
  • Example expansions of the subset of data include: adding similar models of vehicles sharing core-device information, adding earlier and/or later years of data, and/or adding data of different makes known to be manufactured by a common manufacturer. Many other examples of subset matching, narrowing subsets of data, and expanding subsets of data are possible as well.
  • the statistical analysis can include indications of matching values between the received data and the existing population of data, range(s) of values of the existing population of data and a comparison of received data relative to the range (e.g., determine coolant temperature for the existing population of data is between 155° F. and 175° F. and the received coolant temperature of 160° F. is within this range), and/or determine statistics for the received data and/or the existing population of data (e.g., mean, median, mode, variance, and/or standard deviation).
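  • As an illustration only, the range comparison and statistics described above might be sketched as follows, reusing the coolant-temperature example (population range 155-175° F., received value 160° F.); the population values are made up:

        import statistics

        # Hypothetical sketch: compare a received value against an existing
        # population of data and compute summary statistics.
        def compare_to_population(value, population):
            lo, hi = min(population), max(population)
            return {
                "received": value,
                "population_range": (lo, hi),
                "in_range": lo <= value <= hi,
                "mean": statistics.mean(population),
                "median": statistics.median(population),
                "stdev": statistics.stdev(population),
            }

        population_temps_f = [155, 158, 162, 165, 170, 171, 175]
        print(compare_to_population(160, population_temps_f))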
  • the statistical analysis can include analysis of data from one or more sensors and/or one or more types of data (e.g., analysis of both fuel trim and fuel pressure data).
  • the statistical analysis can include comparisons of data received from DUS 102 over time. For example, the received data can be compared with baseline data for DUS 102 to generate the statistical analysis and/or a differential analysis between the baseline data and the received data.
  • data analyzer 370 can use one or more of the techniques for classifying test-related data as discussed below in the context of FIG. 4 . For example, data analyzer 370 can classify one or more data values as baseline data.
  • received data generated within a testing interval of time can be statistically analyzed; for example, to determine statistics within the testing interval, to remove or determine outlying data points, and/or for other types of statistical analysis.
  • reference data and/or aggregated data can be used as baseline data.
  • data in the existing population of data can be statistically analyzed within testing intervals of time.
  • Data storage interface 380 is configured to store and/or retrieve data and/or instructions utilized by client device 104 .
  • An example data storage interface 380 is data storage 216 , described above in the context of FIG. 2 .
  • FIG. 4 is a block diagram of an example server device 106 .
  • Server device 106 can include communications interface 410 , data aggregator 420 , data analyzer 430 , rules engine 440 , text analyzer 450 , data collector 460 , feedback collector 470 , and data storage interface 480 , all connected via interconnect 490 .
  • The arrangement of components of server device 106 shown in FIG. 4 is an example arrangement. In other embodiments, server device 106 can utilize more or fewer components than shown in FIG. 4 to perform the herein-described functionality of server device 106 .
  • Communications interface 410 is configured to enable communications between server device 106 and other devices.
  • An example communication interface 410 is network-communication interface 212 , described above in the context of FIG. 2 .
  • Data aggregator 420 can create, update, and/or delete a DUS profile associated with DUS 102 and perhaps generate a related DUS-profile report using the techniques described above with respect to FIG. 3 .
  • client device 104 and/or server device 106 can maintain DUS profile(s) and generate DUS-profile report(s).
  • Data aggregator 420 can classify data based on a complaint. For example, all test-related data related to complaints about DUSs failing to start can be classified as data related to “failing to start” complaints. Upon aggregation into a set of data sharing a common classification, a portion of the data can be retained as aggregated data. For example, data in the “failing to start” classification related to starters, batteries, and electrical systems could be aggregated. Other data can be aggregated as well, aggregated into another classification, and/or discarded. For example, data likely to be unrelated to a complaint can be reclassified and aggregated based on the reclassification.
  • for example, tire pressure data conveyed as part of “failing to start” test-related data is likely to be unrelated to that complaint; the tire pressure data could be reclassified as “tire pressure data” and so aggregated.
  • Many other types of aggregation based on complaint-oriented classifications are possible as well.
  • Data aggregator 420 can classify test-related data based on reliability. Classifying test-related data for reliability can include comparing data values of test-related data with reference values and/or baseline data; for example, determining that the data values equal predetermined values (e.g., 28 PSI), fall within ranges (e.g., 28-34 PSI), are within thresholds (e.g., a 3 PSI threshold) of reference values, and/or match patterns (e.g., “1*2” as a pattern matching a string that begins with a “1” and ends with a “2”).
  • Reference and/or baseline data can also be based on data values previously-classified as reliable. For example, suppose three devices had respective temperature readings of 98, 99, and 103 degrees, and that all three temperature readings were reliable. Then, the average A of these three values (100 degrees) and/or the range R of these three values (5 degrees) can be used as reference values; e.g., a temperature data value can be compared to A, A+R, A-R, A±cR for a constant value c, and/or c1·A±c2·R for constant values c1 and c2. Many other bases for use of reliable data values as reference and/or baseline data are possible as well.
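  • A minimal sketch of the four reliability checks listed above, plus a reference value derived from previously-reliable readings as in the 98/99/103-degree example; all constants are illustrative:

        import re

        # Hypothetical sketch: compare a data value against a predetermined value,
        # a range, a threshold around a reference, and a matching pattern.
        def equals_predetermined(value, expected=28):          # e.g., 28 PSI
            return value == expected

        def within_range(value, lo=28, hi=34):                 # e.g., 28-34 PSI
            return lo <= value <= hi

        def within_threshold(value, reference, threshold=3):   # e.g., a 3 PSI threshold
            return abs(value - reference) <= threshold

        def matches_pattern(text, pattern=r"^1.*2$"):          # e.g., "1*2"
            return re.match(pattern, text) is not None

        print(within_range(30), within_threshold(30, 31), matches_pattern("1a2"))

        # Derived reference from previously-reliable readings (98, 99, 103):
        readings = [98, 99, 103]
        A, R = sum(readings) / len(readings), max(readings) - min(readings)  # 100.0, 5
        print(within_threshold(104, A, threshold=1.0 * R))  # compare to A +/- c*R with c = 1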
  • Reference and/or baseline data can be based on a statistical screening of data.
  • the statistical screening can involve generating one or more statistics for data to be aggregated into reference and/or baseline data and then aggregating the data based on the generated statistics.
  • For example, suppose the test-related data included a measurement value Meas1 taken using a sensor Sens1, and that the aggregated data related to measurement values from sensor Sens1 indicated a mean measurement value of MeanMeas1 with a standard deviation of SDMeas1. Then, a number of standard deviations NSD from the mean MeanMeas1 for Meas1 could be determined, perhaps using the formula:

    NSD = | MeanMeas1 - Meas1 | / SDMeas1
  • statistical screening for a set of data values can be performed only if a predetermined number of data values N have been aggregated into the set. In these embodiments, if the number of aggregated data values is less than N, data values can be aggregated without statistical screening until at least N data values have been aggregated. In some embodiments, N can be large enough to gather data without screening for a considerable period of time (e.g., one or more months); performing screening only after that period permits data gathering during that time without focusing on average good or failed values.
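  • As an illustration only, the NSD screening combined with the minimum-N rule might be sketched as follows; N_MIN and the 3-standard-deviation cutoff are assumptions, not values given in the text:

        import statistics

        # Hypothetical sketch: screen a new measurement Meas1 against aggregated
        # data for sensor Sens1 using the NSD formula above, but only once at
        # least N values have been aggregated.
        N_MIN, NSD_MAX = 30, 3.0

        def screen(aggregated_values, meas1):
            if len(aggregated_values) < N_MIN:
                return True  # aggregate without screening until N values are collected
            mean_meas1 = statistics.mean(aggregated_values)
            sd_meas1 = statistics.stdev(aggregated_values)
            nsd = abs(mean_meas1 - meas1) / sd_meas1
            return nsd <= NSD_MAX

        values = [100.0 + 0.1 * i for i in range(40)]        # 40 aggregated readings
        print(screen(values, 101.5), screen(values, 120.0))  # True, False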
  • Data aggregator 420 can classify test-related data in connection with rules engine 440 .
  • For example, rules engine 440 can instruct data aggregator 420 to use one or more techniques for classifying one or more data values in the test-related data.
  • As another example, data aggregator 420 can communicate some or all of the test-related data and/or some or all of the baseline values to rules engine 440; rules engine 440 can then classify the test-related data and subsequently communicate a classification of the test-related data to data aggregator 420.
  • In other embodiments, data aggregator 420 can perform a preliminary classification of the test-related data and, upon a preliminary classification that the test-related data is reliable, communicate some or all of the test-related data and/or some or all of the baseline values to rules engine 440 for a final determination of reliability. Finally-determined-reliable data can then be added to baseline data, as described above. In still other embodiments, data aggregator 420 can determine that test-related data is reliable without communicating with rules engine 440.
  • Such classified data values and/or reference data can be combined or aggregated into aggregated data by data aggregator 420 .
  • The aggregated data can be updated over time; for example, classified data values can be added to or otherwise combined with the aggregated data based on a classification of the data values.
  • Aggregated data can also include data values that have not been classified; e.g., total populations of data, or all data for a specific DUS.
  • The aggregated data can be stored, perhaps in a database, and later retrieved and used for classifications and/or for other purposes.
  • Data analyzer 430 can analyze data related to DUS 102 , such as described above for data analyzer 370 in the context of FIG. 3 .
  • Rules engine 440 can receive data, perhaps including complaint data, analyze the received data, and generate corresponding DUS reports, such as described above for rules engine 340 in the context of FIG. 3 .
  • Text analyzer 450 can parse, or otherwise examine, complaint data to find key words and/or phrases related to service of DUS 102 , such as described above for text analyzer 360 in the context of FIG. 3 .
  • Data collector 460 can coordinate testing activities for one or more tests run during a data collection session of testing of DUS 102, such as described above for data collector 350 in the context of FIG. 3.
  • Feedback collector 470 is configured to request, receive, store, and/or retrieve “feedback” or input on reports and/or sub-strategies, such as described above for feedback collector 330 in the context of FIG. 3 .
  • Data storage interface 480 is configured to store and/or retrieve data and/or instructions utilized by server device 106 .
  • An example of data storage interface 480 is data storage 216, described above in the context of FIG. 2.
  • FIG. 5 depicts an example data collection display 500 , including DUS identification 510 , overall status bar 520 , detailed diagnostic status 530 , and test status bars 540 , 542 , 544 , and 546 .
  • DUS identification 510 can include device-related data that specifies a DUS.
  • Overall status bar 520 can visually, numerically, and/or textually show the status of a data collection session. As shown in FIG. 5 , overall status bar 520 graphically, textually and numerically shows percentage completion of the data collection session; in this example, the data collection session is 63% complete.
  • Detailed diagnostic status 530 can provide additional progress information about the data collection session, such as but not limited to, communication status (e.g., the “Communication Established” and “Communicating” indicators shown in FIG. 5 ), data input status (e.g., the “Complaint Captured” indicator shown in FIG. 5 ), test-related-data capture status (e.g., the “Checking Codes”, “Monitors”, and “Collecting Data” indicators shown in FIG. 5 ), and analysis status (e.g., the “Analyzing Data” indicator shown in FIG. 5 ).
  • Test status bars 540 , 542 , 544 , and 546 can provide status of one or more tests conducted during a data collection session. As shown in FIG. 5 , test status bars 540 , 542 , 544 , and 546 graphically, textually and numerically each respectively show the percentage completion of a test; for example, test status bar 540 of FIG. 5 shows the “Cranking Test” is 80% complete.
  • In some embodiments, data collection display 500 can be enhanced with the use of audible instructions and/or tones.
  • For example, a tone and/or audible instruction can be used to inform a vehicle technician to change operating state and/or perform another test of a device-under-service; e.g., a tone or instruction to “Please increase acceleration to operate the vehicle at 2500 RPMs now.”
  • As another example, a tone and/or audible instruction can be used to indicate that operation is out of expected ranges; e.g., for a 2500 RPM test, a tone and/or audible instruction can instruct the technician to increase acceleration when the RPM rate is under the desired 2500 RPM rate.
  • In some embodiments, text corresponding to such audible instructions can be displayed on data collection display 500.
  • A variety of communications may be carried out via network 110. Examples of those communications are illustrated in FIGS. 6A, 6B, 6C, 6D, 7A, 7B, and 7C.
  • The communications shown in FIGS. 6A, 6B, 6C, 6D, 7A, 7B, and 7C can be in the form of messages, signals, packets, protocol data units (PDUs), frames, fragments, and/or any other suitable type of communication configured to be communicated between devices.
  • FIG. 6A shows an example scenario 600 for processing diagnostic request 610 , responsively generating DUS-report display 632 , and receiving success-related data 640 .
  • Scenario 600 begins with diagnostic request 610 being received at client device 104 .
  • Client device 104 inspects diagnostic request 610 to determine one or more tests related to DUS 102 and responsively generates DUS-test request 612 for performing the one or more tests and communicates DUS-test request 612 to DUS 102 .
  • In some embodiments, data collector 350 of client device 104 generates DUS-test request 612; DUS-test request 612 can be formatted in accordance with an OBD-II protocol.
  • Client device 104 also inspects diagnostic request 610 for a complaint (shown in FIG. 6A as “C1” with diagnostic request 610).
  • In some embodiments, complaint C1 is not further inspected at client device 104, while in other embodiments, text analyzer 360 can perform a textual analysis of complaint C1.
  • Complaint C1 can be provided by a user as text and/or as a complaint code, as mentioned above.
  • Upon reception of DUS-test request 612 at DUS 102, the one or more tests are performed. Data resulting from the one or more tests is gathered and communicated from DUS 102 to client device 104 as DUS-related data 614. Client device 104 then communicates the DUS-related data and complaint C1 to server device 106 using DUS-related data 616.
  • FIG. 6A shows that, in response to DUS-related data 616, server device 106 generates diagnostic request 620 with a request for one or more additional tests (depicted as T1). Details of the generation of diagnostic request 620 are described below with respect to FIG. 6B.
  • Upon reception of diagnostic request 620, client device 104 communicates DUS-test request 622 to carry out the additional tests T1. Upon reception of DUS-test request 622 at DUS 102, the one or more additional tests T1 are performed. Data from the one or more additional tests is gathered and communicated from DUS 102 to client device 104 as DUS-related data 624. Client device 104 then communicates the DUS-related data and complaint C1 to server device 106 using DUS-related data 626. In some scenarios not shown in FIG. 6A, DUS-related data 626 does not include complaint C1, as C1 had already been communicated to server device 106 (via DUS-related data 616) and so could be stored by server device 106.
  • FIG. 6A shows that, in response to DUS-related data 626, server device 106 generates DUS report 630 with strategy S1 and communicates DUS report 630 to client device 104.
  • In scenario 600, strategy S1 includes one or more sub-strategies SS1, SS2, etc. to address complaint C1. Sub-strategies to address complaints are discussed above in more detail with respect to FIG. 3. Details of the generation of DUS report 630 are described below with respect to FIG. 6C.
  • In response to receiving DUS report 630, client device 104 generates and communicates DUS-report display 632.
  • An example DUS-report display is shown in Table 1 above.
  • Scenario 600 continues with client device 104 receiving success-related data 640.
  • FIG. 6A shows success-related data 640 with F(SS1), which is feedback F for sub-strategy SS1 of strategy S1. Feedback on sub-strategies is discussed above in more detail with respect to FIG. 3.
  • In response to success-related data 640, client device 104 communicates corresponding success-related data 642 with F(SS1) to server device 106.
  • In some scenarios not shown in FIG. 6A, server device 106 can send a DUS report in response to DUS-related data 616 (i.e., server device 106 does not request additional tests/data). In other scenarios not shown in FIG. 6A, server device 106 can send two or more diagnostic requests to request more additional test(s). In other scenarios not shown in FIG. 6A, client device 104 can receive and analyze DUS-related data 616 and 626 to generate DUS report 630, such as described below in more detail with respect to FIGS. 7A, 7B, and 7C. That is, client device 104 can perform some or all of the functionality described herein with respect to server device 106 in scenarios 600, 650, and/or 680. In still other scenarios not shown in FIG. 6A, no success-related data is received in response to DUS-report display 632 (i.e., no feedback on strategy S1 is provided to client device 104 and/or server device 106).
  • FIG. 6B shows an example scenario 650 for processing DUS-related data 616 and responsively generating diagnostic request 620 .
  • In scenario 650, DUS-related data 616 with complaint C1 is received at communications interface 410 of server device 106.
  • FIG. 6B shows that complaint query 662 is generated by text analyzer 450 in response to complaint C1 660.
  • Complaint query 662 can include key words/phrases as determined based on textual analysis of complaint C1, such as described above with respect to FIG. 3.
  • DUS-related data 670 is communicated from communications interface 410 to both data aggregator 420 and data analyzer 430.
  • FIG. 6B shows that complaint C1 is not included with DUS-related data 670; but in some embodiments not shown in FIG. 6B, DUS-related data 670 includes C1 (i.e., is a copy of DUS-related data 616).
  • In scenario 650, data aggregator 420 and/or rules engine 440 can classify the DUS-related data using the techniques described above in the context of FIG. 4.
  • For example, data aggregator 420 can query or otherwise access aggregated data 672 to determine baseline data 674 (shown in FIG. 6B as “Base Data 674”) for DUS-related data 670.
  • Classification 676 (shown in FIG. 6B as “Class 676”) can then be generated by data aggregator 420 and/or rules engine 440. Once generated, classification 676 can be communicated to rules engine 440.
  • DUS-related data 670 can be stored, perhaps according to and/or along with classification 676, by data aggregator 420 in aggregated data 672.
  • Upon reception of DUS-related data 670, data analyzer 430 can generate statistical analysis (SA) 678 of DUS-related data 670, perhaps based on baseline data 674, using the techniques described above in the context of FIGS. 3 and 4. Data analyzer 430 can communicate statistical analysis 678 to rules engine 440.
  • In scenario 650, rules engine 440 can communicate query 666 with complaint data (shown in FIG. 6B as “Comp”) and statistical analysis SA to diagnostic rules and strategy database 664 (shown in FIG. 6B as “Diag Rules/Strat 664”) using the techniques described above in the context of FIGS. 3 and 4.
  • In response, diagnostic rules and strategy database 664 can communicate strategy 668 (shown in FIG. 6B as “S0”), including one or more rules and associated sub-strategies, to rules engine 440.
  • Upon reception of strategy 668, rules engine 440 can determine which rule(s) of strategy 668 fire, and so determine the fired rule(s)' associated sub-strategy/sub-strategies.
  • In scenario 650, rules engine 440 determines that additional data is required, based on the fired rule(s) and associated sub-strategy/sub-strategies.
  • Rules engine 440 can generate diagnostic request 620 to execute test(s) T1 to obtain the additional data and communicate diagnostic request 620 to communications interface 410. Communications interface 410 can then send diagnostic request 620.
  • FIG. 6C shows an example scenario 680 for processing DUS-related data 626 and responsively generating DUS report 630.
  • In scenario 680, DUS-related data 626 with complaint C1 is received at communications interface 410 of server device 106.
  • In some scenarios, complaint C1 can be analyzed by a text analyzer to determine a complaint query. However, scenario 680 assumes C1 has already been analyzed by a text analyzer, such as described above with respect to FIGS. 3 and 6B.
  • DUS-related data 682 is communicated from communications interface 410 to data analyzer 430 .
  • In some embodiments, DUS-related data 626 is provided to a data aggregator to possibly be combined with aggregated data, such as described above in the context of FIGS. 4 and 6B.
  • FIG. 6C shows that complaint C 1 is not included with DUS-related data 682 ; but in some embodiments not shown in FIG. 6C , DUS-related data 682 includes C 1 (i.e., is a copy of DUS-related data 626 ).
  • Upon reception of DUS-related data 682, data analyzer 430 can generate statistical analysis 686 (shown in FIG. 6C as “SA 2”) of DUS-related data 682, using the techniques described above in the context of FIGS. 3, 4, and 6B.
  • Data analyzer 430 can communicate statistical analysis 686 to rules engine 440.
  • Upon reception of statistical analysis 686, rules engine 440 can communicate query 688 with previously-determined complaint data (shown in FIG. 6C as “Comp”) and statistical analysis 686 to diagnostic rules and strategy database 664 (shown in FIG. 6C as “Diag Rules/Strat 664”) using the techniques described above in the context of FIGS. 3 and 4.
  • In response, diagnostic rules and strategy database 664 can communicate strategy 690 (shown in FIG. 6C as “S1+”), including one or more rules and associated sub-strategy/sub-strategies, to rules engine 440.
  • Upon reception of strategy 690, rules engine 440 can determine which rule(s) should fire and their associated sub-strategy/sub-strategies.
  • In scenario 680, rules engine 440 generates DUS report 630, which can include some or all of statistical analysis 686 and/or some or all of the sub-strategies of strategy 690 (collectively shown in FIG. 6C as “S1”), and communicates DUS report 630 to communications interface 410. Communications interface 410 can then send DUS report 630.
  • FIG. 6D shows another example scenario 600a for processing diagnostic request 610, responsively generating DUS-report display 632, and receiving success-related data 640.
  • Scenario 600a is an alternative to scenario 600 in which server device 106, rather than client device 104, directs testing of DUS 102.
  • Scenario 600a begins with diagnostic request 610 being received at client device 104.
  • Client device 104 forwards diagnostic request 610 as diagnostic request 610a to server device 106.
  • Server device 106 can examine diagnostic request 610a to determine one or more tests related to DUS 102, responsively generate DUS-test request 612a for performing the one or more tests, and communicate DUS-test request 612a to DUS 102.
  • Server device 106 can inspect diagnostic request 610a for a complaint (shown in FIG. 6D as “C1” with diagnostic request 610a).
  • In some embodiments, complaint C1 is not further inspected at server device 106, while in other embodiments, text analyzer 450 can perform a textual analysis of complaint C1.
  • Complaint C1 can be provided by a user as text and/or as a complaint code, as mentioned above.
  • Upon reception of DUS-test request 612a at DUS 102, the one or more tests are performed. Data resulting from the one or more tests is gathered and communicated from DUS 102 to server device 106 as DUS-related data 614a.
  • FIG. 6D shows that, in response to DUS-related data 614a, server device 106 generates DUS-test request 622a to carry out one or more additional tests (depicted as T1). Server device 106 generates DUS-test request 622a using techniques similar to those described in FIG. 6B for the generation of diagnostic request 620.
  • Upon reception of DUS-test request 622a at DUS 102, the one or more additional tests T1 are performed. Data from the one or more additional tests is gathered and communicated from DUS 102 to server device 106 as DUS-related data 624a.
  • FIG. 6D shows that, in response to DUS-related data 624a, server device 106 generates DUS report 630 with strategy S1 and communicates DUS report 630 to client device 104. Details of the generation of DUS report 630 are described above with respect to FIG. 6C.
  • The remainder of scenario 600a, regarding DUS-report display 632, success-related data 640, and success-related data 642 with F(SS1), is the same as discussed above for scenario 600 in the context of FIG. 6A.
  • In some scenarios not shown in FIG. 6D, server device 106 can send a DUS report in response to DUS-related data 614a (i.e., server device 106 does not request additional tests/data). In other scenarios not shown in FIG. 6D, server device 106 can send two or more DUS-test requests to request additional test(s). In still other scenarios not shown in FIG. 6D, no success-related data is received in response to DUS-report display 632 (i.e., no feedback on strategy S1 is provided to client device 104 and/or server device 106).
  • FIG. 7A shows an example scenario 700 for processing diagnostic request 710, responsively generating DUS-report display 730, and receiving success-related data 732 for the DUS-report display at client device 104.
  • In some scenarios, some or all of the techniques and communications described for client device 104 can be performed by server device 106.
  • Client device 104 can determine one or more tests related to DUS 102 based on received diagnostic request 710 and responsively generate DUS-test request 720 to DUS 102 for performing the one or more tests.
  • In some embodiments, data collector 350 generates DUS-test request 720; DUS-test request 720 can be formatted in accordance with an OBD-II protocol.
  • In scenario 700, the test(s) in DUS-test request 720 relate to a first operating state (shown as “State 1” in FIG. 7A) of DUS 102.
  • Example operating states of DUS 102 include a no-load/lightly-loaded operating state (e.g., an “idle” operating state), various operating states under normal loads (e.g., a “cruising” operating state, a “cranking” operating state), and operating states at or near maximum load (e.g., a “high-speed” operating state). Other operating states are possible as well.
  • Client device 104 can inspect diagnostic request 710 for a complaint (shown in FIG. 7A as “C2” with diagnostic request 710).
  • Complaint C2 can be provided by a user as text and/or as a complaint code, as mentioned above.
  • In some embodiments, client device 104 (e.g., text analyzer 360) can perform a textual analysis of complaint C2.
  • Upon reception of DUS-test request 720 at DUS 102, the one or more tests associated with first operating state State 1 are performed. Data from the one or more tests is gathered and communicated to client device 104 as DUS-related data 722.
  • In some scenarios, one or more additional sequences of DUS-test requests and DUS-related data can be communicated between client device 104 and DUS 102; for example, to communicate additional data required while DUS 102 is operating in first operating state State 1, or to communicate data while DUS 102 is operating in operating state(s) other than State 1.
  • FIG. 7A shows that, in response to receiving DUS-related data 722 , client device 104 generates and communicates DUS-report display 730 related to a strategy S 2 .
  • An example DUS-report display is shown above in Table 1.
  • Scenario 700 continues with client device 104 receiving success-related data 732.
  • FIG. 7A shows success-related data 732 with F(SS3, SS4), which is feedback F for sub-strategies SS3 and SS4 of strategy S2. Feedback on sub-strategies is discussed above in more detail with respect to FIGS. 3 and 6A.
  • In some scenarios not shown in FIG. 7A, no success-related data is received in response to DUS-report display 730 (i.e., no feedback on strategy S2 is provided to client device 104).
  • FIG. 7B shows an example scenario 750 for processing diagnostic request 710 and responsively generating DUS-test request 720 .
  • In scenario 750, diagnostic request 710 with complaint C2 is received at communications interface 310 of client device 104.
  • FIG. 7B shows that complaint query 762 is generated by text analyzer 360 in response to complaint C2 760.
  • Complaint query 762 can include key words/phrases as determined based on textual analysis of complaint C2, such as described above with respect to FIGS. 3 and 6B.
  • Diagnostic request 710 is communicated from communications interface 310 to both rules engine 340 and data collector 350.
  • FIG. 7B shows that complaint C2 is included with diagnostic request 710; but in some embodiments not shown in FIG. 7B, a diagnostic request without complaint C2 can be provided to rules engine 340 and/or data collector 350.
  • Upon reception of diagnostic request 710, rules engine 340 can communicate query 764 with complaint data (shown in FIG. 7B as “Comp 2”) to diagnostic rules and strategy database 770 (shown in FIG. 7B as “Diag Rules/Strat 770”) using the techniques described above in the context of FIGS. 3, 4, and 6B.
  • In response, diagnostic rules and strategy database 770 can communicate differential test request 766 related to an operating state of DUS 102, shown in FIG. 7B as “State 1.”
  • Rules engine 340 can generate DUS-test request 720 to execute test(s) to obtain data related to first operating state State 1 and communicate DUS-test request 720 to communications interface 310. Communications interface 310 can then send DUS-test request 720.
  • In scenario 750, data collector 350 can create or update DUS profile 776, using the techniques described above in the context of FIG. 3.
  • DUS profile 776 can be stored in a database of DUS profiles, such as profile data 772 shown in FIG. 7B .
  • Profile data 772 can be queried to create, update, and retrieve DUS profiles based on profile-related criteria such as described above in the context of FIG. 3 .
  • FIG. 7C shows an example scenario 780 for processing DUS-related data 722 and responsively generating DUS-report display 730 .
  • DUS-related data 722 related to first operating state State 1 of DUS 102 is received at communications interface 310 .
  • In some scenarios, DUS-related data 722 can include complaint C2, which can in turn be analyzed by a text analyzer (e.g., text analyzer 360) to determine a complaint query.
  • However, scenario 780 assumes C2 has already been analyzed by a text analyzer, such as described above with respect to FIGS. 3 and 7B.
  • Upon reception of DUS-related data 722, data collector 350 can update DUS profile 776 as needed to include data related to first operating state State 1, using the techniques described above in the context of FIG. 3.
  • FIG. 7C depicts DUS profile 776 updated to store data for first operating state State 1 (shown in FIG. 7C as “State 1 Data”).
  • Data analyzer 370 can generate a differential analysis by comparing data of DUS 102 while operating in one or more “operating states.”
  • FIG. 7C shows data analyzer 370 communicating State 1 data request 786 to profile data 772 to request data related to first operating state State 1 of DUS 102.
  • In response, profile data 772 retrieves the data related to first operating state State 1 from DUS profile 776 and communicates the retrieved data to data analyzer 370 via State 1 data response 788.
  • Data analyzer 370 can compare data related to first operating state State 1 with aggregated data 796 .
  • In some embodiments, aggregated data 796 can be equivalent to aggregated data 672 discussed above in the context of FIGS. 6B and 6C.
  • In other embodiments, some or all of aggregated data 796 is not stored on client device 104; rather, queries for aggregated data are sent via communications interface 310 for remote processing.
  • Data analyzer 370 can query aggregated data 796 to determine aggregated data 798 for operating state State 1 (shown in FIG. 7C as “State 1 Agg Data”). Upon reception of the data related to first operating state State 1 and aggregated data 798, data analyzer 370 can generate differential analysis (DA) 790.
  • FIG. 8A depicts an example flow chart that illustrates functions 800 for generating differential analysis 790 .
  • Functions 800 begin by setting the operating state value n to 1.
  • At block 820, a grid cell n is determined for the data related to operating state n.
  • FIG. 8B shows an example grid 870 with grid cell 872 corresponding to first operating state State 1 and a grid cell 874 corresponding to second operating state State 2 .
  • Grid 870 is a two-dimensional grid with revolutions per minute (RPM) on the horizontal axis of grid 870 and load on the vertical axis of grid 870 .
  • Load can be determined based on a vacuum reading (e.g., manifold vacuum for a vehicle acting as DUS 102).
  • In some embodiments, each grid cell includes a range of revolutions per minute and a load range. To determine the grid cell for a given operating state, the data related to that operating state can be examined to determine revolutions-per-minute data and load data. The revolutions-per-minute data can be compared with the grid's ranges of revolutions per minute to determine a grid column for the given operating state, and the load data can be compared with the grid's ranges of load data to determine a grid row. The grid cell for the given operating state is then specified by the determined grid row/grid column pair; a minimal sketch of this lookup appears after this paragraph. Other techniques for determining a grid cell for data related to an operating state are possible as well.
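  • For illustration, a minimal Python sketch of the row/column lookup; the RPM and load boundaries are invented stand-ins, since grid 870's actual ranges are not given here.

```python
import bisect

RPM_EDGES = [0, 1000, 2500, 4000, 6000]   # columns: RPM ranges (assumed)
LOAD_EDGES = [0.0, 0.25, 0.5, 0.75, 1.0]  # rows: load ranges (assumed)

def grid_cell(rpm, load):
    # RPM picks the grid column (horizontal axis); load, e.g., derived
    # from a manifold-vacuum reading, picks the grid row (vertical axis).
    col = bisect.bisect_right(RPM_EDGES, rpm) - 1
    row = bisect.bisect_right(LOAD_EDGES, load) - 1
    return row, col

print(grid_cell(800, 0.10))   # (0, 0): an idle-like cell, cf. cell 872
print(grid_cell(2700, 0.45))  # (1, 2): a cruising-like cell, cf. cell 874
```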
  • In some embodiments, grid cells 872 and 874 can indicate that operating state State 1 is an “idle” or similar no/low-load state and that operating state State 2 is a “cruising” or similar operation-under-normal-load state.
  • Many other examples are possible as well, including but not limited to grids with fewer or more grid cells and/or non-square grids.
  • In some embodiments, the data can be verified as related to (or not related to) a specific operating state. For example, suppose that data D1 is received as being related to an “idle” operating state and that G1 is a grid cell determined for D1. By determining that G1 is a grid cell related to the “idle” operating state, D1 can be verified as being taken from the specific “idle” operating state.
  • As another example, let D2 be data from a test requested for a “cruising” operating state, let G2 be a grid cell determined for D2 using the techniques mentioned above, and suppose that G2 does not relate to the “cruising” operating state (e.g., G2 relates to the idle operating state instead). Since D2 is not related to a grid cell for the specific “cruising” operating state, D2 is not verified to be in the specific “cruising” operating state.
  • In this case, a request can be generated to re-execute a test in the appropriate operating state. For example, since D2 was not verified as being from the “cruising” operating state, another test can be requested to generate data from the “cruising” operating state.
  • In some embodiments, the data can be verified by techniques other than use of the grid cell. For example, a vehicle can be known, perhaps by direct observation and/or by data not used to assign grid cells, to be operating in a given operating state. For instance, a driver of a vehicle operating in a “cruising” operating state could state that “I know I was consistently driving between 30 and 35 MPH throughout the test.” In that case, the data can be verified as being from the given “cruising” operating state.
  • In such a case, erroneous data that was used to assign the data to grid cells (and thus to operating states) and that failed to indicate the vehicle was in the “cruising” operating state is itself indicative of a problem. Consequent repair strategies to correct the causes of the erroneous data can be utilized to address the problem.
  • Next, aggregated data n is determined based on grid cell n. For example, data analyzer 370 can query aggregated data 796 to retrieve data that is both related to the DUS and within grid cell n (i.e., data taken within the ranges of revolutions per minute and load for grid cell n). Alternatively, data analyzer 370 can query aggregated data 796 to retrieve data related to the DUS and then filter the retrieved data for data within grid cell n, thus determining aggregated data n. Other techniques for determining aggregated data n are possible as well.
  • At block 840, a differential analysis list (DA list) n is generated based on a comparison of the data related to operating state n and aggregated data n. For example, the differential analysis list can be generated from values of the data related to operating state n that differ from aggregated data n. Example techniques for determining differences between a data value related to operating state n and a value of aggregated data n include determining that:
  • the data value is not the same as a value of the aggregated data,
  • the data value is not within a range of data values of the aggregated data,
  • the data value is either above or below a threshold value of the value(s) of the aggregated data,
  • the data value does not match one or more values of the aggregated data,
  • each of a number of data values is not the same as, not within a range of, and/or not within a threshold of an aggregated data value,
  • computation(s) on the data value(s), perhaps including reference values, is/are compared to the reference values, and/or
  • negations of these conditions hold. A minimal sketch of one such technique follows this list.
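  • As a minimal sketch, the “not within a range” technique from the list above could look as follows; the data layout (sensor name mapped to a range of aggregated values) is an assumption for illustration.

```python
def differential_analysis_list(state_data, aggregated_ranges):
    # Flag each data value for operating state n that falls outside
    # the corresponding range of aggregated data n.
    da_list = []
    for sensor, value in state_data.items():
        low, high = aggregated_ranges[sensor]
        if not (low <= value <= high):
            da_list.append((sensor, value, (low, high)))
    return da_list

state1 = {"rpm": 820, "vacuum_inhg": 12.0}
agg1 = {"rpm": (650, 900), "vacuum_inhg": (16.0, 22.0)}
print(differential_analysis_list(state1, agg1))
# [('vacuum_inhg', 12.0, (16.0, 22.0))]
```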
  • In some embodiments, a statistical analysis of the data related to operating state n and/or aggregated data n can be used to generate the differential analysis list n.
  • For example, the statistical screening techniques discussed above in the context of FIG. 3 can be applied to the data related to operating state n and/or aggregated data n.
  • In particular, the statistical screening can involve generating one or more statistics for aggregated data n and then comparing the data related to operating state n based on the generated statistics.
  • For example, suppose the data related to operating state n included a measurement value Mn taken using a sensor Sens1, and that aggregated data n from sensor Sens1 indicated a mean measurement value of AggMeanMn with a standard deviation of AggSDMn. Then, a number of standard deviations NSDMn of Mn from the mean AggMeanMn could be determined, perhaps using the formula

  NSDMn = |AggMeanMn − Mn| / AggSDMn.
  • In some embodiments, the measurement value Mn could be rated based on the number of standard deviations NSDMn and one or more threshold values. For example, suppose the ratings in Table 3 were used to evaluate the number of standard deviations NSDMn.
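  • Since Table 3 itself is not reproduced in this text, the following sketch uses purely hypothetical rating labels and cutoffs to illustrate threshold-based rating of NSDMn:

```python
def rate_nsd(nsd):
    # Hypothetical stand-ins for Table 3's ratings; the actual labels
    # and threshold values in Table 3 may differ.
    if nsd < 1.0:
        return "typical"
    elif nsd < 2.0:
        return "borderline"
    return "atypical"

nsd_mn = abs(102.0 - 96.5) / 2.0   # |AggMeanMn - Mn| / AggSDMn = 2.75
print(rate_nsd(nsd_mn))            # atypical
```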
  • In some embodiments, the techniques described above for the example measurement Mn, as well as more advanced statistical analysis techniques including variances, correlations, and/or principal component analyses, can be applied to multiple variables (e.g., measurement Mn and other measurements Mn1, Mn2, . . . ) to perform a “multi-variable analysis” of the data related to operating state n and the aggregated data. Further, relationships between two or more variables of the data related to operating state n and the aggregated data can be examined during the multi-variable analysis.
  • One or more variables of the data, or principal contributors, can be chosen that (a) are related to the operating state n and/or the aggregated data and (b) separate the data related to the operating state n and/or the aggregated data into different categories.
  • In some embodiments, the principal contributors can be determined through operations on the aggregated database using techniques that identify a reduced set of principal basis vectors and most-likely failure vectors.
  • The techniques include, but are not limited to, singular value decomposition (SVD), correlations, and/or variances. Projecting these vectors onto the space of real vehicle parameters and variables gives rise to the diagnostic strategies and prognostics for a vehicle.
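  • As one minimal sketch of this idea (numpy assumed; the data matrix is invented), SVD can expose a reduced set of principal basis vectors for an aggregated data set:

```python
import numpy as np

# Rows: vehicles sharing a complaint; columns: parameters such as
# RPM, manifold vacuum (inHg), and coolant temperature (deg F).
X = np.array([[820.0, 12.1, 205.0],
              [650.0, 17.8, 192.0],
              [700.0, 18.2, 190.0],
              [840.0, 11.5, 208.0]])

Xc = X - X.mean(axis=0)                  # center each parameter
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 1                                    # keep the strongest component
principal_basis = Vt[:k]                 # reduced principal basis vectors
print(S)                                 # singular values, largest first
print(principal_basis)                   # dominant direction in the data
```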
  • Multi-variable correlation analysis can be used to compare data related to operating state n and aggregated data n. For example, suppose a vector V_AD includes a number SN of sensor values of aggregated data related to a particular complaint, including values for one or more principal components, and also suppose that the data related to operating state n includes a vector V_NAD of SN sensor values of non-aggregated data from a device-under-service with the particular complaint, also including values for one or more principal components.
  • Then, a correlation analysis can be run between the data in the vectors V_AD and V_NAD.
  • For example, a “pattern correlation” or Pearson product-moment correlation coefficient can be calculated between V_AD and V_NAD.
  • In some embodiments, the Pearson product-moment correlation coefficient ρ for the vectors V_AD and V_NAD can be determined as

  ρ(V_AD, V_NAD) = cov(V_AD, V_NAD) / (σ(V_AD) · σ(V_NAD)),

where −1 ≤ ρ ≤ +1, cov(X, Y) is the covariance of X and Y, and σ(X) is the standard deviation of X.
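  • A short sketch (numpy assumed; the sensor values are invented) of computing this pattern correlation between V_AD and V_NAD:

```python
import numpy as np

v_ad = np.array([14.7, 820.0, 18.0, 198.0])   # aggregated sensor values
v_nad = np.array([14.1, 845.0, 16.5, 203.0])  # non-aggregated DUS values

# rho = cov(X, Y) / (sigma(X) * sigma(Y)); np.corrcoef returns the
# 2x2 correlation matrix, whose off-diagonal entry is rho.
rho = np.corrcoef(v_ad, v_nad)[0, 1]
print(rho)  # a value between -1 and +1
```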
  • Another simplified example of multi-variable analysis can involve generating an n-dimensional space from a data set, such as the aggregated data, baseline data, and/or reference data, for one or more devices-under-test.
  • Each dimension in the n-dimensional space can represent a value of interest of a device-under-test, such as, but not limited to values of device-related data, values of sensor data from the device-under-test, reference and/or baseline data values, and statistics based on these values.
  • In some embodiments, rules engine 340 and/or rules engine 440 can utilize the n-dimensional space.
  • For example, rules engine 340 and/or rules engine 440 can receive n-dimensional input vector(s) corresponding to one or more measurements taken from one or more tests and perform vector and/or other operations to compare an input n-dimensional vector to one or more n-dimensional vectors of baseline data that share a basis with the input n-dimensional vector.
  • That is, test data can be mapped into the n-dimensional vector space as an n-dimensional vector, and rules engine 340 and/or rules engine 440 can process the n-dimensional vector(s) of baseline data and/or the input n-dimensional vector(s).
  • For example, suppose a 4-dimensional space whose dimensions are: tire pressure tp (in PSI), tire mileage tm (in miles), tire temperature tt (in degrees Fahrenheit), and tire age ta (in years). For this example, a basis set of vectors for this space can be chosen (e.g., four unit vectors, one per dimension). A 4-dimensional vector using the example basis set of vectors for a test result indicating that a 3-year-old tire has a pressure of 30 pounds/square inch is then [30 0 0 3]^T.
  • Dimensions in the n-dimensional vector space can be classified. For example, some dimensions can be classified as “static” dimensions, while others can be classified as “dynamic” dimensions.
  • A static dimension is a dimension that cannot be readily changed during a repair session of a DUS, if at all. For example, the tire age ta dimension, when expressed in years, cannot be readily changed during a one-day repair session.
  • In contrast, dynamic dimensions can readily be changed during a repair session. For example, the tire pressure tp dimension can be changed by a technician using an air hose to add air and thus increase tp, or a screwdriver to release air and thus decrease tp.
  • Thus, measurements related to static dimensions can be used to classify one or more components of a DUS, while measurements related to dynamic dimensions can be adjusted during maintenance and/or repair procedures.
  • In some embodiments, baseline values can be determined in the n-dimensional space, and adjustments to the device-under-test that correspond to the values of interest can be performed to align test-related data with the baseline values.
  • For example, suppose baseline data for a 3-year-old tire at 70 degrees Fahrenheit is [28 tm 70 3]^T, where tm is in the range of 10000·ta to 20000·ta; that is, between 30,000 and 60,000 miles. Suppose also that an example input vector of test-related data is [37 50000 70 3]^T. Comparing the input vector with the baseline data shows that the tire pressure tp of 37 PSI exceeds the baseline value of 28 PSI.
  • Rules engine 340 and/or rules engine 440 can then fire a rule to provide a strategy or sub-strategy to lower the tire pressure.
  • For example, a strategy or sub-strategy could be that tire pressure can be lowered by pressing a screwdriver onto a pin of a valve stem, permitting air to escape and thus lowering the air pressure.
  • In some embodiments, rules engine 340 and/or rules engine 440 can determine that the tp dimension is a dynamic dimension and thus that a sub-strategy can be used to adjust the tp value for the DUT. Based on this determination, rules engine 340 and/or rules engine 440 can identify the above-mentioned sub-strategy to lower tire pressure.
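  • A compact Python sketch of the tire example, with dimension order [tp, tm, tt, ta]; the static/dynamic split and the 28 PSI and 10000·ta-20000·ta baselines follow the text, while the helper names are assumptions.

```python
DYNAMIC_DIMS = {0}       # tp: adjustable during a repair session
STATIC_DIMS = {1, 2, 3}  # tm, tt, ta: not readily changeable

def baseline_ok(vec):
    tp, tm, tt, ta = vec
    # Baseline from the example: tp of 28 PSI and mileage tm between
    # 10000*ta and 20000*ta.
    return tp == 28 and 10000 * ta <= tm <= 20000 * ta

def substrategies(vec):
    tp, tm, tt, ta = vec
    out = []
    # tp is a dynamic dimension, so a sub-strategy can adjust it, e.g.,
    # pressing a screwdriver onto the valve-stem pin to release air.
    if 0 in DYNAMIC_DIMS and tp > 28:
        out.append("lower tire pressure to 28 PSI")
    return out

test_vec = [37, 50000, 70, 3]   # input vector from the example above
print(baseline_ok(test_vec))    # False: tp is 37 rather than 28
print(substrategies(test_vec))  # ['lower tire pressure to 28 PSI']
```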
  • Next, a comparison is made between the operating state value n and a maximum number of operating states (“MaxOS” in FIG. 8A). In the example grid of FIG. 8B, the maximum number of operating states is two. If the operating state value n is greater than or equal to the maximum number of operating states, the functions 800 continue at block 860; otherwise, the functions 800 continue at block 852.
  • At block 852, the operating state value n is incremented by 1 and the functions 800 continue at block 820.
  • At block 860, differential analysis 790 is determined by combining the differential analysis lists n, where n ranges from 1 to the maximum number of operating states.
  • For example, the differential analysis lists can be combined by concatenating all of the lists, taking a union of the differential analysis lists, selecting some but not all data from each differential analysis list and/or some but not all differential analysis lists, and/or filtering each list for common differences.
  • Other techniques for combining the differential analysis lists are possible as well.
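  • Two of the combining options named above, concatenation and filtering for common differences, in a brief sketch (list-entry layout assumed from the earlier sketch):

```python
def combine_concat(da_lists):
    # Concatenate all differential-analysis lists into one.
    return [item for da in da_lists for item in da]

def combine_common(da_lists):
    # Keep only differences (keyed by sensor name) present in the
    # differential-analysis list of every operating state.
    common = set.intersection(*(set(s for s, *_ in da) for da in da_lists))
    return [item for da in da_lists for item in da if item[0] in common]

da1 = [("vacuum_inhg", 12.0, (16.0, 22.0)), ("rpm", 980, (650, 900))]
da2 = [("vacuum_inhg", 9.5, (14.0, 20.0))]
print(combine_concat([da1, da2]))  # all three entries
print(combine_common([da1, da2]))  # only the vacuum entries
```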
  • In some embodiments, the maximum number of operating states can be equal to one.
  • In that case, the differential analysis would involve the comparison of block 840 between data related to operating state 1 and aggregated data for operating state 1, and the combining operation of block 860 could simply return the differential analysis list for operating state 1.
  • That is, the differential analysis for only one operating state involves a comparison of data related to that operating state and aggregated data for that operating state.
  • In other words, functions 800 can be used to generate a differential analysis by comparing data related to one or more operating states with aggregated data for those one or more operating states, utilizing a grid of operating states.
  • Returning to scenario 780, data analyzer 370 can communicate differential analysis 790 to rules engine 340.
  • Upon reception of differential analysis 790, rules engine 340 can communicate query 792 with previously-determined complaint data (shown in FIG. 7C as “Comp 2”) and differential analysis 790 to diagnostic rules and strategy database 770 (shown in FIG. 7C as “Diag Rules/Strat 770”) using the techniques described above in the context of FIGS. 3 and 4.
  • In response, diagnostic rules and strategy database 770 can communicate strategy 794 (shown in FIG. 7C as “S2+”), including one or more rules and associated sub-strategies, to rules engine 340.
  • Upon reception of strategy 794, rules engine 340 can determine which rule(s) fire and their associated sub-strategy/sub-strategies.
  • In scenario 780, rules engine 340 generates DUS-report display 730, which can include some or all of differential analysis 790 and/or some or all of the sub-strategies of strategy 794 (collectively shown in FIG. 7C as “S2”), and communicates DUS-report display 730 to communications interface 310. Communications interface 310 can then send DUS-report display 730.
  • FIG. 9 depicts an example flow chart that illustrates functions 900 that can be carried out in accordance with an example embodiment.
  • the functions 900 can be carried out by one or more devices, such as server device 106 and/or client device 104 described in detail above in the context of FIGS. 1-7C .
  • Block 910 includes receiving DUS-related data for a device under service.
  • For example, the DUS-related data could be received in a DUS-related data communication, such as described above in detail with respect to at least FIGS. 6A-7C.
  • In some embodiments, the DUS-related data includes DUS-test data obtained from a DUS test performed on the DUS.
  • Block 920 includes determining that the DUS-related data is to be aggregated into aggregated data.
  • In some embodiments, the determination to aggregate the DUS-related data can be based on a classification of the DUS-related data. The determination to aggregate the DUS-related data is described above in detail with respect to at least FIGS. 4, 6A, 6B, and 6C.
  • In particular embodiments, determining that the DUS-related data is to be aggregated includes: determining one or more DUS attributes from the DUS-related data, selecting baseline data from the aggregated data based on the one or more DUS attributes, generating a baseline comparison between the DUS-test data and the baseline data, determining the classification for the DUS-related data based on the baseline comparison, and aggregating the DUS-related data into the aggregated data based on the classification.
  • Block 930 includes generating an aggregated-data comparison of the DUS-related data and the aggregated data. Comparisons of DUS-related data and aggregated data are described above in detail with respect to at least FIGS. 3, 4, and 6A-8B.
  • In some embodiments, generating the aggregated-data comparison of the DUS-related data and the aggregated data includes: (i) determining a basis of one or more vectors representing at least part of the aggregated data, (ii) determining a baseline-data vector of the baseline data, the baseline-data vector utilizing the basis, (iii) determining a DUS-data vector of the DUS-related data, the DUS-data vector utilizing the basis, and (iv) determining a vector difference between the baseline-data vector and the DUS-data vector.
  • In other embodiments, generating the aggregated-data comparison of the DUS-related data and the aggregated data includes generating a pattern correlation between at least some of the DUS-related data and at least some of the aggregated data. Pattern correlations are discussed above in more detail at least in the context of FIG. 8A.
  • Block 940 includes generating a DUS report based on the aggregated-data comparison.
  • In some embodiments, the DUS report can include one or more sub-strategies; and in particular ones of these embodiments, at least one of the one or more sub-strategies can include a sub-strategy-success estimate. DUS reports, sub-strategies, and sub-strategy-success estimates are described above in detail with respect to at least FIGS. 3, 4, and 6A-8B.
  • In some embodiments, the DUS-related data includes complaint data; in these embodiments, generating the DUS report includes generating the DUS report based on the complaint data. In particular ones of these embodiments, generating the DUS report includes: determining at least one complaint based on the complaint data, generating a query based on the at least one complaint, querying a rules engine of the device using the query, and, in response to the query, receiving the one or more sub-strategies.
  • In some embodiments, the complaint data includes complaint text; in these embodiments, determining the at least one complaint includes: generating a textual analysis of the complaint text and determining the at least one complaint based on the textual analysis.
  • In some embodiments, the DUS-related data includes DUS-test data obtained from a first test performed on the DUS; in these embodiments, generating the aggregated-data comparison includes performing a statistical analysis of the DUS-test data and the aggregated data, and generating the DUS report includes generating the query based on the statistical analysis and the at least one complaint.
  • In some embodiments, the DUS-related data and the aggregated data each comprise data for a plurality of variables, and performing the statistical analysis comprises performing a multi-variable analysis of the data for at least two variables of the plurality of variables.
  • In some embodiments, generating the DUS report based on the aggregated-data comparison can include determining at least one of the one or more sub-strategies based on a vector difference. Use of vector differences to determine sub-strategies is discussed above in more detail at least in the context of FIG. 8A.
  • In some embodiments in which the DUS-related data includes complaint data, generating the aggregated-data comparison of the DUS-related data and the aggregated data includes: (i) determining a reduced data set of the aggregated data based on the complaint data, (ii) determining a set of basis vectors based on the reduced data set, and (iii) identifying one or more principal parameter components for a complaint in the complaint data based on a projection of the basis vectors onto the DUS-related data.
  • In these embodiments, generating the DUS report based on the aggregated-data comparison includes: (iv) applying one or more rules about the principal parameter components, and (v) determining a sub-strategy based on the applied one or more rules.
  • Block 950 includes sending the DUS report. Sending the DUS report is described in more detail with respect to at least FIGS. 3, 4, 6A, 6C, 7A, and 7C.
  • In some embodiments, functions 900 can further include generating, at the device, a diagnostic request based on the aggregated-data comparison, where the diagnostic request requests data related to a second DUS test performed on the DUS.
  • In particular embodiments, the diagnostic request includes instructions for performing the second DUS test.
  • In some embodiments, functions 900 can further include receiving, at the device, success-related data on a first sub-strategy of the one or more sub-strategies, and adjusting the sub-strategy-success estimate of at least the first sub-strategy based on the success-related data.
  • FIG. 10 depicts an example flow chart that illustrates functions 1000 that can be carried out in accordance with an example embodiment.
  • the functions 1000 can be carried out by one or more devices, such as server device 106 and/or client device 104 described in detail above in the context of FIGS. 1-7C .
  • Block 1010 includes receiving a diagnostic request for a DUS. Diagnostic requests for devices under service are described above in detail with respect to at least FIGS. 3-7C .
  • Block 1020 includes sending a DUS-test request to perform a test related to the diagnostic request.
  • DUS-test requests and tests of DUSs are described above in detail with respect to at least FIGS. 3, 4, and 6A-7C.
  • Block 1030 includes receiving DUS-related data based on the test. Receiving DUS-related data is described above in detail with respect to at least FIGS. 3, 4, and 6A-7C.
  • Block 1040 includes sending the DUS-related data.
  • Sending DUS-related data is described above in detail with respect to at least FIGS. 3, 4, and 6A-7C.
  • In some embodiments, the DUS-related data is sent via a network-communication interface.
  • Block 1050 includes receiving a DUS report based on the DUS-related data.
  • DUS reports are described above in detail with respect to at least FIGS. 3, 4, and 6A-7C.
  • In some embodiments, the DUS report is received via a network-communication interface.
  • Block 1060 includes generating a DUS-report display of the DUS report. Generating the DUS-report display is described in more detail with respect to at least FIGS. 3, 6A, 7A, and 7C. In some embodiments, the DUS-report display is displayed via a user interface.
  • FIG. 11 depicts an example flow chart that illustrates functions 1100 that can be carried out in accordance with an example embodiment.
  • the functions 1100 can be carried out by one or more devices, such as server device 106 and/or client device 104 described in detail above in the context of FIGS. 1-7C .
  • Block 1110 includes receiving a diagnostic request to diagnose a DUS. Diagnostic requests for devices-under-service are described above in detail with respect to at least FIGS. 3-7C .
  • Block 1120 includes determining a test based on the diagnostic request.
  • In some embodiments, the test can be related to a first operating state of the DUS. Operating states of devices-under-service and tests related to those operating states are discussed above in detail with respect to at least FIGS. 3, 4, and 7A-7C.
  • In particular embodiments, the test includes a plurality of tests for the DUS.
  • Block 1130 includes requesting performance of the test at the first operating state of the DUS. Operating states of devices-under-service and tests at those operating states are discussed above in detail with respect to at least FIGS. 3, 4, and 7A-7C.
  • Block 1140 includes receiving first-operating-state data for the DUS based on the test. Operating states of devices-under-service and data from tests at those operating states are discussed above in detail with respect to at least FIGS. 3, 4, and 7A-7C.
  • In some embodiments, the first-operating-state data includes data from at least two sensors associated with the DUS.
  • Block 1150 includes verifying that the first-operating-state data is or is not related to the first operating state.
  • In some embodiments, verifying that the first-operating-state data is related to the first operating state includes: determining a first grid cell for the first-operating-state data, determining an operating state related to the first grid cell, and determining that the operating state related to the first grid cell is the first operating state.
  • Similarly, verifying that the first-operating-state data is not related to the first operating state includes: determining a first grid cell for the first-operating-state data, determining an operating state related to the first grid cell, and determining that the operating state related to the first grid cell is not the first operating state. Verifying that data is or is not related to an operating state is discussed above in more detail with respect to at least FIGS. 8A and 8B.
  • Block 1170 includes generating a differential analysis of the first-operating-state data. Differential analyses of data from devices-under-service are discussed above in detail with respect to at least FIGS. 3, 4, and 7A-8B. In some embodiments, generating the differential analysis includes: determining first aggregated data for a first grid cell, and generating a first differential-analysis list for the first operating state based on a comparison of the first-operating-state data and the first aggregated data.
  • Block 1180 includes generating a DUS-report display.
  • In some embodiments, the DUS-report display can be based on the differential analysis. Generating DUS-report displays is discussed above in detail with respect to at least FIGS. 3, 4, and 6A-7C.
  • Block 1190 includes sending the DUS-report display. Sending DUS-report displays is discussed above in detail with respect to at least FIGS. 3, 4, and 6A-7C.

US13/031,565 2011-02-21 2011-02-21 Diagnostic Baselining Abandoned US20120215491A1 (en)

Priority Applications (9)

Application Number Priority Date Filing Date Title
US13/031,565 US20120215491A1 (en) 2011-02-21 2011-02-21 Diagnostic Baselining
PCT/US2012/025802 WO2012115899A2 (en) 2011-02-21 2012-02-20 Diagnostic baselining
BR112013020413-3A BR112013020413B1 (pt) 2011-02-21 2012-02-20 método e dispositivo de cliente que realiza tal método
CA2827893A CA2827893C (en) 2011-02-21 2012-02-20 Diagnostic baselining
CN201280019046.7A CN103477366B (zh) 2011-02-21 2012-02-20 用于诊断服务中设备的方法和设备
CA3171201A CA3171201A1 (en) 2011-02-21 2012-02-20 Diagnostic baselining
EP12716731.0A EP2678832B1 (en) 2011-02-21 2012-02-20 Diagnostic baselining
US14/260,929 US11048604B2 (en) 2011-02-21 2014-04-24 Diagnostic baselining
US17/325,184 US20210279155A1 (en) 2011-02-21 2021-05-19 Diagnostic Baselining

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/031,565 US20120215491A1 (en) 2011-02-21 2011-02-21 Diagnostic Baselining

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/260,929 Continuation US11048604B2 (en) 2011-02-21 2014-04-24 Diagnostic baselining

Publications (1)

Publication Number Publication Date
US20120215491A1 true US20120215491A1 (en) 2012-08-23

Family

ID=46001707

Family Applications (3)

Application Number Title Priority Date Filing Date
US13/031,565 Abandoned US20120215491A1 (en) 2011-02-21 2011-02-21 Diagnostic Baselining
US14/260,929 Active 2032-01-15 US11048604B2 (en) 2011-02-21 2014-04-24 Diagnostic baselining
US17/325,184 Pending US20210279155A1 (en) 2011-02-21 2021-05-19 Diagnostic Baselining

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/260,929 Active 2032-01-15 US11048604B2 (en) 2011-02-21 2014-04-24 Diagnostic baselining
US17/325,184 Pending US20210279155A1 (en) 2011-02-21 2021-05-19 Diagnostic Baselining

Country Status (6)

Country Link
US (3) US20120215491A1 (zh)
EP (1) EP2678832B1 (zh)
CN (1) CN103477366B (zh)
BR (1) BR112013020413B1 (zh)
CA (2) CA3171201A1 (zh)
WO (1) WO2012115899A2 (zh)

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130090790A1 (en) * 2011-10-06 2013-04-11 GM Global Technology Operations LLC Acquisition of in-vehicle sensor data and rendering of aggregate average performance indicators
US20130198217A1 (en) * 2012-01-27 2013-08-01 Microsoft Corporation Techniques for testing rule-based query transformation and generation
US20130304278A1 (en) * 2012-05-09 2013-11-14 Ieon C. Chen Smart Phone App-Based Remote Vehicle Diagnostic System and Method
US9201930B1 (en) 2014-05-06 2015-12-01 Snap-On Incorporated Methods and systems for providing an auto-generated repair-hint to a vehicle repair tool
US20160093123A1 (en) * 2014-09-25 2016-03-31 Volkswagen Ag Diagnostic procedures and method of collecting vehicles
US9317574B1 (en) 2012-06-11 2016-04-19 Dell Software Inc. System and method for managing and identifying subject matter experts
US9336244B2 (en) 2013-08-09 2016-05-10 Snap-On Incorporated Methods and systems for generating baselines regarding vehicle service request data
US9349016B1 (en) 2014-06-06 2016-05-24 Dell Software Inc. System and method for user-context-based data loss prevention
US9390240B1 (en) * 2012-06-11 2016-07-12 Dell Software Inc. System and method for querying data
US9501744B1 (en) 2012-06-11 2016-11-22 Dell Software Inc. System and method for classifying data
US9563782B1 (en) 2015-04-10 2017-02-07 Dell Software Inc. Systems and methods of secure self-service access to content
US9569626B1 (en) 2015-04-10 2017-02-14 Dell Software Inc. Systems and methods of reporting content-exposure events
US9578060B1 (en) 2012-06-11 2017-02-21 Dell Software Inc. System and method for data loss prevention across heterogeneous communications platforms
US9641555B1 (en) 2015-04-10 2017-05-02 Dell Software Inc. Systems and methods of tracking content-exposure events
WO2017079356A1 (en) * 2015-11-05 2017-05-11 Snap-On Incorporated Post-repair data comparison
CN106758945A (zh) * 2017-01-12 2017-05-31 中山市易达号信息技术有限公司 一种车位锁智能诊断方法
US9842220B1 (en) 2015-04-10 2017-12-12 Dell Software Inc. Systems and methods of secure self-service access to content
US9842218B1 (en) 2015-04-10 2017-12-12 Dell Software Inc. Systems and methods of secure self-service access to content
US20180017608A1 (en) * 2016-07-12 2018-01-18 Ford Motor Company Of Canada, Limited Electrical in-system process control tester
US20180032942A1 (en) * 2016-07-26 2018-02-01 Mitchell Repair Information Company, Llc Methods and Systems for Tracking Labor Efficiency
US9990506B1 (en) 2015-03-30 2018-06-05 Quest Software Inc. Systems and methods of securing network-accessible peripheral devices
US20180174221A1 (en) * 2016-12-15 2018-06-21 Snap-On Incorporated Methods and Systems for Automatically Generating Repair Orders
US20180247469A1 (en) * 2015-02-25 2018-08-30 Snap-On Incorporated Methods and Systems for Generating and Outputting Test Drive Scripts for Vehicles
US20180276208A1 (en) * 2017-03-27 2018-09-27 Dell Products, Lp Validating and Correlating Content
US10142391B1 (en) 2016-03-25 2018-11-27 Quest Software Inc. Systems and methods of diagnosing down-layer performance problems via multi-stream performance patternization
US10157358B1 (en) 2015-10-05 2018-12-18 Quest Software Inc. Systems and methods for multi-stream performance patternization and interval-based prediction
US10218588B1 (en) 2015-10-05 2019-02-26 Quest Software Inc. Systems and methods for multi-stream performance patternization and optimization of virtual meetings
US10227053B2 (en) * 2014-05-08 2019-03-12 Panasonic Intellectual Property Corporation Of America In-vehicle network system, electronic control unit, and update processing method
US10326748B1 (en) 2015-02-25 2019-06-18 Quest Software Inc. Systems and methods for event-based authentication
US10417613B1 (en) 2015-03-17 2019-09-17 Quest Software Inc. Systems and methods of patternizing logged user-initiated events for scheduling functions
US10432659B2 (en) * 2015-09-11 2019-10-01 Curtail, Inc. Implementation comparison-based security system
US10462256B2 (en) 2016-02-10 2019-10-29 Curtail, Inc. Comparison of behavioral populations for security and compliance monitoring
US10516768B2 (en) 2015-11-11 2019-12-24 Snap-On Incorporated Methods and systems for switching vehicle data transmission modes based on detecting a trigger and a request for a vehicle data message
US10536352B1 (en) 2015-08-05 2020-01-14 Quest Software Inc. Systems and methods for tuning cross-platform data collection
US10706645B1 (en) * 2016-03-09 2020-07-07 Drew Technologies, Inc. Remote diagnostic system and method
US11048604B2 (en) 2011-02-21 2021-06-29 Snap-On Incorporated Diagnostic baselining
US11609559B2 (en) 2019-12-30 2023-03-21 Industrial Technology Research Institute Data processing system and method

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9813308B2 (en) * 2014-06-04 2017-11-07 Verizon Patent And Licensing Inc. Statistical monitoring of customer devices
US11210871B2 (en) * 2015-08-05 2021-12-28 EZ Lynk SEZC System and method for remote emissions control unit monitoring and reprogramming
RU2626168C2 (ru) * 2015-12-30 2017-07-21 TMH-Service LLC Method for technical diagnostics of locomotive equipment and device for its implementation
CN105740086B (zh) * 2016-01-20 2019-01-08 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and device for intelligent fault diagnosis and repair
CN107450507B (zh) * 2016-05-31 2021-03-09 Youxinpai (Beijing) Information Technology Co., Ltd. Information processing intermediate system and method
US10049512B2 (en) * 2016-06-20 2018-08-14 Ford Global Technologies, Llc Vehicle puddle lights for onboard diagnostics projection
US10055260B2 (en) * 2017-01-05 2018-08-21 Guardknox Cyber Technologies Ltd. Specially programmed computing systems with associated devices configured to implement centralized services ECU based on services oriented architecture and methods of use thereof
US11842149B2 (en) 2018-03-02 2023-12-12 General Electric Company System and method for maintenance of a fleet of machines

Family Cites Families (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6225234A (ja) * 1985-07-26 1987-02-03 Nissan Motor Co Ltd Fault diagnosis device for a vehicle
JPH05280395A (ja) 1992-03-30 1993-10-26 Fuji Heavy Ind Ltd Abnormality detection method for an air-fuel ratio control system
US5520160A (en) 1993-08-26 1996-05-28 Nippondenso Co., Ltd. Fuel evaporative gas and air-fuel ratio control system
US5574828A (en) * 1994-04-28 1996-11-12 Tmrc Expert system for generating guideline-based information tools
US6301531B1 (en) 1999-08-23 2001-10-09 General Electric Company Vehicle maintenance management system and method
US6513025B1 (en) * 1999-12-09 2003-01-28 Teradyne, Inc. Multistage machine learning process
US20020007237A1 (en) * 2000-06-14 2002-01-17 Phung Tam A. Method and system for the diagnosis of vehicles
US20020007289A1 (en) 2000-07-11 2002-01-17 Malin Mark Elliott Method and apparatus for processing automobile repair data and statistics
US20020016655A1 (en) * 2000-08-01 2002-02-07 Joao Raymond Anthony Apparatus and method for processing and/or for providing vehicle information and/or vehicle maintenance information
US6687596B2 (en) * 2001-08-31 2004-02-03 General Electric Company Diagnostic method and system for turbine engines
KR100497128B1 (ko) * 2001-12-08 2005-06-29 Electronics and Telecommunications Research Institute System and method for diagnosing vehicle performance
US6760659B1 (en) * 2002-11-26 2004-07-06 Controls, Inc. Device and method for engine control
US6850071B1 (en) 2003-08-28 2005-02-01 Automotive Test Solutions, Inc. Spark monitor and kill circuit
JP4509602B2 (ja) 2004-02-27 2010-07-21 Fuji Heavy Industries Ltd. Operator-side system and method for specifying a mode file
US7509538B2 (en) * 2004-04-21 2009-03-24 Microsoft Corporation Systems and methods for automated classification and analysis of large volumes of test result data
US6955097B1 (en) 2004-05-11 2005-10-18 Bei Sensors & Systems Company, Inc. Radial movement capacitive torque sensor
JP4032045B2 (ja) * 2004-08-13 2008-01-16 Shin Caterpillar Mitsubishi Ltd. Data processing method and data processing device, and diagnosis method and diagnosis device
US20060095230A1 (en) * 2004-11-02 2006-05-04 Jeff Grier Method and system for enhancing machine diagnostics aids using statistical feedback
US8412401B2 (en) * 2004-12-30 2013-04-02 Service Solutions U.S. Llc Method and system for retrieving diagnostic information from a vehicle
US7444216B2 (en) * 2005-01-14 2008-10-28 Mobile Productivity, Inc. User interface for display of task specific information
KR100764399B1 (ko) 2005-08-23 2007-10-05 Hyundai Autonet Co., Ltd. Vehicle management system and method for a telematics system
JP4701977B2 (ja) * 2005-10-06 2011-06-15 Denso Corp Diagnosis system for an in-vehicle network and in-vehicle control device
US20070156311A1 (en) * 2005-12-29 2007-07-05 Elcock Albert F Communication of automotive diagnostic data
US7739007B2 (en) * 2006-03-29 2010-06-15 Snap-On Incorporated Vehicle diagnostic method and system with intelligent data collection
US7801671B1 (en) 2006-09-05 2010-09-21 Pederson Neal R Methods and apparatus for detecting misfires
US7765040B2 (en) * 2006-06-14 2010-07-27 Spx Corporation Reverse failure analysis method and apparatus for diagnostic testing
JP2008121534A (ja) 2006-11-10 2008-05-29 Denso Corp Abnormality diagnosis device for an internal combustion engine
US7487035B2 (en) 2006-11-15 2009-02-03 Denso Corporation Cylinder abnormality diagnosis unit of internal combustion engine and controller of internal combustion engine
US7529974B2 (en) * 2006-11-30 2009-05-05 Microsoft Corporation Grouping failures to infer common causes
US7945438B2 (en) 2007-04-02 2011-05-17 International Business Machines Corporation Automated glossary creation
US8441484B2 (en) * 2007-09-04 2013-05-14 Cisco Technology, Inc. Network trouble-tickets displayed as dynamic multi-dimensional graph
JP4826609B2 (ja) 2008-08-29 2011-11-30 Toyota Motor Corp Vehicle abnormality analysis system and vehicle abnormality analysis method
US8315760B2 (en) * 2008-12-03 2012-11-20 Mitchell Repair Information Company LLC Method and system for retrieving diagnostic information
US8095261B2 (en) 2009-03-05 2012-01-10 GM Global Technology Operations LLC Aggregated information fusion for enhanced diagnostics, prognostics and maintenance practices of vehicles
US20110077817A1 (en) * 2009-09-29 2011-03-31 Chin-Yang Sun Vehicle Diagnostic System And Method Thereof
US10665040B2 (en) * 2010-08-27 2020-05-26 Zonar Systems, Inc. Method and apparatus for remote vehicle diagnosis
US20120136802A1 (en) * 2010-11-30 2012-05-31 Zonar Systems, Inc. System and method for vehicle maintenance including remote diagnosis and reverse auction for identified repairs
US20120215491A1 (en) 2011-02-21 2012-08-23 Snap-On Incorporated Diagnostic Baselining

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050149566A1 (en) * 2003-10-31 2005-07-07 International Business Machines Corporation System, method and program product for management of life sciences data and related research
US20080004764A1 (en) * 2006-06-30 2008-01-03 Manokar Chinnadurai Diagnostics data collection and analysis method and apparatus to diagnose vehicle component failures

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11048604B2 (en) 2011-02-21 2021-06-29 Snap-On Incorporated Diagnostic baselining
US20130090790A1 (en) * 2011-10-06 2013-04-11 GM Global Technology Operations LLC Acquisition of in-vehicle sensor data and rendering of aggregate average performance indicators
US9299201B2 (en) * 2011-10-06 2016-03-29 GM Global Technology Operations LLC Acquisition of in-vehicle sensor data and rendering of aggregate average performance indicators
US20130198217A1 (en) * 2012-01-27 2013-08-01 Microsoft Corporation Techniques for testing rule-based query transformation and generation
US20130304278A1 (en) * 2012-05-09 2013-11-14 Ieon C. Chen Smart Phone App-Based Remote Vehicle Diagnostic System and Method
US9002554B2 (en) * 2012-05-09 2015-04-07 Innova Electronics, Inc. Smart phone app-based remote vehicle diagnostic system and method
US10146954B1 (en) 2012-06-11 2018-12-04 Quest Software Inc. System and method for data aggregation and analysis
US9317574B1 (en) 2012-06-11 2016-04-19 Dell Software Inc. System and method for managing and identifying subject matter experts
US9779260B1 (en) 2012-06-11 2017-10-03 Dell Software Inc. Aggregation and classification of secure data
US9390240B1 (en) * 2012-06-11 2016-07-12 Dell Software Inc. System and method for querying data
US9501744B1 (en) 2012-06-11 2016-11-22 Dell Software Inc. System and method for classifying data
US9578060B1 (en) 2012-06-11 2017-02-21 Dell Software Inc. System and method for data loss prevention across heterogeneous communications platforms
US9336244B2 (en) 2013-08-09 2016-05-10 Snap-On Incorporated Methods and systems for generating baselines regarding vehicle service request data
US9971815B2 (en) 2014-05-06 2018-05-15 Snap-On Incorporated Methods and systems for providing an auto-generated repair-hint to a vehicle repair tool
US9201930B1 (en) 2014-05-06 2015-12-01 Snap-On Incorporated Methods and systems for providing an auto-generated repair-hint to a vehicle repair tool
US10227053B2 (en) * 2014-05-08 2019-03-12 Panasonic Intellectual Property Corporation Of America In-vehicle network system, electronic control unit, and update processing method
US9349016B1 (en) 2014-06-06 2016-05-24 Dell Software Inc. System and method for user-context-based data loss prevention
US9805523B2 (en) * 2014-09-25 2017-10-31 Volkswagen Ag Diagnostic procedures and method of collecting vehicles
US20160093123A1 (en) * 2014-09-25 2016-03-31 Volkswagen Ag Diagnostic procedures and method of collecting vehicles
US10326748B1 (en) 2015-02-25 2019-06-18 Quest Software Inc. Systems and methods for event-based authentication
US10679433B2 (en) * 2015-02-25 2020-06-09 Snap-On Incorporated Methods and systems for generating and outputting test drive scripts for vehicles
US20180247469A1 (en) * 2015-02-25 2018-08-30 Snap-On Incorporated Methods and Systems for Generating and Outputting Test Drive Scripts for Vehicles
US10417613B1 (en) 2015-03-17 2019-09-17 Quest Software Inc. Systems and methods of patternizing logged user-initiated events for scheduling functions
US9990506B1 (en) 2015-03-30 2018-06-05 Quest Software Inc. Systems and methods of securing network-accessible peripheral devices
US9563782B1 (en) 2015-04-10 2017-02-07 Dell Software Inc. Systems and methods of secure self-service access to content
US9569626B1 (en) 2015-04-10 2017-02-14 Dell Software Inc. Systems and methods of reporting content-exposure events
US9842218B1 (en) 2015-04-10 2017-12-12 Dell Software Inc. Systems and methods of secure self-service access to content
US10140466B1 (en) 2015-04-10 2018-11-27 Quest Software Inc. Systems and methods of secure self-service access to content
US9641555B1 (en) 2015-04-10 2017-05-02 Dell Software Inc. Systems and methods of tracking content-exposure events
US9842220B1 (en) 2015-04-10 2017-12-12 Dell Software Inc. Systems and methods of secure self-service access to content
US10536352B1 (en) 2015-08-05 2020-01-14 Quest Software Inc. Systems and methods for tuning cross-platform data collection
US10432659B2 (en) * 2015-09-11 2019-10-01 Curtail, Inc. Implementation comparison-based security system
US11637856B2 (en) 2015-09-11 2023-04-25 Curtail, Inc. Implementation comparison-based security system
US10986119B2 (en) 2015-09-11 2021-04-20 Curtail, Inc. Implementation comparison-based security system
US10157358B1 (en) 2015-10-05 2018-12-18 Quest Software Inc. Systems and methods for multi-stream performance patternization and interval-based prediction
US10218588B1 (en) 2015-10-05 2019-02-26 Quest Software Inc. Systems and methods for multi-stream performance patternization and optimization of virtual meetings
WO2017079356A1 (en) * 2015-11-05 2017-05-11 Snap-On Incorporated Post-repair data comparison
US9704141B2 (en) 2015-11-05 2017-07-11 Snap-On Incorporated Post-repair data comparison
US10516768B2 (en) 2015-11-11 2019-12-24 Snap-On Incorporated Methods and systems for switching vehicle data transmission modes based on detecting a trigger and a request for a vehicle data message
US11122143B2 (en) 2016-02-10 2021-09-14 Curtail, Inc. Comparison of behavioral populations for security and compliance monitoring
US10462256B2 (en) 2016-02-10 2019-10-29 Curtail, Inc. Comparison of behavioral populations for security and compliance monitoring
US10706645B1 (en) * 2016-03-09 2020-07-07 Drew Technologies, Inc. Remote diagnostic system and method
US10142391B1 (en) 2016-03-25 2018-11-27 Quest Software Inc. Systems and methods of diagnosing down-layer performance problems via multi-stream performance patternization
US20180017608A1 (en) * 2016-07-12 2018-01-18 Ford Motor Company Of Canada, Limited Electrical in-system process control tester
US20180032942A1 (en) * 2016-07-26 2018-02-01 Mitchell Repair Information Company, Llc Methods and Systems for Tracking Labor Efficiency
US10692035B2 (en) * 2016-07-26 2020-06-23 Mitchell Repair Information Company, Llc Methods and systems for tracking labor efficiency
US20180174221A1 (en) * 2016-12-15 2018-06-21 Snap-On Incorporated Methods and Systems for Automatically Generating Repair Orders
US11222379B2 (en) * 2016-12-15 2022-01-11 Snap-On Incorporated Methods and systems for automatically generating repair orders
CN106758945A (zh) * 2017-01-12 2017-05-31 Zhongshan Yidahao Information Technology Co., Ltd. Intelligent diagnosis method for a parking space lock
US10628496B2 (en) * 2017-03-27 2020-04-21 Dell Products, L.P. Validating and correlating content
US20180276208A1 (en) * 2017-03-27 2018-09-27 Dell Products, Lp Validating and Correlating Content
US11609559B2 (en) 2019-12-30 2023-03-21 Industrial Technology Research Institute Data processing system and method

Also Published As

Publication number Publication date
WO2012115899A3 (en) 2013-05-02
WO2012115899A2 (en) 2012-08-30
CN103477366B (zh) 2017-03-08
CN103477366A (zh) 2013-12-25
EP2678832A2 (en) 2014-01-01
CA2827893C (en) 2022-11-22
BR112013020413A2 (pt) 2017-08-08
EP2678832B1 (en) 2019-04-10
US20140244213A1 (en) 2014-08-28
CA2827893A1 (en) 2012-08-30
US11048604B2 (en) 2021-06-29
CA3171201A1 (en) 2012-08-30
US20210279155A1 (en) 2021-09-09
BR112013020413B1 (pt) 2021-02-09

Similar Documents

Publication Publication Date Title
US20210279155A1 (en) Diagnostic Baselining
EP3267400B1 (en) Vehicle-diagnostic client-server system
US9740993B2 (en) Detecting anomalies in field failure data
CN106104636B (zh) Automotive inspection system using a network-based computing infrastructure
US10769870B2 (en) Method and system for displaying PIDs based on a PID filter list
US11694491B2 (en) Method and system for providing diagnostic filter lists
US20170024943A1 (en) System and Method for Service Assessment
WO2010067547A1 (ja) Fault diagnosis device for a vehicle
CN114511302A (zh) Method and system for providing vehicle repair hints
US20190130668A1 (en) System and method for generating augmented checklist
US20210365309A1 (en) Method and System of Performing Diagnostic Flowchart
CN107450507B (zh) Information processing intermediate system and method
CN118012019A (zh) Whole-vehicle diagnosis method, apparatus, device, and storage medium
MX2007010350A (es) Device and method for diagnosing and quoting vehicle operation

Legal Events

Date Code Title Description
AS Assignment

Owner name: SNAP-ON INCORPORATED, WISCONSIN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:THERIOT, MARK;MERG, PATRICK S.;BROZOVICH, STEVE;SIGNING DATES FROM 20110216 TO 20110218;REEL/FRAME:025837/0571

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: SNAP-ON INCORPORATED, WISCONSIN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROZOVICH, ROY STEVEN;REEL/FRAME:056336/0038

Effective date: 20210518