US20210312567A1 - Automobile Monitoring Systems and Methods for Loss Reserving and Financial Reporting - Google Patents
- Publication number
- US20210312567A1 (U.S. application Ser. No. 17/353,621)
- Authority
- US
- United States
- Prior art keywords
- loss
- historical
- machine learning
- documents
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B25/00—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
- G08B25/01—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium
- G08B25/016—Personal emergency signalling and security systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/08—Insurance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/192—Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
- G06V30/194—References adjustable by an adaptive method, e.g. learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/26—Techniques for post-processing, e.g. correcting the recognition result
- G06V30/262—Techniques for post-processing, e.g. correcting the recognition result using context analysis, e.g. lexical, syntactic or semantic context
- G06V30/274—Syntactic or semantic context, e.g. balancing
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B19/00—Alarms responsive to two or more different undesired or abnormal conditions, e.g. burglary and fire, abnormal temperature and abnormal rate of flow
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B23/00—Alarms responsive to unspecified undesired or abnormal conditions
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
Definitions
- the failure of an autonomous or semi-autonomous vehicle system may be discovered via the neural network (or other artificial intelligence or machine learning algorithm or model) analysis described above.
- An autonomous vehicle system failure such as “lane departure malfunction” may be used by loss reserving unit 154 .
- a user's completing a repair within a pre-set window of time, or one computed based upon loss probability, may cause the user to receive advantageous pricing as regards an existing or new insurance policy.
- claim input data may include images, including hand-written notes
- the AI platform may include a neural network (or other artificial intelligence or machine learning algorithm or model) trained to recognize hand-writing and to convert hand-writing to text.
- text-based content may be formatted in any acceptable data format, including structured query language (SQL) tables, flat files, hierarchical data formats (e.g., XML, JSON, etc.) or as other suitable electronic objects.
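As an illustration of the formats mentioned above, the snippet below builds a hypothetical electronic claim record as JSON and parses it back into structured form; all field names and values are invented for illustration and are not taken from this disclosure:

```python
import json

# Hypothetical electronic claim record; field names and values are
# illustrative only.
claim_json = """
{
  "claim_id": "C-1001",
  "vehicle": {"make": "ExampleMake", "model": "ExampleModel", "year": 2018},
  "adjuster_notes": "Rear bumper damage; possible frame misalignment.",
  "loss_amount": 4250.0
}
"""

claim = json.loads(claim_json)
year = claim["vehicle"]["year"]   # 2018
loss = claim["loss_amount"]       # 4250.0
```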
- image and audio data may be fed directly into the neural network(s) without being converted to text first.
- the AI platform may modify the information available within an electronic claim record. For example, the AI platform may predict a series of labels as described above that pertain to a given claim. The labels may be saved in a risk indication data store, such as loss data 142 with respect to FIG. 1 . Next, the labels and corresponding weights, in one embodiment, may be received by loss reserve aggregation platform 106 , where they may be used in conjunction with base rate information to predict a claim loss value. Claims labeled with historical loss amounts may be used as training data.
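One simple way to combine predicted labels, their weights, and base rate information into a claim loss estimate, as described above, is a weighted sum over per-label base rates. The sketch below uses hypothetical label names, weights, and base rates:

```python
# Hypothetical per-label base rates (dollar amounts); a real system
# would draw these from the platform's base rate information.
base_rates = {"rear_collision": 3000.0, "injury_minor": 8000.0, "glass_only": 600.0}

def predict_claim_loss(labels_with_weights, base_rates):
    """Sum each predicted label's base rate scaled by the model's weight."""
    return sum(base_rates[label] * weight
               for label, weight in labels_with_weights.items()
               if label in base_rates)

# Labels and weights as a model might emit them for one claim.
predicted_labels = {"rear_collision": 0.9, "injury_minor": 0.2}
loss_value = predict_claim_loss(predicted_labels, base_rates)  # approximately 4300.0
```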
- the method 600 may include receiving information corresponding to the vehicle by an AI platform (e.g., the AI platform 104 may accept input data such as input data 102 and may process that input by the use of an input analysis unit such as input analysis unit 120 ) (block 620 ).
- the trained neural network model in block 914 may correspond to the machine learning algorithm trained in block 904 of FIG. 9A .
- the method may include identifying information 916 which may include a type of the damaged insured vehicle, a respective feature or characteristic of the damaged insured vehicle, a peril associated with the damaged insured vehicle, and/or a repair or replacement cost associated with the damaged insured vehicle.
- the information 916 may be used to facilitate handling an insurance claim associated with the damaged insured vehicle.
- Method 1300 may include receiving an indication of a trained artificial neural network (or other artificial intelligence or machine learning algorithm or model) and a data set (block 1310 ).
- the indication may be a pair of integers or other values respectively uniquely identifying a trained neural network (or other artificial intelligence or machine learning algorithm or model) and a data set.
- the trained neural network (or other artificial intelligence or machine learning algorithm or model) may have been trained in advance by a user, using, for example, loss reserving application 216 .
- Server 204 may include instructions for receiving the indication, selecting the appropriate neural network (or other artificial intelligence or machine learning algorithm or model) and data set, applying the data set to the neural network (or other artificial intelligence or machine learning algorithm or model), and returning execution output including at least identification of the selected trained artificial neural network (or other artificial intelligence or machine learning algorithm or model) and a loss reserve amount produced via operation of the selected trained artificial neural network (or other artificial intelligence or machine learning algorithm or model).
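The select-apply-return flow that Server 204 is described as implementing can be sketched as a lookup into two registries keyed by identifier. The registries, the model, and the data set below are stand-ins; a real server would load them from persistent storage:

```python
# Stand-in registries mapping identifiers to a trained model and a data set.
networks = {
    7: lambda rows: 1000.0 * sum(r["norm_loss"] for r in rows) / len(rows),
}
datasets = {
    3: [{"norm_loss": 2.5}, {"norm_loss": 3.1}],
}

def execute(network_id, dataset_id):
    """Select the model and data set by id, apply one to the other, and
    return output identifying the model and the loss reserve amount."""
    model = networks[network_id]
    data = datasets[dataset_id]
    return {"network_id": network_id, "loss_reserve": model(data)}

result = execute(7, 3)   # loss_reserve is approximately 2800.0
```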
Abstract
A method of determining loss reserves and/or providing automatic financial reporting related thereto via one or more processors includes (1) receiving a plurality of historical electronic claim documents, each respectively labeled with a claim loss amount; (2) normalizing each respective claim loss amount; and (3) training an artificial intelligence or machine learning algorithm, module, or model, such as an artificial neural network, by applying the plurality of electronic claim documents to the artificial intelligence or machine learning algorithm, module, or model. The method may further include receiving a user claim and predicting a loss reserve amount by applying the user claim to the trained artificial intelligence or machine learning algorithm, module, or model; the predicted loss reserve amount may also account for unreported claims.
Description
- This application claims priority to and the benefit of:
- U.S. Application No. 62/564,055, filed Sep. 27, 2017 and entitled “REAL PROPERTY MONITORING SYSTEMS AND METHODS FOR DETECTING DAMAGE AND OTHER CONDITIONS;”
- U.S. Application No. 62/580,655, filed Nov. 2, 2017 and entitled “REAL PROPERTY MONITORING SYSTEMS AND METHODS FOR DETECTING DAMAGE AND OTHER CONDITIONS;”
- U.S. Application No. 62/610,599, filed Dec. 27, 2017 and entitled “AUTOMOBILE MONITORING SYSTEMS AND METHODS FOR DETECTING DAMAGE AND OTHER CONDITIONS;”
- U.S. Application No. 62/621,218, filed Jan. 24, 2018 and entitled “AUTOMOBILE MONITORING SYSTEMS AND METHODS FOR LOSS MITIGATION AND CLAIMS HANDLING;”
- U.S. Application No. 62/621,797, filed Jan. 25, 2018 and entitled “AUTOMOBILE MONITORING SYSTEMS AND METHODS FOR LOSS RESERVING AND FINANCIAL REPORTING;”
- U.S. Application No. 62/580,713, filed Nov. 2, 2017 and entitled “REAL PROPERTY MONITORING SYSTEMS AND METHODS FOR DETECTING DAMAGE AND OTHER CONDITIONS;”
- U.S. Application No. 62/618,192, filed Jan. 17, 2018 and entitled “REAL PROPERTY MONITORING SYSTEMS AND METHODS FOR DETECTING DAMAGE AND OTHER CONDITIONS;”
- U.S. Application No. 62/625,140, filed Feb. 1, 2018 and entitled “SYSTEMS AND METHODS FOR ESTABLISHING LOSS RESERVES FOR BUILDING/REAL PROPERTY INSURANCE;”
- U.S. Application No. 62/646,729, filed Mar. 22, 2018 and entitled “REAL PROPERTY MONITORING SYSTEMS AND METHODS FOR LOSS MITIGATION AND CLAIMS HANDLING;”
- U.S. Application No. 62/646,735, filed Mar. 22, 2018 and entitled “REAL PROPERTY MONITORING SYSTEMS AND METHODS FOR RISK DETERMINATION;”
- U.S. Application No. 62/646,740, filed Mar. 22, 2018 and entitled “SYSTEMS AND METHODS FOR ESTABLISHING LOSS RESERVES FOR BUILDING/REAL PROPERTY INSURANCE;”
- U.S. Application No. 62/617,851, filed Jan. 16, 2018 and entitled “IMPLEMENTING MACHINE LEARNING FOR LIFE AND HEALTH INSURANCE PRICING AND UNDERWRITING;”
- U.S. Application No. 62/622,542, filed Jan. 26, 2018 and entitled “IMPLEMENTING MACHINE LEARNING FOR LIFE AND HEALTH INSURANCE LOSS MITIGATION AND CLAIMS HANDLING;”
- U.S. Application No. 62/632,884, filed Feb. 20, 2018 and entitled “IMPLEMENTING MACHINE LEARNING FOR LIFE AND HEALTH INSURANCE LOSS RESERVING AND FINANCIAL REPORTING;”
- U.S. Application No. 62/652,121, filed Apr. 3, 2018 and entitled “IMPLEMENTING MACHINE LEARNING FOR LIFE AND HEALTH INSURANCE CLAIMS HANDLING;”
- the entire disclosures of which are hereby incorporated by reference herein in their entireties.
- This disclosure generally relates to detecting damage, loss, injury, and/or other conditions associated with an automobile using a computer-based automobile monitoring system, and to processing, estimating, and optimizing loss reserves and financial reporting.
- As computer and computer networking technology has become less expensive and more widespread, more and more devices have started to incorporate digital “smart” functionalities. For example, controls and sensors capable of interfacing with a network may now be incorporated into devices such as vehicles. Furthermore, it is possible for one or more vehicle and/or central controllers to interface with the smart devices or sensors.
- However, conventional systems may not be able to automatically detect and characterize various conditions (e.g., damage, injury, etc.) associated with a vehicle and/or the vehicle's occupants, occupants of other vehicles, and/or pedestrians. Additionally, conventional systems may not be able to detect or sufficiently identify and describe damage that is hidden from human view and typically must be characterized by explicit physical inspection, or the extent and range of electrical malfunctions. Conventional systems may also be unable to formulate precise characterizations of loss without introducing unconscious biases, and may not weight all historical data equally when determining loss reserving estimates.
- In general, “loss reserves” may be funds that may be pre-allocated by an insurer or other company (e.g., a mutual insurance company or capital stock insurance company) to offset known or anticipated losses. Some level of disclosure of loss reserves may be required (e.g., by public statute or contractual bylaws). Disclosure of loss reserves may be issued periodically (e.g., yearly) and may be included in a financial report such as an annual report, shareholder report (e.g., S.E.C. form 10-K), or other statement.
- Loss reserve prediction has historically been a manual process in which actuaries or other financial professionals review claims and estimate the final loss amounts associated with those claims. This manual process may carry a significant margin of error due to human inexperience, limits of recollection, bias, and other shortcomings. Accurate loss reserving may be very difficult to achieve, and getting it wrong may have serious consequences for a company. Underestimation of loss reserves may cause a company to believe that it has adequate capitalization when, in reality, it does not; once the final loss amounts become known, the company's liquidity may be negatively affected. On the other hand, overestimation of loss reserves may cause a company to set aside funds in excess of the necessary capital reserves, preventing it from using that capital for other purposes. Conventional techniques may have other drawbacks as well.
- The present disclosure generally relates to systems and methods for detecting damage, loss, injury and/or other conditions associated with a vehicle using a computer system; and methods and systems for processing, estimating, and optimizing loss reserving and financial reporting obligations. Embodiments of exemplary systems and computer-implemented methods are summarized below. The methods and systems summarized below may include additional, less, or alternate components, functionality, and/or actions, including those discussed elsewhere herein.
- In one aspect, a computer-implemented method of determining loss reserves is provided. The method may include receiving a plurality of historical electronic claim documents, each respectively labeled with a claim loss amount, normalizing each respective claim loss amount, and training an artificial neural network (or other artificial intelligence or machine learning algorithm, program, module or model) by applying the plurality of electronic claim documents to the artificial neural network. The method may further include receiving a user claim and predicting a loss reserve amount by applying the user claim to the trained artificial neural network (or other artificial intelligence or trained machine learning algorithm, program, module, or model).
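As a minimal sketch of this receive/normalize/train/predict sequence, the example below uses synthetic claim features and loss amounts, with an ordinary least-squares linear model standing in for the artificial neural network:

```python
import numpy as np

# Synthetic stand-ins: 50 historical claims, 3 numeric features each,
# and a loss amount that is an exact linear function of the features.
rng = np.random.default_rng(0)
X = rng.random((50, 3))
y = X @ np.array([2.0, 1.0, 0.5]) + 0.3   # labeled claim loss amounts

# Normalize each respective claim loss amount.
y_mean, y_std = y.mean(), y.std()
y_norm = (y - y_mean) / y_std

# "Train" via least squares, standing in for neural-network training.
Xb = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(Xb, y_norm, rcond=None)

def predict_loss_reserve(features):
    """Apply the trained model to a new claim and de-normalize the output."""
    return float(np.append(features, 1.0) @ w) * y_std + y_mean

estimate = predict_loss_reserve(np.array([0.5, 0.5, 0.5]))  # approximately 2.05
```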
- In another aspect, a computing system is provided having one or more processors and one or more memories storing instructions that, when executed by the one or more processors, cause the computing system to receive a plurality of historical electronic claim documents, each respectively labeled with a claim loss amount; normalize each respective claim loss amount; and train an artificial neural network (or other artificial intelligence or machine learning algorithm, program, module, or model) by applying the plurality of electronic claim documents to the artificial neural network (or other artificial intelligence or machine learning algorithm, program, module, or model). The instructions may further cause the computing system to receive a user claim and predict a loss reserve amount by applying the user claim to the trained artificial neural network (or other artificial intelligence or machine learning algorithm, program, module, or model).
- Advantages will become more apparent to those skilled in the art from the following description of the preferred embodiments which have been shown and described by way of illustration. As will be realized, the present embodiments may be capable of other and different embodiments, and their details are capable of modification in various respects. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive.
- The Figures described below depict various aspects of the system and methods disclosed therein. It should be understood that each Figure depicts one embodiment of a particular aspect of the disclosed system and methods, and that each of the Figures is intended to accord with a possible embodiment thereof. Further, wherever possible, the following description refers to the reference numerals included in the following Figures, in which features depicted in multiple Figures are designated with consistent reference numerals.
- There are shown in the drawings arrangements which are presently discussed, it being understood, however, that the present embodiments are not limited to the precise arrangements and instrumentalities shown, wherein:
- FIG. 1 depicts an exemplary computing environment in which techniques for training a neural network (or other artificial intelligence or machine learning algorithm, program, module, or model) to determine a loss reserve associated with a vehicle and/or vehicle operator may be implemented, according to one embodiment;
- FIG. 2 depicts an exemplary computing environment in which techniques for collecting and processing user input, and training a neural network (or other artificial intelligence or machine learning algorithm, program, module, or model) to determine loss reserving and financial reporting information may be implemented, according to one embodiment;
- FIG. 3 depicts an exemplary artificial neural network which may be trained by the neural network unit of FIG. 1 or the neural network training application of FIG. 2, according to one embodiment and scenario;
- FIG. 4 depicts an exemplary neuron, which may be included in the artificial neural network of FIG. 3, according to one embodiment and scenario;
- FIG. 5 depicts text-based content of an exemplary electronic claim record that may be processed by an artificial neural network, in one embodiment;
- FIG. 6 depicts a flow diagram of an exemplary computer-implemented method of determining a risk level posed by an operator of a vehicle, according to one embodiment;
- FIG. 7 depicts a flow diagram of an exemplary computer-implemented method of identifying risk indicators from vehicle operator information, according to one embodiment;
- FIG. 8 is a flow diagram depicting an exemplary computer-implemented method of detecting and/or estimating damage to personal property, according to one embodiment;
- FIG. 9A is an example flow diagram depicting an exemplary computer-implemented method of determining damage to personal property, according to one embodiment;
- FIG. 9B is an example data flow diagram depicting an exemplary computer-implemented method of determining damage to an insured vehicle using a trained machine learning algorithm (or other artificial intelligence or machine learning algorithm, program, module, or model) to facilitate handling an insurance claim associated with the damaged insured vehicle, according to one embodiment;
- FIG. 10A is an example flow diagram depicting an exemplary computer-implemented method for determining damage to personal property, according to one embodiment;
- FIG. 10B is an example data flow diagram depicting an exemplary computer-implemented method of determining damage to an undamaged insurable vehicle using a trained machine learning algorithm (or other artificial intelligence or machine learning algorithm, program, module, or model) to facilitate generating an insurance quote for the undamaged insurable vehicle, according to one embodiment;
- FIG. 11 depicts an example loss reserving user interface, in which a user may train and/or operate a neural network (or other artificial intelligence or machine learning algorithm, program, module, or model) using a customized data set, according to one embodiment and scenario;
- FIG. 12 depicts a flow diagram of an exemplary computer-implemented method of determining loss reserves, according to one embodiment; and
- FIG. 13 depicts a flow diagram of an exemplary computer-implemented method of executing a trained artificial neural network (or other artificial intelligence or machine learning algorithm, program, module, or model) and data set, and displaying the result of such execution, according to one embodiment.
- The Figures depict preferred embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the systems and methods illustrated herein may be employed without departing from the principles of the invention described herein.
- The present embodiments are directed to, inter alia, machine learning and/or training of models using historical automobile claim data to determine optimal loss reserving amounts and financial reporting information. Systems and methods may include natural language processing of free-form notes/text, or free-form speech/audio, recorded by a call center and/or claim adjuster, as well as photos and/or other evidence. The free-form text and/or free-form speech may also be received from a customer who is inputting the text or speech into a mobile device app, into a smart vehicle controller, and/or into a chat bot or robo-advisor.
- Other inputs to a machine learning/training model may be harvested from historical claims, and may include make, model, year, miles, technological features, and/or other characteristics of a vehicle, including any software updates that have been applied to the vehicle (and versions thereof); whether the claim was paid or not paid; liability (e.g., types of injuries, where treated, how treated, etc.); disbursements related to the claim, such as rental costs and other payouts; etc. Additional inputs to the machine learning/training model may include vehicle telematics data, such as how long and when the doors are unlocked, how often the security system is armed, how long the vehicle is operated and/or during which times of the day, etc.
- Vehicle inspection and/or maintenance records may be of particular interest in some embodiments, as they may be highly correlated with vehicle malfunction. A driver's history may also be used as input to artificial intelligence or machine learning algorithms or models in some embodiments, including, without limitation, the driver's age, the number and type of moving violations and any fines associated therewith, etc.
- As noted above, “loss reserves” are amounts of capital set aside in advance of claim settlement, and in some cases, prior to the filing of claims. For example, an insurer may learn via statistical analysis that a given number of broken-windshield claims may occur each year, and may be able to derive an average claim payout. With this information, the insurer may be able to extrapolate the amount of loss reserves that should be set aside for the next year's worth of broken-windshield claims. However, a model that is specific to broken-windshield claims will not provide the insurer with information related to other claim types, and may not be very accurate. The methods and systems herein may be used to build general models for determining loss reserves that take into account decades' worth of historical claims information, and whose determinations may appear counter-intuitive to human observers.
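The single-category extrapolation described above amounts to a frequency-times-severity estimate. The sketch below illustrates it with invented figures (the counts, payouts, and function name are assumptions, not values from any actual claim data):

```python
def simple_reserve(claim_counts, payouts):
    """Project a loss reserve as (expected claim count) x (average payout)."""
    expected_count = sum(claim_counts) / len(claim_counts)  # mean annual claim frequency
    average_payout = sum(payouts) / len(payouts)            # mean payout per claim
    return expected_count * average_payout

# Three hypothetical years of broken-windshield claim counts and sampled payouts.
counts = [980, 1020, 1000]        # claims observed per year
payouts = [450.0, 500.0, 550.0]   # dollars paid per sampled claim
reserve = simple_reserve(counts, payouts)  # 1000 claims * $500 = 500000.0
```

As the passage notes, such a single-peril estimate says nothing about other claim types, which is what motivates the more general trained models.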
- The present embodiments may also be directed to machine learning and/or training a model using historical auto claim data to discover loss reserving data. The present embodiments may include natural language processing of free-form notes recorded by a call center and/or claim adjuster (e.g., “hit a deer”, “surgery”, “hospital”, etc.), photos, and/or other evidence to use as input to a machine learning/training model. Other inputs to a machine learning/training model may be harvested from historical claims, and may include make, model, year, whether the claim was paid or not paid, liability (e.g., types of injuries, where treated, how treated, etc.), disbursements related to the claim such as rental car and other payouts, etc. It should be appreciated that the inputs to the trained model may be very complex and may include many (e.g., millions of) inputs. For example, a single network may base an amount of loss reserve on a zip code, medical diagnosis, treatment plan, age of injured person, gender of injured person, point of impact, G-forces at impact, air bag deployment(s), striking vehicle weight and/or size, etc. Many more, or fewer, inputs may be included in some embodiments. The presence or absence of autonomous vehicle features may be determined with respect to historical auto claims, and may be used in the training process.
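Harvesting keyword occurrence counts from free-form notes, as in the "five surgeries" example, can be sketched with simple case-insensitive string matching (a minimal stand-in for the fuller NLP processing described later; the keyword list is illustrative):

```python
import re

KEYWORDS = ("hit a deer", "surgery", "hospital")  # illustrative risk keywords

def keyword_features(note: str) -> dict:
    """Count case-insensitive occurrences of each keyword in a free-form note."""
    text = note.lower()
    return {kw: len(re.findall(re.escape(kw), text)) for kw in KEYWORDS}

note = "Insured hit a deer; driver taken to hospital, surgery scheduled. Second surgery possible."
features = keyword_features(note)  # {"hit a deer": 1, "surgery": 2, "hospital": 1}
```

Counts like these can then be supplied alongside the structured claim fields as model inputs.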
- The embodiments described herein may relate to, inter alia, determining one or more loss reserves. The embodiments described herein may also relate to financial reporting. Different loss reserve amounts may be generated by separate models examining a set of inputs, in some embodiments. In some embodiments, one or more neural network models (or other artificial intelligence or machine learning algorithms, programs, modules, or models) may be trained using a subset of historical claims data as training input. A separate subset of the historical claims data may be used for validation and cross-validation purposes. An application may be provided to a client computing device (e.g., a smartphone, tablet, laptop, desktop computing device, wearable, or other computing device) of a user. A user of the application, who may be an employee of a company employing the methods described herein or a customer of that company, may enter input into the application via a user interface or other means. The input may be transmitted from the client computing device to a remote computing device (e.g., one or more servers) via a computer network and processed further. Such processing may include applying the input entered into the client to the one or more trained neural network models (or other artificial intelligence or machine learning algorithms, programs, modules, or models) to produce labels and weights indicating net or individual risk factors, based upon existing claim data.
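The training/validation split described above might be sketched as follows; the record fields and the 80/20 ratio are assumptions for illustration:

```python
import random

def split_claims(claims, train_fraction=0.8, seed=42):
    """Deterministically shuffle claim records and split into (train, validation)."""
    rng = random.Random(seed)       # fixed seed so the split is reproducible
    shuffled = claims[:]            # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

claims = [{"id": i, "paid": i % 2 == 0} for i in range(100)]  # invented records
train, validation = split_claims(claims)  # 80 training records, 20 held out
```

Keeping the validation subset disjoint from the training subset is what allows the trained model's loss reserve predictions to be checked against claims it has never seen.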
- For example, the remote computing device may receive the input and determine, using a trained neural network (or other artificial intelligence or machine learning algorithm, program, module, or model), one or more loss reserve amounts applicable to the input. Herein, loss reserve amounts may be expressed numerically, as strings (e.g., as labels), or in any other suitable format. Loss reserves may be expressed as a dollar amount, or as a multiplier with respect to a past amount (e.g., 1.2 or 120%). The determined loss reserve amounts may be displayed to an end user, or to an employee or owner of a business utilizing the methods and systems described herein. Similarly, the loss reserve amounts may be provided as input to another application (e.g., to an application which provides the loss reserve amounts to an end user). The loss reserve amounts may be joined with other information (e.g., claim category) and may be formatted (e.g., in a table or other suitable format) and automatically inserted into a financial report, such as a PDF file.
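The two representations mentioned above (a dollar amount versus a multiplier of a past amount) and the report-formatting step can be sketched as follows; the function names and column widths are illustrative:

```python
def as_multiplier(current: float, prior: float) -> float:
    """Express the current reserve as a multiple of a past amount (e.g., 1.2 = 120%)."""
    return current / prior

def report_row(category: str, amount: float) -> str:
    """Format one (category, amount) pair as a fixed-width financial report row."""
    return f"{category:<15}${amount:>12,.2f}"

m = as_multiplier(600_000.0, 500_000.0)   # 1.2, i.e. 120% of the prior period
row = report_row("windshield", 600_000.0)
```

Rows like this could then be assembled into a table and emitted into a report document.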
- A loss reserve aggregate may include one or more loss reserve amounts for a respective claim category or subtype, and may include a gross or net reserve amount. For example, an “automobile” loss reserve aggregate may be created which predicts a $1 m total loss reserve. This aggregate may include a plurality of categorical loss reserves, which may themselves be aggregate loss reserves or individual loss reserves. For example, the automobile loss reserve aggregate may include a “motorcycle” loss reserve and a “passenger car” loss reserve, wherein the motorcycle loss reserve may be associated with an amount of ($2 m) and the passenger car loss reserve may be associated with an amount of $3.2 m. Herein, parentheses may be used to denote negative loss reserves (e.g., shortfalls) and the lack of parentheses may be used to denote positive loss reserves (e.g., surpluses). The gross amount of the automobile (i.e., combined motorcycle and passenger car) loss reserves may thus be $1.2 m. An amount of money may be deducted from the gross amount for miscellaneous expenses associated with the automobile loss reserve and/or constituent loss reserves (e.g., audit fees, storage fees, etc.) to arrive at a net loss reserve amount. Gross and/or net loss reserves may be included in financial reports.
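The aggregation in the example above can be checked arithmetically. The sketch below follows the stated parentheses convention and assumes $0.2 m of miscellaneous expenses (an invented figure):

```python
def gross_reserve(constituents: dict) -> float:
    """Gross aggregate: sum of constituent reserves (shortfalls are negative)."""
    return sum(constituents.values())

def net_reserve(gross: float, expenses: float) -> float:
    """Net aggregate: gross less miscellaneous expenses (audit fees, storage fees, etc.)."""
    return gross - expenses

def display(amount: float) -> str:
    """Render shortfalls in parentheses, surpluses without, per the text's convention."""
    return f"(${abs(amount):.1f} m)" if amount < 0 else f"${amount:.1f} m"

auto = {"motorcycle": -2.0, "passenger car": 3.2}  # amounts in millions
gross = gross_reserve(auto)             # ~1.2, the combined automobile gross
net = net_reserve(gross, expenses=0.2)  # ~1.0 after the assumed expenses
```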
- It should be appreciated that the fully automated and dynamic learning methods and systems described herein may estimate loss reserves not only for insurance claims that have been reported but also for claims that have occurred but have not yet been reported or recorded, as may be required by actuarial standards of practice and/or applicable law.
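Reserving for claims that have occurred but are not yet reported is conventionally handled with development-triangle techniques such as the chain-ladder method. The generic sketch below (a standard actuarial technique, not the patent's model, with invented figures) projects ultimate losses from cumulative paid amounts:

```python
def chain_ladder_reserve(triangle):
    """Estimate reserve = projected ultimate losses minus paid to date.

    Rows are accident years; columns are cumulative paid losses by
    development age. Later accident years have fewer observed columns.
    """
    n = len(triangle)
    # Age-to-age development factors from the observed part of the triangle.
    factors = []
    for j in range(n - 1):
        later = sum(row[j + 1] for row in triangle if len(row) > j + 1)
        earlier = sum(row[j] for row in triangle if len(row) > j + 1)
        factors.append(later / earlier)
    reserve = 0.0
    for row in triangle:
        ultimate = row[-1]
        for j in range(len(row) - 1, n - 1):
            ultimate *= factors[j]          # develop the latest figure to ultimate
        reserve += ultimate - row[-1]       # unpaid portion still to be reserved
    return reserve

triangle = [          # illustrative cumulative paid losses
    [100.0, 150.0, 165.0],
    [110.0, 165.0],
    [120.0],
]
estimated = chain_ladder_reserve(triangle)  # ~94.5
```

A trained model as described in the text would replace the fixed development factors with learned relationships over many more claim features.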
- Turning to
FIG. 1, an exemplary computing environment 100, representative of automobile monitoring systems and methods for loss reserve determination and financial reporting, is depicted. Environment 100 may include input data 102 and historical data 108, both of which may comprise a list of parameters, a plurality (e.g., thousands or millions) of electronic documents, or other information. As used herein, the term “data” generally refers to information related to a vehicle operator, which exists in the environment 100. For example, data may include an electronic document representing a vehicle (e.g., automobile, truck, boat, motorcycle, etc.) insurance claim, demographic information about the vehicle operator and/or information related to the type of vehicle or vehicles being operated by the vehicle operator, and/or other information. - Data may be historical or current. Although data may be related to an ongoing claim filed by a vehicle operator, in some embodiments, data may consist of raw data parameters entered by a human user of the
environment 100 or which is retrieved/received from another computing system. Data may or may not relate to the claims filing process, and while some of the examples described herein refer to auto insurance claims, it should be appreciated that the techniques described herein may be applicable to other types of electronic documents, in other domains. For example, the techniques herein may be applicable to determining loss reserves and generating financial reports in other insurance domains, such as agricultural insurance, homeowners insurance, health or life insurance, renters insurance, etc. In that case, the scope and content of the data may differ. - As another example, data may be collected from an existing customer filing a claim, a potential or prospective customer applying for an insurance policy, or may be supplied by a third party such as a company other than the proprietor of the
environment 100. In some cases, data may reside in paper files that are scanned or entered into a digital format by a human or by an automated process (e.g., via a scanner). Generally, data may comprise any digital information, from any source, created at any time. -
Input data 102 may be loaded into an artificial intelligence (AI) platform 104 to organize, analyze, and process input data 102 in a manner that facilitates determining optimal loss reserves via a loss reserve aggregation platform 106. The loading of input data 102 may be performed by executing a computer program on a computing device that has access to the environment 100, and the loading process may include the computer program coordinating data transfer between input data 102 and AI platform 104 (e.g., by the computer program providing an instruction to AI platform 104 as to an address or location at which input data 102 is stored). AI platform 104 may reference this address to retrieve records from input data 102 to perform loss reserve determinations. AI platform 104 may be thought of as a collection of algorithms configured to receive and process parameters, and to produce labels and, in some embodiments, loss reserves and financial reports. - As discussed below with respect to
FIGS. 3, 4, and 5, AI platform 104 may be used to train multiple neural network models relating to different granular segments of vehicle operators. For example, AI platform 104 may be used to train a neural network model for use in determining loss reserves related to motorcycle operators. In another embodiment, AI platform 104 may be used to train a neural network model (or other artificial intelligence or machine learning algorithm, program, module, or model) for use in determining optimal loss reserves, a priori, relating to windshield damage claims. The precise manner in which neural networks are created and trained is described below. In some embodiments, large-scale/distributed computing tools (e.g., Apache Hadoop) may be used to implement some of artificial intelligence platform 104. - In the embodiment of
FIG. 1, AI platform 104 may include claim analysis unit 120. Claim analysis unit 120 may include speech-to-text unit 122 and image analysis unit 124, which may comprise, respectively, algorithms for converting human speech into text and analyzing images (e.g., extracting information from hotel and rental receipts). In this way, data may comprise audio recordings (e.g., recordings made when a customer telephones a customer service center) that may be converted to text and further used by AI platform 104. In some embodiments, customer behavior represented in data—including the accuracy and truthfulness of a customer—may be encoded by claim analysis unit 120 and used by AI platform 104 to train and operate neural network models (or other artificial intelligence or machine learning algorithms or models). Claim analysis unit 120 may also include text analysis unit 126, which may include pattern matching unit 128 and natural language processing (NLP) unit 130. In some embodiments, text analysis unit 126 may determine facts regarding claim inputs (e.g., the amount of money paid under a claim). Amounts may be determined in a currency- and inflation-neutral manner, so that claim loss amounts may be directly compared. In some embodiments, text analysis unit 126 may analyze text produced by speech-to-text unit 122 or image analysis unit 124. - In some embodiments,
pattern matching unit 128 may search textual claim data loaded into AI platform 104 for specific strings or keywords in text (e.g., “hospital” or “surgery”) which may be indicative of particular types of injury. Such keywords may be associated with a respective occurrence count (e.g., the number 5 may indicate that a person sustained five surgeries). NLP unit 130 may be used to identify, for example, entities or objects indicative of risk (e.g., that an injury occurred to a person, and that the person's leg was injured). NLP unit 130 may identify human speech patterns in data, including semantic information relating to entities, such as people, vehicles, homes, and other objects. For example, the location and time of an accident may be identified, as well as a quantity related to an accident (e.g., the number of passengers). - Relevant verbs and objects, as opposed to verbs and objects of lesser relevance, may be determined by the use of a machine learning algorithm analyzing historical claims. For example, both a driver and a deer may be relevant objects. Verbs indicating collision or injury may be relevant verbs. In some embodiments,
text analysis unit 126 may comprise text processing algorithms (e.g., lexers and parsers, regular expressions, etc.) and may emit structured text in a format which may be consumed by other components. For example, text analysis unit 126 may receive output from a trained neural network. - In the embodiment of
FIG. 1, AI platform 104 may include a loss classifier 140 to classify, or group, losses. Such classification may use standard clustering techniques used in machine learning, such as k-means algorithms. In some embodiments, loss classifier 140 may group losses into groups by pre-defined categories (e.g., large/small, personal injury/property, etc.). In other embodiments, classification may determine categories by agglomeration or other known methods. Loss classifier 140 may associate claims with loss category information, and such information may be stored in loss data 142. Loss classifier 140 may be used to build a predictive model that pertains to a category (e.g., motorcycle operators) as described above. -
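The pre-defined-category grouping mentioned above can be sketched with a simple threshold rule; a clustering approach such as k-means would replace the fixed rule with learned boundaries. The threshold and amounts are invented for illustration:

```python
def classify_losses(losses, large_threshold=10_000.0):
    """Group loss amounts into pre-defined "small" and "large" categories."""
    groups = {"small": [], "large": []}
    for amount in losses:
        key = "large" if amount >= large_threshold else "small"
        groups[key].append(amount)
    return groups

groups = classify_losses([250.0, 48_000.0, 1_200.0, 15_500.0])
# groups["large"] holds the two amounts at or above the threshold
```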
Loss classifier 140 may analyze a subset of claims in historical data 108. The subset of claims may contain a mixture of severe claims (e.g., those claims in which complications from surgery post-accident resulted in the greatest level of damage, whether quantified by pecuniary loss or the loss of human life, motor function, and/or cognitive function) and non-severe claims (e.g., those claims in which only minor first aid was rendered post-accident). Loss classifier 140 may be trained to categorize claims based upon membership in one or more “severity” categories. Once loss classifier 140 has classified a given claim, its severity may be saved to and/or retrieved from an electronic database, such as loss data 142, or associated with a set of input data 102. Severity information from loss classifier 140 may also be passed to other components, such as neural network unit 150. Random forests may be used to classify claims, and may be capable of determining which of several criteria or features associated with each claim was paramount in the classifier's decision. -
Neural network unit 150 may use an artificial neural network, or simply “neural network.” The neural network may be any suitable type of neural network, including, without limitation, a recurrent neural network, feed-forward neural network, and/or deep learning network. The neural network may include any number (e.g., thousands) of nodes or “neurons” arranged in multiple layers, with each neuron processing one or more inputs to generate a decision or other output. In some embodiments, neural network unit 150 may use other types of artificial intelligence or machine learning algorithms or models, including those discussed elsewhere herein. - In some embodiments, neural network models may be chained together, so that output from one model is piped or transferred into another model as input. For example,
loss classifier 140 may, in one embodiment, apply input data 102 to a first neural network model that is trained to categorize claims (e.g., by vehicle type, severity, and/or other criteria). The output of this first neural network model may be fed as input to a second neural network model which has been trained to generate loss reserves for claims corresponding to the categories. - As noted, a neural network may include a series of nodes connected by weighted links, and the neural network may be continuously trained from a randomized initial state, using a subset of historical claims as input, until the outputs corresponding to the sum of the weights at each layer in the neural network converge to the particular values assigned to a “truth” data set. A truth data set may contain claims along with correct, or optimal, loss reserving amounts, and may be based upon historical loss reserves of an insurer. For example, the truth data set may include a plurality of claims with corresponding loss reserves that resulted in the insurer not overestimating or underestimating the payouts of the plurality of claims. The error of the network may be measured as the difference between the particular values and the weights. Once trained, the neural network (or other artificial intelligence or machine learning algorithm, program, module, or model) may be validated with another subset of data, and its parameters and/or structure adjusted accordingly.
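The model chaining described above, in which a categorizer's output selects a downstream reserve model, might be sketched like this. Both stubs merely stand in for trained networks, and the multipliers are invented:

```python
def categorize(claim: dict) -> str:
    """Stand-in for the first trained model: label the claim by vehicle type."""
    return "motorcycle" if claim.get("vehicle") == "motorcycle" else "passenger car"

# Stand-ins for per-category trained reserve models.
RESERVE_MODELS = {
    "motorcycle": lambda claim: 1.4 * claim["estimated_damage"],
    "passenger car": lambda claim: 1.1 * claim["estimated_damage"],
}

def chained_reserve(claim: dict) -> float:
    """Pipe the categorizer's output into the matching reserve model."""
    category = categorize(claim)
    return RESERVE_MODELS[category](claim)

reserve = chained_reserve({"vehicle": "motorcycle", "estimated_damage": 5_000.0})
```

The benefit of the chained design is that each downstream model only needs to learn reserve behavior for its own claim category.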
-
Neural network unit 150 may include training unit 152 and loss reserving unit 154. To train the neural network to identify risk, neural network unit 150 may access electronic claims within historical data 108. Historical data 108 may comprise a corpus of documents comprising many (e.g., millions of) insurance claims which may contain data linking a particular customer or claimant to one or more vehicles, and which may also contain, or be linked to, information pertaining to the customer. In particular, historical data 108 may be analyzed by AI platform 104 to generate claim records 110-1 through 110-n, where n is any positive integer. Each claim record 110-1 through 110-n may be processed by training unit 152 to train one or more neural networks to predict loss reserves, including by pre-processing of historical data 108 using claim analysis unit 120 as described above. Claim records 110-1 through 110-n may be assigned to a time series by, for example, date parsing or other methods. -
Neural network 150 may, from a trained model, identify labels that correspond to specific data, metadata, and/or attributes within input data 102, depending on the embodiment. For example, neural network 150 may be provided with instructions from claim analysis unit 120 indicating that one or more particular types of insurance are associated with one or more portions of input data 102. - Neural network 150 (or other artificial intelligence or machine learning algorithm, program, module, or model) may identify one or more insurance types associated with the one or more portions of input data 102 (e.g., bodily injury, property damage, collision coverage, comprehensive coverage, liability insurance, med pay, or personal injury protection (PIP) insurance), as identified by claim analysis unit 120. In one embodiment, the one or more insurance types may be identified by training the neural network 150 (or other artificial intelligence or machine learning algorithm or model) based upon types of peril. For example, the neural network model (or other artificial intelligence or machine learning algorithm or model) may be trained to determine that fire, theft, or vandalism may indicate comprehensive insurance coverage. Insurance types and/or types of peril may be used to categorize claim records for the purpose of training a model. For example, a “vandalism” loss reserve model may be trained using a categorized subset of such data. - In addition,
input data 102 may indicate a particular customer and/or vehicle. In that case, loss classifier 140 may look up additional customer and/or vehicle information from customer data 160 and vehicle data 162, respectively. For example, the age of the vehicle operator and/or vehicle type may be obtained. The additional customer and/or vehicle information may be provided to neural network unit 150 (or other artificial intelligence or machine learning algorithm or model) and may be used to analyze and label input data 102 and, ultimately, may be used to train the artificial neural network model (or other artificial intelligence or machine learning algorithm or model). For example, neural network unit 150 (or other artificial intelligence or machine learning algorithm or model) may be used to predict risk based upon inputs obtained from a person applying for an auto insurance policy, or based upon a claim submitted by a person who is a holder of an existing insurance policy. That is, in some embodiments where neural network unit 150 (or other artificial intelligence or machine learning algorithm or model) is trained on claim data, neural network unit 150 (or other artificial intelligence or machine learning algorithm or model) may determine loss reserves based upon raw information unrelated to the claims filing process, or based upon other data obtained during the filing of a claim (e.g., a claim record retrieved from historical data 108). - In one embodiment, the training process may be performed in parallel, and
training unit 152 may analyze all or a subset of claim records 110-1 through 110-n. Specifically, training unit 152 may train a neural network (or other artificial intelligence or machine learning algorithm or model) to predict loss reserves in claim records 110-1 through 110-n. As noted, AI platform 104 may analyze historical data 108 to arrange the historical claims into claim records 110-1 through 110-n, where n is any positive integer. Claim records 110-1 through 110-n may be organized in a flat list structure, in a hierarchical tree structure, or by means of any other suitable data structure. For example, the claim records may be arranged in a tree wherein each branch of the tree is representative of one or more customers.
- As used herein, the term “claim” or “vehicle claim” generally refers to an electronic document, record, or file, that represents an insurance claim (e.g., an automobile insurance claim) submitted by a policy holder of an insurance company. Herein, “claim data” or “historical data” generally refers to data directly entered by the customer or insurance company including, without limitation, free-form text notes, photographs, audio recordings, written records, receipts (e.g., hotel and rental car), and other information including data from legacy, including pre-Internet (e.g., paper file), systems. Notes from claim adjusters and attorneys may also be included.
- In one embodiment, claim data may include claim metadata or external data, which generally refers to data pertaining to the claim that may be derived from claim data or which otherwise describes, or is related to, the claim but may not be part of the electronic claim record. Claim metadata may have been generated directly by a developer of the
environment 100, for example, or may have been automatically generated as a direct product or byproduct of a process carried out in environment 100. For example, claim metadata may include a field indicating whether a claim was settled or not settled, the amount of any payouts, and the identity of the corresponding payees. Another example of claim metadata is the geographic location in which a claim is submitted, which may be obtained via a global positioning system (GPS) sensor in a device used by the person or entity submitting the claim. - Yet another example of claim metadata includes a category of the claim type (e.g., collision, liability, uninsured or underinsured motorist, etc.). For example, a single claim in
historical data 108 may be associated with a married couple, and may include the name, address, and other demographic information relating to the couple. Additionally, the claim may be associated with multiple vehicles owned or leased by the couple, and may contain information pertaining to those vehicles including, without limitation, the vehicles' make, model, year, condition, mileage, etc. The claim may include a plurality of claim data and claim metadata, including metadata indicating a relationship or linkage to other claims in historical claim data 108. In this way, neural network unit 150 may produce a neural network that has been trained to associate the presence of certain input parameters with higher or lower risk levels. A specific example of a claim is discussed with respect to FIG. 5, below. - Once the neural network (or other artificial intelligence or machine learning algorithm or model) has been trained,
loss reserving unit 154 may analyze, combine, and/or validate prediction information from training unit 152. For example, loss reserving unit 154 may check whether predicted loss reserving amounts or percentages are within a given range (e.g., positive or negative). Loss reserving unit 154 may use pre-determined parameters retrieved from loss data 142 or another electronic database in conjunction with training unit 152 output. A trained neural network (or other artificial intelligence or machine learning algorithm or model) may, based upon analyzing claim data, output a loss reserving amount that is analyzed by loss reserving unit 154. Multiple loss reserving outputs, or estimates, may be aggregated by loss reserve aggregation platform 106. -
AI platform 104 may further include customer data 160 and vehicle data 162, which loss classifier 140 may use to provide useful input parameters to neural network unit 150 (or other artificial intelligence or machine learning algorithm or model). Customer data 160 may be an integral part of AI platform 104, or may be located separately from AI platform 104. In some embodiments, customer data 160 or vehicle data 162 may be provided to AI platform 104 via separate means (e.g., via an API call), and may be accessed by other units or components of environment 100. Either may be provided by a third-party service. For example, in some embodiments, a trained neural network (or other artificial intelligence or machine learning algorithm or model) may require a vehicle type as a parameter. Based solely on a claim input from claim records 110-1 through 110-n, loss classifier 140 may look up the vehicle type from vehicle data 162 as the claim is being passed to neural network unit 150. It should be appreciated that many sources of additional data may be used as inputs to train and operate artificial neural network models. The neural network modules may include other types of artificial intelligence or machine learning algorithms, models, and/or modules. -
Vehicle data 162 may be a database comprising information describing vehicle makes and models, including information about model years and model types (e.g., model edition information, engine type, any upgrade packages, etc.). Vehicle data 162 may indicate whether certain make and model year vehicles are equipped with safety features (e.g., lane departure warnings). The vehicle data 162 may also relate to autonomous or semi-autonomous vehicle features or technologies of the vehicle, and/or sensors, software, and electronic components that direct the autonomous or semi-autonomous vehicle features or technologies. For example, the information describing vehicle makes and models may specify, at the model type and/or model year level, the degree to which a vehicle is equipped with autonomous and/or semi-autonomous capabilities, and/or the degree to which a vehicle may be adequately retrofitted to accept such capabilities. - In some embodiments, the failure of an autonomous or semi-autonomous vehicle system may be discovered via the neural network (or other artificial intelligence or machine learning algorithm or model) analysis described above. An autonomous vehicle system failure such as “lane departure malfunction” may be used by
loss reserving unit 154. In one embodiment, a user's completing a repair within a pre-set window of time, or one computed based upon loss probability, may cause the user to receive advantageous pricing as regards an existing or new insurance policy. - Vehicle capabilities may be listed individually. For example, a database table may be constructed within the electronic database which specifies whether a vehicle has a steering wheel, gas pedal, and/or brake pedal. In addition, or alternately, the database table may classify a vehicle as belonging to a particular category/taxonomic classification of autonomous or semi-autonomous vehicles as measured by a vehicle automation ratings system (e.g., by Society of Automotive Engineers (S.A.E.) automation ratings system), by which the set of features may be automatically determined, by reference to the standards established by the vehicle automation ratings system. In some embodiments, autonomous and/or semi-autonomous capabilities known to be installed in a vehicle, or which may be determined based upon a known vehicle classification/adherence to a standard, may be provided as input to an artificial neural network or other algorithm used to mitigate loss and/or handle claims. Vehicle owners may be advised (e.g., via a message displayed in a display such as display 224) that moving from one level of vehicle autonomy to another, may improve aggregate risk and decrease premiums.
- In some embodiments, users who have been involved in an accident recently (e.g., within one month) may be incentivized to mitigate further injury by utilizing autonomous driving features. Such incentives may be communicated to users after a trained neural network analyzes a claim as described above.
- The types of autonomous or semi-autonomous vehicle-related functionality or technology that may be used with the present embodiments to replace human driver actions may include and/or be related to the following types of functionality: (a) fully autonomous (driverless); (b) limited driver control; (c) vehicle-to-vehicle (V2V) wireless communication; (d) vehicle-to-infrastructure (and/or vice versa), and/or vehicle-to-device (such as mobile device or smart vehicle controller) wireless communication; (e) automatic or semi-automatic steering; (f) automatic or semi-automatic acceleration; (g) automatic or semi-automatic braking; (h) automatic or semi-automatic blind spot monitoring; (i) automatic or semi-automatic collision warning; (j) adaptive cruise control; (k) automatic or semi-automatic parking/parking assistance; (l) automatic or semi-automatic collision preparation (windows roll up, seat adjusts upright, brakes pre-charge, etc.); (m) driver acuity/alertness monitoring; (n) pedestrian detection; (o) autonomous or semi-autonomous backup systems; (p) road mapping systems; (q) software security and anti-hacking measures; (r) theft prevention/automatic return; (s) automatic or semi-automatic driving without occupants; and/or other functionality.
- All of the information pertaining to a claim, including customer and vehicle information, may be provided to
neural network unit 150 for training a model to determine loss reserving amounts. In some embodiments, loss reserve overrides that are stored separately from AI platform 104 may be used to force human oversight of model predictions. For example, the methods and systems herein may contain instructions which, when executed, cause any claim analyzed by a neural network (or other artificial intelligence or machine learning algorithm or model) for which a loss reserving amount of over $1 million is predicted to require human review and confirmation. Over time, as the model is trained, such overrides may be removed. In other embodiments, the models may be completely automated and unattended. - It should also be appreciated that the methods and techniques described herein may not be applied to seek profit in an insurance marketplace. Rather, the methods and techniques may be used to more fairly and equitably allocate risk among customers in a way that is revenue-neutral, yet which strives for fairness to all market participants, and may only be used on an opt-in basis. In one embodiment, a claim may be related to the operation of a vehicle. In other words, the claim may relate to physical injury sustained by a driver and/or passenger, or damage to the vehicle being driven by the vehicle operator, another vehicle, or other persons/property. The models trained using the methods and systems herein may be trained incrementally, so that when new claims are settled, they are used to improve an existing model without completely retraining the model on all data.
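The human-oversight override described above can be pictured as a simple guard on the model's predicted reserve. The $1 million threshold comes from the example in the text; the function and constant names are assumptions for illustration.

```python
REVIEW_THRESHOLD = 1_000_000  # the $1 million example threshold from the text

def requires_human_review(predicted_reserve: float) -> bool:
    """Flag a predicted loss reserve for human review and confirmation."""
    return predicted_reserve > REVIEW_THRESHOLD

# A $1.2M prediction is routed to a reviewer; a $50k prediction flows through.
print(requires_human_review(1_200_000))  # True
print(requires_human_review(50_000))     # False
```

As the text notes, such a guard would live outside the AI platform itself, so it can be relaxed or removed as confidence in the trained model grows.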
- The methods and systems described herein may help risk-averse customers to lower their insurance premiums by taking affirmative steps to mitigate risk of loss before, during, and after the filing of a claim. The methods and systems may also allow customers to interact with claims handling in a transparent, streamlined, and scalable fashion. All of the benefits provided by the methods and systems described herein may be realized much more quickly than traditional modeling approaches.
- With reference to
FIG. 2, a high-level block diagram of vehicle insurance loss reserving model training system 200 is illustrated that may implement communications between a client device 202 and a server device 204 via network 206 to provide vehicle insurance loss mitigation and/or claims handling. FIG. 2 may correspond to one embodiment of environment 100 of FIG. 1, and also includes various user/client-side components. For simplicity, client device 202 is referred to herein as client 202, and server device 204 is referred to herein as server 204, but either device may be any suitable computing device (e.g., a laptop, smart phone, tablet, server, wearable device, etc.). Server 204 may host services relating to neural network training and operation, and may be communicatively coupled to client 202 via network 206. In general, training the neural network model may include establishing a network architecture, or topology, and adding layers that may be associated with one or more activation functions (e.g., a rectified linear unit, softmax, etc.), loss functions, and/or optimization functions. Multiple different types of artificial neural networks may be employed, including without limitation, recurrent neural networks, convolutional neural networks, and deep learning neural networks. Data sets used to train the artificial neural network(s) may be divided into training, validation, and testing subsets; these subsets may be encoded in an N-dimensional tensor, array, matrix, or other suitable data structure. Training may be performed by iteratively training the network using labeled training samples. Training of the artificial neural network may produce weights, or parameters, which may be initialized to random values.
The weights may be modified as the network is iteratively trained, by using one of several gradient descent algorithms, to reduce loss and to cause the values output by the network to converge to expected, or “learned”, values. In an embodiment, a regression neural network may be selected which lacks an activation function, wherein input data may be normalized by mean centering. To determine loss and quantify the accuracy of outputs, a mean squared error loss function and mean absolute error may be used. The artificial neural network model may be validated and cross-validated using standard techniques such as hold-out, K-fold, etc. In some embodiments, multiple artificial neural networks may be separately trained and operated, and/or separately trained and operated in conjunction. - Although only one client device is depicted in
FIG. 2, it should be understood that any number of client devices 202 may be supported. Client device 202 may include a memory 208 and a processor 210 for storing and executing, respectively, a module 212. While referred to in the singular, processor 210 may include any suitable number of processors of one or more types (e.g., one or more CPUs, graphics processing units (GPUs), cores, etc.). Similarly, memory 208 may include one or more persistent memories (e.g., a hard drive and/or solid state memory). -
Module 212, stored in memory 208 as a set of computer-readable instructions, may be related to a loss reserve client 216 which, when executed by the processor 210, may cause input data to be stored in memory 208 or data to be transferred to/from server 204 via network 206. The data stored in memory 208 may correspond to, for example, raw data retrieved from input data 102. Loss reserve client 216 may be implemented as a web page (e.g., HTML, JavaScript, CSS, etc.) and/or as a mobile application for use on a standard mobile computing platform. -
Loss reserve client 216 may store information in memory 208, including the instructions required for its execution. While the user is using loss reserve client 216, scripts and other instructions comprising loss reserve client 216 may be represented in memory 208 as a web or mobile application. The input data collected by loss reserve client 216 may be stored in memory 208 and/or transmitted to server device 204 by network interface 214 via network 206, where the input data may be processed as described above to determine a series of risk indications and/or a risk level. In one embodiment, the input to loss reserve client 216 may be data used to train a model (e.g., scanned claim data). -
Client device 202 may also include GPS sensor 218, an image sensor 220, user input device 222 (e.g., a keyboard, mouse, touchpad, and/or other input peripheral device), and display interface 224 (e.g., an LED screen). User input device 222 may include components that are integral to client device 202, and/or exterior components that are communicatively coupled to client device 202, to enable client device 202 to accept inputs from the user. Display 224 may be either integral or external to client device 202, and may employ any suitable display technology. In some embodiments, input device 222 and display 224 are integrated, such as in a touchscreen display. Execution of the module 212 may further cause the processor 210 to associate device data collected from client 202, such as a time, date, and/or sensor data (e.g., a camera for photographic or video data), with vehicle and/or customer data, such as data retrieved from customer data 160 and vehicle data 162, respectively. - In some embodiments,
client 202 may receive data from loss data 142 and loss reserve aggregation platform 106. Such data, including loss labels and plan data, may be presented to a user of client 202 by a display interface 224. Aggregation data may include gross and net amounts related to categories of loss reserves, in some embodiments. Aggregation data may include an acceptability indicator, demonstrative of whether aggregate amounts are more or less than an acceptable dollar amount or multiplier. An action may be taken if an acceptability indicator demonstrates an amount beyond an acceptable range (e.g., a warning message emitted or an email sent). In this way, the loss reserve aggregation platform 106 may provide an insurer with a view of loss reserves at a global level (e.g., across a division, such as automotive, or with respect to an organizational unit or subsidiary of a company) or at a level wherein the granularity is configurable by the insurer all the way down to the individual customer level. - Execution of the
module 212 may further cause the processor 210 of the client 202 to communicate with the processor 250 of the server 204 via network interface 214 and network 206. As an example, an application related to module 212, such as loss reserve client 216, may, when executed by processor 210, cause a user interface to be displayed to a user of client device 202 via display interface 224. The application may include graphical user interface (GUI) components for acquiring data (e.g., photographs) from image sensor 220, GPS coordinate data from GPS sensor 218, and textual user input from user input device(s) 222. - The
processor 210 may transmit the aforementioned acquired data to server 204, and processor 250 may pass the acquired data to an artificial neural network (or other artificial intelligence or machine learning algorithm, program, module, or model), which may accept the acquired data and perform a computation (e.g., training of the model, or application of the acquired data to a trained artificial neural network model (or other artificial intelligence or machine learning algorithm or model) to obtain a result). With specific reference to FIG. 1, the data acquired by client 202 may be transmitted via network 206 to a server implementing AI platform 104, and may be processed by input analysis unit 120 before being applied to a trained neural network (or other artificial intelligence or machine learning algorithm or model) by loss classifier 140. - As described with respect to
FIG. 1, the processing of input from client 202 may include associating customer data 160 and vehicle data 162 with the acquired data. The output of the neural network (or other artificial intelligence or machine learning algorithm or model) may be transmitted, by a loss classifier corresponding to loss classifier 140 in server 204, back to client 202 for display (e.g., in display 224) and/or for further processing. -
Network interface 214 may be configured to facilitate communications between client 202 and server 204 via any hardwired or wireless communication network, including network 206, which may be a single communication network, or may include multiple communication networks of one or more types (e.g., one or more wired and/or wireless local area networks (LANs), and/or one or more wired and/or wireless wide area networks (WANs) such as the Internet). Client 202 may cause insurance risk/loss/claim related data and/or metadata to be stored in server 204 memory 252 and/or a remote insurance related database such as customer data 160. -
Server 204 may include a processor 250 and a memory 252 for executing and storing, respectively, a module 254. Module 254, stored in memory 252 as a set of computer-readable instructions, may facilitate applications related to loss reserving and financial reporting including data storage and retrieval (e.g., data and claim metadata, and insurance policy application data). For example, module 254 may include input analysis application 260, loss reserving application 262, and neural network training application 264, in one embodiment. Module 254 may be responsible for interpreting output from trained neural network models (or other types of artificial intelligence or machine learning algorithms or models), and for generating loss reserving information, in some embodiments. -
Input analysis application 260 may correspond to input analysis unit 120 of environment 100 of FIG. 1. Loss reserving application 262 may correspond to loss reserving unit 154 of FIG. 1, and neural network training application 264 may correspond to neural network unit 150 of environment 100 of FIG. 1. Module 254 and the applications contained therein may include instructions which, when executed by processor 250, cause server 204 to receive and/or retrieve input data (e.g., raw data and/or an electronic claim) from client device 202. In one embodiment, input analysis application 260 may process the data from client 202, such as by matching patterns, converting raw text to structured text via natural language processing, by extracting content from images, by converting speech to text, and so on. - In another embodiment,
client device 202 may be used by an employee of the insurer to view results produced by loss reserving application 262. For example, loss reserving application 262 may display/interpret results of a trained neural network (or other artificial intelligence or machine learning algorithm or model). In some cases, loss reserving application 262 may periodically interpret results produced by a trained neural network (e.g., hourly, weekly, or monthly). As time passes and the neural network receives additional claim data, the predicted loss reserves may be updated. In one embodiment, an increase in loss reserves predicted by a neural network model may cause a withdrawal or transfer of funds into a bank account or trust specifically created for the purpose of holding loss reserving funds. - Throughout the aforementioned processing,
processor 250 may read data from, and write data to, a location of memory 252 and/or to one or more databases associated with server 204. For example, instructions included in module 254 may cause processor 250 to read data from historical data 270, which may be communicatively coupled to server device 204, either directly or via communication network 206. Historical data 270 may correspond to historical data 108, and processor 250 may contain instructions specifying analysis of a series of electronic claim documents from historical data 270, as described above with respect to claims 110-1 through 110-n of historical data 108 in FIG. 1. -
Processor 250 may query customer data 272 and vehicle data 274 for data related to respective electronic claim documents and raw data, as described with respect to FIG. 1. In one embodiment, customer data 272 and vehicle data 274 correspond, respectively, to customer data 160 and vehicle data 162 of FIG. 1. In some embodiments, customer data 272 and/or vehicle data 274 may not be integral to server 204. Module 254 may also facilitate communication between client 202 and server 204 via network interface 256 and network 206, in addition to other instructions and functions. - Although only a
single server 204 is depicted in FIG. 2, it should be appreciated that it may be advantageous in some embodiments to provision multiple servers for the deployment and functioning of AI system 102. For example, the pattern matching unit 128 and natural language processing unit 130 of input analysis unit 120 may require CPU-intensive processing. Therefore, deploying additional hardware may provide additional execution speed. Each of historical data 270, customer data 272, vehicle data 274, and risk indication data 276 may be geographically distributed. - While the databases depicted in
FIG. 2 are shown as being communicatively coupled to server 204, it should be understood that historical claim data 270, for example, may be located within separate remote servers or any other suitable computing devices communicatively coupled to server 204. Distributed database techniques (e.g., sharding and/or partitioning) may be used to distribute data. In one embodiment, a free or open source software framework such as Apache Hadoop® may be used to distribute data and run applications (e.g., loss reserving application 262). It should also be appreciated that different security needs, including those mandated by laws and government regulations, may in some cases affect the embodiment chosen, and the configuration of services and components. - In a manner similar to that discussed above in connection with
FIG. 1, historical claims from historical claim data 270 may be ingested by server 204 and used by neural network training application 264 to train an artificial neural network (or other artificial intelligence or machine learning algorithm or model). In one embodiment, a claim may be classified according to a multilabel, multiclass scheme. For example, an algorithm may be trained using a portion of historical claims as input that are pre-labeled with a set of labels. The set of labels may comprise any information found in a claim before processing (e.g., whether settled, and a payout amount, if any) and after processing by input analysis unit 120. A set of several thousand, or even millions, of claims may be associated with such informational labels, and a percentage (e.g., 80%) may be used to train a neural network (or other artificial intelligence or machine learning algorithm or model). For example, a recurrent neural network may be created that uses a number of hidden layers and has as its last layer a densely connected network in which all neurons are interconnected. Additional layers or “chains” may be formed, in which models of differing network architectures are coupled to the recurrent neural network. The output of the chained network may be a set of labels to which the claim is predicted to belong (e.g., MOTORCYCLE, PASSENGER-CAR, etc.). - Then, when
module 254 processes input from client 202, the data output by the neural network(s) (e.g., data indicating labels, loss reserving amounts, weights, etc.) may be passed to loss reserving application 262 for analysis/display. As discussed, loss reserving application 262 may take additional actions based upon the output of the trained model(s). - It should be appreciated that the client/server configuration depicted and described with respect to
FIG. 2 is but one possible embodiment. In some cases, a client device such as client 202 may not be used. In that case, input data may be entered—programmatically, or manually—directly into device 204. A computer program or human may perform such data entry. In that case, the device may contain additional or fewer components, including input device(s) and/or display device(s). - In one embodiment, a
client device 202 may be an integral device to a vehicle of a user (not depicted), or may be communicatively coupled to a network communication device of a vehicle. The vehicle may be an autonomous or semi-autonomous vehicle, and loss reserve client 216 may include instructions which, when executed, may collect information pertaining to the autonomous capabilities of the vehicle. For example, the loss reserve client 216 may periodically receive/retrieve the status of individual autonomous vehicle components (e.g., the engagement/disengagement of a collision avoidance mechanism) and/or whether a particular dynamic driving system is active or disabled (e.g., by intentional interference or accidental damage). - The status of autonomous (and in some embodiments, semi-autonomous) systems may be determined by polling input devices, such as
input device 222, or by other methods (e.g., by receiving streamed status information, or by retrieving cached values). Such status information may be used as training data for an artificial neural network (or other artificial intelligence or machine learning algorithm or model) (e.g., by neural network training application 264), and/or may be used as input to a trained artificial neural network (or other artificial intelligence or machine learning algorithm or model) to determine the risk represented by a vehicle and/or a driver. Vehicle risk and driver risk may be independently calculated. For example, an SAE Level 3 autonomous vehicle may be associated with a baseline risk level, and a user's risk may be factored into the baseline level risk. Multiple variables (e.g., vehicle category, driver age, autonomous features, etc.) may be used to make a single prediction of loss reserves. - As noted, the risk factors or labels determined by trained neural networks (or other artificial intelligence or machine learning algorithms or models) analyzing historical claim data may appear counter-intuitive or unrelated to the optimal loss reserve level. For example, in a vehicle wherein a dynamic driving system includes functionality to take control away from the automated system, a neural network (or other artificial intelligence or machine learning algorithm or model) may predict high risk wherein the instances of revocation of control from the automated system are low. This may indicate an over-reliance on the automated system by a vehicle operator. It may also be the case that revocation of control may indicate high risk with respect to some vehicle operators, and lower risk with respect to others.
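The idea above of combining a vehicle baseline risk with an independently calculated driver risk into a single loss reserve prediction can be pictured as a weighted score. The baseline values, weights, and function names below are invented placeholders standing in for learned model parameters, not figures from the disclosure.

```python
# Hypothetical learned parameters: a baseline risk per automation level
# (lower for more autonomy), plus a simple driver-age weighting.
BASELINE_RISK = {2: 0.30, 3: 0.22, 4: 0.15}
AGE_WEIGHT = 0.004
MAX_RESERVE = 50_000.0  # illustrative reserve ceiling in dollars

def predict_reserve(sae_level: int, driver_age: int) -> float:
    """Combine vehicle baseline risk and driver risk into one reserve."""
    vehicle_risk = BASELINE_RISK.get(sae_level, 0.40)
    driver_risk = AGE_WEIGHT * max(0, 70 - driver_age)  # younger -> riskier
    return round(MAX_RESERVE * (vehicle_risk + driver_risk), 2)

print(predict_reserve(sae_level=3, driver_age=25))
```

Under these assumed parameters, a younger driver or a less-automated vehicle raises the predicted reserve, mirroring how multiple variables feed one prediction in the text.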
- Artificial neural network models (or other artificial intelligence or machine learning algorithms or models) may be trained to output compound labels (e.g., AUTONOMOUS-RURAL). Once autonomous vehicle information is determined, it may be transmitted to a remote computing system, such as
AI platform 104, or server device 204, for further analysis. Input analysis application 260 may format and/or store autonomous vehicle information in a database, such as vehicle data 274, and/or a trained neural network (or other artificial intelligence or machine learning algorithm or model) may immediately (or at a later time) process the autonomous vehicle information to determine loss reserving amounts, whether individual or aggregated. - The loss reserving and financial reporting information may be associated with one or both of a vehicle and a vehicle operator by storage in, respectively,
vehicle data 274 and customer data 272. In some embodiments, the set of loss reserving information may be stored in an electronic database such as loss data 142. In some embodiments, the set of loss reserving and financial reporting information may be provided to an additional application, such as loss reserve aggregation platform 106, or an application executing in module 212. As noted, once a set of loss reserving information is identified, the set may be used to compute an aggregate, which may be used for many purposes, such as underwriting insurance policies, adjusting capitalization, forecasting profit/loss, etc. - In one embodiment, an automated control system may perform dynamic vehicle control, which may include instructions to operate the vehicle, including without limitation, real-time functions, trip generation, steering control, acceleration and deceleration, environmental monitoring, and instructions for operating various vehicle components (e.g., headlights, turn signals, traction control, etc.). The automated control system may perform dynamic vehicle control for a period of time (e.g., hours or days) with respect to the vehicle, during which time an application executing in
module 212 may collect telematics data. Telematics data may include such data as GPS information, vehicle location, braking, speed, acceleration, cornering, movement, status, orientation, position, behavior, mobile device, and/or other types of data; and may be determined using a combination of sensors and computing/storage devices. For example, loss reserve client 216 may determine the vehicle's position by reading data from GPS device 218. Other sensors may provide information regarding the vehicle's speed, acceleration, instrumentation, and path. - Telematics data may be periodically sampled, or retrieved on a continuous basis. Telematics data may be transmitted in real-time from a wireless networking transceiver (e.g., network interface 214) via a network (e.g., network 206) to a communicatively coupled server (e.g., server device 204). In some embodiments, telematics data may be cached in a
memory 208 and transmitted to server 204 at a later time, or processed in situ. Telematics data may be provided as input to a trained artificial neural network (or other artificial intelligence or machine learning algorithm or model). Each individual data type within telematics data may be referred to as a “telematics attribute.” For example, “speed” may be a telematics attribute. - In one embodiment, the real-time use of autonomous vehicle features and telematics data may be used to train a neural network (or other artificial intelligence or machine learning algorithm or model) to predict loss reserving amounts. For example, the percentage of vehicles equipped with autonomous vehicle features in which the drivers do not take sharp corners may be directly correlated with lower loss reserving requirements. As noted above, such models may be continuously trained using data input from
client device 202. Client device 202 may be located inside/integral to a vehicle, according to some embodiments. - A set of periodic telematics data (e.g., a month's worth of telematics data) may be stored in association with a user's account in an electronic database coupled to
client device 202 and/or server device 204. The electronic database may include physical and/or software anti-tampering measures intended to prevent unauthorized alteration or modification of the telematics data. For example, telematics data stored in a device onboard the vehicle may be encrypted in a server computing device using a secret key that is not stored in the vehicle. As noted above, customer data 272 may be associated with data corresponding to one or more vehicles in vehicle data 274. - In some embodiments, a neural network may be trained to automatically provide financial reporting. The provision of automatic financial reporting may be based on output of a first neural network establishing loss reserve amounts. Financial reporting may be triggered in response to a set of inputs or a learned value. For example, in an embodiment, financial reporting may be performed if a trained algorithm encounters a series of inputs that increase loss reserving beyond a preconfigured amount or cause another threshold value to be exceeded. In other embodiments, financial reports may include aggregations of loss reserve amounts. For example, the output of a loss reserving neural network may be collected over a period of time (e.g., hourly, quarterly, etc.). A financial report may be generated which includes a summary and/or aggregation of the loss reserve outputs, in textual and/or graphical format.
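The anti-tampering idea described above (a secret key held server-side, never stored in the vehicle) can be illustrated with a standard-library sketch. Note this uses an HMAC integrity tag rather than the encryption the text mentions, a deliberate simplification to show tamper detection; the key and field names are hypothetical.

```python
import hashlib
import hmac
import json

SERVER_SECRET = b"server-side-key-never-stored-in-vehicle"  # hypothetical key

def sign_record(record: dict) -> str:
    """Compute an integrity tag over a canonical form of a telematics record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()

def is_untampered(record: dict, tag: str) -> bool:
    """Verify the record still matches the tag computed server-side."""
    return hmac.compare_digest(sign_record(record), tag)

record = {"speed": 61.0, "ts": 1_700_000_000}
tag = sign_record(record)
record["speed"] = 30.0  # unauthorized alteration onboard the vehicle
print(is_untampered(record, tag))  # False
```

Because the key lives only on the server, an alteration made onboard the vehicle cannot produce a matching tag, which is the property the anti-tampering measure relies on.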
- An artificial neural network (or other artificial intelligence or machine learning algorithm or model) may be trained in neural
network training application 264 which includes a plurality of input layers for customer data, a plurality of input layers for vehicle data, and a plurality of layers for telematics data, wherein the vehicle data, customer data, and telematics data relate to the operation of a vehicle by a vehicle operator. For example, the vehicle data may include the make and model of the vehicle, as well as a manifest of the autonomous or dynamic driving capabilities standardly supported by the vehicle, including a status indication of each respective capability. The customer data may include demographic or other customer data as described herein, and the telematics data may include the information as described above. - In one embodiment, telematics data may include indications of user driving such as braking, cornering, speed, and acceleration. The neural network (or other artificial intelligence or machine learning algorithm or model) may associate such behaviors with higher loss reserves. The neural network (or other artificial intelligence or machine learning algorithm or model) may learn to weight such activities higher due to association with factors in other data sets (e.g., higher claim payouts, and vehicles having higher top speeds and/or lacking automated driving capabilities).
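The driving behaviors just described (braking, cornering, speed) might be summarized from raw telematics samples into features before being supplied to a model. The thresholds and field names below are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical thresholds for deriving behavior features from raw samples.
HARD_BRAKE_G = -0.4   # longitudinal acceleration, in g
SHARP_CORNER_G = 0.5  # lateral acceleration, in g

def behavior_features(samples: list) -> dict:
    """Summarize raw telematics samples into braking/cornering/speed features."""
    return {
        "hard_brakes": sum(s["accel_g"] <= HARD_BRAKE_G for s in samples),
        "sharp_corners": sum(abs(s["lateral_g"]) >= SHARP_CORNER_G for s in samples),
        "max_speed": max(s["speed_mph"] for s in samples),
    }

samples = [
    {"accel_g": -0.6, "lateral_g": 0.1, "speed_mph": 48.0},
    {"accel_g": -0.1, "lateral_g": 0.7, "speed_mph": 62.0},
]
print(behavior_features(samples))
```

Features like these are the sort of inputs a model could learn to weight more heavily when they co-occur with higher claim payouts in the training data.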
- The artificial neural network (or other artificial intelligence or machine learning algorithm or model) may be trained to output loss reserving information with respect to customers by analyzing historical claims data in addition to the telematics data, vehicle data, and/or claims data. For example, the artificial neural network (or other artificial intelligence or machine learning algorithm or model) may be trained using claims data filed by customers in a geographic area who are between the ages of 16 and 25, wherein the vehicle subject to the claim is a pickup truck, and wherein the pickup truck includes partial driving automation (e.g., minimally, lane departure warnings). Such a subset of claims may be identified by querying the electronic databases described above, or by any other suitable method.
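Identifying such a training subset amounts to filtering historical claims on customer and vehicle attributes. The claim fields below are hypothetical placeholders for whatever schema the electronic databases actually use; only the filter criteria come from the example in the text.

```python
def select_training_subset(claims: list, zip_prefix: str = "554") -> list:
    """Filter historical claims per the example criteria above:
    drivers aged 16-25, pickup trucks with lane departure warnings,
    in a (hypothetical) geographic area given by a zip prefix."""
    return [
        c for c in claims
        if 16 <= c["driver_age"] <= 25
        and c["vehicle_type"] == "pickup"
        and c["lane_departure_warning"]
        and c["zip"].startswith(zip_prefix)
    ]

claims = [
    {"driver_age": 19, "vehicle_type": "pickup",
     "lane_departure_warning": True, "zip": "55401"},
    {"driver_age": 40, "vehicle_type": "sedan",
     "lane_departure_warning": False, "zip": "55401"},
]
print(len(select_training_subset(claims)))  # 1
```

In practice this filter would be expressed as a database query against the historical claim data rather than an in-memory scan, but the selection logic is the same.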
- It should be appreciated that the ability to create models that are able to calculate loss reserves for an arbitrary set of customers may be a very valuable tool, and may have applications beyond merely setting loss reserves. It should be appreciated that the foregoing example is simplified for expository purposes, and that more complex training scenarios are envisioned. Although some scenarios may include a trained neural network executing in
module 254, it may be possible to package the trained neural network for distribution to a client 202 (i.e., the trained neural network (or other artificial intelligence or machine learning algorithm or model) may be operated on the client 202 without the use of a server 204). - In operation, the user of
client device 202, by operating input device 222 and viewing display 224, may open loss reserve client 216, which, depending on the embodiment, may allow the user to enter information. The user may be an employee of a company controlling AI platform 104 or a customer or end user of the company. For example, loss reserve client 216 may walk the user through the steps of training a loss reserving neural network (or other artificial intelligence or machine learning algorithm or model) using a specific subset of training data, and also operating the trained model, as described with respect to FIG. 11. - Before the user can fully access
loss reserve client 216, the user may be required to authenticate (e.g., enter a valid username and password). The user may then utilize loss reserve client 216. Module 212 may contain instructions that identify the user and cause loss reserve client 216 to present a particular set of questions or prompts for input to the user, based upon any information loss reserve client 216 collects, including without limitation information about the user or any vehicle. Further, module 212 may identify a subset of historical data 270 to be used in training a neural network (or other artificial intelligence or machine learning algorithm or model), and/or may indicate to server device 204 that the use of a particular neural network (or other artificial intelligence or machine learning) model or models is appropriate. - In some embodiments, location data from
client device 202 may be used by a neural network (or other artificial intelligence or machine learning algorithm or model) to label risk, and labels may be linked, in that a first label implies a second label. As noted above, location may be provided to one or more neural networks (or other artificial intelligence or machine learning algorithms or models) in the AI platform to generate labels and determine risk. For example, the zip code of a vehicle operator, whether provided via GPS or entered manually by a user, may cause the neural network (or other artificial intelligence or machine learning algorithm or model) to generate a label applicable to the vehicle operator such as RURAL, SUBURBAN, or URBAN. Such qualifications may be used in the calculation of optimal loss reserve estimations, and may be weighted accordingly. For example, the neural network (or other artificial intelligence or machine learning algorithm or model) may assign a higher severity score to the RURAL label, due to the fact that the vehicle operator recently underwent surgery and should not be driving longer distances. The generation of a RURAL label may be accompanied by additional labels such as COLLISION. Alternatively, or in addition, the collision label weight may be increased along with the addition of the RURAL label. - Another label, such as LONG-TRIP, to reflect that the vehicle operator drives longer trips than other drivers, on average, may be associated with vehicle operators who the neural network (or other artificial intelligence or machine learning algorithm or model) labels as RURAL. In some embodiments, label generation may be based upon seasonal information, in whole or in part. For example, the neural network (or other artificial intelligence or machine learning algorithm or model) may generate labels, and/or adjust label weights based upon location provided in input data. 
It should be appreciated that the quick and automatic generation of labels is a benefit of the methods and systems disclosed herein, and that some of the associations may appear counter-intuitive when analyzing large data sets.
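As a concrete (and deliberately simplified) illustration of the linked location labels and weight adjustments described above, consider the following sketch. It is not the claimed implementation: the zip-code table, the label linkage map, and the weight increments are all invented for illustration.

```python
# Illustrative sketch of location-based label generation with linked labels.
# Hypothetical lookup: zip-code prefix -> density label.
ZIP_DENSITY = {"617": "RURAL", "605": "SUBURBAN", "606": "URBAN"}

# Hypothetical linkage: a first label implies (or up-weights) further labels.
LINKED_LABELS = {"RURAL": ["LONG-TRIP", "COLLISION"]}

def generate_location_labels(zip_code, base_weights=None):
    """Return a dict of label -> weight for a vehicle operator's zip code."""
    weights = dict(base_weights or {})
    density = ZIP_DENSITY.get(zip_code[:3], "SUBURBAN")
    weights[density] = weights.get(density, 0.0) + 1.0
    # A first label may be accompanied by, or increase the weight of, others.
    for linked in LINKED_LABELS.get(density, []):
        weights[linked] = weights.get(linked, 0.0) + 0.5
    return weights

labels = generate_location_labels("61701", {"COLLISION": 1.0})
```

Here a pre-existing COLLISION weight is increased when the RURAL label is generated, and the linked LONG-TRIP label appears even though it was not present in the input.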
- All of the information collected by
loss reserve client 216 may be associated with a session identification number so that it may be referenced as a whole.Server 204 may process the information as it arrives, and thus may process information collected byloss reserve client 216. Once information sufficient to process the user's request has been collected,server 204 may pass all of the processed information (e.g., from input analysis application) toloss reserving application 262, which may apply the information to the trained neural network model (or other artificial intelligence or machine learning algorithm or model). While the loss reserve calculation is ongoing,client device 202 may display an indication to that effect. - When the loss reserving estimate is available, an indication of completeness may be transmitted to
client 202 and displayed to user, for example viadisplay 224. Missing information may cause the model to abort with an error. In one embodiment, the settlement of a claim may trigger an immediate update of one or more neural network models (or other artificial intelligence or machine learning algorithms or models) included in the AI platform. For example, the settlement of a claim involving personal injury that occurs on a boat may trigger updates to a set of personal injury neural network models (or other artificial intelligence or machine learning algorithms or models) pertaining to boat insurance, or to a monolithic model. - In addition, or alternatively, as new claims are filed and processed, new labels may be dynamically generated, based upon claim mitigation or loss information identified and generated during the training process. In some embodiments, a human reviewer or team of reviewers may be responsible for approving the generated labels and any associated weightings before they are used. For example, claims may be labeled with settlement amounts, as well as the amount of time that the claim remained unsettled, wherein such time is normalized across all claims (e.g., represented as seconds). Both the dollar amount and timing information may be used to train a neural network (or other artificial intelligence or machine learning algorithm or model), such that the loss reserving prediction may include both a dollar amount as well as an amount of time that the claim may remain unsettled.
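The dual-part training target described above (settlement dollars plus time-to-settlement normalized to seconds) might be assembled as in the following sketch; the function and field names are hypothetical stand-ins, not the claimed implementation.

```python
# Sketch: build a two-part training target from one settled claim.
from datetime import datetime

def claim_training_targets(settled_amount, opened, settled):
    """Return the settlement dollars and the time the claim remained
    unsettled, normalized across all claims as seconds."""
    seconds_open = (settled - opened).total_seconds()
    return {"amount": float(settled_amount), "seconds_unsettled": seconds_open}

target = claim_training_targets(
    24500.0,
    datetime(2020, 1, 1),   # claim opened
    datetime(2020, 3, 1),   # claim settled
)
```

Both values could then be used as supervised targets, so a prediction includes a dollar amount and an expected time unsettled.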
- In some embodiments,
AI platform 104 may be trained and/or updated to provide one or more dynamic insurance rating models which may be provided to, for example, a governmental agency. As discussed above, models are historically difficult to update and updates may be performed on a yearly basis. Using the techniques described herein, models may be dynamically updated in real-time, or on a shorter schedule (e.g., weekly) based upon new claim data. - While
FIG. 2 depicts a particular embodiment, the various components ofenvironment 100 may interoperate in a manner that is different from that described above, and/or theenvironment 100 may include additional components not shown inFIG. 2 . For example, an additional server/platform may act as an interface betweenclient device 202 andserver device 204, and may perform various operations associated with providing the loss reserving and financial reporting operations ofserver 204 toclient device 202 and/or other servers. -
FIG. 3 depicts an exemplary artificial neural network 300 which may be trained by neural network unit 150 of FIG. 2 or neural network training application 264 of FIG. 2 , according to one embodiment and scenario. The example neural network 300 may include layers of neurons, including input layer 302, one or more hidden layers 304-1 through 304-n, and output layer 306. Each layer comprising neural network 300 may include any number of neurons—i.e., q and r may be any positive integers. It should be understood that neural networks of a different structure and configuration than those depicted in FIG. 3 may be used to achieve the methods and systems described herein. -
Input layer 302 may receive different input data. For example, input layer 302 may include a first input a1 which represents an insurance type (e.g., collision), a second input a2 representing patterns identified in input data, a third input a3 representing a vehicle make, a fourth input a4 representing a vehicle model, a fifth input a5 representing whether a claim was paid or not paid, a sixth input a6 representing an inflation-adjusted dollar amount disbursed under a claim, and so on. Input layer 302 may comprise thousands or more inputs. In some embodiments, the number of elements used by neural network 300 may change during the training process, and some neurons may be bypassed or ignored if, for example, during execution of the neural network, they are determined to be of less relevance. - Each neuron in hidden layer(s) 304-1 through 304-n may process one or more inputs from
input layer 302, and/or one or more outputs from a previous one of the hidden layers, to generate a decision or other output.Output layer 306 may include one or more outputs each indicating a label, confidence factor, and/or weight describing one or more inputs. A label may indicate the presence (ACCIDENT, DEER) or absence (DROUGHT) of a condition. In some embodiments, however, outputs ofneural network 300 may be obtained from a hidden layer 304-1 through 304-n in addition to, or in place of, output(s) from output layer(s) 306. - In some embodiments, each layer may have a discrete, recognizable, function with respect to input data. For example, if n=3, a first layer may analyze one dimension of inputs, a second layer a second dimension, and the final layer a third dimension of the inputs, where all dimensions are analyzing a distinct and unrelated aspect of the input data. For example, the dimensions may correspond to aspects of a vehicle operator considered strongly determinative, then those that are considered of intermediate importance, and finally those that are of less relevance.
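For illustration only, the layered structure of FIG. 3 can be sketched in a few lines of numpy. The layer sizes, weights, activation function, and encoded inputs below are arbitrary stand-ins, not the trained network described herein.

```python
# Minimal numpy sketch of an input layer, one hidden layer, and an
# output layer producing per-label scores.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, w_hidden, w_out):
    """One forward pass: hidden neurons combine the inputs, and the
    output layer turns hidden activations into label scores."""
    hidden = relu(w_hidden @ x)   # hidden layer (cf. 304-1)
    return w_out @ hidden         # output layer (cf. 306)

x = np.array([1.0, 0.5, 0.0, 1.0])   # e.g., encoded inputs a1..a4
w_hidden = rng.normal(size=(3, 4))   # 3 hidden neurons, 4 inputs
w_out = rng.normal(size=(2, 3))      # 2 output labels
scores = forward(x, w_hidden, w_out)
```

Each row of `w_hidden` plays the role of one neuron's weights w1..wp; training would adjust these weights rather than drawing them at random.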
- In other embodiments, the layers may not be clearly delineated in terms of the functionality they respectively perform. For example, two or more of hidden layers 304-1 through 304-n may share decisions relating to labeling, with no single layer making an independent decision as to labeling.
- In some embodiments,
neural network 300 may be constituted by a recurrent neural network, wherein the calculation performed at each neuron is dependent upon a previous calculation. It should be appreciated that recurrent neural networks may be more useful in performing certain tasks, such as automatic labeling of images. Therefore, in one embodiment, a recurrent neural network may be trained with respect to a specific piece of functionality with respect toenvironment 100 ofFIG. 1 . For example, in one embodiment, a recurrent neural network may be trained and utilized as part ofimage processing unit 124 to automatically label images. -
FIG. 4 depicts anexample neuron 400 that may correspond to the neuron labeled as “1,1” in hidden layer 304-1 ofFIG. 3 , according to one embodiment. Each of the inputs to neuron 400 (e.g., the inputs comprising input layer 302) may be weighted, such that input a1 through ap corresponds to weights w1 through wp, as determined during the training process ofneural network 300. - In some embodiments, some inputs may lack an explicit weight, or may be associated with a weight below a relevant threshold. The weights may be applied to a function α, which may be a summation and may produce a value z1 which may be input to a
function 420, labeled as ƒ1,1(z1). Thefunction 420 may be any suitable linear or non-linear, or sigmoid, function. As depicted inFIG. 4 , thefunction 420 may produce multiple outputs, which may be provided to neuron(s) of a subsequent layer, or used directly as an output ofneural network 300. For example, the outputs may correspond to index values in a dictionary of labels, or may be calculated values used as inputs to subsequent functions. - It should be appreciated that the structure and function of the
neural network 300 andneuron 400 depicted are for illustration purposes only, and that other suitable configurations may exist. For example, the output of any given neuron may depend not only on values determined by past neurons, but also future neurons. - In some embodiments, a percentage of the data set used to train the neural network (or other artificial intelligence or machine learning algorithm or model) may be held back as testing data until after the neural network (or other artificial intelligence or machine learning algorithm or model) is trained using the balance of the data set. In embodiments wherein the neural network involves a time series or other temporally-ordered data, all elements composing the testing data set may be posterior of those composing training data set in time.
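The temporal hold-back described above can be sketched as follows; the `year` key is a hypothetical stand-in for whatever timestamp orders the claims, and the split fraction is arbitrary.

```python
# Sketch: hold back a test set that is strictly posterior, in time,
# to every element of the training set.

def temporal_split(claims, holdout_fraction=0.2):
    """Split claims (dicts assumed to carry a 'year' key) so the
    held-back test set postdates the entire training set."""
    ordered = sorted(claims, key=lambda c: c["year"])
    cut = int(len(ordered) * (1 - holdout_fraction))
    return ordered[:cut], ordered[cut:]

claims = [{"year": y} for y in (2019, 2016, 2018, 2017, 2020)]
train, test = temporal_split(claims)
```

This avoids leaking future information into the training of a time-series model, which a random shuffle would not.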
- The specific manner in which the one or more neural networks employ machine learning to label and/or quantify risk may differ depending on the content and arrangement of training documents within the historical data (e.g.,
historical data 108 of FIG. 1 and historical data 270 of FIG. 2 ) and the input data provided by customers or users of the AI platform (e.g., input data 102 of FIG. 1 and the data collected by loss reserve client 216 of FIG. 2 ), as well as the data that is joined to the historical data and input data, such as customer data 160 of FIG. 1 and customer data 272 of FIG. 2 , and vehicle data 162 of FIG. 1 and vehicle data 274 of FIG. 2 . -
FIG. 5 . -
FIG. 5 depicts text-based content of an example electronic claim record 500 which may be processed using an artificial neural network, such as neural network 300 of FIG. 3 or a different neural network generated by neural network unit 150 of FIG. 1 or neural network training application 264 of FIG. 2 . The term "text-based content" as used herein includes printing characters (e.g., characters A-Z and numerals 0-9) in addition to non-printing characters (e.g., whitespace, line breaks, formatting, and control characters). Text-based content may be in any suitable character encoding, such as ASCII or UTF-8, and may include markup such as HTML. - Although text-based content is depicted in the embodiment of
FIG. 5 , as discussed above, claim input data may include images, including hand-written notes, and the AI platform may include a neural network (or other artificial intelligence or machine learning algorithm or model) trained to recognize hand-writing and to convert hand-writing to text. Further, “text-based content” may be formatted in any acceptable data format, including structured query language (SQL) tables, flat files, hierarchical data formats (e.g., XML, JSON, etc.) or as other suitable electronic objects. In some embodiments, image and audio data may be fed directly into the neural network(s) without being converted to text first. - With respect to
FIG. 5 ,electronic claim record 500 includes three sections 510 a-510 c, which respectively represent policy information, loss information, and external information.Policy information 510 a may include information about the insurance policy under which the claim has been made, including the person to whom the policy is issued, the name of the insured and any additional insureds, the location of the insured, etc.Policy information 510 a may be read, for example byinput analysis unit 120 analyzing historical data such ashistorical data 108 and individual claims, such as claims 110-1 through 110-n. Similarly, vehicle information may be included inpolicy information 510 a, such as a vehicle identification number (VIN). - Additional information about the insured and the vehicle (e.g., make, model, and year of manufacture) may be obtained from data sources and joined to input data. For example, additional customer data may be obtained from
customer data 160 orcustomer data 272, and additional vehicle data may be obtained fromvehicle data 162 andvehicle data 274. In some embodiments, make and model information may be included inelectronic claim record 500, and the additional lookup may be of vehicle attributes (e.g., the number of passengers the vehicle seats, the available options, etc.). - In addition to
policy information 510 a, electronic claim record 500 may include loss information 510 b. Loss information generally corresponds to information regarding a loss event in which a vehicle covered by the policy listed in policy information 510 a sustained loss, and may be due to an accident or other peril. Loss information 510 b may indicate the date and time of the loss, the type of loss (e.g., whether collision, comprehensive, etc.), whether personal injury occurred, whether the insured made a statement in connection with the loss, the number of vehicle operators and/or passengers involved, whether traffic citations were issued, whether the loss was settled, and if so for how much money. - In some embodiments, more than one loss may be represented in
loss information 510 b. For example, a single accident may give rise to multiple losses under a given policy, for example to two vehicles involved in a crash operated by vehicle operators not covered under the policy. In addition to loss information,electronic claim record 500 may includeexternal information 510 c, including but not limited to correspondence with the vehicle operator, statements made by the vehicle operator, etc.External information 510 c may be textual, audio, or video information. The information may include file name references, or may be file handles or addresses that represent links to other files or data sources, such as linked data 520 a-g. It should be appreciated that although only links 520 a-g are shown, more or fewer links may be included, in some embodiments. -
Electronic claim record 500 may include links to other records, including other electronic claim records. For example, electronic claim record 500 may link to notice of loss 520 a, one or more photographs 520 b, one or more audio recordings 520 c, one or more investigator's reports 520 d, one or more forensic reports 520 e, one or more diagrams 520 f, and one or more payments 520 g. Data in links 520 a-520 g may be ingested by an AI platform such as AI platform 104. For example, as described above, each claim may be ingested and analyzed by input analysis unit 120. -
AI platform 104 may include instructions which causeinput analysis unit 120 to retrieve, for each link 520 a-520 g, all available data or a subset thereof. Each link may be processed according to the type of data contained therein; for example, with respect toFIG. 1 ,input analysis unit 120 may process, first, all images from one ormore photograph 520 b usingimage processing unit 124.Input analysis unit 120 may processaudio recording 520 c using speech-to-text unit 122. - In some embodiments, a relevance order may be established, and processing may be completed according to that order. For example, portions of a claim that are identified as most dispositive of risk may be identified and processed first. If, in that example, they are dispositive of pricing, then processing of further claim elements may be abated to save processing resources. In one embodiment, once a given number of labels is generated (e.g., 50) processing may automatically abate.
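The relevance-ordered processing with automatic abatement might look like the following sketch; the element names, relevance scores, labeling function, and label cap are invented for illustration and are not the claimed implementation.

```python
# Sketch: process claim elements most-dispositive-first and abate
# processing once a given number of labels has been generated.

def process_claim(elements, label_for, max_labels=50):
    """elements: list of (relevance, element) pairs;
    label_for: callable mapping an element to a list of labels."""
    labels = []
    for _, element in sorted(elements, key=lambda e: e[0], reverse=True):
        if len(labels) >= max_labels:
            break  # abate further processing to save resources
        labels.extend(label_for(element))
    return labels

elements = [(0.9, "photo"), (0.5, "audio"), (0.1, "diagram")]
labels = process_claim(elements, lambda e: [e.upper()], max_labels=2)
```

With the cap set to 2, the least dispositive element ("diagram") is never processed, illustrating the resource-saving abatement.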
- Once the various input data comprising
electronic claim record 500 has been processed, the results of the processing may, in one embodiment, be passed to a text analysis unit, and then to neural network (or other artificial intelligence or machine learning algorithm or model). If the AI platform is being trained, then the output ofinput analysis unit 120 may be passed directly toneural network unit 150. The neurons comprising a first input layer of the neural network being trained byneural network unit 150 may be configured so that each neuron receives particular input(s) which may correspond, in one embodiment, to one or more pieces of information frompolicy information 510 a,loss information 510 b, andexternal information 510 c. Similarly, one or more input neurons may be configured to receive particular input(s) from links 520 a-520 g. - In some embodiments, analysis of input entered by a user may be performed on a client device, such as
client device 202. In that case, output from input analysis may be transmitted to a server, such asserver 204, and may be passed directly as input to neurons of an already-trained neural network, such as a neural network trained by neuralnetwork training application 264. - In one embodiment, the value of a new claim may be predicted directly by a neural network model (or other artificial intelligence or machine learning algorithm or model) trained on
historical data 108, without the use of any labeling. For example, a neural network (or other artificial intelligence or machine learning algorithm or model) may be trained such that input parameters correspond to, for example, policy information 510 a, loss information 510 b, external information 510 c, and linked information 520 a-520 g. -
electronic claim record 500, may accurately predict, for example, the estimate of damage ($25,000) and settled amount ($24,500). In this case, random weights may be chosen for all input parameters. - The model may then be provided with a subset of training data from claims 110-1 through 110-n, which are each pre-processed by the techniques described herein with respect to
FIGS. 1 and 2 to extract individual input parameters. Theelectronic claim record 500 may then be tested against the model, and the model trained with new training data claims, until the predicted dollar values and the correct or “truth” dollar values converge. - In one embodiment, the AI platform may modify the information available within an electronic claim record. For example, the AI platform may predict a series of labels as described above that pertain to a given claim. The labels may be saved in a risk indication data store, such as
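The converge-on-truth training loop can be illustrated with a toy one-parameter model. A real embodiment would train on many claims with a full optimizer; this sketch only demonstrates the criterion of iterating until the predicted and "truth" dollar values converge.

```python
# Toy sketch: adjust a single weight (initialized to zero, standing in
# for a random initial weight) until the predicted dollar value
# converges to the truth settlement amount.

def train_until_converged(truth, lr=0.1, tol=1.0, max_steps=1000):
    pred_weight = 0.0      # initial weight
    x = 1.0                # a single, fixed input feature
    for _ in range(max_steps):
        pred = pred_weight * x
        error = pred - truth
        if abs(error) < tol:
            break          # predicted and truth values have converged
        pred_weight -= lr * error * x   # gradient step on squared error
    return pred_weight * x

estimate = train_until_converged(24500.0)
```

Each step shrinks the error by a constant factor, so the predicted value approaches the example settled amount of $24,500 within the tolerance.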
loss data 142 with respect toFIG. 1 . Next, the labels and corresponding weights, in one embodiment, may be received by lossreserve aggregation platform 106, where they may be used in conjunction with base rate information to predict a claim loss value. Claims labeled with historical loss amounts may be used as training data. - In some embodiments, information pertaining to the claim, such as the coverage amount and vehicle type from
policy information 510 a, may be passed along with the labels and weights to lossreserve aggregation platform 106 and may be used in the computation of a gross or net claim loss value. After the aggregated loss reserve is computed, it may be associated with the claim, for example by writing the amount to the loss information section of the electronic claim record (e.g., to theloss information section 510 b ofFIG. 5 ). - As noted above, the methods and systems described herein may be capable of analyzing decades of electronic claim records to build neural network models, and the formatting of electronic claim records may change significantly from decade to decade, even year to year. Therefore, it is important to recognize that the flexibility built into the methods and systems described herein allows electronic claim records in disparate formats to be consumed and analyzed. Additionally, unlike human actuaries, who may naturally weight the most recently-analyzed information most heavily, and may not recall all information analyzed, the computerized methods described may treat all claims equally, regardless of temporal ordering, and may have practically unlimited memory capacity.
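One possible, purely illustrative form of the aggregation step combines the label weights with a base rate and a coverage-derived factor; the formula below is a stand-in, not the claimed computation of loss reserve aggregation platform 106.

```python
# Sketch: compute a gross claim loss value from a base rate, label
# weights, and a factor derived from policy information.

def claim_loss_value(base_rate, label_weights, coverage_factor=1.0):
    """Gross claim loss = base rate scaled by the summed label weights
    and a coverage-derived factor (all values hypothetical)."""
    return base_rate * sum(label_weights.values()) * coverage_factor

reserve = claim_loss_value(10000.0, {"COLLISION": 1.5, "RURAL": 0.5},
                           coverage_factor=1.2)
```

The resulting amount could then be written back to the loss information section of the electronic claim record, as described above.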
- Turning to
FIG. 6 , an exemplary computer-implementedmethod 600 for determining a risk level posed by an operator of a vehicle is depicted. Themethod 600 may be implemented via one or more processors, sensors, servers, transceivers, and/or other computing or electronic devices. Themethod 600 may include training a neural network (or other artificial intelligence or machine learning algorithm or model) to identify risk factors within electronic vehicle claim records (e.g., by an AI platform such asAI platform 104 training a neural network (or other artificial intelligence or machine learning algorithm or model) by aninput analysis unit 120 processing data before passing the results of the analysis to atraining unit 152 that uses the results to train a neural network model (or other artificial intelligence or machine learning algorithm or model)) (block 610). - The
method 600 may include receiving information corresponding to the vehicle by an AI platform (e.g., the AI platform 104 may accept input data such as input data 102 and may process that input by the use of an input analysis unit such as input analysis unit 120) (block 620). The method 600 may include analyzing the information using the trained neural network (or other artificial intelligence or machine learning algorithm or model) (e.g., a risk indication unit 154 applies the output of the input analysis unit 120 to the trained neural network model) to generate one or more risk indicators corresponding to the information (e.g., the neural network produces a plurality of labels and/or corresponding weights) (block 630), which are used to determine a risk level corresponding to the vehicle based upon the one or more risk indicators (e.g., risk indications are stored in risk indication data 142, and/or passed to risk level analysis platform 106 for computation of a risk level, which may be based upon weights also generated by the trained neural network (or other artificial intelligence or machine learning algorithm or model)) (block 640). The method may include additional, less, or alternate actions, including those discussed elsewhere herein. - Turning to
FIG. 7 , a flow diagram for an exemplary computer-implemented method 700 of determining risk indicators from vehicle operator information is depicted. The method 700 may be implemented by a processor (e.g., processor 250) executing, for example, a portion of AI platform 104, including input analysis unit 120, pattern matching unit 128, natural language processing unit 130, and neural network unit 150. In particular, the processor 210 may execute an input data collection application 216 and read an input device 222 to cause the processor 210 to acquire application input 710 from a user of a client 202. - The processor 210 may further execute the input data collection application 216 to cause the processor 210 to transmit application input 710 from the user via network interface 214 and a network 206 to a server (e.g., server 204). Processor 250 of server 204 may cause module 254 of server 204 to process application input 710. Input analysis application 260 may analyze application input 710 according to the methods described above. For example, vehicle information may be queried from a vehicle data store such as vehicle data 274. A VIN in application input 710 may be provided as a parameter to vehicle data 274. -
Vehicle data 274 may return a result indicating that a corresponding vehicle was found invehicle data 274, and that it is a gray minivan that is one year old. Similarly, the purpose provided inapplication input 710 may be provided to a natural language processing unit (e.g., NLP unit 130), which may return a structured result indicating that the vehicle is being driven by a person who is an employed student athlete. The result of processing theapplication input 710 may be provided to a risk level unit (e.g., risk level unit 140) which will apply the input parameters to a trained neural network model (or other artificial intelligence or machine learning algorithm or model). - In one embodiment, the trained neural network model (or other artificial intelligence or machine learning algorithm or model) may produce a set of labels and confidence factors 720. The set of labels and
confidence factors 720 may contain labels that are inherent in the application input 710 (e.g. LOW-MILEAGE) or that are queried based upon information provided in the application input 710 (e.g., MINIVAN, based upon VIN). However, the set of labels andconfidence factors 720 may include additional labels (e.g., COLLISION and DEER) that are not evident from theapplication input 710 or any related/queried information. After being generated by the neural network (or other artificial intelligence or machine learning algorithm or model), the set of labels andconfidence factors 720 may then be saved to an electronic database such asrisk indication data 276, and/or passed to a risklevel analysis platform 106, whereupon a total risk may be computed and used in a pricing quote provided to the user ofclient 202. - It should be appreciated that many more types of information may be extracted from the application input 710 (e.g., from example links 520 a-520 g as shown in
FIG. 5 ). In one embodiment, the pricing quote may be a weighted average of the products of label weights and confidences. Themethod 700 may be implemented, for example, in response to a vehicleoperator accessing client 202 for the purpose of applying for an insurance policy, or adding (via an application) an additional insured to an existing policy. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. - With respect to
FIG. 8 , a flow diagram for an exemplary computer-implemented method 800 of detecting and/or estimating damage to personal property is depicted, according to an embodiment. The method 800 may be implemented, for instance, by a processor (e.g., processor 250) executing, for example, a portion of AI platform 104, including input analysis unit 120, pattern matching unit 128, natural language processing unit 130, and neural network unit 150. In particular, the processor 250 may execute an input analysis application 260 to cause processor 250 to receive free-form text or voice/speech associated with a submitted insurance claim for a damaged insured vehicle (block 802). The method may include identifying one or more key words within the free-form text or voice/speech (block 804). The identification of key words within free-form text may be performed by a module of AI platform 104 (e.g., by text analysis unit 126, pattern matching unit 128, and/or natural language processing unit 130). The identification of key words within voice/speech may be performed by, for example, speech-to-text unit 122. The method may further include determining a cause of loss and/or peril that caused damage to the damaged insured vehicle (block 806). A cause of loss and/or peril may be chosen from a set of causes of loss known to the insurer (e.g., a set stored in risk indication data 142) or may be identified or generated by risk indication unit 154. - In some embodiments, the free-form text may be associated with a webpage or user interface of a client device accessed by a customer or employee of the proprietor of AI platform 104 (e.g., an insurance agent) or by a user interface of an intranet page accessed by an employee of a call center. For example, the free-form text may be entered by a person utilizing
input device 222 and display 224 ofclient device 202, and the input may be caused to be collected byprocessor 210 executing instructions in inputdata collection application 216. Voice/speech of a user may be collected byprocessor 210 causing instructions ininput data collection 216 to be executed which read audio signals from an input device such as a microphone. In one embodiment, free-form text or voice/speech may be input toserver device 204 via other means (e.g., directly loaded onto server device 204). In some embodiments, a neural network (or other artificial intelligence or machine learning algorithm or model) may be trained (e.g., by neural network training unit 264) to identify, or determine, a key word (or words) associated with a cause of loss and/or peril using free-form text or voice/speech and a type corresponding to the insured vehicle as training data. For example, multiple neural networks may be trained that individually correspond to multiple different respective vehicle types and sets of free-form text or voice/speech. - In one embodiment, the machine learning algorithms may be dynamically or continuously trained (i.e., trained online) to dynamically update a set of key words associated with respective cause of loss and/or peril information. The cause of loss and/or peril information may be similarly dynamically updated. Such a dynamic set may be stored and updated in an electronic database, such as
risk indication data 276. - In one embodiment, a first cause of loss and/or a first peril may be identified, and an image may be received. For example, a user may capture an image (e.g., a digital image) of a vehicle (e.g., a vehicle that is damaged and/or insured) via
image sensor 220, or other type of camera. The image may be collected by module 212 and transmitted via network interface 214 and network 206 to network interface 256, whereupon the image may be analyzed by input analysis application 260. The image may be input to neural network unit 150 and passed to a trained neural network model or algorithm (or other artificial intelligence or machine learning algorithm or model), which may analyze the image to determine a second cause of loss and/or second peril. Then, the first cause of loss and/or peril (e.g., that were identified in a free-form submission, such as a claim) may be compared to the second cause of loss and/or peril corresponding to the image, to verify the accuracy of the submitted claim and/or to identify potential fraud or inflation of otherwise legitimate claims. In some embodiments, the image received via image sensor 220 may be analyzed to estimate damages, in terms of cost and/or severity.
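The comparison of the two independently derived causes of loss might be sketched as follows; the function name and flag value are invented for illustration.

```python
# Sketch: compare the cause of loss from the free-form submission with
# the cause of loss the image analysis produced, flagging a mismatch.

def verify_claim(text_cause, image_cause):
    """Return (verified, flag) for the two independently derived causes."""
    if text_cause == image_cause:
        return True, None
    return False, "POSSIBLE-FRAUD-OR-INFLATION"

ok, flag = verify_claim("collision", "collision")
bad, reason = verify_claim("hail", "collision")
```

A mismatch does not prove fraud; it merely marks the claim for further review, consistent with the verification purpose described above.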
historical data 108 as training data. - In some embodiments, an insurance policy associated with the damaged insured vehicle may be received or retrieved. The cause of loss and/or peril may be analyzed to determine whether the cause of loss and/or peril are covered under the insurance policy. For example, a user of
client device 202 may be required to log in to an application in module 212 using a username and password. The user may be prompted to upload an image of a damaged vehicle during the claims submission process by the application in module 212, and the user may do so by capturing an image of a damaged vehicle the user owns via image sensor 220. The image, and an indication of the user's identity, may be transmitted via network 206 to server device 204. -
Server device 204 may determine the cause of loss as described above by analyzing the image, and may retrieve an insurance policy corresponding to the user by querying, for example, customer data 272. Server 204 may contain instructions that cause the cause of loss or peril associated with the uploaded image to be analyzed in light of the insurance policy. The insurance policy may be machine readable, such that the cause of loss and peril information is directly comparable to the insurance policy. - In one embodiment, another means of comparison may be employed (e.g., a deep learning or Bayesian approach).
Server 204, or more precisely an application executing in server 204, may then determine whether or not, or to what extent, the cause of loss associated with the image captured by the user is covered under the user's insurance policy. In one embodiment, an indication of the coverage may be transmitted to the user (e.g., via network 206). The causes of loss, perils, and key words/concepts that may be identified and/or determined by the above-described methods include, without limitation: collision, comprehensive, bodily injury, property damage, liability, medical, rental, towing, and ambulance. -
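The comparison of a cause of loss identified from a free-form submission against one inferred from an image can be sketched as follows. The function name, the flat string comparison, and the review flag are illustrative assumptions for this sketch; the embodiments above contemplate richer comparisons (e.g., deep learning or Bayesian approaches):

```python
# Hypothetical sketch: reconciling a cause of loss extracted from a
# free-form claim submission with one independently inferred from an
# uploaded image. Names and the simple comparison are assumptions.

def reconcile_causes(text_cause: str, image_cause: str) -> dict:
    """Compare the two independently determined causes of loss."""
    match = text_cause.lower() == image_cause.lower()
    return {
        "text_cause": text_cause,
        "image_cause": image_cause,
        "consistent": match,
        # A mismatch does not prove fraud; it only flags the claim.
        "flag_for_review": not match,
    }

result = reconcile_causes("collision", "hail")
```

A mismatch between the two independently determined causes would not itself establish fraud; it merely marks the claim for closer review, consistent with the verification purpose described above.
-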
FIG. 9A is an example flow diagram depicting an exemplary computer-implemented method 900 of determining damage to personal property, according to one embodiment. The method 900 may include inputting historical claim data into a machine learning algorithm, or model, to train the algorithm to identify an insured vehicle, a type of insured vehicle, vehicle features or characteristics, a peril associated with the vehicle, and/or a cost associated with the vehicle (block 902). The method 900 may be implemented by a processor (e.g., processor 250) executing, for example, a portion of AI platform 104, including input analysis unit 120, and/or otherwise implemented via, for instance, one or more processors, sensors, servers, and/or transceivers. Processor 250 may execute an input analysis application 260 to cause processor 250 to receive an image of the damaged insured vehicle (block 904). - The method may further include inputting an image of the damaged insured vehicle into the trained machine learning algorithm to identify a type of insured vehicle, vehicle features or characteristics, a peril associated with the vehicle, and/or a cost associated with the vehicle. A type of vehicle may include any attribute of the vehicle, including, without limitation, the body type (e.g., coupe, sedan), make, model, model year, options (e.g., sport package), whether the vehicle is autonomous or not, etc. In some embodiments, the features and characteristics may include an indication of whether the vehicle includes autonomous or semi-autonomous technologies or systems. In some embodiments, the peril associated with the damaged insured vehicle may comprise collision, comprehensive, tire, water, smoke, hail, wind, or storm surge.
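The train-then-classify pattern of blocks 902-904 can be sketched minimally as follows, assuming historical claims have already been reduced to numeric feature vectors labeled with a peril. The toy features and the nearest-neighbor stand-in for a full trained model are illustrative assumptions, not the patent's architecture:

```python
# Toy sketch of method 900: "training" is here just retaining labeled
# historical claim features; inference classifies a new featurized image
# by its nearest historical claim. All data and names are assumptions.

# historical claim features: (impact_severity, water_exposure) -> peril
HISTORY = [((0.9, 0.1), "collision"), ((0.8, 0.0), "collision"),
           ((0.1, 0.9), "water"), ((0.2, 0.8), "water")]

def predict_peril(features):
    """Classify a new featurized image by its nearest historical claim."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(HISTORY, key=lambda item: dist(item[0], features))
    return label

peril = predict_peril((0.85, 0.05))
```

In the described embodiments a neural network would replace the nearest-neighbor lookup, but the contract is the same: featurized historical claims in, a peril (and potentially a cost estimate) out.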
- In one embodiment, an insurance policy associated with the damaged insured vehicle may be retrieved by
AI platform 104, for example, from customer data 160, and the type of peril compared to the insurance policy to determine whether or not the peril is a covered peril under the insurance policy. As noted above, the applicable policy may be identified by a user identification passed from a client device, but in some embodiments, the applicable policy may be identified by other means. For example, a VIN number or license plate may be digitized by optical character recognition (e.g., by image processing unit 124) from the image provided to the AI platform 104, and the digitization used to search customer data 160 for a matching insurance policy. -
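The VIN-based policy lookup described above can be sketched as follows; `ocr_text` is assumed to be the output of an OCR pass over the uploaded image, and the customer-data layout is an illustrative assumption standing in for customer data 160:

```python
# Hypothetical sketch: match a VIN digitized from an image to a policy.
import re

CUSTOMER_DATA = {  # illustrative stand-in for customer data 160
    "1HGCM82633A004352": {"policy_id": "POL-1001", "holder": "A. Driver"},
}

VIN_PATTERN = re.compile(r"\b[A-HJ-NPR-Z0-9]{17}\b")  # VINs omit I, O, Q

def find_policy(ocr_text: str):
    """Search digitized image text for a VIN and match it to a policy."""
    for candidate in VIN_PATTERN.findall(ocr_text.upper()):
        if candidate in CUSTOMER_DATA:
            return CUSTOMER_DATA[candidate]
    return None

policy = find_policy("vin: 1HGCM82633A004352 (rear quarter panel)")
```

A license-plate lookup would follow the same shape with a jurisdiction-specific pattern in place of the VIN regular expression.
-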
FIG. 9B is an example data flow diagram depicting an exemplary computer-implemented method 910 of determining damage to an insured vehicle using a trained machine learning algorithm to facilitate handling an insurance claim associated with the damaged insured vehicle, according to one embodiment. The method 910 may be implemented, for instance, via one or more processors, sensors, servers, transceivers, and/or other computing or electronic devices. - The
method 910 may include receiving a photograph of a damaged insured vehicle 912. The image may be received by, for example, image processing unit 124 of AI platform 104. The image may originate in a sensor of a client device, such as image sensor 220 of client device 202, and may be captured in response to an action taken by a user, such as the user pressing a user interface button (e.g., a button or screen element of input device 222). The photograph may be analyzed by image processing unit 124 (e.g., sharpened, contrasted, or converted to a dot matrix) before being passed to neural network unit 150, where it may be input to a trained machine learning algorithm, or neural network model (block 914). The trained neural network model in block 914 may correspond to the machine learning algorithm trained in block 904 of FIG. 9A. The method may include identifying information 916, which may include a type of the damaged insured vehicle, a respective feature or characteristic of the damaged insured vehicle, a peril associated with the damaged insured vehicle, and/or a repair or replacement cost associated with the damaged insured vehicle. The information 916 may be used to facilitate handling an insurance claim associated with the damaged insured vehicle. -
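The pre-analysis step (e.g., contrast adjustment before the photograph reaches the trained model at block 914) can be sketched as a toy contrast stretch on grayscale pixel values; the 2x2 "image" and the linear rescaling are assumptions made for brevity, and a real pipeline would operate on full image tensors:

```python
# Illustrative sketch of image normalization before model input.

def stretch_contrast(pixels):
    """Linearly rescale pixel intensities to span the full 0-255 range."""
    lo = min(min(row) for row in pixels)
    hi = max(max(row) for row in pixels)
    if hi == lo:
        return [[0 for _ in row] for row in pixels]
    return [[round((p - lo) * 255 / (hi - lo)) for p in row]
            for row in pixels]

photo = [[100, 120], [140, 160]]    # dull, low-contrast grayscale patch
prepared = stretch_contrast(photo)  # ready to be passed to the model
```
-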
FIG. 10A is an example flow diagram depicting an exemplary computer-implemented method 1000 for determining damage to personal property, according to one embodiment. The method 1000 may be implemented, for instance, via one or more processors, sensors, servers, transceivers, and/or other computing or electronic devices. - The
method 1000 may include inputting historical claim information into a machine learning algorithm, or model, to train the algorithm to develop a risk profile for an undamaged insurable vehicle based upon a type, feature, and/or characteristic of the vehicle (block 1002). The type, feature, and/or characteristic of the vehicle may include an indication of the geographic area of the vehicle, the vehicle make or model, information about the vehicle's transmission, information about the type and condition of the vehicle's tires, information about the vehicle's engine, information pertaining to whether the vehicle includes autonomous or semi-autonomous features, information about the vehicle's air conditioning or lack thereof, information specifying whether the vehicle has power brakes and windows, and the color of the vehicle. The method may further include receiving an image of an undamaged insurable vehicle (block 1004). The method may further include inputting the image of the undamaged insurable vehicle into a machine learning algorithm to identify a risk profile for the undamaged insurable vehicle (block 1006). - A risk profile may include a predicted loss amount, likelihood of loss, or a risk relative to other vehicles. For example, a risk profile for a minivan may be lower than a risk profile for a sports car. Similarly, the risk of being rear-ended in a sports car may be lower than the risk of being rear-ended in a minivan. A risk profile may also include multiple risks with respect to one or more perils (e.g., respective risks for collision, liability, and comprehensive) in addition to an overall, or aggregate, risk profile.
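A risk profile with per-peril scores and an aggregate can be sketched as follows. The weighted-average aggregation and all numeric values are illustrative assumptions; the embodiments above leave the aggregation method to the trained model:

```python
# Sketch of a risk profile: per-peril risk scores plus an aggregate.

def build_risk_profile(per_peril: dict, weights: dict) -> dict:
    """Combine per-peril risk scores into an overall profile."""
    total_w = sum(weights[p] for p in per_peril)
    aggregate = sum(per_peril[p] * weights[p] for p in per_peril) / total_w
    return {"per_peril": per_peril, "aggregate": round(aggregate, 3)}

minivan = build_risk_profile(
    {"collision": 0.2, "liability": 0.3, "comprehensive": 0.1},
    {"collision": 0.5, "liability": 0.3, "comprehensive": 0.2},
)
sports_car = build_risk_profile(
    {"collision": 0.7, "liability": 0.6, "comprehensive": 0.3},
    {"collision": 0.5, "liability": 0.3, "comprehensive": 0.2},
)
```

With these assumed scores, the minivan's aggregate risk comes out below the sports car's, mirroring the example above, while each profile retains its per-peril detail.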
- The risk profile may include an indication of behaviors and/or vehicle features that may be adopted to lower aggregate risk. For example, the risk profile may indicate that upgrading a vehicle to include a rear-facing camera may lower risk by a certain percentage, or that trading a vehicle of a first model year for a vehicle of a second model year may result in an insurance premium discount with respect to the risk level or underwriting price of the first model year.
- Such determinations may be based upon a vehicle owner making smaller, more granular changes. For example, a neural network (or other artificial intelligence or machine learning algorithm or model) may determine that such discounts may be available to a hybrid or electric vehicle owner by the vehicle owner charging the vehicle battery to at least a threshold level (e.g., >=60%), or up/downgrading the firmware of an onboard computer from a first version to a second version.
- In some embodiments, the methods and systems herein may prompt a vehicle operator to improve their risk profile, and/or reduce an insurance premium linked to such a profile, by adopting certain behaviors. For example, in vehicles wherein driving automation or dynamic driving is user-selectable, or optional, a driver may be encouraged to activate (or deactivate) automated driving capabilities (e.g., steering control). It will be appreciated by those skilled in the art that the foregoing are intended to be simple examples for purposes of illustration, and that more complex embodiments are envisioned.
-
FIG. 10B is an example data flow diagram depicting an exemplary computer-implemented method 1010 of using a trained machine learning algorithm to facilitate generating an insurance quote for an undamaged insurable vehicle, according to one embodiment. The method 1010 may be implemented, for instance, via one or more processors, sensors, servers, transceivers, and/or other computing or electronic devices. - The method may include receiving an image, or photograph, of an
undamaged vehicle 1012. The photograph may originate in a client device, such as client 202, and may be captured and transmitted to a server via the methods described above. The method 1010 may include inputting the image of an undamaged vehicle into a trained machine learning algorithm 1014. The trained neural network (or other artificial intelligence or machine learning algorithm or model) may correspond to the neural network trained at block 1002 of FIG. 10A, and the machine learning algorithm may be trained using historical claim information corresponding to historical data 108 of FIG. 1. The neural network may be configured to accept historical claim data and to predict damage amounts, or other risks. - The method may include inputting the image of the undamaged insurable vehicle into the trained machine learning algorithm to identify a risk profile for the undamaged insurable vehicle, wherein the risk profile may correspond to the risk profile described above with respect to block 1006. It should be appreciated that the use of neural networks may cause variables to emerge from large data sets that are not expected, but which are highly correlated to risk. In some cases, the risk profile associated with a given vehicle may contain information that seems unforeseeable and/or counter-intuitive.
- In one embodiment, the risk profile described above may be used to generate an insurance policy and/or determine a rate quotation corresponding to the undamaged insurable vehicle wherein the policy and/or rate are based upon the risk profile. In one embodiment, the rate may include a usage-based insurance (UBI) rate. In some embodiments, the generated insurance policy and/or rate quotation may be transmitted to the vehicle owner for a review and/or approval process. For example, a user of
client device 202 may submit an image of their vehicle via processor 210 and module 212, and the above-described analysis involving the trained neural network model (or other artificial intelligence or machine learning algorithm or model) may then take place on server 204. Then, when a rate quote or policy is generated on the server, the quote or policy may be transmitted by network interface 256 to network 206 and ultimately to network interface 214, back on the client. - The client may include an application in
module 212, which causes the policy or rate to be displayed to the user of client 202 (e.g., via display 224), and the user may review the policy/quote, and may be prompted to enter (e.g., via input device 222) their approval of the terms of the policy/quote. The user's approval may be transmitted back to the server 204 via network 206, and a contract for insurance may be formed. In this way, a user may successfully register for an insurance policy covering an insurable vehicle, by capturing an image of the vehicle, uploading the image of that vehicle, and reviewing a policy corresponding to that vehicle that has been generated by a neural network model (or other artificial intelligence or machine learning algorithm or model) analyzing the image, wherein the neural network model (or other artificial intelligence or machine learning algorithm or model) has been trained on historical claim data and/or images of similar vehicles, according to at least one preferred embodiment. - Turning to
FIG. 11, an exemplary user interface environment 1100 for training and operating artificial neural network models (or other types of artificial intelligence or machine learning algorithms or models) is depicted, according to one embodiment and scenario. User interface environment 1100 may include a user interface 1110, which may be implemented in a web browser, mobile application, or other suitable user interface display program. User interface 1110 may correspond to loss reserve client 216, and may be executing in memory 208, and may be displayed in display 224. A user may interact with user interface 1110 via input device 222. -
User interface 1110 may include pages/sections 1112-A through 1112-D. Section 1112-A may allow the user to select an existing data set, and may include a button or other suitable graphical user interface (GUI) element which, when pressed, causes a request to be transmitted to a server (e.g., server 204) including an indication of the user's selected data set(s). The server may query an electronic database, retrieve the selected data set(s), and train a model based upon the selection. The user of section 1112-A may then be redirected to a result page, such as section 1112-D, wherein the results of operating the trained model using the selected data set(s) may be displayed. Section 1112-B may allow the user to specify a custom query (e.g., in Structured Query Language or another suitable query language) of an electronic database for records (e.g., claim, user, and/or vehicle records), along with a button or other suitable GUI element. - In this way, a user may train a model using arbitrarily complex subsets and/or aggregations of claim, user, and vehicle data. Once the user activates the "Train" button in section 1112-B, the model may be trained using a data set corresponding to the user's custom query. The model may then be added to the "Trained Models" list of section 1112-C, and the user may be directed to section 1112-D, wherein the user may view the results of the query being fed to the trained loss reserving model. The training that occurs when a user activates the "Train" button in sections 1112-A and 1112-B may be fully automated, including the validation steps.
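The server-side "Train" flow above (resolve the user's selection or query, train on the result, register the model) can be sketched as follows. The in-memory records, the predicate in place of real SQL execution, and the mean-settlement "model" are illustrative assumptions standing in for database access and neural-network training:

```python
# Hypothetical sketch of the section 1112-A/1112-B "Train" flow.

CLAIM_DB = [  # stand-in rows a custom query might return
    {"vehicle": "passenger", "settlement": 4200},
    {"vehicle": "passenger", "settlement": 3800},
    {"vehicle": "moped", "settlement": 900},
]
TRAINED_MODELS = []  # corresponds to the list in section 1112-C

def train_from_selection(name: str, predicate) -> dict:
    """Resolve the selection, 'train' a model, and register it."""
    rows = [r for r in CLAIM_DB if predicate(r)]
    # stand-in for training: a mean-settlement loss-reserve estimator
    reserve = sum(r["settlement"] for r in rows) / len(rows)
    model = {"name": name, "loss_reserve": reserve}
    TRAINED_MODELS.append(model)
    return model

m = train_from_selection("all passenger cars",
                         lambda r: r["vehicle"] == "passenger")
```

In the described embodiments the predicate would be the user's SQL query and the estimator a neural network, but the lifecycle (select data, train, add to the trained-models list, show results) is the same.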
- Section 1112-C may be a list of all trained models. The user may edit or operate the individual models by interacting with
user interface 1110. Section 1112-D may be a results page which lists the output of executing a trained neural network model (or other artificial intelligence or machine learning algorithm or model). For example, as depicted, section 1112-D displays a first output, representative of executing a model trained using a data set containing all historical passenger car claim records, and a second output, representative of executing a model trained using a complex data set including motorcycle claims relating to mopeds, sport bikes, and tricycles. The first output includes an indication of the data set used and a loss reserve amount. The second output includes an indication of a complex data set used including three subsets, each having a respective loss reserve amount which is aggregated into a loss reserve aggregate. It should be understood that additional standard scaffolding may be included in some embodiments, for example, to create, update, delete, and retrieve trained models. In some embodiments, the structure of neural networks (or other artificial intelligence or machine learning algorithms or models), and the parameters used in their creation, may be accessible via loss reserving client 1100. - With regard to
FIG. 12, an exemplary method 1200 of determining loss reserves is depicted, according to an embodiment. Method 1200 may include receiving a plurality of labeled historical claim documents (block 1210). The labels may be a claim payout amount, a claim pendency time, a number indicating whether the loss reserve was adequate or inadequate, and any shortfall or surplus that was associated with the loss reserves allocated before the claim was settled. Method 1200 may include normalizing the claim loss/payout/settlement amount (block 1220). Normalization may include converting the claim amount into a standard currency (e.g., USD) and/or adjusting the claim settlement amount for inflation or other circumstances related to monetary policy. -
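The normalization step of block 1220 can be sketched as a currency conversion followed by an inflation adjustment. The exchange rates and price-index figures below are made-up illustrative values, not real market data:

```python
# Sketch of block 1220: express a historical settlement in base-year USD.

FX_TO_USD = {"USD": 1.0, "EUR": 1.10, "GBP": 1.25}  # assumed rates
CPI = {2015: 100.0, 2024: 130.0}                    # assumed index

def normalize_settlement(amount: float, currency: str,
                         claim_year: int, base_year: int = 2024) -> float:
    """Convert to USD, then adjust for inflation to the base year."""
    usd = amount * FX_TO_USD[currency]
    return round(usd * CPI[base_year] / CPI[claim_year], 2)

value = normalize_settlement(1000.0, "EUR", claim_year=2015)
```

Normalizing settlements this way lets claims from different years and currencies serve as comparable training labels for the network trained in block 1230.
-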
Method 1200 may include training an artificial neural network (or other artificial intelligence or machine learning algorithm or model) using the historical claim documents (block 1230). Training the artificial neural network may include creating a neural network having a plurality of input neurons in an input layer, and a plurality of hidden layers, each having a respective number of neurons. The neural network may be dense and interconnected, and may have an output layer having one or more output neurons. A subset of the labeled historical claims may be used to train the neural network (or other artificial intelligence or machine learning algorithm or model) to predict an optimal loss reserve for a type of claim (e.g., a motorcycle claim) or across all claim types. An optimal loss reserve may be neither too large nor too small, with respect to historical claim settlement amounts. A validation set of historical claims may be held back for testing the trained neural network (or other trained artificial intelligence or machine learning algorithm or model) for accuracy. -
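The training-and-validation pattern of block 1230 can be sketched under strong simplifying assumptions: a single linear "neuron" trained by gradient descent stands in for the dense multi-layer network described above, and a validation subset of the labeled history is held back to test the trained model. The data, learning rate, and iteration count are all illustrative:

```python
# Toy sketch of block 1230: train on a subset, validate on the hold-out.
import random

random.seed(0)
# labeled history: damage severity (0-10) -> settlement amount
data = [(s, 1000.0 + 300.0 * s) for s in range(11)]
random.shuffle(data)
train, validation = data[:8], data[8:]  # hold back a validation set

w, b = 0.0, 0.0                         # single neuron: y = w*x + b
for _ in range(5000):                   # gradient-descent training loop
    for x, y in train:
        err = (w * x + b) - y
        w -= 0.01 * err * x
        b -= 0.01 * err

# test the trained model for accuracy on the held-back claims
max_err = max(abs((w * x + b) - y) for x, y in validation)
```

Because the toy labels are exactly linear, the fitted weights recover the underlying relationship and the validation error is near zero; a real network would report a nonzero validation error used to judge the adequacy of predicted loss reserves.
-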
Method 1200 may include receiving a user claim (block 1240). The user claim may be submitted by the user via an application, such as an application executing in module 212 of client device 202. In some embodiments, a user claim may be retrieved from historical data 108. The user claim may correspond to electronic claim record 500. A plurality of attributes of claims (e.g., payments, type of loss, policy deductible, etc.) may be used to train the neural network (or other artificial intelligence or machine learning algorithm or model), and the same attributes of the user claim may be provided as inputs to the neural network (or other artificial intelligence or machine learning algorithm or model) by applying the user claim to the neural network (or other artificial intelligence or machine learning algorithm or model) to predict a loss reserve amount (block 1250). In some embodiments, additional or fewer steps may be used, and in some embodiments, loss reserving models may be created that apply to a specific type of claim, vehicle, and/or customer. - Turning to
FIG. 13, an exemplary method 1300 of training and executing artificial neural networks using a customized data set is depicted, according to one embodiment. Method 1300 may include receiving an indication of a trained artificial neural network (or other artificial intelligence or machine learning algorithm or model) and a data set (block 1310). The indication may be a pair of integers or other values respectively uniquely identifying a trained neural network (or other artificial intelligence or machine learning algorithm or model) and a data set. The trained neural network (or other artificial intelligence or machine learning algorithm or model) may have been trained in advance by a user, using, for example, loss reserving application 216. - In one embodiment, the neural network (or other artificial intelligence or machine learning algorithm or model) may have been trained using module 254 (e.g., using command line tools by a user accessing server device 204). The data set may be a pre-existing labeled data set that is listed on a user interface and selectable by the user, or may be built by the user entering an SQL expression into an input box (e.g., via
input device 222 and display 224). The user may press a button, in response to which, method 1300 may transmit an execution request including the indication to a remote computing device (e.g., server 204). -
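The execution request of method 1300 (a pair of identifiers naming a trained model and a data set, answered with an execution output) can be sketched as follows. The JSON field names, the local dispatch in place of a network round trip, and the placeholder reserve value are illustrative assumptions:

```python
# Hypothetical sketch of the method 1300 execution request/response.
import json

def build_execution_request(model_id: int, dataset_id: int) -> str:
    """Client side: serialize the (model, data set) identifier pair."""
    return json.dumps({"model_id": model_id, "dataset_id": dataset_id})

def handle_execution_request(payload: str) -> dict:
    """Server-side stand-in: echo the selection with a computed reserve."""
    req = json.loads(payload)
    return {"model_id": req["model_id"],
            "dataset_id": req["dataset_id"],
            "loss_reserve": 12500.0}  # placeholder model output

response = handle_execution_request(build_execution_request(7, 3))
```

In the described embodiments the response would also identify the selected trained network and, for compound data sets, carry per-subset reserves alongside the aggregate.
-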
Server 204 may include instructions for receiving the indication, selecting the appropriate neural network (or other artificial intelligence or machine learning algorithm or model) and data set, applying the data set to the neural network (or other artificial intelligence or machine learning algorithm or model), and returning execution output including at least identification of the selected trained artificial neural network (or other artificial intelligence or machine learning algorithm or model) and a loss reserve amount produced via operation of the selected trained artificial neural network (or other artificial intelligence or machine learning algorithm or model). - An application such as
loss reserve client 216 may receive the execution result/output (block 1330) and may display the output of the trained artificial neural network (or other artificial intelligence or machine learning algorithm or model) (block 1340). In some embodiments, the data set may be a compound data set, and the output of the trained artificial neural network (or other artificial intelligence or machine learning algorithm or model) may be a result that includes individual loss reserving amounts with respect to a plurality of data subsets, and/or an aggregate loss reserving amount applicable to all of the plurality of data subsets. - Although the present invention has been described in considerable detail with reference to certain preferred versions thereof, other versions are possible, which may include additional or fewer features. For example, additional knowledge may be obtained using identical methods. The labeling techniques described herein may be used in the identification of fraudulent claim activity. The techniques may be used in conjunction with co-insurance to determine the relative risk of pools of customers. External customer features, such as payment histories, may be taken into account in pricing risk. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions described herein.
- The computer-implemented methods discussed herein may include additional, less, or alternate actions, including those discussed elsewhere herein. The methods may be implemented via one or more local or remote processors, transceivers, servers, and/or sensors (such as processors, transceivers, servers, and/or sensors mounted on drones, vehicles or mobile devices, or associated with smart infrastructure or remote servers), and/or via computer-executable instructions stored on non-transitory computer-readable media or medium.
- Additionally, the computer systems discussed herein may include additional, less, or alternate functionality, including that discussed elsewhere herein. The computer systems discussed herein may include or be implemented via computer-executable instructions stored on non-transitory computer-readable media or medium.
- A processor or a processing element may be trained using supervised or unsupervised machine learning, and the machine learning program may employ a neural network, which may be a convolutional neural network, a deep learning neural network, a reinforcement or reinforced learning algorithm or model, or a combined learning module or program that learns in two or more fields or areas of interest. In some embodiments, deep learning strategies may be applied, in addition to random forests for classification. Machine learning may involve identifying and recognizing patterns in existing data in order to facilitate making predictions for subsequent data. For instance, machine learning may involve identifying and recognizing patterns in existing text or voice/speech data in order to facilitate making predictions for subsequent data. Voice recognition and/or word recognition techniques may also be used. Models may be created based upon example inputs in order to make valid and reliable predictions for novel inputs.
- Additionally or alternatively, the machine learning programs may be trained by inputting sample data sets or certain data into the programs, such as drone, autonomous or semi-autonomous drone, image, mobile device, smart or autonomous vehicle, and/or intelligent vehicle telematics data. The machine learning programs may utilize deep learning algorithms that may be primarily focused on pattern recognition, and may be trained after processing multiple examples. The machine learning programs may include deep, combined, or reinforced learning algorithms or models, Bayesian program learning (BPL), voice recognition and synthesis, image or object recognition, optical character recognition, and/or natural language processing—either individually or in combination. The machine learning programs may also include natural language processing, semantic analysis, automatic reasoning, and/or other types of machine learning, such as deep learning, combined learning, and/or reinforced learning.
- Supervised or unsupervised machine learning may also be employed. In supervised machine learning, a processing element may be provided with example inputs and their associated outputs, and may seek to discover a general rule that maps inputs to outputs, so that when subsequent novel inputs are provided, the processing element may, based upon the discovered rule, accurately predict the correct output. In unsupervised machine learning, the processing element may be required to find its own structure in unlabeled example inputs.
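The contrast between the two regimes can be illustrated with deliberately minimal stand-ins; the toy data sets, the least-squares slope fit, and the largest-gap clustering are all assumptions made for brevity:

```python
# Supervised: labeled examples of y = 2x; discover the rule (a slope)
# and apply it to a novel input.
pairs = [(1, 2), (2, 4), (3, 6)]
slope = sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)
prediction = slope * 4  # apply the discovered rule to a novel input

# Unsupervised: unlabeled points; split them at the largest gap
# (a crude one-dimensional clustering with no labels provided).
points = sorted([0.0, 10.1, 0.1, 10.0])
gaps = [b - a for a, b in zip(points, points[1:])]
cut = points[gaps.index(max(gaps)) + 1]
clusters = [[p for p in points if p < cut], [p for p in points if p >= cut]]
```

The supervised learner recovers the input-to-output rule from the labels; the unsupervised learner, given no labels, discovers the grouping structure on its own.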
- With the foregoing, any users (e.g., insurance customers) whose data is being collected and/or utilized may first opt in to a rewards, insurance discount, or other type of program. After the user provides their affirmative consent, data may be collected from the user's device (e.g., mobile device, smart or autonomous vehicle controller, or other smart devices). In return, the user may be entitled to insurance cost savings, including insurance discounts for auto, homeowners, mobile, renters, personal articles, and/or other types of insurance.
- In other embodiments, deployment and use of neural network models at a user device (e.g., the
client 202 of FIG. 2) may have the benefit of alleviating privacy or anonymity concerns, by removing the need to send any personal or private data to a remote server (e.g., the server 204 of FIG. 2). - The following additional considerations apply to the foregoing discussion. Throughout this specification, plural instances may implement operations or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
- The patent claims at the end of this patent application are not intended to be construed under 35 U.S.C. § 112(f) unless traditional means-plus-function language is expressly recited, such as “means for” or “step for” language being explicitly recited in the claim(s). The systems and methods described herein are directed to an improvement to computer functionality, and improve the functioning of conventional computers.
- Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information.
- As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
- As used herein, the terms "comprises," "comprising," "includes," "including," "has," "having" or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, "or" refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
- In addition, use of the “a” or “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the description. This description, and the claims that follow, should be read to include one or at least one and the singular also includes the plural unless it is obvious that it is meant otherwise.
- Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
- Additionally, certain embodiments are described herein as including logic or a number of routines, subroutines, applications, or instructions. These may constitute either software (e.g., code embodied on a machine-readable medium) or hardware. In hardware, the routines, etc., are tangible units capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein.
- In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
- Accordingly, the term “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time.
- Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple of such hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information).
- The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules.
- Similarly, the methods or routines described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented hardware modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a vehicle environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations.
- The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a vehicle environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations.
- Some embodiments may be described using the expressions “coupled” and “connected” along with their derivatives. For example, some embodiments may be described using the term “coupled” to indicate that two or more elements are in direct physical or electrical contact. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The embodiments are not limited in this context.
- Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for the method and systems described herein through the principles disclosed herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims.
Claims (21)
1-20. (canceled)
21. A method implemented by one or more processors, comprising:
receiving a plurality of historical claim documents;
rendering a graphical user interface to a user, the graphical user interface comprising a selection section and a query section;
receiving a user input corresponding to the selection section;
receiving a user query corresponding to the query section;
selecting a subset of historical claim documents from the plurality of historical claim documents based at least in part upon the user input and the user query;
training a machine learning model using the selected subset of historical claim documents;
receiving a claim document; and
predicting a loss reserve amount by applying the trained machine learning model to the claim document.
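Purely as an illustrative, non-limiting sketch of the steps recited in claim 21 (the document fields such as `loss_type`, `text`, and `reserve` are hypothetical, and the toy averaging "model" stands in for whatever learning algorithm an implementation would actually train):

```python
# Sketch of claim 21: select a training subset of historical claim documents
# using both a user's selection input and a free-text query, "train" a model
# on that subset, and predict a loss reserve for a newly received claim.

def select_subset(documents, user_input, user_query):
    """Keep documents matching the selection input AND containing the query text."""
    return [
        d for d in documents
        if d["loss_type"] == user_input and user_query.lower() in d["text"].lower()
    ]

def train_model(subset):
    """Toy stand-in for training: predict the subset's mean historical reserve."""
    reserves = [d["reserve"] for d in subset]
    mean = sum(reserves) / len(reserves)
    return lambda claim: mean  # returns the same mean for any new claim

historical = [
    {"loss_type": "collision", "text": "rear-end collision on highway", "reserve": 4000.0},
    {"loss_type": "collision", "text": "collision with parked car", "reserve": 2000.0},
    {"loss_type": "hail", "text": "hail damage to roof and hood", "reserve": 1500.0},
]

# User input (selection section) and user query (query section) jointly pick the subset.
subset = select_subset(historical, "collision", "collision")
model = train_model(subset)
prediction = model({"text": "minor collision, bumper damage"})
print(prediction)  # mean reserve of the two matching collision documents: 3000.0
```

A real system would replace `train_model` with an actual regressor fit on document features, but the data flow — GUI inputs narrowing the training set before training — is the point of the claim.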
22. The method of claim 21, further comprising:
selecting a first subset of historical claim documents based upon the user input;
selecting a second subset of historical claim documents based upon the user query; and
generating the subset of historical claim documents based upon the first subset of historical claim documents and the second subset of historical claim documents.
23. The method of claim 21, further comprising:
compiling a structured query based upon the user query; and
selecting the subset of historical claim documents from the plurality of historical claim documents based at least in part upon the user input and the structured query.
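One possible reading of claim 23's "structured query" step, sketched below with a parameterized SQL string; the table name `historical_claims` and column `claim_text` are hypothetical, not drawn from the specification:

```python
# Sketch of claim 23: compile a free-text user query into a structured
# (here, parameterized SQL) query used to select the training subset.

def compile_structured_query(user_query):
    """Turn each whitespace-separated term into a LIKE condition with a bound parameter."""
    terms = user_query.split()
    where = " AND ".join("claim_text LIKE ?" for _ in terms)
    params = [f"%{t}%" for t in terms]
    return f"SELECT * FROM historical_claims WHERE {where}", params

sql, params = compile_structured_query("hail roof damage")
print(sql)
# SELECT * FROM historical_claims WHERE claim_text LIKE ? AND claim_text LIKE ? AND claim_text LIKE ?
```

Using bound parameters rather than string interpolation keeps the compiled query safe to execute against a real document store.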
24. The method of claim 21, wherein the machine learning model comprises a first machine learning model and a second machine learning model, the method further comprising:
selecting a first subset of historical claim documents based upon the user input;
selecting a second subset of historical claim documents based upon the user query;
training the first machine learning model using the first subset of historical claim documents;
training the second machine learning model using the second subset of historical claim documents;
predicting a first loss reserve amount by applying the first trained machine learning model to the claim document;
predicting a second loss reserve amount by applying the second trained machine learning model to the claim document; and
determining the loss reserve amount based at least in part upon the first loss reserve amount and the second loss reserve amount.
25. The method of claim 24, further comprising:
determining the loss reserve amount to be the sum of the first loss reserve amount and the second loss reserve amount.
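The two-model arrangement of claims 24 and 25 can be sketched as follows; the stub lambdas stand in for models trained on the user-input subset and the user-query subset respectively, and the dollar figures are invented for illustration:

```python
# Sketch of claims 24-25: two separately trained models each predict a loss
# reserve for the same claim document, and the final reserve is their sum.

def predict_reserve(model_a, model_b, claim):
    """Combine two per-model predictions per claim 25: final reserve = sum of both."""
    first = model_a(claim)
    second = model_b(claim)
    return first + second

# Stubs for models trained on the user-input subset and the user-query subset.
model_from_selection = lambda claim: 2500.0
model_from_query = lambda claim: 750.0

total = predict_reserve(model_from_selection, model_from_query, {"id": 1})
print(total)  # 3250.0
```

Summation is only the specific combination recited in claim 25; claim 24 itself permits any determination "based at least in part upon" the two predictions.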
26. The method of claim 21, wherein the claim document includes free-form text and an image, the method further comprising:
determining a first cause of loss by applying the trained machine learning model to the free-form text of the claim document;
determining a second cause of loss by applying the trained machine learning model to the image of the claim document; and
predicting the loss reserve amount using the trained machine learning model based at least in part upon the first cause of loss and the second cause of loss.
27. The method of claim 26, wherein the trained machine learning model comprises a natural language processing model, the method further comprising:
identifying a keyword in the free-form text of the claim document using the natural language processing model; and
determining the first cause of loss based at least upon the identified keyword.
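A minimal keyword-matching stand-in for the natural language processing step of claims 26-27 might look like the following; the keyword table and cause-of-loss labels are hypothetical examples, and a production NLP model would of course go well beyond substring matching:

```python
# Sketch of claims 26-27: map free-form claim text to a cause of loss by
# identifying a keyword, then using the keyword to determine the cause.

KEYWORD_TO_CAUSE = {
    "hail": "weather",
    "flood": "weather",
    "rear-end": "collision",
    "intersection": "collision",
    "theft": "theft",
}

def first_cause_of_loss(free_form_text):
    """Return the cause associated with the first matched keyword, else 'unknown'."""
    text = free_form_text.lower()
    for keyword, cause in KEYWORD_TO_CAUSE.items():
        if keyword in text:
            return cause
    return "unknown"

print(first_cause_of_loss("Vehicle was rear-ended at a stop light"))  # collision
```

In the full method of claim 26, this text-derived cause would be combined with a second cause determined from the claim document's image before the reserve is predicted.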
28. The method of claim 21, wherein each historical claim document of the plurality of historical claim documents is associated with a plurality of labels.
29. A system, comprising:
one or more memories having instructions stored thereon; and
one or more processors configured to execute the instructions and configured to perform operations comprising:
receiving a plurality of historical claim documents;
rendering a graphical user interface to a user, the graphical user interface comprising a selection section and a query section;
receiving a user input corresponding to the selection section;
receiving a user query corresponding to the query section;
selecting a subset of historical claim documents from the plurality of historical claim documents based at least in part upon the user input and the user query;
training a machine learning model using the selected subset of historical claim documents;
receiving a claim document; and
predicting a loss reserve amount by applying the trained machine learning model to the claim document.
30. The system of claim 29, wherein the operations further comprise:
selecting a first subset of historical claim documents based upon the user input;
selecting a second subset of historical claim documents based upon the user query; and
generating the subset of historical claim documents based upon the first subset of historical claim documents and the second subset of historical claim documents.
31. The system of claim 29, wherein the operations further comprise:
compiling a structured query based upon the user query; and
selecting the subset of historical claim documents from the plurality of historical claim documents based at least in part upon the user input and the structured query.
32. The system of claim 29, wherein the machine learning model comprises a first machine learning model and a second machine learning model, wherein the operations further comprise:
selecting a first subset of historical claim documents based upon the user input;
selecting a second subset of historical claim documents based upon the user query;
training the first machine learning model using the first subset of historical claim documents;
training the second machine learning model using the second subset of historical claim documents;
predicting a first loss reserve amount by applying the first trained machine learning model to the claim document;
predicting a second loss reserve amount by applying the second trained machine learning model to the claim document; and
determining the loss reserve amount based at least in part upon the first loss reserve amount and the second loss reserve amount.
33. The system of claim 32, wherein the operations further comprise:
determining the loss reserve amount to be the sum of the first loss reserve amount and the second loss reserve amount.
34. The system of claim 29, wherein the claim document includes free-form text and an image, wherein the operations further comprise:
determining a first cause of loss by applying the trained machine learning model to the free-form text of the claim document;
determining a second cause of loss by applying the trained machine learning model to the image of the claim document; and
predicting the loss reserve amount using the trained machine learning model based at least in part upon the first cause of loss and the second cause of loss.
35. The system of claim 34, wherein the trained machine learning model comprises a natural language processing model, wherein the operations further comprise:
identifying a keyword in the free-form text of the claim document using the natural language processing model; and
determining the first cause of loss based at least upon the identified keyword.
36. The system of claim 29, wherein each historical claim document of the plurality of historical claim documents is associated with a plurality of labels.
37. A non-transitory computer-readable storage media comprising instructions that cause a programmable processor to:
receive a plurality of historical claim documents;
render a graphical user interface to a user, the graphical user interface comprising a selection section and a query section;
receive a user input corresponding to the selection section;
receive a user query corresponding to the query section;
select a subset of historical claim documents from the plurality of historical claim documents based at least in part upon the user input and the user query;
train a machine learning model using the selected subset of historical claim documents;
receive a claim document; and
predict a loss reserve amount by applying the trained machine learning model to the claim document.
38. The non-transitory computer-readable storage media of claim 37, wherein the instructions further cause the programmable processor to:
select a first subset of historical claim documents based upon the user input;
select a second subset of historical claim documents based upon the user query; and
generate the subset of historical claim documents based upon the first subset of historical claim documents and the second subset of historical claim documents.
39. The non-transitory computer-readable storage media of claim 37, wherein the instructions further cause the programmable processor to:
compile a structured query based upon the user query; and
select the subset of historical claim documents from the plurality of historical claim documents based at least in part upon the user input and the structured query.
40. The non-transitory computer-readable storage media of claim 37, wherein the instructions further cause the programmable processor to:
determine a first cause of loss by applying the trained machine learning model to the free-form text of the claim document;
determine a second cause of loss by applying the trained machine learning model to the image of the claim document; and
predict the loss reserve amount using the trained machine learning model based at least in part upon the first cause of loss and the second cause of loss.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/353,621 US20210312567A1 (en) | 2017-09-27 | 2021-06-21 | Automobile Monitoring Systems and Methods for Loss Reserving and Financial Reporting |
Applications Claiming Priority (17)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762564055P | 2017-09-27 | 2017-09-27 | |
US201762580713P | 2017-11-02 | 2017-11-02 | |
US201762580655P | 2017-11-02 | 2017-11-02 | |
US201762610599P | 2017-12-27 | 2017-12-27 | |
US201862617851P | 2018-01-16 | 2018-01-16 | |
US201862618192P | 2018-01-17 | 2018-01-17 | |
US201862621218P | 2018-01-24 | 2018-01-24 | |
US201862621797P | 2018-01-25 | 2018-01-25 | |
US201862622542P | 2018-01-26 | 2018-01-26 | |
US201862625140P | 2018-02-01 | 2018-02-01 | |
US201862632884P | 2018-02-20 | 2018-02-20 | |
US201862646729P | 2018-03-22 | 2018-03-22 | |
US201862646740P | 2018-03-22 | 2018-03-22 | |
US201862646735P | 2018-03-22 | 2018-03-22 | |
US201862652121P | 2018-04-03 | 2018-04-03 | |
US16/136,401 US20210287297A1 (en) | 2017-09-27 | 2018-09-20 | Automobile Monitoring Systems and Methods for Loss Reserving and Financial Reporting |
US17/353,621 US20210312567A1 (en) | 2017-09-27 | 2021-06-21 | Automobile Monitoring Systems and Methods for Loss Reserving and Financial Reporting |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/136,401 Continuation US20210287297A1 (en) | 2017-09-27 | 2018-09-20 | Automobile Monitoring Systems and Methods for Loss Reserving and Financial Reporting |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210312567A1 (en) | 2021-10-07
Family
ID=68696030
Family Applications (12)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/136,370 Abandoned US20210256616A1 (en) | 2017-09-27 | 2018-09-20 | Automobile Monitoring Systems and Methods for Risk Determination |
US16/136,387 Active 2039-01-21 US11783422B1 (en) | 2017-09-27 | 2018-09-20 | Implementing machine learning for life and health insurance claims handling |
US16/136,357 Active 2039-12-01 US11373249B1 (en) | 2017-09-27 | 2018-09-20 | Automobile monitoring systems and methods for detecting damage and other conditions |
US16/136,365 Abandoned US20210256615A1 (en) | 2017-09-27 | 2018-09-20 | Implementing Machine Learning For Life And Health Insurance Loss Mitigation And Claims Handling |
US16/136,401 Abandoned US20210287297A1 (en) | 2017-09-27 | 2018-09-20 | Automobile Monitoring Systems and Methods for Loss Reserving and Financial Reporting |
US16/136,501 Active US10497250B1 (en) | 2017-09-27 | 2018-09-20 | Real property monitoring systems and methods for detecting damage and other conditions |
US16/136,519 Abandoned US20210390624A1 (en) | 2017-09-27 | 2018-09-20 | Real Property Monitoring Systems and Methods for Risk Determination |
US16/668,072 Active US10943464B1 (en) | 2017-09-27 | 2019-10-30 | Real property monitoring systems and methods for detecting damage and other conditions |
US17/353,621 Pending US20210312567A1 (en) | 2017-09-27 | 2021-06-21 | Automobile Monitoring Systems and Methods for Loss Reserving and Financial Reporting |
US17/466,722 Abandoned US20210398227A1 (en) | 2017-09-27 | 2021-09-03 | Real property monitoring systems and methods for risk determination |
US17/752,702 Pending US20220284517A1 (en) | 2017-09-27 | 2022-05-24 | Automobile Monitoring Systems and Methods for Detecting Damage and Other Conditions |
US18/139,131 Pending US20230260048A1 (en) | 2017-09-27 | 2023-04-25 | Implementing Machine Learning For Life And Health Insurance Claims Handling |
Family Applications Before (8)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/136,370 Abandoned US20210256616A1 (en) | 2017-09-27 | 2018-09-20 | Automobile Monitoring Systems and Methods for Risk Determination |
US16/136,387 Active 2039-01-21 US11783422B1 (en) | 2017-09-27 | 2018-09-20 | Implementing machine learning for life and health insurance claims handling |
US16/136,357 Active 2039-12-01 US11373249B1 (en) | 2017-09-27 | 2018-09-20 | Automobile monitoring systems and methods for detecting damage and other conditions |
US16/136,365 Abandoned US20210256615A1 (en) | 2017-09-27 | 2018-09-20 | Implementing Machine Learning For Life And Health Insurance Loss Mitigation And Claims Handling |
US16/136,401 Abandoned US20210287297A1 (en) | 2017-09-27 | 2018-09-20 | Automobile Monitoring Systems and Methods for Loss Reserving and Financial Reporting |
US16/136,501 Active US10497250B1 (en) | 2017-09-27 | 2018-09-20 | Real property monitoring systems and methods for detecting damage and other conditions |
US16/136,519 Abandoned US20210390624A1 (en) | 2017-09-27 | 2018-09-20 | Real Property Monitoring Systems and Methods for Risk Determination |
US16/668,072 Active US10943464B1 (en) | 2017-09-27 | 2019-10-30 | Real property monitoring systems and methods for detecting damage and other conditions |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/466,722 Abandoned US20210398227A1 (en) | 2017-09-27 | 2021-09-03 | Real property monitoring systems and methods for risk determination |
US17/752,702 Pending US20220284517A1 (en) | 2017-09-27 | 2022-05-24 | Automobile Monitoring Systems and Methods for Detecting Damage and Other Conditions |
US18/139,131 Pending US20230260048A1 (en) | 2017-09-27 | 2023-04-25 | Implementing Machine Learning For Life And Health Insurance Claims Handling |
Country Status (1)
Country | Link |
---|---|
US (12) | US20210256616A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11472420B2 (en) | 2020-08-20 | 2022-10-18 | Toyota Jidosha Kabushiki Kaisha | Machine learning device and machine learning system |
US11675999B2 (en) * | 2020-08-20 | 2023-06-13 | Toyota Jidosha Kabushiki Kaisha | Machine learning device |
Families Citing this family (79)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10726492B2 (en) * | 2016-08-15 | 2020-07-28 | Allstate Insurance Company | Customized platform for host protection in home sharing |
DE102017202415A1 (en) * | 2017-02-15 | 2018-08-16 | Bayerische Motoren Werke Aktiengesellschaft | Collision avoidance with cross traffic |
US10762385B1 (en) * | 2017-06-29 | 2020-09-01 | State Farm Mutual Automobile Insurance Company | Deep learning image processing method for determining vehicle damage |
US10698421B1 (en) * | 2017-09-25 | 2020-06-30 | State Farm Mutual Automobile Insurance Company | Dynamic autonomous vehicle train |
US20210256616A1 (en) | 2017-09-27 | 2021-08-19 | State Farm Mutual Automobile Insurance Company | Automobile Monitoring Systems and Methods for Risk Determination |
US11636281B2 (en) * | 2018-04-24 | 2023-04-25 | Visa International Service Association | Model management system for developing machine learning models |
EP3791389A4 (en) * | 2018-05-08 | 2022-01-26 | 3M Innovative Properties Company | Hybrid batch and live natural language processing |
US11403558B1 (en) * | 2018-09-18 | 2022-08-02 | Iqvia Inc. | GxP artificial intelligence / machine learning (AI/ML) platform |
US11568207B2 (en) | 2018-09-27 | 2023-01-31 | Deepmind Technologies Limited | Learning observation representations by predicting the future in latent space |
US20240177536A9 (en) * | 2018-09-30 | 2024-05-30 | Strong Force Tp Portfolio 2022, Llc | Intelligent transportation systems including digital twin interface for a passenger vehicle |
WO2020092574A1 (en) * | 2018-10-30 | 2020-05-07 | Laboratory Corporation Of America Holdings | Express tracking for patient flow management in a distributed environment |
US11270213B2 (en) * | 2018-11-05 | 2022-03-08 | Convr Inc. | Systems and methods for extracting specific data from documents using machine learning |
US20230222610A1 (en) * | 2018-12-10 | 2023-07-13 | Wells Fargo Bank, N.A. | Virtual Re-Inspection Process for a Property |
US11783423B1 (en) | 2018-12-14 | 2023-10-10 | Allstate Insurance Company | Connected home system with risk units |
US20210142590A1 (en) * | 2018-12-26 | 2021-05-13 | Allstate Insurance Company | System generated damage analysis using scene templates |
US11741763B2 (en) * | 2018-12-26 | 2023-08-29 | Allstate Insurance Company | Systems and methods for system generated damage analysis |
US12008667B1 (en) * | 2019-02-07 | 2024-06-11 | State Farm Mutual Automobile Insurance Company | Systems and methods for detecting building events and trends |
US11915179B2 (en) * | 2019-02-14 | 2024-02-27 | Talisai Inc. | Artificial intelligence accountability platform and extensions |
CN109859847A (en) * | 2019-02-15 | 2019-06-07 | 京东方科技集团股份有限公司 | Electronic equipment, Weight management earnings forecast device and storage medium |
US11315691B2 (en) * | 2019-02-22 | 2022-04-26 | Impactivo, Llc | Method for recommending continuing education to health professionals based on patient outcomes |
EP3942488A1 (en) * | 2019-03-22 | 2022-01-26 | Swiss Reinsurance Company Ltd. | Structured liability risks parametrizing and forecasting system providing composite measures based on a reduced-to-the-max optimization approach and quantitative yield pattern linkage and corresponding method |
AU2020253756A1 (en) * | 2019-04-04 | 2021-09-16 | Valmont Industries, Inc. | System and method for latching solenoid activation detection for vri and other irrigation uses |
US11378485B2 (en) * | 2019-04-26 | 2022-07-05 | Mikael Sven Johan Sjoblom | Structural monitoring system |
US20200349528A1 (en) * | 2019-05-01 | 2020-11-05 | Stoa USA, Inc | System and method for determining a property remodeling plan using machine vision |
US11928737B1 (en) * | 2019-05-23 | 2024-03-12 | State Farm Mutual Automobile Insurance Company | Methods and apparatus to process insurance claims using artificial intelligence |
US11514528B1 (en) * | 2019-06-20 | 2022-11-29 | Express Scripts Strategic Development, Inc. | Pharmacy benefit management machine learning systems and methods |
US11669907B1 (en) * | 2019-06-27 | 2023-06-06 | State Farm Mutual Automobile Insurance Company | Methods and apparatus to process insurance claims using cloud computing |
US11954736B1 (en) * | 2019-08-28 | 2024-04-09 | State Farm Mutual Automobile Insurance Company | Systems and methods for generating mobility insurance products using ride-sharing telematics data |
US20210312568A1 (en) * | 2019-09-23 | 2021-10-07 | Mitchell International, Inc. | Systems and methods for vehicle intake for damaged vehicles |
US11351995B2 (en) | 2019-09-27 | 2022-06-07 | Zoox, Inc. | Error modeling framework |
US11734473B2 (en) * | 2019-09-27 | 2023-08-22 | Zoox, Inc. | Perception error models |
US11625513B2 (en) * | 2019-09-27 | 2023-04-11 | Zoox, Inc. | Safety analysis framework |
US11749265B2 (en) * | 2019-10-04 | 2023-09-05 | Disney Enterprises, Inc. | Techniques for incremental computer-based natural language understanding |
US11417208B1 (en) * | 2019-10-29 | 2022-08-16 | BlueOwl, LLC | Systems and methods for fraud prevention based on video analytics |
US11315196B1 (en) * | 2019-12-03 | 2022-04-26 | Data-Core Systems, Inc. | Synthesized invalid insurance claims for training an artificial intelligence / machine learning model |
US11620294B2 (en) * | 2020-01-30 | 2023-04-04 | Panasonic Avionics Corporation | Dynamic media data management |
CN111311668B (en) * | 2020-02-12 | 2024-01-05 | 东南大学 | Fair-faced concrete surface air hole analysis method based on convolutional neural network |
US11468515B1 (en) * | 2020-02-18 | 2022-10-11 | BlueOwl, LLC | Systems and methods for generating and updating a value of personal possessions of a user for insurance purposes |
US11423305B2 (en) * | 2020-02-26 | 2022-08-23 | Deere & Company | Network-based work machine software optimization |
EP4115306A4 (en) * | 2020-03-02 | 2024-03-20 | Strong Force Tp Portfolio 2022 Llc | Intelligent transportation systems including digital twin interface for a passenger vehicle |
WO2021181488A1 (en) * | 2020-03-09 | 2021-09-16 | 富士通株式会社 | Dialog control program, dialog control method, and information processing device |
US11798095B1 (en) * | 2020-03-30 | 2023-10-24 | Allstate Insurance Company | Commercial claim processing platform using machine learning to generate shared economy insights |
US20210304879A1 (en) * | 2020-03-31 | 2021-09-30 | Change Healthcare Holdings Llc | Methods, systems, and computer program products for dividing health care service responsibilities between entities |
WO2021195689A1 (en) * | 2020-04-03 | 2021-10-07 | Presagen Pty Ltd | Method for artificial intelligence (ai) model selection |
US11950166B2 (en) * | 2020-04-13 | 2024-04-02 | Waymo Llc | Predicting occupancy probabilities of surrounding agents |
US11668624B2 (en) * | 2020-04-13 | 2023-06-06 | Independence Materials Group, Llc | System and method for alerting third-parties of an unfavorable condition |
US11710186B2 (en) | 2020-04-24 | 2023-07-25 | Allstate Insurance Company | Determining geocoded region based rating systems for decisioning outputs |
US11852495B1 (en) * | 2020-05-26 | 2023-12-26 | BlueOwl, LLC | Computational model for creating personalized routes based at least in part upon predicted total cost of claim frequency or severity |
US11550325B2 (en) | 2020-06-10 | 2023-01-10 | Nvidia Corp. | Adversarial scenarios for safety testing of autonomous vehicles |
US11390301B2 (en) * | 2020-06-10 | 2022-07-19 | Nvidia Corp. | Tensor-based driving scenario characterization |
US11829150B2 (en) * | 2020-06-10 | 2023-11-28 | Toyota Research Institute, Inc. | Systems and methods for using a joint feature space to identify driving behaviors |
US11769332B2 (en) * | 2020-06-15 | 2023-09-26 | Lytx, Inc. | Sensor fusion for collision detection |
US11755939B2 (en) * | 2020-06-24 | 2023-09-12 | Microsoft Technology Licensing, Llc | Self-supervised self supervision by combining probabilistic logic with deep learning |
US20220005089A1 (en) * | 2020-07-01 | 2022-01-06 | Daniel SHAKED | Property management pricing system and method |
JP2022018415A (en) * | 2020-07-15 | 2022-01-27 | キヤノンメディカルシステムズ株式会社 | Medical data processing device and method |
US11488255B1 (en) * | 2020-08-03 | 2022-11-01 | State Farm Mutual Automobile Insurance Company | Apparatuses, systems and methods for mitigating property loss based on an event driven probable roof loss confidence score |
US20220053010A1 (en) * | 2020-08-13 | 2022-02-17 | Tweenznet Ltd. | System and method for determining a communication anomaly in at least one network |
US20220059230A1 (en) * | 2020-08-21 | 2022-02-24 | Optum, Inc. | Machine-learning-based predictive behaviorial monitoring |
US20220107977A1 (en) * | 2020-10-05 | 2022-04-07 | Modern Adjusting Services, LLC | Methods, systems, and software for inspection of a structure |
CN112446169A (en) * | 2020-11-05 | 2021-03-05 | 美的集团股份有限公司 | Water heater water consumption prediction method, water heater and storage medium |
US11984124B2 (en) * | 2020-11-13 | 2024-05-14 | Apple Inc. | Speculative task flow execution |
US20220164892A1 (en) * | 2020-11-23 | 2022-05-26 | Computed Futures Inc | Systems and methods for detecting and mitigating cyber security threats |
CN112396026B (en) * | 2020-11-30 | 2024-06-07 | 北京华正明天信息技术股份有限公司 | Fire image feature extraction method based on feature aggregation and dense connection |
US20220198572A1 (en) * | 2020-12-23 | 2022-06-23 | Hippo Analytics Inc. dba Hippo Insurance Services | System for augmenting third party data |
US20220237563A1 (en) * | 2021-01-25 | 2022-07-28 | Master Plumbing Corporation | System and method for appraising damage claims |
CA3215366A1 (en) * | 2021-04-13 | 2022-10-20 | Sina Chehrazi | Machine-learning driven real-time data analysis |
US11803535B2 (en) | 2021-05-24 | 2023-10-31 | Cdk Global, Llc | Systems, methods, and apparatuses for simultaneously running parallel databases |
US20220405856A1 (en) * | 2021-06-16 | 2022-12-22 | Cape Analytics, Inc. | Property hazard score determination |
US20230037656A1 (en) * | 2021-08-06 | 2023-02-09 | Rain Technology, Inc. | Handsfree information system and method |
ES2970858A2 (en) * | 2021-09-28 | 2024-05-30 | Bdeo Tech S L | Automated home claims rating system |
WO2023051904A1 (en) * | 2021-09-29 | 2023-04-06 | Swiss Reinsurance Company Ltd. | Measuring and control system measuring sensor equipped smart home perils and individual safety scores using digital home objects and mutual calibrated measuring parameter values, and method thereof |
US20230186315A1 (en) * | 2021-11-08 | 2023-06-15 | Super Home Inc. | System and method for covering cost of delivering repair and maintenance services to premises of subscribers including adjudication |
US12003491B2 (en) * | 2021-11-12 | 2024-06-04 | Authentic, Inc. | Method and system for asynchronous medical patient data communication between multiple parties |
US11676298B1 (en) | 2021-12-16 | 2023-06-13 | Cape Analytics, Inc. | System and method for change analysis |
WO2023205228A1 (en) * | 2022-04-19 | 2023-10-26 | Tractable Ltd | Remote real property inspection |
US20240046361A1 (en) * | 2022-08-02 | 2024-02-08 | Allstate Insurance Company | Systems and methods for vehicle damage identification and insurance claim processing |
US11983145B2 (en) | 2022-08-31 | 2024-05-14 | Cdk Global, Llc | Method and system of modifying information on file |
CN116433991B (en) * | 2023-06-14 | 2023-08-22 | 中国地质大学(武汉) | Post-earthquake building damage assessment method for emergency rescue |
US12014428B1 (en) | 2023-08-22 | 2024-06-18 | EmergIP, LLC | Apparatus and a method for the generation of provider data |
Citations (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5893072A (en) * | 1996-06-20 | 1999-04-06 | Aetna Life & Casualty Company | Insurance classification plan loss control system |
US20030135725A1 (en) * | 2002-01-14 | 2003-07-17 | Schirmer Andrew Lewis | Search refinement graphical user interface |
US20050154617A1 (en) * | 2000-09-30 | 2005-07-14 | Tom Ruggieri | System and method for providing global information on risks and related hedging strategies |
US20060136273A1 (en) * | 2004-09-10 | 2006-06-22 | Frank Zizzamia | Method and system for estimating insurance loss reserves and confidence intervals using insurance policy and claim level detail predictive modeling |
US20060293926A1 (en) * | 2003-02-18 | 2006-12-28 | Khury Costandy K | Method and apparatus for reserve measurement |
US7392201B1 (en) * | 2000-11-15 | 2008-06-24 | Trurisk, Llc | Insurance claim forecasting system |
US20100131304A1 (en) * | 2008-11-26 | 2010-05-27 | Fred Collopy | Real time insurance generation |
US20110302188A1 (en) * | 2005-12-30 | 2011-12-08 | Google Inc. | Dynamic search box for web browser |
US20120221553A1 (en) * | 2011-02-24 | 2012-08-30 | Lexisnexis, A Division Of Reed Elsevier Inc. | Methods for electronic document searching and graphically representing electronic document searches |
US8452621B1 (en) * | 2012-02-24 | 2013-05-28 | Guy Carpenter & Company, LLC | System and method for determining loss reserves
US20140195273A1 (en) * | 2012-09-12 | 2014-07-10 | Guy Carpenter & Company, Llc | System And Method For Providing Systemic Casualty Reserve Protection |
WO2015077557A1 (en) * | 2013-11-22 | 2015-05-28 | California Institute Of Technology | Generation of weights in machine learning |
US20150220601A1 (en) * | 2014-02-06 | 2015-08-06 | International Business Machines Corporation | Searching content managed by a search engine using relational database type queries |
US20160132194A1 (en) * | 2014-11-06 | 2016-05-12 | Dropbox, Inc. | Searching digital content |
US9449080B1 (en) * | 2010-05-18 | 2016-09-20 | Guangsheng Zhang | System, methods, and user interface for information searching, tagging, organization, and display |
US9489397B1 (en) * | 2011-07-27 | 2016-11-08 | Aon Benfield Global, Inc. | Impact data manager for dynamic data delivery |
US20170185723A1 (en) * | 2015-12-28 | 2017-06-29 | Integer Health Technologies, LLC | Machine Learning System for Creating and Utilizing an Assessment Metric Based on Outcomes |
US20170300712A1 (en) * | 2016-04-14 | 2017-10-19 | Salesforce.Com, Inc. | Fine grain security for analytic data sets |
US20170371924A1 (en) * | 2016-06-24 | 2017-12-28 | Microsoft Technology Licensing, Llc | Aggregate-Query Database System and Processing |
US20180107734A1 (en) * | 2016-10-18 | 2018-04-19 | Kathleen H. Galia | System to predict future performance characteristic for an electronic record |
US20190121900A1 (en) * | 2017-10-23 | 2019-04-25 | Citrix Systems, Inc. | Multi-Select Dropdown State Replication |
US20190147083A1 (en) * | 2017-11-14 | 2019-05-16 | Mindbridge Analytics Inc. | Method and system for presenting a user selectable interface in response to a natural language request |
US10373260B1 (en) * | 2014-03-18 | 2019-08-06 | Ccc Information Services Inc. | Imaging processing system for identifying parts for repairing a vehicle |
US10497250B1 (en) * | 2017-09-27 | 2019-12-03 | State Farm Mutual Automobile Insurance Company | Real property monitoring systems and methods for detecting damage and other conditions |
US20200335188A1 (en) * | 2019-04-17 | 2020-10-22 | Tempus Labs | Systems and Methods for Interrogating Clinical Documents for Characteristic Data |
US20210209115A1 (en) * | 2020-01-07 | 2021-07-08 | Elastic Flash Inc. | Data ingestion with spatial and temporal locality |
US20210279297A1 (en) * | 2016-05-13 | 2021-09-09 | Equals 3 LLC | Linking to a search result |
US20220365982A1 (en) * | 2021-05-17 | 2022-11-17 | Docusign, Inc. | Document package merge in document management system |
Family Cites Families (184)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0630150B1 (en) | 1993-06-18 | 1998-08-05 | Hewlett-Packard Company | Neural network for color translations |
US7249114B2 (en) | 1998-08-06 | 2007-07-24 | Cybersettle Holdings, Inc. | Computerized dispute resolution system and method |
BR0007250A (en) | 1999-01-13 | 2002-10-15 | Computer Ass Think Inc | Method and device for authenticating signatures and e-learning methods and comparing them with previously stored representations |
US6281790B1 (en) | 1999-09-01 | 2001-08-28 | Net Talon Security Systems, Inc. | Method and apparatus for remotely monitoring a site |
US7149347B1 (en) | 2000-03-02 | 2006-12-12 | Science Applications International Corporation | Machine learning of document templates for data extraction |
AU2002243431A1 (en) | 2000-10-23 | 2002-06-24 | Deloitte And Touche Llp | Commercial insurance scoring system and method |
GB2378544A (en) | 2001-04-26 | 2003-02-12 | Nihon Dot Com Co Ltd | Online purchase of shipping and insurance services |
US20030032871A1 (en) | 2001-07-18 | 2003-02-13 | New England Medical Center Hospitals, Inc. | Adjustable coefficients to customize predictive instruments |
US7289965B1 (en) | 2001-08-10 | 2007-10-30 | Freddie Mac | Systems and methods for home value scoring |
US7835919B1 (en) | 2001-08-10 | 2010-11-16 | Freddie Mac | Systems and methods for home value scoring |
US7711574B1 (en) | 2001-08-10 | 2010-05-04 | Federal Home Loan Mortgage Corporation (Freddie Mac) | System and method for providing automated value estimates of properties as of a specified previous time period |
US20030149603A1 (en) | 2002-01-18 | 2003-08-07 | Bruce Ferguson | System and method for operating a non-linear model with missing data for use in electronic commerce |
US7009510B1 (en) | 2002-08-12 | 2006-03-07 | Phonetics, Inc. | Environmental and security monitoring system with flexible alarm notification and status capability |
US8595031B1 (en) | 2002-12-13 | 2013-11-26 | Manning & Napier Information Services, Llc | Method and apparatus for providing access to healthcare funds |
CA2504810A1 (en) | 2003-09-10 | 2005-03-17 | Swiss Reinsurance Company | System and method for the automated establishment of experience ratings and/or risk reserves |
US20050075912A1 (en) | 2003-10-06 | 2005-04-07 | Bealke Bruce B. | Computerized settlement method |
US20050096944A1 (en) | 2003-10-30 | 2005-05-05 | Ryan Shaun P. | Method, system and computer-readable medium useful for financial evaluation of risk |
US9609003B1 (en) | 2007-06-12 | 2017-03-28 | Icontrol Networks, Inc. | Generating risk profile using data of home monitoring and security system |
US10522026B2 (en) | 2008-08-11 | 2019-12-31 | Icontrol Networks, Inc. | Automation system user interface with three-dimensional display |
US7912770B2 (en) * | 2004-10-29 | 2011-03-22 | American Express Travel Related Services Company, Inc. | Method and apparatus for consumer interaction based on spend capacity |
US20060277077A1 (en) | 2004-11-22 | 2006-12-07 | Coleman James L Jr | Method for writing an insurance policy entitled insured tenant leasing |
US7664662B1 (en) * | 2006-03-16 | 2010-02-16 | Trurisk Llc | Computerized medical modeling of group life and disability insurance using medical claims data |
US8027850B1 (en) | 2005-11-28 | 2011-09-27 | Millennium Information Services | Property insurance risk assessment processing system and method |
US8494152B1 (en) | 2006-02-28 | 2013-07-23 | Allstate Insurance Company | Systems and methods for automated call-handling and processing |
US8930204B1 (en) | 2006-08-16 | 2015-01-06 | Resource Consortium Limited | Determining lifestyle recommendations using aggregated personal information |
US8200506B2 (en) | 2006-12-19 | 2012-06-12 | Accenture Global Services Limited | Integrated health management platform |
US8412269B1 (en) * | 2007-03-26 | 2013-04-02 | Celio Technology Corporation | Systems and methods for providing additional functionality to a device for increased usability |
US8032480B2 (en) | 2007-11-02 | 2011-10-04 | Hunch Inc. | Interactive computing advice facility with learning based on user feedback |
US7958066B2 (en) | 2007-11-02 | 2011-06-07 | Hunch Inc. | Interactive machine learning advice facility |
CA2721708C (en) | 2008-04-17 | 2018-01-09 | The Travelers Indemnity Company | A method of and system for determining and processing object structure condition information |
US20090287509A1 (en) | 2008-05-16 | 2009-11-19 | International Business Machines Corporation | Method and system for automating insurance claims processing |
US8976937B2 (en) | 2008-06-27 | 2015-03-10 | Adt Us Holding, Inc. | Method and apparatus for communication between a security system and a monitoring center |
CN102281816B (en) | 2008-11-20 | 2015-01-07 | 人体媒介公司 | Method and apparatus for determining critical care parameters |
US8401878B2 (en) | 2009-01-06 | 2013-03-19 | Mark Stender | Method and system for connecting an insured to an insurer using a mobile device |
US9443226B2 (en) * | 2009-01-12 | 2016-09-13 | Sri International | Electronic assistant for making predictions based on user messages |
US7966203B1 (en) | 2009-02-27 | 2011-06-21 | Millennium Information Services | Property insurance risk assessment using application data |
US8346577B2 (en) | 2009-05-29 | 2013-01-01 | Hyperquest, Inc. | Automation of auditing claims |
US9916625B2 (en) | 2012-02-02 | 2018-03-13 | Progressive Casualty Insurance Company | Mobile insurance platform system |
US20100324943A1 (en) * | 2009-06-19 | 2010-12-23 | Genowledge Llc | Genetically predicted life expectancy and life insurance evaluation |
US8762180B2 (en) * | 2009-08-25 | 2014-06-24 | Accenture Global Services Limited | Claims analytics engine |
US8359259B2 (en) | 2009-11-12 | 2013-01-22 | Hartford Fire Insurance Company | System and method for administering telematics based reinsurance pools |
US20110161119A1 (en) | 2009-12-24 | 2011-06-30 | The Travelers Companies, Inc. | Risk assessment and control, insurance premium determinations, and other applications using busyness |
US8352292B2 (en) * | 2009-12-31 | 2013-01-08 | Hampton Thurman B | Personal injury valuation systems and method |
US20110213731A1 (en) | 2010-02-26 | 2011-09-01 | Banker's Toolbox, Inc. | Techniques for identifying high-risk portfolio with automated commercial real estate stress testing |
WO2011150132A1 (en) | 2010-05-25 | 2011-12-01 | Underwriters Laboratories Inc. | Insurance policy data analysis and decision support system and method |
US8504393B2 (en) | 2010-09-10 | 2013-08-06 | State Farm Mutual Automobile Insurance Company | Systems and methods for grid-based insurance rating |
US9342817B2 (en) * | 2011-07-07 | 2016-05-17 | Sony Interactive Entertainment LLC | Auto-creating groups for sharing photos |
AU2012290296B2 (en) | 2011-07-29 | 2016-03-17 | Adt Us Holding, Inc. | Security system and method |
US9916538B2 (en) | 2012-09-15 | 2018-03-13 | Z Advanced Computing, Inc. | Method and system for feature detection |
US9536052B2 (en) | 2011-10-28 | 2017-01-03 | Parkland Center For Clinical Innovation | Clinical predictive and monitoring system and method |
US9819711B2 (en) | 2011-11-05 | 2017-11-14 | Neil S. Davey | Online social interaction, education, and health care by analysing affect and cognitive features |
US20140257862A1 (en) | 2011-11-29 | 2014-09-11 | Wildfire Defense Systems, Inc. | Mobile application for risk management |
US9367814B1 (en) * | 2011-12-27 | 2016-06-14 | Google Inc. | Methods and systems for classifying data using a hierarchical taxonomy |
US10515414B2 (en) | 2012-02-03 | 2019-12-24 | Eagle View Technologies, Inc. | Systems and methods for performing a risk management assessment of a property |
US8595037B1 (en) | 2012-05-08 | 2013-11-26 | Elwha Llc | Systems and methods for insurance based on monitored characteristics of an autonomous drive mode selection system |
US10387960B2 (en) * | 2012-05-24 | 2019-08-20 | State Farm Mutual Automobile Insurance Company | System and method for real-time accident documentation and claim submission |
US8799125B2 (en) * | 2012-05-24 | 2014-08-05 | Hartford Fire Insurance Company | System and method for rendering dynamic insurance quote interface |
US20140067433A1 (en) * | 2012-08-02 | 2014-03-06 | David G. Hargrove | Method and System for Insurance Claims Adjustment |
CN109785972B (en) | 2012-08-16 | 2023-09-26 | 橙点公司 | Method for modeling behavioral and health changes |
US8510196B1 (en) * | 2012-08-16 | 2013-08-13 | Allstate Insurance Company | Feedback loop in mobile damage assessment and claims processing |
US10783585B1 (en) * | 2012-08-16 | 2020-09-22 | Allstate Insurance Company | Agent-facilitated claims damage estimation |
US8490006B1 (en) | 2012-09-04 | 2013-07-16 | State Farm Mutual Automobile Insurance Company | Scene creation for building automation systems |
US10332059B2 (en) | 2013-03-14 | 2019-06-25 | Google Llc | Security scoring in a smart-sensored home |
US9208676B2 (en) | 2013-03-14 | 2015-12-08 | Google Inc. | Devices, methods, and associated information processing for security in a smart-sensored home |
US20140122133A1 (en) | 2012-10-31 | 2014-05-01 | Bodyshopbids, Inc. | Method of virtually settling insurance claims |
US9501799B2 (en) | 2012-11-08 | 2016-11-22 | Hartford Fire Insurance Company | System and method for determination of insurance classification of entities |
US8533144B1 (en) | 2012-11-12 | 2013-09-10 | State Farm Mutual Automobile Insurance Company | Automation and security application store suggestions based on usage data |
US8527306B1 (en) | 2012-11-12 | 2013-09-03 | State Farm Mutual Automobile Insurance Company | Automation and security application store suggestions based on claims data |
US8924241B2 (en) | 2012-11-19 | 2014-12-30 | Hartford Fire Insurance Company | System and method to determine an insurance policy benefit associated with an asset |
US20140156315A1 (en) * | 2012-12-05 | 2014-06-05 | ExlService Holdings, Inc. | Settlement Evaluation Tool for Subrogation Recovery Enhancement |
WO2014093524A1 (en) | 2012-12-11 | 2014-06-19 | Adt Us Holdings, Inc. | Security panel communication system |
US8890680B2 (en) | 2013-01-11 | 2014-11-18 | State Farm Mutual Automobile Insurance Company | Alternative billing modes for security and automation applications |
US9049168B2 (en) | 2013-01-11 | 2015-06-02 | State Farm Mutual Automobile Insurance Company | Home sensor data gathering for neighbor notification purposes |
US10504159B2 (en) | 2013-01-29 | 2019-12-10 | Truecar, Inc. | Wholesale/trade-in pricing system, method and computer program product therefor |
US20140278561A1 (en) | 2013-03-14 | 2014-09-18 | Stoneriver National Flood Services, Inc. | Computerized system and method for determining flood risk |
US9025756B1 (en) | 2013-03-14 | 2015-05-05 | Progressive Casualty Insurance Company | Loyalty structured call routing system |
US9898168B2 (en) | 2013-03-15 | 2018-02-20 | Adt Us Holdings, Inc. | Security system access profiles |
CA2906170C (en) | 2013-03-15 | 2021-05-04 | Adt Us Holdings, Inc. | Security system using visual floor plan |
US9082015B2 (en) | 2013-03-15 | 2015-07-14 | State Farm Mutual Automobile Insurance Company | Automatic building assessment |
US8731977B1 (en) | 2013-03-15 | 2014-05-20 | Red Mountain Technologies, LLC | System and method for analyzing and using vehicle historical data |
US20140322676A1 (en) | 2013-04-26 | 2014-10-30 | Verizon Patent And Licensing Inc. | Method and system for providing driving quality feedback and automotive support |
US9026551B2 (en) | 2013-06-25 | 2015-05-05 | Hartford Fire Insurance Company | System and method for evaluating text to support multiple insurance applications |
US10529026B2 (en) | 2013-07-16 | 2020-01-07 | Esurance Insurance Services, Inc. | Property inspection using aerial imagery |
WO2015013619A1 (en) | 2013-07-26 | 2015-01-29 | Adt Us Holdings, Inc. | User management of a response to a system alarm event |
US20150039352A1 (en) | 2013-08-05 | 2015-02-05 | G.D. van Wagenen Financial Services, Inc. | System and method for blind-rating of risk to collateral in a secured transaction |
US9552681B2 (en) | 2013-08-08 | 2017-01-24 | Alcohol Countermeasure Systems (International) Inc. | Apparatus for assessing or mitigating insurance risk |
US9947051B1 (en) | 2013-08-16 | 2018-04-17 | United Services Automobile Association | Identifying and recommending insurance policy products/services using informatic sensor data |
US9245191B2 (en) * | 2013-09-05 | 2016-01-26 | Ebay, Inc. | System and method for scene text recognition |
US20150073835A1 (en) | 2013-09-11 | 2015-03-12 | Tata Consultancy Services Limited | System and method for generating an insurance quote of a property in real-time |
US20150081578A1 (en) | 2013-09-16 | 2015-03-19 | Hartford Fire Insurance Company | System and method for behavioral program selection and administration |
US20150088556A1 (en) | 2013-09-25 | 2015-03-26 | State Farm Mutual Automobile Insurance Company | Systems and methods for processing property insurance |
US10319035B2 (en) * | 2013-10-11 | 2019-06-11 | Ccc Information Services | Image capturing and automatic labeling system |
AU2014337037A1 (en) | 2013-10-17 | 2016-04-07 | Adt Us Holdings, Inc. | Portable system for managing events |
US9824397B1 (en) * | 2013-10-23 | 2017-11-21 | Allstate Insurance Company | Creating a scene for property claims adjustment |
US20150127389A1 (en) | 2013-11-07 | 2015-05-07 | Wagesecure, Llc | System, method, and program product for calculating premiums for employer-based supplemental unemployment insurance |
US10089691B2 (en) * | 2013-12-04 | 2018-10-02 | State Farm Mutual Automobile Insurance Company | Systems and methods for detecting potentially inaccurate insurance claims |
US10023114B2 (en) | 2013-12-31 | 2018-07-17 | Hartford Fire Insurance Company | Electronics for remotely monitoring and controlling a vehicle |
CA2898078C (en) | 2014-02-05 | 2020-09-15 | Grace Castillo Soyao | Systems, devices, and methods for analyzing and enhancing patient health |
US9798993B2 (en) | 2014-02-11 | 2017-10-24 | State Farm Mutual Automobile Insurance Company | Systems and methods for simulating home loss prevention |
US20150235321A1 (en) | 2014-02-18 | 2015-08-20 | Mastercard International Incorporated | Insurance risk modeling method and apparatus |
US10803525B1 (en) * | 2014-02-19 | 2020-10-13 | Allstate Insurance Company | Determining a property of an insurance policy based on the autonomous features of a vehicle |
US20150235322A1 (en) | 2014-02-20 | 2015-08-20 | Buildfax (A D/B/A Of Builderadius, Inc.) | Computer-implemented method for estimating the condition or insurance risk of a structure |
US8917186B1 (en) | 2014-03-04 | 2014-12-23 | State Farm Mutual Automobile Insurance Company | Audio monitoring and sound identification process for remote alarms |
US20150254766A1 (en) | 2014-03-05 | 2015-09-10 | Marc Abramowitz | System and method for generating a dynamic credit risk rating for a debt security |
US20150254719A1 (en) | 2014-03-05 | 2015-09-10 | Hti, Ip, L.L.C. | Prediction of Vehicle Transactions and Targeted Advertising Using Vehicle Telematics |
US10380696B1 (en) * | 2014-03-18 | 2019-08-13 | Ccc Information Services Inc. | Image processing system for vehicle damage |
US20150302529A1 (en) | 2014-04-18 | 2015-10-22 | Marshall & Swift/Boeckh, LLC | Roof condition evaluation and risk scoring system and method |
US10679292B1 (en) | 2014-04-25 | 2020-06-09 | State Farm Mutual Automobile Insurance Company | Systems and methods for managing insurance associated with devices populated within a property |
US10198771B1 (en) | 2014-06-20 | 2019-02-05 | Allstate Insurance Company | Data hub |
US9786158B2 (en) | 2014-08-15 | 2017-10-10 | Adt Us Holdings, Inc. | Using degree of confidence to prevent false security system alarms |
US20160055589A1 (en) * | 2014-08-20 | 2016-02-25 | Midwest Employers Casualty Company | Automated claim risk factor identification and mitigation system |
US10572947B1 (en) | 2014-09-05 | 2020-02-25 | Allstate Insurance Company | Adaptable property inspection model |
US9800570B1 (en) | 2014-09-26 | 2017-10-24 | Adt Us Holdings, Inc. | Method of persistent authentication with disablement upon removal of a wearable device |
US10515372B1 (en) | 2014-10-07 | 2019-12-24 | State Farm Mutual Automobile Insurance Company | Systems and methods for managing building code compliance for a property |
US9875509B1 (en) | 2014-10-09 | 2018-01-23 | State Farm Mutual Automobile Insurance Company | Method and system for determining the condition of insured properties in a neighborhood |
US20160086185A1 (en) | 2014-10-15 | 2016-03-24 | Brighterion, Inc. | Method of alerting all financial channels about risk in real-time |
WO2016065307A1 (en) | 2014-10-23 | 2016-04-28 | Insurance Services Office, Inc. | Systems and methods for computerized fraud detection using machine learning and network analysis |
US9660869B2 (en) | 2014-11-05 | 2017-05-23 | Fair Isaac Corporation | Combining network analysis and predictive analytics |
US9152737B1 (en) | 2014-11-26 | 2015-10-06 | Sense Labs, Inc. | Providing notifications to a user |
US9443195B2 (en) | 2014-11-26 | 2016-09-13 | Sense Labs, Inc. | Assisted labeling of devices with disaggregation |
US9739813B2 (en) | 2014-11-26 | 2017-08-22 | Sense Labs, Inc. | Determining information about devices in a building using different sets of features |
US9057746B1 (en) | 2014-11-26 | 2015-06-16 | Sense Labs, Inc. | Determining information about devices in a building using different sets of features |
US9923971B2 (en) | 2015-02-02 | 2018-03-20 | State Farm Mutual Automobile Insurance Company | Systems and methods for identifying unidentified plumbing supply products |
US10970990B1 (en) | 2015-02-19 | 2021-04-06 | State Farm Mutual Automobile Insurance Company | Systems and methods for monitoring building health |
US9524450B2 (en) | 2015-03-04 | 2016-12-20 | Accenture Global Services Limited | Digital image processing using convolutional neural networks |
WO2016145089A1 (en) * | 2015-03-09 | 2016-09-15 | Skytree, Inc. | System and method for using machine learning to generate a model from audited data |
CA3209826A1 (en) * | 2015-03-27 | 2016-10-06 | Equifax, Inc. | Optimizing neural networks for risk assessment |
US10107708B1 (en) | 2015-04-02 | 2018-10-23 | State Farm Mutual Automobile Insurance Company | Smart carpet, pad, or strip for leak detection and loss mitigation |
US9505494B1 (en) * | 2015-04-30 | 2016-11-29 | Allstate Insurance Company | Enhanced unmanned aerial vehicles for damage inspection |
EP3295301A4 (en) | 2015-05-15 | 2018-10-31 | Cox Automotive, Inc. | Parallel processing for solution space partitions |
US20170011313A1 (en) | 2015-07-06 | 2017-01-12 | The Boeing Company | Systems and methods for determining contract clauses |
US10755357B1 (en) | 2015-07-17 | 2020-08-25 | State Farm Mutual Automobile Insurance Company | Aerial imaging for insurance purposes |
US10229394B1 (en) | 2015-08-10 | 2019-03-12 | State Farm Mutual Automobile Insurance Company | Systems and methods for sending diagnostic information during scheduling of home equipment repair |
US10217068B1 (en) | 2015-08-10 | 2019-02-26 | State Farm Mutual Automobile Insurance Company | Systems and methods for pre-scheduling repair of home equipment |
JP6678930B2 (en) * | 2015-08-31 | 2020-04-15 | International Business Machines Corporation | Method, computer system and computer program for learning a classification model
US11151654B2 (en) | 2015-09-30 | 2021-10-19 | Johnson Controls Tyco IP Holdings LLP | System and method for determining risk profile, adjusting insurance premiums and automatically collecting premiums based on sensor data |
GB201517462D0 (en) | 2015-10-02 | 2015-11-18 | Tractable Ltd | Semi-automatic labelling of datasets |
US10733979B2 (en) | 2015-10-09 | 2020-08-04 | Google Llc | Latency constraints for acoustic modeling |
US11127082B1 (en) | 2015-10-12 | 2021-09-21 | Allstate Insurance Company | Virtual assistant for recommendations on whether to arbitrate claims |
US10323860B1 (en) | 2015-11-06 | 2019-06-18 | State Farm Mutual Automobile Insurance Company | Automated water heater flushing and monitoring system |
US9888371B1 (en) | 2015-11-13 | 2018-02-06 | State Farm Mutual Automobile Insurance Company | Portable home and hotel security system |
CA3006102A1 (en) | 2015-11-24 | 2017-06-01 | Dacadoo Ag | Automated health data acquisition, processing and communication system and method |
KR20170061222A (en) | 2015-11-25 | 2017-06-05 | 한국전자통신연구원 | The method for prediction health data value through generation of health data pattern and the apparatus thereof |
US10176526B2 (en) * | 2015-11-30 | 2019-01-08 | Hartford Fire Insurance Company | Processing system for data elements received via source inputs |
US11113704B2 (en) | 2015-12-07 | 2021-09-07 | Daniel J. Towriss | Systems and methods for interactive annuity product services using machine learning modeling |
US9705695B1 (en) | 2015-12-21 | 2017-07-11 | Hartford Fire Insurance Company | Sensors and system for accessing and validating sensor data |
US20170186120A1 (en) | 2015-12-29 | 2017-06-29 | Cerner Innovation, Inc. | Health Care Spend Analysis |
US10482746B1 (en) | 2016-01-06 | 2019-11-19 | State Farm Mutual Automobile Insurance Company | Sensor data to identify catastrophe areas |
US10726494B1 (en) | 2016-01-14 | 2020-07-28 | State Farm Mutual Automobile Insurance Company | Identifying property usage type based upon smart sensor data |
US20170221152A1 (en) | 2016-02-02 | 2017-08-03 | Loss Technology Services, Inc. | Water damage mitigation estimating system and method |
US10515419B1 (en) * | 2016-02-17 | 2019-12-24 | United Services Automobile Association | Systems and methods for leveraging remotely captured images |
US10248716B2 (en) | 2016-02-19 | 2019-04-02 | Accenture Global Solutions Limited | Real-time guidance for content collection |
US11468372B2 (en) | 2016-03-08 | 2022-10-11 | Tata Consultancy Services Limited | Data modeling systems and methods for risk profiling |
US10511676B2 (en) * | 2016-03-17 | 2019-12-17 | Conduent Business Services, Llc | Image analysis system for property damage assessment and verification |
CA3018326A1 (en) | 2016-03-21 | 2017-09-28 | Mastercard International Incorporated | Method and system for recording point to point transaction processing |
US20170286622A1 (en) | 2016-03-29 | 2017-10-05 | International Business Machines Corporation | Patient Risk Assessment Based on Machine Learning of Health Risks of Patient Population |
US11003334B1 (en) | 2016-03-31 | 2021-05-11 | Allstate Insurance Company | Home services condition monitoring |
US10692050B2 (en) * | 2016-04-06 | 2020-06-23 | American International Group, Inc. | Automatic assessment of damage and repair costs in vehicles |
US10157405B1 (en) * | 2016-04-18 | 2018-12-18 | United Services Automobile Association | Systems and methods for implementing machine vision and optical recognition |
US11288789B1 (en) * | 2016-05-20 | 2022-03-29 | Ccc Intelligent Solutions Inc. | Systems and methods for repairing a damaged vehicle using image processing |
US9911042B1 (en) | 2016-07-01 | 2018-03-06 | State Farm Mutual Automobile Insurance Company | Real property image analysis system to identify similar properties |
GB2554361B8 (en) * | 2016-09-21 | 2022-07-06 | Emergent Network Intelligence Ltd | Automatic image based object damage assessment |
US10529029B2 (en) | 2016-09-23 | 2020-01-07 | Aon Benfield Inc. | Platform, systems, and methods for identifying property characteristics and property feature maintenance through aerial imagery analysis |
US10650285B1 (en) | 2016-09-23 | 2020-05-12 | Aon Benfield Inc. | Platform, systems, and methods for identifying property characteristics and property feature conditions through aerial imagery analysis |
US10445576B2 (en) * | 2016-09-23 | 2019-10-15 | Cox Automotive, Inc. | Automated vehicle recognition systems |
US11521271B2 (en) * | 2017-02-06 | 2022-12-06 | Allstate Insurance Company | Autonomous vehicle control systems with collision detection and response capabilities |
US9800958B1 (en) | 2017-02-22 | 2017-10-24 | Sense Labs, Inc. | Training power models using network data |
US9699529B1 (en) | 2017-02-22 | 2017-07-04 | Sense Labs, Inc. | Identifying device state changes using power data and network data |
US10750252B2 (en) | 2017-02-22 | 2020-08-18 | Sense Labs, Inc. | Identifying device state changes using power data and network data |
US11126708B2 (en) | 2017-02-24 | 2021-09-21 | The Adt Security Corporation | Automatic password reset using a security system |
CA3054563C (en) | 2017-02-24 | 2023-12-12 | Adt Us Holdings, Inc. | Detecting an intruder's wireless device during a break in to a premises |
US20180268305A1 (en) * | 2017-03-20 | 2018-09-20 | International Business Machines Corporation | Retrospective event verification using cognitive reasoning and analysis |
US20180292471A1 (en) * | 2017-04-06 | 2018-10-11 | Intel Corporation | Detecting a mechanical device using a magnetometer and an accelerometer |
US10922623B2 (en) | 2017-04-18 | 2021-02-16 | At&T Intellectual Property I, L.P. | Capacity planning, management, and engineering automation platform |
US10375585B2 (en) | 2017-07-06 | 2019-08-06 | Futurewei Technologies, Inc. | System and method for deep learning and wireless network optimization using deep learning
US11620713B2 (en) | 2017-08-22 | 2023-04-04 | Accenture Global Solutions Limited | Automated regulatory compliance for insurance |
US11087292B2 (en) | 2017-09-01 | 2021-08-10 | Allstate Insurance Company | Analyzing images and videos of damaged vehicles to determine damaged vehicle parts and vehicle asymmetries |
US20210326992A1 (en) | 2017-09-06 | 2021-10-21 | State Farm Mutual Automobile Insurance Company | Using a Distributed Ledger for Subrogation Recommendations |
US10891694B1 (en) | 2017-09-06 | 2021-01-12 | State Farm Mutual Automobile Insurance Company | Using vehicle mode for subrogation on a distributed ledger |
US11416942B1 (en) | 2017-09-06 | 2022-08-16 | State Farm Mutual Automobile Insurance Company | Using a distributed ledger to determine fault in subrogation |
WO2019113546A1 (en) | 2017-12-08 | 2019-06-13 | FairClaims, Inc. | Assistance engine for multiparty mediation |
US20190362430A1 (en) | 2018-04-23 | 2019-11-28 | Stephen Jass | Electronic fulfillment system and method for completing life insurance settlement transactions and obtaining and managing electronic signatures for life insurance settlement transaction documents |
US10832347B1 (en) | 2018-05-14 | 2020-11-10 | State Farm Mutual Automobile Insurance Company | Methods and systems for smart claim routing and smart claim assignment |
US10740691B2 (en) | 2018-10-02 | 2020-08-11 | Sense Labs, Inc. | Identifying devices connected to a smart plug |
- 2018
  - 2018-09-20 US 16/136,370 → US20210256616A1 (Abandoned)
  - 2018-09-20 US 16/136,387 → US11783422B1 (Active)
  - 2018-09-20 US 16/136,357 → US11373249B1 (Active)
  - 2018-09-20 US 16/136,365 → US20210256615A1 (Abandoned)
  - 2018-09-20 US 16/136,401 → US20210287297A1 (Abandoned)
  - 2018-09-20 US 16/136,501 → US10497250B1 (Active)
  - 2018-09-20 US 16/136,519 → US20210390624A1 (Abandoned)
- 2019
  - 2019-10-30 US 16/668,072 → US10943464B1 (Active)
- 2021
  - 2021-06-21 US 17/353,621 → US20210312567A1 (Pending)
  - 2021-09-03 US 17/466,722 → US20210398227A1 (Abandoned)
- 2022
  - 2022-05-24 US 17/752,702 → US20220284517A1 (Pending)
- 2023
  - 2023-04-25 US 18/139,131 → US20230260048A1 (Pending)
Patent Citations (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5893072A (en) * | 1996-06-20 | 1999-04-06 | Aetna Life & Casualty Company | Insurance classification plan loss control system |
US20140358824A1 (en) * | 2000-09-30 | 2014-12-04 | Advisen, Ltd. | System and method for providing global information on risks and related hedging strategies |
US20050154617A1 (en) * | 2000-09-30 | 2005-07-14 | Tom Ruggieri | System and method for providing global information on risks and related hedging strategies |
US7392201B1 (en) * | 2000-11-15 | 2008-06-24 | Trurisk, Llc | Insurance claim forecasting system |
US20030135725A1 (en) * | 2002-01-14 | 2003-07-17 | Schirmer Andrew Lewis | Search refinement graphical user interface |
US20060293926A1 (en) * | 2003-02-18 | 2006-12-28 | Khury Costandy K | Method and apparatus for reserve measurement |
US20060136273A1 (en) * | 2004-09-10 | 2006-06-22 | Frank Zizzamia | Method and system for estimating insurance loss reserves and confidence intervals using insurance policy and claim level detail predictive modeling |
US20110302188A1 (en) * | 2005-12-30 | 2011-12-08 | Google Inc. | Dynamic search box for web browser |
US20100131304A1 (en) * | 2008-11-26 | 2010-05-27 | Fred Collopy | Real time insurance generation |
US9449080B1 (en) * | 2010-05-18 | 2016-09-20 | Guangsheng Zhang | System, methods, and user interface for information searching, tagging, organization, and display |
US20120221553A1 (en) * | 2011-02-24 | 2012-08-30 | Lexisnexis, A Division Of Reed Elsevier Inc. | Methods for electronic document searching and graphically representing electronic document searches |
US9489397B1 (en) * | 2011-07-27 | 2016-11-08 | Aon Benfield Global, Inc. | Impact data manager for dynamic data delivery |
US8452621B1 (en) * | 2012-02-24 | 2013-05-28 | Guy Carpenter & Company, LLC. | System and method for determining loss reserves |
US20140195273A1 (en) * | 2012-09-12 | 2014-07-10 | Guy Carpenter & Company, Llc | System And Method For Providing Systemic Casualty Reserve Protection |
WO2015077557A1 (en) * | 2013-11-22 | 2015-05-28 | California Institute Of Technology | Generation of weights in machine learning |
US20150220601A1 (en) * | 2014-02-06 | 2015-08-06 | International Business Machines Corporation | Searching content managed by a search engine using relational database type queries |
US10373260B1 (en) * | 2014-03-18 | 2019-08-06 | Ccc Information Services Inc. | Imaging processing system for identifying parts for repairing a vehicle |
US20160132194A1 (en) * | 2014-11-06 | 2016-05-12 | Dropbox, Inc. | Searching digital content |
US20170185723A1 (en) * | 2015-12-28 | 2017-06-29 | Integer Health Technologies, LLC | Machine Learning System for Creating and Utilizing an Assessment Metric Based on Outcomes |
US20170300712A1 (en) * | 2016-04-14 | 2017-10-19 | Salesforce.Com, Inc. | Fine grain security for analytic data sets |
US20210279297A1 (en) * | 2016-05-13 | 2021-09-09 | Equals 3 LLC | Linking to a search result |
US20170371924A1 (en) * | 2016-06-24 | 2017-12-28 | Microsoft Technology Licensing, Llc | Aggregate-Query Database System and Processing |
US20180107734A1 (en) * | 2016-10-18 | 2018-04-19 | Kathleen H. Galia | System to predict future performance characteristic for an electronic record |
US10497250B1 (en) * | 2017-09-27 | 2019-12-03 | State Farm Mutual Automobile Insurance Company | Real property monitoring systems and methods for detecting damage and other conditions |
US20190121900A1 (en) * | 2017-10-23 | 2019-04-25 | Citrix Systems, Inc. | Multi-Select Dropdown State Replication |
US20190147083A1 (en) * | 2017-11-14 | 2019-05-16 | Mindbridge Analytics Inc. | Method and system for presenting a user selectable interface in response to a natural language request |
US20200335188A1 (en) * | 2019-04-17 | 2020-10-22 | Tempus Labs | Systems and Methods for Interrogating Clinical Documents for Characteristic Data |
US20210209115A1 (en) * | 2020-01-07 | 2021-07-08 | Elastic Flash Inc. | Data ingestion with spatial and temporal locality |
US20220365982A1 (en) * | 2021-05-17 | 2022-11-17 | Docusign, Inc. | Document package merge in document management system |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11472420B2 (en) | 2020-08-20 | 2022-10-18 | Toyota Jidosha Kabushiki Kaisha | Machine learning device and machine learning system |
US11675999B2 (en) * | 2020-08-20 | 2023-06-13 | Toyota Jidosha Kabushiki Kaisha | Machine learning device |
Also Published As
Publication number | Publication date |
---|---|
US20210398227A1 (en) | 2021-12-23 |
US20210256615A1 (en) | 2021-08-19 |
US20220284517A1 (en) | 2022-09-08 |
US10497250B1 (en) | 2019-12-03 |
US10943464B1 (en) | 2021-03-09 |
US20210256616A1 (en) | 2021-08-19 |
US11783422B1 (en) | 2023-10-10 |
US11373249B1 (en) | 2022-06-28 |
US20210390624A1 (en) | 2021-12-16 |
US20210287297A1 (en) | 2021-09-16 |
US20230260048A1 (en) | 2023-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210312567A1 (en) | Automobile Monitoring Systems and Methods for Loss Reserving and Financial Reporting | |
US11403712B2 (en) | Methods and systems for injury segment determination | |
US11842404B2 (en) | Enhancement using analytics based on vehicle kinematic data | |
US20220005121A1 (en) | Machine learning systems and methods for analyzing emerging trends | |
JP2023524609A (en) | How to Determine Paint Requirements for Damaged Vehicles | |
US11367141B1 (en) | Systems and methods for forecasting loss metrics | |
US20080077451A1 (en) | System for synergistic data processing | |
Borselli | Smart contracts in insurance: a law and futurology perspective | |
US11449950B2 (en) | Data processing systems with machine learning engines for dynamically generating risk index dashboards | |
US11367142B1 (en) | Systems and methods for clustering data to forecast risk and other metrics | |
US20210312560A1 (en) | Machine learning systems and methods for elasticity analysis | |
CN115861983A (en) | Intelligent management system and method for mechanical equipment | |
US20230289887A1 (en) | Optical Fraud Detector for Automated Detection Of Fraud In Digital Imaginary-Based Automobile Claims, Automated Damage Recognition, and Method Thereof | |
Bhuiyan et al. | Crash severity analysis and risk factors identification based on an alternate data source: a case study of developing country | |
Meng et al. | Actuarial intelligence in auto insurance: Claim frequency modeling with driving behavior features and improved boosted trees | |
Khodadadi et al. | A Natural Language Processing and deep learning based model for automated vehicle diagnostics using free-text customer service reports | |
Wanke et al. | Unveiling drivers of sustainability in Chinese transport: an approach based on principal component analysis and neural networks | |
Arun Kumar et al. | Disruptive innovation for auto insurance entrepreneurs: new paradigm using telematics and machine learning | |
Cunha et al. | Automobile Usage-Based-Insurance: Improving Risk Management using Telematics Data
Shatnawi et al. | Prediction of risk factors influencing severity level of traffic accidents using artificial intelligence | |
US11574366B1 (en) | Method and system for early identification and settlement of total loss claims | |
CN111260484A (en) | Data processing method, device, server and system for human injury identification | |
Ogungbire et al. | Deep Learning, Machine Learning, or Statistical Models for Weather-related Crash Severity Prediction | |
Viegas | The impact of ADAS in the insurance world | |
Faramarz et al. | Identifying the Effective Factors in the Profit and Loss of Vehicle Third Party Insurance for Insurance Companies via Data Mining Classification Algorithms |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |