EP1530780A1 - System und verfahren zur automatisierten erfahrungstarifierung und/oder schadensreservierung - Google Patents

System und verfahren zur automatisierten erfahrungstarifierung und/oder schadensreservierung (System and method for automated experience rating and/or claims reserving)

Info

Publication number
EP1530780A1
Authority
EP
European Patent Office
Prior art keywords
development
values
neural network
time interval
events
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP03817702A
Other languages
German (de)
English (en)
French (fr)
Inventor
Frank Cuypers
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Swiss Re AG
Original Assignee
Swiss Reinsurance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Swiss Reinsurance Co Ltd filed Critical Swiss Reinsurance Co Ltd
Publication of EP1530780A1 publication Critical patent/EP1530780A1/de
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks

Definitions

  • the invention relates in particular to a computer program product for performing this method.
  • Experience rating belongs to the state of the art.
  • the event values of the same event show a time-dependent development over the different development years or development time intervals.
  • the experience rating of the values takes place by extrapolation or comparison with the development of values of known similar events in the past.
  • a typical example in the state of the art is multi-year experience rating based on claims data, for example the payment status Z or the reserve status R of a loss case at insurance companies or reinsurers.
  • an insurance company knows the development of each individual loss case from the time the loss is reported up to the current status or up to settlement.
  • the classic credibility formula was established about 30 years ago on the basis of a stochastic model; since then, numerous variants of the model have been developed, so that today one can speak of a credibility theory in its own right.
  • the main problem with the application of credibility formulas is the unknown parameters, which are determined by the structure of the portfolio.
  • the actuary knows bounds for the parameters and determines the optimal premium for the worst case.
  • Credibility theory also includes a number of models for reserving late claims. There are a variety of reserving methods that, unlike the credibility formula, do not depend on unknown parameters.
  • the prior art includes methods using stochastic models that describe the generation of the data. A number of results are available especially for the chain ladder method, one of the best-known methods for estimating outstanding claims payments or for extrapolating claims.
  • the strengths of the chain ladder method are, on the one hand, its simplicity and, on the other hand, that the method is almost distribution-free, i.e. it relies on almost no assumptions.
  • Distribution-free or non-parametric methods are particularly suitable for cases in which the user can only provide insufficient or no information about the expected distribution (eg Gaussian distribution, etc.) of the parameters to be developed.
  • an event P_i,f consists of a sequence of points of which the first K+1-i points are known and the still unknown points (P_i,K+2-i,f, ..., P_i,K,f) are to be forecast.
  • the values of the events P_i,f form a so-called loss triangle or, more generally, an event value triangle.
  • the rows and columns are formed by the loss years and the settlement years. Expressed in general terms, the rows contain, for example, the start years and the columns the development years of the events examined, although the presentation can also be different.
  • the chain ladder method (see the sketch below) is based on the cumulative loss triangles, whose entries C_ij are, for example, either pure claims payments or claims expenses (claims payments plus the change in claims reserves); the same applies to the cumulative matrix elements C_ij.
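
As an illustration only (not part of the patent text), the following minimal Python sketch projects a cumulative loss triangle to its ultimate values with the chain ladder method; the function name and the figures are hypothetical:

    import numpy as np

    def chain_ladder(triangle):
        # Project a cumulative run-off triangle to ultimate values.
        # triangle: square array; row i holds the cumulative values C[i, j] for
        # start year i and development year j, with np.nan where still unknown.
        C = np.array(triangle, dtype=float)
        n = C.shape[0]
        for j in range(n - 1):
            known = ~np.isnan(C[:, j]) & ~np.isnan(C[:, j + 1])
            f = C[known, j + 1].sum() / C[known, j].sum()    # development factor f_j
            fill = ~np.isnan(C[:, j]) & np.isnan(C[:, j + 1])
            C[fill, j + 1] = C[fill, j] * f                  # extrapolate missing entries
        return C

    tri = [[100, 150, 170, 180],
           [110, 165, 187, np.nan],
           [120, 180, np.nan, np.nan],
           [130, np.nan, np.nan, np.nan]]
    print(chain_ladder(tri).round(1))

Note how only column sums of the cumulated triangle enter the calculation, which is exactly the loss of individual-loss information criticized further below.
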
  • the individual event can also be inferred from the cumulative values interpolated using the chain ladder method, in that a specific distribution, typically a Pareto distribution, is assumed for the values:

    F(x) = 1 - (T/x)^α for x ≥ T,

    where T is a threshold value and α is the fit parameter.
  • the Pareto distribution is particularly suitable for lines of business such as insurance for large claims or reinsurance, etc. (a sketch of fitting α follows below).
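
Purely as an illustration, the fit parameter α of this Pareto model can be estimated by maximum likelihood (the Hill estimator); the sample values below are hypothetical:

    import numpy as np

    def pareto_alpha_mle(claims, threshold):
        # Maximum-likelihood estimate of alpha for F(x) = 1 - (T/x)**alpha,
        # fitted to the claims exceeding the threshold T.
        x = np.array([c for c in claims if c > threshold], dtype=float)
        return len(x) / np.log(x / threshold).sum()

    print(round(pareto_alpha_mle([1.2, 1.5, 2.3, 4.0, 7.5], threshold=1.0), 2))
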
  • the simplicity of the chain ladder method lies in the fact that it needs no more than the above loss triangle (cumulated over the development values of the individual events) and, for example, no information about registration data, reserving practice or assumptions about possible loss amount distributions, etc.
  • good chain ladder results are rather to be attributed to claim frequencies, since the estimators of the chain ladder method correspond to the maximum likelihood estimators of a model based on a modified Poisson distribution. Caution is therefore advisable, for example, for years in which changes to the distribution of loss amounts (e.g. an increase in the maximum amount of liability or changes in the deductible) are made, since these changes can lead to broken structures in the chain ladder method.
  • the use of the chain ladder method nevertheless leads to useful results in many cases, although information such as a reliable estimate of the ultimate loss ratio is rarely available due to the long settlement period.
  • the main disadvantage of the chain ladder method is that it is based on the cumulative loss triangle, i.e. the information about the development of the individual losses is lost through cumulation.
  • the IBNER reserve is particularly useful for the experience rating of excess-of-loss reinsurance contracts, where the reinsurer generally receives the necessary individual loss data, at least for the relevant major claims.
  • the development of a portfolio of risks over time describes a risk process in which claim numbers and claim amounts are modeled. In excess-of-loss reinsurance, the transition from the primary insurer to the reinsurer leads to the phenomenon of random thinning of the risk process; on the other hand, reinsurance brings together portfolios from several primary insurers, thereby superposing risk processes.
  • the effects of thinning and superposition have so far been examined primarily for Poisson risk processes.
  • for a given event, the most similar event is the one for which either the maximum distance

    max_{1≤j≤K+1-i} d(P_j, P'_j)

    or the current distance d(P_k, P'_k) is minimal.
  • the current distance is normally used. This means that for a loss (P_1, ..., P_k), whose development is known up to the k-th development year, of all other losses (P'_1, ..., P'_j) whose development is known at least up to development year j ≥ k+1, the one for which the current distance d(P_k, P'_k) is smallest is considered the most similar (see the sketch below).
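
For illustration (not from the patent text), a tiny sketch of selecting the most similar loss by the current distance; names and figures are made up:

    def most_similar(target, candidates, k):
        # Among candidate developments known at least up to year k + 1, return
        # the one whose value in the current year k is closest to the target.
        best, best_d = None, float("inf")
        for cand in candidates:
            if len(cand) >= k + 1:                    # known beyond year k
                d = abs(cand[k - 1] - target[k - 1])  # current distance d(P_k, P'_k)
                if d < best_d:
                    best, best_d = cand, d
        return best

    target = [100.0, 130.0]                           # loss known up to year k = 2
    pool = [[90.0, 125.0, 140.0, 150.0], [100.0, 170.0, 200.0]]
    print(most_similar(target, pool, k=2))
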
  • Neural networks are fundamentally known in the prior art and are used, for example, to solve optimization tasks, for image recognition (pattern recognition), in artificial intelligence, etc.
  • a neural network consists of a large number of network nodes, so-called neurons, which are interconnected via weighted connections (synapses). The neurons are organized and interconnected in network layers. The individual neurons are activated depending on their input signals and generate a corresponding output signal. A neuron is activated by summing over its input signals, each multiplied by an individual weighting factor.
  • Such neural networks are capable of learning by systematically changing the weighting factors as a function of predetermined exemplary input and output values until the neural network shows the desired behavior within a defined, predictable error range.
  • Neural networks thus have adaptive skills for learning and storing knowledge and associative skills for comparing new information with stored knowledge.
  • the neurons can assume an idle state or an excited state.
  • Each neuron has several inputs and exactly one output, which is connected to the inputs of other neurons of the subsequent network layer or represents a corresponding output value in the case of an output node.
  • a neuron changes to the excited state when a sufficient number of its inputs are excited above a certain threshold value of the neuron, i.e. when the summation over the inputs reaches a certain threshold value.
  • Knowledge is stored in the weights of the inputs of a neuron and in the threshold value of the neuron by adaptation.
  • the weights of a neural network are trained by means of a learning method (see, for example, G. Cybenko, "Approximation by Superpositions of a Sigmoidal Function", Math. Control, Sig. Syst., 2, 1989, pp 303-314; M.T. Hagan, M.B. Menhaj, "Training Feedforward Networks with the Marquardt Algorithm", IEEE Transactions on Neural Networks, Vol. 5, No. 6, pp 989-993, November 1994; K. Hornik, M. Stinchcombe, H. White, "Multilayer Feedforward Networks are Universal Approximators", Neural Networks, 2, 1989, pp 359-366, etc.).
  • an automated, simple and rational method is to be proposed in order to develop a given loss further with an individual increase or factor, so that subsequently all information about the development of an individual loss is available.
  • as few assumptions as possible about the distribution should be made from the outset, and at the same time the maximum possible information from the given cases should be used.
  • for determining the unknown development values (P_i,K+2-i,f, ..., P_i,K,f), at least one neural network is used.
  • the start time interval can be assigned to a start year and the development intervals can be assigned to development years.
  • the development values P_i,k,f of the various events P_i,f can be standardized according to their start time interval using at least one standardization factor.
  • the standardization of the development values P_i,k,f has the advantage, among other things, that the development values are comparable at different times (see the sketch below).
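
A minimal sketch of such a standardization, assuming one scaling factor per start time interval; the keying by (start year, event) and the factor values are illustrative assumptions:

    import numpy as np

    def standardize(values, factors):
        # Divide the development values of each event by the standardization
        # factor of its start time interval i; keys are (start_year, event_id).
        return {key: np.array(v, dtype=float) / factors[key[0]]
                for key, v in values.items()}

    values = {(1, "f1"): [100, 150, 170], (2, "f2"): [120, 160]}
    factors = {1: 1.00, 2: 1.05}                      # e.g. an index per start year
    print(standardize(values, factors))
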
  • This embodiment variant has the further advantage, among other things, that no model assumptions, for example about value distributions, system dynamics, etc., have to be made for the automated experience rating. In particular, the experience rating is free from prerequisites on the distance measure, such as the Euclidean measure. This is not possible with the prior art.
  • the entire information of the data sample is used without the data records being cumulated. The complete information about the individual events is retained at each stage and can be retrieved at the end.
  • the advantage of standardization is that data records with different initial time intervals receive comparable orders of magnitude and can therefore be compared better.
  • This embodiment variant has the advantage, among other things, that, as in the previous embodiment variant, the entire information of the data sample is used without the data records being accumulated. The complete information about the individual events is retained in each stage and can be called up again at the end.
  • the networks can be further optimized by minimizing a globally introduced error.
  • the neural networks N_i,j are trained identically for the same development years and/or development intervals j, a new neural network being generated for a start time interval and/or start year i+1, and all other neural networks N_i+1,j being taken over from earlier start time intervals and/or start years.
  • This embodiment variant has the advantage, among other things, that only known data are used to classify experience and certain data are no longer used by the system, thereby preventing the correlation of the errors or the data.
  • events P_i-1,f with a start time interval i-1 are additionally used for the determination, all development values P_i-1,k,f being known for the events P_i-1,f.
  • This embodiment variant has the advantage, among other things, that the additional data sets enable the neural networks to be better optimized and their error to be minimized.
  • a system comprises neural networks N_j, each with an input layer with at least one input segment and an output layer, the input and output layers comprising a multiplicity of neurons which are connected to one another in a weighted manner. The neural networks N_j can be generated iteratively in software and/or hardware by means of a computing unit, a neural network N_j+1 depending recursively on the neural network N_j and each network N_j+1 comprising one input segment more than the network N_j. Starting with the neural network N_1, each neural network N_j can be trained by means of a minimization module by minimizing a locally propagated error, and the recursive system of neural networks can be trained by means of a minimization module by minimizing a globally propagated error based on the local errors of the neural networks N_j. (A structural sketch follows below.)
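
The following structural sketch is illustrative only, not the patent's implementation: it builds networks N_j with one input segment more each, chains each output into the next network's input, and exposes a local and a global quadratic error that a minimization module could then reduce, e.g. by gradient descent over all weights:

    import numpy as np

    rng = np.random.default_rng(0)
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))

    class Net:
        # Minimal network N_j: j input values -> one predicted next value.
        def __init__(self, n_in, n_hidden=4):
            self.W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
            self.b1 = np.zeros(n_hidden)
            self.W2 = rng.normal(0.0, 0.5, (1, n_hidden))
            self.b2 = np.zeros(1)
        def predict(self, x):
            x = np.asarray(x, dtype=float)
            return float(self.W2 @ sig(self.W1 @ x + self.b1) + self.b2)

    K = 5
    nets = [Net(n_in=j) for j in range(1, K)]   # N_1 ... N_{K-1}, one segment more each

    def chain_extrapolate(known):
        # Determination phase: the output of N_j feeds an input segment of N_{j+1}.
        row = list(known)
        for j in range(len(row), K):
            row.append(nets[j - 1].predict(row[:j]))
        return row

    def local_error(net, pairs):
        # locally propagated quadratic error of one network on its training pairs
        return sum((net.predict(x) - t) ** 2 for x, t in pairs)

    def global_error(full_rows):
        # globally propagated error of the chained system on fully known rows
        return sum((chain_extrapolate(r[:1])[j] - r[j]) ** 2
                   for r in full_rows for j in range(1, K))

    print(global_error([[1.0, 0.9, 0.8, 0.7, 0.6]]))
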
  • This embodiment variant has the advantage, among other things, that the recursively generated neural networks can be additionally optimized by means of the global error. Among other things, it is the combination of the recursive generation of the neural network structure with a double minimization by means of locally propagated errors and globally propagated errors, which gives the advantages of this embodiment variant.
  • the output layer of the neural network N_j is connected to at least one input segment of the input layer of the neural network N_j+1.
  • This design variant has, among other things, the advantage that the system of neural networks can in turn be understood as a neural network. In this way, subnetworks of an entire network can be weighted locally, and their behavior can also be checked and monitored in the case of global learning by means of the corresponding data sets. Until now, this was not possible in the prior art.
  • the present invention also relates to a system for carrying out this method. Furthermore, it is not limited to the system and method mentioned, but also relates to recursively nested systems of neural networks and a computer program product for implementing the method according to the invention.
  • Figure 1 shows a block diagram which schematically shows the training phase and the determination phase of a neural network for determining the event value P_2,5,f of an event P_f in an upper 5x5 matrix, i.e. at K = 5.
  • the dashed line T indicates the training phase and the solid line R the determination phase after learning.
  • Figure 2 likewise shows a block diagram which, like Figure 1, schematically shows the training or determination phase of a neural network for determining the event value P_3,4,f for the third start year.
  • Figure 3 shows a block diagram which, like Figure 1, schematically shows the training or determination phase of a neural network for determining the event value P_3,5,f for the third start year.
  • Figure 4 shows a block diagram which schematically shows only the training phase for determining P_3,4,f and P_3,5,f, the calculated values P_3,4,f being used to train the network for determining P_3,5,f.
  • FIG. 5 shows a block diagram which schematically shows the recursive generation of neural networks for determining the values in line 3 of a 5 ⁇ 5 matrix, 2 networks being generated.
  • FIG. 6 shows a block diagram which schematically shows the recursive generation of neural networks for determining the values in line 5 of a 5 ⁇ 5 matrix, 4 networks being generated.
  • FIG. 7 shows a block diagram, which likewise schematically shows a system according to the invention, the training basis being restricted to the known event values A_ij.
  • FIG. 1 to 7 schematically illustrate an architecture that can be used to implement the invention.
  • for the automated experience rating of events and/or claims reserving, a specific event P_i,f of a start year i comprises development values P_i,k,f.
  • the development value P_i,k,f = (Z_i,k,f, R_i,k,f, ...) is an arbitrary vector and/or n-tuple of development parameters Z_i,k,f, R_i,k,f, ... which are to be developed for an event.
  • For example, Z_i,k,f can be the payment status and R_i,k,f the reserve status, etc. Any other relevant parameters for an event can be envisaged without affecting the scope of the invention.
  • P_i,f is therefore an n-tuple consisting of a sequence of points and/or matrix elements of a block matrix.
  • For determining the unknown development values P_i,K+2-i,f = (Z_i,K+2-i,f, R_i,K+2-i,f), ..., P_i,K,f = (Z_i,K,f, R_i,K,f), the system and/or method comprises at least one neural network.
  • For example, conventional static and/or dynamic neural networks can be selected, such as feedforward (heteroassociative) networks, e.g. a perceptron or a multi-layer perceptron (MLP); other network structures, such as recurrent network structures, are also conceivable.
  • the different network structure of feedforward networks, in contrast to networks with feedback (recurrent networks), determines the way in which information is processed by the network.
  • An MLP (multi-layer perceptron) consists of several neuron layers with at least one input layer and one output layer.
  • the structure is strictly feedforward; an MLP thus belongs to the group of feedforward networks.
  • neural networks map an m-dimensional input signal to an n-dimensional output signal.
  • the information to be processed is taken up by a layer with input neurons, the input layer, in the feed forward network considered here.
  • the input neurons process the input signals and pass them on via weighted connections, so-called synapses, to one or more hidden neuron layers, the hidden layers.
  • the signal is also transmitted from the hidden layers to neurons of an output layer by means of weighted synapses, which in turn generate the output signal of the neural network.
  • each neuron in a particular layer is connected to all neurons in the subsequent layer.
  • the simplest way is to determine the ideal network structure empirically. It should be noted that if the number of neurons selected is too large, the network merely maps the training data instead of learning, while if the number of neurons is too small, the mapped parameters become correlated. In other words, if the number of neurons is chosen too small, the mapping may not be representable.
  • in the physical analogy (simulated annealing), the material is allowed to cool so slowly that the molecules still have enough energy to jump out of a local minimum.
  • correspondingly, a temperature variable T is introduced into a slightly modified error function. Ideally, the training then converges to a global minimum.
  • neural networks with at least a three-layer structure have proven useful for the application of experience rating. This means that the networks comprise at least one input layer, one hidden layer and one output layer. The three processing steps of propagation, activation and output take place within each neuron. The output of the i-th neuron of the k-th layer results as

    u_i^k = f( Σ_j w_ij · u_j^(k-1) + b_i ), j = 1, ..., N_(k-1),

    where N_(k-1) is the number of neurons in layer k-1.
  • w_ij is referred to as the weight and b_i as the bias (threshold value).
  • the bias b can be selected to be the same or different for all neurons of a specific layer.
  • a log-sigmoid function, for example, can be selected as the activation function:

    f(x) = 1 / (1 + e^(-x))

  • the activation function (or transfer function) is used in every neuron. Other activation functions, such as tangential functions, are also possible according to the invention. With the backpropagation method, however, it must be ensured that a differentiable activation function, such as a sigmoid function, is chosen, as this is a prerequisite for the method; binary activation functions, for example, are therefore not suitable. (A sketch of the forward pass follows below.)
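
As a pure illustration of these processing steps, a forward pass with log-sigmoid activation; the weights and input signals are made-up values:

    import numpy as np

    def logsig(x):
        # log-sigmoid activation f(x) = 1 / (1 + e^(-x))
        return 1.0 / (1.0 + np.exp(-x))

    def layer(u_prev, W, b):
        # one layer: u_i = f( sum_j w_ij * u_j^(k-1) + b_i )
        return logsig(W @ u_prev + b)

    u0 = np.array([0.2, 0.7, 0.1])                 # input layer signal
    W1, b1 = np.full((4, 3), 0.5), np.zeros(4)     # hidden layer, 4 neurons
    W2, b2 = np.full((1, 4), 0.5), np.zeros(1)     # output layer, 1 neuron
    print(layer(layer(u0, W1, b1), W2, b2))
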
  • the set of training patterns (index λ) consists of the input signals Y_λ and the associated target output signals U_λ.
  • the training patterns include the known events P_i,f with the known development values P_i,k,f for all k, f and i.
  • the development values of the events to be extrapolated can of course not be used for training the neural networks, since the corresponding output value is missing for them.
  • the initialization of the weights of the hidden layers, in this exemplary embodiment thus of the neurons, can be carried out, for example with a log-sigmoid activation function, according to Nguyen-Widrow (D. Nguyen, B. Widrow).
  • the task of the training method is to determine the synapse weights w_ij and the biases b_ij within the weight matrix W and the bias matrix B in such a way that the input patterns Y_λ are mapped to the corresponding output patterns U_λ.
  • to assess the learning state, the absolute quadratic error

    Err = Σ_λ (U_λ - U_λ^eff)²

    can be used.
  • the error Err takes into account all patterns λ of the training base, the effective output signals U_λ^eff being compared with the target reactions U_λ specified in the training base.
  • For example, the backpropagation method can be selected as the learning method.
  • the backpropagation method is a recursive method for optimizing the weight factors w_ij.
  • an input pattern Y ⁇ is selected at random and propagated through the network (forward propagation).
  • by comparing the output signal generated by the network with the target reaction U_λ specified in the training base, the error Err_λ for the presented input pattern is determined.
  • the changes in the individual weights w_ij after the presentation of the λ-th training pattern are proportional to the negative partial derivative of the error Err_λ with respect to the weight w_ij (so-called gradient descent method):

    Δw_ij ∝ -∂Err_λ / ∂w_ij
  • the adaptation rules for the elements of the weight matrix, known as the backpropagation rule, can be derived from this partial derivative for the presentation of the λ-th training pattern:

    Δw_ij = ε · δ_i · u_j, with
    δ_i = f'(net_i) · (U_i - U_i^eff) for the output layer, and
    δ_i = f'(net_i) · Σ_k δ_k w_ki for the hidden layers.

  • the error is thus propagated backwards, starting with the output layer and proceeding through the hidden layers, which gives the method its name.
  • the proportionality factor ε is called the learning factor. (A training sketch follows below.)
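
A self-contained sketch of such backpropagation training (stochastic gradient descent with the delta rule) on a toy pattern set; the architecture, learning factor and data are illustrative assumptions, not the patent's parameters:

    import numpy as np

    rng = np.random.default_rng(1)
    f = lambda x: 1.0 / (1.0 + np.exp(-x))    # differentiable log-sigmoid
    df = lambda y: y * (1.0 - y)              # its derivative, in terms of the output y

    def train_backprop(X, U, n_hidden=5, eps=0.5, epochs=5000):
        W1 = rng.normal(0, 0.5, (n_hidden, X.shape[1])); b1 = np.zeros(n_hidden)
        W2 = rng.normal(0, 0.5, (U.shape[1], n_hidden)); b2 = np.zeros(U.shape[1])
        for _ in range(epochs):
            lam = rng.integers(len(X))              # random training pattern (index lambda)
            y = X[lam]
            h = f(W1 @ y + b1)                      # forward propagation
            u_eff = f(W2 @ h + b2)
            d_out = df(u_eff) * (U[lam] - u_eff)    # delta of the output layer
            d_hid = df(h) * (W2.T @ d_out)          # delta of the hidden layer
            W2 += eps * np.outer(d_out, h); b2 += eps * d_out
            W1 += eps * np.outer(d_hid, y); b1 += eps * d_hid
        return W1, b1, W2, b2

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # toy input patterns
    U = np.array([[0], [1], [1], [0]], dtype=float)               # target reactions
    W1, b1, W2, b2 = train_backprop(X, U)
    print(f(W2 @ f(W1 @ X.T + b1[:, None]) + b2[:, None]).T.round(2))
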
  • In the training phase, a limited number of training patterns are presented to a neural network, which characterize the mapping to be learned with sufficient accuracy.
  • the training patterns can include all known events P_i,f with the known development values P_i,k,f for all k, f and i; a selection from the known events P_i,f is also conceivable. If the network is then presented with an input signal that does not match any pattern of the training base, the network interpolates or extrapolates between the training patterns as part of the learned mapping function.
  • This property is known as the generalization capability of neural networks. It is characteristic of neural networks that they have good fault tolerance, which is another advantage over the prior art systems. Since neural networks map a large number of (partially redundant) input signals onto the desired output signal(s), the networks prove to be robust against the failure of individual input signals or against signal noise. Another interesting property of neural networks is their ability to learn: in principle, it is possible to retrain or permanently adapt a system once trained during operation, which is also an advantage over the systems of the prior art. Of course, other methods can also be used as the learning method, such as the Levenberg-Marquardt method (D. Marquardt, "An Algorithm for Least Squares Estimation of Nonlinear Parameters", J. Soc. Ind. Appl. Math., pp 431-441, 1963, and M.T. Hagan, M.B. Menhaj, "Training Feedforward Networks with the Marquardt Algorithm", IEEE Transactions on Neural Networks, Vol. 5, No. 6, pp 989-993, November 1994).
  • the Levenberg-Marquardt method is a combination of the gradient method and the Newton method and has the advantage that it converges faster than the backpropagation method mentioned above, but requires a higher storage capacity during the training phase.
  • the neural network N_i,j+1 depends recursively on the neural network N_i,j. For weighting, i.e. for training, a certain neural network N_i,j, for example, all events whose relevant development values are known can be used.
  • the data of the events P_p,q can be read out from a database and presented to the system, for example via a computing unit.
  • a calculated development value P_i,k,f can, for example, be assigned to the corresponding event P_i,f of a start year i and can itself be presented to the system for determining the next development value (for example P_i,k+1,f) (Fig. 1 to 6), or the assignment takes place only after the determination of all sought development values P (Fig. 7).
  • Each input segment comprises at least one input neuron, or at least as many input neurons as are needed to receive the input signal for a development value P_i,k,f.
  • the neural networks are generated automatically by the system and can be implemented in hardware or software.
  • the neural network N_i,j is weighted, i.e. trained, with the available events P_m,f of all start years m = 1, ..., (i-1), using the development values P_m,1..K-(i-j),f as input and P_m,K-(i-j)+1,f as output.
  • the neural network N_i,j determines the output values O_j,f for all events P_i,f of the start year i, the output value O_j,f being assigned to the development value P_i,K-(i-j)+1,f of the event P_i,f; the neural network N_i,j+1 depends recursively on the neural network N_i,j. (A sketch of assembling such training pairs follows below.)
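
Under the indexing reconstructed above (itself an interpretation of the garbled original), the training pairs for a network N_i,j could be assembled from the rows of earlier start years roughly as follows; the triangle values are hypothetical:

    import numpy as np

    def training_pairs(triangle, i, j, K):
        # Pairs for N_{i,j}: events of the earlier start years m = 1..i-1, with
        # the values P_{m,1..K-(i-j),f} as input and P_{m,K-(i-j)+1,f} as target.
        cut = K - (i - j)                       # number of input values used
        pairs = []
        for m in range(1, i):                   # earlier start years only
            row = np.asarray(triangle[m - 1], dtype=float)
            if not np.isnan(row[:cut + 1]).any():
                pairs.append((row[:cut], row[cut]))
        return pairs

    tri = [[100, 150, 170, 180, 185],
           [110, 165, 187, 198, np.nan],
           [120, 180, 204, np.nan, np.nan],
           [130, 195, np.nan, np.nan, np.nan],
           [140, np.nan, np.nan, np.nan, np.nan]]
    print(training_pairs(tri, i=3, j=2, K=5))   # pairs for network N_{3,2}

Consistently with Figures 4 to 6, development values determined by an earlier network could be substituted into the rows before the pairs for the next network are collected.
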
  • the dashed line T indicates the training phase and the solid line R the determination phase after learning.
  • Figure 2 shows the same for the third start year for the determination of P_3,4,f (B_34), and Figure 3 for P_3,5,f.
  • FIG. 4 shows only the training phase for determining P_3,4,f and P_3,5,f, the generated values P_3,4,f (B_34) being used to train the network for determining P_3,5,f.
  • In the figures, A_ij indicates the known values, while B_ij indicates the values determined using the networks.
  • FIG. 5 shows the recursive generation of the neural networks for determining the values in line 3 of a 5x5 matrix, whereby i-1 networks are generated, that is to say 2.
  • FIG. 6 shows the recursive generation of the neural networks for determining the values in line 5 of a 5x5 matrix, whereby again i-1 networks are generated, i.e. 4.
  • FIG. 7 shows such a method, the training basis being restricted to the known event values A_ij.
  • different neural networks can be trained, for example, based on different data.
  • the networks can be trained based on the paid claims, based on the incurred claims, based on the paid and outstanding claims (reserves) and/or based on the paid and incurred claims.
  • the best neural network for each case can be determined, for example, by minimizing the absolute mean error of the predicted values and the real values.
  • the ratio of the mean error to the mean predicted value (known claims) can be applied to the predicted values of the modeled values to obtain the error.
  • the error must of course be accumulated accordingly. This can be achieved, for example, using the square root of the sum of the squares of the individual errors of each model (see the sketch below).
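
For instance (with illustrative values), the accumulation of the individual model errors as the root of the sum of squares:

    import math

    def combined_error(errors):
        # accumulate individual model errors as the square root of the sum of squares
        return math.sqrt(sum(e * e for e in errors))

    print(round(combined_error([1.5, 2.0, 0.5]), 2))   # -> 2.55
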
  • the predicted values are also fitted using the Pareto distribution mentioned.
  • this estimation can also be used, for example, to determine the best neural network among the neural networks trained (as described in the last section) with different data sets (e.g. paid claims, outstanding claims, etc.). With the Pareto distribution this follows as

    T(i) = Th · (1 - P(i))^(-1/α)

    where
    α is the fit parameter,
    Th is the threshold value,
    T(i) is the theoretical value of the i-th payment claim,
    O(i) is the observed value of the i-th payment claim,
    E(i) is the error of the i-th payment claim, and
    P(i) is the cumulative probability of the i-th payment claim.
  • the error shown here corresponds to the standard deviation, i.e. the ±1σ error, of the specified values. (A computational sketch follows below.)
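
A sketch of computing the theoretical Pareto values T(i) and the resulting errors E(i); the plotting-position convention P(i) = i/(n+1) is an assumption here, since the original definition of P(i) is truncated:

    import numpy as np

    def pareto_theoretical(observed, Th, alpha):
        # T(i) = Th * (1 - P(i))**(-1/alpha), compared with the observed O(i).
        O = np.sort(np.array(observed, dtype=float))
        n = len(O)
        P = np.arange(1, n + 1) / (n + 1.0)    # assumed cumulative probability P(i)
        T = Th * (1.0 - P) ** (-1.0 / alpha)
        return T, O - T                        # theoretical values and errors E(i)

    T, E = pareto_theoretical([1.2, 1.5, 2.3, 4.0, 7.5], Th=1.0, alpha=1.04)
    print(T.round(2), E.round(2))
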
  • when determining the values, the system based on neural networks shows a clear advantage over the methods of the prior art in that the errors remain essentially stable. This is not the case in the prior art, where the error increases disproportionately with increasing i.
  • for large i, there is a clear difference in the amount of the cumulative values between the chain ladder values and those obtained with the method according to the invention. This deviation is due to the fact that the IBNYR (Incurred But Not Yet Reported) losses are also taken into account in the chain ladder method.
  • to be comparable, the IBNYR losses would have to be added to the values of the method according to the invention shown above.
  • the IBNYR losses can be taken into account by means of a separate development (e.g. chain ladder).
  • the IBNYR losses do not play a role in the reserving of individual losses or in the determination of loss amount distributions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Development Economics (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Technology Law (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)
  • Complex Calculations (AREA)
EP03817702A 2003-09-10 2003-09-10 System und verfahren zur automatisierten erfahrungstarifierung und/oder schadensreservierung Ceased EP1530780A1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CH2003/000612 WO2005024717A1 (de) 2003-09-10 2003-09-10 System und verfahren zur automatisierten erfahrungstarifierung und/oder schadensreservierung

Publications (1)

Publication Number Publication Date
EP1530780A1 true EP1530780A1 (de) 2005-05-18

Family

ID=34230818

Family Applications (1)

Application Number Title Priority Date Filing Date
EP03817702A Ceased EP1530780A1 (de) 2003-09-10 2003-09-10 System und verfahren zur automatisierten erfahrungstarifierung und/oder schadensreservierung

Country Status (7)

Country Link
US (1) US20060015373A1 (ja)
EP (1) EP1530780A1 (ja)
JP (1) JP2006522376A (ja)
CN (1) CN1689036A (ja)
AU (1) AU2003257361A1 (ja)
CA (1) CA2504810A1 (ja)
WO (1) WO2005024717A1 (ja)

Families Citing this family (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030125997A1 (en) * 2001-12-20 2003-07-03 Allison Stoltz System and method for risk assessment
US20030177032A1 (en) * 2001-12-31 2003-09-18 Bonissone Piero Patrone System for summerizing information for insurance underwriting suitable for use by an automated system
US7899688B2 (en) 2001-12-31 2011-03-01 Genworth Financial, Inc. Process for optimization of insurance underwriting suitable for use by an automated system
US20030182159A1 (en) * 2001-12-31 2003-09-25 Bonissone Piero Patrone Process for summarizing information for insurance underwriting suitable for use by an automated system
US7630910B2 (en) * 2001-12-31 2009-12-08 Genworth Financial, Inc. System for case-based insurance underwriting suitable for use by an automated system
US7844476B2 (en) * 2001-12-31 2010-11-30 Genworth Financial, Inc. Process for case-based insurance underwriting suitable for use by an automated system
US8005693B2 (en) * 2001-12-31 2011-08-23 Genworth Financial, Inc. Process for determining a confidence factor for insurance underwriting suitable for use by an automated system
US8793146B2 (en) 2001-12-31 2014-07-29 Genworth Holdings, Inc. System for rule-based insurance underwriting suitable for use by an automated system
US7895062B2 (en) 2001-12-31 2011-02-22 Genworth Financial, Inc. System for optimization of insurance underwriting suitable for use by an automated system
US7844477B2 (en) 2001-12-31 2010-11-30 Genworth Financial, Inc. Process for rule-based insurance underwriting suitable for use by an automated system
US7818186B2 (en) * 2001-12-31 2010-10-19 Genworth Financial, Inc. System for determining a confidence factor for insurance underwriting suitable for use by an automated system
US7813945B2 (en) * 2003-04-30 2010-10-12 Genworth Financial, Inc. System and process for multivariate adaptive regression splines classification for insurance underwriting suitable for use by an automated system
US20040236611A1 (en) * 2003-04-30 2004-11-25 Ge Financial Assurance Holdings, Inc. System and process for a neural network classification for insurance underwriting suitable for use by an automated system
US7567914B2 (en) * 2003-04-30 2009-07-28 Genworth Financial, Inc. System and process for dominance classification for insurance underwriting suitable for use by an automated system
US7801748B2 (en) * 2003-04-30 2010-09-21 Genworth Financial, Inc. System and process for detecting outliers for insurance underwriting suitable for use by an automated system
US7383239B2 (en) 2003-04-30 2008-06-03 Genworth Financial, Inc. System and process for a fusion classification for insurance underwriting suitable for use by an automated system
US9311676B2 (en) 2003-09-04 2016-04-12 Hartford Fire Insurance Company Systems and methods for analyzing sensor data
US7711584B2 (en) 2003-09-04 2010-05-04 Hartford Fire Insurance Company System for reducing the risk associated with an insured building structure through the incorporation of selected technologies
US20050125253A1 (en) * 2003-12-04 2005-06-09 Ge Financial Assurance Holdings, Inc. System and method for using medication and medical condition information in automated insurance underwriting
US7698159B2 (en) * 2004-02-13 2010-04-13 Genworth Financial Inc. Systems and methods for performing data collection
US7555438B2 (en) * 2005-07-21 2009-06-30 Trurisk, Llc Computerized medical modeling of group life insurance using medical claims data
US7555439B1 (en) 2005-07-21 2009-06-30 Trurisk, Llc Computerized medical underwriting of group life insurance using medical claims data
US7664662B1 (en) * 2006-03-16 2010-02-16 Trurisk Llc Computerized medical modeling of group life and disability insurance using medical claims data
US7249040B1 (en) * 2006-03-16 2007-07-24 Trurisk, L.L.C. Computerized medical underwriting of group life and disability insurance using medical claims data
US20080077451A1 (en) * 2006-09-22 2008-03-27 Hartford Fire Insurance Company System for synergistic data processing
US8359209B2 (en) * 2006-12-19 2013-01-22 Hartford Fire Insurance Company System and method for predicting and responding to likelihood of volatility
WO2008079325A1 (en) 2006-12-22 2008-07-03 Hartford Fire Insurance Company System and method for utilizing interrelated computerized predictive models
US20090043615A1 (en) * 2007-08-07 2009-02-12 Hartford Fire Insurance Company Systems and methods for predictive data analysis
US9665910B2 (en) * 2008-02-20 2017-05-30 Hartford Fire Insurance Company System and method for providing customized safety feedback
US20100070398A1 (en) * 2008-08-08 2010-03-18 Posthuma Partners Ifm Bv System and method for combined analysis of paid and incurred losses
US8355934B2 (en) * 2010-01-25 2013-01-15 Hartford Fire Insurance Company Systems and methods for prospecting business insurance customers
US9460471B2 (en) 2010-07-16 2016-10-04 Hartford Fire Insurance Company System and method for an automated validation system
US8727991B2 (en) * 2011-08-29 2014-05-20 Salutron, Inc. Probabilistic segmental model for doppler ultrasound heart rate monitoring
US10937102B2 (en) * 2015-12-23 2021-03-02 Aetna Inc. Resource allocation
US10394871B2 (en) 2016-10-18 2019-08-27 Hartford Fire Insurance Company System to predict future performance characteristic for an electronic record
CN107784590A (zh) * 2017-02-16 2018-03-09 平安科技(深圳)有限公司 一种理赔准备金的评估方法和装置
US20210390624A1 (en) 2017-09-27 2021-12-16 State Farm Mutual Automobile Insurance Company Real Property Monitoring Systems and Methods for Risk Determination
US10460214B2 (en) * 2017-10-31 2019-10-29 Adobe Inc. Deep salient content neural networks for efficient digital object segmentation
CN109214937A (zh) * 2018-09-27 2019-01-15 上海远眸软件有限公司 保险理赔智能反欺诈判定方法和系统
WO2020130171A1 (ko) * 2018-12-18 2020-06-25 (주)아크릴 뉴럴 네트워크를 이용한 언더라이팅 자동화 장치 및 방법
JP6813827B2 (ja) * 2019-05-23 2021-01-13 株式会社アルム 情報処理装置、情報処理システム、および情報処理プログラム
CN110245879A (zh) * 2019-07-02 2019-09-17 中国农业银行股份有限公司 一种风险评级方法及装置
US20210295170A1 (en) * 2020-03-17 2021-09-23 Microsoft Technology Licensing, Llc Removal of engagement bias in online service
US12020400B2 (en) 2021-10-23 2024-06-25 Adobe Inc. Upsampling and refining segmentation masks

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5761442A (en) * 1994-08-31 1998-06-02 Advanced Investment Technology, Inc. Predictive neural network means and method for selecting a portfolio of securities wherein each network has been trained using data relating to a corresponding security
US5987444A (en) * 1997-09-23 1999-11-16 Lo; James Ting-Ho Robust neutral systems
US6430539B1 (en) * 1999-05-06 2002-08-06 Hnc Software Predictive modeling of consumer financial behavior
CA2424588A1 (en) * 2000-10-18 2002-04-25 Steve Shaya Intelligent performance-based product recommendation system
WO2002047026A2 (de) * 2000-12-07 2002-06-13 Kates Ronald E Verfahren zur ermittlung konkurrierender risiken

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DUHOUX M. ET AL: "Improved Long-Term Temperature Prediction by Chaining of Neural Networks", INTERNATIONAL JOURNAL OF NEURAL SYSTEMS, vol. 11, no. 1, February 2001 (2001-02-01), pages 1 - 10, XP002352477 *

Also Published As

Publication number Publication date
US20060015373A1 (en) 2006-01-19
WO2005024717A1 (de) 2005-03-17
CA2504810A1 (en) 2005-03-17
JP2006522376A (ja) 2006-09-28
CN1689036A (zh) 2005-10-26
AU2003257361A1 (en) 2005-03-29

Similar Documents

Publication Publication Date Title
EP1530780A1 (de) System und verfahren zur automatisierten erfahrungstarifierung und/oder schadensreservierung
EP2112568B1 (de) Verfahren zur rechnergestützten Steuerung und/oder Regelung eines technischen Systems
EP2185980B1 (de) Verfahren zur rechnergestützten steuerung und/oder regelung mit hilfe neuronaler netze
DE102019116305A1 (de) Pipelining zur verbesserung der inferenzgenauigkeit neuronaler netze
DE102012009502A1 (de) Verfahren zum Trainieren eines künstlichen neuronalen Netzes
DE112018004992B4 (de) Übertragung synaptischer gewichte zwischen leitfähigkeitspaaren mitpolaritätsumkehr zum verringern fester einheitenasymmetrien
DE68927014T2 (de) Assoziatives Musterkonversionssystem und Anpassungsverfahren dafür
EP2106576A1 (de) Verfahren zur rechnergestützten steuerung und/oder regelung eines technischen systems
DE60125536T2 (de) Anordnung zur generierung von elementensequenzen
EP0925541B1 (de) Verfahren und vorrichtung zur rechnergestützten generierung mindestens eines künstlichen trainingsdatenvektors für ein neuronales netz
DE10139682B4 (de) Verfahren zum Generieren von neuronalen Netzen
DE112020005613T5 (de) Neuromorphe Einheit mit Kreuzschienen-Array-Struktur
Kendall A multi-agent based simulated stock market-testing on different types of stocks
DE60022398T2 (de) Sequenzgenerator
Yaakob et al. A hybrid intelligent algorithm for solving the bilevel programming models
BOISSEAU et al. Highway traffic forecasting using artificial neural networks
DE10356655B4 (de) Verfahren und Anordnung sowie Computerprogramm mit Programmcode-Mitteln und Computerprogramm-Produkt zur Ermittlung eines zukünftigen Systemzustandes eines dynamischen Systems
DE102019216973A1 (de) Lernverfahren für neuronale netze basierend auf evolutionären algorithmen
EP0548127A1 (de) Neuronales Netzwerk und Schaltungsanordnung zur Bool'schen Realisierung neuronaler Netze vom ADALINE-Typ.
DE102004059684B3 (de) Verfahren und Anordnung sowie Computerprogramm mit Programmmcode-Mitteln und Computerprogramm-Produkt zur Ermittlung eines zukünftigen Systemzustandes eines dynamischen Systems
WO2005048143A1 (de) System und verfahren zur automatisierten kreditrisikoindexierung
WO2000063751A1 (de) Verfahren und anordnung zur modellierung eines technischen systems
EP3710992A1 (de) Künstliches neuronales netz und verfahren hierzu
WO2013182176A1 (de) Verfahren zum trainieren eines künstlichen neuronalen netzes und computerprogrammprodukte
Bulinskaya LIMIT BEHAVIOR AND STABILITY OF APPLIED PROBABILITY SYSTEMS

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20050107

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20070226