A NEW VERSION (nominal revision 1520, dated 2024-02-16) of the open-source Reference Implementation of "Let's Be Rational".

Revision 1520 adds an exported / externally accessible version of erfinv() based on the internal branch logic of inv_norm_cdf(), but structured such that it avoids catastrophic subtractive cancellation for small arguments to erfinv(). The specialisation for the ATM case now goes via erfinv(). Thanks to Leif Andersen for pointing this out (I had previously effectively duplicated the same logic in my internal function 'implied_normalised_volatility_atm'). The timing main programme has been made slightly more robust in the way it identifies the attainable s_min and s_max values for its internal loop. Timing is done by 'make timing' on the command line, or by running the main programme inside the Visual Studio solution (detached from the debugger, i.e., invoked with Ctrl-F5). No further functional/accuracy/speed changes otherwise.
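
The cancellation issue can be seen from the textbook identity erfinv(p) = inv_norm_cdf((1+p)/2)/√2. A minimal sketch of that naive route (assuming inv_norm_cdf() has the signature double inv_norm_cdf(double)) is:
#include <cmath>
double inv_norm_cdf(double p); // the library's inverse cumulative normal (assumed signature)
// Naive erfinv via erfinv(p) = inv_norm_cdf((1+p)/2)/sqrt(2).  For small |p|,
// forming (1+p)/2 discards the low-order bits of p, so the result carries a
// large *relative* error -- hence the dedicated branch logic inside the
// exported erfinv() of revision 1520.
double naive_erfinv(double p) {
    return inv_norm_cdf(0.5 * (1.0 + p)) / std::sqrt(2.0);
}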

On a laptop (Asus Zenbook 14, purchased in the summer of 2023 at Argos in the UK) with CPU type '12th Gen Intel(R) Core(TM) i5-12500H', the calculation of a single implied volatility takes, on average (over ~42 million invocations), just 180 nanoseconds.

That's more than 5.5 million implied volatilities per second.

In this version, we also provide Python frontends, Octave (the GNU open-source alternative to Matlab) frontends, and even a gnuplot add-in.

NEW: includes a new inverse cumulative normal function algorithm ("PJ-2024-Inverse-Normal") that is just as accurate as AS241 (perfect for all intents and purposes) but about 20%-30% faster. Please see files README.txt and normaldistribution.cpp for details. Naturally, this may be of use in contexts beyond "Let's Be Rational".

Permission to use, copy, modify, and distribute this software is freely granted, provided that the contained copyright notice is preserved. WARRANTY DISCLAIMER: The Software is provided "as is" without warranty of any kind, either express or implied, including without limitation any implied warranties of condition, uninterrupted use, merchantability, fitness for a particular purpose, or non-infringement.

Here is my wrapped & adapted source code for NL2SOL, aka 'ACM TOMS 573'.

Calibration and parameter fitting is a ubiquitous numerical problem in quantitative finance. I have been using NL2SOL, a non-linear least-squares fitter (the big brother of Levenberg-Marquardt, if you will), for more than 25 years and it still beats anything I have compared it against (though Google's Ceres solver may have similar performance, albeit with a reportedly somewhat more involved API).

NL2SOL is mentioned under "More advanced methods" in section 15.5.4 in Numerical Recipes:

NL2SOL is a highly regarded nonlinear least-squares implementation with many advanced features. For example, it keeps the second-derivative term we dropped in the Levenberg-Marquardt method whenever it would be better to do so, a so-called full Newton-type method.

It is also known as a variable metric method. The comment regarding the "second-derivative term" in Numerical Recipes refers to its careful internal handling of the Hessian of the problem.

I have long loved the fact that you can specify constraints by the simple provision of a true/false flag function: you provide the test function, NL2SOL calls it on any new candidate coordinate vector, and if your function returns false, NL2SOL rejects that candidate and comes up with another one. In essence, it can thus, in a binary nesting fashion, find the best possible coordinates near a domain boundary, yet you never need to bother with any arithmetic specification of the domain geometry!
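
For illustration, a hypothetical example of such a flag function (name and signature are illustrative only, not the actual wrapper interface) that accepts a candidate parameter vector only if both parameters are positive and the point lies strictly inside the unit disc:
// Hypothetical feasibility test of the kind one hands to the NL2SOL wrapper:
// return true if the candidate coordinate vector x (of length n) is acceptable.
bool is_feasible(const double* x, int n) {
    return n == 2 && x[0] > 0.0 && x[1] > 0.0 && x[0] * x[0] + x[1] * x[1] < 1.0;
}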

Another favourite feature of mine is that it offers the automatic calculation of the Jacobian, when it needs it, by internal finite differencing. That's not always the most run-time efficient way to use it, but it can help enormously when the analytical derivation of the Jacobian is cumbersome.
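
As an illustration of the idea only (a generic sketch, not NL2SOL's internal implementation), a forward-difference Jacobian J[i][j] = ∂r_i/∂x_j of a residual vector r(x) can be formed like this:
#include <algorithm>
#include <cfloat>
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>
// Generic forward-difference Jacobian of a residual vector r(x) -- a sketch of
// the idea only, not NL2SOL's internal code.
std::vector<std::vector<double>> fd_jacobian(
    const std::function<std::vector<double>(const std::vector<double>&)>& r,
    std::vector<double> x) {
    const std::vector<double> r0 = r(x);
    std::vector<std::vector<double>> J(r0.size(), std::vector<double>(x.size()));
    for (std::size_t j = 0; j < x.size(); ++j) {
        const double h  = std::sqrt(DBL_EPSILON) * std::max(std::fabs(x[j]), 1.0);
        const double xj = x[j];
        x[j] = xj + h;
        const std::vector<double> rj = r(x);
        x[j] = xj;
        for (std::size_t i = 0; i < r0.size(); ++i)
            J[i][j] = (rj[i] - r0[i]) / h;
    }
    return J;
}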

The original Fortran77 version of ACM TOMS 573 is available in the TOMS subdirectory of the NETLIB web site.

I converted the original version to C with f2c about 25 years ago, wrapped it into a C++ class, and have used it ever since. It is or was in use in various places I have worked at in the past, for various types of calibration or fitting purposes, including global (cross-currency, i.e., FX and basis-swap dependent) interest rate curve construction. In all my years of using it, I tripped up only three times: on essentially two bugs in the code, plus one case where an overzealous parameter specification can trigger an issue.
  1. The first identified-and-fixed problem is a simple code typo where what was meant to be V(XMSAVE) was accidentally coded up as X(XMSAVE) (lines 2328 and 2335 in the original ACM TOMS 573 code).

  2. The second one is related to a rare occurrence of a mathematical failure when an estimate of an upper bound happens to be too low for the algorithm to succeed. This was discovered by a colleague of mine nearly 20 years ago who at the time consulted with the original NL2SOL authors by email to confirm his findings. I wish to preserve his anonymity but declare that it was not me who identified and solved the issue.

  3. The last potential pitfall arises when the input specification of the false convergence threshold is not greater than DBL_EPSILON, since the algorithm in function ASSESS essentially does not allow for this setting. However, since we often want to suppress spurious false convergence assessments, we typically have V[XFTOL] == 0.0, i.e., we often set the false convergence threshold to zero. This led to NL2SOL reporting that it was unable to compute the Jacobian at a location where it was in fact able to compute the function value, which, when we provide the Jacobian analytically, should not be possible. I fixed this by restoring the stored last good point and returning with FALSE CONVERGENCE (IV[1] = 8), which is in most situations an acceptable form of convergence.

In addition to the original TOMS (netlib) pages, there are some further places with source code for NL2SOL.

NL2SOL appears to be in use in several third-party projects such as NL2sol.jl, COPASI, and DAKOTA. Here is what the DAKOTA documentation says about NL2SOL:

The NL2SOL algorithm [DGW81] is a secant-based least-squares algorithm that is q-superlinearly convergent. It adaptively chooses between the Gauss-Newton Hessian approximation and this approximation augmented by a correction term from a secant update. NL2SOL tends to be more robust (than conventional Gauss-Newton approaches) for nonlinear functions and “large residual” problems, i.e., least-squares problems for which the residuals do not tend towards zero at the solution.

At the time of this writing (February 2024), to the best of my knowledge, the three pitfalls mentioned above are not corrected in any of the other versions of NL2SOL out there. For example, issue #1 is still present in version 6.19 of DAKOTA (lines 217 and 227 in packages/external/NL2SOL/df7hes.c in their source code). The second issue was seemingly noticed, and the following code was inserted as of line 589 in packages/external/NL2SOL/dg7qts.c of the DAKOTA source code:
if (alphak <= 0.) { alphak = uk * .5; }
if (alphak <= 0.) { alphak = uk; }
In this context, uk is the estimate for the upper bound and the objective is to find a suitable positive value for alphak. When uk, the estimate for the upper bound, is too low (meaning, uk is not actually an upper bound), it is in fact possible that uk is equal to the associated lower bound (lk in the code). The above inserted code can then, in the first step, end up dropping below the lower bound (not good): if lk == uk > 0, the first branch yields alphak = uk/2 < lk. The second step would only ever be entered if uk <= DBL_MIN, i.e., effectively zero, and it is not clear how the second step then remedies the situation: the original problem in this situation is that uk, the estimate for the (unknown) true upper bound, was too low to start with.

References:
Dennis, J.E., Gay, D.M., and Welsch, R.E., August 1977, “An Adaptive Nonlinear Least-Squares Algorithm. Working Paper No. 196”, Computer Research Center for Economics and Management Science, National Bureau of Economic Research, Inc., Cambridge, Massachusetts 02139.
Dennis, J.E., Gay, D.M., and Welsch, R.E., 1981, “An Adaptive Nonlinear Least-Squares Algorithm”, ACM Transactions on Mathematical Software, vol. 7, pp. 348–368.
Dennis, J.E., Gay, D.M., and Welsch, R.E., 1981, “Algorithm 573: NL2SOL—An Adaptive Nonlinear Least-Squares Algorithm”, ACM Transactions on Mathematical Software, vol. 7, pp. 369–383.

Permission to use, copy, modify, and distribute this software is freely granted, provided that the contained copyright notice is preserved. WARRANTY DISCLAIMER: The Software is provided "as is" without warranty of any kind, either express or implied, including without limitation any implied warranties of condition, uninterrupted use, merchantability, fitness for a particular purpose, or non-infringement.

Here is a new version (2.1.1) of Bas de Bakker's bprof statistical sampling profiler as featured in an article in the Linux Journal in May 1998. That article is still online at www.linuxjournal.com/article/2622.

I took Bas's last version (version 2.1, still available at bbwt.home.xs4all.nl/bas/bprof-2.1.tar.gz), modified it to get it to compile and work under current versions of Linux, tweaked a couple of issues relating to the structure of the text section layout of optimised executables generated by contemporary versions of gcc/g++, updated it to use POSIX high frequency timer technology when available, and added some convenience features. This was discussed with Bas who gave me permission to put it online ("You are more than welcome to take my code and post it anywhere you like. I'm happy to hear that someone gets some use out of it.").

I used bprof for my optimisation efforts on "Let's Be Rational". I found gprof somewhat unwieldy, and I am confounded by the timings it gives me, e.g., it says that the core implied volatility function lets_be_rational(...) is on average executed in about 20 nsec whereas all my timing tells me it is about 180 nsec. With bprof, I focussed on actual-time-spent (as a proportion of run time) per source code line to see where I could gain speed.

Permission to use, copy, modify, and distribute this software is freely granted, provided that the contained copyright notice is preserved. WARRANTY DISCLAIMER: The Software is provided "as is" without warranty of any kind, either express or implied, including without limitation any implied warranties of condition, uninterrupted use, merchantability, fitness for a particular purpose, or non-infringement.

Gaussian Kissing - meshless not pointless (~70Mb, requires Adobe Reader) is a presentation about the phenomenon of the Kissing Number in classical geometry appearing as the approximate cluster node count threshold at which meshless backward induction ("Cluster Induction") calculations with radial basis functions begin to show a somewhat erratic convergence behaviour. This presentation was given at both the TopQuants Autumn 2023 event in Amsterdam (1st of November 2023) and at the QuantMinds London 2023 conference (14th to 16th of November 2023).

You can find the presentation slides here and the print (handout) version (~23Mb) here.

For further background on "Cluster Induction", see also my presentation on that subject from 2017/2018 further down.

WARNING: the presentation version is approximately 70Mb in size. It contains animations on slides 50, 70, and 71. These, however, seemingly only ever work in Adobe Reader - they were generated with '\animategraphics{}' (animate package, to iterate through a sequence of figures within the same frame on page 50) and '\includemedia[...]{}{VPlayer.swf}' (media9 package, involving the Shockwave player by Adobe to play back MP4 videos on pages 70 and 71) in a beamer presentation.

Open-source reference implementation of "Implied Normal Volatility". December 2022.

Permission to use, copy, modify, and distribute this software is freely granted, provided that the contained copyright notice is preserved. WARRANTY DISCLAIMER: The Software is provided "as is" without warranty of any kind, either express or implied, including without limitation any implied warranties of condition, uninterrupted use, merchantability, fitness for a particular purpose, or non-infringement.

Implied Normal Volatility (first version December 2016; Wilmott, pages 52-54, March 2017) is a short communication on an analytical formula for the calculation of implied normal volatility (also known as Bachelier volatility) from vanilla option prices.

In this technical note on Time-weighted volatility (July 2020; Wilmott, pages 60-65, November 2020), I recap the well-known method of assigning different amounts of effective volatility time to each of the calendar days between the valuation date and the expiry of an option, according to whether they are standard trading days, holidays, weekends, special announcement days, or otherwise, and how to interpolate implied volatility accordingly. The interesting part is section 4, in which I give details as to how we can obtain a numerical 1-business-day theta within such a framework that is in line with the spirit of the time-weighted volatility idea and interpolation. I also show how such time-weighted volatility interpolation reproduces the market-observable phenomenon of the Thursday-to-Friday downwards jump of the "ON" option implied volatility by a factor of about 1/√3.
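
The arithmetic behind that jump is straightforward under the simplest weighting, in which Saturday and Sunday carry (near) zero effective volatility time: the total variance attributed to the ON option is roughly the same on Thursday (one calendar day, Thursday to Friday) as on Friday (three calendar days, Friday to Monday), since both intervals contain about one trading day's worth of variance, but the calendar time over which the implied volatility is annualised triples from 1/365 to 3/365, whence σ_ON(Friday) ≈ σ_ON(Thursday)·√(1/3) ≈ σ_ON(Thursday)/√3.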

In my presentation on Industry-grade function approximation, presented at the WBS Quant Conference in Rome in October 2019 and the QuantMinds conference in Vienna in May 2019, I explain the mathematical tools behind some of my previous numerical analysis publications, e.g., the below mentioned Strike from volatility and delta-with-premium and the analytical Implied Normal Volatility formula. The presentation also reveals what I call the Nonlinear Remez algorithm, which in its essence comes down to the classical Remez-II method under a nonlinear transformation [introduced on slide 51 in equation (8.1)] as a logical extension of Remez's original q(x) linear weighting function. Whilst this nonlinear extension formally abandons any hope for a general consistency or even convergence guarantee, in practical applications it is straightforward to use (with the caveat that any version of the Remez algorithm still requires extreme care and is never for the faint-hearted, see the respective documentation sections in the Boost C++ library, for instance) and enabled me to arrive at some numerical function approximation formulae of astounding accuracy and usability.

Strike from volatility and delta-with-premium (November 2019; Quantitative Finance, Vol. 20, Issue 8, 2020, pp. 1227-1235) is the description of an efficient two-step procedure to compute the strike implied by the quote pair of a Black volatility and a delta-with-premium, thus effectively providing an analytical solution to this ubiquitous problem in FX markets. Includes a reference implementation and precompiled 32-bit and 64-bit XLLs.

In autumn 2018, at the WBS Quant Conference in Nice, I gave a presentation on Composite and spread options as a double digital quadrature, also for use in Asian composite options. I give details on how to represent a composite option as a spread option, and how to efficiently value a spread option, or in general any bilinear option, directly from the smiles of the two underlyings connected with the Gaussian copula via a single one-dimensional quadrature. This requires a universally robust and efficient bivariate cumulative normal function, for which we give C code based on Genz's 2004 method with an additional catastrophic-cancellation-avoiding, full-machine-precision, rational function approximation for √(1+x)-1. The valuation is so efficient and accurate that it can also be used as a de facto composite-option implied volatility generator for arbitrary strikes and expiries, by using a full valuation for any given strike/expiry pair plus an implied volatility calculation with "Let's Be Rational" (see further down). A perfectly viable application (this is tested in real production) is the valuation of Asian-composite options based on the Geodesic Strike logic (see below), which requires first an at-the-forward composite volatility for each observation date, then the geodesic (composite) strike calculation, and then the at-the-composite-geodesic-strike composite volatility calculation.
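
To illustrate the cancellation issue mentioned above (this shows only the standard algebraic rearrangement, not the rational function approximation used in the presentation): the naive evaluation of √(1+x)-1 loses almost all significant digits for small |x|, whereas rewriting it as x/(√(1+x)+1) removes the subtraction entirely:
#include <cmath>
// sqrt(1+x)-1 computed without subtractive cancellation for small |x| via the
// standard rearrangement sqrt(1+x)-1 = x / (sqrt(1+x) + 1).  Illustration of
// the issue only; the presentation uses a dedicated full-machine-precision
// rational function approximation instead.
double sqrt1pm1(double x) { return x / (std::sqrt(1.0 + x) + 1.0); }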

At the WBS Quant Conference in Florence in September 2017 and at the QuantMinds conference in Lisbon in May 2018, I spoke about Cluster Induction:
  1. Introduction: the case for multi-dimensional HJB calculations of moderate accuracy
  2. Scattered (cluster) data interpolation
  3. Meshless backward induction
  4. There’s more to life than Crank-Nicolson
You can find the presentation slides here and the print (handout) version here.

Economically justifiable dividend modelling is a presentation on methods to incorporate a realistically chosen dividend model (where the actually paid dividend depends on the concurrent share price) into a real market making system: cash dividends that can be downsized if the share price collapsed, arbitrage-free and consistent with the market's implied volatility surface, stochastic volatility parameter generation of implied volatility surfaces that are consistent by construction with the dividend forecasts (no numerical/iterative calibration involved), and local volatility dynamics that are also by construction, i.e., without any numerical calibration, exactly calibrated to the chosen dividend process (ICBI Global Derivatives Conference, Amsterdam, May 2015, and WBS Fixed Income Conference, Paris, October 2015).

Ultra-Sparse Finite-Differencing For Arbitrage-Free Volatility Surfaces From Your Favourite Stochastic Volatility Model [Joint work with colleagues at VTB Capital].

Inspired by the Hyperbolic-Hyperbolic parametric local-stochastic volatility model, we present a practical method for an arbitrage-free definition of an implied volatility surface with wide ranging parametric flexibility, based on high-efficiency ultra-sparse finite differencing techniques combined with arbitrage-free implied volatility inter- and extrapolation (ICBI Global Derivatives Conference, Amsterdam, May 2014, and WBS Fixed Income Conference, Barcelona, September 2014).
  • 40 years of evolution of smile parametrisation
  • Commitment to Spatial Discreteness and the importance of being Metzler
  • Stencil Viability, Boundary Conditions, Continuous-Time stability, Ito's lemma of pure jump processes for Exact Martingales, Algebraic Splitting, Anti-Diffusion Limiting, and other lessons to learn from 60 years of Computational Fluid Dynamics.
  • From Continuous Time to Numerical Integration
  • Box Transition Probability Translation
  • Cash dividends without arbitrage or approximations

In Clamping Down on Arbitrage (December 2013; Wilmott, pp 54-69, May 2014), the subject of arbitrage-free interpolation of implied Black volatility is addressed. The presented method preserves a preselected (and thus preferred) interpolation method as much as possible, and only invokes corrections where needed. All calculations are analytical, without any numerical fitting that could so easily lead to undesirable shapes of implied volatility profiles. The article also contains considerations on arbitrage-free extrapolation, and an effectively analytical procedure for the inverse of the logarithm of the scaled complementary error function (which is required for one of the analytical arbitrage-free extrapolation methods), i.e., the solution to ln(Phi(x))+x²/2 = c for x. We mention that the same equation occurs in other contexts, e.g., as the maximum attainable call option delta in the context of FX option quotations of volatilities over delta-with-premium.
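
For readers who merely need a workable solution of ln(Phi(x))+x²/2 = c rather than the effectively analytical procedure of the article, a plain Newton iteration also does the job for non-extreme arguments, since the left-hand side is strictly increasing in x (its slope is the inverse Mills ratio phi(x)/Phi(x) plus x, which is always positive). A minimal sketch (not the article's method, and not robust where Phi(x) underflows for very negative x):
#include <cmath>
// Solve ln(Phi(x)) + x*x/2 = c for x by plain Newton iteration.
double solve_ln_phi_plus_half_x_squared(double c, double x = 0.0) {
    for (int i = 0; i < 64; ++i) {
        const double Phi  = 0.5 * std::erfc(-x / std::sqrt(2.0));       // cumulative normal
        const double phi  = std::exp(-0.5 * x * x) / std::sqrt(6.283185307179586); // normal density
        const double f    = std::log(Phi) + 0.5 * x * x - c;
        const double dfdx = phi / Phi + x; // inverse Mills ratio plus x: always positive
        const double dx   = f / dfdx;
        x -= dx;
        if (std::fabs(dx) <= 1e-15 * (1.0 + std::fabs(x)))
            break;
    }
    return x;
}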

Let's Be Rational (November 2013; Wilmott, pp 40-53, January 2015) is a follow-up article on By Implication (July 2006). In this newer article, we show how Black's volatility can be implied from option prices with as few as two iterations to maximum attainable precision on standard (64 bit floating point) hardware for all possible inputs.

Open-source Reference Implementation of "Let's Be Rational" (November 2013).

Permission to use, copy, modify, and distribute this software is freely granted, provided that the contained copyright notice is preserved. WARRANTY DISCLAIMER: The Software is provided "as is" without warranty of any kind, either express or implied, including without limitation any implied warranties of condition, uninterrupted use, merchantability, fitness for a particular purpose, or non-infringement.

In A default copula in a lattice-based credit model (June 2013), we add a default event copula to the framework of a multi-factor credit model with correlated stochastic hazard rates. We find that the codependence structure of default events in the limit of infinitesimal time steps in a finite-differencing framework is dominated by the lower tail dependence coefficient of the employed copula. Collaboration article with Christian Kahl, Lee Wild, and Ioannis Chryssochoos.

Finite differencing schemes as Padé approximants (April 2013) is a summary of various numerical schemes for the solution of parabolic partial differential equations with particular emphasis on their representation as Padé approximants of the exponential function. Of applied interest are some of the higher order schemes that are discussed, some of which go back to the 1960's, and several others to the 1980's, though they do not seem to be in the commonly known toolset of practitioners in financial mathematics. This note is an attempt to make these powerful methods more widely known.

Geodesic strikes for composite, basket, Asian, and spread options (July 2012, updated February 2017) is a note on simple methodologies for the selection of most relevant or effective strikes for the assessment of appropriate implied volatilities used for the valuation of composite, basket, Asian, and spread options following the spirit of geodesic strikes in Reconstructing volatility by M. Avellaneda, D. Boyer-Olson, J. Busca, and P. Friz, Risk, pages 91–95, October 2002.

[Normal and] Gamma Hazard Quanto CDS (ICBI Global Derivatives Conference, Barcelona, April 2012) is a presentation on quanto CDS valuation. We first review the conventional normal (Ornstein-Uhlenbeck) hazard framework, and then present a quanto CDS hazard model based on a Gamma process. The Gamma hazard model enables us to match the steep front end of quanto CDS market spread curves (which is unattainable by a normal hazard rate framework). The Gamma hazard model also avoids the highly unsatisfactory and uneconomical feature of high probabilities for reverse defaults posed by any calibrated normal hazard rate model.

Quanto Skew with stochastic volatility (March 2010) extends the analysis in Quanto Skew to the presence of both local and stochastic volatility for the underlying asset and the FX rate process.

Quanto Skew (July 2009) presents an analysis of the humble quanto vanilla option. A conventionally used quanto adjustment is compared with exact results using a simple double displaced diffusion model. Arguably (not) surprisingly, it turns out that the conventional quanto adjustment results in price and (quanto-) implied volatility differences that are negligible only for short-dated contracts.

The following articles appeared in the Encyclopedia of Quantitative Finance (John Wiley and Sons, 2010):

A singular Variance Gamma expansion (May 2009) is a note on an analytical expansion for option prices and Black implied volatilities generated by the Variance Gamma model, based on a singular expansion of the standard gamma density in terms of the Dirac delta function and its derivatives. The expansion is done up to fifth order in the Variance Gamma kurtosis parameter ν using the open source computer algebra system Maxima. The Maxima code BlackVolatilityExpansionForVarianceGammaModel_macsyma.txt is straightforward and could easily be translated to any other symbolic mathematics package.

Positive semi-definite correlation matrix completion for stochastic volatility models (joint paper with Christian Kahl, May 2009) outlines how one can, for any stochastic volatility model, given cross-asset and asset-volatility correlations, fill in the remaining elements of the complete correlation matrix in a flexible way that is guaranteed to always give a positive semi-definite matrix.

The Discrete Gamma Pool model (September 2007; Wilmott Journal, 1(1):23-40, 2009) is a model for the dynamics of losses and spreads on portfolios for the purpose of pricing exotic variations of synthetic collateralised tranche obligations such as Loss Triggered Leveraged Super-Senior notes, multi-callable CDOs, and, by implication of the latter, options on forward starting CDOs. Also discussed is how features such as the counterparty's right to deleverage upon a loss trigger event in a leveraged super-senior note can be understood as an embedded Bermudan swaption, and how this can be catered for in a numerical implementation.

Implementation of the Discrete Gamma Pool model (February 2008) gives details as to how the numerical quadratures required for the valuation of contracts within the Discrete Gamma Pool model framework can be done.

The Gamma Loss and Prepayment model (November 2007, published in Risk Magazine in September 2008, pp 134-139). We present a model for the dynamics of fractional notional losses and prepayments on asset backed securities for the valuation and risk management of derivatives such as the so-called waterfall structures and other structured debt obligations.

Hyp Hyp Hooray (June 2007, joint paper with Christian Kahl; Wilmott, pages 70-81, March 2008). A new stochastic-local volatility model is introduced. The new model's structural features are carefully selected to accommodate economic principles, financial markets' reality, mathematical consistency, and ease of numerical tractability when used for the pricing and hedging of exotic derivative contracts. Also, we present a generic analytical approximation for Black volatilities for plain vanilla options implied by any parametric-local-and-stochastic-volatility model, apply it to the new model, and demonstrate its accuracy.

Hyperbolic local volatility (November 2006). A parametric local volatility form based on a hyperbolic conic section is introduced, and details are given as to how this alternative local volatility form can be used as a drop-in replacement for the popular Constant Elasticity of Variance local volatility, and what parameter restrictions apply.

An asymptotic FX option formula in the cross currency Libor market model (joint paper with Atsushi Kawai, October 2006; Wilmott, pages 74-84, March 2007). Libor market models are becoming more and more popular, and approximate formulae for swaptions and caplets in aid of fast calibration are available. This article is about plain vanilla FX option approximations in a cross currency Libor market model with explicit (displaced diffusion) control over the skew of both domestic and foreign interest rates, as well as the spot FX process.

By Implication (July 2006; Wilmott, pages 60-66, November 2006). Probably the most complicated trivial issue in financial mathematics: how to compute Black's implied volatility robustly, simply, efficiently, and fast.

Semi-analytic valuation of credit linked swaps in a Black-Karasinski framework (Quant Congress Europe, London, October 2006) is a presentation on a simple model for the valuation of credit linked swaps in a framework that allows for strictly positive default hazard rates and permits explicit control over the market-observable skew of implied volatilities for options on the underlying swap. We discuss different aspects of calibration depending on the nature of the underlying swap. For speedy numerical evaluation, the resulting pricing equations are reduced to a dimensionality-pruned quadrature over a generic Ornstein-Uhlenbeck process path space.

Not-so-complex logarithms in the Heston model (joint paper with Christian Kahl, Wilmott, pages 94-103, September 2005). We use a rotation count algorithm to handle the multivalued nature of the complex logarithm in the characteristic function.

Fast strong approximation Monte-Carlo schemes for stochastic volatility models (joint paper with Christian Kahl, September 2005, published in Quantitative Finance, Vol. 6, No. 6, 2006, pp. 513-536). Fast numerical integration methods for stochastic volatility models in financial markets are discussed. We use the strong convergence behaviour as an indicator for the approximation quality.

Options on Credit Default Index Swaps (joint paper with Yunkang Liu, Wilmott, pages 92-97, July 2005). We explain how the knock-in/knock-out feature of options on credit default index swaps generates a valuation dependence on the correlation skew of index tranche prices.

A practical method for the valuation of a variety of hybrid products (ICBI Global Derivatives Conference, Paris, May 2005) is a presentation on a flexible model framework that can be used to price products on multiple underlyings, from different asset classes, allowing for arbitrary volatility smiles. The model is effectively an approximate Markov functional model. Its numerical implementation allows for very fast pricing of fully smile dependent contracts similar to local volatility models, but without any numerical short time-stepping, and without any numerical calibration noise as is so often associated with local volatility models.

A note on multivariate Gauss-Hermite quadrature (May 2005). Univariate Gauss-Hermite quadrature is a very powerful and well understood tool in numerical analysis. In this document, I discuss some of the choices we have when it comes to more than one dimension. I also provide an explanation how polar coordinates can be used in two dimensions for which an unusual kind of one-dimensional quadrature is required: radial Gauss-Hermite quadrature. This is essentially the same as standard Gauss-Hermite, only that the integration domain starts at zero. I have precomputed the required roots and weights up to order 40. They are tabulated in RootsAndWeightsForRadialGaussHermiteQuadrature.cpp.

A toy example for weighted sampling for variance reduction (October 2004). DOI: 10.13140/RG.2.2.10819.17441. This is a demonstration how biasing the variates used for a Monte Carlo simulation can significantly reduce the variance of the simulation result. As is so often the case with this technique, its applicability in practice depends on having a good estimate for the optimal bias in a least-variance sense. In this example for a digital option in a Gaussian model, I give analytical approximations for the optimal bias derived from its defining transcendental equation.

The practicalities of Libor Market models. There are many publications on the theory of the Libor market model and its extensions. There are very few sources on the issues a practitioner faces during implementation and operation of the model. This presentation (~160 slides) is the material for a one-day training course (first given in 2005) on the subject of how to make a Libor Market Model work in practice.

Stabilised multidimensional root finding. Underdetermined fitting and root finding problems can be stabilised by the addition of quantifiable desirable features to the task. Simply defining a weighted objective function containing the original problem and the desiderata function is generally not robust. By adding a Lagrange-multiplier weighted Newton-Raphson step condition to the desiderata function, however, even very large problems can be solved surprisingly efficiently.

More likely than not (DOI: 10.13140/RG.2.2.28015.82082). In a nutshell: this is a collection of likelihood ratio formulae.

Splitting the core (DOI: 10.13140/RG.2.2.11238.60483). Ever wondered how to (approximately) decompose the correlation matrix used in the semianalytical pricing of CDOs in the default-time-copula model into the factor weights of a single systemic factor with a really simple formula, i.e. without the need for iterations or principal components analysis? Here is how!

Valuing American options in the presence of user-defined smiles and time-dependent volatility: scenario analysis, model stress and lower-bound pricing applications. (The Journal of Risk, 4(1), pages 35-61, 2001).

This paper is also the first publication that mentions the "practitioner's trick" to schedule the time steps in the backward induction on a square-root scale to improve convergence.

Stochastic volatility models - past, present, and future. (Presentation at the "Quantitative Finance Review" conference in November 2003 in London).

The Future is Convex. (Wilmott, pages 2-13, February 2005).

The link between caplet and swaption volatilities in a Brace-Gatarek-Musiela/Jamshidian framework: approximate solutions and empirical evidence. (The Journal of Computational Finance, 6(4), 2003, pages 41-59, submitted in 2000).

Mind the Cap. (Wilmott, pages 54-68, September 2003).

The handling of continuous barriers for derivatives on many underlyings. (Presentation at the Quantitative Finance Conference in London, November 2002).

Errata in Monte Carlo methods in finance (John Wiley and Sons, February 2002).
These are already corrected in the latest print batch.

The most general methodology for creating a valid correlation matrix for risk management and option pricing purposes (Journal of Risk, Volume 2, Number 2 (Winter 1999), pages 17-27).