A new approach to assess the emanation of

Radioactive decay chain kinetics are described by a linear time-invariant
(LTI) system of ordinary differential equations. Historically, these kinetics
have been expressed in terms of the Bateman equations (Bateman, 1910), but
more recently they have been written more conveniently in matrix form
(Pressyanov, 2002; Levy, 2019; Amaku et al., 2010). Undistorted radioactive decay kinetics of a
decay chain of
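The matrix form of these kinetics can be sketched for a hypothetical two-member chain (all decay constants and initial inventories below are illustrative, not taken from this work); the matrix-exponential solution reproduces the classical Bateman result:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-member chain: parent (lam1) -> daughter (lam2) -> ...
lam1, lam2 = 0.2, 0.05           # decay constants in 1/day (illustrative)
A = np.array([[-lam1, 0.0],
              [lam1, -lam2]])    # LTI system matrix: dN/dt = A N

N0 = np.array([1.0e6, 0.0])      # initial atoms: pure parent
t = 3.0                          # elapsed time in days
N_t = expm(A * t) @ N0           # matrix-exponential solution of the LTI system

# Cross-check against the classical Bateman solution for the daughter nuclide
bateman_daughter = N0[0] * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
```

The matrix form generalizes directly to longer chains and branching, which is what makes it preferable to the nuclide-by-nuclide Bateman expressions.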

The release of

By first principles, however, the

To this point, the best method to derive

Recursive Bayesian estimation describes a class of algorithms to perform
statistical inference in dynamical systems that can be modeled by a
(first-order) Markov process. The general idea of these methods is to
sequentially form priors for a state vector
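For linear-Gaussian models, the sequential prior-update cycle described above reduces to the closed-form Kalman filter. A minimal generic sketch (the matrices F, Q, H, R are placeholders, not the specific model of this work):

```python
import numpy as np

def kalman_step(m, P, y, F, Q, H, R):
    """One predict/update cycle of the Kalman filter, the closed-form special
    case of recursive Bayesian estimation for linear-Gaussian state spaces."""
    # Predict: propagate the posterior of the previous step through the dynamics
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    # Update: condition the predicted prior on the new measurement y
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    m_post = m_pred + K @ (y - H @ m_pred)
    P_post = P_pred - K @ S @ K.T
    return m_post, P_post
```

Each call consumes one measurement and returns the filtering distribution, which becomes the prior for the next step.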

Conversely, smoothing refers to computing the density of the state given all
measurements in a specific time interval, or the complete collection. In
most cases, smoothing can be defined recursively using information inferred
during the filtering pass, starting a backward recursion at the last
time instant, at which the smoothing and filtering densities are equal.
Formally, the smoothing density is recursively defined by Eq. (6) for the
above conditional independence assumptions.
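In the standard linear-Gaussian case, one step of this backward recursion is the Rauch–Tung–Striebel smoother step; the following is a generic sketch of that scheme, not the exact equations of this work:

```python
import numpy as np

def rts_step(m_f, P_f, m_s_next, P_s_next, F, Q):
    """One backward step of the Rauch-Tung-Striebel smoother: combine the
    filtering density at time k with the smoothing density at time k+1."""
    m_pred = F @ m_f                          # prediction of step k+1 from k
    P_pred = F @ P_f @ F.T + Q
    G = P_f @ F.T @ np.linalg.inv(P_pred)     # smoother gain
    m_s = m_f + G @ (m_s_next - m_pred)
    P_s = P_f + G @ (P_s_next - P_pred) @ G.T
    return m_s, P_s
```

At the last time instant the smoothing and filtering densities coincide, which initializes the recursion.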

For the problem at hand, a stochastic differential equation (SDE) is needed
to express the (time-varying) uncertainty associated with the latent
continuous variable

While this choice is not entirely representative of the physical mechanisms
related to
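For reference, an LTI stochastic differential equation of this kind can be discretized exactly for use in the filtering recursions; the sketch below uses Van Loan's matrix-exponential construction and is generic, not the specific model of this work:

```python
import numpy as np
from scipy.linalg import expm

def discretize_lti_sde(F, L, q, dt):
    """Exact discretization of dx = F x dt + L dW (diffusion spectral density q)
    via Van Loan's matrix-exponential trick, yielding the discrete transition
    matrix A and process-noise covariance Q over a step of length dt."""
    n = F.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = F
    M[:n, n:] = L @ (q * np.eye(L.shape[1])) @ L.T
    M[n:, n:] = -F.T
    Phi = expm(M * dt)
    A = Phi[:n, :n]             # discrete transition matrix e^{F dt}
    Q = Phi[:n, n:] @ A.T       # integral of e^{F s} L q L^T e^{F^T s} ds
    return A, Q
```

For pure Brownian motion (F = 0) this recovers the familiar Q = q dt, and for mean-reverting dynamics it yields the well-known exponential expressions.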

Unlike most applications of such filtering algorithms for discretized LTI
systems, here the supporting measurements cannot be made at instantaneous
moments in time because disintegrations of a specific nuclide can only be
recorded over a finite time interval,

The way we have chosen to model this behavior in the present study is to
start by stating that the uncorrupted (i.e., noise-free) measurements are
given by Eq. (12), where, for now, we assume that

The elements of

Since integration is a linear operator, the joint distribution of
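As a small illustration of this linearity argument, consider a Brownian motion observed through its time integral: the pair (endpoint value, integral) is jointly Gaussian with second moments that can be written down analytically and cross-checked by simulation (all values below are illustrative):

```python
import numpy as np

# Brownian motion W (unit diffusion) observed through its time integral over
# [0, T]. Because integration is linear, (W(T), integral of W) is jointly
# Gaussian with analytically known second moments:
T = 2.0
var_W = T                   # Var[W(T)] = T
var_I = T ** 3 / 3.0        # Var[int_0^T W dt] = T^3 / 3
cov_WI = T ** 2 / 2.0       # Cov[W(T), int_0^T W dt] = T^2 / 2

# Monte-Carlo cross-check with an Euler discretization of the integral
rng = np.random.default_rng(0)
n_steps, n_paths = 500, 5000
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)               # discretized Brownian paths
I = W.sum(axis=1) * dt                  # Riemann approximation of the integral
emp = np.cov(W[:, -1], I)               # empirical joint covariance matrix
```

The non-zero cross-covariance is exactly why the integrated measurement remains conjugate with the latent state in the filtering recursions.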

Radioactivity measurements generally follow Poisson statistics. In the
framework of Bayesian inference and recursive Bayesian estimation,
non-Gaussian noise considerably complicates the inference procedure, since
the measurements are then no longer a Gaussian process and thus no longer
conjugate with the state. For this reason, there is no exact closed-form
solution for the filtering recursions in the case of non-Gaussian noise.
Considerable work has been done to address this, including sampling
procedures like Markov chain Monte Carlo, particle filtering, or expectation
propagation (Minka, 2013), or by assuming that all arising probability density functions (PDFs) are Gaussian
combined with an estimation strategy for the moments (e.g., unscented
Kalman filter; Julier et al., 2000). Generally, we found such approaches
unsuitable, both because of their computational complexity and because the
measurements can be well approximated as Gaussian (owing to the high number
of counts being observed). Instead, we approximate the measurement
likelihood as Gaussian where the moments are evaluated from the previous
time step filtering distribution of
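A minimal scalar sketch of such a moment-matched Gaussian approximation of the Poisson counting likelihood might look as follows (the function name and the scalar simplification are ours for illustration, not the exact formulation of this work):

```python
import numpy as np

def gaussian_poisson_update(m_pred, P_pred, h, counts):
    """Scalar Kalman-type update with a moment-matched Gaussian approximation
    of the Poisson likelihood: Var[y] is set to E[y] = h * m_pred, exploiting
    that mean and variance of a Poisson distribution coincide."""
    expected_counts = h * m_pred
    R = max(expected_counts, 1.0)       # Poisson variance taken from the prediction
    S = h * P_pred * h + R              # innovation variance
    K = P_pred * h / S                  # gain
    m_post = m_pred + K * (counts - expected_counts)
    P_post = P_pred - K * S * K
    return m_post, P_post
```

For high count rates the Poisson distribution is very close to this Gaussian, so the approximation error is small while conjugacy, and hence the closed-form recursion, is preserved.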

The variance

These compound distributions can be decomposed into discrete and continuous
components which are approximately represented as mixtures of Gaussians.
This kind of system is also called a switching linear dynamical system
(SLDS). In the SLDS, exact filtering is not computationally feasible
(Barber, 2006; Hartikainen and Särkkä, 2012), since the filtering
distribution is a mixture of Gaussians whose number of components is
multiplied by the number of models at each time step, resulting in
exponential growth of the number of components. Most approximation approaches, like the one
employed here, replace the resultant mixture at each step of filtering and
smoothing with a smaller one, limiting the number of kept components in the
Gaussian mixtures to some fixed upper value
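A simple instance of such a reduction step is pruning: keep the heaviest components up to the fixed upper value and renormalize the weights (moment-matching merges are a common alternative; this sketch is generic, not necessarily the exact strategy used here):

```python
import numpy as np

def prune_mixture(weights, means, covs, max_components):
    """Reduce a Gaussian mixture by keeping the max_components heaviest
    components and renormalizing their weights (simple pruning strategy).
    means/covs are arrays indexed per component (scalar case shown)."""
    order = np.argsort(weights)[::-1][:max_components]  # heaviest first
    w = weights[order]
    return w / w.sum(), means[order], covs[order]
```

Applying such a reduction after every filtering and smoothing step caps the mixture size and keeps the overall cost linear in the number of time steps.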

Illustration of the iterative computational methods applied in the
Gaussian sum filtering (GSF). The filtering distributions (prior and posterior at time steps

In this setting, the smoothing distribution is also a compound distribution, in which the number of components of each mixture is multiplied by the number of linear dynamical models within each step of the backward-recursive smoothing formulation. Therefore, smoothing is likewise only possible approximately, once again on the basis of approximating each arising Gaussian mixture with a smaller one. One way to approximately obtain smoothed results in the SLDS setting is given by the expectation correction (EC) algorithm introduced in Barber (2006), shown therein to provide state-of-the-art results in terms of both computational efficiency and accuracy, which uses the results of the GSF forward pass and performs a backward recursion through the time series. Analogous to the GSF forward pass, the EC algorithm requires propagation and correction methods, in the form of the filtering and smoothing steps for the parameterized linear dynamical system outlined in Sects. 2.3 and 2.4, acting on each Gaussian mixture component. In this case, however, the integrating behavior of the measurements does not require any adjustments, and the applied equations are thus exactly analogous to the classical Rauch–Tung–Striebel smoother, which represents a formal reversal of the dynamics. These equations were also used in the original presentation of the EC algorithm (Algorithm 5 in Barber, 2006). Figure 2 provides a graphical illustration of the EC smoothing backward pass, similar to the illustration of the filtering method in Fig. 1. The EC algorithm given in Barber (2006) was used without further modification; for more information on this algorithm, the reader is directed to that work.

While not a main contribution of this paper, for completeness' sake and to facilitate possible re-implementation, both the GSF forward pass and the EC backward pass are outlined in pseudo-code in Appendix A2 in the form in which they have been implemented here.

Illustration of the backward iterative computational steps (1)–(4)
in the expectation correction algorithm applied for correcting the results from the Gaussian sum filtering (GSF)
outlined in Fig. 1 to the approximate smoothing solution in the switching
linear dynamical system. Inputs to the algorithm are the results from the GSF
and the associated time stamps and values for all parameters. The output is the
decomposed smoothing distribution

To model the two distinct regimes outlined before, two linear dynamical
models are used that share their

In practice, the components of

Implementation of the presented algorithms was carried out in Python using
the JAX framework (Bradbury et al., 2018), which provides automatic batching
and vectorization, just-in-time compilation, and automatic forward- and
reverse-mode differentiation. This allows the optimization of the
hyperparameters
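As an illustration of why these JAX features are convenient here, hyperparameters can be optimized by gradient descent on a jit-compiled objective using automatic differentiation; the objective below is a toy Gaussian likelihood with a single variance hyperparameter, not the model of this work:

```python
import jax
import jax.numpy as jnp

def neg_log_likelihood(log_q, residuals):
    # Illustrative Gaussian likelihood whose variance q is the hyperparameter;
    # parameterized via log_q to keep q positive during optimization.
    q = jnp.exp(log_q)
    return 0.5 * jnp.sum(residuals ** 2 / q + jnp.log(2 * jnp.pi * q))

# Reverse-mode gradient of the objective, jit-compiled for repeated evaluation
grad_fn = jax.jit(jax.grad(neg_log_likelihood))

residuals = jnp.array([0.5, -1.0, 0.3])
log_q = 0.0
for _ in range(200):
    log_q -= 0.05 * grad_fn(log_q, residuals)   # plain gradient descent
# log_q converges to the log of the maximum-likelihood variance mean(residuals**2)
```

In practice one would replace the toy objective with the (approximate) marginal likelihood of the filtering recursion and use a proper optimizer, but the differentiation and compilation machinery is identical.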

Data for this experiment were generated using an electroplated

For each spectrum, counts above 200 keV were summed, a live-time-scaled
background count rate (with associated uncertainty that defines

Both the confidence intervals and the median in Fig. 3 were computed from the
marginal cumulative distribution function of the Gaussian mixtures using
numerical root finding. Additionally, the confidence intervals include a systematic,
Gaussian 1 % uncertainty on the specific value of
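The quantile computation described above, i.e., numerical root finding on the marginal cumulative distribution function of a Gaussian mixture, can be sketched generically as follows:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def mixture_quantile(p, weights, means, stds):
    """Invert the CDF of a 1-D Gaussian mixture by numerical root finding,
    e.g. p = 0.5 for the median or p = 0.025 / 0.975 for a 95 % interval."""
    cdf = lambda x: np.sum(weights * norm.cdf(x, loc=means, scale=stds)) - p
    lo = np.min(means - 10 * stds)   # bracket guaranteed to contain the root
    hi = np.max(means + 10 * stds)
    return brentq(cdf, lo, hi)
```

The mixture CDF is monotone, so a bracketing root finder such as Brent's method converges reliably.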

Column

In the present work, we have summarized and explained the limitations of
previously available approaches to estimate

During the application of the resultant algorithms to real-world data, the deviations of the steady-state approximation of the previous method (red dots in the second row of Fig. 3) from the estimated true values (black lines in the second row of Fig. 3) become apparent, underpinning the theoretical derivations. In turn, this means that a thorough analysis of data obtained in this way is restricted to models that account for the dynamic nature of the system, which has not previously been reported.

The specific structure of the filtering and smoothing results in the second
row of Fig. 3, showing peaked emanation upon increases in humidity, can
be explained physically as follows, which supports the validity of the results of
the applied method. Considering the time series of count data of the SLP
within the source (input data; red dots in the first row of Fig. 3), regions
can be seen where changes are occurring much more quickly than would be
possible based on the well-known radioactive decay kinetics. As was
discussed in Sect. 2.2, the time series of counts is theoretically given
by a discretely sampled convolved version of the emanation. Hence, peaked
emanation must be occurring, such that the observed time series is possible
within the theoretically known decay kinetics. Conversely, the drop in
humidity and thus emanation at approximately 70 d does not show this behavior, and
the observed ingrowth of the counts directly follows the decay kinetics.
Apparently, the behavior depends on the direction of change in the
emanation characteristics. This is explained by the fact that upon an
increase of the effective diffusion coefficient, the source retains more

In constant regimes, results obtained from previously reported methods
converge to the values obtained using the deconvolution approach presented
here, as illustrated at approximately days 60 to 70 and past day 90 of the
data shown in Fig. 3. While the method we present might not seem beneficial
in such constant regimes, the recursive Bayesian approach provides a
computationally convenient, mathematically coherent, and flexible way to
refine the uncertainty upon observation of streaming data (e.g., obtained by
continuous operation of spectrometers) also within constant regimes. As
such, here we report for the first time the application of a method whereby
time series data of an emanation source can be used to derive correct (near)
real time values of the emanation, irrespective of the state of the source.
Specifically, the use case of this method and our initial motivation is the
implementation of surveillance systems for emanation sources based on
spectrometric measurements to improve the current state-of-the-art realization,
and especially the dissemination, of the unit Bq m⁻³

To obtain approximate filtering and smoothing algorithms in the context of radioactivity measurements, we extended the well-known computational methods for inference in linear dynamical systems (i.e., the Kalman filter and the Rauch–Tung–Striebel smoother) with a computationally convenient approximation for the observed Poisson statistics and the integrating behavior of the measurements in Sects. 2.3 and 2.4. In doing so, we demonstrated that integrating measurements results in a Gaussian process with a certain covariance with the latent continuous state, which retains the convenient closed form of filtering and smoothing through conjugacy in such linear dynamical models. As was shown, the integrating measurements lead to additional additive uncertainty depending on the variance of the underlying Brownian motion, which we consider an intuitive result. These results were then used to construct the final switching linear dynamical system inference algorithms applied in the experiments.

Within the recorded time series, distinct domains were observed in response
to the way the humidity in the chamber was modified, which lends itself to
the applied switching dynamical system model, differentiating between stable
and non-stable regimes. This approach allows smaller uncertainty to be
achieved and smoother functional realizations of

For an approach like the present one to be applicable in metrology,
uncertainty estimates closely related to the Guide to the Expression of Uncertainty in Measurement (GUM; Joint Committee for
Guides in Metrology, 2008) are needed. At this point, the GUM is restricted
to static measurements, and first steps are being taken toward an extension to
dynamic scenarios (e.g., in Eichstädt and Elster, 2012; Elster and Link,
2008; Link and Elster, 2009), where a slightly different formulation for
error propagation has been carried out. In the present case, systematic
contributions to the uncertainty are dominated by the uncertainty of the
measurement mapping through matrix

Assuming that the density

By combination of the definition for

Analogously, the cross-covariance between

Gamma-ray spectra and environmental data obtained for the experimental section as well as the implementation of the presented algorithms and processing software are available at

AR and SR acquired funding, supervised and conceptualized this work, and reviewed the draft. FM derived the methodology, designed the models, implemented the software, acquired the experimental data, and prepared the original draft.

The contact author has declared that none of the authors has any competing interests.

Publisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the special issue “Sensors and Measurement Science International SMSI 2021”. It is a result of the Sensor and Measurement Science International conference, 3–6 May 2021.

The authors thank Alan Griffiths (ANSTO) and Scott D. Chambers (ANSTO) for their comments on the paper.

This project has received funding from the EMPIR programme co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme. 19ENV01 traceRadon denotes the EMPIR project reference. This open-access publication was funded by the Physikalisch-Technische Bundesanstalt.

This paper was edited by Alexander Bergmann and reviewed by two anonymous referees.