As a measurement system, X-ray computed tomography suffers from quality degradation of the acquired measurements caused by the energy-dependent interaction of polychromatic radiation with the examined object. Many techniques exist to reduce the negative influence of these artefact phenomena, which is also the aim of the newly introduced method. The key idea is to create several measurements of the same object that differ only in the orientation of the object within the ray path of the measurement system. These measurements are then processed to selectively correct faulty surface regions. To calculate the geometrical transformations needed for a congruent alignment of the different measurements in one coordinate system, an extension of the iterative closest point (ICP) algorithm is used. To quantitatively classify each surface point with respect to its quality, and thereby determine its individual need for correction, the local quality value (LQV) method is used, which was developed at the Institute of Manufacturing Metrology. Different data fusion algorithms are presented, and their performance is tested and verified using nominal–actual comparisons.

The measurement principle of X-ray computed tomography (CT) makes it possible to determine the distribution of attenuation coefficients within a measurement volume by creating and evaluating a set of radiographs. The inevitable polychromatic character of the X-rays and the physical interaction of matter with that radiation, combined with the simplifications of these phenomena introduced in the reconstruction routine, cause image artefacts to occur within the reconstructed image of the measurement object. Various methods have been proposed to suppress these unwanted phenomena at different stages of the measurement chain: pre-filtration is used to change the polychromatic character of the radiation, and locally adaptive surface determination algorithms try to account for the shifting radiation spectrum due to beam hardening. Additionally, there are techniques that fuse several faulty measurements to achieve an exact representation of the measurement object (Heinzl et al., 2007; Guhathakurta et al., 2015). Because these methods require certain boundary conditions (dual-energy CT, Heinzl et al., 2007; orthogonal orientations of the different measurements, Guhathakurta et al., 2015), they are not always practicable.

This paper presents a newly developed procedure to correct artefacts in X-ray computed tomography measurements. An important aspect of the presented solution is the quality classification of single surface vertices with the help of the LQV (local quality value) method, which has been developed at the Institute of Manufacturing Metrology (Fleßner and Hausotte, 2016; Fleßner et al., 2015a). Given the necessary expert knowledge, this method is capable of detecting artefacts in measurement data and provides rated surface points for further evaluation. Depending on the chosen quality parameter, the resulting quality classification is also well suited for multi-material problems, because the underlying transitions can be evaluated against different shape criteria relative to other transitions within the same measurement. The basic principle behind the presented data fusion routine is to produce several single measurements of a measurement object that differ only in the location and direction of their rotation axis in the cone beam CT system. These measurements consequently differ in the appearance of artefacts, which allows a selective mathematical combination of the measurements to obtain a final measurement result with higher precision and validity.

The following section presents the general procedure for fusing the surfaces
determined from several single measurements into one data set with improved
quality measures. The main goal is to correct locally incorrect surface
determinations, which are provoked mainly by beam hardening and cone beam
artefacts. The data fusion results are verified by evaluating nominal–actual
comparisons. The complete process is implemented under the following framework
conditions:

The starting points of the procedure are the triangulated surfaces resulting from the surface determination process.

The orientations of the different single measurements relative to each other are unknown and can take arbitrary values. This places high demands on the necessary registration procedure.

Information regarding the local surface quality will be applied at different process steps by utilizing the LQV method (Sect. 2.2).

The registration and fusion process will be implemented without using a CAD-reference file of the measurement object. This ensures the usability of the method even when no representation of a reference is available.

In order to be able to provoke certain artefact appearances in the
measurement data, all data sets were acquired using the simulation tool
aRTist, developed by the Federal Institute for Materials Research and Testing
(BAM) in Berlin, Germany. The (virtual) CT settings were chosen as follows:
130 kV tube voltage, 275

LQV-parameter point reflection: grey value profiles (green and red)
are constructed and sampled perpendicular to the determined surface (VGS,
small image bottom right). The position

In order to classify different surface points during processing, a local
quality measure is used. The following patented (Fleßner and Hausotte, 2016)
framework has been developed at the Institute of Manufacturing Metrology
(Fleßner et al., 2015a, b) and is the subject of ongoing research. The
procedure is characterized by the extraction of grey value profiles in the
vicinity of a surface point and the evaluation of those profiles according to
certain criteria. Starting from a single surface point, a search ray is
constructed inside the CT volume data following the vertex normal vector in
both directions for a certain length (approximately
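As an illustration of this profile extraction, a minimal sketch follows. The step length, the sample count, and the nearest-neighbour look-up are assumptions made for brevity; the actual implementation and its exact parameters are not reproduced here.

```python
import numpy as np

def sample_profile(volume, point, normal, half_length=5.0, n_samples=21):
    """Sample a grey value profile through `point` along `normal`.

    The search ray extends `half_length` voxels in both directions of
    the vertex normal; nearest-neighbour look-up keeps the sketch
    dependency-free (a real implementation would interpolate).
    """
    normal = np.asarray(normal, float)
    normal = normal / np.linalg.norm(normal)
    t = np.linspace(-half_length, half_length, n_samples)
    coords = np.asarray(point, float) + t[:, None] * normal
    idx = np.clip(np.rint(coords).astype(int), 0,
                  np.array(volume.shape) - 1)
    return volume[idx[:, 0], idx[:, 1], idx[:, 2]]
```

The returned one-dimensional profile is the input for the quality criteria described below.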

In order to classify surface points with the LQV method for the assessment of
the introduced measurement series (see Sect. 2.1), a point symmetry measure
(point reflection) is evaluated for each grey value transition. The idea
behind this procedure is that symmetric transitions with a high maximum grey
value gradient, and therefore high contrast, are expected to be of higher
quality, because they make the surface determination at this point stable and
robust. If a transition has a lower point reflection quality parameter, it is
assumed to be of low value and thus to represent a local artefact. The
procedure is visualized in Fig. 1. The sampled grey value transition of a
surface point along its vertex normal vector results in a sigmoidal curve
(green lines) or a disrupted sigmoidal curve (caused by artefacts, red
lines). To determine the LQV parameter “point reflection”, one of the
function branches (line with dots)
is mirrored (point reflection at
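The point reflection criterion can be illustrated with a short sketch: the profile is mirrored through its centre and compared with the original; a small residual indicates a symmetric, high-quality transition. The function name and the normalisation are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def point_reflection_quality(profile):
    """Symmetry score in [0, 1] for a sampled grey value profile.

    The profile g(t) is mirrored through its centre point (point
    reflection): g(t) -> 2*g(0) - g(-t). For an ideal sigmoid the
    mirrored curve coincides with the original, so the residual
    vanishes and the score approaches 1; disrupted transitions
    score lower.
    """
    g = np.asarray(profile, float)
    centre = g[len(g) // 2]
    mirrored = 2.0 * centre - g[::-1]   # point reflection at the centre
    residual = np.mean(np.abs(g - mirrored))
    span = np.ptp(g) or 1.0             # normalise by the grey value range
    return 1.0 - min(residual / span, 1.0)
```

A perfectly point-symmetric sigmoid scores 1.0, while a transition with a disrupted branch receives a lower score.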

Figure 2 shows one measurement of the mentioned measurement series with a
certain orientation of the measurement object in the CT-ray beam. In the
illustrated figure, the surface coordinates are depicted in the volume grid
coordinate system of the volume data representing the measurement. That means
that the rotation of the object within the cone beam CT was performed around
an axis parallel to the

Detection of locally occurring artefacts provoked by tungsten inserts with the LQV method (point reflection). The surface regions belonging to the transitions depicted in Fig. 1 are marked with black circles.

Each measurement is represented in its own coordinate system, which
originates from the volume grid coordinate system of the respective
measurement set-up. To make local data fusion based on surface coordinates
possible, a registration process is necessary to transform all measurements
of one series into the same coordinate system. The necessary transformation
is a rigid transformation with six degrees of freedom (three rotations and
three translations). The goal of this step is to transform all measurements
into a common coordinate system while keeping the residual error between the
registration partners minimal. A commonly used algorithm for this kind of
problem is the iterative closest point (ICP) algorithm, proposed almost
simultaneously by Besl and McKay (1992) and by Chen and Medioni (1991).
Since no CAD reference surface is available for a normal measuring task, a
“master surface” has to be chosen arbitrarily; it serves as the reference
registration surface for the other measurements. The basic function of the
ICP algorithm consists of a matching step, in which corresponding coordinate
pairs are determined in such a way that each surface point of the master
surface

Overall, the registration problem at hand poses a considerable challenge for
any registration process because of the severe artefacts. If the
registration is performed without additional information, the result is
insufficient for further fusion algorithms, because the error function is
evaluated incorrectly. Experiments have shown that convergence even close to
the correct solution is impossible because of the error introduced by faulty
surface regions. To enable the correct registration of correctly determined
surface regions without the influence of bad regions, a weighting factor is
introduced for each corresponding point pair. This factor is set as the
product of the LQVs for each of the mentioned vertex pairs
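A single alignment step using such weights can be sketched with the closed-form weighted least-squares solution (Kabsch/SVD). The solver choice, the function name, and the omission of the nearest-neighbour matching step are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def weighted_rigid_fit(src, dst, lqv_src, lqv_dst):
    """Best-fit rotation R and translation t mapping src onto dst.

    Each correspondence is weighted with the product of the local
    quality values of the paired vertices, so pairs involving badly
    determined (artefact) regions barely influence the result.
    """
    w = np.asarray(lqv_src) * np.asarray(lqv_dst)   # per-pair LQV weight
    w = w / w.sum()
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src = w @ src                                 # weighted centroids
    c_dst = w @ dst
    H = (src - c_src).T @ (w[:, None] * (dst - c_dst))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Setting the weight of an artefact-affected pair close to zero effectively removes it from the error function, which is the mechanism the text relies on.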

Registration result of four different measurements using LQV weights.

The previously described methods allow for an actual fusion routine to be
introduced. Starting with an overall number of

1. Choose a “master” surface with index

2. Define set

3. Choose a single surface point

4. Search for a set of nearest neighbours

5. For fusion, determine

6. Repeat steps 3–5 for each surface point of surface

7. Repeat steps 1–6 for each surface by changing index

8. Set

9. Repeat steps 1–8 until the maximum number of iterations is reached (set to 15 for all further evaluations).
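Assuming all surfaces are already registered in a common coordinate system, the loop structure of the steps above can be sketched as follows. The brute-force neighbour search, the fixed search radius, and the plain mean used as the fusion rule are simplifying assumptions of this sketch.

```python
import numpy as np

def fuse_surfaces(surfaces, n_iterations=15, radius=2.0):
    """Iteratively pull each surface point towards the local mean of
    its nearest neighbours on all other (registered) surfaces.

    `surfaces` is a list of (n_i, 3) point arrays.
    """
    surfaces = [np.asarray(s, float).copy() for s in surfaces]
    for _ in range(n_iterations):                      # step 9: iterate
        for m, master in enumerate(surfaces):          # steps 1-2, 7: master + partners
            others = np.vstack([s for k, s in enumerate(surfaces) if k != m])
            for i, p in enumerate(master):             # steps 3, 6: each surface point
                d = np.linalg.norm(others - p, axis=1)
                nbrs = others[d < radius]              # step 4: nearest neighbours
                if len(nbrs):                          # step 5: fuse
                    master[i] = np.mean(np.vstack([nbrs, p[None]]), axis=0)
    return surfaces
```

In practice, a spatial index (e.g. a k-d tree) would replace the brute-force distance computation, and the fusion rule would be one of the weighted methods described below.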

The first method (Eq. 3) can be calculated using the arithmetic mean
coordinate of the set

The second method (Eq. 4) is determined by computing the linearly weighted
mean of set

Lastly, the third method (Eq. 5) introduces an additional condition compared
to method two, which states that a correction will not be performed if the
LQV of
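Since Eqs. (3)–(5) are not reproduced here, the following sketch only mirrors their verbal description: an unweighted mean, an LQV-weighted mean, and a weighted mean that leaves the point untouched when its own LQV is not exceeded by any neighbour. The threshold logic of the third method is a plausible reading of the condition, not the paper's exact formula.

```python
import numpy as np

def fuse_mean(p, lqv_p, nbrs, lqv_n):
    """Method 1 (Eq. 3): arithmetic mean of the point and its neighbours."""
    return np.mean(np.vstack([nbrs, p[None]]), axis=0)

def fuse_weighted(p, lqv_p, nbrs, lqv_n):
    """Method 2 (Eq. 4): LQV-weighted mean, emphasising high-quality points."""
    pts = np.vstack([nbrs, p[None]])
    w = np.append(lqv_n, lqv_p)
    return w @ pts / w.sum()

def fuse_conditional(p, lqv_p, nbrs, lqv_n):
    """Method 3 (Eq. 5): as method 2, but skip the correction when the
    point's own LQV is not exceeded by any neighbour (assumed reading)."""
    if lqv_p >= np.max(lqv_n):
        return p
    return fuse_weighted(p, lqv_p, nbrs, lqv_n)
```

All three share one signature so that the fusion routine can swap them freely; the unused LQV arguments of method one are kept only for that uniformity.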

A visual inspection of a surface corrected with method three (Eq. 5) is shown in Fig. 4. It is apparent that faulty surface regions have been corrected to a certain extent, but some errors remain. Yellow regions indicate an incorrect surface normal vector and thus an imperfect triangulation of the surface. This is because the fusion algorithm itself only processes point clouds, without any consideration of the spatial correspondence of points within a triangulated surface. The visualization in Fig. 4 uses the original triangulation mapping, which may no longer be correct in certain regions after the fusion process. A repeated triangulation of the raw point cloud may solve this problem but could prove difficult without knowledge of the direction of the underlying grey value gradient of the volume data. Nevertheless, it is evident that a selective correction of faulty surface regions has been achieved.

Visualization of a corrected surface. Yellow regions indicate faulty triangulation correspondences due to point cloud processing.

Figure 5 shows several nominal–actual comparisons of different processing
results of the same measurement. The three lines representing the results of
the different fusion algorithms (teal, red and blue) originate from the same
single measurement, ensuring comparability. All observed deviations are
limited to a maximum deviation of 100

Nominal–actual comparison of a selected measurement (1 of 4, Fig. 3) and its corrections with the CAD reference surface.

The method presented is able to compensate for locally occurring faulty surface regions due to the influence of beam hardening artefacts. Part of the solution demonstrated is the implementation of LQVs for each surface point, which allow the classification of surface regions with different quality measures. Using LQV parameters for data fusion yields superior results compared to unweighted fusion procedures, which indirectly shows the performance capabilities of the LQV method. Furthermore, a correction is possible without knowledge of a reference surface. In addition, the geometric orientations of different single measurements of a complete measurement series do not need to be known beforehand.

In the future, further improvements of the fusion results can be achieved by
developing the LQV parameters further. These parameters are used at several
points within the presented framework: registration and weighted fusion.
Consequently, LQV classification errors directly result in fusion errors,
which reduces the quality of the corrected surfaces. Difficulties appear
when large correction vectors are applied to certain surface regions. The
correspondences determined between coordinate pairs

In this paper, we present a method for fusing CT surface data sets from repeated measurements in order to reduce the unwanted influence of artefacts on the measurement result. The algorithms used for this purpose are described in detail in the paper. Additionally, the publications cited in the paper describe the LQV method as well as the specimen used and the registration routine. Furthermore, the parameters for the generation of the measurement data are described in detail in the paper, as is the software used to process the data.

AMM contributed data curation, formal analysis, investigation, methodology, software, validation, visualization, and writing (original draft) and led the review and editing process. TH contributed conceptualization, funding acquisition, project administration, supervision, and writing, and supported the review and editing process.

The authors declare that they have no conflict of interest.

The work presented was supported by the DFG with the project “Bestimmung der Messunsicherheit und systematischer Gestaltabweichungen für eine funktionsorientierte Toleranzvergabe” (DFG Germany, FOR 2271, TP 03). Edited by: Marco Jose da Silva Reviewed by: three anonymous referees