Here, we describe the structure of the NNPDF code and we present a high-level description of its functionalities. The workflow for an NNPDF fit is displayed in the figure below.
The APFELcomb code takes hard-scattering partonic matrix element interpolators from APPLgrid and FastNLO (for hadronic processes) and from APFEL (for DIS structure functions), and combines them with the DGLAP evolution kernels provided by APFEL to construct the fast interpolation grids called FK-tables (for further instructions see How to generate and implement FK tables). In this way, physical observables can be evaluated in a highly efficient manner as a tensor contraction of FK-tables with a grid of PDFs at an initial parametrisation scale \(Q_0\).
APFELcomb also handles NNLO QCD and/or NLO electroweak
K-factors when needed.
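To illustrate how an observable is obtained from an FK-table, the following minimal NumPy sketch contracts an FK-table with a PDF grid at the scale \(Q_0\). The array shapes and random contents are placeholders for illustration only, not the actual NNPDF data structures.

```python
import numpy as np

# Illustrative shapes only: number of flavours and points on the x-grid.
n_flav, n_x = 14, 50

fk_dis = np.random.rand(n_flav, n_x)               # FK-table for one DIS data point
fk_had = np.random.rand(n_flav, n_flav, n_x, n_x)  # FK-table for one hadronic data point
pdf_q0 = np.random.rand(n_flav, n_x)               # PDF grid f_i(x_a) at the scale Q0

# DIS observable: O = sum_{i,a} FK_{i,a} f_i(x_a, Q0)
obs_dis = np.einsum("ia,ia->", fk_dis, pdf_q0)

# Hadronic observable: O = sum_{i,j,a,b} FK_{i,j,a,b} f_i(x_a, Q0) f_j(x_b, Q0)
obs_had = np.einsum("ijab,ia,jb->", fk_had, pdf_q0, pdf_q0)

print(obs_dis, obs_had)
```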
Theory predictions can be generated with a variety of options, such as the perturbative order (currently up to NNLO), the values of the heavy quark masses, the electroweak parameters, the maximum number of active flavours, and the variable-flavour-number scheme used to account for the effects of the heavy quark masses in the DIS structure functions. The FK-tables resulting from each choice are associated with a database entry through a theory ID, which allows them to be identified quickly.
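As an illustration of the kind of settings collected under a single theory ID, the snippet below groups the options just mentioned into a Python mapping. The field names and values are purely illustrative and do not reproduce the actual theory database schema.

```python
# Illustrative only: the real theory database uses its own field names
# and contains many more entries per theory ID.
example_theory = {
    "theory_id": 200,            # hypothetical identifier for this set of choices
    "perturbative_order": "NNLO",
    "heavy_quark_masses": {"mc": 1.51, "mb": 4.92, "mt": 172.5},  # GeV
    "electroweak": {"alpha_qed": 1 / 137.036, "MZ": 91.1876},     # example parameters
    "max_active_flavours": 5,
    "flavour_scheme": "FONLL",   # variable-flavour-number scheme for DIS
}
```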
The buildmaster experimental data formatter
A C++ code which transforms the original measurements provided by the experimental collaborations, e.g. via HepData, into a standard format that is tailored for PDF fitting.
In particular, the code allows for a flexible handling of experimental systematic uncertainties, supporting different treatments of the correlated systematic uncertainties.
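As an example of what different treatments of the correlated systematic uncertainties can mean in practice, the sketch below builds an experimental covariance matrix from statistical, additive and multiplicative uncertainties. It is a generic illustration of the standard construction, not the actual buildmaster implementation.

```python
import numpy as np

def covariance_matrix(central, stat, sys_add, sys_mult):
    """Build an experimental covariance matrix.

    central:  (N,)    central values of the data points
    stat:     (N,)    absolute statistical uncertainties
    sys_add:  (N, K)  correlated additive systematics (absolute)
    sys_mult: (N, M)  correlated multiplicative systematics (fractional)
    """
    # Multiplicative uncertainties are converted to absolute ones by
    # rescaling with the central values (a t0-like prescription would
    # use theory predictions here instead).
    sys_mult_abs = sys_mult * central[:, None]
    cov = np.diag(stat ** 2)
    cov += sys_add @ sys_add.T
    cov += sys_mult_abs @ sys_mult_abs.T
    return cov
```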
The n3fit fitting code implements the core fitting methodology through the TensorFlow framework. It allows
for a flexible specification of the neural network model adopted to
parametrise the PDFs, whose settings can be selected automatically via
the built-in hyperoptimization algorithm. These
include the neural network type and architecture, the activation
functions, and the initialization strategy; the choice of optimizer and
of its corresponding parameters; and hyperparameters related to the implementation of theoretical constraints in the fit, such as PDF positivity and integrability. The settings for a
PDF fit are specified via a declarative runcard. Using these settings, n3fit finds the values of the neural network parameters corresponding to the PDFs at the initial scale that best describe the input data.
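For orientation, the following heavily simplified sketch shows the general idea of parametrising a PDF with a small dense network in TensorFlow/Keras, modulated by a preprocessing factor. Layer sizes, activations and exponents are arbitrary examples; they do not correspond to the n3fit defaults or to its actual model-building code.

```python
import tensorflow as tf

# Toy parametrisation: a dense network in (x, log x) whose output is
# multiplied by a preprocessing factor x^(1 - alpha) * (1 - x)^beta.
# All numerical choices below are illustrative, not n3fit settings.
alpha, beta = 1.1, 3.0

x_in = tf.keras.Input(shape=(1,), name="x")
log_x = tf.keras.layers.Lambda(lambda x: tf.math.log(x))(x_in)
features = tf.keras.layers.Concatenate()([x_in, log_x])

hidden = tf.keras.layers.Dense(25, activation="tanh")(features)
hidden = tf.keras.layers.Dense(20, activation="tanh")(hidden)
nn_out = tf.keras.layers.Dense(8, activation="linear")(hidden)  # 8 fitted flavour combinations

prefactor = tf.keras.layers.Lambda(
    lambda x: x ** (1.0 - alpha) * (1.0 - x) ** beta
)(x_in)
pdf_out = tf.keras.layers.Lambda(lambda t: t[0] * t[1])([nn_out, prefactor])

model = tf.keras.Model(inputs=x_in, outputs=pdf_out)
model.compile(optimizer="adam", loss="mse")
model.summary()
```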
Following a post-fit selection (using the
postfit tool implemented
in validphys) and a PDF evolution step, the final output
consists of an LHAPDF grid corresponding to
the best fit PDF as well as metadata on the fit performance.
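Once the LHAPDF grid is produced (or a released NNPDF set is installed), it can be read with the standard LHAPDF Python bindings. The set name below is just an example of a published release; any set in the LHAPDF search path can be used in the same way.

```python
import lhapdf

# Load member 0 (the central member) of an installed LHAPDF set.
pdf = lhapdf.mkPDF("NNPDF40_nnlo_as_01180", 0)

x, q = 0.01, 100.0        # momentum fraction and scale in GeV
xg = pdf.xfxQ(21, x, q)   # x*g(x, Q): PDG id 21 is the gluon
xu = pdf.xfxQ(2, x, q)    # x*u(x, Q): PDG id 2 is the up quark
print(f"x*g = {xg:.4f}, x*u = {xu:.4f}")
```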
The validphys analysis code is built as an implementation of reportengine, which enables a workflow
focused on declarative and reproducible runcards. The code implements data
structures that can interact with those of
libnnpdf and are accessible from
the runcard. The analysis code makes heavy use of common Python data science libraries, and through its use of Pandoc it is capable of outputting the final results to
HTML reports. These can be composed directly by the user or be generated by more
specialised, downstream applications. The package includes tools to interact
with online resources such as the results of fits or PDF grids, which, for
example, are automatically downloaded when they are required by a runcard.
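Beyond runcards, validphys also exposes a Python API object that maps keyword arguments onto the same configuration namespace used in runcards. The sketch below is an assumption-laden example: the action name, its arguments and the return value should be checked against the validphys documentation.

```python
from validphys.api import API

# Hypothetical usage sketch: request a standard PDF plot programmatically.
# The action name "plot_pdfs" and its arguments are assumptions here;
# required resources (PDF sets, fits) are downloaded automatically if missing.
figures = API.plot_pdfs(pdfs=["NNPDF40_nnlo_as_01180"], Q=10)
```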
The libnnpdf C++ legacy code
A C++ library which contains common data structures together with the fitting code used to produce the NNPDF3.0 and NNPDF3.1 analyses.
The availability of the libnnpdf library guarantees strict backwards
compatibility of the NNPDF framework and the ability to benchmark the
current methodology against the previous one.
To facilitate the interaction between the NNPDF C++ and Python codebases, we have developed Python wrappers for the C++ library.
For instructions on how to run a legacy fit, see How to run a legacy PDF fit (NNPDF3.1 style).