n3fit package
Subpackages
- n3fit.backends package
- Subpackages
- n3fit.backends.keras_backend package
- Submodules
- n3fit.backends.keras_backend.MetaLayer module
- n3fit.backends.keras_backend.MetaModel module
- n3fit.backends.keras_backend.base_layers module
- n3fit.backends.keras_backend.callbacks module
- n3fit.backends.keras_backend.constraints module
- n3fit.backends.keras_backend.internal_state module
- n3fit.backends.keras_backend.multi_initializer module
- n3fit.backends.keras_backend.operations module
- Module contents
- Module contents
- n3fit.hyper_optimization package
- n3fit.io package
- n3fit.layers package
- Submodules
- n3fit.layers.DIS module
- n3fit.layers.DY module
- n3fit.layers.losses module
- n3fit.layers.mask module
- n3fit.layers.msr_normalization module
- n3fit.layers.observable module
- n3fit.layers.preprocessing module
- n3fit.layers.rotations module
- n3fit.layers.x_operations module
- Module contents
- n3fit.scripts package
- n3fit.tests package
- Submodules
- n3fit.tests.conftest module
- n3fit.tests.test_backend module
- n3fit.tests.test_checks module
- n3fit.tests.test_evolven3fit module
- n3fit.tests.test_fit module
- n3fit.tests.test_hyperopt module
- n3fit.tests.test_layers module
- n3fit.tests.test_losses module
- n3fit.tests.test_modelgen module
- n3fit.tests.test_msr module
- n3fit.tests.test_multireplica module
- n3fit.tests.test_penalties module
- n3fit.tests.test_preprocessing module
- n3fit.tests.test_rotations module
- n3fit.tests.test_stopwatch module
- n3fit.tests.test_vpinterface module
- n3fit.tests.test_xops module
- Module contents
Submodules
n3fit.checks module
This module contains checks to be performed by n3fit on the input
- n3fit.checks.check_basis_with_layers(basis, validphys_basis, parameters)[source]
Checks that the last layer matches the number of flavours defined in the runcard, and that the activation functions are compatible with the basis.
- n3fit.checks.check_consistent_basis(sum_rules, fitbasis, basis, theoryid, parameters)[source]
Checks the fitbasis setup for inconsistencies:
- the sum rules can be imposed
- correct flavours for the selected basis
- correct ranges (min < max) for the small- and large-x exponents
- n3fit.checks.check_consistent_layers(parameters)[source]
Checks that all layers have an activation function defined and that a final-activation function is not being used half-way through.
- n3fit.checks.check_consistent_parallel(parameters, parallel_models)[source]
Checks whether the multiple-replica fit options are consistent among themselves, i.e., that the trvl seed is fixed and the layer type is correct
- n3fit.checks.check_correct_partitions(kfold, data)[source]
Ensures that all experiments in all partitions are included in the fit definition
- n3fit.checks.check_deprecated_options(fitting)[source]
Checks whether the runcard is using deprecated options
- n3fit.checks.check_dropout(parameters)[source]
Checks the dropout setup (positive and smaller than 1.0)
- n3fit.checks.check_eko_exists(theoryid)[source]
Check that an eko for this theory exists. Since there might still be theories without an associated eko, this function logs an error instead of raising an Exception.
- n3fit.checks.check_existing_parameters(parameters)[source]
Check that non-optional parameters are defined and are not empty
- n3fit.checks.check_hyperopt_architecture(architecture)[source]
Checks whether the scanning setup for the NN architecture works:
- initializers are valid
- the dropout setup is valid
- no 'min' is greater than its corresponding 'max'
- n3fit.checks.check_hyperopt_positivity(positivity_dict)[source]
Checks that the positivity multiplier and initial values are sensible and valid
- n3fit.checks.check_hyperopt_stopping(stopping_dict)[source]
Checks that the options selected for the stopping are consistent
- n3fit.checks.check_kfold_options(kfold)[source]
Warns the user about potential bugs on the kfold setup
- n3fit.checks.check_lagrange_multipliers(parameters, key)[source]
Checks the parameters in a lagrange multiplier dictionary are correct, e.g. for positivity and integrability
- n3fit.checks.check_layer_type_implemented(parameters)[source]
Checks whether the layer_type is implemented
- n3fit.checks.check_model_file(save, load)[source]
Checks whether the model_files given in the runcard are acceptable
- n3fit.checks.check_stopping(parameters)[source]
Checks whether the stopping-related options are sane: the stopping patience must be a ratio between 0 and 1, and the number of epochs must be positive
- n3fit.checks.check_sumrules(sum_rules)[source]
Checks that the chosen options for the sum rules are sensible
- n3fit.checks.check_tensorboard(tensorboard)[source]
Check that the tensorboard callback can be enabled correctly
n3fit.model_gen module
Library of functions which generate the models used by n3fit to determine PDFs.
It contains functions to generate:
- Observables
The main function is observable_generator, which takes the input theory and generates the path from the PDF result to the computation of the training and validation losses / chi2.
- PDFs
The main function is generate_pdf_model, which takes a list of settings defining the replica-dependent architecture of each of the models that form the ensemble, as well as ensemble-wide options such as the flavour basis, sum rule definition or theoretical settings, and generates a PDF model which takes an array of (x) as input and outputs the value of the PDF for each replica, for each x, for each flavour.
- class n3fit.model_gen.ObservableWrapper(name: str, observables: list, trvl_mask_layer: Mask, dataset_xsizes: list, invcovmat: array = None, covmat: array = None, multiplier: float = 1.0, integrability: bool = False, positivity: bool = False, data: array = None, rotation: ObsRotation = None)[source]
Bases:
object
Wraps many observables into an experimental layer once the PDF model is prepared. It can take normal datasets or Lagrange-multiplier-like datasets (such as positivity or integrability)
- covmat: array = None
- data: array = None
- invcovmat: array = None
- rotation: ObsRotation = None
- class n3fit.model_gen.ReplicaSettings(seed: int, nodes: list[int], activations: list[str], architecture: str = 'dense', initializer: str = 'glorot_normal', dropout_rate: float = 0.0, regularizer: str = None, regularizer_args: dict = <factory>)[source]
Bases:
object
Dataclass which holds all necessary replica-dependent information of a PDF.
- Parameters
seed (int) – seed for the initialization of the neural network
nodes (list[int]) – nodes of each of the layers, starting at the first hidden layer
activations (list[str]) – list of activation functions, should be of equal length as nodes
architecture (str) – select the architecture of the neural network used for the replica, e.g. dense or dense_per_flavour
initializer (str) – initializer to be used for this replica
dropout_rate (float) – rate of dropout for each layer
regularizer (str) – name of the regularizer to use for this replica (if any)
regularizer_args (dict) – options to pass down to the regularizer (if any)
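Example (a minimal construction of per-replica settings; all values are illustrative):
>>> from n3fit.model_gen import ReplicaSettings
>>> rps = [ReplicaSettings(seed=i, nodes=[25, 8], activations=["tanh", "linear"]) for i in range(2)]
>>> rps[0].architecture
'dense'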
- n3fit.model_gen.generate_pdf_model(replicas_settings: list[n3fit.model_gen.ReplicaSettings], flav_info: dict = None, fitbasis: str = 'NN31IC', out: int = 14, impose_sumrule: str = None, scaler: Callable = None, photons: Photon = None)[source]
Generation of the full PDF model which will be used to determine the full PDF. The full PDF model can have any number of replicas, which can be trained in parallel; the limitations of the determination mean that there are certain traits that all replicas must share, while others are free per replica.
In its most general form, the output of this function is a n3fit.backend.MetaModel with the following architecture:
- <input layer>: in the standard PDF fit this includes only the (x) grid of the NN
- [list of a separate architecture per replica]: which can be, but is not necessarily, equal for all replicas
- [<preprocessing factors>]: postprocessing of the network output by a variation x^{alpha}*(1-x)^{beta}
- <normalization>: physical sum rules, requires an integral over the PDF
- <rotation to FK-basis>: regardless of the physical basis in which the PDF and preprocessing factors are applied, the output is rotated to the 14-flavour general basis used in FkTables following PineAPPL's convention
- [<output layer>]: 14 flavours per value of x per replica; note that, depending on the fit basis (and fitting scale), the output of the PDF will contain repeated values
This function defines how the PDFs will be generated. In the case of identical PDF models (identical_models = True, default) the same settings will be used for all replicas; otherwise, the sampling routines will be used.
- Parameters
- replicas_settings: list[ReplicaSettings]
list of ReplicaSettings objects which must contain the following information
- nodes: list(int)
list of the number of nodes per layer of the PDF NN
- activations: list
list of activation functions to apply to each layer
- initializer: str
selects the initializer of the weights of the NN. Default: glorot_normal
- architecture: str
selects the type of architecture of the NN. Default: dense
- seed: int
the initialization seed for the NN
- dropout_rate: float
rate of dropout layer by layer
- regularizer: str
name of the regularizer to use for the NN
- regularizer_args: dict
options to pass down to the regularizer (if any)
- flav_info: dict
dictionary containing the information about each PDF (basis dictionary in the runcard) to be used by Preprocessing
- fitbasis: str
fitbasis used during the fit. Default: NN31IC
- out: int
number of output flavours of the model (default 14)
- impose_sumrule: str
whether to impose sumrules on the output pdf and which one to impose (All, MSR, VSR, TSR)
- scaler: callable
Function to apply to the input. If given, the input to the model will be a (1, None, 2) tensor where dim [:,:,0] is scaled. When None, the x point is instead turned into an (x, log(x)) pair
- photons:
validphys.photon.compute.Photon
If given, gives the AddPhoton layer a function to compute a photon, which will be added at index 0 of the 14-size FK basis. This same function will also be used to compute the MSR component for the photon
- Returns
pdf_model – pdf model, with single_replica_generator attached as an attribute
- Return type
MetaModel
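Example (a minimal sketch mirroring the doctest in n3fit.vpinterface below; the flavour dictionary is a toy setup):
>>> from n3fit.model_gen import generate_pdf_model, ReplicaSettings
>>> fake_fl = [{'fl' : i, 'largex' : [0,1], 'smallx': [1,2]} for i in ['u', 'ubar', 'd', 'dbar', 'c', 's', 'sbar', 'g']]
>>> rps = [ReplicaSettings(nodes=[8], activations=["linear"], seed=i) for i in (0, 1)]
>>> pdf_model = generate_pdf_model(rps, flav_info=fake_fl, fitbasis='FLAVOUR')
>>> single_models = pdf_model.split_replicas()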
- n3fit.model_gen.observable_generator(spec_dict, boundary_condition=None, training_mask_array=None, validation_mask_array=None, training_data=None, validation_data=None, invcovmat_tr=None, invcovmat_vl=None, positivity_initial=1.0, integrability=False, n_replicas=1)[source]
This function generates the observable models for each experiment. These are models which take as input a PDF tensor (1 x size_of_xgrid x flavours) and output the result of the observable for each contained dataset (n_points,).
- In summary the model has the following structure:
Observable layers, corresponding to commondata datasets and made of any number of fktables (and an operation on them).
An observable contains an fktable, which is loaded by the convolution layer (be it hadronic or DIS), and an invcovmat which is loaded by the loss.
This function also outputs three "output objects" (functions that generate layers) that use the training and validation masks to create a training_output, validation_output and experimental_output.
If the dataset is a positivity dataset, it is treated accordingly.
The output is a dictionary (layer_info); each one of the three output functions has the signature:
def out_tr(pdf_layer, dataset_out=None)
The pdf_layer must be a layer of shape (1, size_of_xgrid, flavours); dataset_out is the list of datasets to be masked to 0 when generating the layer
- Parameters
spec_dict (dict) – a dictionary-like object containing the information of the experiment
boundary_condition (dict) – dictionary containing an instance of a PDF set to be used as a boundary condition.
training_mask_array (np.ndarray) – training mask per replica
validation_mask_array (np.ndarray) – validation mask per replica; when not given, ~training_mask_array will be used. While in general the validation mask is the negation of the training mask, in special cases such as 1-point datasets the points are accepted by both masks and then removed by the loss
n_replicas (int) – number of replicas fitted simultaneously
positivity_initial (float) – set the positivity lagrange multiplier for epoch 1
integrability (bool) – switch on/off the integrability constraints
- Returns
layer_info – a dictionary with:
- inputs: input layer
- output: output layer (unmasked)
- output_tr: output layer (training)
- output_vl: output layer (validation)
- experiment_xsize: int (size of the output array)
- Return type
dict
n3fit.model_trainer module
The ModelTrainer class is the true driver of the n3fit code.
This class is initialized with all information about the NN, inputs and outputs. The construction of the NN and the fitting are performed together when the hyperparametrizable method of the class is called.
This makes it possible to use hyperscanning libraries, which need to change the parameters of the network between iterations, while keeping the number of redundant calls to a minimum.
- class n3fit.model_trainer.InputInfo(input, split, idx)
Bases:
tuple
- idx
Alias for field number 2
- input
Alias for field number 0
- split
Alias for field number 1
- class n3fit.model_trainer.ModelTrainer(experiments_data, exp_info, pos_info, integ_info, flavinfo, fitbasis, nnseeds, boundary_condition, debug=False, kfold_parameters=None, max_cores=None, model_file=None, sum_rules=None, theoryid=None, lux_params=None, replicas=None)[source]
Bases:
object
ModelTrainer Class:
Wrapper around the fitting code and the generation of the Neural Network
When the "hyperparametrizable"* function is called with a dictionary of parameters, it generates a NN and subsequently performs a fit.
The motivation behind this class is to minimise the number of redundant calls of each hyperopt run; in particular, this allows the NN to be completely reset at the beginning of each iteration while reusing some of the previous work.
*called in this way because it accepts a dictionary of hyper-parameters which defines the Neural Network
- enable_tensorboard(logdir, weight_freq=0, profiling=False)[source]
Enables tensorboard callback for further runs of the fitting procedure
- evaluate(stopping_object)[source]
Returns the training, validation and experimental chi2
- Parameters
stopping_object – A Stopping instance which has an associated validation model and the list of output layers that should contribute to the training chi2
- Returns
train_chi2 (chi2 of the training set)
val_chi2 (chi2 of the validation set)
exp_chi2 (chi2 of the experimental data (without replica or tr/vl split))
- hyperparametrizable(params)[source]
Wrapper around all the functions defining the fit.
After the ModelTrainer class has been instantiated, a call to this function (with a params dictionary) is necessary in order to generate the whole PDF model and perform a fit. This is a necessary step for hyperopt to work.
- Parameters used only here:
epochs: maximum number of iterations for the fit to run
stopping_patience: patience of the stopper after finding a new minimum
All other parameters are passed to the corresponding functions
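For illustration, a sketch of such a params dictionary; only epochs and stopping_patience are documented here, the remaining keys are illustrative runcard-style parameters passed down to the corresponding functions:
>>> params = {
...     "epochs": 1000,            # maximum number of iterations for the fit
...     "stopping_patience": 0.1,  # patience of the stopper
...     # illustrative architecture keys consumed elsewhere:
...     "nodes_per_layer": [25, 20, 8],
...     "activation_per_layer": ["tanh", "tanh", "linear"],
... }
>>> # model_trainer.hyperparametrizable(params)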
n3fit.msr module
The constraint module includes functions to impose the momentum sum rules on the PDFs
- n3fit.msr.gen_integration_input(nx)[source]
Generates a np.array (shaped (nx,1)) of nx elements where the first nx/2 elements are a logspace between 0 and 0.1 and the rest a linspace from 0.1 to 1
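A minimal numpy sketch of such a grid; the exact lower end of the logspace is an assumption, since a logspace cannot start exactly at 0:
>>> import numpy as np
>>> nx = 8
>>> x_log = np.logspace(-9, -1, nx // 2)    # logarithmic part up to 0.1
>>> x_lin = np.linspace(0.1, 1.0, nx // 2)  # linear part from 0.1 to 1
>>> xgrid = np.concatenate([x_log, x_lin]).reshape(nx, 1)
>>> xgrid.shape
(8, 1)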
- n3fit.msr.generate_msr_model_and_grid(output_dim: int = 14, fitbasis: str = 'NN31IC', mode: str = 'ALL', nx: int = 2000, scaler: Optional[Callable] = None, replica_seeds: Optional[list] = None) MetaModel [source]
Generates a model that applies the sum rules to the PDF.
- Parameters
output_dim (int) – Number of flavours of the output PDF
mode (str) –
- Mode of sum rules to apply. It can be:
"ALL": applies both the momentum and valence sum rules
"MSR": applies only the momentum sum rule
"VSR": applies only the valence sum rule
nx (int) – Number of points of the integration grid
scaler (Scaler) – Scaler to be applied to the PDF before applying the sum rules
- Returns
model (MetaModel) – Model that applies the sum rules to the PDF. It takes as inputs:
- pdf_x: the PDF output of the model
- pdf_xgrid_integration: the PDF output of the model evaluated at the integration grid
- xgrid_integration: the integration grid
- photon_integral: the integrated photon contribution
It returns the PDF with the sum rules applied
xgrid_integration (dict) – Dictionary with the integration grid, with:
- values: the integration grid
- input: the input layer of the integration grid
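Example (a minimal call sketch using only the parameters documented above; it assumes the two documented return values come back as a tuple):
>>> from n3fit.msr import generate_msr_model_and_grid
>>> msr_model, xgrid_integration = generate_msr_model_and_grid(mode="MSR", nx=200, replica_seeds=[1])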
n3fit.n3fit_checks_provider module
This module contains a checks provider to be used by n3fit apps
n3fit.performfit module
Fit action controller
- n3fit.performfit.performfit(*, experiments_data, n3fit_checks_action, replicas, replicas_nnseed_fitting_data_dict, posdatasets_fitting_pos_dict, integdatasets_fitting_integ_dict, theoryid, fiatlux, basis, fitbasis, positivity_bound, sum_rules=True, parameters, replica_path, output_path, save=None, load=None, hyperscanner=None, hyperopt=None, kfold_parameters, tensorboard=None, debug=False, maxcores=None, double_precision=False, parallel_models=True)[source]
This action will (upon having read a validcard) process a full PDF fit for a set of replicas.
The input to this function is provided by validphys and/or defined in the runcards or commandline arguments.
This controller is provided with:
1. Seeds generated using the replica number and the seeds defined in the runcard.
2. Loaded datasets with replicas generated.
   2.1 Loaded positivity/integrability sets.
The workflow of this controller is as follows:
1. Generate a ModelTrainer object holding information to create the NN and perform a fit (at this point no NN object has been generated)
   1.1 (if hyperopt) generate the hyperopt scanning dictionary, taking as a base the fitting dictionary and the runcard's hyperscanner dictionary
2. Pass the dictionary of parameters to ModelTrainer for the NN to be generated and the fit performed
   2.1 (if hyperopt) loop over the previous point the given number of hyperopt times
3. Once the fit is finished, output the PDF grid and accompanying files
- Parameters
genrep (bool) – Whether or not to generate MC replicas. (Only used for checks)
data (validphys.core.DataGroupSpec) – containing the datasets to be included in the fit. (Only used for checks)
experiments_data (list[validphys.core.DataGroupSpec]) – similar to data but now passed as argument to ModelTrainer
replicas_nnseed_fitting_data_dict (list[tuple]) – list with one element per replica (typically just one) to be fitted. Each element is a tuple containing the replica number, the nnseed and the fitted_data_dict containing all of the data and metadata for each group of datasets which is to be fitted.
posdatasets_fitting_pos_dict (list[dict]) – list of dictionaries containing all data and metadata for each positivity dataset
integdatasets_fitting_integ_dict (list[dict]) – list of dictionaries containing all data and metadata for each integrability dataset
theoryid (validphys.core.TheoryIDSpec) – Theory which is used to generate theory predictions from model during fit. Object also contains some metadata on the theory settings.
fiatlux (dict) – dictionary containing the params needed from LuxQED
basis (list[dict]) – preprocessing information for each flavour to be fitted.
fitbasis (str) – Valid basis in which the fit is to be run. Available bases can be found in validphys.pdfbases.
sum_rules (str) – Whether to impose sum rules in the fit. By default set to True="ALL"
parameters (dict) – Mapping containing parameters which define the network architecture/fitting methodology.
replica_path (pathlib.Path) – path to the output of this run
output_path (str) – name of the fit
save (None, str) – model file to which weights will be saved, used in conjunction with load.
load (None, str) – model file from which to load weights.
hyperscanner (dict) – dictionary containing the details of the hyperscanner
hyperopt (int) – if given, number of hyperopt iterations to run
kfold_parameters (None, dict) – dictionary with kfold settings used in hyperopt.
tensorboard (None, dict) – mapping containing tensorboard settings if it is to be used. By default it is None and tensorboard is not enabled.
debug (bool) – activate some debug options
maxcores (int) – maximum number of (logical) cores that the backend should be aware of
double_precision (bool) – whether to use double precision
parallel_models (bool) – whether to run models in parallel
n3fit.scaler module
n3fit.stopping module
Module containing the classes related to the stopping algorithm
In this module there are the following classes:
- FitState: this class contains the information of the fit for a given point in history
- FitHistory: this class contains the information necessary to reset the state of the fit to the point at which the history was saved, i.e., a list of FitStates
- Stopping: this class monitors the chi2 of the validation and training sets and decides when to stop
- Positivity: decides whether a given point fulfills the positivity conditions
- Validation: controls the NNPDF cross-validation algorithm
Note
There are situations in which the validation set is empty; in those cases the training set is used as the validation set. This implies several changes in the behaviour of this class, as the training chi2 will now be monitored for stability.
In order to parse the set of loss functions coming from the backend MetaModel, the function parse_losses relies on the fact that they are all suffixed with _loss; the validation case, instead, is suffixed with val_loss. In the particular case in which both the training and validation models correspond to the same backend MetaModel, only the _loss suffix can be found. This is taken into account by the Stopping class, which will tell Validation that no validation set was found and that the training set is to be used instead.
- class n3fit.stopping.FitHistory(tr_ndata, vl_ndata)[source]
Bases:
object
Keeps a list of FitState items holding the full chi2 history of the fit.
- Parameters
- class n3fit.stopping.FitState(training_info, validation_info, training_loss=None)[source]
Bases:
object
Holds the state of the chi2 during the fit, for all replicas and one epoch
Note: the training chi2 is computed before the update of the weights, so it is the chi2 that informed the update corresponding to this state. The validation chi2, instead, is computed after the update of the weights.
- Parameters
- property all_tr_chi2
- property all_vl_chi2
- property tr_chi2
- property tr_loss
Return the total training loss as it comes from the info dictionaries
- tr_ndata = None
- property vl_chi2
- property vl_loss
Return the total validation loss as it comes from the info dictionaries
- vl_ndata = None
- vl_suffix = None
- class n3fit.stopping.Positivity(threshold, positivity_sets)[source]
Bases:
object
Controls the positivity requirements.
In order to check whether positivity passes, it checks the history of the fit, since the fit includes the positivity sets. If the sum of all positivity-set losses is above a certain value, the model is not accepted and the training continues.
- Parameters
- check_positivity(history_object)[source]
This function receives a history object and loops over the positivity_sets to check the value of the positivity loss.
If the positivity loss is above the threshold, positivity fails; otherwise it passes. It returns an array of booleans, which are True if positivity passed:
history_object[key_loss] < self.threshold
- history_object: dict
dictionary of entries in the form {‘name’: loss}, output of a MetaModel .fit()
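An illustrative sketch of the condition above; the loss names and values are made up, the real keys are determined by the positivity_sets:
>>> history = {"POSF2U_loss": 5e-7, "NMC_loss": 1.2}  # illustrative entries
>>> threshold = 1e-6
>>> [loss < threshold for name, loss in history.items() if name.startswith("POS")]
[True]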
- class n3fit.stopping.Stopping(validation_model, all_data_dicts, pdf_model, threshold_positivity=1e-06, total_epochs=0, stopping_patience=7000, threshold_chi2=10.0, dont_stop=False)[source]
Bases:
object
Driver of the stopping algorithm
Note, if the total number of points in the validation dictionary is None, it is assumed the validation_model actually corresponds to the training model.
- Parameters
validation_model (n3fit.backends.MetaModel) – the model with the validation mask applied (and compiled with the validation data and covmat)
all_data_dicts (dict) – list containing all dictionaries with all information about the experiments/validation/regularizers/etc to be parsed by Stopping
pdf_model (n3fit.backends.MetaModel) – pdf_model being trained
threshold_positivity (float) – maximum value allowed for the sum of all positivity losses
total_epochs (int) – total number of epochs
stopping_patience (int) – how many epochs to wait for the validation loss to improve
threshold_chi2 (float) – maximum value allowed for chi2
dont_stop (bool) – don't care about early stopping
- chi2exps_json(i_replica=0, log_each=100)[source]
Returns an apt-for-json dictionary with the status of the fit every log_each epochs. It reports the total training loss and the validation loss broken down by experiment.
- property e_best_chi2
Epoch of the best chi2; if there is no best epoch, returns the last one
- evaluate_training(training_model)[source]
Given the training model, evaluates the model and parses the chi2 of the training datasets
- Parameters
training_model (n3fit.backends.MetaModel) – an object implementing the evaluate function
- Returns
tr_chi2 – chi2 of the given
training_model
- Return type
- make_stop()[source]
Convenience method to set the stop_now flag and reload the history to the point of the best model if any
- monitor_chi2(training_info, epoch, print_stats=False)[source]
Function to be called at the end of every epoch. Stores the total chi2 of the training set as well as the total chi2 of the validation set. If the training chi2 is below a certain threshold, stores the state of the model which gave the minimum chi2 as well as the epoch in which it occurred. If the epoch is a multiple of save_all_each then the per-experiment chi2 is also saved.
Returns True if the run seems ok and False if a NaN is found
- property positivity_status
Returns POS_PASS if positivity passes, or POS_VETO if it doesn't, for each replica
- print_current_stats(epoch, fitstate)[source]
Prints the fitstate validation chi2 for every experiment, the current total training loss, and the validation loss after the training step
- property stop_epoch
Epoch in which the fit is stopped
- stop_here()[source]
Returns the stopping status. If dont_stop is set, always returns False (i.e., never stop)
- property vl_chi2
Current validation chi2
- n3fit.stopping.parse_losses(history_object, data, suffix='loss')[source]
Receives an object containing the chi2; usually a history object, but it can also come in the form of a dictionary.
It loops over the dictionary and uses the npoints_data dictionary to normalize the chi2, returning a tuple (total_loss, dict_chi2)
- Parameters
- Returns
total_loss (float) – Total value for the loss
dict_chi2 (dict) – dictionary of {‘expname’ : loss }
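Example (an illustrative call, assuming per-experiment losses suffixed with _loss as described above; names and numbers are made up):
>>> import numpy as np
>>> from n3fit.stopping import parse_losses
>>> history = {"NMC_loss": np.array([120.0]), "SLAC_loss": np.array([60.0])}
>>> ndata = {"NMC": np.array([100]), "SLAC": np.array([50])}
>>> total_loss, dict_chi2 = parse_losses(history, ndata)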
- n3fit.stopping.parse_ndata(all_data)[source]
Parses the list of dictionaries received from ModelTrainer into dictionaries containing only the name of the experiments and the number of points per replica
- Returns
tr_ndata – dictionary of {‘exp’ : np.ndarray}
vl_ndata – dictionary of {‘exp’ : np.ndarray}
pos_set – list of the names of the positivity sets
Note: if there is no validation (total number of val points == 0) then vl_ndata will point to tr_ndata
n3fit.stopwatch module
StopWatch module for computing the time performance of n3fit
- class n3fit.stopwatch.StopWatch[source]
Bases:
object
This class works as a stopwatch: upon initialization it registers the initialization time as start, and times can be registered by running the .register_times(tag) method.
When the stopwatch is stopped (with the .stop() method) it will generate two dictionaries with the relative times between every registered time and the starting point.
- get_times(tag=None)[source]
Return a tuple with the tag time of the watch; defaults to the starting time
- Parameters
tag – if none, defaults to start_key
- Return type
(tag cpu time, tag wall time)
- register_ref(tag, reference)[source]
Register an event named tag and register a request to compute also the time difference between this event and reference
- start_key = 'start'
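Example (a minimal usage sketch based on the methods documented above; the tag name is arbitrary):
>>> from n3fit.stopwatch import StopWatch
>>> watch = StopWatch()                  # registers the starting time
>>> watch.register_times("data_loaded")  # illustrative tag
>>> cpu_time, wall_time = watch.get_times("data_loaded")
>>> watch.stop()                         # generates the dictionaries of relative times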
n3fit.vpinterface module
n3fit interface to validphys
Example
>>> import numpy as np
>>> from n3fit.vpinterface import N3PDF
>>> from n3fit.model_gen import generate_pdf_model, ReplicaSettings
>>> from validphys.pdfgrids import xplotting_grid
>>> fake_fl = [{'fl' : i, 'largex' : [0,1], 'smallx': [1,2]} for i in ['u', 'ubar', 'd', 'dbar', 'c', 's', 'sbar', 'g']]
>>> fake_x = np.linspace(1e-3,0.8,3)
>>> rps = [ReplicaSettings(nodes=[8], activations=["linear"], seed=4)]*4
>>> pdf_model = generate_pdf_model(rps, flav_info=fake_fl, fitbasis='FLAVOUR')
>>> n3pdf = N3PDF(pdf_model.split_replicas())
>>> res = xplotting_grid(n3pdf, 1.6, fake_x)
>>> res.grid_values.error_members().shape
(4, 8, 3)
# (nreplicas, flavours, x-grid)
- class n3fit.vpinterface.N3LHAPDFSet(name, pdf_models, Q=1.65)[source]
Bases:
LHAPDFSet
Extension of LHAPDFSet using n3fit models
- grid_values(flavours, xarr, qmat=None)[source]
- Parameters
flavours (numpy.ndarray) – flavours to compute
xarr (numpy.ndarray) – x-points to compute, dim: (xgrid_size,)
qmat (numpy.ndarray) – q-points to compute (not used by n3fit, used only for shaping purposes)
- Returns
numpy.ndarray – array of shape (replicas, flavours, xgrid_size, qmat) with the values of the pdf_model(s) evaluated in xarr
- class n3fit.vpinterface.N3PDF(pdf_models, fit_basis=None, name='n3fit', Q=1.65)[source]
Bases:
PDF
Creates an N3PDF object, an extension of the validphys PDF object, to perform calculations with an n3fit-generated model.
- Parameters
- get_preprocessing_factors(replica=None)[source]
Loads the preprocessing alpha and beta arrays from the trained PDF model. If a fit_basis in the format of the n3fit runcards is given, it will be used to generate a new dictionary with the names, the exponents and whether they are trainable; otherwise outputs an Nx2 array where [:,0] are the alphas and [:,1] the betas
- class n3fit.vpinterface.N3Stats(data)[source]
Bases:
MCStats
The PDFs from n3fit are MC PDFs; however, since there is no grid, the CV has to be computed manually
- n3fit.vpinterface.compute_arclength(self, q0=1.65, basis='evolution', flavours=None)[source]
Given the layer with the fit basis, computes the arc length using the corresponding validphys action
- Parameters
Example
>>> from n3fit.vpinterface import N3PDF, compute_arclength
>>> from n3fit.model_gen import generate_pdf_model, ReplicaSettings
>>> fake_fl = [{'fl' : i, 'largex' : [0,1], 'smallx': [1,2]} for i in ['u', 'ubar', 'd', 'dbar', 'c', 'g', 's', 'sbar']]
>>> rps = [ReplicaSettings(nodes=[8], activations=["linear"], seed=0)]
>>> pdf_model = generate_pdf_model(rps, flav_info=fake_fl, fitbasis="FLAVOUR")
>>> n3pdf = N3PDF(pdf_model.split_replicas())
>>> res = compute_arclength(n3pdf)
- n3fit.vpinterface.compute_phi(n3pdf, experimental_data)[source]
Compute phi using validphys functions.
For more info on how phi is calculated, see Eq.(4.6) of 10.1007/JHEP04(2015)040
- Parameters
n3pdf (n3fit.vpinterface.N3PDF) – N3PDF instance defining the n3fit multi-replica PDF
experimental_data (List[validphys.core.DataGroupSpec]) – List of experiment group datasets as DataGroupSpec instances
- Returns
sum_phi – Sum of phi over all experimental group datasets
- Return type
float
Example
>>> from n3fit.vpinterface import N3PDF, compute_phi
>>> from n3fit.model_gen import generate_pdf_model, ReplicaSettings
>>> from validphys.loader import Loader
>>> fake_fl = [{'fl' : i, 'largex' : [0,1], 'smallx': [1,2]} for i in ['u', 'ubar', 'd', 'dbar', 'c', 'g', 's', 'sbar']]
>>> rps = [ReplicaSettings(nodes=[8], activations=["linear"], seed=i) for i in [0,1]]
>>> pdf_model = generate_pdf_model(rps, flav_info=fake_fl, fitbasis="FLAVOUR")
>>> n3pdf = N3PDF(pdf_model.split_replicas())
>>> ds = Loader().check_dataset("NMC_NC_NOTFIXED_P_EM-SIGMARED", theoryid=40_000_000, cuts="internal", variant="legacy")
>>> data_group_spec = Loader().check_experiment("My DataGroupSpec", [ds])
>>> phi = compute_phi(n3pdf, [data_group_spec])
- n3fit.vpinterface.integrability_numbers(n3pdf, q0=1.65, flavours=None)[source]
Compute the integrability numbers for the current PDF using the corresponding validphys action
- Parameters
- Returns
Value for the integrability for each of the flavours
- Return type
np.array(float)
Example
>>> from n3fit.vpinterface import N3PDF, integrability_numbers
>>> from n3fit.model_gen import generate_pdf_model, ReplicaSettings
>>> fake_fl = [{'fl' : i, 'largex' : [0,1], 'smallx': [1,2]} for i in ['u', 'ubar', 'd', 'dbar', 'c', 'g', 's', 'sbar']]
>>> rps = [ReplicaSettings(nodes=[8], activations=["linear"], seed=0)]
>>> pdf_model = generate_pdf_model(rps, flav_info=fake_fl, fitbasis="FLAVOUR")
>>> n3pdf = N3PDF(pdf_model.split_replicas())
>>> res = integrability_numbers(n3pdf)