n3fit.backends.keras_backend package

Submodules

n3fit.backends.keras_backend.MetaLayer module

The class MetaLayer is an extension of the backend Layer class with a number of methods and helpers that facilitate writing new custom layers, in such a way that the new custom layers don't need to rely on anything backend-dependent.

In other words, if you want to implement a new layer and need functions not included here, it is better to add a new method which is just a call to the relevant backend-dependent function. For instance, np_to_tensor is just a call to K.constant.

class n3fit.backends.keras_backend.MetaLayer.MetaLayer(*args, **kwargs)[source]

Bases: Layer

This MetaLayer class must contain all backend-dependent functions.

In order to write a custom Keras layer you usually need to override:
  • __init__

  • meta_call

builder_helper(name, kernel_shape, initializer, trainable=True, constraint=None)[source]

Creates a kernel that should be saved as an attribute of the caller class.

Parameters
  • name – name of the kernel

  • kernel_shape – tuple with its shape

  • initializer – one of the initializers from this class (actually, any keras initializer)

  • trainable – whether the kernel is trainable

  • constraint – one of the constraints from this class (actually, any keras constraint)
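
As an illustration, a minimal custom layer might look as follows. This is only a sketch: the layer and its weight are hypothetical, and we assume, per the override list above, that MetaLayer routes the standard layer call through meta_call:

    from n3fit.backends.keras_backend.MetaLayer import MetaLayer

    class Scaler(MetaLayer):
        """Hypothetical toy layer multiplying its input by a learned scale."""

        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            init = MetaLayer.select_initializer("random_uniform", seed=42)
            # builder_helper registers the weight with the backend;
            # we keep the returned kernel as an attribute for meta_call
            self.scale = self.builder_helper("scale", (1,), init, trainable=True)

        def meta_call(self, x):
            return self.scale * x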

get_weight_by_name(weight_name, internal_count=0)[source]

Returns a weight of the layer by name; returns None if the layer does not include the named weight.

Note that internally the weights of a layer are prefixed by the name of the layer; this prefix should not be added to the input of this function. I.e., if the internal name is “layer/weight:0”, the argument to this method should be just “weight”.

Parameters

weight_name (str) – Name of the weight

static init_constant(value)[source]
initializers = {'glorot_normal': (<class 'keras.src.initializers.initializers.GlorotNormal'>, {}), 'glorot_uniform': (<class 'keras.src.initializers.initializers.GlorotUniform'>, {}), 'random_uniform': (<class 'keras.src.initializers.initializers.RandomUniform'>, {'minval': -0.5, 'maxval': 0.5})}
static select_initializer(ini_name, seed=None, **kwargs)[source]

Selects one of the initializers (i.e., one that actually initializes, not a constant). All of them should accept a seed.

weight_inits = []

n3fit.backends.keras_backend.MetaModel module

MetaModel class

Extension of the backend Model class containing some wrappers in order to absorb other backend-dependent calls.

class n3fit.backends.keras_backend.MetaModel.MetaModel(*args, **kwargs)[source]

Bases: Model

The model wraps keras.Model and adds some custom behaviour. Most notably it allows supplying constant values for input arguments, which are used when training and making predictions with the model (note that constants need to be explicitly registered as inputs, see https://github.com/keras-team/keras/issues/11912). These inputs can be passed in the input_values parameter, or gathered from the tensor_content attribute of the input_tensors, which is set automatically when using the numpy_to_input function from n3fit.backends.keras_backend.operations.

Parameters
  • input_tensors (dict[Any, tensorflow.keras.layers.Input]) – Input layer

  • output_tensors (tensorflow.keras.layers.Layer) – Output layer

  • input_values (dict[Any, array_like]) – Constant values for the input layer, to be supplied when making predictions with the model.

  • **kwargs – keyword arguments to pass directly to Model
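
A minimal construction sketch (the grid shape and the single Dense layer are illustrative only):

    import numpy as np
    from tensorflow.keras.layers import Dense
    from n3fit.backends.keras_backend.MetaModel import MetaModel
    from n3fit.backends.keras_backend.operations import numpy_to_input

    # numpy_to_input attaches the array to the Input layer as tensor_content,
    # so MetaModel will feed it automatically when fitting or predicting
    xgrid = np.linspace(1e-3, 1.0, 20).reshape(-1, 1).astype("float32")
    x_input = numpy_to_input(xgrid, name="xgrid")
    output = Dense(1)(x_input)  # any graph built on top of the input
    model = MetaModel({"xgrid": x_input}, output)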

accepted_optimizers = {'Adadelta': (<class 'keras.src.optimizers.adadelta.Adadelta'>, {'learning_rate': 1.0, 'clipnorm': 1.0}), 'Adagrad': (<class 'keras.src.optimizers.adagrad.Adagrad'>, {'clipnorm': 1.0}), 'Adam': (<class 'keras.src.optimizers.adam.Adam'>, {'learning_rate': 0.01, 'clipnorm': 1.0}), 'Adamax': (<class 'keras.src.optimizers.adamax.Adamax'>, {'clipnorm': 1.0}), 'Amsgrad': (<class 'keras.src.optimizers.adam.Adam'>, {'learning_rate': 0.01, 'amsgrad': True, 'clipnorm': 1.0}), 'Nadam': (<class 'keras.src.optimizers.nadam.Nadam'>, {'learning_rate': 0.001, 'clipnorm': 1.0}), 'RMSprop': (<class 'keras.src.optimizers.rmsprop.RMSprop'>, {'learning_rate': 0.01, 'clipnorm': 1.0}), 'SGD': (<class 'keras.src.optimizers.sgd.SGD'>, {'learning_rate': 0.01, 'momentum': 0.0, 'nesterov': False, 'clipnorm': 1.0})}
apply_as_layer(x)[source]

Apply the model as a layer

compile(optimizer_name='RMSprop', learning_rate=None, loss=None, target_output=None, clipnorm=None, **kwargs)[source]

Compile the model given an optimizer and a list of loss functions. The optimizer must be one of those implemented in the optimizer attribute of this class.

Options:
  • A learning rate and a list of target outputs can be defined. These will be passed down to the optimizer.

  • A target_output can be defined. If done in this way (for instance because we know the target data will be the same for the whole fit), the data will be compiled together with the model and it won't be necessary to input it again when calling the perform_fit or compute_losses methods.

Parameters
  • optimizer_name (str) – string defining the optimizer to be used

  • learning_rate (float) – learning rate of the optimizer (if accepted as an argument; if not, it will be ignored)

  • loss (list) – list of loss functions to be passed to the model

  • target_output (list) – list of outputs to compare the results to during fitting/evaluation; if given, further calls to fit/evaluate must be made with y = None.
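
For instance (optimizer settings are illustrative, and y_data stands for a hypothetical target array):

    model.compile(optimizer_name="Adam", learning_rate=1e-2)

    # or fix the target data at compile time, so that later calls to
    # perform_fit / compute_losses need not be given the data again
    model.compile(optimizer_name="Adam", target_output=[y_data])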

compute_losses()[source]

This function is equivalent to the model evaluate(x,y) method of most TensorFlow models, which returns a dictionary of losses per output layer. The losses reported by the evaluate method for n3fit are, however, summed over replicas. Instead, the loss we are interested in is usually the output of the model (i.e., predict). This function therefore generates a dict of partial losses of the model separated per replica; i.e., the output for the experiment ‘LHC_exp’ will be an array of Nrep elements.

Returns

a dictionary with all partial losses of the model

Return type

dict
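
Usage sketch (the key name is illustrative):

    partial_losses = model.compute_losses()
    # e.g. {'LHC_exp': array of Nrep per-replica losses, ...}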

get_layer_re(regex)[source]

Get all layers matching the given regular expression

get_replica_weights(i_replica)[source]

Get the weights of replica i_replica.

This assumes that the only weights are in the layer types defined as the constants

NN_LAYER_ALL_REPLICAS & PREPROCESSING_LAYER_ALL_REPLICAS

Parameters

i_replica (int) –

Returns

dictionary with the weights of the replica

Return type

dict

load_identical_replicas(model_file)[source]

From a single replica model, load the same weights into all replicas.

property num_replicas
perform_fit(x=None, y=None, epochs=1, **kwargs)[source]

Performs forward (and backward) propagation for the model for a given number of epochs.

The output of this function consists of a dictionary that maps the names of the metrics of the model (the loss functions) to the partial losses.

If the model was compiled with input and output data, they will not be passed through. In this case, the number of epochs will be set to 1 by default.

ex:

{‘loss’: [100], ‘dataset_a_loss1’ : [67], ‘dataset_2_loss’: [33]}

Returns

loss_dict – a dictionary with all partial losses of the model

Return type

dict

predict(x=None, **kwargs)[source]

Call super().predict with the right input arguments

reset_layer_weights_to(layer_names, reference_vals)[source]

Set weights for the given layer to the given reference values

The reference_vals list must be of the same size as layer_names, and it must consist of numpy arrays that align exactly with the reference layer weights. In the special case of layers with a single weight, a scalar is also admitted as input.

Parameters
  • layer_names (list) – list of names of the layers whose weights will be updated

  • reference_vals (list(float) or list(arrays)) – list of scalars or arrays to assign to each layer

save_weights(file)[source]
Compatibility function for:
  • tf < 2.16, keras < 3: the save_format argument is needed for h5 files

  • tf >= 2.16, keras >= 3: the save format is deduced from the file extension

In both cases, the final weights are copied to the given file path.

set_masks_to(names, val=0.0)[source]

Set all masks to the selected value. Masks in MetaModel should be named {name}_mask.

Masks are layers with one single weight (shape=(1,)) that multiplies the input.

Parameters
  • names (list) – list of masks to look for

  • val (float) – selected value of the mask

set_replica_weights(weights, i_replica=0)[source]

Set the weights of replica i_replica.

This assumes that the only weights are in layers called NN_{i_replica} and preprocessing_factor_{i_replica}

Parameters
  • weights (dict) – dictionary with the weights of the replica

  • i_replica (int) – the replica number to set, defaulting to 0
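
Together with get_replica_weights, this allows copying weights between replicas, e.g.:

    # copy the weights of replica 0 into replica 3 (indices illustrative)
    weights = model.get_replica_weights(0)
    model.set_replica_weights(weights, i_replica=3)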

split_replicas()[source]

Split the single multi-replica model into a list of separate single replica models, maintaining the current state of the weights.

Returns

list of single replica models

Return type

list

n3fit.backends.keras_backend.MetaModel.get_layer_replica_weights(layer, i_replica: int)[source]

Get the weights for the given single replica i_replica, from a layer that contains the weights of all the replicas.

Note that the layer could be a complete NN with many separate sub-layers, each of which contains weights for all replicas together. This function separates the per-replica weights and returns the list of weights as if the input layer were made of _only_ replica i_replica.

Parameters
  • layer (MetaLayer) – the layer to get the weights from

  • i_replica (int) – the replica number

Returns

weights – list of weights for the replica

Return type

list

n3fit.backends.keras_backend.MetaModel.is_stacked_single_replicas(layer)[source]

Check if the layer consists of stacked single replicas (Only happens for NN layers), to determine how to extract single replica weights.

Parameters

layer (MetaLayer) – the layer to check

Returns

True if the layer consists of stacked single replicas

Return type

bool

n3fit.backends.keras_backend.MetaModel.set_layer_replica_weights(layer, weights, i_replica: int)[source]

Set the weights for the given single replica i_replica. When the input layer contains weights for many replicas, ensures that only those corresponding to replica i_replica are updated.

Parameters
  • layer (MetaLayer) – the layer to set the weights for

  • weights (list) – list of weights for the replica

  • i_replica (int) – the replica number

n3fit.backends.keras_backend.base_layers module

This module defines custom base layers to be used by the n3fit Neural Network. These layers can use the Keras standard set of activation functions or implement their own.

For a layer to be used by n3fit it should be contained in the layers dictionary defined below. This dictionary has the following structure:

‘name of the layer’ : ( Layer_class, {dictionary of arguments: defaults} )

In order to add custom activation functions, they must be added to the custom_activations dictionary with the following structure:

‘name of the activation’ : function

The names of the layer and the activation function are the ones to be used in the n3fit runcard.
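
As a sketch, a new activation could be registered as follows. The activation itself is hypothetical, and we assume custom_activations is importable as a module attribute:

    import tensorflow as tf
    from n3fit.backends.keras_backend import base_layers

    def shifted_tanh(x):
        # hypothetical activation: tanh shifted to positive values
        return tf.tanh(x) + 1.0

    base_layers.custom_activations["shifted_tanh"] = shifted_tanh
    # "shifted_tanh" can now be referenced by name in the n3fit runcard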

class n3fit.backends.keras_backend.base_layers.Dense(*args, **kwargs)[source]

Bases: Dense, MetaLayer

n3fit.backends.keras_backend.base_layers.LSTM_modified(**kwargs)[source]

LSTM expects input of shape (sample, timestep, features), so we need to reshape the input.

n3fit.backends.keras_backend.base_layers.base_layer_selector(layer_name, **kwargs)[source]

Given a layer name, looks for it in the layers dictionary and returns an instance.

The layer dictionary defines a number of defaults but they can be overwritten/enhanced through kwargs

Parameters
  • layer_name – str with the name of the layer

  • **kwargs – extra optional arguments to pass to the layer (beyond their defaults)
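
For instance, assuming the layers dictionary contains a 'dense' entry:

    from n3fit.backends.keras_backend.base_layers import base_layer_selector

    # defaults come from the layers dictionary, overridden through kwargs
    layer = base_layer_selector("dense", units=8, activation="tanh")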

n3fit.backends.keras_backend.base_layers.dense_per_flavour(basis_size=8, kernel_initializer='glorot_normal', **dense_kwargs)[source]

Generates a list of layers which can take as input either one single layer or a list of layers of the same size. If taking one single layer, that single layer will be the input of every layer in the list. If taking a list of layers of the same size, each layer in the output list will take as input the layer in the corresponding position of the input list.

Note that, if the initializer is seeded, it should be a list where the seed is different for each element.

I.e., if basis_size is 3 and the input is one single layer A, the output will be:

[B1(A), B2(A), B3(A)]

if taking, instead, a list [A1, A2, A3], the output will be:

[B1(A1), B2(A2), B3(A3)]

n3fit.backends.keras_backend.base_layers.leaky_relu(x)[source]

Computes the Leaky ReLU activation function

n3fit.backends.keras_backend.base_layers.modified_tanh(x)[source]

A non-saturating version of the tanh function

n3fit.backends.keras_backend.base_layers.regularizer_selector(reg_name, **kwargs)[source]

Given a regularizer name, looks it up in the regularizer dictionary and returns an instance.

The regularizer dictionary defines defaults for regularizers but these can be overwritten by supplying kwargs

Parameters
  • reg_name – str with the name of the regularizer

  • **kwargs – extra optional arguments to pass to the regularizer

n3fit.backends.keras_backend.base_layers.square_activation(x)[source]

Squares the input

n3fit.backends.keras_backend.callbacks module

Callbacks to be used during training

The callbacks defined in this module can be passed to the callbacks argument of the perform_fit method as a list.

For the most typical usage (on_epoch_end), they must take as input an epoch number and a log of the partial losses.

class n3fit.backends.keras_backend.callbacks.LagrangeCallback(datasets, multipliers, update_freq=100)[source]

Bases: Callback

Updates the given datasets with their respective multipliers every update_freq epochs.

Parameters
  • datasets (list(str)) – List of the names of the datasets to be trained

  • multipliers (list(float)) – List of multipliers to be applied

  • update_freq (int) – how often (in number of epochs) the positivity lambda is updated
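
Usage sketch (dataset name, multiplier and epoch counts are illustrative):

    from n3fit.backends.keras_backend.callbacks import LagrangeCallback

    callback = LagrangeCallback(["POS_dataset"], [1.05], update_freq=100)
    model.perform_fit(epochs=1000, callbacks=[callback])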

on_epoch_end(epoch, logs=None)[source]

Function to be called at the end of every epoch

on_train_begin(logs=None)[source]

Save an instance of all relevant layers

class n3fit.backends.keras_backend.callbacks.StoppingCallback(stopping_object, log_freq=100)[source]

Bases: Callback

Given a stopping_object, the callback will monitor the validation chi2 and will stop the training when the conditions given by stopping_object are met.

Parameters
  • stopping_object (Stopping) – instance of Stopping which controls when the fit should stop

  • log_freq (int) – how often (in number of epochs) the print_stats argument of stopping_object will be set to True

on_epoch_end(epoch, logs=None)[source]

Function to be called at the end of every epoch. Every log_freq epochs, the monitor_chi2 method of the stopping_object will be called and the validation loss (broken down by experiment) will be logged. For the training model, only the total loss is logged during the training.

on_train_end(logs=None)[source]

The training can be finished either by the stopping_object or by TensorFlow when the number of epochs reaches the maximum. In the second case, the stopping has to be set manually.

class n3fit.backends.keras_backend.callbacks.TimerCallback(count_range=100)[source]

Bases: Callback

Callback to be used during debugging to time the fit

on_epoch_end(epoch, logs=None)[source]

At the end of every epoch it checks the time

on_train_end(logs=None)[source]

Print the results

n3fit.backends.keras_backend.callbacks.gen_tensorboard_callback(log_dir, profiling=False, histogram_freq=0)[source]

Generate tensorboard logging details at log_dir. Metrics of the system are saved each epoch. If the profiling flag is set to True, it will also attempt to save profiling data.

Note that the usage of this callback can hurt performance.

Parameters
  • log_dir (str) – Directory in which to save tensorboard details

  • profiling (bool) – Whether or not to save profiling information (default False)
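
Usage sketch (log directory illustrative):

    from n3fit.backends.keras_backend.callbacks import gen_tensorboard_callback

    tb_callback = gen_tensorboard_callback("tb_logs", profiling=True)
    model.perform_fit(epochs=100, callbacks=[tb_callback])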

n3fit.backends.keras_backend.constraints module

Implementations of weight constraints for initializers

class n3fit.backends.keras_backend.constraints.MinMaxWeight(min_value, max_value, **kwargs)[source]

Bases: MinMaxNorm

Small override of the Keras MinMaxNorm class so as not to look at the absolute value. This version looks at the sum instead of the norm.

n3fit.backends.keras_backend.internal_state module

Library of functions that modify the internal state of Keras/Tensorflow

n3fit.backends.keras_backend.internal_state.clear_backend_state()[source]

Clears the state of the backend. Internally, it cleans the Keras/TF internal state, freeing the layer names and unused memory.

n3fit.backends.keras_backend.internal_state.get_physical_gpus()[source]

Retrieve a list of all physical GPU devices available in the system.

Returns

A list of TensorFlow physical devices of type ‘GPU’.

Return type

list

n3fit.backends.keras_backend.internal_state.set_eager(flag=True)[source]

Set eager mode on or off, for very slow but fine-grained debugging. Call this function as early as possible, ideally right after the first tf import.

n3fit.backends.keras_backend.internal_state.set_initial_state(debug=False, external_seed=None, max_cores=None, double_precision=False)[source]

This function sets the initial internal state for the different components of n3fit.

In debug mode it seeds all seedable libraries, which include:
  • numpy

  • hyperopt

  • python random

  • tensorflow

The tensorflow/keras part is based on Keras’ own guide (https://keras.io/getting_started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development). Note that you might also need to set PYTHONHASHSEED=0 (outside the program) for full reproducibility.

To ensure reproducibility in debug mode, if the number of cores is not given, it will be set to 1 (with 1 thread per core)

Parameters
  • debug (bool) – If this is a debug run, the initial seeds are fixed

  • external_seed (int) – Force a seed into numpy, random and tf

  • max_cores (int) – Maximum number of cores (as many as physical cores by default)

  • double_precision (bool) – If set, use float64 as the default float type
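
For example, a reproducible debug run could be set up as:

    from n3fit.backends.keras_backend.internal_state import set_initial_state

    # call as early as possible: fixed seeds, single core, single thread
    set_initial_state(debug=True, external_seed=42, max_cores=1)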

n3fit.backends.keras_backend.internal_state.set_number_of_cores(max_cores=None, max_threads=None)[source]

Set the maximum number of cores and threads per core to be used by TF. It defaults to the number of physical cores (and will never surpass it even if max_cores is above)

Parameters
  • max_cores (int) – Maximum number of cores to be used

  • max_threads (int) – Maximum number of threads per core

n3fit.backends.keras_backend.multi_dense module

Extend the Dense layer from Keras to act on an arbitrary number of replicas. This extension provides a performance improvement with respect to the original Dense layer from Keras even in the single replica case.

class n3fit.backends.keras_backend.multi_dense.MultiDense(*args, **kwargs)[source]

Bases: Dense

Dense layer for multiple replicas at the same time.

For the first layer in the network (for which is_first_layer should be set to True), the input shape is (batch_size, gridsize, features), still without a replica axis. In this case, this layer acts as a stack of single dense layers, each with its own kernel and bias, acting on the same input.

For subsequent layers, the input already contains multiple replicas, and the shape is (batch_size, replicas, gridsize, features). In this case, the input for each replica is multiplied by its own slice of the kernel.

Weights are initialized using a replica_seeds list of seeds, and are identical to the weights of a list of single dense layers with the same replica_seeds.

Parameters
  • replica_seeds (List[int]) – List of seeds per replica for the kernel initializer.

  • kernel_initializer (Initializer) – Initializer class for the kernel.

  • is_first_layer (bool (default: False)) – Whether this is the first MultiDense layer in the network, and so the input shape does not contain a replica axis.

  • base_seed (int (default: 0)) – Base seed for the single replica initializer to which the replica seeds are added.
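
A shape-oriented sketch (sizes illustrative; the kernel initializer is assumed to fall back to a default when not given):

    import numpy as np
    import tensorflow as tf
    from n3fit.backends.keras_backend.multi_dense import MultiDense

    # 3 replicas acting on a grid of 10 points with 1 input feature
    layer = MultiDense(units=8, replica_seeds=[1, 2, 3], is_first_layer=True)
    x = tf.constant(np.random.rand(1, 10, 1).astype("float32"))
    out = layer(x)  # shape (1, 3, 10, 8): batch, replicas, gridsize, units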

build(input_shape)[source]

Creates the variables of the layer (for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call. It is invoked automatically before the first execution of call().

This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).

Parameters

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(inputs)[source]

Compute output of shape (batch_size, replicas, gridsize, units).

For the first layer, this is equivalent to applying each replica separately and concatenating along the last axis. If the input already contains multiple replica outputs, it is equivalent to applying each replica to its corresponding input.

compute_output_shape(input_shape)[source]

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Parameters

input_shape – Shape tuple (tuple of integers) or tf.TensorShape, or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns

A tf.TensorShape instance or structure of tf.TensorShape instances.

get_config()[source]

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns

Python dictionary.

class n3fit.backends.keras_backend.multi_dense.MultiInitializer(single_initializer: Initializer, replica_seeds: List[int], base_seed: int)[source]

Bases: Initializer

Multi replica initializer that exactly replicates a stack of single replica initializers.

Weights are stacked on the first axis, and per replica seeds are added to a base seed of the given single replica initializer.

Parameters
  • single_initializer (Initializer) – Initializer class for the kernel.

  • replica_seeds (List[int]) – List of seeds per replica for the kernel initializer.

  • base_seed (int) – Base seed for the single replica initializer to which the replica seeds are added.

n3fit.backends.keras_backend.operations module

This module contains the list of operations that can be used within the call method of the n3fit layers as well as operations that can act on layers.

This includes an implementation of the NNPDF operations on fktables (via the mapping c_to_py_fun) as Keras Lambda layers.

Tensor operations are compiled through the @tf.function decorator for optimization

The rest of the operations in this module are divided into four categories:

numpy to tensor:

Operations that take a numpy array and return a tensorflow tensor

layer to layer:

Operations that take a layer and return another layer

tensor to tensor:

Operations that take a tensor and return a tensor

layer generation:

Instantiate a layer to be applied by the calling function

Some of these are just aliases to the backend (TensorFlow or Keras) operations. Note that tensor operations can also be applied to layers, as the output of a layer is a tensor; equally, operations are automatically converted to layers when used as such.

n3fit.backends.keras_backend.operations.as_layer(operation, op_args=None, op_kwargs=None, **kwargs)[source]

Wrap any operation as a keras layer

Note that the layer call method takes only one argument; therefore all extra arguments defining the operation must be given as part of op_args (a list) and op_kwargs (a dict) and will be compiled together with the operation.

Parameters
  • operation (function) – operation to compute (its first argument must be a tensor)

  • op_args (list) – list of positional arguments for the operation

  • op_kwargs (dict) – dict of optional arguments for the operation

Returns

op_layer – a keras layer that applies the operation upon call

Return type

layer
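
For example, wrapping the sum operation defined later in this module (assuming the underlying backend sum accepts an axis keyword):

    from n3fit.backends.keras_backend import operations as op

    # a layer that sums its input over the last axis
    sum_layer = op.as_layer(op.sum, op_kwargs={"axis": -1})
    summed = sum_layer(some_tensor)  # some_tensor: any input tensor or layer output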

n3fit.backends.keras_backend.operations.backend_function(fun_name, *args, **kwargs)[source]

Wrapper to call backend functions which are not explicitly implemented, by name (fun_name); see the full docs for some possibilities.

n3fit.backends.keras_backend.operations.batchit(x, batch_dimension=0, **kwarg)[source]

Add a batch dimension to tensor x

n3fit.backends.keras_backend.operations.boolean_mask(*args, **kwargs)[source]

Applies a boolean mask to a tensor

Relevant parameters: (tensor, mask, axis=None) see full docs.

n3fit.backends.keras_backend.operations.c_to_py_fun(op_name, name='dataset')[source]

Map the NNPDF operations to Keras layers. NNPDF operations are defined in validphys.convolution.OP().

Parameters

op_name (str) – A string defining the operation name
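
Usage sketch, assuming 'ADD' is one of the operation names defined in validphys.convolution.OP and that the returned layer acts on a list of tensors:

    from n3fit.backends.keras_backend.operations import c_to_py_fun

    add_op = c_to_py_fun("ADD", name="my_dataset")
    result = add_op([tensor_a, tensor_b])  # tensor_a, tensor_b: hypothetical tensors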

n3fit.backends.keras_backend.operations.concatenate(tensor_list, axis=-1, target_shape=None, name=None)[source]

Concatenates a list of numbers or tensors into a bigger tensor. If the target shape is given, the output is reshaped to said shape.

n3fit.backends.keras_backend.operations.einsum(equation, *args, **kwargs)[source]

Computes the tensor product using einsum. See full docs.

n3fit.backends.keras_backend.operations.evaluate(tensor)[source]

Evaluate input tensor using the backend

n3fit.backends.keras_backend.operations.flatten(x)[source]

Flatten tensor x

n3fit.backends.keras_backend.operations.gather(*args, **kwargs)[source]

Gather elements from a tensor along an axis

n3fit.backends.keras_backend.operations.numpy_to_input(numpy_array: ndarray[Any, dtype[_ScalarType_co]], name: Optional[str] = None)[source]

Takes a numpy array and generates an Input layer with the same shape, but with a batch dimension (of size 1) added.

Parameters
  • numpy_array (np.ndarray) –

  • name (str) – name to give to the layer

n3fit.backends.keras_backend.operations.numpy_to_tensor(ival, **kwargs)[source]

Make the input into a tensor

n3fit.backends.keras_backend.operations.op_gather_keep_dims(tensor, indices, axis=0, **kwargs)[source]

A convoluted way of providing x[:, indices, :]

From TF 2.4 onwards tensorflow is able to understand the syntax above for both eager and non-eager tensors

n3fit.backends.keras_backend.operations.op_log(o_tensor, **kwargs)[source]

Computes the logarithm of the input

n3fit.backends.keras_backend.operations.op_multiply(o_list, **kwargs)[source]

Receives a list of layers of the same output size and multiplies them element-wise

n3fit.backends.keras_backend.operations.op_multiply_dim(o_list, **kwargs)[source]

Bypass in order to multiply two layers with different output dimensions, for instance (10000 x 14) * (14), as the normal Keras multiply does not accept it (but somehow it does accept it done this way).

n3fit.backends.keras_backend.operations.op_subtract(inputs, **kwargs)[source]

Computes the difference between two tensors. See full docs.

n3fit.backends.keras_backend.operations.pow(tensor, power)[source]

Computes the power of the tensor

n3fit.backends.keras_backend.operations.reshape(x, shape)[source]

reshape tensor x

n3fit.backends.keras_backend.operations.scatter_to_one(values, indices, output_shape)[source]

Like scatter_nd but initialized to one instead of zero; see full docs.

n3fit.backends.keras_backend.operations.split(*args, **kwargs)[source]

Splits the tensor on the selected axis; see full docs.

n3fit.backends.keras_backend.operations.stack(tensor_list, axis=0, **kwargs)[source]

Stack a list of tensors see full docs

n3fit.backends.keras_backend.operations.sum(*args, **kwargs)[source]

Computes the sum of the elements of the tensor; see full docs.

n3fit.backends.keras_backend.operations.swapaxes(tensor, source, destination)[source]

Moves the axis of the tensor from source to destination, as in numpy.swapaxes. See full docs.

n3fit.backends.keras_backend.operations.tensor_ones_like(*args, **kwargs)[source]

Generates a tensor of ones with the same shape as the input tensor. See full docs.

n3fit.backends.keras_backend.operations.tensor_product(*args, **kwargs)[source]

Computes the tensordot product between tensor_x and tensor_y. See full docs.

n3fit.backends.keras_backend.operations.transpose(tensor, **kwargs)[source]

Transpose a layer, see full docs

Module contents