3.5. lib/

The fitsnap3lib/lib directory contains supporting code used by FitSNAP, such as helper classes and functions.

3.5.1. Atom-centered Networks

class fitsnap3lib.lib.neural_networks.pytorch.FitTorch(networks, descriptor_count, force_bool, n_elements=1, multi_element_option=1, dtype=torch.float32)

FitSNAP PyTorch network model for atom-centered networks; a construction sketch follows the parameter list below.

Parameters
  • networks (list) – List of nn.Sequential network architectures. Each list element is a different network type if multi_element_option = 2.

  • descriptor_count (int) – Length of the descriptor vector for an atom.

  • force_bool (bool) – Whether to calculate forces.

  • n_elements (int) – Number of differentiable atom types.

  • multi_element_option (int) – Option for which multi-element network model to use.
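
A minimal sketch of building this model, assuming hypothetical layer sizes and a 30-descriptor, single-element setup; only the signatures documented above are relied on:

    import torch
    from fitsnap3lib.lib.neural_networks.pytorch import FitTorch, create_torch_network

    # Hypothetical setup: 30 descriptors per atom, a single element type.
    num_descriptors = 30
    layer_sizes = [num_descriptors, 64, 64, 1]   # input width matches descriptor count

    # One nn.Sequential per network; a single entry for a single element type.
    networks = [create_torch_network(layer_sizes)]

    model = FitTorch(networks,
                     descriptor_count=num_descriptors,
                     force_bool=True,            # also fit forces
                     n_elements=1,
                     multi_element_option=1,
                     dtype=torch.float32)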

forward(x, xd, indices, atoms_per_structure, types, xd_indx, unique_j, unique_i, device, dtype=torch.float32)

Forward pass through the PyTorch network model, calculating both energies and forces.

Parameters
  • x (torch.Tensor.float) – Array of descriptors for this batch.

  • xd (torch.Tensor.float) – Array of descriptor derivatives dDi/dRj for this batch.

  • indices (torch.Tensor.long) – Array of indices upon which to contract per-atom energies, for this batch (see the contraction sketch after this parameter list).

  • atoms_per_structure (torch.Tensor.long) – Number of atoms per configuration for this batch.

  • types (torch.Tensor.long) – Atom types starting from 0, for this batch.

  • xd_indx (torch.Tensor.long) – Array of indices corresponding to descriptor derivatives, for this batch. These are concatenations of the direct LAMMPS dgradflag=1 output; we rely on unique_j and unique_i for adjusted indices of this batch (see dataloader.torch_collate).

  • unique_j (torch.Tensor.long) – Array of indices corresponding to unique atoms j in all batches of configs. All forces in this batch will be contracted over these indices.

  • unique_i (torch.Tensor.long) – Array of indices corresponding to unique neighbors i in all batches of configs. Forces on atoms j are summed over these neighbors and contracted appropriately.

  • dtype (torch.dtype, optional) – Data type used for torch tensors. Default is torch.float32 for training, but we set it to torch.float64 for finite difference tests to ensure correct force calculations.

  • device – PyTorch accelerator device object.
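
forward() is normally driven by FitSNAP's training loop, so it is rarely called by hand. Conceptually, the contraction of per-atom energies over indices behaves like the sketch below (all values are made up for illustration):

    import torch

    # Per-atom energies for a batch of 5 atoms spread over two configurations
    # (3 atoms in the first, 2 in the second).
    per_atom_energy = torch.tensor([0.1, 0.2, 0.3, 0.4, 0.5])
    indices = torch.tensor([0, 0, 0, 1, 1])        # configuration index of each atom
    atoms_per_structure = torch.tensor([3, 2])

    # Contract per-atom energies into per-configuration energies.
    per_config_energy = torch.zeros(len(atoms_per_structure))
    per_config_energy.index_add_(0, indices, per_atom_energy)
    print(per_config_energy)                       # tensor([0.6000, 0.9000])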

import_wb(weights, bias)

Imports weights and biases into the FitTorch model; see the sketch below.

Parameters
  • weights (list of numpy arrays of floats) – Network weights at each layer.

  • bias (list of numpy arrays of floats) – Network biases at each layer.
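
A short sketch of handing per-layer numpy arrays to import_wb. The layer sizes are hypothetical, and the arrays are gathered here from an existing network's own parameters so that the per-layer shapes are guaranteed to match; in practice they might come from a previous fit:

    import numpy as np
    from fitsnap3lib.lib.neural_networks.pytorch import FitTorch, create_torch_network

    net = create_torch_network([30, 64, 64, 1])    # hypothetical layer sizes
    model = FitTorch([net], descriptor_count=30, force_bool=True)

    # Collect per-layer weights and biases as lists of numpy arrays, in layer order.
    weights = [p.detach().numpy() for name, p in net.named_parameters() if "weight" in name]
    bias = [p.detach().numpy() for name, p in net.named_parameters() if "bias" in name]

    model.import_wb(weights, bias)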

load_lammps_torch(filename='FitTorch.pt')

Loads a LAMMPS-ready PyTorch model.

Parameters

filename (str) – Filename of the LAMMPS-usable PyTorch model.

write_lammps_torch(filename='FitTorch.pt')

Saves a LAMMPS-ready PyTorch model.

Parameters

filename (str) – Filename for the LAMMPS-usable PyTorch model.
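
A save/load round trip, reusing the model built in the construction sketch near the top of this section; the filename shown is the documented default:

    # Save the LAMMPS-ready model to disk, then load it back.
    model.write_lammps_torch(filename="FitTorch.pt")
    model.load_lammps_torch(filename="FitTorch.pt")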

fitsnap3lib.lib.neural_networks.pytorch.create_torch_network(layer_sizes)

Creates a PyTorch network architecture from layer sizes. This also performs standardization in the first linear layer. Only softplus is supported as the nonlinear activation function.

Parameters

layer_sizes (list of int) – Number of nodes for each layer.

Returns

Neural network architecture

Return type

torch.nn.Sequential
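
For example, with hypothetical layer sizes (the first entry should match the descriptor count fed to the network):

    from fitsnap3lib.lib.neural_networks.pytorch import create_torch_network

    # 30 input descriptors, two hidden layers of 64 nodes, a single energy output.
    net = create_torch_network([30, 64, 64, 1])
    print(net)   # torch.nn.Sequential stack of linear layers with softplus activations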

3.5.2. Pairwise Networks

class fitsnap3lib.lib.neural_networks.pairwise.FitTorch(networks, descriptor_count, num_radial, num_3body, cutoff, n_elements=1, multi_element_option=1, dtype=torch.float32)

FitSNAP PyTorch model for pairwise networks; a construction sketch follows the parameter list below.

Parameters
  • networks (list of torch.nn.Sequential) – List of network architectures.

  • descriptor_count (int) – Number of descriptors for a pair.

  • num_radial (int) – Number of radial descriptors for a pair.

  • num_3body (int) – Number of 3-body descriptors for a pair.

  • cutoff (float) – Radial cutoff for the neighbor list.

  • n_elements (int) – Number of element types.

  • multi_element_option (int) – Setting for how to handle multiple element types.
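
A minimal construction sketch. The layer sizes, cutoff, and descriptor counts are placeholders, and the assumption that descriptor_count equals num_radial + num_3body should be checked against your setup:

    from fitsnap3lib.lib.neural_networks.pairwise import FitTorch, create_torch_network

    # Hypothetical pairwise setup: 8 radial + 4 three-body descriptors per pair,
    # a 4.5 Angstrom cutoff, and a single element type.
    num_radial, num_3body = 8, 4
    num_descriptors = num_radial + num_3body       # assumed relation

    networks = [create_torch_network([num_descriptors, 32, 32, 1])]
    model = FitTorch(networks,
                     descriptor_count=num_descriptors,
                     num_radial=num_radial,
                     num_3body=num_3body,
                     cutoff=4.5,
                     n_elements=1,
                     multi_element_option=1)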

forward(x, neighlist, transform_x, indices, atoms_per_structure, types, unique_i, unique_j, device, dtype=torch.float32)

Forward pass through the PyTorch network model, calculating both energies and forces.

Parameters
  • x (torch.Tensor.float) – Array of positions for this batch.

  • neighlist (torch.Tensor.long) – Sparse neighbor list for this batch.

  • transform_x (torch.Tensor.float) – Array of LAMMPS transformed positions of neighbors for this batch.

  • indices (torch.Tensor.long) – Array of configuration indices upon which to contract pairwise energies, for this batch (see the contraction sketch after this entry).

  • atoms_per_structure (torch.Tensor.long) – Number of atoms per configuration for this batch.

  • types (torch.Tensor.long) – Atom types starting from 0, for this batch.

  • unique_i (torch.Tensor.long) – Atoms i for all atoms in this batch, indexed from 0 to (natoms_batch - 1).

  • unique_j (torch.Tensor.long) – Neighbors j for all atoms in this batch, indexed from 0 to (natoms_batch - 1).

  • dtype (torch.dtype, optional) – Data type used for torch tensors. Default is torch.float32 for training, but we set it to torch.float64 for finite difference tests to ensure correct force calculations.

  • device – PyTorch accelerator device object.

Returns

tuple of (predicted_energy_total, predicted_forces). First element is predicted energies for this batch, second element is predicted forces for this batch.
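
As with the atom-centered model, forward() is normally called from FitSNAP's training loop. The sketch below illustrates the two index contractions described above with made-up numbers: pairwise energies are summed per configuration over indices, and per-atom quantities are accumulated over unique_i:

    import torch

    # Three pairs, all belonging to one 2-atom configuration (made-up energies).
    pair_energy = torch.tensor([0.10, 0.25, 0.05])
    indices = torch.tensor([0, 0, 0])      # configuration index of each pair
    unique_i = torch.tensor([0, 0, 1])     # atom i of each pair, indexed from 0

    # Total energy per configuration: contract pairwise energies over `indices`.
    config_energy = torch.zeros(1).index_add_(0, indices, pair_energy)

    # Per-atom accumulation over `unique_i` (e.g. pairwise contributions per atom).
    per_atom = torch.zeros(2).index_add_(0, unique_i, pair_energy)

    print(config_energy)   # tensor([0.4000])
    print(per_atom)        # tensor([0.3500, 0.0500])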

import_wb(weights, bias)

Imports weights and biases into the FitTorch model.

Parameters
  • weights (list of numpy arrays of floats) – Network weights at each layer.

  • bias (list of numpy arrays of floats) – Network biases at each layer.

load_lammps_torch(filename='FitTorch.pt')

Loads a LAMMPS-ready PyTorch model.

Parameters

filename (str) – Filename of the LAMMPS-usable PyTorch model.

write_lammps_torch(filename='FitTorch.pt')

Saves a LAMMPS-ready PyTorch model.

Parameters

filename (str) – Filename for the LAMMPS-usable PyTorch model.

fitsnap3lib.lib.neural_networks.pairwise.create_torch_network(layer_sizes)

Creates a PyTorch network architecture from layer sizes. This also performs standardization in the first linear layer. Only softplus is supported as the nonlinear activation function.

Parameters

layer_sizes (list of int) – Number of nodes for each layer.

Returns

Neural network architecture

Return type

torch.nn.Sequential