behavenet.models

Model documentation.

behavenet.models.base Module

Base models/modules in PyTorch.

Classes

BaseModule(*args, **kwargs)

Template for PyTorch modules.

BaseModel(*args, **kwargs)

Template for PyTorch models.

DiagLinear(features[, bias])

Applies a diagonal linear transformation to the incoming data: \(y = xD^T + b\) (see the sketch at the end of this list).

CustomDataParallel(module[, device_ids, …])

Wrapper class for multi-GPU training.
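
Because \(D\) is diagonal, a DiagLinear layer reduces to an elementwise scaling of each input feature plus an optional bias. The following is a minimal standalone sketch of what such a layer computes (the name DiagLinearSketch and the all-ones initialization are illustrative choices, not the library's implementation):

    import torch
    from torch import nn

    class DiagLinearSketch(nn.Module):
        """Sketch of y = x D^T + b where D is constrained to be diagonal."""

        def __init__(self, features, bias=True):
            super().__init__()
            # one learnable weight per feature (the diagonal of D);
            # initialized to ones here for simplicity
            self.weight = nn.Parameter(torch.ones(features))
            self.bias = nn.Parameter(torch.zeros(features)) if bias else None

        def forward(self, x):
            y = x * self.weight  # elementwise scaling == x @ torch.diag(self.weight).T
            if self.bias is not None:
                y = y + self.bias
            return y

    x = torch.randn(8, 5)
    print(DiagLinearSketch(5)(x).shape)  # torch.Size([8, 5])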

behavenet.models.ae_model_architecture_generator Module

Functions

calculate_output_dim(input_dim, kernel, …)

Calculate the output dimension of a layer along one dimension, based on input size, kernel size, etc. (see the worked example at the end of this list).

draw_archs(batch_size, input_dim, n_ae_latents)

Generate multiple random autoencoder architectures with a fixed number of latents.

estimate_model_footprint(model, input_dim[, …])

Estimate model size to determine if it will fit on a single GPU.

get_decoding_conv_block(arch)

Build symmetric decoding block of convolutional autoencoder based on encoding block.

get_encoding_conv_block(arch, opts)

Build encoding block of convolutional autoencoder.

get_handcrafted_dims(arch[, symmetric])

Compute input/output dims as well as necessary padding for handcrafted architectures.

get_possible_arch(input_dim, n_ae_latents[, …])

Generate a random autoencoder architecture.

load_default_arch()

Load default convolutional AE architecture used in Whiteway et al. 2021.

load_handcrafted_arch(input_dim, …[, …])

Load handcrafted autoencoder architecture from a json file.

load_handcrafted_arches(input_dim, …[, …])

Load handcrafted autoencoder architectures from a json file.
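
For reference, the standard formulas for the output size of convolutional and transposed-convolutional layers, which the output-dimension bookkeeping in this module presumably relies on. The helpers below are a sketch with assumed argument names, not the library's implementation:

    import math

    def conv_output_dim(input_dim, kernel, stride=1, padding=0):
        """Standard output size of a conv layer along one dimension (sketch)."""
        return math.floor((input_dim + 2 * padding - kernel) / stride) + 1

    def convtranspose_output_dim(input_dim, kernel, stride=1, padding=0):
        """Standard output size of a transposed conv layer, used when decoding."""
        return (input_dim - 1) * stride - 2 * padding + kernel

    # e.g. a 128-pixel dimension through a 5x5 kernel with stride 2, padding 2
    print(conv_output_dim(128, kernel=5, stride=2, padding=2))           # 64
    print(convtranspose_output_dim(64, kernel=5, stride=2, padding=2))   # 127; an output_padding of 1 would recover 128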

behavenet.models.aes Module

Autoencoder models implemented in PyTorch.

Functions

load_pretrained_ae(model, hparams)

Load pretrained weights into an already-constructed AE model.

Classes

ConvAEEncoder(hparams)

Convolutional encoder.

ConvAEDecoder(hparams)

Convolutional decoder.

LinearAEEncoder(n_latents, input_size)

Linear encoder.

LinearAEDecoder(n_latents, output_size[, …])

Linear decoder.

AE(hparams)

Base autoencoder class (a shape sketch of the convolutional encoder/decoder round trip follows this list).

ConditionalAE(hparams)

Conditional autoencoder class.

AEMSP(hparams)

Autoencoder class with matrix subspace projection for disentangling the latent space.
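
As a rough illustration of the convolutional autoencoder round trip, the plain-PyTorch sketch below compresses a batch of grayscale video frames to a small set of latents and reconstructs them. It is not behavenet's ConvAEEncoder/ConvAEDecoder architecture; the layer sizes and frame dimensions are assumptions chosen only to make the shapes work out:

    import torch
    from torch import nn

    n_latents, frames = 12, torch.randn(16, 1, 128, 128)  # batch of 16 frames

    encoder = nn.Sequential(
        nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),   # 128 -> 64
        nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),  # 64 -> 32
        nn.Flatten(),
        nn.Linear(64 * 32 * 32, n_latents),
    )
    decoder = nn.Sequential(
        nn.Linear(n_latents, 64 * 32 * 32), nn.ReLU(),
        nn.Unflatten(1, (64, 32, 32)),
        nn.ConvTranspose2d(64, 32, kernel_size=5, stride=2, padding=2, output_padding=1), nn.ReLU(),  # 32 -> 64
        nn.ConvTranspose2d(32, 1, kernel_size=5, stride=2, padding=2, output_padding=1),              # 64 -> 128
    )

    latents = encoder(frames)   # (16, 12)
    recon = decoder(latents)    # (16, 1, 128, 128)
    print(latents.shape, recon.shape)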

behavenet.models.vaes Module

Variational autoencoder models implemented in PyTorch.

Functions

reparameterize(mu, logvar)

Sample from N(mu, var) via the reparameterization trick (see the sketch at the end of this list).

Classes

VAE(hparams)

Base variational autoencoder class.

ConditionalVAE(hparams)

Conditional variational autoencoder class.

BetaTCVAE(hparams)

Beta Total Correlation VAE class.

PSVAE(hparams)

Partitioned subspace variational autoencoder class.

MSPSVAE(hparams)

Partitioned subspace variational autoencoder class for multiple sessions.

ConvAEPSEncoder(hparams)

Convolutional encoder that separates label-related subspace.

ConvAEMSPSEncoder(hparams)

Convolutional encoder that separates the label-related subspace across multiple sessions.
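
The reparameterize function's signature (mu, logvar) corresponds to the standard reparameterization trick for sampling from a diagonal Gaussian while keeping gradients with respect to mu and logvar. A minimal standalone sketch, assuming logvar is the log of the per-dimension variance:

    import torch

    def reparameterize_sketch(mu, logvar):
        """Draw z ~ N(mu, exp(logvar)) in a differentiable way."""
        std = torch.exp(0.5 * logvar)   # standard deviation from log-variance
        eps = torch.randn_like(std)     # noise from N(0, I)
        return mu + eps * std           # shift-and-scale the noise

    mu, logvar = torch.zeros(4, 10), torch.zeros(4, 10)
    z = reparameterize_sketch(mu, logvar)
    print(z.shape)  # torch.Size([4, 10])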

behavenet.models.decoders Module

Encoding/decoding models implemented in PyTorch.

Classes

Decoder(hparams)

General wrapper class for encoding/decoding models.

MLP(hparams)

Feedforward neural network model (see the decoding sketch at the end of this list).

LSTM(hparams)

LSTM neural network model.

ConvDecoder(hparams)

Decode images from predictors with a convolutional decoder.
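
As a rough illustration of the decoding setup, the hypothetical sketch below maps one set of signals (e.g., a neural population vector) to another (e.g., autoencoder latents) with a small feedforward network. It is not behavenet's Decoder/MLP API; all names and sizes are assumptions:

    import torch
    from torch import nn

    n_predictors, n_latents = 200, 12   # assumed sizes for illustration only

    # small feedforward network mapping predictors to AE latents
    mlp_sketch = nn.Sequential(
        nn.Linear(n_predictors, 64), nn.ReLU(),
        nn.Linear(64, n_latents),
    )

    predictors = torch.randn(32, n_predictors)   # batch of 32 time points
    predicted_latents = mlp_sketch(predictors)
    print(predicted_latents.shape)  # torch.Size([32, 12])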