title stringlengths 5 164 | labels list | bodyText stringlengths 0 46.7k |
|---|---|---|
How to correctly test the model by using multiple test data loaders? | [
"question"
] | How to correctly test the model by using multiple test data loaders?
My benchmark has two test datasets, and I want to evaluate both in one test epoch. However, I don't know how to correctly use the two data loaders in the test_* functions. Here is my code:
Code
DataModule
class MNISTData(pl.Lightnin... |
Bounded memory leak caused by `trainer.evaluation_loop.outputs` | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
trainer.evaluation_loop.outputs caches the outputs of every validation step in def run_evaluation(self, max_batches=None) of the trainer:
pytorch-lightning/pytorch_lightning/trainer/trainer.py
Line 659
in
d71659b
self.eva... |
Must the LightningModule init args be a dict? | [
"help wanted",
"question",
"docs"
] | 🐛 Bug
My LightningModule init arg is an object parsed from the command line. When I use load_from_checkpoint(), the args are changed to a dict. Is this right? It breaks my original attribute access such as opt.batch_size. pytorch_lightning is 1.1.6
PyTorch Version (e.g., 1.0):
OS (e.g., Linux): mac
How you insta... |
[docs] Add documentation for non-slurm computing cluster setup | [
"docs"
] | Make the documentation for the functionality of #1387
See the discussion at #1345 |
Apply some previously ignored flake8 rules | [
"good first issue",
"won't fix",
"refactor"
] | pytorch-lightning/setup.cfg
Lines 84 to 87
in
ae14fca
# TODO: delete these next 3 because we no longer use black
E203 # whitespace before ':'. Opposite convention enforced by black
... |
strange behavior with tests in PL: tests influence each other | [
"bug",
"help wanted",
"priority: 0",
"ci"
] | 🐛 Bug
I observed that some tests under tests/trainer have very strange behavior. The order in which the tests are executed seems to matter and they are influencing each other!
To Reproduce
Checkout release/1.2-dev branch
Remove the predict() method in one of the accelerators (GPUAccelerator is enough)
Run py.test -v ... |
Densenet architectures providing non-deterministic results | [
"question"
] | ❓ Questions and Help
Before asking:
Try to find answers to your questions in the Lightning Forum!
Search for similar issues.
Search the docs.
I have tried looking for answers in other forums but couldn't find anything related to my question.
What is your question?
I can't seem to obtain deterministic results using D... |
Problem with syncing logged values with multi-gpu and ddp | [
"bug",
"help wanted"
] | 🐛 Bug
When logging values with sync_dist and ddp (on two GPUs), the logged value is altered and the wrong averaged metric is logged.
It can be reproduced with dummy training_step() and batch_size=1
def training_step(self, batch, batch_idx):
loss = torch.tensor(1.0, device=self.device, requires_gra... |
PyTorch Lightning: save and continue training from a state_dict. | [
"bug",
"duplicate",
"help wanted",
"waiting on author"
] | 🚀 Feature
Save the model and other checkpoint state at any step as a dict, and load these checkpoints to resume training the model on datasets.
Motivation
Recently, I am working on federated learning, a learning paradigm where training runs on different clients. The clients share one common model which is aggregated from... |
Is there a guide for code and test structure for new contributors? | [
"docs",
"priority: 2"
] | ❓ Questions and Help
Is there a guide for code and test structure for new contributors?
Something like an architecture document, to be able to get into the code without reading most of it, and probably making mistakes.
I want to be able to contribute when I have time, but just getting into the code will take a while, a... |
Errors within try/except of train(self) are misrepresented as checkpointing MisconfigurationException | [
"feature",
"help wanted"
] | 🐛 Bug
I think I found a bug, where errors probably caused by users are misrepresented as checkpointing MisconfigurationException even though the checkpointing is configured correctly.
This happens when errors are raised within training (such as RuntimeErrors or CUDA-OOM errors) and bubble up to the try/except command ... |
Error while using distributed_backend = "ddp" | [
"bug",
"question",
"distributed"
] | My code works perfectly fine with distributed_backend='dp', but fails when I use distributed_backend='ddp' with the following error:
Traceback (most recent call last):
File "/scratch/nvarshn2/explore/test_ddp.py", line 89, in <module>
trainer.fit(model, train_data, val_data)
File "/home/nvarshn2/.conda/envs/pyt... |
How to plot different curves (e.g. training acc and val acc) in TensorBoard in the same window? | [
"question"
] | Hi, thanks for your great work!
I want to plot training accuracy and validation accuracy in the same TensorBoard window, so
I can spot overfitting very conveniently.
Thanks a lot!! |
Completely overwrite validation/test block (including the batch-level loop) | [
"feature",
"help wanted",
"won't fix",
"refactor",
"design"
] | 🚀 Feature
Motivation
In certain cases, for example, an image+text multimodal retrieval model, the training and validation/testing logic can be very different. Specifically:
In training, for each input query, we construct the corresponding batches by sampling randomly from the dataset;
In validation/testing, for eac... |
Modify ModelCheckpoint class to support additional options for cloud storages | [
"feature",
"help wanted",
"won't fix",
"checkpointing",
"priority: 2"
] | 🚀 Feature
Modify pytorch_lightning.callbacks.ModelCheckpoint class to make it possible to provide additional configuration options for cloud storage.
Motivation
We at @blue-yonder started building a prototype that uses PyTorch Lightning as a high-level training loop library. It works great, but we stumbled upon an iss... |
MultiTask Training on multi-gpus returns NaN and inf in model output during Validation phase | [
"bug",
"help wanted"
] | 🐛 Bug
I'm trying to train a multihead ResNet on images. Train data size is 2700 and valid data size is 600. Each batch is [n, 3, 224, 224, 320] and normalized to have values [0,1]. I've already trained a single head resnet on this dataset many times and never encountered a problem with any datapoint so I am sure the d... |
Hanging with TPUs on GCE VM | [
"bug",
"help wanted",
"priority: 0",
"accelerator: tpu"
] | 🐛 Bug
Seems like training of any model hangs indefinitely when running on a Google Compute Engine VM.
Mainly I've been trying this example model but I've also tried the LitAutoEncoder from this page.
Note that all unit tests pass, including the 8-core model training.
There seem to be 2 key areas that trigger a hang:
... |
Multigpu with different RAM capabilities | [
"feature",
"help wanted"
] | I couldn't find a way to use more than one GPU when they have different RAM capacities (it fails when the smallest GPU reaches its capacity). Is there a way to solve this?
Thanks! |
failing Profiler with PT 1.8 | [
"bug",
"help wanted"
] | 🐛 Bug
#5840 (comment)
Please reproduce using the BoringModel
there is some incompatibility, to be fixed in another PR
FAILED tests/metrics/test_composition.py::test_metrics_mod[2.0-expected_result2]
FAILED tests/models/test_horovod.py::test_result_reduce_horovod - RuntimeErro...
FAILED tests/trainer/test_trainer.py::tes... |
How can I apply many data batches to a function on GPU | [
"question"
] | How can I apply many data batches to a function on a GPU using PL? I'd like to do something similar to SPMD in MATLAB.
As an example, assume that I have 5 batches of 64 2D-points. I can generate it by
p_batch = torch.randn(5, 64, 2)  # five batches of points, 64 points in each batch, two dimensions
I want to apply the following f... |
training with ddp get replicas mismatch error | [
"bug",
"help wanted",
"won't fix",
"distributed",
"priority: 1"
] | Hi,
I've been getting this replicas error with ddp training.
setup: windows 10, torch 1.7.1, pytorch-lightning 1.1.7, on a 3 gpus machine.
The model training was working well with ddp on another machine with 2 GPUs (same setup: win10, torch 1.7.1 and pl 1.1.7)
the code crashed after printing the following error message:
se... |
Access dataset directly in validation_epoch_end | [
"feature",
"help wanted"
] | 🚀 Feature
Motivation / Pitch
Not sure if duplicated but:
I would propose to allow users to add additional arguments to be passed into functions such as validation_epoch_end().
One of the examples being: during validation, we might need to fetch some additional information from the dataset (e.g., len(dataset)), and c... |
Calling fit multiple times fails to call teardown | [
"help wanted",
"question",
"waiting on author"
] | 🐛 When calling fit multiple times, teardown(self, stage) and on_train_end(self) are called multiple times
Please reproduce using the BoringModel
If you create a new teacher instance everything works great, but if you don't it fails.
Here is the colab to reproduce the error:
https://colab.research.google.com/drive/1G6korxcAO12... |
BaseFinetuning Callback freeze_before_training is never called | [
"bug",
"help wanted"
] | 🐛 Bug
I think there might be a bug in the implementation of the BaseFinetuning callback, and in particular in the following lines:
pytorch-lightning/pytorch_lightning/callbacks/finetuning.py
Lines 236 to 237
in
a028171
def on... |
Unexpected behaviour of hooking inside callback | [
"bug",
"help wanted",
"priority: 1"
] | 🐛 Bug
The on_epoch_end hook in a callback is called between the train and validation epochs.
To Reproduce
Use the following callback
class BuggyCallback(Callback):
def on_epoch_end(self, trainer: Trainer, pl_module: LightningModule):
print("I'm called")
Expected behavior
Line with I'm called should appear after train and ... |
Make reduce_on_plateau more flexible | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
I want to have freedom to use a custom reduce on plateau type scheduler.
Motivation
Right now, only torch.optim.lr_scheduler.ReduceLROnPlateau is supported out of the box. See here. This means even if I specify {'reduce_on_plateau': True} in:
def configure_optimizers(self):
optimizer = TrainConf.optimizer(... |
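To make the request concrete, the behaviour being customized is the plateau rule itself. A hypothetical, framework-free sketch of a minimal reduce-on-plateau scheduler (names like `SimplePlateauScheduler` are illustrative, not Lightning or torch API):

```python
# Hypothetical minimal "reduce on plateau" scheduler in plain Python:
# if the monitored metric fails to improve for `patience` consecutive
# checks, multiply the learning rate by `factor`.

class SimplePlateauScheduler:
    def __init__(self, lr, factor=0.1, patience=2):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.best = float("inf")   # best (lowest) metric seen so far
        self.num_bad = 0           # consecutive checks without improvement

    def step(self, metric):
        if metric < self.best:
            self.best = metric
            self.num_bad = 0
        else:
            self.num_bad += 1
            if self.num_bad > self.patience:
                self.lr *= self.factor  # plateau detected: reduce lr
                self.num_bad = 0
        return self.lr
```

A custom scheduler like this is exactly what the issue asks Lightning to accept wherever `torch.optim.lr_scheduler.ReduceLROnPlateau` is accepted today.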
ClearML logger (ex Allegro Trains AI) | [
"feature",
"help wanted",
"won't fix",
"logger"
] | 🚀 Feature
Motivation
ClearML is an open-source, self-hostable service for MLOps, logging, and experiment management.
https://github.com/allegroai/clearml/
Previously known as Allegro Trains AI.
Their repository has more than 2k stars on GitHub and very active community.
Pitch
I would like to have an implementation for pyt... |
ModelCheckpoint doesn't delete checkpoints from s3 storage using Tensorboard Logger | [
"bug",
"help wanted",
"won't fix",
"waiting on author",
"priority: 1"
] | 🐛 Bug
When using ModelCheckpoint with TensorboardLogger and an S3 bucket URL path, the model checkpoints are correctly uploaded to the cloud directory set by the logger, but past epoch versions are not deleted. If, instead, I use ModelCheckpoint directly with dirpath=<s3-url> while saving tensorboard logs lo... |
Learning rate loaded as Namespace from argparse | [
"help wanted",
"question"
] | 🐛 Bug
When I load the model with arguments from ArgumentParser, I receive a strange output with learning_rate being treated as an argparse Namespace.
To Reproduce
Here is my basic class that saves hyperparameters.
from argparse import ArgumentParser
import pytorch_lightning as pl
class Trial(pl.LightningModule):
... |
init_optimizers(self, model) | [
"bug",
"help wanted"
] | 🐛 Bug
pytorch_lightning/trainer/optimizers.py in init_optimizers(self, model) fails to load the monitor value when configure_optimizers() returns a tuple with multiple optimizers in dictionary form, each with its own LR scheduler and monitor value. The bug seems to occur in the elif clause in line 56, where no monitor val... |
The header link is broken | [
"docs"
] | The link below is broken:
[pytorch-lightning page] -> header menu (…) → Get Started
Currently:
https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html
Fix:
https://pytorch-lightning.readthedocs.io/en/latest/starter/introduction_guide.html |
Training loop gets filled with unwanted logs. | [
"bug",
"help wanted"
] | 🐛 Bug
During the training process, lots of unwanted log lines about different function runtimes are printed. I am using the latest pytorch_lightning version. |
support len(datamodule) | [
"feature",
"help wanted",
"good first issue",
"data handling"
] | Let's add support for len(datamodule) so we can get the following:
len(datamodule)
# prints:
# train_dataloader_1: 200 samples
# train_dataloader_2: 500 samples
# val_dataloader_1: 200 samples
# val_dataloader_2: 500 samples
# test_dataloader_1: 200 samples
# test_dataloader_2: 500 samples
cc @edenafek |
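The feature above amounts to summing the sample counts of every registered dataloader. A plain-Python sketch of the idea (no Lightning import; `SketchDataModule` and its mapping of names to counts are illustrative stand-ins for a real `LightningDataModule` and its `DataLoader`s):

```python
# Sketch of len(datamodule) support: a DataModule could implement
# __len__ by summing the sample counts of every dataloader it owns.

class SketchDataModule:
    def __init__(self, loaders):
        # loaders: mapping of loader name -> number of samples
        # (stand-in for real DataLoader objects whose datasets have __len__)
        self.loaders = loaders

    def __len__(self):
        return sum(self.loaders.values())

    def summary(self):
        # per-loader breakdown like the one shown in the issue
        return "\n".join(f"{name}: {n} samples" for name, n in self.loaders.items())
```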
Add ipython kernel detection and give warning when accelerator = "ddp" | [
"feature",
"help wanted"
] | 🚀 Feature
Add ipython kernel detection and give warning when accelerator = "ddp"
Motivation
When users try to use ddp as accelerator in Jupyter Notebook or Jupyter Lab, the trainer will be stuck forever and no hints about the cause. So, to better inform developers, it will be great to detect whether the code is run i... |
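One way such a detection could look (a hypothetical helper, not Lightning's actual implementation): `get_ipython()` returns `None` outside an IPython shell, so the check is cheap and safe.

```python
# Hypothetical sketch of IPython-kernel detection before allowing "ddp".
# IPython.get_ipython() returns None when not running inside an
# IPython/Jupyter session; ImportError means IPython is not installed.

def running_in_ipython() -> bool:
    try:
        from IPython import get_ipython
    except ImportError:
        return False
    return get_ipython() is not None

def check_accelerator(accelerator: str) -> str:
    # warn instead of hanging forever when "ddp" is requested in a notebook
    if accelerator == "ddp" and running_in_ipython():
        return "warning: 'ddp' is not supported in Jupyter; use 'ddp_spawn' or run as a script"
    return "ok"
```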
How to use a callback to run a periodical process inside the training loop? | [
"question",
"waiting on author"
] | I want to modify a certain function during the training loop (let's save every 10 000 global training step). I am using multiple GPUs. Currently, I have implemented it inside the training_step. While the update is happening I want to make sure other DDP processes works. In the following code, I make sure if the global... |
use of pdb.set_trace() inside LightningModule | [
"question",
"won't fix"
] | So being new to lightning, I have rather a basic question that I couldn't quite find the answer to in the docs. So in regular pytorch I can use python's pdb module to pause the execution at any point and like check tensor dimensions/values, etc :
from pdb import set_trace
set_trace()
However I seem not be able to do s... |
Add `ignore` param to `save_hyperparameters` | [
"feature"
] | 🚀 Enhancement
Add an ignore param to self.save_hyperparameters(). It would support the types list and str.
Motivation
I believe users should be given an option to explicitly mention which parameters should not be saved. |
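The proposed `ignore` behaviour boils down to filtering the captured init arguments. A framework-free sketch (the helper name `collect_hparams` is hypothetical):

```python
# Sketch of the proposed `ignore` param: collect __init__ arguments
# into hparams, dropping any names listed in `ignore` (str or list).

def collect_hparams(init_args: dict, ignore=None) -> dict:
    if ignore is None:
        ignore = []
    if isinstance(ignore, str):
        ignore = [ignore]           # accept a single name as a plain string
    return {k: v for k, v in init_args.items() if k not in ignore}
```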
Mypy complaining about `transfer_batch_to_device` as abstract method | [
"bug",
"help wanted"
] | 🐛 Bug
In my projects, mypy complains if I don't override transfer_batch_to_device in DataModule. This is probably due to a design choice, since transfer_batch_to_device is an abstract method. However, it's clear that overriding the method is not mandatory, as the same behavior is obtained (probably somewhere else in l... |
module 'pytorch_lightning.metrics.classification' has no attribute 'AUROC' | [
"bug",
"help wanted",
"working as intended"
] | 🐛 Bug
module 'pytorch_lightning.metrics.classification' has no attribute 'AUROC' when I try to use pytorch_lightning.metrics.classification.AUROC
To Reproduce
import pytorch_lightning as pl
auroc = pl.metrics.classification.AUROC()
-pytorch_lightning version == 1.1.8
Additional context
Similarly AUC metric is also mi... |
Support TensorBoard logs dir structure customization | [
"feature",
"help wanted",
"design",
"logger"
] | Feature request:
As a user, I would like to have control over the checkpoint dir name and structure.
Motivation:
From @janhenriklambrechts (janhenrik.lambrechts@gmail.com):
My logging version directory of a single pytorch-lighting project with tensorboard logs looks something like this:
├── checkpoints
│   ├── epoch=86-max...
Avoid patching LightningModule methods during training | [
"feature",
"help wanted",
"let's do it!",
"refactor"
] | 🚀 Feature
Can we implement the dataloaders without monkey-patching the methods in LightningModule?
Motivation
Currently, we patch the LightningModule methods in the trainer when also a DataModule is used.
pytorch-lightning/pytorch_lightning/trainer/connectors/data_connector.py
Line 115
... |
readme typo in master and newest branch | [
"docs"
] | 📚 Documentation
For typos and doc fixes, please go ahead and:
On master and the newest branch, variable loss under training_step is referenced before assignment.
class LitAutoEncoder(pl.LightningModule):
def __init__(self):
super().__init__()
self.encoder = nn.Sequential(nn.Linear(28 * 28, 128), n... |
Validation loss Tensor object is printed in the progress bar; only the value is expected | [
"bug",
"help wanted"
] | 🐛 Bug
When I add the validation loss to the training progress bar, the tensor object is printed whereas only the loss value is expected.
For example:
Epoch 1: 100%|██████████| 5/5 [00:04<00:00, 1.21it/s, loss=82.423, v_num=52, val_loss=tensor(76.4331, dtype=torch.float32)]
Validation loss is added with the following command: self... |
Subclassing ProgressBarBase causes pylint crash | [
"bug",
"help wanted",
"3rd party"
] | 🐛 Bug
Subclassing ProgressBarBase causes pylint crash
To Reproduce
Code sample
Say file: prog.py
from pytorch_lightning.callbacks.progress import ProgressBarBase
class ProgressBar(ProgressBarBase):
...
Run pylint:
$ pylint prog.py
Exceptions:
Traceback (most recent call last):
File "/home/pwwang/.cache/pyp... |
TPU error | [
"bug",
"help wanted",
"accelerator: tpu"
] | Hi,
I am getting a TPU error on Colab and I am using the latest version of lightning.
Notebook
Trainer:
trainer = pl.Trainer(tpu_cores=8, precision=16, logger=logger, checkpoint_callback=checkpoint_callback, progress_bar_refresh_rate=50, accumulate_grad_batches=2, fast_dev_run=False,\
default_root_d... |
clean-up metric testing | [
"feature",
"ci"
] | 🚀 Feature
apply the comments in #4043 which were ignored, and make the tests easier to understand
Motivation
right now the testing is very black-box, and confusing, with the same SK (sklearn) function names in different files |
backward callback does not work on pytorch-lightning version 1.0.0rc3 | [
"bug",
"help wanted"
] | In pytorch-lightning version 0.10 the following code works well. However in pytorch-lightning version 1.0.0rc3, the code does not work
and gives the following error:
TypeError: backward() missing 1 required positional argument: 'optimizer_idx'
Code sample
def configure_optimizers(self):
optimizer = optim.Adam(sel... |
Data Parallel bug (return outputs not being moved to same device) | [
"bug",
"help wanted",
"priority: 0",
"waiting on author",
"strategy: dp",
"logging"
] | 🐛 Bug
backend='dp' doesn't handle reduction of the loss across multiple GPUs correctly. This is present in v0.10--v1.0.0rc4
To Reproduce
Code sample
import torch
import pytorch_lightning as ptl
from pytorch_lightning import LightningModule
from torch.utils.data import Dataset
class RandomDictDataset(Dataset):... |
Logging on step does not work anymore | [
"bug",
"help wanted"
] | 🐛 Bug
Logging on step does not seem to work properly.
To Reproduce
Run the following MNIST example.
Code sample
import os
import pytorch_lightning as pl
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from tor... |
Why does the example not use the prepare data hook? | [
"won't fix",
"example",
"docs"
] | I would expect the out-of-the-box example to use the proper prepare_data hook to enable multi-node training. Is there a reason that the MNIST data is downloaded in main rather than in the style of the sample from the blog post?
If relevant I'd be happy to contribute a patch.
pytorch-lightning/pl_example... |
Using the LBFGS optimizer in PyTorch Lightning, the model does not converge compared to native PyTorch + LBFGS | [
"bug",
"help wanted",
"priority: 1"
] | Common bugs:
Comparing LBFGS + PyTorch Lightning to native PyTorch + LBFGS, PyTorch Lightning is not able to update weights and the model is not converging. There are some issues to point out:
Adam + PyTorch Lightning on MNIST works fine; however, LBFGS + PyTorch Lightning is not working as expected.
LBF... |
1.0.0rc4 Save to TorchScript self.eval() device error | [
"bug",
"help wanted"
] | 🐛 Bug
Likely self.eval() sends the model to the CPU, which causes saving to a TorchScript file to fail.
To Reproduce
Run the code sample below.
Code sample
Script can also be downloaded from: https://gist.github.com/NumesSanguis/388b4cfab2a8945afa85e8b79cd0c794
Most relevant code (extended to_torchscript to support tra... |
Avoid unnecessary DDP synchronization when gradient_accumulation_steps > 1 | [
"feature",
"help wanted"
] | 🚀 Feature
Avoid unnecessary DDP synchronization when gradient_accumulation_steps > 1
Motivation
When training large models the synchronization is costly and the actual speedup from 2 gpus is much lower than 200%
Pitch
We can use DDP's no_sync feature to avoid synchronization in steps that don't call optimizer_step |
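The skip-sync decision itself is just modular arithmetic on the accumulation window. A framework-free sketch (the real implementation would wrap the backward in torch DDP's `no_sync()` context manager; `should_sync` is an illustrative name):

```python
# Sketch of the proposed logic: only the final micro-batch of each
# gradient-accumulation window should trigger DDP's gradient all-reduce;
# all earlier micro-batches can run under DDP's no_sync() context.

def should_sync(batch_idx: int, accumulate_grad_batches: int) -> bool:
    # sync exactly on the step that will call optimizer_step
    return (batch_idx + 1) % accumulate_grad_batches == 0
```

With `accumulate_grad_batches=4`, only every fourth micro-batch pays the all-reduce cost, which is where the speedup for large models comes from.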
Recursive Summary for Models | [
"feature",
"help wanted"
] | 🚀 Feature
At the moment whenever you start training you get a print out of the model's modules and number of parameters. It would be great if you were able to recursively traverse modules and print out the modules and parameters in these submodules. This could be enabled through a possible flag verbose, which would fu... |
Memory leak when using Metric with list state | [
"help wanted"
] | 🐛 Bug
I tried implementing a custom Metric to use as a loss when training. It seems to compute the desired values fine, however like the title states the metric quickly consumes all the memory on my single GPU. Models that previously required less than half of my GPU memory now run into OOMs after less than one epoch.... |
Slurm resubmit at the end of epoch. | [
"feature",
"help wanted",
"won't fix"
] | From my understanding, the current resubmit will stop the model in the middle of an epoch, which may cause problems with dataloader resuming.
Could Lightning automatically estimate whether a new epoch can be finished within the time limit, and decide whether to halt or continue at the end of each epoch? |
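The estimate being requested could be as simple as comparing the remaining allocation against the duration of the last epoch. A hypothetical sketch (the function name and the safety margin are illustrative assumptions, not SLURM or Lightning API):

```python
# Sketch of the proposed decision: at each epoch boundary, check whether
# another full epoch is likely to fit in the remaining SLURM allocation.
# A safety margin > 1 leaves headroom for checkpointing and variance in
# epoch duration.

def can_fit_another_epoch(seconds_left: float, last_epoch_seconds: float,
                          safety_margin: float = 1.2) -> bool:
    return seconds_left > last_epoch_seconds * safety_margin
```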
How to log by epoch for both training and validation on 1.0.0rc4 / 1.0.0rc5 / 1.0.0 | [
"question"
] | What is your question?
I have been trying out pytorch-lightning 1.0.0rc5 and wanted to log only at epoch end for both training and validation, with the epoch number on the x-axis. I noticed that training_epoch_end now does not allow returning anything. Though I noticed that for training I can achieve what I want... |
Graceful interrupt not graceful | [
"feature",
"help wanted",
"won't fix"
] | The current key interrupt will call on_train_end, which will save the checkpoint. However, when resuming from the saved checkpoint, it starts a new epoch, which is not graceful: ideally it should restart from the middle of the epoch; otherwise it should not save the interrupted checkpoint with the same name a... |
Unable to save model using torch.save in pytorch lightning version 0.10 | [
"bug",
"help wanted"
] | 🐛 Bug
torch.save(trainer.model, "model.pth") throwing error in pytorch lightning version 0.10
Please reproduce using the following code
import os
import torch
from torch import nn
import torch.nn.functional as F
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader, random_split
from torchvisi... |
Weird warning popping out. Unable to understand | [
"question"
] | I'm getting this warning only after first epoch
/usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimizer when saving or loading the scheduler.
warnings.warn(SAVE_STATE_WARNING, UserWarning)
I'm unable to understand this. |
How do you correctly subclass an Accelerator? | [
"question"
] | ❓ Questions and Help
What is your question?
I am trying to subclass DDPCPUSpawnAccelerator for test purposes.
Trainer now has an accelerator argument for you to pass an accelerator object. On the other hand, the accelerator has a trainer argument. I assume the latter can be initialized to None to avoid this chicken and e... |
Strange validation_step and global_step behavior after every epoch | [
"feature",
"help wanted",
"won't fix",
"discussion"
] | 🐛 Bug
The following was produced with the BoringModel provided. With val_check_interval=10, last validation_step of first epoch was at global_step=59 and first validation_step of second epoch was at global_step=73, and so on. It seems like it is always off by len(dataset) % val_check_interval every time after an epoc... |
Expand to_torchscript to support also TorchScript's trace method | [
"feature",
"help wanted"
] | 🚀 Feature
Allow for the user to easily choose between TorchScript's script or trace method to create a module.
Motivation
While TorchScript's script method will work for simple models, it will not always work out of the box when models rely on Python variables to be set. This requires the user to manually annotate t... |
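The requested API is essentially a `method` switch with `trace` additionally requiring example inputs. A framework-free sketch of that dispatch, where `script_fn`/`trace_fn` stand in for `torch.jit.script`/`torch.jit.trace` so the sketch carries no torch dependency:

```python
# Sketch of a `method` switch for to_torchscript: "script" works on the
# module alone, "trace" additionally needs example inputs. script_fn and
# trace_fn are injected stand-ins for torch.jit.script / torch.jit.trace.

def to_torchscript(module, method="script", example_inputs=None,
                   script_fn=None, trace_fn=None):
    if method == "script":
        return script_fn(module)
    if method == "trace":
        if example_inputs is None:
            raise ValueError("method='trace' requires example_inputs")
        return trace_fn(module, example_inputs)
    raise ValueError(f"unknown TorchScript method: {method!r}")
```

The `example_inputs` requirement is the key design point: tracing records one concrete forward pass, so it cannot run without sample data.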
the self.log problem in validation_step() | [
"bug",
"help wanted"
] | As the docs say, we should use self.log in the latest version,
but the logged data are strange if we change EvalResult() to self.log(on_epoch=True).
Then we check the data in TensorBoard: self.log() will only log the result of the last batch of each epoch, instead of their mean.
That is quite unreliable about this issue, it must ... |
missing scaler.scale in manual_backward? | [
"bug",
"help wanted"
] | 🐛 Bug
When using manual_backward and precision=16, an exception is raised:
RuntimeError: unscale_() has already been called on this optimizer since the last update().
As far as I can see, the scaler is actually never used to scale the loss during manual_backward and in the example here it is not mentioned that one has... |
on_after_backward is called before gradient is unscale_ when using mixed precision | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
From my understanding, one of the main purposes of the on_after_backward callback is to check and log gradients. However, when AMP is enabled, the gradients you are accessing are not unscaled, i.e. all the numbers and norms you look at will be very large and not really useful.
To Reproduce
Use following code ... |
Trainer: min_steps arguments overwrites early_stopping functionality | [
"help wanted",
"question"
] | Hi,
I observed the following behaviour:
If I set min_steps to a large number (guaranteeing that the early stopping callback gets activated), the trainer will continue training even after reaching the min_steps. This error does not occur with min_epochs.
It will print "Trainer was signaled to stop but required minimum e... |
Accuracy RuntimeError: cannot infer num_classes when target is all zero | [
"bug",
"help wanted"
] | 🐛 Bug
ptl.metrics.Accuracy can't infer num_classes from the target vector.
It throws RuntimeError: cannot infer num_classes when target is all zero on each GPU, but inspection of the attributes shows that it should work.
The error is triggered in this call:
File /aidio/lightning_modules.py", line 340, in validation_step
sel... |
Update the replay buffer in DQN | [
"question",
"won't fix"
] | Line 263-307 in DQN code in Bolts
It states that the training_step method should carry out a single step through the environment to update the buffer. However, I only see the loss calculation.
def training_step(self, batch: Tuple[torch.Tensor, torch.Tensor], _) -> OrderedDict:
"""
Carries out a single s... |
How to log metric value in checkpoint file name with default save dir? | [
"question"
] | Checkpoints now default to a dir named with the job id, such as version_1986332/checkpoints/epoch=415.ckpt; I want to save them as version_1986332/checkpoints/epoch=415-loss=0.1.ckpt. I know I can set filepath='my/path/sample-mnist-{epoch:02d}-{val_loss:.2f}', but then it won't automatically save to the version_{job id} dir. How c... |
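Under the hood, Lightning-style checkpoint filename templates are ordinary Python format strings expanded with the monitored metrics. A sketch of just that mechanics (the template value is illustrative, not the default):

```python
# Sketch of checkpoint-filename templating: the template is a plain
# Python format string, expanded with the metrics to embed in the name.

def format_checkpoint_name(template: str, metrics: dict) -> str:
    return template.format(**metrics)
```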
Issue with epoch count with repeated save/restore | [
"bug",
"help wanted",
"priority: 0",
"checkpointing"
] | 🐛 Bug
I'm trying to save and restore the state of both a model and a pytorch-lightning trainer.
I suspect the epoch count is wrong because I'm not able to save and restore several times with the same max_epoch count.
Here's what I do:
Step 1: run model for max_epochs = 1. Save checkpoint (gets saved as epoch=0.ckpt)
S... |
Hparams are not automatically saved to WandB logger in 1.0.2 | [
"bug",
"help wanted"
] | 🐛 Bug
When I update to 1.0.2 and assign self.hparams = args in the LightningModule, the hparams are not logged in WandB anymore. This bug is not present in 1.0.0, however. Snippets of my code:
...
parser.add_argument("--description", type=str, default="Trainig")
...
args = parser.parse_args()
main(args)
# Inside main... |
Resuming training from a finished model results in a new incorrect checkpoint | [
"bug",
"duplicate",
"help wanted"
] | To verify whether a model has finished training, I ran the training script again.
However, when I try to evaluate with the model, I got error that:
pytorch_lightning.utilities.exceptions.MisconfigurationException:
you restored a checkpoint with current_epoch=11
but the Trainer(max_epochs=10)
... |
Too many backwards with LBFGS | [
"bug",
"help wanted"
] | 🐛 Bug
When using LBFGS we have one backward step too many, because we call backward before the optimizer step (also for gradient accumulation), but the optimizer step gets a closure and therefore calls backward again.
To Reproduce
import torch
import pytorch_lightning as ptl
from pytorch_lightning import LightningMo... |
ModelCheckpoint(monitor='val_loss') crashes when with self.log("val_loss") | [
"bug",
"help wanted"
] | 🐛 Bug
ModelCheckpoint crashes with a MisconfigurationException when using monitor and calling self.log inside the validation_epoch_end function.
Please reproduce using the BoringModel and post here
https://colab.research.google.com/drive/1vqVx1l2tp9adKAeTUS8Q-zUUBQnZtbTY
To Reproduce
N/A
Expected behavior
ModelCheckpo... |
Viewing validation statistics by epoch (on x-axis) broken in Wandb | [
"bug",
"help wanted"
] | 🐛 Bug
Here's the boring model.
If you want to view the charts in Weights and Biases with epoch on the X-axis, you get a message that there is "no data available". Viewing with the step/global step on the X-axis still works. See the two images below:
With epoch on x-axis
With global_step on x-axis
I suspect this is r... |
loss=None and no logs when automatic_optimization=False | [
"bug",
"docs",
"logger"
] | 🐛 Bug
I think there is a bug when automatic_optimization=False. The loss=None (
pytorch-lightning/pytorch_lightning/trainer/training_loop.py
Line 336
in
72f1976
loss=untouched_loss,
) and this means that al... |
<auto_select_gpus=True, gpus=-1> raise MisconfigurationException("GPUs requested but none are available.") | [
"bug",
"help wanted"
] | 🐛 Bug
auto_select_gpus
If auto_select_gpus is enabled and gpus is an integer, available GPUs are picked automatically.
But if you set gpus to -1, it raises MisconfigurationException("GPUs requested but none are available.")
Please reproduce using [the BoringModel and post here]
bug_auto_select_gpus
Expected behavior
pick all... |
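The expected fix amounts to normalizing `gpus=-1` to "every visible GPU" before `auto_select_gpus` picks from them. A hypothetical helper sketching that normalization (not Lightning's actual implementation):

```python
# Sketch of the expected normalization: gpus=-1 should expand to all
# available GPU indices; a positive integer n means the first n GPUs.

def normalize_gpu_request(gpus, available: int):
    if gpus == -1:
        return list(range(available))   # request every visible GPU
    if isinstance(gpus, int):
        return list(range(gpus))
    return gpus                         # already an explicit index list
```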
[Feature] Add on_after_backward in Callback. Enable ModelGradTrackerCallback | [
"duplicate",
"feature",
"help wanted",
"good first issue",
"callback"
] | 🚀 Feature
Motivation
The call_hook on_after_backward is already implemented, but not added to the Callback class.
It boils down to adding it within the Callback class
Pitch
Adding this new hook within Callback could be used to implement something like ModelGradTrackerCallback.
Alternatives
Additional context |
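The feature boils down to a no-op default on the Callback base class plus the trainer dispatching the hook to every callback. A framework-free sketch of that pattern (class and function names mirror the issue's proposal but are illustrative):

```python
# Sketch of callback-hook dispatch: the base class provides a no-op
# default, subclasses override it, and the trainer calls the hook on
# every registered callback.

class Callback:
    def on_after_backward(self, trainer, pl_module):
        pass  # default: do nothing

class GradTrackerCallback(Callback):
    def __init__(self):
        self.calls = 0

    def on_after_backward(self, trainer, pl_module):
        self.calls += 1  # e.g. inspect and log pl_module's gradients here

def call_hook(callbacks, name, *args):
    # what the trainer would do right after loss.backward()
    for cb in callbacks:
        getattr(cb, name)(*args)
```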
use of add_embedding as logger.experiment.add_embedding | [
"bug",
"help wanted",
"3rd party"
] | 🐛 Bug
def validation_step(self, batch, batch_idx):
x, y = batch
mu, logvar = self.encode(x.view(-1, 784))
z = self.reparameterize(mu, logvar)
x_rec =self(z)
val_loss = self.loss_function(x_rec, x, mu, logvar)
if batch_idx == 0:
n = min(x.size(0), 8)
... |
Checkpoint is saving the model based on the last val_metric_step value and not val_metric_epoch | [
"help wanted",
"docs",
"checkpointing"
] | π Bug
Checkpoint callback did not save some models even though they achieved a better result in the monitored metric than the currently saved top-k models
Expected behavior
Checkpoint callback saving the best scoring models based on a metric
Environment
I am using pytorch-lightning 1.0.2
Update:
I changed the checkpoi... |
Advice on how to use a self-supervised regression scheme within a single step in pl | [
"question",
"won't fix"
] | Hi
I have the following scheme:
class refine_P(LightningModule):
def __init__(
self, hparams,
):
self.model = #
self.regressor #
def training_step(self, batch, batch_idx, is_train=True):
out = self.model(batch)
self.regressor.reset_parameters()
base_loss = self.base... |
Add total params to weights_summary table | [
"duplicate",
"feature",
"help wanted",
"won't fix"
] | π Feature
Add total number of parameters when printing the weights_summary table.
Motivation
Since the total number of parameters for each layer is already calculated, it would be really informative if a total sum of number of parameters were also provided. Something like https://github.com/TylerYep/torch-summary or h... |
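The requested total is a one-line reduction over the per-layer counts. A framework-free sketch (with real torch modules this is `sum(p.numel() for p in model.parameters())`; the shape-based helper below is only for illustration):

```python
import math

def total_parameters(layer_shapes):
    """Sum parameter counts across layers, given each layer's weight shape.
    Mirrors what a weights_summary footer row could print as "Total params"."""
    return sum(math.prod(shape) for shape in layer_shapes)
```

For example, a 784->128->10 MLP with biases has `total_parameters([(784, 128), (128,), (128, 10), (10,)])` parameters.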
segmentation fault when import pytorch_lightning | [
"question"
] | I am trying the very minimum code which imports torch and pytorch_lightning, with the following code:
import pytorch_lightning as pl
The import of pytorch_lightning fails immediately with the error: 29144 segmentation fault (core dumped) python
I am using pytorch-dev 1.7.0 as it is required for cuda11 with new GPUs, ... |
Batch size finder is not working if batch_size is specified in LightningDataModule | [
"bug",
"help wanted",
"trainer: tune"
] | π Bug
The batch size finder won't work if the batch size is specified only in the LightningDataModule (where it is natural to define it).
An instance of a LightningModule always has the attribute hparams; the batch_size finder raises a MisconfigurationException if batch_size isn't found there.
Please reproduce using ... |
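For context, the finder's "power" mode amounts to doubling the batch size until a trial step fails, then keeping the last size that worked. A self-contained sketch (the `fits` predicate stands in for a real training step that may raise OOM; names are illustrative, not Lightning's internals):

```python
def scale_batch_size(fits, init=2, max_trials=25):
    """Power-scaling sketch of a batch-size finder: double the batch size
    until fits(bs) fails, then return the last size that worked."""
    bs, last_good = init, None
    for _ in range(max_trials):
        if fits(bs):
            last_good, bs = bs, bs * 2
        else:
            break
    return last_good
```

The fix the issue asks for is orthogonal to this loop: look the `batch_size` attribute up on the datamodule as well as on `hparams` before each trial.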
Gif on the main repo page is outdated | [
"docs"
] | It uses TrainResult which was deprecated |
Circulation training with different seed increases memory | [
"bug",
"help wanted",
"won't fix"
] | π Bug
Reproduced using the BoringModel:
https://colab.research.google.com/drive/1HvWVVTK8j2Nj52qU4Q4YCyzOm0_aLQF3?usp=sharing
Because of the needs of my project, I need to run the program over and over again to measure the performance of the model. So each time I give the model a different SEED. This ... |
Metrics do not support multilabel tasks. | [
"bug",
"help wanted"
] | π Bug
Scikit-learn metrics deal well with multilabel tasks, but this doesn't seem to be supported in Pytorch-Lightning metrics. There is this #3350 , but it seems to confuse multiclass with multilabel (multiple values to predict).
To Reproduce
Given predictions tensor:
tensor([[0., 0.],
[0., 0.],
... |
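The two multilabel aggregations scikit-learn offers (exact match and per-label accuracy) are easy to state precisely; a plain-Python sketch on nested lists, for illustration only (real code would use `sklearn.metrics.accuracy_score` / `hamming_loss` or torchmetrics equivalents):

```python
def subset_accuracy(preds, targets):
    """Exact-match ratio: a sample counts only if every label matches."""
    hits = sum(p == t for p, t in zip(preds, targets))
    return hits / len(targets)

def hamming_score(preds, targets):
    """Per-label accuracy, averaged over all (sample, label) pairs."""
    total = correct = 0
    for p, t in zip(preds, targets):
        for pi, ti in zip(p, t):
            total += 1
            correct += pi == ti
    return correct / total
```

With `preds=[[0, 1], [1, 1]]` and `targets=[[0, 1], [1, 0]]`, subset accuracy is 0.5 while the Hamming score is 0.75, which shows why the two must not be conflated with multiclass accuracy.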
.fit() hangs when using DDP with relative imports in main code | [
"bug",
"help wanted",
"won't fix",
"distributed",
"priority: 2"
] | π Bug
My training script is a module within a package. If my module uses relative imports and ddp backend, it throws an error about relative imports and hangs. Using ddp_spawn backend and relative imports works as expected.
The process becomes unresponsive even to Ctrl-C and I have to kill it and its subprocesses by P... |
Function IoU - Input Expectations | [
"docs"
] | π Documentation
Change the input description to state that predictions should be an integer mask ([b, h, w]) rather than raw logits ([b, c, h, w]).
If logits are provided, the function still runs but warns that the number of classes is set incorrectly, from feeding the determined num_classes returned by get_num_classes() to...
GAN example doesn't work | [
"bug",
"help wanted"
] | π Bug
The GAN example for MNIST doesn't work as advised in the script basic_gan_module.py
Please reproduce using the BoringModel and post here
To reproduce the issue just run:
basic_gan_module.py --gpus 1
To Reproduce
Expected behavior
Train the GAN network well for MNIST dataset with the default setting.
Environment... |
Tidy up returns from `ddp_train()` of accelerators | [
"bug",
"help wanted",
"priority: 0",
"distributed"
] | π Bug
There are currently inconsistent returns from the method ddp_train of accelerator classes.
For example, the method ddp_train of DDP2Accelerator returns results while ddp_train of DDPSpawnAccelerator returns None. Although I am not familiar with distributed training, it seems that both of the methods should retur... |
slurm auto re-queue inconsistency | [
"bug",
"help wanted",
"won't fix",
"checkpointing",
"environment: slurm",
"priority: 1"
] | Hi! I submitted a slurm job-array with pytorch lightning functionality. I used the suggested signal (#SBATCH --signal=SIGUSR1@90) and set distributed_backend to 'ddp' in the Trainer call. I did notice successful auto-resubmission this morning whenever my jobs were pre-empted; however, I now notice that several of them...
@abstractmethod on virtual methods | [
"bug",
"help wanted",
"good first issue",
"won't fix",
"priority: 1"
] | There are a number of methods of the LightningDataModule which are marked as @abstractmethod even though they're not really abstract, e.g. transfer_batch_to_device.
Requires that the metaclass is ABCMeta or derived from it. A
class that has a metaclass derived from ABCMeta cannot be
instantiated unless all of its abst... |
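The behavior the quoted docstring describes is easy to demonstrate: without an ABCMeta-derived metaclass, @abstractmethod is inert and the class instantiates anyway (the class names below are illustrative stand-ins, not Lightning classes):

```python
from abc import ABC, abstractmethod

class Hooks:  # metaclass is plain `type`, like the hooks mixins in question
    @abstractmethod
    def transfer_batch_to_device(self, batch, device):
        ...

# Without ABCMeta the decorator has no effect: instantiation succeeds.
h = Hooks()

class StrictHooks(ABC):  # same decorator, ABCMeta metaclass: now it blocks
    @abstractmethod
    def transfer_batch_to_device(self, batch, device):
        ...

try:
    StrictHooks()
    blocked = False
except TypeError:
    blocked = True
```

So marking these virtual methods with @abstractmethod is misleading documentation rather than an actual constraint.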
WandbLogger fails in 1.0.2 due to non-JSON serializable object | [
"bug",
"help wanted"
] | π Bug
After updating to PL 1.0.2, the WandbLogger fails with the following TypeError:
Traceback (most recent call last):
File "wandblogger_issue.py", line 12, in <module>
wandb_logger.log_hyperparams(vars(args))
File "/home/groups/mignot/miniconda3/envs/pl/lib/python3.7/site-packages/pytorch_lightning/utilitie... |
k-fold cross validation using DataModule | [
"duplicate",
"feature",
"help wanted",
"question",
"data handling"
] | DataModule is great!
I'm wondering how it can be applied to handle k-fold cross-validation. |
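One common pattern (a sketch, not an official Lightning API) is to give the DataModule a fold index in `__init__` and build train/val Subsets from precomputed fold index lists. The fold computation itself is framework-free:

```python
def kfold_indices(n, k):
    """Split range(n) into k contiguous, near-equal folds; each element
    appears in exactly one validation fold. A hypothetical DataModule could
    take a fold index in __init__ and use folds[i] as the val indices and
    the rest as train indices."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds
```

Running k trainings then means instantiating one DataModule (and one fresh Trainer) per fold; shuffle the indices first if the dataset is ordered.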
Limit_train_batches vs val_check_interval | [
"question",
"won't fix"
] | Does limit_train_batches=0.5 and val_check_interval=0.5 effectively do the same thing (minus impacting the total number of epochs)? That is, if my data loader is shuffling and I use limit_train_batches, can I safely assume that after 2 epochs I will have gone through the whole dataset or will I only go through the same... |
Segmentation Fault when training 3D CNN using 4 GPUs with batch_size=8 | [
"bug",
"help wanted",
"won't fix"
] | π Bug
Trying to train 3D CNN using 4 GPUs, batch_size = 8, and num_workers >= 4 (ddp backend). I'm using a GCP VM with 16 cores and 60GB memory. The data is stored on a mounted disk and is roughly 3 TB.
I can successfully train using 2 GPUs, batch_size=4, and num_workers=4, but whenever I try increasing number of GPUs... |
auto_select_gpu does not free memory allocated on GPU for DDP/Horovod | [
"bug",
"help wanted",
"won't fix",
"distributed",
"priority: 1"
] | π Bug
If using both auto_select_gpu=True and the ddp or horovod accelerator, the memory allocated by pick_single_gpu is not freed by torch.
This can't be reproduced on Colab since DDP isn't supported.
To Reproduce
Paste this code into a file and run it:
import pytorch_lightning as pl
from torch import nn
from torchvis... |
Problem with truncated_bptt_steps | [
"help wanted",
"working as intended"
] | π Bug
When setting truncated_bptt_steps we can observe 3 bugs:
1. An exception is raised when num_sanity_val_steps=2:
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/lightning.py in get_progress_bar_dict(self)
1355
1356 if self.trainer.truncated_bptt_steps is not None:
-> 1357 ... |