
Python code learning a score based diffusion model for CIFAR10


I am trying to implement Python code that learns the score function of the CIFAR10 dataset and generates images from it, based on the diffusion process of the Song et al. 2020 paper. Although I have tried different settings, I cannot manage to generate good samples, and I am not sure whether the bug in my code is in the training step or in the sampling step. Can anyone help me?

# %%
save_path ="ckptAll.pth"
device = 'mps'

# %%
from torchvision.datasets import CIFAR10
from torchvision import transforms
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
import torch
import numpy as np
import warnings

warnings.filterwarnings("ignore")

# Define the batch size
batch_size = 256


# Define a transform to resize the images to 28x28 and convert to tensors without grayscale conversion
transform = transforms.Compose([
    transforms.Resize((28, 28)),
    # transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
])

# Load the CIFAR-10 dataset
dataset = CIFAR10('.', train=True, transform=transform, download=True)

# idx = [i for i, (img, label) in enumerate(dataset) if label == 2]
# dataset = torch.utils.data.Subset(dataset, idx)
data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=True, num_workers=4)

# Get a single batch of images and labels
dataiter = iter(data_loader)
images, labels = next(dataiter)

# Print the size of the images
print(f"Image size: {images.size()}")

def cvtImg(img):
    img = img.permute([0, 2, 3, 1])
    img = img - img.min()
    img = (img / img.max())
    return img.numpy().astype(np.float32)

def show_examples(x):
    plt.figure(figsize=(6, 6),dpi=200)
    # plt.subplots_adjust(left=0, right=1, top=1, bottom=0, wspace=0, hspace=0)

    imgs = cvtImg(x)
    for i in range(256):
        plt.subplot(16, 16, i+1)
        plt.imshow(imgs[i])
        plt.axis('off')

x, _ = next(iter(data_loader))
show_examples(x)

# %%
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

class GaussianFourierProjection(nn.Module):
    """Gaussian random features for encoding time steps."""
    def __init__(self, embed_dim, scale=30.):
        super().__init__()
        self.W = nn.Parameter(torch.randn(embed_dim // 2) * scale, requires_grad=False)  # fixed (non-trainable) random projection weights

    def forward(self, x):
        x_proj = x[:, None] * self.W[None, :] * 2 * np.pi
        return torch.cat([torch.sin(x_proj), torch.cos(x_proj)], dim=-1)

class Dense(nn.Module):
    """A fully connected layer that reshapes outputs to feature maps."""
    def __init__(self, input_dim, output_dim):
        super().__init__()
        self.dense = nn.Linear(input_dim, output_dim)
    def forward(self, x):
        return self.dense(x)[..., None, None]

class ChannelAttention(nn.Module):
    """Channel Attention Layer (Squeeze-and-Excitation)"""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction, bias=False)
        self.fc2 = nn.Linear(channels // reduction, channels, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Squeeze: Global Average Pooling
        avg_pool = torch.mean(x, dim=[2, 3], keepdim=True)
        # Excitation: Fully connected layers
        excitation = self.fc1(avg_pool.view(avg_pool.size(0), -1))
        excitation = self.fc2(excitation)
        attention = self.sigmoid(excitation).view(x.size(0), x.size(1), 1, 1)
        return x * attention

class ScoreNet(nn.Module):
    """A time-dependent score-based model built upon U-Net architecture."""
    def __init__(self, marginal_prob_std, channels=[32, 64, 128, 256], embed_dim=256):
        super().__init__()
        self.embed = nn.Sequential(GaussianFourierProjection(embed_dim=embed_dim),
                                   nn.Linear(embed_dim, embed_dim))
        
        self.conv1 = nn.Conv2d(3, channels[0], 3, stride=1, bias=False)
        self.dense1 = Dense(embed_dim, channels[0])
        self.gnorm1 = nn.GroupNorm(4, num_channels=channels[0])
        self.conv2 = nn.Conv2d(channels[0], channels[1], 3, stride=2, bias=False)
        self.dense2 = Dense(embed_dim, channels[1])
        self.gnorm2 = nn.GroupNorm(32, num_channels=channels[1])
        self.conv3 = nn.Conv2d(channels[1], channels[2], 3, stride=2, bias=False)
        self.dense3 = Dense(embed_dim, channels[2])
        self.gnorm3 = nn.GroupNorm(32, num_channels=channels[2])
        self.conv4 = nn.Conv2d(channels[2], channels[3], 3, stride=2, bias=False)
        self.dense4 = Dense(embed_dim, channels[3])
        self.gnorm4 = nn.GroupNorm(32, num_channels=channels[3])

        # # Adding Attention Layer
        self.att1 = ChannelAttention(channels[0])
        self.att2 = ChannelAttention(channels[1])
        self.att3 = ChannelAttention(channels[2])
        self.att4 = ChannelAttention(channels[3])

        self.tconv4 = nn.ConvTranspose2d(channels[3], channels[2], 3, stride=2, bias=False)
        self.dense5 = Dense(embed_dim, channels[2])
        self.tgnorm4 = nn.GroupNorm(32, num_channels=channels[2])
        self.tconv3 = nn.ConvTranspose2d(channels[2] + channels[2], channels[1], 3, stride=2, bias=False, output_padding=1)
        self.dense6 = Dense(embed_dim, channels[1])
        self.tgnorm3 = nn.GroupNorm(32, num_channels=channels[1])
        self.tconv2 = nn.ConvTranspose2d(channels[1] + channels[1], channels[0], 3, stride=2, bias=False, output_padding=1)
        self.dense7 = Dense(embed_dim, channels[0])
        self.tgnorm2 = nn.GroupNorm(32, num_channels=channels[0])
        self.tconv1 = nn.ConvTranspose2d(channels[0] + channels[0], 3, 3, stride=1)

        self.act = lambda x: x * torch.sigmoid(x)  # Swish/SiLU activation
        self.marginal_prob_std = marginal_prob_std

    def forward(self, x, t):
        embed = self.act(self.embed(t))
        h1 = self.conv1(x)
        h1 += self.dense1(embed)
        h1 = self.gnorm1(h1)
        h1 = self.act(h1)
        h1 = self.att1(h1)  # Apply attention

        h2 = self.conv2(h1)
        h2 += self.dense2(embed)
        h2 = self.gnorm2(h2)
        h2 = self.act(h2)
        h2 = self.att2(h2)  # Apply attention

        h3 = self.conv3(h2)
        h3 += self.dense3(embed)
        h3 = self.gnorm3(h3)
        h3 = self.act(h3)
        h3 = self.att3(h3)  # Apply attention

        h4 = self.conv4(h3)
        h4 += self.dense4(embed)
        h4 = self.gnorm4(h4)
        h4 = self.act(h4)
        h4 = self.att4(h4)  # Apply attention

        h = self.tconv4(h4)
        h += self.dense5(embed)
        h = self.tgnorm4(h)
        h = self.act(h)
        h = self.tconv3(torch.cat([h, h3], dim=1))
        h += self.dense6(embed)
        h = self.tgnorm3(h)
        h = self.act(h)
        h = self.tconv2(torch.cat([h, h2], dim=1))
        h += self.dense7(embed)
        h = self.tgnorm2(h)
        h = self.act(h)
        h = self.tconv1(torch.cat([h, h1], dim=1))

        h = h / self.marginal_prob_std(t)[:, None, None, None]
        return h
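
# %%
# A minimal shape check, not part of the original post: feed a dummy batch through
# ScoreNet with a placeholder (constant) std function and confirm that the output has
# the same shape as the input. The real marginal_prob_std_fn is only defined in a later
# cell, so a constant std of 1.0 is used here purely to exercise the network.
with torch.no_grad():
    _dummy_net = ScoreNet(marginal_prob_std=lambda t: torch.ones_like(t))
    _dummy_x = torch.randn(4, 3, 28, 28)
    _dummy_t = torch.rand(4)
    print(_dummy_net(_dummy_x, _dummy_t).shape)  # expected: torch.Size([4, 3, 28, 28])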


# %%
#@title Set up the SDE
import functools


def marginal_prob_std(t, sigma):
  """Compute the mean and standard deviation of $p_{0t}(x(t) | x(0))$.

  Args:
    t: A vector of time steps.
    sigma: The $\sigma$ in our SDE.

  Returns:
    The standard deviation.
  """
  t = torch.tensor(t, device=device)
  return torch.sqrt((sigma**(2 * t) - 1.) / 2. / np.log(sigma))

def diffusion_coeff(t, sigma):
  """Compute the diffusion coefficient of our SDE.

  Args:
    t: A vector of time steps.
    sigma: The $\sigma$ in our SDE.

  Returns:
    The vector of diffusion coefficients.
  """
  return torch.tensor(sigma**t, device=device)

sigma = 15.0  #@param {'type':'number'}
marginal_prob_std_fn = functools.partial(marginal_prob_std, sigma=sigma)
diffusion_coeff_fn = functools.partial(diffusion_coeff, sigma=sigma)
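
# %%
# Consistency check, not part of the original post: simulate the forward VE SDE
# dx = g(t) dw with g(t) = diffusion_coeff_fn(t) using Euler-Maruyama, and compare the
# empirical standard deviation at t = 1 with the closed-form marginal_prob_std_fn(1.).
# For dx = sigma^t dw the marginal std is sqrt((sigma^(2t) - 1) / (2 log sigma)), so the
# two numbers should agree closely; a mismatch would mean training and sampling use
# different noise schedules. Assumes the cells above (device, sigma, ...) have been run.
n_paths, n_sim_steps = 20000, 1000
dt = 1. / n_sim_steps
x_sim = torch.zeros(n_paths, device=device)
with torch.no_grad():
    for k in range(n_sim_steps):
        t_k = torch.full((n_paths,), k * dt, device=device)
        x_sim = x_sim + diffusion_coeff_fn(t_k) * (dt ** 0.5) * torch.randn_like(x_sim)
print(f"empirical std at t=1:  {x_sim.std().item():.3f}")
print(f"analytical std at t=1: {marginal_prob_std_fn(1.).item():.3f}")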

# %%
#@title Define the loss function (double click to expand or collapse)

def loss_fn(model, x, marginal_prob_std, eps=1e-5):
  """The loss function for training score-based generative models.

  Args:
    model: A PyTorch model instance that represents a
      time-dependent score-based model.
    x: A mini-batch of training data.
    marginal_prob_std: A function that gives the standard deviation of
      the perturbation kernel.
    eps: A tolerance value for numerical stability.
  """
  random_t = torch.rand(x.shape[0], device=x.device) * (1. - eps) + eps
  z = torch.randn_like(x)
  std = marginal_prob_std(random_t)
  perturbed_x = x + z * std[:, None, None, None]
  score = model(perturbed_x, random_t)
  loss = torch.mean(torch.sum((score * std[:, None, None, None] + z)**2, dim=(1,2,3)))
  return loss
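
# %%
# Quick smoke test, not part of the original post: evaluate the loss on one batch with a
# freshly initialised ScoreNet to confirm that all shapes line up and the loss is finite
# before starting a long training run. Assumes data_loader, device and
# marginal_prob_std_fn from the cells above.
_x_dbg, _ = next(iter(data_loader))
_x_dbg = _x_dbg.to(device)
_dbg_model = ScoreNet(marginal_prob_std=marginal_prob_std_fn).to(device)
with torch.no_grad():
    _dbg_loss = loss_fn(_dbg_model, _x_dbg, marginal_prob_std_fn)
print("untrained loss on one batch:", _dbg_loss.item())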

# %%

import torch
import functools
from torch.optim import Adam
from torch.utils.data import DataLoader
import torchvision.transforms as transforms

# from tqdm.notebook import tqdm
score_model = torch.nn.DataParallel(ScoreNet(marginal_prob_std=marginal_prob_std_fn))
score_model = score_model.to(device)
# ckpt = torch.load(save_path, map_location=device)
# score_model.load_state_dict(ckpt)

n_epochs = 5000  #@param {'type':'integer'}
## size of a mini-batch
batch_size = 128  #@param {'type':'integer'}
## learning rate
lr = 5e-4  #@param {'type':'number'}



optimizer = Adam(score_model.parameters(), lr=lr, weight_decay=0.0,
                 betas=(0.9, 0.999))
def training():

    for epoch in range(n_epochs):
        if epoch % 50 == 0:
            print(f'epoch={epoch}')
            torch.save(score_model.state_dict(), save_path)

        avg_loss = 0.
        num_items = 0
        for x, y in data_loader:
            x = x.to(device)
            loss = loss_fn(score_model, x, marginal_prob_std_fn)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            avg_loss += loss.item() * x.shape[0]
            num_items += x.shape[0]

        # Print the averaged training loss so far.
        # tqdm_epoch.set_description('Average Loss: {:5f}'.format(avg_loss / num_items))
        # Update the checkpoint after each epoch of training.
        # torch.save(score_model.state_dict(), 'ckptOR1.pth')
        torch.save(score_model.state_dict(), save_path)
    return


training()
        

# %%
import torch
import numpy as np
from tqdm import tqdm

signal_to_noise_ratio = 0.2  # Signal-to-noise ratio
num_steps = 500  # Number of sampling steps
score_model = torch.nn.DataParallel(ScoreNet(marginal_prob_std=marginal_prob_std_fn))
score_model = score_model.to(device)

def pc_sampler(score_model,
               marginal_prob_std,
               diffusion_coeff,
               batch_size=64,
               num_steps=num_steps,
               snr=signal_to_noise_ratio,
               device='cuda',
               eps=1e-5):
    """Generate samples from score-based models with Predictor-Corrector method.

    Args:
        score_model: A PyTorch model that represents the time-dependent score-based model.
        marginal_prob_std: A function that gives the standard deviation
            of the perturbation kernel.
        diffusion_coeff: A function that gives the diffusion coefficient
            of the SDE.
        batch_size: The number of samples to generate by calling this function once.
        num_steps: The number of sampling steps.
            Equivalent to the number of discretized time steps.
        device: 'cuda' for running on GPUs, and 'cpu' for running on CPUs.
        eps: The smallest time step for numerical stability.

    Returns:
        Samples.
    """
    t = torch.ones(batch_size, device=device)
    init_x = torch.randn(batch_size, 3, 28, 28, device=device) * marginal_prob_std(t)[:, None, None, None]  # Change to 3 channels
    time_steps = np.linspace(1., eps, num_steps)
    step_size = time_steps[0] - time_steps[1]
    x = init_x
    with torch.no_grad():
        for time_step in tqdm(time_steps):
            batch_time_step = torch.ones(batch_size, device=device) * time_step
            # Corrector step (Langevin MCMC)
            grad = score_model(x, batch_time_step)
            grad_norm = torch.norm(grad.reshape(grad.shape[0], -1), dim=-1).mean()
            noise_norm = np.sqrt(np.prod(x.shape[1:]))
            langevin_step_size = 2 * (snr * noise_norm / grad_norm)**2
            x = x + langevin_step_size * grad + torch.sqrt(2 * langevin_step_size) * torch.randn_like(x)

            # Predictor step (Euler-Maruyama)
            g = diffusion_coeff(batch_time_step)
            x_mean = x + (g**2)[:, None, None, None] * score_model(x, batch_time_step) * step_size
            x = x_mean + torch.sqrt(g**2 * step_size)[:, None, None, None] * torch.randn_like(x)

        # The last step does not include any noise
        return x_mean


# %%
save_path='ckptAllA15.pth'
score_model = torch.nn.DataParallel(ScoreNet(marginal_prob_std=marginal_prob_std_fn))
score_model = score_model.to(device)

ckpt = torch.load(save_path, map_location=device)
score_model.load_state_dict(ckpt, strict=False)

# %%

from IPython import display
from torchvision.utils import make_grid
from tqdm import tqdm
import random

sample_batch_size = 64  #@param {'type':'integer'}
random.seed(100)
# ckpt = torch.load(save_path, map_location=device)
# score_model.load_state_dict(ckpt)

def show_examples(x):
    plt.figure(figsize=(4, 4), dpi=200)
    imgs = cvtImg(x.cpu())
    for i in range(sample_batch_size):
        plt.subplot(8, 8, i+1)
        plt.imshow(imgs[i])
        plt.axis('off')
    plt.show()

num_steps = 500
sampler = pc_sampler

## Generate samples using the specified sampler.
samples = sampler(score_model,
                  marginal_prob_std_fn,
                  diffusion_coeff_fn,
                  sample_batch_size,
                  num_steps=num_steps,
                  device=device)
show_examples(samples)