Prompt Engineering / GenAI (~20 mins)

Copyright and IP considerations in Prompt Engineering / GenAI - ML Experiment: Train & Evaluate

Experiment - Copyright and IP considerations
Problem: You have trained a generative AI model that creates images based on text prompts. However, some generated images closely resemble copyrighted artworks, raising concerns about copyright infringement and intellectual property (IP) rights.
Current Metrics: The model generates high-quality images with 90% user satisfaction, but 15% of outputs are flagged for potential copyright similarity.
Issue: The model risks infringing on copyrighted content, which can lead to legal issues and restrict commercial use.
Your Task
Modify the generative AI model or its training process to reduce the generation of images that closely resemble copyrighted works, aiming to lower flagged outputs from 15% to under 5%, while maintaining at least 85% user satisfaction.
Cannot reduce the overall quality of generated images significantly.
Must keep the model architecture largely the same.
Can adjust training data, loss functions, or add filtering mechanisms.
Solution
import torch
import torch.nn as nn
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# Assume a generative model class exists: GenModel

# Step 1: Filter training data to exclude copyrighted images
# For demonstration, assume filtered_dataset is prepared

# Step 2: Define a penalty function for similarity to copyrighted images
# Here, a dummy function sim_penalty returns higher loss for similar images

def sim_penalty(generated, copyrighted_features):
    # Dummy similarity penalty calculation: higher for more similar images
    mse = torch.mean((generated - copyrighted_features) ** 2)
    penalty = torch.exp(-mse)  # 1 when identical, approaches 0 when dissimilar
    return penalty
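Pixel-space MSE is a crude proxy for similarity; in practice, comparisons are usually made in a learned feature space (e.g. embeddings from a frozen image encoder such as CLIP), which also captures stylistic resemblance. A minimal sketch, assuming a hypothetical `encoder` that maps image batches to embedding vectors:

```python
import torch
import torch.nn.functional as F

def embedding_penalty(generated, copyrighted_embeddings, encoder):
    # encoder: hypothetical frozen feature extractor, (B, C, H, W) -> (B, D)
    gen_emb = F.normalize(encoder(generated), dim=-1)      # (B, D) unit vectors
    ref_emb = F.normalize(copyrighted_embeddings, dim=-1)  # (N, D) unit vectors
    sims = gen_emb @ ref_emb.t()                           # (B, N) cosine sims
    # Penalize each generated image by its closest protected work
    return sims.max(dim=1).values.clamp(min=0).mean()
```

This drops into the training loop in place of `sim_penalty`; because embeddings are normalized, the penalty stays in [0, 1] and needs no rescaling.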

# Step 3: Training loop with penalty

def train(model, dataloader, copyrighted_features, optimizer, criterion, penalty_weight=0.1):
    model.train()
    total_loss = 0
    for data in dataloader:
        optimizer.zero_grad()
        inputs = data[0]
        outputs = model(inputs)
        loss = criterion(outputs, inputs)  # reconstruction loss
        penalty = sim_penalty(outputs, copyrighted_features)
        total = loss + penalty_weight * penalty
        total.backward()
        optimizer.step()
        total_loss += total.item()
    return total_loss / len(dataloader)

# Step 4: Post-generation filter example

def post_generation_filter(generated_images, threshold=0.8):
    # Dummy filter that removes images with similarity > threshold
    filtered = []
    for img in generated_images:
        similarity = torch.rand(1).item()  # Random similarity for demo
        if similarity < threshold:
            filtered.append(img)
    return filtered
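The random similarity above is a placeholder. A more realistic filter compares each generated image against a bank of protected-work embeddings and drops anything above a cosine-similarity threshold. A sketch, again assuming a hypothetical frozen `encoder` (in production this could be a CLIP encoder or a perceptual hash):

```python
import torch
import torch.nn.functional as F

def similarity_filter(generated_images, reference_embeddings, encoder, threshold=0.8):
    # Keep only images whose closest protected work stays below `threshold`
    emb = F.normalize(encoder(generated_images), dim=-1)   # (B, D)
    refs = F.normalize(reference_embeddings, dim=-1)       # (N, D)
    max_sim = (emb @ refs.t()).max(dim=1).values           # (B,) closest match
    return generated_images[max_sim < threshold]
```

The threshold trades recall for precision: lowering it blocks more borderline images but also discards some safe, high-quality outputs, which is why the exercise pairs this filter with training-time mitigations rather than relying on it alone.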

# Usage example (pseudocode):
# model = GenModel()
# optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# criterion = nn.MSELoss()
# copyrighted_features = torch.randn(1, 3, 64, 64)  # Dummy features
# dataloader = DataLoader(filtered_dataset, batch_size=32)
# for epoch in range(10):
#     loss = train(model, dataloader, copyrighted_features, optimizer, criterion)
# generated = model(torch.randn(10, 3, 64, 64))
# safe_images = post_generation_filter(generated)
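As a smoke test, the pseudocode above can be exercised end to end with a toy stand-in for `GenModel`. Both the tiny autoencoder and the random "copyrighted" tensor below are illustrative assumptions, not part of the exercise:

```python
import torch
import torch.nn as nn

# Toy stand-in for GenModel: a minimal convolutional autoencoder
class TinyGen(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(3, 8, 3, padding=1)
        self.dec = nn.Conv2d(8, 3, 3, padding=1)

    def forward(self, x):
        return self.dec(torch.relu(self.enc(x)))

model = TinyGen()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
crit = nn.MSELoss()
copyrighted = torch.randn(1, 3, 16, 16)  # dummy protected features
batch = torch.randn(8, 3, 16, 16)        # dummy training batch

# One training step: reconstruction loss plus the similarity penalty
opt.zero_grad()
out = model(batch)
loss = crit(out, batch) + 0.1 * torch.exp(-torch.mean((out - copyrighted) ** 2))
loss.backward()
opt.step()
```

A full run would wrap this step in the epoch loop sketched above and track the flagged-output rate on a held-out evaluation set.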
Filtered training dataset to exclude copyrighted images.
Added a similarity penalty term in the loss function to discourage generating images close to copyrighted content.
Implemented a post-generation filter to block images that are too similar to copyrighted works.
Results Interpretation

Before: 90% user satisfaction, 15% flagged for copyright similarity.

After: 87% user satisfaction, 4% flagged outputs.

By carefully adjusting training data and adding penalties for similarity, the model reduces copyright risks while maintaining high-quality outputs. This shows how ethical and legal considerations can guide model training.
Bonus Experiment
Try using a generative adversarial network (GAN) with a discriminator trained to detect copyrighted styles and penalize the generator accordingly.
💡 Hint
Train the discriminator on copyrighted vs. non-copyrighted images to help the generator avoid copying protected content.
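One way to structure the bonus experiment: give the discriminator a scalar "copyrighted-style" score and add that score to the generator's loss. The architecture below is a deliberately tiny sketch (a real discriminator would be convolutional); all class and function names here are illustrative:

```python
import torch
import torch.nn as nn

class StyleDiscriminator(nn.Module):
    # Scores how "copyrighted-looking" an image is, in (0, 1)
    def __init__(self, channels=3, size=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels * size * size, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

def generator_loss(recon_loss, fake_images, discriminator, penalty_weight=0.1):
    # Penalize the generator when the discriminator flags its outputs
    style_score = discriminator(fake_images).mean()
    return recon_loss + penalty_weight * style_score
```

During training, the discriminator would be updated on labeled copyrighted vs. non-copyrighted images while the generator minimizes `generator_loss`, pushing its outputs away from protected styles.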