Weights & Biases (W&B)

1. W&B Overview

Weights & Biases is a platform for ML experiment tracking, hyperparameter tuning, and model management.

1.1 Core Features

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                      Weights & Biases Features                      β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                                                                     β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”             β”‚
β”‚   β”‚ Experiments β”‚    β”‚   Sweeps    β”‚    β”‚  Artifacts  β”‚             β”‚
β”‚   β”‚             β”‚    β”‚             β”‚    β”‚             β”‚             β”‚
β”‚   β”‚ - Tracking  β”‚    β”‚ - Hyperparamβ”‚    β”‚ - Datasets  β”‚             β”‚
β”‚   β”‚ - Metrics   β”‚    β”‚   tuning    β”‚    β”‚ - Models    β”‚             β”‚
β”‚   β”‚ - Visualize β”‚    β”‚             β”‚    β”‚ - Versioningβ”‚             β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜             β”‚
β”‚                                                                     β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”             β”‚
β”‚   β”‚   Tables    β”‚    β”‚   Reports   β”‚    β”‚   Models    β”‚             β”‚
β”‚   β”‚             β”‚    β”‚             β”‚    β”‚             β”‚             β”‚
β”‚   β”‚ - Data viz  β”‚    β”‚ - Docs      β”‚    β”‚ - Model     β”‚             β”‚
β”‚   β”‚             β”‚    β”‚ - Sharing   β”‚    β”‚   registry  β”‚             β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜             β”‚
β”‚                                                                     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

1.2 Installation and Setup

# Install
pip install wandb

# Log in
wandb login
# Enter your API key (https://wandb.ai/authorize)

# Or set it via an environment variable
export WANDB_API_KEY=your-api-key
# Log in from Python
import wandb
wandb.login(key="your-api-key")
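If the training machine has no network access, W&B also supports an offline mode: runs are recorded locally and synced later. A minimal sketch (the run directory pattern is illustrative):

```shell
# Record runs locally instead of streaming to wandb.ai
export WANDB_MODE=offline

# ... run your training script as usual ...

# Later, on a machine with network access, upload the stored runs
wandb sync wandb/offline-run-*
```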

2. Basic Experiment Tracking

2.1 First Experiment

"""
W&B basic usage
"""

import wandb
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report

# Prepare the data
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42
)

# Initialize W&B
wandb.init(
    project="iris-classification",    # project name
    name="random-forest-baseline",    # run name
    config={                          # hyperparameters
        "n_estimators": 100,
        "max_depth": 5,
        "random_state": 42
    },
    tags=["baseline", "random-forest"],
    notes="Initial baseline experiment"
)

# Access the config
config = wandb.config

# Train the model
model = RandomForestClassifier(
    n_estimators=config.n_estimators,
    max_depth=config.max_depth,
    random_state=config.random_state
)
model.fit(X_train, y_train)

# Predict and evaluate
y_pred = model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print(classification_report(y_test, y_pred))

# Log metrics
wandb.log({
    "accuracy": accuracy,
    "test_size": len(X_test),
    "train_size": len(X_train)
})

# Finish the run
wandb.finish()

2.2 Logging the Training Process

"""
Real-time logging during training
"""

import wandb
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader

# Initialize
wandb.init(project="pytorch-training")

# Define the model
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10)
)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=wandb.config.get("lr", 0.001))

# Track model gradients and parameters in W&B
wandb.watch(model, criterion, log="all", log_freq=100)

# Training loop (train_loader, val_loader, and evaluate() are assumed to be defined)
best_accuracy = 0.0
for epoch in range(wandb.config.get("epochs", 10)):
    model.train()
    train_loss = 0

    for batch_idx, (data, target) in enumerate(train_loader):
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()

        train_loss += loss.item()

        # Per-batch logging (optional)
        if batch_idx % 100 == 0:
            wandb.log({
                "batch_loss": loss.item(),
                "epoch": epoch,
                "batch": batch_idx
            })

    # Per-epoch logging
    avg_loss = train_loss / len(train_loader)
    val_accuracy = evaluate(model, val_loader)

    wandb.log({
        "epoch": epoch,
        "train_loss": avg_loss,
        "val_accuracy": val_accuracy
    })

    # Save a checkpoint when validation accuracy improves
    if val_accuracy > best_accuracy:
        torch.save(model.state_dict(), "best_model.pth")
        wandb.save("best_model.pth")
        best_accuracy = val_accuracy

wandb.finish()
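The loop above assumes an evaluate() helper. A minimal framework-agnostic sketch (the model here is any callable returning per-class scores; the exact signature is an assumption, not part of wandb):

```python
import numpy as np

def evaluate(model, loader):
    """Compute accuracy over a loader of (inputs, labels) batches.

    `model` is any callable mapping a batch of inputs to an array of
    per-class scores; `loader` yields (inputs, labels) pairs.
    """
    correct, total = 0, 0
    for inputs, labels in loader:
        scores = model(inputs)             # shape: (batch, num_classes)
        preds = np.argmax(scores, axis=1)  # predicted class per sample
        correct += int((preds == np.asarray(labels)).sum())
        total += len(labels)
    return correct / total
```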

2.3 Logging Different Data Types

"""
Logging various data types
"""

import wandb
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image

wandb.init(project="data-logging-demo")

# 1. Logging images
image = wandb.Image(
    np.random.rand(100, 100, 3),
    caption="Random Image"
)
wandb.log({"random_image": image})

# PIL image
pil_image = Image.open("sample.png")
wandb.log({"pil_image": wandb.Image(pil_image)})

# Multiple images (image_batch is assumed to be defined)
wandb.log({
    "examples": [wandb.Image(img, caption=f"Sample {i}")
                 for i, img in enumerate(image_batch[:10])]
})

# 2. Logging plots
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 4, 9])
ax.set_title("Training Curve")
wandb.log({"plot": wandb.Image(fig)})
plt.close(fig)

# Or use plotly
import plotly.express as px
fig = px.scatter(x=[1, 2, 3], y=[1, 4, 9])
wandb.log({"plotly_chart": fig})

# 3. Histogram (predictions is assumed to be defined)
wandb.log({"predictions": wandb.Histogram(predictions)})

# 4. Table (images, preds, labels are assumed to be defined)
columns = ["id", "image", "prediction", "label"]
data = [
    [i, wandb.Image(img), pred, label]
    for i, (img, pred, label) in enumerate(zip(images, preds, labels))
]
table = wandb.Table(columns=columns, data=data)
wandb.log({"predictions_table": table})

# 5. Confusion matrix (y_true, y_pred, class_names are assumed)
wandb.log({
    "confusion_matrix": wandb.plot.confusion_matrix(
        y_true=y_true,
        preds=y_pred,
        class_names=class_names
    )
})

# 6. ROC curve (y_scores holds per-class probabilities)
wandb.log({
    "roc_curve": wandb.plot.roc_curve(
        y_true, y_scores, labels=class_names
    )
})

# 7. PR curve
wandb.log({
    "pr_curve": wandb.plot.pr_curve(
        y_true, y_scores, labels=class_names
    )
})

wandb.finish()

3. Sweeps (Hyperparameter Tuning)

3.1 Sweep Configuration

"""
W&B sweep configuration
"""

import wandb

# Sweep configuration
sweep_config = {
    "name": "hyperparam-sweep",
    "method": "bayes",  # random, grid, or bayes
    "metric": {
        "name": "val_accuracy",
        "goal": "maximize"
    },
    "parameters": {
        "learning_rate": {
            "distribution": "log_uniform_values",
            "min": 1e-5,
            "max": 1e-1
        },
        "batch_size": {
            "values": [16, 32, 64, 128]
        },
        "epochs": {
            "value": 50  # fixed value
        },
        "optimizer": {
            "values": ["adam", "sgd", "rmsprop"]
        },
        "hidden_dim": {
            "distribution": "int_uniform",
            "min": 32,
            "max": 256
        },
        "dropout": {
            "distribution": "uniform",
            "min": 0.0,
            "max": 0.5
        }
    },
    "early_terminate": {
        "type": "hyperband",
        "min_iter": 5,
        "eta": 3
    }
}

# Create the sweep
sweep_id = wandb.sweep(sweep_config, project="my-project")
print(f"Sweep ID: {sweep_id}")
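Hyperband early termination stops underperforming runs at geometrically spaced checkpoints: roughly min_iter * eta**k iterations. A quick sketch of that schedule for the config above (a small illustrative helper, not part of the wandb API):

```python
def hyperband_brackets(min_iter, eta, max_iter):
    """Iterations at which hyperband considers stopping a run:
    min_iter * eta**k for k = 0, 1, 2, ... while within max_iter."""
    brackets = []
    it = min_iter
    while it <= max_iter:
        brackets.append(it)
        it *= eta
    return brackets

# With min_iter=5, eta=3, epochs=50, runs can be culled at epochs 5, 15, 45
print(hyperband_brackets(5, 3, 50))
```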

3.2 Running a Sweep Agent

"""
Sweep training function
"""

import wandb
import torch
from torch.utils.data import DataLoader

def train_sweep():
    """Training function executed by the sweep"""
    # Initialize W&B (the sweep supplies the config)
    wandb.init()
    config = wandb.config

    # Build the model (create_model is assumed to be defined)
    model = create_model(
        hidden_dim=config.hidden_dim,
        dropout=config.dropout
    )

    # Choose the optimizer
    if config.optimizer == "adam":
        optimizer = torch.optim.Adam(model.parameters(), lr=config.learning_rate)
    elif config.optimizer == "sgd":
        optimizer = torch.optim.SGD(model.parameters(), lr=config.learning_rate)
    else:
        optimizer = torch.optim.RMSprop(model.parameters(), lr=config.learning_rate)

    # Data loaders (train_dataset and val_dataset are assumed to be defined)
    train_loader = DataLoader(train_dataset, batch_size=config.batch_size)
    val_loader = DataLoader(val_dataset, batch_size=config.batch_size)

    # Train
    for epoch in range(config.epochs):
        train_loss = train_one_epoch(model, train_loader, optimizer)
        val_accuracy = evaluate(model, val_loader)

        wandb.log({
            "train_loss": train_loss,
            "val_accuracy": val_accuracy,
            "epoch": epoch
        })

    wandb.finish()

# Run the sweep
wandb.agent(
    sweep_id,
    function=train_sweep,
    count=50  # maximum number of runs
)

3.3 CLIμ—μ„œ Sweep μ‹€ν–‰

# sweep.yaml 파일 생성
# sweep μ‹œμž‘
wandb sweep sweep.yaml

# Agent μ‹€ν–‰ (μ—¬λŸ¬ λ¨Έμ‹ μ—μ„œ 병렬 κ°€λŠ₯)
wandb agent username/project/sweep_id
# sweep.yaml
name: hyperparameter-sweep
method: bayes
metric:
  name: val_accuracy
  goal: maximize
parameters:
  learning_rate:
    distribution: log_uniform_values
    min: 0.00001
    max: 0.1
  batch_size:
    values: [16, 32, 64]
  hidden_dim:
    distribution: int_uniform
    min: 64
    max: 512

4. Artifacts

4.1 Dataset Versioning

"""
Managing datasets with W&B Artifacts
"""

import wandb

# Create and upload an artifact
wandb.init(project="dataset-versioning")

# Create a dataset artifact
dataset_artifact = wandb.Artifact(
    name="mnist-dataset",
    type="dataset",
    description="MNIST dataset for classification",
    metadata={
        "size": 70000,
        "classes": 10,
        "source": "torchvision"
    }
)

# Add files/directories
dataset_artifact.add_file("data/train.csv")
dataset_artifact.add_dir("data/images/")

# Add a remote reference (tracked without downloading)
dataset_artifact.add_reference("s3://bucket/large_data/")

# Upload
wandb.log_artifact(dataset_artifact)
wandb.finish()
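W&B deduplicates artifact contents by checksum internally, but it can also be handy to record a dataset fingerprint in your own metadata. A sketch of computing one (hash_file is an illustrative helper, not part of the wandb API):

```python
import hashlib

def hash_file(path, chunk_size=8192):
    """Return the SHA-256 hex digest of a file, read in chunks
    so large datasets don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. pass metadata={"sha256": hash_file("data/train.csv"), ...}
# when creating the wandb.Artifact above
```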

4.2 λͺ¨λΈ μ•„ν‹°νŒ©νŠΈ

"""
λͺ¨λΈ μ•„ν‹°νŒ©νŠΈ 관리
"""

import wandb
import torch

wandb.init(project="model-artifacts")

# ν•™μŠ΅ ν›„...

# λͺ¨λΈ μ•„ν‹°νŒ©νŠΈ 생성
model_artifact = wandb.Artifact(
    name="churn-model",
    type="model",
    description="Customer churn prediction model",
    metadata={
        "accuracy": 0.95,
        "framework": "pytorch",
        "architecture": "MLP"
    }
)

# λͺ¨λΈ 파일 μ €μž₯ 및 μΆ”κ°€
torch.save(model.state_dict(), "model.pth")
model_artifact.add_file("model.pth")

# μ„€μ • νŒŒμΌλ„ ν•¨κ»˜
model_artifact.add_file("config.yaml")

# μ—…λ‘œλ“œ
wandb.log_artifact(model_artifact)

# λͺ¨λΈμ„ νŠΉμ • λ³„μΉ­μœΌλ‘œ μ—°κ²°
wandb.run.link_artifact(model_artifact, "model-registry/churn-model", aliases=["latest", "production"])

wandb.finish()

4.3 Using Artifacts

"""
Downloading and using artifacts
"""

import os
import pandas as pd
import wandb

wandb.init(project="using-artifacts")

# Download the artifact
artifact = wandb.use_artifact("mnist-dataset:latest")  # or :v0, :v1, etc.
artifact_dir = artifact.download()

print(f"Downloaded to: {artifact_dir}")

# Access a file from the downloaded artifact
df = pd.read_csv(os.path.join(artifact_dir, "train.csv"))

# Dependency tracking (recording that this run used this artifact)
# is handled automatically by use_artifact()

wandb.finish()

4.4 Artifact Lineage

"""
Tracking artifact lineage
"""

import wandb

# Data β†’ training β†’ model lineage
wandb.init(project="lineage-demo")

# 1. Input artifact (dataset)
dataset = wandb.use_artifact("processed-data:latest")

# 2. Run the training
# ...

# 3. Output artifact (model)
model_artifact = wandb.Artifact("trained-model", type="model")
model_artifact.add_file("model.pth")
wandb.log_artifact(model_artifact)

# The full lineage graph is visible in the W&B UI:
# dataset β†’ (training run) β†’ model

wandb.finish()

5. Comparison with MLflow

5.1 Feature Comparison

"""
MLflow vs. W&B comparison
"""

comparison = {
    "Experiment tracking": {
        "MLflow": "Open source, self-hosted",
        "W&B": "SaaS-based, free tier available"
    },
    "Visualization": {
        "MLflow": "Basic visualization",
        "W&B": "Rich visualizations, real-time updates"
    },
    "Collaboration": {
        "MLflow": "Limited",
        "W&B": "Team features, shareable reports"
    },
    "Hyperparameter tuning": {
        "MLflow": "Requires external tools (e.g., Optuna)",
        "W&B": "Built-in Sweeps"
    },
    "Model registry": {
        "MLflow": "Full-featured",
        "W&B": "Model Registry (added more recently)"
    },
    "Deployment": {
        "MLflow": "MLflow Serving",
        "W&B": "No direct support (integrates with other tools)"
    },
    "Cost": {
        "MLflow": "Free (infrastructure costs only)",
        "W&B": "Free tier + paid plans"
    }
}

5.2 Using Them Together

"""
Using MLflow and W&B simultaneously
"""

import mlflow
import wandb

# Initialize both platforms
wandb.init(project="dual-tracking")
mlflow.set_experiment("dual-tracking")

with mlflow.start_run():
    # Shared settings
    params = {"lr": 0.001, "epochs": 100}

    # Log parameters to both
    mlflow.log_params(params)
    wandb.config.update(params)

    # Training loop (train_one_epoch and evaluate are assumed to be defined)
    for epoch in range(params["epochs"]):
        loss = train_one_epoch()
        accuracy = evaluate()

        # Log metrics to both
        mlflow.log_metrics({"loss": loss, "accuracy": accuracy}, step=epoch)
        wandb.log({"loss": loss, "accuracy": accuracy, "epoch": epoch})

    # Save the model (model is assumed to be defined)
    mlflow.sklearn.log_model(model, "model")
    wandb.save("model.pkl")

wandb.finish()

6. Advanced Features

6.1 Team Collaboration

"""
Team project setup
"""

import wandb

# Log to a team project
wandb.init(
    entity="team-name",           # team name
    project="shared-project",     # project name
    group="experiment-group",     # experiment group (bundles related runs)
    job_type="training"           # job type
)

6.2 Creating Reports

"""
W&B Reports API
"""

import wandb

# Reports are usually created in the UI, but the API can query runs too
api = wandb.Api()

# List all runs in a project
runs = api.runs("username/project")

for run in runs:
    print(f"Run: {run.name}")
    print(f"  Config: {run.config}")
    print(f"  Summary: {run.summary}")
    print(f"  History: {run.history().shape}")
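Building on the query above, a small helper for picking the best run by a summary metric. This is plain Python over run-like objects (anything with a dict-like .summary), not a wandb API call:

```python
def best_run(runs, metric, maximize=True):
    """Return the run with the best value of `metric` in its summary.

    Runs missing the metric are skipped; returns None if no run has it.
    """
    scored = [r for r in runs if metric in r.summary]
    if not scored:
        return None
    key = lambda r: r.summary[metric]
    return max(scored, key=key) if maximize else min(scored, key=key)

# e.g. best = best_run(api.runs("username/project"), "val_accuracy")
```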

6.3 Setting Up Alerts

"""
W&B Alerts
"""

import wandb

wandb.init(project="alerting-demo")

# Trigger alerts during training (train_and_evaluate is assumed to be defined)
for epoch in range(100):
    accuracy = train_and_evaluate()

    if accuracy > 0.95:
        wandb.alert(
            title="High Accuracy Achieved!",
            text=f"Model achieved {accuracy:.2%} accuracy at epoch {epoch}",
            level=wandb.AlertLevel.INFO
        )

    if accuracy < 0.5:
        wandb.alert(
            title="Training Issue",
            text=f"Accuracy dropped to {accuracy:.2%}",
            level=wandb.AlertLevel.WARN
        )

    wandb.log({"accuracy": accuracy, "epoch": epoch})

wandb.finish()

Exercises

Exercise 1: Basic Experiment Tracking

Train a CNN on the MNIST dataset and track the experiment with W&B.

Exercise 2: Running Sweeps

Run a Bayesian-optimization sweep over at least three hyperparameters.

Exercise 3: Artifacts

Store a dataset and a model as artifacts and inspect their lineage.


Summary

Feature                  W&B                       MLflow
Experiment tracking      wandb.log()               mlflow.log_metrics()
Hyperparameter tuning    Sweeps                    External tools
Data/model versioning    Artifacts                 Model Registry
Visualization            Rich dashboards           Basic UI
Collaboration            Teams, reports            Limited
Hosting                  SaaS / self-hosted        Self-hosted
