Machine Learning Implementation with Python - A Complete Tutorial from Basics to Advanced

Machine Learning (ML) is a branch of artificial intelligence (AI) focused on building systems that can learn from data, identify patterns, and make decisions with minimal human intervention.

Python is the number-one language for Machine Learning thanks to its simplicity and outstanding library support, especially Scikit-Learn.

In this tutorial, we will cover:

  • Core Machine Learning concepts and terminology
  • The end-to-end ML workflow
  • Classification algorithms: KNN, Decision Tree, Random Forest, SVM, Logistic Regression
  • Model evaluation: accuracy, precision, recall, F1, confusion matrix, cross-validation
  • A regression case study with the California Housing dataset
  • Overfitting, hyperparameter tuning, model persistence, and pipelines

Basic Concepts of Machine Learning

Before writing any code, it is important to understand the following terminology and core concepts:

Key Terminology

Term           Description
Dataset        The collection of data used to train and test the model
Features (X)   The input variables used to make predictions
Target (y)     What we want to predict (the output)
Training Set   Data used to train the model (typically 70-80%)
Test Set       Data used to measure model performance (20-30%)
Model          The algorithm after it has been trained on data
Prediction     The output produced by the model
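
To see where each term lives in actual Scikit-Learn code, here is a minimal sketch previewing the Iris workflow used throughout this tutorial:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()          # Dataset: 150 labeled flower measurements
X = iris.data               # Features (X): 4 input variables per sample
y = iris.target             # Target (y): the species we want to predict

# Training set (80%) and test set (20%)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = KNeighborsClassifier().fit(X_train, y_train)  # Model: a trained algorithm
prediction = model.predict(X_test)                    # Prediction: the model's output
print(prediction[:5])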

Types of Machine Learning

Machine Learning
├── Supervised Learning (labeled data)
│   ├── Classification (output: categories)
│   │   └── Examples: spam detection, disease diagnosis
│   └── Regression (output: continuous numbers)
│       └── Examples: house price prediction, stock forecasting
│
├── Unsupervised Learning (unlabeled data)
│   ├── Clustering
│   │   └── Example: customer segmentation
│   └── Dimensionality Reduction
│       └── Example: PCA for visualization
│
└── Reinforcement Learning
    └── Examples: game AI, robot navigation
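
This tutorial concentrates on the supervised branch, but to give the unsupervised branch at least one concrete example, here is a minimal clustering sketch with KMeans on the Iris measurements. Note that we never pass the labels; the three clusters are discovered from the data alone.

from sklearn.datasets import load_iris
from sklearn.cluster import KMeans

# Unsupervised: fit on X only, never on y
X = load_iris().data
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

print(f"Discovered cluster labels: {sorted(set(labels))}")
print(f"Cluster sizes: {[list(labels).count(c) for c in sorted(set(labels))]}")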

The Machine Learning Workflow

A sound ML process follows these steps:

  1. Problem Definition - Define the problem clearly
  2. Data Collection - Gather relevant data
  3. Data Exploration (EDA) - Understand the data's distributions and patterns
  4. Data Preprocessing - Clean and transform the data
  5. Feature Engineering - Create new, informative features
  6. Model Selection - Choose a suitable algorithm
  7. Training - Train the model on the training data
  8. Evaluation - Measure performance on the test data
  9. Hyperparameter Tuning - Optimize the model's parameters
  10. Deployment - Ship the model to production

Installing the Libraries

Make sure the required libraries are installed:

pip install scikit-learn pandas numpy matplotlib seaborn

Verify the installation:

import sklearn
import pandas as pd
import numpy as np

print(f"Scikit-Learn version: {sklearn.__version__}")
print(f"Pandas version: {pd.__version__}")
print(f"NumPy version: {np.__version__}")

Case Study 1: Iris Flower Classification

Let's start with the classic Iris dataset to walk through the ML workflow.

1. Load and Explore the Data

from sklearn.datasets import load_iris
import pandas as pd
import numpy as np

# Load the dataset
iris = load_iris()
X = iris.data
y = iris.target

# Build a DataFrame for exploration
df = pd.DataFrame(X, columns=iris.feature_names)
df['species'] = iris.target_names[y]

print("=== INFO DATASET ===")
print(f"Jumlah sampel: {len(df)}")
print(f"Jumlah fitur: {len(iris.feature_names)}")
print(f"Nama fitur: {iris.feature_names}")
print(f"Kelas target: {iris.target_names}")

print("\n=== 5 DATA PERTAMA ===")
print(df.head())

print("\n=== STATISTIK DESKRIPTIF ===")
print(df.describe())

2. Split the Data (Train-Test Split)

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import numpy as np

# Load data
iris = load_iris()
X = iris.data
y = iris.target

# Split the data: 80% training, 20% testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, 
    test_size=0.2, 
    random_state=42,
    stratify=y  # Keep the class distribution balanced across the splits
)

print(f"Training set: {len(X_train)} sampel")
print(f"Test set: {len(X_test)} sampel")

# Cek distribusi kelas
print(f"\nDistribusi Training: {np.bincount(y_train)}")
print(f"Distribusi Test: {np.bincount(y_test)}")

3. Feature Scaling (Preprocessing)

Many ML algorithms perform better when the data is scaled:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load and split the data
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize the features (mean=0, std=1)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

print("Sebelum scaling:")
print(f"Mean: {X_train.mean(axis=0).round(2)}")
print(f"Std: {X_train.std(axis=0).round(2)}")

print("\nSetelah scaling:")
print(f"Mean: {X_train_scaled.mean(axis=0).round(2)}")
print(f"Std: {X_train_scaled.std(axis=0).round(2)}")

Classification Algorithms

K-Nearest Neighbors (KNN)

KNN is a simple algorithm that classifies a sample based on its nearest "neighbors" in feature space.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Setup data
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Create and train the KNN model
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train_scaled, y_train)

# Predict
y_pred_knn = knn.predict(X_test_scaled)

# Evaluate
accuracy = accuracy_score(y_test, y_pred_knn)
print(f"KNN accuracy: {accuracy * 100:.2f}%")

Decision Tree

A Decision Tree makes predictions by following if-else rules learned from the features.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Setup data
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create and train the Decision Tree
dt = DecisionTreeClassifier(max_depth=3, random_state=42)
dt.fit(X_train, y_train)  # Decision Trees do not need feature scaling

# Predict and evaluate
y_pred_dt = dt.predict(X_test)
accuracy = accuracy_score(y_test, y_pred_dt)
print(f"Decision Tree accuracy: {accuracy * 100:.2f}%")

# Feature importance
print("\nFeature Importance:")
for name, importance in zip(iris.feature_names, dt.feature_importances_):
    print(f"  {name}: {importance:.4f}")

Random Forest

Random Forest is an ensemble of many Decision Trees, which gives more stable results.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Setup data
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a Random Forest with 100 trees
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)

# Predict and evaluate
y_pred_rf = rf.predict(X_test)
accuracy = accuracy_score(y_test, y_pred_rf)
print(f"Random Forest accuracy: {accuracy * 100:.2f}%")

# Feature importance
print("\nFeature Importance (Random Forest):")
for name, importance in zip(iris.feature_names, rf.feature_importances_):
    print(f"  {name}: {importance:.4f}")

Support Vector Machine (SVM)

SVM looks for the optimal hyperplane that separates the classes.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Setup data
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Create an SVM with an RBF kernel
svm = SVC(kernel='rbf', C=1.0, random_state=42)
svm.fit(X_train_scaled, y_train)

# Predict and evaluate
y_pred_svm = svm.predict(X_test_scaled)
accuracy = accuracy_score(y_test, y_pred_svm)
print(f"SVM accuracy: {accuracy * 100:.2f}%")

Comparing All the Models

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Setup data
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Define all the models
models = {
    'KNN': KNeighborsClassifier(n_neighbors=5),
    'Decision Tree': DecisionTreeClassifier(max_depth=3, random_state=42),
    'Random Forest': RandomForestClassifier(n_estimators=100, random_state=42),
    'SVM': SVC(kernel='rbf', random_state=42),
    'Logistic Regression': LogisticRegression(max_iter=200, random_state=42)
}

print("=== PERBANDINGAN MODEL ===\n")
results = {}
for name, model in models.items():
    # Use scaled data for KNN, SVM, and Logistic Regression
    if name in ['KNN', 'SVM', 'Logistic Regression']:
        model.fit(X_train_scaled, y_train)
        y_pred = model.predict(X_test_scaled)
    else:
        model.fit(X_train, y_train)
        y_pred = model.predict(X_test)
    
    accuracy = accuracy_score(y_test, y_pred)
    results[name] = accuracy
    print(f"{name:20s}: {accuracy * 100:.2f}%")

print(f"\n Model Terbaik: {max(results, key=results.get)}")

More Thorough Model Evaluation

Accuracy alone is not enough; we need more comprehensive metrics:

Classification Report

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Setup data
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train model
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)

# Classification Report
print("=== CLASSIFICATION REPORT ===\n")
print(classification_report(y_test, y_pred, target_names=iris.target_names))

Confusion Matrix

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
import numpy as np

# Setup data
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train model
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)

# Compute the confusion matrix
cm = confusion_matrix(y_test, y_pred)

print("=== CONFUSION MATRIX ===\n")
print("Rows = actual class, columns = predicted class")
print(f"Classes: {list(iris.target_names)}")
print(cm)

Understanding the Metrics

Metric     Meaning                                                     When does it matter?
Precision  Of those predicted positive, how many are truly positive?  When false positives are costly (spam filters)
Recall     Of the truly positive samples, how many were detected?     When false negatives are costly (cancer detection)
F1-Score   The harmonic mean of Precision and Recall                  When you need a balance of the two
Accuracy   Correct predictions / total predictions                    When the classes are balanced
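
As a worked example, here is a sketch that computes these metrics by hand for a small, made-up binary problem and checks the results against Scikit-Learn:

from sklearn.metrics import precision_score, recall_score, f1_score

# Hypothetical binary labels: 1 = positive, 0 = negative
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
# From these: TP = 3, FN = 1, FP = 1, TN = 5

precision = 3 / (3 + 1)  # TP / (TP + FP) = 0.75
recall    = 3 / (3 + 1)  # TP / (TP + FN) = 0.75
f1 = 2 * precision * recall / (precision + recall)

print(f"By hand: precision={precision}, recall={recall}, f1={f1:.2f}")
print(f"sklearn: precision={precision_score(y_true, y_pred)}, "
      f"recall={recall_score(y_true, y_pred)}, f1={f1_score(y_true, y_pred):.2f}")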

Cross-Validation

For a more reliable estimate of model performance, use Cross-Validation:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

# Load data
iris = load_iris()
X, y = iris.data, iris.target

# Model
rf = RandomForestClassifier(n_estimators=100, random_state=42)

# 5-Fold Cross Validation
cv_scores = cross_val_score(rf, X, y, cv=5, scoring='accuracy')

print("=== 5-FOLD CROSS VALIDATION ===\n")
for i, score in enumerate(cv_scores, 1):
    print(f"Fold {i}: {score * 100:.2f}%")

print(f"\nRata-rata: {cv_scores.mean() * 100:.2f}%")
print(f"Std Dev: {cv_scores.std() * 100:.2f}%")

Case Study 2: Regression (Price Prediction)

In regression problems, the target is a continuous number rather than a category.

from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
import numpy as np

# Load the California Housing dataset
housing = fetch_california_housing()
X = housing.data
y = housing.target

print("=== DATASET INFO ===")
print(f"Fitur: {housing.feature_names}")
print(f"Target: Median house value (ratusan ribu $)")
print(f"Sampel: {len(X)}")

# Split data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Scaling
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Train model
rf_reg = RandomForestRegressor(n_estimators=100, random_state=42)
rf_reg.fit(X_train_scaled, y_train)

# Predict
y_pred = rf_reg.predict(X_test_scaled)

# Evaluate
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
r2 = r2_score(y_test, y_pred)

print("\n=== REGRESSION MODEL EVALUATION ===")
print(f"RMSE: ${rmse * 100000:.2f} (typical prediction error in dollars)")
print(f"R² Score: {r2:.4f} (closer to 1 is better)")

Overfitting vs Underfitting

One of the biggest challenges in ML is striking the balance between overfitting and underfitting.

Underfitting           Just Right            Overfitting
    │                      │                      │
    ▼                      ▼                      ▼
Model too              Optimal model,        Model too
simple                 generalizes well      complex
    │                      │                      │
High bias,             Low bias,             Low bias,
low variance           low variance          high variance
    │                      │                      │
Fixes:                 ✓ Our target          Fixes:
- Add features                               - Remove features
- More complex model                         - Regularization
- Less regularization                        - More data

Detecting Overfitting

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Setup data
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Compare trees of varying depth
depths = [1, 2, 3, 5, 10, None]

print("=== OVERFITTING ANALYSIS ===\n")
print("Depth      Train Acc   CV Acc      Status")
print("-" * 50)

for depth in depths:
    dt = DecisionTreeClassifier(max_depth=depth, random_state=42)
    
    # Training accuracy
    dt.fit(X_train, y_train)
    train_acc = dt.score(X_train, y_train)
    
    # Cross-validation accuracy
    cv_scores = cross_val_score(dt, X, y, cv=5)
    cv_acc = cv_scores.mean()
    
    gap = train_acc - cv_acc
    
    # Label the result based on the train-vs-CV gap
    if gap > 0.1:
        status = "Overfitting"
    elif cv_acc < 0.7:
        status = "Underfitting"
    else:
        status = "Good"

    depth_str = str(depth)  # str(None) already prints as "None"
    print(f"{depth_str:10} {train_acc*100:6.2f}%     {cv_acc*100:6.2f}%     {status}")

Hyperparameter Tuning

Every model has hyperparameters that can be optimized:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier

# Setup data
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define the parameter grid (simplified for this demo)
param_grid = {
    'n_estimators': [50, 100],
    'max_depth': [3, 5, None]
}

# Grid search with cross-validation
rf = RandomForestClassifier(random_state=42)
grid_search = GridSearchCV(rf, param_grid, cv=3, scoring='accuracy')

print("Searching for the best hyperparameters...")
grid_search.fit(X_train, y_train)

print("\n=== GRID SEARCH RESULTS ===")
print(f"Best Parameters: {grid_search.best_params_}")
print(f"Best CV Score: {grid_search.best_score_ * 100:.2f}%")

# Evaluate the best model on the test set
best_model = grid_search.best_estimator_
test_acc = best_model.score(X_test, y_test)
print(f"Test Accuracy: {test_acc * 100:.2f}%")

Saving and Loading Models

Once a model is trained, save it so it can be reused later:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
import pickle

# Set up and train the model
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)

# Serialize the model to bytes (simulated save)
model_bytes = pickle.dumps(rf)
print(f"Serialized model size: {len(model_bytes)} bytes")

# Deserialize the model (simulated load)
loaded_model = pickle.loads(model_bytes)

# Use the loaded model
y_pred = loaded_model.predict(X_test)
accuracy = (y_pred == y_test).mean()
print(f"Accuracy of the loaded model: {accuracy * 100:.2f}%")

Note: In practice, you would save the model to a file:

# Save to a file (run this in a local Python environment)
# with open('model.pkl', 'wb') as f:
#     pickle.dump(model, f)

# Or use joblib (more efficient for large arrays)
# from joblib import dump, load
# dump(model, 'model.joblib')
# loaded = load('model.joblib')
print("Example save/load code is in the comments above")

Pipeline: A Tidy Workflow

Use a Pipeline to combine preprocessing and the model:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Setup data
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Build the pipeline
pipeline = Pipeline([
    ('scaler', StandardScaler()),
    ('svm', SVC(kernel='rbf', C=1.0))
])

# Fit the pipeline (scaling + training in one step)
pipeline.fit(X_train, y_train)

# Predict (scaling is applied automatically)
y_pred = pipeline.predict(X_test)
accuracy = (y_pred == y_test).mean()
print(f"Pipeline Accuracy: {accuracy * 100:.2f}%")

# Cross-validation with the pipeline
cv_scores = cross_val_score(pipeline, X, y, cv=5)
print(f"CV Accuracy: {cv_scores.mean() * 100:.2f}% (+/-{cv_scores.std() * 100:.2f}%)")

Tips and Best Practices

Do's ✅

  1. Always split your data before preprocessing

    # CORRECT
    X_train, X_test, y_train, y_test = train_test_split(X, y)
    scaler.fit(X_train)  # Fit on the training data only
    X_train_scaled = scaler.transform(X_train)
    X_test_scaled = scaler.transform(X_test)
    
  2. Use cross-validation for reliable evaluation

  3. Check the class distribution when the dataset may be imbalanced

  4. Save the scaler together with the model for deployment (see the sketch below)
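
A convenient way to follow point 4 is to wrap the scaler and the model in a single Pipeline and serialize that one object, so they can never drift apart. A minimal sketch:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
import pickle

X, y = load_iris(return_X_y=True)

# Scaler and model travel together as one object
pipe = Pipeline([('scaler', StandardScaler()),
                 ('clf', LogisticRegression(max_iter=200))])
pipe.fit(X, y)

blob = pickle.dumps(pipe)        # one blob holds preprocessing AND model
restored = pickle.loads(blob)
print(restored.predict(X[:3]))   # scaling happens automatically at predict time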

Don'ts ❌

  1. Never fit the scaler on the entire dataset (data leakage)

    # WRONG!!! Data leakage
    scaler.fit(X)  # Don't do this
    
  2. Don't rely on accuracy alone for imbalanced data

  3. Don't neglect feature engineering


FAQ: Common Machine Learning Questions

What is the difference between AI, ML, and Deep Learning?
  • AI (Artificial Intelligence): the broad field covering every technique for making machines "intelligent"
  • Machine Learning: the subset of AI focused on algorithms that learn from data
  • Deep Learning: the subset of ML that uses neural networks with many layers
AI ⊃ Machine Learning ⊃ Deep Learning
When should you use supervised vs unsupervised learning?
  • Supervised: when you have labeled data (known answers)

    • Example: emails already marked as spam / not spam
  • Unsupervised: when you only have unlabeled data

    • Example: grouping customers by their behavior
How much data does ML need?

There is no exact number, but common rules of thumb are:

  • Minimum: about 10x the number of features per class (for Iris's 4 features, roughly 40 samples per class)
  • Ideal: the more data the better (typically 1000+ samples per class)
  • Deep Learning: needs far more (thousands to millions of samples)

What matters most is data quality, not just quantity.

Which model is the best?

There is no "best model" for every case. The choice depends on:

  • Dataset size
  • Number of features
  • Whether interpretability is required
  • Computational resources

Tip: Start with a simple model (Logistic Regression, Decision Tree), then increase complexity only if needed.

How do I deal with overfitting?
  1. Add data - more data means better generalization
  2. Feature selection - drop irrelevant features
  3. Regularization - add a penalty for model complexity (see the sketch after this list)
  4. Cross-validation - evaluate on different data splits
  5. Early stopping - stop training before the model overfits
  6. Ensemble methods - combine several models
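
To make point 3 (regularization) concrete, here is a small sketch using Logistic Regression, whose C parameter is the inverse of the regularization strength, so a smaller C means a stronger penalty on large coefficients:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Smaller C = stronger regularization = simpler model
for C in [0.01, 1.0, 100.0]:
    clf = LogisticRegression(C=C, max_iter=500)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"C={C:>6}: CV accuracy = {scores.mean() * 100:.2f}%")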


Conclusion

Congratulations! You have now covered the foundations of Machine Learning with Python. Here is a summary of what we learned:

Topic           What We Covered
Concepts        Supervised vs Unsupervised, Classification vs Regression
Workflow        Problem Definition → Data → Model → Evaluation → Deployment
Algorithms      KNN, Decision Tree, Random Forest, SVM, Logistic Regression
Evaluation      Accuracy, Precision, Recall, F1, Confusion Matrix
Validation      Train-Test Split, Cross-Validation
Preprocessing   Feature Scaling, Pipeline
Best Practices  Avoiding overfitting and data leakage

Key takeaway:

Scikit-Learn is very powerful for traditional ML algorithms. For more complex problems such as computer vision or NLP, you will need Deep Learning, which we will cover in the next tutorial.
