ML Cheatsheet
Basics
Линейная регрессия
Введение
Простая регрессия
Making predictions
Cost function
Gradient descent
Training
Model evaluation
Summary
Multivariable regression
Growing complexity
Normalization
Making predictions
Initialize weights
Cost function
Gradient descent
Simplifying with matrices
Bias term
Model evaluation
Gradient Descent
Introduction
Learning rate
Cost function
Step-by-step
Logistic Regression
Introduction
Comparison to linear regression
Types of logistic regression
Binary logistic regression
Sigmoid activation
Decision boundary
Making predictions
Cost function
Gradient descent
Mapping probabilities to classes
Training
Model evaluation
Multiclass logistic regression
Procedure
Softmax activation
Scipy example
Glossary
Required math
Calculus
Introduction
Derivatives
Geometric definition
Taking the derivative
Step-by-step
Machine learning use cases
Chain rule
How it works
Step-by-step
Multiple functions
Gradients
Partial derivatives
Step-by-step
Directional derivatives
Useful properties
Integrals
Computing integrals
Applications of integration
Computing probabilities
Expected value
Variance
Linear Algebra
Vectors
Notation
Vectors in geometry
Scalar operations
Elementwise operations
Dot product
Hadamard product
Vector fields
Matrices
Dimensions
Scalar operations
Elementwise operations
Hadamard product
Matrix transpose
Matrix multiplication
Test yourself
Numpy
Dot product
Broadcasting
Probability (TODO)
Statistics (TODO)
Notation
Algebra
Calculus
Linear algebra
Probability
Set theory
Statistics
Neural Networks
Basics
Neural network
Neuron
Synapses
Weights
Bias
Layers
Weighted input
Activation functions
Loss functions
Optimization algorithms
Forward propagation
Simple network
Forward pass step-by-step
Code
A more complex network
Architecture
Weight initialization
Bias Terms
Working with Matrices
Dynamic Resizing
Refactoring Our Code
Final Result
Backpropagation
Chain rule refresher
Applying the chain rule
Saving work with memoization
Code example
Activation Functions
ELU
ReLU
LeakyReLU
Sigmoid
Tanh
Softmax
Layers
BatchNorm
Convolution
Dropout
Linear
LSTM
Pooling
RNN
Loss Functions
Cross-Entropy
Hinge
Huber
Kullback-Leibler
MAE (L1)
MSE (L2)
Optimizers
Adadelta
Adagrad
Adam
Conjugate Gradients
BFGS
Momentum
Nesterov Momentum
Newton’s Method
RMSProp
SGD
Regularization
Data Augmentation
Dropout
Early Stopping
Ensembling
Injecting Noise
L1 Regularization
L2 Regularization
Architectures
Autoencoder
CNN
GAN
MLP
RNN
VAE
Algorithms (TODO)
Classification
Bayesian
Boosting
Decision Trees
K-Nearest Neighbor
Logistic Regression
Random Forests
Support Vector Machines
Clustering
Centroid
Density
Distribution
Hierarchical
K-Means
Mean shift
Regression
Lasso
Linear
Ordinary Least Squares
Polynomial
Ridge
Splines
Stepwise
Reinforcement Learning
External resources
Datasets
Libraries
Papers
Other
Contributing
How to contribute