AnkiCollab
Notes in Week 3 - Classification
To subscribe, use this key:
alpha-pennsylvania-lemon-six-seventeen-black
All notes below share the same Status (Published) and Last Update (11/26/2024); only the Fields text of each note is listed.
{{c1::Binary classification}} is a task where the output can only be one of two possible values, typically 0 or 1.
The {{c1::Logistic/Sigmoid function}} outputs values between 0 and 1, which represent probabilities in binary classification.
The formula for the {{c2::logistic/sigmoid}} function is {{c1::\( g(z) = \frac{1}{1 + e^{-z}} \)}}, where \( z \) is the weighted sum of the inputs.
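A minimal sketch of this formula in NumPy (the function name and the example weights are illustrative, not from the deck):

```python
import numpy as np

def sigmoid(z):
    """Logistic/sigmoid function: g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

# z is the weighted sum of the inputs plus a bias: z = w . x + b
w = np.array([0.5, -1.2])   # illustrative weights
b = 0.1                     # illustrative bias
x = np.array([2.0, 1.0])    # one example with two features
z = np.dot(w, x) + b        # 0.5*2.0 + (-1.2)*1.0 + 0.1 = -0.1
print(sigmoid(z))           # ~0.475, strictly inside (0, 1)
```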
In logistic regression, the decision boundary occurs where \( f_{w,b}(x) \) equals {{c1::0.5}}.
{{c1::Probabilistic interpretation}} in logistic regression means that the output can be interpreted as the probability of a particular class.
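Concretely, the probabilistic reading is \( f_{w,b}(x) = P(y = 1 \mid x; w, b) \): an output of, say, 0.7 means an estimated 70% probability that \( y = 1 \), leaving \( 1 - 0.7 = 0.3 \) for \( y = 0 \).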
In logistic regression, the decision is \( \hat{y} = 1 \) if {{c1::\( f_{w,b}(x) > 0.5 \)}} and \( \hat{y} = 0 \) if {{c1::\( f_{w,b}(x) \leq 0.5 \)}}.
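A sketch of this thresholding rule in code (`predict` and its arguments are illustrative names):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x, threshold=0.5):
    """Return y_hat = 1 if f_wb(x) > threshold, else 0."""
    f_wb = sigmoid(np.dot(w, x) + b)
    return 1 if f_wb > threshold else 0

print(predict(np.array([1.0, 1.0]), -3.0, np.array([2.5, 1.0])))  # z = 0.5, f ~ 0.62 -> 1
```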
The {{c2::sigmoid}} function in logistic regression has an 'S' shape and is used to map any real-valued number to the range {{c1::(0,1)}}.
The {{c1::decision boundary}} separates the space into regions where the predicted output is 0 or 1.
A common issue when linear regression is used for classification tasks is {{c1::misclassification}}, because outliers can shift the fitted line and hence the decision boundary.
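A small demonstration of this effect on an invented 1-D toy dataset: a single extreme positive example tilts the least-squares line enough to move its 0.5-crossing past a correctly labeled point.

```python
import numpy as np

# Toy 1-D data: feature (e.g. tumor size) vs. label 0/1 (values illustrative).
x = np.array([1.0, 2.0, 3.0, 6.0, 7.0, 8.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

def linear_boundary(x, y):
    """Fit y = m*x + c by least squares; boundary is where m*x + c = 0.5."""
    m, c = np.polyfit(x, y, 1)
    return (0.5 - c) / m

print(linear_boundary(x, y))    # 4.5: cleanly separates the two classes

# One extreme positive example tilts the line; the 0.5-crossing moves
# past x = 6, so that positive training example is now misclassified.
x2, y2 = np.append(x, 50.0), np.append(y, 1.0)
print(linear_boundary(x2, y2))  # ~6.2
```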
The cost function for logistic regression is based on the {{c1::log loss function}}, which penalizes confident wrong predictions far more heavily than correct ones.
The cost function in logistic regression is minimized using the {{c1::gradient descent}} algorithm.
In gradient descent, the weights \( w \) and bias \( b \) are updated iteratively using the {{c1::learning rate}} \( \alpha \).
Gradient descent updates the parameters \( w \) and \( b \) until the {{c1::cost function}} reaches a minimum.
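Putting the last three cards together, a minimal batch-gradient-descent sketch for logistic regression (function names and the toy data are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, alpha=0.1, iters=10_000):
    """Batch gradient descent for unregularized logistic regression.

    X: (m, n) feature matrix; y: (m,) labels in {0, 1}; alpha: learning rate.
    """
    m, n = X.shape
    w, b = np.zeros(n), 0.0
    for _ in range(iters):
        f = sigmoid(X @ w + b)          # f_wb(x) for every training example
        err = f - y
        w -= alpha * (X.T @ err) / m    # partial derivative of cost J w.r.t. w
        b -= alpha * err.mean()         # partial derivative of cost J w.r.t. b
    return w, b

# Tiny separable toy set: the learned boundary -b/w approaches x = 2,
# the midpoint between the two classes, as iters grows.
X = np.array([[0.5], [1.5], [2.5], [3.5]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = gradient_descent(X, y)
print(-b / w[0])
```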
In logistic regression, the {{c1::logistic loss function}} is used to measure the difference between predicted and actual labels.
In regularized logistic regression, the cost function includes a {{c1::penalty term}} to prevent overfitting.
Regularization reduces overfitting by adding a {{c1::penalty term}} that shrinks the model parameters.
The two main types of regularization are {{c1::L1 regularization}} (lasso) and {{c2::L2 regularization}} (ridge).
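For comparison, the two penalty terms side by side; scaling conventions vary between texts, and the \( \frac{\lambda}{2m} \) form below matches the one used later in this deck (all values are illustrative):

```python
import numpy as np

w = np.array([0.8, -0.3, 1.5])   # illustrative weight vector
lam, m = 0.1, 100                # regularization strength and training-set size

l1_penalty = (lam / m) * np.sum(np.abs(w))      # lasso: sum of |w_j|, encourages sparsity
l2_penalty = (lam / (2 * m)) * np.sum(w ** 2)   # ridge: sum of w_j^2, shrinks weights smoothly
print(l1_penalty, l2_penalty)
```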
{{c1::Overfitting}} occurs when a model performs well on training data but poorly on new, unseen data.
To address overfitting, you can either gather {{c1::more data}} or apply {{c2::regularization}}.
In gradient descent, the term \( \alpha \) is called the {{c1::learning rate}}, and it controls the size of the steps during the parameter updates.
In regularized logistic regression, the cost function adds the penalty {{c2::\( \frac{\lambda}{2m} \sum_{j=1}^n w_j^2 \)}} for {{c1::L2 regularization}}.
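A sketch of this regularized cost in code (the function name and toy data are illustrative; note that the bias \( b \) is conventionally left out of the penalty):

```python
import numpy as np

def regularized_cost(X, y, w, b, lam):
    """Log-loss cost plus the L2 penalty (lambda / (2m)) * sum(w_j^2)."""
    m = X.shape[0]
    f = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    log_loss = -np.mean(y * np.log(f) + (1 - y) * np.log(1 - f))
    penalty = (lam / (2 * m)) * np.sum(w ** 2)   # b is not penalized
    return log_loss + penalty

X = np.array([[0.5], [1.5], [2.5]])
y = np.array([0.0, 0.0, 1.0])
print(regularized_cost(X, y, np.array([1.0]), -2.0, lam=0.1))
```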
The logistic loss function for \( y = 1 \) is {{c1::\( L(f_{w,b}(x), y) = -\log(f_{w,b}(x)) \).}}
For \( y = 0 \), the logistic loss function is {{c1::\( L(f_{w,b}(x), y) = -\log(1 - f_{w,b}(x)) \).}}
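Both branches of the loss in one helper (illustrative code, not from the deck):

```python
import numpy as np

def logistic_loss(f, y):
    """-log(f) if y = 1, -log(1 - f) if y = 0, for a prediction f in (0, 1)."""
    return -np.log(f) if y == 1 else -np.log(1.0 - f)

print(logistic_loss(0.9, 1))   # ~0.105: confident and correct -> small loss
print(logistic_loss(0.9, 0))   # ~2.303: confident and wrong -> large loss
```

The second print is what the earlier card means by the log loss penalizing confident wrong predictions far more heavily.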
Gradient descent for regularized cost functions penalizes large values of {{c1::model parameters}}.
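As a sketch of one regularized gradient step: compared with the unregularized update, the gradient for \( w \) gains an extra \( \frac{\lambda}{m} w_j \) term, which is what shrinks large parameters (names are illustrative):

```python
import numpy as np

def regularized_step(w, b, X, y, alpha, lam):
    """One gradient-descent step with the L2 term; note the extra (lam/m) * w."""
    m = X.shape[0]
    f = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w = w - alpha * ((X.T @ (f - y)) / m + (lam / m) * w)   # shrinks each w_j toward 0
    b = b - alpha * np.mean(f - y)                          # b is not regularized
    return w, b
```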
The purpose of regularization in logistic regression is to prevent the model from {{c1::overfitting}} the training data.
In regularized cost functions, the parameter {{c2::\( \lambda \)}} controls the strength of the {{c1::regularization term}}.
In logistic regression, the decision boundary is determined by the function \( z = w \cdot x + b \), where \( z = 0 \) defines the {{c1::boundary}}.
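In two dimensions this works out to a line; a short sketch (the weights are chosen arbitrarily for illustration):

```python
import numpy as np

# z = w . x + b = 0 defines the boundary; in 2-D it is the line
# x2 = -(w1 * x1 + b) / w2 (assuming w2 != 0).
w = np.array([1.0, 1.0])
b = -3.0

x1 = np.linspace(0.0, 3.0, 4)
x2 = -(w[0] * x1 + b) / w[1]
print(list(zip(x1, x2)))   # points on the boundary x1 + x2 = 3
```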
To ensure gradient descent converges, it is important to choose an appropriate {{c1::learning rate}} \( \alpha \).