Week 4 Notes - SVMs and Neural Networks

Status Last Update Fields
Published 11/26/2024 {{c1::Support Vector Machines (SVM)}} are max-margin models used for classification tasks in machine learning.
Published 11/26/2024 The cost function for logistic regression is minimized when the model assigns high confidence to {{c1::correct classifications}}.
Published 11/26/2024 In SVMs, the {{c1::decision boundary}} is also known as the hyperplane that separates different classes.
Published 11/26/2024 The SVM model prioritizes a large margin, which is the distance from the hyperplane to the {{c1::nearest data points}}.
Published 11/26/2024 The loss function used in SVMs to penalize misclassifications is called the {{c1::hinge loss}}.
Published 11/26/2024 The SVM cost function includes a regularization term to penalize large {{c1::weight values}}, promoting a wider margin.
Published 11/26/2024 In SVM, a point’s {{c2::distance from the hyperplane}} is defined as {{c1::\( d_i = \frac{y_i (w \cdot x_i)}{\|w\|} \)}}, where \( y_i \) is the true label (+1 or -1), \( w \) is the weight vector, and \( x_i \) is the feature vector.
Published 11/26/2024 A support vector in SVM is a data point that lies {{c1::closest}} to the hyperplane.
Published 11/26/2024 The margin in SVM is {{c1::twice the distance}} to the {{c2::nearest support vector}}, calculated as \( \frac{2}{\|w\|} \).
Published 11/26/2024 In binary classification with SVM, labels are often encoded as {{c1::-1 and 1}}.
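The margin, distance, and hinge-loss cards above can be tied together in a few lines of numpy. The sketch below uses made-up toy data and a hypothetical weight vector w, and assumes the canonical scaling in which the nearest support vectors satisfy \( y_i (w \cdot x_i) = 1 \), so the margin comes out as \( 2 / \|w\| \).

```python
import numpy as np

# Toy linearly separable data with labels in {-1, +1} (illustrative values only)
X = np.array([[2.0, 3.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1, 1, -1, -1])

w = np.array([1.0, 1.0])   # hypothetical weight vector defining the hyperplane w·x = 0
C = 1.0                    # soft-margin trade-off: how strongly misclassification is penalized

# Signed distance of each point to the hyperplane: d_i = y_i (w·x_i) / ||w||
distances = y * (X @ w) / np.linalg.norm(w)

# Hinge loss per point: max(0, 1 - y_i (w·x_i)), zero for confidently correct points
hinge = np.maximum(0.0, 1.0 - y * (X @ w))

# SVM objective: regularization term (penalizing large weights) plus C times total hinge loss
cost = 0.5 * w @ w + C * hinge.sum()

# Geometric margin: twice the distance to the nearest support vector
margin = 2.0 / np.linalg.norm(w)

print(distances, hinge, cost, margin)
```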
Published 11/26/2024 A {{c1::multilayer perceptron (MLP)}} is a type of neural network with multiple layers of neurons.
Published 11/26/2024 The output layer in a neural network represents the final {{c1::prediction}} for the input.
Published 11/26/2024 An activation function introduces {{c1::non-linearity}} to a neural network model, enabling it to learn complex patterns.
Published 11/26/2024 In neural networks, each neuron computes an output by applying an activation function to a {{c1::weighted sum}} of inputs.
Published 11/26/2024 Logistic regression can be seen as a {{c1::single-layer neural network}} due to its structure.
Published 11/26/2024 A multilayer perceptron (MLP) includes one or more {{c1::hidden layers}} between the input and output layers.
Published 11/26/2024 The purpose of an activation function is to introduce {{c1::non-linear}} transformations to the input data.
Published 11/26/2024 In MLPs, a common choice for hidden layer activations is the {{c1::ReLU}} function, which outputs {{c2::zero}} for {{c3::negative inputs}}.
Published 11/26/2024 The {{c1::sigmoid}} activation function is often used in the output layer for binary classification tasks.
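As a quick illustration of the cards on weighted sums and activation functions, here is a minimal numpy sketch of one ReLU hidden layer followed by a sigmoid output unit. The layer sizes, weights, and input vector are arbitrary example values.

```python
import numpy as np

def relu(z):
    # ReLU: zero for negative inputs, identity otherwise
    return np.maximum(0.0, z)

def sigmoid(z):
    # Sigmoid: squashes the weighted sum into (0, 1), useful for binary outputs
    return 1.0 / (1.0 + np.exp(-z))

# A single hidden layer acting on one input vector (sizes chosen arbitrarily)
x = np.array([0.5, -1.2, 3.0])
W = np.random.randn(4, 3)   # 4 hidden neurons, each with 3 input weights
b = np.zeros(4)

z = W @ x + b               # weighted sum of inputs plus bias, per neuron
h = relu(z)                 # non-linear activation of the hidden layer

# Output layer for binary classification: one sigmoid unit
w_out, b_out = np.random.randn(4), 0.0
p = sigmoid(w_out @ h + b_out)   # interpreted as P(positive class)
print(h, p)
```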
Published 11/26/2024 The {{c1::cost function}} in a neural network measures the error between the model’s predictions and the true labels.
Published 11/26/2024 Neural network parameters, such as weights and biases, are adjusted using {{c1::gradient descent}}.
Published 11/26/2024 The process of {{c2::backpropagation}} calculates the {{c1::gradient}} of the cost function with respect to each weight.
Published 11/26/2024 In neural network training, {{c1::mini-batch gradient descent}} is often used to speed up convergence.
Published 11/26/2024 The learning rate \( \alpha \) controls the {{c1::step size}} in gradient descent updates.
Published 11/26/2024 Overfitting in neural networks occurs when the model performs well on training data but poorly on {{c1::unseen data}}.
Published 11/26/2024 In neural networks, the {{c1::loss function}} quantifies how far predictions are from the actual values.
Published 11/26/2024 {{c1::Backpropagation}} uses the chain rule to compute gradients of the loss with respect to each weight in the network.
Published 11/26/2024 An epoch in neural network training is a single pass over the {{c1::entire dataset}}.
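The training-related cards above (cost function, gradient descent, backpropagation, learning rate, epochs) can be seen end to end in a small hand-written example. The sketch below trains a one-hidden-layer MLP on a toy XOR-style dataset with full-batch gradient descent and manual backpropagation; the layer sizes, learning rate, and epoch count are arbitrary, and convergence depends on the random initialization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary dataset (XOR pattern), labels in {0, 1} -- purely illustrative
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

# One hidden layer of 8 ReLU units, one sigmoid output unit (sizes are arbitrary)
W1, b1 = rng.normal(size=(2, 8)) * 0.5, np.zeros(8)
w2, b2 = rng.normal(size=8) * 0.5, 0.0
alpha = 0.5          # learning rate: step size of each gradient descent update

for epoch in range(2000):             # one epoch = one pass over the entire dataset
    # Forward pass
    z1 = X @ W1 + b1
    h = np.maximum(0.0, z1)                          # ReLU hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))         # sigmoid output = P(class 1)

    # Cost function: binary cross-entropy between predictions and true labels
    cost = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

    # Backpropagation: chain rule applied layer by layer, from output to input
    n = len(y)
    dz2 = (p - y) / n                  # gradient at the output pre-activation
    grad_w2 = h.T @ dz2
    grad_b2 = dz2.sum()
    dh = np.outer(dz2, w2)             # push the gradient back through the output weights
    dz1 = dh * (z1 > 0)                # ReLU gradient: 1 where z1 > 0, else 0
    grad_W1 = X.T @ dz1
    grad_b1 = dz1.sum(axis=0)

    # Gradient descent: move each parameter against its gradient
    W1 -= alpha * grad_W1
    b1 -= alpha * grad_b1
    w2 -= alpha * grad_w2
    b2 -= alpha * grad_b2

print("final cost:", cost, "predictions:", np.round(p, 2))
```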
Published 11/26/2024 The purpose of a {{c2::kernel function}} in SVMs is to allow the algorithm to {{c1::learn non-linear boundaries}} in higher-dimensional space.
Published 11/26/2024 An SVM with a linear kernel can be effective if the data is {{c1::linearly separable}} in its original feature space.
Published 11/26/2024 When using backpropagation, weights are updated in the direction that {{c1::reduces the cost function}}.
Published 11/26/2024 In neural networks, initialization of weights can affect the {{c1::convergence rate}} and {{c2::solution quality}}.
Published 11/26/2024 In binary classification, the output of a neural network can be interpreted as the {{c1::probability of the positive class}}.
Published 11/26/2024 The goal of using a soft margin in SVMs is to allow for some {{c1::misclassification}} while maximizing the margin.
Published 11/26/2024 The {{c1::activation function}} used in the output layer of a neural network often depends on the task, such as softmax for multi-class classification or sigmoid for binary classification.
Published 11/26/2024 The {{c1::regularization term}} in the cost function penalizes large weights, reducing overfitting.
Published 11/26/2024 Dropout is a regularization technique where random neurons are {{c1::deactivated}} during each forward pass.
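A minimal sketch of the dropout idea from the card above, using the common "inverted dropout" convention in which surviving activations are rescaled during training; the drop probability and activation values are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(h, p_drop=0.5, training=True):
    # Randomly deactivate neurons during training; rescale the survivors so the
    # expected activation matches what the layer produces at evaluation time.
    if not training:
        return h
    mask = rng.random(h.shape) >= p_drop
    return h * mask / (1.0 - p_drop)

h = np.array([0.2, 1.5, 0.0, 3.1, 0.7])
print(dropout(h))                    # roughly half the activations zeroed on each call
print(dropout(h, training=False))    # unchanged at evaluation time
```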
Published 11/26/2024 The role of a support vector in SVM is to define the {{c1::margin boundaries}} in the feature space.
Published 11/26/2024 In SVMs, {{c1::maximizing the margin}} is beneficial for generalization as it helps balance the {{c2::bias-variance tradeoff}} by reducing variance without a large increase in bias.
Published 11/26/2024 A {{c1::non-linear kernel}} in SVM is preferable over a linear kernel when data is {{c2::not linearly separable}}. Examples include the polynomial and RBF (Gaussian) kernels.
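To illustrate the kernel and soft-margin cards, the following scikit-learn sketch compares a linear and an RBF kernel on data that is not linearly separable in its original feature space (concentric circles); C is the soft-margin penalty mentioned earlier, and the dataset parameters are arbitrary.

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Concentric circles: not linearly separable in the original feature space
X, y = make_circles(n_samples=400, factor=0.3, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf"):
    # C controls the soft margin: smaller C tolerates more misclassification
    clf = SVC(kernel=kernel, C=1.0).fit(X_tr, y_tr)
    print(kernel, "test accuracy:", clf.score(X_te, y_te),
          "support vectors:", clf.n_support_.sum())
```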
Published 11/26/2024 In neural networks, {{c1::regularization}} prevents overfitting by adding a penalty for large weights, reducing model complexity. This differs from {{c2::dropout}}, which reduces overfitting by randomly deactivating neurons rather than penalizing weight magnitudes.
Published 11/26/2024 The {{c1::chain rule}} in backpropagation allows efficient calculation of gradients by computing derivatives layer-by-layer, from output to input, known as the backward pass.
Published 11/26/2024 Compared to SVMs, {{c1::deep neural networks}} are generally less interpretable due to complex layers and non-linearities, which trade off {{c2::interpretability}} for greater representational power.