
Self Test Questions Machine Learning

Self test questions for the lecture “Machine Learning - Foundations and Algorithms” at KIT.

Lecture 3: Model Selection

  • Why is it a bad idea to evaluate your algorithm on the training set?

    Evaluating on the training set rewards overfitting. Overfitting means learning the training points by heart instead of approximating the distribution the training points were drawn from. A trivial algorithm that just stores and queries all training points achieves 100 % accuracy on the training set.

  • What is the difference between true and empirical risk?

    True risk is the expected loss of a predictor $f$ over the whole data distribution.

    For classification: $R_\infty(f) = p(f(\boldsymbol{x})\neq y)$. For regression: $R_\infty(f) = \mathbb{E}_{\boldsymbol{x},y}[(f(\boldsymbol{x})-y)^2]$

    Empirical risk is a Monte Carlo estimate of the true risk on a sample drawn from the distribution.

    For classification: $R_N(f) = \frac{1}{N} \sum_{i} \mathbb{I}\left(f\left(\boldsymbol{x}_{i}\right) \neq y_{i}\right)$

    For regression: $R_N(f) = \frac{1}{N}\sum_{i=1}^N(f(\boldsymbol{x}_i)-y_i)^2$

  • The true risk can be decomposed into which parts?

    True risk is variance + squared bias + noise

    $$R_\infty(f) = \underbrace{\mathbb{E}_{D_{n}}\left[\mathbb{E}_{x}\left[\left(\hat{f}_{D_{n}}(\boldsymbol{x})-\hat{f}_{*}(\boldsymbol{x})\right)^{2}\right]\right]}_{\text{variance}}+\underbrace{\mathbb{E}_{x}\left[\left(\hat{f}_{*}(\boldsymbol{x})-f(\boldsymbol{x})\right)^{2}\right]}_{\text{bias}^2}+\underbrace{\sigma^{2}}_{\text{noise}}$$

    With true function $f$, best function in the function class $\hat{f}_{*}(\boldsymbol{x})$, and best function for dataset $D_n$: $\hat{f}_{D_{n}}(\boldsymbol{x})$.

  • How are the bias and the variance of a learning algorithm defined, and how do they contribute to the true risk?

    The bias term quantifies the error we make by limiting the function class we choose from: $\mathbb{E}_{x}\left[\left(\hat{f}_{*}(\boldsymbol{x})-f(\boldsymbol{x})\right)^{2}\right]$

    The variance term is the error we make by only having a subset of all the data: $\mathbb{E}_{D_{n}}\left[\mathbb{E}_{x}\left[\left(\hat{f}_{D_{n}}(\boldsymbol{x})-\hat{f}_{*}(\boldsymbol{x})\right)^{2}\right]\right]$

  • What is the advantage/disadvantage of k-fold CV vs. the Hold-out method?

    Advantage: We average out the randomness of choosing a validation set

    Disadvantage: We need to train and evaluate our classifier k times

  • Why does it make sense to penalize the norm of the weight vector?

    Regularizing the weight vector prevents overfitting. Visually, we smooth the decision function by penalizing spikes in the function caused by big $w_i$.

  • Which norms can we use, and what are the different effects?

    Most commonly, the $\ell_2$ and the $\ell_1$ norms are used. $\|\boldsymbol{w}\|_1=\sum_i|w_i|$ leads to sparse weight vectors. $\|\boldsymbol{w}\|_2=\sqrt{\sum_i w_i^2}$ leads to smoother functions and is much easier to optimize in general, but we don’t get sparse solutions, as entries close to zero are hardly penalized.
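
    A minimal sketch of the sparsity effect, not from the lecture, using scikit-learn’s Lasso ($\ell_1$) and Ridge ($\ell_2$) on synthetic data with only 3 informative features:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso, Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 20))
    w_true = np.zeros(20)
    w_true[:3] = [2.0, -1.5, 1.0]          # only 3 informative features
    y = X @ w_true + 0.1 * rng.normal(size=100)

    l1 = Lasso(alpha=0.1).fit(X, y)        # penalizes sum |w_i|
    l2 = Ridge(alpha=0.1).fit(X, y)        # penalizes sum w_i^2

    print("non-zero weights, l1:", np.sum(l1.coef_ != 0))   # few: sparse solution
    print("non-zero weights, l2:", np.sum(l2.coef_ != 0))   # all 20: small but non-zero
    ```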

  • What is the effect of early stopping?

    It prevents overfitting: we observe the loss on a validation set and, if it goes up, stop training at the observed local minimum. With that, despite possible further improvements on the training set, we keep the model’s ability to predict unseen data. Early stopping acts as an implicit limitation of model complexity: the model is not given enough time to learn the details of the training data, under the assumption that the bigger, and therefore more general, tendencies of the data are learned first because they yield larger steps in the loss minimization.
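
    A minimal, self-contained sketch of such a loop (my own toy setup, not the lecture’s): an overparameterized polynomial is fitted by gradient descent, and training stops once the validation loss has not improved for a fixed patience window:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, size=40)
    y = x + 0.3 * rng.normal(size=40)          # noisy linear target
    Phi = np.vander(x, 10)                     # degree-9 polynomial features (overparameterized)
    Phi_tr, y_tr = Phi[:30], y[:30]            # training split
    Phi_va, y_va = Phi[30:], y[30:]            # validation split

    w = np.zeros(10)
    best_w, best_val, patience = w, np.inf, 0
    for step in range(20000):
        grad = 2 / len(y_tr) * Phi_tr.T @ (Phi_tr @ w - y_tr)
        w = w - 0.05 * grad
        val = np.mean((Phi_va @ w - y_va) ** 2)
        if val < best_val:
            best_w, best_val, patience = w.copy(), val, 0
        else:
            patience += 1
            if patience >= 500:                # validation loss stopped improving
                break
    print(f"stopped at step {step}, best validation MSE {best_val:.3f}")
    ```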

Lecture 4: Trees and Forests

  • What do we mean by non-parametric / instance-based machine learning algorithms?

    Non-parametric algorithms do not learn parameters such as a weight vector or the parameters of a distribution. Instead, they store the training data (instances) and compute predictions for new data points directly from it.

  • How does k-NN work?

    We store all training data. For a new data point $\boldsymbol{x}^*$, we compute the k-neighbourhood $N_k(\boldsymbol{x})$ of the k nearest points and predict the most frequent class. k-NN for regression interpolates the target value from the k-neighbourhood.
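
    A minimal sketch of both variants (my own illustration, assuming Euclidean distance and majority vote / mean aggregation):

    ```python
    import numpy as np
    from collections import Counter

    def knn_predict(X_train, y_train, x_new, k=5):
        """Classify x_new by majority vote over its k nearest training points."""
        dists = np.linalg.norm(X_train - x_new, axis=1)   # Euclidean distances
        neighbours = np.argsort(dists)[:k]                # indices of the k-neighbourhood
        return Counter(y_train[neighbours]).most_common(1)[0][0]

    def knn_regress(X_train, y_train, x_new, k=5):
        """Predict by averaging the targets of the k-neighbourhood."""
        dists = np.linalg.norm(X_train - x_new, axis=1)
        return y_train[np.argsort(dists)[:k]].mean()
    ```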

  • How to choose the k for k-NN?

    k is a hyperparameter. It should be optimized on a validation set, e.g., by picking the k with the lowest loss under k’-fold cross-validation.

  • Why is k-NN hard to use for high-D data?

    Due to the curse of dimensionality, distances in high dimensions lose meaning: in high-D data all pairwise distances become nearly equal, so the “nearest” neighbours are barely closer than any other points. Furthermore, k-NN is sensitive to noise dimensions.

  • How to search for nearest neighbors efficiently?

    Using an index data structure to store the points that allows efficient computation of the k-neighbourhood. Most commonly, k-d trees are used for this. They improve the query time from $\mathcal{O}(n)$ for the naïve solution to $\mathcal{O}(k\log{n})$.
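
    For illustration (not from the lecture), scipy ships such an index as `cKDTree`:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    points = np.random.default_rng(0).normal(size=(10000, 3))
    tree = cKDTree(points)                            # build the index once
    dists, idx = tree.query([0.0, 0.0, 0.0], k=5)     # 5 nearest neighbours of the origin
    print(idx, dists)
    ```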

  • What is a binary regression / decision tree?

    A CART is a binary tree that, at each inner node, compares one feature with a fixed value to decide whether the left or the right subtree matches a data point better. The leaves contain the predicted class/value. A CART is trained top-down by splitting the data at the dimension and comparison value that optimize a splitting criterion for the left and right subtrees.

  • What are useful splitting criteria?

    For regression: $\mathrm{RSS} =\sum_{\text{left}}\left(y_{i}-\bar{y}_{L}\right)^{2}+\sum_{\text{right}}\left(y_{i}-\bar{y}_{R}\right)^{2} =N_{L} \sigma_{L}^{2}+N_{R} \sigma_{R}^{2}$ (residual sum of squares)

    For classification: $N_L H(p_L) + N_R H(p_R)$ (minimum entropy in the subtrees)

    Other criteria, not covered in the lecture, are the Gini index and the minimum classification error.
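
    As a sketch of the RSS criterion (my own illustration): for a single feature, exhaustively try every threshold between consecutive sorted values and keep the one with minimal RSS. A full CART repeats this over all features at every node:

    ```python
    import numpy as np

    def best_split_rss(x, y):
        """Find the threshold on one feature that minimizes the RSS criterion."""
        order = np.argsort(x)
        x_s, y_s = x[order], y[order]
        best = (np.inf, None)
        for i in range(1, len(x_s)):          # split between consecutive sorted values
            y_l, y_r = y_s[:i], y_s[i:]
            rss = ((y_l - y_l.mean()) ** 2).sum() + ((y_r - y_r.mean()) ** 2).sum()
            if rss < best[0]:
                best = (rss, (x_s[i - 1] + x_s[i]) / 2)
        return best                            # (RSS, threshold)
    ```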

  • How can we influence the model complexity of the tree?

    Complexity can be measured by the number of samples per leaf and by the depth of the tree.

    We can influence it with pre-pruning: stop growing the tree once the number of samples per leaf falls below a lower threshold or the depth of the tree exceeds an upper threshold.

    More effective, but also computationally more expensive, is post-pruning, which is not covered in the lecture.

  • Why is it useful to use multiple trees and randomization?

    Because this decreases variance, without increasing bias.

    Let $X_i$ be i.i.d. predictions with variance $\operatorname{Var}[X_i]$; we show that averaging $M$ of them reduces the variance to $\frac{1}{M}\operatorname{Var}[X]$.

    From the fundamental properties of the variance:

    $$\operatorname{Var}\left[\sum_{i=1}^{M} X_{i}\right]=\sum_{i=1}^{M} \operatorname{Var}\left[X_{i}\right]+\underbrace{2 \sum_{1 \leqslant i< j \leq M} \operatorname{Cov}\left(X_{i}, X_{j}\right)}_{= 0\text{ for i.i.d. } X_i}$$

    and $\operatorname{Var}[a\cdot X + b] = a^2 \operatorname{Var}[X]$, it follows for i.i.d. $X_i$

    $$\operatorname{Var}\left[\frac{1}{M} \sum_{i=1}^{M} X_{i}\right]=\frac{1}{M^{2}} \operatorname{Var}\left[\sum_{i=1}^{M} X_{i}\right]=\frac{1}{M} \operatorname{Var}[X], \quad \text{if } X \text{ i.i.d.}$$

    For trees, we cannot assume independence, but still get a significant variance reduction in practice.
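
    A quick numerical check of the i.i.d. case (my own illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    M, trials = 50, 100_000
    X = rng.normal(loc=0.0, scale=1.0, size=(trials, M))   # i.i.d., Var[X_i] = 1
    print(np.var(X[:, 0]))          # ~1: variance of a single estimator
    print(np.var(X.mean(axis=1)))   # ~1/M = 0.02: averaging divides the variance by M
    ```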

Lecture 5: Kernel Methods

  • What is the definition of a kernel and its relation to an underlying feature space?

    A kernel is a function $k: \mathcal{X}\times \mathcal{X} \to \mathbb{R}$.

    According to Mercer's theorem:

    $k$ is a symmetric positive semi-definite kernel $\Leftrightarrow$ there exists a $\phi: \mathcal{X}\to\mathcal{H}$ such that $k(x,x') = \langle \phi(x),\phi(x')\rangle$.

    Every p.d. kernel comes with an associated feature space.

  • Why are kernels more powerful than traditional feature-based methods?

    Because kernels can operate in an infinite-dimensional feature space that adapts to the training data.

  • What do we mean by the kernel trick?

    Rewriting a machine learning algorithm so that explicit uses of the weight vector are replaced by inner products of feature-space vectors. This allows for an infinite-dimensional weight vector.

  • How do we apply the kernel trick to ridge regression?

    $$\boldsymbol{w}^{*}=\underbrace{\left(\boldsymbol{\Phi}^{T} \boldsymbol{\Phi}+\lambda \boldsymbol{I}\right)^{-1}}_{d \times d \text{ matrix inversion}} \boldsymbol{\Phi}^{T} \boldsymbol{y}=\boldsymbol{\Phi}^{T} \underbrace{\left(\boldsymbol{\Phi} \boldsymbol{\Phi}^{T}+\lambda \boldsymbol{I}\right)^{-1}}_{N \times N \text{ matrix inversion}} \boldsymbol{y} =\boldsymbol{\Phi}^{T} \underbrace{(\boldsymbol{K}+\lambda \boldsymbol{I})^{-1} \boldsymbol{y}}_{\boldsymbol{\alpha}}=\boldsymbol{\Phi}^{T} \boldsymbol{\alpha}$$

    $$f(\boldsymbol{x})=\boldsymbol{\phi}(\boldsymbol{x})^{T} \boldsymbol{w}^{*}=\boldsymbol{\phi}(\boldsymbol{x})^{T} \boldsymbol{\Phi}^{T} \boldsymbol{\alpha}=\boldsymbol{k}(\boldsymbol{x})^{T} \boldsymbol{\alpha}=\sum_{i} \alpha_{i} k\left(\boldsymbol{x}_{i}, \boldsymbol{x}\right)$$
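
    A minimal numpy sketch of these two equations (my own illustration, assuming a Gaussian kernel):

    ```python
    import numpy as np

    def rbf_kernel(A, B, sigma=1.0):
        """Gaussian kernel matrix K[i, j] = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))

    def fit_kernel_ridge(X, y, lam=1e-2, sigma=1.0):
        K = rbf_kernel(X, X, sigma)
        alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)      # alpha = (K + lam I)^{-1} y
        return lambda X_new: rbf_kernel(X_new, X, sigma) @ alpha  # f(x) = sum_i alpha_i k(x_i, x)

    X = np.linspace(0, 6, 50)[:, None]
    y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(0).normal(size=50)
    f = fit_kernel_ridge(X, y)
    print(f(np.array([[1.5]])))     # close to sin(1.5) ~ 0.997
    ```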

  • How do we compute with infinite dimensional vectors?

    We can calculate their scalar product by taking the integral.

    For $\mu \in \mathbb{R}$: $\left\langle\phi_{\mu}(\boldsymbol{x}), \phi_{\mu}(\boldsymbol{y})\right\rangle=\int \phi_{\mu}(\boldsymbol{x})\, \phi_{\mu}(\boldsymbol{y})\, d\mu$

  • What are hyperparameters of a kernel, and how can we optimize them?

    Hyperparameters are constants that cannot be determined by the learning algorithm but need to be set beforehand. Kernel function definitions typically come with such hyperparameters. For the Gaussian kernel, the hyperparameter is the bandwidth $\sigma$, quantifying the distance up to which samples are still considered similar. Hyperparameters should be optimized on a validation set disjoint from the training set.

Lecture 6: SVMs

  • Why is it good to use a maximum margin objective for classification?

    To obtain a robust decision boundary that generalizes to unseen data. The maximum-margin boundary tolerates the most noise on the samples before any of them is misclassified.

  • How can we define the margin as an optimization problem?

    $$\argmax_{\boldsymbol{w}} m = \argmax_{\boldsymbol{w}} \frac{2}{\|\boldsymbol{w}\|} = \argmin_{\boldsymbol{w}} \|\boldsymbol{w}\|^2$$

    $$\mathrm{s.t.}\;\forall i\quad \underbrace{y_i}_{\in\{-1,1\}}(\boldsymbol{w}^T\boldsymbol{x}_i+b) \geq 1$$

  • What are slack variables, and how can they be used to get a β€œsoft” margin?

    Slack variables $\xi_i \geq 0$ allow certain training samples to deviate from the optimization constraint. A soft-margin SVM is less sensitive to outliers: it trades off violations at individual points for an overall more robust decision boundary, which is a form of regularization. For non-linearly separable data, slack variables make the optimization problem feasible in the first place.

  • How is the hinge loss defined?

    $$L_{\mathrm{hinge}} = \max(0, 1-y_i f(\boldsymbol{x}_i))$$

  • What is the relation between the slack variables and the hinge loss?

    Starting from the constrained optimization problem, we can rewrite it as an unconstrained optimization problem using the hinge loss as the data loss, with $\xi_i=\max(0, 1-y_if(\boldsymbol{x}_i))$:

    $$\begin{aligned}&\argmin_{\boldsymbol{w}} \underbrace{\|\boldsymbol{w}\|^{2}}_{\text{regularization}}+C \sum_{i=1}^{N} \xi_i\quad \mathrm{s.t.}\; y_i(\boldsymbol{w}^T\boldsymbol{x}_i+b)\geq 1-\xi_i,\quad \xi_i \geq 0\\ =\,&\argmin_{\boldsymbol{w}} \underbrace{\|\boldsymbol{w}\|^{2}}_{\text{regularization}}+C \underbrace{\sum_{i=1}^{N} \max \left(0,1-y_{i} f\left(\boldsymbol{x}_{i}\right)\right)}_{\text{hinge loss}}\end{aligned}$$

  • What are the advantages and disadvantages in comparison to logistic regression?
    • SVM outputs class labels. Logistic regression outputs probabilities.
    • Soft margin SVM is less sensitive to outliers
    • SVM finds more balanced decision boundary
    • Loss contribution is 0 for correctly classified points outside the margin ⇒ results in slightly better accuracy
  • What is the difference between gradients and sub-gradients?

    Sub-gradients are only defined for convex functions.

    Gradients are only defined for differentiable functions.

    If a function is both (at a point), the sub-gradient is unique and equals the gradient.

    At non-differentiable points, sub-gradients exist, but are not unique.
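
    A minimal sketch tying this back to the soft-margin objective (my own illustration): sub-gradient descent on $\|\boldsymbol{w}\|^2 + C\sum_i \max(0, 1-y_i(\boldsymbol{w}^T\boldsymbol{x}_i+b))$, picking the sub-gradient 0 at the hinge’s kink:

    ```python
    import numpy as np

    def train_linear_svm(X, y, C=1.0, eta=1e-3, epochs=200):
        """Sub-gradient descent for a linear soft-margin SVM; y must be in {-1, +1}."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            margins = y * (X @ w + b)
            active = margins < 1                    # points violating the margin
            grad_w = 2 * w - C * (y[active, None] * X[active]).sum(axis=0)
            grad_b = -C * y[active].sum()
            w -= eta * grad_w
            b -= eta * grad_b
        return w, b
    ```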

Lecture 7: Bayesian Learning

  • What are the 2 basic steps behind Bayesian Learning?
    1. Compute the posterior $\underbrace{p(\boldsymbol{\theta} \mid \mathcal{D})}_{\text{posterior}}=\frac{\overbrace{p(\mathcal{D} \mid \boldsymbol{\theta})}^{\text{data likelihood}}\; \overbrace{p(\boldsymbol{\theta})}^{\text{prior}}}{\underbrace{p(\mathcal{D})}_{\text{evidence}}}$
    2. Compute the predictive distribution $\underbrace{p\left(\boldsymbol{y}^{*} \mid \boldsymbol{x}^{*}, \mathcal{D}\right)}_{\text{marginal likelihood}}=\int \underbrace{p\left(\boldsymbol{y}^{*} \mid \boldsymbol{x}^{*}, \boldsymbol{\theta}\right)}_{\text{likelihood}}\; \underbrace{p(\boldsymbol{\theta} \mid \mathcal{D})}_{\text{posterior}}\, d \boldsymbol{\theta}$
  • Why is Bayesian Learning more robust against overfitting?

    When computing the predictive distribution, all possible model parameters are considered according to their weight, i.e., the probability of that weight vector generating the observed data. We may therefore say that we average out the overfitting behavior of individual weight vectors. Bayesian learning can be seen as an ensemble method, and we use ensemble methods in other contexts to avoid overfitting.

  • What happens with the posterior if we add more data to the training set?

    The posterior becomes more peaked: its variance decreases and it concentrates around the most plausible parameters.

  • What is completing the square, and how does it work?

    Completing the square is the reverse application of the binomial formula. In our context, we do this to bring the exponent into the form of a Gaussian distribution, which allows us to read off the mean and variance parameters from the term.

    $$-\frac{\left(\mu-\mu_{N}\right)^{2}}{2 \sigma_{N}^{2}}=-\frac{1}{2} \frac{\mu^{2}}{\sigma_{N}^{2}}+\frac{\mu \mu_{N}}{\sigma_{N}^{2}} -\frac{1}{2} \frac{\mu_{N}^{2}}{\sigma_{N}^{2}}\Rightarrow a=\frac{1}{\sigma_{N}^{2}},\; b=\frac{\mu}{\sigma_{N}^{2}} \Rightarrow \mu_{N}=a^{-1} b,\quad \sigma_{N}^{2}=a^{-1}$$

  • For which 2 cases can Bayesian Learning be solved in closed form?

    Bayesian linear regression

    $$\begin{aligned} \mu\left(\boldsymbol{x}^{*}\right) &=\boldsymbol{\phi}\left(\boldsymbol{x}^{*}\right)^{T}\left(\boldsymbol{\Phi}^{T} \boldsymbol{\Phi}+\lambda \sigma_{\boldsymbol{y}}^{2} \boldsymbol{I}\right)^{-1} \boldsymbol{\Phi}^{T} \boldsymbol{y} \\ \sigma^{2}\left(\boldsymbol{x}^{*}\right) &=\sigma_{\boldsymbol{y}}^{2}\left(1+\boldsymbol{\phi}\left(\boldsymbol{x}^{*}\right)^{T}\left(\boldsymbol{\Phi}^{T} \boldsymbol{\Phi}+\lambda \sigma_{\boldsymbol{y}}^{2} \boldsymbol{I}\right)^{-1} \boldsymbol{\phi}\left(\boldsymbol{x}^{*}\right)\right) \end{aligned}$$

    Gaussian Processes (kernelized Bayesian linear regression)

    $$\begin{aligned} \mu\left(\boldsymbol{x}^{*}\right) &=\lambda^{-1} \boldsymbol{\phi}\left(\boldsymbol{x}^{*}\right)^{T} \boldsymbol{\Phi}^{T}\left(\sigma_{\boldsymbol{y}}^{2} \boldsymbol{I}+\boldsymbol{K}\right)^{-1} \boldsymbol{y} =\boldsymbol{k}_{\boldsymbol{x}^{*}}^{T}\left(\sigma_{\boldsymbol{y}}^{2} \boldsymbol{I}+\boldsymbol{K}\right)^{-1} \boldsymbol{y} \\ \sigma\left(\boldsymbol{x}^{*}\right) &=\sigma_{y}^{2}+\lambda^{-1} \boldsymbol{\phi}\left(\boldsymbol{x}^{*}\right)^{T} \boldsymbol{\phi}\left(\boldsymbol{x}^{*}\right)-\lambda^{-2} \boldsymbol{\phi}\left(\boldsymbol{x}^{*}\right)^{T} \boldsymbol{\Phi}^{T}\left(\sigma_{\boldsymbol{y}}^{2} \boldsymbol{I}+\boldsymbol{K}\right)^{-1} \boldsymbol{\Phi} \boldsymbol{\phi}\left(\boldsymbol{x}^{*}\right) =\sigma_{y}^{2}+k^{*}-\boldsymbol{k}_{\boldsymbol{x}^{*}}^{T}\left(\sigma_{\boldsymbol{y}}^{2} \boldsymbol{I}+\boldsymbol{K}\right)^{-1} \boldsymbol{k}_{\boldsymbol{x}^{*}} \end{aligned}$$
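
    A minimal numpy sketch of the kernelized equations (my own illustration, assuming a Gaussian kernel, so $k^* = 1$, and a zero-mean prior):

    ```python
    import numpy as np

    def gp_posterior(X, y, X_star, sigma_y=0.1, bandwidth=1.0):
        """Posterior mean and variance of a GP with a Gaussian kernel."""
        def k(A, B):
            sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-sq / (2 * bandwidth ** 2))
        K = k(X, X)
        K_s = k(X_star, X)                               # rows are k_{x*}^T
        inv = np.linalg.inv(sigma_y ** 2 * np.eye(len(X)) + K)
        mu = K_s @ inv @ y                               # k_{x*}^T (sigma_y^2 I + K)^{-1} y
        var = sigma_y ** 2 + k(X_star, X_star).diagonal() - (K_s @ inv * K_s).sum(axis=1)
        return mu, var
    ```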

  • Which approximations can we use if no closed form is available?
    1. Sample from $p(\boldsymbol{\theta} \mid \mathcal{D})$ to approximate the integral $\underbrace{p\left(\boldsymbol{y}^{*} \mid \boldsymbol{x}^{*}, \mathcal{D}\right)}_{\text{marginal likelihood}}=\int \underbrace{p\left(\boldsymbol{y}^{*} \mid \boldsymbol{x}^{*}, \boldsymbol{\theta}\right)}_{\text{likelihood}}\; \underbrace{p(\boldsymbol{\theta} \mid \mathcal{D})}_{\text{posterior}}\, d \boldsymbol{\theta}$
    2. Laplace approximation
    3. Variational inference

    Or we ignore the uncertainty in $\boldsymbol{\theta}$ and settle for the maximum a-posteriori solution $\boldsymbol{\theta}_{\mathrm{MAP}}=\underset{\boldsymbol{\theta}}{\arg \max}\; p(\boldsymbol{\theta} \mid \mathcal{D})=\underset{\boldsymbol{\theta}}{\arg \max}\; p(\mathcal{D} \mid \boldsymbol{\theta})\, p(\boldsymbol{\theta})$

  • How can we derive Bayesian Linear regression?
  • What is the advantage of Bayesian Linear regression to Ridge regression? What is the conceptual difference?

    We not only obtain a point estimate but a Gaussian distribution that quantifies the confidence we have in a prediction. Computationally, Bayesian Linear regression differs in that we are forced to invert matrices.

  • What is the major advantage of GPs over Kernel Ridge Regression?

    We not only obtain a point estimate but a Gaussian distribution that quantifies the confidence we have in a prediction.

  • Why are GPs a Bayesian approach?

    Gaussian processes are just kernelized Bayesian Linear regression.

  • What principle allowed deriving GPs from a Bayesian regression point of view?

    The kernel trick

Other questions

  • What are possible interpretations of a maximum a-posteriori solution?

    Interpolation between the MLE and a prior

    Bayesian learning, ignoring the uncertainty in the parameter vector

    Equivalent to ridge regression for Gaussian distributions with $\lambda_{\mathrm{ridge}}=\lambda\sigma^2$

Lecture 8: Neural Networks

  • How does logistic regression relate to neural networks?

    Neural networks are compositions of logistic-regression-like units (perceptrons), sometimes swapping the sigmoid for other activation functions.

  • What kind of functions can single layer neural networks learn?

    According to the universal approximation theorem, a NN with one hidden layer of infinitely many neurons is an arbitrary function approximator.

  • Why do we need non-linear activation functions?

    Otherwise, we would have a composition of linear functions, which itself would be a linear function again.

  • What activation functions can we use, and what are the advantages/disadvantages of those?
    • Sigmoid: Vanishing gradients, not zero-centered, a bit expensive to compute, nice interpretation as probabilities
    • tanh: Vanishing gradients, zero-centered, a bit expensive to compute
    • ReLU: No gradients for x < 0, efficient to compute, fast convergence in practice
    • Leaky ReLU / PReLU: like ReLU, but meaningful gradients everywhere, fast convergence in practice
    • ELU: Vanishing gradients for x < 0, a bit expensive to compute, fast convergence in practice
  • What output layer and loss function to use given the task (regression, classification)?
    • Regression
      • Deterministic
        • Output Layer: Linear $W\boldsymbol{x} + \boldsymbol{b}$
        • Loss: Squared error $\frac{1}{2}(y_i-f(\boldsymbol{x}_i))^2$
      • Probabilistic
        • Output Layer: Linear Gaussian $\mathcal{N}(\boldsymbol{y}\mid W \boldsymbol{x} +\boldsymbol{b},\Sigma)$
        • Loss: Negative log-likelihood of $\mathcal{N}(y_i\mid \boldsymbol{\mu}(\boldsymbol{x}_i), \Sigma)$
    • Binary Classification
      • Deterministic
        • Output Layer: Linear $W\boldsymbol{x} + \boldsymbol{b}$
        • Loss: Hinge loss $\max(0,1 - y_if(\boldsymbol{x}_i))$
      • Probabilistic
        • Output Layer: Sigmoid $\sigma(W\boldsymbol{x} + \boldsymbol{b})$
        • Loss: Negative log-likelihood of Bernoulli $-c_i\log(f(\boldsymbol{x}_i))-(1-c_i)\log(1-f(\boldsymbol{x}_i))$
    • Multi-class Classification
      • Deterministic
        • Output Layer: Linear $W\boldsymbol{x} + \boldsymbol{b}$
        • Loss:
      • Probabilistic
        • Output Layer: Softmax $\operatorname{softmax}(W\boldsymbol{x} + \boldsymbol{b})$
        • Loss: Negative log-likelihood of the multinomial $-\sum_{k=1}^{K} \boldsymbol{h}_{c_{i}, k} \log y_{k}\left(\boldsymbol{x}_{i}\right)$, with $\boldsymbol{h}_{c_i}$ the one-hot encoding of class $c_i$ (sketch below)
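
    A minimal, numerically stable sketch of the multinomial negative log-likelihood (my own illustration, using the max-shift trick):

    ```python
    import numpy as np

    def softmax_nll(logits, label):
        """Negative log-likelihood of the multinomial: -log softmax(logits)[label]."""
        z = logits - logits.max()                 # max-shift for numerical stability
        log_probs = z - np.log(np.exp(z).sum())   # log softmax
        return -log_probs[label]

    print(softmax_nll(np.array([2.0, 0.5, -1.0]), label=0))  # small loss: class 0 has the largest logit
    ```
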
  • Why not use a sigmoid activation function?

    Sigmoid has vanishing gradients and is therefore not suited for deeper neural networks, where it is composed many times.

  • Derive the equations for forward and backpropagation for a simple network

    For the multi-layer perceptron from the slides (a generic sketch is shown below):
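
    The slides’ exact network is not reproduced here; a minimal sketch, assuming a one-hidden-layer MLP with sigmoid hidden units, a linear output, and squared error $L = \frac{1}{2}\|\hat{\boldsymbol{y}}-\boldsymbol{y}\|^2$:

    ```python
    import numpy as np

    def sigmoid(a):
        return 1 / (1 + np.exp(-a))

    def forward(x, W1, b1, W2, b2):
        a1 = W1 @ x + b1          # pre-activation of the hidden layer
        h1 = sigmoid(a1)          # hidden activation
        y_hat = W2 @ h1 + b2      # linear output layer
        return a1, h1, y_hat

    def backward(x, y, a1, h1, y_hat, W2):
        delta2 = y_hat - y                         # dL/dy_hat for squared error
        dW2, db2 = np.outer(delta2, h1), delta2
        delta1 = (W2.T @ delta2) * h1 * (1 - h1)   # chain rule through the sigmoid
        dW1, db1 = np.outer(delta1, x), delta1
        return dW1, db1, dW2, db2
    ```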

  • What is mini-batch gradient descent? Why use it instead of SGD or full gradient descent?

    Full gradient descent computes the gradients to update weights and biases on the full training set; this is computationally expensive, as it needs to deal with huge matrices. SGD does a single gradient step per sample; this is computationally less expensive but introduces jitter into the descent, because one point is not a good representation of the whole dataset. Mini-batch gradient descent is a tradeoff between both: the training set is partitioned into mini-batches (e.g., 32, 64, or 128 samples) to reduce jitter while staying computationally tractable, especially with the parallelism of GPUs.

  • Why can neural networks overfit, and what are the options to prevent it?

    Training a NN for too long reduces the training loss past the point where it still reduces the validation loss. Especially NNs with many parameters have the capacity to effectively memorize the training data. To prevent overfitting we can use classical approaches such as early stopping, weight regularization, ensembles, or data augmentation, but also NN-specific methods such as dropout and DropConnect.

  • Why is the initialization of the network important?

    Different initializations tend to work better for different activation functions. Due to vanishing gradients and activation functions centered differently on the y-axis, the activations may vanish (only be meaningful for a narrow interval of inputs) the more layers we have.

  • What can you read from the loss-curves during training (validation and training loss)?

    We can see the regions where the NN underfits (validation loss still decreasing) and overfits (validation loss increasing), and the optimal point for stopping (local minimum of the validation loss). The training loss should steadily approach zero. If the training loss has too much jitter, we should decrease the learning rate; if it decreases too slowly, we should increase the learning rate. The training loss curve also gives us feedback on the effectiveness of a learning-rate schedule.

  • How can we accelerate gradient descent? How does Adam work?

    We can use first-order methods with adaptive steps. The most straightforward ones are momentum and gradient normalization (RMSProp). Adam combines momentum and gradient normalization.

    $$\begin{aligned} \boldsymbol{g}_{k} &=\nabla_{\boldsymbol{\theta}} \mathcal{L}\left(\boldsymbol{\theta}_{k}\right) \\ \boldsymbol{v}_{k+1, i} &=\gamma_{1} \boldsymbol{v}_{k, i}+\left(1-\gamma_{1}\right) \boldsymbol{g}_{k, i}^{2} &&\ldots\text{ gradient norm} \\ \boldsymbol{m}_{k+1} &=\gamma_{2} \boldsymbol{m}_{k}+\left(1-\gamma_{2}\right) \boldsymbol{g}_{k} &&\ldots\text{ momentum} \\ \boldsymbol{\theta}_{k+1, i} &=\boldsymbol{\theta}_{k, i}-\underbrace{\frac{\eta\, c_{2}(k)}{\sqrt{c_{1}(k)\, \boldsymbol{v}_{k+1, i}+\epsilon}}}_{\text{norm-based scaling}} \boldsymbol{m}_{k+1, i} \end{aligned}$$
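
    A minimal sketch of one such update, assuming $c_1(k)=1/(1-\gamma_1^k)$ and $c_2(k)=1/(1-\gamma_2^k)$ are the usual bias-correction factors (not spelled out in these notes):

    ```python
    import numpy as np

    def adam_step(theta, grad, v, m, k, eta=1e-3, gamma1=0.999, gamma2=0.9, eps=1e-8):
        """One Adam update following the equations above (k starts at 1)."""
        v = gamma1 * v + (1 - gamma1) * grad ** 2      # gradient norm (second moment)
        m = gamma2 * m + (1 - gamma2) * grad           # momentum (first moment)
        c1 = 1 / (1 - gamma1 ** k)                     # bias correction for v
        c2 = 1 / (1 - gamma2 ** k)                     # bias correction for m
        theta = theta - eta * c2 / np.sqrt(c1 * v + eps) * m
        return theta, v, m
    ```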

    We can also use learning rate decay, to make big steps at the beginning and then slow down.

Lecture 9: Convolutional Neural Networks and Recurrent Neural Networks

  • Why are fully connected networks for images a bad idea, and why do we need images?

    Using fully connected networks would require (width · height · channels) weights per neuron in the input layer. Also, by feeding in the pixels as a flat array of numbers we throw away the spatial information in the image (because the weighted sum is commutative), and the NN has to relearn the lost information.

    If the question really intends to ask why we need images, the answer is obvious. Maybe it should read convolutions for images instead. In that case:

    We represent images as tensors [width, height, channels] and apply convolution and pooling layers to reduce the number of input dimensions (= weights to train) while minimizing the (spatial) information we lose. Convolutions capture the spatial information of neighbouring pixels and the colour channels of an image by forming superpositions of pixels. Thinking of NNs as feature extractors, they help detect things like edges and shapes.

  • What are the key components of a CNN?

    Convolutional layers, intermixed with pooling layers (or with pooling just before the end), followed by fully connected layers. After convolutional and fully connected layers, activation functions are applied.

  • What hyper-parameters can we set for a convolutional layer, and what is their meaning?

    Convolution window size F: how many pixels the sliding window covers, influencing what is considered local. The window is assumed to be square.

    Number of filters K: how many convolutions to apply. Can be interpreted as the number of higher-level feature types we want to extract.

    Stride S: step size of the window; can be used to downscale the output volume.

    Padding: size P (per side) and value of the padding; commonly the size is determined by filter size and stride, and the value is zero.

  • What hyper-parameters can we set for a pooling layer, and what is their meaning?

    Stride S: step size of the window; can be used to downscale the output volume. S ≠ 1 is much more common for pooling layers than for convolutional layers.

    Window size F: how many pixels the sliding window covers, influencing what is considered local. The window is assumed to be square.

  • How can we compute the dimensionality of the output of a convolutional layer?

    Input: $W_1\times H_1 \times D_1$

    Output: $W_2\times H_2 \times D_2$ with $W_2=\frac{W_1 - F+2P}{S}+1$, $H_2=\frac{H_1-F+2P}{S}+1$, $D_2=K$
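
    The same formula as a small helper (my own illustration):

    ```python
    def conv_output_shape(W1, H1, F, K, S=1, P=0):
        """Output volume of a conv layer with window F, K filters, stride S, padding P."""
        W2 = (W1 - F + 2 * P) // S + 1
        H2 = (H1 - F + 2 * P) // S + 1
        return W2, H2, K

    print(conv_output_shape(32, 32, F=5, K=10, S=1, P=2))   # (32, 32, 10): "same" padding
    ```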

  • Describe basic properties of AlexNet and VGG

    Many convolutional layers with small windows, intermixed with max-pooling and normalization, followed by three densely connected layers.

  • What is the main idea of ResNet to make it very deep?

    Introducing residual connections that skip convolutions and just apply the identity function. The result of the convolution is then combined with the identity. This can be interpreted as giving the layers not only one level of hierarchical feature extraction but also access to lower-level features. Also, the addition of input and output helps against vanishing gradients.

  • How can we use CNNs with a β€œreasonable” amount of training data?

    With transfer learning we can reuse weights from a similar task and fine-tune only the last few layers with a lower learning rate.

  • What are recurrent neural networks (RNNs) and why do we need them?

    Recurrent neural networks allow us to handle sequential data (e.g., time series) both as input and output. This is achieved by providing the network not only with a single data point (one time step) but also with a state representing the previous time steps.

  • How do we train a vanilla RNN?

    A vanilla RNN unit

    $$\begin{aligned} \boldsymbol{h}_{t} &=\tanh \left(\boldsymbol{W}_{h h} \boldsymbol{h}_{t-1}+\boldsymbol{W}_{x h} \boldsymbol{x}_{t}\right) \\ &=\tanh \left(\boldsymbol{W}\begin{bmatrix} \boldsymbol{h}_{t-1} \\ \boldsymbol{x}_{t} \end{bmatrix}\right) \end{aligned}$$

    is trained with normal backpropagation (through time), using

    $$\frac{\partial \boldsymbol{h}_{t}}{\partial \boldsymbol{h}_{t-1}}=\operatorname{diag}\left(\tanh'\left(\boldsymbol{W}\begin{bmatrix} \boldsymbol{h}_{t-1} \\ \boldsymbol{x}_{t} \end{bmatrix}\right)\right) \boldsymbol{W}_{h h}^{T}$$

Lecture 10: Dimensionality Reduction and Clustering

  • What does dimensionality reduction mean?

    Projecting data into a lower-dimensional space while minimizing the information lost.

  • How does linear dimensionality reduction work?

    Let $D > M$. Given a data point $\boldsymbol{x}\in \mathbb{R}^D$, project it into a lower-dimensional space using a matrix $W\in \mathbb{R}^{M\times D}$: $\boldsymbol{z}=W\boldsymbol{x}\in\mathbb{R}^M$, while maximizing the variance of the projection.

  • What is PCA? What are the three things that it does?

    PCA stands for principal component analysis. It is a closed-form method for linear dimensionality reduction. It allows us to control the variance lost in the data. Steps:

    1. Normalize the data (subtract the mean, usually also divide by the standard deviation)
    2. Do an eigendecomposition of the covariance matrix
    3. Keep the basis eigenvectors corresponding to the largest eigenvalues

    It guarantees:

    • PCA maximizes the variance of the projection
    • PCA minimizes the error of the reconstruction
    • PCA projects the data into a linear subspace
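
    A minimal sketch of these steps (my own illustration; it centers but does not divide by the standard deviation):

    ```python
    import numpy as np

    def pca(X, M):
        """Project X (N x D) onto the M principal components."""
        X_c = X - X.mean(axis=0)                       # 1. center the data
        cov = X_c.T @ X_c / len(X)                     # 2. covariance matrix ...
        eigvals, eigvecs = np.linalg.eigh(cov)         #    ... and its eigendecomposition
        W = eigvecs[:, ::-1][:, :M]                    # 3. keep the top-M eigenvectors
        explained = eigvals[::-1][:M].sum() / eigvals.sum()
        return X_c @ W, explained                      # projections, kept variance ratio
    ```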

  • What are the roles of the Eigenvectors and Eigenvalues in PCA?

    The eigenvalue $\lambda_i$ (of the covariance matrix of the centered data) describes the marginal variance captured by the eigenvector $\boldsymbol{u}_i$.

  • Can you describe applications of PCA?

    Image morphing, reducing input dimensionality for a machine learning algorithm (fewer weights to train for parametric algorithms, reduced curse of dimensionality effect for instance based algorithms that are using distances), image compression for certain types of images

  • How is the clustering problem defined? Why is it called β€œunsupervised”?

    We are in the unsupervised setting, i.e., without ground-truth labels, and want to group data points into distinct clusters. Points within a cluster should be similar to each other and dissimilar to points in other clusters.

  • How do hierarchical clustering methods work? What is the role of the cluster-to-cluster distance, and which distances can we use?

    Either bottom-up, merging similar clusters into a cluster one level above, or top-down, splitting clusters into clusters one level below. Algorithms for this and the cluster-to-cluster distance are not covered in detail, but we would use SLINK/CLINK/UPGMA for bottom-up clustering, measuring the cluster-to-cluster distance by:

    $\operatorname{dist}_{sl}(X, Y)=\min_{x \in X, y \in Y} \operatorname{dist}(x, y)$ (single linkage)

    $\operatorname{dist}_{cl}(X, Y)=\max_{x \in X, y \in Y} \operatorname{dist}(x, y)$ (complete linkage)

    $\operatorname{dist}_{al}(X, Y)=\frac{1}{|X| \cdot|Y|} \sum_{x \in X, y \in Y} \operatorname{dist}(x, y)$ (average linkage)

    and DIANA for top down clustering

  • How does the k-means algorithm work? What are the 2 main steps?
    1. Place k center points randomly (on data points)
    2. Assign each data point to its closest center to obtain clusters
    3. Calculate the mean of each cluster, use the data point closest to it as the new center, and go back to step 2 (see the sketch below)
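
    A minimal sketch of these steps (my own illustration; it uses the cluster means directly as centers rather than snapping to the closest data point):

    ```python
    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]   # 1. init on data points
        for _ in range(iters):
            # 2. assign every point to its closest center
            z = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            # 3. recompute each center as the cluster mean (keep old center if cluster is empty)
            new_centers = np.array([X[z == j].mean(axis=0) if np.any(z == j) else centers[j]
                                    for j in range(k)])
            if np.allclose(new_centers, centers):           # converged
                break
            centers = new_centers
        return centers, z
    ```
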
  • Why does the algorithm converge? What is it minimizing?

    K-means converges to a local optimum, because each iteration reduces the sum of squared distances (SSD) and there is only a finite number of possible center configurations.

  • Does k-means find the global minimum of the objective?

    No. Minimizing $\sum_{i=1}^{n} \sum_{k} \mathbb{I}\left(z_{i}=k\right)d\left(\boldsymbol{x}_{i}, \boldsymbol{c}_{k}\right)^{2}$ is an NP-hard problem.

Lecture 11: Expectation Maximization

  • What are mixture models?

    Linear combinations of distributions

    $$p(\boldsymbol{x}) = \sum_{k=1}^{K}\pi_k\, p(\boldsymbol{x}\mid k)$$

  • Should gradient methods be used for training mixture models?

    No: because of the log of a sum in the negative log-likelihood, gradient descent is very slow.

  • How does the EM algorithm work?

    Do some reasonable initialization of distribution parameters, e.g., with k-means

    Expectation: compute the cluster probabilities aka responsibilities for each sample (Bayes’ rule): $$q_{i k}=p\left(k_{i}=k \mid \boldsymbol{x}_{i}\right)=\frac{p\left(\boldsymbol{x}_{i} \mid k\right) p(k)}{\sum_{j} p\left(\boldsymbol{x}_{i} \mid j\right) p(j)}=\frac{\pi_{k} \mathcal{N}\left(\boldsymbol{x}_{i} \mid \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k}\right)}{\sum_{j=1}^{K} \pi_{j} \mathcal{N}\left(\boldsymbol{x}_{i} \mid \boldsymbol{\mu}_{j}, \boldsymbol{\Sigma}_{j}\right)}$$

    Maximization: compute the (weighted) maximum likelihood estimates: $$\pi_{k}=\frac{\sum_{i} q_{i k}}{N} \quad \boldsymbol{\mu}_{k}=\frac{\sum_{i} q_{i k} \boldsymbol{x}_{i}}{\sum_{i} q_{i k}} \quad \boldsymbol{\Sigma}_{k}=\frac{\sum_{i} q_{i k}\left(\boldsymbol{x}_{i}-\boldsymbol{\mu}_{k}\right)\left(\boldsymbol{x}_{i}-\boldsymbol{\mu}_{k}\right)^{T}}{\sum_{i} q_{i k}}$$
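
    A minimal sketch of both steps for a Gaussian mixture (my own illustration; it uses a crude random initialization instead of k-means):

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    def em_gmm(X, K, iters=50, seed=0):
        """EM for a Gaussian mixture model."""
        rng = np.random.default_rng(seed)
        N, D = X.shape
        pi = np.full(K, 1 / K)
        mu = X[rng.choice(N, K, replace=False)]          # crude initialization
        Sigma = np.array([np.eye(D)] * K)
        for _ in range(iters):
            # E-step: responsibilities q_ik via Bayes' rule
            q = np.stack([pi[k] * multivariate_normal.pdf(X, mu[k], Sigma[k])
                          for k in range(K)], axis=1)
            q /= q.sum(axis=1, keepdims=True)
            # M-step: weighted maximum likelihood estimates
            Nk = q.sum(axis=0)
            pi = Nk / N
            mu = (q.T @ X) / Nk[:, None]
            for k in range(K):
                d = X - mu[k]
                Sigma[k] = (q[:, k, None] * d).T @ d / Nk[k]
        return pi, mu, Sigma
    ```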

  • What is the biggest problem of mixture models?

    We are left with the model selection problem of choosing the number and kind of distributions to mix.

  • How does EM decompose the marginal likelihood?

    $$\log p(\boldsymbol{x} \mid \boldsymbol{\theta})=\mathcal{L}(q, \boldsymbol{\theta})+\mathrm{KL}(q(z) \,\|\, p(z \mid \boldsymbol{x}))$$

  • Why does EM always improve the lower bound?

    In the expectation step, we minimize the KL term of the marginal log-likelihood $\log p(\boldsymbol{x} \mid \boldsymbol{\theta})=\mathcal{L}(q, \boldsymbol{\theta})+\mathrm{KL}(q(z) \,\|\, p(z \mid \boldsymbol{x}))$ without changing the marginal, as it does not depend on $q$. Therefore, the lower bound increases. The maximization step then maximizes the lower bound directly.

  • Why does EM always improve the marginal likelihood?

    In the previous answer, we argued that the expectation step does not change the marginal likelihood. In the maximization step, we increase the lower bound, and the KL term can only grow from zero, so the marginal likelihood increases.

  • Why can we optimize each mixture component independently with EM?

    We have an independent loss function for every component k, because the M-step objective decomposes into a sum over the components.

    $$\boldsymbol{\theta}=\underset{\boldsymbol{\theta}}{\arg \max } \sum_{i} \sum_{k} q_{i k} \log p(k)\, p\left(\boldsymbol{x}_{i} \mid k\right)$$

  • Why do we need sampling for continuous latent variables?

    Because there is usually no analytical solution for the lower-bound integral

    $$\sum_{i}\left(\int_{z} q_{i}(z) \log p\left(\boldsymbol{x}_{i}, z \mid \boldsymbol{\theta}\right) d z-\int_{z} q_{i}(z) \log q_{i}(z) d z\right)$$

    so we do a Monte Carlo estimation of it.