Review Note
Last Update: 10/08/2023 02:57 PM
Current Deck: EEN100: Theory and exercises
Pending Suggestions
Field Change Suggestions:
Front
Commit #21167
PAC learnability
Back
A hypothesis class \(\mathcal H\) is PAC learnable with respect to a set \(\mathcal Z\) and a loss function \(\ell: \mathcal H \times \mathcal Z \to \mathbb R_+\)
- if there exist a sample complexity function \(m_{\mathcal H}: (0,1)^2 \to \mathbb N\) and a learning algorithm such that
- for every \(\epsilon, \delta \in (0,1)\) and every distribution \(P_{\mathcal Z}\) over \(\mathcal Z\),
- when the algorithm is run on \(m \ge m_{\mathcal H}(\epsilon, \delta)\) i.i.d. samples from \(P_{\mathcal Z}\),
- it returns an \(h \in \mathcal H\) satisfying the PAC guarantee: with probability at least \(1 - \delta\) over the draw of the samples, \(L_{P_{\mathcal Z}}(h) \le \min_{h' \in \mathcal H} L_{P_{\mathcal Z}}(h') + \epsilon\), where \(L_{P_{\mathcal Z}}(h) = \mathbb E_{z \sim P_{\mathcal Z}}[\ell(h, z)]\) is the true risk.
The property is not trivial: for example, the class of all binary classifiers over an infinite domain is not PAC learnable (a consequence of the No-Free-Lunch theorem).
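As a concrete positive instance, a finite hypothesis class is PAC learnable via empirical risk minimization (ERM), with a sample complexity of order \(\log(|\mathcal H|/\delta)/\epsilon^2\) for the 0-1 loss. The sketch below illustrates the definition on a toy problem of threshold classifiers; the specific bound used, the noise level, and the helper names are illustrative assumptions, not part of the card.

```python
import math
import random

def sample_complexity(num_hypotheses, eps, delta):
    """A sufficient m for a finite class under 0-1 loss (uniform convergence bound)."""
    return math.ceil(2 * math.log(2 * num_hypotheses / delta) / eps**2)

def erm(hypotheses, samples, loss):
    """Empirical risk minimization: return the hypothesis with minimal average loss."""
    return min(hypotheses, key=lambda h: sum(loss(h, z) for z in samples) / len(samples))

# Toy setup (assumed, for illustration): threshold classifiers on [0, 1),
# samples z = (x, y) with 10% label noise around a true threshold of 0.5.
random.seed(0)
hypotheses = [i / 20 for i in range(21)]            # candidate thresholds
loss = lambda t, z: int((z[0] >= t) != z[1])        # 0-1 loss on a sample (x, y)

def draw(m):
    data = []
    for _ in range(m):
        x = random.random()
        y_clean = int(x >= 0.5)
        y = y_clean if random.random() > 0.1 else 1 - y_clean
        data.append((x, y))
    return data

m = sample_complexity(len(hypotheses), eps=0.1, delta=0.05)
h = erm(hypotheses, draw(m), loss)
print(m, h)
```

With \(\epsilon = 0.1\) and \(\delta = 0.05\), the bound asks for about 1350 samples, and the returned threshold lands near the risk minimizer 0.5, as the definition guarantees with probability at least \(1 - \delta\).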