Virtual Class Enhanced Discriminative Embedding Learning. NIPS. 2018. Virtual Softmax 1. Method The Virtual Softmax proposed in this paper is a classifier derived from Softmax. First, the Softmax formula is: $$ \frac{e^{W^{T}_{y_i}X_i}}{\sum^C_{j=1}e^{W^{T}_{j}X_i}} $$ As the formula shows, Softmax is computed from the inner product \(Wx\) of the weights and the input feature. During training, a penalty encourages \( W^T_{y_i}X_i > \max_{j \in C, j \neq y_i}(W^T_jX_i)\), so that the input is assigned to the correct class. However, Softmax .. 2021. 7. 28.
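The Softmax computation described in the preview above can be sketched in a few lines; a minimal NumPy version (variable names are illustrative, not from the paper):

```python
import numpy as np

# Softmax over class logits W_j^T x, as in the formula above.
# W: (d, C) class weight matrix, x: (d,) feature vector, y: true class index.
def softmax_prob(W, x, y):
    logits = W.T @ x            # inner products W_j^T x for every class j
    logits -= logits.max()      # subtract max for numerical stability
    exp = np.exp(logits)
    return exp[y] / exp.sum()   # p(y | x)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 5))
x = rng.normal(size=8)
p = softmax_prob(W, x, 2)
# Training pushes W_y^T x above max_{j != y} W_j^T x, driving p(y|x) toward 1.
```

The penalty condition in the text is exactly the statement that the true-class logit must dominate every other logit, which is what cross-entropy training on these probabilities enforces.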
Deep One-Class Classification. ICML. 2018. (2/2) DSVDD: Deep Support Vector Data Description 3. Properties of Deep SVDD Proposition 1: All-zero-weights solution. Let \(\mathcal{W}_{0}\) be the set of all-zero network weights, i.e., \(\boldsymbol{W}^{l} = \boldsymbol{0}\) for every \(\boldsymbol{W}^{l} \in \mathcal{W}_{0}\). For this choice of parameters, the network maps any input to the same output, i.e., \(\phi(\boldsymbol{x};\mathcal{W}_{0}.. 2021. 7. 5.
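Proposition 1 above can be checked numerically: with all-zero weights (and no bias terms), a feed-forward network maps every input to the same constant output, so the Deep SVDD radius collapses to zero. A hedged sketch, assuming a plain ReLU network without biases:

```python
import numpy as np

# All-zero-weights solution from Proposition 1: every layer outputs the
# zero vector, so phi(x; W_0) is the same for any input x.
def forward(x, weights):
    h = x
    for W in weights:
        h = np.maximum(W @ h, 0.0)  # ReLU layer, no bias term
    return h

zero_weights = [np.zeros((4, 6)), np.zeros((2, 4))]
out1 = forward(np.random.default_rng(1).normal(size=6), zero_weights)
out2 = forward(np.random.default_rng(2).normal(size=6), zero_weights)
# Both outputs equal the zero vector regardless of the input:
# this is the degenerate "hypersphere collapse" the proposition warns about.
```

This is why Deep SVDD must exclude bias terms and fix the center away from such trivial constant mappings.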
Deep One-Class Classification. ICML. 2018. (1/2) DSVDD: Deep Support Vector Data Description 1. Background 1.1 Kernel-based One-Class Classification. Let \(\mathcal{X} \subset \mathbb{R}^d\) be the data space. Let \( k : \mathcal{X} \times \mathcal{X} \rightarrow [0, \infty)\) be a PSD kernel, \(\mathcal{F}_{k}\) its associated RKHS, and \(\phi_{k} : \mathcal{X} \rightarrow \mathcal{F}_{k}\) its associated feature mapping. So \(k(\boldsymbol{x}.. 2021. 6. 29.
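The PSD kernel setup in the preview can be illustrated concretely: a Gaussian RBF kernel maps into \([0, \infty)\), and positive semi-definiteness means its Gram matrix on any finite sample has non-negative eigenvalues. A minimal sketch (the RBF choice and `gamma` value are illustrative assumptions, not from the paper):

```python
import numpy as np

# Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2), a standard PSD
# kernel whose values lie in (0, 1] subset [0, inf).
def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return np.exp(-gamma * d2)

X = np.random.default_rng(0).normal(size=(10, 3))
K = rbf_kernel(X, X)                 # Gram matrix: inner products in F_k
eigs = np.linalg.eigvalsh(K)         # non-negative (up to round-off) since k is PSD
```

Kernel one-class methods such as OC-SVM and SVDD operate entirely through this Gram matrix, never needing the feature map \(\phi_k\) explicitly.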