Week 2
I met with Professor Verma on May 18 as I continue to gather background on metric learning. In addition to MMC and LMNN, we discussed the paper on Neighbourhood Components Analysis (Goldberger et al., 2005), which learns a Mahalanobis distance metric for k-Nearest Neighbor classification, and t-Distributed Stochastic Neighbor Embedding (van der Maaten and Hinton, 2008), which converts high-dimensional data into a matrix of pairwise similarities so it can be visualized in two or three dimensions. To solidify my understanding of the underlying math, Professor Verma asked me to write out the derivation of the NCA cost function and to prove that the conic constraint restricting a matrix to be positive semi-definite is convex while a rank constraint on a square matrix is not; rough sketches of both are below.
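As a starting point for the derivation, here is the NCA objective as I understand it from Goldberger et al. (2005); the notation (a linear map A, points x_i, and the set C_i of points sharing x_i's class) follows the paper. Each point i selects a neighbor j with a softmax over squared distances in the projected space,

\[
p_{ij} = \frac{\exp\!\left(-\lVert A x_i - A x_j \rVert^2\right)}{\sum_{k \neq i} \exp\!\left(-\lVert A x_i - A x_k \rVert^2\right)}, \qquad p_{ii} = 0,
\]

so the expected number of correctly classified points is

\[
f(A) = \sum_i p_i, \qquad p_i = \sum_{j \in C_i} p_{ij}, \qquad C_i = \{\, j \mid c_j = c_i \,\},
\]

and maximizing f(A) implicitly learns the Mahalanobis metric M = A^\top A, since \(\lVert A x_i - A x_j \rVert^2 = (x_i - x_j)^\top A^\top A\, (x_i - x_j)\).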
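For the convexity proof, my plan is roughly the following sketch. The PSD cone is convex: if \(M_1 \succeq 0\) and \(M_2 \succeq 0\), then for any \(\lambda \in [0,1]\) and any vector v,

\[
v^\top \bigl(\lambda M_1 + (1-\lambda) M_2\bigr) v = \lambda\, v^\top M_1 v + (1-\lambda)\, v^\top M_2 v \;\geq\; 0,
\]

so every convex combination of PSD matrices stays in the cone. A rank constraint fails this: with standard basis vectors \(e_1, e_2\), the matrices \(M_1 = e_1 e_1^\top\) and \(M_2 = e_2 e_2^\top\) both have rank 1, yet

\[
\tfrac{1}{2} M_1 + \tfrac{1}{2} M_2 = \tfrac{1}{2} I_2
\]

has rank 2, so the set \(\{ M : \operatorname{rank}(M) \leq 1 \}\) is not convex.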