Zero-Shot Learning
Transductive Zero-Shot Learning Setting
In the transductive setting, we are additionally provided with unlabeled images from the unseen classes during training.
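A minimal sketch of what this means for the training data (file names and loading code are placeholders, not a real API):

```python
# Sketch of the data available in the transductive ZSL setting (paths are hypothetical).
import numpy as np

# Seen classes: image features WITH labels.
X_seen = np.load("feat_seen.npy")        # (N_s, d) image features
y_seen = np.load("labels_seen.npy")      # (N_s,)  class indices

# Unseen classes: image features are available during training, but WITHOUT labels.
X_unseen_unlabeled = np.load("feat_unseen.npy")  # (N_u, d)

# Class-level semantic attributes for both seen and unseen classes.
attrs = np.load("class_attributes.npy")  # (num_classes, num_attributes)
```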
CADA-VAE: Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders (CVPR 2019)
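CADA-VAE trains one VAE per modality (image features and class attributes) and aligns their latent spaces with a cross-reconstruction loss and a distribution-alignment loss. Below is a simplified sketch of that idea; layer sizes, loss weights, and class/function names are illustrative, not the authors' released code.

```python
# Simplified sketch of CADA-VAE-style aligned VAEs (illustrative, not the official implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim, latent_dim=64, hidden=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, in_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

img_vae = VAE(2048)   # e.g. ResNet-101 image features
attr_vae = VAE(85)    # e.g. AWA class attributes

def cada_vae_loss(x_img, x_attr, beta=1.0, gamma=1.0, delta=1.0):
    mu_i, lv_i = img_vae.encode(x_img)
    mu_a, lv_a = attr_vae.encode(x_attr)
    z_i = img_vae.reparameterize(mu_i, lv_i)
    z_a = attr_vae.reparameterize(mu_a, lv_a)

    # Within-modality VAE terms: reconstruction + KL divergence.
    recon = F.l1_loss(img_vae.dec(z_i), x_img) + F.l1_loss(attr_vae.dec(z_a), x_attr)
    kl = -0.5 * torch.mean(1 + lv_i - mu_i.pow(2) - lv_i.exp()) \
         -0.5 * torch.mean(1 + lv_a - mu_a.pow(2) - lv_a.exp())

    # Cross-alignment: decode each modality from the OTHER modality's latent code.
    cross = F.l1_loss(img_vae.dec(z_a), x_img) + F.l1_loss(attr_vae.dec(z_i), x_attr)

    # Distribution alignment: 2-Wasserstein-style distance between the two latent Gaussians.
    da = (mu_i - mu_a).pow(2).sum(1) + \
         ((0.5 * lv_i).exp() - (0.5 * lv_a).exp()).pow(2).sum(1)

    return recon + beta * kl + gamma * cross + delta * da.sqrt().mean()
```

After training, latent features sampled from the attribute VAE for unseen classes can be used to train an ordinary classifier, which is what makes the generalized ZSL evaluation possible.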
Experiments
To train on your own dataset, first extract image features; see https://github.com/hbdat/cvpr20_DAZLE and the sketch below.
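A hedged sketch of extracting ResNet-101 features (the backbone commonly used for ZSL benchmarks); the linked repo documents the authors' exact pipeline, which may differ (e.g. it may keep spatial region features rather than pooled ones):

```python
# Extract pooled ResNet-101 features for your own images (common ZSL setup; check the repo for specifics).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet101(pretrained=True)
model.fc = torch.nn.Identity()   # drop the classifier head, keep the 2048-d pooled features
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_feature(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return model(img).squeeze(0)   # shape: (2048,)
```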
Continuous attributes: https://arxiv.org/pdf/1503.08677.pdf
In AWA, each class was annotated with 85 attributes by 10 students [42]. Continuous class-attribute associations were obtained by averaging the per-student votes and subsequently thresholded to obtain binary attributes.
In CUB, 312 attributes were obtained from a bird field guide. Each image was annotated according to the presence/absence of these attributes. The per-image attributes were averaged to obtain continuous-valued class-attribute associations and thresholded with respect to the overall mean to obtain binary attributes, as sketched below.
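An illustrative numpy sketch of both conversions described above; array shapes and the AWA threshold (the overall mean) are assumptions for illustration:

```python
# Turning raw attribute annotations into continuous and binary class-attribute matrices.
import numpy as np

# AWA-style: per-student votes of shape (num_students, num_classes, num_attributes).
votes = np.random.rand(10, 50, 85)                          # dummy data
continuous_awa = votes.mean(axis=0)                         # average per-student votes
binary_awa = (continuous_awa > continuous_awa.mean()).astype(int)   # threshold (assumed: overall mean)

# CUB-style: per-image presence/absence annotations plus each image's class label.
per_image = np.random.randint(0, 2, size=(11788, 312))      # dummy data
image_class = np.random.randint(0, 200, size=11788)
continuous_cub = np.stack([per_image[image_class == c].mean(axis=0) for c in range(200)])
binary_cub = (continuous_cub > continuous_cub.mean()).astype(int)   # threshold at the overall mean
```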