Both models come from the paper Learning To Detect Unseen Object Classes by Between-Class Attribute Transfer.
Readers who are comfortable with English can refer to the description in the original paper.
The paper Online incremental attribute-based zero-shot learning describes the two models as follows:
The difference between DAP and IAP is the relationship between training classes and testing classes. In DAP, all classes are treated equally based on the attributes’ results in the attribute layer. On the contrary, by indirectly learning the attributes in IAP, the attributes’ values are induced from the training classes, which are intermediate feature layers.
The following is excerpted from 天涯惟笑's blog:
DAP can be understood as a three-layer model. The first layer is the raw input layer, for example a digital image (which can be described in terms of its pixels); the second layer is a p-dimensional attribute space, where each dimension corresponds to one attribute (for example, has a tail, has fur, and so on); the third layer is the output layer, which outputs the model's predicted class for the input sample. Between the first and second layers, p classifiers are trained, each judging whether an image possesses the attribute corresponding to one dimension of the p-dimensional attribute space; between the second and third layers, there is a knowledge base that stores the correspondence between the p-dimensional attribute space and the output y.
Simply put, one classifier is trained for each attribute of the input, and the trained models are then used for attribute prediction. At test time, the attributes of a test sample are predicted, and the class whose attribute vector is closest to the prediction is then selected from the attribute space.
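To make the two-stage prediction concrete, here is a minimal sketch of DAP inference in Python/NumPy. It assumes p pre-trained binary attribute classifiers with an sklearn-style predict_proba and a binary class-attribute matrix for the unseen classes; names such as attribute_clfs and class_attribute_matrix are illustrative, not taken from the paper, and the class scoring is a simplified likelihood product rather than the paper's exact MAP rule:

```python
import numpy as np

def dap_predict(x, attribute_clfs, class_attribute_matrix, test_classes):
    """Direct Attribute Prediction for one test sample x (1-D feature vector).

    attribute_clfs:         list of p binary classifiers (layer 1 -> layer 2),
                            each with an sklearn-style predict_proba method.
    class_attribute_matrix: (num_test_classes, p) binary matrix; row z is the
                            attribute signature of unseen class z (layer 2 -> 3).
    test_classes:           names of the unseen classes, aligned with the rows.
    """
    # Layer 1 -> layer 2: predict each attribute independently from the image.
    p_attr = np.array([clf.predict_proba(x.reshape(1, -1))[0, 1]
                       for clf in attribute_clfs])            # shape (p,)

    # Layer 2 -> layer 3: score each unseen class by how well its attribute
    # signature agrees with the predicted attribute probabilities
    # (a simplified likelihood product; the paper uses a MAP rule).
    scores = [np.prod(np.where(a_z == 1, p_attr, 1.0 - p_attr))
              for a_z in class_attribute_matrix]

    return test_classes[int(np.argmax(scores))]
```

The key point of DAP is that the attribute classifiers are trained once on the seen classes and reused unchanged; only the class-attribute matrix needs to be supplied for whatever unseen classes are to be recognized.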
In contrast, there is very little written about the IAP model. In one article introducing ZSL (DeepLearning | Zero Shot Learning 零样本学习 - 冯良骏's blog on CSDN), the author writes:
Indirect attribute prediction (IAP)
This method uses two layers of labels, with the attribute layer serving as an intermediate layer. It is rarely used in practice, so it is not covered in detail here.
The model is simply skipped over.
However, the paper I have been reading recently, Online Incremental Attribute-based Zero-shot Learning, relies mainly on the IAP model, which makes a proper understanding of IAP necessary.
Indirect attribute prediction (IAP), depicted in Figure 2(c), also uses the attributes to transfer knowledge between classes, but the attributes form a connecting layer between two layers of labels, one for classes that are known at training time and one for classes that are not. The training phase of IAP is ordinary multi-class classification. At test time, the predictions for all training classes induce a labeling of the attribute layer, from which a labeling over the test classes can be inferred.
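As a rough illustration of that test-time procedure, the sketch below (again Python/NumPy, with hypothetical variable names; not the paper's reference implementation) first obtains a posterior over the training classes from an ordinary multi-class classifier, uses it to induce attribute probabilities, and then scores the unseen classes by their attribute signatures:

```python
import numpy as np

def iap_predict(x, train_clf, train_class_attributes,
                test_class_attributes, test_classes):
    """Indirect Attribute Prediction for one test sample x (1-D feature vector).

    train_clf:              multi-class classifier over the *training* classes,
                            with an sklearn-style predict_proba method.
    train_class_attributes: (num_train_classes, p) binary matrix; row k is the
                            attribute signature of training class k.
    test_class_attributes:  (num_test_classes, p) binary matrix for the
                            unseen classes.
    test_classes:           names of the unseen classes, aligned with the rows.
    """
    # Ordinary multi-class prediction over the known training classes.
    p_train = train_clf.predict_proba(x.reshape(1, -1))[0]    # (num_train,)

    # The training-class posterior induces the attribute layer:
    # p(a_m | x) = sum_k p(a_m | y_k) * p(y_k | x).
    p_attr = p_train @ train_class_attributes                 # shape (p,)

    # Infer the unseen class from the induced attribute probabilities,
    # using the same signature-matching score as in the DAP sketch above.
    scores = [np.prod(np.where(a_z == 1, p_attr, 1.0 - p_attr))
              for a_z in test_class_attributes]

    return test_classes[int(np.argmax(scores))]
```

Note that the only component actually trained in IAP is the multi-class classifier over the training classes; the attribute layer is never trained directly but is derived at test time from the training-class posterior, which is exactly the "indirect" part of the name.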