What does `torch.nn.CosineEmbeddingLoss()` really do?

The loss is built on the cosine similarity between the two input vectors:

similarity = \cos(\theta) = \frac{\vec{A} \cdot \vec{B}}{|\vec{A}||\vec{B}|}

For each sample, the loss is defined piecewise on the target y: it is 1 - \cos(x_1, x_2) when y = 1, and \max(0, \cos(x_1, x_2) - \text{margin}) when y = -1 (margin defaults to 0).

import torch

def CustomCosineEmbeddingLoss(x1, x2, target):
    # Reproduces the built-in loss only when every target is 1,
    # where the per-sample loss is 1 - cos(x1, x2).
    x1_norm = torch.sqrt(torch.sum(x1 * x1, dim=1))  # |x1|
    x2_norm = torch.sqrt(torch.sum(x2 * x2, dim=1))  # |x2|
    cos_x1_x2 = torch.sum(x1 * x2, dim=1) / (x1_norm * x2_norm)
    return torch.mean(target - cos_x1_x2)  # with target = 1, this is mean(1 - cos)
   
crit = torch.nn.CosineEmbeddingLoss(reduction="mean")
x1 = torch.randn((5, 3))
x2 = torch.randn((5, 3))
target = torch.ones(5)  # all-positive targets

a1 = crit(x1, x2, target)
print(a1)
a2 = CustomCosineEmbeddingLoss(x1, x2, target)
print(a2)
# Out[11]:
# tensor(1.0479)
# tensor(1.0479)
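The custom function above only matches the built-in loss when all targets are 1. As a sketch (my own, not from the original post), here is a version that also handles target = -1, following the piecewise definition, and checks it against the built-in with mixed targets:

```python
import torch
import torch.nn.functional as F

def full_cosine_embedding_loss(x1, x2, target, margin=0.0):
    # per-sample cosine similarity
    cos = F.cosine_similarity(x1, x2, dim=1)
    # y = 1: 1 - cos ; y = -1: max(0, cos - margin)
    loss = torch.where(target == 1, 1 - cos, torch.clamp(cos - margin, min=0.0))
    return loss.mean()

torch.manual_seed(0)
x1 = torch.randn(5, 3)
x2 = torch.randn(5, 3)
target = torch.tensor([1, -1, 1, -1, 1])

builtin = torch.nn.CosineEmbeddingLoss(reduction="mean")
print(builtin(x1, x2, target))
print(full_cosine_embedding_loss(x1, x2, target))
```

The two printed values should agree to floating-point precision for any mix of 1 and -1 targets.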