cs231n assignment2 - Python file: layers.py

In the lecture video, Andrej Karpathy said this assignment is "meaty but educational", and it really is meaty. The assignments generally consist of .ipynb files and .py files. This time, because each .ipynb file touches several .py files and the files overlap with each other, each post contains only a single .ipynb or .py file. (Earlier assignments had a one-to-one mapping between .ipynb and .py files, so I combined them into one post.)
As always, if you spot any mistakes, please point them out. Thanks.

Contents of layers.py:


import numpy as np

from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.layer_utils import *


class ThreeLayerConvNet(object):
  """
  A three-layer convolutional network with the following architecture:

  conv - relu - 2x2 max pool - affine - relu - affine - softmax

  The network operates on minibatches of data that have shape (N, C, H, W)
  consisting of N images, each with height H and width W and with C input
  channels.
  """

  def __init__(self, input_dim=(3, 32, 32), num_filters=32, filter_size=7,
               hidden_dim=100, num_classes=10, weight_scale=1e-3, reg=0.0,
               dtype=np.float32):
    """
    Initialize a new network.

    Inputs:
    - input_dim: Tuple (C, H, W) giving size of input data
    - num_filters: Number of filters to use in the convolutional layer
    - filter_size: Size of filters to use in the convolutional layer
    - hidden_dim: Number of units to use in the fully-connected hidden layer
    - num_classes: Number of scores to produce from the final affine layer.
    - weight_scale: Scalar giving standard deviation for random initialization
      of weights.
    - reg: Scalar giving L2 regularization strength
    - dtype: numpy datatype to use for computation.
    """
    self.params = {}
    self.reg = reg
    self.dtype = dtype

    ############################################################################
    # TODO: Initialize weights and biases for the three-layer convolutional    #
    # network. Weights should be initialized from a Gaussian with standard     #
    # deviation equal to weight_scale; biases should be initialized to zero.   #
    # All weights and biases should be stored in the dictionary self.params.   #
    # Store weights and biases for the convolutional layer using the keys 'W1' #
    # and 'b1'; use keys 'W2' and 'b2' for the weights and biases of the       #
    # hidden affine layer, and keys 'W3' and 'b3' for the weights and biases   #
    # of the output affine layer.                                              #
    ############################################################################
    self.params['W1'] = weight_scale * np.random.randn(num_filters,\
                                                        input_dim[0],\
                                                        filter_size,\
                                                        filter_size)
    self.params['b1'] = np.zeros(num_filters)
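    # The conv layer pads with (filter_size - 1) / 2, which preserves H and W;
    # the 2x2 stride-2 max pool then halves each spatial dimension, so the
    # flattened conv output has num_filters * (H/2) * (W/2) = num_filters*H*W/4
    # features, which is the input size of the hidden affine layer W2.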
    self.params['W2'] = weight_scale * np.random.randn(\
                        num_filters * input_dim[1] * input_dim[2] // 4, \
                        hidden_dim)
    self.params['b2'] = np.zeros(hidden_dim)
    self.params['W3'] = weight_scale * np.random.randn(hidden_dim, num_classes)
    self.params['b3'] = np.zeros(num_classes)
    ############################################################################
    #                             END OF YOUR CODE                             #
    ############################################################################

    for k, v in self.params.iteritems():
      self.params[k] = v.astype(dtype)


  def loss(self, X, y=None):
    """
    Evaluate loss and gradient for the three-layer convolutional network.

    Input / output: Same API as TwoLayerNet in fc_net.py.
    """
    W1, b1 = self.params['W1'], self.params['b1']
    W2, b2 = self.params['W2'], self.params['b2']
    W3, b3 = self.params['W3'], self.params['b3']

    # pass conv_param to the forward pass for the convolutional layer
    filter_size = W1.shape[2]
    conv_param = {'stride': 1, 'pad': (filter_size - 1) // 2}

    # pass pool_param to the forward pass for the max-pooling layer
    pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}

    scores = None
    ############################################################################
    # TODO: Implement the forward pass for the three-layer convolutional net,  #
    # computing the class scores for X and storing them in the scores          #
    # variable.                                                                #
    ############################################################################
    a1, cache1 = conv_relu_pool_forward(X, W1, b1, conv_param, pool_param)
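    # a1 has shape (N, num_filters, H/2, W/2); flatten it to (N, D) before the
    # hidden affine layer.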
    a1_reshape = a1.reshape(a1.shape[0], -1)
    a2, cache2 = affine_relu_forward(a1_reshape, W2, b2)
    scores, cache3 = affine_forward(a2, W3, b3)
    ############################################################################
    #                             END OF YOUR CODE                             #
    ############################################################################

    if y is None:
      return scores

    loss, grads = 0, {}
    ############################################################################
    # TODO: Implement the backward pass for the three-layer convolutional net, #
    # storing the loss and gradients in the loss and grads variables. Compute  #
    # data loss using softmax, and make sure that grads[k] holds the gradients #
    # for self.params[k]. Don't forget to add L2 regularization!               #
    ############################################################################
    loss_without_reg, dscores = softmax_loss(scores, y)
    loss = loss_without_reg + \
            0.5 * self.reg * (np.sum(W1**2) + np.sum(W2**2) + np.sum(W3**2))
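    # The caches store each layer's inputs and weights, so the weights can be
    # read back out for the L2 regularization gradients below: cache3[1] is W3,
    # cache2[0][1] is W2, and cache1[0][1] is W1.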
    da2, grads['W3'], grads['b3'] = affine_backward(dscores, cache3)
    grads['W3'] += self.reg * cache3[1]
    da1, grads['W2'], grads['b2'] = affine_relu_backward(da2, cache2)
    grads['W2'] += self.reg * cache2[0][1]
    da1_reshape = da1.reshape(*a1.shape)
    dx, grads['W1'], grads['b1'] = conv_relu_pool_backward(da1_reshape, cache1)
    grads['W1'] += self.reg * cache1[0][1]
    ############################################################################
    #                             END OF YOUR CODE                             #
    ############################################################################

    return loss, grads

################################################################################
#                             add by xieyi                                     #
################################################################################

class ConvNetArch_1(object):
  """
  A arbitrary number of hidden layers convolutional network with the following
  architecture:

  [conv-relu-pool]xN - [conv - relu] - [affine]xM - [softmax or SVM]

  [conv-relu-pool]XN - [affine]XM - [softmax or SVM]

  ' - [conv - relu] - ' can be trun on/off by parameter 'connect_conv'
  both architecture can do with or without batch normalization

  The network operates on minibatches of data that have shape (N, C, H, W)
  consisting of N images, each with height H and width W and with C input
  channels.
  The learnable parameters of the model are stored in the dictionary self.params
  that maps parameter names to numpy arrays.
  """

  def __init__(self, conv_dims, hidden_dims, input_dim=(3, 32, 32), connect_conv = 0,
               use_batchnorm=False, num_classes=10, loss_fuction='softmax',
               weight_scale=1e-3, reg=0.0, dtype=np.float32):
    """
    Initialize a new network.

    Inputs:
    - input_dim: Tuple (C, H, W) giving size of input data
    - conv_dims: List of N tuples (num_filters, filter_size) giving the number
      of filters and the filter size of each convolutional layer
    - connect_conv: 0 to disable the extra conv-relu block; otherwise a tuple
      (num_filters, filter_size) for the conv-relu block that connects
      [conv-relu-pool]xN and [affine]xM
    - hidden_dims: List of the numbers of units in the fully-connected hidden
      layers
    - num_classes: Number of scores to produce from the final affine layer.
    - loss_fuction: Which loss to use: 'softmax' or 'svm_loss'
    - weight_scale: Scalar giving standard deviation for random initialization
      of weights.
    - reg: Scalar giving L2 regularization strength
    - dtype: numpy datatype to use for computation.
    """
    self.num_conv_layers = len(conv_dims)
    self.num_fc_layers = len(hidden_dims)
    self.use_connect_conv = connect_conv>0
    self.use_batchnorm = use_batchnorm
    self.loss_fuction = loss_fuction
    self.reg = reg
    self.dtype = dtype
    self.params = {}
    ############################################################################
    # Initialize weights and biases for the  convolutional                     #
    # network. Weights should be initialized from a Gaussian with standard     #
    # deviation equal to weight_scale; biases should be initialized to zero.   #
    # All weights and biases should be stored in the dictionary self.params.   #
    # Store weights and biases for the convolutional layer using the keys 'CW1'#
    # and 'cb1'; use keys 'FW1' and 'fb1' for the weights and biases of the    #
    # fully-connected layers, and 'CCW' and 'ccb' for the connect_conv layer.  #
    ############################################################################

    # initialize conv_layers:
    for i in xrange(self.num_conv_layers):
      if i==0:
        self.params['CW1'] = weight_scale * np.random.randn(\
                                  conv_dims[i][0],\
                                  input_dim[0],\
                                  conv_dims[i][1],\
                                  conv_dims[i][1])
        self.params['cb1'] = np.zeros(conv_dims[i][0])
        #print self.params['CW1'].shape,self.params['cb1'].shape
        if use_batchnorm:
          self.params['cgamma1'] = np.ones(conv_dims[i][0])
          self.params['cbeta1'] = np.zeros(conv_dims[i][0])

      else:
        self.params['CW'+str(i+1)] = weight_scale * np.random.randn(\
                                  conv_dims[i][0],\
                                  conv_dims[i-1][0],\
                                  conv_dims[i][1],\
                                  conv_dims[i][1])
        self.params['cb'+str(i+1)] = np.zeros(conv_dims[i][0])
        if use_batchnorm:
          self.params['cgamma'+str(i+1)] = np.ones(conv_dims[i][0])
          self.params['cbeta'+str(i+1)] = np.zeros(conv_dims[i][0])

    if self.use_connect_conv:
      self.params['CCW'] = weight_scale * np.random.randn(\
                                  connect_conv[0],\
                                  conv_dims[-1][0],\
                                  connect_conv[1],\
                                  connect_conv[1])
      self.params['ccb'] = np.zeros(connect_conv[0])
      if self.use_batchnorm:
        self.params['ccgamma'] = np.ones(connect_conv[0])
        self.params['ccbeta'] = np.zeros(connect_conv[0])

    #initialize affine layers:
    for i in xrange(self.num_fc_layers):
        if i == 0:
          #initialize first affine layers
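          # Each [conv-relu-pool] block keeps H and W (pad = (filter_size-1)/2)
          # and then halves them with the 2x2 stride-2 pool, so after N blocks
          # the spatial area shrinks by a factor of 4**N; that gives the
          # flattened input size of the first affine layer.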
          if self.use_connect_conv:
            self.params['FW'+str(i+1)] = weight_scale * np.random.randn(
               connect_conv[0]*input_dim[1]*input_dim[2]//4**self.num_conv_layers,
               hidden_dims[i])
          else:
            self.params['FW'+str(i+1)] = weight_scale * np.random.randn(
               conv_dims[-1][0]*input_dim[1]*input_dim[2]//4**self.num_conv_layers,
               hidden_dims[i])
          self.params['fb'+str(i+1)] = np.zeros(hidden_dims[i])
          #initialize first batch normalize layers
          if self.use_batchnorm:
            self.params['fgamma'+str(i+1)] = np.ones(hidden_dims[i])
            self.params['fbeta'+str(i+1)] = np.zeros(hidden_dims[i])
        elif i == self.num_fc_layers-1:
          #initialize last affine layers
          self.params['FW'+str(i+1)] = \
              weight_scale * np.random.randn(hidden_dims[i-1], num_classes)
          self.params['fb'+str(i+1)] = np.zeros(num_classes)
          if self.use_batchnorm:
            self.params['fgamma'+str(i+1)] = np.ones(num_classes)
            self.params['fbeta'+str(i+1)] = np.zeros(num_classes)
        else:
          #initialize  affine layers
          self.params['FW'+str(i+1)] = \
              weight_scale * np.random.randn(hidden_dims[i-1], hidden_dims[i])
          self.params['fb'+str(i+1)] = np.zeros(hidden_dims[i])
          #initialize batch normalize layers
          if self.use_batchnorm:
            self.params['fgamma'+str(i+1)] = np.ones(hidden_dims[i])
            self.params['fbeta'+str(i+1)] = np.zeros(hidden_dims[i])

    # pass conv_params to the forward pass for the convolutional layer
    self.conv_params = []
    self.conv_params = [{'stride': 1, 'pad': (conv_dims[i][1] - 1) // 2}\
                        for i in xrange(self.num_conv_layers)]
    if self.use_connect_conv:
      self.conv_params.append({'stride': 1, 'pad': (connect_conv[1] - 1) // 2})
    # pass pool_param to the forward pass for the max-pooling layer
    self.pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
    # With batch normalization we need to keep track of running means and
    # variances, so we need to pass a special bn_param object to each batch
    # normalization layer. You should pass self.bn_params[0] to the forward pass
    # of the first batch normalization layer, self.bn_params[1] to the forward
    # pass of the second batch normalization layer, etc.
    self.sbn_params = []
    if self.use_batchnorm:
      self.sbn_params = [{'mode': 'train'} for i in xrange(self.num_conv_layers)]
      if self.use_connect_conv:
        self.sbn_params.append({'mode': 'train'})
    self.bn_params = []
    if self.use_batchnorm:
      self.bn_params = [{'mode': 'train'} for i in xrange(self.num_fc_layers)]

    # Cast all parameters to the correct datatype
    for k, v in self.params.iteritems():
      self.params[k] = v.astype(dtype)

  def loss(self, X, y=None):
    """
    Evaluate loss and gradient for the convolutional network.

    Input / output: Same API as TwoLayerNet in fc_net.py.
    """
    X = X.astype(self.dtype)
    mode = 'test' if y is None else 'train'

    # Set train/test mode for batchnorm params since they
    # behave differently during training and testing.
    if self.use_batchnorm:
      for bn_param in self.bn_params:
        bn_param['mode'] = mode
      for bn_param in self.sbn_params:
        bn_param['mode'] = mode
        # (the original starter code had bn_param[mode] = mode, i.e. it used
        # the value of mode as the dictionary key; bn_param['mode'] = mode
        # above is the corrected form)
      if self.use_connect_conv:
        self.sbn_params[-1]['mode'] = mode

    scores = None
    ############################################################################
    # Implement the forward pass for the convolutional net, computing the      #
    # class scores for X and storing them in the scores variable.              #
    ############################################################################
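    # Forward pipeline: [conv-(bn)-relu-pool] x num_conv_layers, then the
    # optional connect_conv block (conv-(bn)-relu), then
    # [affine-(bn)] x num_fc_layers. Every intermediate cache is appended to
    # `cache` so the backward pass can pop them off in reverse order.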
    a = X
    cache = []
    #conv_forward:
    for i in xrange(self.num_conv_layers):
      if self.use_batchnorm:
        a_temp, cache_temp = conv_bn_relu_pool_forward(a,\
                                        self.params['CW'+str(i+1)], \
                                        self.params['cb'+str(i+1)], \
                                        self.params['cgamma'+str(i+1)], \
                                        self.params['cbeta'+str(i+1)], \
                                        self.conv_params[i], \
                                        self.pool_param,\
                                        self.sbn_params[i])
        a = a_temp
        cache.append(cache_temp)
      else:
        a_temp, cache_temp = conv_relu_pool_forward(a,\
                                        self.params['CW'+str(i+1)], \
                                        self.params['cb'+str(i+1)], \
                                        self.conv_params[i], \
                                        self.pool_param)
        a = a_temp
        cache.append(cache_temp)

    #connect_conv_forward
    if self.use_connect_conv:
      if self.use_batchnorm:
        # conv_forward:
        a_temp, cache_temp = conv_forward_fast(a, \
                                        self.params['CCW'], \
                                        self.params['ccb'], \
                                        self.conv_params[-1])
        a = a_temp
        cache.append(cache_temp)

        # spatial_batchnorm_forward:
        a_temp, cache_temp = spatial_batchnorm_forward(a, \
                                        self.params['ccgamma'], \
                                        self.params['ccbeta'], \
                                        self.sbn_params[-1])
        a = a_temp
        cache.append(cache_temp)

        # ReLU forward:
        a_temp, cache_temp = relu_forward(a)
        a = a_temp
        cache.append(cache_temp)
      else:
        a_temp, cache_temp = conv_relu_forward(a, \
                                        self.params['CCW'], \
                                        self.params['ccb'], \
                                        self.conv_params[-1])
        a = a_temp
        cache.append(cache_temp)

    #affine forward:
    N = X.shape[0]
    x_temp_shape = a.shape
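    # Flatten the conv-shaped activation to (N, D) for the first affine layer;
    # for later affine layers `a` is already 2-D, so the reshape is a no-op.
    # x_temp_shape remembers the conv output shape for the backward pass.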
    for i in xrange(self.num_fc_layers):
      a_temp, cache_temp = affine_forward(a.reshape(N, -1), \
                                          self.params['FW'+str(i+1)], \
                                          self.params['fb'+str(i+1)])
      a = a_temp
      cache.append(cache_temp)
      if self.use_batchnorm:
        a_temp, cache_temp = batchnorm_forward(a, \
                                          self.params['fgamma'+str(i+1)], \
                                          self.params['fbeta'+str(i+1)],
                                          self.bn_params[i])
        a = a_temp
        cache.append(cache_temp)
    scores = a
    ############################################################################
    #                             END OF YOUR CODE                             #
    ############################################################################

    if mode == 'test':
      return scores

    loss, grads = 0, {}
    ############################################################################
    # Implement the backward pass for the convolutional net, storing the loss  #
    # and gradients in the loss and grads variables. Compute data loss using   #
    # softmax or svm_loss, and make sure that grads[k] holds the gradients for #
    # self.params[k]. Don't forget to add L2 regularization!                   #
    ############################################################################
    # compute loss:
    if self.loss_fuction=='softmax':
      loss_without_reg, dscores = softmax_loss(scores, y)
    elif self.loss_fuction=='svm_loss':
      loss_without_reg, dscores = svm_loss(scores, y)
    loss = loss_without_reg
    for i in xrange(self.num_conv_layers):
      loss += 0.5*self.reg*np.sum(self.params['CW'+str(i+1)]**2)
    if self.use_connect_conv:
      loss += 0.5*self.reg*np.sum(self.params['CCW']**2)
    for i in xrange(self.num_fc_layers):
      loss += 0.5*self.reg*np.sum(self.params['FW'+str(i+1)]**2)

    #### compute fully-connected layers grads{}
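    # The caches are popped off the end of the list (LIFO), so the layers are
    # unwound in exactly the reverse order of the forward pass.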
    dout = dscores
    for i in reversed(xrange(self.num_fc_layers)):
      if self.use_batchnorm:
        dout_temp, grads['fgamma'+str(i+1)], grads['fbeta'+str(i+1)] = \
                      batchnorm_backward(dout, cache.pop(-1))
        dout = dout_temp
      dout_temp, grads['FW'+str(i+1)], grads['fb'+str(i+1)] = \
                      affine_backward(dout, cache.pop(-1))
      dout = dout_temp
      # L2 regularization term for this layer's weights
      grads['FW'+str(i+1)] += self.reg * self.params['FW'+str(i+1)]

    # compute connect conv_layer grads{}
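    # Reshape the flat gradient back to the conv-output shape saved during the
    # forward pass before back-propagating through the conv layers.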
    dout = dout.reshape(x_temp_shape)
    if self.use_connect_conv:
      if self.use_batchnorm:
        dout_temp = relu_backward(dout, cache.pop(-1))
        dout = dout_temp
        dout_temp, grads['ccgamma'], grads['ccbeta'] = \
            spatial_batchnorm_backward(dout, cache.pop(-1))
        dout = dout_temp
        dout_temp, grads['CCW'], grads['ccb'] = \
          conv_backward_fast(dout, cache.pop(-1))
        dout = dout_temp
      else:
        dout_temp, grads['CCW'], grads['ccb'] = \
            conv_relu_backward(dout, cache.pop(-1))
        dout = dout_temp
      # L2 regularization term for the connect_conv weights
      grads['CCW'] += self.reg * self.params['CCW']

    # compute conv_layers grads{}
    for i in reversed(xrange(self.num_conv_layers)):
      if self.use_batchnorm:
        dout_temp, grads['CW'+str(i+1)], grads['cb'+str(i+1)], \
          grads['cgamma'+str(i+1)], grads['cbeta'+str(i+1)] =  \
                      conv_bn_relu_pool_backward(dout, cache.pop(-1))
        dout = dout_temp
      else:
        dout_temp, grads['CW'+str(i+1)], grads['cb'+str(i+1)] =  \
                      conv_relu_pool_backward(dout, cache.pop(-1))
        dout = dout_temp
      # L2 regularization term for this conv layer's weights
      grads['CW'+str(i+1)] += self.reg * self.params['CW'+str(i+1)]
    ############################################################################
    #                             END OF YOUR CODE                             #
    ############################################################################

    return loss, grads
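
Below is a minimal smoke-test sketch (not part of the file above). It assumes the cs231n package from assignment2 is importable with its fast layers compiled, and that the two classes above can be imported from wherever you saved this file; the batch size, filter counts, and layer sizes are arbitrary values chosen only to exercise the code.

import numpy as np
# adjust this import to wherever the file above lives in your project:
# from cs231n.classifiers.my_convnet import ThreeLayerConvNet, ConvNetArch_1

N, num_classes = 4, 10
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(num_classes, size=N)

# conv - relu - 2x2 max pool - affine - relu - affine - softmax
model = ThreeLayerConvNet(input_dim=(3, 32, 32), num_filters=8, filter_size=3,
                          hidden_dim=50, num_classes=num_classes,
                          weight_scale=1e-2, reg=1e-3)
scores = model.loss(X)          # test-time forward pass, shape (N, num_classes)
loss, grads = model.loss(X, y)  # training-time loss and parameter gradients
print('ThreeLayerConvNet loss: %f' % loss)

# [conv-relu-pool]x2 - [conv-relu] - [affine]x2 - softmax, with batch normalization
model = ConvNetArch_1(conv_dims=[(8, 3), (16, 3)],    # (num_filters, filter_size) per block
                      hidden_dims=[100, num_classes], # last entry only sets the layer count;
                                                      # the final affine always maps to num_classes
                      connect_conv=(16, 3),
                      use_batchnorm=True,
                      weight_scale=1e-2, reg=1e-3)
loss, grads = model.loss(X, y)
print('ConvNetArch_1 loss: %f' % loss)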
