Re-add part of the dtype constraint on out grads · 3bd9ffde
Committed by Pascal Lamblin
    In order to avoid expanding memory usage and computations in the part of
    the graph that computes gradients, I propose the following conventions,
    which reinstate some of the constraints that previously existed on the
    dtype of gradients:
    - When calling some_op.grad(inputs, output_grads), each variable in the
      "output_grads" list, if it is an actual numeric variable (and not,
      for instance, DisconnectedType or NullType), should have the same
      dtype as the corresponding output variable.
    - Moreover, if one of the output variables has a discrete dtype (int
      or uint), then the corresponding output gradient (if not a special
      case like NullType) should be all zeros.
    
    This is implemented in theano.grad, so the Op's grad method does not
    have to be changed, but now it can rely again on the fact that, if an
    output gradient has a dtype, that dtype will be the same as the
    corresponding output variable.
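    The two conventions can be sketched with plain NumPy. This is an
    illustrative stand-in, not Theano's actual implementation: the function
    name `conform_output_grad` is hypothetical, and concrete arrays stand in
    for symbolic variables.

    ```python
    import numpy as np

    # Dtypes the commit treats as discrete (no useful gradient information).
    DISCRETE_DTYPES = ("int8", "int16", "int32", "int64",
                       "uint8", "uint16", "uint32", "uint64")

    def conform_output_grad(output_dtype, output_grad):
        """Hypothetical sketch of the conventions enforced in theano.grad:
        - if the output has a discrete dtype, the gradient is all zeros;
        - otherwise, the gradient is cast to the output's dtype, so an
          Op's grad() method can rely on the dtypes matching."""
        if output_dtype in DISCRETE_DTYPES:
            # Discrete outputs get a zero gradient of the same dtype.
            return np.zeros_like(output_grad, dtype=output_dtype)
        # Numeric case: cast to the output's dtype to avoid silently
        # widening dtypes (and memory use) in the gradient graph.
        return np.asarray(output_grad, dtype=output_dtype)

    # A float32 output keeps its dtype even if the incoming gradient is
    # float64; an int64 output gets a zero gradient.
    g_float = conform_output_grad("float32",
                                  np.array([0.1, 0.2], dtype="float64"))
    g_int = conform_output_grad("int64", np.array([1.0, 2.0]))
    ```

    Casting toward the output's dtype (rather than letting the gradient's
    dtype propagate) is what keeps the gradient part of the graph from
    growing larger than the forward part.
    
    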