Polynomial([d]) | Theano wrapper for polynomials. |
Elemwise(scalar_op[, inplace_pattern, name, ...]) | Wrapper for theano.tensor.Elemwise that overloads __call__() to directly call the implementation for numerical arguments. |
UnaryScalarOp([d, name, _proxy]) | Theano wrapper for functions of a single variable that can compute their own derivative via a flag d. |
InverseUnaryScalarOp([name, _proxy]) | Class skeleton to represent the inverse of an operator. |
ScalarOpFromExpr(inputs, outputs[, d, name, ...]) | Constructs a theano.scalar.ScalarOp from a tensor expression, allowing automatic derivative calculations. |
UnaryScalarOpFromExpr(x, f_x[, d, name, fn, ...]) | Unary version of ScalarOpFromExpr: takes a single variable x and a single expression f_x instead of lists. |
Inheritance diagram for mmf.utils.mmf_theano:
Tools for working with theano.
The main goal is to make the integration of custom functions as seamless as possible, in particular by addressing several integration issues.
Bases: theano.gof.op.Op
Theano wrapper for polynomials.
Examples
>>> import theano
>>> import theano.tensor as T
>>> x = T.scalar('x')
>>> c = T.vector('c')
>>> p = Polynomial()(c, x)
>>> dp = T.grad(p, x)
>>> ddp = T.grad(dp, x)
>>> f = theano.function([c, x], [p, dp, ddp], mode='FAST_COMPILE')
>>> _c = [1.0,2.0,-3.0,4.0]
>>> f(_c, 2.0)
[25.0, array(38.0), array(42.0)]
We can also vectorize the polynomial over x. Note the use of sum() when constructing the derivatives. See https://groups.google.com/d/topic/theano-users/81hG-LzvxOY/discussion

>>> import theano.tensor as T
>>> x = T.matrix('x')
>>> c = T.vector('c')
>>> p = Polynomial()(c, x)
>>> dp = T.grad(p.sum(), x)
>>> ddp = T.grad(dp.sum(), x)
>>> f = theano.function([c, x], [p, dp, ddp], mode='FAST_COMPILE')
>>> _c = [1.0,2.0,-3.0,4.0]
>>> _x = [[1.0,2.0],[3.0,4.0]]
>>> f(_c, _x)
[array([[ 4., 25.],
        [ 88., 217.]]),
 array([[ 8., 38.],
        [ 92., 170.]]),
 array([[ 18., 42.],
        [ 66., 90.]])]
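As an aside (not part of the original docstring), the values can be cross-checked with numpy: the coefficients in c are in ascending order of powers, so numpy.polynomial.polynomial.polyval applies directly.

>>> import numpy as np
>>> ps, dps, ddps = f(_c, _x)
>>> np.allclose(ps, np.polynomial.polynomial.polyval(_x, _c))
True
>>> dc = np.polynomial.polynomial.polyder(_c)   # coefficients of dp/dx
>>> np.allclose(dps, np.polynomial.polynomial.polyval(_x, dc))
True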
Attributes
default_output |
Methods
__call__(*inputs, **kwargs) | Optional: Return some or all output[s] of make_node. | ||
c_code(node, name, inputs, outputs, sub) | Required: Return the C implementation of an Op. | ||
c_code_cache_version() | Return a tuple of integers indicating the version of this Op. | ||
c_code_cache_version_apply(node) | Return a tuple of integers indicating the version of this Op. | ||
c_code_cleanup(node, name, inputs, outputs, sub) | Optional: Return C code to run after c_code, whether it failed or not. | ||
c_compile_args() | Optional: Return a list of compile args recommended to compile the | ||
c_header_dirs() | Optional: Return a list of header search paths required by code returned by | ||
c_headers() | Optional: Return a list of header files required by code returned by | ||
c_lib_dirs() | Optional: Return a list of library search paths required by code returned by | ||
c_libraries() | Optional: Return a list of libraries required by code returned by | ||
c_no_compile_args() | Optional: Return a list of incompatible gcc compiler arguments. | ||
c_support_code() | Optional: Return utility code for use by a Variable or Op to be | ||
c_support_code_apply(node, name) | Optional: Return utility code for use by an Op that will be inserted at global | ||
grad(inputs, output_gradients) | We do not yet support differentiation wrt c though this should be | ||
make_node(c, x) |
make_thunk(node, storage_map, compute_map, ...) |
perform(node, inputs, output_storage) |
Bases: theano.tensor.elemwise.Elemwise
Wrapper for theano.tensor.Elemwise that overloads __call__() to directly call the implementation for numerical arguments.
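For example (a minimal sketch, assuming the constructor form Elemwise(scalar_op, inplace_pattern={}) documented below, with theano.scalar.exp as an illustrative scalar op):

>>> import numpy as np
>>> import theano.scalar
>>> import theano.tensor as T
>>> exp = Elemwise(theano.scalar.exp)   # wrap the scalar exp op
>>> y = exp(T.vector('x'))              # symbolic input: builds a graph node as usual
>>> z = exp(np.array([0.0, 1.0]))       # numeric input: calls the implementation directly
>>> # z ~ array([ 1., 2.71828183]); no function compilation involved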
Attributes
default_output |
Methods
R_op(inputs, eval_points) | |||
__call__(*v, **kw) | |||
c_code(node, nodename, inames, onames, sub) | |||
c_code_cache_version() | Return a tuple of integers indicating the version of this Op. | ||
c_code_cache_version_apply(node) | |||
c_code_cleanup(node, name, inputs, outputs, sub) | Optional: Return C code to run after c_code, whether it failed or not. | ||
c_compile_args() | Optional: Return a list of compile args recommended to compile the | ||
c_header_dirs() | Optional: Return a list of header search paths required by code returned by | ||
c_headers() | |||
c_lib_dirs() | Optional: Return a list of library search paths required by code returned by | ||
c_libraries() | Optional: Return a list of libraries required by code returned by | ||
c_no_compile_args() | Optional: Return a list of incompatible gcc compiler arguments. | ||
c_support_code() | |||
c_support_code_apply(node, nodename) | |||
grad(inputs, ograds) | |||
infer_shape(node, i_shapes) | |||
make_node(*inputs) | If the inputs have different number of dimensions, their shape | ||
make_thunk(node, storage_map, compute_map, ...) |
perform(node, inputs, output_storage) |
Usage: Elemwise(scalar_op, inplace_pattern={})

scalar_op: an instance of a subclass of theano.scalar.ScalarOp that works uniquely on scalars.

inplace_pattern: a dictionary that maps the index of an output to the index of an input, so the output is calculated inplace using the input's storage. (Just like destroymap, but without the lists.)

An optional numpy-function specification (nfunc_name, nin, nout) may also be given such that getattr(numpy, nfunc_name) implements this operation, takes nin inputs and abs(nout) outputs (nout < 0 if the numpy function does not provide the option of providing a numpy array to store the results in). Note that nin cannot always be inferred from the scalar op's own nin field, because that value is sometimes 0 (meaning a variable number of inputs), whereas the numpy function may not have varargs. NOTE: as of now, the sign of the nout field is ignored (some work needs to be done to resize the destinations when needed).
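For instance (a sketch, using the standard theano scalar add op; {0: 0} requests that output 0 reuse the storage of input 0):

>>> import theano.scalar
>>> add = Elemwise(theano.scalar.add)              # ordinary elementwise add
>>> add_inplace = Elemwise(theano.scalar.add,
...                        inplace_pattern={0: 0}) # output 0 overwrites input 0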
Bases: theano.scalar.basic.UnaryScalarOp
Theano wrapper for functions of a single variable that can compute their own derivative via a flag d. To use it, simply subclass and define impl(); several things then happen automatically.
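A minimal sketch of such a subclass (illustrative only; it assumes impl() should return the value of the d-th derivative selected by self.d):

>>> import math
>>> class Sine(UnaryScalarOp):
...     """sin(x); the flag d selects which derivative impl() computes."""
...     def impl(self, x):
...         # derivatives of sin cycle with period 4: sin, cos, -sin, -cos
...         return [math.sin, math.cos,
...                 lambda _x: -math.sin(_x),
...                 lambda _x: -math.cos(_x)][self.d % 4](x)
>>> sin_op = Sine()      # d=0: the function itself
>>> dsin_op = Sine(d=1)  # d=1: its first derivative, cos(x)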
Attributes
default_output |
Methods
__call__(*v, **kw) | |||
c_code(node, name, inputs, outputs, sub) | Required: Return the C implementation of an Op. | ||
c_code_cache_version() | |||
c_code_cache_version_apply(node) | Return a tuple of integers indicating the version of this Op. | ||
c_code_cleanup(node, name, inputs, outputs, sub) | Optional: Return C code to run after c_code, whether it failed or not. | ||
c_compile_args() | Optional: Return a list of compile args recommended to compile the | ||
c_header_dirs() | Optional: Return a list of header search paths required by code returned by | ||
c_headers() | Optional: Return a list of header files required by code returned by | ||
c_lib_dirs() | Optional: Return a list of library search paths required by code returned by | ||
c_libraries() | Optional: Return a list of libraries required by code returned by | ||
c_no_compile_args() | Optional: Return a list of incompatible gcc compiler arguments. | ||
c_support_code() | Optional: Return utility code for use by a Variable or Op to be | ||
c_support_code_apply(node, name) | Optional: Return utility code for use by an Op that will be inserted at global | ||
get_function([vectorize]) | Return the op as a compiled function. | ||
grad(inputs, output_gradients) | |||
impl(*v, **kw) | Required. | ||
make_node(*inputs) | |||
make_thunk(node, storage_map, compute_map, ...) |
output_types(types) |
perform(node, inputs, output_storage) |
Bases: mmf.utils.mmf_theano.UnaryScalarOp
Class skeleton representing the inverse of an operator. You must provide the implementation (presumably using a root-finder of some sort). Presently you need to provide an operator (not applied to its arguments) for both f and its derivative df; this could be relaxed if we figure out how to "unapply" an operation.
With the notation y = f(x), this op takes y as its argument.
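A hedged sketch of a subclass (illustrative only; it assumes f and df are supplied as class attributes holding the unapplied scalar ops, and that impl() performs the numerical inversion, here with a bracketing root-finder over an assumed interval):

>>> import math
>>> from scipy.optimize import brentq
>>> import theano.scalar
>>> class Log(InverseUnaryScalarOp):
...     """log(y), defined as the inverse of y = exp(x)."""
...     f = theano.scalar.exp    # forward operator, not applied to its argument
...     df = theano.scalar.exp   # its derivative: d exp(x)/dx = exp(x)
...     def impl(self, y):
...         # invert y = exp(x) numerically; the bracket [-700, 700] is an assumption
...         return brentq(lambda x: math.exp(x) - y, -700.0, 700.0)
>>> log_op = Log()
>>> round(log_op.impl(math.e), 6)
1.0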
Attributes
default_output |
Methods
__call__(*v, **kw) | |||
c_code(node, name, inputs, outputs, sub) | Required: Return the C implementation of an Op. | ||
c_code_cache_version() | |||
c_code_cache_version_apply(node) | Return a tuple of integers indicating the version of this Op. | ||
c_code_cleanup(node, name, inputs, outputs, sub) | Optional: Return C code to run after c_code, whether it failed or not. | ||
c_compile_args() | Optional: Return a list of compile args recommended to compile the | ||
c_header_dirs() | Optional: Return a list of header search paths required by code returned by | ||
c_headers() | Optional: Return a list of header files required by code returned by | ||
c_lib_dirs() | Optional: Return a list of library search paths required by code returned by | ||
c_libraries() | Optional: Return a list of libraries required by code returned by | ||
c_no_compile_args() | Optional: Return a list of incompatible gcc compiler arguments. | ||
c_support_code() | Optional: Return utility code for use by a Variable or Op to be | ||
c_support_code_apply(node, name) | Optional: Return utility code for use by an Op that will be inserted at global | ||
get_function([vectorize]) | Return the op as a compiled function. | ||
grad((y,), (gz,)) | This is the key: we use self to compute x then return 1/df. | ||
impl(*v, **kw) | Required. | ||
make_node(*inputs) | |||
make_thunk(node, storage_map, compute_map, ...) |
output_types(types) |
perform(node, inputs, output_storage) |
Bases: theano.scalar.basic.ScalarOp
This class allows you to construct a theano.scalar.ScalarOp from a tensor expression. This allows for automatic derivative calculations.
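A sketch of the intended usage, inferred from the constructor signature above (the symbols x, y and the expression are illustrative):

>>> import theano.tensor as T
>>> x = T.dscalar('x')
>>> y = T.dscalar('y')
>>> f = x**2 + T.sin(x*y)                         # tensor expression defining the op
>>> op = ScalarOpFromExpr([x, y], [f], name='f')  # derivatives can then be taken automatically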
Attributes
default_output |
Methods
__call__(*inputs, **kwargs) | Optional: Return some or all output[s] of make_node. | ||
c_code(node, name, inputs, outputs, sub) | Required: Return the C implementation of an Op. | ||
c_code_cache_version() | |||
c_code_cache_version_apply(node) | Return a tuple of integers indicating the version of this Op. | ||
c_code_cleanup(node, name, inputs, outputs, sub) | Optional: Return C code to run after c_code, whether it failed or not. | ||
c_compile_args() | Optional: Return a list of compile args recommended to compile the | ||
c_header_dirs() | Optional: Return a list of header search paths required by code returned by | ||
c_headers() | Optional: Return a list of header files required by code returned by | ||
c_lib_dirs() | Optional: Return a list of library search paths required by code returned by | ||
c_libraries() | Optional: Return a list of libraries required by code returned by | ||
c_no_compile_args() | Optional: Return a list of incompatible gcc compiler arguments. | ||
c_support_code() | Optional: Return utility code for use by a Variable or Op to be | ||
c_support_code_apply(node, name) | Optional: Return utility code for use by an Op that will be inserted at global | ||
grad(inputs, grads) | |||
impl(inputs) | |||
make_node(*inputs) | |||
make_thunk(node, storage_map, compute_map, ...) |
output_types() |
perform(node, inputs, output_storage) |
Bases: mmf.utils.mmf_theano.ScalarOpFromExpr
Unary version of ScalarOpFromExpr. It takes a single variable x and a single expression f_x as inputs instead of lists, and the derivative order is specified as a single integer d.
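A sketch of the unary form (the Gaussian expression is illustrative; d=1 is assumed to request the first derivative):

>>> import theano.tensor as T
>>> x = T.dscalar('x')
>>> f_x = T.exp(-x**2)                       # the expression defining f(x)
>>> f = UnaryScalarOpFromExpr(x, f_x)        # the function itself (d=0)
>>> df = UnaryScalarOpFromExpr(x, f_x, d=1)  # its first derivative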
Attributes
default_output |
Methods
__call__(*inputs, **kwargs) | Optional: Return some or all output[s] of make_node. | ||
c_code(node, name, inputs, outputs, sub) | Required: Return the C implementation of an Op. | ||
c_code_cache_version() | |||
c_code_cache_version_apply(node) | Return a tuple of integers indicating the version of this Op. | ||
c_code_cleanup(node, name, inputs, outputs, sub) | Optional: Return C code to run after c_code, whether it failed or not. | ||
c_compile_args() | Optional: Return a list of compile args recommended to compile the | ||
c_header_dirs() | Optional: Return a list of header search paths required by code returned by | ||
c_headers() | Optional: Return a list of header files required by code returned by | ||
c_lib_dirs() | Optional: Return a list of library search paths required by code returned by | ||
c_libraries() | Optional: Return a list of libraries required by code returned by | ||
c_no_compile_args() | Optional: Return a list of incompatible gcc compiler arguments. | ||
c_support_code() | Optional: Return utility code for use by a Variable or Op to be | ||
c_support_code_apply(node, name) | Optional: Return utility code for use by an Op that will be inserted at global | ||
grad((x,), (gf,)) | |||
impl(x) | |||
make_node(*inputs) | |||
make_thunk(node, storage_map, compute_map, ...) |
output_types(_dtypes) |
perform(node, inputs, output_storage) |