Core (pyeddl.layers.core)¶
Activation    Applies an activation function to an output.
Dense         Just your regular densely-connected NN layer.
Dropout       Applies Dropout to the input.
Input         Layer to be used as an entry point into a model.
Reshape       Reshapes an output to a certain shape.
Activation¶
class pyeddl.layers.core.Activation(activation, **kwargs)[source]¶
    Applies an activation function to an output.
- Args:
activation: name of activation function to use
- Input shape:
Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.
- Output shape:
Same shape as input.
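To illustrate the element-wise semantics described above, here is a minimal pure-Python sketch (not the pyeddl implementation; the name lookup and function set are illustrative assumptions):

```python
import math

# Hypothetical name-to-function lookup, mirroring the `activation`
# string argument described above (the set of names is illustrative).
ACTIVATIONS = {
    "linear": lambda x: x,
    "relu": lambda x: max(0.0, x),
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
}

def apply_activation(name, outputs):
    """Apply the named activation element-wise; the shape is unchanged."""
    fn = ACTIVATIONS[name]
    return [fn(x) for x in outputs]
```

For example, `apply_activation("relu", [-1.0, 2.0])` yields `[0.0, 2.0]`, with the same length as the input, matching "Same shape as input" above.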
Dense¶
class pyeddl.layers.core.Dense(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs)[source]¶
    Just your regular densely-connected NN layer.
Dense implements the operation: output = activation(dot(input, kernel) + bias) where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True). Note: if the input to the layer has a rank greater than 2, then it is flattened prior to the initial dot product with kernel.
- Example:

      # as first layer in a sequential model:
      model = Sequential()
      model.add(Dense(32, input_shape=(16,)))
      # now the model will take as input arrays of shape (*, 16)
      # and output arrays of shape (*, 32)

      # after the first layer, you don't need to specify
      # the size of the input anymore:
      model.add(Dense(32))
- Args:
    units: Positive integer, dimensionality of the output space.
    activation: Activation function to use. If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
    use_bias: Boolean, whether the layer uses a bias vector.
    kernel_initializer: Initializer for the kernel weights matrix.
    bias_initializer: Initializer for the bias vector.
    kernel_regularizer: Regularizer function applied to the kernel weights matrix.
    bias_regularizer: Regularizer function applied to the bias vector.
    activity_regularizer: Regularizer function applied to the output of the layer (its “activation”).
    kernel_constraint: Constraint function applied to the kernel weights matrix.
    bias_constraint: Constraint function applied to the bias vector.
- Input shape:
nD tensor with shape: (batch_size, …, input_dim). The most common situation would be a 2D input with shape (batch_size, input_dim).
- Output shape:
nD tensor with shape: (batch_size, …, units). For instance, for a 2D input with shape (batch_size, input_dim), the output would have shape (batch_size, units).
__init__(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs)[source]¶
    Initialize self. See help(type(self)) for accurate signature.
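The operation output = activation(dot(input, kernel) + bias) described above can be sketched in pure Python (an illustration of the math, not the pyeddl implementation; `dense_forward` is a hypothetical helper name):

```python
def dense_forward(inputs, kernel, bias=None, activation=None):
    """Sketch of the Dense op: output = activation(dot(input, kernel) + bias).

    inputs: list of input rows, each of length input_dim
    kernel: input_dim x units weight matrix (list of rows)
    bias:   length-units vector, or None (like use_bias=False)
    activation: element-wise function, or None (linear activation)
    """
    units = len(kernel[0])
    out = []
    for row in inputs:
        # dot(input_row, kernel): one sum per output unit
        y = [sum(x * kernel[i][j] for i, x in enumerate(row)) for j in range(units)]
        if bias is not None:
            y = [v + b for v, b in zip(y, bias)]
        if activation is not None:
            y = [activation(v) for v in y]
        out.append(y)
    return out
```

For example, with the identity kernel `[[1, 0], [0, 1]]` and bias `[1, -1]`, the input row `[1, 2]` maps to `[2, 1]`.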
Dropout¶
class pyeddl.layers.core.Dropout(rate, noise_shape=None, seed=None, **kwargs)[source]¶
    Applies Dropout to the input.
Dropout consists of randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting.
- Args:
    rate: Float between 0 and 1. Fraction of the input units to drop.
    noise_shape: 1D integer tensor representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape (batch_size, timesteps, features) and you want the dropout mask to be the same for all timesteps, you can use noise_shape=(batch_size, 1, features).
    seed: A Python integer to use as random seed.
- References:
    [Dropout: A Simple Way to Prevent Neural Networks from Overfitting](http://www.jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf)
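A minimal pure-Python sketch of the behaviour described above (not the pyeddl implementation): each unit is zeroed with probability `rate` during training. The sketch also rescales survivors by 1/(1-rate), the common "inverted dropout" convention; whether the library rescales this way is an assumption, not stated in the docstring.

```python
import random

def dropout(inputs, rate, seed=None, training=True):
    """Sketch of dropout (hypothetical helper, not pyeddl code).

    During training, zero each unit with probability `rate` and scale
    the surviving units by 1/(1 - rate) (inverted-dropout convention,
    assumed here). Outside training, inputs pass through unchanged.
    """
    if not training or rate == 0.0:
        return list(inputs)
    rng = random.Random(seed)  # `seed` gives a reproducible mask
    keep = 1.0 - rate
    return [x / keep if rng.random() >= rate else 0.0 for x in inputs]
```

With `rate=0.5`, each surviving unit is doubled, so the expected value of every unit is preserved across training updates.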
Input¶
class pyeddl.layers.core.Input(input_shape=None, batch_size=None, batch_input_shape=None, dtype=None, input_tensor=None, sparse=False, name=None)[source]¶
    Layer to be used as an entry point into a model.
It can either wrap an existing tensor (pass an input_tensor argument) or create a placeholder tensor (pass arguments input_shape or batch_input_shape as well as dtype).
- Args:
    input_shape: Shape tuple, not including the batch axis.
    batch_size: Optional input batch size (integer or None).
    batch_input_shape: Shape tuple, including the batch axis.
    dtype: Datatype of the input.
    input_tensor: Optional tensor to use as layer input instead of creating a placeholder.
    sparse: Boolean, whether the placeholder created is meant to be sparse.
    name: Name of the layer (string).
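The relationship between input_shape, batch_size, and batch_input_shape can be sketched in pure Python (a hypothetical helper illustrating the shape bookkeeping, not pyeddl code):

```python
def resolve_batch_input_shape(input_shape=None, batch_size=None,
                              batch_input_shape=None):
    """Sketch of the shape bookkeeping described above.

    The full input shape is batch_input_shape if given; otherwise it is
    (batch_size,) + input_shape, with None standing for an unspecified
    batch axis.
    """
    if batch_input_shape is not None:
        return tuple(batch_input_shape)
    if input_shape is None:
        raise ValueError("provide input_shape or batch_input_shape")
    return (batch_size,) + tuple(input_shape)
```

So `input_shape=(16,)` alone resolves to `(None, 16)`, while `input_shape=(16,), batch_size=32` and `batch_input_shape=(32, 16)` both resolve to `(32, 16)`.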
Reshape¶
class pyeddl.layers.core.Reshape(target_shape, **kwargs)[source]¶
    Reshapes an output to a certain shape.
- Args:
    target_shape: Target shape. Tuple of integers. Does not include the batch axis.
- Input shape:
Arbitrary, although all dimensions in the input shape must be fixed. Use the keyword argument input_shape (tuple of integers, does not include the batch axis) when using this layer as the first layer in a model.
- Output shape:
(batch_size,) + target_shape
- Example:

      # as first layer in a Sequential model
      model = Sequential()
      model.add(Reshape((3, 4), input_shape=(12,)))
      # now: model.output_shape == (None, 3, 4)
      # note: None is the batch dimension

      # as intermediate layer in a Sequential model
      model.add(Reshape((6, 2)))
      # now: model.output_shape == (None, 6, 2)

      # also supports shape inference using -1 as dimension
      model.add(Reshape((-1, 2, 2)))
      # now: model.output_shape == (None, 3, 2, 2)
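The -1 shape inference used in the example above can be sketched in pure Python (a hypothetical helper showing the inference rule, not the pyeddl implementation):

```python
def infer_target_shape(num_elements, target_shape):
    """Sketch of Reshape's -1 inference: at most one -1 in target_shape
    is replaced by whatever value keeps the total element count equal
    to num_elements."""
    known = 1
    unknown_axis = None
    for axis, dim in enumerate(target_shape):
        if dim == -1:
            if unknown_axis is not None:
                raise ValueError("only one -1 dimension is allowed")
            unknown_axis = axis
        else:
            known *= dim
    shape = list(target_shape)
    if unknown_axis is not None:
        if num_elements % known != 0:
            raise ValueError("cannot infer the -1 dimension")
        shape[unknown_axis] = num_elements // known
    elif known != num_elements:
        raise ValueError("target_shape does not match number of elements")
    return tuple(shape)
```

For the example's 12 input elements, `infer_target_shape(12, (-1, 2, 2))` gives `(3, 2, 2)`, matching the inferred output shape shown above.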