Keras Model Training API

fit method
    fit(
        x=None,
        y=None,
        batch_size=None,
        epochs=1,
        verbose="auto",
        callbacks=None,
        validation_split=0.0,
        validation_data=None,
        shuffle=True,
        class_weight=None,
        sample_weight=None,
        initial_epoch=0,
        steps_per_epoch=None,
        validation_steps=None,
        validation_batch_size=None,
        validation_freq=1,
    )
Trains the model for a fixed number of epochs (dataset iterations).
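For instance, a minimal call on in-memory NumPy arrays might look like the sketch below; the model architecture and the random data are purely illustrative:

```python
import numpy as np
import keras

# Illustrative model; any compiled Keras model is trained the same way.
model = keras.Sequential([
    keras.Input(shape=(16,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Random data, only to demonstrate the call signature.
x = np.random.random((256, 16))
y = np.random.random((256, 1))

history = model.fit(x, y, batch_size=32, epochs=5)
```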
Args:

- `x`: Input data. It could be:
    - A NumPy array (or array-like), or a list of arrays (in case the model has multiple inputs).
    - A tensor, or a list of tensors (in case the model has multiple inputs).
    - A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
    - A `tf.data.Dataset` yielding `(inputs, targets)` or `(inputs, targets, sample_weights)`.
    - A `keras.utils.PyDataset` returning `(inputs, targets)` or `(inputs, targets, sample_weights)`.
- `y`: Target data. Like the input data `x`, it could be either NumPy array(s) or tensor(s). If `x` is a dataset or a `keras.utils.PyDataset` instance, `y` should not be specified, since targets will be obtained from `x`.
- `batch_size`: Integer or `None`. Number of samples per gradient update. If unspecified, `batch_size` defaults to 32. Do not specify `batch_size` if your data is in the form of a dataset or a `keras.utils.PyDataset` instance, since they generate batches.
- `epochs`: Integer. Number of epochs to train the model. An epoch is an iteration over the entire `x` and `y` data provided (unless `steps_per_epoch` is set to something other than `None`). Note that in conjunction with `initial_epoch`, `epochs` is to be understood as "final epoch": the model is not trained for `epochs` iterations, but merely until the epoch of index `epochs` is reached.
- `verbose`: `"auto"`, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. `"auto"` becomes 1 for most cases.
- `callbacks`: List of `keras.callbacks.Callback` instances. List of callbacks to apply during training.
- `validation_split`: Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the `x` and `y` data provided, before shuffling. This argument is not supported when `x` is a dataset or generator-like object.
- `validation_data`: Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using `validation_split` or `validation_data` is not affected by regularization layers like noise and dropout. `validation_data` will override `validation_split`. It could be a tuple `(x_val, y_val)`, a tuple `(x_val, y_val, val_sample_weights)`, or a dataset.
- `shuffle`: Boolean, whether to shuffle the training data before each epoch. This argument is ignored when `x` is a generator or a `tf.data.Dataset`.
- `class_weight`: Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to "pay more attention" to samples from an under-represented class. When `class_weight` is specified and targets have a rank of 2 or greater, either `y` must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.
- `sample_weight`: Optional NumPy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) NumPy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape `(samples, sequence_length)` to apply a different weight to every timestep of every sample. This argument is not supported when `x` is a dataset or generator-like object; in that case, provide the sample weights as the third element of the tuples yielded by `x`.
- `initial_epoch`: Integer. Epoch at which to start training (useful for resuming a previous training run).
- `steps_per_epoch`: Integer or `None`. Total number of steps (batches of samples) to draw before declaring one epoch finished and starting the next epoch. The default `None` means the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined.
- `validation_steps`: Only relevant if `validation_data` is provided. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If `None`, validation runs until the `validation_data` is exhausted.
- `validation_batch_size`: Integer or `None`. Number of samples per validation batch. If unspecified, it defaults to `batch_size`.
- `validation_freq`: Only relevant if validation data is provided. Specifies how many training epochs to run before a new validation run is performed, e.g. `validation_freq=2` runs validation every 2 epochs.
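To illustrate a few of these arguments together, the sketch below (with an arbitrary toy classifier and random labels) holds out the last 20% of the data for validation, up-weights a class assumed to be under-represented, and stops early when the validation loss stops improving:

```python
import numpy as np
import keras

# Toy binary classifier; the layer sizes are arbitrary.
model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x = np.random.random((1000, 20))
y = np.random.randint(0, 2, size=(1000, 1))

history = model.fit(
    x,
    y,
    batch_size=32,
    epochs=20,
    validation_split=0.2,            # last 20% of x/y used for validation
    class_weight={0: 1.0, 1: 3.0},   # weight class 1 more heavily in the loss
    callbacks=[
        keras.callbacks.EarlyStopping(monitor="val_loss", patience=3),
    ],
)
```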
Unpacking behavior for iterator-like inputs:

A common pattern is to pass an iterator-like object such as a `tf.data.Dataset` or a `keras.utils.PyDataset` to `fit()`, which will in fact yield not only features (`x`) but optionally targets (`y`) and sample weights (`sample_weight`). Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for `y` and `sample_weight` respectively. Any other type provided will be wrapped in a length-one tuple, effectively treating everything as `x`. When yielding dicts, they should still adhere to the top-level tuple structure, e.g. `({"x0": x0, "x1": x1}, y)`. Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the `namedtuple`. The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form `namedtuple("example_tuple", ["y", "x"])`, it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form `namedtuple("other_tuple", ["x", "y", "z"])`, where it is unclear if the tuple was intended to be unpacked into `x`, `y`, and `sample_weight` or passed through as a single element to `x`.
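As a sketch of this tuple structure, the example below builds the iterator with `tf.data`; the two-input model and the array shapes are illustrative:

```python
import numpy as np
import keras
import tensorflow as tf  # used only to build an example tf.data.Dataset

x0 = np.random.random((128, 8)).astype("float32")
x1 = np.random.random((128, 4)).astype("float32")
y = np.random.random((128, 1)).astype("float32")

# Length-2 tuple: the dict holds the named inputs, the target stays at the
# top level of the tuple. Keras unpacks this as (x, y).
ds = tf.data.Dataset.from_tensor_slices(({"x0": x0, "x1": x1}, y)).batch(32)

# Length-3 tuple: (x, y, sample_weight). Shown only for the tuple shape;
# it is not used by the two-input model below.
sw = np.ones((128,), dtype="float32")
ds_weighted = tf.data.Dataset.from_tensor_slices((x0, y, sw)).batch(32)

# A two-input functional model whose input names match the dict keys.
inp0 = keras.Input(shape=(8,), name="x0")
inp1 = keras.Input(shape=(4,), name="x1")
out = keras.layers.Dense(1)(keras.layers.Concatenate()([inp0, inp1]))
model = keras.Model([inp0, inp1], out)
model.compile(optimizer="adam", loss="mse")

model.fit(ds, epochs=2)
```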
Returns:

A `History` object. Its `History.history` attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).
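For example (again with an illustrative model and random data), the per-epoch values can be read back from `history.history`:

```python
import numpy as np
import keras

model = keras.Sequential([keras.Input(shape=(16,)), keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

history = model.fit(
    np.random.random((128, 16)),
    np.random.random((128, 1)),
    epochs=3,
    validation_split=0.25,
    verbose=0,
)

# One list entry per epoch for each tracked loss/metric.
print(sorted(history.history.keys()))  # e.g. ['loss', 'val_loss']
print(history.history["loss"])         # training loss for epochs 1..3
print(history.history["val_loss"])     # validation loss for epochs 1..3
```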