This article covers a question about training the MNIST dataset in Google Colab (the model appears to train on only 1875 rows) and its recommended answer.
Problem description
I am training a CNN in a Google Colab notebook on the Pro plan. Although x_train has the shape (60000, 28, 28), the model appears to get trained on only 1875 rows. Has anyone faced this issue before? The same model runs fine in a Jupyter notebook on my local machine, where it runs on all 60,000 rows.
import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255.0
y_train = y_train.astype('float32') / 255.0
print("x_train.shape:", x_train.shape)
#Build the model
from tensorflow.keras.layers import Dense, Flatten, Dropout
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28,28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
r = model.fit(x_train, y_train, validation_data=(x_test,y_test), epochs = 10)
Output:
x_train.shape: (60000, 28, 28)
Epoch 1/10
1875/1875 [==============================] - 3s 2ms/step - loss: 2.2912e-06 - accuracy: 0.0987 - val_loss: 7716.5078 - val_accuracy: 0.0980
Recommended answer
1875 is the number of batches (steps) per epoch, not the number of samples. By default, model.fit uses a batch size of 32 samples, and 60000 / 32 = 1875, so the model is still training on all 60,000 rows.
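For illustration, here is a minimal, self-contained sketch (not the asker's exact script: the compile call is not shown in the question, so a compile with sparse_categorical_crossentropy and integer labels is assumed here). It prints the expected step count and shows that passing batch_size explicitly changes the number in the progress bar, while all 60,000 samples are still used every epoch.

import math
import tensorflow as tf

# Load MNIST the same way as in the question; labels stay as integers 0-9.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype('float32') / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# The progress bar counts steps (batches) per epoch, not individual samples.
# With the default batch_size of 32: ceil(60000 / 32) = 1875 steps.
print(math.ceil(len(x_train) / 32))   # 1875

# With batch_size=100 the bar shows "600/600", yet every one of the
# 60,000 training samples is still processed in each epoch.
model.fit(x_train, y_train, batch_size=100, epochs=1)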