Goal of this example: structured data classification using Wide & Deep and Deep & Cross networks.
Introduction
Note that this example should be run with TensorFlow 2.5 or higher.
The dataset
This example uses the Covertype dataset from the UCI Machine Learning Repository. The task is to predict forest cover type from cartographic variables. The dataset includes 581,012 instances with 12 input features: 10 numerical features and 2 categorical features. Each instance is categorized into 1 of 7 classes.
Setup
import os
# Only the TensorFlow backend supports string inputs.
os.environ["KERAS_BACKEND"] = "tensorflow"
import math
import numpy as np
import pandas as pd
from tensorflow import data as tf_data
import keras
from keras import layers
Prepare the data
First, let's load the dataset from the UCI Machine Learning Repository into a Pandas DataFrame:
data_url = (
"https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.data.gz"
)
raw_data = pd.read_csv(data_url, header=None)
print(f"Dataset shape: {raw_data.shape}")
raw_data.head()
Dataset shape: (581012, 55)
5 rows × 55 columns
The two categorical features in the dataset are binary-encoded, i.e. spread across one-hot indicator columns. We will convert this dataset representation to the typical representation, where each categorical feature is represented by a single categorical value.
soil_type_values = [f"soil_type_{idx+1}" for idx in range(40)]
wilderness_area_values = [f"area_type_{idx+1}" for idx in range(4)]
soil_type = raw_data.loc[:, 14:53].apply(
lambda x: soil_type_values[0::1][x.to_numpy().nonzero()[0][0]], axis=1
)
wilderness_area = raw_data.loc[:, 10:13].apply(
lambda x: wilderness_area_values[0::1][x.to_numpy().nonzero()[0][0]], axis=1
)
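To make this conversion concrete, here is a tiny standalone check on a hand-made row (illustration only, not part of the original pipeline); it uses the wilderness_area_values list defined above:

# Illustration only: nonzero() finds the index of the single active column,
# which is then mapped to its string label.
example_row = np.array([0, 0, 1, 0])  # a binary-encoded wilderness-area row
print(wilderness_area_values[example_row.nonzero()[0][0]])  # -> "area_type_3"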
CSV_HEADER = [
"Elevation",
"Aspect",
"Slope",
"Horizontal_Distance_To_Hydrology",
"Vertical_Distance_To_Hydrology",
"Horizontal_Distance_To_Roadways",
"Hillshade_9am",
"Hillshade_Noon",
"Hillshade_3pm",
"Horizontal_Distance_To_Fire_Points",
"Wilderness_Area",
"Soil_Type",
"Cover_Type",
]
data = pd.concat(
[raw_data.loc[:, 0:9], wilderness_area, soil_type, raw_data.loc[:, 54]],
axis=1,
ignore_index=True,
)
data.columns = CSV_HEADER
# Convert the target label indices into a range from 0 to 6 (there are 7 labels in total).
data["Cover_Type"] = data["Cover_Type"] - 1
print(f"Dataset shape: {data.shape}")
data.head().T
Dataset shape: (581012, 13)
The shape of the DataFrame shows there are 13 columns per sample (12 for the features and 1 for the target label).
Let's split the data into training (85%) and test (15%) sets.
train_splits = []
test_splits = []
for _, group_data in data.groupby("Cover_Type"):
random_selection = np.random.rand(len(group_data.index)) <= 0.85
train_splits.append(group_data[random_selection])
test_splits.append(group_data[~random_selection])
train_data = pd.concat(train_splits).sample(frac=1).reset_index(drop=True)
test_data = pd.concat(test_splits).sample(frac=1).reset_index(drop=True)
print(f"Train split size: {len(train_data.index)}")
print(f"Test split size: {len(test_data.index)}")
Train split size: 493323
Test split size: 87689
Then, store the training and test data in separate CSV files.
train_data_file = "train_data.csv"
test_data_file = "test_data.csv"
train_data.to_csv(train_data_file, index=False)
test_data.to_csv(test_data_file, index=False)
Define dataset metadata
Here, we define the metadata of the dataset that will be useful for reading and parsing the data into input features, and for encoding the input features with respect to their types.
TARGET_FEATURE_NAME = "Cover_Type"
TARGET_FEATURE_LABELS = ["0", "1", "2", "3", "4", "5", "6"]
NUMERIC_FEATURE_NAMES = [
"Aspect",
"Elevation",
"Hillshade_3pm",
"Hillshade_9am",
"Hillshade_Noon",
"Horizontal_Distance_To_Fire_Points",
"Horizontal_Distance_To_Hydrology",
"Horizontal_Distance_To_Roadways",
"Slope",
"Vertical_Distance_To_Hydrology",
]
CATEGORICAL_FEATURES_WITH_VOCABULARY = {
"Soil_Type": list(data["Soil_Type"].unique()),
"Wilderness_Area": list(data["Wilderness_Area"].unique()),
}
CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURES_WITH_VOCABULARY.keys())
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES
COLUMN_DEFAULTS = [
[0] if feature_name in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME] else ["NA"]
for feature_name in CSV_HEADER
]
NUM_CLASSES = len(TARGET_FEATURE_LABELS)
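As a quick sanity check (not part of the original example), you can print the vocabulary sizes that the feature encoders below rely on:

# Soil_Type should have 40 distinct values and Wilderness_Area 4, matching the
# original one-hot indicator columns.
for feature_name, vocabulary in CATEGORICAL_FEATURES_WITH_VOCABULARY.items():
    print(feature_name, len(vocabulary))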
Experiment setup
Next, let's define an input function that reads and parses the file, then converts features and labels into a tf.data.Dataset for training or evaluation.
def get_dataset_from_csv(csv_file_path, batch_size, shuffle=False):
dataset = tf_data.experimental.make_csv_dataset(
csv_file_path,
batch_size=batch_size,
column_names=CSV_HEADER,
column_defaults=COLUMN_DEFAULTS,
label_name=TARGET_FEATURE_NAME,
num_epochs=1,
header=True,
shuffle=shuffle,
)
return dataset.cache()
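If you want to inspect the input pipeline, a minimal sketch like the following (not part of the original example) pulls a single small batch and prints the first couple of values of each feature:

# Peek at one batch of 5 examples; each feature arrives as a batched tensor.
sample_dataset = get_dataset_from_csv(train_data_file, batch_size=5)
for features, labels in sample_dataset.take(1):
    for feature_name, values in features.items():
        print(feature_name, values.numpy()[:2])
    print("labels:", labels.numpy())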
Here, we configure the parameters and implement the procedure for running a training and evaluation experiment given a model.
learning_rate = 0.001
dropout_rate = 0.1
batch_size = 265
num_epochs = 50
hidden_units = [32, 32]
def run_experiment(model):
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
train_dataset = get_dataset_from_csv(train_data_file, batch_size, shuffle=True)
test_dataset = get_dataset_from_csv(test_data_file, batch_size)
print("Start training the model...")
history = model.fit(train_dataset, epochs=num_epochs)
print("Model training finished")
_, accuracy = model.evaluate(test_dataset, verbose=0)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
Create model inputs
Now, define the inputs for the models as a dictionary, where the key is the feature name, and the value is a keras.layers.Input tensor with the corresponding feature shape and data type.
def create_model_inputs():
inputs = {}
for feature_name in FEATURE_NAMES:
if feature_name in NUMERIC_FEATURE_NAMES:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(), dtype="float32"
)
else:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(), dtype="string"
)
return inputs
Encode features
We create two representations of our input features: sparse and dense:
1. In the sparse representation, the categorical features are one-hot encoded using the StringLookup layer (output_mode="binary"). This representation can be useful for the model to memorize particular feature values in order to make certain predictions.
2. In the dense representation, the categorical features are encoded with low-dimensional embeddings using the Embedding layer. This representation helps the model to generalize well to unseen feature combinations.
def encode_inputs(inputs, use_embedding=False):
encoded_features = []
for feature_name in inputs:
if feature_name in CATEGORICAL_FEATURE_NAMES:
vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
# Create a lookup to convert string values to integer indices.
# Since we are not using a mask token nor expecting any out of vocabulary
# (oov) token, we set mask_token to None and num_oov_indices to 0.
lookup = layers.StringLookup(
vocabulary=vocabulary,
mask_token=None,
num_oov_indices=0,
output_mode="int" if use_embedding else "binary",
)
if use_embedding:
# Convert the string input values into integer indices.
encoded_feature = lookup(inputs[feature_name])
embedding_dims = int(math.sqrt(len(vocabulary)))
# Create an embedding layer with the specified dimensions.
embedding = layers.Embedding(
input_dim=len(vocabulary), output_dim=embedding_dims
)
# Convert the index values to embedding representations.
encoded_feature = embedding(encoded_feature)
else:
# Convert the string input values into a one hot encoding.
encoded_feature = lookup(
keras.ops.expand_dims(inputs[feature_name], -1)
)
else:
# Use the numerical features as-is.
encoded_feature = keras.ops.expand_dims(inputs[feature_name], -1)
encoded_features.append(encoded_feature)
all_features = layers.concatenate(encoded_features)
return all_features
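To make the two representations concrete, here is a small standalone illustration, using a toy vocabulary and embedding size chosen just for this sketch (not part of the model code): the same categorical value becomes a one-hot vector in the sparse case and a low-dimensional float vector in the dense case.

# Illustration only: sparse (one-hot) vs. dense (embedding) encoding of one value,
# using the same StringLookup settings as encode_inputs above.
toy_vocab = ["area_type_1", "area_type_2", "area_type_3", "area_type_4"]
one_hot_lookup = layers.StringLookup(
    vocabulary=toy_vocab, mask_token=None, num_oov_indices=0, output_mode="binary"
)
int_lookup = layers.StringLookup(
    vocabulary=toy_vocab, mask_token=None, num_oov_indices=0, output_mode="int"
)
toy_embedding = layers.Embedding(input_dim=len(toy_vocab), output_dim=2)
value = np.array([["area_type_3"]])
print(one_hot_lookup(value))  # a length-4 one-hot vector
print(toy_embedding(int_lookup(value)))  # a 2-dimensional embedding of the same value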
Experiment 1: a baseline model
In the first experiment, let's create a multi-layer feed-forward network, where the categorical features are one-hot encoded.
def create_baseline_model():
inputs = create_model_inputs()
features = encode_inputs(inputs)
for units in hidden_units:
features = layers.Dense(units)(features)
features = layers.BatchNormalization()(features)
features = layers.ReLU()(features)
features = layers.Dropout(dropout_rate)(features)
outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
baseline_model = create_baseline_model()
keras.utils.plot_model(baseline_model, show_shapes=True, rankdir="LR")
Let's run it:
run_experiment(baseline_model)
Start training the model...
Epoch 1/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 6s 3ms/step - loss: 1.0713 - sparse_categorical_accuracy: 0.5634
Epoch 2/50
179/1862 ━━━━━━━━━━━━━━━━━━━━ 1s 848us/step - loss: 0.7473 - sparse_categorical_accuracy: 0.6840
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py:153: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
self.gen.throw(typ, value, traceback)
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 904us/step - loss: 0.7386 - sparse_categorical_accuracy: 0.6866
Epoch 3/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 909us/step - loss: 0.7135 - sparse_categorical_accuracy: 0.6958
Epoch 4/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 878us/step - loss: 0.6975 - sparse_categorical_accuracy: 0.7051
Epoch 5/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 941us/step - loss: 0.6876 - sparse_categorical_accuracy: 0.7089
Epoch 6/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 936us/step - loss: 0.6848 - sparse_categorical_accuracy: 0.7106
Epoch 7/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 934us/step - loss: 0.7165 - sparse_categorical_accuracy: 0.6969
Epoch 8/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 924us/step - loss: 0.6979 - sparse_categorical_accuracy: 0.7053
Epoch 9/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 967us/step - loss: 0.6913 - sparse_categorical_accuracy: 0.7088
Epoch 10/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 975us/step - loss: 0.6807 - sparse_categorical_accuracy: 0.7124
Epoch 11/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 987us/step - loss: 0.6829 - sparse_categorical_accuracy: 0.7110
Epoch 12/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 917us/step - loss: 0.6823 - sparse_categorical_accuracy: 0.7109
Epoch 13/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 879us/step - loss: 0.6658 - sparse_categorical_accuracy: 0.7175
Epoch 14/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 948us/step - loss: 0.6677 - sparse_categorical_accuracy: 0.7170
Epoch 15/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 866us/step - loss: 0.6695 - sparse_categorical_accuracy: 0.7130
Epoch 16/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 860us/step - loss: 0.6847 - sparse_categorical_accuracy: 0.7074
Epoch 17/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 853us/step - loss: 0.6660 - sparse_categorical_accuracy: 0.7174
Epoch 18/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 855us/step - loss: 0.6620 - sparse_categorical_accuracy: 0.7184
Epoch 19/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 900us/step - loss: 0.6642 - sparse_categorical_accuracy: 0.7163
Epoch 20/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 969us/step - loss: 0.6614 - sparse_categorical_accuracy: 0.7167
Epoch 21/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 988us/step - loss: 0.6560 - sparse_categorical_accuracy: 0.7199
Epoch 22/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 969us/step - loss: 0.6559 - sparse_categorical_accuracy: 0.7201
Epoch 23/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 868us/step - loss: 0.6514 - sparse_categorical_accuracy: 0.7217
Epoch 24/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 925us/step - loss: 0.6509 - sparse_categorical_accuracy: 0.7222
Epoch 25/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 879us/step - loss: 0.6464 - sparse_categorical_accuracy: 0.7233
Epoch 26/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 898us/step - loss: 0.6442 - sparse_categorical_accuracy: 0.7237
Epoch 27/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 842us/step - loss: 0.6476 - sparse_categorical_accuracy: 0.7210
Epoch 28/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 815us/step - loss: 0.6427 - sparse_categorical_accuracy: 0.7247
Epoch 29/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 837us/step - loss: 0.6414 - sparse_categorical_accuracy: 0.7244
Epoch 30/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 865us/step - loss: 0.6408 - sparse_categorical_accuracy: 0.7256
Epoch 31/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 845us/step - loss: 0.6378 - sparse_categorical_accuracy: 0.7269
Epoch 32/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 842us/step - loss: 0.6432 - sparse_categorical_accuracy: 0.7235
Epoch 33/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 905us/step - loss: 0.6482 - sparse_categorical_accuracy: 0.7226
Epoch 34/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6586 - sparse_categorical_accuracy: 0.7191
Epoch 35/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 958us/step - loss: 0.6511 - sparse_categorical_accuracy: 0.7215
Epoch 36/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 910us/step - loss: 0.6571 - sparse_categorical_accuracy: 0.7217
Epoch 37/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 897us/step - loss: 0.6451 - sparse_categorical_accuracy: 0.7253
Epoch 38/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 846us/step - loss: 0.6455 - sparse_categorical_accuracy: 0.7254
Epoch 39/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 907us/step - loss: 0.6722 - sparse_categorical_accuracy: 0.7131
Epoch 40/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1000us/step - loss: 0.6393 - sparse_categorical_accuracy: 0.7282
Epoch 41/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 872us/step - loss: 0.6804 - sparse_categorical_accuracy: 0.7078
Epoch 42/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 884us/step - loss: 0.6657 - sparse_categorical_accuracy: 0.7135
Epoch 43/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 960us/step - loss: 0.6557 - sparse_categorical_accuracy: 0.7180
Epoch 44/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 870us/step - loss: 0.6671 - sparse_categorical_accuracy: 0.7115
Epoch 45/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 871us/step - loss: 0.6730 - sparse_categorical_accuracy: 0.7069
Epoch 46/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 875us/step - loss: 0.6669 - sparse_categorical_accuracy: 0.7105
Epoch 47/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 847us/step - loss: 0.6634 - sparse_categorical_accuracy: 0.7129
Epoch 48/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 846us/step - loss: 0.6625 - sparse_categorical_accuracy: 0.7137
Epoch 49/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 824us/step - loss: 0.6596 - sparse_categorical_accuracy: 0.7146
Epoch 50/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 833us/step - loss: 0.6714 - sparse_categorical_accuracy: 0.7106
Model training finished
Test accuracy: 69.5%
The baseline model achieves about 70% test accuracy.
Experiment 2: Wide & Deep model
In the second experiment, we create a Wide & Deep model. The wide part of the model is a linear model, while the deep part is a multi-layer feed-forward network.
We use the sparse representation of the input features in the wide part of the model, and the dense representation of the input features in the deep part of the model.
Note that every input feature contributes to both parts of the model with different representations.
def create_wide_and_deep_model():
inputs = create_model_inputs()
wide = encode_inputs(inputs)
wide = layers.BatchNormalization()(wide)
deep = encode_inputs(inputs, use_embedding=True)
for units in hidden_units:
deep = layers.Dense(units)(deep)
deep = layers.BatchNormalization()(deep)
deep = layers.ReLU()(deep)
deep = layers.Dropout(dropout_rate)(deep)
merged = layers.concatenate([wide, deep])
outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(merged)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
wide_and_deep_model = create_wide_and_deep_model()
keras.utils.plot_model(wide_and_deep_model, show_shapes=True, rankdir="LR")
Let's run it:
run_experiment(wide_and_deep_model)
Start training the model...
Epoch 1/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 5s 2ms/step - loss: 0.8979 - sparse_categorical_accuracy: 0.6386
Epoch 2/50
128/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6317 - sparse_categorical_accuracy: 0.7302
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py:153: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
self.gen.throw(typ, value, traceback)
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6290 - sparse_categorical_accuracy: 0.7295
Epoch 3/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6130 - sparse_categorical_accuracy: 0.7350
Epoch 4/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6029 - sparse_categorical_accuracy: 0.7397
Epoch 5/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.6010 - sparse_categorical_accuracy: 0.7397
Epoch 6/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5924 - sparse_categorical_accuracy: 0.7445
Epoch 7/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5917 - sparse_categorical_accuracy: 0.7442
Epoch 8/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5945 - sparse_categorical_accuracy: 0.7438
Epoch 9/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5933 - sparse_categorical_accuracy: 0.7443
Epoch 10/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5862 - sparse_categorical_accuracy: 0.7481
Epoch 11/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5809 - sparse_categorical_accuracy: 0.7507
Epoch 12/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5777 - sparse_categorical_accuracy: 0.7519
Epoch 13/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5736 - sparse_categorical_accuracy: 0.7534
Epoch 14/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5716 - sparse_categorical_accuracy: 0.7545
Epoch 15/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5736 - sparse_categorical_accuracy: 0.7537
Epoch 16/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5712 - sparse_categorical_accuracy: 0.7559
Epoch 17/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5683 - sparse_categorical_accuracy: 0.7564
Epoch 18/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5666 - sparse_categorical_accuracy: 0.7569
Epoch 19/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5652 - sparse_categorical_accuracy: 0.7575
Epoch 20/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5634 - sparse_categorical_accuracy: 0.7583
Epoch 21/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5677 - sparse_categorical_accuracy: 0.7563
Epoch 22/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5651 - sparse_categorical_accuracy: 0.7578
Epoch 23/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5628 - sparse_categorical_accuracy: 0.7586
Epoch 24/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5619 - sparse_categorical_accuracy: 0.7593
Epoch 25/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5603 - sparse_categorical_accuracy: 0.7589
Epoch 26/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5644 - sparse_categorical_accuracy: 0.7585
Epoch 27/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5592 - sparse_categorical_accuracy: 0.7604
Epoch 28/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5571 - sparse_categorical_accuracy: 0.7616
Epoch 29/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5556 - sparse_categorical_accuracy: 0.7629
Epoch 30/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5538 - sparse_categorical_accuracy: 0.7640
Epoch 31/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5535 - sparse_categorical_accuracy: 0.7635
Epoch 32/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5521 - sparse_categorical_accuracy: 0.7645
Epoch 33/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5505 - sparse_categorical_accuracy: 0.7648
Epoch 34/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5494 - sparse_categorical_accuracy: 0.7657
Epoch 35/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5496 - sparse_categorical_accuracy: 0.7660
Epoch 36/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5488 - sparse_categorical_accuracy: 0.7673
Epoch 37/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5471 - sparse_categorical_accuracy: 0.7668
Epoch 38/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5474 - sparse_categorical_accuracy: 0.7673
Epoch 39/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5457 - sparse_categorical_accuracy: 0.7674
Epoch 40/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5452 - sparse_categorical_accuracy: 0.7689
Epoch 41/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5448 - sparse_categorical_accuracy: 0.7679
Epoch 42/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.5442 - sparse_categorical_accuracy: 0.7692
Epoch 43/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5436 - sparse_categorical_accuracy: 0.7701
Epoch 44/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5419 - sparse_categorical_accuracy: 0.7706
Epoch 45/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5432 - sparse_categorical_accuracy: 0.7691
Epoch 46/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5406 - sparse_categorical_accuracy: 0.7708
Epoch 47/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5412 - sparse_categorical_accuracy: 0.7701
Epoch 48/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5400 - sparse_categorical_accuracy: 0.7701
Epoch 49/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5416 - sparse_categorical_accuracy: 0.7699
Epoch 50/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5403 - sparse_categorical_accuracy: 0.7701
Model training finished
Test accuracy: 79.04%
The Wide & Deep model achieves about 79% test accuracy.
Experiment 3: Deep & Cross model
In the third experiment, we create a Deep & Cross model. The deep part of this model is the same as the deep part created in the previous experiment. The key idea of the cross part is to apply explicit feature crossing in an efficient way, where the degree of the cross features grows with layer depth.
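As a rough sketch of the cross interaction (not part of the original code), a single cross layer computes x_next = x0 * (W x + b) + x, where x0 is the embedded base input, * is element-wise multiplication, and the trailing + x is a residual connection; each stacked layer raises the degree of the feature interactions by one. The snippet below runs this step on random tensors; the batch size, feature width, and variable names are illustrative only.

# Illustration only: one cross-layer step on random data.
x0_demo = keras.ops.convert_to_tensor(np.random.rand(4, 8).astype("float32"))
x_demo = x0_demo  # at the first cross layer, the running features equal x0
dense_demo = layers.Dense(8)  # learns W and b
x_next_demo = x0_demo * dense_demo(x_demo) + x_demo  # element-wise cross + residual
print(x_next_demo.shape)  # (4, 8): same width, higher-degree interactions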
def create_deep_and_cross_model():
inputs = create_model_inputs()
x0 = encode_inputs(inputs, use_embedding=True)
cross = x0
for _ in hidden_units:
units = cross.shape[-1]
x = layers.Dense(units)(cross)
cross = x0 * x + cross
cross = layers.BatchNormalization()(cross)
deep = x0
for units in hidden_units:
deep = layers.Dense(units)(deep)
deep = layers.BatchNormalization()(deep)
deep = layers.ReLU()(deep)
deep = layers.Dropout(dropout_rate)(deep)
merged = layers.concatenate([cross, deep])
outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(merged)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
deep_and_cross_model = create_deep_and_cross_model()
keras.utils.plot_model(deep_and_cross_model, show_shapes=True, rankdir="LR")
Let's run it:
run_experiment(deep_and_cross_model)
Start training the model...
Epoch 1/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 5s 2ms/step - loss: 0.9221 - sparse_categorical_accuracy: 0.6235
Epoch 2/50
116/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6388 - sparse_categorical_accuracy: 0.7257
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py:153: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
self.gen.throw(typ, value, traceback)
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 2ms/step - loss: 0.6271 - sparse_categorical_accuracy: 0.7316
Epoch 3/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.6023 - sparse_categorical_accuracy: 0.7403
Epoch 4/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5896 - sparse_categorical_accuracy: 0.7453
Epoch 5/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5899 - sparse_categorical_accuracy: 0.7438
Epoch 6/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5960 - sparse_categorical_accuracy: 0.7421
Epoch 7/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5813 - sparse_categorical_accuracy: 0.7481
Epoch 8/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5748 - sparse_categorical_accuracy: 0.7500
Epoch 9/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5743 - sparse_categorical_accuracy: 0.7502
Epoch 10/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5739 - sparse_categorical_accuracy: 0.7506
Epoch 11/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5673 - sparse_categorical_accuracy: 0.7540
Epoch 12/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5649 - sparse_categorical_accuracy: 0.7561
Epoch 13/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.5651 - sparse_categorical_accuracy: 0.7548
Epoch 14/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5618 - sparse_categorical_accuracy: 0.7563
Epoch 15/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5599 - sparse_categorical_accuracy: 0.7571
Epoch 16/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5568 - sparse_categorical_accuracy: 0.7585
Epoch 17/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5556 - sparse_categorical_accuracy: 0.7592
Epoch 18/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5544 - sparse_categorical_accuracy: 0.7595
Epoch 19/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5533 - sparse_categorical_accuracy: 0.7603
Epoch 20/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5532 - sparse_categorical_accuracy: 0.7597
Epoch 21/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5531 - sparse_categorical_accuracy: 0.7602
Epoch 22/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5516 - sparse_categorical_accuracy: 0.7608
Epoch 23/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.5503 - sparse_categorical_accuracy: 0.7611
Epoch 24/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5492 - sparse_categorical_accuracy: 0.7619
Epoch 25/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5482 - sparse_categorical_accuracy: 0.7623
Epoch 26/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5464 - sparse_categorical_accuracy: 0.7635
Epoch 27/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5483 - sparse_categorical_accuracy: 0.7625
Epoch 28/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.5654 - sparse_categorical_accuracy: 0.7555
Epoch 29/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5545 - sparse_categorical_accuracy: 0.7593
Epoch 30/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5512 - sparse_categorical_accuracy: 0.7603
Epoch 31/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5493 - sparse_categorical_accuracy: 0.7616
Epoch 32/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5485 - sparse_categorical_accuracy: 0.7627
Epoch 33/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5593 - sparse_categorical_accuracy: 0.7588
Epoch 34/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5536 - sparse_categorical_accuracy: 0.7608
Epoch 35/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5537 - sparse_categorical_accuracy: 0.7612
Epoch 36/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5518 - sparse_categorical_accuracy: 0.7621
Epoch 37/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5502 - sparse_categorical_accuracy: 0.7618
Epoch 38/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5537 - sparse_categorical_accuracy: 0.7597
Epoch 39/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5526 - sparse_categorical_accuracy: 0.7609
Epoch 40/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5508 - sparse_categorical_accuracy: 0.7608
Epoch 41/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5495 - sparse_categorical_accuracy: 0.7613
Epoch 42/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.5478 - sparse_categorical_accuracy: 0.7625
Epoch 43/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5471 - sparse_categorical_accuracy: 0.7629
Epoch 44/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5462 - sparse_categorical_accuracy: 0.7640
Epoch 45/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5458 - sparse_categorical_accuracy: 0.7633
Epoch 46/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5466 - sparse_categorical_accuracy: 0.7635
Epoch 47/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5492 - sparse_categorical_accuracy: 0.7633
Epoch 48/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5474 - sparse_categorical_accuracy: 0.7639
Epoch 49/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5452 - sparse_categorical_accuracy: 0.7645
Epoch 50/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5446 - sparse_categorical_accuracy: 0.7663
Model training finished
Test accuracy: 77.98%
The Deep & Cross model achieves about 78% test accuracy.
Conclusion
You can use Keras preprocessing layers to easily handle categorical features with different encoding mechanisms, including one-hot encoding and feature embedding. In addition, different model architectures, such as wide, deep, and cross networks, have different advantages with respect to different dataset properties. You can explore using them independently or combining them to achieve the best result for your dataset.