This article describes how to extract the weight matrices of a neural network trained with TensorFlow's tf.contrib.learn.DNNClassifier.

Problem description


Is there a way to extract weight matrices from TensorFlow's tf.contrib.learn.DNNClassifier? I've tried looking on the TensorFlow site for an answer, but I'm fairly new to this, so I haven't found anything helpful so far. Apologies in advance if there is already an explicit explanation for this that I wasn't able to find.

My code:

import numpy as np
import tensorflow as tf

# read the csv file into a labeled dataset
df = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename="data.csv",
    target_dtype=np.int,
    features_dtype=np.float64)

X = df.data
Y = df.target
# the feature column's dimension is the number of features per example,
# not the number of rows (len(X) would give the row count)
dimension = X.shape[1]

feature_columns = [tf.contrib.layers.real_valued_column("", dimension=dimension)]

classifier = tf.contrib.learn.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[10, 10],
    n_classes=2,
    activation_fn=tf.nn.sigmoid,
    optimizer=tf.train.ProximalAdagradOptimizer(
        learning_rate=0.1,
        l2_regularization_strength=0.001))

# Fit model
classifier.fit(x=X, y=Y, steps=2000)

Answer


After some research I think I've come up with the answer:

classifier.get_variable_value(classifier.get_variable_names()[3])   


classifier.get_variable_names() returns a list of the variable names:

['dnn/binary_logistic_head/dnn/learning_rate',
 'dnn/hiddenlayer_0/biases',
 'dnn/hiddenlayer_0/biases/dnn/dnn/hiddenlayer_0/biases/part_0/Adagrad',
 'dnn/hiddenlayer_0/weights',
 'dnn/hiddenlayer_0/weights/dnn/dnn/hiddenlayer_0/weights/part_0/Adagrad',
 'dnn/logits/biases',
 'dnn/logits/biases/dnn/dnn/logits/biases/part_0/Adagrad',
 'dnn/logits/weights',
 'dnn/logits/weights/dnn/dnn/logits/weights/part_0/Adagrad',
 'global_step']


And classifier.get_variable_names()[3] gets the fourth entry (index 3), the weights of the first hidden layer. The classifier in this case had one hidden layer with 10 neurons.
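Hard-coding the index 3 is brittle, since the position of a variable in the list changes with the network architecture. A minimal sketch of looking the position up by name instead, using the name list printed above as plain data:

```python
# The variable names as returned by classifier.get_variable_names() above.
names = [
    'dnn/binary_logistic_head/dnn/learning_rate',
    'dnn/hiddenlayer_0/biases',
    'dnn/hiddenlayer_0/biases/dnn/dnn/hiddenlayer_0/biases/part_0/Adagrad',
    'dnn/hiddenlayer_0/weights',
    'dnn/hiddenlayer_0/weights/dnn/dnn/hiddenlayer_0/weights/part_0/Adagrad',
    'dnn/logits/biases',
    'dnn/logits/biases/dnn/dnn/logits/biases/part_0/Adagrad',
    'dnn/logits/weights',
    'dnn/logits/weights/dnn/dnn/logits/weights/part_0/Adagrad',
    'global_step',
]

# Look the positions up by name instead of hard-coding [3].
hidden_idx = names.index('dnn/hiddenlayer_0/weights')
logits_idx = names.index('dnn/logits/weights')
print(hidden_idx, logits_idx)  # 3 7
```

With a trained classifier the index is not needed at all, since classifier.get_variable_value('dnn/hiddenlayer_0/weights') accepts the variable name directly.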


The eighth entry (index 7), 'dnn/logits/weights', gives the weights for the output layer.
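Once extracted, the arrays can be used to reproduce the network's forward pass outside TensorFlow. The sketch below uses random NumPy stand-ins for the values get_variable_value would return, under assumed shapes: n_features = 4 is a hypothetical width for data.csv, with one 10-neuron sigmoid hidden layer and a single-logit binary logistic head, as described above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(0)
n_features = 4  # hypothetical number of feature columns in data.csv

# Stand-ins for the extracted arrays and their expected shapes:
#   'dnn/hiddenlayer_0/weights' -> (n_features, 10)
#   'dnn/hiddenlayer_0/biases'  -> (10,)
#   'dnn/logits/weights'        -> (10, 1)
#   'dnn/logits/biases'         -> (1,)
w0 = rng.randn(n_features, 10)
b0 = rng.randn(10)
w_out = rng.randn(10, 1)
b_out = rng.randn(1)

x = rng.randn(5, n_features)          # a batch of 5 examples
hidden = sigmoid(x.dot(w0) + b0)      # activation_fn=tf.nn.sigmoid
logits = hidden.dot(w_out) + b_out    # linear output layer
probs = sigmoid(logits)               # logistic head -> class-1 probability
print(probs.shape)  # (5, 1)
```

Checking that a forward pass like this reproduces the classifier's own predict_proba output is a quick way to confirm the right matrices were extracted.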

