I'm building a deep learning model with TensorFlow and Python:

  • First, extract features with CNN layers.
  • Second, reshape the feature maps so they can be fed into an LSTM layer.

  • However, I get a dimension-mismatch error...

    ConcatOp : Dimensions of inputs should match: shape[0] = [71,48] vs. shape[1] = [1200,24]
    W_conv1 = weight_variable([1,conv_size,1,12])
    b_conv1 = bias_variable([12])
    
    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1)+ b_conv1)
    h_pool1 = max_pool_1xn(h_conv1)
    
    W_conv2 = weight_variable([1,conv_size,12,24])
    b_conv2 = bias_variable([24])
    
    h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    h_pool2 = max_pool_1xn(h_conv2)
    
    W_conv3 = weight_variable([1,conv_size,24,48])
    b_conv3 = bias_variable([48])
    
    h_conv3 = tf.nn.relu(conv2d(h_pool2, W_conv3) + b_conv3)
    h_pool3 = max_pool_1xn(h_conv3)
    
    
    print(h_pool3.get_shape())
    h3_rnn_input = tf.reshape(h_pool3, [-1, x_size // 8, 48])  # integer division; x_size / 8 is a float in Python 3
    
    num_layers = 1
    lstm_size = 24
    num_steps = 4
    
    lstm_cell = tf.nn.rnn_cell.LSTMCell(lstm_size, initializer = tf.contrib.layers.xavier_initializer(uniform = False))
    cell = tf.nn.rnn_cell.MultiRNNCell([lstm_cell]*num_layers)
    init_state = cell.zero_state(batch_size,tf.float32)
    
    
    cell_outputs = []
    state = init_state
    with tf.variable_scope("RNN") as scope:
        for time_step in range(num_steps):
            if time_step > 0: scope.reuse_variables()
            cell_output, state = cell(h3_rnn_input[:, time_step, :], state)  # <-- the error is raised here
            cell_outputs.append(cell_output)
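One way to avoid this class of error is to derive the step count from the tensor's actual shape instead of hard-coding `x_size/8`. A minimal NumPy sketch (the concrete shapes `[1200, 1, 71, 48]` are assumptions chosen to match the error message, not taken from the original code):

```python
import numpy as np

# Assumed pooled feature map: batch_size=1200, height 1, width 71, 48 channels.
batch_size, height, width, channels = 1200, 1, 71, 48
h_pool3 = np.zeros((batch_size, height, width, channels))

# Derive num_steps from the tensor itself so the batch dimension survives:
num_steps = h_pool3.shape[1] * h_pool3.shape[2]  # 1 * 71 = 71
h3_rnn_input = h_pool3.reshape(batch_size, num_steps, channels)
print(h3_rnn_input.shape)  # (1200, 71, 48) -- batch dim preserved
```

In TensorFlow 1.x the equivalent would be `tf.reshape(h_pool3, [batch_size, -1, 48])`, letting the framework infer the time dimension rather than the batch dimension.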
    

    Best answer

    When you feed input into an RNN cell, the input tensor and the state tensor must have the same batch size.

    The error message says that h3_rnn_input[:,time_step,:] has shape [71, 48] while state has shape [1200, 24].

    What you need to do is make the first dimension (batch_size) of the two tensors the same.

    If 71 is not the number you expect, check the convolution part: the strides and padding there determine the pooled width, and hence what the reshape produces.
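The mismatch can arise because a reshape with `-1` never fails as long as the element counts divide evenly; it silently reassigns elements and can change the leading "batch" dimension. A sketch of how the shapes in the error message could come about (the value 1200 standing in for a wrong `x_size // 8` is an assumption for illustration):

```python
import numpy as np

# Assumed pooled feature map: batch 1200, actual per-example width 71, 48 channels.
batch_size, width, channels = 1200, 71, 48
pooled = np.zeros((batch_size, 1, width, channels))

# Reshape with a wrong time dimension (1200 instead of 71): no error is raised,
# but the inferred leading dimension becomes 71 instead of 1200.
wrong_steps = 1200
reshaped = pooled.reshape(-1, wrong_steps, channels)
print(reshaped.shape)  # (71, 1200, 48)
```

Slicing `reshaped[:, t, :]` then yields a [71, 48] tensor, while the LSTM state built from `zero_state(batch_size, ...)` still has batch size 1200, which is exactly the [71,48] vs [1200,24] mismatch in the error.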

    Regarding python - ConcatOp : Dimensions of inputs should match, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/41088064/
