Custom regularization function with a custom training loop in tf.keras (TF2) using GradientTape
I want to define my own custom regularizer and I am training with GradientTape. I am using the code below, but no matter how large I choose the tuning parameter (the 0.01 factor in the regularizer), the results stay the same. Does anyone know how I can get my custom regularizer working?
My model:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# state_dim, action_dim and seed are defined earlier in my script
inputs = layers.Input(shape=(state_dim,))
hidden1 = layers.Dense(units=40, activation=keras.layers.LeakyReLU(alpha=0.5),
                       kernel_regularizer=sparse_reg,
                       kernel_initializer=keras.initializers.HeUniform(seed=seed),
                       bias_initializer=keras.initializers.Zeros())(inputs)
hidden2 = layers.Dense(units=15, activation=keras.layers.LeakyReLU(alpha=0.5),
                       kernel_initializer=keras.initializers.HeUniform(seed=seed),
                       bias_initializer=keras.initializers.Zeros())(hidden1)
q_values = layers.Dense(units=action_dim,
                        activation="linear",
                        kernel_initializer=keras.initializers.HeUniform(seed=seed))(hidden2)
deep_q_network = keras.Model(inputs=inputs, outputs=q_values)
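Right after building the model I do a quick check to confirm the regularizer is registered at all (model.losses is where Keras collects the penalty terms):

print(deep_q_network.losses)  # I expect one entry here: the penalty on hidden1's kernel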
My customized regularizer:
def sparse_reg(weight_matrix):
    # L1 norm of each input unit's outgoing weights, then the square root
    # of each norm, summed over input units, scaled by the tuning parameter
    cumWeightPerInput = np.sum(np.abs(weight_matrix), axis=1)
    penalty = tf.reduce_sum(np.sqrt(cumWeightPerInput))
    return 0.01 * penalty
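To make my intent clear, this is the penalty I am trying to impose, written with TensorFlow ops only (just a sketch for illustration, not the code I currently run; the name sparse_reg_tf is made up for this post):

def sparse_reg_tf(weight_matrix):
    # group-sparsity-style penalty: L1 norm of each input unit's outgoing
    # weights, square root of each norm, summed over input units
    cum_weight_per_input = tf.reduce_sum(tf.abs(weight_matrix), axis=1)
    penalty = tf.reduce_sum(tf.sqrt(cum_weight_per_input))
    return 0.01 * penalty

For example, sparse_reg_tf(tf.ones((3, 5))) should give 0.01 * 3 * sqrt(5) ≈ 0.067.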
My training process:
with tf.GradientTape() as tape:
    currentQvalues = mainNetwork(S, training=True)
    loss_value = self.lossFunction(targetQvalues, currentQvalues)
    loss_regularization = tf.math.add_n(mainNetwork.losses)
    loss_value = loss_value + loss_regularization
grads = tape.gradient(loss_value, mainNetwork.trainable_variables)
opt.apply_gradients(zip(grads, mainNetwork.trainable_variables))
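For completeness, this is roughly how I drive that step end to end. The data, loss function and optimizer below are stand-ins I made up for this post; in my real code S and targetQvalues come from the agent, lossFunction lives on self, and mainNetwork is the deep_q_network above:

# Stand-in objects for a single training step (made up for this sketch)
batch_size = 32
S = tf.random.normal((batch_size, state_dim))
targetQvalues = tf.random.normal((batch_size, action_dim))
lossFunction = keras.losses.MeanSquaredError()
opt = keras.optimizers.Adam(learning_rate=1e-3)
mainNetwork = deep_q_network

with tf.GradientTape() as tape:
    currentQvalues = mainNetwork(S, training=True)
    loss_value = lossFunction(targetQvalues, currentQvalues)
    loss_regularization = tf.math.add_n(mainNetwork.losses)
    loss_value = loss_value + loss_regularization
grads = tape.gradient(loss_value, mainNetwork.trainable_variables)
opt.apply_gradients(zip(grads, mainNetwork.trainable_variables))

print("regularization loss this step:", float(loss_regularization))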
Tags: tensorflow, regularized, gradienttape