y_true and y_pred have different number of output (10!=1)

I am using the scikit-learn wrapper KerasClassifier for hyperparameter tuning of my LSTM model with RandomizedSearchCV. A summary of what I am doing:

1. xtrain has shape [355, 5, 10] and ytrain has shape [355, 10]. There are 355 training samples, with 10 features and 10 labels.
2. First I create the model using the build_lstm_model function.
3. Define the KerasClassifier.
4. Specify the parameters to be used for fitting, which determine the scoring.
5. Specify the parameters to be searched by RandomizedSearchCV.
6. Fit the model.

I am using 'neg_mean_squared_error' as the scoring metric. When I run the code I get the error "y_true and y_pred have different number of output (10!=1)".

I found that if I do not specify any scoring metric, it works fine. But I want to use neg_mean_squared_error, since it is a regression problem.

from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras import optimizers

# keras model
def build_lstm_model(n_blocks=6, n_cells=40, lr=0.001, lookback=lookback, n=n):
    model = Sequential()
    for i in range(n_blocks - 1):
        model.add(LSTM(n_cells, input_shape=(lookback, n), return_sequences=True,
                       activation='tanh', kernel_initializer='uniform'))
    model.add(LSTM(n_cells, input_shape=(lookback, n),
                   activation='tanh', kernel_initializer='uniform'))
    model.add(Dense(n))
    adam = optimizers.Adam(lr=lr, beta_1=0.9, beta_2=0.999, epsilon=None,
                           decay=0.0, amsgrad=False)
    model.compile(loss='mean_squared_error', optimizer=adam, metrics=['accuracy'])
    return model
# pass in the fixed parameters lookback and n
model_lstm = KerasClassifier(build_fn=build_lstm_model, lookback=lookback, n=n)
# specify other extra parameters passed to .fit
# number of epochs is set to a large number
keras_fit_params = {
    'epochs': 10,
    'batch_size': 16,
    'validation_data': (xvalid, yvalid),
    'verbose': 0
}
# random search parameters
# specify the options and store them inside the dictionary
# batch size and training method can also be hyperparameters, but it is fixed
n_blocks_params = [3, 4, 5, 6, 7, 8]
n_cells_params = [20, 30, 40, 50, 60]
lr_params = [0.001, 0.0001]
keras_param_options = {
    'n_blocks': n_blocks_params,
    'n_cells': n_cells_params,
    'lr': lr_params
}
# `verbose` 2 will print the info for every cross-validation fold, kind of too much
rs_lstm = RandomizedSearchCV(
    model_lstm,
    param_distributions=keras_param_options,
    #fit_params=keras_fit_params,
    scoring='neg_mean_squared_error',
    n_iter=3,
    cv=5,
    n_jobs=-1,
    #verbose=0
)
rs_lstm.fit(xtrain, ytrain)
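For reference, the same error can be reproduced with scikit-learn's metric alone, which suggests the shapes are the issue: KerasClassifier.predict returns a 1-D array of class labels, while ytrain has 10 output columns (the arrays below just mirror those shapes with dummy zeros):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.zeros((355, 10))   # targets with 10 outputs, as in ytrain
y_pred = np.zeros(355)         # classifier-style 1-D label predictions

try:
    mean_squared_error(y_true, y_pred)
except ValueError as e:
    print(e)  # y_true and y_pred have different number of output (10!=1)
```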

Is there a way I can use mean_squared_error as the metric in RandomizedSearchCV?


1 Answer

I was using KerasClassifier. I didn't know that there is another wrapper, KerasRegressor. When I use KerasRegressor, I can use regression-related metrics to find a good model. Thank you.
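The reason this fixes it is that a regressor's predict returns an array with the same number of output columns as y, so the 'neg_mean_squared_error' scorer sees matching shapes. The shape handling can be checked with any multi-output scikit-learn regressor; here is a minimal sketch using Ridge as a stand-in for the Keras model (Ridge, the random toy data, and the alpha grid are illustrative assumptions, not from the original code):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.RandomState(0)
X = rng.randn(355, 50)   # flattened stand-in for the (355, 5, 10) input
y = rng.randn(355, 10)   # 10 regression targets, as in ytrain

# With a regressor, predict() returns shape (n_samples, 10),
# so 'neg_mean_squared_error' works inside the search.
search = RandomizedSearchCV(
    Ridge(),
    param_distributions={'alpha': [0.01, 0.1, 1.0, 10.0]},
    scoring='neg_mean_squared_error',
    n_iter=3,
    cv=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

The same search setup from the question should work once KerasClassifier is swapped for KerasRegressor.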
