
tensorflow - Different results on f-1 score in binary classification task in CNN - Stack Overflow


I am building a CNN model for a binary classification task.

  • When I use binary_crossentropy as the loss function with a single neuron in the last layer, I get around 94% accuracy and 85% val_accuracy, but my F1 score is stuck around 69%.
  • When I use categorical_crossentropy as the loss function, the accuracy results are similar, but this time the F1 score is around 85%.
# Assumed imports (not shown in the question); CBAMLayer is the asker's custom attention layer
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, BatchNormalization,
                                     Flatten, Dense, Dropout)
from tensorflow.keras.regularizers import l2

model = Sequential([
    Input(shape=(*input_shape, 1)),

    Conv2D(64, (3, 3), activation='relu', padding="same", kernel_regularizer=l2(0.001)),
    CBAMLayer(),
    Conv2D(64, (3, 3), activation='relu', padding="same", kernel_regularizer=l2(0.001)),
    CBAMLayer(),
    MaxPooling2D((2, 2)),
    BatchNormalization(),

    Conv2D(64, (3, 3), activation='relu', padding="same", kernel_regularizer=l2(0.001)),
    CBAMLayer(),
    Conv2D(64, (3, 3), activation='relu', padding="same", kernel_regularizer=l2(0.001)),
    CBAMLayer(),
    MaxPooling2D((2, 2)),
    BatchNormalization(),
    
    Conv2D(128, (3, 3), activation='relu', padding="same", kernel_regularizer=l2(0.001)),
    Conv2D(128, (3, 3), activation='relu', padding="same", kernel_regularizer=l2(0.001)),
    CBAMLayer(),
    MaxPooling2D((2, 2)),
    BatchNormalization(),

    Conv2D(128, (3, 3), activation='relu', padding="same", kernel_regularizer=l2(0.001)),
    Conv2D(128, (3, 3), activation='relu', padding="same", kernel_regularizer=l2(0.001)),
    CBAMLayer(),
    MaxPooling2D((2, 2)),
    BatchNormalization(),
    
    Conv2D(256, (3, 3), activation='relu', padding="same", kernel_regularizer=l2(0.001)),
    Conv2D(256, (3, 3), activation='relu', padding="same", kernel_regularizer=l2(0.001)),
    CBAMLayer(),
    MaxPooling2D((2, 2)),
    BatchNormalization(),

    Flatten(),
    Dense(512, activation='relu'),
    Dropout(0.5),
    Dense(256, activation='relu'),
    Dropout(0.2),
    Dense(2, activation='softmax')
])
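One thing worth ruling out is how the model's probabilities are turned into hard labels before the F1 score is computed. A 1-unit sigmoid head thresholded at 0.5 and a 2-unit softmax head reduced with argmax should produce identical hard labels, so a large F1 gap between the two setups often points to an inconsistency in that binarization step rather than in the model itself. A minimal sketch of the check, assuming predictions are scored offline with scikit-learn and using made-up probabilities:

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical ground truth and sigmoid-head probabilities (made up for illustration)
y_true = np.array([0, 0, 1, 1, 1, 0])
p_sigmoid = np.array([0.2, 0.6, 0.8, 0.55, 0.4, 0.1])

# Sigmoid head: threshold the single probability at 0.5
y_pred_sigmoid = (p_sigmoid >= 0.5).astype(int)

# Softmax head with 2 units: the same information as [1 - p, p]; argmax over classes
p_softmax = np.stack([1 - p_sigmoid, p_sigmoid], axis=1)
y_pred_softmax = np.argmax(p_softmax, axis=1)

# With consistent binarization the two heads agree, so F1 must match too
assert (y_pred_sigmoid == y_pred_softmax).all()
print(f1_score(y_true, y_pred_sigmoid))
```

If the two setups disagree on the hard labels in your pipeline, the discrepancy is in the thresholding/argmax step (or in label shapes fed to the loss), not in the network.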

Can anyone tell me why this is happening, and what the solution is? I would also like to know the reason for the gap between accuracy and val_accuracy, even though the classes are balanced.

I have tried changing the model structure and the loss function, but it didn't work out.
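Another common cause of a 69% vs 85% style gap (this is a guess, since the question doesn't show how F1 is computed) is that the two runs report different F1 averages: scikit-learn's default `average='binary'` scores only the positive class, while `average='macro'` averages over both classes, and the two numbers can differ noticeably when one class is predicted less reliably. A small illustration with made-up predictions:

```python
from sklearn.metrics import f1_score

# Made-up labels where class 1 is predicted less reliably than class 0
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 0]

binary = f1_score(y_true, y_pred, average='binary')  # positive class only
macro = f1_score(y_true, y_pred, average='macro')    # unweighted mean over both classes
print(binary, macro)
```

Checking that both experiments use the same `average` setting (and the same positive-class convention) is cheap and eliminates one explanation for the mismatch.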
