
python - "ValueError: Shapes are incompatible", but where? - Stack Overflow


I load my datasets with keras.utils.image_dataset_from_directory(). Here's the code:

train_ds = image_dataset_from_directory(
  "train",
  validation_split=0.2,
  subset="training",
  seed=123,
  image_size=(224, 224))

val_ds = image_dataset_from_directory(
  "test",
  validation_split=0.2,
  subset="validation",
  seed=123,
  image_size=(224, 224))

class_names = train_ds.class_names
print(class_names)

print("image and label batch for train, than for val")
for image_batch, labels_batch in train_ds:
  print(image_batch.shape)
  print(labels_batch.shape)
  break
for image_batch, labels_batch in val_ds:
  print(image_batch.shape)
  print(labels_batch.shape)
  break

And here's the output of this part:

Found 75 files belonging to 2 classes.
Using 60 files for training.
Found 65 files belonging to 2 classes.
Using 13 files for validation.
['corner', 'round']
image and label batch for train, then for val
(32, 224, 224, 3)
(32,)
(13, 224, 224, 3)
(13,)
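
For reference: I don't pass label_mode, so the loader uses its default ("int") and yields integer class ids of shape (batch,), which matches the (32,) and (13,) label shapes above. Just as an illustrative sketch (probe_ds is a throwaway name), the same call with one-hot labels would look like this:

probe_ds = image_dataset_from_directory(
  "train",
  validation_split=0.2,
  subset="training",
  seed=123,
  image_size=(224, 224),
  label_mode="categorical")  # one-hot labels instead of integer ids
for _, labels in probe_ds.take(1):
  print(labels.shape)  # e.g. (32, 2) instead of (32,)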

Then the tutorial had the normalization block:

normalization_layer = Rescaling(1./255)
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
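
The tutorial block only maps train_ds, and the fit() call further down still gets the raw train_ds and val_ds; purely as a sketch, rescaling both splits would look like this:

normalized_train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
normalized_val_ds = val_ds.map(lambda x, y: (normalization_layer(x), y))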

Then I create a Sequential model, partly fill it with VGG16 ImageNet weights, and add a few more layers for my two classes:

model = Sequential()
model.add(Conv2D(input_shape=(224,224,3),filters=64, name='block1_conv1',
                 kernel_size=(3,3),padding="same", activation="relu"))
...
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2), name='block5_pool'))

model.load_weights("vgg16_w.h5", skip_mismatch=True, by_name=True)
for i in range(0,9):
    model.layers[i].trainable = False

model.add(Flatten(name='flatten'))
model.add(Dense(512, activation="relu", name="deeper"))
model.add(Dense(512, activation="relu", name="more_deeper"))
model.add(Dense(2, activation="relu", name="to_narrow"))
model.add(Dense(2, activation="softmax", name="round_or_sharp"))
print(model.summary())
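
The last layer has 2 units, so the model's output shape ends in 2; a quick sanity check (illustrative only):

print(model.output_shape)  # (None, 2), coming from the final Dense(2) layer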

Then comes compiling and fitting:

boardy = TensorBoard(log_dir='./grph')
opt_big = Adam(learning_rate=0.3)
opt_small = SGD(learning_rate=0.001)
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=opt_big, metrics=['accuracy'])
model.fit(train_ds, validation_data=val_ds,epochs=48,callbacks=[boardy])
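
For reference, a minimal standalone check (not part of my script): categorical_crossentropy expects one-hot targets of shape (batch, num_classes), while sparse_categorical_crossentropy takes integer class ids of shape (batch,).

import tensorflow as tf

y_pred = tf.constant([[0.9, 0.1]])    # model output for one sample, shape (1, 2)
y_onehot = tf.constant([[1.0, 0.0]])  # one-hot target, shape (1, 2)
y_int = tf.constant([0])              # integer class id, shape (1,)

print(tf.keras.losses.categorical_crossentropy(y_onehot, y_pred))      # shapes line up
print(tf.keras.losses.sparse_categorical_crossentropy(y_int, y_pred))  # integer labels are fine here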

And here's what happens on fit():

Epoch 1/48
Traceback (most recent call last):
  File "D:/Practice/prjPy/graph_cnn/vgg-dataset.py", line 119, in <module>
    model.fit(train_ds, validation_data=val_ds,epochs=48,callbacks=[boardy])
  File "C:\Users\Juliy\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\src\utils\traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\Juliy\AppData\Local\Temp\__autograph_generated_filemtsrptmq.py", line 15, in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
ValueError: in user code:

    File "C:\Users\Juliy\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\src\engine\training.py", line 1338, in train_function  *
        return step_function(self, iterator)
    File "C:\Users\Juliy\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\src\engine\training.py", line 1322, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "C:\Users\Juliy\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\src\engine\training.py", line 1303, in run_step  **
        outputs = model.train_step(data)
    File "C:\Users\Juliy\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\src\engine\training.py", line 1081, in train_step
        loss = self.compute_loss(x, y, y_pred, sample_weight)
    File "C:\Users\Juliy\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\src\engine\training.py", line 1139, in compute_loss
        return self.compiled_loss(
    File "C:\Users\Juliy\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\src\engine\compile_utils.py", line 265, in __call__
        loss_value = loss_obj(y_t, y_p, sample_weight=sw)
    File "C:\Users\Juliy\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\src\losses.py", line 142, in __call__
        losses = call_fn(y_true, y_pred)
    File "C:\Users\Juliy\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\src\losses.py", line 268, in call  **
        return ag_fn(y_true, y_pred, **self._fn_kwargs)
    File "C:\Users\Juliy\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\src\losses.py", line 2122, in categorical_crossentropy
        return backend.categorical_crossentropy(
    File "C:\Users\Juliy\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\src\backend.py", line 5560, in categorical_crossentropy
        target.shape.assert_is_compatible_with(output.shape)

    ValueError: Shapes (None, 1) and (None, 2) are incompatible

The only "2" in my shapes is the one of the output layer, and using model.add(Dense(1, activation="softmax", name="wants_a_one")) helped to avoid the error message, but there wasn't any classifying.

Epoch 47/48

1/2 [==============>...............] - ETA: 12s - loss: 0.0000e+00 - accuracy: 0.5938
2/2 [==============================] - ETA: 0s - loss: 0.0000e+00 - accuracy: 0.5667 
2/2 [==============================] - 27s 14s/step - loss: 0.0000e+00 - accuracy: 0.5667 - val_loss: 0.0000e+00 - val_accuracy: 0.6923
Epoch 48/48

1/2 [==============>...............] - ETA: 12s - loss: 0.0000e+00 - accuracy: 0.5625
2/2 [==============================] - ETA: 0s - loss: 0.0000e+00 - accuracy: 0.5667 
2/2 [==============================] - 26s 13s/step - loss: 0.0000e+00 - accuracy: 0.5667 - val_loss: 0.0000e+00 - val_accuracy: 0.6923
expected round

1/1 [==============================] - ETA: 0s
1/1 [==============================] - 0s 364ms/step
[[1.]]

1/1 [==============================] - ETA: 0s
1/1 [==============================] - 0s 219ms/step
[[1.]]
expected sharp

1/1 [==============================] - ETA: 0s
1/1 [==============================] - 0s 276ms/step
[[1.]]

1/1 [==============================] - ETA: 0s
1/1 [==============================] - 0s 239ms/step
[[1.]]

Right now I'm practicing for a larger number of classes, so turning this into a binary setup isn't what I need.

How do I fix the ValueError?
