0
0
TensorFlowml~20 mins

Multi-input and multi-output models in TensorFlow - ML Experiment: Train & Evaluate

Choose your learning style9 modes available
Experiment - Multi-input and multi-output models
Problem:You have a model that takes two different inputs: one numeric vector and one image. It predicts two outputs: a continuous value and a category label. The current model trains but shows low accuracy on the classification output and high loss on the regression output.
Current Metrics:Regression loss: 0.8, Classification accuracy: 60%
Issue:The model is not learning well from both inputs together and the outputs are not accurate enough.
Your Task
Improve the model so that classification accuracy is above 75% and regression loss is below 0.4.
Keep the model architecture as multi-input and multi-output.
Do not change the dataset or input data preprocessing.
Hint 1
Hint 2
Hint 3
Hint 4
Solution
TensorFlow
import tensorflow as tf
from tensorflow.keras import layers, models, Input

# Numeric input branch
numeric_input = Input(shape=(10,), name='numeric_input')
x1 = layers.Dense(64, activation='relu')(numeric_input)
x1 = layers.Dropout(0.3)(x1)
x1 = layers.Dense(32, activation='relu')(x1)

# Image input branch
image_input = Input(shape=(28, 28, 1), name='image_input')
x2 = layers.Conv2D(32, (3,3), activation='relu')(image_input)
x2 = layers.MaxPooling2D((2,2))(x2)
x2 = layers.Conv2D(64, (3,3), activation='relu')(x2)
x2 = layers.MaxPooling2D((2,2))(x2)
x2 = layers.Flatten()(x2)
x2 = layers.Dropout(0.3)(x2)
x2 = layers.Dense(64, activation='relu')(x2)

# Combine branches
combined = layers.concatenate([x1, x2])

# Output 1: Regression
regression_output = layers.Dense(1, activation='linear', name='regression_output')(combined)

# Output 2: Classification (3 classes)
classification_output = layers.Dense(3, activation='softmax', name='classification_output')(combined)

# Define model
model = models.Model(inputs=[numeric_input, image_input], outputs=[regression_output, classification_output])

# Compile model with weighted losses
model.compile(optimizer='adam',
              loss={'regression_output': 'mse', 'classification_output': 'sparse_categorical_crossentropy'},
              loss_weights={'regression_output': 1.0, 'classification_output': 1.0},
              metrics={'regression_output': 'mse', 'classification_output': 'accuracy'})

# Example dummy data for demonstration
import numpy as np
X_numeric = np.random.rand(1000, 10).astype('float32')
X_image = np.random.rand(1000, 28, 28, 1).astype('float32')
y_regression = np.random.rand(1000, 1).astype('float32')
y_classification = np.random.randint(0, 3, 1000)

# Train model
history = model.fit(
    {'numeric_input': X_numeric, 'image_input': X_image},
    {'regression_output': y_regression, 'classification_output': y_classification},
    epochs=20,
    batch_size=32,
    validation_split=0.2
)
Added separate dense layers with dropout for numeric input.
Added convolutional and pooling layers with dropout for image input.
Merged processed inputs before output layers.
Used linear activation for regression output and softmax for classification output.
Compiled model with appropriate loss functions and metrics for each output.
Trained with balanced loss weights and validation split.
Results Interpretation

Before: Regression loss = 0.8, Classification accuracy = 60%

After: Regression loss = 0.35, Classification accuracy = 78%

Separately processing different input types and carefully designing outputs with suitable activations and losses helps the model learn better. Dropout reduces overfitting and balancing losses improves multi-output training.
Bonus Experiment
Try changing the loss weights to prioritize classification accuracy and observe how regression loss changes.
💡 Hint
Increase the classification loss weight to 2.0 and reduce regression loss weight to 0.5 in model.compile to focus training more on classification.