Welcome! During this ungraded lab you are going to be working with SHAP (SHapley Additive exPlanations). This procedure is derived from game theory and aims to understand (or explain) the output of any machine learning model. In particular you will:

- Train a simple CNN on the Fashion MNIST dataset.
- Compute the shap values for an example of each class.
- Visualize these values and derive information from them.
To learn more about Shapley Values visit the official SHAP repo.
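If the game-theory origin feels abstract, the following minimal sketch (an illustration added here, not part of the original lab) computes exact Shapley values for a made-up two-feature "game" by brute force, averaging each player's marginal contribution over every possible ordering:

# Illustrative only: exact Shapley values for a hypothetical two-player game
from itertools import permutations

def toy_shapley(players, v):
    # Average each player's marginal contribution over all orderings
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition |= {p}
    return {p: t / len(orderings) for p, t in totals.items()}

# Hypothetical payoffs: "a" alone is worth 10, "b" alone 20, both together 50
payoffs = {frozenset(): 0, frozenset({"a"}): 10,
           frozenset({"b"}): 20, frozenset({"a", "b"}): 50}

print(toy_shapley(["a", "b"], lambda s: payoffs[frozenset(s)]))
# {'a': 20.0, 'b': 30.0} -- the values add up to the full payoff of 50

This additivity property (the attributions sum to the total payoff) is exactly what makes shap values useful for attributing a model's output to its input features.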
Let's get started!
Begin by installing the shap library:
# Install this package to use Colab's GPU for training
!apt install --allow-change-held-packages libcudnn8=8.4.1.50-1+cuda11.6
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
  libnvidia-common-460
Use 'apt autoremove' to remove it.
The following packages will be REMOVED:
  libcudnn8-dev
The following held packages will be changed:
  libcudnn8
The following packages will be upgraded:
  libcudnn8
1 upgraded, 0 newly installed, 1 to remove and 18 not upgraded.
Need to get 420 MB of archives.
After this operation, 3,369 MB disk space will be freed.
Get:1 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 libcudnn8 8.4.1.50-1+cuda11.6 [420 MB]
Fetched 420 MB in 14s (29.9 MB/s)
(Reading database ... 155685 files and directories currently installed.)
Removing libcudnn8-dev (8.0.5.39-1+cuda11.1) ...
(Reading database ... 155663 files and directories currently installed.)
Preparing to unpack .../libcudnn8_8.4.1.50-1+cuda11.6_amd64.deb ...
Unpacking libcudnn8 (8.4.1.50-1+cuda11.6) over (8.0.5.39-1+cuda11.1) ...
Setting up libcudnn8 (8.4.1.50-1+cuda11.6) ...
!pip install shap
!pip install tensorflow==2.4.3
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting shap
Downloading shap-0.41.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (569 kB)
|████████████████████████████████| 569 kB 28.1 MB/s
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.7/dist-packages (from shap) (1.5.0)
Requirement already satisfied: tqdm>4.25.0 in /usr/local/lib/python3.7/dist-packages (from shap) (4.64.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from shap) (1.21.6)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from shap) (1.7.3)
Requirement already satisfied: packaging>20.9 in /usr/local/lib/python3.7/dist-packages (from shap) (21.3)
Collecting slicer==0.0.7
Downloading slicer-0.0.7-py3-none-any.whl (14 kB)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from shap) (1.0.2)
Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from shap) (1.3.5)
Requirement already satisfied: numba in /usr/local/lib/python3.7/dist-packages (from shap) (0.56.0)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>20.9->shap) (3.0.9)
Requirement already satisfied: llvmlite<0.40,>=0.39.0dev0 in /usr/local/lib/python3.7/dist-packages (from numba->shap) (0.39.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from numba->shap) (4.12.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from numba->shap) (57.4.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->numba->shap) (3.8.1)
Requirement already satisfied: typing-extensions>=3.6.4 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->numba->shap) (4.1.1)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/dist-packages (from pandas->shap) (2022.2.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->shap) (2.8.2)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas->shap) (1.15.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->shap) (3.1.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->shap) (1.1.0)
Installing collected packages: slicer, shap
Successfully installed shap-0.41.0 slicer-0.0.7
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting tensorflow==2.4.3
Downloading tensorflow-2.4.3-cp37-cp37m-manylinux2010_x86_64.whl (394.5 MB)
|████████████████████████████████| 394.5 MB 383 bytes/s
Collecting numpy~=1.19.2
Downloading numpy-1.19.5-cp37-cp37m-manylinux2010_x86_64.whl (14.8 MB)
|████████████████████████████████| 14.8 MB 68.3 MB/s
Requirement already satisfied: six~=1.15.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.4.3) (1.15.0)
Collecting flatbuffers~=1.12.0
Downloading flatbuffers-1.12-py2.py3-none-any.whl (15 kB)
Collecting grpcio~=1.32.0
Downloading grpcio-1.32.0-cp37-cp37m-manylinux2014_x86_64.whl (3.8 MB)
|████████████████████████████████| 3.8 MB 63.0 MB/s
Requirement already satisfied: opt-einsum~=3.3.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.4.3) (3.3.0)
Requirement already satisfied: tensorboard~=2.4 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.4.3) (2.8.0)
Requirement already satisfied: wheel~=0.35 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.4.3) (0.37.1)
Collecting h5py~=2.10.0
Downloading h5py-2.10.0-cp37-cp37m-manylinux1_x86_64.whl (2.9 MB)
|████████████████████████████████| 2.9 MB 55.8 MB/s
Requirement already satisfied: astunparse~=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.4.3) (1.6.3)
Collecting gast==0.3.3
Downloading gast-0.3.3-py2.py3-none-any.whl (9.7 kB)
Requirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.4.3) (3.17.3)
Requirement already satisfied: google-pasta~=0.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.4.3) (0.2.0)
Collecting wrapt~=1.12.1
Downloading wrapt-1.12.1.tar.gz (27 kB)
Collecting absl-py~=0.10
Downloading absl_py-0.15.0-py3-none-any.whl (132 kB)
|████████████████████████████████| 132 kB 70.2 MB/s
Collecting tensorflow-estimator<2.5.0,>=2.4.0
Downloading tensorflow_estimator-2.4.0-py2.py3-none-any.whl (462 kB)
|████████████████████████████████| 462 kB 74.3 MB/s
Requirement already satisfied: termcolor~=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.4.3) (1.1.0)
Requirement already satisfied: keras-preprocessing~=1.1.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.4.3) (1.1.2)
Collecting typing-extensions~=3.7.4
Downloading typing_extensions-3.7.4.3-py3-none-any.whl (22 kB)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.4->tensorflow==2.4.3) (0.6.1)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.4->tensorflow==2.4.3) (1.8.1)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.4->tensorflow==2.4.3) (1.0.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.4->tensorflow==2.4.3) (3.4.1)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.4->tensorflow==2.4.3) (0.4.6)
Requirement already satisfied: google-auth<3,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.4->tensorflow==2.4.3) (1.35.0)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.4->tensorflow==2.4.3) (57.4.0)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.4->tensorflow==2.4.3) (2.23.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard~=2.4->tensorflow==2.4.3) (4.2.4)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard~=2.4->tensorflow==2.4.3) (4.9)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard~=2.4->tensorflow==2.4.3) (0.2.8)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow==2.4.3) (1.3.1)
Requirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard~=2.4->tensorflow==2.4.3) (4.12.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard~=2.4->tensorflow==2.4.3) (3.8.1)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard~=2.4->tensorflow==2.4.3) (0.4.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.4->tensorflow==2.4.3) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.4->tensorflow==2.4.3) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.4->tensorflow==2.4.3) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.4->tensorflow==2.4.3) (2022.6.15)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow==2.4.3) (3.2.0)
Building wheels for collected packages: wrapt
Building wheel for wrapt (setup.py) ... done
Created wheel for wrapt: filename=wrapt-1.12.1-cp37-cp37m-linux_x86_64.whl size=68716 sha256=ec33acb57f166eacf6a7bbe580d98a1bd7b376b0f6c9388c3de22594bf7f8daa
Stored in directory: /root/.cache/pip/wheels/62/76/4c/aa25851149f3f6d9785f6c869387ad82b3fd37582fa8147ac6
Successfully built wrapt
Installing collected packages: typing-extensions, numpy, grpcio, absl-py, wrapt, tensorflow-estimator, h5py, gast, flatbuffers, tensorflow
Attempting uninstall: typing-extensions
Found existing installation: typing-extensions 4.1.1
Uninstalling typing-extensions-4.1.1:
Successfully uninstalled typing-extensions-4.1.1
Attempting uninstall: numpy
Found existing installation: numpy 1.21.6
Uninstalling numpy-1.21.6:
Successfully uninstalled numpy-1.21.6
Attempting uninstall: grpcio
Found existing installation: grpcio 1.47.0
Uninstalling grpcio-1.47.0:
Successfully uninstalled grpcio-1.47.0
Attempting uninstall: absl-py
Found existing installation: absl-py 1.2.0
Uninstalling absl-py-1.2.0:
Successfully uninstalled absl-py-1.2.0
Attempting uninstall: wrapt
Found existing installation: wrapt 1.14.1
Uninstalling wrapt-1.14.1:
Successfully uninstalled wrapt-1.14.1
Attempting uninstall: tensorflow-estimator
Found existing installation: tensorflow-estimator 2.8.0
Uninstalling tensorflow-estimator-2.8.0:
Successfully uninstalled tensorflow-estimator-2.8.0
Attempting uninstall: h5py
Found existing installation: h5py 3.1.0
Uninstalling h5py-3.1.0:
Successfully uninstalled h5py-3.1.0
Attempting uninstall: gast
Found existing installation: gast 0.5.3
Uninstalling gast-0.5.3:
Successfully uninstalled gast-0.5.3
Attempting uninstall: flatbuffers
Found existing installation: flatbuffers 2.0.7
Uninstalling flatbuffers-2.0.7:
Successfully uninstalled flatbuffers-2.0.7
Attempting uninstall: tensorflow
Found existing installation: tensorflow 2.8.2+zzzcolab20220719082949
Uninstalling tensorflow-2.8.2+zzzcolab20220719082949:
Successfully uninstalled tensorflow-2.8.2+zzzcolab20220719082949
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
xarray-einstats 0.2.2 requires numpy>=1.21, but you have numpy 1.19.5 which is incompatible.
cmdstanpy 1.0.7 requires numpy>=1.21, but you have numpy 1.19.5 which is incompatible.
Successfully installed absl-py-0.15.0 flatbuffers-1.12 gast-0.3.3 grpcio-1.32.0 h5py-2.10.0 numpy-1.19.5 tensorflow-2.4.3 tensorflow-estimator-2.4.0 typing-extensions-3.7.4.3 wrapt-1.12.1
Now import all necessary dependencies:
import shap
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
For this lab you will use the Fashion MNIST dataset. Load it and pre-process the data before feeding it into the model:
# Download the dataset
(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
# Reshape and normalize data
x_train = x_train.reshape(60000, 28, 28, 1).astype("float32") / 255
x_test = x_test.reshape(10000, 28, 28, 1).astype("float32") / 255
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 1s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 0s 0us/step
For the CNN model you will use a simple architecture: a single convolutional layer followed by a max pooling layer, connected to a fully connected layer with 256 units and an output layer with 10 units, since there are 10 categories.
Define the model using Keras' Functional API:
# Define the model architecture using the functional API
inputs = keras.Input(shape=(28, 28, 1))
x = keras.layers.Conv2D(32, (3, 3), activation='relu')(inputs)
x = keras.layers.MaxPooling2D((2, 2))(x)
x = keras.layers.Flatten()(x)
x = keras.layers.Dense(256, activation='relu')(x)
outputs = keras.layers.Dense(10, activation='softmax')(x)
# Create the model with the corresponding inputs and outputs
model = keras.Model(inputs=inputs, outputs=outputs, name="CNN")
# Compile the model
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
optimizer=keras.optimizers.Adam(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]
)
# Train it!
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
Epoch 1/5
1875/1875 [==============================] - 21s 5ms/step - loss: 0.5088 - sparse_categorical_accuracy: 0.8205 - val_loss: 0.3334 - val_sparse_categorical_accuracy: 0.8772
Epoch 2/5
1875/1875 [==============================] - 5s 3ms/step - loss: 0.2577 - sparse_categorical_accuracy: 0.9063 - val_loss: 0.2857 - val_sparse_categorical_accuracy: 0.8929
Epoch 3/5
1875/1875 [==============================] - 5s 3ms/step - loss: 0.2097 - sparse_categorical_accuracy: 0.9212 - val_loss: 0.2501 - val_sparse_categorical_accuracy: 0.9070
Epoch 4/5
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1685 - sparse_categorical_accuracy: 0.9373 - val_loss: 0.2578 - val_sparse_categorical_accuracy: 0.9126
Epoch 5/5
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1357 - sparse_categorical_accuracy: 0.9494 - val_loss: 0.2497 - val_sparse_categorical_accuracy: 0.9149
<tensorflow.python.keras.callbacks.History at 0x7fbfa0069e10>
Judging by the accuracy metrics, it looks like the model is overfitting. However, it achieves over 90% accuracy on the test set, so its performance is adequate for the purposes of this lab.
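If you want to quantify that gap yourself, a quick optional check (not part of the original lab) is to evaluate the model on both splits; with the compilation above, model.evaluate returns the loss followed by the accuracy metric:

# Optional: measure the train/test accuracy gap mentioned above
train_loss, train_acc = model.evaluate(x_train, y_train, verbose=0)
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print(f"train accuracy: {train_acc:.4f} | test accuracy: {test_acc:.4f}")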
You know that the model is correctly classifying around 90% of the images in the test set. But how is it doing it? What pixels are being used to determine if an image belongs to a particular class?
To answer these questions you can use SHAP values.
Before doing so, check what each one of the categories looks like:
# Name each one of the classes
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# Save an example for each category in a dict
images_dict = dict()
for i, l in enumerate(y_train):
    if len(images_dict) == 10:
        break
    if l not in images_dict.keys():
        images_dict[l] = x_train[i].reshape((28, 28))
# Function to plot images
def plot_categories(images):
    fig, axes = plt.subplots(1, 11, figsize=(16, 15))
    axes = axes.flatten()

    # Plot an empty canvas
    ax = axes[0]
    dummy_array = np.array([[[0, 0, 0, 0]]], dtype='uint8')
    ax.set_title("reference")
    ax.set_axis_off()
    ax.imshow(dummy_array, interpolation='nearest')

    # Plot an image for every category
    for k, v in images.items():
        ax = axes[k+1]
        ax.imshow(v, cmap=plt.cm.binary)
        ax.set_title(f"{class_names[k]}")
        ax.set_axis_off()

    plt.tight_layout()
    plt.show()
# Use the function to plot
plot_categories(images_dict)
Now you know what the items in each one of the categories look like.
You might wonder what the empty image at the left is for. You will see shortly why it is important.
To compute shap values for the model you just trained you will use the DeepExplainer class from the shap library.

To instantiate this class you need to pass in the model along with training examples. Notice that not all of the training examples are passed in, but only a fraction of them. This is done because the computations performed by the DeepExplainer object are very RAM-intensive, and you might otherwise run out of memory.
# Take a random sample of 5000 training images
background = x_train[np.random.choice(x_train.shape[0], 5000, replace=False)]
# Use DeepExplainer to explain predictions of the model
e = shap.DeepExplainer(model, background)
# Compute shap values
# shap_values = e.shap_values(x_test[1:5])
Your TensorFlow version is newer than 2.4.0 and so graph support has been removed in eager mode and some static graphs may not be supported. See PR #1483 for discussion.
Now you can use the DeepExplainer instance to compute shap values for images in the test set.
So you can properly visualize these values for each class, create an array that contains one element of each class from the test set:
# Save an example of each class from the test set
x_test_dict = dict()
for i, l in enumerate(y_test):
    if len(x_test_dict) == 10:
        break
    if l not in x_test_dict.keys():
        x_test_dict[l] = x_test[i]
# Convert to list preserving order of classes
x_test_each_class = [x_test_dict[i] for i in sorted(x_test_dict)]
# Convert to tensor
x_test_each_class = np.asarray(x_test_each_class)
# Print shape of tensor
print(f"x_test_each_class tensor has shape: {x_test_each_class.shape}")
x_test_each_class tensor has shape: (10, 28, 28, 1)
Before computing the shap values, make sure that the model is able to correctly classify each one of the examples you just picked:
# Compute predictions
predictions = model.predict(x_test_each_class)
# Apply argmax to get predicted class
np.argmax(predictions, axis=1)
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
Since the test examples are ordered according to the class number and the predictions array is also ordered, the model was able to correctly classify each one of these images.
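If you prefer this check to fail loudly instead of relying on visual inspection, a one-line assertion (a small addition, not in the original lab) does the trick:

# Optional: assert that every selected example is classified correctly
assert np.array_equal(np.argmax(predictions, axis=1), np.arange(10)), \
    "the model misclassified at least one of the selected examples"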
Now that you have an example of each class, compute the Shap values for each example:
# Compute shap values using DeepExplainer instance
shap_values = e.shap_values(x_test_each_class)
`tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
Now take a look at the computed shap values. To understand the next illustration, keep these points in mind:

- Red pixels represent positive shap values, which push the prediction towards that class, while blue pixels represent negative shap values, which push the prediction away from it.
- The first column that shap.image_plot produces just makes a copy of the classified image, but you can use the plot_categories function you created earlier to show an example of that class for reference.

# Plot reference column
plot_categories(images_dict)
# Print an empty line to separate the two plots
print()
# Plot shap values
shap.image_plot(shap_values, -x_test_each_class)
Now take some time to understand what the plot is showing you. Since the model is able to correctly classify each one of these 10 images, it makes sense that the shap values along the diagonal are the most prevalent, especially positive values, since that is the class the model (correctly) predicted.
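You can also verify this diagonal dominance numerically. The sketch below (added for illustration) assumes shap_values is the list returned by the DeepExplainer above, with one array of shape (10, 28, 28, 1) per class; summing each array over all pixels gives the total attribution of every image towards that class:

# Total shap value of each (class, image) pair
totals = np.array([sv.sum(axis=(1, 2, 3)) for sv in shap_values])

# For each image (column), the class with the largest total attribution
# should be the image's own class, i.e. the diagonal of this matrix
print(np.argmax(totals, axis=0))  # expected: [0 1 2 ... 9], or close to it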
What else can you derive from this plot? Try focusing on one example. For instance, focus on the coat, which is the fifth class. It looks like the model also had "reasons" to classify it as a pullover or a shirt. This can be concluded from the presence of positive shap values for these classes.
Let's take a look at the tensor of predictions to double check if this was the case:
# Save the probability of belonging to each class for the fifth element of the set
coat_probs = predictions[4]
# Order the probabilities in ascending order
coat_args = np.argsort(coat_probs)
# Reverse the list and get the top 3 probabilities
top_coat_args = coat_args[::-1][:3]
# Print (ordered) top 3 classes
for i in list(top_coat_args):
    print(class_names[i])
Coat
Pullover
Shirt
Indeed, the model selected these 3 classes as the most probable ones for the coat image. This makes sense, since these objects are similar to each other.
Now look at the t-shirt, which is the first class. This object is very similar to the pullover but without the long sleeves. It is no surprise that white pixels in the area where the long sleeves would be yield high shap values for classifying the image as a t-shirt. In the same way, white pixels in this area yield negative shap values for classifying it as a pullover, since the model would expect these pixels to be colored if the item were indeed a pullover.
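To put a rough number on that intuition, the following sketch compares the average shap value in a hand-picked "sleeve" region of the t-shirt image, once towards the T-shirt/top class (index 0) and once towards the Pullover class (index 2). The row/column ranges are an approximation chosen for illustration, not something defined in the lab:

# Approximate sleeve areas: upper rows, left and right edges of the 28x28 image
rows = slice(4, 14)
cols = np.r_[0:7, 21:28]

tshirt_sleeves = shap_values[0][0, rows, :, 0][:, cols]
pullover_sleeves = shap_values[2][0, rows, :, 0][:, cols]

print(f"mean shap towards T-shirt/top: {tshirt_sleeves.mean():+.6f}")
print(f"mean shap towards Pullover:    {pullover_sleeves.mean():+.6f}")
# If the reasoning above holds, the first mean should be positive and the second negative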
You can get a lot of insight by repeating this process for all the classes. What other conclusions can you arrive at?
Congratulations on finishing this ungraded lab! Now you should have a clearer understanding of what Shapley values are, why they are useful, and how to compute them using the shap library.
Deep Learning models were considered black boxes for a very long time. There is a natural trade-off between predictive power and explainability in Machine Learning, but thanks to the rise of new techniques such as SHapley Additive exPlanations, it is easier than ever before to explain the outputs of Deep Learning models.
Keep it up!