Friday, June 13, 2025

Today we'll be diving into TensorFlow, a popular machine learning library, and using it to build a simple convolutional neural network (CNN) for image classification.

A 1D tensor is a one-dimensional array (a vector), such as the list of numbers that makes up the input data we'll be working with today. To access the first element of a 1D tensor, you index it with an integer, starting at 0.
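For instance, here's a minimal sketch of 1D indexing in TensorFlow (the values are arbitrary):

```python
import tensorflow as tf

# A 1D tensor (vector) of four arbitrary values
v = tf.constant([3.0, 1.0, 4.0, 1.5])

print(v[0])     # first element, at index 0
print(v.shape)  # (4,)
```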

For example, suppose we have a dataset of cat and dog images. We can represent this data as a 2D tensor, where each row holds one flattened image. To access the first element of the second row, we would use `x[1][0]` (row index 1, column index 0).

A 2D tensor is a two-dimensional array that can hold any type of data, including flattened images and matrices. In our image dataset, each row of the 2D tensor represents one image, and the dataset contains examples of both cat and dog breeds.
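A small sketch makes the 2D indexing concrete (again with arbitrary values):

```python
import tensorflow as tf

# A 2D tensor with 2 rows and 3 columns
x = tf.constant([[10, 11, 12],
                 [20, 21, 22]])

print(x[1])     # the second row: [20 21 22]
print(x[1][0])  # first element of the second row: 20
print(x[1, 0])  # equivalent, more idiomatic indexing
```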

We'll use TensorFlow to build a simple CNN model that can recognize cats from images, just like we did in our previous blog post. The first step is to load the dataset into TensorFlow. We can do this by reading the data into a NumPy array and noting its shape (the number of rows and columns).

To load the image dataset, we'll use a few lines of Python:

```python
import numpy as np
import tensorflow as tf

# Load the dataset from disk into a NumPy array
dataset = np.loadtxt('dataset.csv', delimiter=',')

# Inspect the shape of the data (number of rows, number of columns)
rows, cols = dataset.shape

# Convert the NumPy array into a TensorFlow tensor
x = tf.constant(dataset, dtype=tf.float32)
```

We'll also need to convert our image data into a TensorFlow-compatible format by resizing each image to a fixed size of 28x28 pixels and normalizing its pixel values.
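Here's a minimal sketch of that preprocessing step, assuming `images` is a batch tensor of shape (batch, height, width, channels) with raw pixel values in 0-255 (the function name is just for illustration):

```python
import tensorflow as tf

def preprocess(images):
    # Resize every image in the batch to 28x28 (bilinear interpolation by default)
    resized = tf.image.resize(images, size=[28, 28])
    # Normalize pixel values from [0, 255] to [0, 1]
    return tf.cast(resized, tf.float32) / 255.0
```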

In the next step, we can build our CNN model using TensorFlow. We'll stack two convolutional blocks: each has a convolutional layer with a 5x5 kernel and 32 filters, followed by a 2x2 max-pooling layer that halves the spatial resolution, plus dropout for regularization. A final fully-connected layer with 10 units and a softmax activation produces the class predictions.

Here's an example code snippet for building our CNN model:

```python
import tensorflow as tf

def conv_net(input_shape=(28, 28, 1)):
    """Build a simple CNN: two convolutional blocks followed by a softmax head."""
    inputs = tf.keras.layers.Input(shape=input_shape)

    # First block: 5x5 convolution with 32 filters, 2x2 max pooling, dropout
    x = tf.keras.layers.Conv2D(filters=32, kernel_size=(5, 5),
                               padding='same', activation='relu')(inputs)
    x = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(x)
    x = tf.keras.layers.Dropout(rate=0.5)(x)

    # Second block: same structure, again 32 filters with a 5x5 kernel
    x = tf.keras.layers.Conv2D(filters=32, kernel_size=(5, 5),
                               padding='same', activation='relu')(x)
    x = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(x)
    x = tf.keras.layers.Dropout(rate=0.5)(x)

    # Flatten and add a fully-connected layer with 10 units and softmax activation
    x = tf.keras.layers.Flatten()(x)
    outputs = tf.keras.layers.Dense(units=10, activation='softmax', name='output')(x)

    model = tf.keras.Model(inputs=inputs, outputs=outputs)

    # Compile with the Adam optimizer, sparse categorical cross-entropy loss,
    # and accuracy as the reported metric
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

model = conv_net(input_shape=(28, 28, 1))
```

This code snippet defines a `conv_net` function that stacks two convolutional blocks (32 filters each, with 2x2 max pooling and dropout), flattens the result, and adds a fully-connected layer with 10 units and a softmax activation for prediction. The model is then compiled with the Adam optimizer, the `sparse_categorical_crossentropy` loss, and an accuracy metric.
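Before running inference, we would normally train the network. Here's a minimal sketch, assuming `x_train` is an image array of shape (num_examples, 28, 28, 1) and `y_train` holds integer class labels (matching the sparse cross-entropy loss above):

```python
# Hypothetical training run; x_train and y_train are assumed to be loaded already
model = conv_net(input_shape=(28, 28, 1))
model.fit(x_train, y_train, batch_size=32, epochs=5, validation_split=0.1)

# Save the learned weights so they can be restored for inference later
model.save_weights('cnn.weights.h5')
```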

Next, we can use TensorFlow to perform inference by calling the model's `predict` method. We'll pass in a batch of images as a NumPy array whose shape matches the model's expected input, (batch_size, 28, 28, 1), since we resized our images to 28x28 with a single channel. In practice we'd first restore the trained model weights, then pass our batched input data to `predict`.

Let's put this together in a simple TensorFlow program that performs inference on the MNIST image dataset using our CNN:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Load the MNIST dataset bundled with Keras: arrays of shape (batch, 28, 28)
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Add a channel dimension so each image has shape (28, 28, 1)
x_test = np.expand_dims(x_test.astype('float32'), axis=-1)

# Build the network with the functional API, rescaling pixel values to [0, 1]
inputs = keras.layers.Input(shape=(28, 28, 1))
x = keras.layers.Rescaling(scale=1.0 / 255)(inputs)

# First convolutional block: 32 filters, 5x5 kernel, 2x2 max pooling, dropout 0.5
x = keras.layers.Conv2D(filters=32, kernel_size=(5, 5), padding='same', activation='relu')(x)
x = keras.layers.MaxPooling2D(pool_size=(2, 2))(x)
x = keras.layers.Dropout(rate=0.5)(x)

# Second convolutional block: 64 filters, 3x3 kernel, 2x2 max pooling, dropout 0.5
x = keras.layers.Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='relu')(x)
x = keras.layers.MaxPooling2D(pool_size=(2, 2))(x)
x = keras.layers.Dropout(rate=0.5)(x)

# Flatten and classify with a 10-way softmax (one unit per digit class)
x = keras.layers.Flatten()(x)
outputs = keras.layers.Dense(units=10, activation='softmax')(x)

model = keras.Model(inputs=inputs, outputs=outputs)

# In practice, restore trained weights here (e.g. model.load_weights(...));
# an untrained network will produce essentially random predictions.
predictions = model.predict(x_test[:32])
print(np.argmax(predictions, axis=1))  # predicted digit for each image
```
