I have exported my VertexAI model to TFJS as "edge", which results in:
- dict.txt
- group1_shard1of2.bin
- group1_shard2of2.bin
- model.json
Now I send an image from my client to a Node/Express endpoint. I am having a tough time figuring this part out because I find the TFJS docs hard to follow, but here is what I have:
"@tensorflow/tfjs-node": "^4.22.0",
"@types/multer": "^1.4.12",
"multer": "^1.4.5-lts.1",
and then in my endpoint handler for image & model:
// (imports implied by the snippets below)
import fs from 'fs';
import path from 'path';
import multer, { memoryStorage } from 'multer';
import * as tf from '@tensorflow/tfjs-node';
import type { Request, Response } from 'express';

const upload = multer({
  storage: memoryStorage(),
  limits: {
    fileSize: 10 * 1024 * 1024, // 10MB limit
  },
}).single('image');
// Load the dictionary file
const loadDictionary = () => {
  const dictPath = path.join(__dirname, 'model', 'dict_03192025.txt');
  const content = fs.readFileSync(dictPath, 'utf-8');
  return content.split('\n').filter(line => line.trim() !== '');
};
const getTopPredictions = (
  predictions: number[],
  labels: string[],
  topK = 5
) => {
  // Get indices sorted by probability
  const indices = predictions
    .map((_, i) => i)
    .sort((a, b) => predictions[b] - predictions[a]);
  // Get top K predictions with their probabilities
  return indices.slice(0, topK).map(index => ({
    label: labels[index],
    probability: predictions[index],
  }));
};
export const scan = async (req: Request, res: Response) => {
  upload(req as any, res as any, async err => {
    if (err) {
      return res.status(400).send({ message: err.message });
    }
    const file = (req as any).file as Express.Multer.File;
    if (!file || !file.buffer) {
      return res.status(400).send({ message: 'No image file provided' });
    }
    try {
      // Load the dictionary
      const labels = loadDictionary();
      // Load the model from JSON format
      const model = await tf.loadGraphModel(
        'file://' + __dirname + '/model/model_03192025.json'
      );
      // Process the image
      const image = tf.node.decodeImage(file.buffer, 3, 'int32');
      const resized = tf.image.resizeBilinear(image, [512, 512]);
      const normalizedImage = resized.div(255.0);
      const batchedImage = normalizedImage.expandDims(0);
      const predictions = await model.executeAsync(batchedImage);
      // Extract prediction data and get top matches
      const predictionArray = Array.isArray(predictions)
        ? await (predictions[0] as tf.Tensor).array()
        : await (predictions as tf.Tensor).array();
      const flatPredictions = (predictionArray as number[][]).flat();
      const topPredictions = getTopPredictions(flatPredictions, labels);
      // Clean up tensors
      image.dispose();
      resized.dispose();
      normalizedImage.dispose();
      batchedImage.dispose();
      if (Array.isArray(predictions)) {
        predictions.forEach(p => (p as tf.Tensor).dispose());
      } else {
        (predictions as tf.Tensor).dispose();
      }
      return res.status(200).send({
        message: 'Image processed successfully',
        size: file.size,
        type: file.mimetype,
        predictions: topPredictions,
      });
    } catch (error) {
      console.error('Error processing image:', error);
      return res.status(500).send({ message: 'Error processing image' });
    }
  });
};
// Wrapper function to handle type casting
export const scanHandler = [
  upload,
  (req: Request, res: Response) => scan(req, res),
] as const;
Here is what I am concerned about:
- Am I loading the model correctly as a graphModel? I tried the other loaders and this is the only one that worked.
- Am I OK resizing to 512x512?
- How can I better handle the results? If I want the highest "rated" prediction, what's the best way to do this?
1 Answer
Technically, based on the public TensorFlow.js documentation, tf.loadGraphModel loads a graph model given a URL to the model definition, so it is the right loader for a model exported in this format. To further ensure the model loaded correctly, you can add a simple check after loading:
if (!model) {
  return res.status(500).send({ message: 'Failed to load model' });
}
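Also, since the handler in the question calls tf.loadGraphModel on every request, you may want to load the model once and reuse it. A minimal sketch, assuming the handler lives in the same module and using the same file path as in the question:

// Load the graph model once at module scope and reuse the promise across requests.
let modelPromise: Promise<tf.GraphModel> | null = null;

const getModel = (): Promise<tf.GraphModel> => {
  if (!modelPromise) {
    modelPromise = tf.loadGraphModel(
      'file://' + __dirname + '/model/model_03192025.json'
    );
  }
  return modelPromise;
};

// Inside the request handler:
// const model = await getModel();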
Resizing to 512x512 is likely correct, but it depends on the input size your model was trained with. Check the model's metadata or export documentation to confirm the expected input dimensions. If you are unsure, you can log the shape of the tensor after resizing:
console.log("Resized image shape:", resized.shape);
Lastly, for handling results: object detection models typically output multiple tensors, including:
- Bounding box coordinates
- Class labels (indices)
- Confidence scores (probabilities)
Since this is a graph model, you can inspect model.inputs and model.outputs (or model.outputNodes) to see the output names, shapes, and dtypes; model.summary() is only available on layers models loaded with tf.loadLayersModel.
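For the last question, once you have a flat array of scores (as in the getTopPredictions helper), the highest-rated match is just the first entry with topK = 1. You can also let TensorFlow.js do it on the tensor directly. A minimal sketch, assuming the relevant output is a tensor of per-class confidence scores and labels is the array loaded from dict.txt:

// Pick the single highest-scoring class from a scores tensor.
const getBestPrediction = async (
  scoresTensor: tf.Tensor,
  labels: string[]
) => {
  const flat = scoresTensor.flatten();
  const { values, indices } = tf.topk(flat, 1);
  const [probability] = await values.data();
  const [index] = await indices.data();
  flat.dispose();
  values.dispose();
  indices.dispose();
  return { label: labels[index], probability };
};

For an object-detection style output with separate score and class tensors, apply the same idea to the scores tensor and use the resulting index to look up the matching box and class.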