1 year ago

#388515


chandraSiri

storageInitializer InitContainer not starting in Kserve InferenceService

I'm trying out KServe. I followed the installation instructions in the official docs, and I'm trying to create a sample HTTP InferenceService, exactly the same as this, by running kubectl apply -f tensorflow.yaml:

apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "flower-sample"
spec:
  predictor:
    tensorflow:
      storageUri: "gs://kfserving-samples/models/tensorflow/flowers"

This yaml creates an InferenceService and its associated deployment, replicaset, and pod. However, the InferenceService remains in an Unknown state. Upon investigating, I found that one of the two containers (the queue-proxy container) was failing its readiness probe: Readiness probe failed: HTTP probe failed with statuscode: 503.
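For reference, this is roughly how I inspected the pod (the pod name is from my cluster and will differ in yours; I believe KServe labels its pods with serving.kserve.io/inferenceservice, but treat that selector as an assumption):

```shell
# List the predictor pods created for the InferenceService
kubectl get pods -l serving.kserve.io/inferenceservice=flower-sample

# Inspect events and per-container status for the failing pod
# (pod name is specific to my cluster)
kubectl describe pod flower-sample-predictor-default-00001-deployment-5db9d7d9fgqfn4
```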

Upon further investigation, I saw the following logs from the kserve-container container:

2022-04-07 17:14:46.482205: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:365] FileSystemStoragePathSource encountered a filesystem access error: Could not find base path /mnt/models for servable flower-sample with error Not found: /mnt/models not found

So I understood that the model files were not present. I manually downloaded the same gs://kfserving-samples/models/tensorflow/flowers model and copied it into the expected path, and the InferenceService finally started working.
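The manual workaround I used looked roughly like this (a sketch of my setup: it assumes gsutil is installed locally, and the pod name is specific to my cluster):

```shell
# Download the sample model locally from the public GCS bucket
gsutil cp -r gs://kfserving-samples/models/tensorflow/flowers .

# Copy it into the serving container at the path TF Serving expects
# (pod name is specific to my cluster)
kubectl cp flowers \
  flower-sample-predictor-default-00001-deployment-5db9d7d9fgqfn4:/mnt/models \
  -c kserve-container
```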

But I want to avoid manually copying models into the pod's container.

Ideally this should be done by the storage-initializer init container, which is missing in my case.

I ran kubectl describe pod flower-sample-predictor-default-00001-deployment-5db9d7d9fgqfn4 and I don't see any initContainers section.
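A quicker way to confirm the init container is absent (empty output means no init containers are defined on the pod; again, the pod name is specific to my cluster):

```shell
# Print the names of any init containers on the pod; empty output means
# the storage-initializer was never injected
kubectl get pod flower-sample-predictor-default-00001-deployment-5db9d7d9fgqfn4 \
  -o jsonpath='{.spec.initContainers[*].name}'
```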

All these workloads are running in the kserve namespace. I have a configmap inferenceservice-config which contains the following (default) storageInitializer config:

{
    "image" : "kserve/storage-initializer:v0.8.0",
    "memoryRequest": "100Mi",
    "memoryLimit": "1Gi",
    "cpuRequest": "100m",
    "cpuLimit": "1"
}
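For completeness, this is how I read that configmap entry, and how I checked that KServe's mutating webhook is registered (my understanding is that this webhook injects the storage-initializer; the exact webhook name may vary by KServe version, so treat the grep as an assumption):

```shell
# Show the storageInitializer section of the KServe config
kubectl get configmap inferenceservice-config -n kserve \
  -o jsonpath='{.data.storageInitializer}'

# Check that a KServe mutating webhook configuration exists;
# the exact name may differ per installation
kubectl get mutatingwebhookconfigurations | grep -i kserve
```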

But when I run kubectl apply -f tensorflow.yaml, I still face the same error. Could anyone help me figure out how to fix this?

kubernetes

kubeflow
