References:
https://github.com/SeldonIO/seldon-core/blob/master/examples/models/sklearn_iris/sklearn_iris.ipynb
https://github.com/SeldonIO/seldon-core/tree/master/examples/models/sklearn_spacy_text
# Steps completed

1. kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:80
2. kubectl create namespace john
3. kubectl config set-context $(kubectl config current-context) --namespace=john
4. kubectl create -f sklearn_iris_deployment.yaml
cat sklearn_iris_deployment.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: seldon-deployment-example
  namespace: john
spec:
  name: sklearn-iris-deployment
  predictors:
  - componentSpecs:
    - spec:
        containers:
        - image: seldonio/sklearn-iris:0.1
          imagePullPolicy: IfNotPresent
          name: sklearn-iris-classifier
    graph:
      children: []
      endpoint:
        type: REST
      name: sklearn-iris-classifier
      type: MODEL
    name: sklearn-iris-predictor
    replicas: 1
kubectl get sdep -n john seldon-deployment-example -o json (status shown below):
  "deploymentStatus": {
    "sklearn-iris-deployment-sklearn-iris-predictor-0e43a2c": {
      "availableReplicas": 1,
      "replicas": 1
    }
  },
  "serviceStatus": {
    "seldon-635d389a05411932517447289ce51cde": {
      "httpEndpoint": "seldon-635d389a05411932517447289ce51cde.john:9000",
      "svcName": "seldon-635d389a05411932517447289ce51cde"
    },
    "seldon-bb8b177b8ec556810898594b27b5ec16": {
      "grpcEndpoint": "seldon-bb8b177b8ec556810898594b27b5ec16.john:5001",
      "httpEndpoint": "seldon-bb8b177b8ec556810898594b27b5ec16.john:8000",
      "svcName": "seldon-bb8b177b8ec556810898594b27b5ec16"
    }
  },
  "state": "Available"
}
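
As a quick check, the deployment state can also be queried with a jsonpath expression instead of reading the full status JSON (a minimal sketch, using the resource name and namespace from the manifest above):

kubectl get sdep seldon-deployment-example -n john -o jsonpath='{.status.state}'
# expected to print "Available" once the predictor pods are ready
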
5. Here I am using Istio, and following this doc https://docs.seldon.io/projects/seldon-core/en/v1.1.0/workflow/serving.html I did the same thing:
Istio
Istio REST
Assuming the istio gateway is at <istioGateway> and with a Seldon deployment name <deploymentName> in namespace <namespace>:

A REST endpoint will be exposed at : http://<istioGateway>/seldon/<namespace>/<deploymentName>/api/v1.0/predictions
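
Substituting the values used above into that pattern (istioGateway = localhost:8003 from the port-forward, namespace = john, deploymentName = seldon-deployment-example from the manifest) gives the sketch below; the request I actually sent is in the next bullet:

  curl -s http://localhost:8003/seldon/john/seldon-deployment-example/api/v1.0/predictions \
    -H "Content-Type: application/json" \
    -d '{"data":{"ndarray":[[5.964,4.006,2.081,1.031]]}}'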

  • curl -s http://localhost:8003/seldon/john/sklearn-iris-deployment-sklearn-iris-predictor-0e43a2c/api/v0.1/predictions -H "Content-Type: application/json" -d '{"data":{"ndarray":[[5.964,4.006,2.081,1.031]]}}' -v
    *   Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to localhost (127.0.0.1) port 8003 (#0)
    > POST /seldon/johnson-az-videspan/sklearn-iris-deployment-sklearn-iris-predictor-0e43a2c/api/v0.1/predictions HTTP/1.1
    > Host: localhost:8003
    > User-Agent: curl/7.58.0
    > Accept: */*
    > Content-Type: application/json
    > Content-Length: 48
    >
    * upload completely sent off: 48 out of 48 bytes
    < HTTP/1.1 301 Moved Permanently
    < location: https://localhost:8003/seldon/john/sklearn-iris-deployment-sklearn-iris-predictor-0e43a2c/api/v0.1/predictions
    < date: Fri, 23 Oct 2020 13:09:46 GMT
    < server: istio-envoy
    < connection: close
    < content-length: 0
    <
    * Closing connection 0
    
    The same thing happens with the sklearn_spacy_text model as well, yet the same models work perfectly fine when run on Docker.
    Please find the sample responses from Docker below:
    curl  -s http://localhost:5000/predict -H "Content-Type: application/json" -d '{"data":{"ndarray":[[5.964,4.006,2.081,1.031]]}}' -v
    *   Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to localhost (127.0.0.1) port 5000 (#0)
    > POST /predict HTTP/1.1
    > Host: localhost:5000
    > User-Agent: curl/7.61.1
    > Accept: */*
    > Content-Type: application/json
    > Content-Length: 48
    >
    * upload completely sent off: 48 out of 48 bytes
    * HTTP 1.0, assume close after body
    < HTTP/1.0 200 OK
    < Content-Type: application/json
    < Content-Length: 125
    < Access-Control-Allow-Origin: *
    < Server: Werkzeug/1.0.0 Python/3.7.4
    < Date: Fri, 23 Oct 2020 11:18:31 GMT
    <
    {"data":{"names":["t:0","t:1","t:2"],"ndarray":[[0.9548873249364169,0.04505474761561406,5.7927447968952436e-05]]},"meta":{}}
    * Closing connection 0
    
    curl  -s http://localhost:5001/predict -H "Content-Type: application/json" -d '{"data": {"names": ["text"], "ndarray": ["Hello world this is a test"]}}'
    {"data":{"names":["t:0","t:1"],"ndarray":[[0.6811839197596743,0.3188160802403257]]},"meta":{}}
    
    Can anyone help resolve this issue?

    Best answer

    Problem
    It looks like you are making the request incorrectly: it is being redirected to the HTTPS protocol (port 443).

    Use https instead of http:

    curl -s https://localhost:8003/seldon/john/sklearn-iris-deployment-sklearn-iris-predictor-0e43a2c/api/v0.1/predictions -H "Content-Type: application/json" -d '{"data":{"ndarray":[[5.964,4.006,2.081,1.031]]}}' -v
    

    Alternatively, use curl with the -L flag, which instructs curl to follow redirects. In this case, the server returned a redirect response (301 Moved Permanently) for the HTTP request to http://localhost:8003. The redirect response instructs the client to send an additional request to https://localhost:8003, this time using HTTPS.
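
    For example (a sketch: --post301 keeps the method as POST when curl re-issues the request after the 301, and -k/--insecure is only needed if the gateway's certificate is not trusted locally, which is an assumption here):

    curl -s -L --post301 -k http://localhost:8003/seldon/john/sklearn-iris-deployment-sklearn-iris-predictor-0e43a2c/api/v0.1/predictions \
      -H "Content-Type: application/json" \
      -d '{"data":{"ndarray":[[5.964,4.006,2.081,1.031]]}}'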
    More information about it here.

    Regarding kubernetes - SeldonIO | sklearn_iris and sklearn_spacy_text | not working in k8s, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64500794/
