Installing and Using Knative

Requirements

  • A Kubernetes cluster
  • Istio (installed below)
  • Knative (installed below)
  • kubectl
  • helm (optional)
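The checklist above can be verified quickly before starting; helm only matters if you actually use it:

```shell
# Sanity-check the prerequisites (all read-only commands):
kubectl version --short   # client and server versions
kubectl get nodes         # confirms the cluster is reachable
helm version --short      # optional, only if you install charts with helm
```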

Overview

Knative's installation is configurable, with several optional components; see Performing a Custom Knative Installation. Here we install everything.

  1. First we need a Kubernetes cluster; we use the Rancher-managed cluster set up in the previous posts.


  2. Then install Istio
  3. Then install Knative
  4. Finally, put it to use

Istio

We can install Istio with kubectl or with Rancher's built-in one-click install. We choose Rancher.

1. Rancher

  1. Enable it with one click:


Wait a moment and Istio is enabled.

  2. Enable automatic sidecar injection for the default namespace:


2. kubectl
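No transcript was captured for this route, so here is a hedged sketch. The file names istio-crds.yaml and istio.yaml are assumptions based on the Istio manifests distributed for Knative at the time; substitute whatever your Istio release provides:

```shell
# Install Istio with plain kubectl instead of Rancher
# (istio-crds.yaml / istio.yaml are assumed file names):
kubectl apply -f istio-crds.yaml   # CRDs first, so the next apply validates
kubectl apply -f istio.yaml        # then the Istio control plane

# The kubectl equivalent of Rancher's auto-injection toggle for default:
kubectl label namespace default istio-injection=enabled
```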

Verify

Check the status of your Istio installation to make sure it succeeded. Wait until all pods show Running or Completed:

kubectl get pods --namespace istio-system
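If you would rather block than poll, `kubectl wait` understands the Ready condition — with the caveat that Completed job pods never become Ready, so treat a timeout here as a cue to fall back to the manual check above:

```shell
# Block for up to 5 minutes until every pod in istio-system is Ready:
kubectl wait pods --all --namespace istio-system --for=condition=Ready --timeout=300s
```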

Knative

Prerequisites

Installing Knative is more difficult. Because of network restrictions, there are two scripts you can run; see this doc for details:

https://www.yuque.com/abser/blog/knative

The doc contains a link to my GitHub repo; you can also install straight from the three yaml files there (the gcr.io images have been replaced with copies under my Docker Hub account). Version 0.11.0 has been verified, and the replaced images are listed in the image.tmp file.


I recommend using this repo; the installation and the usage examples below both use its yaml files.

Install Knative

  1. In the directory containing these three yaml files, run:

 kubectl apply --selector knative.dev/crd-install=true -f monitoring.yaml -f release.yaml -f serving.yaml

ArideMacBook-Air:knative abser$ kubectl apply --selector knative.dev/crd-install=true -f monitoring.yaml -f release.yaml -f serving.yaml 
customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/containersources.sources.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/cronjobsources.sources.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/parallels.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sequences.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/inmemorychannels.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev unchanged
  2. Then run the same command again without the selector flag:

kubectl apply -f monitoring.yaml -f release.yaml -f serving.yaml

ArideMacBook-Air:knative abser$ kubectl apply -f monitoring.yaml -f release.yaml -f serving.yaml 
namespace/knative-monitoring created
service/elasticsearch-logging created
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created
service/kibana-logging created
deployment.apps/kibana-logging created
configmap/fluentd-ds-config created
serviceaccount/fluentd-ds created
clusterrole.rbac.authorization.k8s.io/fluentd-ds created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-ds created
service/fluentd-ds created
daemonset.apps/fluentd-ds created
serviceaccount/kube-state-metrics created
role.rbac.authorization.k8s.io/kube-state-metrics-resizer created
rolebinding.rbac.authorization.k8s.io/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
service/kube-state-metrics created
configmap/grafana-dashboard-definition-kubernetes-deployment created
configmap/grafana-dashboard-definition-kubernetes-capacity-planning created
configmap/grafana-dashboard-definition-kubernetes-cluster-health created
configmap/grafana-dashboard-definition-kubernetes-cluster-status created
configmap/grafana-dashboard-definition-kubernetes-control-plane-status created
configmap/grafana-dashboard-definition-kubernetes-resource-requests created
configmap/grafana-dashboard-definition-kubernetes-nodes created
configmap/grafana-dashboard-definition-kubernetes-pods created
configmap/grafana-dashboard-definition-kubernetes-statefulset created
serviceaccount/node-exporter created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
service/node-exporter created
configmap/grafana-custom-config created
configmap/grafana-dashboard-definition-knative-efficiency created
configmap/grafana-dashboard-definition-knative-reconciler created
configmap/scaling-config created
configmap/grafana-dashboard-definition-knative created
configmap/grafana-datasources created
configmap/grafana-dashboards created
service/grafana created
deployment.apps/grafana created
configmap/prometheus-scrape-config created
service/kube-controller-manager created
service/prometheus-system-discovery created
serviceaccount/prometheus-system created
role.rbac.authorization.k8s.io/prometheus-system created
role.rbac.authorization.k8s.io/prometheus-system created
role.rbac.authorization.k8s.io/prometheus-system created
role.rbac.authorization.k8s.io/prometheus-system created
clusterrole.rbac.authorization.k8s.io/prometheus-system created
rolebinding.rbac.authorization.k8s.io/prometheus-system created
rolebinding.rbac.authorization.k8s.io/prometheus-system created
rolebinding.rbac.authorization.k8s.io/prometheus-system created
rolebinding.rbac.authorization.k8s.io/prometheus-system created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-system created
service/prometheus-system-np created
statefulset.apps/prometheus-system created
service/zipkin created
deployment.apps/zipkin created
namespace/knative-eventing created
clusterrole.rbac.authorization.k8s.io/addressable-resolver created
clusterrole.rbac.authorization.k8s.io/service-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/serving-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/channel-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/broker-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/messaging-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/flows-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/eventing-broker-filter created
clusterrole.rbac.authorization.k8s.io/eventing-broker-ingress created
clusterrole.rbac.authorization.k8s.io/eventing-config-reader created
clusterrole.rbac.authorization.k8s.io/channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-messaging-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-eventing-sources-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-edit created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-view created
clusterrole.rbac.authorization.k8s.io/knative-eventing-controller created
clusterrole.rbac.authorization.k8s.io/podspecable-binding created
clusterrole.rbac.authorization.k8s.io/builtin-podspecable-binding created
serviceaccount/eventing-controller created
serviceaccount/eventing-webhook created
serviceaccount/eventing-source-controller created
clusterrole.rbac.authorization.k8s.io/source-observer created
clusterrole.rbac.authorization.k8s.io/eventing-sources-source-observer created
clusterrole.rbac.authorization.k8s.io/knative-eventing-source-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-webhook created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-source-observer created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-manipulator created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook created
clusterrolebinding.rbac.authorization.k8s.io/eventing-source-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-source-controller-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-podspecable-binding created
customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/containersources.sources.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/cronjobsources.sources.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/parallels.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sequences.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev unchanged
configmap/default-ch-webhook created
service/eventing-webhook created
deployment.apps/eventing-controller created
deployment.apps/sources-controller created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.eventing.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.eventing.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.eventing.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/sinkbindings.webhook.sources.knative.dev created
secret/eventing-webhook-certs created
deployment.apps/eventing-webhook created
configmap/config-logging created
configmap/config-observability created
configmap/config-tracing created
clusterrole.rbac.authorization.k8s.io/imc-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/imc-channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/imc-controller created
clusterrole.rbac.authorization.k8s.io/imc-dispatcher created
service/imc-dispatcher created
serviceaccount/imc-controller created
serviceaccount/imc-dispatcher created
clusterrolebinding.rbac.authorization.k8s.io/imc-controller created
clusterrolebinding.rbac.authorization.k8s.io/imc-dispatcher created
customresourcedefinition.apiextensions.k8s.io/inmemorychannels.messaging.knative.dev unchanged
deployment.apps/imc-controller created
deployment.apps/imc-dispatcher created
namespace/knative-serving created
clusterrole.rbac.authorization.k8s.io/knative-serving-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/custom-metrics-server-resources created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-edit created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-view created
clusterrole.rbac.authorization.k8s.io/knative-serving-admin created
clusterrole.rbac.authorization.k8s.io/knative-serving-core created
clusterrole.rbac.authorization.k8s.io/knative-serving-podspecable-binding created
serviceaccount/controller created
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/hpa-controller-custom-metrics created
clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-admin created
rolebinding.rbac.authorization.k8s.io/custom-metrics-auth-reader created
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev unchanged
service/activator-service created
service/controller created
service/webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.serving.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.serving.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.serving.knative.dev created
secret/webhook-certs created
image.caching.internal.knative.dev/queue-proxy created
deployment.apps/activator created
horizontalpodautoscaler.autoscaling/activator created
deployment.apps/autoscaler-hpa created
service/autoscaler created
deployment.apps/autoscaler created
configmap/config-autoscaler created
configmap/config-defaults created
configmap/config-deployment created
configmap/config-domain created
configmap/config-gc created
configmap/config-logging created
configmap/config-network created
configmap/config-observability created
configmap/config-tracing created
deployment.apps/controller created
apiservice.apiregistration.k8s.io/v1beta1.custom.metrics.k8s.io created
deployment.apps/webhook created
clusterrole.rbac.authorization.k8s.io/knative-serving-istio created
gateway.networking.istio.io/knative-ingress-gateway created
gateway.networking.istio.io/cluster-local-gateway created
configmap/config-istio created
deployment.apps/networking-istio created
  3. Check the pods in each namespace until all show Running or Completed:
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-eventing
kubectl get pods --namespace knative-monitoring
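A small loop saves re-typing the namespace three times; re-run it until everything is Running or Completed:

```shell
# List the pods of all three Knative namespaces in one go:
for ns in knative-serving knative-eventing knative-monitoring; do
  echo "== $ns =="
  kubectl get pods --namespace "$ns"
done
```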

Rancher

Now let's look at the state in Rancher.

Add a project

  1. Add a Knative project to manage these namespaces, and move the three namespaces into it.

Check the status

  1. Inspect the pods in Rancher.

Usage

Knative is simple to use: applying a single yaml file covers most cases, and you can then use Istio to adjust routing and traffic for the deployed apps.

Prerequisites

I have already pushed the helloworld-go Docker image to my Docker Hub account, so you can apply my yaml file directly.


The yaml file

Its content is as follows:

apiVersion: serving.knative.dev/v1 # Current version of Knative
kind: Service
metadata:
  name: helloworld-go # The name of the app
  namespace: default # The namespace the app will use
spec:
  template:
    spec:
      containers:
        - image: yhyddr/helloworld-go # The URL to the image of the app
          env:
            - name: TARGET # The environment variable printed out by the sample app
              value: "Go Sample v1"

You can also save it as a yaml file (the commands below assume service.yaml).
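For example, you can write and apply it in one step with a heredoc (the file name service.yaml matches the command in the next section):

```shell
# Write the manifest to service.yaml, then apply it:
cat > service.yaml <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: yhyddr/helloworld-go
          env:
            - name: TARGET
              value: "Go Sample v1"
EOF

kubectl apply -f service.yaml
```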

Apply

Install

Command: kubectl apply -f service.yaml

ArideMacBook-Air:knative abser$ kubectl apply -f service.yaml 
service.serving.knative.dev/helloworld-go created

Check

Check the service status with kubectl get ksvc helloworld-go:

ArideMacBook-Air:knative abser$ kubectl get ksvc helloworld-go 
NAME            URL                                        LATESTCREATED         LATESTREADY   READY     REASON
helloworld-go   http://helloworld-go.default.example.com   helloworld-go-ss4tb                 Unknown   RevisionMissing

Use

Check

  1. Once READY becomes True, visit the reported URL.
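Instead of re-running `kubectl get ksvc` by hand, `kubectl wait` can block until the service reports Ready (a ksvc exposes a standard Ready condition):

```shell
# Block for up to 2 minutes until the Knative service is Ready:
kubectl wait ksvc/helloworld-go --for=condition=Ready --timeout=120s
```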

ArideMacBook-Air:knative abser$ kubectl get ksvc helloworld-go 
NAME            URL                                        LATESTCREATED         LATESTREADY           READY   REASON
helloworld-go   http://helloworld-go.default.example.com   helloworld-go-xz75h   helloworld-go-xz75h   True   

ArideMacBook-Air:knative abser$ curl http://helloworld-go.default.example.com 
curl: (6) Could not resolve host: helloworld-go.default.example.com

The request fails because helloworld-go.default.example.com is not a resolvable hostname; we need to reach the service through Istio's ingressgateway instead.

Using the Gateway

  1. Look up the ingressgateway's IP address. If your machine has a public IP, your Knative services are reachable from outside; see external ip for more.
ArideMacBook-Air:knative abser$ kubectl get svc istio-ingressgateway --namespace istio-system
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
istio-ingressgateway   NodePort   10.43.243.67   <none>        80:31380/TCP,443:31390/TCP   26h
Without a Load Balancer
  1. Record the ingressgateway address with the following command.

KNATIVE_INGRESS=$(kubectl get node  --output 'jsonpath={.items[0].status.addresses[0].address}'):$(kubectl get svc istio-ingressgateway  --namespace=istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')

ArideMacBook-Air:knative abser$ export KNATIVE_INGRESS=$(kubectl get node  --output 'jsonpath={.items[0].status.addresses[0].address}'):$(kubectl get svc istio-ingressgateway  --namespace=istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')

ArideMacBook-Air:knative abser$ echo $KNATIVE_INGRESS
192.168.0.103:31380

This is because my Kubernetes setup (like Minikube) has no external load balancer, so the gateway is in NodePort mode and the address is the node IP plus the node port: 192.168.0.103:31380.

With a Load Balancer
$ export KNATIVE_INGRESS=$(kubectl get svc istio-ingressgateway --namespace istio-system --output 'jsonpath={.status.loadBalancer.ingress[0].ip}')

$ echo $KNATIVE_INGRESS
35.203.155.229

Run

  1. The request now succeeds:
curl -H "Host: helloworld-go.default.example.com" http://$KNATIVE_INGRESS
Hello World: Go Sample v1!

Source: Knative 安装与使用 · Yuque
