Developing APM Microservices in Go: Experience Sharing
劉得彥
teyen.liu@gmail.com
Agenda
• Implementing a microservices architecture using Golang on K8S
• Monolithic vs Microservices
• REST framework
• RPC framework
• Service Discovery
• Performance Profiler
• Message Queue
• API Gateway
Implementing a Microservices Architecture using Golang on K8S
• Inter-service communication
– REST framework
• Gin
– RPC framework
• gRPC
• Service Discovery
– Kubernetes Service
• Message Queue
– ZMQ
• API Gateway
– Nginx Ingress Controller
Monolithic vs Microservices
• Monolithic (the traditional development model)
– Simple to develop and easy to debug
– Runs and is managed on a single host
– But hard to scale, and the code is hard to maintain
• Microservices (the microservice development model)
– Services can be developed independently
– Services can be replaced quickly and deployed automatically
– Easy to scale, and the code is easy to maintain
Monolithic APM
• Previously, APM was a monolithic application
– We want to add/implement more features and components
– Disadvantages:
• Not good for scaling, and the code becomes hard to maintain as more and more features are added
[Diagram: Nodes 1..n each run a Pod: observer; a single Pod: APM holds all features/components in one pod (REST APIs + Web UI, Data Access Layer, Data Collection Layer, Services A/B/C), and any new feature/component is added into that same pod]
Microservice APM (an APM with a microservices architecture)
• We are planning to refactor APM into a microservices APM
• Advantages: fast component replacement, easy scaling, independent development, cross-language support
[Diagram: Nodes 1..n each run a Pod: observer; a Pod: Frontend exposes the REST APIs and Web UI and talks over gRPC to separate pods for Service A, Service B, Service C, and a new Service D (new feature); the Frontend also calls the Kubernetes API via RPC]
Golang Web Framework - Gin
• Advantages of Gin
– Excellent performance
– A thin wrapper over the native net/http package
– Uses the extremely fast httprouter
– A well-designed middleware mechanism
• APM uses Gin as its Web Server & API Server, including:
– RouterGroup for URI routing management
– JWT implemented via middleware (to protect the API Server)
– Templates to render web pages (obsolete)
RouterGroup for URI Routing Management
• GET: /
– GET: /static
– GET/POST: /auth
– /swagger/
– /api/v1/*
– …
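To make the layout concrete, here is a minimal sketch of such a RouterGroup setup; the handler names and the jwtMiddleware stub are hypothetical, not APM's actual code:

```go
// A minimal sketch of the routing layout above; handler names are hypothetical.
package main

import "github.com/gin-gonic/gin"

func login(c *gin.Context)      { c.JSON(200, gin.H{"token": "..."}) }
func listTraces(c *gin.Context) { c.JSON(200, gin.H{"traces": []string{}}) }

// jwtMiddleware would validate the token; see the JWT slides below.
func jwtMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) { c.Next() }
}

func main() {
	r := gin.Default()

	r.Static("/static", "./static") // GET: /static
	r.GET("/auth", login)
	r.POST("/auth", login) // GET/POST: /auth

	// RouterGroup keeps all /api/v1/* routes behind the JWT middleware.
	v1 := r.Group("/api/v1", jwtMiddleware())
	v1.GET("/traces", listTraces)

	r.Run(":8080")
}
```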
JSON Web Token (JWT)
• Use JWT to protect APM's REST APIs
• Definition:
– JWT is a standardized, validated and/or encrypted container format
that is used to securely transfer information between two parties.
• What is the JSON Web Token structure?
– Header
– Payload
– Signature
• A JWT typically looks like the following:
– xxxxx.yyyyy.zzzzz
• The validity period of the generated token is configurable
• Auth API
– We use JWT to generate our token from these defined parameters:
• app_key
• app_secret
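A minimal sketch of issuing such a token, assuming the github.com/golang-jwt/jwt/v4 package; the claim names are assumptions (the slide only names app_key and app_secret):

```go
// A minimal sketch of token generation; claim names are assumptions.
package main

import (
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v4"
)

// generateToken signs an HS256 token (header.payload.signature, i.e.
// xxxxx.yyyyy.zzzzz) with app_secret and embeds app_key as a claim.
func generateToken(appKey, appSecret string) (string, error) {
	claims := jwt.MapClaims{
		"app_key": appKey,
		"exp":     time.Now().Add(2 * time.Hour).Unix(), // configurable validity period
	}
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
	return token.SignedString([]byte(appSecret))
}

func main() {
	token, err := generateToken("my-app-key", "my-app-secret")
	fmt.Println(token, err)
}
```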
Use JWT middleware as API authentication
• Middleware execution order
• Users must log in first to have authority to access the APIs
[Flow: the user logs in via the Auth API (/auth), gets a token, and stores it; subsequent calls to the APM API (/api/v1/*) carry the token, and the JWT middleware in front of Web /* validates it before the request reaches the handlers]
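A minimal sketch of such a middleware in Gin, again assuming golang-jwt/jwt/v4; the Authorization header format and the error response are assumptions:

```go
// A minimal sketch of a Gin JWT middleware guarding /api/v1/*.
package main

import (
	"net/http"
	"strings"

	"github.com/gin-gonic/gin"
	"github.com/golang-jwt/jwt/v4"
)

func jwtMiddleware(appSecret string) gin.HandlerFunc {
	return func(c *gin.Context) {
		// Expect "Authorization: Bearer <token>" from the client.
		raw := strings.TrimPrefix(c.GetHeader("Authorization"), "Bearer ")
		token, err := jwt.Parse(raw, func(t *jwt.Token) (interface{}, error) {
			return []byte(appSecret), nil // HS256 verification key
		})
		if err != nil || !token.Valid {
			// Reject before the request reaches the API handlers.
			c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{"error": "invalid token"})
			return
		}
		c.Next() // authenticated; continue down the middleware chain
	}
}

func main() {
	r := gin.Default()
	v1 := r.Group("/api/v1", jwtMiddleware("my-app-secret"))
	v1.GET("/ping", func(c *gin.Context) { c.String(200, "pong") })
	r.Run(":8080")
}
```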
Protobuf and gRPC
• Protobuf is a language- and platform-neutral, extensible description language (IDL) for serializing structured data
– Supports many languages (Golang, Java, C++, Python, etc.) and multiple platforms
– Efficient: smaller than XML (3x to 10x), faster (20x to 100x), and simpler
– Extensible and backward-compatible: you can update the data structures without affecting or breaking existing programs
• gRPC
– A protocol built on HTTP/2 that uses Protocol Buffers: you must define the IDL (.proto) up front, then compile it into language-specific stubs that carry the serialized structured data
– Four calling styles:
• Unary RPC
• Server-side streaming RPC
• Client-side streaming RPC
• Bidirectional streaming RPC
Writing the Protocol Format File (*.proto)
• Language Guide: https://developers.google.com/protocol-buffers/docs/proto3
• Protocol Buffer Basics: Go: https://developers.google.com/protocol-buffers/docs/gotutorial
[Screenshots: defining your protocol format; compiling your protocol buffers; scalar value types in Go]
Example: proto file (IDL)
• Use protoc to compile the IDL and generate Golang gRPC code
• The generated *.pb.go must not be modified by hand
[Diagram: protoc compiles test.proto into test.pb.go]
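The proto example on the slide was a screenshot; a minimal hypothetical test.proto in its spirit might look like the following (the service, RPC, and field names are assumptions taken from the refactoring slide below, not the actual APM IDL):

```proto
// Hypothetical test.proto; compile to Go (test.pb.go) with, e.g.:
//   protoc --go_out=plugins=grpc:. test.proto
syntax = "proto3";

package apm;

// The tracing service used as the refactoring example below.
service TraceTcpService {
  rpc TcpSendRecvTime (TraceRequest) returns (TraceReply) {}
}

message TraceRequest {
  string pod_name = 1; // which pod to trace
}

message TraceReply {
  int64 send_recv_ms = 1; // measured TCP send/recv time
}
```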
For Instance: Refactoring a Service with gRPC
• Implement gRPC for the Tracing TCP service
[Diagram: Pod: Frontend (REST APIs + Web UI, gRPC Stub) calls Pod: TraceTcpService (gRPC Server) over gRPC on HTTP/2 with TcpSendRecvTime]
• Client side: implement the gRPC client code based on test.pb.go and wire it into Gin
• Server side: implement the gRPC server code based on test.pb.go, connect it to the backend data-processing logic, and register the service
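A minimal sketch of the server side, assuming the hypothetical test.proto above was compiled into a pb package; the import path, message fields, and return value are assumptions:

```go
// A minimal sketch of the TraceTcpService gRPC server; pb is the package
// generated from the hypothetical test.proto, and its path is an assumption.
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	pb "example.com/apm/pb" // the generated test.pb.go lives here
)

type traceTCPServer struct {
	pb.UnimplementedTraceTcpServiceServer
}

// TcpSendRecvTime serves the Frontend's gRPC stub; the real implementation
// would query the backend data-processing layer here.
func (s *traceTCPServer) TcpSendRecvTime(ctx context.Context, req *pb.TraceRequest) (*pb.TraceReply, error) {
	return &pb.TraceReply{SendRecvMs: 42}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	pb.RegisterTraceTcpServiceServer(srv, &traceTCPServer{}) // register the service
	log.Fatal(srv.Serve(lis))
}
```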
Kubernetes Service Discovery
• Recap:
– Cluster users will use a Service (Cluster IP) to connect to the service
• Service Discovery by DNS:
• The naming rule:
– <service_name>.<namespace>.svc.cluster.local
– <service_name> (from within the same namespace)
– A Service's Cluster IP is a virtual IP that does not change.
– Do not "access" a pod directly without a Service.
– Services are objects that define the desired state of an underlying set of iptables rules (or other kinds of implementations).
https://tachingchen.com/tw/blog/kubernetes-service-in-detail-1/
Kubernetes Service Discovery
• Every microservice should define a Service (Service Name) in Kubernetes
• Define env variables with the service name and port number in the Deployment YAML or ConfigMap YAML instead of in the source code (a sketch follows below).
[Diagram: the Deployment YAML injects env: entries into the Pod: Frontend (REST APIs + Web UI)]
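A minimal sketch of consuming those environment variables in the Frontend; the variable names are hypothetical:

```go
// A minimal sketch: the service address comes from env vars set in the
// Deployment/ConfigMap YAML (names here are hypothetical), and Kubernetes
// DNS resolves <service_name>.<namespace>.svc.cluster.local.
package main

import (
	"fmt"
	"log"
	"os"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	host := os.Getenv("SERVICE_A_HOST") // e.g. "service-a.default.svc.cluster.local"
	port := os.Getenv("SERVICE_A_PORT") // e.g. "50051"

	conn, err := grpc.Dial(
		fmt.Sprintf("%s:%s", host, port),
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatalf("dial %s:%s: %v", host, port, err)
	}
	defer conn.Close()
	// ... create gRPC stubs from conn here
}
```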
Check Pod Health and Gracefully Shut Down
• Use a prebuilt binary to drive the probes:
– https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/v0.3.6/grpc_health_probe-linux-amd64
• Use the Go package grpc_health_v1:
– Implement the liveness & readiness probe response functions and register healthpb with the internal gRPC service
• Gracefully shut down pods and restart them with a DaemonSet
[Diagram: Pod A's main process runs a service job, which runs other tasks. Flow: 1. the pod's liveness probe detects its state as Error; 2. a SIGTERM signal is sent to the main process in each container; 3. the main process passes the signal to its children]
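A minimal sketch combining both points, using the standard google.golang.org/grpc/health implementation of grpc_health_v1 plus a SIGTERM handler; the port and wiring are assumptions:

```go
// A minimal sketch: register the standard health service for the probes,
// and stop gracefully when Kubernetes sends SIGTERM.
package main

import (
	"log"
	"net"
	"os"
	"os/signal"
	"syscall"

	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()

	// grpc_health_probe queries this service for liveness/readiness.
	hs := health.NewServer()
	healthpb.RegisterHealthServer(srv, hs)
	hs.SetServingStatus("", healthpb.HealthCheckResponse_SERVING)

	go func() {
		sig := make(chan os.Signal, 1)
		signal.Notify(sig, syscall.SIGTERM, syscall.SIGINT)
		<-sig // Kubernetes sends SIGTERM to the main process
		hs.SetServingStatus("", healthpb.HealthCheckResponse_NOT_SERVING)
		srv.GracefulStop() // drain in-flight RPCs, then exit
	}()

	if err := srv.Serve(lis); err != nil {
		log.Fatal(err)
	}
}
```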
PProf – Go Tool Performance Profiler
• Use Go's built-in profiling tool to analyze the program from the inside and find out where the performance bottleneck is
• pprof supports several profile types:
– CPU profile: reports the program's CPU usage by sampling CPU and register data at a fixed frequency
– Memory profile (heap profile): reports the program's memory usage
– Block profiling: reports where goroutines block (are not running)
– Goroutine profiling: reports goroutine usage
– Mutex profiling: reports mutex contention
PProf - CPU Profiling
• Usage:
– Import the pprof-related packages and start a pprof web server (a sketch follows this slide)
– Then profile from an interactive terminal:
$ go tool pprof http://0.0.0.0:6060/debug/pprof/profile?seconds=60
After running this command, wait 60 seconds (the seconds value is adjustable) while pprof performs the CPU profiling.
When it finishes, pprof enters its interactive command mode, where the results can be inspected or exported. For example:
$ top50
flat: time spent executing the function itself
flat%: flat as a share of total CPU time
sum%: the running total of flat% over the preceding rows
cum: cumulative time, i.e. the function plus the functions it calls
cum%: cum as a share of total CPU time
(total time in the example: 140ms)
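A minimal sketch of the two steps above, assuming the profiling endpoint runs on :6060 as in the command shown:

```go
// A minimal sketch: the blank import registers the /debug/pprof/* handlers
// on http.DefaultServeMux, and a side goroutine serves them on :6060.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers
)

func main() {
	go func() {
		log.Println(http.ListenAndServe("0.0.0.0:6060", nil))
	}()

	// ... the real application work would run here
	select {} // keep the process alive for profiling
}
```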
PProf - CPU Profiling
• PProf visual web UI
– The web UI takes the data produced by go tool pprof as input
– By default the data lands under the user's home directory in ~/pprof, for example:
• Open the visual web UI:
$ go tool pprof -http=0.0.0.0:8080 ~/pprof/pprof.main.samples.cpu.002.pb.gz
Message Queue - ZeroMQ
• APM is planned to support monitoring and tracing of non-Kubernetes environments, including:
– Linux hosts
• We will adopt ZeroMQ as the message queue system
• Pros
– No broker needed
– High throughput, low latency
– Easily integrated into components; lightweight deployment
– Supports PUB/SUB, PULL/PUSH, REQ/REP, or a mix of them
– Supports Linux/Windows
• Cons
– Message delivery is not guaranteed
Message Queue - ZeroMQ
• APM uses the extended Pub-Sub pattern
– Every agent or service using ZeroMQ is unaffected by the others
– If a service is dead, agents won't be blocked
– Bind binds an IP and port; Connect needs to know that IP and port.
[Diagram: an Agent on a Linux host (non-K8S) publishes and connects to the XSUB side of a Proxy on the K8S cluster host; the APM Service subscribes and connects to the XPUB side; the Proxy binds both sides and forwards & logs the ZMQ traffic]
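A minimal sketch of that proxy using the github.com/pebbe/zmq4 binding; the endpoints are assumptions:

```go
// A minimal sketch of the XSUB/XPUB proxy: publishers connect to the XSUB
// side, subscribers to the XPUB side; Proxy forwards messages both ways.
package main

import (
	"log"

	zmq "github.com/pebbe/zmq4"
)

func main() {
	xsub, err := zmq.NewSocket(zmq.XSUB)
	if err != nil {
		log.Fatal(err)
	}
	defer xsub.Close()
	xsub.Bind("tcp://*:5555") // agents (publishers) connect here

	xpub, err := zmq.NewSocket(zmq.XPUB)
	if err != nil {
		log.Fatal(err)
	}
	defer xpub.Close()
	xpub.Bind("tcp://*:5556") // services (subscribers) connect here

	// Blocks forever, forwarding messages and subscriptions between sides.
	log.Fatal(zmq.Proxy(xsub, xpub, nil))
}
```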
Message Queue - ZeroMQ
• Publisher A (an agent)
– sends messages:
• "Topic_A|content1"
• "Topic_B|content2"
• Subscriber D (a service)
– subscribes to a subject: "Topic_A"
– Will receive only the messages with Topic_A
[Diagram: Publishers A, B, and C send "Topic_A|content1", "Topic_B|content2", and "Topic_C|content1" through the Proxy; only "Topic_A|content1" is forwarded to Subscriber D; Subscribers E and F receive according to their own subscriptions]
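A minimal sketch of this topic filtering with github.com/pebbe/zmq4, connecting a publisher and a subscriber directly (in APM they would go through the proxy above); endpoints are assumptions:

```go
// A minimal sketch of topic-filtered pub/sub: the subscriber only receives
// messages whose prefix matches its subscription.
package main

import (
	"fmt"
	"log"
	"time"

	zmq "github.com/pebbe/zmq4"
)

func main() {
	pub, err := zmq.NewSocket(zmq.PUB)
	if err != nil {
		log.Fatal(err)
	}
	defer pub.Close()
	pub.Bind("tcp://*:5557")

	sub, err := zmq.NewSocket(zmq.SUB)
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Close()
	sub.Connect("tcp://localhost:5557")
	sub.SetSubscribe("Topic_A") // prefix filter

	// PUB/SUB "slow joiner": give the subscription time to propagate.
	time.Sleep(100 * time.Millisecond)

	pub.Send("Topic_A|content1", 0)
	pub.Send("Topic_B|content2", 0) // dropped by the subscriber's filter

	msg, _ := sub.Recv(0)
	fmt.Println(msg) // Topic_A|content1
}
```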
Message Queue - ZeroMQ
• APM will use the extended Pub-Sub pattern
[Diagram: a Linux Docker Agent publishes and connects to the XSUB side of the Proxy; APM Service A subscribes to the subjects "XXXXX" and "YYYYY", and another APM service subscribes to "ZZZZZ" and "XXXXX", both connecting to the XPUB side; the Proxy binds both sides and forwards & logs]
Use Ingress Controller as Proxy Server
• Internet/outside users can access a pod's web service via Ingress
• We adopt the Nginx Ingress Controller on bare metal, using NodePort
• Traffic routing is controlled by rules defined on the Ingress resource.
• The Ingress Controller uses the HTTP Host header to determine which service to route to
[Diagram: requests to http://apmweb:30309 from the Internet/Intranet reach the Nginx Ingress controller through NodePort 30309; based on the Ingress resource rules, it routes inside the K8S cluster to Service A → Pod A (port 8088), Service B → Pod B (port 8080), or Service C → Pod C (port 3001)]
K8S Ingress Nginx Controller and Rules
• Ingress rule examples (a sketch follows this slide)
• The domain name must be mapped to an IP on the DNS server, or configured in /etc/hosts
• The HOST header must match the host name specified in the Ingress.
• For example:
– curl: curl --header "HOST: apmweb" http://192.168.122.167:(nodeport)/
– Browser: http://apmweb:(nodeport)/
• Querying the Ingress resources:
– On K8S v1.22 and above, use Ingress-Nginx controller version v1.0.0, and the Ingress must use networking.k8s.io/v1
– On K8S v1.21 and below, use Ingress-Nginx controller version v0.48 or below, and the Ingress must use networking.k8s.io/v1beta1 (the syntax differs)
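The rule examples on the slide were screenshots; a minimal hypothetical rule in the networking.k8s.io/v1 syntax (K8S v1.22+) might look like this, where the resource name, service name, and port are assumptions:

```yaml
# A hypothetical Ingress rule: route requests whose Host header is "apmweb"
# to the frontend Service; names and port are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apm-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: apmweb
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 8088
```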
Some fixed issues/bugs
• Go has two list-like data structures, Array and Slice:
– Array: the length is fixed (fixed length); a primitive type, rarely used directly in programs.
– Slice: the length can grow or shrink; defined with [], e.g. []byte is a byte slice.
– Note that after a slice is modified (e.g. via append), the backing array it references may move, so the result must be assigned back (a sketch follows below).
• Passing an interrupt signal to the pod's app
– Use the CMD in the exec form.
• For example: CMD [ "myapp" ]
• If request arguments in a gRPC call are left unset, no error is reported, but the server side is affected (in proto3, unset fields simply arrive as zero values)
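A minimal sketch of the slice pitfall:

```go
// A minimal sketch: append may allocate a new backing array, so writes to
// the result are not visible through the old slice; assign it back.
package main

import "fmt"

func main() {
	s := make([]byte, 0, 2)
	s = append(s, 'a', 'b') // len 2, cap 2

	grown := append(s, 'c') // over capacity: a new backing array is allocated
	grown[0] = 'X'          // modifies the new array, not s's

	fmt.Println(string(s), string(grown)) // "ab Xbc"; s is unchanged
	s = grown                             // assign back to keep the change
	fmt.Println(string(s))                // "Xbc"
}
```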
Q&A