DevOps Project - Kastro
Deployment of ZOMATO Project
--------------------------------------------
Repo URL: https://github.com/KastroVKiran/DevOps-Project-Zomato-Kastro.git
1. Launch an instance (Ubuntu 24.04, t2.large, 30 GB)
2. Connect to the instance
3. Update the packages
$ sudo su          (switch to the root user)
$ sudo apt update -y
4. Install AWS CLI
sudo apt install unzip -y
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
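Verify the installation and, if you plan to run AWS commands from this VM, optionally configure credentials (the keys you enter at the 'aws configure' prompts are your own):
aws --version
aws configure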
5. Install Jenkins on Ubuntu
(Reference URL for commands: https://www.jenkins.io/doc/book/installing/linux/#debianubuntu)
#!/bin/bash
sudo apt update -y
wget -O - https://packages.adoptium.net/artifactory/api/gpg/key/public | sudo tee /etc/apt/keyrings/adoptium.asc
echo "deb [signed-by=/etc/apt/keyrings/adoptium.asc] https://packages.adoptium.net/artifactory/deb $(awk -F= '/^VERSION_CODENAME/{print$2}' /etc/os-release) main" | sudo tee /etc/apt/sources.list.d/adoptium.list
sudo apt update -y
sudo apt install temurin-17-jdk -y
/usr/bin/java --version
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update -y
sudo apt-get install jenkins -y
sudo systemctl start jenkins
sudo systemctl status jenkins
Verify Jenkins installation: jenkins --version
5.1. Open Port No. 8080 for VM and access Jenkins
5.2. Set up Jenkins by following the necessary steps
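Typically, the setup wizard first asks for the initial admin password; on Ubuntu it can be read with the command below, after which you install the suggested plugins and create the first admin user:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword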
6. Install Docker on Ubuntu
(Reference URL for commands: https://docs.docker.com/engine/install/ubuntu/)
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
sudo usermod -aG docker ubuntu
sudo chmod 777 /var/run/docker.sock
newgrp docker
sudo systemctl status docker
Verify Docker installation: docker --version
7. Install Trivy on Ubuntu
(Reference URL for commands: https://aquasecurity.github.io/trivy/v0.55/getting-started/installation/)
sudo apt-get install wget apt-transport-https gnupg
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb generic main" | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy
Verify Trivy installation: trivy --version
8. Install Docker Scout
Make sure to log in to your DockerHub account in the browser
<Follow the process as explained in the video>
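For reference, one common way to install the Docker Scout CLI is via its official install script (shown here as a sketch; the video may use a different method). Note that depending on how it is installed, the tool is invoked either as the 'docker scout' plugin or as the standalone 'docker-scout' binary; the pipeline below uses the latter form.
curl -sSfL https://raw.githubusercontent.com/docker/scout-cli/main/install.sh | sh -s --
docker scout version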
9. Install SonarQube using Docker
$ docker run -d --name sonar -p 9000:9000 sonarqube:lts-community
$ docker ps (you should see the SonarQube container running)
<Follow the process as explained in the video>
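Once the container is up, open port 9000 for the VM and browse to http://<your-server-ip>:9000. The default SonarQube login is admin/admin; you will be prompted to set a new password on first login.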
10. Installation of Plugins in Jenkins
Install below plugins:
<Follow the process as explained in the video>
11. SonarQube configuration in Jenkins
<Follow the process as explained in the video>
11.1. Tools Configuration in Jenkins
<Follow the process as explained in the video>
11.2. Configuration of SonarQube Token in Jenkins
<Follow the process as explained in the video>
Let's create another credential for DockerHub. This is needed because, as soon as the Docker image is built, it should be pushed to DockerHub.
<Follow the process as explained in the video>
11.3 Configuration of Email notification in Jenkins
As soon as a build completes, we want an email notification; to enable that, we have to configure email settings in Jenkins.
<Follow the process as explained in the video>
12. System Configuration in Jenkins
<Follow the process as explained in the video>
13. Create webhook in SonarQube
<Follow the process as explained in the video>
14. Create Pipeline Job
Before pasting the pipeline script, make the following changes in it:
1. In the 'Tag & Push to DockerHub' stage, replace the DockerHub username with your own. Do the same in the 'Docker Scout Image' and 'Deploy to Container' stages.
2. In the post actions block of the pipeline, make sure to use the email id you configured in Jenkins.
*********************
Pipeline Script
*********************
pipeline {
    agent any
    tools {
        jdk 'jdk17'
        nodejs 'node23'
    }
    environment {
        SCANNER_HOME = tool 'sonar-scanner'
    }
    stages {
        stage ("clean workspace") {
            steps {
                cleanWs()
            }
        }
        stage ("Git Checkout") {
            steps {
                git 'https://github.com/KastroVKiran/Zomato-Project-Kastro.git'
            }
        }
        stage ("Sonarqube Analysis") {
            steps {
                withSonarQubeEnv('sonar-server') {
                    sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=zomato \
                           -Dsonar.projectKey=zomato '''
                }
            }
        }
        stage ("Code Quality Gate") {
            steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token'
                }
            }
        }
        stage ("Install NPM Dependencies") {
            steps {
                sh "npm install"
            }
        }
        stage ('OWASP FS SCAN') {
            steps {
                dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit -n', odcInstallation: 'DP-Check'
                dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
            }
        }
        stage ("Trivy File Scan") {
            steps {
                sh "trivy fs . > trivy.txt"
            }
        }
        stage ("Build Docker Image") {
            steps {
                sh "docker build -t zomato ."
            }
        }
        stage ("Tag & Push to DockerHub") {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker') {
                        sh "docker tag zomato kastrov/zomato:latest"
                        sh "docker push kastrov/zomato:latest"
                    }
                }
            }
        }
        stage ('Docker Scout Image') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker', toolName: 'docker') {
                        sh 'docker-scout quickview kastrov/zomato:latest'
                        sh 'docker-scout cves kastrov/zomato:latest'
                        sh 'docker-scout recommendations kastrov/zomato:latest'
                    }
                }
            }
        }
        stage ("Deploy to Container") {
            steps {
                sh 'docker run -d --name zomato -p 3000:3000 kastrov/zomato:latest'
            }
        }
    }
    post {
        always {
            emailext attachLog: true,
                subject: "'${currentBuild.result}'",
                body: """
                    <html>
                    <body>
                    <div style="background-color: #FFA07A; padding: 10px; margin-bottom: 10px;">
                    <p style="color: white; font-weight: bold;">Project: ${env.JOB_NAME}</p>
                    </div>
                    <div style="background-color: #90EE90; padding: 10px; margin-bottom: 10px;">
                    <p style="color: white; font-weight: bold;">Build Number: ${env.BUILD_NUMBER}</p>
                    </div>
                    <div style="background-color: #87CEEB; padding: 10px; margin-bottom: 10px;">
                    <p style="color: white; font-weight: bold;">URL: ${env.BUILD_URL}</p>
                    </div>
                    </body>
                    </html>
                """,
                to: '[email protected]',
                mimeType: 'text/html',
                attachmentsPattern: 'trivy.txt'
        }
    }
}
If the "OWASP FS SCAN" stage shows an UNSTABLE build, replace that stage in the pipeline with the script below:
stage ('OWASP FS SCAN') {
    steps {
        dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit --update -n', odcInstallation: 'DP-Check'
        dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
    }
}
Let the pipeline build. Meanwhile, we will create a VM for monitoring.
------------------------------------------------------------
MONITORING OF APPLICATION
------------------------------------------------------------
15. Launch a VM (Name: Monitoring Server, Ubuntu 24.04, t2.large, select the SG created in Step 1, EBS: 30 GB)
We will install Grafana, Prometheus and Node Exporter on this instance and use them to monitor the application and the pipeline
--------------------------------------------------
15.1. Connect to 'Monitoring Server' VM
--------------------------------------------------
--------------------------------------------------
15.2. Installing Prometheus
--------------------------------------------------
First, create a dedicated Linux user for Prometheus and download Prometheus
sudo useradd --system --no-create-home --shell /bin/false prometheus
wget https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz
Extract Prometheus files, move them, and create directories:
tar -xvf prometheus-2.47.1.linux-amd64.tar.gz
cd prometheus-2.47.1.linux-amd64/
sudo mkdir -p /data /etc/prometheus
sudo mv prometheus promtool /usr/local/bin/
sudo mv consoles/ console_libraries/ /etc/prometheus/
sudo mv prometheus.yml /etc/prometheus/prometheus.yml
Set ownership for directories:
sudo chown -R prometheus:prometheus /etc/prometheus/ /data/
Create a systemd unit configuration file for Prometheus:
sudo vi /etc/systemd/system/prometheus.service
Add the following content to the prometheus.service file:
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5
[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
--config.file=/etc/prometheus/prometheus.yml \
--storage.tsdb.path=/data \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries \
--web.listen-address=0.0.0.0:9090 \
--web.enable-lifecycle
[Install]
WantedBy=multi-user.target
Explanation of the key elements in the above prometheus.service file:
User and Group specify the Linux user and group under which Prometheus will run.
ExecStart is where you specify the Prometheus binary path, the location of the configuration file (prometheus.yml), the storage directory, and other settings.
web.listen-address configures Prometheus to listen on all network interfaces on port 9090.
web.enable-lifecycle allows for management of Prometheus through API calls.
Enable and start Prometheus:
sudo systemctl enable prometheus
sudo systemctl start prometheus
Verify Prometheus's status:
sudo systemctl status prometheus
Press Ctrl+C to exit the status view
Access Prometheus in browser using your server's IP and port 9090:
http://<your-server-ip>:9090
If it doesn't load, check the address bar and make sure the URL starts with 'http', not 'https'.
You can see the Prometheus console.
Click on 'Status' dropdown ---> Click on 'Targets' ---> You can see 'Prometheus (1/1 up)'
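As an extra sanity check, Prometheus also exposes an HTTP query API; run the following on the Monitoring Server and you should get back an 'up' sample for each scraped target:
curl 'http://localhost:9090/api/v1/query?query=up'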
--------------------------------------------------
15.3. Installing Node Exporter
--------------------------------------------------
cd
You are now in the home (~) directory
Create a system user for Node Exporter and download Node Exporter:
sudo useradd --system --no-create-home --shell /bin/false node_exporter
wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz
Extract Node Exporter files, move the binary, and clean up:
tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz
sudo mv node_exporter-1.6.1.linux-amd64/node_exporter /usr/local/bin/
rm -rf node_exporter*
Create a systemd unit configuration file for Node Exporter:
sudo vi /etc/systemd/system/node_exporter.service
Add the following content to the node_exporter.service file:
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5
[Service]
User=node_exporter
Group=node_exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/node_exporter --collector.logind
[Install]
WantedBy=multi-user.target
Note: Replace --collector.logind with any additional flags as needed.
Enable and start Node Exporter:
sudo systemctl enable node_exporter
sudo systemctl start node_exporter
Verify the Node Exporter's status:
sudo systemctl status node_exporter
You can see "active (running)" in green colour
Press control+c to come out of the file
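To confirm Node Exporter is actually serving metrics, you can also fetch its endpoint directly (9100 is its default port):
curl -s http://localhost:9100/metrics | head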
------------------------------------------------------------
15.4: Configure Prometheus Plugin Integration
------------------------------------------------------------
So far we have created the Prometheus service, but we still need to add scrape jobs so Prometheus can fetch the metrics exposed by Node Exporter and Jenkins. We will therefore create 2 jobs, one for 'node_exporter' and the other for 'jenkins', as shown below;
Integrate Jenkins with Prometheus to monitor the CI/CD pipeline.
Prometheus Configuration:
To configure Prometheus to scrape metrics from Node Exporter and Jenkins, you need to modify the prometheus.yml file.
The prometheus.yml file is in /etc/prometheus/ (cd /etc/prometheus/ ----> ls -l ----> you will see "prometheus.yml"). Open it with sudo vi prometheus.yml ----> you will see the existing content, including a default job called "prometheus". Paste the below content at the end of the file;
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['<MonitoringVMip>:9100']

  - job_name: 'jenkins'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['<your-jenkins-ip>:<your-jenkins-port>']
In the above, replace <MonitoringVMip>, <your-jenkins-ip> and <your-jenkins-port> with the appropriate values ----> esc ----> :wq
Check the validity of the configuration file:
promtool check config /etc/prometheus/prometheus.yml
You should see "SUCCESS" when you run the above command, it means every configuration made so far is good.
Reload the Prometheus configuration without restarting:
curl -X POST http://localhost:9090/-/reload
Access Prometheus in browser (if already opened, just reload the page):
http://<your-prometheus-ip>:9090/targets
Open Port number 9100 for Monitoring VM
You should now see "jenkins (1/1 up)", "node_exporter (1/1 up)" and "prometheus (1/1 up)" on the Prometheus targets page.
Click on "show more" next to "jenkins". You will see a link; open it in a new tab to see the metrics being scraped.
------------------------------------------------------------
15.5: Install Grafana
------------------------------------------------------------
You are currently in the /etc/prometheus path.
Install Grafana on the Monitoring Server;
Step 1: Install Dependencies:
First, ensure that all necessary dependencies are installed:
sudo apt-get update
sudo apt-get install -y apt-transport-https software-properties-common
Step 2: Add the GPG Key:
cd ---> You are now back in the home (~) directory
Add the GPG key for Grafana:
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
You should see OK when the above command is executed.
Step 3: Add Grafana Repository:
Add the repository for Grafana stable releases:
echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
Step 4: Update and Install Grafana:
Update the package list and install Grafana:
sudo apt-get update
sudo apt-get -y install grafana
Step 5: Enable and Start Grafana Service:
To automatically start Grafana after a reboot, enable the service:
sudo systemctl enable grafana-server
Start Grafana:
sudo systemctl start grafana-server
Step 6: Check Grafana Status:
Verify the status of the Grafana service to ensure it's running correctly:
sudo systemctl status grafana-server
You should see "Active (running)" in green colour
Press control+c to come out
Step 7: Access Grafana Web Interface:
The default port for Grafana is 3000
http://<monitoring-server-ip>:3000
The default username and password are both "admin".
You will be prompted to set a new password; you can do so, or click "Skip now".
You will see the Grafana dashboard
The first thing we have to do in Grafana is add the data source.
Let's add the data source;
<Follow the process as explained in the video>
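As a rough outline of what the video shows (menu names can vary slightly between Grafana versions): go to Connections ---> Data sources ---> Add data source ---> Prometheus ---> set the URL to http://localhost:9090 (Prometheus runs on the same VM) ---> Save & test. Then import dashboards via Dashboards ---> New ---> Import by ID; for example, 1860 is the widely used "Node Exporter Full" community dashboard (IDs given here for reference, not taken from the video).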
Click on Dashboards in the left pane, you can see both the dashboards you have just added.
---------------------------------------------
Creation of EKS cluster
---------------------------------------------
We need to run the same application on a K8S cluster. To do that, we will create a cluster using the EKS service in AWS, working from the VS Code editor.
Note 1: You might get errors while executing the below commands. Make sure eksctl, kubectl and the other required tools are installed and configured (see the quick check after these notes).
Note 2: Run the VS Code editor / PowerShell / Command Prompt as Administrator, to avoid errors.
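A quick way to confirm the prerequisites are in place (assumes the AWS CLI, eksctl and kubectl are installed and 'aws configure' has been run):
aws --version
eksctl version
kubectl version --client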
Open vs code editor and execute the below commands;
Step 01: Create EKS Cluster using eksctl
# Create Cluster. I will keep the cluster name as "kastrocluster"
eksctl create cluster --name=kastrocluster \
--region=ap-northeast-1 \
--zones=ap-northeast-1a,ap-northeast-1c \
--without-nodegroup
If you see an error while executing the above command in the VS Code editor, it is likely because PowerShell does not treat the backslash (\) as a line-continuation character the way Unix-like shells do.
To resolve this issue, you can either write the entire command on a single line or use a different method for line continuation. Here are the two approaches:
Approach 1: Single Line Command (this is what the video uses)
Simply run the entire command in one line without using backslashes for line continuation:
eksctl create cluster --name=kastrocluster --region=ap-northeast-1 --zones=ap-northeast-1a,ap-northeast-1c --without-nodegroup
Approach 2: Use PowerShell’s Backtick for Line Continuation
If you want to split the command across multiple lines, you can use the backtick character (`) in PowerShell for line continuation:
eksctl create cluster --name=kastrocluster `
--region=ap-northeast-1 `
--zones=ap-northeast-1a,ap-northeast-1c `
--without-nodegroup
Note: Make sure there is no space after the backtick.
It will take at least 20-25 minutes for the cluster to be created.
To verify the cluster creation ---> go to the CloudFormation service in AWS ----> you should see a stack created for "kastrocluster". Meanwhile, keep the VS Code terminal open; as said earlier, creation takes at least 20 minutes.
Once the cluster is ready, you will see 'EKS Cluster "kastrocluster" in "ap-northeast-1" region is ready' in the VS Code terminal. Wait until you see this.
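eksctl normally writes the new cluster into your kubeconfig automatically; if kubectl cannot reach the cluster, the standard AWS CLI fallback is:
aws eks update-kubeconfig --region ap-northeast-1 --name kastrocluster
kubectl get nodes    (nodes will appear only after the node group in Step 03 is created)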
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Get List of clusters
eksctl get cluster
Execute the below in vs code editor;
Step 02: Create & Associate IAM OIDC Provider for our EKS Cluster
To enable and use AWS IAM roles for Kubernetes service accounts on our EKS cluster, we must create & associate OIDC identity provider.
To do so using eksctl we can use the below commands.
# Template
eksctl utils associate-iam-oidc-provider \
--region region-code \
--cluster <cluster-name> \
--approve
# Replace with region & cluster name
eksctl utils associate-iam-oidc-provider \
--region ap-northeast-1 \
--cluster kastrocluster \
--approve
(OR)
eksctl utils associate-iam-oidc-provider --region ap-northeast-1 --cluster kastrocluster --approve
(OR)
eksctl utils associate-iam-oidc-provider `
--region ap-northeast-1 `
--cluster kastrocluster `
--approve
Step 03: Create Node Group with additional Add-Ons in Public Subnets
These add-ons will create the respective IAM policies for us automatically within our Node Group role.
# Create Public Node Group
eksctl create nodegroup --cluster=kastrocluster \
--region=ap-northeast-1 \
--name=kastrodemo-ng-public1 \
--node-type=t3.medium \
--nodes=2 \
--nodes-min=2 \
--nodes-max=4 \
--node-volume-size=20 \
--ssh-access \
--ssh-public-key=Prajwal \
--managed \
--asg-access \
--external-dns-access \
--full-ecr-access \
--appmesh-access \
--alb-ingress-access
(OR)
eksctl create nodegroup --cluster=kastrocluster --region=ap-northeast-1 --name=kastrodemo-ng-public1 --node-type=t3.medium --nodes=2 --nodes-min=2 --nodes-max=4 --node-volume-size=20 --ssh-access --ssh-public-key=Prajwal --managed --asg-access --external-dns-access --full-ecr-access --appmesh-access --alb-ingress-access
(OR)
eksctl create nodegroup --cluster=kastrocluster `
--region=ap-northeast-1 `
--name=kastrodemo-ng-public1 `
--node-type=t3.medium `
--nodes=2 `
--nodes-min=2 `
--nodes-max=4 `
--node-volume-size=20 `
--ssh-access `
--ssh-public-key=Prajwal `
--managed `
--asg-access `
--external-dns-access `
--full-ecr-access `
--appmesh-access `
--alb-ingress-access
Step 05: Verify Cluster & Nodes
Go to the EKS service in AWS and check that the cluster and node group were created
******************************************
Optional - do it at the end of complete demo
******************************************
Step 06: Delete Node Group
# List EKS Clusters
eksctl get clusters
# Capture Node Group name
eksctl get nodegroup --cluster=<clusterName>
eksctl get nodegroup --cluster=kastrocluster
# Delete Node Group
eksctl delete nodegroup --cluster=<clusterName> --name=<nodegroupName>
eksctl delete nodegroup --cluster=kastrocluster --name=kastrodemo-ng-public1
Step 07: Delete Cluster
# Delete Cluster
eksctl delete cluster <clusterName>
eksctl delete cluster kastrocluster
********************************************************************************
********************************************************************************
Let us deploy the same application in the EKS cluster also
<Follow the process as explained in the video>
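As a sketch of what that deployment boils down to: apply the manifests from the repo's Kubernetes folder (deployment.yml is referenced later in this guide; the service file name is an assumption here):
kubectl apply -f Kubernetes/deployment.yml
kubectl apply -f Kubernetes/service.yml
kubectl get pods
kubectl get svc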
------------------------------------------------------------
15.6: Argo CD installation
------------------------------------------------------------
In order to monitor K8S with Prometheus, we need to install Argo CD. Let's do that
Execute the below commands in vs code editor
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml
Wait for some time until the resources get created.
The first command above creates a namespace named "argocd"; the second installs Argo CD into it.
By default the argo CD server is not publicly exposed, so we need to expose it publicly. To do that, execute the below command;
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
(OR) Command Prompt Execution
kubectl patch svc argocd-server -n argocd -p "{\"spec\": {\"type\": \"LoadBalancer\"}}"
After successful execution, you should see "patched".
To check whether the namespace was created: kubectl get ns ----> you should see the argocd namespace
To see the pods in the argocd namespace: kubectl get pods -n argocd ----> you should see the pods
Wait about 5 minutes for the load balancer to be created. Once it is ready, we will fetch its URL.
Meanwhile execute the below commands in vs code editor
------------------------------------------------------------
15.7: Monitor Kubernetes with Prometheus
------------------------------------------------------------
This step is used to monitor the Kubernetes cluster.
Additionally, you'll install Node Exporter using Helm to collect metrics from your cluster nodes.
Install Node Exporter using Helm
To begin monitoring your Kubernetes cluster, you'll install the Prometheus Node Exporter. This component allows you to collect system-level metrics from your cluster nodes. Here are the steps to install the Node Exporter using Helm:
Add the Prometheus Community Helm repository:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
Create a Kubernetes namespace for the Node Exporter:
kubectl create namespace prometheus-node-exporter
Install the Node Exporter using Helm:
helm install prometheus-node-exporter prometheus-community/prometheus-node-exporter --namespace prometheus-node-exporter
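To confirm the exporter pods are running:
kubectl get pods -n prometheus-node-exporter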
Let's continue with the load balancer from the previous step; execute the below in the VS Code editor
export ARGOCD_SERVER=`kubectl get svc argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname'`
Execute the below command in powershell, if the command doesn't get executed in VS Code Editor
$env:ARGOCD_SERVER = $(kubectl get svc argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname')
(Ref URL: https://archive.eksworkshop.com/intermediate/290_argocd/configure/)
To get the loadbalancer url;
echo $ARGOCD_SERVER
Execute the below command in powershell, if the command doesn't get executed in VS Code Editor
echo $env:ARGOCD_SERVER
You will see the load balancer url, copy it and paste in browser. You will see the ArgoCD Homepage.
Username is "admin"
To get the password, execute the below command in vs code editor;
export ARGO_PWD=`kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d`
Execute the below command in powershell, if the command doesn't get executed in VS Code Editor
$env:ARGO_PWD = (kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | % { [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($_)) })
To see the password;
echo $ARGO_PWD
Execute the below command in powershell, if the command doesn't get executed in VS Code Editor
echo $env:ARGO_PWD
You will see the password. Copy it and paste it on the Argo CD homepage ---> login
<Follow the process as explained in the video>
Note: In the repo, in the Kubernetes folder, in the deployment.yml file, change the DockerHub username in the containers section, as illustrated below
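For illustration only (the actual file may differ slightly), the part to change looks like this; swap in your own DockerHub username:
    containers:
      - name: zomato
        image: <your-dockerhub-username>/zomato:latest
        ports:
          - containerPort: 3000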
Add a job to scrape metrics from nodeIP:9100 in prometheus.yml:
Update your Prometheus configuration (prometheus.yml) to add a new job that scrapes metrics from the EKS worker node at nodeIP:9100 (the Node Exporter port). You can do this by adding the following configuration to your prometheus.yml file:
Go to the Monitoring Server tab in MobaXterm and execute the below commands;
sudo vi /etc/prometheus/prometheus.yml ----> Paste the below content at the end of the file ---->
  - job_name: 'k8s'
    metrics_path: '/metrics'
    static_configs:
      - targets: ['nodeIP:9100']
In the above, to get the "nodeIP", goto EKS in AWS ----> Click on EKS Cluster ----> "Compute" tab ----> Nodes ----> Click on any one node ----> Click on the "instance id" ----> Copy the public ip ----> Paste in the above script
The static_configs section specifies the targets to scrape metrics from; in this case it is set to nodeIP:9100.
----> esc ----> :wq ----> Check the validity of the configuration file: promtool check config /etc/prometheus/prometheus.yml ----> You should see "SUCCESS" ----> Reload Prometheus: curl -X POST http://localhost:9090/-/reload
Go to Prometheus and reload the targets page. Go to Argo CD and reload to see whether the deployment has completed.
Copy the public IP of the node (the "nodeIP" obtained a few steps above) ---> open <nodeIP>:30001 in the browser ----> make sure port 30001 is open for the node VM ----> you will see the application
Note: If you see an error in Prometheus under "k8s", open port 9100 for the EC2 instances that were created as part of the EKS cluster, i.e. the nodes
After everything is done, delete all the resources. Make sure to delete the CloudFormation stacks.
========================================================================================================
Kind Request:
Once you have successfully deployed the app, kindly share your experience on LinkedIn by tagging me, and include the YouTube link of this project in your post, as it helps others find the video quickly.
========================================================================================================
HAPPY LEARNING